WorldWideScience

Sample records for digital objects algorithm

  1. Thinning an object boundary on digital image using pipelined algorithm

    International Nuclear Information System (INIS)

    Dewanto, S.; Aliyanta, B.

    1997-01-01

    In digital image processing, thinning of an object boundary is required to analyze the image structure through measurement of parameters such as the area and circumference of the image object. The process needs a sufficiently large memory and is time consuming if all the image pixels are stored in memory and subsequent processing begins only after all pixels have been transformed. A pipelined algorithm can reduce the time used in the process. This algorithm uses a buffer memory whose size can be adjusted, so the next thinning step does not need to wait for all pixels to be transformed. This paper describes the pipelined algorithm along with some results of applying it to digital images.
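    The buffered, pipelined idea can be sketched as follows. This is a minimal illustration, not the authors' algorithm: rows of a binary image arrive one at a time, a three-row buffer stands in for the adjustable buffer memory, and a crude one-pass boundary peel stands in for a full thinning operator.

```python
from collections import deque

def stream_thin(rows):
    """Peel the boundary of a binary image row by row through a
    3-row pipeline buffer, so processing starts before the whole
    image is stored in memory. A set pixel is deleted when any of
    its 4-neighbours is background (a crude stand-in for a real
    thinning operator). Edge rows are not emitted, for brevity."""
    def peel(prev, cur, nxt):
        w = len(cur)
        out_row = []
        for x in range(w):
            if cur[x] == 0:
                out_row.append(0)
                continue
            left = cur[x - 1] if x > 0 else 0
            right = cur[x + 1] if x < w - 1 else 0
            boundary = 0 in (left, right, prev[x], nxt[x])
            out_row.append(0 if boundary else 1)
        return out_row

    buf = deque(maxlen=3)          # adjustable pipeline buffer
    thinned = []
    for row in rows:               # rows arrive one at a time
        buf.append(row)
        if len(buf) == 3:          # neighbourhood complete: process now
            thinned.append(peel(buf[0], buf[1], buf[2]))
    return thinned
```

    Feeding a 5x5 block of ones yields three interior rows with their outer columns peeled away; only three rows ever sit in memory at once.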

  2. Phase recovering algorithms for extended objects encoded in digitally recorded holograms

    Directory of Open Access Journals (Sweden)

    Peng Z.

    2010-06-01

    The paper presents algorithms to recover the optical phase of digitally encoded holograms. The algorithms are based on the use of a numerical spherical reconstructing wave. The validity of the concept is demonstrated with an experimental off-axis digital holographic set-up. Two-color digital holographic reconstruction is also investigated. The color set-up and algorithms are applied to the simultaneous two-dimensional deformation measurement of an object submitted to mechanical loading.

  3. Digital and discrete geometry theory and algorithms

    CERN Document Server

    Chen, Li

    2014-01-01

    This book provides comprehensive coverage of modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences, including geometric processing, manifold learning, Google search, cloud data, and R-trees for wireless networks and big data. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and a

  4. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  5. Digital Geometry Algorithms Theoretical Foundations and Applications to Computational Imaging

    CERN Document Server

    Barneva, Reneta

    2012-01-01

    Digital geometry emerged as an independent discipline in the second half of the last century. It deals with geometric properties of digital objects and is developed with the unambiguous goal to provide rigorous theoretical foundations for devising new advanced approaches and algorithms for various problems of visual computing. Different aspects of digital geometry have been addressed in the literature. This book is the first one that explicitly focuses on the presentation of the most important digital geometry algorithms. Each chapter provides a brief survey on a major research area related to the general volume theme, description and analysis of related fundamental algorithms, as well as new original contributions by the authors. Every chapter contains a section in which interesting open problems are addressed.

  6. Architectural Implications for Spatial Object Association Algorithms*

    Science.gov (United States)

    Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste

    2013-01-01

    Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
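    The core crossmatch operation the study benchmarks can be sketched in a few lines. This naive quadratic version uses planar coordinates and a fixed match radius purely for illustration; real sky surveys use spherical geometry and spatial indexing, which is exactly what the database architectures in the study provide.

```python
import math

def crossmatch(cat_a, cat_b, radius):
    """For every object in cat_a, find the nearest object in cat_b
    within `radius` and report the pair (index_a, index_b, distance).
    O(n*m) brute force; production systems avoid this with indexes."""
    matches = []
    for ia, (xa, ya) in enumerate(cat_a):
        best, best_d = None, radius
        for ib, (xb, yb) in enumerate(cat_b):
            d = math.hypot(xa - xb, ya - yb)
            if d <= best_d:
                best, best_d = ib, d
        if best is not None:
            matches.append((ia, best, best_d))
    return matches
```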

  7. Structured Course Objects in a Digital Library

    Science.gov (United States)

    Maly, K.; Zubair, M.; Liu, X.; Nelson, M.; Zeil, S.

    1999-01-01

    We are developing an Undergraduate Digital Library Framework (UDLF) that will support creation/archiving of courses and reuse of existing course material to evolve courses. UDLF supports the publication of course materials for later instantiation for a specific offering and allows the addition of time-dependent and student-specific information and structures. Instructors and, depending on permissions, students can access the general course materials or the materials for a specific offering. We are building a reference implementation based on NCSTRL+, a digital library derived from NCSTRL. Digital objects in NCSTRL+ are called buckets, self-contained entities that carry their own methods for access and display. Current bucket implementations have a two level structure of packages and elements. This is not a rich enough structure for course objects in UDLF. Typically, courses can only be modeled as a multilevel hierarchy and among different courses, both the syntax and semantics of terms may vary. Therefore, we need a mechanism to define, within a particular library, course models, their constituent objects, and the associated semantics in a flexible, extensible way. In this paper, we describe our approach to define and implement these multilayered course objects. We use XML technology to emulate complex data structures within the NCSTRL+ buckets. We have developed authoring and browsing tools to manipulate these course objects. In our current implementation a user downloading an XML based course bucket also downloads the XML-aware tools: an applet that enables the user to edit or browse the bucket. We claim that XML provides an effective means to represent multi-level structure of a course bucket.

  8. [Review of digital ground object spectral library].

    Science.gov (United States)

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

    A higher spectral resolution is the main direction of developing remote sensing technology, and it is quite important to set up digital ground object reflectance spectral database libraries, one of the fundamental research fields in remote sensing application. Remote sensing application has been increasingly relying on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status quo and development trend of digital ground object reflectance spectral libraries at home and abroad in recent years. The spectral libraries that have been established are introduced, including the desertification spectral database library, plant spectral database library, geological spectral database library, soil spectral database library, mineral spectral database library, cloud spectral database library, snow spectral database library, atmosphere spectral database library, rock spectral database library, water spectral database library, meteorite spectral database library, moon rock spectral database library, man-made material spectral database library, mixture spectral database library, volatile compound spectral database library, and liquid spectral database library. In the process of establishing spectral database libraries there have been some problems, such as the lack of a uniform national spectral database standard, the lack of uniform standards for ground object features, and limited comparability between different databases. In addition, no data-sharing mechanism has been implemented. This article also puts forward some suggestions on those problems.

  9. Algorithms for classification of astronomical object spectra

    Science.gov (United States)

    Wasiewicz, P.; Szuppe, J.; Hryniewicz, K.

    2015-09-01

    Obtaining interesting celestial objects from tens of thousands or even millions of recorded optical-ultraviolet spectra depends not only on the data quality but also on the accuracy of spectra decomposition. Additionally, rapidly growing data volumes demand higher computing power and/or more efficient algorithm implementations. In this paper we speed up the process of subtracting iron transitions and fitting Gaussian functions to emission peaks, utilising C++ and OpenCL methods together with a NoSQL database. We also implement typical astronomical peak-detection methods and compare them to our previous hybrid methods implemented with CUDA.
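    The Gaussian fitting step can be illustrated with a moment-based estimate of a single emission peak. This is a lightweight stand-in for the iterative fits the paper accelerates with OpenCL, not the authors' code.

```python
import math

def fit_gaussian_moments(xs, ys):
    """Estimate a single emission peak's amplitude, centre and width
    from weighted moments of the sampled spectrum: the centre is the
    intensity-weighted mean, the width is the weighted std deviation."""
    total = sum(ys)
    mu = sum(x * y for x, y in zip(xs, ys)) / total
    var = sum(y * (x - mu) ** 2 for x, y in zip(xs, ys)) / total
    return max(ys), mu, math.sqrt(var)
```

    On a clean synthetic peak the moments recover the true parameters closely; on noisy spectra they usually serve as the starting guess for an iterative least-squares fit.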

  10. Buckets: Smart Objects for Digital Libraries

    Science.gov (United States)

    Nelson, Michael L.

    2001-01-01

    Current discussion of digital libraries (DLs) is often dominated by the merits of the respective storage, search and retrieval functionality of archives, repositories, search engines, search interfaces and database systems. While these technologies are necessary for information management, the information content is more important than the systems used for its storage and retrieval. Digital information should have the same long-term survivability prospects as traditional hardcopy information and should be protected to the extent possible from evolving search engine technologies and vendor vagaries in database management systems. Information content and information retrieval systems should progress on independent paths and make limited assumptions about the status or capabilities of the other. Digital information can achieve independence from archives and DL systems through the use of buckets. Buckets are an aggregative, intelligent construct for publishing in DLs. Buckets allow the decoupling of information content from information storage and retrieval. Buckets exist within the Smart Objects and Dumb Archives model for DLs in that many of the functionalities and responsibilities traditionally associated with archives are pushed down (making the archives dumber) into the buckets (making them smarter). Some of the responsibilities imbued to buckets are the enforcement of their terms and conditions, and maintenance and display of their contents.

  11. Algorithm of resonance orders for the objects

    Science.gov (United States)

    Zhang, YongGang; Zhang, JianXue

    2018-03-01

    In mechanical engineering, object resonance phenomena often occur when the frequency of an external incident wave is close to the natural frequency of the object, and the resonance reaches its maximum when the two frequencies are equal. Experiments found that the resonance intensity of an object changes, and different objects present resonance characteristics at different orders. Based on these order-resonance characteristics, a calculation method for the resonance orders of objects is put forward in this paper and applied to the seventh-order resonance characteristics of light and sound waves as perceived by people; the resulting error is less than 1%, showing that the method has high accuracy and usability. The calculation method reveals that object resonance presents only four types of order characteristics, namely first-order, third-order, fifth-order, and seventh-order characteristics.

  12. Process Architecture for Managing Digital Object Identifiers

    Science.gov (United States)

    Wanchoo, L.; James, N.; Stolte, E.

    2014-12-01

    In 2010, NASA's Earth Science Data and Information System (ESDIS) Project implemented a process for registering Digital Object Identifiers (DOIs) for data products distributed by the Earth Observing System Data and Information System (EOSDIS). For the first three years, ESDIS evolved the process, involving the data provider community in the development of processes for creating and assigning DOIs and of guidelines for the landing page. To accomplish this, ESDIS established two DOI User Working Groups: one for reviewing the DOI process, whose recommendations were submitted to ESDIS in February 2014; and the other recently tasked to review and further develop DOI landing page guidelines for ESDIS approval by the end of 2014. ESDIS has recently upgraded the DOI system from a manually driven system to one that largely automates the DOI process. The new automated features include: a) reviewing the DOI metadata, b) assigning an opaque DOI name if the data provider chooses, and c) reserving, registering, and updating the DOIs. The flexibility of reserving the DOI allows data providers to embed and test the DOI in the data product metadata before formally registering with EZID. The DOI update process allows any DOI metadata to be changed except the DOI name, unless the name has not yet been registered. Currently, ESDIS has processed a total of 557 DOIs, of which 379 are registered with EZID and 178 are reserved with ESDIS. The DOI incorporates several metadata elements that effectively identify the data product and the source of availability. Of these elements, the Uniform Resource Locator (URL) attribute has the very important function of identifying the landing page, which describes the data product. ESDIS, in consultation with data providers in the Earth Science community, is currently developing landing page guidelines that specify the key data product descriptive elements to be included on each data product's landing page. This poster will describe in detail the unique automated process and

  13. ADAPTIVE SELECTION OF AUXILIARY OBJECTIVES IN MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS

    Directory of Open Access Journals (Sweden)

    I. A. Petrova

    2016-05-01

    Subject of Research. We propose to modify the EA+RL method, which increases the efficiency of evolutionary algorithms by means of auxiliary objectives. The proposed modification is compared to existing objective selection methods on the example of the travelling salesman problem. Method. In the EA+RL method, a reinforcement learning algorithm is used to select an objective, either the target objective or one of the auxiliary objectives, at each iteration of a single-objective evolutionary algorithm. The proposed modification of the EA+RL method adapts this approach for use with a multiobjective evolutionary algorithm. As opposed to the EA+RL method, in this modification one of the auxiliary objectives is selected by reinforcement learning and optimized together with the target objective at each step of the multiobjective evolutionary algorithm. Main Results. The proposed modification of the EA+RL method was compared to the existing objective selection methods on the example of the travelling salesman problem. In the EA+RL method and its proposed modification, reinforcement learning algorithms for stationary and non-stationary environments were used. The proposed modification applied with reinforcement learning for a non-stationary environment outperformed the considered objective selection algorithms on most problem instances. Practical Significance. The proposed approach increases the efficiency of evolutionary algorithms, which may be used for solving discrete NP-hard optimization problems, in particular combinatorial path search problems and scheduling problems.
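    A toy version of the EA+RL idea, not the paper's exact method, can be sketched as a (1+1) evolutionary algorithm on OneMax in which an epsilon-greedy bandit picks the objective that judges each offspring and is rewarded by the gain in the target objective. The auxiliary objective (leading ones) and all parameters here are invented for illustration.

```python
import random

random.seed(1)

def onemax(bits):                 # target objective
    return sum(bits)

def leading_ones(bits):           # auxiliary objective (arbitrary choice)
    n = 0
    for b in bits:
        if b == 0:
            break
        n += 1
    return n

def ea_rl(n=30, iters=400, eps=0.1):
    """(1+1) EA: each step an epsilon-greedy bandit picks which
    objective accepts the offspring; the reward is the change in
    the target objective, so unhelpful objectives are learned away."""
    objs = [onemax, leading_ones]
    q = [0.0, 0.0]                # running average reward per objective
    cnt = [0, 0]
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        a = random.randrange(2) if random.random() < eps else q.index(max(q))
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        before = onemax(parent)
        if objs[a](child) >= objs[a](parent):
            parent = child
        cnt[a] += 1
        q[a] += (onemax(parent) - before - q[a]) / cnt[a]
    return parent
```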

  14. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines, within which further routines are also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 MB of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  15. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    same six pairs of modalities were significantly different, but the JAFROC confidence intervals were about 32% smaller than ROC confidence intervals. This study shows that image processing has a significant impact on the detection of microcalcifications in digital mammograms. Objective measurements, such as described here, should be used by the manufacturers to select the optimal image processing algorithm.

  16. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan

    2012-06-02

    In this paper we present an agent-based algorithm for the spatial distribution of objects. The algorithm is a generalization of the bubble mesh algorithm, initially created for the point insertion stage of the meshing process of the finite element method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing application as well as a graph visualization application.
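    The repel/attract dynamics can be sketched as pairwise unit-stiffness springs with rest length d0, integrated with explicit Euler steps. This is a toy version of the bubble-mesh dynamics described above; the parameters are invented for illustration.

```python
import math, random

def pair_dists(pts):
    return [math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def bubble_relax(pts, d0=0.4, steps=400, dt=0.05):
    """Bubbles closer than d0 repel, farther ones attract; positions
    follow the resulting force ODE via explicit Euler integration.
    Forces are computed for all bubbles before any position moves."""
    for _ in range(steps):
        forces = []
        for i, p in enumerate(pts):
            fx = fy = 0.0
            for j, q in enumerate(pts):
                if i == j:
                    continue
                dx, dy = p[0] - q[0], p[1] - q[1]
                d = math.hypot(dx, dy) or 1e-9
                f = (d0 - d) / d          # >0 repels, <0 attracts
                fx += f * dx
                fy += f * dy
            forces.append((fx, fy))
        for p, (fx, fy) in zip(pts, forces):
            p[0] += dt * fx
            p[1] += dt * fy
    return pts

random.seed(0)
pts = bubble_relax([[random.random(), random.random()] for _ in range(3)])
```

    Three bubbles settle into an equilateral triangle with side length close to d0; a meshing application would add a domain boundary and bubble insertion/deletion on top of this core loop.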

  17. Surpassing digital holography limits by lensless object scanning holography.

    Science.gov (United States)

    Micó, Vicente; Ferreira, Carlos; García, Javier

    2012-04-23

    We present lensless object scanning holography (LOSH), a fully lensless method capable of improving image quality in reflective digital Fourier holography by means of an extremely simplified experimental setup. LOSH is based on the recording and digital post-processing of a set of digital lensless holograms and results in a synthetic image with improved resolution, field of view (FOV), signal-to-noise ratio (SNR), and depth of field (DOF). The superresolution (SR) effect arises from the generation of a synthetic aperture (SA) based on the linear movement of the inspected object. The same scanning principle enlarges the object FOV. SNR enhancement is achieved by speckle suppression and averaging of coherent artifacts due to the coherent addition of the multiple partially overlapping bandpass images. DOF extension is performed by digital refocusing to different sections of the object. Experimental results showing an impressive image quality improvement are reported for a one-dimensional reflective resolution test target. © 2012 Optical Society of America

  18. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    We present a simplified algorithm for localizing an object using multiple visual images obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane for creating a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate which may be obtained from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity. In addition, nonlinearity distortion of the digital imaging devices is compensated during the iterative process. Finally, the performance is evaluated and analyzed for several scenarios in both indoor and outdoor environments.

  19. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  20. A hybrid multi-objective evolutionary algorithm approach for ...

    Indian Academy of Sciences (India)

    V K MANUPATI

    for handling sequence- and machine-dependent set-up times ... algorithm has been compared to that of multi-objective particle swarm optimization (MOPSO) and conventional ..... position and cognitive learning factor are considered for.

  1. Agent-based Algorithm for Spatial Distribution of Objects

    KAUST Repository

    Collier, Nathan; Sieniek, Marcin

    2012-01-01

    method. The bubble mesh algorithm treats objects in space as bubbles, which repel and attract each other. The dynamics of each bubble are approximated by solving a series of ordinary differential equations. We present numerical results for a meshing

  2. Comparison of two global digital algorithms for Minkowski tensor estimation

    DEFF Research Database (Denmark)

    The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties ... are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions...

  3. Hybrid phase retrieval algorithm for solving the twin image problem in in-line digital holography

    Science.gov (United States)

    Zhao, Jie; Wang, Dayong; Zhang, Fucai; Wang, Yunxin

    2010-10-01

    In reconstruction in in-line digital holography, three terms overlap with each other on the image plane: the zero-order term, the real image, and the twin image. The unwanted twin image seriously degrades the real image. A hybrid phase retrieval algorithm is presented to address this problem, combining the advantages of two popular phase retrieval algorithms. One is an improved version of the universal iterative algorithm (UIA), called the phase-flipping-based UIA (PFB-UIA). The key point of this algorithm is to flip the phase of the object iteratively, and it is proved that the PFB-UIA is able to find the support of a complicated object. The other is the Fienup algorithm, a well-developed algorithm that uses the support of the object as a constraint during the iteration procedure. Thus, by running the Fienup algorithm immediately after the PFB-UIA, it is possible to produce the amplitude and phase distributions of the object with high fidelity. Preliminary simulation results show that the proposed algorithm is powerful for solving the twin image problem in in-line digital holography.

  4. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB; Algorithmic Account; MATLAB Code). Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code). Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples). Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account)

  5. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  6. Digital signal processing algorithms for nuclear particle spectroscopy

    International Nuclear Information System (INIS)

    Zejnalova, O.; Zejnalov, Sh.; Hambsch, F.J.; Oberstedt, S.

    2007-01-01

    Digital signal processing algorithms for nuclear particle spectroscopy are described, along with a digital pile-up elimination method applicable to equidistantly sampled detector signals pre-processed by a charge-sensitive preamplifier. The signal processing algorithms are provided as recursive one- or multi-step procedures which can be easily programmed using modern computer programming languages. The influence of the number of bits of the sampling analogue-to-digital converter on the final signal-to-noise ratio of the spectrometer is considered. Algorithms for a digital shaping-filter amplifier, a digital pile-up elimination scheme, and ballistic deficit correction were investigated using a high-purity germanium detector. The pile-up elimination method was originally developed for fission fragment spectroscopy using a Frisch-grid back-to-back double ionization chamber and was mainly intended for pile-up elimination in the case of high alpha-radioactivity of the fissile target. The developed pile-up elimination method affects only the electronic noise generated by the preamplifier; therefore, the influence of the pile-up elimination scheme on the final resolution of the spectrometer is investigated in terms of the distance between pile-up pulses. The efficiency of the developed algorithms is compared with other signal processing schemes published in the literature.
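    The recursive one-step procedures mentioned in the abstract can be illustrated with the moving-sum recursion that underlies digital triangular and trapezoidal shapers. This is an illustrative sketch, not the authors' code.

```python
def boxcar(signal, m):
    """Recursive moving sum of width m:
    s[k] = s[k-1] + x[k] - x[k-m],
    i.e. one addition and one subtraction per sample regardless
    of the shaping length m."""
    out = []
    s = 0.0
    for k, x in enumerate(signal):
        s += x - (signal[k - m] if k >= m else 0.0)
        out.append(s)
    return out
```

    Applied to a step (the idealised preamplifier output), a single boxcar produces a ramp to a flat top; cascading two boxcars and scaling yields the trapezoidal weighting commonly used in digital pulse shaping.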

  7. Research on digital PID control algorithm for HPCT

    International Nuclear Information System (INIS)

    Zeng Yi; Li Rui; Shen Tianjian; Ke Xinhua

    2009-01-01

    A digital PID controller for a high-precision current transducer (HPCT), based on the Digital Signal Processor (DSP) TMS320F2812 and a dedicated D/A converter, was investigated. By using an incremental PID control algorithm, the stability and precision of the HPCT output voltage are improved. Based on a detailed analysis of incremental digital PID, a scheme model of the HPCT is proposed, and a feasibility simulation using Matlab is given. A practical hardware circuit verified that the incremental PID achieves closed-loop control in tracking the HPCT output voltage. (authors)
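    The incremental (velocity-form) PID law named in the abstract is compact enough to sketch directly; the gains and the first-order plant below are invented for illustration and are not the paper's tuning.

```python
def make_incremental_pid(kp, ki, kd):
    """Incremental PID: the output is accumulated from increments
    du[k] = Kp*(e[k]-e[k-1]) + Ki*e[k] + Kd*(e[k]-2*e[k-1]+e[k-2]),
    which avoids integral windup on the absolute sum of errors."""
    e1 = e2 = u = 0.0
    def step(e):
        nonlocal e1, e2, u
        u += kp * (e - e1) + ki * e + kd * (e - 2.0 * e1 + e2)
        e2, e1 = e1, e
        return u
    return step

# Track a unit setpoint with a toy first-order plant
pid = make_incremental_pid(kp=0.8, ki=0.3, kd=0.05)
y = 0.0
for _ in range(200):
    u = pid(1.0 - y)
    y += 0.2 * (u - y)      # y[k+1] = 0.8*y[k] + 0.2*u[k]
```

    The integral term in the increment drives the steady-state error to zero, which is the property the abstract relies on for output-voltage precision.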

  8. Mathematical (diagnostic) algorithms in the digitization of oral histopathology: The new frontier in histopathological diagnosis

    Directory of Open Access Journals (Sweden)

    Abhishek Banerjee

    2015-01-01

    The technological progress in the digitization of a complete histological glass slide has opened a new door in tissue-based diagnosis. Automated slide diagnosis can be made possible by the use of mathematical algorithms which are formulated by binary codes or values. These algorithms (diagnostic algorithms) include both object-based (object features, structures) and pixel-based (texture) measures. The intra- and inter-observer errors inherent in the visual diagnosis of a histopathological slide are largely replaced by the use of diagnostic algorithms, leading to a standardized and reproducible diagnosis. The present paper reviews the advances in digital histopathology, especially related to the use of mathematical algorithms (diagnostic algorithms) in the field of oral histopathology. The literature was reviewed for data relating to the use of algorithms utilized in the construction of computational software with special applications in oral histopathological diagnosis. The data were analyzed, and the types and end targets of the algorithms were tabulated. The advantages, specificities and reproducibility of the software, its shortcomings and its comparison with traditional methods of histopathological diagnosis were evaluated. Algorithms help in automated slide diagnosis by creating software with possibly reduced errors and bias and a high degree of specificity, sensitivity, and reproducibility. Akin to the identification of thumbprints and faces, software for histopathological diagnosis will in the near future be an important part of histopathological diagnosis.

  9. COOMA: AN OBJECT-ORIENTED STOCHASTIC OPTIMIZATION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Stanislav Alexandrovich Tavridovich

    2017-09-01

    Stochastic optimization methods such as the genetic algorithm, the particle swarm optimization algorithm, and others are successfully used to solve optimization problems. They are all based on similar ideas and need minimal adaptation when being implemented. But several factors complicate the application of stochastic search methods in practice: multimodality of the objective function, optimization with constraints, finding the best parameter configuration of the algorithm, growth of the search space, etc. This paper proposes a new Cascade Object Optimization and Modification Algorithm (COOMA) which develops the best ideas of known stochastic optimization methods and can be applied to a wide variety of real-world problems described in terms of object-oriented models with practically any types of parameters, variables, and associations between objects. The objects of different classes are organized in pools, and the pools form a hierarchical structure according to the associations between classes. The algorithm is also executed according to the pool structure: before changing their own objects, the methods of the upper-level pools call the analogous methods of all their subpools. The algorithm starts with an initialization step and then passes through a number of iterations during which the objects are modified until the stop criteria are satisfied. The objects are modified using movement, replication and mutation operations. A two-level version of COOMA realizes a built-in self-adaptive mechanism. The optimization statistics for a number of test problems show that COOMA is able to solve multi-level problems (with objects of different associated classes), problems with multimodal fitness functions, and systems of constraints. COOMA source code in Java is available on request.

  10. A Moving Object Detection Algorithm Based on Color Information

    International Nuclear Information System (INIS)

    Fang, X H; Xiong, W; Hu, B J; Wang, L T

    2006-01-01

    This paper presents a new moving-object detection algorithm aimed at fast detection and localization of moving objects. A pixel and its neighbors are treated as an image vector representing that pixel, and each chrominance component is modeled as a mixture of Gaussians, with a separate mixture model for each YUV chrominance component. To make full use of spatial information, color segmentation and the background model are combined. Simulation results show that the algorithm can detect intact moving objects even when the foreground has low contrast with the background
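    The per-pixel background modeling described above can be sketched as follows. For brevity this uses a single Gaussian per pixel rather than the paper's full mixture per YUV component; the learning rate, match threshold, and initial variance are assumed values.

```python
import numpy as np

class PixelGaussianBG:
    """Per-pixel Gaussian background model (one mode per pixel,
    simplified from the mixture-of-Gaussians idea in the abstract)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # assumed initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var   # pixels that do not match the model
        a = self.alpha * (~fg)               # update statistics only where matched
        self.mean += a * (frame - self.mean)
        self.var += a * (d2 - self.var)
        return fg

bg = PixelGaussianBG(np.zeros((4, 4)))
frame = np.zeros((4, 4)); frame[1, 1] = 200.0   # a bright moving "object"
mask = bg.apply(frame)
print(mask[1, 1], mask[0, 0])  # → True False
```

A full implementation would keep several weighted Gaussians per pixel and combine the resulting mask with color segmentation, as the abstract describes.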

  11. A Dedicated Genetic Algorithm for Localization of Moving Magnetic Objects

    Directory of Open Access Journals (Sweden)

    Roger Alimi

    2015-09-01

    Full Text Available A dedicated Genetic Algorithm (GA has been developed to localize the trajectory of ferromagnetic moving objects within a bounded perimeter. Localization of moving ferromagnetic objects is an important tool because it can be employed in situations when the object is obscured. This work is innovative for two main reasons: first, the GA has been tuned to provide an accurate and fast solution to the inverse magnetic field equations problem. Second, the algorithm has been successfully tested using real-life experimental data. Very accurate trajectory localization estimations were obtained over a wide range of scenarios.
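    The abstract does not reproduce the dedicated GA itself, but the general shape of such a search can be sketched. The toy below fits a one-dimensional trajectory x(t) = x0 + v·t to sampled positions, standing in for the magnetic inverse problem; the operators, parameters, and bounds are illustrative assumptions, not the authors' tuned GA.

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=60, mut=0.2):
    """Minimal real-coded GA: keep the better half, breed children by
    blend crossover, and apply clipped Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            if random.random() < mut:                     # Gaussian mutation
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy inverse problem standing in for trajectory localization:
# recover (x0, v) of x(t) = x0 + v*t from sampled positions.
random.seed(1)
samples = [(t, 2.0 + 0.5 * t) for t in range(10)]
sse = lambda p: sum((p[0] + p[1] * t - x) ** 2 for t, x in samples)
best = ga_minimize(sse, bounds=[(-10, 10), (-5, 5)])
print(best)
```

The real problem replaces the line model with the dipole field equations and the samples with magnetometer readings.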

  12. Separated reconstruction of images from ultrasonic holograms with tridimensional object by digital processing

    International Nuclear Information System (INIS)

    Son, J.H.

    1979-01-01

    Because of its attractiveness, digital reconstruction of images from ultrasonic holograms by computer has been widely studied in recent years. However, because digitally reconstructed images are displayed in a plane, studies have mainly treated holograms obtained from two-dimensional objects. Many applications of ultrasonic holography, such as non-destructive testing and ultrasonic diagnosis, mostly concern three-dimensional objects. In ordinary digital reconstruction of an image from a hologram of a three-dimensional object, a hidden-image problem arises, and separated reconstruction of the image for the part of the object under consideration is required. In this paper, multiple diffraction by a three-dimensional object is assumed to be linear, i.e., to obey superposition of the diffraction from each of its two-dimensional sections. A new algorithm is proposed in which the reconstructed image for one chosen two-dimensional section of a three-dimensional object is obtained by an operation on two holograms tilted at unequal angles. Such tilted holograms are obtained from tilted linear array receivers by a scanning method. That images can be reconstructed by this operation from two holograms verifies the new algorithm. Another new method, the transformation of a hologram to an arbitrarily tilted hologram, has also been proved valid. The reconstructed images obtained with the transformation method and the operation method are more distinctly separated than images reconstructed from a single hologram of the three-dimensional object. (author)

  13. System for objective assessment of image differences in digital cinema

    Science.gov (United States)

    Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2014-09-01

    There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to answer the question what are the quality requirements on the restoration process in order to obtain acceptably close visual perception of the digitally restored film in comparison to the original analog film copy. This knowledge is very important to preserve the original artistic intention of the movie producers. Subjective experiment with artificially distorted images has been conducted in order to answer the question what is the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced and test images were presented to viewers using digital projector. Based on the outcome of this subjective evaluation a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized in order to obtain predicted differences between the reference and distorted images while achieving high correlation with the results of subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.

  14. Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping

    OpenAIRE

    Li, Na; Yang, Jiquan; Feng, Chunmei; Yang, Jianfei; Zhu, Liya; Guo, Aiqing

    2016-01-01

    An integrated fabrication framework is presented to build heterogeneous objects (HEO) using digital microdroplet injecting technology and rapid prototyping. The heterogeneous-material part design and manufacturing method, covering both structure and material, was used to change the traditional process. The net node method was used for digital modeling that can configure multimaterials in time. The relationship of material, color, and jetting nozzle was built. The main important contributions are to combi...

  15. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  16. Bi-objective branch-and-cut algorithms

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Ehrgott, Matthias; Nielsen, Lars Relund

    Most real-world optimization problems are of a multi-objective nature, involving objectives which are conflicting and incomparable. Solving a multi-objective optimization problem requires a method which can generate the set of rational compromises between the objectives. In this paper, we propose...... are strengthened by cutting planes. In addition, we suggest an extension of the branching strategy "Pareto branching''. Extensive computational results obtained for the bi-objective single source capacitated facility location problem prove the effectiveness of the algorithms....... and compares it to an upper bound set. The implicit bound set based algorithm, on the other hand, fathoms branching nodes by generating a single point on the lower bound set for each local nadir point. We outline several approaches for fathoming branching nodes and we propose an updating scheme for the lower...

  17. Use of multiple objective evolutionary algorithms in optimizing surveillance requirements

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Villanueva, J.F.; Sanchez, A.I; Galvan, B.; Salazar, D.; Cepin, M.

    2006-01-01

    This paper presents the development and application of a double-loop Multiple Objective Evolutionary Algorithm that uses a Multiple Objective Genetic Algorithm to perform simultaneous optimization of periodic Test Intervals (TI) and Test Planning (TP). It takes into account the time-dependent effect of TP performed on stand-by safety-related equipment. TI and TP are part of the Surveillance Requirements within Technical Specifications at Nuclear Power Plants. The paper addresses the problem of multi-objective optimization in the space of decision variables, i.e. TI and TP, using a novel flexible structure of the optimization algorithm. Lessons learnt from applying the methodology to optimize TI and TP for the High-Pressure Injection System are given. The results show that the double-loop Multiple Objective Evolutionary Algorithm is able to find the Pareto set of solutions, a surface of non-dominated solutions that satisfy all the constraints imposed on the objective functions and decision variables. Decision makers can then adopt the best solution found according to their particular preference, e.g. minimum cost or minimum unavailability

  18. The multiplicity of the digital textbook as design object

    DEFF Research Database (Denmark)

    Riis Ebbesen, Toke

    2015-01-01

    Building on a preliminary case study of the Danish educational publisher Systime A/S and its flagship product, the web-based ‘iBog’/‘iBook’, this article explores how digital textbooks can be understood as design. The shaping of digital books is seen as being intertwined in a wider circuit...... reorganization of the publishing company, web-based user interfaces, and ultimately the branding, which market these new digital objects, are building powerful discourses around the product. Thus it is suggested that the design process of the iBog case can be understood in a model of database-based publishing...... with multiple levels. In the final analysis, the iBog is much more than a product and a technology. It is a brand that goes beyond what can be studied by looking at the digital textbook as a singular artefact....

  19. A Survey of Complex Object Technologies for Digital Libraries

    Science.gov (United States)

    Nelson, Michael L.; Argue, Brad; Efron, Miles; Denn, Sheila; Pattuelli, Maria Cristina

    2001-01-01

    Many early web-based digital libraries (DLs) had implicit assumptions reflected in their architecture that the unit of focus in the DL (frequently "reports" or "e-prints") would be manifested in only a single, or at most a few, common file formats such as PDF or PostScript. DLs have now matured to the point where their contents are commonly no longer simple files. Complex objects in DLs have emerged in response to various requirements, including: simple aggregation of formats and supporting files, bundling additional information to aid digital preservation, creating opaque digital objects for e-commerce applications, and the incorporation of dynamic services with the traditional data files. We examine a representative (but not necessarily exhaustive) number of current and recent historical web-based complex object technologies and projects that are applicable to DLs: Aurora, Buckets, ComMentor, Cryptolopes, Digibox, Document Management Alliance, FEDORA, Kahn-Wilensky Framework Digital Objects, Metadata Encoding & Transmission Standard, Multivalent Documents, Open eBooks, VERS Encapsulated Objects, and the Warwick Framework.

  20. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    Science.gov (United States)

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove, implemented in a software package written in C. The theorem states: given the initial point I, the final point F, and a set S of finitely many static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects. We prove the theorem and develop the following algorithm based on it to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the IF line. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. It then chooses the minimum among the possible paths. The flowchart of the program as well as an examination of its numerical properties are included.

  1. Digital fabrication of multi-material biomedical objects

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, H H; Choi, S H, E-mail: shchoi@hku.h [Department of Industrial and Manufacturing Systems Engineering, University of Hong Kong, Pokfulam Road (Hong Kong)

    2009-12-15

    This paper describes a multi-material virtual prototyping (MMVP) system for modelling and digital fabrication of discrete and functionally graded multi-material objects for biomedical applications. The MMVP system consists of a DMMVP module, an FGMVP module and a virtual reality (VR) simulation module. The DMMVP module is used to model discrete multi-material (DMM) objects, while the FGMVP module is for functionally graded multi-material (FGM) objects. The VR simulation module integrates these two modules to perform digital fabrication of multi-material objects, which can be subsequently visualized and analysed in a virtual environment to optimize MMLM processes for fabrication of product prototypes. Using the MMVP system, two biomedical objects, including a DMM human spine and an FGM intervertebral disc spacer are modelled and digitally fabricated for visualization and analysis in a VR environment. These studies show that the MMVP system is a practical tool for modelling, visualization, and subsequent fabrication of biomedical objects of discrete and functionally graded multi-materials for biomedical applications. The system may be adapted to control MMLM machines with appropriate hardware for physical fabrication of biomedical objects.

  2. Digital fabrication of multi-material biomedical objects

    International Nuclear Information System (INIS)

    Cheung, H H; Choi, S H

    2009-01-01

    This paper describes a multi-material virtual prototyping (MMVP) system for modelling and digital fabrication of discrete and functionally graded multi-material objects for biomedical applications. The MMVP system consists of a DMMVP module, an FGMVP module and a virtual reality (VR) simulation module. The DMMVP module is used to model discrete multi-material (DMM) objects, while the FGMVP module is for functionally graded multi-material (FGM) objects. The VR simulation module integrates these two modules to perform digital fabrication of multi-material objects, which can be subsequently visualized and analysed in a virtual environment to optimize MMLM processes for fabrication of product prototypes. Using the MMVP system, two biomedical objects, including a DMM human spine and an FGM intervertebral disc spacer are modelled and digitally fabricated for visualization and analysis in a VR environment. These studies show that the MMVP system is a practical tool for modelling, visualization, and subsequent fabrication of biomedical objects of discrete and functionally graded multi-materials for biomedical applications. The system may be adapted to control MMLM machines with appropriate hardware for physical fabrication of biomedical objects.

  3. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Directory of Open Access Journals (Sweden)

    Zhenghao Xi

    2014-01-01

    Full Text Available To solve persistent multiple-object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple-object tracking is formulated as an integer programming problem on a flow network. This integer program is then relaxed to a standard linear programming problem, so the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
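    The shortest path faster algorithm (SPFA) at the core of the association step is a queue-based refinement of Bellman-Ford. A minimal sketch on a small directed graph (the graph is illustrative, not the paper's flow-network formulation):

```python
from collections import deque

def spfa(n, edges, src):
    """Shortest Path Faster Algorithm: relax edges from a queue of
    recently-improved vertices. edges: (u, v, w); returns distances from src."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float("inf")] * n
    dist[src] = 0
    q = deque([src])
    in_q = [False] * n
    in_q[src] = True
    while q:
        u = q.popleft()
        in_q[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:        # relaxation
                dist[v] = dist[u] + w
                if not in_q[v]:              # enqueue only if not already queued
                    q.append(v)
                    in_q[v] = True
    return dist

print(spfa(4, [(0, 1, 5), (0, 2, 2), (2, 1, 1), (1, 3, 1)], 0))
# → [0, 3, 2, 4]
```

Unlike Dijkstra, SPFA tolerates negative edge weights (as long as there is no negative cycle), which is why it suits min-cost-flow style tracking formulations.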

  4. Analysis algorithm for digital data used in nuclear spectroscopy

    CERN Document Server

    AUTHOR|(CDS)2085950; Sin, Mihaela

    Data obtained from digital acquisition systems used in nuclear spectroscopy experiments must be converted by a dedicated algorithm in order to extract the physical quantities of interest. I report here the development of an algorithm capable of reading digital data, discriminating between random and true signals, and converting the results into a format readable by a special data analysis program package used to interpret nuclear spectra and to create coincidence matrices. The algorithm can be used in any nuclear spectroscopy experimental setup provided that digital acquisition modules are involved. In particular it was used to treat data obtained from the IS441 experiment at ISOLDE, where the beta decay of 80Zn was investigated as part of ultra-fast timing studies of neutron-rich Zn nuclei. The results obtained for the half-lives of 80Zn and 80Ga were in very good agreement with previous measurements. This fact proved unquestionably that the conversion algorithm works. Another remarkable result was the improve...
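    Discriminating true from random signals typically reduces to finding events from different detectors that coincide within a time window. A hedged sketch of such a coincidence search (the timestamps, window, and data layout are made up; the abstract does not describe the actual format):

```python
def coincidences(t1, t2, window):
    """Pair events from two detectors whose timestamps differ by at most
    `window`, using a two-pointer sweep over sorted timestamp lists."""
    pairs, j = [], 0
    for a in t1:
        while j < len(t2) and t2[j] < a - window:
            j += 1                                # drop events too far in the past
        k = j
        while k < len(t2) and t2[k] <= a + window:
            pairs.append((a, t2[k]))              # candidate true coincidence
            k += 1
    return pairs

# Hypothetical timestamps (ns); only pairs within a 5 ns window count.
print(coincidences([100, 250, 400], [103, 300, 398], 5))
# → [(100, 103), (400, 398)]
```

The surviving pairs would then be histogrammed into a coincidence matrix for the spectrum-analysis package.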

  5. Multi-objective optimization using genetic algorithms: A tutorial

    International Nuclear Information System (INIS)

    Konak, Abdullah; Coit, David W.; Smith, Alice E.

    2006-01-01

    Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity
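    The notion of a solution set "not dominated by any other solution" can be made concrete with a small Pareto-front filter (minimization assumed; a naive O(n²) sketch, not any specific GA from the tutorial):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(pts))
# → [(1, 5), (2, 3), (4, 1)]
```

Multi-objective GAs such as NSGA-II apply exactly this dominance relation when ranking the population, adding diversity-preserving mechanisms on top.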

  6. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    Science.gov (United States)

    2009-03-10

    xfar by xint. Else, generate a new individual, using the Sobol pseudo-random sequence generator within the upper and lower bounds of the variables...12. Deb, K., Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, 2002. 13. Sobol, I. M., "Uniformly Distributed Sequences

  7. Construct validation of an interactive digital algorithm for ostomy care.

    Science.gov (United States)

    Beitz, Janice M; Gerlach, Mary A; Schafer, Vickie

    2014-01-01

    The purpose of this study was to evaluate construct validity for a previously face and content validated Ostomy Algorithm using digital real-life clinical scenarios. A cross-sectional, mixed-methods Web-based survey design study was conducted. Two hundred ninety-seven English-speaking RNs completed the study; participants practiced in both acute care and postacute settings, with 1 expert ostomy nurse (WOC nurse) and 2 nonexpert nurses. Following written consent, respondents answered demographic questions and completed a brief algorithm tutorial. Participants were then presented with 7 ostomy-related digital scenarios consisting of real-life photos and pertinent clinical information. Respondents used the 11 assessment components of the digital algorithm to choose management options. Participant written comments about the scenarios and the research process were collected. The mean overall percentage of correct responses was 84.23%. Mean percentage of correct responses for respondents with a self-reported basic ostomy knowledge was 87.7%; for those with a self-reported intermediate ostomy knowledge was 85.88% and those who were self-reported experts in ostomy care achieved 82.77% correct response rate. Five respondents reported having no prior ostomy care knowledge at screening and achieved an overall 45.71% correct response rate. No negative comments regarding the algorithm were recorded by participants. The new standardized Ostomy Algorithm remains the only face, content, and construct validated digital clinical decision instrument currently available. Further research on application at the bedside while tracking patient outcomes is warranted.

  8. Digitization of natural objects with micro CT and photographs.

    Science.gov (United States)

    Ijiri, Takashi; Todo, Hideki; Hirabayashi, Akira; Kohiyama, Kenji; Dobashi, Yoshinori

    2018-01-01

    In this paper, we present a three-dimensional (3D) digitization technique for natural objects, such as insects and plants. The key idea is to combine X-ray computed tomography (CT) and photographs to obtain both complicated 3D shapes and surface textures of target specimens. We measure a specimen by using an X-ray CT device and a digital camera to obtain a CT volumetric image (volume) and multiple photographs. We then reconstruct a 3D model by segmenting the CT volume and generate a texture by projecting the photographs onto the model. To achieve this reconstruction, we introduce a technique for estimating a camera position for each photograph. We also present techniques for merging multiple textures generated from multiple photographs and recovering missing texture areas caused by occlusion. We illustrate the feasibility of our 3D digitization technique by digitizing 3D textured models of insects and flowers. The combination of X-ray CT and a digital camera makes it possible to successfully digitize specimens with complicated 3D structures accurately and allows us to browse both surface colors and internal structures.
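    Projecting photographs onto the CT-derived model requires a camera model; estimating a camera position per photograph amounts to recovering the pose (R, t) in such a model. A minimal pinhole-projection sketch (the intrinsic matrix and points are assumed values, not the paper's calibration):

```python
import numpy as np

def project(points, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole model:
    x ~ K (R X + t). Returns an (N, 2) array of (u, v)."""
    P = R @ points.T + t.reshape(3, 1)   # world -> camera coordinates
    uv = K @ P                           # camera -> homogeneous image coordinates
    return (uv[:2] / uv[2]).T            # perspective divide

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
uv = project(pts, K, R, t)
print(uv)  # the on-axis point lands at the principal point (320, 240)
```

Texture generation then samples each photograph at the projected coordinates of the model's surface points, blending across photographs where they overlap.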

  9. EFFICIENT MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM FOR JOB SHOP SCHEDULING

    Institute of Scientific and Technical Information of China (English)

    Lei Deming; Wu Zhiming

    2005-01-01

    A new representation method is first presented based on priority rules. According to this method, each entry in the chromosome indicates that, in the procedure of the Giffler and Thompson (GT) algorithm, the conflict occurring on the corresponding machine is resolved by the corresponding priority rule. Then a crowding-measure multi-objective evolutionary algorithm (CMOEA) is designed, in which both archive maintenance and fitness assignment use the crowding measure. Finally, comparisons between CMOEA and SPEA in solving 15 scheduling problems demonstrate that CMOEA is well suited to job shop scheduling.
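    The Giffler and Thompson procedure referenced above builds active schedules and resolves each machine conflict with a priority rule. A compact sketch with shortest-processing-time (SPT) as the rule (a plain illustration of GT on a tiny instance, not the paper's chromosome-driven variant):

```python
def giffler_thompson(jobs, rule=lambda op: op[3]):
    """Active-schedule builder (Giffler-Thompson). Conflicts on the critical
    machine are resolved by `rule` (default SPT: shortest processing time).
    jobs[j] = [(machine, duration), ...] in technological order."""
    nxt = [0] * len(jobs)                  # next operation index per job
    job_ready = [0] * len(jobs)
    mach_ready = {}
    schedule = []                          # (job, op_idx, machine, start, end)
    while any(nxt[j] < len(jobs[j]) for j in range(len(jobs))):
        cands = []
        for j in range(len(jobs)):
            if nxt[j] < len(jobs[j]):
                m, d = jobs[j][nxt[j]]
                s = max(job_ready[j], mach_ready.get(m, 0))
                cands.append((j, m, s, d))
        # operation with the earliest completion time fixes the critical machine
        j0, m0, s0, d0 = min(cands, key=lambda c: c[2] + c[3])
        conflict = [c for c in cands if c[1] == m0 and c[2] < s0 + d0]
        j, m, s, d = min(conflict, key=rule)   # priority rule decides the conflict
        job_ready[j] = mach_ready[m] = s + d
        schedule.append((j, nxt[j], m, s, s + d))
        nxt[j] += 1
    return schedule

sched = giffler_thompson([[(0, 3), (1, 2)], [(0, 2), (1, 4)]])
print(max(end for *_, end in sched))  # makespan of the SPT schedule
```

In the paper's encoding, the chromosome supplies a (possibly different) priority rule for each conflict instead of a single fixed rule.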

  10. The research of Digital Holographic Object Wave Field Reconstruction in Image and Object Space

    Institute of Scientific and Technical Information of China (English)

    LI Jun-Chang; PENG Zu-Jie; FU Yun-Chang

    2011-01-01

    For conveniently detecting objects of different sizes using digital holography, usual measurements employ an object wave transformed by an optical system with different magnifications to fit charge coupled devices (CCDs); the object field reconstruction then involves the diffraction calculation of the optical wave passing through the optical system. We propose two methods to reconstruct the object field. In the first, when the object is imaged in an image space in which we reconstruct the image of the object field, the object field can be expressed according to the object-image relationship. In the second, when the object field reaching the CCD is imaged in an object space in which we reconstruct the object field, the optical system is described by introducing matrix optics. The reconstruction formulae, which readily use the classic diffraction integral, are derived. Finally, experimental verifications are also accomplished.
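    The diffraction calculations underlying such reconstructions are commonly carried out numerically with FFT-based propagation. A sketch of the angular-spectrum method (a standard technique, not necessarily the formulae derived in the paper; grid size, pixel pitch, and wavelength are arbitrary choices):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z in free space via the
    angular-spectrum method; dx is the pixel pitch, all units consistent."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)   # transfer function; evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagating forward then backward should recover the original field.
f0 = np.zeros((64, 64), complex); f0[28:36, 28:36] = 1.0
f1 = angular_spectrum(f0, 633e-9, 10e-6, 5e-3)
f2 = angular_spectrum(f1, 633e-9, 10e-6, -5e-3)
print(np.allclose(f0, f2, atol=1e-8))
```

Reconstruction from a recorded hologram amounts to back-propagating the hologram plane field to the object (or image) plane with such a kernel.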

  11. Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping

    Science.gov (United States)

    Yang, Jiquan; Feng, Chunmei; Yang, Jianfei; Zhu, Liya; Guo, Aiqing

    2016-01-01

    An integrated fabrication framework is presented to build heterogeneous objects (HEO) using digital microdroplet injecting technology and rapid prototyping. The heterogeneous-material part design and manufacturing method, covering both structure and material, was used to change the traditional process. The net node method was used for digital modeling that can configure multimaterials in time. The relationship of material, color, and jetting nozzle was built. The main contributions are to combine structure, material, and visualization in one process and to give the digital model for manufacture. From the given model, it is concluded that the method is effective for HEO. Using microdroplet rapid prototyping and the model given in the paper, HEO can essentially be obtained. The model can be used in 3D biomanufacturing. PMID:26981110

  12. Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping.

    Science.gov (United States)

    Li, Na; Yang, Jiquan; Feng, Chunmei; Yang, Jianfei; Zhu, Liya; Guo, Aiqing

    2016-01-01

    An integrated fabrication framework is presented to build heterogeneous objects (HEO) using digital microdroplet injecting technology and rapid prototyping. The heterogeneous-material part design and manufacturing method, covering both structure and material, was used to change the traditional process. The net node method was used for digital modeling that can configure multimaterials in time. The relationship of material, color, and jetting nozzle was built. The main contributions are to combine structure, material, and visualization in one process and to give the digital model for manufacture. From the given model, it is concluded that the method is effective for HEO. Using microdroplet rapid prototyping and the model given in the paper, HEO can essentially be obtained. The model can be used in 3D biomanufacturing.

  13. Digital Microdroplet Ejection Technology-Based Heterogeneous Objects Prototyping

    Directory of Open Access Journals (Sweden)

    Na Li

    2016-01-01

    Full Text Available An integrated fabrication framework is presented to build heterogeneous objects (HEO) using digital microdroplet injecting technology and rapid prototyping. The heterogeneous-material part design and manufacturing method, covering both structure and material, was used to change the traditional process. The net node method was used for digital modeling that can configure multimaterials in time. The relationship of material, color, and jetting nozzle was built. The main contributions are to combine structure, material, and visualization in one process and to give the digital model for manufacture. From the given model, it is concluded that the method is effective for HEO. Using microdroplet rapid prototyping and the model given in the paper, HEO can essentially be obtained. The model can be used in 3D biomanufacturing.

  14. Towards Automatic Controller Design using Multi-Objective Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf

    of evolutionary computation, a choice was made to use multi-objective algorithms for the purpose of aiding in automatic controller design. More specifically, the choice was made to use the Non-dominated Sorting Genetic Algorithm II (NSGAII), which is one of the most potent algorithms currently in use...... for automatic controller design. However, because the field of evolutionary computation is relatively unknown in the field of control engineering, this thesis also includes a comprehensive introduction to the basic field of evolutionary computation as well as a description of how the field has previously been......In order to design the controllers of tomorrow, a need has risen for tools that can aid in the design of these. A desire to use evolutionary computation as a tool to achieve that goal is what gave inspiration for the work contained in this thesis. After having studied the foundations...

  15. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT) has been an international standard in the Joint Photographic Experts Group (JPEG) format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT) algorithm that utilizes the energy compactness and matrix sparseness properties in the frequency domain to achieve higher computational performance. For a JPEG image of 8×8 block size in the spatial domain, the algorithm decomposes the two-dimensional (2D) DCT into one pair of one-dimensional (1D) DCTs with transform computation in only 24 multiplications. The 2D spatial data is a linear combination of the base images obtained by the outer product of the column and row vectors of cosine functions, so the inverse DCT is equally efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32×32 block pixel size in a 256×256 digital image can be completed in only 0.24 seconds, and extraction of the watermark by inverse transform takes under 0.21 seconds. The proposed FDCT algorithm is shown to be more efficient in computation than many previous works.
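    The row/column separability the abstract exploits can be illustrated directly: a 2D DCT is two passes of a 1D DCT, i.e. C·X·Cᵀ for an orthonormal DCT-II matrix C. A plain NumPy sketch of this separability (without the 24-multiplication fast factorization itself):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: entry [k, i] = cos(pi*(2i+1)*k / 2n), scaled."""
    C = np.cos(np.pi * (2 * np.arange(n) + 1)[None, :] * np.arange(n)[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2(block):
    """Separable 2-D DCT: 1-D transforms along rows, then columns (C X C^T)."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    """Inverse is the transpose pair, since C is orthonormal."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

x = np.arange(64, dtype=float).reshape(8, 8)
print(np.allclose(idct2(dct2(x)), x))  # → True
```

Fast DCT algorithms factor C into sparse stages to cut the multiplication count; the separable structure shown here is what makes the inverse equally cheap.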

  16. Improved multi-objective clustering algorithm using particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Congcong Gong

    Full Text Available Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. Firstly, a novel particle representation for the clustering problem is designed to help PSO search for clustering solutions in continuous space. Secondly, the distribution of the Pareto set is analyzed. The analysis results are applied to the leader selection strategy and keep the algorithm from becoming trapped in a local optimum. Moreover, a method for improving clustering solutions is proposed, which greatly increases the efficiency of the search for clustering solutions. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared; the proposed method is superior to the other approaches in the evaluation index ARI.

  17. Improved multi-objective clustering algorithm using particle swarm optimization.

    Science.gov (United States)

    Gong, Congcong; Chen, Haisong; He, Weixiong; Zhang, Zhanliang

    2017-01-01

    Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. Firstly, a novel particle representation for the clustering problem is designed to help PSO search for clustering solutions in continuous space. Secondly, the distribution of the Pareto set is analyzed, and the results of this analysis are applied to the leader-selection strategy to keep the algorithm from becoming trapped in local optima. Moreover, a clustering-solution improvement method is proposed, which greatly increases the efficiency of the search for clustering solutions. In experiments on 28 datasets against nine state-of-the-art clustering algorithms, the proposed method is superior to the other approaches on the ARI evaluation index.
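The continuous particle representation the abstract describes can be illustrated with a minimal PSO sketch: each particle is a flat vector of k cluster centroids, scored here by the within-cluster sum of squared errors. This is a single-objective toy version of the encoding, not the IMCPSO framework itself (the Pareto-based leader selection and solution-improvement steps are omitted, and all parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(particle, k, d):
    """A particle is a flat continuous vector holding k centroids in d dimensions."""
    return particle.reshape(k, d)

def sse(particle, data, k):
    """Within-cluster sum of squared errors -- the fitness minimized here."""
    centroids = decode(particle, k, data.shape[1])
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return (dists.min(axis=1) ** 2).sum()

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update in the continuous search space."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Two well-separated blobs; k = 2 centroids in d = 2 dimensions.
data = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
swarm = rng.uniform(-1, 6, (10, 4))           # 10 particles, k * d = 4 genes each
vel = np.zeros_like(swarm)
pbest = swarm.copy()
scores = np.array([sse(p, data, 2) for p in swarm])
init_best = scores.min()
for _ in range(50):
    gbest = pbest[scores.argmin()]
    swarm, vel = pso_step(swarm, vel, pbest, gbest)
    new = np.array([sse(p, data, 2) for p in swarm])
    improved = new < scores
    pbest[improved], scores[improved] = swarm[improved], new[improved]
```

Because positions are plain real vectors, the same update rule searches the clustering space directly, which is the point of the representation described above.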

  18. Mixed play spaces: Augmenting digital storytelling with tactile objects

    OpenAIRE

    Glowacki, B. R.

    2018-01-01

    In 2016, the U.K. media watchdog Ofcom published a report from a three-year study that focused on how children use media. Their findings suggest that younger children tend to use mobile devices for passive media consumption. We have identified two possibilities for exploring more social ways of engaging with digital media: creative play with stories and using media with tangible objects. In this project, we explored how the crossover of both tangible interaction and storytelling could scaffol...

  19. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Full Text Available Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital marking is the process of embedding into the protected object a digital watermark that is invisible to the human eye. However, a rather large number of harmful manipulations can destroy a watermark embedded in a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. This article presents a new algorithm that modifies Elham's algorithm. The algorithm for watermarking still images embeds the watermark into frequency coefficients of the discrete Hadamard transform of selected image blocks. Blocks are selected for embedding on the basis of a preset threshold on the entropy of their pixels. Low-frequency coefficients are chosen for embedding by comparing the values of the discrete cosine transform coefficients with a predetermined threshold that depends on the product of the embedded watermark coefficient and a change coefficient. The resistance of the new algorithm to JPEG compression, noise, filtering, color change, resizing and histogram equalization is analyzed in detail. The evaluation compares the watermark extracted from the damaged image with the embedded logo. The ability of the algorithm to embed a watermark with a minimum level of image distortion is also analyzed. It is established that, compared with the original Elham algorithm, the new algorithm shows full resistance to JPEG compression, as well as improved resistance to noise, brightness change and histogram equalization. The developed algorithm can be used for copyright protection of still images. Further studies will be used to study the
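The entropy-based block selection the abstract describes can be sketched as follows; the block size, intensity range and threshold are illustrative assumptions, and the Hadamard-domain embedding step itself is not shown.

```python
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of the pixel-intensity histogram of one block."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist[hist > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def select_blocks(image, block=8, threshold=3.0):
    """Return (row, col) origins of blocks entropic enough to hide a mark in."""
    h, w = image.shape
    chosen = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if block_entropy(image[r:r + block, c:c + block]) >= threshold:
                chosen.append((r, c))
    return chosen

rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:, 8:] = rng.integers(0, 256, (16, 8))   # right half: textured, high entropy
print(select_blocks(img))                    # → [(0, 8), (8, 8)]
```

Flat (low-entropy) blocks are skipped because any change there is easy to see, while textured blocks can absorb the watermark coefficients invisibly.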

  20. Comparison of analytic and iterative digital tomosynthesis reconstructions for thin slab objects

    Science.gov (United States)

    Yun, J.; Kim, D. W.; Ha, S.; Kim, H. K.

    2017-11-01

    For digital x-ray tomosynthesis of thin slab objects, we compare the tomographic imaging performances obtained from the filtered backprojection (FBP) and simultaneous algebraic reconstruction technique (SART) algorithms. The imaging performance includes the in-plane modulation transfer function (MTF), the signal difference-to-noise ratio (SDNR), and the out-of-plane blur artifact, or artifact-spread function (ASF). The MTF is measured using a thin tungsten-wire phantom, and the SDNR and the ASF are measured using a thin aluminum-disc phantom embedded in a plastic cylinder. The FBP shows better MTF performance than the SART. On the contrary, the SART outperforms the FBP with regard to the SDNR and ASF performances. Detailed experimental results and their analysis are described in this paper. For more appropriate use of the digital tomosynthesis technique, this study suggests choosing a reconstruction algorithm suited to the application-specific purpose.

  1. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of the restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition additionally assumes an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. A rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees recovery, with high probability, of s scatterers with n = O(s²) random sampling and incident directions and a sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC guarantees recovery, with high probability, of s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization, both BPDN and MUSIC guarantee, with high probability, exact identification of the frequencies of random signals with n = O(s) sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals, with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while
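The core of the MUSIC algorithm discussed above — locating sources where steering vectors are nearly orthogonal to the noise subspace — can be sketched for the 1D spectral estimation case; the geometry, noise level and frequency grid are illustrative assumptions.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, grid):
    """MUSIC pseudospectrum: peaks where the steering vector is nearly
    orthogonal to the noise subspace of the sample covariance matrix."""
    m = snapshots.shape[0]                       # samples per snapshot
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, _, vh = np.linalg.svd(R)                  # rows of vh: eigenvectors^H
    steer = np.exp(2j * np.pi * np.outer(np.arange(m), grid))
    proj = np.linalg.norm(vh[n_sources:] @ steer, axis=0)   # noise-subspace fit
    return 1.0 / (proj ** 2 + 1e-12)

rng = np.random.default_rng(2)
m, snaps = 16, 64
freqs = np.array([0.12, 0.31])                   # true (normalized) frequencies
t = np.arange(m).reshape(-1, 1)
amp = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
X = np.exp(2j * np.pi * t * freqs) @ amp + 0.01 * rng.normal(size=(m, snaps))
grid = np.linspace(0, 0.5, 501)
spec = music_spectrum(X, 2, grid)                # sharp peaks near 0.12 and 0.31
```

The abundant independent snapshots here are exactly what the abstract notes MUSIC needs; with a single realization, the BPDN route would be the preferred alternative.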

  2. Multi-Objective Optimization of Grillages Applying the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2012-01-01

    Full Text Available The article analyzes the optimization of grillage-type foundations, seeking the least possible reactive forces in the poles for a given number of poles and the least possible absolute values of bending moments in the connecting beams of the grillage. Therefore, we suggest using a compromise objective function (to be minimized) that consists of the maximum reactive force arising in all poles and the maximum absolute bending moment in the connecting beams, with given weights on both components. The design variables of the task are the pole positions under the connecting beams. The optimization task is solved by applying an algorithm supplied with all the initial data of the problem. Reactive forces and bending moments are calculated using an original program based on the finite element method. This program is integrated into the optimization algorithm on the “black-box” principle: the “black-box” finite element program sends back the corresponding value of the objective function. Numerical experiments revealed the optimal number of points at which to compute bending moments. The obtained results show a certain ratio of weights in the objective function at which the contributions of reactive forces and bending moments to the objective function are equivalent. This solution can serve as a pilot project for more detailed designs. Article in Lithuanian
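The compromise objective described above can be illustrated with a small sketch; the reaction and moment values stand in for the output of the "black-box" finite element program and are purely hypothetical, as are the equal weights.

```python
def compromise_objective(reactions, moments, w_r=0.5, w_m=0.5):
    """Weighted compromise (to be minimized) of the largest pole reaction and
    the largest absolute bending moment in the connecting beams."""
    r_max = max(abs(r) for r in reactions)
    m_max = max(abs(m) for m in moments)
    return w_r * r_max + w_m * m_max

# Two candidate pole layouts, with reactions/moments from a hypothetical FEM run:
layout_a = compromise_objective([120.0, 95.0, 110.0], [14.0, -18.0, 9.0])
layout_b = compromise_objective([150.0, 60.0, 80.0], [10.0, -12.0, 8.0])
best = min(("A", layout_a), ("B", layout_b), key=lambda t: t[1])  # → ("A", 69.0)
```

In the paper's setting, an optimizer proposes pole positions, the FEM "black box" returns reactions and moments, and this scalar is what the optimizer compares.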

  3. The Inclusion Potential of Student Production of Digital Learning Objects

    DEFF Research Database (Denmark)

    Levinsen, Karin Tweddell; Sørensen, Birgitte Holm

    2016-01-01

    This account of the inclusion potential of students’ digital production is based on the large-scale research and development project Students’ Digital Production and Students as Learning Designers (2013–2015), funded by the Danish Ministry of Education. The target groups were primary and lower......-designed framework that accommodates and empowers students’ agency. The Danish parliament passed the Law of Inclusion in 2012 with the objective that by 2015, 96% of all students would be included in normal classes. Inclusion was not part of the initial research agenda, but this changed unexpectedly during...... the project. Specifically, students who did not participate or participated only sporadically in everyday school activities at the beginning of the project adopted new positions as participants and agents. We understand these changes as inclusive processes initiated by the combination of teacher...

  4. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval in a database containing pulmonary CT imagery, and experimental results are provided.
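The signature-matching idea can be sketched with a simple stand-in: a normalized gray-level histogram as the global signature and a normalized L1 distance between the densities. CANDID's actual signatures describe texture, shape or color, and its distance measure may differ; everything below is illustrative.

```python
import numpy as np

def signature(image, bins=32):
    """Global gray-level histogram, normalized to a probability density."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def distance(p, q):
    """Normalized L1 distance between densities (0 = identical, 1 = disjoint)."""
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(3)
target = rng.integers(0, 128, (64, 64))          # darker example image
similar = rng.integers(0, 128, (64, 64))         # same intensity range
different = rng.integers(128, 256, (64, 64))     # brighter image
d_sim = distance(signature(target), signature(similar))
d_diff = distance(signature(target), signature(different))
assert d_sim < d_diff                            # this ranking drives retrieval
```

Query-by-example retrieval then amounts to sorting the database by this distance to the example image's signature.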

  5. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.

  6. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.

  7. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    Science.gov (United States)

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
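The ART family of iterative methods the simulator implements can be illustrated by a generic Kaczmarz sweep on a toy system; the projection matrix and "phantom" below are contrived for illustration (the simulator's C++ code is not shown), and the TV-regularization term of ART+TV is omitted.

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps): project the
    current estimate onto the hyperplane of each ray equation in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy 2x2 "phantom" observed along row, column and diagonal rays.
phantom = np.array([1.0, 0.0, 0.0, 2.0])        # flattened 2x2 image
A = np.array([[1, 1, 0, 0],                     # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                     # column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)       # main diagonal
b = A @ phantom                                 # noiseless projections
recon = art(A, b)                               # converges to the phantom
```

In real DBT the limited angular interval makes the system severely underdetermined, which is why regularizers such as TV are added on top of plain ART.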

  8. Object-Oriented Wavelet-Layered Digital Watermarking Technique

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-yun; YU Jue-bang; LI Ming-yu

    2005-01-01

    In this paper, an object-oriented digital watermarking technique is proposed in the wavelet domain for still images. Because the human eye recognizes different regions of an image to different degrees, the scheme divides the image into regions that are of interest to human vision and regions that are not. Using the positional relationships among the multiresolution wavelet subbands and their differing visual sensitivity, the image is processed with a layered watermark embedding technique. Experimental results show that the proposed technique successfully survives image processing operations, additive noise and JPEG compression.

  9. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

    Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region-growing strategy is designed to produce homogeneous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure to locate real boundaries. Before merging, we define a combined color-texture histogram feature to describe superpixels and, meanwhile, propose a novel objectness feature to supervise the region-merging procedure for reliable segmentation. Both the color-texture histograms and the objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features provides the stopping criterion for the merging process. Experimental results on a popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared to other well-known segmentation algorithms.

  10. Object-oriented classification of drumlins from digital elevation models

    Science.gov (United States)

    Saha, Kakoli

    Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics, including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping drumlins. The proposed method represents an attempt to solve the problem by providing a generalized rule-set for mass extraction of drumlins. To check scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the bigger area as it was for the smaller test areas.

  11. Area collapse algorithm computing new curve of 2D geometric objects

    Science.gov (United States)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement, and up-to-date algorithms try to automate part of this process. The goal is to obtain a digital model, or additional information about the shape and topology, of input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of the input data, and it can be computed using algorithms such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects: for example, a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements on the input data are discussed. The author analyzes results based on the method of computing the ends of the output polygonal chains, and explores additional methods to improve the results. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of an embankment is described.

  12. Image noise reduction algorithm for digital subtraction angiography: clinical results.

    Science.gov (United States)

    Söderman, Michael; Holmin, Staffan; Andersson, Tommy; Palmgren, Charlotta; Babic, Draženko; Hoornaert, Bart

    2013-11-01

    To test the hypothesis that an image noise reduction algorithm designed for digital subtraction angiography (DSA) in interventional neuroradiology enables a reduction in the patient entrance dose by a factor of 4 while maintaining image quality. This clinical prospective study was approved by the local ethics committee, and all 20 adult patients provided informed consent. DSA was performed with the default reference DSA program, a quarter-dose DSA program with modified acquisition parameters (to reduce patient radiation dose exposure), and a real-time noise-reduction algorithm. Two consecutive biplane DSA data sets were acquired in each patient. The dose-area product (DAP) was calculated for each image and compared. A randomized, blinded, offline reading study was conducted to show noninferiority of the quarter-dose image sets. Overall, 40 samples per treatment group were necessary to acquire 80% power, which was calculated by using a one-sided α level of 2.5%. The mean DAP with the quarter-dose program was 25.3% ± 0.8 of that with the reference program. The median overall image quality scores with the reference program were 9, 13, and 12 for readers 1, 2, and 3, respectively. These scores increased slightly to 12, 15, and 12, respectively, with the quarter-dose program imaging chain. In DSA, a change in technique factors combined with a real-time noise-reduction algorithm will reduce the patient entrance dose by 75%, without a loss of image quality. RSNA, 2013

  13. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    International Nuclear Information System (INIS)

    Cordero, Jose; Garzon Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful medical treatment. Mammography is an imaging modality effective for early diagnosis of abnormalities, in which a medical image of the mammary gland is obtained with low-dose X-rays. It allows a tumor or circumscribed mass to be detected two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in breast cancer mortality. In this paper, three hybrid algorithms for circumscribed mass detection in digitized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in processing mammographic images. Afterwards, shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, with the characteristic vector for the classifier constructed from a few measurements. Finally, the implemented algorithms were evaluated by ROC curves, with 40 images taken for the test: 20 normal images and 20 images with circumscribed lesions. The advantages and disadvantages of each algorithm in correctly detecting a lesion are discussed.

  14. [An automatic color correction algorithm for digital human body sections].

    Science.gov (United States)

    Zhuge, Bin; Zhou, He-qin; Tang, Lei; Lang, Wen-hui; Feng, Huan-qing

    2005-06-01

    To find a new approach to improve the uniformity of color parameters in the image data of serial sections of the human body, an automatic color correction algorithm in the RGB color space based on a standard CMYK color chart was proposed. The gray part of the color chart was automatically segmented from every original image, and fifteen gray values were obtained. The transformation function between the measured gray values and the standard gray values of the color chart, and the corresponding lookup table, were obtained. In RGB color space, the colors of the images were corrected according to the lookup table. The color of the original Chinese Digital Human Girl No. 1 (CDH-G1) database was corrected using the algorithm with Matlab 6.5, taking 13.475 s per picture on a personal computer. Using the algorithm, the color of the original database is corrected automatically and quickly, and the uniformity of the color parameters of the corrected dataset is improved.
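The lookup-table correction described above can be sketched as follows: a transfer function is interpolated from the fifteen gray patches and expanded into a 256-entry LUT applied per pixel. The measured patch values here are hypothetical (a uniform 20% darkening), not those of the CDH-G1 dataset.

```python
import numpy as np

def build_lut(measured, standard):
    """256-entry lookup table mapping every possible intensity to its corrected
    value, interpolated from the gray patches of the color chart."""
    lut = np.interp(np.arange(256), measured, standard)
    return np.clip(np.rint(lut), 0, 255).astype(np.uint8)

# Fifteen gray patches; the imaging chain renders them 20% too dark (hypothetical).
standard = np.linspace(0, 255, 15)               # reference gray values
measured = standard * 0.8                        # values measured in the image
lut = build_lut(measured, standard)

img = np.array([[40, 80], [120, 200]], dtype=np.uint8)
corrected = lut[img]                             # per-pixel LUT application
# corrected is [[50, 100], [150, 250]]: the 20% darkening is undone
```

In an RGB workflow the same table (or one per channel) is indexed by every pixel of every section, which is why the correction is fast enough to batch over a whole dataset.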

  15. MACHINE LEARNING METHODS IN DIGITAL AGRICULTURE: ALGORITHMS AND CASES

    Directory of Open Access Journals (Sweden)

    Aleksandr Vasilyevich Koshkarov

    2018-05-01

    Full Text Available Ensuring food security is a major challenge in many countries. With a growing global population, the issues of improving the efficiency of agriculture have become most relevant. Farmers are looking for new ways to increase yields, and governments of different countries are developing new programs to support agriculture. This contributes to a more active implementation of digital technologies in agriculture, helping farmers to make better decisions, increase yields and take care of the environment. The central point is the collection and analysis of data. In the agriculture industry, data can be collected from different sources and may contain useful patterns that identify potential problems or opportunities. Data should be analyzed using machine learning algorithms to extract useful insights. Such methods of precision farming allow the farmer to monitor individual parts of the field, optimize the consumption of water and chemicals, and identify problems quickly. Purpose: to review the machine learning algorithms used for data analysis in agriculture. Methodology: a review of the relevant literature and a survey of farmers. Results: relevant machine learning algorithms for the analysis of agricultural data were identified at various levels: soil analysis (soil assessment, soil classification, soil fertility prediction), weather forecasting (simulation of climate change, temperature and precipitation prediction), and vegetation analysis (weed identification, vegetation classification, plant disease identification, crop forecasting). Practical implications: agriculture, crop production.

  16. A generic algorithm for constructing hierarchical representations of geometric objects

    International Nuclear Information System (INIS)

    Xavier, P.G.

    1995-01-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometric objects and scenes in motion planning, collision avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N² log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives.

  17. Solution of the problem of superposing image and digital map for detection of new objects

    Science.gov (United States)

    Rizaev, I. S.; Miftakhutdinov, D. I.; Takhavova, E. G.

    2018-01-01

    The problem of superposing a map of the terrain onto an image of the terrain is considered; the image may be represented in different frequency bands. Further analysis of the results of collating the digital map with the image of the corresponding terrain is described. An approach to detecting differences between the information represented on the digital map and the information in the image of the corresponding area is also offered. An algorithm is proposed for calculating the brightness values of the converted image area on the original picture; the calculation uses information about the navigation parameters and information from arranged benchmarks. Experiments were performed to solve the posed problem, and their results are shown in this paper. The presented algorithms are applicable to a ground complex for remote sensing data, to assess differences between resulting images and accurate geopositional data. They are also suitable for detecting new objects in the image, based on analysis of the match between the digital map and the image of the corresponding locality.

  18. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.

  19. e-Labs and Work Objects: Towards Digital Health Economies

    Science.gov (United States)

    Ainsworth, John D.; Buchan, Iain E.

    The optimal provision of healthcare and public health services requires the synthesis of evidence from multiple disciplines. It is necessary to understand the genetic, environmental, behavioural and social determinants of disease and health-related states; to balance the effectiveness of interventions with their costs; to ensure the maximum safety and acceptability of interventions; and to provide fair access to care services for given populations. Ever expanding databases of knowledge and local health information, and the ability to employ computationally expensive methods, promises much for decisions to be both supported by best evidence and locally relevant. This promise will, however, not be realised without providing health professionals with the tools to make sense of this information rich environment and to collaborate across disciplines. We propose, as a solution to this problem, the e-Lab and Work Objects model as a sense-making platform for digital health economies - bringing together data, methods and people for timely health intelligence.

  20. Particle filters for object tracking: enhanced algorithm and efficient implementations

    International Nuclear Information System (INIS)

    Abd El-Halym, H.A.

    2010-01-01

    Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful, post-Kalman-filter estimation tool that provides a general framework for the estimation of nonlinear/non-Gaussian dynamic systems. Particle filters were advanced for building robust object trackers capable of operating under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle to its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed that depends on multiple functions: a position likelihood, a gray-level intensity likelihood and a similarity likelihood. We also modified the PF into a robust estimator to overcome the well-known sample impoverishment problem of the PF. This modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated. Its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, as well as an edge tracker. The experimental results demonstrated the superior performance of the proposed tracker in terms of accuracy, robustness, and occlusion handling compared with the other methods. Efficient novel hardware architectures of the Sample Importance Resample Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step, followed by a sequential re
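The re-excitation idea — re-scattering particles whose weights fall below a memorized threshold — can be sketched inside a basic bootstrap particle filter for a 1D random-walk target. The motion and measurement models, and the threshold value, are illustrative assumptions rather than the thesis's actual EPF likelihood model.

```python
import numpy as np

rng = np.random.default_rng(4)

def epf_track(observations, n=500, proc_std=1.0, obs_std=2.0, floor=1e-4):
    """Bootstrap particle filter for a 1D random-walk target, with a simplified
    version of the re-excitation step: particles whose normalized weight falls
    below `floor` are re-scattered around the current estimate."""
    particles = rng.normal(observations[0], obs_std, n)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0, proc_std, n)    # motion model
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # likelihood
        w /= w.sum()
        est = float(w @ particles)
        estimates.append(est)
        low = w < floor                                       # re-excitation
        particles[low] = est + rng.normal(0, obs_std, int(low.sum()))
        w[low] = 1.0 / n
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]          # resampling
    return np.array(estimates)

truth = np.cumsum(rng.normal(0, 1.0, 100))                    # hidden trajectory
obs = truth + rng.normal(0, 2.0, 100)                         # noisy measurements
est = epf_track(obs)
```

Re-injecting the near-dead particles before resampling keeps the particle set diverse, which is the sample-impoverishment problem the thesis's modification targets.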

  1. Harmonized Spaces, Dissonant Objects, Inventing Europe? Mobilizing Digital Heritage

    NARCIS (Netherlands)

    Badenoch, A.W.|info:eu-repo/dai/nl/298400715

    2011-01-01

    Technology, particularly digitization and the online availability of cultural heritage collections, provides new possibilities for creating new forms of ‘European cultural heritage’. This essay analyzes the emerging sphere of European digital heritage as a project of technological harmonization.

  2. Objective and Subjective Assessment of Digital Pathology Image Quality

    Directory of Open Access Journals (Sweden)

    Prarthana Shrestha

    2015-03-01

    Full Text Available The quality of an image produced by the Whole Slide Imaging (WSI scanners is of critical importance for using the image in clinical diagnosis. Therefore, it is very important to monitor and ensure the quality of images. Since subjective image quality assessments by pathologists are very time-consuming, expensive and difficult to reproduce, we propose a method for objective assessment based on clinically relevant and perceptual image parameters: sharpness, contrast, brightness, uniform illumination and color separation; derived from a survey of pathologists. We developed techniques to quantify the parameters based on content-dependent absolute pixel performance and to manipulate the parameters in a predefined range resulting in images with content-independent relative quality measures. The method does not require a prior reference model. A subjective assessment of the image quality is performed involving 69 pathologists and 372 images (including 12 optimal quality images and their distorted versions per parameter at 6 different levels. To address the inter-reader variability, a representative rating is determined as a one-tailed 95% confidence interval of the mean rating. The results of the subjective assessment support the validity of the proposed objective image quality assessment method to model the readers’ perception of image quality. The subjective assessment also provides thresholds for determining the acceptable level of objective quality per parameter. The images for both the subjective and objective quality assessment are based on the HercepTestTM slides scanned by the Philips Ultra Fast Scanners, developed at Philips Digital Pathology Solutions. However, the method is applicable also to other types of slides and scanners.

  3. Practical algorithms for simulation and reconstruction of digital in-line holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2015-03-20

    Here we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase-shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.

  4. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    International Nuclear Information System (INIS)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K

    2016-01-01

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MATLAB v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with similar image curvature and divergence parameters compared to the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work will focus on automatically placing the initial seed points.

  5. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    Energy Technology Data Exchange (ETDEWEB)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)]

    2016-06-15

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MATLAB v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with similar image curvature and divergence parameters compared to the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work will focus on automatically placing the initial seed points.
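
    The seed-based local search can be illustrated with a much-simplified sketch: plain breadth-first region growing on intensity similarity from a manual seed. The paper's curvature, divergence and Hessian criteria are replaced here by a simple intensity tolerance, and the "CT slice" is synthetic.

```python
import numpy as np
from collections import deque

def grow_from_seed(image, seed, tol=50.0):
    """Breadth-first region growing: collect pixels whose intensity lies
    within `tol` of the seed pixel, walking 4-connected neighbours."""
    h, w = image.shape
    seed_val = float(image[seed])
    visited = np.zeros_like(image, dtype=bool)
    region, queue = [], deque([seed])
    visited[seed] = True
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx] \
                    and abs(float(image[ny, nx]) - seed_val) <= tol:
                visited[ny, nx] = True
                queue.append((ny, nx))
    return region

# Synthetic slice: a bright vertical "catheter" column on a dark background.
img = np.zeros((20, 20))
img[2:18, 10] = 1000.0
path = grow_from_seed(img, (2, 10))
```

    Each newly accepted pixel becomes a fresh seed, mirroring the paper's strategy of propagating the digitization along the lumen from a single manual click.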

  6. Designing optimal degradation tests via multi-objective genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Cipollone, Maurizio

    2003-01-01

    The experimental determination of the failure time probability distribution of highly reliable components, such as those used in nuclear and aerospace applications, is intrinsically difficult due to the lack, or scarce significance, of failure data which can be collected during the relatively short test periods. A possibility to overcome this difficulty is to resort to so-called degradation tests, in which measurements of the components' degradation are used to infer the failure time distribution. To design such tests, parameters like the number of tests to be run, and their frequency and duration, must be set so as to obtain an accurate estimate of the distribution statistics under the existing budget limitations. The resulting optimization problem is non-linear. In this work, we propose a method, based on multi-objective genetic algorithms, for determining the values of the test parameters which optimize both the accuracy in the estimate of the failure time distribution percentiles and the testing costs. The method has been validated on a degradation model from the literature.
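
    The multi-objective selection step rests on Pareto dominance. The sketch below builds the nondominated front for a toy test-design space; the two objective functions are invented stand-ins for the paper's estimation-accuracy and testing-cost objectives.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy degradation-test design space: n parallel tests, each of duration d.
# Objective 1: statistical-error proxy (shrinks with more/longer testing);
# Objective 2: testing cost (grows with more/longer testing). Both minimized.
def evaluate(design):
    n, d = design
    return (1.0 / (n * d), 10.0 * n + 2.0 * d)

designs = [(n, d) for n in range(1, 6) for d in range(1, 6)]
front = pareto_front([evaluate(x) for x in designs])
```

    A multi-objective genetic algorithm such as the one proposed evolves a population toward this front rather than enumerating it, but dominance checking of this kind is the fitness primitive it relies on.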

  7. Multi Objective Optimization Using Genetic Algorithm of a Pneumatic Connector

    Science.gov (United States)

    Salaam, HA; Taha, Zahari; Ya, TMYS Tuan

    2018-03-01

    The concept of sustainability was first introduced by Dr Gro Harlem Brundtland in the 1980s, promoting the need to preserve today's natural environment for the sake of future generations. Based on this concept, John Elkington proposed an approach to measuring sustainability known as the Triple Bottom Line (TBL). Three evaluation criteria are involved in the TBL approach, namely economics, environmental integrity and social equity. In the manufacturing industry, manufacturing costs measure the economic sustainability of a company in the long term. Environmental integrity is a measure of the impact of manufacturing activities on the environment. Social equity is complicated to evaluate, but when the focus is at the production-floor level, production operator health can be considered. In this paper, the TBL approach is applied to the manufacturing of a pneumatic nipple hose. The evaluation criteria used are manufacturing cost, environmental impact, ergonomic impact and the energy used for manufacturing. This study involves multi-objective optimization, using a genetic algorithm, of several possible alternatives for the material used in the manufacturing of the pneumatic nipple.

  8. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    Full Text Available We describe C++ classes that simplify development of adaptive mesh refinement (AMR algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  9. Evolution of NASA's Earth Science Digital Object Identifier Registration System

    Science.gov (United States)

    Wanchoo, Lalit; James, Nathan

    2017-01-01

    NASA's Earth Science Data and Information System (ESDIS) Project has implemented a fully automated system for assigning Digital Object Identifiers (DOIs) to Earth science data products being managed by its network of 12 distributed active archive centers (DAACs). A key factor in the successful evolution of the DOI registration system over the last 7 years has been the incorporation of community input from three focus groups under NASA's Earth Science Data System Working Group (ESDSWG). These groups were largely composed of DOI submitters and data curators from the 12 data centers serving the user communities of various science disciplines. The suggestions from these groups were formulated into recommendations for ESDIS consideration and implementation. The ESDIS DOI registration system has evolved to be fully functional, with over 5,000 publicly accessible DOIs and over 200 DOIs held in reserve status until the information required for registration is obtained. The goal is to assign DOIs to the entire set of 8000+ data collections under ESDIS management via its network of discipline-oriented data centers. DOIs make it easier for researchers to discover and use Earth science data, and they enable users to provide valid citations for the data they use in research. Also, for a researcher wishing to reproduce the results presented in a science publication, the DOI can be used to locate the exact data or data products being cited.

  10. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Full Text Available Introduction: This work presents a new approach to digital curation from a Big Data perspective. Objective: The objective is to propose selection and evaluation techniques for digital curation of digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be carried out either by human curators or by software agents. Results: As a result, a model is proposed for searching, processing, evaluating and selecting digital objects to be processed by digital curation. Conclusions: It is possible to use Big Data environments as a source of information resources for digital curation; moreover, Big Data techniques and tools can support the search for and selection of information resources in digital curation.

  11. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    Image or video inpainting is the process/art of retrieving missing portions of an image without introducing undesirable artifacts that are undetectable by an ordinary observer. An image/video can be damaged due to a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, loss of data when transmitted through a channel, etc. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, dates, text, time stamps, etc.), image synthesis (texture synthesis), completing panoramas, image coding, wireless transmission (recovery of missing blocks), digital culture protection, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified into two key groups: global and local methods. Global methods are used for generating large image regions from samples, while local methods are used for filling in small image gaps. Each method has its own advantages and limitations; for example, global inpainting methods perform well on textured image retrieval, whereas classical local methods perform poorly. In addition, some of the techniques are computationally intensive, exceeding the capabilities of most currently used mobile devices. In general, existing inpainting algorithms are not suitable for the wireless environment. This paper presents a new and efficient scheme that combines the advantages of both local and global methods in a single algorithm. In particular, it introduces a blind inpainting model that solves the above problems by adaptively selecting the support area for the inpainting scheme. The proposed method is applied to various challenging image restoration tasks, including recovering old photos, recovering missing data in real and synthetic images, and recovering specular reflections in endoscopic images. A number of computer simulations demonstrate the effectiveness of our scheme and illustrate its main properties.

  12. Developing Data Citations from Digital Object Identifier Metadata

    Science.gov (United States)

    Wanchoo, L.; James, N.

    2015-12-01

    NASA's Earth Science Data and Information System (ESDIS) Project has been processing information for the registration of Digital Object Identifiers (DOIs) for the last five years, with an automated system in operation for the last two years. The ESDIS DOI registration system has registered over 2000 DOIs, with over 1000 DOIs held in reserve until all required information has been collected. By working towards the goal of assigning DOIs to the 8000+ data collections under its management, ESDIS has taken the first step towards facilitating the use of data citations with those products. When registering DOIs, ESDIS requires that certain DOI elements be collected for the DOI landing page, as recommended by NASA's Earth Science Data System Working Group (ESDSWG). The landing page provides sufficient information to 1) identify NASA data as referenced in a science publication, 2) credit data creators and distributors, and 3) access the data itself, enabling the traceability and reproducibility of the data. Moreover, the required elements for this DOI landing page are also the core required elements for forming an Earth science data citation. Data citations are receiving significant attention from the scientific community and data centers alike, so to encourage the citing of Earth science data products, each product DOI landing page displays a sample data citation and makes the required citation elements available to DataCite for use in its data citation generation tool. This paper describes that process. ESDIS data centers are constantly developing technologies to better serve the Earth science user community, such as the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI), the Land and Atmospheric Near Real-Time Capability for EOS (LANCE), and advanced tools that support virtual data collections and virtual data products. These all provide easier access to data and make possible the creation of data products with user-specified parameters.

  13. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    International Nuclear Information System (INIS)

    Choi, Seok Yoon; Kim, Chang Soo

    2009-01-01

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, namely the number of clusters and the number, size, extent and morphologic shape of the microcalcifications, as well as the presence of a mass, were reviewed, and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and breast incision. In screening for breast cancer, mammography is typically used to view the internal organization. Clustering microcalcifications on mammography represent an important feature of breast masses, especially of intraductal carcinoma. Because microcalcification has a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinician in predicting breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, each cluster in the distribution of microcalcifications can be measured for the number of calcifications and the length of the cluster, which can also be used as indicators for the primary diagnosis in automatically diagnosing breast cancer.
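
    The first two steps, DoG filtering and adaptive thresholding, can be sketched as follows. The kernel sizes, the mean-plus-k-sigma threshold rule and the synthetic image are assumptions for the demo, and the EM clustering stage is omitted.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using a truncated, normalized 1-D kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def detect_spots(img, sigma_small=1.0, sigma_large=3.0, n_std=3.0):
    """Difference-of-Gaussians band-pass to enhance point-like structures,
    then an adaptive threshold at mean + n_std * std of the DoG response."""
    dog = gaussian_blur(img, sigma_small) - gaussian_blur(img, sigma_large)
    thresh = dog.mean() + n_std * dog.std()
    return dog > thresh

# Synthetic patch: flat background plus two point-like "calcifications".
img = np.zeros((40, 40))
img[10, 10] = img[30, 25] = 100.0
mask = detect_spots(img)
```

    The resulting binary mask is the kind of candidate map that a subsequent EM step would group into clusters and measure.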

  14. Study of key technology of ghost imaging via compressive sensing for a phase object based on phase-shifting digital holography

    International Nuclear Information System (INIS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2015-01-01

    In this article, a compressive sensing algorithm is used to improve the imaging resolution and to realize ghost imaging of a phase object, based on a theoretical analysis of the lensless Fourier imaging of the ghost-imaging algorithm based on phase-shifting digital holography. The algorithm of ghost imaging via compressive sensing based on phase-shifting digital holography uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total intensity of the differential interference light. An experimental platform is built based on software simulation, and the experimental results show that this algorithm can obtain a high-resolution phase distribution figure of the phase object. With the same number of samples, the phase clarity of the distribution obtained via compressive sensing is higher than that obtained by the ghost-imaging algorithm based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging and obtains the phase distribution of the phase object. (letter)

  15. Bearing and Range Estimation Algorithm for Buried Object in Underwater Acoustics

    Directory of Open Access Journals (Sweden)

    Dong Han

    2009-01-01

    (DOA of objects and object-sensor distances, is used in the MUSIC algorithm instead of the classical model. The influence of the depth of the buried objects is discussed. Finally, numerical results are given for the case of buried cylindrical shells.

  16. Study on hybrid multi-objective optimization algorithm for inverse treatment planning of radiation therapy

    International Nuclear Information System (INIS)

    Li Guoli; Song Gang; Wu Yican

    2007-01-01

    Inverse treatment planning for radiation therapy is a multi-objective optimization process. A hybrid multi-objective optimization algorithm is studied by combining simulated annealing (SA) and the genetic algorithm (GA). Test functions are used to analyze the efficiency of the algorithms. The hybrid multi-objective SA algorithm, whose displacement is based on the evolutionary strategies of the GA, crossover and mutation, is implemented in inverse planning of external-beam radiation therapy using two kinds of objective functions, namely one based on the average dose distribution and one based on hybrid dose-volume constraints. The test calculations demonstrate that excellent convergence speed can be achieved. (authors)

  17. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    Science.gov (United States)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    A calibration target was designed that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital camera distortion model to the traditional DLT algorithm; after repeated iteration, it can solve for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. The results prove that this method is reliable.
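
    The core DLT step, without the added distortion model, can be sketched as a homogeneous least-squares problem solved by SVD; the synthetic projection matrix and point set below are assumed for the demo.

```python
import numpy as np

def dlt_projection(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from 3-D <-> 2-D correspondences by
    solving the homogeneous DLT system A p = 0 with SVD (needs >= 6 points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)   # right-singular vector of smallest sigma

def project(P, pt):
    x = P @ np.append(pt, 1.0)
    return x[:2] / x[2]

# Assumed synthetic camera: focal 800 px, principal point (320, 240), z offset 4.
P_true = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 4.0]])
world = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.4, -0.2, 1.5),
         (0.0, 0.5, 2.5), (-0.2, -0.4, 1.2), (0.3, 0.3, 3.0)]
image = [project(P_true, w) for w in world]
P_est = dlt_projection(world, image)
err = max(np.linalg.norm(project(P_est, w) - i) for w, i in zip(world, image))
```

    The paper's extension iterates this linear solve together with a lens-distortion model; the linear core above is the part that stays the same.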

  18. Use of trapezoidal shaping algorithm in the digital multi-channel system

    International Nuclear Information System (INIS)

    Wang Jihong; Wang Lianghou; Fang Zongliang

    2011-01-01

    One kind of digital filter technology, the trapezoidal shaping algorithm, is discussed based on the practical needs of studying digital multi-channel analyzers. First, the feasibility of the algorithm is demonstrated by theoretical analysis; second, the process of the algorithm is simplified; third, it is simulated with MATLAB; finally, the algorithm is applied to measured data. The test results indicate that the trapezoidal shaping algorithm meets the needs of digital multi-channel shaping very well. The best filter can be obtained by setting parameters, owing to the flexibility of the digital multi-channel analyzer. (authors)
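
    A common formulation of the trapezoidal shaper, the accumulated four-term difference d(n) = v(n) - v(n-k) - v(n-l) + v(n-k-l), is sketched below for an ideal step input; the pole-zero correction needed for exponentially decaying preamplifier pulses is omitted in this sketch.

```python
import numpy as np

def trapezoidal_shape(v, k=4, l=7):
    """Trapezoidal shaper: accumulate d(n) = v(n) - v(n-k) - v(n-l) + v(n-k-l).
    A unit step becomes a trapezoid with rise/fall of k samples and a flat top
    of l - k samples (plus the peak sample)."""
    v = np.asarray(v, dtype=float)
    vp = np.pad(v, (k + l, 0))                 # zero history before the record
    n = np.arange(k + l, k + l + len(v))
    d = vp[n] - vp[n - k] - vp[n - l] + vp[n - k - l]
    return np.cumsum(d)

# Idealized detector step at sample 5, shaped into a trapezoid.
step = np.zeros(25)
step[5:] = 1.0
out = trapezoidal_shape(step)
```

    The rise time k and flat-top length l - k are exactly the parameters one tunes to trade noise filtering against ballistic-deficit and pile-up behavior.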

  19. An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Rafał Dreżewski

    2017-08-01

    Full Text Available Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose the agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally to the genetic algorithm, co-evolutionary algorithm and a more classical approach—the trend-following algorithm. During the experiments historical data from the Warsaw Stock Exchange is used in order to assess the performance of the compared algorithms. Finally, we draw some conclusions from these experiments, showing the strong and weak points of all the techniques.

  20. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability.

    Science.gov (United States)

    Abdellah, Marwan; Eldeib, Ayman; Owis, Mohamed I

    2015-01-01

    This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the giant computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. The benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs with resolutions of 2048^2 and 4096^2 at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.

  1. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm to track objects on video sequences made by a non-stabilized camcorder is presented. The algorithm is based on comparison of local correlations of the object image and regions of the video frames. The algorithm is implemented in the CUDA programming technology. The application of CUDA allowed real-time execution of the algorithm to be attained. To improve its precision and stability, a robust version of the Kalman filter has been incorporated into the flowchart. Tests showed the applicability of the algorithm to practical object tracking.

  2. RSA Algorithm. Features of the C # Object Programming Implementation

    Directory of Open Access Journals (Sweden)

    Elena V. Staver

    2012-08-01

    Full Text Available Public-key algorithms depend on an encryption key and a decryption key connected to it. For public-key data encryption, the text is divided into blocks, each of which is represented as a number. To decrypt the message, the secret key is used.
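
    A textbook-scale sketch of that scheme, with deliberately tiny assumed parameters (insecure, for illustration only): each character is a block, encrypted with the public exponent and recovered with the private one.

```python
def rsa_demo():
    # Tiny textbook parameters (assumed for illustration only -- far too
    # small to be secure): primes p, q; modulus n = p*q; phi = (p-1)*(q-1).
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                  # private exponent: e*d = 1 (mod phi)
    msg = "HI"
    blocks = [ord(c) for c in msg]       # each block must be < n
    cipher = [pow(m, e, n) for m in blocks]             # encrypt with public key
    plain = "".join(chr(pow(c, d, n)) for c in cipher)  # decrypt with secret key
    return cipher, plain
```

    The three-argument `pow(e, -1, phi)` form (Python 3.8+) computes the modular inverse that links the two keys, which is exactly the "connection" the abstract refers to.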

  3. Creating an anthropomorphic digital MR phantom—an extensible tool for comparing and evaluating quantitative imaging algorithms

    International Nuclear Information System (INIS)

    Bosca, Ryan J; Jackson, Edward F

    2016-01-01

    Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland–Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms. (paper)
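
    The GKM referred to here is commonly written in the standard Tofts form, C_t(t) = Ktrans * (C_p convolved with exp(-kep*t)) with kep = Ktrans/ve. The sketch below synthesizes a tissue concentration curve from an assumed mono-exponential vascular input function; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts general kinetic model: tissue concentration as
    Ktrans times the convolution of the plasma input cp(t) with exp(-kep*t),
    where kep = Ktrans / ve. Discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt

t = np.linspace(0.0, 5.0, 501)           # minutes
cp = 5.0 * np.exp(-1.2 * t)              # assumed mono-exponential VIF (mM)
ct = tofts_concentration(t, cp, ktrans=0.25, ve=0.3)
```

    Evaluating this forward model voxel-by-voxel with spatially varying Ktrans and ve is what turns a labeled anatomical phantom into a synthetic DCE-MRI exam with heterogeneous enhancement.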

  4. Automated analysis of art object surfaces using time-averaged digital speckle pattern interferometry

    Science.gov (United States)

    Lukomski, Michal; Krzemien, Leszek

    2013-05-01

    Technical development and practical evaluation of a laboratory built, out-of-plane digital speckle pattern interferometer (DSPI) are reported. The instrument was used for non-invasive, non-contact detection and characterization of early-stage damage, like fracturing and layer separation, of painted objects of art. A fully automated algorithm was developed for recording and analysis of vibrating objects utilizing continuous-wave laser light. The algorithm uses direct, numerical fitting or Hilbert transformation for an independent, quantitative evaluation of the Bessel function at every point of the investigated surface. The procedure does not require phase modulation and thus can be implemented within any, even the simplest, DSPI apparatus. The proposed deformation analysis is fast and computationally inexpensive. Diagnosis of physical state of the surface of a panel painting attributed to Nicolaus Haberschrack (a late-mediaeval painter active in Krakow) from the collection of the National Museum in Krakow is presented as an example of an in situ application of the developed methodology. It has allowed the effectiveness of the deformation analysis to be evaluated for the surface of a real painting (heterogeneous colour and texture) in a conservation studio where vibration level was considerably higher than in the laboratory. It has been established that the methodology, which offers automatic analysis of the interferometric fringe patterns, has a considerable potential to facilitate and render more precise the condition surveys of works of art.

  5. A generalization of the preset count moving average algorithm for digital rate meters

    International Nuclear Information System (INIS)

    Arandjelovic, Vojislav; Koturovic, Aleksandar; Vukanovic, Radomir

    2002-01-01

    A generalized definition of the preset count moving average algorithm for digital rate meters has been introduced. The algorithm is based on the knowledge of time intervals between successive pulses in random-pulse sequences. The steady state and transient regimes of the algorithm have been characterized. A measure for statistical fluctuations of the successive measurement results has been introduced. The versatility of the generalized algorithm makes it suitable for application in the design of the software of modern measuring/control digital systems
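
    One plausible reading of the preset count moving average, the rate estimated as N divided by the sum of the last N inter-pulse time intervals, can be sketched as follows (the class name and interface are illustrative):

```python
from collections import deque

class PresetCountRateMeter:
    """Preset-count moving-average rate meter: the rate estimate is
    N / (sum of the last N inter-pulse intervals), updated on every pulse."""
    def __init__(self, preset_count):
        self.intervals = deque(maxlen=preset_count)

    def pulse(self, dt):
        """Register a pulse arriving `dt` seconds after the previous one;
        return the current rate estimate, or None until the buffer fills."""
        self.intervals.append(dt)
        if len(self.intervals) < self.intervals.maxlen:
            return None
        return len(self.intervals) / sum(self.intervals)

meter = PresetCountRateMeter(preset_count=4)
rates = [meter.pulse(0.1) for _ in range(8)]   # steady 10 pulses/s
```

    The preset count N is the single knob trading statistical fluctuation against response time, which is the steady-state/transient trade-off the abstract characterizes.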

  6. A satellite digital controller or 'play that PID tune again, Sam'. [Position, Integral, Derivative feedback control algorithm for design strategy]

    Science.gov (United States)

    Seltzer, S. M.

    1976-01-01

    The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
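As a rough illustration of the sampled-data control problem described above, here is a minimal Python sketch of a discrete PID loop stabilizing a rigid body modelled as a double integrator. The gains and sample rate here are chosen for illustration only; they are not the values produced by the paper's parameter-plane technique.

```python
def pid_step(e, state, kp, ki, kd, dt):
    """One sample of a discrete PID law: u = kp*e + ki*integral(e) + kd*de/dt."""
    integ, e_prev = state
    integ += e * dt                 # rectangular integration of the error
    deriv = (e - e_prev) / dt       # backward-difference derivative
    u = kp * e + ki * integ + kd * deriv
    return u, (integ, e)

# Rigid body acting in a plane, unit inertia: theta'' = u, Euler-discretized.
kp, ki, kd = 4.0, 0.5, 3.0          # illustrative gains (stable for this plant)
dt = 0.01                           # illustrative sample period
theta, omega = 1.0, 0.0             # start 1 rad away from the set point
state = (0.0, -theta)               # (integral, previous error), e = 0 - theta

for _ in range(5000):               # 50 s of simulated time
    e = 0.0 - theta
    u, state = pid_step(e, state, kp, ki, kd, dt)
    omega += u * dt
    theta += omega * dt
```

The point of the paper's parameter-plane method is precisely to choose (kp, ki, kd) and dt systematically rather than by trial, as done here.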

  7. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01

    …larger number of cores using the IBM QS22 Blade for handling higher video processing workloads (but at higher cost per core), low power consumption and… Cell/B.E. Blade processors, which have much more main memory but also higher power consumption. More detailed performance figures for HD and SD video…

  8. New methods for the investigation of phase objects by digital Speckle-pattern-interferometry

    International Nuclear Information System (INIS)

    Fliesser, W.

    1996-11-01

    This work shows new possibilities for spatially resolved interferometric investigations of transparent objects (phase objects) using the method of digital speckle pattern interferometry (DSPI). A feedback system using an additional Michelson interferometer in the reference arm of the speckle interferometer was installed for computer-controlled digitization of the primary interferograms at defined phase shifts. A special program for interferogram analysis allows the evaluation of two-dimensional phase distributions using different phase-stepping algorithms such as the 3-frame, 4-frame, Carré, 4+1-frame and 6+1-frame techniques. Special DSPI setups with modified Mach-Zehnder interferometers were used to check the system. In some basic experiments the temperature distribution in a cross section of convective heat flows of air was measured. As an application to plasma diagnostics, the space-resolved electron density in a high-pressure mercury lamp was determined using two-wavelength DSPI. To increase the sensitivity of the method, an Nd:YAG laser at 1064 nm was employed in addition to an Ar+ laser at 488 nm. The electron density in one cross section of the lamp could be calculated by Abel inversion of the measured phase data. A multi-directional optical setup with a special mirror system was developed to investigate asymmetric phase objects by DSPI. The setup allows six primary interferograms from different directions to be stored in one step while using only a single reference beam. Helium flows in air with different flow geometries were used as phase objects. Tomographic reconstruction procedures such as the convolution method yield the distributions of the refractive index and the related helium concentrations in selected cross sections of the flow. (author)

  9. Quasiinvariant Automatic Control Digital Systems of Inertia Objects

    OpenAIRE

    Lvov, Volodymyr; Andrieiev, Anatoliy

    2010-01-01

    A two-connected automatic control digital system (ACDS) and an ACDS with combined control are examined. Two-connected and combined ACDS operating in the tracking and stabilization modes are analyzed. The discrete transfer functions of the two-connected and combined systems are obtained.

  10. Device for direct digitized radiology of small objects

    International Nuclear Information System (INIS)

    Thomas, G.; Favier, C.; Brebant, C.; Mogavero, R.

    1983-06-01

    A radiological device has been developed to obtain direct digitized views with a large integration time. A microcomputer system with programs specially developed for automatic evaluation of material defects is used. Some results concerning the welds of electro-nuclear fuel pins are presented.

  11. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.
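The core MIDEA step, fitting a mixture of factorized distributions to the selected individuals and sampling new ones from it, can be sketched roughly as follows. This is a much-simplified, binary-variable sketch in which a naive random partition stands in for MIDEA's real clustering; all function names are mine.

```python
import random

def estimate_mixture(selected, k):
    """Fit a mixture of factorized (univariate Bernoulli) distributions:
    partition the selected individuals into k clusters and estimate the
    per-variable frequency of 1s inside each cluster."""
    random.shuffle(selected)                      # naive stand-in for clustering
    clusters = [selected[i::k] for i in range(k)]
    models = []
    for cluster in clusters:
        n = len(cluster)
        probs = [sum(ind[j] for ind in cluster) / n
                 for j in range(len(cluster[0]))]
        models.append(probs)
    return models

def sample_mixture(models, n_samples):
    """Draw new individuals: pick a mixture component uniformly at random,
    then sample each variable independently from its estimated marginal."""
    out = []
    for _ in range(n_samples):
        probs = random.choice(models)
        out.append([1 if random.random() < p else 0 for p in probs])
    return out
```

In the actual MIDEA, the clustering is done in objective space to spread the mixture components along the Pareto front, and the per-cluster factorizations need not be univariate.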

  12. Optimization of Combined Thermal and Electrical Behavior of Power Converters Using Multi-Objective Genetic Algorithms

    NARCIS (Netherlands)

    Malyna, D.V.; Duarte, J.L.; Hendrix, M.A.M.; Horck, van F.B.M.

    2007-01-01

    A practical example of power electronic converter synthesis is presented, where a multi-objective genetic algorithm, namely non-dominated sorting genetic algorithm (NSGA-II) is used. The optimization algorithm takes an experimentally-derived thermal model for the converter into account. Experimental

  13. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We suggest an algorithm for processing an image sequence to search for and track static and moving large-scale objects. A possible software implementation of the algorithm, based on multithreaded CUDA processing, is suggested. An experimental analysis of the suggested implementation is performed.

  14. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    Science.gov (United States)

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {−1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of the data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency for intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  15. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files store motion pictures and sound as they occur in real life. In today's world, the need for automated processing of information in video files is increasing. Automated information processing has a wide range of applications, including office and home surveillance cameras, traffic control, sports applications, and remote object detection. In particular, detection and tracking of moving objects in video files play an important role. This article describes methods for detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.

  16. Evaluation of digital breast tomosynthesis reconstruction algorithms using synchrotron radiation in standard geometry

    International Nuclear Information System (INIS)

    Bliznakova, K.; Kolitsi, Z.; Speller, R. D.; Horrocks, J. A.; Tromba, G.; Pallikarakis, N.

    2010-01-01

    Purpose: In this article, the image quality of volumes reconstructed by four digital tomosynthesis algorithms, applied to the breast, is investigated using synchrotron radiation. Methods: An angular data set of 21 images of a complex phantom with a heterogeneous tissue-mimicking background was obtained using the SYRMEP beamline at the ELETTRA Synchrotron Light Laboratory, Trieste, Italy. The irradiated part was reconstructed using the multiple projection algorithm (MPA), filtered backprojection with a ramp filter followed by a Hamming window (FBP-RH), and filtered backprojection with a ramp filter only (FBP-R). Additionally, an algorithm for reducing the noise in reconstructed planes, based on subtracting a noise mask from the planes of the volume originally reconstructed with the MPA (MPA-NM), has been further developed. The reconstruction techniques were evaluated by calculating and comparing the contrast-to-noise ratio (CNR) and the artifact spread function. Results: It was found that the MPA-NM resulted in higher CNR, comparable with the CNR of FBP-RH for high-contrast details. Low-contrast objects are well visualized and characterized by high CNR using the simple MPA and the MPA-NM. In addition, the image quality of the reconstructed features, in terms of CNR and visual appearance, was studied as a function of the initial number of projection images and the reconstruction arc. Slices reconstructed from more input projection images show fewer reconstruction artifacts and higher detail CNR, while those reconstructed from projection images acquired over a reduced angular range show pronounced streak artifacts. Conclusions: Of the reconstruction algorithms implemented, the MPA-NM and MPA are a good choice for detecting low-contrast objects, while the FBP-RH, FBP-R, and MPA-NM provide high CNR and well-outlined edges in the case of microcalcifications.
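The CNR figure of merit used in the evaluation can be sketched as follows. This assumes the common definition, the detail/background mean difference divided by the background noise; the paper may use a variant, and the function name is mine.

```python
import statistics

def cnr(detail_pixels, background_pixels):
    """Contrast-to-noise ratio: |mean(detail) - mean(background)| / std(background)."""
    contrast = abs(statistics.fmean(detail_pixels) -
                   statistics.fmean(background_pixels))
    return contrast / statistics.stdev(background_pixels)
```

Comparing this value for the same phantom detail across the MPA, FBP and noise-masked reconstructions is exactly the kind of comparison the abstract reports.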

  17. Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm

    Science.gov (United States)

    Smith, J. Scott

    2011-01-01

    Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.

  18. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with an improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.

  19. An experimental analysis of design choices of multi-objective ant colony optimization algorithms

    OpenAIRE

    Lopez-Ibanez, Manuel; Stutzle, Thomas

    2012-01-01

    There have been several proposals on how to apply the ant colony optimization (ACO) metaheuristic to multi-objective combinatorial optimization problems (MOCOPs). This paper proposes a new formulation of these multi-objective ant colony optimization (MOACO) algorithms. This formulation is based on adding specific algorithm components for tackling multiple objectives to the basic ACO metaheuristic. Examples of these components are how to represent multiple objectives using pheromone and heuristic information.

  20. Multi-objective ant algorithm for wireless sensor network positioning

    International Nuclear Information System (INIS)

    Fidanova, S.; Shindarov, M.; Marinov, P.

    2013-01-01

    It is impossible to imagine our modern life without telecommunications, and wireless networks are a part of telecommunications. Wireless sensor networks (WSN) consist of spatially distributed sensors which communicate wirelessly. Such a network monitors physical or environmental conditions. The objectives are full coverage of the monitoring region and low energy consumption of the network. The most appropriate approach for solving the problem is metaheuristics. In this paper the full coverage of the area is treated as a constraint, while the objectives optimized are a minimal number of sensors and the energy (lifetime) of the network. We apply multi-objective ant colony optimization to solve this important telecommunication problem. We chose the MAX-MIN Ant System approach because it is proven to converge to the global optimum.
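The distinguishing ingredient of the MAX-MIN Ant System is its bounded pheromone update: evaporate all trails, deposit only on the best solution found, and clamp every trail into [tau_min, tau_max]. A hedged sketch (parameter and function names are mine, simplified to a single best-tour deposit):

```python
def mmas_update(tau, best_tour, rho, delta, tau_min, tau_max):
    """MAX-MIN Ant System pheromone update: evaporate everywhere, reinforce
    only the edges of the best tour, then clamp trails to [tau_min, tau_max]."""
    new_tau = {}
    for edge, t in tau.items():
        t = (1.0 - rho) * t          # evaporation
        if edge in best_tour:
            t += delta               # deposit on the best solution only
        new_tau[edge] = min(tau_max, max(tau_min, t))
    return new_tau
```

The clamping is what prevents premature stagnation and underlies the convergence guarantee the abstract refers to.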

  1. Algorithmic interference filtering using the α-truncated mean method in a resolver-to-digital converter

    Directory of Open Access Journals (Sweden)

    Gordiyenko V. I.

    2009-02-01

    Full Text Available A test diagram of a microcontroller-based resolver-to-digital converter and algorithms for interference filtering within it are developed. Experimental verification of the α-truncated mean algorithm, intended to suppress impulse and noise interference, is conducted. The test results are given.
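The α-truncated mean itself is simple to sketch: sort the sample window, discard the α fraction of samples at each end, and average the remainder. A Python sketch (function name mine):

```python
def alpha_truncated_mean(samples, alpha):
    """Discard the alpha fraction of smallest and largest samples,
    then average what is left."""
    s = sorted(samples)
    k = int(alpha * len(s))          # samples trimmed from each end
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)
```

An impulse outlier in the window barely moves the estimate, which is why the method suppresses impulse interference in the converter readings.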

  2. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    Science.gov (United States)

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two no-choice conditions.

  3. Algorithms for ADC multi-site test with digital input stimulus

    NARCIS (Netherlands)

    Sheng, Xiaoqin; Kerkhoff, Hans G.; Zjajo, Amir; Gronthoud, Guido

    2009-01-01

    This paper reports two novel algorithms based on the time-modulo reconstruction method, intended for the detection of parametric faults in analogue-to-digital converters (ADC). In both algorithms, a pulse signal, in a slightly adapted form that allows sufficient time for converter settling, is taken as the digital input stimulus.

  4. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information, which may take the form of a digital image. Building on past research, an encryption algorithm using a logistic function and DNA encoding was proposed in order to increase the security level of chaos-based and DNA-based encryption algorithms. The algorithm uses DNA encoding to map the pixel values onto DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function serves as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8 bits, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy and statistical attacks.
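As an illustration of the keystream role played by the logistic function, here is a much-reduced sketch that omits the DNA encoding stage entirely and keeps only a chaotic XOR stage. The constants, key values and function names are mine, not the paper's.

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state to a byte."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, x0=0.3141, r=3.9999):
    """XOR the pixel bytes with the chaotic keystream.
    Applying the same operation twice decrypts (XOR is an involution)."""
    keystream = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, keystream))
```

In the paper's full scheme the keystream instead drives DNA complement and XOR operations on DNA-encoded pixels, which is what yields the reported entropy and histogram properties.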

  5. In-plane object detection : detection algorithms and visibility problems

    NARCIS (Netherlands)

    Jovanovic, N.

    2011-01-01

    A large number of devices today incorporate some form of detection of objects and people in a given environment. Various detection technologies have been developed over the years in response to many different demands. Devices such as video surveillance systems, scanners, touch screens and

  6. Algorithms for Learning Preferences for Sets of Objects

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples considered in its development include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: it supports a concept of set-based preferences, which include not only preferences for individual items but also preferences regarding the types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that, when considered together, they may be regarded as complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical
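The depth-versus-diversity trade-off that DD-PREF encodes can be caricatured in a few lines: score a candidate set as a weighted blend of average per-item quality (depth) and average pairwise spread (diversity). This is a toy single-feature sketch, not the DD-PREF language itself, and the names are mine.

```python
def set_score(items, quality, w_diversity):
    """Blend depth (average per-item quality) with diversity
    (average pairwise distance between scalar feature values)."""
    depth = sum(quality(x) for x in items) / len(items)
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    diversity = (sum(abs(a - b) for a, b in pairs) / len(pairs)) if pairs else 0.0
    return (1.0 - w_diversity) * depth + w_diversity * diversity
```

With w_diversity > 0, a spread-out set beats a redundant one of equal quality, which is the portfolio effect the abstract describes.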

  7. Storytelling in the digital world: achieving higher-level learning objectives.

    Science.gov (United States)

    Schwartz, Melissa R

    2012-01-01

    Nursing students are not passive media consumers but instead live in a technology ecosystem where digital is the language they speak. To prepare the next generation of nurses, educators must incorporate multiple technologies to improve higher-order learning. The author discusses the evolution and use of storytelling as part of the digital world and how digital stories can be aligned with Bloom's Taxonomy so that students achieve higher-level learning objectives.

  8. A digital elevation analysis: Spatially distributed flow apportioning algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang-Hyun; Kim, Kyung-Hyun [Pusan National University, Pusan(Korea); Jung, Sun-Hee [Korea Environment Institute, (Korea)

    2001-06-30

    A flow determination algorithm is proposed for distributed hydrologic modeling. The advantages of single and multiple flow direction schemes are selectively combined to address the drawbacks of existing algorithms. A spatially varied flow apportioning factor is introduced in order to accommodate the accumulated area from upslope cells. The channel initiation threshold area (CIT) concept is expanded and integrated into the spatially distributed flow apportioning algorithm in order to delineate a realistic channel network. Application to a field example suggests that the linearly distributed flow apportioning scheme provides some advantages over existing approaches, such as relaxing over-dissipation problems near channel cells, preserving the connectivity of river cells and the continuity of saturated areas, and avoiding the optimization of the few parameters required by existing algorithms. The effects of grid size are explored spatially as well as statistically. (author). 28 refs., 7 figs.
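The apportioning idea can be sketched for a single DEM cell: outflow is divided among lower neighbours in proportion to a power of the distance-weighted elevation drop. This is a generic multiple-flow-direction sketch, not the paper's exact spatially varied factor; names are mine.

```python
def apportion_flow(center_z, neighbors, p=1.0):
    """Split outflow from a cell among its lower neighbours in proportion to
    (elevation drop / distance) raised to an apportioning exponent p.
    `neighbors` maps direction -> (neighbour elevation, distance to neighbour)."""
    slopes = {d: (center_z - z) / dist
              for d, (z, dist) in neighbors.items() if z < center_z}
    total = sum(s ** p for s in slopes.values())
    return {d: (s ** p) / total for d, s in slopes.items()}

# Example cell at elevation 10 with two lower neighbours and one higher one.
fractions = apportion_flow(
    10.0,
    {'E': (8.0, 1.0), 'SE': (7.0, 2 ** 0.5), 'N': (11.0, 1.0)},
)
```

Raising the exponent p concentrates flow toward the steepest neighbour, recovering single-flow-direction behaviour in the limit; p = 1 spreads it linearly, as in the scheme the abstract describes.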

  9. COMET Multimedia modules and objects in the digital library system

    Science.gov (United States)

    Spangler, T. C.; Lamos, J. P.

    2003-12-01

    Over the past ten years of developing Web- and CD-ROM-based training materials, the Cooperative Program for Operational Meteorology, Education and Training (COMET) has created a unique archive of almost 10,000 multimedia objects and some 50 web-based interactive multimedia modules on various aspects of weather and weather forecasting. These objects and modules, containing illustrations, photographs, animations, video sequences, and audio files, are potentially a valuable resource for university faculty and students, forecasters, emergency managers, public school educators, and other individuals and groups needing such materials for educational use. The COMET modules are available on the COMET educational web site http://www.meted.ucar.edu, and the COMET Multimedia Database (MMDB) makes a collection of the multimedia objects available in a searchable online database for viewing and download over the Internet. Some 3200 objects are already available at the MMDB website: http://archive.comet.ucar.edu/moria/

  10. Digital Image Encryption Algorithm Design Based on Genetic Hyperchaos

    Directory of Open Access Journals (Sweden)

    Jian Wang

    2016-01-01

    Full Text Available Because present chaotic image encryption algorithms based on scrambling and diffusion are vulnerable to chosen-plaintext (chosen-ciphertext) attacks during pixel-position scrambling, we put forward an image encryption algorithm based on a genetic hyperchaotic system. By introducing plaintext feedback into the scrambling process, the algorithm makes the scrambling effect depend on both the initial chaotic sequence and the plaintext itself, achieving an organic fusion of the image features and the encryption algorithm. Introducing a plaintext feedback mechanism into the diffusion process improves the plaintext sensitivity and the algorithm's resistance to chosen-plaintext and chosen-ciphertext attacks, while also making full use of the characteristics of the image information. Finally, experimental simulation and theoretical analysis show that our proposed algorithm not only effectively resists chosen-plaintext (chosen-ciphertext) attacks, statistical attacks, and information entropy attacks but also improves the efficiency of image encryption, making it a relatively secure and effective approach to image communication.

  11. Hardware Implementation of the Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision. Fast search algorithms emerged as an important technique for achieving real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation on an FPGA, due to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using Xilinx and ModelSim software. The results agree with the algorithm's implementation in the Matlab environment.
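For reference, the diamond search pattern that the FPGA architecture implements can be sketched in software: repeat the large diamond search pattern (LDSP) until the best cost sits at its centre, then refine once with the small diamond (SDSP). A pure-software, illustrative Python sketch (names mine):

```python
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def sad(ref, cur, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block at (bx, by) in the
    current frame and the block displaced by (dx, dy) in the reference frame."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def diamond_search(ref, cur, bx, by, n, max_d):
    """Walk the large diamond until its centre is the best candidate,
    then refine once with the small diamond; return the motion vector."""
    cx = cy = 0

    def cost(dx, dy):
        if abs(dx) > max_d or abs(dy) > max_d:
            return float('inf')      # outside the search range
        return sad(ref, cur, bx, by, dx, dy, n)

    while True:
        best = min(LDSP, key=lambda s: cost(cx + s[0], cy + s[1]))
        cx, cy = cx + best[0], cy + best[1]
        if best == (0, 0):           # centre won: switch to the small diamond
            break
    best = min(SDSP, key=lambda s: cost(cx + s[0], cy + s[1]))
    return cx + best[0], cy + best[1]

# Synthetic frame pair: the current frame is the reference shifted by (3, 2).
ref = [[3 * x + 17 * y for x in range(32)] for y in range(32)]
cur = [[3 * (x + 3) + 17 * (y + 2) for x in range(32)] for y in range(32)]
mv = diamond_search(ref, cur, bx=8, by=8, n=8, max_d=7)
```

It is exactly this nested minimum-of-SAD structure that makes the algorithm attractive for a parallel hardware datapath.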

  12. Algorithmic information theory mathematics of digital information processing

    CERN Document Server

    Seibt, Peter

    2007-01-01

    Treats the Mathematics of many important areas in digital information processing. This book covers, in a unified presentation, five topics: Data Compression, Cryptography, Sampling (Signal Theory), Error Control Codes, Data Reduction. It is useful for teachers, students and practitioners in Electronic Engineering, Computer Science and Mathematics.

  13. An objective procedure for evaluation of adaptive antifeedback algorithms in hearing aids.

    Science.gov (United States)

    Freed, Daniel J; Soli, Sigfrid D

    2006-08-01

    This study evaluated the performance of nine adaptive antifeedback algorithms. There were two goals: first, to identify objective procedures that are useful for evaluating these algorithms, and second, to identify strengths and weaknesses of existing algorithms. The algorithms were evaluated in behind-the-ear implementations on the Knowles Electronics Manikin for Acoustic Research (KEMAR). Different acoustic conditions were created by placing a telephone handset or a hat on KEMAR. Electroacoustic techniques were devised to measure the following performance aspects of each algorithm: (1) additional gain made available before oscillation, (2) gain lost in specific frequency regions, (3) reduction of suboscillatory peaks in the frequency response, (4) speed of adaptation to changing acoustic conditions, and (5) robustness in the presence of tonal input signals. For each measurement, performance varied widely across algorithms. No single algorithm was clearly superior or inferior to the others. Generally, the feedback cancellation algorithms were less likely to sacrifice gain in specific frequency regions and better at reducing suboscillatory peaks, whereas the algorithms that used noncancellation techniques were more tolerant of tonal input signals. For those algorithms equipped with special operational modes intended for music listening, the music mode improved the response to tonal inputs but sometimes sacrificed other performance aspects. Algorithms that required an acoustic measurement for initialization purposes tended to perform poorly in acoustic conditions dissimilar to the condition in which initialization was performed. The objective methods devised for this study appear useful for evaluating the performance of adaptive antifeedback algorithms. Currently available algorithms demonstrate a wide range of performance, and further research is required to develop new algorithms that combine the best features of existing algorithms.

  14. GuidosToolbox: universal digital image object analysis

    Science.gov (United States)

    Peter Vogt; Kurt Riitters

    2017-01-01

    The increased availability of mapped environmental data calls for better tools to analyze the spatial characteristics and information contained in those maps. Publicly available, userfriendly and universal tools are needed to foster the interdisciplinary development and application of methodologies for the extraction of image object information properties contained in...

  15. Efficient Tracking, Logging, and Blocking of Accesses to Digital Objects

    Science.gov (United States)

    2015-09-01

    …used and that we can trap guest OS page faults. Shadow paging is a technique that creates a copy of guest page tables, sanitizes and propagates the

  16. Crossref an update on article level linking and digital object identifiers

    CERN Multimedia

    2002-01-01

    Description of the CrossRef initiative, "an independent non-profit membership organization that was established by the publishing community to permit article linking based on digital object identifiers (DOIs)" (1 page).

  17. Convergence of iterative image reconstruction algorithms for Digital Breast Tomosynthesis

    DEFF Research Database (Denmark)

    Sidky, Emil; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    Most iterative image reconstruction algorithms are based on some form of optimization, such as minimization of a data-fidelity term plus an image regularizing penalty term. While achieving the solution of these optimization problems may not directly be clinically relevant, accurate optimization s...

  18. From digital earth to digital neighbourhood: A study of subjective measures of walkability attributes in objectively assessed digital neighbourhood

    International Nuclear Information System (INIS)

    Qureshi, S; Ho, C S

    2014-01-01

    According to an IEA report (2011), about 23% of the world's CO2 emissions result from transport, and this is one of the few areas where emissions are still rapidly increasing. The use of private vehicles is one of the principal contributors to greenhouse gas emissions from the transport sector. This paper therefore focuses on the shift to more sustainable, low-carbon forms of transportation such as walking. Neighbourhood built environment attributes may influence walkability. For this study, the author used a modified version of the "Neighbourhood Environment Walkability Scale" to compare respondents' perceptions of the attributes of two neighbourhoods of Putrajaya. The 21st century needs planners to use the Digital Earth concept, moving from global to regional to national to very local issues, using integrated, advanced technologies such as earth observation, GIS and virtual reality. For this research, two neighbourhoods of different densities (high and low density) were selected, and a total sample of 381 participants (195 and 186) aged 7 to 65 years was recruited. For the subjective measures we used a 54-question questionnaire survey, whereas for the objective measures we used the desktop 9.3 version of ArcGIS software. Our results show that respondents who reside in the high-walkable neighbourhood, Precinct 9 in Putrajaya, rated factors such as residential density, land use mix, proximity to destinations and street connectivity consistently higher than did respondents of the low-walkable neighbourhood, Precinct 8 in Putrajaya.

  19. La ondina digital: Apariciones posmodernas de la ninfa acuática en el imaginario del algoritmo digital = Digital Undine: Postmodern Appearances of the Water Nymph in the Digital Algorithm Imaginarium

    Directory of Open Access Journals (Sweden)

    Jaime Repollés Llauradó

    2014-12-01

    Full Text Available Digital artists use algorithms to generate linear and vector systems which take the shape of landscapes and human figures and often evoke water imaginaria. Inside these virtual settings the attributes of the classical nymphs resonate and multiply. Both the postmodern Internet user and the audience of animation cinema can contemplate these digital waters with the same evocative gaze that allowed classical poets to notice the emergence of Nereids from the mythical springs. The same "nymphomania" that stirred the fin de siècle artistic imaginarium revives when we notice the presence of postmodern Undines, emerged from algorithmic curves, emanating from the same mythical spring as the imaginarium of the modern nymph, from Art Nouveau objects to the cinema of the digital age.

  20. Analysis of Various Multi-Objective Optimization Evolutionary Algorithms for Monte Carlo Treatment Planning System

    CERN Document Server

    Tydrichova, Magdalena

    2017-01-01

    In this project, various available multi-objective optimization evolutionary algorithms were compared with respect to their performance and the distribution of their solutions. The main goal was to select the most suitable algorithms for applications in cancer hadron therapy planning. For our purposes, complex testing and analysis software was developed. Several conclusions and hypotheses were also drawn for further research.

  1. The artificial-free technique along the objective direction for the simplex algorithm

    International Nuclear Information System (INIS)

    Boonperm, Aua-aree; Sinapiromsaran, Krung

    2014-01-01

    The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints, the simplex algorithm can be started directly; otherwise, artificial variables must be introduced to start it. If the simplex algorithm can be started without artificial variables, each iteration requires less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is expressed in terms of another variable. The constraints are then split into three groups: those with positive, negative, and zero coefficients. Along the objective direction, some constraints from the positive-coefficient group will form the optimal solution. If the positive-coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative- and zero-coefficient groups; the feasible region obtained from the positive-coefficient group is guaranteed to be nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative- and zero-coefficient groups are then added back to the solved problem, and the dual simplex method determines the new optimal solution. An example shows the effectiveness of our algorithm.

  2. The artificial-free technique along the objective direction for the simplex algorithm

    Science.gov (United States)

    Boonperm, Aua-aree; Sinapiromsaran, Krung

    2014-03-01

    The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints, the simplex algorithm can be started directly; otherwise, artificial variables must be introduced to start it. If the simplex algorithm can be started without artificial variables, each iteration requires less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is expressed in terms of another variable. The constraints are then split into three groups: those with positive, negative, and zero coefficients. Along the objective direction, some constraints from the positive-coefficient group will form the optimal solution. If the positive-coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative- and zero-coefficient groups; the feasible region obtained from the positive-coefficient group is guaranteed to be nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative- and zero-coefficient groups are then added back to the solved problem, and the dual simplex method determines the new optimal solution. An example shows the effectiveness of our algorithm.
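
The three-way constraint split described in the abstract can be sketched in a few lines (an illustrative reading, not the authors' implementation; the constraint matrix, right-hand side, and chosen variable index are made-up):

```python
# Group constraints a.x <= b by the sign of the coefficient of a chosen
# objective variable x_k, as in the artificial-free technique above.

def split_constraints(A, b, k):
    """Group (row, rhs) pairs by the sign of column k of A."""
    positive, negative, zero = [], [], []
    for row, rhs in zip(A, b):
        if row[k] > 0:
            positive.append((row, rhs))
        elif row[k] < 0:
            negative.append((row, rhs))
        else:
            zero.append((row, rhs))
    return positive, negative, zero

# Made-up 2-variable example, splitting on variable x_0.
A = [[2, 1], [-1, 3], [0, 1]]
b = [4, 6, 5]
pos, neg, zer = split_constraints(A, b, k=0)
```

The relaxed problem would keep only `pos`; `neg` and `zer` are reintroduced afterwards and resolved with the dual simplex method.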

  3. Twin-image reduction in inline digital holography using an object segmentation heuristic

    International Nuclear Information System (INIS)

    McElhinney, Conor P; Hennelly, Bryan M; Naughton, Thomas J

    2008-01-01

    We present a digital image processing heuristic for the removal of the twin-image in inline digital holograms. Typically, the unwanted twin manifests itself as visible corruptive noise in the reconstruction plane. We reconstruct the unwanted twin-image at its in-focus plane and suppress it by first finding the boundary of the object, and then removing the optical energy within this boundary. In this plane, the wanted twin-image optical energy is largely dispersed outside this boundary and so it is retained. The heuristic's effectiveness is demonstrated using a digital hologram of a real-world object.

  4. Twin-image reduction in inline digital holography using an object segmentation heuristic

    Energy Technology Data Exchange (ETDEWEB)

    McElhinney, Conor P; Hennelly, Bryan M [Department of Computer Science, National University of Ireland, Maynooth, County Kildare (Ireland); Naughton, Thomas J [University of Oulu, RFMedia Laboratory, Oulu Southern Institute, Vierimaantie 5, 84100 Ylivieska (Finland)], E-mail: conormce@cs.nuim.ie, E-mail: tomn@cs.nuim.ie

    2008-11-01

    We present a digital image processing heuristic for the removal of the twin-image in inline digital holograms. Typically, the unwanted twin manifests itself as visible corruptive noise in the reconstruction plane. We reconstruct the unwanted twin-image at its in-focus plane and suppress it by first finding the boundary of the object, and then removing the optical energy within this boundary. In this plane, the wanted twin-image optical energy is largely dispersed outside this boundary and so it is retained. The heuristic's effectiveness is demonstrated using a digital hologram of a real-world object.
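
The suppression step described in these two records amounts to masking: at the unwanted twin's in-focus plane, the optical energy inside the segmented object boundary is removed. A minimal sketch (the hologram reconstruction and boundary segmentation are assumed to happen elsewhere; the arrays are made-up):

```python
# Zero the reconstructed field inside the segmented object region; energy
# of the wanted twin, dispersed outside the boundary, is retained.

def suppress_twin(field, object_mask):
    """Return the field with values inside the object mask set to zero."""
    return [
        [0.0 if inside else value for value, inside in zip(row, mask_row)]
        for row, mask_row in zip(field, object_mask)
    ]

field = [[1.0, 2.0], [3.0, 4.0]]          # toy reconstruction-plane intensities
mask = [[True, False], [False, True]]     # toy segmented object region
out = suppress_twin(field, mask)
```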

  5. Reconstruction of a digital core containing clay minerals based on a clustering algorithm

    Science.gov (United States)

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and related information for digital core reconstruction of mature sandstone reservoirs, especially unconsolidated sandstone reservoirs. Although two-dimensional data-based reconstruction methods are well suited for simulating the microstructure of sandstone reservoirs, the reconstruction and division of clay minerals remain challenging and play a vital role in building better digital cores. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After applying the hybrid method, and comparing with a model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture; the statistics and geometry of the reconstructed model were similar to the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. The K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters according to differences in the clusters' characteristics. Based on the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The resulting distributions and textures of the clay minerals were reasonable. The clustering algorithm improved digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.

  6. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    Science.gov (United States)

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and related information for digital core reconstruction of mature sandstone reservoirs, especially unconsolidated sandstone reservoirs. Although two-dimensional data-based reconstruction methods are well suited for simulating the microstructure of sandstone reservoirs, the reconstruction and division of clay minerals remain challenging and play a vital role in building better digital cores. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After applying the hybrid method, and comparing with a model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture; the statistics and geometry of the reconstructed model were similar to the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. The K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters according to differences in the clusters' characteristics. Based on the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The resulting distributions and textures of the clay minerals were reasonable. The clustering algorithm improved digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
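
The Hoshen-Kopelman labelling step can be illustrated on a binary 2D grid with a union-find sketch (4-connectivity; the paper works on 3D digital cores, and this toy grid is made-up):

```python
# Hoshen-Kopelman-style connected-cluster labelling with union-find.

def label_clusters(grid):
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for i in range(rows):
        for j in range(cols):
            if not grid[i][j]:
                continue
            up = labels[i - 1][j] if i > 0 and grid[i - 1][j] else 0
            left = labels[i][j - 1] if j > 0 and grid[i][j - 1] else 0
            if up and left:
                labels[i][j] = up
                union(up, left)             # record equivalence
            elif up or left:
                labels[i][j] = up or left
            else:
                next_label += 1
                parent[next_label] = next_label
                labels[i][j] = next_label
    # second pass: collapse equivalence classes to canonical labels
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels

grid = [[1, 1, 0],
        [0, 1, 0],
        [1, 0, 1]]
labels = label_clusters(grid)
n_clusters = len({v for row in labels for v in row if v})
```

In the paper's workflow, the labelled clusters' sizes would then feed the K-means split of over-large clusters.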

  7. Object detection and recognition in digital images theory and practice

    CERN Document Server

    Cyganek, Boguslaw

    2013-01-01

    Object detection, tracking and recognition in images are key problems in computer vision. This book provides the reader with a balanced treatment between the theory and practice of selected methods in these areas to make the book accessible to a range of researchers, engineers, developers and postgraduate students working in computer vision and related fields. Key features: explains the main theoretical ideas behind each method (augmented with rigorous mathematical derivations of the formulas), describes their implementation in C++, and demonstrates them working in real applications.

  8. The Emperor’s New Augmented Clothes. Digital Objects as Part of the Every Day

    Directory of Open Access Journals (Sweden)

    Nicola Liberati

    2017-10-01

    Full Text Available The main aim of this work is to solve a problem that augmented reality is facing, using phenomenological analyses and projectors. Augmented reality seeks to merge the digital and real worlds by producing a mixed reality in which digital objects are usually visualised through head-mounted or mobile devices. However, this technology faces a problem: the objects generated by the digital devices exist merely for the small group of people using the specific devices, and therefore look fictitious to the other members of society who are not using them. In order to analyse the elements which make these objects fictitious to other members of society, we consider the story of The Emperor’s New Clothes because, in this story too, there are fictional entities not perceivable by other members of the community. Thanks to this story, it is possible to highlight the elements which make objects part of the everyday world, and to show how the intersubjectivity of these objects is directly related to the way they are perceived by subjects and, in the case of augmented reality, to the devices used to make them perceivable. For this reason, the problem augmented reality is facing can be solved by changing the devices used to produce these digital objects. At the end of the work, we propose a project which solves the problem by following the elements previously highlighted. We show how, thanks to wearable projectors, it is possible to produce digital clothes that are part of the everyday world of every subject. With these digital clothes, people will be able to wear digital objects as if they were common, usual objects, without being naked.

  9. Multi-objective PID Optimization for Speed Control of an Isolated Steam Turbine using Genetic Algorithm

    OpenAIRE

    Sanjay Kr. Singh; D. Boolchandani; S.G. Modani; Nitish Katal

    2014-01-01

    This study focuses on multi-objective optimization of PID controllers for optimal speed control of an isolated steam turbine. In complex operations, optimal tuning plays an imperative role in maintaining product quality and process safety. The study compares optimal PID tuning using a multi-objective genetic algorithm (NSGA-II) against a standard genetic algorithm and the Ziegler-Nichols method for the speed control of an isolated steam turbine. Isolated steam turbine...

  10. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to purposefully remove some of its portions. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms and show that the proposed algorithm works well.

  11. Sensitivity of Calibrated Parameters and Water Resource Estimates on Different Objective Functions and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Delaram Houshmand Kouchi

    2017-05-01

    Full Text Available The successful application of hydrological models relies on careful calibration and uncertainty analysis. However, there are many different calibration/uncertainty analysis algorithms, and each could be run with different objective functions. In this paper, we highlight the fact that each combination of optimization algorithm and objective function may lead to a different set of optimum parameters, while having the same performance; this makes the interpretation of dominant hydrological processes in a watershed highly uncertain. We used three different optimization algorithms (SUFI-2, GLUE, and PSO) and eight different objective functions (R2, bR2, NSE, MNS, RSR, SSQR, KGE, and PBIAS) in a SWAT model to calibrate the monthly discharges in two watersheds in Iran. The results show that all three algorithms, using the same objective function, produced acceptable calibration results, however with significantly different parameter ranges. Similarly, an algorithm using different objective functions also produced acceptable calibration results, but with different parameter ranges. The different calibrated parameter ranges consequently resulted in significantly different water resource estimates. Hence, the parameters and the outputs that they produce in a calibrated model are “conditioned” on the choices of the optimization algorithm and objective function. This adds another level of non-negligible uncertainty to watershed models, calling for more attention and investigation in this area.
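
Three of the objective functions named above (NSE, PBIAS, KGE) can be written out directly from their standard definitions; a pure-Python sketch with made-up series (the paper's exact variants, e.g. sign conventions for PBIAS, may differ):

```python
# Standard forms of three calibration objective functions.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: 0 is a perfect fit (sign convention varies)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency: 1 is a perfect fit."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = (sum((o - mo) ** 2 for o in obs) / n) ** 0.5
    ss = (sum((s - ms) ** 2 for s in sim) / n) ** 0.5
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    return 1.0 - ((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2) ** 0.5

obs = [1.0, 2.0, 3.0, 4.0]   # made-up observed discharges
sim = [1.0, 2.0, 3.0, 4.0]   # made-up simulated discharges
```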

  12. Representing Value as Digital Object: A Discussion of Transferability and Anonymity; Digital Library Initiatives of the Deutsche Forschungsgemeinschaft; CrossRef Turns One; Fermi National Accelerator Laboratory (Fermilab).

    Science.gov (United States)

    Kahn, Robert E.; Lyons, Patrice A.; Brahms, Ewald; Brand, Amy; van den Bergen, Mieke

    2001-01-01

    Includes four articles that discuss the use of digital objects to represent value in a network environment; digital library initiatives at the central public funding organization for academic research in Germany; an application of the Digital Object Identifier System; and the Web site of the Fermi National Accelerator Laboratory. (LRW)

  13. Low Latency Digit-Recurrence Reciprocal and Square-Root Reciprocal Algorithm and Architecture

    DEFF Research Database (Denmark)

    Antelo, Elisardo; Lang, Tomas; Montuschi, Paolo

    2005-01-01

    The reciprocal and square-root reciprocal operations are important in several applications. For these operations, we present algorithms that combine a digit-by-digit module and one iteration of a quadratic-convergence approximation. The latter is implemented by a digit-recurrence, which uses......-up of about 2 and, because of the approximation part, the area factor is also about 2. We also show a combined implementation for both operations that has essentially the same complexity as that for square-root reciprocal alone....
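
The combination described above can be sketched in software as a decimal digit-recurrence stage followed by one Newton-Raphson step x(2 - dx), whose quadratic convergence roughly doubles the number of correct digits (a toy model, not the paper's hardware algorithm, which uses redundant digit sets and a different radix):

```python
# Digit-by-digit reciprocal plus one quadratic-convergence refinement.

def reciprocal(d, digit_steps):
    """Approximate 1/d for d in (1, 2)."""
    # digit-recurrence stage: one decimal digit per iteration
    approx, remainder, scale = 0.0, 1.0, 0.1
    for _ in range(digit_steps):
        remainder *= 10
        q = int(remainder // d)      # select the next decimal digit
        approx += q * scale
        remainder -= q * d
        scale /= 10
    # one Newton-Raphson step: error is roughly squared
    return approx * (2.0 - d * approx)

value = reciprocal(1.6, 4)   # 1/1.6 = 0.625
```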

  14. An efficient hybrid evolutionary algorithm based on PSO and HBMO algorithms for multi-objective Distribution Feeder Reconfiguration

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Engineering Department, Shiraz University of Technology, Shiraz (Iran)

    2009-08-15

    This paper introduces a robust hybrid evolutionary search algorithm to solve the multi-objective Distribution Feeder Reconfiguration (DFR) problem. The main objectives of the DFR are to minimize the real power loss, the deviation of the nodes' voltage, and the number of switching operations, and to balance the loads on the feeders. Because the objectives are different and not commensurable, it is difficult to solve the problem by conventional approaches that optimize a single objective. This paper presents a new approach based on norm3 for the DFR problem. In the proposed method, the objective functions are considered as a vector, and the aim is to maximize the distance (norm2) between the objective function vector and the worst objective function vector while the constraints are met. Since the proposed DFR is a multi-objective, non-differentiable optimization problem, a new hybrid evolutionary algorithm (EA) based on the combination of Honey Bee Mating Optimization (HBMO) and Discrete Particle Swarm Optimization (DPSO), called DPSO-HBMO, is applied to solve it. The results of the proposed reconfiguration method are compared with the solutions obtained by other approaches, the original DPSO and HBMO, over different distribution test systems. (author)
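
The norm-based scalarization described above can be sketched as follows: score each candidate configuration by the Euclidean distance of its (normalized) objective vector from the worst objective vector, and keep the farthest. The candidate names and objective values below are made-up:

```python
# Prefer the solution farthest (norm2) from the worst objective vector.

def distance_from_worst(objectives, worst):
    return sum((f - w) ** 2 for f, w in zip(objectives, worst)) ** 0.5

candidates = {
    "config_a": [0.2, 0.3, 0.1],   # minimized objectives, normalized to [0, 1]
    "config_b": [0.6, 0.1, 0.4],
}
worst = [1.0, 1.0, 1.0]            # worst value of each normalized objective
best = max(candidates, key=lambda k: distance_from_worst(candidates[k], worst))
```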

  15. Smart Objects, Dumb Archives: A User-Centric, Layered Digital Library Framework

    Science.gov (United States)

    Maly, Kurt; Nelson, Michael L.; Zubair, Mohammad

    1999-01-01

    Currently, there exist a large number of superb digital libraries, all of which are, unfortunately, vertically integrated and all presenting a monolithic interface to their users. Ideally, a user would want to locate resources from a variety of digital libraries while dealing with only one interface. A number of approaches to this interoperability issue exist, including defining a universal protocol for all libraries to adhere to, or developing mechanisms to translate between protocols. The approach we illustrate in this paper is to push down the level of universal protocols to one for digital object communication and for communication with simple archives. This approach creates the opportunity for digital library service providers to create digital libraries tailored to the needs of user communities, drawing from available archives and individual publishers who adhere to this standard. We have created a reference implementation based on the hypertext transfer protocol (http), with the protocols being derived from the Dienst protocol. We have created a special class of digital objects called buckets and a number of archives based on a NASA collection and NSF-funded projects. Starting from NCSTRL, we have developed a set of digital library services called NCSTRL+ and have created digital libraries for researchers, educators and students that can each draw on all the archives and individually created buckets.

  16. A Radix-10 Digit-Recurrence Division Unit: Algorithm and Architecture

    DEFF Research Database (Denmark)

    Lang, Tomas; Nannarelli, Alberto

    2007-01-01

    In this work, we present a radix-10 division unit that is based on the digit-recurrence algorithm. The previous decimal division designs do not include recent developments in the theory and practice of this type of algorithm, which were developed for radix-2^k dividers. In addition to the adaptat...... dynamic range of significant) and it has a shorter latency than a radix-10 unit based on the Newton-Raphson approximation....
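
The digit-recurrence principle behind such a unit is schoolbook long division, producing one decimal quotient digit per iteration; a toy software sketch (the hardware uses redundant digit sets and other refinements not shown here):

```python
# Radix-10 digit recurrence: one decimal quotient digit per step.

def decimal_divide(x, d, n_digits):
    """Decimal digits of x/d for integers 0 < x < d."""
    digits = []
    remainder = x
    for _ in range(n_digits):
        remainder *= 10          # shift left one decimal position
        q = remainder // d       # select the next quotient digit
        digits.append(q)
        remainder -= q * d       # residual for the next iteration
    return digits

digits = decimal_divide(1, 3, 5)   # 1/3 = 0.33333...
```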

  17. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
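
The contrast-to-noise ratio used for the in-plane comparison is commonly computed as (mean signal - mean background) / background standard deviation; a small sketch with made-up pixel values (the paper's exact definition may differ):

```python
# Common CNR definition for a signal ROI against a background ROI.

def cnr(signal, background):
    ms = sum(signal) / len(signal)
    mb = sum(background) / len(background)
    sb = (sum((b - mb) ** 2 for b in background) / len(background)) ** 0.5
    return (ms - mb) / sb

signal = [110.0, 112.0, 108.0, 110.0]      # made-up mass ROI pixels
background = [100.0, 102.0, 98.0, 100.0]   # made-up background ROI pixels
value = cnr(signal, background)
```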

  18. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    Directory of Open Access Journals (Sweden)

    Muhammad Nanda Kurniawan

    2014-08-01

    Full Text Available Abstract In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. Development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). The techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA), and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The final result of this research is a program for the AR.Drone to track a moving object on the floor with a fast response time of under 1 second.
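
The Centroid-Contour Distance feature mentioned above is a shape signature: the distances from the object's centroid to its contour points, often normalized for scale invariance. A minimal sketch with a made-up contour:

```python
# CCD shape signature, normalized by the maximum distance.

def ccd_signature(contour):
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in contour]
    peak = max(dists)
    return [d / peak for d in dists]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # made-up contour (a square's corners)
signature = ccd_signature(square)
```

All four corners are equidistant from the centroid, so the normalized signature is flat; a real contour would give a distinctive profile to feed PCA and the GLVQ classifier.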

  19. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Full Text Available Object tracking is one of the main fields within computer vision. Amongst various methods for object detection and tracking, the background subtraction approach makes the detection of objects easier; the proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms were studied, and experiments were carried out on various standard and user-defined video data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search (MDS) algorithm is proposed that uses a small diamond-shaped search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with the minimum cost function; the algorithm ends with the small-shape pattern. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach and, finally, the MDS motion estimation algorithm is used for tracking the object in colour video sequences. The experiments were carried out using different video data sets containing a single object, and the results were evaluated and compared using parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS in terms of average search points and average computation time.
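
The small-diamond search step at the core of DS/MDS can be sketched as follows: evaluate the SAD cost at the centre and its four diamond neighbours, move to the minimum, and stop when the centre wins (the MDS variant additionally grows to a large diamond in intermediate steps; the frames and block position below are made-up):

```python
# Small-diamond block-matching search with a sum-of-absolute-differences cost.

SMALL_DIAMOND = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(frame_a, frame_b, ax, ay, bx, by, size):
    """SAD between size x size blocks at (ax, ay) in frame_a and (bx, by) in frame_b."""
    total = 0
    for i in range(size):
        for j in range(size):
            total += abs(frame_a[ax + i][ay + j] - frame_b[bx + i][by + j])
    return total

def diamond_search(prev, cur, bx, by, size, max_steps=10):
    """Motion vector (drow, dcol) of the block at (bx, by) in `cur`, searched in `prev`."""
    cx, cy = bx, by
    for _ in range(max_steps):
        costs = {}
        for dx, dy in SMALL_DIAMOND:
            nx, ny = cx + dx, cy + dy
            if 0 <= nx <= len(prev) - size and 0 <= ny <= len(prev[0]) - size:
                costs[(nx, ny)] = sad(cur, prev, bx, by, nx, ny, size)
        best = min(costs, key=costs.get)
        if best == (cx, cy):     # centre is the minimum: stop
            break
        cx, cy = best
    return cx - bx, cy - by

# Previous frame: a bright 2x2 patch at (1, 1); current frame: same patch at (1, 2).
prev = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for i in range(2):
    for j in range(2):
        prev[1 + i][1 + j] = 9
        cur[1 + i][2 + j] = 9

mv = diamond_search(prev, cur, 1, 2, 2)
```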

  20. HYBRID CRYPTOGRAPHY STREAM CIPHER AND RSA ALGORITHM WITH DIGITAL SIGNATURE AS A KEY

    Directory of Open Access Journals (Sweden)

    Grace Lamudur Arta Sihombing

    2017-03-01

    Full Text Available Confidentiality of data is very important in communication; many cyber crimes exploit security holes for entry and manipulation. To ensure the security and confidentiality of data, a technique to encrypt data or information, called cryptography, is required; it is one of the components that cannot be ignored in building security. This research analyzes hybrid cryptography with a symmetric key, using a stream cipher algorithm, and an asymmetric key, using the RSA (Rivest-Shamir-Adleman) algorithm. The advantages of hybrid cryptography are the speed of processing data with a symmetric algorithm and the easy transfer of keys with an asymmetric algorithm, which can increase the speed of transaction processing. The stream cipher algorithm uses an image digital signature as a key, which is in turn secured by the RSA algorithm, so the keys for encryption and decryption are different. The Blum Blum Shub method is used to generate the values p and q for the RSA algorithm, making it very difficult for a cryptanalyst to break the key. Analysis of the hybrid cryptography with stream cipher and RSA algorithms, with a digital signature as a key, indicates that the size of the encrypted file is equal to the size of the plaintext, neither larger nor smaller, so that the time required for the encryption and decryption process is relatively fast.
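
The Blum Blum Shub generator mentioned above iterates x_{i+1} = x_i^2 mod M with M = pq, where p and q are primes congruent to 3 mod 4, and takes one bit per step. A toy sketch (the parameters are tiny demo values, far too small for real key generation):

```python
# Toy Blum Blum Shub pseudo-random bit generator.

def bbs_bits(p, q, seed, n_bits):
    """Least-significant bit of each squaring step; p, q must be 3 mod 4 primes."""
    m = p * q
    x = seed * seed % m          # seed must be coprime to m
    bits = []
    for _ in range(n_bits):
        x = x * x % m
        bits.append(x & 1)
    return bits

bits = bbs_bits(p=11, q=19, seed=3, n_bits=8)   # demo-sized parameters only
```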

  1. Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm

    Science.gov (United States)

    Craig, Sam; While, Lyndon; Barone, Luigi

    We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.
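
"Beats the 2008-9 NHL schedule in all objectives simultaneously" is Pareto dominance; a minimal check for minimized objectives (the numbers below are made-up):

```python
# Pareto dominance for minimization problems.

def dominates(a, b):
    """True if a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

evolved = (41_000, 2.1, 3)     # total travel, rest inequity, longest streak (made-up)
baseline = (45_000, 2.4, 5)
result = dominates(evolved, baseline)
```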

  2. Digital filter algorithm study and simulation of SSRF feedback system

    International Nuclear Information System (INIS)

    Han Lifeng; Yuan Renxian; Ye Kairong

    2008-01-01

    Least-squares fitting was used to design an FIR filter for the transverse feedback system of the Shanghai Synchrotron Radiation Facility (SSRF). The algorithm allowed us to set an appropriate gain and phase at special frequency points, which reduced the power needed for damping the beam oscillations, as verified by System View signal simulation. With AT (Accelerator Tool) simulation, the gain calculation and settings for the output signals of the FIR filter were deduced, and the relationship between the kicker power and the system damping time was also given. (authors)
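
A least-squares FIR design in this spirit can be sketched by solving min ||Fh - d||^2, where each row of F evaluates a linear-phase filter's amplitude response at one grid frequency and d is the desired response; gain/phase targets at special frequency points, as in the paper, would enter as extra weighted rows. The filter length, frequency grid, and low-pass target below are made-up:

```python
# Least-squares design of a symmetric (linear-phase) FIR filter.
import numpy as np

n_taps = 9                       # odd length, symmetric impulse response
half = n_taps // 2
omega = np.linspace(0, np.pi, 128)
desired = (omega <= np.pi / 2).astype(float)    # toy low-pass target

# amplitude of a symmetric FIR: A(w) = c0 + 2 * sum_k c_k cos(k w)
F = np.ones((omega.size, half + 1))
for k in range(1, half + 1):
    F[:, k] = 2 * np.cos(k * omega)

c, *_ = np.linalg.lstsq(F, desired, rcond=None)
h = np.concatenate([c[::-1], c[1:]])    # full symmetric impulse response
```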

  3. Optimization of signal processing algorithm for digital beam position monitor

    International Nuclear Information System (INIS)

    Lai Longwei; Yi Xing; Leng Yongbin; Yan Yingbing; Chen Zhichu

    2013-01-01

    Based on turn-by-turn (TBT) signal processing, the paper emphasizes the optimization of system timing and the implementation of the digital automatic gain control and slow application (SA) modules. Beam position data including TBT, fast application (FA) and SA data can be acquired. On-line evaluation at the Shanghai Synchrotron Radiation Facility (SSRF) shows that the processor is able to obtain multi-rate position data which contain true beam movements. With the storage ring at 174 mA and 500 bunches filled, the resolutions of the TBT, FA and SA data reach 0.84, 0.44 and 0.23 μm, respectively. These results prove that the design meets the performance requirements. (authors)
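
The position computation underlying such button-BPM processing is commonly the delta-over-sum of opposing pickup amplitudes; a toy version (the calibration factor k is geometry-dependent, and its value here is made-up):

```python
# Delta-over-sum beam position estimate from two opposing button signals.

def bpm_position(a, b, k=10.0):
    """Position (in mm) from opposing button amplitudes; k is a made-up scale."""
    return k * (a - b) / (a + b)

x = bpm_position(1.05, 0.95)
```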

  4. Object tracking system using a VSW algorithm based on color and point features

    Directory of Open Access Journals (Sweden)

    Lim Hye-Youn

    2011-01-01

    Full Text Available Abstract An object tracking system using a variable search window (VSW) algorithm based on colour and feature points is proposed. The mean-shift algorithm is an object tracking technique that works with colour probability distributions; an advantage of this colour-based algorithm is that it is robust for objects of a specific colour, but a disadvantage is that it is sensitive to non-specific colours due to illumination and noise. To offset this weakness, we present the VSW algorithm based on robust feature points for the accurate tracking of moving objects. The proposed method extracts the feature points of a detected object, which is the region of interest (ROI), and generates a VSW from the positions of the extracted feature points. The goal of this paper is an efficient and effective object tracking system that achieves accurate tracking of moving objects. Experiments show that the implemented object tracking system performs more precisely than existing techniques.
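
The variable search window itself can be sketched as the bounding box of the tracked feature points, grown by a margin and clipped to the frame (the points, margin, and frame size below are made-up):

```python
# Variable search window from feature-point positions.

def variable_search_window(points, margin, frame_w, frame_h):
    """Bounding box (x0, y0, x1, y1) of the points, expanded and clipped."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0 = max(0, min(xs) - margin)
    y0 = max(0, min(ys) - margin)
    x1 = min(frame_w - 1, max(xs) + margin)
    y1 = min(frame_h - 1, max(ys) + margin)
    return x0, y0, x1, y1

points = [(40, 30), (55, 42), (48, 38)]   # made-up feature-point positions
window = variable_search_window(points, margin=10, frame_w=320, frame_h=240)
```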

  5. Energy-Efficient Scheduling Problem Using an Effective Hybrid Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Lvjiang Yin

    2016-12-01

    Full Text Available Nowadays, manufacturing enterprises face the challenges of just-in-time (JIT) production and energy saving. Study of JIT production and energy consumption is therefore necessary and important in the manufacturing sector. Energy savings can be attained by operational methods and by turning idle machines off and on, which also increases the complexity of the problem. Thus, most researchers still focus on small-scale problems with one objective in a single machine environment. In real applications, however, the scheduling problem is a multi-objective optimization problem. In this paper, a single machine scheduling model with controllable processing times and sequence-dependent setup times is developed to minimize total earliness/tardiness (E/T), cost, and energy consumption simultaneously. An effective multi-objective evolutionary algorithm called the local multi-objective evolutionary algorithm (LMOEA) is presented to tackle this multi-objective scheduling problem. To accommodate the characteristics of the problem, a new solution representation is proposed, which converts the discrete combinatorial problem into a continuous one. Additionally, a multiple local search strategy with a self-adaptive mechanism is introduced into the proposed algorithm to enhance its exploitation ability. The performance of the proposed algorithm is evaluated on problem instances in comparison with other multi-objective meta-heuristics such as the Nondominated Sorting Genetic Algorithm II (NSGA-II), Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multi-objective Particle Swarm Optimization (OMOPSO), and Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D). Experimental results demonstrate that the proposed LMOEA outperforms its counterparts on this kind of scheduling problem.
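The abstract's representation converts a continuous vector into a discrete schedule. A common way to do this, assumed here for illustration, is random-key decoding, paired with a minimal single-machine earliness/tardiness evaluation; both function names are hypothetical.

```python
import numpy as np

def decode_random_key(x):
    """Decode a continuous vector into a job sequence by ranking its entries.

    Jobs are ordered by ascending key value, so any real-valued vector
    produced by a continuous evolutionary operator yields a valid permutation.
    """
    return [int(j) for j in np.argsort(x, kind="stable")]

def earliness_tardiness(sequence, proc, due):
    """Total earliness/tardiness of a job sequence on a single machine."""
    t, cost = 0.0, 0.0
    for j in sequence:
        t += proc[j]          # completion time of job j
        cost += abs(due[j] - t)
    return cost
```

With this trick, crossover and mutation never produce infeasible schedules, which is the practical appeal of a continuous encoding for a combinatorial problem.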

  6. Multi-objective optimization of horizontal axis tidal current turbines using meta-heuristic algorithms

    International Nuclear Information System (INIS)

    Tahani, Mojtaba; Babayan, Narek; Astaraei, Fatemeh Razi; Moghadam, Ali

    2015-01-01

    Highlights: • The performance of four different meta-heuristic optimization algorithms was studied. • Power coefficient and torque produced on a stationary blade were selected as objective functions. • Chord and twist distributions were selected as decision variables. • All optimization algorithms were combined with blade element momentum theory. • The best Pareto front for HATCTs was obtained by the multi-objective flower pollination algorithm. - Abstract: The performance of horizontal axis tidal current turbines (HATCT) strongly depends on their geometry, so optimum performance is achieved with an optimized geometry. In this research study, multi-objective optimization of the HATCT is carried out using four different multi-objective optimization algorithms, whose performance is evaluated in combination with blade element momentum (BEM) theory. The selected algorithms are the second version of the non-dominated sorting genetic algorithm (NSGA-II), multi-objective particle swarm optimization (MOPSO), multi-objective cuckoo search (MOCS) and the multi-objective flower pollination algorithm (MOFPA). The power coefficient and the torque produced on a stationary blade are selected as objective functions, and the chord and twist distributions along the blade span are selected as decision variables. The algorithms are combined with BEM theory to obtain the best Pareto front, and the resulting Pareto fronts are compared with each other. Different sets of experiments are carried out for different numbers of iterations, population sizes and tip speed ratios. The Pareto fronts achieved by MOFPA and NSGA-II have better quality than those of MOCS and MOPSO, but a detailed comparison of the first fronts of MOFPA and NSGA-II indicates that MOFPA obtains the best Pareto front and can maximize the power coefficient up to 4.3% and the
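All four optimizers above are judged by the quality of their Pareto fronts. The nondominated filter at the heart of such comparisons can be sketched as follows (maximization of both objectives assumed, matching the power-coefficient and torque objectives):

```python
def pareto_front(points):
    """Return the nondominated subset of `points`, maximizing every objective.

    A point p is dominated if some other point q is at least as good in
    every objective and strictly better in at least one (here: q != p).
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

This brute-force filter is O(n^2); NSGA-II and MOFPA maintain the same set incrementally during the evolutionary run.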

  7. Measurement of curvature and twist of a deformed object using digital holography

    International Nuclear Information System (INIS)

    Chen Wen; Quan Chenggen; Cho Jui Tay

    2008-01-01

    Measurement of curvature and twist is an important aspect in the study of object deformation. In recent years, several methods have been proposed to determine curvature and twist of a deformed object using digital shearography. Here we propose a novel method to determine the curvature and twist of a deformed object using digital holography and a complex phasor. A sine/cosine transformation method and two-dimensional short time Fourier transform are proposed subsequently to process the wrapped phase maps. It is shown that high-quality phase maps corresponding to curvature and twist can be obtained. An experiment is conducted to demonstrate the validity of the proposed method
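The sine/cosine transformation mentioned above avoids smoothing across 2π wrap-arounds in the phase map. A minimal numpy sketch, with a simple box filter standing in for the paper's windowed Fourier processing:

```python
import numpy as np

def filter_wrapped_phase(phi, k=3):
    """Denoise a wrapped phase map by smoothing sin/cos, then recombining.

    Filtering sin(phi) and cos(phi) separately keeps the 2*pi discontinuities
    of the wrapped map from being blurred into spurious phase values.
    """
    pad = k // 2

    def box(a):
        # k x k box filter with edge padding
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a, dtype=float)
        for i in range(k):
            for j in range(k):
                out += ap[i:i + a.shape[0], j:j + a.shape[1]]
        return out / (k * k)

    return np.arctan2(box(np.sin(phi)), box(np.cos(phi)))
```

Averaging the raw phase directly would pull values near +π and -π toward zero; working on the complex phasor's components preserves them.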

  8. Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)

    2010-06-15

    In the present paper, a plate and frame heat exchanger is considered. Multi-objective optimization using a genetic algorithm is developed in order to obtain a set of geometric design parameters which lead to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions conflict, and no single solution can satisfy both simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between the objectives and can be selected by the user depending on the application and the project's limits. The presented work handles numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via the genetic algorithm have been performed in MATLAB. (orig.)

  9. Multi-objective random search algorithm for simultaneously optimizing wind farm layout and number of turbines

    DEFF Research Database (Denmark)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-01-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximi...

  10. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography

    International Nuclear Information System (INIS)

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-01-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications because of its material or tissue discrimination capability. The main difficulty of this technique is the materials overlapping problem: when there are two or more materials along the X-ray beam path, material discrimination performance suffers. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and final material recognition. The key to the algorithm is a dual-energy radiograph database of both pure basis materials and pair combinations of them. Using the pre-classification results, original dual-energy projections of overlapped materials are dynamically decomposed into two sets of dual-energy radiographs, one for each pure material. Thus, more accurate discrimination results can be provided even in the presence of the overlapping problem. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented. - Highlights: • A material discrimination algorithm for dual MV energy X-ray digital radiography is proposed. • It addresses the materials overlapping problem of current dual-energy algorithms. • Experimental results with the 4/7 MV container inspection system are shown.
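For illustration, the α value behind the pre-classification step can be taken as the ratio of log-attenuations at the two energies; the paper's exact definition and class boundaries may differ, and the thresholds below are hypothetical.

```python
import math

def alpha(i_low, i_high, i0_low=1.0, i0_high=1.0):
    """Ratio of log-attenuations at the two beam energies.

    alpha varies with the effective atomic number of the material along
    the beam path, which is what the pre-classification step exploits.
    """
    return math.log(i0_low / i_low) / math.log(i0_high / i_high)

def classify(a, thresholds=((1.1, "organic"), (1.4, "light metal"))):
    """Map an alpha value to a coarse material class via assumed thresholds."""
    for t, label in thresholds:
        if a < t:
            return label
    return "heavy metal"
```

With a single material in the path, α falls on that material's curve; overlapped materials produce intermediate values, which is why the paper follows pre-classification with a decomposition step.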

  11. Study of time-domain digital pulse shaping algorithms for nuclear signals

    International Nuclear Information System (INIS)

    Zhou Jianbin; Tuo Xianguo; Zhu Xing; Liu Yi; Zhou Wei; Lei Jiarong

    2012-01-01

    With the development of high-speed integrated circuits, fast high-resolution sampling ADCs and digital signal processors are replacing analog shaping amplifier circuits. This paper first presents numerical analysis and simulation of the R-C and C-R shaping circuit models. Mathematical models are established in the time domain based on the 1st-order digital differential method and Kirchhoff's current law, and a simulation and error-evaluation experiment on an ideal digital signal is carried out with Excel VBA. A real-time digital shaping test for a semiconductor X-ray detector is also presented. A numerical analysis of the Sallen-Key (S-K) low-pass filter circuit model is then implemented based on the analysis of the digital R-C and C-R shaping methods. By applying the 2nd-order non-homogeneous differential equation, the authors implement a digital Gaussian filter model for a standard exponentially decaying signal and a nuclear pulse signal. Finally, computer simulations and experimental tests are carried out, and the results demonstrate the feasibility of the digital pulse processing algorithms. (authors)
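Once discretized, the first-order R-C (low-pass) and C-R (high-pass) models described above reduce to one-line recursions; this is a generic sketch of such digital shapers, not the authors' exact discretization.

```python
def rc_lowpass(x, dt, tau):
    """Digital R-C integrator: y' = (x - y)/tau, forward-Euler discretized."""
    a = dt / (tau + dt)
    y, out = 0.0, []
    for xi in x:
        y += a * (xi - y)
        out.append(y)
    return out

def cr_highpass(x, dt, tau):
    """Digital C-R differentiator: removes the slow baseline of a pulse."""
    a = tau / (tau + dt)
    y, prev, out = 0.0, x[0], []
    for xi in x:
        y = a * (y + xi - prev)
        prev = xi
        out.append(y)
    return out
```

Cascading a C-R stage with several R-C stages is the classic CR-(RC)^n approximation to the Gaussian shaping the abstract builds with the Sallen-Key model.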

  12. Irrigation water allocation optimization using multi-objective evolutionary algorithm (MOEA) - a review

    Science.gov (United States)

    Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian

    2018-03-01

    This paper analyzes more than 40 papers applying the Multi-Objective Genetic Algorithm, the Non-Dominated Sorting Genetic Algorithm-II and Multi-Objective Differential Evolution (MODE) to multi-objective problems in agricultural water management. The paper focuses on different application aspects, including water allocation, irrigation planning, crop patterns and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze multi-objective problems; its application is significant because it is a simpler and more powerful technique than many other evolutionary algorithms. The paper concludes with a promising new trend of research that demands effective use of MODE: inclusion of benefits derived from farm byproducts and of production costs in the model.
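The MODE variants surveyed above share the classic differential evolution trial-vector construction; a DE/rand/1/bin sketch (individual surveyed papers may use other variants):

```python
import numpy as np

def de_trial(pop, i, f=0.5, cr=0.9, rng=None):
    """Build one DE/rand/1/bin trial vector for target individual i.

    Three distinct donors a, b, c (all different from i) are combined into
    a mutant, then binomial crossover mixes the mutant with pop[i].
    """
    rng = rng if rng is not None else np.random.default_rng()
    n, d = pop.shape
    a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    mutant = pop[a] + f * (pop[b] - pop[c])
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True  # guarantee at least one mutant gene
    return np.where(mask, mutant, pop[i])
```

In the multi-objective setting, the trial vector replaces the target only if it is nondominated with respect to it, rather than by a single-objective comparison.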

  13. A novel algorithm for fast grasping of unknown objects using C-shape configuration

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Meijer, Jonathan; Wisse, Martijn

    2018-02-01

    Increasing grasping efficiency is very important for robots grasping unknown objects, especially in unfamiliar environments. To achieve this, a new algorithm is proposed based on a C-shape configuration. Specifically, the geometric model of the under-actuated gripper used is approximated by a C-shape. To obtain an appropriate graspable position, this C-shape configuration is fitted to the geometric model of an unknown object, which is constructed from a single-view partial point cloud. To examine the algorithm in simulation, a comparison of commonly used motion planners is made, and the planner with the highest number of solved runs, lowest computing time and shortest path length is chosen to execute the grasps found by the grasping algorithm. The simulation results demonstrate that excellent grasping efficiency is achieved by the proposed algorithm. To validate the algorithm, experiments are carried out using a UR5 robot arm and an under-actuated gripper, and the results show that steady grasping actions are obtained. Hence, this research provides a novel algorithm for fast grasping of unknown objects.

  14. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.
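The TV-regularized reconstruction model can be illustrated on the simpler denoising subproblem: gradient descent on 0.5*||u - f||^2 + lam*TV(u) with a smoothed TV term. This is a didactic sketch, not the paper's fixed-point, preconditioned solver, and it uses periodic/edge boundary handling for brevity.

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.1, n_iter=200, eps=1e-6):
    """Gradient descent on 0.5*||u-f||^2 + lam*TV_eps(u).

    TV is smoothed as sum(sqrt(ux^2 + uy^2 + eps)) so the gradient
    (a curvature-like divergence term) is defined everywhere.
    """
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag                 # normalized gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)           # fidelity + TV descent
    return u
```

The edge-preserving behavior comes from the normalization by `mag`: large jumps are penalized by their height, not its square, so sharp boundaries survive while oscillatory noise is flattened.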

  15. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    Science.gov (United States)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
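For a known template in white Gaussian noise, the binary hypothesis test described above reduces to thresholding a correlation statistic at a level set by the desired false-alarm probability. A hedged sketch in a generic domain (the paper applies its test per pixel to Fourier-transformed images, which this simplification does not reproduce):

```python
import math
import numpy as np

def lrt_detect(data, template, sigma, p_fa=1e-3):
    """Matched-filter likelihood-ratio test for a known template in white
    Gaussian noise of standard deviation sigma.

    Under H0 the statistic <data, template> is N(0, sigma^2 * ||s||^2),
    so the threshold is the Gaussian tail quantile for the chosen P_FA.
    """
    norm = float(np.sqrt((template ** 2).sum()))
    # invert Q(z) = p_fa by bisection (keeps the sketch dependency-free)
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        q = 0.5 * (1 - math.erf(mid / math.sqrt(2)))
        if q > p_fa:
            lo = mid
        else:
            hi = mid
    thresh = hi * sigma * norm
    stat = float((data * template).sum())
    return stat > thresh, stat, thresh
```

Raising `p_fa` lowers the threshold and trades false alarms for sensitivity to dim objects, which is exactly the ROC trade-off the paper evaluates.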

  16. An Improved Multi-Objective Artificial Bee Colony Optimization Algorithm with Regulation Operators

    Directory of Open Access Journals (Sweden)

    Jiuyuan Huo

    2017-02-01

    Full Text Available To achieve effective and accurate optimization for multi-objective optimization problems, a multi-objective artificial bee colony algorithm with regulation operators (RMOABC), inspired by the intelligent foraging behavior of honey bees, is proposed in this paper. The proposed algorithm utilizes Pareto dominance theory and takes advantage of adaptive grid and regulation operator mechanisms. The adaptive grid technique is used to adaptively assess the Pareto front maintained in an external archive, and the regulation operator is used to balance the weights of local search and global search as the algorithm evolves. The performance of RMOABC was evaluated in comparison with other nature-inspired algorithms, including NSGA-II and MOEA/D. The experimental results demonstrate that the RMOABC approach has better accuracy and minimal execution time.

  17. Dynamic population artificial bee colony algorithm for multi-objective optimal power flow

    Directory of Open Access Journals (Sweden)

    Man Ding

    2017-03-01

    Full Text Available This paper proposes a novel artificial bee colony algorithm with dynamic population (ABC-DP), which synergizes the idea of an extended life-cycle evolving model to balance the exploration and exploitation tradeoff. ABC-DP is a more realistic bee-colony model in which bees reproduce and die dynamically throughout the foraging process, so the population size varies as the algorithm runs. ABC-DP is then used to solve the optimal power flow (OPF) problem in power systems, with cost, loss, and emission impacts as the objective functions. The 30-bus IEEE test system is presented to illustrate the application of the proposed algorithm. The simulation results, which are compared with those of the nondominated sorting genetic algorithm II (NSGA-II) and multi-objective ABC (MOABC), illustrate the effectiveness and robustness of the proposed method.

  18. An algorithm of local earthquake detection from digital records

    Directory of Open Access Journals (Sweden)

    A. PROZOROV

    1978-06-01

    Full Text Available The problem of automatic detection of earthquake signals in seismograms and definition of the first arrivals of P and S waves is considered. The algorithm is based on the analysis of the t(A) function, which represents the time of first appearance of a number of successive swings of amplitude greater than A in the seismic signal. It exploits common features of earthquake seismograms: a sudden first P arrival with amplitude greater than the general noise amplitude, followed after a definite interval of time by an S arrival whose amplitude exceeds that of the P arrival. The method was applied to 3-channel records of Friuli aftershocks. S arrivals were defined correctly in all cases; P arrivals were defined in most cases using strict detection criteria, and no false signals were detected. All P arrivals were defined using soft detection criteria, but with less reliability, and two false events were obtained.
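The t(A) function described above can be sketched as a threshold-and-run-length scan; in this simplification, "swings" are reduced to consecutive samples exceeding the amplitude A, rather than true oscillation half-cycles.

```python
def t_of_a(signal, a, n_swings=3):
    """Index of the first run of `n_swings` consecutive samples whose
    amplitude exceeds `a`; None if the threshold is never sustained.

    Scanning this over a range of thresholds gives a t(A)-like function:
    an impulsive P arrival produces an abrupt drop in first-appearance
    time as A falls below the arrival's amplitude.
    """
    run = 0
    for i, x in enumerate(signal):
        run = run + 1 if abs(x) > a else 0
        if run == n_swings:
            return i - n_swings + 1
    return None
```

Requiring several successive exceedances is what makes the detector robust to isolated noise spikes that would fool a single-sample threshold.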

  19. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid-connected architectures provide a natural mapping for grid-based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This work extends earlier work that demonstrated real-time performance of graphics functions on grids equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory, and windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for the arithmetic aspects of the graphics functions. Performance figures are given for routines written in MPP Pascal.

  20. EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.

    Science.gov (United States)

    Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos

    2015-01-01

    Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.

  1. Sub-OBB based object recognition and localization algorithm using range images

    International Nuclear Information System (INIS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-01-01

    This paper presents a novel approach to recognizing and estimating the pose of 3D objects in cluttered range images. The key technical breakthrough of the developed approach is robust object recognition and localization under undesirable conditions such as environmental illumination variation and optical occlusion that leaves the object only partially visible. First, the acquired point clouds are segmented into individual object point clouds using a 3D object segmentation method developed for randomly stacked objects. Second, an efficient shape-matching algorithm called Sub-OBB based object recognition, using the proposed oriented bounding box (OBB) regional area-based descriptor, is performed to reliably recognize the object. The 3D position and orientation of the object are then roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To obtain an accurate pose of the object, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. Feasibility tests across several scenarios verify that the developed approach is suitable for object pose recognition and localization. (paper)
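A common way to build the OBB used in the alignment step is PCA of the segmented point cloud; the paper's OBB and its regional-area descriptor may be constructed differently, so this is only an illustrative baseline.

```python
import numpy as np

def oriented_bounding_box(points):
    """OBB of a point cloud via PCA: returns (center, axes, half_extents).

    `axes` holds the principal directions as rows, largest variance first;
    the box is axis-aligned in that local frame.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    _, vecs = np.linalg.eigh(cov)        # eigenvalues ascending
    axes = vecs.T[::-1]                  # rows, principal axis first
    local = (pts - mean) @ axes.T        # coordinates in the OBB frame
    half = (local.max(axis=0) - local.min(axis=0)) / 2.0
    mid = (local.max(axis=0) + local.min(axis=0)) / 2.0
    return mean + mid @ axes, axes, half
```

Aligning the two OBB frames gives exactly the kind of rough pose the abstract describes, which ICP then refines.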

  2. Low emittance lattice optimization using a multi-objective evolutionary algorithm

    International Nuclear Information System (INIS)

    Gao Weiwei; Wang Lin; Li Weimin; He Duohui

    2011-01-01

    A low emittance lattice design and optimization procedure are systematically studied with a non-dominated sorting-based multi-objective evolutionary algorithm which not only globally searches the low emittance lattice, but also optimizes some beam quantities such as betatron tunes, momentum compaction factor and dispersion function simultaneously. In this paper the detailed algorithm and lattice design procedure are presented. The Hefei light source upgrade project storage ring lattice, with fixed magnet layout, is designed to illustrate this optimization procedure. (authors)

  3. Migrating Home Computer Audio Waveforms to Digital Objects: A Case Study on Digital Archaeology

    Directory of Open Access Journals (Sweden)

    Mark Guttenbrunner

    2011-03-01

    Full Text Available Rescuing data from inaccessible or damaged storage media in order to preserve it for the long term is one of the dimensions of digital archaeology. At the current pace of technological development, any system can become obsolete in a matter of years, and the data stored on a specific storage medium may become inaccessible because no system is available to read the medium. To preserve digital records residing on such media, the data must be extracted by some means. One early storage medium for home computers in the 1980s was audio tape: the first home computer systems allowed standard cassette players to record and replay data. Audio cassettes are more durable than old home computers when properly stored, and devices that play this medium (i.e. tape recorders) can be found in working condition or repaired, as they are usually made of standard components. By re-engineering the waveform and file formats, the data on such media can be extracted from a digitized audio stream and migrated to a non-obsolete format. In this paper we present a case study on extracting the data stored on audio tape by an early home computer system, the Philips Videopac+ G7400. The original data formats were re-engineered and an application was written to support the migration of data stored on tape without using the original system, eliminating the need to keep an obsolete system alive to access data on media meant for that system. Two different methods to interpret the data and eliminate possible errors on the tape were implemented and evaluated on original tapes recorded 20 years ago. Results show that with some error-correction methods, parts of the tapes are still readable even without the original system. It also implies that it is easier to build solutions while original

  4. COMPARATIVE ANALYSIS OF APPLICATION EFFICIENCY OF ORTHOGONAL TRANSFORMATIONS IN FREQUENCY ALGORITHMS FOR DIGITAL IMAGE WATERMARKING

    Directory of Open Access Journals (Sweden)

    Vladimir A. Batura

    2014-11-01

    Full Text Available The efficiency of applying orthogonal transformations in frequency-domain algorithms for digital watermarking of still images is examined. The discrete Hadamard transform, discrete cosine transform and discrete Haar transform are selected. Their effectiveness is determined by the invisibility of the watermark embedded in the digital image and by its resistance to the most common image processing operations: JPEG compression, noising, changes of brightness and image size, and histogram equalization. The digital watermarking algorithm and its embedding parameters remain unchanged across these orthogonal transformations. Imperceptibility of embedding is measured by the peak signal-to-noise ratio, and watermark stability by Pearson's correlation coefficient. Embedding is considered invisible if the peak signal-to-noise ratio is at least 43 dB, and an embedded watermark is considered resistant to a specific attack if the Pearson correlation coefficient is at least 0.5. The Elham algorithm, based on image entropy, is chosen for the computing experiment, which proceeds as follows: a digital watermark is embedded in the low-frequency area of the image (container) by the Elham algorithm, the protected image (cover image) is exposed to a harmful influence, and the digital watermark is then extracted. These actions are followed by a quality assessment of the cover image and watermark, on the basis of which the efficiency of each orthogonal transformation is determined. The computing experiment showed that the choice among the specified orthogonal transformations, with identical algorithm and embedding parameters, does not influence the degree of imperceptibility of the watermark. The efficiency of the discrete Hadamard transform and the discrete cosine transform against the attacks chosen for the experiment was established from the correlation indicators. Application of discrete Hadamard transform increases
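The two quality measures the experiment relies on are straightforward to compute; a sketch using the paper's criteria (43 dB for invisibility, 0.5 for robustness) as reference points:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB; the paper treats >= 43 dB as an
    invisible embedding."""
    diff = np.asarray(original, float) - np.asarray(distorted, float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak**2 / mse))

def pearson(a, b):
    """Pearson correlation between embedded and extracted watermarks;
    the paper treats >= 0.5 as resistance to a given attack."""
    a = np.asarray(a, float).ravel() - np.asarray(a, float).mean()
    b = np.asarray(b, float).ravel() - np.asarray(b, float).mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

PSNR compares the cover image before and after embedding, while the correlation compares watermark bits before embedding and after the attack, so the two metrics probe independent failure modes.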

  5. Distribution Network Expansion Planning Based on Multi-objective PSO Algorithm

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Wu, Qiuwei

    2013-01-01

    This paper presents a novel approach for electrical distribution network expansion planning using multi-objective particle swarm optimization (PSO). The optimization objectives are: investment and operation cost, energy losses cost, and power congestion cost. A two-phase multi-objective PSO...... algorithm was proposed to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of Pareto-optimal front set as well. The feasibility and effectiveness of both the proposed multi-objective planning approach and the improved multi-objective PSO have been verified...

  6. The Sloan Digital Sky Survey-II Supernova Survey: search algorithm and follow-up observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Bassett, Bruce [Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701 (South Africa); Becker, Andrew; Hogan, Craig J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Cinabro, David [Department of Physics, Wayne State University, Detroit, MI 48202 (United States); DeJongh, Fritz; Frieman, Joshua A.; Marriner, John; Miknaitis, Gajus [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Depoy, D. L.; Prieto, Jose Luis [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210-1173 (United States); Dilday, Ben; Kessler, Richard [Kavli Institute for Cosmological Physics, The University of Chicago, 5640 South Ellis Avenue Chicago, IL 60637 (United States); Doi, Mamoru [Institute of Astronomy, Graduate School of Science, University of Tokyo 2-21-1, Osawa, Mitaka, Tokyo 181-0015 (Japan); Garnavich, Peter M. [University of Notre Dame, 225 Nieuwland Science, Notre Dame, IN 46556-5670 (United States); Holtzman, Jon [Department of Astronomy, MSC 4500, New Mexico State University, P.O. Box 30001, Las Cruces, NM 88003 (United States); Jha, Saurabh [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, P.O. Box 20450, MS29, Stanford, CA 94309 (United States); Konishi, Kohki [Institute for Cosmic Ray Research, University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa, Chiba, 277-8582 (Japan); Lampeitl, Hubert [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Nichol, Robert C. [Institute of Cosmology and Gravitation, Mercantile House, Hampshire Terrace, University of Portsmouth, Portsmouth PO1 2EG (United Kingdom); and others

    2008-01-01

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg{sup 2} region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the type Ia SNe, the main driver for the survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  7. The method for objective evaluation of the intensity of radial bone lesions in rheumatoid arthritis using digital image analysis

    International Nuclear Information System (INIS)

    Zielinski, K.W.; Krekora, K.

    2004-01-01

The semiquantitative methods used in everyday diagnostic practice for scoring the intensity of bone lesions in rheumatoid arthritis are susceptible to subjective error. The paper describes an original image analysis algorithm as a method for the quantitative and objective evaluation of the intensity of radiological lesions in rheumatoid arthritis. Seventy-five plain radiograms of the hands of patients diagnosed with rheumatoid arthritis, in various stages of bone pathology, were evaluated. The analysis focused on the signs of pathological remodelling of the affected bone, especially in the distal epiphysis of the radial bone. The plain radiograms of the hand were digitally analysed using a modified version of a method formerly used for the quantitative assessment of bone trabeculation. The method allowed us to objectively verify various scoring systems for radiograms widely used in rheumatological diagnosis. (author)

  8. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

This book describes how evolutionary algorithms (EAs), including genetic algorithms (GAs) and particle swarm optimization (PSO), can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and shows how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation and operators like crossover, mutation, etc., can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  9. Multi-objective optimal design of sandwich panels using a genetic algorithm

    Science.gov (United States)

    Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
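The non-dominated sorting step at the core of NSGA-II can be sketched compactly for a bi-objective problem like this one. The candidate objective values below are illustrative, not taken from the paper; sound insulation is negated so that both objectives are minimized:

```python
# Sketch of the non-dominated sorting used by NSGA-II for a bi-objective
# problem: minimize panel mass and minimize negated sound insulation.
# The candidate values are hypothetical, for illustration only.

def dominates(a, b):
    """True if solution a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts (front 0 = best)."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (mass [kg], -insulation [dB]) for six hypothetical candidate panels
candidates = [(2.0, -30.0), (2.5, -35.0), (3.0, -33.0),
              (2.2, -28.0), (2.8, -36.0), (3.5, -34.0)]
fronts = non_dominated_sort(candidates)
```

NSGA-II then ranks the population by front and uses crowding distance within each front; only the sorting criterion is shown here.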

  10. Implementation of an algorithm for cylindrical object identification using range data

    Science.gov (United States)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
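The arc-location step described above (solving the normal equations of an over-determined linear system) can be sketched with an algebraic circle fit: each point (x, y) on the arc gives one linear equation 2a·x + 2b·y + c = x² + y² in the unknowns (a, b, c), where (a, b) is the circle centre and c = r² − a² − b². The fitting routine and test data below are assumptions for illustration, not the authors' implementation:

```python
import math

# Algebraic circle fit: stack one linear equation per arc point and solve the
# resulting over-determined system through its normal equations A^T A u = A^T d.
def fit_circle(points):
    ata = [[0.0] * 3 for _ in range(3)]
    atd = [0.0] * 3
    for x, y in points:
        row = (2 * x, 2 * y, 1.0)
        d = x * x + y * y
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atd[i] += row[i] * d
    # Solve the 3x3 normal equations by Gauss-Jordan elimination with pivoting.
    m = [ata[i] + [atd[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    radius = (c + a * a + b * b) ** 0.5
    return (a, b), radius

# Noise-free arc of a circle centred at (1, 2) with radius 5 (synthetic data).
arc = [(1 + 5 * math.cos(t), 2 + 5 * math.sin(t)) for t in
       [0.1, 0.3, 0.5, 0.7, 0.9]]
centre, r = fit_circle(arc)
```

On noisy range data the same least-squares machinery applies; only the residuals grow.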

  11. Reconstruction of a uniform star object from interior x-ray data: uniqueness, stability and algorithm

    International Nuclear Information System (INIS)

    Van Gompel, Gert; Batenburg, K Joost; Defrise, Michel

    2009-01-01

In this paper we consider the problem of reconstructing a two-dimensional star-shaped object of uniform density from truncated projections of the object. In particular, we prove that such an object is uniquely determined by its parallel projections sampled over a full π angular range with a detector that covers only an interior field of view, even if the density of the object is not known a priori. We analyze the stability of this reconstruction problem and propose a reconstruction algorithm. Simulation experiments demonstrate that the algorithm is capable of reconstructing a star-shaped object from interior data, even if the interior region is much smaller than the size of the object. In addition, we present results for a heuristic reconstruction algorithm called DART, which was recently proposed. The heuristic method is shown to yield accurate reconstructions if the density is known in advance, and to have very good stability in the presence of noisy projection data. Finally, the performance of the DBP and DART algorithms is illustrated for the reconstruction of real micro-CT data of a diamond.

  12. A new hybrid genetic algorithm for optimizing the single and multivariate objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya Shankar [Idaho National Laboratory; McCulloch, Richard Chet James [Idaho National Laboratory

    2015-07-01

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest ascent hill climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent) computational resources are conserved and the solution converges rapidly when compared to either algorithm alone. In genetic algorithms natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest ascent algorithm each variable is perturbed by a small amount and the variable that caused the most improvement is incremented by a small step. If the direction of most benefit is exactly opposite of the previous direction with the most benefit then the step size is reduced by a factor of 2, thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features such as bounding the solution space and weighting the objective functions individually are also built into the interface. The algorithm developed was tested to optimize the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature) pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
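The adaptive steepest-ascent step the abstract describes, probing each variable, moving along the single most beneficial coordinate, and halving the step when the best direction exactly reverses the previous one, can be sketched as follows. The objective function and all settings are illustrative assumptions, not the authors' pelleting model:

```python
# Minimal sketch of an adaptive steepest-ascent hill climber: perturb each
# variable in both directions, take the most beneficial coordinate step, and
# halve the step size when the best direction flips (or nothing improves).

def adaptive_ascent(f, x, step=1.0, iters=200, min_step=1e-6):
    prev_move = None
    for _ in range(iters):
        if step < min_step:
            break
        base = f(x)
        best_gain, best_move = 0.0, None
        for i in range(len(x)):
            for sign in (+1, -1):
                trial = list(x)
                trial[i] += sign * step
                gain = f(trial) - base
                if gain > best_gain:
                    best_gain, best_move = gain, (i, sign)
        if best_move is None:
            step /= 2            # no coordinate improves: refine the step
            continue
        if prev_move is not None and best_move == (prev_move[0], -prev_move[1]):
            step /= 2            # direction flipped: the step overshoots the peak
        x[best_move[0]] += best_move[1] * step
        prev_move = best_move
    return x

# Maximize a toy concave objective with its peak at (3, -1).
peak = adaptive_ascent(lambda v: -((v[0] - 3) ** 2 + (v[1] + 1) ** 2), [0.0, 0.0])
```

In the hybrid scheme, a routine like this would refine the best individuals produced by the evolutionary search.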

  13. Digital learning object for diagnostic reasoning in nursing applied to the integumentary system

    Directory of Open Access Journals (Sweden)

    Cecília Passos Vaz da Costa

    Full Text Available Objective: To describe the creation of a digital learning object for diagnostic reasoning in nursing applied to the integumentary system at a public university of Piaui. Method: A methodological study applied to technological production based on the pedagogical framework of problem-based learning. The methodology for creating the learning object observed the stages of analysis, design, development, implementation and evaluation recommended for contextualized instructional design. The revised taxonomy of Bloom was used to list the educational goals. Results: The four modules of the developed learning object were inserted into the educational platform Moodle. The theoretical assumptions allowed the design of an important online resource that promotes effective learning in the scope of nursing education. Conclusion: This study should add value to nursing teaching practices through the use of digital learning objects for teaching diagnostic reasoning applied to skin and skin appendages.

  14. Watermarking Techniques Using Least Significant Bit Algorithm for Digital Image Security Standard Solution- Based Android

    Directory of Open Access Journals (Sweden)

    Ari Muzakir

    2017-05-01

Full Text Available The ease of distributing digital images over the internet has positive and negative sides, especially for the owners of original digital images. The positive side of easy, rapid distribution is that owners can deploy their digital image files to sites all over the world. The downside is that, if there is no copyright mark serving to protect an image, its ownership can very easily be claimed by other parties. Watermarking is one solution for protecting the copyright of a digital image. With digital image watermarking, the copyright of the resulting digital image is protected through the insertion of additional information such as owner information and evidence of the authenticity of the digital image. The least significant bit (LSB) algorithm is simple and easy to understand. The results of simulations carried out on an Android smartphone show that the LSB watermark cannot be seen by the naked human eye, meaning there is no significant difference between the original image file and the image into which the watermark has been inserted. The resulting image has dimensions of 640x480 with a bit depth of 32 bits. In addition, black box testing was used to determine the ability of the device (smartphone) to process images using this application.
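The LSB technique itself is easy to sketch: each watermark bit replaces the least significant bit of one pixel value, so no pixel changes by more than 1, which is why the mark is imperceptible. The pixel values below are hypothetical; a real image simply supplies a longer array per channel:

```python
# Illustrative least-significant-bit (LSB) watermark embedding and extraction.
# Each watermark bit replaces the LSB of one 8-bit pixel value.

def embed_lsb(pixels, bits):
    """Return a copy of `pixels` whose first len(bits) LSBs carry `bits`."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 201, 130, 97, 54, 255, 0, 16]   # hypothetical pixel values
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_lsb(cover, mark)
recovered = extract_lsb(stego, len(mark))
```

Because only the lowest bit is touched, the maximum per-pixel change is 1 grey level, consistent with the paper's observation that the watermark is invisible to the naked eye.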

  15. Multi-objective hierarchical genetic algorithms for multilevel redundancy allocation optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Ranjan [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: ranjan.k@ks3.ecs.kyoto-u.ac.jp; Izui, Kazuhiro [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: izui@prec.kyoto-u.ac.jp; Yoshimura, Masataka [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: yoshimura@prec.kyoto-u.ac.jp; Nishiwaki, Shinji [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: shinji@prec.kyoto-u.ac.jp

    2009-04-15

Multilevel redundancy allocation optimization problems (MRAOPs) occur frequently when attempting to maximize the system reliability of a hierarchical system, and almost all complex engineering systems are hierarchical. Despite their practical significance, limited research has been done on solving simple MRAOPs. These problems are not only NP-hard but also involve hierarchical design variables. Genetic algorithms (GAs) have been applied to solving MRAOPs, since they are computationally efficient for such problems, unlike exact methods, but their application has been confined to single-objective formulations of MRAOPs. This paper proposes a multi-objective formulation of MRAOPs and a methodology for solving such problems. In this methodology, a hierarchical GA framework for multi-objective optimization is proposed by introducing hierarchical genotype encoding for the design variables. In addition, we implement the proposed approach by integrating the hierarchical genotype encoding scheme with two popular multi-objective genetic algorithms (MOGAs): the strength Pareto evolutionary algorithm (SPEA2) and the non-dominated sorting genetic algorithm (NSGA-II). In the provided numerical examples, the proposed multi-objective hierarchical approach is applied to solve two hierarchical MRAOPs, a 4-level and a 3-level problem. The proposed method is compared with a single-objective optimization method that uses a hierarchical genetic algorithm (HGA), also applied to solve the 3- and 4-level problems. The results show that a multi-objective hierarchical GA (MOHGA) that includes elitism and a diversity-preserving mechanism performed better than a single-objective GA that uses only elitism when solving large-scale MRAOPs. Additionally, the experimental results show that the proposed method with NSGA-II outperformed the proposed method with SPEA2 in finding useful Pareto optimal solution sets.

  16. Multi-objective hierarchical genetic algorithms for multilevel redundancy allocation optimization

    International Nuclear Information System (INIS)

    Kumar, Ranjan; Izui, Kazuhiro; Yoshimura, Masataka; Nishiwaki, Shinji

    2009-01-01

Multilevel redundancy allocation optimization problems (MRAOPs) occur frequently when attempting to maximize the system reliability of a hierarchical system, and almost all complex engineering systems are hierarchical. Despite their practical significance, limited research has been done on solving simple MRAOPs. These problems are not only NP-hard but also involve hierarchical design variables. Genetic algorithms (GAs) have been applied to solving MRAOPs, since they are computationally efficient for such problems, unlike exact methods, but their application has been confined to single-objective formulations of MRAOPs. This paper proposes a multi-objective formulation of MRAOPs and a methodology for solving such problems. In this methodology, a hierarchical GA framework for multi-objective optimization is proposed by introducing hierarchical genotype encoding for the design variables. In addition, we implement the proposed approach by integrating the hierarchical genotype encoding scheme with two popular multi-objective genetic algorithms (MOGAs): the strength Pareto evolutionary algorithm (SPEA2) and the non-dominated sorting genetic algorithm (NSGA-II). In the provided numerical examples, the proposed multi-objective hierarchical approach is applied to solve two hierarchical MRAOPs, a 4-level and a 3-level problem. The proposed method is compared with a single-objective optimization method that uses a hierarchical genetic algorithm (HGA), also applied to solve the 3- and 4-level problems. The results show that a multi-objective hierarchical GA (MOHGA) that includes elitism and a diversity-preserving mechanism performed better than a single-objective GA that uses only elitism when solving large-scale MRAOPs. Additionally, the experimental results show that the proposed method with NSGA-II outperformed the proposed method with SPEA2 in finding useful Pareto optimal solution sets.
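The hierarchical genotype idea can be illustrated with a small sketch: a candidate design is a nested structure mirroring the system hierarchy, with each leaf holding a component reliability and its redundancy count. The reliability model below (sub-units in series, redundant copies in parallel) and all numbers are illustrative assumptions, not the paper's test problems:

```python
# Evaluate the system reliability of a hierarchical genotype in which
# internal nodes are lists of sub-units in series and leaves are tuples
# (component reliability, number of redundant parallel copies).

def system_reliability(node):
    if isinstance(node, tuple):            # leaf with parallel redundancy
        r, n = node
        return 1.0 - (1.0 - r) ** n
    rel = 1.0                              # internal node: series combination
    for child in node:
        rel *= system_reliability(child)
    return rel

# 3-level genotype: two subsystems in series, each with redundant leaves.
genotype = [[(0.9, 2), (0.8, 3)], [(0.95, 1)]]
rel = system_reliability(genotype)
```

A hierarchical GA mutates the redundancy counts (and, at higher levels, whole sub-trees) while an evaluator like this scores each genotype.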

  17. Multi-Objective Optimization of Hybrid Renewable Energy System Using an Enhanced Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Mengjun Ming

    2017-05-01

Full Text Available Due to the scarcity of conventional energy resources and the greenhouse effect, renewable energies have gained more attention. This paper proposes methods for the multi-objective optimal design of a hybrid renewable energy system (HRES) in both isolated-island and grid-connected modes. In each mode, the optimal design aims to find suitable configurations of photovoltaic (PV) panels, wind turbines, batteries, and diesel generators in the HRES such that the system cost and the fuel emissions are minimized, and the system reliability/renewable ability (corresponding to the different modes) is maximized. To effectively solve this multi-objective problem (MOP), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) using the localized penalty-based boundary intersection (LPBI) method is proposed. The algorithm, denoted MOEA/D-LPBI, is demonstrated to outperform its competitors on the HRES model as well as on a set of benchmarks. Moreover, it effectively obtains a good approximation of the Pareto optimal HRES configurations. By further considering a decision maker's preference, the most satisfactory configuration of the HRES can be identified.
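MOEA/D decomposes a multi-objective problem into scalar subproblems, one per weight vector. The sketch below shows the standard penalty-based boundary intersection (PBI) aggregation that the LPBI variant builds on (the localized penalty adaptation itself is omitted); θ, the weights, and the candidate objective values are illustrative assumptions:

```python
import math

# PBI aggregation: d1 is the distance along the weight direction from the
# ideal point z, d2 the perpendicular distance to that direction; the scalar
# subproblem minimizes g = d1 + theta * d2.
def pbi(f, w, z, theta=5.0):
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    diff = [fi - zi for fi, zi in zip(f, z)]
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / norm_w
    d2 = math.sqrt(sum((di - d1 * wi / norm_w) ** 2
                       for di, wi in zip(diff, w)))
    return d1 + theta * d2

# Two hypothetical designs scored on (cost, emission), both minimized.
z_ideal = (0.0, 0.0)
weights = [(1.0, 0.0), (0.5, 0.5)]       # one weight vector per subproblem
designs = {"A": (0.2, 0.8), "B": (0.6, 0.5)}
best_for_balanced = min(designs, key=lambda k: pbi(designs[k], weights[1], z_ideal))
```

For the balanced weight vector, PBI prefers the design closer to that search direction, which is how the decomposition spreads solutions along the Pareto front.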

  18. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

Full Text Available This paper proposes an algorithm to track obstacle positions and avoid moving objects for a differential-drive Automatic Guided Vehicle (AGV) system in an industrial environment. The algorithm has several abilities: to detect moving objects, to predict the velocity and direction of moving objects, to predict the possibility of collision, and to plan the avoidance maneuver. For sensing the local environment and for positioning, the laser measurement system LMS-151 and the laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of an obstacle are predicted using a Kalman filter algorithm. The collision possibility, time, and position can be calculated by comparing the AGV movement with the obstacle prediction obtained by the Kalman filter. Finally, the avoidance maneuver, using the well-known Tangent Bug algorithm, is decided based on the calculated data. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several experimental conditions are presented using stationary and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords: obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system]
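The prediction role the abstract assigns to the Kalman filter can be sketched with a minimal constant-velocity filter for one axis of an obstacle track. The motion model, noise settings, and measurement sequence below are illustrative assumptions, not the paper's parameters:

```python
# One predict/update cycle of a constant-velocity Kalman filter:
# state (position x, velocity v), 2x2 covariance P, position measurement z.

def kalman_step(x, v, P, z, dt=0.1, q=0.01, r=0.25):
    # Predict: x' = x + v*dt (constant velocity), P' = F P F^T + Q.
    x, v = x + v * dt, v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with a position-only measurement (H = [1, 0]).
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[1][0] / s
    innov = z - x
    x, v = x + k0 * innov, v + k1 * innov
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, v, P

# Track an object moving at 1 m/s, sampled every 0.1 s (noise-free readings).
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 51):
    x, v, P = kalman_step(x, v, P, z=0.1 * k)
```

The converged velocity estimate is what lets the avoidance planner extrapolate where the obstacle will be at the predicted collision time.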

  19. An Improved Artificial Bee Colony Algorithm and Its Application to Multi-Objective Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Xuanhu He

    2015-03-01

Full Text Available Optimal power flow (OPF) objective functions involve minimization of the total fuel costs of generating units, minimization of atmospheric pollutant emissions, minimization of active power losses, and minimization of voltage deviations. In this paper, a fuzzy multi-objective OPF model is established using fuzzy membership functions and the fuzzy satisfaction-maximizing method. The improved artificial bee colony (IABC) algorithm is applied to solve the model. In the IABC algorithm, the mutation and crossover operations of a differential evolution algorithm are utilized to generate new solutions and improve exploitation capacity; tent chaos mapping is utilized to generate the initial swarms, the reference mutation solutions, and the reference dimensions of the crossover operations, improving swarm diversity. The proposed method is applied to multi-objective OPF problems in the IEEE 30-bus, IEEE 57-bus, and IEEE 300-bus test systems. The results are compared with those obtained by other algorithms, which demonstrates the effectiveness and superiority of the IABC algorithm and shows how the optimal scheme obtained by the proposed model can make systems more economical and stable.
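The tent chaos mapping used for initialization is a simple deterministic iteration that covers (0, 1) more evenly than many pseudo-random draws. A minimal sketch, with an illustrative map parameter, seed, and decision-variable range (not the paper's settings):

```python
# Tent chaotic map: x <- mu*x for x < 0.5, else mu*(1 - x), kept in (0, 1),
# then scaled to the decision-variable bounds to seed an initial swarm.

def tent_sequence(x0, n, mu=2.0):
    seq, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_swarm(x0, n, lo, hi):
    return [lo + (hi - lo) * x for x in tent_sequence(x0, n)]

# Ten initial generator set-points in [10, 100] MW (hypothetical bounds).
swarm = chaotic_swarm(0.37, 10, 10.0, 100.0)
```

The same sequence generator can supply the reference mutation solutions and crossover dimensions the abstract mentions.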

  20. A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach

    Directory of Open Access Journals (Sweden)

    Alireza TaheriMoghadam

    2014-05-01

Full Text Available In this paper an uncertain multi-objective closed-loop supply chain (CLSC) model is developed. The first objective function is maximizing the total profit. The second objective function is minimizing the use of raw materials; in other words, the second objective function is maximizing the amount of remanufacturing and recycling. A genetic algorithm is used for optimization, and the ε-constraint method is used to find the Pareto optimal front. Finally, a numerical example is solved with the proposed approach and the performance of the model is evaluated for different problem sizes. The results show that this approach is effective and useful for managerial decisions.
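The ε-constraint method turns the bi-objective model into a family of single-objective problems: maximize profit subject to raw-material use ≤ ε, sweeping ε to trace the Pareto front. The tiny discrete candidate set and its (profit, raw-material) values below are purely illustrative; in the paper the inner maximization is carried out by the genetic algorithm:

```python
# Epsilon-constraint sweep over a toy set of supply-chain plans.
candidates = {                 # plan: (total profit, raw-material use)
    "p1": (100.0, 80.0),
    "p2": (90.0, 60.0),
    "p3": (70.0, 40.0),
    "p4": (60.0, 55.0),
}

def eps_constraint(cands, eps):
    """Best-profit plan among those whose raw-material use stays within eps."""
    feasible = {k: v for k, v in cands.items() if v[1] <= eps}
    return max(feasible, key=lambda k: feasible[k][0]) if feasible else None

# Each epsilon level yields one point of the Pareto front.
pareto = {eps: eps_constraint(candidates, eps) for eps in (40.0, 60.0, 80.0)}
```

Tightening ε trades profit for lower raw-material use, which is exactly the front the decision maker inspects.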

  1. [Digital learning object for diagnostic reasoning in nursing applied to the integumentary system].

    Science.gov (United States)

    da Costa, Cecília Passos Vaz; Luz, Maria Helena Barros Araújo

    2015-12-01

    To describe the creation of a digital learning object for diagnostic reasoning in nursing applied to the integumentary system at a public university of Piaui. A methodological study applied to technological production based on the pedagogical framework of problem-based learning. The methodology for creating the learning object observed the stages of analysis, design, development, implementation and evaluation recommended for contextualized instructional design. The revised taxonomy of Bloom was used to list the educational goals. The four modules of the developed learning object were inserted into the educational platform Moodle. The theoretical assumptions allowed the design of an important online resource that promotes effective learning in the scope of nursing education. This study should add value to nursing teaching practices through the use of digital learning objects for teaching diagnostic reasoning applied to skin and skin appendages.

  2. Multi-objective Reactive Power Optimization Based on Improved Particle Swarm Algorithm

    Science.gov (United States)

    Cui, Xue; Gao, Jian; Feng, Yunbin; Zou, Chenlu; Liu, Huanlei

    2018-01-01

In this paper, an optimization model with the minimization of active power loss, the minimization of node voltage deviation, and the maximization of the static voltage stability margin as its objectives is proposed for reactive power optimization problems. By defining an index value for reactive power compensation, the optimal reactive power compensation nodes are selected. The particle swarm optimization algorithm is improved by introducing a selection pool for the global best and a probabilistic global best (p-gbest). A set of Pareto optimal solutions is obtained by this algorithm, and by calculating the fuzzy membership values of the Pareto optimal solutions, the individuals with the smallest fuzzy membership values are selected as the final optimization results. The improved algorithm is used to optimize the reactive power of the IEEE 14-node standard system. Comparison and analysis of the results show that the optimization effect of this algorithm is very good.

  3. Design of a centrifugal compressor impeller using multi-objective optimization algorithm

    International Nuclear Information System (INIS)

    Kim, Jin Hyuk; Husain, Afzal; Kim, Kwang Yong; Choi, Jae Ho

    2009-01-01

This paper presents a design optimization of a centrifugal compressor impeller with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. Two objectives, i.e., isentropic efficiency and total pressure ratio, are selected, with four design variables defining the impeller hub and shroud contours in the meridional plane. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) with an ε-constraint strategy for local search, coupled with a radial basis neural network model, is used for the multi-objective optimization. The optimization results show that the isentropic efficiencies and total pressure ratios of the five cluster points in the Pareto-optimal solutions are enhanced by the multi-objective optimization.

  4. Design of a centrifugal compressor impeller using multi-objective optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Hyuk; Husain, Afzal; Kim, Kwang Yong [Inha University, Incheon (Korea, Republic of); Choi, Jae Ho [Samsung Techwin Co., Ltd., Changwon (Korea, Republic of)

    2009-07-01

This paper presents a design optimization of a centrifugal compressor impeller with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. Two objectives, i.e., isentropic efficiency and total pressure ratio, are selected, with four design variables defining the impeller hub and shroud contours in the meridional plane. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) with an ε-constraint strategy for local search, coupled with a radial basis neural network model, is used for the multi-objective optimization. The optimization results show that the isentropic efficiencies and total pressure ratios of the five cluster points in the Pareto-optimal solutions are enhanced by the multi-objective optimization.

  5. Dynamical Black Hole Masses of BL Lac Objects from the Sloan Digital Sky Survey

    NARCIS (Netherlands)

    Plotkin, Richard M.; Markoff, Sera; Trager, Scott C.; Anderson, Scott F.

    2012-01-01

    We measure black hole masses for 71 BL Lac objects from the Sloan Digital Sky Survey (SDSS) with redshifts out to z ∼ 0.4. We perform spectral decompositions of their nuclei from their host galaxies and measure their stellar velocity dispersions. Black hole masses are then derived from the black

  6. A Semiotic Framework for the Semantics of Digital Multimedia Learning Objects

    DEFF Research Database (Denmark)

    May, Michael

    2007-01-01

    The relevance of semiotics for extending multimedia description schemes will be shown relative to existing strategies for indexing and retrieval. The semiotic framework presented is intended to support a compositional semantics of flexible digital multimedia objects. Besides semiotics insights fr...... Formal Concept Analysis is utilized....

  7. Digital Object Identifier (DOI) – a must know for every author of ...

    African Journals Online (AJOL)

    It is a unique code preferred by publishers in the identification and exchange of the content of a digital object such as journal articles, web documents, or other intellectual property. This article is an eye opener to the features and benefits of DOI, anatomy of a DOI name, citing DOI in references and locating DOI of references.

  8. Common path in-line holography using enhanced joint object reference digital interferometers

    OpenAIRE

    Kelner, Roy; Katz, Barak; Rosen, Joseph

    2014-01-01

    Joint object reference digital interferometer (JORDI) is a recently developed system capable of recording holograms of various types [Opt. Lett. 22(5), 4719 (2013)]. Presented here is a new enhanced system design that is based on the previous JORDI. While the previous JORDI has been based purely on diffractive optical elements, displayed on spatial light modulators, the present design incorporates an additional refractive objective lens, thus enabling hologram recording with improved resoluti...

  9. Experimental verification of preset time count rate meters based on adaptive digital signal processing algorithms

    Directory of Open Access Journals (Sweden)

    Žigić Aleksandar D.

    2005-01-01

Full Text Available Experimental verifications of two optimized adaptive digital signal processing algorithms implemented in two preset time count rate meters were performed according to the appropriate standards. A random pulse generator, realized using a personal computer, was used as an artificial radiation source for preliminary system tests and performance evaluations of the proposed algorithms. Then, measurement results for background radiation levels were obtained. Finally, measurements with a natural radiation source, the radioisotope 90Sr-90Y, were carried out. Measurement results, obtained without and with radioisotopes for specified errors of 10% and 5%, agreed well with theoretical predictions.

  10. Multi-Objective Optimization of the Hedging Model for reservoir Operation Using Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    sadegh sadeghitabas

    2015-12-01

Full Text Available Multi-objective problems rarely provide a single optimal solution; rather, they yield an optimal set of outputs (Pareto fronts). Solving these problems was previously accomplished by using simplifying methods such as the weighting coefficient method, which converts a multi-objective problem into a single objective function. However, robust tools such as multi-objective meta-heuristic algorithms have recently been developed for solving these problems. The hedging model is one of the classic problems of reservoir operation and is generally employed for mitigating drought impacts in water resources management. According to this method, although it may be possible to supply the total planned demands, only portions of the demands are met, saving water by allowing small deficits under current conditions in order to avoid or reduce severe deficits in the future. The approach depends heavily on economic and social considerations. In the present study, the meta-heuristic algorithms NSGA-II, MOPSO, SPEA-II, and AMALGAM are used for the multi-objective optimization of the hedging model. For this purpose, the rationing factors involved in Taleghan dam operation are optimized over a 35-year statistical period of inflow. There are two objective functions: (a) minimizing the modified shortage index, and (b) maximizing the reliability index (i.e., two opposing objectives). The results show that the above algorithms can produce a wide range of optimal solutions. Among the algorithms, AMALGAM is found to produce a better Pareto front for the values of the objective functions, indicating its more satisfactory performance.
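The hedging idea, accepting small deficits now to avoid severe ones later, can be sketched with a toy release rule: when available water falls below a trigger, only a rationing factor times the demand is released. The trigger, rationing factor, and inflow series below are illustrative assumptions, not the optimized Taleghan values:

```python
# Toy hedging rule for reservoir operation: release the full demand when
# water is plentiful, otherwise only a rationed share, to conserve storage.

def hedged_release(available, demand, trigger=1.5, rationing=0.6):
    """Full demand when available >= trigger*demand, else a rationed share."""
    if available >= trigger * demand:
        return demand
    return min(available, rationing * demand)

# Simulate a reservoir through a dry spell (volumes in 10^6 m^3).
storage, releases = 8.0, []
for inflow in (12.0, 6.0, 3.0, 2.0, 8.0):
    storage += inflow
    r = hedged_release(storage, demand=10.0)
    releases.append(r)
    storage -= r
```

In the study, the rationing factors themselves are the decision variables that the multi-objective algorithms tune against the shortage and reliability indices.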

  11. An Extensible Component-Based Multi-Objective Evolutionary Algorithm Framework

    DEFF Research Database (Denmark)

    Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2017-01-01

    The ability to easily modify the problem definition is currently missing in Multi-Objective Evolutionary Algorithms (MOEA). Existing MOEA frameworks do not support dynamic addition and extension of the problem formulation. The existing frameworks require a re-specification of the problem definition...

  12. Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); D. Thierens (Dirk); D. Thierens (Dirk)

    2007-01-01

    htmlabstractRecent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimations are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we

  13. Algorithm for Extracting Digital Terrain Models under Forest Canopy from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Almasi S. Maguya

    2014-07-01

Full Text Available Extracting digital terrain models (DTMs) from LiDAR data under forest canopy is a challenging task. This is because the forest canopy tends to block a portion of the LiDAR pulses from reaching the ground, introducing gaps in the data. This paper presents an algorithm for DTM extraction from LiDAR data under forest canopy. The algorithm copes with the challenge of low data density by generating a series of coarse DTMs from the few ground points available and using trend surfaces to interpolate missing elevation values in the vicinity of the available points. This process generates a cloud of ground points from which the final DTM is generated. The algorithm has been compared with two other algorithms proposed in the literature at three test sites with varying degrees of difficulty. Results show that the algorithm presented in this paper is more tolerant of low data density than the other two algorithms. The results further show that, with decreasing point density, the differences between the three algorithms dramatically increase from about 0.5 m to over 10 m.
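The trend-surface step can be sketched as fitting a low-order surface to the few available ground returns and using it to fill elevation gaps. A first-order surface z = a + b·x + c·y is fitted below by least squares; the ground points are hypothetical, and the paper's algorithm may use higher-order surfaces:

```python
# Fit a planar trend surface z = a + b*x + c*y to sparse ground returns by
# solving the least-squares normal equations, then interpolate gap elevations.

def fit_trend_surface(points):
    ata = [[0.0] * 3 for _ in range(3)]
    atz = [0.0] * 3
    for x, y, z in points:
        row = (1.0, x, y)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atz[i] += row[i] * z
    m = [ata[i] + [atz[i]] for i in range(3)]
    for col in range(3):                       # Gauss-Jordan with pivoting
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return lambda x, y: a + b * x + c * y

# Sparse ground returns lying on the plane z = 100 + 0.05x - 0.02y (synthetic).
ground = [(0, 0, 100.0), (10, 0, 100.5), (0, 10, 99.8),
          (10, 10, 100.3), (5, 2, 100.21)]
surface = fit_trend_surface(ground)
gap_elevation = surface(20, 20)   # elevation interpolated at an unsampled spot
```

Repeating such fits locally around each sparse patch yields the cloud of synthetic ground points from which the final DTM is built.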

  14. The algorithm of random length sequences synthesis for frame synchronization of digital television systems

    Directory of Open Access Journals (Sweden)

    Аndriy V. Sadchenko

    2015-12-01

    Full Text Available Digital television systems need to ensure that all digital signal processing operations are performed simultaneously and consistently. Frame synchronization is dictated by the need to match the phases of the transmitter and receiver so that the start of a frame can be identified. Long binary sequences with good aperiodic autocorrelation functions are often used as frame synchronization signals. Aim: This work is dedicated to the development of an algorithm for the synthesis of sequences of arbitrary length. Materials and Methods: The paper provides a comparative analysis of the known sequences that can currently be used for synchronization, revealing their advantages and disadvantages. The work proposes an algorithm for the synthesis of binary synchronization sequences of arbitrary length with good autocorrelation properties, based on a noise generator with a uniform probability distribution. A semiconductor "white noise" generator is proposed as the source material for the synthesis of binary sequences with the desired properties. Results: A statistical analysis of the initial "white noise" realizations and of the synthesized sequences for frame synchronization of digital television was conducted. A comparative analysis of the synthesized sequences with known ones was carried out; the results show the benefits of the obtained sequences over the known ones, and the performed simulations confirm these results. Conclusions: A search algorithm for binary synchronization sequences with desired autocorrelation properties was thus obtained. With this algorithm, sequences of any length can be synthesized, without length limitations. The resulting sync sequences can be used for frame synchronization in modern digital communication systems, increasing their efficiency and noise immunity.
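
    The core idea, drawing candidate binary sequences from a noise source and keeping those with good aperiodic autocorrelation, can be sketched as follows. This is an assumption-laden toy: a seeded pseudo-random generator stands in for the semiconductor white-noise source, and selection by minimum peak sidelobe level is one plausible criterion, not necessarily the paper's exact one:

    ```python
    import random

    def aperiodic_autocorr(seq, shift):
        """Aperiodic autocorrelation of a +/-1 sequence at a given shift."""
        return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

    def peak_sidelobe(seq):
        """Largest |autocorrelation| over all nonzero shifts (lower is better)."""
        return max(abs(aperiodic_autocorr(seq, s)) for s in range(1, len(seq)))

    def synthesize(length, trials=2000, rng=None):
        """Random search: generate candidates from a 'noise' source, keep the
        one with the smallest peak sidelobe level."""
        rng = rng or random.Random(0)
        best, best_psl = None, float("inf")
        for _ in range(trials):
            cand = [rng.choice((-1, 1)) for _ in range(length)]
            psl = peak_sidelobe(cand)
            if psl < best_psl:
                best, best_psl = cand, psl
        return best, best_psl
    ```

    For reference, the Barker-13 code has a peak sidelobe of 1, the best possible; random search merely approaches such quality but works for any length.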

  15. Prototype Implementation of Two Efficient Low-Complexity Digital Predistortion Algorithms

    Directory of Open Access Journals (Sweden)

    Timo I. Laakso

    2008-01-01

    Full Text Available Predistortion (PD) linearisation of microwave power amplifiers (PAs) is an important topic of research. With the ever larger bandwidths of modern WiMax standards and of multichannel base stations for 3GPP standards, the relatively simple nonlinear effect of a PA becomes a complex memory-including function, severely distorting the output signal. In this contribution, two digital PD algorithms are investigated for the linearisation of microwave PAs in mobile communications. The first is an efficient, low-complexity algorithm based on a memoryless model, the simplicial canonical piecewise linear (SCPWL) function, which describes the static nonlinear characteristic of the PA. The second algorithm is more general, approximating the pre-inverse filter of a nonlinear PA iteratively using a Volterra model. The first, simpler algorithm is suitable for compensation of amplitude compression and amplitude-to-phase conversion, for example, in mobile units with relatively small bandwidths. The second algorithm can be used to linearise PAs operating with larger bandwidths, and thus exhibiting memory effects, for example, in multichannel base stations. A measurement testbed including a transmitter-receiver chain with a microwave PA was built for testing and prototyping the proposed PD algorithms. In the testing phase, the PD algorithms were implemented using MATLAB (floating-point representation) and tested in record-and-playback mode. The iterative PD algorithm was then implemented on a Field Programmable Gate Array (FPGA) using fixed-point representation. The FPGA implementation allows the pre-inverse filter to be tested in real-time mode. Measurement results show excellent linearisation capabilities of both proposed algorithms in terms of adjacent channel power suppression. It is also shown that the fixed-point FPGA implementation of the iterative algorithm performs as well as the floating-point implementation.
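
    The principle of iteratively approximating a pre-inverse can be shown on a deliberately simplified model. The sketch below is not the paper's Volterra-based algorithm: it uses a toy real-valued, memoryless PA with cubic compression (real predistorters operate on complex baseband signals and model memory), and finds the predistorted input by fixed-point iteration:

    ```python
    def pa(x):
        """Toy memoryless PA model: unit gain with cubic compression.
        Illustrative only; coefficients are invented."""
        return x - 0.1 * x ** 3

    def predistort(target, iters=50):
        """Find an input u such that pa(u) ~= target, by fixed-point
        iteration u <- u + (target - pa(u)). Converges when the residual
        nonlinearity is mild (|0.3 * u**2| < 1 for this toy model)."""
        u = target
        for _ in range(iters):
            u = u + (target - pa(u))
        return u
    ```

    Feeding `predistort(target)` into `pa` then reproduces `target` almost exactly, which is the essence of pre-inverse linearisation; the paper's contribution lies in doing this for a wideband PA with memory, in fixed point, on an FPGA.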

  16. Bayesian Maximum Entropy Based Algorithm for Digital X-ray Mammogram Processing

    Directory of Open Access Journals (Sweden)

    Radu Mutihac

    2009-06-01

    Full Text Available The basics of Bayesian statistics in inverse problems using the maximum entropy principle are summarized in connection with the restoration of positive, additive images from various types of data, such as X-ray digital mammograms. An efficient iterative algorithm for image restoration from large data sets, based on the conjugate gradient method and Lagrange multipliers in nonlinear optimization of a specific potential function, was developed. The point spread function of the imaging system was determined by numerical simulations of inhomogeneous breast-like tissue with microcalcification inclusions of various opacities. The processed digital and digitized mammograms proved superior to their raw counterparts in terms of contrast, resolution, noise, and visibility of details.

  17. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    Science.gov (United States)

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case

  18. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large... Compared with three state-of-the-art multi-objective evolutionary algorithms (the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III), the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  19. Practical Constraint K-Segment Principal Curve Algorithms for Generating Railway GPS Digital Map

    Directory of Open Access Journals (Sweden)

    Dewang Chen

    2013-01-01

    Full Text Available In order to obtain a decent trade-off between low-cost, low-accuracy Global Positioning System (GPS) receivers and the requirements of high-precision digital maps for modern railways, using the concept of constraint K-segment principal curves (CKPCS) and expert knowledge on railways, we propose three practical CKPCS generation algorithms with reduced computational complexity, and therefore more suitable for engineering applications. The three algorithms are named ALLopt, MPMopt, and DCopt, in which ALLopt exploits global optimization while MPMopt and DCopt apply local optimization with different initial solutions. We compare the three practical algorithms according to their performance on average projection error, stability, and fitness for simple and complex simulated trajectories with noisy data. It is found that ALLopt only works well for simple curves and small data sets. The other two algorithms work better for complex curves and large data sets. Moreover, MPMopt runs faster than DCopt, but DCopt works better for some curves with cross points. The three algorithms are also applied to generating GPS digital maps for two railway GPS data sets measured on the Qinghai-Tibet Railway (QTR). Results similar to those on the synthetic data are obtained. Because the trajectory of a railway is relatively simple and straight, we conclude that MPMopt works best under comprehensive consideration of computation speed and the quality of the generated CKPCS. MPMopt can be used to obtain a few key points to represent a large amount of GPS data; hence, it can greatly reduce data storage requirements and increase positioning speed for real-time digital map applications.

  20. Digital learning objects in nursing consultation: technology assessment by undergraduate students.

    Science.gov (United States)

    Silveira, DeniseTolfo; Catalan, Vanessa Menezes; Neutzling, Agnes Ludwig; Martinato, Luísa Helena Machado

    2010-01-01

    This study followed the teaching-learning process about the nursing consultation, based on digital learning objects developed through the active Problem Based Learning method. The goals were to evaluate the digital learning objects about nursing consultation, develop cognitive skills on the subject using problem based learning and identify the students' opinions on the use of technology. This is an exploratory and descriptive study with a quantitative approach. The sample consisted of 71 students in the sixth period of the nursing program at the Federal University of Rio Grande do Sul. The data was collected through a questionnaire to evaluate the learning objects. The results showed positive agreement (58%) on the content, usability and didactics of the proposed computer-mediated activity regarding the nursing consultation. The application of materials to the students is considered positive.

  1. Enhanced object-based tracking algorithm for convective rain storms and cells

    Science.gov (United States)

    Muñoz, Carlos; Wang, Li-Pen; Willems, Patrick

    2018-03-01

    This paper proposes a new object-based storm tracking algorithm based upon TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting). TITAN is a widely used convective storm tracking algorithm but has limitations in handling small-scale yet high-intensity storm entities due to its single-threshold identification approach. It also has difficulties in effectively tracking fast-moving storms because its matching approach relies largely on the overlapping areas between successive storm entities. To address these deficiencies, a number of modifications are proposed and tested in this paper. These include a two-stage multi-threshold storm identification, a new formulation for characterizing a storm's physical features, and an enhanced matching technique in synergy with an optical-flow storm field tracker, together with a more elaborate merging and splitting scheme to accommodate these modifications. High-resolution (5-min and 529-m) radar reflectivity data for 18 storm events over Belgium are used to calibrate and evaluate the algorithm. The performance of the proposed algorithm is compared with that of the original TITAN. The results suggest that the proposed algorithm can better isolate and match convective rainfall entities and provide more reliable and detailed motion estimates. Furthermore, the improvement is found to be more significant for higher rainfall intensities. The new algorithm has the potential to serve as a basis for further applications, such as storm nowcasting and long-term stochastic spatial and temporal rainfall generation.
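
    The multi-threshold identification idea can be illustrated with a minimal two-threshold sketch (not the paper's algorithm; thresholds, connectivity and data layout here are invented for illustration): grow 4-connected regions of cells exceeding a low reflectivity threshold, but keep only regions that also contain at least one high-intensity core cell.

    ```python
    from collections import deque

    def identify_storms(field, low, high):
        """Label 4-connected regions with values >= low; keep only regions
        containing at least one cell >= high (the 'core'). Returns a list
        of regions, each a list of (row, col) cells."""
        rows, cols = len(field), len(field[0])
        seen = [[False] * cols for _ in range(rows)]
        storms = []
        for r in range(rows):
            for c in range(cols):
                if field[r][c] >= low and not seen[r][c]:
                    comp, has_core = [], False
                    q = deque([(r, c)])
                    seen[r][c] = True
                    while q:  # breadth-first flood fill over the low mask
                        i, j = q.popleft()
                        comp.append((i, j))
                        if field[i][j] >= high:
                            has_core = True
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < rows and 0 <= nj < cols
                                    and field[ni][nj] >= low and not seen[ni][nj]):
                                seen[ni][nj] = True
                                q.append((ni, nj))
                    if has_core:
                        storms.append(comp)
        return storms
    ```

    The low threshold delineates the storm extent while the high threshold rejects weak echo regions, which is the benefit over a single-threshold scheme.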

  2. Short-term economic environmental hydrothermal scheduling using improved multi-objective gravitational search algorithm

    International Nuclear Information System (INIS)

    Li, Chunlong; Zhou, Jianzhong; Lu, Peng; Wang, Chao

    2015-01-01

    Highlights: • Improved multi-objective gravitational search algorithm. • An elite archive set is proposed to guide the evolutionary process. • Neighborhood searching mechanism to improve local search ability. • Chaotic mutation is adopted to avoid premature convergence. • A feasible space method is proposed to handle hydro plant constraints. - Abstract: With growing concerns about energy and the environment, short-term economic environmental hydrothermal scheduling (SEEHS) plays an increasingly important role in power systems. Because of its two objectives and various constraints, SEEHS is a complex multi-objective optimization problem (MOOP). In order to solve the problem, we propose an improved multi-objective gravitational search algorithm (IMOGSA) in this paper. In IMOGSA, the mass of the agent is redefined by multiple objectives to make it suitable for MOOP. An elite archive set is proposed to keep Pareto optimal solutions and guide the evolutionary process. For balancing exploration and exploitation, a neighborhood searching mechanism is presented to cooperate with chaotic mutation. Moreover, a novel method based on feasible space is proposed to handle hydro plant constraints during SEEHS, and a violation adjustment method is adopted to handle the power balance constraint. To verify its effectiveness, the proposed IMOGSA is applied to a hydrothermal system in two different case studies. The simulation results show that IMOGSA has a competitive performance in SEEHS when compared with other established algorithms
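
    Chaotic mutation of the kind mentioned in the highlights is commonly implemented by mapping a chaotic sequence into the decision-variable bounds. The abstract does not specify which chaotic map IMOGSA uses; the sketch below assumes the logistic map at r = 4 (a common choice, fully chaotic on (0, 1)) and one simple mutation variant that redraws each gene from the chaotic sequence:

    ```python
    def logistic_sequence(x0, n, r=4.0):
        """Chaotic sequence from the logistic map x <- r*x*(1-x).
        For r=4 and x0 in (0,1) not a fixed point, values stay in [0, 1]."""
        xs, x = [], x0
        for _ in range(n):
            x = r * x * (1.0 - x)
            xs.append(x)
        return xs

    def chaotic_mutate(solution, bounds, x0=0.37):
        """One chaotic-mutation variant: replace each gene with a point from
        the chaotic sequence mapped linearly into that gene's bounds."""
        chaos = logistic_sequence(x0, len(solution))
        return [lo + c * (hi - lo) for c, (lo, hi) in zip(chaos, bounds)]
    ```

    Because the logistic map is ergodic over (0, 1), such mutations spread trial points widely, which is why chaotic mutation is used to escape local optima.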

  3. Cancer microarray data feature selection using multi-objective binary particle swarm optimization algorithm

    Science.gov (United States)

    Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider

    2016-01-01

    Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to its high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used for finding further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the distinctive capability of those selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., Colon, Lymphoma and Leukaemia, available in the literature. The performance of the selected feature subsets is assessed by their classification accuracy and validated using a 10-fold cross-validation technique. A detailed comparative study is also made to show the superiority or competitiveness of the proposed algorithm. PMID:27822174
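
    The two conflicting objectives, minimizing subset cardinality while maximizing discriminative capability, are resolved by Pareto dominance. A minimal sketch of the dominance test and non-dominated filtering for candidate subsets scored as (size, accuracy) pairs (the scoring itself, e.g. classifier accuracy, is outside this sketch):

    ```python
    def dominates(a, b):
        """True if candidate a = (size, accuracy) dominates b:
        size is minimized, accuracy is maximized."""
        size_a, acc_a = a
        size_b, acc_b = b
        no_worse = size_a <= size_b and acc_a >= acc_b
        strictly_better = size_a < size_b or acc_a > acc_b
        return no_worse and strictly_better

    def pareto_front(candidates):
        """Keep only candidates not dominated by any other (the Pareto set)."""
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]
    ```

    A swarm-based search like MOBPSO maintains and refines such a non-dominated archive over iterations rather than filtering a fixed list, but the dominance relation is the same.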

  4. Multi-objective flexible job shop scheduling problem using variable neighborhood evolutionary algorithm

    Science.gov (United States)

    Wang, Chun; Ji, Zhicheng; Wang, Yan

    2017-07-01

    In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) was studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) was proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators in terms of the decision space and objective space were proposed, and they were used in mating selection and environmental selection, respectively. Then, two well-designed neighborhood structures were used in local search, which consider the problem characteristics and ensure fast convergence. Finally, extensive comparison was carried out with state-of-the-art methods specially presented for solving MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than the other algorithms in solving MOFJSP.

  5. Culture belief based multi-objective hybrid differential evolutionary algorithm in short term hydrothermal scheduling

    International Nuclear Information System (INIS)

    Zhang Huifeng; Zhou Jianzhong; Zhang Yongchuan; Lu Youlin; Wang Yongqiang

    2013-01-01

    Highlights: ► Culture belief is integrated into multi-objective differential evolution. ► A chaotic sequence is introduced to improve evolutionary population diversity. ► An advantage in convergence rate is demonstrated in solving the hydrothermal problem. ► The results show the quality and potential of the proposed algorithm. - Abstract: A culture belief based multi-objective hybrid differential evolution (CB-MOHDE) is presented to solve the short-term hydrothermal optimal scheduling with economic emission (SHOSEE) problem. This problem is formulated to compromise between thermal cost and emissions while considering its complicated non-linear constraints with non-smooth and non-convex characteristics. The proposed algorithm integrates a modified multi-objective differential evolutionary algorithm into the computation model of the culture algorithm (CA), along with communication protocols between the population space and the belief space; three knowledge structures in the belief space are redefined according to the problem-solving characteristics, and in the differential evolution a chaotic factor is embedded into the mutation operator to avoid premature convergence by enlarging the search scale when the search trajectory reaches local optima. Furthermore, a new heuristic constraint-handling technique is utilized to handle the complex equality and inequality constraints of the SHOSEE problem. After application to a hydrothermal scheduling system, the efficiency and stability of the proposed CB-MOHDE are verified by its more desirable results in comparison to other methods established recently, and the simulation results also reveal that CB-MOHDE can be a promising alternative for solving SHOSEE.

  6. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    Science.gov (United States)

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subject to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deal with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines, for handling multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.

  7. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    Science.gov (United States)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of an effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified, and possible solutions to build a practical working system are investigated.

  8. A hybrid multi-objective cultural algorithm for short-term environmental/economic hydrothermal scheduling

    International Nuclear Information System (INIS)

    Lu Youlin; Zhou Jianzhong; Qin Hui; Wang Ying; Zhang Yongchuan

    2011-01-01

    Research highlights: → Multi-objective optimization model of short-term environmental/economic hydrothermal scheduling. → A hybrid multi-objective cultural algorithm (HMOCA) is presented. → New heuristic constraint handling methods are proposed. → Better quality solutions are obtained by reducing fuel cost and emission effects simultaneously. -- Abstract: The short-term environmental/economic hydrothermal scheduling (SEEHS) problem with the consideration of multiple objectives is a complicated non-linear constrained optimization problem with non-smooth and non-convex characteristics. In this paper, a multi-objective optimization model of SEEHS is proposed to minimize fuel cost and emission effects synthetically, and the transmission loss, the water transport delays between connected reservoirs, and the valve-point effects of thermal plants are taken into consideration to formulate the problem precisely. Meanwhile, a hybrid multi-objective cultural algorithm (HMOCA) is presented to deal with the SEEHS problem by optimizing both objectives simultaneously. The proposed method integrates a differential evolution (DE) algorithm into the framework of the cultural algorithm model to implement the evolution of the population space, and two knowledge structures in the belief space are redefined according to the characteristics of DE and the SEEHS problem to avoid premature convergence effectively. Moreover, in order to deal with the complicated constraints effectively, new heuristic constraint handling methods without any penalty factor settings are proposed in this paper. The feasibility and effectiveness of the proposed HMOCA method are demonstrated by two case studies of a hydrothermal power system. The simulation results reveal that, compared with other methods established recently, HMOCA can obtain better quality solutions by reducing fuel cost and emission effects simultaneously.

  9. Identification of Forested Landslides Using LiDar Data, Object-based Image Analysis, and Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Xianju Li

    2015-07-01

    Full Text Available For identification of forested landslides, most studies focus on knowledge-based and pixel-based analysis (PBA) of LiDar data, while few studies have examined (semi-)automated methods and object-based image analysis (OBIA). Moreover, most of them focus on soil-covered areas with gentle hillslopes. In bedrock-covered mountains with steep and rugged terrain, landslide identification is so difficult that there is currently no research on whether combining semi-automated methods and OBIA with only LiDar derivatives could be more effective. In this study, a semi-automatic object-based landslide identification approach was developed and implemented in a forested area, the Three Gorges of China. Comparisons of OBIA and PBA, of two different machine learning algorithms, and of their respective sensitivity to feature selection (FS) were first investigated. Based on the classification result, the landslide inventory was finally obtained according to (1) inclusion of holes encircled by the landslide body; (2) removal of isolated segments; and (3) delineation of closed envelope curves for landslide objects by manual digitizing. The proposed method achieved the following: (1) the filter features of surface roughness were applied for the first time in calculating object features, and proved useful; (2) FS improved classification accuracy and reduced the number of features; (3) the random forest algorithm achieved higher accuracy and was less sensitive to FS than a support vector machine; (4) compared to PBA, OBIA was more sensitive to FS, remarkably reduced computing time, and depicted more contiguous terrain segments; (5) based on the classification result, with an overall accuracy of 89.11% ± 0.03%, the obtained inventory map was consistent with the reference landslide inventory map, with a position mismatch value of 9%. The outlined approach should be helpful for forested landslide identification in steep and rugged terrain.

  10. Digital holographic setups for phase object measurements in micro and macro scale

    Directory of Open Access Journals (Sweden)

    Lédl Vít

    2015-01-01

    Full Text Available The measurement of the properties of so-called phase objects has been pursued for more than a century, starting probably with the schlieren technique [1]. Classical interferometry served as a great measurement tool for several decades and was then replaced by holographic interferometry, which offers many benefits over classical interferometry. Holographic interferometry has undergone enormous development in the last decade, as digital holography became established as a standard technique and most of the drawbacks were solved. The paper deals with the wide applicability of digital holographic interferometry in heat and mass transfer measurement, from micro to macro scale and from simple 2D measurement up to complex tomographic techniques. Very complex experimental setups combining many techniques are currently under development in our labs, leading to digital holographic micro-tomography methods.

  11. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD quality broadcasting systems, etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm on two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
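
    LPC analysis of the kind implemented on such DSPs typically estimates the prediction coefficients by solving the autocorrelation normal equations with the Levinson-Durbin recursion. A minimal floating-point sketch follows (the paper's fixed-point DSP32C implementation details are not reproduced; function names are illustrative):

    ```python
    def autocorr(signal, maxlag):
        """Biased autocorrelation estimates r[0..maxlag] of a frame."""
        return [sum(signal[i] * signal[i + k] for i in range(len(signal) - k))
                for k in range(maxlag + 1)]

    def levinson_durbin(r, order):
        """Solve the LPC normal equations via the Levinson-Durbin recursion.

        Returns (a, err): polynomial coefficients a[0..order] with a[0] = 1
        (so the predictor is x[n] ~= -sum(a[k]*x[n-k], k=1..order)) and the
        final prediction error energy."""
        a = [0.0] * (order + 1)
        a[0] = 1.0
        err = r[0]
        for m in range(1, order + 1):
            # Reflection coefficient for order m
            k = -sum(a[j] * r[m - j] for j in range(m)) / err
            new_a = a[:]
            for j in range(1, m):
                new_a[j] = a[j] + k * a[m - j]
            new_a[m] = k
            a = new_a
            err *= (1.0 - k * k)
        return a, err
    ```

    For a first-order autoregressive signal with pole 0.9, the recursion recovers a[1] close to -0.9, i.e. the predictor x[n] ~= 0.9 x[n-1], which is how LPC concentrates the speech information into a few coefficients per frame.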

  12. An improved fast and elitist multi-objective genetic algorithm-ANSGA-II for multi-objective optimization of inverse radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Cao Ruifen; Li Guoli; Song Gang; Zhao Pan; Lin Hui; Wu Aidong; Huang Chenyu; Wu Yican

    2007-01-01

    Objective: To provide a fast and effective multi-objective optimization algorithm for inverse radiotherapy treatment planning systems. Methods: The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is a representative multi-objective evolutionary optimization algorithm and outperforms the others. This paper proposes ANSGA-II, which takes advantage of NSGA-II and uses adaptive crossover and mutation to improve its flexibility; according to the characteristics of inverse radiotherapy treatment planning, prior knowledge is used to generate the individuals of every generation in the course of optimization, which enhances the convergence speed and improves efficiency. Results: An example of optimizing the average doses on a CT slice, including PTV, OAR and NT, shows that the algorithm can find satisfactory solutions in several minutes. Conclusions: The algorithm can provide clinical inverse radiotherapy treatment planning systems with a choice of optimization algorithms. (authors)

  13. An object-based approach for tree species extraction from digital orthophoto maps

    Science.gov (United States)

    Jamil, Akhtar; Bayram, Bulent

    2018-05-01

    Tree segmentation is an active and ongoing research area in the field of photogrammetry and remote sensing. It is especially challenging due to both intra-class and inter-class similarities among various tree species. In this study, we exploited various statistical features for the extraction of hazelnut trees from 1:5000 scale digital orthophoto maps. Initially, the non-vegetation areas were eliminated using the traditional normalized difference vegetation index (NDVI), followed by the application of mean shift segmentation to transform the pixels into meaningful homogeneous objects. In order to eliminate false positives, morphological opening and closing were employed on candidate objects. A number of heuristics, such as shadow effects and bounding box aspect ratios, were also derived to eliminate unwanted objects before passing them to the classification stage. Finally, a knowledge based decision tree was constructed to distinguish the hazelnut trees from the remaining objects, which include man-made objects and other types of vegetation. We evaluated the proposed methodology on 10 sample orthophoto maps obtained from Giresun province in Turkey. The manually digitized hazelnut tree boundaries were taken as reference data for accuracy assessment. Both the manually digitized and segmented tree borders were converted into binary images and the differences were calculated. According to the obtained results, the proposed methodology achieved an overall accuracy of more than 85% for all sample images.
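
    The initial vegetation screening step uses the standard NDVI formula, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch (the 0.3 threshold below is illustrative; the paper's threshold is not stated, and real pipelines would operate on georeferenced raster bands rather than nested lists):

    ```python
    def ndvi(nir, red, eps=1e-9):
        """Normalized difference vegetation index for one pixel.
        eps guards against division by zero on dark pixels."""
        return (nir - red) / (nir + red + eps)

    def vegetation_mask(nir_band, red_band, threshold=0.3):
        """Boolean mask of pixels whose NDVI exceeds a scene-dependent
        threshold; everything below is treated as non-vegetation."""
        return [[ndvi(n, r) > threshold for n, r in zip(nrow, rrow)]
                for nrow, rrow in zip(nir_band, red_band)]
    ```

    Healthy vegetation reflects strongly in the near-infrared and absorbs red light, so vegetated pixels score close to +1 and bare soil or man-made surfaces near 0 or below.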

  14. Thermodynamic design of Stirling engine using multi-objective particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Duan, Chen; Wang, Xinggang; Shu, Shuiming; Jing, Changwei; Chang, Huawei

    2014-01-01

    Highlights: • An improved thermodynamic model taking into account an irreversibility parameter was developed. • A multi-objective optimization method for designing Stirling engines was investigated. • A multi-objective particle swarm optimization algorithm was adopted in the area of Stirling engines for the first time. - Abstract: In recent years, interest in the Stirling engine has remarkably increased due to its ability to use any external heat source, including solar energy, fossil fuels and biomass. A large number of studies have been done on Stirling cycle analysis. In the present study, a mathematical model based on a thermodynamic analysis of the Stirling engine considering regenerative losses and internal irreversibilities has been developed. The power output, thermal efficiency and cycle irreversibility parameter of the Stirling engine are optimized simultaneously using a Particle Swarm Optimization (PSO) algorithm, which is more effective than traditional genetic algorithms. In this optimization problem, some important parameters of the Stirling engine are considered as decision variables, such as the temperatures of the working fluid in both the high-temperature and the low-temperature isothermal processes, the dead volume ratios of each heat exchanger, the volumes of the working spaces, the effectiveness of the regenerator, and the system charge pressure. The Pareto optimal frontier is obtained and the final design solution is selected by the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP). Results show that the proposed multi-objective optimization approach can significantly outperform traditional single-objective approaches.
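    The canonical PSO update underlying this kind of design search can be sketched as follows. The inertia weight w and acceleration coefficients c1, c2 are illustrative defaults, and this is the single-objective form; the paper's multi-objective variant additionally maintains an archive of non-dominated designs:

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO velocity/position update.

    pos, vel: (n_particles, n_dims) arrays; pbest is each particle's best
    position so far, gbest the swarm-best position (broadcastable).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Inertia + cognitive pull toward pbest + social pull toward gbest.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```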

  15. An Algorithmic Approach to the Management of Infantile Digital Fibromatosis: Review of Literature and a Case Report.

    Science.gov (United States)

    Eypper, Elizabeth H; Lee, Johnson C; Tarasen, Ashley J; Weinberg, Maxene H; Adetayo, Oluwaseun A

    2018-01-01

    Objective: Infantile digital fibromatosis is a rare benign childhood tumor, infrequently cited in the literature. Hallmarks include nodular growths exclusive to the fingers and toes and the presence of eosinophilic cytoplasmic inclusions on histology. This article aims to illustrate the diagnosis of infantile digital fibromatosis and possible treatment options. Methods: A computerized English literature search was performed in the PubMed/MEDLINE database using the MeSH headings "infantile," "juvenile," "digital," and "fibromatosis." Twenty electronic publications were selected, and their clinical and histological data were recorded and used to compile a treatment algorithm. Results: A 9-month-old male child was referred for a persistent, symptomatic nodule on the third left toe. A direct excision with Brunner-type incisions was performed under general anesthesia. The procedure was successful without complications. The patient has no recurrence at 2 years postsurgery and continues to be followed. Histological examination revealed a proliferation of bland, uniformly plump spindle cells with elongated nuclei and small central nucleoli without paranuclear inclusions, consistent with fibromatosis. Conclusions: Asymptomatic nodules should be observed for spontaneous regression or treated with nonsurgical techniques such as chemotherapeutic or steroid injection. Surgical removal should be reserved for cases with structural or functional compromise.

  16. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    Science.gov (United States)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that a non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground-truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurements and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results can be kept within 5.15%.
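    The regularized least-squares step described above can be illustrated with a generic sketch: solve min_c ||A c - y||^2 + lam ||D c||^2, where A maps (here, B-spline) coefficients to measurements and D is a smoothness operator. The second-difference operator and the value of lam are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def smooth_lstsq(A, y, lam=1e-2):
    """Least squares with a second-difference smoothness penalty on c.

    Solves the normal equations of min ||A c - y||^2 + lam ||D c||^2,
    a common way to stabilize underdetermined linear systems.
    """
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)      # (n-2, n) second-difference operator
    M = A.T @ A + lam * (D.T @ D)
    return np.linalg.solve(M, A.T @ y)
```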

  17. Multi-objective optimum design of fast tool servo based on improved differential evolution algorithm

    International Nuclear Information System (INIS)

    Zhu, Zhiwei; Zhou, Xiaoqin; Liu, Qiang; Zhao, Shaoxin

    2011-01-01

    The flexure-based mechanism is a promising realization of a fast tool servo (FTS), and the optimum determination of the flexure hinge parameters is one of the most important elements of FTS design. This paper presents a multi-objective optimization approach for the dimension and position parameters of the flexure-based mechanism, based on an improved differential evolution algorithm embedding chaos and a nonlinear simulated annealing algorithm. The optimization results show that the proposed algorithm has excellent performance, and a well-balanced compromise is made between two conflicting objectives: the stroke and the natural frequency of the FTS mechanism. Validation tests based on finite element analysis (FEA) show good agreement with the results obtained by the theoretical algorithm proposed in this paper. Finally, a series of experimental tests is conducted to validate the design process and assess the performance of the FTS mechanism. The designed FTS reaches a stroke of 10.25 μm with at least 2 kHz bandwidth. Both the FEA and the experimental results demonstrate that a flexure-based mechanism whose parameters are determined by the proposed approach can achieve the specified performance, and that the approach is well suited to the optimum design of FTS mechanisms.

  18. Common path in-line holography using enhanced joint object reference digital interferometers

    Science.gov (United States)

    Kelner, Roy; Katz, Barak; Rosen, Joseph

    2014-01-01

    Joint object reference digital interferometer (JORDI) is a recently developed system capable of recording holograms of various types [Opt. Lett. 38(22), 4719 (2013)]. Presented here is a new enhanced system design that is based on the previous JORDI. While the previous JORDI has been based purely on diffractive optical elements, displayed on spatial light modulators, the present design incorporates an additional refractive objective lens, thus enabling hologram recording with improved resolution and increased system applicability. Experimental results demonstrate successful hologram recording for various types of objects, including transmissive, reflective, three-dimensional, phase and highly scattering objects. The resolution limit of the system is analyzed and experimentally validated. Finally, the suitability of JORDI for microscopic applications is verified as a microscope objective based configuration of the system is demonstrated.

  19. Common path in-line holography using enhanced joint object reference digital interferometers.

    Science.gov (United States)

    Kelner, Roy; Katz, Barak; Rosen, Joseph

    2014-03-10

    Joint object reference digital interferometer (JORDI) is a recently developed system capable of recording holograms of various types [Opt. Lett. 38(22), 4719 (2013)]. Presented here is a new enhanced system design that is based on the previous JORDI. While the previous JORDI has been based purely on diffractive optical elements, displayed on spatial light modulators, the present design incorporates an additional refractive objective lens, thus enabling hologram recording with improved resolution and increased system applicability. Experimental results demonstrate successful hologram recording for various types of objects, including transmissive, reflective, three-dimensional, phase and highly scattering objects. The resolution limit of the system is analyzed and experimentally validated. Finally, the suitability of JORDI for microscopic applications is verified as a microscope objective based configuration of the system is demonstrated.

  20. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a Framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system, PVSS, supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...

  1. Phenomenology, Pokémon Go, and Other Augmented Reality Games : A Study of a Life Among Digital Objects

    NARCIS (Netherlands)

    Liberati, Nicola

    2017-01-01

    The aim of this paper is to analyse, from a phenomenological point of view, the effects on the everyday world of current Augmented Reality games, which introduce digital objects into our surroundings. Augmented Reality is a new technology aiming to merge digital and real objects, and it is becoming

  2. Synthesizing multi-objective H2/H-infinity dynamic controller using evolutionary algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf; Langballe, A.S.; Wisniewski, Rafal

    This paper covers the design of an Evolutionary Algorithm (EA) that should be able to synthesize a mixed H2/H-infinity controller. It will be shown how a system can be expressed as Matrix Inequalities (MIs), which will then be used in the design of the EA. The main objective is to examine whether a mixed H2/H-infinity controller is feasible and, if so, how the optimal mixed controller might be found.

  3. Optimal power system generation scheduling by multi-objective genetic algorithms with preferences

    International Nuclear Information System (INIS)

    Zio, E.; Baraldi, P.; Pedroni, N.

    2009-01-01

    Power system generation scheduling is an important issue from both the economical and the environmental safety viewpoints. The scheduling involves decisions with regard to the units' start-up and shut-down times and to the assignment of the load demands to the committed generating units for minimizing the system operation costs and the emission of atmospheric pollutants. As in many other real-world engineering problems, power system generation scheduling involves multiple, conflicting optimization criteria for which there exists no single best solution with respect to all the criteria considered. Multi-objective optimization algorithms, based on the principle of Pareto optimality, can then be designed to search for the set of nondominated scheduling solutions from which the decision-maker (DM) must a posteriori choose the preferred alternative. On the other hand, information is often available a priori regarding the preference values of the DM with respect to the objectives. When possible, it is important to exploit this information during the search so as to focus it on the region of preference of the Pareto-optimal set. In this paper, ways are explored to use this preference information for driving a multi-objective genetic algorithm towards the preferential region of the Pareto-optimal front. Two methods are considered: the first one extends the concept of Pareto dominance by biasing the chromosome replacement step of the algorithm by means of numerical weights that express the DM's preferences; the second one drives the search algorithm by changing the shape of the dominance region according to linear trade-off functions specified by the DM. The effectiveness of the proposed approaches is first compared on a case study from the literature. Then, a nonlinear, constrained, two-objective power generation scheduling problem is effectively tackled.

  4. Availability allocation to repairable systems with genetic algorithms: a multi-objective formulation

    International Nuclear Information System (INIS)

    Elegbede, Charles; Adjallah, Kondo

    2003-01-01

    This paper describes a methodology based on genetic algorithms (GAs) and experimental design to optimize the availability and the cost of repairable parallel-series systems. It is an NP-hard problem of multi-objective combinatorial optimization, modeled with continuous and discrete variables. By using the weighting technique, the problem is transformed into a single-objective optimization problem whose constraints are then relaxed by the exterior penalty technique. We then propose a search for solutions through a GA whose parameters are adjusted using the experimental design technique. A numerical example is used to assess the method.
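    The weighting technique combined with an exterior penalty, as described above, reduces the multi-objective constrained problem to a single scalar score. A minimal sketch of the generic form (the weights, quadratic penalty shape, and penalty coefficient are illustrative assumptions):

```python
def scalarize(objs, weights, violations, penalty=1e3):
    """Weighted sum of objectives plus an exterior quadratic penalty.

    objs: objective values to minimize; weights: their trade-off weights;
    violations: constraint values g where g <= 0 means feasible.
    """
    base = sum(w * f for w, f in zip(weights, objs))
    # Exterior penalty: zero inside the feasible region, grows quadratically outside.
    return base + penalty * sum(max(0.0, g) ** 2 for g in violations)
```

A GA then minimizes this score; infeasible candidates can survive early generations but are increasingly disfavored as their violations grow.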

  5. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    International Nuclear Information System (INIS)

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-01-01

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
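    The image total p-variation (TpV) that the algorithm minimizes can be written down directly: the sum over pixels of the gradient magnitude raised to the power p, so that p = 1 recovers the total variation and p = 2 the image roughness. A discrete sketch using forward differences (an illustration of the objective only, not the authors' full projection-onto-convex-sets reconstruction loop):

```python
import numpy as np

def total_p_variation(img, p=1.0, eps=1e-12):
    """Discrete total p-variation of a 2D image.

    Forward differences with replicated last row/column; eps keeps the
    magnitude differentiable at zero gradient.
    """
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    return float(np.sum(mag**p))
```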

  6. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis and hence, various methods have been proposed to compute the spatial significance of entities based on their spatial distances to other entities within the cluster. This paper studies the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the spatial significance index (SSI) of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest SSI value is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the area of the mountain objects decreases with increasing scale, the distances between the mountain objects increase, resulting in increasing SSI values. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to the other mountain objects and thus its spatial significance.
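    The SSI computation described above is easy to sketch with binary morphology: repeatedly dilate the object of interest and count the iterations until it covers every other object. The 4-connected structuring element and the tiny grid below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def dilate(mask):
    """One 4-connected binary dilation step on a boolean 2D mask."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def spatial_significance_index(seed, others, max_iter=1000):
    """Minimum dilation iterations for `seed` to cover all pixels of `others`."""
    grown = seed.copy()
    for it in range(1, max_iter + 1):
        grown = dilate(grown)
        if np.all(grown[others]):    # every pixel of the other objects is reached
            return it
    return None
```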

  7. Comparison of phase unwrapping algorithms for topography reconstruction based on digital speckle pattern interferometry

    Science.gov (United States)

    Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin

    2017-10-01

    Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous and non-destructive manner, and has been considered one of the most promising techniques for Plasma-Facing Components (PFCs) topography diagnostics in the complicated environment of a tokamak. It is important for the study of digital speckle pattern interferometry to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two kinds of numerical models based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase of the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping with the flood-fill algorithm, noise pollution and other inevitable factors lead to poor-quality reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was introduced to obtain a quality map from the wrapped phase map; this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, and pixels with good quality parameters are given priority, so that the quality of the reconstruction results is improved. A comparison between the flood-fill algorithm suited to speckle pattern interferometry and the quality-guided flood-fill algorithm (with the two different calculation approaches) shows that errors caused by noise pollution and fringe discontinuities were successfully reduced.
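    The wrapped phase that feeds the flood-fill unwrapping is obtained from the four phase-shifted intensity images. For pi/2 shifts, I_k = A + B cos(phi + k*pi/2), which gives phi = atan2(I4 - I2, I1 - I3). A minimal sketch of this 4-step recovery (the unwrapping itself is omitted):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four pi/2-shifted intensity images.

    With I_k = A + B*cos(phi + k*pi/2):
      I4 - I2 = 2B*sin(phi),  I1 - I3 = 2B*cos(phi),
    so atan2 recovers phi in (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)
```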

  8. Digital dermoscopy to determine skin melanin index as an objective indicator of skin pigmentation

    Directory of Open Access Journals (Sweden)

    Sara Majewski

    2016-04-01

    Clinical assessment of skin photosensitivity is subjectively determined by erythema and tanning responses to sunlight recalled by the subject, alternatively known as the Fitzpatrick Skin Phototype (SPT). Responses may be unreliable due to recall bias, subjective bias by clinicians and subjects, and lack of cultural sensitivity of the questions. Analysis of red-green-blue (RGB) color spacing of digital images may provide an objective determination of SPT. This paper presents studies to assess the melanin index (MI), as determined from RGB images obtained by both a standard digital camera and a videodermoscope, and to correlate the MI with the SPT based upon subjects' verbal responses to standardized questions administered by a dermatologist. A sample of subjects representing all SPTs I–VI was selected. Both the digital camera and the videodermoscope were calibrated with standard illumination, light source and white balance. Images of the constitutive skin of the upper ventral arm were taken of each subject using both instruments. In total, 58 subjects (20 M, 38 F) were enrolled in the study (mean age: 47 years; range: 20–89), stratified by skin phototype I–VI. The MI obtained using both the digital camera and the videodermoscope increased significantly as the SPT increased (p = 0.004 and p < 0.0001, respectively) and positively correlated with the dermatologist-assessed SPT (Spearman correlation, r = 0.48 and r = 0.84, respectively). Digital imaging can quantify melanin content in order to quantitatively approximate skin pigmentation in all skin phototypes, including Type VI skin. This methodology holds promise as a simple, non-invasive, rapid and objective approach to reliably determine skin phototype and, with further investigation, may prove to be both practical and useful in the prediction of skin cancer risk.

  9. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    International Nuclear Information System (INIS)

    Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.

    2006-01-01

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature

  10. Evolutionary algorithms for multi-objective energetic and economic optimization in thermal system design

    International Nuclear Information System (INIS)

    Toffolo, A.; Lazzaretto, A.

    2002-01-01

    Thermoeconomic analyses in thermal system design are always focused on the economic objective. However, knowledge of only the economic minimum may not be sufficient in the decision-making process, since solutions with a higher thermodynamic efficiency, in spite of small increases in total costs, may result in much more interesting designs due to changes in energy market prices or in energy policies. This paper suggests how to perform a multi-objective optimization in order to find solutions that simultaneously satisfy exergetic and economic objectives. This corresponds to a search for the set of Pareto optimal solutions with respect to the two competing objectives. The optimization process is carried out by an evolutionary algorithm that features a new diversity-preserving mechanism; the well-known CGAM problem is used as a test case. (author)

  11. NSGA-II algorithm for multi-objective generation expansion planning problem

    Energy Technology Data Exchange (ETDEWEB)

    Murugan, P.; Kannan, S. [Electronics and Communication Engineering Department, Arulmigu Kalasalingam College of Engineering, Krishnankoil 626190, Tamilnadu (India); Baskar, S. [Electrical Engineering Department, Thiagarajar College of Engineering, Madurai 625015, Tamilnadu (India)

    2009-04-15

    This paper presents an application of Elitist Non-dominated Sorting Genetic Algorithm version II (NSGA-II), to multi-objective generation expansion planning (GEP) problem. The GEP problem is considered as a two-objective problem. The first objective is the minimization of investment cost and the second objective is the minimization of outage cost (or maximization of reliability). To improve the performance of NSGA-II, two modifications are proposed. One modification is incorporation of Virtual Mapping Procedure (VMP), and the other is introduction of controlled elitism in NSGA-II. A synthetic test system having 5 types of candidate units is considered here for GEP for a 6-year planning horizon. The effectiveness of the proposed modifications is illustrated in detail. (author)

  12. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es

    2006-09-15

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.

  13. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Min-Yin Liu

    2017-05-01

    Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms. They include three Fourier-based, one continuous wavelet transform (CWT) based, and two Hilbert-Huang transform (HHT) based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737.

  14. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that were evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered with high-resolution images, while achieving better reliabilities than traditional methods.

  15. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    Science.gov (United States)

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  16. Optical derotator alignment using image-processing algorithm for tracking laser vibrometer measurements of rotating objects.

    Science.gov (United States)

    Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan

    2017-06-01

    An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently from an LSV. However, this device requires very specific alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the results of the vibration measurement are imprecise, owing to the alteration of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is experimentally tested.
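    The misalignment estimate rests on fitting a circle to the landmark trajectory: with a perfectly aligned derotator the landmark is stationary, while misalignment makes it trace a circle whose centre offset encodes the error. A generic algebraic (Kåsa) least-squares circle fit, a plausible stand-in for the paper's unspecified trajectory model, can be sketched as:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to trajectory points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly
    and returns (cx, cy, r); the offset of (cx, cy) from the rotation
    axis would quantify the derotator misalignment."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

A linear solve is preferred here over iterative geometric fitting because the landmark samples densely cover the circle, where the algebraic fit is already accurate.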

  17. A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information

    Directory of Open Access Journals (Sweden)

    Ding Ma

    2015-01-01

    Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and to estimate information about them, such as location, size and rotation angle, in the current frame. Object tracking in complex scenes usually encounters various challenges, such as location change, scale change, illumination change, perception change and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by Haar-like and ORB features. Second, the dimensionality of the computation space of the Haar-like and ORB features is effectively reduced through compressed sensing. The features are then fused based on information entropy. Finally, within a particle filter framework, the object location is obtained by selecting candidate locations in the current frame from the local context neighboring the optimal locations in the previous frame. Extensive experimental results demonstrate that this method effectively addresses the challenges of perception change, illumination change and large-area occlusion, achieving better performance than existing approaches such as MIL and CT.
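    The dimensionality reduction step can be illustrated with a random projection, the standard compressed-sensing trick of multiplying the high-dimensional feature vector by a random measurement matrix. The sketch below uses a generic Gaussian matrix, not necessarily the authors' exact choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(features, k):
    """Project a d-dimensional feature vector into k << d dimensions with a
    random Gaussian measurement matrix. Pairwise distances are approximately
    preserved (Johnson-Lindenstrauss), which is what keeps the compressed
    features discriminative enough for tracking."""
    d = features.shape[0]
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    return R @ features
```

In a tracker the same matrix R would be drawn once and reused for every frame, so candidate windows remain comparable in the compressed space.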

  18. Algorithmic Skin: Health-Tracking Technologies, Personal Analytics and the Biopedagogies of Digitized Health and Physical Education

    Science.gov (United States)

    Williamson, Ben

    2015-01-01

    The emergence of digitized health and physical education, or "eHPE", embeds software algorithms in the organization of health and physical education pedagogies. Particularly with the emergence of wearable and mobile activity trackers, biosensors and personal analytics apps, algorithmic processes have an increasingly powerful part to play…

  19. Multi-digit maximum voluntary torque production on a circular object

    Science.gov (United States)

    SHIM, JAE KUN; HUANG, JUNFENG; HOOKE, ALEXANDER W.; LATASH, MARK L.; ZATSIORSKY, VLADIMIR M.

    2010-01-01

    Individual digit-tip forces and moments during torque production on a mechanically fixed circular object were studied. During the experiments, subjects positioned each digit on a 6-dimensional force/moment sensor attached to a circular handle and produced a maximum voluntary torque on the handle. The torque direction and the orientation of the torque axis were varied. From this study, it is concluded that: (1) the maximum torque in the closing (clockwise) direction was larger than in the opening (counterclockwise) direction; (2) the thumb and little finger had the largest and the smallest share, respectively, of both total normal force and total moment; (3) the sharing of total moment between individual digits was not affected by the orientation of the torque axis or by the torque direction, while the sharing of total normal force between individual digits varied with torque direction; (4) the normal force safety margins were largest in the thumb and smallest in the little finger. PMID:17454086

  20. An Adaptive Multi-Objective Particle Swarm Optimization Algorithm for Multi-Robot Path Planning

    Directory of Open Access Journals (Sweden)

    Nizar Hadi Abbas

    2016-07-01

    Full Text Available This paper discusses an optimal path planning algorithm based on an Adaptive Multi-Objective Particle Swarm Optimization Algorithm (AMOPSO) for two case studies. In the first case, a single robot must reach a goal in a static environment that contains two obstacles and two danger sources. The second case improves the ability of five robots to reach their goals by the shortest route. For the first case, the proposed algorithm solves the optimization problem by finding the minimum distance from the initial to the goal position while ensuring that the generated path keeps a maximum distance from the danger zones. For the second case, it finds the shortest path for every robot, without any collision between them, in the shortest time. To evaluate the proposed algorithm's ability to find the best solution, six benchmark test functions are used to compare AMOPSO with the standard MOPSO. The results show that AMOPSO escapes local optima more readily and converges more quickly than MOPSO. Simulation results using Matlab 2014a indicate that this methodology is extremely valuable for every robot in a multi-robot framework to discover its own proper path from the start to the destination position with minimum distance and time.
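    For orientation, a minimal single-objective PSO loop is sketched below; the AMOPSO of the paper extends this pattern with a Pareto archive and adaptive parameters, which are not reproduced here. All parameter values are illustrative defaults, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimal single-objective particle swarm optimizer.
    Each particle is pulled toward its personal best and the global best;
    returns (best position, best objective value)."""
    x = rng.uniform(-bound, bound, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

On a smooth test function such as the sphere, this loop converges to near-zero cost within a few hundred iterations, which is the single-objective behaviour the multi-objective variants build on.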

  1. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    Science.gov (United States)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm generates OGRs more efficiently than the original multi-objective Bat algorithm and the other existing algorithms, and PHMOBA has a higher convergence and success rate than the original MOBA. In terms of ruler length and total optical channel bandwidth (TBW), the efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks is 100 %, versus 85 % for the original MOBA. Finally, implications for further research are discussed.
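    The defining property of a Golomb ruler, that all pairwise differences between marks are distinct, is what makes it useful for FWM-avoiding channel allocation, and it is cheap to verify. A sketch of the check, independent of any particular search algorithm:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A set of integer marks is a Golomb ruler iff every pair of marks
    measures a distinct distance, i.e. all pairwise differences are unique."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))
```

For example, [0, 1, 4, 9, 11] is a known optimal 5-mark ruler of length 11, while [0, 1, 2] fails because the distance 1 appears twice.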

  2. Digital Image Authentication Algorithm Based on Fragile Invisible Watermark and MD-5 Function in the DWT Domain

    Directory of Open Access Journals (Sweden)

    Nehad Hameed Hussein

    2015-04-01

    Full Text Available Watermarking techniques and digital signatures can help solve the problems of digital images transmitted on the Internet, such as forgery, tampering and alteration. In this paper we propose an invisible fragile watermark and MD-5 based algorithm for digital image authentication and tamper detection in the Discrete Wavelet Transform (DWT) domain. The digital image is decomposed using a 2-level DWT, and the middle and high-frequency sub-bands are used for watermark and digital signature embedding. The authentication data are embedded in a number of coefficients of these sub-bands according to an adaptive threshold based on the watermark length and the coefficients of each DWT level. These sub-bands are used because they are less sensitive to the Human Visual System (HVS) and preserve high image fidelity. The MD-5 and RSA algorithms are used to generate the digital signature from the watermark data, which is also embedded in the medical image. We apply the algorithm to a number of medical images, using the Electronic Patient Record (EPR) as watermark data. Experiments demonstrate the effectiveness of our algorithm in terms of robustness, invisibility and fragility. The watermark and digital signature can be extracted without the need for the original image.
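    The signature side of the scheme starts from an MD-5 digest of the watermark payload; in the paper this digest is then encrypted with RSA before embedding (the RSA step is omitted here). The digest step alone, using Python's standard library, looks like:

```python
import hashlib

def watermark_signature(watermark_payload: bytes) -> str:
    """128-bit MD-5 digest of the watermark payload (e.g. EPR data),
    returned as a 32-character hex string; the scheme described in the
    record would then sign this digest with RSA."""
    return hashlib.md5(watermark_payload).hexdigest()
```

Because MD-5 is fixed-length, the embedded signature has the same size regardless of how long the patient record is, which simplifies the adaptive-threshold embedding.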

  3. A FAST AND ELITIST BI-OBJECTIVE EVOLUTIONARY ALGORITHM FOR SCHEDULING INDEPENDENT TASKS ON HETEROGENEOUS SYSTEMS

    Directory of Open Access Journals (Sweden)

    G. Subashini

    2010-07-01

    Full Text Available To meet increasing computational demands, geographically distributed resources need to be logically coupled to make them work as a unified resource. In analyzing the performance of such distributed heterogeneous computing systems, scheduling a set of tasks to the available resources for execution is highly important. Task scheduling being an NP-complete problem, metaheuristics are appropriate for obtaining optimal solutions. The resulting schedules can be evaluated using several criteria that may conflict with one another, which requires a multi-objective problem formulation. This paper investigates the application of an elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) to efficiently schedule a set of independent tasks in a heterogeneous distributed computing system. The objectives considered are minimizing makespan and average flowtime simultaneously. The implementations of the NSGA-II algorithm and a Weighted-Sum Genetic Algorithm (WSGA) have been tested on benchmark instances for distributed heterogeneous systems. As NSGA-II generates a set of Pareto-optimal solutions, a fuzzy membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set, in order to verify the effectiveness of NSGA-II over WSGA.
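    The fuzzy best-compromise selection mentioned at the end is commonly implemented with linear membership functions: each objective value is mapped into [0, 1] between the worst and best values on the Pareto front, and the solution with the highest total membership is chosen. A sketch under that common formulation (the record does not spell out the authors' exact variant):

```python
def best_compromise(pareto):
    """Pick a compromise solution from a Pareto set of objective tuples
    (all objectives minimized, e.g. (makespan, flowtime)). Each objective
    gets a linear membership between its worst and best front values; the
    solution with the largest total membership wins."""
    k = len(pareto[0])
    lo = [min(p[j] for p in pareto) for j in range(k)]
    hi = [max(p[j] for p in pareto) for j in range(k)]
    def membership(p):
        return sum((hi[j] - p[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 1.0
                   for j in range(k))
    return max(pareto, key=membership)
```

On the front [(1, 10), (5, 5), (10, 1)] this picks the balanced point (5, 5), since the extreme points each score zero on one objective.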

  4. Multi-objective genetic algorithm based innovative wind farm layout optimization method

    International Nuclear Information System (INIS)

    Chen, Ying; Li, Hua; He, Bang; Wang, Pengcheng; Jin, Kai

    2015-01-01

    Highlights: • Innovative optimization procedures for both regular and irregular shape wind farms. • Using real wind conditions and commercial wind turbine parameters. • Using a multi-objective genetic algorithm optimization method. • Optimizing the selection of different wind turbine types and their hub heights. - Abstract: Layout optimization has become one of the critical approaches to increasing power output and decreasing the total cost of a wind farm. Previous research has applied intelligent algorithms to optimizing the wind farm layout. However, the wind conditions used in most previous research are simplified and not accurate enough to match real-world wind conditions. In this paper, the authors propose an innovative optimization method based on a multi-objective genetic algorithm, and test it with real wind conditions and commercial wind turbine parameters. Four case studies are conducted to investigate the number of wind turbines needed in the given wind farm. Different cost models are also considered in the case studies. The results clearly demonstrate that the new method is able to optimize the layout of a given wind farm with real commercial data and wind conditions in both regular and irregular shapes, and to achieve a better result by selecting wind turbines of different types and hub heights.

  5. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    Full Text Available In order to achieve perfectly blind detection in a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. First, the original image is divided into non-overlapping image blocks and decomposable coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Second, the low-frequency coefficients of the block images are selected and approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the adaptively quantized robust watermark is embedded in the low-frequency coefficients of the LWT. Experimental results show that the scheme is robust against common signal processing attacks while achieving perfect blind detection.

  6. An effective detection algorithm for region duplication forgery in digital images

    Science.gov (United States)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation enables forgeries in which information is added to or removed from digital images. In order to detect such forgeries, in particular region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is then processed by the Fourier transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
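    The detection step, lexicographically sorting per-block feature vectors so that identical blocks become adjacent, can be sketched generically. Feature extraction and the similarity threshold are omitted, so exact feature matches stand in for the paper's comparison metric:

```python
def find_duplicates(blocks):
    """blocks: iterable of (feature_vector, position) pairs.
    Sort lexicographically by feature vector, then scan adjacent entries:
    positions whose feature vectors collide are candidate duplicated
    regions (sorting makes this O(n log n) instead of all-pairs O(n^2))."""
    tagged = sorted((tuple(f), pos) for f, pos in blocks)
    pairs = []
    for (fa, pa), (fb, pb) in zip(tagged, tagged[1:]):
        if fa == fb:
            pairs.append((pa, pb))
    return pairs
```

A real detector would also reject pairs whose positions are nearly identical (overlapping blocks) and use a distance threshold rather than exact equality on the quantized features.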

  7. Multi-objective optimization of a vertical ground source heat pump using evolutionary algorithm

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Amlashi, Emad Hadaddi; Amidpour, Majid

    2009-01-01

    Thermodynamic and thermoeconomic optimization of a vertical ground source heat pump system has been studied. A model based on energy and exergy analysis is presented here. An economic model of the system is developed according to the Total Revenue Requirement (TRR) method. Objective functions based on the thermodynamic and thermoeconomic analyses are developed. The proposed vertical ground source heat pump system, including eight decision variables, is considered for optimization. An artificial intelligence technique known as an evolutionary algorithm (EA) has been utilized as the optimization method. This approach has been applied to minimize either the total levelized cost of the system product or the exergy destruction of the system. Three levels of optimization are performed: thermodynamic single-objective, thermoeconomic single-objective and multi-objective. In the multi-objective optimization, both thermodynamic and thermoeconomic objectives are considered simultaneously. For the multi-objective case, an example of the decision-making process for selecting the final solution from the available optimal points on the Pareto frontier is presented. The results obtained using the various optimization approaches are compared and discussed. Further, the sensitivity of the optimized systems to the interest rate, to the annual number of operating hours and to the electricity cost is studied in detail.

  8. Terrestrial scanning or digital images in inventory of monumental objects? - case study

    Science.gov (United States)

    Markiewicz, J. S.; Zawieska, D.

    2014-06-01

    Cultural heritage is the evidence of the past; monumental objects form an important part of it. Selection of an inventory method depends on many factors, including: the objectives of the inventory, the object's volume, the sumptuousness of its architectural design, accessibility to the object, and the required schedule and accuracy of the works. The paper presents research and experimental works performed in the course of developing architectural documentation of elements of the external facades and interiors of the Wilanów Palace Museum in Warsaw. Point clouds acquired from terrestrial laser scanning (Z+F 5003h) and digital images taken with Nikon D3X and Hasselblad H4D cameras were used. The advantages and disadvantages of these measurement technologies have been analysed, considering the influence of the structure and reflectance of the investigated monumental surfaces on the quality of the generated photogrammetric products. The geometric quality of surfaces obtained from terrestrial laser scanning data has been compared with that of surfaces derived from point clouds computed from the digital images.

  9. Evaluation of the algorithms for recovering reflectance from virtual digital camera response

    Directory of Open Access Journals (Sweden)

    Ana Gebejes

    2012-10-01

    Full Text Available In recent years many new methods for quality control in the graphic industry have been proposed. All of these methods have one thing in common: using a digital camera as the capturing device and an appropriate image processing method/algorithm to obtain the desired information. With the development of new, more accurate sensors, digital cameras became even more dominant and their use as measuring devices became more emphasized. The idea of using a camera as a spectrophotometer is interesting because this kind of measurement would be more economical, faster and widely available, and it would provide the possibility of capturing multiple colours with a single shot. This can be very useful for capturing colour targets for characterizing different properties of a print device. A lot of effort is put into enabling commercial colour CCD cameras (3 acquisition channels) to obtain enough information for reflectance recovery. Unfortunately, the RGB camera was not made with the idea of performing colour measurements but rather for producing an image that is visually pleasant for the observer. This somewhat complicates the task and calls for the development of different algorithms that estimate the reflectance information from the available RGB camera responses with the minimum possible error. In this paper three different reflectance estimation algorithms are evaluated (orthogonal projection, Wiener and optimized Wiener estimation), together with a method for reflectance approximation based on principal component analysis (PCA). The aim was to perform reflectance estimation pixel-wise and to analyze the performance of the reflectance estimation algorithms locally, at specific pixels in the image, and globally, on the whole image. The performance of each algorithm was evaluated visually and numerically by obtaining the pixel-wise colour difference and the pixel-wise difference of the estimated reflectance from the original values. It was concluded that the Wiener method gives the best reflectance estimation.
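    In the linear-estimation family evaluated here, reflectance recovery amounts to learning a matrix that maps 3-channel camera responses to spectra. A simplified least-squares stand-in for Wiener estimation (which additionally models noise statistics) can be sketched as:

```python
import numpy as np

def learn_estimator(R_train, C_train):
    """Least-squares linear estimator from camera responses to reflectance
    spectra: W = R C^T (C C^T)^{-1}. This is Wiener estimation without the
    noise term, assuming a training set of measured spectra is available.
    R_train: (n_wavelengths, n_samples); C_train: (3, n_samples)."""
    return R_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

def estimate(W, camera_rgb):
    """Recover one reflectance spectrum from a single RGB response."""
    return W @ camera_rgb
```

Applying `estimate` to every pixel's RGB triple gives exactly the pixel-wise estimation the paper evaluates; the optimized Wiener variant would regularize the inverse with a noise covariance term.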

  10. Astrophysical Information from Objective Prism Digitized Images: Classification with an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bratsolis Emmanuel

    2005-01-01

    Full Text Available Stellar spectral classification is not only a tool for labeling individual stars but is also useful in studies of stellar population synthesis. Extracting the physical quantities from the digitized spectral plates involves three main stages: detection, extraction, and classification of spectra. Low-dispersion objective prism images have been used and automated methods have been developed. The detection and extraction problems have been presented in previous works. In this paper, we present a classification method based on an artificial neural network (ANN. We make a brief presentation of the entire automated system and we compare the new classification method with the previously used method of maximum correlation coefficient (MCC. Digitized photographic material has been used here. The method can also be used on CCD spectral images.

  11. A proposal of multi-objective function for submarine rigid pipelines route optimization via evolutionary algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes, D.H.; Medeiros, A.R. [Subsea7, Niteroi, RJ (Brazil); Jacob, B.P.; Lima, B.S.L.P.; Albrecht, C.H. [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Coordenacao de Programas de Pos-graduacao em Engenharia

    2009-07-01

    This work presents studies regarding the determination of optimal pipeline routes for offshore applications. The assembly of an objective function is presented; this function can be later associated with Evolutionary Algorithm to implement a computational tool for the automatic determination of the most advantageous pipeline route for a given scenario. This tool may reduce computational overheads, avoid mistakes with route interpretation, and minimize costs with respect to submarine pipeline design and installation. The following aspects can be considered in the assembly of the objective function: Geophysical and geotechnical data obtained from the bathymetry and sonography; the influence of the installation method, total pipeline length and number of free spans to be mitigated along the routes as well as vessel time for both cases. Case studies are presented to illustrate the use of the proposed objective function, including a sensitivity analysis intended to identify the relative influence of selected parameters in the evaluation of different routes. (author)

  12. Multi-objective particle swarm and genetic algorithm for the optimization of the LANSCE linac operation

    International Nuclear Information System (INIS)

    Pang, X.; Rybarcyk, L.J.

    2014-01-01

    Particle swarm optimization (PSO) and genetic algorithm (GA) are both nature-inspired population based optimization methods. Compared to GA, whose long history can trace back to 1975, PSO is a relatively new heuristic search method first proposed in 1995. Due to its fast convergence rate in single objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we will introduce the PSO method and its multi-objective extension, the MOPSO, apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts but the MOPSO converges faster

  13. Multi-Objective Optimization Design for a Hybrid Energy System Using the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Myeong Jin Ko

    2015-04-01

    Full Text Available To secure a stable energy supply and bring renewable energy to buildings within a reasonable cost range, a hybrid energy system (HES) that integrates both fossil fuel energy systems (FFESs) and new and renewable energy systems (NRESs) needs to be designed and applied. This paper presents a methodology to optimize a HES consisting of three types of NRESs and six types of FFESs while simultaneously minimizing life cycle cost (LCC), maximizing the penetration of renewable energy and minimizing annual greenhouse gas (GHG) emissions. An elitist non-dominated sorting genetic algorithm is utilized for the multi-objective optimization. As an example, we have designed the optimal configuration and sizing for a HES in an elementary school. The evolution of Pareto-optimal solutions according to the variation in the economic, technical and environmental objective functions through the generations is discussed. The pairwise trade-offs among the three objectives are also examined.

  14. Multi-objective thermodynamic optimization of combined Brayton and inverse Brayton cycles using genetic algorithms

    International Nuclear Information System (INIS)

    Besarati, S.M.; Atashkari, K.; Jamali, A.; Hajiloo, A.; Nariman-zadeh, N.

    2010-01-01

    This paper presents a simultaneous optimization study of two performance outputs of a previously proposed combined Brayton and inverse Brayton cycle. It has been carried out by varying the upper cycle pressure ratio and the expansion pressure of the bottom cycle, using a variable, above-atmospheric bottom-cycle inlet pressure. Multi-objective genetic algorithms are used for the Pareto-approach optimization of the cycle outputs. The two important conflicting thermodynamic objectives considered in this work are net specific work (w_s) and thermal efficiency (η_th). It is shown that some interesting features among the optimal objective functions and decision variables involved in the Brayton and inverse Brayton cycles can consequently be discovered.

  15. Multi-objective particle swarm and genetic algorithm for the optimization of the LANSCE linac operation

    Energy Technology Data Exchange (ETDEWEB)

    Pang, X., E-mail: xpang@lanl.gov; Rybarcyk, L.J.

    2014-03-21

    Particle swarm optimization (PSO) and genetic algorithm (GA) are both nature-inspired population based optimization methods. Compared to GA, whose long history can trace back to 1975, PSO is a relatively new heuristic search method first proposed in 1995. Due to its fast convergence rate in single objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we will introduce the PSO method and its multi-objective extension, the MOPSO, apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts but the MOPSO converges faster.

  16. Solving multi-objective job shop problem using nature-based algorithms: new Pareto approximation features

    Directory of Open Access Journals (Sweden)

    Jarosław Rudy

    2015-01-01

    Full Text Available In this paper the job shop scheduling problem (JSP) with two criteria minimized simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization, yet multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for MOJSP. Both methods employ a technique taken from multi-criteria decision analysis to establish a ranking of solutions. ACO and GA differ in the method of keeping information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by these algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by the two methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.

  17. The Formation of Optimal Portfolio of Mutual Shares Funds using Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yandra Arkeman

    2013-09-01

    Full Text Available Investment in financial assets has become a trend in the globalization era, especially investment in mutual fund shares. Investors who want to invest in stock mutual funds can set up an investment portfolio that generates minimal risk and maximum return. In this study the authors used the Multi-Objective Genetic Algorithm with Non-dominated Sorting II (MOGA NSGA-II) together with the Markowitz portfolio principle to find the best portfolio from several mutual funds. The data used are 10 company stock mutual funds over periods of 12, 24 and 36 months. The genetic algorithm parameters used are a crossover probability of 0.65, a mutation probability of 0.05, 400 generations and a population of 20 individuals. The study produced a combination of the best portfolios for the 24-month period, with a computing time of 63,289 seconds.
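    The Markowitz principle underlying the portfolio search evaluates each candidate weight vector by its expected return and its variance risk; the genetic algorithm then trades these two off. The two objective values for one candidate can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def portfolio_metrics(weights, mean_returns, cov):
    """Markowitz objectives for one candidate portfolio:
    expected return w.mu (to maximize) and variance risk w.Sigma.w
    (to minimize), for weights w summing to 1."""
    w = np.asarray(weights, float)
    expected_return = float(w @ mean_returns)
    variance_risk = float(w @ cov @ w)
    return expected_return, variance_risk
```

For instance, an equal split over two uncorrelated funds with mean returns 10 % and 20 % and per-fund variance 0.04 yields an expected return of 0.15 and a portfolio variance of 0.02, showing the diversification effect the GA exploits.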

  18. An efficient algorithm for bi-objective combined heat and power production planning under the emission trading scheme

    International Nuclear Information System (INIS)

    Rong, Aiying; Figueira, José Rui; Lahdelma, Risto

    2014-01-01

    Highlights: • Define the fuel mix setting for the bi-objective CHP environmental/economic dispatch. • Develop an efficient algorithm for constructing the Pareto frontier for the problem. • Time complexity analysis is conducted for the proposed algorithm. • The algorithm is theoretically compared against a traditional algorithm. • The efficiency of the algorithm is justified by numerical results. - Abstract: The growing environmental awareness and the apparent conflicts between economic and environmental objectives turn energy planning problems naturally into multi-objective optimization problems. In the current study, mixed fuel combustion is considered as an option to achieve a tradeoff between the economic objective (associated with fuel cost) and the emission objective (measured in CO2 emission cost according to fuels and the emission allowance price), because a fuel with higher emissions is usually cheaper than one with lower emissions. Combined heat and power (CHP) production is an important high-efficiency technology to promote under the emission trading scheme. In CHP production, the production planning of both commodities must be done in coordination. A long-term planning problem decomposes into thousands of hourly subproblems. In this paper, a bi-objective multi-period linear programming CHP planning model is presented first. Then, an efficient specialized merging algorithm for constructing the exact Pareto frontier (PF) of the problem is presented. The algorithm is theoretically and empirically compared against a modified dichotomic search algorithm, and its efficiency and effectiveness are justified.

  19. Digital algorithms for parallel pipelined single-detector homodyne fringe counting in laser interferometry

    Science.gov (United States)

    Rerucha, Simon; Sarbort, Martin; Hola, Miroslava; Cizek, Martin; Hucl, Vaclav; Cip, Ondrej; Lazar, Josef

    2016-12-01

    Homodyne detection with only a single detector represents a promising approach in interferometric applications, enabling a significant reduction in optical system complexity while preserving the fundamental resolution and dynamic range of single-frequency laser interferometers. We present the design, implementation and analysis of algorithmic methods for computational processing of the single-detector interference signal based on parallel pipelined processing suitable for real-time implementation on a programmable hardware platform (e.g. an FPGA - Field Programmable Gate Array - or an SoC - System on Chip). The algorithmic methods incorporate (a) the scaling, filtering, demodulation and mixing of the single detector (sine) signal necessary for reconstructing the second (cosine) quadrature signal, followed by a conic section projection in the Cartesian plane, as well as (b) the phase unwrapping together with the goniometric and linear transformations needed for scale linearization and periodic error correction. The digital computing scheme was designed for bandwidths up to tens of megahertz, which would allow displacements to be measured at velocities around half a metre per second. The algorithmic methods were tested in real-time operation with a PC-based reference implementation that exploited pipelined processing by balancing the computational load among multiple processor cores. The results indicate that the algorithmic methods are suitable for a wide range of applications [3] and that they bring fringe-counting interferometry closer to industrial applications due to their optical setup simplicity and robustness, computational stability, scalability and cost-effectiveness.
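    The final stage of the chain, recovering displacement from the two quadrature signals via arctangent phase unwrapping, can be sketched offline with NumPy; the paper's hardware pipeline implements the same mathematics in streaming form. The wavelength and the homodyne λ/2-per-fringe convention below are illustrative assumptions:

```python
import numpy as np

def phase_to_displacement(i_sin, i_cos, wavelength=633e-9):
    """Recover displacement from the sine quadrature and the reconstructed
    cosine quadrature: the wrapped phase comes from arctan2, unwrapping
    removes the 2*pi jumps, and in a homodyne interferometer one full
    fringe (2*pi of phase) corresponds to lambda/2 of displacement."""
    phase = np.unwrap(np.arctan2(i_sin, i_cos))
    return phase * wavelength / (4 * np.pi)
```

Feeding in synthetic quadratures generated from a known displacement ramp reproduces that ramp, which is a convenient self-test before porting the arithmetic to fixed-point hardware.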

  20. New reconstruction algorithm for digital breast tomosynthesis: better image quality for humans and computers.

    Science.gov (United States)

    Rodriguez-Ruiz, Alejandro; Teuwen, Jonas; Vreemann, Suzan; Bouwman, Ramona W; van Engen, Ruben E; Karssemeijer, Nico; Mann, Ritse M; Gubern-Merida, Albert; Sechopoulos, Ioannis

    2017-01-01

    Background The image quality of digital breast tomosynthesis (DBT) volumes depends greatly on the reconstruction algorithm. Purpose To compare two DBT reconstruction algorithms used by the Siemens Mammomat Inspiration system, filtered back projection (FBP) and FBP with iterative optimizations (EMPIRE), using qualitative analysis by human readers and the detection performance of machine learning algorithms. Material and Methods Visual grading analysis was performed by four readers specialized in breast imaging who scored 100 cases reconstructed with both algorithms (70 lesions). Scoring (5-point scale: 1 = poor to 5 = excellent quality) was performed on presence of noise and artifacts, visualization of skin-line and Cooper's ligaments, contrast, and image quality, and, when present, lesion visibility. In parallel, a three-dimensional deep-learning convolutional neural network (3D-CNN) was trained (n = 259 patients, 51 positives with BI-RADS 3, 4, or 5 calcifications) and tested (n = 46 patients, nine positives), separately with FBP and EMPIRE volumes, to discriminate between samples with and without calcifications. The partial area under the receiver operating characteristic curve (pAUC) of each 3D-CNN was used for comparison. Results EMPIRE reconstructions showed better contrast (3.23 vs. 3.10, P = 0.010) and image quality (3.22 vs. 3.03). Conclusion The EMPIRE algorithm provides DBT volumes with better contrast and image quality, fewer artifacts, and improved visibility of calcifications for human observers, as well as improved detection performance with deep-learning algorithms.

  1. Multi-objective genetic algorithm for solving N-version program design problem

    International Nuclear Information System (INIS)

    Yamachi, Hidemi; Tsujimura, Yasuhiro; Kambayashi, Yasushi; Yamamoto, Hisashi

    2006-01-01

    N-version programming (NVP) is a programming approach for constructing fault-tolerant software systems. Generally, an optimization model utilized in NVP selects the optimal set of versions for each module to maximize the system reliability while constraining the total cost to remain within a given budget. In such a model, while the number of versions included in the obtained solution is generally reduced, the budget restriction may be so rigid that the model fails to find the optimal solution. To ameliorate this problem, this paper proposes a novel bi-objective optimization model that maximizes the system reliability and minimizes the system total cost for designing N-version software systems, and that obtains many Pareto solutions efficiently; when solving multi-objective optimization problems, it is crucial, though not easy, to find Pareto solutions. We formulate the optimal design problem of NVP as a bi-objective 0-1 nonlinear integer programming problem. To solve this problem, we propose a multi-objective genetic algorithm (MOGA), a powerful, though time-consuming, method for multi-objective optimization problems. When implementing a genetic algorithm (GA), the choice of an appropriate genetic representation scheme is one of the most important issues for obtaining good performance. We employ random-key representation in our MOGA to find many Pareto solutions spaced as evenly as possible along the Pareto frontier. To further improve performance, we introduce elitism and Pareto-insertion and Pareto-deletion operations based on the distance between Pareto solutions in the selection process. The proposed MOGA obtains many Pareto solutions spread evenly along the Pareto frontier, and the user can select the best compromise solution among the candidates by controlling the balance between the system reliability and the total cost.

  2. Multi-objective genetic algorithm for solving N-version program design problem

    Energy Technology Data Exchange (ETDEWEB)

    Yamachi, Hidemi [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan) and Department of Production and Information Systems Engineering, Tokyo Metropolitan Institute of Technology, Hino, Tokyo 191-0065 (Japan)]. E-mail: yamachi@nit.ac.jp; Tsujimura, Yasuhiro [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan)]. E-mail: tujimr@nit.ac.jp; Kambayashi, Yasushi [Department of Computer and Information Engineering, Nippon Institute of Technology, Miyashiro, Saitama 345-8501 (Japan)]. E-mail: yasushi@nit.ac.jp; Yamamoto, Hisashi [Department of Production and Information Systems Engineering, Tokyo Metropolitan Institute of Technology, Hino, Tokyo 191-0065 (Japan)]. E-mail: yamamoto@cc.tmit.ac.jp

    2006-09-15

    N-version programming (NVP) is a programming approach for constructing fault-tolerant software systems. Generally, an optimization model utilized in NVP selects the optimal set of versions for each module to maximize the system reliability while constraining the total cost to remain within a given budget. In such a model, while the number of versions included in the obtained solution is generally reduced, the budget restriction may be so rigid that the model fails to find the optimal solution. To ameliorate this problem, this paper proposes a novel bi-objective optimization model that maximizes the system reliability and minimizes the system total cost for designing N-version software systems, and that obtains many Pareto solutions efficiently; when solving multi-objective optimization problems, it is crucial, though not easy, to find Pareto solutions. We formulate the optimal design problem of NVP as a bi-objective 0-1 nonlinear integer programming problem. To solve this problem, we propose a multi-objective genetic algorithm (MOGA), a powerful, though time-consuming, method for multi-objective optimization problems. When implementing a genetic algorithm (GA), the choice of an appropriate genetic representation scheme is one of the most important issues for obtaining good performance. We employ random-key representation in our MOGA to find many Pareto solutions spaced as evenly as possible along the Pareto frontier. To further improve performance, we introduce elitism and Pareto-insertion and Pareto-deletion operations based on the distance between Pareto solutions in the selection process. The proposed MOGA obtains many Pareto solutions spread evenly along the Pareto frontier, and the user can select the best compromise solution among the candidates by controlling the balance between the system reliability and the total cost.
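    The Pareto concepts these records rely on can be illustrated with a small sketch. The dominance filter below for (maximize reliability, minimize cost) pairs is a generic textbook construction, not the authors' MOGA:

```python
def pareto_front(solutions):
    """Return the non-dominated subset for (maximize reliability, minimize cost).

    Each solution is a (reliability, cost) tuple; one solution dominates
    another when it is at least as good in both objectives and strictly
    better in at least one.
    """
    front = []
    for r1, c1 in solutions:
        dominated = any(
            (r2 >= r1 and c2 <= c1) and (r2 > r1 or c2 < c1)
            for r2, c2 in solutions
        )
        if not dominated:
            front.append((r1, c1))
    return front

# Hypothetical (reliability, cost) candidates for an N-version design
candidates = [(0.99, 120), (0.95, 80), (0.99, 150), (0.90, 90), (0.97, 100)]
print(sorted(pareto_front(candidates)))
# (0.90, 90) is dominated by (0.95, 80); (0.99, 150) by (0.99, 120)
```

    A MOGA approximates this front iteratively instead of enumerating all solutions, which matters because the 0-1 design space grows exponentially with the number of modules.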

  3. Development of test objects for image quality evaluation of digital mammography

    International Nuclear Information System (INIS)

    Pinto, Vitor Nascimento de Carvalho

    2013-01-01

    Mammography is the imaging exam regarded as the 'gold standard' for early detection of breast cancer. In Brazil, more than eight million mammograms are carried out per year. With the advancement of technology, digital CR and DR systems for this diagnostic modality have been increasingly implemented, replacing the conventional screen-film system, which brought environmental problems, such as the disposal of chemical waste, and is also responsible for the rejection of radiographic films with processing artifacts. Digital systems, besides avoiding the problem of environmental pollution, are also capable of image processing, allowing a much lower rejection rate when compared to the conventional system. Moreover, the determination of an accurate diagnosis is highly dependent on the image quality of the examination. To ensure the reliability of the images produced by these systems, it is necessary to evaluate them on a regular basis. Unfortunately, there is no regulation in Brazil concerning the quality assurance of these systems. The aim of this study was to develop a set of test objects that allow the evaluation of some image quality parameters of these systems, such as field image uniformity, the linearity between the air kerma incident on the detector and the mean pixel value (MPV) of the image, and the spatial resolution of the system through the modulation transfer function (MTF), and also to suggest an object to be applied in the evaluation of contrast-to-noise ratio (CNR) and signal-difference-to-noise ratio (SDNR). In order to test the objects, 10 mammography centers were evaluated: seven with CR systems and three with DR systems. To evaluate the linearity, high-sensitivity dosimeters (LiF:Mg,Cu,P TL dosimeters) had to be used in addition to the test objects; their use was recommended in order to minimize the time required to perform the tests and to decrease the number of exposures needed. For evaluation of digital images in DICOM format

  4. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    Science.gov (United States)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    In relation to thin slab-object inspection, we propose a digital tomosynthesis reconstruction with a smaller number of measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through the object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in an interpolated projection are the weighted sum of pixel values of the measured projections, with weights determined by their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
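    Under the stated rigid, thin-slab assumptions, the angle-weighted interpolation of projections might be sketched as follows; the linear weighting scheme here is an assumption, since the abstract does not give the exact weights:

```python
import numpy as np

def interpolate_projection(proj_a, proj_b, ang_a, ang_b, ang_v):
    """Synthesize a virtual projection at angle ang_v by angle-weighted
    blending of the two nearest measured projections (valid only when
    ray-path differences through the object are negligible)."""
    w = (ang_v - ang_a) / (ang_b - ang_a)   # 0 at ang_a, 1 at ang_b
    return (1.0 - w) * proj_a + w * proj_b

# Two measured 2x2 projections 4 degrees apart, one virtual view in between
p0 = np.array([[1.0, 2.0], [3.0, 4.0]])
p1 = np.array([[3.0, 2.0], [5.0, 8.0]])
pv = interpolate_projection(p0, p1, -2.0, 2.0, 0.0)
print(pv)   # midway blend of the two measured views
```

    The virtual views are then fed to the tomosynthesis reconstruction alongside the measured ones, reducing few-view streaking at the cost of some high-frequency detail.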

  5. Real-Time Attitude Control Algorithm for Fast Tumbling Objects under Torque Constraint

    Science.gov (United States)

    Tsuda, Yuichi; Nakasuka, Shinichi

    This paper describes a new control algorithm for achieving any arbitrary attitude and angular velocity state of a rigid body, even fast and complicated tumbling rotations, under some practical constraints. This technique is expected to be applied to attitude motion synchronization for capturing a non-cooperative, tumbling object in such missions as removal of debris from orbit, servicing of broken-down satellites for repair or inspection, rescue of manned vehicles, etc. For this objective, we introduced a novel control algorithm called the Free Motion Path Method (FMPM) in a previous paper, which was formulated as an open-loop controller. The next step of this consecutive work is to derive a closed-loop FMPM controller, and as a preliminary step toward that objective, this paper derives a conservative state-variable representation of rigid-body dynamics. Six-dimensional conservative state variables are introduced in place of the general angular velocity-attitude angle representation, and how to convert between the two representations is shown.

  6. Multi-objective exergy-based optimization of a polygeneration energy system using an evolutionary algorithm

    International Nuclear Information System (INIS)

    Ahmadi, Pouria; Rosen, Marc A.; Dincer, Ibrahim

    2012-01-01

    Comprehensive thermodynamic modeling and optimization of a polygeneration energy system for the simultaneous production of heating, cooling, electricity and hot water from a common energy source are reported. This polygeneration system is composed of four major parts: gas turbine (GT) cycle, Rankine cycle, absorption cooling cycle and domestic hot water heater. A multi-objective optimization method based on an evolutionary algorithm is applied to determine the best design parameters for the system. The two objective functions utilized in the analysis are the total cost rate of the system, which is the cost associated with fuel, component purchasing and environmental impact, and the system exergy efficiency. The total cost rate of the system is minimized while the cycle exergy efficiency is maximized by using an evolutionary algorithm. To provide deeper insight, the Pareto frontier is shown for the multi-objective optimization. In addition, a closed-form equation for the relationship between exergy efficiency and total cost rate is derived. Finally, a sensitivity analysis is performed to assess the effects of several design parameters on the system total exergy destruction rate, CO2 emissions and exergy efficiency.

  7. Multi-energy method of digital radiography for imaging of biological objects

    Science.gov (United States)

    Ryzhikov, V. D.; Naydenov, S. V.; Opolonin, O. D.; Volkov, V. G.; Smith, C. F.

    2016-03-01

    This work has been dedicated to the search for new possibilities to use multi-energy digital radiography (MER) for medical applications. Our work has included both theoretical and experimental investigations of 2-energy (2E) and 3-energy (3E) radiography for imaging the structure of biological objects. Using special simulation methods and digital analysis based on the energy dependence of X-ray interaction for each element of importance to medical applications, in the X-ray energy range up to 150 keV, we have implemented a quasi-linear approximation for the energy dependence of the X-ray mass attenuation coefficient μm(E) that permits us to determine the intrinsic structure of biological objects. Our measurements utilize multiple X-ray tube voltages (50, 100, and 150 kV) with Al and Cu filters of different thicknesses to achieve 3-energy X-ray examination of objects. By doing so, we are able to achieve significantly improved imaging quality of the structure of the subject biological objects. To reconstruct and visualize the final images, we use both two-dimensional (2D) and three-dimensional (3D) palettes of identification. The result is a 2E and/or 3E representation of the object with color coding of each pixel according to the data outputs. Following the experimental measurements and post-processing, we produce a 3D image of the biological object - in the case of our trials, fragments or parts of chicken and turkey.
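    A quasi-linear multi-energy approach of this kind can be illustrated with a two-energy, two-material decomposition; the attenuation coefficients below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical mass-attenuation coefficients (cm^2/g) for two basis
# materials at two effective energies -- illustrative values only.
MU = np.array([[0.20, 0.35],   # low energy:  [soft tissue, bone]
               [0.15, 0.20]])  # high energy: [soft tissue, bone]

def decompose_2e(log_att_low, log_att_high):
    """Solve MU @ t = log-attenuation for the two material thicknesses t."""
    return np.linalg.solve(MU, np.array([log_att_low, log_att_high]))

# Forward-project a known composition, then recover it from the two views
t_true = np.array([4.0, 1.5])           # g/cm^2 of each basis material
log_att = MU @ t_true                   # Beer-Lambert in log domain
print(decompose_2e(*log_att))
```

    A third energy adds a third equation and allows a third basis material (or a consistency check), which is what enables the color-coded 2E/3E identification palettes described above.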

  8. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual of each generation passes directly into the next generation, while the second-best individual participates in the crossover and mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks on the watermarked image (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)), the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence it is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.
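    The modified elitism rule (best individual copied unchanged, runner-up still entering crossover and mutation) can be sketched generically; the individuals, fitness function, and operators below are toy stand-ins, not the paper's DCT/SVM embedding:

```python
import random

def next_generation(pop, fitness, crossover, mutate):
    """One generation with the modified elitism described above: the best
    individual survives unchanged, while the second-best still takes part
    in crossover and mutation as a preferred parent."""
    ranked = sorted(pop, key=fitness, reverse=True)
    elite, runner_up = ranked[0], ranked[1]
    children = [elite]                            # elitism: copied as-is
    while len(children) < len(pop):
        mate = random.choice(ranked)              # runner-up mates with the rest
        children.append(mutate(crossover(runner_up, mate)))
    return children

# Toy run: real-valued individuals, fitness peaks at x = 3
random.seed(0)
pop = [0.5, 2.9, 5.0, 1.0]
fitness = lambda x: -abs(x - 3.0)
offspring = next_generation(pop, fitness,
                            crossover=lambda a, b: (a + b) / 2.0,
                            mutate=lambda x: x + random.uniform(-0.1, 0.1))
print(offspring)        # 2.9 is carried over unchanged in position 0
```

    Keeping the elite out of the variation operators guarantees the best fitness never regresses between generations, which is the rationale for the scheme.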

  9. Clinical ethics in rehabilitation medicine: core objectives and algorithm for resident education.

    Science.gov (United States)

    Sliwa, J A; McPeak, L; Gittler, M; Bodenheimer, C; King, J; Bowen, J

    2002-09-01

    Described as the balance of values on either side of a moral dilemma, ethics and ethical issues are of increasing importance in the changing practice of rehabilitation medicine. Because the substance of ethics and true ethical issues can be difficult to identify, the education of rehabilitation residents in ethics can similarly be challenging. This article discusses topics pertinent to an understanding of clinical ethics in rehabilitation medicine and provides a method of teaching residents through an algorithm of ethical issues, learning objectives, and illustrative cases.

  10. SLOAN DIGITAL SKY SURVEY OBSERVATIONS OF KUIPER BELT OBJECTS: COLORS AND VARIABILITY

    International Nuclear Information System (INIS)

    Ofek, Eran O.

    2012-01-01

    Colors of trans-Neptunian objects (TNOs) are used to study the evolutionary processes of bodies in the outskirts of the solar system and to test theories regarding their origin. Here I describe a search for serendipitous Sloan Digital Sky Survey (SDSS) observations of known TNOs and Centaurs. I present a catalog of SDSS photometry, colors, and astrometry of 388 measurements of 42 outer solar system objects. I find weak evidence, at the ≈ 2σ level (per trial), for a correlation between the g – r color and inclination of scattered disk objects and hot classical Kuiper Belt objects. I find a correlation between the g – r color and the angular momentum in the z direction of all the objects in this sample. These findings should be verified using larger samples of TNOs. Light curves as a function of phase angle are constructed for 13 objects. The steepness of the slopes of these light curves suggests that the coherent backscatter mechanism plays a major role in the reflectivity of outer solar system small objects at small phase angles. I find weak evidence for an anticorrelation, significant at the 2σ confidence level (per trial), between the g-band phase-angle slope parameter and the semimajor axis, as well as the aphelion distance, of these objects (i.e., they show a more prominent 'opposition effect' at smaller distances from the Sun). However, this plausible correlation should be verified using a larger sample. I discuss the origin of this possible correlation and argue that if this correlation is real it probably indicates that 'Sedna'-like objects have a different origin than other classes of TNOs. Finally, I identify several objects with large variability amplitudes.

  11. Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm

    Science.gov (United States)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is considered. The problem is more complex due to the multiple business criteria that must be satisfied. To solve the problem more efficiently and to obtain a set of non-dominated solutions, a meta-heuristic-based non-dominated sorting genetic algorithm is presented. In addition, task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase the diversity of the population and to perform intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems obtained from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of the schedules obtained by the algorithm.
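    Tournament selection by rank and crowding distance relies on the NSGA-II crowding-distance measure, which can be sketched as:

```python
import numpy as np

def crowding_distance(objs):
    """NSGA-II crowding distance for one front of objective vectors.

    objs: (n, m) array of objective values; boundary points receive
    infinite distance so they always win crowding-distance ties."""
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])            # sort front by objective k
        dist[order[0]] = dist[order[-1]] = np.inf
        span = objs[order[-1], k] - objs[order[0], k]
        if span == 0:
            continue                               # degenerate objective
        for i in range(1, n - 1):
            dist[order[i]] += (objs[order[i + 1], k] -
                               objs[order[i - 1], k]) / span
    return dist

# Three schedules on one front: (makespan, tardiness)
front = np.array([[10.0, 5.0], [12.0, 3.0], [15.0, 1.0]])
print(crowding_distance(front))   # boundary points infinite, middle finite
```

    In the tournament, the lower-ranked individual wins; on equal rank, the one with the larger crowding distance wins, which pushes the population toward an evenly spread front.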

  12. Geomagnetic Navigation of Autonomous Underwater Vehicle Based on Multi-objective Evolutionary Algorithm.

    Science.gov (United States)

    Li, Hong; Liu, Mingyong; Zhang, Feihu

    2017-01-01

    This paper presents a multi-objective evolutionary algorithm for bio-inspired geomagnetic navigation of an Autonomous Underwater Vehicle (AUV). Inspired by biological navigation behavior, the solution is obtained without using a priori information, simply by magnetotaxis searching. However, geomagnetic anomalies, which disrupt the distribution of the geomagnetic field, have a significant influence on the geomagnetic navigation system. An extreme-value region may easily appear in such abnormal regions, which can cause the AUV to become lost during the navigation phase. This paper proposes an improved bio-inspired algorithm with behavior constraints, for the sake of making the AUV escape from the abnormal region. First, the navigation problem is considered as an optimization problem. Second, an environmental monitoring operator is introduced to determine whether the algorithm has fallen into a geomagnetic anomaly region. Then, a behavior constraint operator is employed to get out of the abnormal region. Finally, the termination condition is triggered. Compared to the state-of-the-art, the proposed approach effectively overcomes the disturbance of the geomagnetic anomaly. The simulation result demonstrates the reliability and feasibility of the proposed approach in complex environments.

  13. Multi-objective optimization of HVAC system with an evolutionary computation algorithm

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Tang, Fan; Xu, Guanglin

    2011-01-01

    A data-mining approach for the optimization of a HVAC (heating, ventilation, and air conditioning) system is presented. A predictive model of the HVAC system is derived by data-mining algorithms, using a dataset collected from an experiment conducted at a research facility. To minimize the energy while maintaining the corresponding IAQ (indoor air quality) within a user-defined range, a multi-objective optimization model is developed. The solutions of this model are set points of the control system derived with an evolutionary computation algorithm. The controllable input variables - supply air temperature and supply air duct static pressure set points - are generated to reduce the energy use. The results produced by the evolutionary computation algorithm show that the control strategy saves energy by optimizing operations of an HVAC system. -- Highlights: → A data-mining approach for the optimization of a heating, ventilation, and air conditioning (HVAC) system is presented. → The data used in the project has been collected from an experiment conducted at an energy research facility. → The approach presented in the paper leads to accomplishing significant energy savings without compromising the indoor air quality. → The energy savings are accomplished by computing set points for the supply air temperature and the supply air duct static pressure.

  14. Object-Oriented/Data-Oriented Design of a Direct Simulation Monte Carlo Algorithm

    Science.gov (United States)

    Liechty, Derek S.

    2014-01-01

    Over the past decade, there has been much progress towards improved phenomenological modeling and algorithmic updates for the direct simulation Monte Carlo (DSMC) method, which provides a probabilistic physical simulation of gas flows. These improvements have largely been based on the work of the originator of the DSMC method, Graeme Bird. Of primary importance are improved chemistry, internal energy, and physics modeling and a reduction in time to solution. These allow for an expanded range of possible solutions in altitude and velocity space. NASA's current production code, the DSMC Analysis Code (DAC), is well established and based on Bird's 1994 algorithms written in Fortran 77, and it has proven difficult to upgrade. A new DSMC code is being developed in the C++ programming language using object-oriented and data-oriented design paradigms to facilitate the inclusion of the recent improvements and future development activities. The development efforts on the new code, the Multiphysics Algorithm with Particles (MAP), are described, and performance comparisons are made with DAC.

  15. Assessment of the accuracy of a Bayesian estimation algorithm for perfusion CT by using a digital phantom

    International Nuclear Information System (INIS)

    Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio; Boutelier, Timothe; Pautot, Fabrice; Christensen, Soren

    2013-01-01

    A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)

  16. Assessment of the accuracy of a Bayesian estimation algorithm for perfusion CT by using a digital phantom

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio [Iwate Medical University, Division of Ultrahigh Field MRI, Institute for Biomedical Sciences, Yahaba (Japan); Boutelier, Timothe; Pautot, Fabrice [Olea Medical, Department of Research and Innovation, La Ciotat (France); Christensen, Soren [University of Melbourne, Department of Neurology and Radiology, Royal Melbourne Hospital, Victoria (Australia)

    2013-10-15

    A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)

  17. Design and selection of load control strategies using a multiple objective model and evolutionary algorithms

    International Nuclear Information System (INIS)

    Gomes, Alvaro; Antunes, Carlos Henggeler; Martins, Antonio Gomes

    2005-01-01

    This paper aims at presenting a multiple objective model to evaluate the attractiveness of the use of demand resources (through load management control actions) by different stakeholders and in diverse structure scenarios in electricity systems. For the sake of model flexibility, the multiple (and conflicting) objective functions of technical, economical and quality of service nature are able to capture distinct market scenarios and operating entities that may be interested in promoting load management activities. The computation of compromise solutions is made by resorting to evolutionary algorithms, which are well suited to tackle multiobjective problems of combinatorial nature herein involving the identification and selection of control actions to be applied to groups of loads. (Author)

  18. A multi-objective location routing problem using imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Amir Mohammad Golmohammadi

    2016-06-01

    Nowadays, most manufacturing units try to optimize warehouse locations and depot vehicle routing in order to transport goods at optimum cost. Needless to say, the locations of the required warehouses influence the performance of vehicle routing. In this paper, a mathematical programming model to optimize storage location and vehicle routing is presented. The first objective function of the model minimizes the total cost associated with transportation and storage, and the second objective function minimizes the difference in distance traveled by vehicles. The study uses the Imperialist Competitive Algorithm (ICA) to solve the resulting problems in different sizes. The preliminary results indicate that the proposed approach performs better than the NSGA-II and PAES methods in terms of the quality and spacing metrics.

  19. Utility of Objective Chest Tube Management After Pulmonary Resection Using a Digital Drainage System.

    Science.gov (United States)

    Takamochi, Kazuya; Imashimizu, Kota; Fukui, Mariko; Maeyashiki, Tatsuo; Suzuki, Mikiko; Ueda, Takuya; Matsuzawa, Hironori; Hirayama, Shunki; Matsunaga, Takeshi; Oh, Shiaki; Suzuki, Kenji

    2017-07-01

    We sought to evaluate the clinical utility of chest tube management after pulmonary resection based on objective digital monitoring of pleural pressure and digital surveillance for air leaks. We prospectively recorded the perioperative data of 308 patients who underwent pulmonary resection between December 2013 and January 2016. We used information from a digital monitoring thoracic drainage system to measure peak air leakage during the first 24 hours after the operation, patterns of air leakage over the first 72 hours, and patterns of pleural pressure changes until the chest tubes were removed. There were 240 patients with lung cancer and 68 patients with other diseases. The operations included 49 wedge resections, 58 segmentectomies, and 201 lobectomies. A postoperative air leak was observed in 61 patients (20%). A prolonged air leak exceeding 20 mL/min lasting 5 days or more was observed in 18 patients (5.8%). Multivariate analysis of various perioperative factors showed forced expiratory volume in 1 second below 70%, patterns of air leakage, defined as exacerbating and remitting or without a trend toward improvement, and peak air leakage of 100 mL/min or more were significant positive predictors of prolonged air leak. Fluctuations in pleural pressure occurred just after the air leakage rate decreased to less than 20 mL/min. Digital monitoring of peak air leakage and patterns of air leakage were useful for predicting prolonged air leak after pulmonary resection. Information on the disappearance of air leak could be derived from the change in the rate of air leakage and from the increase in fluctuation of pleural pressure. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Research and Setting the Modified Algorithm "Predator-Prey" in the Problem of the Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2016-01-01

    Full Text Available We consider a class of algorithms for multi-objective optimization - Pareto-approximation algorithms, which presuppose the preliminary construction of a finite-dimensional approximation of the Pareto set, and thereby also of the Pareto front of the problem. The article gives an overview of population and non-population Pareto-approximation algorithms, identifies their strengths and weaknesses, and presents the canonical "predator-prey" algorithm, showing its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm aimed at overcoming its drawbacks, and present the results of a broad study of the efficiency of these modifications. A distinctive feature of the study is the use of Pareto-approximation quality indicators that previous publications have not used. In addition, we present the results of the meta-optimization of the modified algorithm, i.e. determining the optimal values of some of its free parameters. The efficiency study of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: cardinality of the set of archive solutions, uniformity of archive solutions, and computation time. By and large, the results show that the modified and meta-optimized algorithm achieves exactly the same approximation as the basic algorithm, but with an order of magnitude fewer prey. Computational costs are reduced proportionally.
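
    The non-dominated archive that all Pareto-approximation algorithms maintain, including the "predator-prey" modifications studied above, rests on the Pareto-dominance test. A minimal illustrative sketch for a minimization problem (not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization:
    no worse in every objective, strictly better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(points)   # (3.0, 3.0) is dominated by (2.0, 2.0)
```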

  1. A multi-objective improved teaching-learning based optimization algorithm for unconstrained and constrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2014-01-01

    Full Text Available The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning based optimization (TLBO) algorithm, adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of number of teachers, adaptive teaching factor, tutorial training and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e. the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed by implementing it on the unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The performance assessment is done by using the inverted generational distance (IGD) measure. The IGD measures obtained by the MO-ITLBO algorithm are compared with those of the other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competing algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of the unconstrained test functions and the 3rd rank in the optimization of the constrained test functions.
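
    The IGD measure used in the assessment above averages, over a reference approximation of the true Pareto front, the distance from each reference point to the nearest obtained solution. A hedged sketch with a hypothetical reference set (the CEC 2009 suite supplies the real ones):

```python
import math

def igd(reference, obtained):
    """Inverted generational distance: mean distance from each reference
    point to its nearest obtained solution (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained) for r in reference) / len(reference)

# Hypothetical reference front for a bi-objective problem.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
perfect = igd(ref, ref)            # matching the reference exactly gives 0
partial = igd(ref, [(0.0, 1.0)])   # covering only one region is penalized
```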

  2. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves successively with three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  3. Multi-objective decoupling algorithm for active distance control of intelligent hybrid electric vehicle

    Science.gov (United States)

    Luo, Yugong; Chen, Tao; Li, Keqiang

    2015-12-01

    The paper presents a novel active distance control strategy for intelligent hybrid electric vehicles (IHEV) with the purpose of guaranteeing an optimal performance in view of the driving functions, optimum safety, fuel economy and ride comfort. Considering the complexity of driving situations, the objects of safety and ride comfort are decoupled from that of fuel economy, and a hierarchical control architecture is adopted to improve the real-time performance and the adaptability. The hierarchical control structure consists of four layers: active distance control object determination, comprehensive driving and braking torque calculation, comprehensive torque distribution and torque coordination. The safety distance control and the emergency stop algorithms are designed to achieve the safety and ride comfort goals. The optimal rule-based energy management algorithm of the hybrid electric system is developed to improve the fuel economy. The torque coordination control strategy is proposed to regulate engine torque, motor torque and hydraulic braking torque to improve the ride comfort. This strategy is verified by simulation and experiment using a forward simulation platform and a prototype vehicle. The results show that the novel control strategy can achieve the integrated and coordinated control of its multiple subsystems, which guarantees top performance of the driving functions and optimum safety, fuel economy and ride comfort.

  4. A Multi-objective PMU Placement Method Considering Observability and Measurement Redundancy using ABC Algorithm

    Directory of Open Access Journals (Sweden)

    KULANTHAISAMY, A.

    2014-05-01

    Full Text Available This paper presents a Multi-objective Optimal PMU Placement (MOPP) method for large electric transmission systems. It is proposed for simultaneously minimizing the number of Phasor Measurement Units (PMUs) required for complete system observability and maximizing the measurement redundancy of the system. Measurement redundancy refers to the number of times a bus is observed by the PMU set. A higher level of measurement redundancy can maximize the total system observability and is desirable for reliable power system state estimation. Therefore, simultaneous optimization of the two conflicting objectives is performed using a binary-coded Artificial Bee Colony (ABC) algorithm. The complete-observability model is formulated first, and then a single-line-loss contingency condition is added to the main model. The efficiency of the proposed method is validated on the IEEE 14, 30, 57 and 118 bus test systems. The value of the ABC algorithm is demonstrated by finding the optimal number and locations of PMUs and comparing the performance with earlier works.
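
    For intuition only: if a PMU at a bus observes that bus and all of its neighbours, complete observability is a covering problem. The greedy sketch below is a simplification, not the paper's binary ABC search, which additionally maximizes redundancy and handles line-loss contingencies:

```python
def greedy_pmu_placement(adjacency):
    """Greedy cover: repeatedly place a PMU at the bus that observes the
    most still-unobserved buses (a PMU observes its bus and all neighbours)."""
    observes = {b: {b} | set(nbrs) for b, nbrs in adjacency.items()}
    unobserved = set(adjacency)
    placed = []
    while unobserved:
        best = max(adjacency, key=lambda b: len(observes[b] & unobserved))
        placed.append(best)
        unobserved -= observes[best]
    return placed

# Hypothetical 5-bus star network: bus 2 is connected to every other bus.
adj = {0: [2], 1: [2], 2: [0, 1, 3, 4], 3: [2], 4: [2]}
pmus = greedy_pmu_placement(adj)   # one PMU at the hub observes every bus
```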

  5. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Full Text Available Bilateral filtering has been applied widely in digital image processing, but in high-gradient regions of the image it may generate a staircase effect. Bilateral filtering can be regarded as a particular form of local mode filtering; based on this analysis, a mixed image denoising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First, a Gaussian filter is applied to the noisy image to obtain a reference image; then both the reference image and the noisy image are taken as inputs to the range kernel of the bilateral filter. The reference image provides the image’s low-frequency information, and the noisy image provides its high-frequency information. A comparative experiment between the proposed method and traditional bilateral filtering showed that the mixed denoising algorithm effectively overcomes the staircase effect: the filtered image is smoother, its textural features are closer to the original image, and it achieves a higher PSNR value, while the computational cost of the two algorithms is essentially the same.
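
    A one-dimensional sketch of the mixed scheme described above: the range kernel of the bilateral filter is evaluated on a Gaussian-smoothed reference signal rather than on the noisy signal itself. The signal values and kernel parameters are illustrative assumptions:

```python
import math

def gaussian_filter_1d(signal, sigma):
    """Truncated Gaussian smoothing used to build the reference signal."""
    radius = max(1, int(3 * sigma))
    offsets = range(-radius, radius + 1)
    kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in offsets]
    out = []
    for i in range(len(signal)):
        num = den = 0.0
        for k, w in zip(offsets, kernel):
            j = min(max(i + k, 0), len(signal) - 1)   # clamp at the borders
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

def joint_bilateral_1d(noisy, reference, sigma_s, sigma_r, radius=2):
    """Bilateral filter whose range kernel is evaluated on the reference
    signal, while the averaged values come from the noisy signal."""
    out = []
    for i in range(len(noisy)):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(noisy) - 1)
            w = (math.exp(-0.5 * (k / sigma_s) ** 2) *
                 math.exp(-0.5 * ((reference[j] - reference[i]) / sigma_r) ** 2))
            num += w * noisy[j]
            den += w
        out.append(num / den)
    return out

noisy = [0.0, 0.1, -0.1, 0.05, 1.0, 0.95, 1.1, 1.0]   # step edge plus noise
ref = gaussian_filter_1d(noisy, sigma=1.0)
smoothed = joint_bilateral_1d(noisy, ref, sigma_s=2.0, sigma_r=0.3)
```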

  6. Simulation and Digitization of a Gas Electron Multiplier Detector Using Geant4 and an Object-Oriented Digitization Program

    Science.gov (United States)

    McMullen, Timothy; Liyanage, Nilanga; Xiong, Weizhi; Zhao, Zhiwen

    2017-01-01

    Our research has focused on simulating the response of a Gas Electron Multiplier (GEM) detector using computational methods. GEM detectors provide a cost-effective solution for radiation detection in high-rate environments. A detailed simulation of GEM detector response to radiation is essential for the successful adaptation of these detectors to different applications. Using Geant4 Monte Carlo (GEMC), a wrapper around Geant4 which has been successfully used to simulate the Solenoidal Large Intensity Device (SoLID) at Jefferson Lab, we are developing a simulation of a GEM chamber similar to the detectors currently used in our lab. We are also refining an object-oriented digitization program, which translates energy deposition information from GEMC into electronic readout that resembles the readout from our physical detectors. We have run the simulation with beta particles produced by the simulated decay of a 90Sr source, as well as with a simulated bremsstrahlung spectrum. Comparison of the simulation data with real GEM data taken under similar conditions is used to refine the simulation parameters. Comparisons between results from the simulations and results from detector tests will be presented.

  7. A Novel Sub-pixel Measurement Algorithm Based on Mixed the Fractal and Digital Speckle Correlation in Frequency Domain

    Directory of Open Access Journals (Sweden)

    Zhangfang Hu

    2014-10-01

    Full Text Available Digital speckle correlation is a non-contact in-plane displacement measurement method based on machine vision. Motivated by the low accuracy and heavy computational load of the traditional spatial-domain digital speckle correlation method, we introduce a sub-pixel displacement measurement algorithm that combines a fast interpolation method based on fractal theory with digital speckle correlation in the frequency domain. This algorithm overcomes both the blocking effect and the blurring caused by traditional interpolation methods, and the frequency-domain processing avoids the repeated searching required by correlation recognition in the spatial domain, so the computational load is greatly reduced and information is extracted faster. A comparative experiment is given to verify that the proposed algorithm is effective.
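
    An illustrative sketch of correlation-based displacement recovery with sub-pixel refinement. The correlation is computed directly here for clarity; the paper computes the same quantity in the frequency domain, where circular correlation reduces to IFFT(FFT(a) · conj(FFT(b))), and uses fractal-based interpolation rather than the simple parabolic fit shown:

```python
def circular_cross_correlation(a, b):
    """Direct circular cross-correlation of two equal-length signals."""
    n = len(a)
    return [sum(a[(i + k) % n] * b[i] for i in range(n)) for k in range(n)]

def subpixel_offset(corr, peak):
    """Parabolic interpolation around the integer peak (sub-pixel part)."""
    n = len(corr)
    c0, c1, c2 = corr[(peak - 1) % n], corr[peak], corr[(peak + 1) % n]
    return 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)

a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]   # reference speckle line
b = a[-3:] + a[:-3]                             # same line shifted right by 3
corr = circular_cross_correlation(a, b)
peak = corr.index(max(corr))
shift = (len(a) - peak) % len(a) + subpixel_offset(corr, peak)
```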

  8. A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem

    Science.gov (United States)

    Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid

    2016-09-01

    Nowadays, under the increasing pressure of a competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has attracted both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem and demonstrate the performance of the solution method.

  9. Constrained multi-objective optimization of radial expanders in organic Rankine cycles by firefly algorithm

    International Nuclear Information System (INIS)

    Bahadormanesh, Nikrouz; Rahat, Shayan; Yarali, Milad

    2017-01-01

    Highlights: • A multi-objective optimization for radial expanders in Organic Rankine Cycles is implemented. • Using the firefly algorithm, a Pareto front based on turbine size and thermal efficiency is produced. • Tension and vibration constraints have a significant effect on optimum design points. - Abstract: Organic Rankine Cycles are viable energy conversion systems for sustainable energy due to their compatibility with low-temperature heat sources. In the present study, a one-dimensional model of radial expanders is prepared in conjunction with a thermodynamic model of organic Rankine cycles. After verification, with the thermal efficiency of the cycle and the size parameter of a radial turbine defined as the objective functions, a multi-objective optimization was conducted subject to tension and vibration constraints for 4 different organic working fluids (R22, R245fa, R236fa and N-Pentane). In addition to the mass flow rate, the evaporator temperature, the maximum cycle pressure and the turbomachinery design parameters are selected as decision variables. The Pareto fronts show that a small increase in radial expander size makes high efficiency feasible. Moreover, by assessing the distribution of the decision variables, the variables that dominate the trade-off between the objective functions are identified. The effects of mechanical and vibration constraints on the optimum decision variables are investigated. The optimization results can be used as initial values for the design of radial turbines for Organic Rankine Cycles.
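
    For background, the attraction rule at the heart of any firefly algorithm can be sketched in its single-objective form; the constrained multi-objective version used in the paper layers Pareto ranking and constraint handling on top of this. All parameter values below are illustrative assumptions:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.1, seed=1):
    """Minimal single-objective firefly sketch: each firefly moves toward
    every brighter (lower-objective) firefly, plus a small random step."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]          # brightness at start of iteration
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:      # firefly j is brighter: attract i
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
    return min(xs, key=f)

best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```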

  10. Optimum analysis of pavement maintenance using multi-objective genetic algorithms

    Directory of Open Access Journals (Sweden)

    Amr A. Elhadidy

    2015-04-01

    Full Text Available Road network expansion in Egypt is considered a vital issue for the development of the country, alongside upgrading current road networks to increase safety and efficiency. A pavement management system (PMS) is a set of tools or methods that assist decision makers in finding optimum strategies for providing and maintaining pavements in a serviceable condition over a given period of time. This paper discusses a multi-objective optimization problem for pavement maintenance and rehabilitation strategies at the network level. A two-objective optimization model considers minimum action costs and maximum condition for the road network. In the proposed approach, Markov-chain models are used to predict pavement performance and to calculate the expected decline over different periods of time. A genetic-algorithm-based procedure is developed for solving the multi-objective optimization problem. The model searches for the optimum maintenance actions, applied at the right time to the appropriate pavement sections. Based on the computed results, the Pareto optimal solutions of the two-objective optimization functions are obtained. From the optimal solutions, represented by cost and condition, a decision maker can easily obtain maintenance and rehabilitation plans with minimum action costs and maximum condition. The developed model has been applied to a road network and showed its ability to derive the optimal solution.
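
    The Markov-chain prediction step can be sketched as repeated multiplication of a condition-probability vector by a transition matrix. The 3-state matrix below is hypothetical, for illustration only:

```python
def predict_condition(state_probs, transition, years):
    """Propagate a pavement-condition probability vector through a Markov
    transition matrix for a number of years (no maintenance applied)."""
    for _ in range(years):
        state_probs = [
            sum(state_probs[i] * transition[i][j] for i in range(len(state_probs)))
            for j in range(len(transition[0]))
        ]
    return state_probs

# Hypothetical 3-state model (Good, Fair, Poor) with deterioration only.
T = [[0.8, 0.2, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
p0 = [1.0, 0.0, 0.0]            # the section starts in Good condition
p2 = predict_condition(p0, T, 2)
```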

  11. A Pareto-based multi-objective optimization algorithm to design energy-efficient shading devices

    International Nuclear Information System (INIS)

    Khoroshiltseva, Marina; Slanzi, Debora; Poli, Irene

    2016-01-01

    Highlights: • We present a multi-objective optimization algorithm for shading design. • We combine Harmony search and Pareto-based procedures. • Thermal and daylighting performances of external shading were considered. • We applied the optimization process to a residential social housing in Madrid. - Abstract: In this paper we address the problem of designing new energy-efficient static daylight devices that will surround the external windows of a residential building in Madrid. Shading devices can in fact largely influence solar gains in a building and improve thermal and lighting comforts by selectively intercepting the solar radiation and by reducing the undesirable glare. A proper shading device can therefore significantly increase the thermal performance of a building by reducing its energy demand in different climate conditions. In order to identify the set of optimal shading devices that allow a low energy consumption of the dwelling while maintaining high levels of thermal and lighting comfort for the inhabitants we derive a multi-objective optimization methodology based on Harmony Search and Pareto front approaches. The results show that the multi-objective approach here proposed is an effective procedure in designing energy efficient shading devices when a large set of conflicting objectives characterizes the performance of the proposed solutions.

  12. A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm.

    Directory of Open Access Journals (Sweden)

    Tao Yuan

    Full Text Available Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSI). In this study, a new IQA method of standardizing the target object recognition rate (ORR) is presented to reflect quality. First, several quality degradation treatments with high-resolution ORSIs are implemented to model the ORSIs obtained in different imaging conditions; then, a machine learning algorithm is adopted for recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators was performed to reveal their applicability and limitations. The results showed that the ORR of the original ORSI was calculated to be up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. The results show that these data can more accurately reflect the advantages and disadvantages of different images in object identification and information extraction when compared with conventional digital image assessment indexes. By recognizing the difference in image quality from the application effect perspective, using a machine learning algorithm to extract regional gray scale features of typical objects in the image for analysis, and quantitatively assessing quality of ORSI according to the difference, this method provides a new approach for objective ORSI assessment.

  13. A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm.

    Science.gov (United States)

    Yuan, Tao; Zheng, Xinqi; Hu, Xuan; Zhou, Wei; Wang, Wei

    2014-01-01

    Objective and effective image quality assessment (IQA) is directly related to the application of optical remote sensing images (ORSI). In this study, a new IQA method of standardizing the target object recognition rate (ORR) is presented to reflect quality. First, several quality degradation treatments with high-resolution ORSIs are implemented to model the ORSIs obtained in different imaging conditions; then, a machine learning algorithm is adopted for recognition experiments on a chosen target object to obtain ORRs; finally, a comparison with commonly used IQA indicators was performed to reveal their applicability and limitations. The results showed that the ORR of the original ORSI was calculated to be up to 81.95%, whereas the ORR ratios of the quality-degraded images to the original images were 65.52%, 64.58%, 71.21%, and 73.11%. The results show that these data can more accurately reflect the advantages and disadvantages of different images in object identification and information extraction when compared with conventional digital image assessment indexes. By recognizing the difference in image quality from the application effect perspective, using a machine learning algorithm to extract regional gray scale features of typical objects in the image for analysis, and quantitatively assessing quality of ORSI according to the difference, this method provides a new approach for objective ORSI assessment.

  14. A new algorithm for least-cost path analysis by correcting digital elevation models of natural landscapes

    Science.gov (United States)

    Baek, Jieun; Choi, Yosoon

    2017-04-01

    Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and the adjacent cells to incorporate terrain-slope weights into the calculation of travel costs. However, these algorithms have the limitation that they cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation lie between the source cell and the target cell. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find possible paths satisfying a constraint of maximum or minimum slope gradient. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds possible paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, weighted by the difference between the original and corrected elevations, the algorithm determines the least-cost path. The results of applying the proposed algorithm to synthetic and real-world data sets show that the new algorithm provides more accurate least-cost paths than conventional algorithms implemented in commercial GIS software such as ArcGIS.
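
    For contrast with the paper's DEM-correction scheme, a plain slope-constrained least-cost search can be sketched with Dijkstra's algorithm on a 4-connected grid, treating moves that violate the slope constraint as impassable. The cost model here is an illustrative assumption; the unreachable-goal case in the example is exactly the obstacle situation the paper's elevation correction is designed to resolve:

```python
import heapq

def least_cost_path(dem, cell_size, max_slope, start, goal):
    """Dijkstra over a DEM grid; a move whose slope gradient exceeds
    max_slope is treated as impassable (illustrative cost model)."""
    rows, cols = len(dem), len(dem[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                          # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            rise = abs(dem[nr][nc] - dem[r][c])
            if rise / cell_size > max_slope:
                continue                      # slope constraint violated
            cost = d + cell_size + rise       # horizontal step plus climb
            if cost < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = cost
                heapq.heappush(pq, (cost, (nr, nc)))
    return float("inf")

dem = [[0.0, 0.0, 0.0],
       [0.0, 9.0, 0.0],       # central peak: every move onto it is too steep
       [0.0, 0.0, 0.0]]
cost = least_cost_path(dem, cell_size=1.0, max_slope=1.0,
                       start=(0, 0), goal=(2, 2))   # routes around the peak
```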

  15. Cyndi: a multi-objective evolution algorithm based method for bioactive molecular conformational generation.

    Science.gov (United States)

    Liu, Xiaofeng; Bai, Fang; Ouyang, Sisheng; Wang, Xicheng; Li, Honglin; Jiang, Hualiang

    2009-03-31

    Conformation generation is a ubiquitous problem in molecule modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systemic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolutionary algorithm. The conformational perturbation is subjected to evolutionary operation on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto optimal conformers to be energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations which have been observed to impact the molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105-112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 +/- 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms the four other multiple-conformer generators in the case of

  16. Cyndi: a multi-objective evolution algorithm based method for bioactive molecular conformational generation

    Directory of Open Access Journals (Sweden)

    Li Honglin

    2009-03-01

    Full Text Available Abstract Background Conformation generation is a ubiquitous problem in molecule modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systemic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolutionary algorithm. Results The conformational perturbation is subjected to evolutionary operation on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto optimal conformers to be energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations which have been observed to impact the molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105–112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 ± 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. The copy of precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. Conclusion On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms

  17. Towards Emulation-as-a-Service: Cloud Services for Versatile Digital Object Access

    Directory of Open Access Journals (Sweden)

    Dirk von Suchodoletz

    2013-06-01

    Full Text Available The changing world of IT services opens the chance to more tightly integrate digital long-term preservation into systems, both for commercial and end users. The emergence of cloud offerings re-centralizes services, and end users interact with them remotely through standardized (web-client applications on their various devices. This offers the chance to use partially the same concepts and methods to access obsolete computer environments and allows for more sustainable business processes. In order to provide a large variety of user-friendly remote emulation services, especially in combination with authentic performance and user experience, a distributed system model and architecture is required, suitable to run as a cloud service, allowing for the specialization both of memory institutions and third party service providers. The shift of the usually non-trivial task of the emulation of obsolete software environments from the end user to specialized providers can help to simplify digital preservation and access strategies. Besides offering their users better access to their holdings, libraries and archives may gain new business opportunities to offer services to a third party, such as businesses requiring authentic reproduction of digital objects and processes for legal reasons. This paper discusses cloud concepts as the next logical step for accessing original digital material. Emulation-as-a-Service (EaaS) fills the gap between the successful demonstration of emulation strategies as a long-term access strategy and its perceived availability and usability. EaaS can build upon the ground of research and prototypical implementations of previous projects, and reuse well established remote access technology. In this article we develop requirements and a system model, suitable for a distributed environment. We will discuss the building blocks of the core services as well as requirements regarding access management. Finally, we will try to present a

  18. A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems

    NARCIS (Netherlands)

    Hamdy, M.; Nguyen, A.T. (Anh Tuan); Hensen, J.L.M.

    2016-01-01

    Integrated building design is inherently a multi-objective optimization problem where two or more conflicting objectives must be minimized and/or maximized concurrently. Many multi-objective optimization algorithms have been developed; however few of them are tested in solving building design

  19. Evaluation of a digital learning object (DLO) to support the learning process in radiographic dental diagnosis.

    Science.gov (United States)

    Busanello, F H; da Silveira, P F; Liedke, G S; Arús, N A; Vizzotto, M B; Silveira, H E D; Silveira, H L D

    2015-11-01

    Studies have shown that inappropriate therapeutic strategies may be adopted if crown and root changes are misdiagnosed, potentially leading to undesirable consequences. Therefore, the aim of this study was to evaluate a digital learning object, developed to improve skills in diagnosing radiographic dental changes. The object was developed using Visual Basic for Applications (VBA) software and evaluated by 62 undergraduate students (male: 24 and female: 38) taking an imaging diagnosis course. Participants were divided into two groups: a test group, which used the object, and a control group, which attended conventional classes. After 3 weeks, students answered a 10-question test and took a practice test to diagnose 20 changes in periapical radiographs. The results show that the test group performed better than the control group in both tests, with a statistically significant difference (P = 0.004 and 0.003, respectively). Overall, female students performed better than male students. Specific aspects of object usability were assessed using a structured questionnaire based on the System Usability Scale (SUS), with scores of 90.5 and 81.6 from male and female students, respectively. The results obtained in this study suggest that students who used the DLO performed better than those who used conventional methods. This suggests that the DLO may be a useful teaching tool for dentistry undergraduates, on distance learning courses and as a complementary tool in face-to-face teaching. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. A hybrid evolutionary algorithm for multi-objective anatomy-based dose optimization in high-dose-rate brachytherapy

    International Nuclear Information System (INIS)

    Lahanas, M; Baltas, D; Zamboglou, N

    2003-01-01

    Multiple objectives must be considered in anatomy-based dose optimization for high-dose-rate brachytherapy and a large number of parameters must be optimized to satisfy often competing objectives. For objectives expressed solely in terms of dose variances, deterministic gradient-based algorithms can be applied and a weighted sum approach is able to produce a representative set of non-dominated solutions. As the number of objectives increases, or non-convex objectives are used, local minima can be present and deterministic or stochastic algorithms such as simulated annealing either cannot be used or are not efficient. In this case we employ a modified hybrid version of the multi-objective optimization algorithm NSGA-II. This, in combination with the deterministic optimization algorithm, produces a representative sample of the Pareto set. This algorithm can be used with any kind of objectives, including non-convex, and does not require artificial importance factors. A representation of the trade-off surface can be obtained with more than 1000 non-dominated solutions in 2-5 min. An analysis of the solutions provides information on the possibilities available using these objectives. Simple decision making tools allow the selection of a solution that provides a best fit for the clinical goals. We show an example with a prostate implant and compare results obtained by variance and dose-volume histogram (DVH) based objectives
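
    The Pareto (non-dominated) set referred to above is defined by dominance between objective vectors. A minimal sketch of extracting the non-dominated solutions from a candidate pool, assuming all objectives are minimised (illustrative only, not the authors' hybrid NSGA-II):

```python
def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors
    (all objectives to be minimised). A point is dominated if another
    point is no worse in every objective and strictly better in one."""
    front = []
    for i, p in enumerate(solutions):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(solutions) if j != i)
        if not dominated:
            front.append(p)
    return front

# (4, 4) is dominated by (3, 3); the other three points trade off.
print(sorted(pareto_front([(1, 5), (5, 1), (3, 3), (4, 4)])))
# -> [(1, 5), (3, 3), (5, 1)]
```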

  1. Kinetic Digitally-Driven Architectural Structures as ‘Marginal’ Objects – a Conceptual Framework

    Directory of Open Access Journals (Sweden)

    Sokratis Yiannoudes

    2014-07-01

    Full Text Available Although the most important reasons for designing digitally-driven kinetic architectural structures seem to be practical ones, namely functional flexibility and adaptation to changing conditions and needs, this paper argues that there is possibly an additional socio-cultural aspect driving their design and construction. Through this argument, the paper attempts to debate their status and question their concepts and practices. Looking at the design explorations and discourses of real or visionary technologically-augmented architecture since the 1960s, one cannot fail to notice the use of biological metaphors and concepts to describe them – an attempt to ‘naturalise’ them which culminates today in the conception of kinetic structures and intelligent environments as literally ‘alive’. Examining these attitudes in contemporary examples, the paper demonstrates that digitally-driven kinetic structures can be conceived as artificial ‘living’ machines that undermine the boundary between the natural and the artificial. It argues that by ‘humanising’ these structures, attributing biological characteristics such as self-initiated motion, intelligence and reactivity, their designers are ‘trying’ to subvert and blur the human-machine(-architecture) discontinuity. The argument is developed by building a conceptual framework which is based on evidence from the social studies of science and technology, in particular their critique of modern nature-culture and human-machine distinctions, as well as the history and theory of artificial life, which discusses the cultural significance and sociology of ‘living’ objects. In particular, the paper looks into the techno-scientific discourses and practices which, since the 18th century, have been exploring the creation of ‘marginal’ objects, i.e. seemingly alive objects made to challenge the nature-artifice boundary.

  2. Multi-Objective Two-Dimensional Truss Optimization by using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Harun Alrasyid

    2011-05-01

    Full Text Available During the last three decades, many mathematical programming methods have been developed for solving optimization problems. However, no single method has been found to be entirely efficient and robust for the wide range of engineering optimization problems. Most design applications in civil engineering involve selecting values for a set of design variables that best describe the behavior and performance of the particular problem while satisfying the requirements and specifications imposed by codes of practice. The introduction of Genetic Algorithms (GAs) into the field of structural optimization has opened new avenues for research because they have been successfully applied where traditional methods have failed. GAs are efficient and broadly applicable global search procedures based on a stochastic approach which relies on a “survival of the fittest” strategy. GAs are search algorithms that are based on the concepts of natural selection and natural genetics. In this research, multi-objective sizing and configuration optimization of a two-dimensional truss has been conducted using a genetic algorithm. Some preliminary runs of the GA were conducted to determine the best combinations of GA parameters, such as population size and probability of mutation, so as to get better scaling for the rest of the runs. Comparing the results from sizing and sizing–configuration optimization, a significant reduction in weight and deflection can be obtained. Sizing–configuration optimization produces a lighter structure and smaller displacements than sizing optimization alone. The results were obtained using a GA with relative ease (computationally) and are very competitive compared to those obtained from other methods of truss optimization.
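
    The GA loop described above — tournament selection by "survival of the fittest", crossover and mutation — can be sketched for a sizing-type problem. Everything below (the operators, the surrogate weight/deflection fitness, the parameter values) is an invented toy, not the paper's implementation:

```python
import random

def ga_minimise(fitness, n_vars, bounds, pop_size=40, gens=100,
                p_mut=0.1, seed=0):
    """Toy real-coded genetic algorithm: binary tournament selection,
    arithmetic crossover and uniform mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 2), key=fitness)  # tournament 1
            p2 = min(rng.sample(pop, 2), key=fitness)  # tournament 2
            w = rng.random()                           # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            for k in range(n_vars):                    # uniform mutation
                if rng.random() < p_mut:
                    child[k] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical sizing problem: minimise total 'weight' sum(x), with a
# penalty when the surrogate 'deflection' 1/sum(x) exceeds a limit.
def weight_with_penalty(x):
    weight = sum(x)
    deflection = 1.0 / max(weight, 1e-9)
    return weight + 1e3 * max(0.0, deflection - 0.5)

best = ga_minimise(weight_with_penalty, n_vars=3, bounds=(0.1, 10.0))
```

    The penalty term is one simple way to fold code-of-practice constraints into a single fitness value; the optimum of this toy sits where the deflection constraint becomes active (total "weight" of 2).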

  3. Optimization of externalities using DTM measures: a Pareto optimal multi objective optimization using the evolutionary algorithm SPEA2+

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart

    2010-01-01

    Multi objective optimization of externalities of traffic is performed solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.

  4. Selection of security system design via games of imperfect information and multi-objective genetic algorithm

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Rêgo, Leandro Chaves; Moura, Márcio das Chagas

    2013-01-01

    This work analyzes the strategic interaction between a defender and an intelligent attacker by means of a game and reliability framework involving a multi-objective approach and imperfect information so as to support decision-makers in choosing efficiently designed security systems. A multi-objective genetic algorithm is used to determine the optimal security system's configurations representing the tradeoff between the probability of a successful defense and the acquisition and operational costs. Games with imperfect information are considered, in which the attacker has limited knowledge about the actual security system. The types of security alternatives are readily observable, but the number of redundancies actually implemented in each security subsystem is not known. The proposed methodology is applied to an illustrative example considering power transmission lines in the Northeast of Brazil, which are often targets for attackers who aim at selling the aluminum conductors. The empirical results show that the framework succeeds in handling this sort of strategic interaction. -- Highlights: ► Security components must have feasible costs and must be reliable. ► The optimal design of security systems considers a multi-objective approach. ► Games of imperfect information enable the choice of non-dominated configurations. ► MOGA, reliability and games support the entire defender's decision process. ► The selection of effective security systems may discourage attacker's actions
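
    The "probability of a successful defense" objective for a system of redundant security subsystems is commonly modelled as a series-parallel reliability structure: the defense holds only if every subsystem holds, and a subsystem holds if at least one of its redundant components works. A sketch under that standard assumption (the probabilities and redundancy counts below are hypothetical, not the paper's data):

```python
def defense_success_probability(subsystems):
    """Series-parallel model: each (p, n) pair is a subsystem with n
    redundant components, each succeeding independently with
    probability p. The whole defense succeeds only if every subsystem
    stops the attacker."""
    prob = 1.0
    for p, n in subsystems:
        prob *= 1.0 - (1.0 - p) ** n  # at least one of n redundancies works
    return prob

# Two hypothetical subsystems: cameras (p=0.9, duplicated) and
# fencing (p=0.8, single unit).
print(round(defense_success_probability([(0.9, 2), (0.8, 1)]), 3))  # -> 0.792
```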

  5. Multi-objective optimization of an underwater compressed air energy storage system using genetic algorithm

    International Nuclear Information System (INIS)

    Cheung, Brian C.; Carriveau, Rupp; Ting, David S.K.

    2014-01-01

    This paper presents the findings from a multi-objective genetic algorithm optimization study on the design parameters of an underwater compressed air energy storage system (UWCAES). A 4 MWh UWCAES system was numerically simulated and its energy, exergy, and exergoeconomics were analysed. Optimal system configurations were determined that maximized the UWCAES system round-trip efficiency and operating profit, and minimized the cost rate of exergy destruction and capital expenditures. The optimal solutions obtained from the multi-objective optimization model formed a Pareto-optimal front, and a single preferred solution was selected using the pseudo-weight vector multi-criteria decision making approach. A sensitivity analysis was performed on interest rates to gauge its impact on preferred system designs. Results showed similar preferred system designs for all interest rates in the studied range. The round-trip efficiency and operating profit of the preferred system designs were approximately 68.5% and $53.5/cycle, respectively. The cost rate of the system increased with interest rates. - Highlights: • UWCAES system configurations were developed using multi-objective optimization. • System was optimized for energy efficiency, exergy, and exergoeconomics • Pareto-optimal solution surfaces were developed at different interest rates. • Similar preferred system configurations were found at all interest rates studied
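
    The pseudo-weight vector approach mentioned above picks one preferred design from the Pareto front by matching each solution's implied objective weights against the decision-maker's preference. A sketch, assuming all objectives are expressed in minimisation form (the example front is invented):

```python
def pseudo_weights(front):
    """Pseudo-weight vector of each Pareto solution: the normalised
    distance of each objective from its worst value on the front,
    renormalised to sum to one (after Deb's multi-criteria
    decision-making approach)."""
    m = len(front[0])
    fmin = [min(s[k] for s in front) for k in range(m)]
    fmax = [max(s[k] for s in front) for k in range(m)]
    out = []
    for s in front:
        raw = [(fmax[k] - s[k]) / ((fmax[k] - fmin[k]) or 1.0)
               for k in range(m)]
        total = sum(raw) or 1.0
        out.append([r / total for r in raw])
    return out

def pick(front, target):
    """Choose the front member whose pseudo-weights best match 'target'."""
    ws = pseudo_weights(front)
    return min(range(len(front)),
               key=lambda i: sum((w - t) ** 2 for w, t in zip(ws[i], target)))

front = [(1.0, 5.0), (3.0, 3.0), (5.0, 1.0)]
print(front[pick(front, (0.5, 0.5))])  # balanced preference -> (3.0, 3.0)
```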

  6. MULTI-OBJECTIVE OPTIMISATION OF LASER CUTTING USING CUCKOO SEARCH ALGORITHM

    Directory of Open Access Journals (Sweden)

    M. MADIĆ

    2015-03-01

    Full Text Available Determining optimal laser cutting conditions for improving cut quality characteristics is of great importance in process planning. This paper presents multi-objective optimisation of the CO2 laser cutting process considering three cut quality characteristics: surface roughness, heat affected zone (HAZ) and kerf width. It combines an experimental design using Taguchi’s method, modelling of the relationships between the laser cutting factors (laser power, cutting speed, assist gas pressure and focus position) and cut quality characteristics by artificial neural networks (ANNs), formulation of the multi-objective optimisation problem using the weighting sum method, and solving it by the novel meta-heuristic cuckoo search algorithm (CSA). The objective is to obtain optimal cutting conditions dependent on the importance order of the cut quality characteristics for each of the four case studies presented in this paper. The case studies considered in this study are: minimisation of cut quality characteristics with equal priority, minimisation of cut quality characteristics with priority given to surface roughness, minimisation of cut quality characteristics with priority given to HAZ, and minimisation of cut quality characteristics with priority given to kerf width. The results indicate that the applied CSA for solving the multi-objective optimisation problem is effective, and that the proposed approach can be used for selecting the optimal laser cutting factors for specific production requirements.
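
    The weighting sum method referred to above scalarises the three cut-quality characteristics into a single cost, with the weights encoding the priority order of each case study. A sketch, with invented objective ranges used for normalisation:

```python
def weighted_sum(objectives, weights, ranges):
    """Scalarise several objectives (here: surface roughness, HAZ, kerf
    width) into one cost by normalising each to [0, 1] over its observed
    range and applying priority weights."""
    return sum(w * (f - lo) / (hi - lo)
               for f, w, (lo, hi) in zip(objectives, weights, ranges))

# Illustrative ranges: Ra in [1, 6] um, HAZ in [0.1, 0.5] mm,
# kerf width in [0.2, 0.6] mm.
ranges = [(1.0, 6.0), (0.1, 0.5), (0.2, 0.6)]
equal = weighted_sum([3.5, 0.3, 0.4], [1 / 3, 1 / 3, 1 / 3], ranges)
print(round(equal, 3))  # -> 0.5
```

    Shifting weight onto one objective (e.g. surface roughness) reproduces the priority-based case studies: the optimiser then trades away HAZ and kerf width to lower the favoured term.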

  7. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    Directory of Open Access Journals (Sweden)

    Ziho Kang

    2016-01-01

    Full Text Available Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) the visual angle error of the eye trackers prevents them from providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interest (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT), which controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations, where air traffic control specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
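
    The AOI gap tolerance idea can be illustrated for the rectangular-AOI case: the AOI around a moving object is grown on every side by the AGT so that fixations displaced by visual-angle error still register as hits. All coordinates and the gap value below are invented:

```python
def rect_aoi(x, y, w, h, gap):
    """Rectangular AOI around a moving object, grown on every side by
    the AOI gap tolerance (AGT) to absorb eye-tracker visual-angle
    error. Returns (x0, y0, x1, y1) corner coordinates."""
    return (x - gap, y - gap, x + w + gap, y + h + gap)

def fixation_hits(fx, fy, aoi):
    """Hit-test a fixation point against an AOI rectangle."""
    x0, y0, x1, y1 = aoi
    return x0 <= fx <= x1 and y0 <= fy <= y1

# A hypothetical 40x20 px aircraft label at (100, 200) with AGT = 10 px.
aircraft = rect_aoi(100, 200, 40, 20, gap=10)
print(fixation_hits(95, 195, aircraft))  # -> True (inside the tolerance band)
print(fixation_hits(80, 195, aircraft))  # -> False
```

    Raising the AGT captures more error-displaced fixations but increases the chance that overlapping aircraft AOIs claim the same fixation, which is why a near-optimal AGT has to be found.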

  8. Objective assessment in digital images of skin erythema caused by radiotherapy

    International Nuclear Information System (INIS)

    Matsubara, H.; Matsufuji, N.; Tsuji, H.; Yamamoto, N.; Karasawa, K.; Nakajima, M.; Karube, M.; Takahashi, W.

    2015-01-01

    Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty due to the proposed method with photography was 15%; such an uncertainty was not critical for assessment of skin erythema in practical use. Conclusions: Feasibility of the proposed method for assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helped to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increased the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability
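
    The reported linearity was quantified by correlation coefficients (above 0.7) between skin dose, blood flow and the pixel values of the ICA-derived hemoglobin images. A sketch of that check on invented paired measurements (the study's actual data are not reproduced here):

```python
import numpy as np

# Hypothetical paired measurements per follow-up visit: skin dose [Gy]
# and mean pixel value in the hemoglobin color image obtained by
# independent component analysis.
dose = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
hb_pixel = np.array([0.02, 0.11, 0.19, 0.33, 0.38])

# Pearson correlation coefficient between dose and hemoglobin response;
# values above ~0.7 indicate the kind of linearity reported above.
r = np.corrcoef(dose, hb_pixel)[0, 1]
print(round(float(r), 3))
```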

  9. Objective assessment in digital images of skin erythema caused by radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsubara, H., E-mail: matubara@nirs.go.jp; Matsufuji, N.; Tsuji, H.; Yamamoto, N.; Karasawa, K.; Nakajima, M.; Karube, M. [National Institute of Radiological Sciences (NIRS), Chiba 263-8555 (Japan); Takahashi, W. [Department of Radiology, University of Tokyo Hospital, Tokyo 113-8655 (Japan)

    2015-09-15

    Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty due to the proposed method with photography was 15%; such an uncertainty was not critical for assessment of skin erythema in practical use. Conclusions: Feasibility of the proposed method for assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helped to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increased the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability.

  10. Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs

    International Nuclear Information System (INIS)

    Rehm, K.

    1992-01-01

    This dissertation presents a contrast-enhancement algorithm, Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution and with a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost the diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm that is in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency
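
    The noise over-enhancement that plagues plain AHE is commonly limited by clipping the histogram before building the equalisation mapping. The sketch below shows that clipping idea on a single global histogram; it is a simplified, single-tile stand-in for the adaptive (tiled) scheme the dissertation modifies, not the ASAHE algorithm itself:

```python
import numpy as np

def clipped_hist_equalize(img, clip_fraction=0.01, bins=256):
    """Histogram equalisation with clipping: counts above the clip limit
    are redistributed uniformly before the cumulative mapping is built,
    which caps the slope of the mapping and so limits noise
    over-enhancement. Expects pixel values in [0, 1]."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    clip = max(1, int(clip_fraction * img.size))
    excess = np.maximum(hist - clip, 0).sum()
    hist = np.minimum(hist, clip) + excess // bins  # redistribute excess
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)  # map pixels through the CDF

rng = np.random.default_rng(0)
img = rng.random((64, 64)) ** 3          # dark, low-contrast test image
out = clipped_hist_equalize(img)          # brighter, flatter histogram
```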

  11. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    Full Text Available This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In MMALs several products are made on an assembly line, and the similarity of these products is so high that it is possible to assemble several types of products simultaneously without any additional setup times. The problem has some particular features such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small size problems and the performance of the foregoing algorithms is compared based on some comparison metrics. The NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, but in other metrics MOPSO is better than NSGA-II. Finally, conclusions and future research directions are provided.
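
    The comparison metrics used to contrast algorithms such as NSGA-II and MOPSO can be illustrated with the simple two-set coverage metric C(A, B). The fronts below are invented for illustration:

```python
def coverage(a, b):
    """Two-set coverage metric C(A, B): the fraction of solutions in B
    that are dominated by at least one solution in A (all objectives
    minimised). C(A, B) = 1 means A dominates all of B."""
    def dominates(p, q):
        return (all(x <= y for x, y in zip(p, q)) and
                any(x < y for x, y in zip(p, q)))
    return sum(any(dominates(p, q) for p in a) for q in b) / len(b)

# Front 'a' dominates every point of front 'b', but not vice versa.
a = [(1, 4), (2, 2), (4, 1)]
b = [(2, 5), (3, 3), (5, 2)]
print(coverage(a, b))  # -> 1.0
print(coverage(b, a))  # -> 0.0
```

    Note that C is not symmetric, so both C(A, B) and C(B, A) are reported when two algorithms are compared, which is consistent with each algorithm winning on some metrics and losing on others.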

  12. ESDORA: A Data Archive Infrastructure Using Digital Object Model and Open Source Frameworks

    Science.gov (United States)

    Shrestha, Biva; Pan, Jerry; Green, Jim; Palanisamy, Giriprakash; Wei, Yaxing; Lenhardt, W.; Cook, R. Bob; Wilson, B. E.; Leggott, M.

    2011-12-01

    There are an array of challenges associated with preserving, managing, and using contemporary scientific data. Large volume, multiple formats and data services, and the lack of a coherent mechanism for metadata/data management are some of the common issues across data centers. It is often difficult to preserve the data history and lineage information, along with other descriptive metadata, hindering the true science value for the archived data products. In this project, we use digital object abstraction architecture as the information/knowledge framework to address these challenges. We have used the following open-source frameworks: Fedora-Commons Repository, Drupal Content Management System, Islandora (Drupal Module) and Apache Solr Search Engine. The system is an active archive infrastructure for Earth Science data resources, which include ingestion, archiving, distribution, and discovery functionalities. We use an ingestion workflow to ingest the data and metadata, where many different aspects of data descriptions (including structured and non-structured metadata) are reviewed. The data and metadata are published after reviewing multiple times. They are staged during the reviewing phase. Each digital object is encoded in XML for long-term preservation of the content and relations among the digital items. The software architecture provides a flexible, modularized framework for adding pluggable user-oriented functionality. Solr is used to enable word search as well as faceted search. A home grown spatial search module is plugged in to allow user to make a spatial selection in a map view. A RDF semantic store within the Fedora-Commons Repository is used for storing information on data lineage, dissemination services, and text-based metadata. We use the semantic notion "isViewerFor" to register internally or externally referenced URLs, which are rendered within the same web browser when possible. With appropriate mapping of content into digital objects, many

  13. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
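
    The registration step described above — predicting a feature's position from the IMU and refining it by template matching — can be reduced to a toy pure-translation version: the IMU guess narrows the search window, and matching finds the residual shift. The patch size, search radius and frames below are all illustrative, not the paper's pipeline:

```python
import numpy as np

def best_shift(ref, img, patch, guess, radius=3):
    """Estimate the translation of an 8x8 patch between two frames by
    exhaustive template matching (sum of squared differences) around an
    initial guess, mimicking how an IMU-predicted position narrows the
    search window."""
    (py, px), (gy, gx) = patch, guess
    t = ref[py:py + 8, px:px + 8]
    best, best_d = (0, 0), np.inf
    for dy in range(gy - radius, gy + radius + 1):
        for dx in range(gx - radius, gx + radius + 1):
            c = img[py + dy:py + dy + 8, px + dx:px + dx + 8]
            d = ((c - t) ** 2).sum()
            if d < best_d:
                best, best_d = (dy, dx), d
    return best

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shifted = np.roll(ref, (2, 1), axis=(0, 1))   # next frame moved by (2, 1)
print(best_shift(ref, shifted, patch=(10, 10), guess=(0, 0)))  # -> (2, 1)
```

    Once such shifts are estimated for many feature points, the frames can be resampled to a common geometry and summed into the stacked long-exposure equivalent.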

  14. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Ahmad Audi

    2017-07-01

    Full Text Available Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  15. Identifying groups of critical edges in a realistic electrical network by multi-objective genetic algorithms

    International Nuclear Information System (INIS)

    Zio, E.; Golea, L.R.; Rocco S, C.M.

    2012-01-01

    In this paper, an analysis of the vulnerability of the Italian high-voltage (380 kV) electrical transmission network (HVIET) is carried out for the identification of the groups of links (or edges, or arcs) most critical considering the network structure and flow. Betweenness centrality and network connection efficiency variations are considered as measures of the importance of the network links. The search of the most critical ones is carried out within a multi-objective optimization problem aimed at the maximization of the importance of the groups and minimization of their dimension. The problem is solved using a genetic algorithm. The analysis is based only on information on the topology of the network and leads to the identification of the most important single component, couples of components, triplets and so forth. The comparison of the results obtained with those reported by previous analyses indicates that the proposed approach provides useful complementary information.
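
    The "network connection efficiency variation" importance measure can be sketched on a toy topology: compute the all-pairs efficiency, remove one edge at a time, and rank edges by the resulting efficiency drop. Like the paper's analysis, this uses only structural (topological) information; the example graph is invented:

```python
from collections import deque

def efficiency(n, edges):
    """Global connection efficiency: average of 1/d(i, j) over all
    ordered node pairs, with shortest-path distances d from BFS.
    Unreachable pairs contribute zero."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

def edge_importance(n, edges):
    """Rank each edge by the efficiency drop caused by removing it."""
    base = efficiency(n, edges)
    return sorted(((base - efficiency(n, [e for e in edges if e != g]), g)
                   for g in edges), reverse=True)

# Two triangles joined by a single bridge edge (2, 3): removing the
# bridge disconnects the network, so it ranks as the most critical edge.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
print(edge_importance(6, edges)[0][1])  # -> (2, 3)
```

    Searching for the most critical *groups* of edges, as the paper does, turns this ranking into a combinatorial multi-objective problem (maximise importance, minimise group size), which is why a genetic algorithm is used instead of exhaustive enumeration.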

  16. Subjective and Objective Quality Assessment of Single-Channel Speech Separation Algorithms

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll

    2012-01-01

    Previous studies on performance evaluation of single-channel speech separation (SCSS) algorithms mostly focused on automatic speech recognition (ASR) accuracy as their performance measure. Assessing the separated signals by metrics other than this has the benefit that the results … are expected to carry on to other applications beyond ASR. In this paper, in addition to conventional speech quality metrics (PESQ and SNRloss), we also evaluate the separation systems output using different source separation metrics: blind source separation evaluation (BSS EVAL) and perceptual evaluation … that PESQ and PEASS quality metrics predict well the subjective quality of separated signals obtained by the separation systems. From the results it is observed that the short-time objective intelligibility (STOI) measure predicts the speech intelligibility results …

  17. Intersection signal control multi-objective optimization based on genetic algorithm

    Directory of Open Access Journals (Sweden)

    Zhanhong Zhou

    2014-04-01

    Full Text Available A signal control intersection increases not only vehicle delay, but also vehicle emissions and fuel consumption in that area. Because fuel consumption and air pollution problems have become increasingly serious in recent years, an intersection signal control optimization method that reduces vehicle emissions, fuel consumption and vehicle delay is strongly needed. This paper proposed a signal control multi-objective optimization method to reduce vehicle emissions, fuel consumption and vehicle delay simultaneously at an intersection. The optimization method combined the Paramics microscopic traffic simulation software, the Comprehensive Modal Emissions Model (CMEM), and a genetic algorithm. An intersection in Haizhu District, Guangzhou, was taken for a case study. The result of the case study shows that the optimal timing scheme obtained from this method is better than the Webster timing scheme.

  18. Combining soft decision algorithms and scale-sequential hypotheses pruning for object recognition

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, V.P.; Manolakos, E.S. [Northeastern Univ., Boston, MA (United States)

    1996-12-31

    This paper describes a system that exploits the synergy of Hierarchical Mixture Density (HMD) estimation with multiresolution-decomposition-based hypothesis pruning to efficiently perform joint segmentation and labeling of partially occluded objects in images. First we present the overall structure of the HMD estimation algorithm in the form of a recurrent neural network which generates the posterior probabilities of the various hypotheses associated with the image. Then, in order to reduce the large memory and computation requirements, we propose a hypothesis pruning scheme making use of the orthonormal discrete wavelet transform for dimensionality reduction. We provide an intuitive justification for the validity of this scheme and present experimental results and performance analysis on real and synthetic images to verify our claims.
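
    The dimensionality-reduction step relies on the orthonormal discrete wavelet transform. A one-level Haar example shows how a signal is reduced to its coarse (average) coefficients, on which hypotheses can be pruned cheaply before survivors are re-examined at full resolution (the input vector is invented):

```python
import numpy as np

def haar_coarse(signal):
    """One level of the orthonormal Haar discrete wavelet transform,
    keeping only the coarse (scaled pairwise-average) coefficients.
    Halves the dimensionality while preserving the signal's energy
    split between coarse and detail channels."""
    pairs = signal.reshape(-1, 2)
    return (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)

x = np.array([4.0, 2.0, 5.0, 5.0])
print(haar_coarse(x))  # -> [4.24264069 7.07106781]
```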

  19. A possibilistic approach to rotorcraft design through a multi-objective evolutionary algorithm

    Science.gov (United States)

    Chae, Han Gil

    Most of the engineering design processes in use today in the field may be considered as a series of successive decision making steps. The decision maker uses information at hand, determines the direction of the procedure, and generates information for the next step and/or other decision makers. However, the information is often incomplete, especially in the early stages of the design process of a complex system. As the complexity of the system increases, uncertainties eventually become unmanageable using traditional tools. In such a case, the tools and analysis values need to be "softened" to account for the designer's intuition. One of the methods that deals with issues of intuition and incompleteness is possibility theory. Through the use of possibility theory coupled with fuzzy inference, the uncertainties estimated by the intuition of the designer are quantified for design problems. By involving quantified uncertainties in the tools, the solutions can represent a possible set, instead of a crisp spot, for predefined levels of certainty. From a different point of view, it is a well known fact that engineering design is a multi-objective problem or a set of such problems. The decision maker aims to find satisfactory solutions, sometimes compromising the objectives that conflict with each other. Once the candidates of possible solutions are generated, a satisfactory solution can be found by various decision-making techniques. A number of multi-objective evolutionary algorithms (MOEAs) have been developed, and can be found in the literature, which are capable of generating alternative solutions and evaluating multiple sets of solutions in one single execution of an algorithm. One of the MOEA techniques that has been proven to be very successful for this class of problems is the strength Pareto evolutionary algorithm (SPEA) which falls under the dominance-based category of methods. 
The Pareto dominance that is used in SPEA, however, is not enough to account for the
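The Pareto-dominance relation on which SPEA's fitness assignment rests can be stated in a few lines. The sketch below is illustrative only (minimization convention; not code from the dissertation):

```python
def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the vectors (1,5), (2,4), (3,3), (2,6), (4,4), the first three form the Pareto front: (2,6) is dominated by (2,4) and (4,4) by (3,3).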

  20. Combination of digital signal processing methods towards an improved analysis algorithm for structural health monitoring.

    Science.gov (United States)

    Pentaris, Fragkiskos P.; Makris, John P.

    2013-04-01

    In Structural Health Monitoring (SHM) it is of great importance to extract from the recorded SHM data valuable information that could be used to predict or indicate structural fault or damage in a building. In this work a combination of digital signal processing methods, namely the FFT along with the wavelet transform, is applied, together with a proposed algorithm for studying frequency dispersion, in order to depict non-linear characteristics of SHM data collected in two university buildings under natural or anthropogenic excitation. The selected buildings are of great importance from a civil protection point of view, as they are the premises of a public higher education institute and undergo heavy use, stress and visits from academic staff and students. The SHM data are collected from two neighboring buildings of different ages (4 and 18 years old, respectively). The proposed digital signal processing methods are applied to the data, and a comparison is presented of the structural behavior of both buildings in response to seismic activity, weather conditions and man-made activity. Acknowledgments: This work was supported in part by the Archimedes III Program of the Ministry of Education of Greece, through the Operational Program "Educational and Lifelong Learning", in the framework of the project entitled «Interdisciplinary Multi-Scale Research of Earthquake Physics and Seismotectonics at the front of the Hellenic Arc (IMPACT-ARC)» and is co-financed by the European Union (European Social Fund) and Greek National Fund.
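As a toy illustration of the FFT step in such an analysis pipeline, the sketch below locates the dominant spectral peak of a record. A naive O(n^2) DFT is used for self-containedness; in practice numpy.fft would be the tool of choice:

```python
import cmath, math

def dft(signal):
    """Naive discrete Fourier transform; O(n^2), for illustration only."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest spectral peak,
    ignoring the DC bin and the mirrored upper half of the spectrum."""
    spectrum = dft(signal)
    half = len(signal) // 2
    k = max(range(1, half), key=lambda i: abs(spectrum[i]))
    return k * sample_rate / len(signal)
```

A 5 Hz sinusoid sampled at 100 Hz for one second yields a dominant frequency of 5.0 Hz.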

  1. Using digital educational objects to teach human body systems at a countryside school

    Directory of Open Access Journals (Sweden)

    Silvio Ferreira dos Santos

    2017-12-01

    Full Text Available The purpose of this study is to assess the use of educational software and mobile device applications to teach about the human body at a rural school. It is an action research study with a qualitative approach, developed in 2016 and involving 14 students from the 8th year of Elementary Education at the Escola Estadual Sol Nascente in Confresa-MT. The chosen software and applications were the Human Body Atlas and 3D Human Body Systems, with emphasis on the digestive and circulatory systems. The results from the pre- and post-test, which together comprised 20 questions, corroborate the hypothesis that the use of digital educational objects benefits the education process. Students learned the systems better than when only traditional resources, such as a textbook, were used. Such advancement can be attributed to the fact that these resources use 3D images and point out each part of the body, besides providing important information and curiosities about the subject. It is therefore expected that digital technologies will be increasingly inserted and explored in pedagogical practices, since these resources allow for studying and solving queries by broadening educational space and time.

  2. Constructing a graph of connections in clustering algorithm of complex objects

    Directory of Open Access Journals (Sweden)

    Татьяна Шатовская

    2015-05-01

    Full Text Available The article describes the results of modifying the Chameleon algorithm. The hierarchical multi-level algorithm consists of several phases: graph construction, coarsening, partitioning and uncoarsening (refinement). Various approaches and algorithms can be used in each phase. The main aim of the work is to study the clustering quality on different data sets using combinations of algorithms at the different stages of the algorithm, and to improve the construction stage by optimizing the choice of k in the construction of the k-nearest-neighbor graph.
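The k-nearest-neighbor graph construction with which Chameleon-style algorithms begin can be sketched as follows (an illustrative brute-force version, not the authors' implementation):

```python
import math

def knn_graph(points, k):
    """Build a symmetric k-nearest-neighbor graph as an adjacency dict:
    each point is linked to its k closest points by Euclidean distance."""
    adj = {i: set() for i in range(len(points))}
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j)
                       for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            adj[i].add(j)
            adj[j].add(i)  # symmetrize the edge set
    return adj
```

The choice of k studied in the article trades sparsity (small k isolates clusters) against connectivity (large k merges them); brute force is O(n^2 log n), so spatial indexes are used for large data sets.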

  3. An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective

    Directory of Open Access Journals (Sweden)

    Ľuboš Buzna

    2014-01-01

    Full Text Available We present a new approximation algorithm for the discrete facility location problem that provides solutions close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows finding equitable locations of facilities serving a large number of customers. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. By numerical experiments, we demonstrate that our algorithm increases the size of solvable problems and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with an exact algorithm.
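The lexicographic minimax preference itself is simple to state: compare the worst customer distance first, break ties on the next-worst, and so on. A minimal sketch of the relation (illustrative only; the paper's algorithm works through p-median subproblems, not by enumeration):

```python
def leximax_key(distances):
    """Sort customer-to-facility distances in non-increasing order;
    comparing these vectors lexicographically implements the
    lexicographic minimax ('equitable') preference."""
    return sorted(distances, reverse=True)

def leximax_better(a, b):
    """True if allocation a is strictly preferred to b under lexicographic
    minimax: smaller worst distance, ties broken by the next-worst, etc."""
    return leximax_key(a) < leximax_key(b)
```

For instance, distances [3, 1, 2] beat [3, 2, 2]: the worst values tie at 3, but the second-worst (2 vs 2) ties and the third-worst (1 vs 2) decides.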

  4. Comparative Study of Evolutionary Multi-objective Optimization Algorithms for a Non-linear Greenhouse Climate Control Problem

    DEFF Research Database (Denmark)

    Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2015-01-01

    Non-trivial real-world decision-making processes usually involve multiple parties having potentially conflicting interests over a set of issues. State-of-the-art multi-objective evolutionary algorithms (MOEA) are well known to solve this class of complex real-world problems. In this paper, we compare the performance of state-of-the-art multi-objective evolutionary algorithms in solving a non-linear multi-objective multi-issue optimisation problem found in greenhouse climate control. The chosen algorithms in the study include NSGAII, eNSGAII, eMOEA, PAES, PESAII and SPEAII. The performance of all the aforementioned algorithms is assessed and compared using performance indicators that evaluate proximity, diversity and consistency. The insights from this comparative study enhance our understanding of MOEA performance in solving a non-linear, complex climate control problem. The empirical...

  5. Quantitative Trait Loci Mapping Problem: An Extinction-Based Multi-Objective Evolutionary Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Nicholas S. Flann

    2013-09-01

    Full Text Available The Quantitative Trait Loci (QTL) mapping problem aims to identify regions in the genome that are linked to phenotypic features of the developed organism that vary in degree. It is a principal step in determining targets for further genetic analysis and is key in decoding the role of specific genes that control quantitative traits within species. Applications include identifying genetic causes of disease, optimization of cross-breeding for desired traits and understanding trait diversity in populations. In this paper a new multi-objective evolutionary algorithm (MOEA) method is introduced and is shown to increase the accuracy of QTL mapping identification for both independent and epistatic loci interactions. The MOEA method optimizes over the space of possible partial least squares (PLS) regression QTL models and considers the conflicting objectives of model simplicity versus model accuracy. By optimizing for minimal model complexity, MOEA has the advantage of solving the over-fitting problem of conventional PLS models. The effectiveness of the method is confirmed by comparing the new method with Bayesian Interval Mapping approaches over a series of test cases where the optimal solutions are known. This approach can be applied to many problems that arise in analysis of genomic data sets where the number of features far exceeds the number of observations and where features can be highly correlated.

  6. Objective Function and Learning Algorithm for the General Node Fault Situation.

    Science.gov (United States)

    Xiao, Yi; Feng, Rui-Bin; Leung, Chi-Sing; Sum, John

    2016-04-01

    Fault tolerance is an interesting property of artificial neural networks. However, the existing fault models are able to describe only limited node fault situations, such as stuck-at-zero and stuck-at-one. There is no general model that is able to describe a large class of node fault situations. This paper studies the performance of faulty radial basis function (RBF) networks for the general node fault situation. We first propose a general node fault model that is able to describe a large class of node fault situations, such as stuck-at-zero, stuck-at-one, and stuck-at levels with arbitrary distributions. Afterward, we derive an expression describing the performance of faulty RBF networks. An objective function is then identified from the formula. With the objective function, a training algorithm for the general node fault situation is developed. Finally, a mean prediction error (MPE) formula that is able to estimate the test set error of faulty networks is derived. The application of the MPE formula in the selection of basis width is elucidated. Simulation experiments are then performed to demonstrate the effectiveness of the proposed method.

  7. Probing optimal measurement configuration for optical scatterometry by the multi-objective genetic algorithm

    Science.gov (United States)

    Chen, Xiuguo; Gu, Honggang; Jiang, Hao; Zhang, Chuanwei; Liu, Shiyuan

    2018-04-01

    Measurement configuration optimization (MCO) is a ubiquitous and important issue in optical scatterometry, whose aim is to probe the optimal combination of measurement conditions, such as wavelength, incidence angle, azimuthal angle, and/or polarization directions, to achieve a higher measurement precision for a given measuring instrument. In this paper, the MCO problem is investigated and formulated as a multi-objective optimization problem, which is then solved by the multi-objective genetic algorithm (MOGA). The case study on the Mueller matrix scatterometry for the measurement of a Si grating verifies the feasibility of the MOGA in handling the MCO problem in optical scatterometry by making a comparison with the Monte Carlo simulations. Experiments performed at the achieved optimal measurement configuration also show good agreement between the measured and calculated best-fit Mueller matrix spectra. The proposed MCO method based on MOGA is expected to provide a more general and practical means to solve the MCO problem in the state-of-the-art optical scatterometry.

  8. Global shape optimization of airfoil using multi-objective genetic algorithm

    International Nuclear Information System (INIS)

    Lee, Ju Hee; Lee, Sang Hwan; Park, Kyoung Woo

    2005-01-01

    The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without weighting factors by using the multi-objective genetic algorithm. An NACA0012 airfoil is considered as the baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. The two curves from the leading edge to the point of maximum thickness are composed of five control points each, and the rest, from the point of maximum thickness to the trailing edge, are composed of four control points each. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen generations, twenty Pareto individuals are obtained. One Pareto solution, the best at reducing the drag force, improves the drag by 13% and the lift-drag ratio by 2%. Another Pareto solution, which focuses on increasing the lift force, improves the lift force by 61% while sustaining the drag force, compared to the baseline model

  9. Global shape optimization of airfoil using multi-objective genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju Hee; Lee, Sang Hwan [Hanyang Univ., Seoul (Korea, Republic of); Park, Kyoung Woo [Hoseo Univ., Asan (Korea, Republic of)

    2005-10-01

    The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without weighting factors by using the multi-objective genetic algorithm. An NACA0012 airfoil is considered as the baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. The two curves from the leading edge to the point of maximum thickness are composed of five control points each, and the rest, from the point of maximum thickness to the trailing edge, are composed of four control points each. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen generations, twenty Pareto individuals are obtained. One Pareto solution, the best at reducing the drag force, improves the drag by 13% and the lift-drag ratio by 2%. Another Pareto solution, which focuses on increasing the lift force, improves the lift force by 61% while sustaining the drag force, compared to the baseline model.
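The Bezier parameterization used in the two airfoil records above can be evaluated with de Casteljau's algorithm; the sketch below is illustrative, not code from the paper (a five-control-point curve, as used for the forward section, is simply the degree-four case):

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] with de Casteljau's
    algorithm: repeatedly interpolate between consecutive control points
    until a single point remains."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

The design variables of the optimization are then simply the coordinates of the control points, which the genetic algorithm perturbs to reshape the profile.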

  10. Clustering Educational Digital Library Usage Data: A Comparison of Latent Class Analysis and K-Means Algorithms

    Science.gov (United States)

    Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei

    2013-01-01

    This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…
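Of the two algorithms compared in the article, K-means is easy to sketch; a plain Lloyd's-iteration version in Python (deterministic seeding chosen for brevity; the article's experimental setup is not reproduced here):

```python
import math

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its cluster. Model-based
    Latent Class Analysis instead fits a finite mixture by maximum
    likelihood, which is the comparison studied in the article."""
    centroids = [tuple(p) for p in points[:k]]  # simple deterministic seeding
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[c]
                     for c, cl in enumerate(clusters)]
    return centroids, clusters
```

On two well-separated blobs the iteration converges to the blob means within a few passes.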

  11. Application of the tuning algorithm with the least squares approximation to the suboptimal control algorithm for integrating objects

    Science.gov (United States)

    Kuzishchin, V. F.; Merzlikina, E. I.; Van Va, Hoang

    2017-11-01

    The problem of tuning PID and PI algorithms by least-squares approximation of the frequency response of the linear algorithm to that of the suboptimal algorithm is considered. The advantage of the method is that the parameter values are obtained in one cycle of calculation. Recommendations on how to choose the parameters of the least-squares method, taking the plant dynamics into consideration, are given. The parameters mentioned are the time constant of the filter, the approximation frequency range and the correction coefficient for the time-delay parameter. The problem is considered for integrating plants in some practical cases (the level control system in a boiler drum). The transfer function of the suboptimal algorithm is determined with respect to a disturbance acting at the point where the control action is applied, which is typical of thermal plants. The recommendations also take into consideration that the overshoot of the transient response to a setpoint change is limited. In order to compare the results, the systems under consideration are also tuned by the classical method with a limited frequency oscillation index. The results given in the paper can be used by specialists tuning control systems with integrating plants.

  12. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
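Euclid's algorithm mentioned above fits in a few lines; a standard Python rendering (not from the source record):

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) by
    (b, a mod b) until the remainder is zero; the survivor is the gcd."""
    while b:
        a, b = b, a % b
    return a
```

For example, gcd(252, 105) proceeds 252, 105, 42, 21, 0 and returns 21.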

  13. Predicting peptides binding to MHC class II molecules using multi-objective evolutionary algorithms

    Directory of Open Access Journals (Sweden)

    Feng Lin

    2007-11-01

    Full Text Available Abstract Background Peptides binding to Major Histocompatibility Complex (MHC class II molecules are crucial for initiation and regulation of immune responses. Predicting peptides that bind to a specific MHC molecule plays an important role in determining potential candidates for vaccines. The binding groove in class II MHC is open at both ends, allowing peptides longer than 9-mer to bind. Finding the consensus motif facilitating the binding of peptides to a MHC class II molecule is difficult because of different lengths of binding peptides and varying location of 9-mer binding core. The level of difficulty increases when the molecule is promiscuous and binds to a large number of low affinity peptides. In this paper, we propose two approaches using multi-objective evolutionary algorithms (MOEA for predicting peptides binding to MHC class II molecules. One uses the information from both binders and non-binders for self-discovery of motifs. The other, in addition, uses information from experimentally determined motifs for guided-discovery of motifs. Results The proposed methods are intended for finding peptides binding to MHC class II I-Ag7 molecule – a promiscuous binder to a large number of low affinity peptides. Cross-validation results across experiments on two motifs derived for I-Ag7 datasets demonstrate better generalization abilities and accuracies of the present method over earlier approaches. Further, the proposed method was validated and compared on two publicly available benchmark datasets: (1 an ensemble of qualitative HLA-DRB1*0401 peptide data obtained from five different sources, and (2 quantitative peptide data obtained for sixteen different alleles comprising of three mouse alleles and thirteen HLA alleles. The proposed method outperformed earlier methods on most datasets, indicating that it is well suited for finding peptides binding to MHC class II molecules. Conclusion We present two MOEA-based algorithms for finding motifs

  14. Generation of a Skeleton Corpus of Digital Objects for the Validation and Evaluation of Format Identification Tools and Signatures

    Directory of Open Access Journals (Sweden)

    Ross Spencer

    2013-06-01

    Full Text Available To preserve digital information it is vital that the format of that information can be identified, in perpetuity. This is a major focus of research within the field of Digital Preservation. The National Archives of the UK called for the Digital Preservation and Digital Curation communities to develop a test corpus of digital objects to help further develop tools for this purpose. Following that call, an attempt has been made to develop the suite. This paper initially outlines a methodology to generate a skeleton corpus using simple user-generated digital objects. It then explores the lessons learnt in generating a corpus with scripting-language techniques from the file format signatures described in The National Archives' PRONOM technical registry. It also discusses the use of the digital signature for this purpose and the benefits of developing a test corpus using this technique. Finally, the paper outlines a methodology for future research before exploring how the community can best make use of the output of this project and how the project needs to be taken forward to completion.
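As a hedged illustration of the signature-based skeleton-corpus idea, the sketch below matches leading "magic bytes" and generates a minimal file that triggers them. The three signatures shown are well-known examples chosen for the sketch; real PRONOM signatures also specify offsets, variable byte sequences and priorities, which this toy table omits:

```python
# Miniature signature table (illustrative; not the PRONOM registry format).
SIGNATURES = {
    b"%PDF-": "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"PK\x03\x04": "ZIP container",
}

def identify(first_bytes):
    """Match the leading bytes of a file against the known signatures."""
    for magic, fmt in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return fmt
    return "unknown"

def skeleton_file(magic, padding=16):
    """Generate a minimal 'skeleton' object: just enough bytes to trigger
    the signature, padded with zeros; such files validate identification
    tools without needing real-world content."""
    return magic + b"\x00" * padding
```

A skeleton corpus is then simply one such generated file per registered signature, which is exactly what makes it useful for regression-testing identification tools.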

  15. The digital object identifier (DOI in electronic scientific journals of communication and information

    Directory of Open Access Journals (Sweden)

    Erik André de Nazaré Pires

    2017-06-01

    Full Text Available The present study investigates the use of the Digital Object Identifier (DOI) in scientific journals of Communication and Information, and its new integration utilities with the Lattes Platform. In this sense, it aims to survey the existing Communication and Information titles in electronic format, demonstrate the importance of the DOI in the integration with the Lattes Platform as a guarantee of author credibility, and analyze the characteristics of the publications that have a DOI. The methodology used for the development of this study is bibliographic research with descriptive characteristics. From the research, it is inferred that of all the analyzed journals (33 journals), 10 titles from the 2013 evaluation and 6 titles from the 2014 evaluation present a DOI in their publications; all have a WebQualis classification, Qualis A1, in the Communication and Information area. Most publications are international and only 3 titles are national. It is necessary that journals, principally national ones, keep up with new technologies such as the DOI for objects and ORCID for the identification of people, bringing more mechanisms that guarantee authors' credibility and connect researchers; both can already be adopted in the Lattes Platform.

  16. A Case Study: Optimal Stage Gauge Network Using Multi Objective Genetic Algorithm

    Science.gov (United States)

    Joo, H. J.; Han, D.; Jung, J.; Kim, H. S.

    2017-12-01

    Recently, the possibility of localized strong heavy rainfall due to climate change has been increasing, and flood damage also shows an increasing trend in Korea. We therefore need more precise hydrologic analysis to prepare alternatives or measures for flood reduction under climate conditions that are difficult to predict. To do this, obtaining reliable hydrologic data, for example stage data, is very important. However, the existing stage gauge stations are scattered around the country, making it difficult to maintain them in a stable manner and, subsequently, hard to acquire hydrologic data that reflect localized hydrologic characteristics. In order to overcome such restrictions, this paper not only aims to establish a plan to acquire stage data in a constant and proper manner using limited manpower and costs, but also establishes the fundamental technology for acquiring water level observation data, or stage data. For that, this paper identifies the current status of the stage gauge stations installed in the Chung-Ju dam basin on the Han River, Korea, and extracts the factors related to the division and characteristics of basins. The obtained factors are then used to develop the representative unit hydrograph that shows the characteristics of flow. After that, the data are converted into a probability density function and the stations at individual basins are selected using entropy theory. In the last step, we establish the optimized stage gauge network from the locations and grades of the stage stations using the Multi Objective Genetic Algorithm (MOGA) technique, which accounts for combinations of the number of stations. It is expected that this paper can help establish an optimal observational network of stage gauges, as it can be applied usefully not only for protecting against floods in a stable manner, but also for acquiring hydrologic data in an efficient manner.
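The entropy step described in the record above can be illustrated with a small sketch (equal-width binning is an assumption here; the paper's exact discretization is not specified in the abstract):

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=5):
    """Discretize a stage record into equal-width bins and compute the
    Shannon entropy (bits) of the resulting histogram; in entropy-based
    network design, stations whose records carry higher entropy convey
    more information."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # constant records collapse to one bin
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant record has zero entropy, while a record spread evenly over five bins attains the maximum log2(5) ≈ 2.32 bits.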

  17. Multi-objective optimization in the presence of practical constraints using non-dominated sorting hybrid cuckoo search algorithm

    Directory of Open Access Journals (Sweden)

    M. Balasubbareddy

    2015-12-01

    Full Text Available A novel optimization algorithm is proposed to solve single and multi-objective optimization problems with generation fuel cost, emission, and total power losses as objectives. The proposed method is a hybridization of the conventional cuckoo search algorithm and arithmetic crossover operations. Thus, the non-linear, non-convex objective function can be solved under practical constraints. The effectiveness of the proposed algorithm is analyzed for various cases to illustrate the effect of practical constraints on the objectives' optimization. Two and three objective multi-objective optimization problems are formulated and solved using the proposed non-dominated sorting-based hybrid cuckoo search algorithm. The effectiveness of the proposed method in confining the Pareto front solutions in the solution region is analyzed. The results for single and multi-objective optimization problems are physically interpreted on standard test functions as well as the IEEE-30 bus test system with supporting numerical and graphical results and also validated against existing methods.

  18. Multi-objective optimization of p-xylene oxidation process using an improved self-adaptive differential evolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Lili Tao; Bin Xu; Zhihua Hu; Weimin Zhong

    2017-01-01

    The rise in global polyester fiber use has contributed to strong demand for terephthalic acid (TPA). The liquid-phase catalytic oxidation of p-xylene (PX) to TPA is regarded as a critical and efficient chemical process in industry [1]. The PX oxidation reaction involves many complex side reactions, among which acetic acid combustion and PX combustion are the most important. As the target product of this oxidation process, the quality and yield of TPA are of great concern. However, improving the qualified product yield can bring about high energy consumption, which means that the economic objectives of this process cannot be achieved simultaneously because the two objectives conflict with each other. In this paper, an improved self-adaptive multi-objective differential evolution algorithm is proposed to handle such multi-objective optimization problems. The immune concept is introduced into the self-adaptive multi-objective differential evolution algorithm (SADE) to strengthen its local search ability and optimization accuracy. The proposed algorithm is successfully tested on several benchmark problems, and performance measures such as convergence and divergence metrics are calculated. Subsequently, the multi-objective optimization of an industrial PX oxidation process is carried out using the proposed immune self-adaptive multi-objective differential evolution algorithm (ISADE). Optimization results indicate that applying ISADE can greatly improve the yield of TPA with low combustion loss and without degrading TPA quality.
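For orientation, a minimal single-objective DE/rand/1/bin loop is sketched below; the ISADE of the paper additionally self-adapts F and CR, adds immune-inspired local search and handles multiple objectives, none of which this toy shows:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin minimizer: mutate with a scaled difference of
    two random members added to a third, crossover per dimension with
    probability CR, and keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            trial = [
                min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                if rng.random() < CR else pop[i][d]
                for d in range(dim)]
            if f(trial) <= f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)
```

On a two-dimensional sphere function the loop drives the best member close to the origin within the default budget.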

  19. The Sloan Digital Sky Survey-II Supernova Survey: Search Algorithm and Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Sako, Masao; /Pennsylvania U. /KIPAC, Menlo Park; Bassett, Bruce; /Cape Town U. /South African Astron. Observ.; Becker, Andrew; /Washington U., Seattle, Astron. Dept.; Cinabro, David; /Wayne State U.; DeJongh, Don Frederic; /Fermilab; Depoy, D.L.; /Ohio State U.; Doi, Mamoru; /Tokyo U.; Garnavich, Peter M.; /Notre Dame U.; Craig, Hogan, J.; /Washington U., Seattle, Astron. Dept.; Holtzman, Jon; /New Mexico State U.; Jha, Saurabh; /Stanford U., Phys. Dept.; Konishi, Kohki; /Tokyo U.; Lampeitl, Hubert; /Baltimore, Space; Marriner, John; /Fermilab; Miknaitis, Gajus; /Fermilab; Nichol, Robert C.; /Portsmouth U.; Prieto, Jose Luis; /Ohio State U.; Richmond, Michael W.; /Rochester Inst.; Schneider, Donald P.; /Penn State U., Astron. Astrophys.; Smith, Mathew; /Portsmouth U.; SubbaRao, Mark; /Chicago U. /Tokyo U. /Tokyo U. /South African Astron. Observ. /Tokyo

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg^2 region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  20. Optimal Allocation of Generalized Power Sources in Distribution Network Based on Multi-Objective Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Li Ran

    2017-01-01

    Full Text Available The optimal allocation of generalized power sources in a distribution network is researched. A simple voltage stability index is put forward. Considering the investment and operation benefit, the voltage stability and the pollution emissions of generalized power sources in the distribution network, a multi-objective optimization planning model is established. A multi-objective particle swarm optimization algorithm is proposed to solve the model. In order to improve the global search ability, the strategies of fast non-dominated sorting, elitism and crowding distance are adopted in this algorithm. Finally, the model and algorithm were tested on the IEEE 33-node system to find the best configuration of generalized power sources. The computed results show that with reasonable access of generalized power sources to the active distribution network, the investment benefit and the voltage stability of the system are improved, and the proposed algorithm has better global search capability.
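A minimal single-objective particle swarm skeleton is sketched below for orientation; the multi-objective algorithm of the record replaces the single global best with a non-dominated archive maintained by fast non-dominated sorting and crowding distance (the sketch is an illustration under those stated simplifications, not the authors' code):

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=2):
    """Plain PSO: each particle's velocity blends inertia, attraction to
    its personal best and attraction to the swarm's global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest
```

On a two-dimensional sphere function the swarm collapses onto the origin well within the default iteration budget.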

  1. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    Science.gov (United States)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. 
To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial

  2. Parallel Multi-Objective Genetic Algorithm for Short-Term Economic Environmental Hydrothermal Scheduling

    Directory of Open Access Journals (Sweden)

    Zhong-Kai Feng

    2017-01-01

    Full Text Available With the increasingly serious energy crisis and environmental pollution, the short-term economic environmental hydrothermal scheduling (SEEHTS problem is becoming more and more important in modern electrical power systems. In order to handle the SEEHTS problem efficiently, the parallel multi-objective genetic algorithm (PMOGA is proposed in the paper. Based on the Fork/Join parallel framework, PMOGA divides the whole population of individuals into several subpopulations which will evolve in different cores simultaneously. In this way, PMOGA can avoid the wastage of computational resources and increase the population diversity. Moreover, the constraint handling technique is used to handle the complex constraints in SEEHTS, and a selection strategy based on constraint violation is also employed to ensure the convergence speed and solution feasibility. The results from a hydrothermal system in different cases indicate that PMOGA can make the utmost of system resources to significantly improve the computing efficiency and solution quality. Moreover, PMOGA has competitive performance in SEEHTS when compared with several other methods reported in the previous literature, providing a new approach for the operation of hydrothermal systems.
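
    The Fork/Join subpopulation scheme described above resembles an island-model GA. The following sketch is hypothetical (Python threads standing in for the authors' Fork/Join framework, and a toy objective instead of the SEEHTS model): each subpopulation evolves independently and the islands are merged at the end.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evolve_island(pop, fitness, gens=50, mut=0.1):
    """Evolve one subpopulation independently: tournament selection,
    midpoint crossover, Gaussian mutation, and elitist truncation."""
    for _ in range(gens):
        new = []
        for _ in range(len(pop)):
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            child = [(x + y) / 2 + random.gauss(0, mut) for x, y in zip(p1, p2)]
            new.append(child)
        pop = sorted(new + pop, key=fitness)[:len(pop)]  # elitism
    return pop

def parallel_ga(fitness, dim=3, islands=4, size=20):
    pops = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
            for _ in range(islands)]
    with ThreadPoolExecutor(max_workers=islands) as ex:
        results = list(ex.map(lambda p: evolve_island(p, fitness), pops))
    # merge all islands and return the overall best individual
    return min((ind for pop in results for ind in pop), key=fitness)

best = parallel_ga(lambda x: sum(v * v for v in x))
```

    Running islands concurrently keeps cores busy and, because the islands drift apart, also maintains population diversity, the two benefits the abstract attributes to PMOGA.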

  3. Multi-Objective Optimization of Squeeze Casting Process using Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Patel G.C.M.

    2016-09-01

    Full Text Available The near-net-shape manufacturing ability of the squeeze casting process requires setting the process variable combinations at their optimal levels to obtain both aesthetic appearance and internal soundness of the cast parts. The aesthetic and internal soundness of cast parts are characterized by surface roughness and tensile strength, which can readily put the part in service without the need for costly secondary manufacturing processes (such as polishing, shot blasting, plating, or heat treatment). It is difficult to determine the levels of the process variables (that is, pressure duration, squeeze pressure, pouring temperature and die temperature) for extreme values of the responses (that is, surface roughness, yield strength and ultimate tensile strength) due to conflicting requirements. In the present manuscript, three population-based search and optimization methods, namely the genetic algorithm (GA), particle swarm optimization (PSO) and multi-objective particle swarm optimization based on crowding distance (MOPSO-CD), have been used to optimize multiple outputs simultaneously. Further, validation tests have been conducted for the optimal casting conditions suggested by GA, PSO and MOPSO-CD. The results showed that PSO outperformed GA with regard to computation time.
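
    As an illustration of the single-objective PSO baseline compared in this record, a minimal textbook PSO looks like the sketch below. It is not the authors' implementation, and a toy sphere objective stands in for the casting-response models:

```python
import random

def pso(f, dim=2, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer; minimizes f over [-5, 5]^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: x[0] ** 2 + x[1] ** 2)
```

    MOPSO-CD extends this loop with an external archive of non-dominated solutions, from which leaders are drawn by crowding distance instead of a single global best.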

  4. An objective algorithm for the determination of bone mineral content using dichromatic absorptiometry

    International Nuclear Information System (INIS)

    Appledorn, C.R.; Witt, R.M.; Wellman, H.N.; Johnston, C.C.

    1985-01-01

    The determination of vertebral column bone mineral content by dual photon absorptiometric methods is a problem of continued clinical interest. The more successful methods suffer from the frequent need for operator interaction in order to maintain good precision. The authors have introduced a new objective algorithm that eliminates the subjectiveness of operator interaction without sacrificing reproducibility. The authors' system consists of a modified rectilinear scanner interfaced to a CAMAC acquisition device coupled to a PDP-11V03 minicomputer. The subject is scanned in the supine position with legs elevated to minimize lordosis. The source (Gd-153) and detector are collimated, defining an area of 10 mm x 10 mm at the level of the spine. The transverse scan width is usually 120 mm. Scanning from the iliac crests toward the head, 50 transverse scans at 3 mm y-increments are acquired, with data sampled at approximately 1 mm increments. The data analysis begins with the calculation of an R-value for each pixel in the scan. The calculations for bone mineral content are then performed and various quantities are accumulated. In a reproducibility study of 116 patient studies, the authors achieved a bone mineral/bone area ratio precision (std dev/mean) of 1.37% without operator interaction or vertebral body selection

  5. Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Tong Shu

    2016-03-01

    Full Text Available This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption; namely, disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. Meanwhile, it also focuses on the simultaneous disruption on one node or a number of nodes, simultaneous disruption in one chain or a number of chains and the corresponding mathematical models and exemplification in relation to numerous manufacturing centers and diverse products. Robustness of the design of the supply chain network is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost and the weighing issue is calculated by the multi-objective firefly algorithm for consistency in the results. It has been shown that the total cost achieved by the optimal target function is lower than that at the most effective time of supply chains. In other words, the decrease of expected disruption cost by improving robustness in supply chains is greater than the increase of operating cost by reducing efficiency, thus leading to cost advantage. Consequently, by approximating the Pareto Front Chart of weighing between efficiency and robustness, enterprises can choose appropriate efficiency and robustness for their longer-term development.
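
    The firefly algorithm underlying the multi-objective version used in this record can be sketched in its basic single-objective form. Parameter values and the toy objective below are illustrative, not those of the paper:

```python
import math, random

def firefly(f, dim=2, n=20, iters=60, beta0=1.0, gamma=0.01, alpha=0.2):
    """Minimal single-objective firefly algorithm (minimization).
    Brightness is inverse cost; attractiveness decays with squared
    distance; a decaying random walk provides exploration."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        step = alpha * (0.97 ** t)          # decaying random-walk scale
        bright = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:   # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + step * (random.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    bright[i] = f(xs[i])
    return min(xs, key=f)

best = firefly(lambda x: sum(v * v for v in x))
```

    In the multi-objective setting of the paper, the scalar brightness comparison is replaced by Pareto dominance between the operating-cost and expected-disruption-cost objectives.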

  6. Prediction and optimization of fuel cell performance using a multi-objective genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Marques Hobold, Gustavo [Laboratory of Energy Conversion Engineering and Technology, Federal University of Santa Catarina (Brazil); Washington University in St. Louis, MO 63130 (United States); Agarwal, Ramesh K. [Department of Mechanical Engineering and Materials Science, Washington University in St. Louis, MO 63130 (United States)

    2013-07-01

    The attention that is currently being given to the emission of pollutant gases in the atmosphere has made the fuel cell (FC), an energy conversion device that cleanly converts chemical energy into electrical energy, a good alternative to other technologies that still use carbon-based fuels. The temperature plays an important role on the efficiency of an FC as it influences directly the humidity of the membrane, the reversible thermodynamic potential and the partial pressure of water; therefore the thermal control of the fuel cell is the focus of this paper. We present models for both high and low temperature fuel cells based on the solid-oxide fuel cell (SOFC) and the polymer electrolyte membrane fuel cell (PEMFC). A thermodynamic analysis is performed on the cells and the methods of controlling their temperature are discussed. The cell parameters are optimized for both high and low temperatures using a Java-based multi-objective genetic algorithm, which makes use of the logic of the biological theory of evolution to classify individual parameters based on a fitness function in order to maximize the power of the fuel cell. Applications to high and low temperature fuel cells are discussed.

  7. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Marseguerra, M.; Zio, E.; Canetta, R. [Polytechnic of Milan, Dept. of Nuclear Engineering, Milano (Italy)

    2005-07-01

    The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  8. Multi-objective genetic algorithm parameter estimation in a reduced nuclear reactor model

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.; Canetta, R.

    2005-01-01

    The fast increase in computing power has rendered, and will continue to render, more and more feasible the incorporation of dynamics in the safety and reliability models of complex engineering systems. In particular, the Monte Carlo simulation framework offers a natural environment for estimating the reliability of systems with dynamic features. However, the time-integration of the dynamic processes may render the Monte Carlo simulation quite burdensome so that it becomes mandatory to resort to validated, simplified models of process evolution. Such models are typically based on lumped effective parameters whose values need to be suitably estimated so as to best fit to the available plant data. In this paper we propose a multi-objective genetic algorithm approach for the estimation of the effective parameters of a simplified model of nuclear reactor dynamics. The calibration of the effective parameters is achieved by best fitting the model responses of the quantities of interest to the actual evolution profiles. A case study is reported in which the real reactor is simulated by the QUAndry based Reactor Kinetics (Quark) code available from the Nuclear Energy Agency and the simplified model is based on the point kinetics approximation to describe the neutron balance in the core and on thermal equilibrium relations to describe the energy exchange between the different loops. (authors)

  9. The challenge of objective scar colour assessment in a clinical setting: using digital photography.

    Science.gov (United States)

    Anderson, J C; Hallam, M-J; Nduka, C; Osorio, D

    2015-08-01

    Scar assessment in the clinical setting is typically impeded by a lack of quantitative data and most systems rely on subjective rating scales which are user-dependent and show considerable variability between raters. The growing use of digital photography in medicine suggests a more objective approach to scar evaluation. Our objective was to determine if cameras could be of practical use for measuring colour in a clinical setting. The measurement of colour and reflectance spectra in photographs faces two difficulties: firstly, the effects of variable illumination spectra, and secondly, the need to recover accurate colour and spectral information from the sparse red, green and blue (RGB) camera signals. As a result the colour rendition is often inaccurate, and spectral information is lost. To deal with variable illumination and other factors that systematically affect all reflectance spectra, ColourWorker (a method for image-based colour measurement implemented in software) calibrates the spectral responses of the camera's RGB sensors using a colour standard in the image. To make best use of the calibrated signals, it takes advantage of the fact that although a given RGB signal can be caused by an infinite number of spectra, most natural reflectance spectra vary smoothly and have predictable forms. This means that given a set of examples of spectra produced by the materials of interest, it is possible to estimate the specific spectrum that produced a given RGB signal once corrected for the illumination. We describe a method for recovering spectral and chromatic information relating to surface reflectance from ordinary digital images and apply this to analyse photographs of surgical scars, taken as part of a clinical trial, in an attempt to better quantify clinical scar assessment. It should be noted the pre-existing trial protocol did not allow for a comprehensive evaluation of the accuracy of the method which would require the spectrophotometric measurement of skin regions
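
    ColourWorker's exact procedure is not given in this record; the following toy sketch only illustrates the general idea of estimating a smooth reflectance spectrum from sparse RGB signals using example spectra. All data, sensor curves, and the plain least-squares estimator below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                    # wavelengths, nm

# example spectra of the materials of interest: smooth, broad bumps
centers = rng.uniform(420, 680, (50, 1))
spectra = np.exp(-((wl - centers) ** 2) / (2 * 60.0 ** 2))   # (50, 31)

# toy camera sensitivities: Gaussian R, G, B bands
sens = np.stack([np.exp(-((wl - c) ** 2) / (2 * 40.0 ** 2))
                 for c in (600, 550, 450)])                  # (3, 31)
rgb = spectra @ sens.T                                       # (50, 3) signals

# learn a linear RGB -> spectrum estimator from the example pairs
M, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)            # (3, 31)

# estimate the spectrum behind a new, unseen RGB triple
test_spec = np.exp(-((wl - 520) ** 2) / (2 * 60.0 ** 2))
est = (test_spec @ sens.T) @ M
```

    Because the example spectra are smooth and low-dimensional, even three calibrated channels constrain the estimate usefully; the same reasoning underlies estimating skin and scar spectra from photographs.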

  10. The C4 clustering algorithm: Clusters of galaxies in the Sloan Digital Sky Survey

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Christopher J.; Nichol, Robert; Reichart, Dan; Wechsler, Risa H.; Evrard, August; Annis, James; McKay, Timothy; Bahcall, Neta; Bernardi, Mariangela; Boehringer,; Connolly, Andrew; Goto, Tomo; Kniazev, Alexie; Lamb, Donald; Postman, Marc; Schneider, Donald; Sheth, Ravi; Voges, Wolfgang; /Cerro-Tololo InterAmerican Obs. /Portsmouth U.,

    2005-03-01

    We present the "C4 Cluster Catalog", a new sample of 748 clusters of galaxies identified in the spectroscopic sample of the Second Data Release (DR2) of the Sloan Digital Sky Survey (SDSS). The C4 cluster-finding algorithm identifies clusters as overdensities in a seven-dimensional position and color space, thus minimizing projection effects that have plagued previous optical cluster selection. The present C4 catalog covers ~2600 square degrees of sky and ranges in redshift from z = 0.02 to z = 0.17. The mean cluster membership is 36 galaxies (with redshifts) brighter than r = 17.7, but the catalog includes a range of systems, from groups containing 10 members to massive clusters with over 200 cluster members with redshifts. The catalog provides a large number of measured cluster properties including sky location, mean redshift, galaxy membership, summed r-band optical luminosity (L_r), velocity dispersion, as well as quantitative measures of substructure and the surrounding large-scale environment. We use new, multi-color mock SDSS galaxy catalogs, empirically constructed from the ΛCDM Hubble Volume (HV) Sky Survey output, to investigate the sensitivity of the C4 catalog to the various algorithm parameters (detection threshold, choice of passbands and search aperture), as well as to quantify the purity and completeness of the C4 cluster catalog. These mock catalogs indicate that the C4 catalog is ≈90% complete and 95% pure above M_200 = 1 x 10^14 h^-1 M_sun and within 0.03 ≤ z ≤ 0.12. Using the SDSS DR2 data, we show that the C4 algorithm finds 98% of X-ray identified clusters and 90% of Abell clusters within 0.03 ≤ z ≤ 0.12. Using the mock galaxy catalogs and the full HV dark matter simulations, we show that the L_r of a cluster is a more robust estimator of the halo mass (M_200) than the galaxy line-of-sight velocity dispersion or the richness of the cluster
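
    The core idea of flagging overdensities in a joint position-colour space can be illustrated with a toy neighbour-count sketch. The data, dimensionality, aperture, and threshold below are synthetic stand-ins, not the C4 algorithm's actual seven-dimensional metric or calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "galaxies": 2 spatial + 2 colour coordinates; a compact overdensity
field = rng.uniform(0, 10, (500, 4))              # uniform background
cluster = rng.normal(5.0, 0.15, (40, 4))          # tight clump in all 4 dims
gals = np.vstack([field, cluster])

# count neighbours within a fixed aperture in the joint position-colour space
r = 0.5
d2 = ((gals[:, None, :] - gals[None, :, :]) ** 2).sum(-1)
counts = (d2 < r * r).sum(1) - 1                  # exclude self-match

# flag galaxies whose local density exceeds a field-calibrated threshold
threshold = np.percentile(counts, 95)
members = counts > threshold
```

    Requiring the overdensity in colour dimensions as well as position is what suppresses chance line-of-sight projections of unrelated galaxies.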

  11. A comparison of temporal, spatial and parallel phase shifting algorithms for digital image plane holography

    International Nuclear Information System (INIS)

    Arroyo, M P; Lobera, J

    2008-01-01

    This paper investigates the performance of several phase shifting (PS) techniques when using digital image plane holography (DIPH) as a fluid velocimetry technique. The main focus is on increasing the recording system aperture in order to overcome the limited light available in fluid applications. Some experiments with small rotations of a fluid-like solid object have been used to test the ability of PS-DIPH to faithfully reconstruct the object complex amplitude. Holograms for several apertures and for different defocusing distances have been recorded using spatial phase shifting (SPS) or temporal phase shifting (TPS) techniques. The parallel phase shifted holograms (H_PPS) have been generated from the TPS holograms (H_TPS). The data obtained from TPS-DIPH have been taken as the true object complex amplitude, which is used to benchmark that recovered using the other techniques. The findings of this work show that SPS and PPS are very similar indeed, and suggest that both can work for bigger apertures yet retain phase information
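
    For reference, the standard four-step temporal phase shifting recovery, with frames shifted by pi/2, reduces to a single arctangent. This is the generic TPS formula, not the specific DIPH processing of the paper:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four interferograms shifted by pi/2:
    I_k = A + B*cos(phi + k*pi/2), so I3 - I1 = 2B*sin(phi) and
    I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)

# check against a synthetic fringe pattern with known phase
phi = np.linspace(-3, 3, 100)
frames = [2 + np.cos(phi + k * np.pi / 2) for k in range(4)]
rec = four_step_phase(*frames)
```

    Spatial and parallel phase shifting apply the same arithmetic, but take the four shifted samples from neighbouring pixels of a single exposure rather than from four sequential exposures.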

  12. Object based classification of high resolution data in urban areas considering digital surface models

    OpenAIRE

    Oczipka, Martin Eckhard

    2010-01-01

    Over the last couple of years more and more analogue airborne cameras have been replaced by digital cameras. Digitally recorded image data have significant advantages over film-based data. Digital aerial photographs have a much better radiometric resolution, and image information can be acquired in shaded areas too. This information is essential for a stable and continuous classification, because areas of no data or unclassified pixels should be as small as possible. Considering this technological progress, on...

  13. Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm

    Science.gov (United States)

    Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing

    2018-03-01

    As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis of its hardware and time complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with the 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
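
    The first-order-moment identity that such hardware exploits can be checked in software: with integer coefficients, inputs sharing a coefficient value are summed first, so each distinct value costs one scaling instead of one multiply per tap. The sketch below is a reference check of that identity, not the proposed hardware structure:

```python
import numpy as np

def fir_direct(h, x):
    """Reference direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    return np.convolve(x, h)[:len(x)]

def fir_first_order_moment(h, x):
    """Moment view of the same filter: group taps by coefficient value m,
    accumulate the inputs under each value, then form sum_m m * S_m,
    i.e. a first-order moment computed mostly with additions."""
    h = np.asarray(h, dtype=int)
    y = np.zeros(len(x))
    for n in range(len(x)):
        window = x[max(0, n - len(h) + 1):n + 1][::-1]   # x[n], x[n-1], ...
        taps = h[:len(window)]
        acc = {}                                          # value -> partial sum
        for c, v in zip(taps, window):
            acc[c] = acc.get(c, 0.0) + v
        y[n] = sum(m * s for m, s in acc.items())
    return y

h = [1, 2, 2, 1]                                          # repeated values
x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])
y_ref = fir_direct(h, x)
y_mom = fir_first_order_moment(h, x)
```

    In hardware the remaining per-value scalings become shifts and adds, which is why the structure is multiplier-less.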

  14. Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis

    Science.gov (United States)

    Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.

    2018-04-01

    Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both coefficient of variation and contrast to noise ratio (CNR) were the highest in ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.
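
    The standard log subtraction and an ACNR-style correction compared in this record can be sketched as follows. The weights and blur kernel are illustrative placeholders, not the calibrated values used in the study:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dual_energy_acnr(i_low, i_high, w_t=0.5, w_b=0.5, a=0.3):
    """Log subtraction plus an ACNR-style correction: quantum noise in the
    tissue and bone images is anti-correlated, so adding a high-frequency
    (blurred-subtracted) copy of one image to the other cancels noise
    without shifting the mean signal."""
    tissue = np.log(i_high) - w_t * np.log(i_low)   # suppresses bone
    bone = np.log(i_low) - w_b * np.log(i_high)     # suppresses soft tissue
    tissue_acnr = tissue + a * (bone - box_blur(bone))
    return tissue, bone, tissue_acnr

# constant test exposures: the ACNR term must vanish
i_low = np.full((16, 16), 2.0)
i_high = np.full((16, 16), 3.0)
tissue, bone, tissue_acnr = dual_energy_acnr(i_low, i_high)
```

    SSH instead smooths the high-energy image before subtraction; ACNR's advantage in the study is that it targets only the anti-correlated noise component rather than blurring the signal.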

  15. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Directory of Open Access Journals (Sweden)

    Jiangyi Qin

    Full Text Available A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on the hard decision is deduced in detail, and the standards of parameter recognition are given. Compared with blind recognition based on the hard decision, utilizing the soft decision can improve the accuracy of blind recognition. Therefore, combining with the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on the soft decision is proposed. Meanwhile, the improved algorithm can be extended to other signal modulation forms. Then, the complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can recognize the parameters of frame synchronization words blindly. Moreover, the improved algorithm clearly enhances the accuracy of blind recognition.

  16. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    Science.gov (United States)

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on the hard decision is deduced in detail, and the standards of parameter recognition are given. Compared with blind recognition based on the hard decision, utilizing the soft decision can improve the accuracy of blind recognition. Therefore, combining with the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on the soft decision is proposed. Meanwhile, the improved algorithm can be extended to other signal modulation forms. Then, the complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can recognize the parameters of frame synchronization words blindly. Moreover, the improved algorithm clearly enhances the accuracy of blind recognition.
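
    The benefit of keeping soft decisions can be illustrated with a simple correlation search for a known sync word. This is a toy sketch, not the paper's blind-recognition procedure, which must additionally estimate the unknown word and frame period:

```python
import numpy as np

def find_sync_soft(soft_bits, sync):
    """Slide the bipolar sync pattern over the soft decisions and return
    the offset with the largest correlation. A hard-decision detector
    would first threshold soft_bits to +/-1, discarding the reliability
    information carried by the magnitudes."""
    s = np.where(np.asarray(sync) > 0, 1.0, -1.0)
    scores = np.correlate(soft_bits, s, mode='valid')
    return int(np.argmax(scores))

sync = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]       # known 12-bit sync word
data = np.zeros(200, dtype=int)
data[80:92] = sync                                 # embed at offset 80
symbols = 2.0 * data - 1.0                         # bipolar mapping
rng = np.random.default_rng(1)
noisy = symbols + rng.normal(0, 0.6, 200)          # soft decisions with noise
pos = find_sync_soft(noisy, sync)
```

    At low SNR a symbol near the decision boundary contributes almost nothing to the soft correlation, whereas a hard decision would commit it fully to the wrong sign, which is why the soft-decision variant is more accurate.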

  17. Learning Clinical Procedures Through Internet Digital Objects: Experience of Undergraduate Students Across Clinical Faculties.

    Science.gov (United States)

    Li, Tse Yan; Gao, Xiaoli; Wong, Kin; Tse, Christine Shuk Kwan; Chan, Ying Yee

    2015-04-14

    Various digital learning objects (DLOs) are available via the World Wide Web, showing the flow of clinical procedures. It is unclear to what extent these freely accessible Internet DLOs facilitate or hamper students' acquisition of clinical competence. This study aimed to understand the experience of undergraduate students across clinical disciplines-medicine, dentistry, and nursing-in using openly accessible Internet DLOs, and to investigate the role of Internet DLOs in facilitating their clinical learning. Mid-year and final-year groups were selected from each undergraduate clinical degree program of the University of Hong Kong-Bachelor of Medicine and Bachelor of Surgery (MBBS), Bachelor of Dental Surgery (BDS), and Bachelor of Nursing (BNurs). All students were invited to complete a questionnaire on their personal and educational backgrounds, and their experiences and views on using Internet DLOs in learning clinical procedures. The questionnaire design was informed by the findings of six focus groups. Among 439 respondents, 97.5% (428/439) learned a variety of clinical procedures through Internet DLOs. Most nursing students (107/122, 87.7%) learned preventive measures through Internet DLOs, with a lower percentage of medical students (99/215, 46.0%) and dental students (43/96, 45%) having learned them this way (both P values significant). Internet DLOs thus supplemented clinical learning in the planned curriculum. This trend calls for a transformation of the educator's role from dispensing knowledge to guidance and support.

  18. A Model for Data Citation in Astronomical Research Using Digital Object Identifiers (DOIs)

    Science.gov (United States)

    Novacescu, Jenny; Peek, Joshua E. G.; Weissman, Sarah; Fleming, Scott W.; Levay, Karen; Fraser, Elizabeth

    2018-05-01

    Standardizing and incentivizing the use of digital object identifiers (DOIs) to aggregate and identify both data analyzed and data generated by a research project will advance the field of astronomy to match best practices in other research fields like geoscience and medicine. An increase in the use of DOIs will prepare the discipline for changing expectations among funding agencies and publishers, who increasingly expect accurate and thorough data citation to accompany scientific outputs. The use of DOIs ensures a robust, sustainable, and interoperable approach to data citation in which due credit is given to the researchers and institutions who produce and maintain the primary data. We describe in this work the advantages of DOIs for data citation and best practices for integrating a DOI service in an astronomical archive. We report on a pilot project carried out in collaboration with AAS journals. During the course of the 1.5-year long pilot, over 75% of submitting authors opted to use the integrated DOI service to clearly identify data analyzed during their research project when prompted at the time of paper submission.

  19. DIGITAL INVASIONS: FROM POINT CLOUDS TO HISTORICAL BUILDING OBJECT MODELING (H-BOM) OF A UNESCO WHL SITE

    Directory of Open Access Journals (Sweden)

    F. Chiabrando

    2017-02-01

    Full Text Available The paper here presented shows the outcomes of a research/didactic activity carried out within a workshop titled "Digital Invasions. From point cloud to Heritage Building Information Modeling", held at Politecnico di Torino (29th September–5th October 2016). The term digital invasions refers to an Italian bottom-up project born in 2013 with the aim of promoting innovative digital ways for the enhancement of Cultural Heritage through the co-creation of cultural contents and their sharing via social media platforms. In this regard, we have worked with students of the Architectural Master of Science degree, training them with a multidisciplinary teaching team (Architectural Representation, History of Architecture, Restoration, Digital Communication and Geomatics). The aim was also to test whether our students could be involved in a sort of niche crowdsourcing for the creation of a library of H-BOMs (Historical-Building Object Modeling) of architectural elements.

  20. Digital Invasions: from Point Clouds to Historical Building Object Modeling H-Bom of a Unesco Whl Site

    Science.gov (United States)

    Chiabrando, F.; Lo Turco, M.; Santagati, C.

    2017-02-01

    The paper here presented shows the outcomes of a research/didactic activity carried out within a workshop titled "Digital Invasions. From point cloud to Heritage Building Information Modeling", held at Politecnico di Torino (29th September-5th October 2016). The term digital invasions refers to an Italian bottom-up project born in 2013 with the aim of promoting innovative digital ways for the enhancement of Cultural Heritage through the co-creation of cultural contents and their sharing via social media platforms. In this regard, we have worked with students of the Architectural Master of Science degree, training them with a multidisciplinary teaching team (Architectural Representation, History of Architecture, Restoration, Digital Communication and Geomatics). The aim was also to test whether our students could be involved in a sort of niche crowdsourcing for the creation of a library of H-BOMs (Historical-Building Object Modeling) of architectural elements.

  1. A new parallel algorithm and its simulation on hypercube simulator for low pass digital image filtering using systolic array

    International Nuclear Information System (INIS)

    Al-Hallaq, A.; Amin, S.

    1998-01-01

    This paper introduces a new parallel algorithm and its simulation on a hypercube simulator for low pass digital image filtering using a systolic array. This new algorithm is faster than the old one (Amin, 1988). This is due to the fact that the old algorithm carries out the addition operations in a sequential mode, whereas in our new design these addition operations are divided into two groups which can be performed in parallel. One group is performed on one half of the systolic array and the other on the second half, that is, by folding. This parallelism reduces the time required for the whole process by almost a quarter of the time required by the old algorithm. (authors). 18 refs., 3 figs
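
    The folding idea, splitting the window additions into two groups that accumulate independently before one final add, can be mimicked in software for a 3x3 low-pass mean filter. This is illustrative only; the paper's design is a hardware systolic array:

```python
import numpy as np

def lowpass_folded(img):
    """3x3 mean filter arranged as two independent partial sums.
    In the folded systolic array each half of the additions runs on one
    half of the array; here the two groups are simply formed separately
    and combined with a single final addition."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    shifts = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
    half1 = shifts[0] + shifts[1] + shifts[2] + shifts[3]       # group 1
    half2 = shifts[4] + shifts[5] + shifts[6] + shifts[7] + shifts[8]  # group 2
    return (half1 + half2) / 9.0

out = lowpass_folded(np.ones((5, 5)))
```

    Since the two partial sums share no inputs beyond the window itself, they can proceed concurrently, which is the source of the speedup the abstract describes.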

  2. Multi-objective hybrid PSO-APO algorithm based security constrained optimal power flow with wind and thermal generators

    Directory of Open Access Journals (Sweden)

    Kiran Teeparthi

    2017-04-01

    Full Text Available In this paper, a new low-level teamwork heterogeneous hybrid particle swarm optimization and artificial physics optimization (HPSO-APO) algorithm is proposed to solve the multi-objective security constrained optimal power flow (MO-SCOPF) problem. Driven by environmental and total production cost concerns, wind energy is increasingly penetrating the main grid. The total production cost, active power losses and a security index are considered as the objective functions. These are simultaneously optimized using the proposed algorithm for the base case and contingency cases. Though the PSO algorithm exhibits a good convergence characteristic, it fails to give a near-optimal solution. The APO algorithm, on the other hand, is capable of improving diversity in the search space and of reaching a near-global optimum, but it is prone to premature convergence. The proposed hybrid HPSO-APO algorithm combines the strengths of both algorithms to balance global and local search capability: the APO component improves diversity in the search space of the PSO algorithm. The hybrid optimization algorithm is employed to alleviate line overloads by generator rescheduling during contingencies. The standard IEEE 30-bus and Indian 75-bus practical test systems are considered to evaluate the robustness of the proposed method. The simulation results reveal that the proposed HPSO-APO method is more efficient and robust than the standard PSO and APO methods in terms of getting diverse Pareto optimal solutions. Hence, the proposed hybrid method can be used for large interconnected power systems to solve the MO-SCOPF problem with integration of wind and thermal generators.

  3. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the Total Variation algorithm has attracted abundant attention due to its ability to handle large piecewise and discontinuous conductivity distributions. In industrial process tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross-correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross-correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross-correlation method is adopted for the measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with less than 1% error, and a 4D image for 3D velocity profiling shows an error of 4%.
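
The cross-correlation step behind this kind of velocity profiling can be illustrated as follows, assuming two conductivity time series observed at axially separated sensing planes. The signal shapes, plane spacing and frame rate below are invented for illustration, not taken from the paper:

```python
import numpy as np

def transit_time(sig_a, sig_b, dt):
    """Delay of sig_b relative to sig_a from the peak of their
    (mean-removed) cross-correlation, in seconds."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = int(corr.argmax()) - (len(a) - 1)
    return lag * dt

dt = 0.001                   # s between ERT frames (assumed)
spacing = 0.05               # m between the two sensing planes (assumed)
t = np.arange(500)
sig_a = np.exp(-((t - 200) / 20.0) ** 2)   # disturbance passing plane A
sig_b = np.roll(sig_a, 25)                 # same disturbance 25 frames later at B
tau = transit_time(sig_a, sig_b, dt)       # transit time between planes
velocity = spacing / tau                   # axial flow velocity estimate
```

In the paper this operation is carried out voxel by voxel on the 4D reconstruction rather than on a single pair of signals.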

  4. Multi objective genetic algorithm to optimize the local heat treatment of a hardenable aluminum alloy

    Science.gov (United States)

    Piccininni, A.; Palumbo, G.; Franco, A. Lo; Sorgente, D.; Tricarico, L.; Russello, G.

    2018-05-01

    The continuous research into lightweight components for transport applications, aimed at reducing harmful emissions, draws attention to light alloys such as Aluminium (Al) alloys, which combine low density with high values of the strength-to-weight ratio. Such advantages are partially counterbalanced by poor formability at room temperature. A viable solution is to adopt a localized laser heat treatment of the blank before the forming process to obtain a tailored distribution of material properties, so that the blank can be formed at room temperature by means of conventional press machines. Such an approach has been extensively investigated for age-hardenable alloys, but the present work focuses on the 5000 series; in particular, the optimization of the deep drawing process of the alloy AA5754 H32 is proposed through a numerical/experimental approach. A preliminary investigation was necessary to correctly tune the laser parameters (focus length, spot dimension) to effectively obtain the annealed state. Optimal process parameters were then obtained by coupling a 2D FE model with an optimization platform managed by a multi-objective genetic algorithm. The optimal solution (i.e. the one maximizing the LDR) in terms of blankholder force and extent of the annealed region was thus evaluated and validated through experimental trials. A good match between experimental and numerical results was found. The optimal solution yielded an LDR of the locally heat-treated blank larger than that of the material in either the wrought (H32) or the annealed (H111) condition.

  5. Stereo perception of reconstructions of digital holograms of real-world objects

    Energy Technology Data Exchange (ETDEWEB)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto [University of Oulu, Oulu Southern Institute, Ylivieska (Finland); Naughton, Thomas J, E-mail: taina.lehtimaki@oulu.f [Department of Computer Science, National University of Ireland Maynooth (Ireland)

    2010-02-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  6. Stereo perception of reconstructions of digital holograms of real-world objects

    International Nuclear Information System (INIS)

    Lehtimaeki, Taina M; Saeaeskilahti, Kirsti; Naesaenen, Risto; Naughton, Thomas J

    2010-01-01

    In digital holography a 3D scene is captured optically and often the perspectives are reconstructed numerically. In this study we digitally process the holograms to allow them to be displayed on autostereoscopic displays. This study is conducted by subjective visual perception experiments comparing single reconstructed images from left and right perspective to the resulting stereo image.

  7. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    Science.gov (United States)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of the process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured considering only a single objective for optimization. Therefore, considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric-based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
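
A minimal sketch of the archive-maintenance machinery the abstract mentions: Pareto dominance plus crowding distance for truncating a fixed-size external archive, using the usual NSGA-II-style definitions (not necessarily the paper's exact variant):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding_distance(front):
    """Crowding distance of each point of a non-dominated front; boundary
    points get infinity so archive truncation keeps the extremes and
    prefers points in sparsely populated regions."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

# e.g. (makespan, total tardiness) of three archived schedules
front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
d = crowding_distance(front)   # boundary points infinite, interior finite
```

When the archive exceeds its fixed size, the non-dominated solutions with the smallest crowding distance are the ones discarded.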

  8. A 12bits 40MSPS SAR ADC with a redundancy algorithm and digital calibration for the ATLAS LAr calorimeter readout

    CERN Document Server

    Zeloufi, Mohamed; The ATLAS collaboration; Rarbi, Fatah-ellah

    2015-01-01

    We present a SAR ADC with a generalized redundant search algorithm offering the flexibility to relax the requirements on the DAC settling time. The redundancy also allows a digital background calibration, based on a code density analysis, to compensate for capacitor mismatch effects. The total number of capacitors used in this architecture is half that of a classical SAR design: only 2^11 unit capacitors were necessary to reach 12-bit resolution, and the switching algorithm is intrinsically monotonic. The design is fully differential, featuring 12 bits at 40 MS/s in a 130 nm 1P8M CMOS process.

  9. An Analysis of Light Periods of BL Lac Object S5 0716+714 with the MUSIC Algorithm

    Science.gov (United States)

    Tang, Jie

    2012-07-01

    The multiple signal classification (MUSIC) algorithm is introduced for the estimation of the light periods of BL Lac objects. The principle of the MUSIC algorithm is given, together with a test of its spectral resolution using a simulated signal. From the literature, we have collected a large number of effective observational data of the BL Lac object S5 0716+714 in the three optical wavebands V, R, and I from 1994 to 2008. The light periods of S5 0716+714 are obtained by means of the MUSIC algorithm and the average periodogram algorithm, respectively. It is found that there exist two major periodic components: one with a period of (3.33±0.08) yr and another with a period of (1.24±0.01) yr. A comparison of the periodicity-analysis performance of the two algorithms indicates that the MUSIC algorithm requires a shorter sample length, and has good spectral resolution and anti-noise ability, improving the accuracy of periodicity analysis when the sample length is short.
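
A compact sketch of the MUSIC estimator applied to a uniformly sampled series: the pseudospectrum peaks where the steering vector is orthogonal to the noise subspace of the sample autocorrelation matrix. The matrix order, frequency grid and test signal below are illustrative choices, not the paper's settings:

```python
import numpy as np

def music_spectrum(x, p, freqs, m=20):
    """MUSIC pseudospectrum of a real time series x. p is the assumed
    signal-subspace dimension (2 per real sinusoid); freqs are candidate
    frequencies in cycles/sample."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m)])
    R = X.T @ X / (N - m)              # sample autocorrelation matrix
    _, V = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = V[:, : m - p]                 # noise-subspace eigenvectors
    spec = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m))   # steering vector
        s = En.conj().T @ a
        spec.append(1.0 / np.real(np.vdot(s, s)))   # 1 / ||En^H a||^2
    return np.array(spec)

rng = np.random.default_rng(0)
n = np.arange(400)
x = np.cos(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(400)
freqs = np.linspace(0.01, 0.4, 400)
f_est = freqs[music_spectrum(x, 2, freqs).argmax()]   # near 0.1 cycles/sample
```

The peak location, not its height, carries the period estimate, which is why MUSIC can resolve periods from relatively short samples.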

  10. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  11. An Improved Marriage in Honey Bees Optimization Algorithm for Single Objective Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    Yuksel Celik

    2013-01-01

    Full Text Available Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO) by adding a Levy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and its success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
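
The Levy-flight perturbation can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed steps; the queen update shown is a hypothetical usage, not the IMBO operator itself:

```python
import math
import random

def levy_step(beta, rng):
    """Heavy-tailed step via Mantegna's algorithm: u / |v|^(1/beta)
    with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
queen = [0.5, -1.2]                                 # hypothetical 2-D position
new_queen = [q + 0.01 * levy_step(1.5, rng) for q in queen]
steps = [levy_step(1.5, rng) for _ in range(2000)]  # mostly small, rare long jumps
```

The mixture of many small moves with occasional long jumps is what lets the mating flight escape local optima.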

  12. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    International Nuclear Information System (INIS)

    Binh, Do Quang; Huy, Ngo Quang; Hai, Nguyen Hoang

    2014-01-01

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.
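
The weighted-sum scalarisation the abstract describes reduces the two objectives to one fitness value per weighting factor; a minimal illustrative form (the paper's actual weighting function may differ):

```python
def weighted_fitness(k_eff, ppf, w):
    """Scalarised objective: maximise the effective multiplication factor
    k_eff and minimise the power peaking factor ppf; w in [0, 1] is the
    weighting factor that the GA itself searches over. Illustrative form
    only, not the paper's exact weighting function."""
    return w * k_eff - (1.0 - w) * ppf

# two hypothetical loading patterns compared under the same weight
better = weighted_fitness(1.12, 1.30, 0.6) > weighted_fitness(1.08, 1.32, 0.6)
```

By evolving `w` alongside the loading pattern, the algorithm explores different trade-offs between the two objectives within a single run.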

  13. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    Energy Technology Data Exchange (ETDEWEB)

    Binh, Do Quang [University of Technical Education Ho Chi Minh City (Viet Nam); Huy, Ngo Quang [University of Industry Ho Chi Minh City (Viet Nam); Hai, Nguyen Hoang [Centre for Research and Development of Radiation Technology, Ho Chi Minh City (Viet Nam)

    2014-12-15

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.

  14. An objective protocol for comparing the noise performance of silver halide film and digital sensor

    Science.gov (United States)

    Cao, Frédéric; Guichard, Frédéric; Hornung, Hervé; Tessière, Régis

    2012-01-01

    Digital sensors have obviously invaded the photography mass market. However, some photographers with very high expectations still use silver halide film. Are they merely nostalgics reluctant to adopt technology, or is there more than meets the eye? The answer is not so easy if we remark that, at the end of the golden age, films were actually scanned before development. Nowadays film users have adopted digital technology and scan their film to take advantage of digital processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye", under the assumption that processing can be applied as for a digital camera. The article describes in detail the operations needed to consider the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the autocorrelation of the noise (related to film grain) and the sampling of the digital sensor (related to the Bayer filter array). We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results for sensor response, signal-to-noise ratio and dynamic range.

  15. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  16. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    Science.gov (United States)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejected weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on the ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.

  17. Learning Clinical Procedures Through Internet Digital Objects: Experience of Undergraduate Students Across Clinical Faculties

    Science.gov (United States)

    Li, Tse Yan; Wong, Kin; Tse, Christine Shuk Kwan; Chan, Ying Yee

    2015-01-01

    Background Various digital learning objects (DLOs) are available via the World Wide Web, showing the flow of clinical procedures. It is unclear to what extent these freely accessible Internet DLOs facilitate or hamper students’ acquisition of clinical competence. Objective This study aimed to understand the experience of undergraduate students across clinical disciplines—medicine, dentistry, and nursing—in using openly accessible Internet DLOs, and to investigate the role of Internet DLOs in facilitating their clinical learning. Methods Mid-year and final-year groups were selected from each undergraduate clinical degree program of the University of Hong Kong—Bachelor of Medicine and Bachelor of Surgery (MBBS), Bachelor of Dental Surgery (BDS), and Bachelor of Nursing (BNurs). All students were invited to complete a questionnaire on their personal and educational backgrounds, and their experiences and views on using Internet DLOs in learning clinical procedures. The questionnaire design was informed by the findings of six focus groups. Results Among 439 respondents, 97.5% (428/439) learned a variety of clinical procedures through Internet DLOs. Most nursing students (107/122, 87.7%) learned preventive measures through Internet DLOs, with a lower percentage of medical students (99/215, 46.0%) and dental students (43/96, 45%) having learned them this way. Students accessed DLOs through public search engines, and 93.2% (409/439) accessed them by watching YouTube videos. Students often shared DLOs with classmates (277/435, 63.7%), but rarely discussed them with teachers (54/436, 12.4%). The accuracy, usefulness, and importance of Internet DLOs were rated as 6.85 (SD 1.48), 7.27 (SD 1.53), and 7.13 (SD 1.72), respectively, out of a maximum score of 10. Conclusions Self-exploration of DLOs in the unrestricted Internet environment is extremely common among current e-generation learners and was regarded by students across clinical faculties as an important

  18. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    International Nuclear Information System (INIS)

    Mahmoud, Imbaby I.; El-Tokhy, Mohamed S.

    2015-01-01

    X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, spectrum drawing and assessment of x-ray spectroscopy with high accuracy is the main scope of this paper. A lithium-drifted silicon, Si(Li), detector cooled with nitrogen is used for signal extraction. The resolution of the ADC is 12 bits, and its sampling rate is 5 MHz. Hence, different algorithms are implemented; they were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. These algorithms comprise signal preprocessing, signal separation and recovery algorithms, and a spectrum drawing algorithm. Moreover, statistical measurements are used for the evaluation of these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas CWT takes a much longer execution time. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second-derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured, and a comparison between these algorithms is introduced. The obtained results confirm that the second-derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm. However, the second-derivative peak search algorithm takes a much longer execution time. Therefore, the extrema algorithm introduces better results than the other algorithms. It has the advantage of recovering and
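
A second-derivative peak search of the kind the abstract compares can be sketched as follows: a peak is declared wherever the discrete second derivative has a local minimum below a negative threshold, which flags sharp curvature even when two pulses partially overlap. The signal and threshold below are synthetic:

```python
import numpy as np

def second_derivative_peaks(y, thresh):
    """Indices where the discrete second derivative has a local minimum
    below -thresh; sharp negative curvature marks a peak even for
    partially overlapped pulses. Illustrative sketch only."""
    d2 = np.diff(y, 2)             # d2[i] approximates y''(i + 1)
    peaks = []
    for i in range(1, len(d2) - 1):
        if d2[i] < -thresh and d2[i] <= d2[i - 1] and d2[i] <= d2[i + 1]:
            peaks.append(i + 1)
    return peaks

# synthetic spectrum: two partially overlapping Gaussian pulses
x = np.arange(200, dtype=float)
y = np.exp(-((x - 90) / 6.0) ** 2) + 0.8 * np.exp(-((x - 105) / 6.0) ** 2)
peaks = second_derivative_peaks(y, 0.005)   # expect peaks near 90 and 105
```

In practice the threshold trades sensitivity to overlapped peaks against false detections from noise, which is why the de-noising stage precedes the peak search.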

  19. Modified SURF Algorithm Implementation on FPGA For Real-Time Object Tracking

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of the modified speeded-up robust features (SURF) algorithm. An FPGA was selected for parallel process implementation using VHDL to ensure feature extraction in real time. A sliding 84×84 window was used to store integral pixels and accelerate the Hessian determinant calculation, orientation assignment and descriptor estimation. A local extremum search was used to find points of interest in 8 scales. The simplified descriptor and orientation vector were calculated in parallel in 6 scales. The algorithm was investigated by tracking a marker and drawing a plane or cube. All parts of the algorithm worked on a 25 MHz clock. The video stream was generated using a 60 fps, 640×480 pixel camera. Article in Lithuanian

  20. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Maryam Mousavi

    Full Text Available Flexible manufacturing system (FMS enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs. An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA, particle swarm optimization (PSO, and hybrid GA-PSO to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  1. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    Science.gov (United States)

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  2. Architectures/Algorithms/Tools for Ultra-Low Power, Compact EVA Digital Radio, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The EVA digital radio imposes tight constraints on power consumption, latency, throughput, form factor, reconfigurability, single event upset and fault tolerance,...

  3. Digitization

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2014-01-01

    what a concept of digital media might add to the understanding of processes of mediatization and what the concept of mediatization might add to the understanding of digital media. It is argued that digital media open an array of new trajectories in human communication, trajectories which were...

  4. Comparison of single distance phase retrieval algorithms by considering different object composition and the effect of statistical and structural noise.

    Science.gov (United States)

    Chen, R C; Rigon, L; Longo, R

    2013-03-25

    Phase retrieval is a technique for extracting quantitative phase information from X-ray propagation-based phase-contrast tomography (PPCT). In this paper, the performance of different single-distance phase retrieval algorithms is investigated. The algorithms are herein called the phase-attenuation duality Born algorithm (PAD-BA), phase-attenuation duality Rytov algorithm (PAD-RA), phase-attenuation duality Modified Bronnikov algorithm (PAD-MBA), phase-attenuation duality Paganin algorithm (PAD-PA) and phase-attenuation duality Wu algorithm (PAD-WA), respectively. They are all based on the phase-attenuation duality property and on weak absorption of the sample, and they employ only single-distance PPCT data. In this paper, they are investigated via simulated noise-free PPCT data, considering the fulfillment of the PAD property and weakly absorbing conditions, and with experimental PPCT data of a mixture sample containing absorbing and weakly absorbing materials, and of a polymer sample, considering different degrees of statistical and structural noise. The simulation shows that all algorithms can quantitatively reconstruct the 3D refractive index of a quasi-homogeneous weakly absorbing object from noise-free PPCT data. When the weakly absorbing condition is violated, PAD-RA and PAD-PA/WA obtain better results than PAD-BA and PAD-MBA, as shown in both the simulation and the mixture sample results. When considering statistical noise, the contrast-to-noise ratio values decrease as the photon number is reduced. The structural noise study shows that the result is progressively corrupted by ring-like artifacts as the structural noise (i.e. phantom thickness) increases. PAD-RA and PAD-PA/WA achieve better density resolution than PAD-BA and PAD-MBA in both the statistical and structural noise studies.
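
For reference, a Paganin-type single-distance retrieval of the PAD-PA family admits a short sketch: a low-pass filter in Fourier space followed by a logarithm, assuming a homogeneous object with known delta/beta. The parameter values in the demo are arbitrary, not from the paper's experiments:

```python
import numpy as np

def paganin_retrieve(I, dx, dist, delta, beta, wavelength):
    """Single-distance Paganin-type retrieval: divide the image spectrum
    by 1 + pi*lambda*z*(delta/beta)*|f|^2, then take -log/mu to obtain
    the projected thickness (I is the flat-field-corrected intensity)."""
    mu = 4.0 * np.pi * beta / wavelength          # linear attenuation coefficient
    ny, nx = I.shape
    fx = np.fft.fftfreq(nx, dx)                   # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, dx)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    denom = 1.0 + np.pi * wavelength * dist * (delta / beta) * f2
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(I) / denom))
    return -np.log(smoothed) / mu                 # projected thickness map

# arbitrary demo values: 1 um pixels, 0.5 m propagation, hard X-rays
thickness = paganin_retrieve(np.ones((32, 32)), 1e-6, 0.5, 1e-7, 1e-9, 1e-10)
```

A uniform (featureless) image retrieves zero thickness everywhere, which is a convenient sanity check on the filter's DC behaviour.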

  5. Objectivity

    CERN Document Server

    Daston, Lorraine

    2010-01-01

    Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences--and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images. From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences--from anatomy to crystallography--are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a...

  6. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    Science.gov (United States)

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level (leader) problem, subject to the optimality of a lower-level (follower) problem. Several problems from the domains of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. The increasing size and complexity of such problems have prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine the advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both the upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with those of other established algorithms to demonstrate the efficacy of the proposed approach.

  7. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    Science.gov (United States)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block-matching algorithm based on the motion vectors of correlative pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique which obtains information about the relative motion among frames of dynamic image sequences by means of digital image processing. In this method, the matching parameters are calculated from the vectors projected in the oblique direction. These matching parameters simultaneously contain the information of the vectors in the transverse and vertical directions within the image blocks, so better matching information can be obtained after performing the correlation operation in the oblique direction. An iterative weighted least-squares method is used to eliminate block-matching error; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of blocks chosen evenly from the image. The shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a definition of 720×576 pixels was chosen as the video input. Experimental results show that the algorithm can run on the real-time processing system and achieves accurate matching precision.
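    The iterative weighted least-squares idea, downweighting blocks whose motion vectors disagree with the consensus, can be sketched as follows. The block data and the weight function are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

# per-block displacement estimates; one block is a gross mismatch (outlier)
v = np.array([[1.0, 2.0]] * 8 + [[9.0, -5.0]])
w = np.ones(len(v))
for _ in range(5):                                 # iteratively reweighted least squares
    est = (w[:, None] * v).sum(axis=0) / w.sum()   # weighted mean motion
    r = np.linalg.norm(v - est, axis=1)            # residual of each block
    w = 1.0 / (1.0 + r ** 2)                       # downweight disagreeing blocks
print(est.round(2))                                # converges near the inlier motion [1, 2]
```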

  8. The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law

    Directory of Open Access Journals (Sweden)

    Antoinette Rouvroy

    2016-11-01

    Full Text Available This text is a transcription of Rouvroy’s presentation on 7th October 2014 at the “Digital Studies” seminar series at the Centre Georges Pompidou. This seminar series, organised by the French philosopher Bernard Stiegler, questions the influence of digital technologies on knowledge from an epistemological point of view and examines the ways in which they alter academic disciplines.

  9. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Schirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
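    A minimal single-objective PBIL loop, the building block the authors extend to multiple objectives, might look like this on a toy OneMax problem (population size, learning rate, and the problem itself are illustrative assumptions):

```python
import random
random.seed(0)

n_bits, lr, pop_size = 20, 0.1, 30
p = [0.5] * n_bits                       # probability vector over bit positions
for _ in range(200):
    pop = [[int(random.random() < pi) for pi in p] for _ in range(pop_size)]
    best = max(pop, key=sum)             # OneMax fitness: number of 1 bits
    # shift the probability vector toward the best sample of this generation
    p = [(1 - lr) * pi + lr * b for pi, b in zip(p, best)]
print([round(pi, 2) for pi in p])        # probabilities driven toward 1
```

    PBIL's appeal for reload optimization is that the search state is just this probability vector, from which candidate loading patterns are sampled, rather than an explicit population that must be recombined.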

  10. Development of test objects for image quality evaluation of digital mammography; Desenvolvimento de objetos de teste para avaliacao da qualidade da imagem em mamografia digital

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Vitor Nascimento de Carvalho

    2013-07-01

    Mammography is the imaging exam called the 'gold standard' for early detection of breast cancer. In Brazil, more than eight million mammograms are carried out per year. With the advancement of technology, digital CR and DR systems for this diagnostic modality have been increasingly implemented, replacing the conventional screen-film system, which brought environmental problems, such as the disposal of chemical waste, and is also responsible for the rejection of radiographic films with processing artifacts. Digital systems, besides avoiding the problem of environmental pollution, are also capable of image processing, allowing a much lower rejection rate compared to the conventional system. Moreover, the determination of an accurate diagnosis is highly dependent on the image quality of the examination. To ensure the reliability of the images produced by these systems, it is necessary to evaluate them on a regular basis. Unfortunately, there is no regulation in Brazil covering the quality assurance of these systems. The aim of this study was to develop a set of test objects that allow the evaluation of some image quality parameters of these systems, such as field image uniformity, the linearity between the air kerma incident on the detector and the mean pixel value (MPV) of the image, and the spatial resolution of the system through the modulation transfer function (MTF), and also to suggest an object to be applied in the evaluation of contrast-to-noise ratio (CNR) and signal-difference-to-noise ratio (SDNR). In order to test the objects, 10 mammography centers were evaluated, seven with CR systems and three with DR systems. To evaluate the linearity, high-sensitivity dosimeters, namely LiF:Mg,Cu,P TL dosimeters, had to be used in addition to the test objects. The use of these dosimeters was recommended in order to minimize the time required to perform the tests and to decrease the number of exposures needed. For evaluation of digital images in DICOM
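    The CNR and SDNR figures of merit mentioned above are ratios of mean-signal difference to noise, computed over a contrast-object region and a background region of the test image. The sketch below uses simulated regions; definitions vary between protocols, so treat these formulas as one common convention rather than the author's exact one:

```python
import numpy as np

rng = np.random.default_rng(1)
background = rng.normal(100.0, 5.0, (64, 64))  # simulated uniform-field region
signal = rng.normal(120.0, 5.0, (16, 16))      # simulated contrast-object region

contrast = signal.mean() - background.mean()
cnr = contrast / background.std()              # contrast-to-noise ratio
sdnr = contrast / np.sqrt((signal.var() + background.var()) / 2.0)
print(round(cnr, 1), round(sdnr, 1))           # both near 4 for this setup
```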

  11. Irreversibility analysis for optimization design of plate fin heat exchangers using a multi-objective cuckoo search algorithm

    International Nuclear Information System (INIS)

    Wang, Zhe; Li, Yanzhong

    2015-01-01

    Highlights: • The first application of IMOCS for plate-fin heat exchanger design. • Irreversibility degrees of heat transfer and fluid friction are minimized. • Trade-off of efficiency, total cost and pumping power is achieved. • Both EGM and EDM methods have been compared in the optimization of PFHE. • This study has superiority over other single-objective optimization designs. - Abstract: This paper introduces and applies an improved multi-objective cuckoo search (IMOCS) algorithm, a novel meta-heuristic optimization algorithm based on cuckoo breeding behavior, for the multi-objective optimization design of plate-fin heat exchangers (PFHEs). A modified irreversibility degree of the PFHE is separated into heat transfer and fluid friction irreversibility degrees, which are adopted as two initial objective functions to be minimized simultaneously in order to narrow the search scope of the design. The maximization of efficiency and the minimization of pumping power and total annual cost are considered the final objective functions. Results obtained from a two-dimensional normalized Pareto-optimal frontier clearly demonstrate the trade-off between heat transfer and fluid friction irreversibility. Moreover, a three-dimensional Pareto-optimal frontier reveals the relationship between efficiency, total annual cost, and pumping power in the PFHE design. Three examples presented here further demonstrate that the presented method is able to obtain optimum solutions with higher accuracy, lower irreversibility, and fewer iterations compared to previous methods and single-objective design approaches

  12. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-07-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle this problem. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and supplier selection is an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost, maximizing sustainable service, and maximizing green product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a certain contribution by introducing the Tetra ‘S’ concept in both theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for solving real world
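    The expected value measurement for triangular fuzzy coefficients, followed by a weighted-sum scalarization of the conflicting objectives, can be sketched as below. The supplier data and weights are hypothetical:

```python
def expected_value(a, b, c):
    """Expected value of a triangular fuzzy number (a, b, c): (a + 2b + c) / 4."""
    return (a + 2.0 * b + c) / 4.0

# crisp (defuzzified) coefficients for two hypothetical suppliers
cost = [expected_value(8, 10, 13), expected_value(9, 11, 12)]
quality = [expected_value(0.6, 0.7, 0.8), expected_value(0.7, 0.8, 0.9)]

# weighted-sum scalarization: minimize cost, maximize quality
w_cost, w_quality = 0.5, 0.5
scores = [w_cost * c - w_quality * q for c, q in zip(cost, quality)]
best = scores.index(min(scores))
print(best)  # → 0 (supplier 0 wins on defuzzified cost)
```

    In the paper this scalarized score would be the fitness evaluated inside the genetic algorithm rather than compared directly.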

  13. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Directory of Open Access Journals (Sweden)

    Muhammad Hashim

    2017-05-01

    Full Text Available Purpose: The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle this problem. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and supplier selection is an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost, maximizing sustainable service, and maximizing green product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a certain contribution by introducing the Tetra ‘S’ concept in both theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for

  14. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    International Nuclear Information System (INIS)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-01-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle this problem. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and supplier selection is an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost, maximizing sustainable service, and maximizing green product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a certain contribution by introducing the Tetra ‘S’ concept in both theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for solving real world

  15. Mathematics Education ITE Students Examining the Value of Digital Learning Objects

    Science.gov (United States)

    Hawera, Ngarewa; Wright, Noeline; Sharma, Sashi

    2017-01-01

    One issue in mathematics initial teacher education (ITE) is how to best support students to use digital technologies (DTs) to enhance their teaching of mathematics. While most ITE students are probably using DTs on a daily basis for personal use, they are often unfamiliar with using them for educative purposes in New Zealand primary school…

  16. Wireless Sensor Networks for Heritage Object Deformation Detection and Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Zhijun Xie

    2014-10-01

    Full Text Available Deformation is the direct cause of heritage object collapse. It is important to monitor and provide early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, and cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are better suited to detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and the energy consumption in order to monitor heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost of transmitting and collecting the data of the sensor networks. In particular, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms existing methods in terms of network traffic and the precision of deformation detection.

  17. Thermo-economic multi-objective optimization of solar dish-Stirling engine by implementing evolutionary algorithm

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad H.; Sayyaadi, Hoseyn; Mohammadi, Amir H.; Barranco-Jimenez, Marco A.

    2013-01-01

    Highlights: • Thermo-economic multi-objective optimization of a solar dish-Stirling engine is studied. • Application of the evolutionary algorithm is investigated. • Error analysis is done to quantify the error of the investigation. - Abstract: In recent years, remarkable attention has been drawn to the Stirling engine due to its noticeable advantages; for instance, many resources such as biomass, fossil fuels and solar energy can be applied as its heat source. A great number of studies have been conducted on the Stirling engine, and finite-time thermo-economics is one of them. In the present study, the dimensionless thermo-economic objective function, thermal efficiency and dimensionless power output are optimized for a dish-Stirling system using finite-time thermo-economic analysis and the NSGA-II algorithm. Optimal solutions are chosen from the results using three decision-making methods. Error analysis is performed to quantify the error of the investigation
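    The non-dominated filtering at the heart of NSGA-II-style multi-objective optimization can be sketched in a few lines (minimization convention; the point set is illustrative):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

    The decision-making methods mentioned in the abstract then pick a single compromise point from such a front.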

  18. Environmental/economic dispatch problem of power system by using an enhanced multi-objective differential evolution algorithm

    International Nuclear Information System (INIS)

    Lu Youlin; Zhou Jianzhong; Qin Hui; Wang Ying; Zhang Yongchuan

    2011-01-01

    An enhanced multi-objective differential evolution algorithm (EMODE) is proposed in this paper to solve the environmental/economic dispatch (EED) problem by minimizing fuel cost and emission effects simultaneously. In the proposed algorithm, an elitist archive technique is adopted to retain the non-dominated solutions obtained during the evolutionary process, and the operators of DE are modified according to the characteristics of multi-objective optimization problems. Moreover, in order to avoid premature convergence, a local random search (LRS) operator is integrated into the proposed method to improve convergence performance. In view of the difficulty of handling the complicated constraints of the EED problem, a new heuristic constraint-handling method without any penalty factor settings is presented. The feasibility and effectiveness of the proposed EMODE method are demonstrated on a test power system. Compared with other methods, EMODE obtains higher-quality solutions, reducing fuel cost and emission effects simultaneously.
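    The single-objective DE/rand/1/bin backbone that EMODE builds on can be sketched as follows. The elitist archive and multi-objective machinery are omitted, and the parameters are typical textbook values rather than the paper's:

```python
import random
random.seed(3)

def sphere(x):                      # toy stand-in for the fuel-cost objective
    return sum(xi * xi for xi in x)

dim, pop_size, F, CR = 5, 20, 0.5, 0.9
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
for _ in range(300):
    for i in range(pop_size):
        # mutation: donor vector a + F * (b - c) from three distinct others
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        # binomial crossover: take each donor gene with probability CR
        trial = [ai + F * (bi - ci) if random.random() < CR else xi
                 for xi, ai, bi, ci in zip(pop[i], a, b, c)]
        if sphere(trial) <= sphere(pop[i]):   # greedy one-to-one selection
            pop[i] = trial
print(min(sphere(p) for p in pop))  # converges close to zero
```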

  19. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ye-Seul; Park, Hye-Suk; Kim, Hee-Joung [Yonsei University, Wonju (Korea, Republic of); Choi, Young-Wook; Choi, Jae-Gu [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-12-15

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm with only two scans with a beam stop array, which estimates the scatter distribution with minimal additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimal exposure, which indicates its potential for practical applications.

  20. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  1. Assessing the performance of a differential evolution algorithm in structural damage detection by varying the objective function

    OpenAIRE

    Villalba-Morales, Jesús Daniel; Laier, José Elias

    2014-01-01

    Structural damage detection has become an important research topic in certain segments of the engineering community. These methodologies occasionally formulate an optimization problem by defining an objective function based on dynamic parameters, with metaheuristics used to find the solution. In this study, damage localization and quantification is performed by an Adaptive Differential Evolution algorithm, which solves the associated optimization problem. Furthermore, this paper looks at the ...

  2. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    OpenAIRE

    Rajesh Kumar; S.C. Kaushik; Raj Kumar; Ranjana Hans

    2016-01-01

    A Brayton heat engine model is developed in the MATLAB Simulink environment, and thermodynamic optimization based on finite-time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto optimal frontier between triple and dual objectives is obtained and the best optimal value is s...

  3. Performance of a genetic algorithm for solving the multi-objective, multimodal transportation network design problem

    NARCIS (Netherlands)

    Brands, Ties; van Berkum, Eric C.

    2014-01-01

    The optimization of infrastructure planning in a multimodal network is defined as a multi-objective network design problem, with accessibility, use of urban space by parking, operating deficit and climate impact as objectives. Decision variables are the location of park and ride facilities, train

  4. Identification of cultivated land using remote sensing images based on object-oriented artificial bee colony algorithm

    Science.gov (United States)

    Li, Nan; Zhu, Xiufang

    2017-04-01

    Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmented cultivated land. In this paper, an object-oriented artificial bee colony algorithm is proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented with the eCognition software, and samples from the segments were manually labeled into two types (cultivated land and non-cultivated land). Secondly, the artificial bee colony (ABC) algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area in the image. The experiment was carried out in the Hongze area, Jiangsu Province, using the wide-field-of-view sensor on the GF-1 satellite. The total precision of the classification result was 94.95%, and the precision for cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the deficiency of spectral information in GF-1 images and obtain high precision in cultivated land identification.
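    A minimal artificial bee colony loop, shown here on a continuous toy objective rather than the paper's classification-rule search, might look like this (all parameters are illustrative):

```python
import random
random.seed(5)

def f(x):                      # stand-in objective; the paper evolves classification rules
    return sum(xi * xi for xi in x)

dim, n_sources, limit = 3, 10, 20
foods = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_sources)]
trials = [0] * n_sources
best = min(foods, key=f)[:]
for _ in range(300):
    for i in range(n_sources):           # employed/onlooker phases collapsed for brevity
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = foods[i][:]               # perturb one dimension toward/away a neighbor
        cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        if f(cand) < f(foods[i]):
            foods[i], trials[i] = cand, 0
            if f(cand) < f(best):
                best = cand[:]
        else:
            trials[i] += 1
        if trials[i] > limit:            # scout phase: abandon a stagnant food source
            foods[i] = [random.uniform(-5, 5) for _ in range(dim)]
            trials[i] = 0
print(f(best))
```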

  5. Dynamic Power Dispatch Considering Electric Vehicles and Wind Power Using Decomposition Based Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Boyang Qu

    2017-12-01

    Full Text Available The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed that considers EVs and the uncertainties of wind power. The total fuel cost and pollutant emission are considered as the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are set as the decision variables. The stochastic wind power is derived from a Weibull probability distribution function. Under the premise of meeting the system energy and users’ travel demands, the charging and discharging behavior of the EVs is dynamically managed. Moreover, we propose a two-step dynamic constraint processing strategy for the decision variables based on a penalty function and, on this basis, improve the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D). The proposed model and approach are verified on the 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.
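    Sampling stochastic wind power from a Weibull wind-speed distribution through a standard piecewise-linear turbine power curve can be sketched as below. The Weibull and turbine parameters are hypothetical, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)
k, c = 2.0, 8.0                         # Weibull shape and scale (m/s), assumed
v = c * rng.weibull(k, 10000)           # numpy's weibull has scale 1; rescale by c

# piecewise-linear turbine power curve (cut-in, rated, cut-out speeds in m/s)
v_in, v_rated, v_out, p_rated = 3.0, 12.0, 25.0, 2.0   # p_rated in MW
p = np.where((v < v_in) | (v > v_out), 0.0,
             np.where(v >= v_rated, p_rated,
                      p_rated * (v - v_in) / (v_rated - v_in)))
print(round(p.mean(), 2))               # Monte Carlo estimate of expected output (MW)
```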

  6. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    Science.gov (United States)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

    This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, taking design parameter uncertainties into account. The obtained results showed that the robust design significantly reduces the sensitivity of rail vehicle safety to design parameter uncertainties, compared to the deterministic design and to results in the literature.
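    The Monte Carlo part, estimating both the mean response and its standard deviation under design parameter uncertainty so both can be treated as objectives, can be sketched as follows. The response function and uncertainty level are hypothetical:

```python
import random
random.seed(2)

def derailment_angle(speed, load):      # hypothetical smooth response surface
    return 0.1 * speed + 0.05 * load

def robust_eval(x, n=2000):
    """Monte Carlo estimate of mean response and its standard deviation
    under 5% (1-sigma) multiplicative uncertainty on the design parameters."""
    samples = [derailment_angle(x[0] * random.gauss(1, 0.05),
                                x[1] * random.gauss(1, 0.05)) for _ in range(n)]
    mu = sum(samples) / n
    sd = (sum((s - mu) ** 2 for s in samples) / n) ** 0.5
    return mu, sd

mu, sd = robust_eval([50.0, 20.0])      # both values feed the multi-objective search
print(round(mu, 1))
```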

  7. Multi-objective genetic algorithm optimization of 2D- and 3D-Pareto fronts for vibrational quantum processes

    International Nuclear Information System (INIS)

    Gollub, C; De Vivie-Riedle, R

    2009-01-01

    A multi-objective genetic algorithm is applied to optimize picosecond laser fields, driving vibrational quantum processes. Our examples are state-to-state transitions and unitary transformations. The approach allows features of the shaped laser fields and of the excitation mechanisms to be controlled simultaneously with the quantum yield. Within the parameter range accessible to the experiment, we focus on short pulse durations and low pulse energies to optimize preferably robust laser fields. Multidimensional Pareto fronts for these conflicting objectives could be constructed. Comparison with previous work showed that the solutions from Pareto optimizations and from optimal control theory match very well.

  8. Converting optical scanning holograms of real objects to binary Fourier holograms using an iterative direct binary search algorithm.

    Science.gov (United States)

    Leportier, Thibault; Park, Min Chul; Kim, You Seok; Kim, Taegeun

    2015-02-09

    In this paper, we present a three-dimensional holographic imaging system. The proposed approach records a complex hologram of a real object using optical scanning holography, converts the complex form to binary data, and then reconstructs the recorded hologram using a spatial light modulator (SLM). The conversion from the recorded hologram to a binary hologram is achieved using a direct binary search algorithm. We present experimental results that verify the efficacy of our approach. To the best of our knowledge, this is the first time that a hologram of a real object has been reconstructed using a binary SLM.
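    A direct binary search in its simplest form flips one element at a time and keeps the flip only if the reconstruction error drops. Below is a one-dimensional toy version with a blur kernel standing in for the optical reconstruction model; it is not the authors' holographic setup:

```python
import numpy as np

target = np.linspace(0, 1, 32)               # grayscale target pattern
b = np.zeros(32)                             # initial binary guess
kernel = np.array([0.25, 0.5, 0.25])         # toy low-pass reconstruction model

def err(b):
    return float(np.sum((np.convolve(b, kernel, mode="same") - target) ** 2))

e = err(b)
improved = True
while improved:                              # sweep over pixels until no flip helps
    improved = False
    for i in range(b.size):
        b[i] = 1 - b[i]                      # trial flip
        e_new = err(b)
        if e_new < e:
            e, improved = e_new, True        # keep the flip
        else:
            b[i] = 1 - b[i]                  # revert
print(round(e, 3))
```

    Because the error strictly decreases with every accepted flip, the search always terminates in a local minimum; in the paper the error is measured between the binary hologram's reconstruction and the recorded complex hologram.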

  9. Algorithm for definition of stones components at kidney-stones illness using two-energetic digital roentgen-graphic method

    International Nuclear Information System (INIS)

    Nedavnij, O.I.; Osipov, S.P.

    2001-01-01

    This paper presents an algorithm for determining stone composition in kidney-stone disease using two-energy digital X-ray imaging. The values of the information parameter p were calculated for the main types of stones within the 40-150 keV energy range. It was shown that the dependence of the parameter p on energy is not essential (maximum 3.5% deviation); the value of p for various chemical compositions of kidney stones ranged from 15% (calcium phosphate and calcium oxalate) up to 70% (calcium lactate and calcium oxalate). The studies conducted allow the conclusion that it is possible to identify the material forming the core of kidney stones using two-energy digital X-ray imaging. The paper includes recommendations on the selection of optimal energy values [ru]

  10. Optical encryption of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography

    Science.gov (United States)

    Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua

    2018-03-01

    We present an optical encryption method for multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of the multiple object beams and the single reference beam is used to simultaneously encrypt multiple objects into one ciphertext. During decryption, each three-dimensional object can be decrypted independently without having to decrypt the other objects. Since single-pixel digital holography based on compressive sensing theory is introduced, the amount of encrypted data in this method is effectively reduced. In addition, recording less encrypted data can greatly reduce the bandwidth required for network transmission. Moreover, the compressive sensing essentially serves as a secret key that makes an intruder attack invalid, which means that the system is more secure than the conventional encryption method. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).

  11. Optimizing bi-objective, multi-echelon supply chain model using particle swarm intelligence algorithm

    Science.gov (United States)

    Sathish Kumar, V. R.; Anbuudayasankar, S. P.; Rameshkumar, K.

    2018-02-01

    In the current globalized scenario, business organizations are increasingly dependent on cost-effective supply chains to enhance profitability and better handle competition. Demand uncertainty is an important factor in the success or failure of a supply chain. An efficient supply chain limits the stock held at all echelons while still avoiding stock-out situations. In this paper, a three-echelon supply chain model consisting of supplier, manufacturing plant and market is developed, and the model is optimized using a particle swarm intelligence algorithm.
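    A canonical particle swarm optimizer, of the kind applied to the supply chain model, can be sketched on a toy objective. The inertia and acceleration coefficients are common textbook values, not necessarily the paper's:

```python
import random
random.seed(1)

def f(x):                                     # toy stand-in for the supply chain cost
    return sum(xi * xi for xi in x)

dim, n = 3, 15
X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
V = [[0.0] * dim for _ in range(n)]
P = [x[:] for x in X]                         # personal best positions
g = min(P, key=f)[:]                          # global best position
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
for _ in range(200):
    for i in range(n):
        for d in range(dim):
            V[i][d] = (w * V[i][d]
                       + c1 * random.random() * (P[i][d] - X[i][d])
                       + c2 * random.random() * (g[d] - X[i][d]))
            X[i][d] += V[i][d]
        if f(X[i]) < f(P[i]):                 # update personal and global bests
            P[i] = X[i][:]
            if f(P[i]) < f(g):
                g = P[i][:]
print(f(g))                                   # converges close to zero
```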

  12. Multi-objective Optimization Algorithms with the Island Metaheuristic for Effective Project Management Problem Solving

    Directory of Open Access Journals (Sweden)

    Brester Christina

    2017-12-01

    Full Text Available Background and Purpose: In every organization, project management raises many different decision-making problems, a large proportion of which can be efficiently solved using specific decision-making support systems. Yet such kinds of problems are always a challenge since there is no time-efficient or computationally efficient algorithm to solve them as a result of their complexity. In this study, we consider the problem of optimal financial investment. In our solution, we take into account the following organizational resource and project characteristics: profits, costs and risks.

  13. Architecture and Initial Development of a Digital Library Platform for Computable Knowledge Objects for Health.

    Science.gov (United States)

    Flynn, Allen J; Bahulekar, Namita; Boisvert, Peter; Lagoze, Carl; Meng, George; Rampton, James; Friedman, Charles P

    2017-01-01

    Throughout the world, biomedical knowledge is routinely generated and shared through primary and secondary scientific publications. However, there is too much latency between publication of knowledge and its routine use in practice. To address this latency, what is actionable in scientific publications can be encoded to make it computable. We have created a purpose-built digital library platform to hold, manage, and share actionable, computable knowledge for health called the Knowledge Grid Library. Here we present it with its system architecture.

  14. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    Science.gov (United States)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using six algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a
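
    The stochastic dynamics sketched above are often written, following the classic lake model, as X_{t+1} = X_t + a_t + Y_t + X_t^q/(1 + X_t^q) - bX_t, where a_t is the town's loading and Y_t the uncertain natural inflow. The rollout below is a hedged illustration only: b, q, and the lognormal inflow parameters are assumed values, not the study's calibration.

```python
import random

def simulate_lake(loadings, b=0.42, q=2.0, mu=-4.0, sigma=0.1, seed=1):
    """One stochastic rollout of lake phosphorus.

    loadings[t] is the town's anthropogenic loading a_t; the lognormal
    term Y_t abstracts uncertain non-point-source inflow.  All parameter
    values here are assumptions for illustration."""
    rng = random.Random(seed)
    x, trace = 0.0, []
    for a in loadings:
        inflow = rng.lognormvariate(mu, sigma)    # natural loading Y_t
        recycle = x ** q / (1.0 + x ** q)         # nonlinear internal recycling
        x = x + a + inflow + recycle - b * x
        trace.append(x)
    return trace

cautious = simulate_lake([0.01] * 100)   # stays in the clean basin
greedy = simulate_lake([0.10] * 100)     # crosses the threshold, goes eutrophic
```

    With these assumed parameters the heavier loading policy drifts past the unstable equilibrium and settles near the eutrophic attractor, which is the irreversibility the benchmark is designed to stress.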

  15. A new algorithm for a high-modulation frequency and high-speed digital lock-in amplifier

    International Nuclear Information System (INIS)

    Jiang, G L; Yang, H; Li, R; Kong, P

    2016-01-01

    To increase the maximum modulation frequency of the digital lock-in amplifier in an online system, we propose a new algorithm using a square wave reference whose frequency is an odd sub-multiple of the modulation frequency, which is based on the odd harmonic components in the square wave reference. The sampling frequency is four times the modulation frequency to ensure the orthogonality of the reference sequences. Only additions and subtractions are used to implement phase-sensitive detection, which speeds up the lock-in computation. Furthermore, the maximum modulation frequency of the lock-in is enhanced considerably. The feasibility of this new algorithm is tested by simulation and experiments. (paper)
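
    Sampling at four times the modulation frequency turns the square-wave references into ±1 patterns, so phase-sensitive detection needs only additions and subtractions, exactly as the abstract notes. The sketch below shows the plain 4x-sampling case only; the odd sub-multiple reference trick is not reproduced, and the normalization and test signal are illustrative choices, not the paper's implementation.

```python
import math

def lockin(samples):
    """Square-wave phase-sensitive detection at fs = 4 * fmod.

    With four samples per modulation period, the in-phase and quadrature
    square-wave references reduce to the sign patterns below, so the
    demodulation itself uses only additions and subtractions."""
    i_ref = (1, 1, -1, -1)   # in-phase square-wave reference
    q_ref = (1, -1, -1, 1)   # quadrature square-wave reference
    I = sum(s * i_ref[n % 4] for n, s in enumerate(samples))
    Q = sum(s * q_ref[n % 4] for n, s in enumerate(samples))
    n = len(samples)
    # sqrt(2)/n normalization recovers the amplitude of a sinusoidal input;
    # a DC offset averages out over whole modulation periods.
    amp = math.hypot(I, Q) * math.sqrt(2) / n
    phase = math.atan2(Q, I)   # offset by -pi/4 relative to the signal phase
    return amp, phase

# Synthetic test signal: 1 V sine at fmod plus a 0.5 V offset, 1000 periods.
N = 4000
sig = [math.sin(2 * math.pi * n / 4) + 0.5 for n in range(N)]
amp, _ = lockin(sig)
```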

  16. Improving the Penetration of Wind Power with Dynamic Thermal Rating System, Static VAR Compensator and Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jiashen Teh

    2018-04-01

    Full Text Available The integration of renewable energy sources, especially wind energy, has been on the rise throughout power systems worldwide. Due to this relatively new introduction, the integration of wind energy is often not optimized. Moreover, owing to the technical constraints and transmission congestion of the power network, much of the wind energy has to be curtailed. Due to the various factors that influence the connectivity of wind energy, this paper proposes a well-organized posterior multi-objective (MO) optimization algorithm for maximizing the connections of wind energy. In this regard, the dynamic thermal rating (DTR) system and the static VAR compensator (SVC) have been identified as effective tools for improving the loadability of the network. The proposed MO algorithm aims to minimize: (1) wind energy curtailment; (2) the operation cost of the network considering all investments and operations, also known as the total social cost; and (3) the SVC operation cost. The proposed MO problem was solved using the non-dominated sorting genetic algorithm (NSGA-II) and tested on the modified IEEE reliability test system (IEEE-RTS). The results demonstrate the applicability of the proposed algorithm in aiding power system enhancement planning for integrating wind energy.

  17. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults.

    Directory of Open Access Journals (Sweden)

    Jorgen A Wullems

    Full Text Available Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and showing performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.

  18. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    Science.gov (United States)

    Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female) energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and showing performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839
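
    For context, the cut-off point models the study benchmarks simply map an activity count to an intensity class by fixed thresholds. The sketch below is a toy version; the thresholds are hypothetical placeholders (loosely modelled on commonly cited hip-worn count cut-points), NOT the thigh-specific values or the Random Forest model the study developed.

```python
def classify_intensity(counts_per_min, sed_cut=100, mvpa_cut=2020):
    """Toy cut-off point model: map accelerometer counts to an intensity class.

    sed_cut and mvpa_cut are hypothetical thresholds for illustration only."""
    if counts_per_min < sed_cut:
        return "sedentary"      # below the sedentary cut-point
    if counts_per_min < mvpa_cut:
        return "light"          # between sedentary and MVPA cut-points
    return "MVPA"               # moderate-to-vigorous physical activity

labels = [classify_intensity(c) for c in (50, 800, 3000)]
```

    The study's point is that such fixed thresholds cannot adapt to individual characteristics, which is where the Random Forest alternative gains its advantage.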

  19. Review and comparison of non-conventional imaging systems for three-dimensional digitization of transparent objects

    Science.gov (United States)

    Mériaudeau, Fabrice; Rantoson, Rindra; Fofi, David; Stolz, Christophe

    2012-04-01

    Fashion and design greatly influence the conception of manufactured products, which now exhibit complex forms and shapes. Two-dimensional quality control procedures (e.g., shape, textures, colors, and 2D geometry) are progressively being replaced by 3D inspection methods (e.g., 3D geometry, colors, and texture on the 3D shape), therefore requiring a digitization of the object surface. Three-dimensional surface acquisition is a topic which has been studied to a large extent, and a significant number of techniques for acquiring 3D shapes have been proposed, leading to a wide range of commercial solutions available on the market. These systems cover a wide range of scales, from micro-scale objects (shape from focus and shape from defocus techniques) to objects several meters in size (time-of-flight techniques). Nevertheless, the use of such systems still encounters difficulties when dealing with non-diffuse (non-Lambertian) surfaces, as is the case for transparent, semi-transparent, or highly reflective materials (e.g., glass, crystals, plastics, and shiny metals). We review and compare various systems and approaches which were recently developed for 3D digitization of transparent objects.

  20. Non-sky polarization-based dehazing algorithm for non-specular objects using polarization difference and global scene feature.

    Science.gov (United States)

    Qu, Yufu; Zou, Zhaofan

    2017-10-16

    Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of the scattering and attenuation of light caused by suspended particles, and therefore image dehazing has attracted considerable research attention. Current polarization-based dehazing algorithms strongly rely on the presence of a "sky area", and thus the selection of model parameters is susceptible to external interference from high-brightness objects and strong light sources; in addition, the noise of the restored image is large. In order to solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are uncorrelated, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. The hazy image is then dehazed. Finally, a filtering denoising algorithm, which combines the polarization difference information with block-matching and 3D (BM3D) filtering, is designed to smooth the image. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models. Moreover, except in specular-object scenarios, the dehazed images are superior to those obtained by Tarel

  1. A multiple objective magnet sorting algorithm for the Advanced Light Source insertion devices

    International Nuclear Information System (INIS)

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1995-01-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors

  2. A multiple objective magnet sorting algorithm for the ALS insertion devices

    International Nuclear Information System (INIS)

    Humphries, D.; Goetz, F.; Kownacki, P.; Marks, S.; Schlueter, R.

    1994-07-01

    Insertion devices for the Advanced Light Source (ALS) incorporate large numbers of permanent magnets which have a variety of magnetization orientation errors. These orientation errors can produce field errors which affect both the spectral brightness of the insertion devices and the storage ring electron beam dynamics. A perturbation study was carried out to quantify the effects of orientation errors acting in a hybrid magnetic structure. The results of this study were used to develop a multiple stage sorting algorithm which minimizes undesirable integrated field errors and essentially eliminates pole excitation errors. When applied to a measured magnet population for an existing insertion device, an order of magnitude reduction in integrated field errors was achieved while maintaining near zero pole excitation errors

  3. TRACKING AND MONITORING OF TAGGED OBJECTS EMPLOYING PARTICLE SWARM OPTIMIZATION ALGORITHM IN A DEPARTMENTAL STORE

    Directory of Open Access Journals (Sweden)

    Indrajit Bhattacharya

    2011-05-01

    Full Text Available The present paper proposes a departmental store automation system based on Radio Frequency Identification (RFID) technology and the Particle Swarm Optimization (PSO) algorithm. The items in the departmental store, spanning different sections and multiple floors, are tagged with passive RFID tags. The floor is divided into a number of zones depending on the different types of items placed in their respective racks. Each zone is equipped with one RFID reader, which constantly monitors the items in its zone and periodically sends that information to the application. The problem of systematic periodic monitoring of the store is addressed in this application so that the locations, distributions and demands of every item in the store can be invigilated with intelligence. The proposed application is successfully demonstrated on a simulated case study.

  4. Object-Oriented Economic Power Dispatch of Electrical Power System with minimum pollution using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    T. Bouktir

    2005-06-01

    Full Text Available This paper presents a solution of the optimal power flow (OPF) problem of an electrical power system via a genetic algorithm of real type. The objective is to minimize the total fuel cost of generation and the environmental pollution caused by fossil-based thermal generating units, while maintaining an acceptable system performance in terms of limits on generator real and reactive power outputs, bus voltages, shunt capacitors/reactors, transformer tap-settings and power flow of transmission lines. CPU time can be reduced by decomposing the optimization constraints into active constraints, which directly affect the cost function and are manipulated directly by the GA, and passive constraints, such as generator bus voltages and transformer tap settings, which are maintained within their soft limits using a conventional constrained load flow. The algorithm was developed in an object-oriented fashion in the C++ programming language. This option satisfies the requirements of flexibility, extensibility, maintainability and data integrity. The economic power dispatch is applied to the IEEE 30-bus model system (6 generators, 41 lines and 20 loads). The numerical results demonstrate the effectiveness of the stochastic search algorithm, which can provide accurate dispatch solutions within reasonable time. Further analyses indicate that this method is effective for large-scale power systems.

  5. Resonance assignment of the NMR spectra of disordered proteins using a multi-objective non-dominated sorting genetic algorithm

    International Nuclear Information System (INIS)

    Yang, Yu; Fritzsching, Keith J.; Hong, Mei

    2013-01-01

    A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra (“good connections”), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra (“bad connections”), and minimizing the number of assigned peaks that have no matching peaks in the other spectra (“edges”). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct

  6. Resonance assignment of the NMR spectra of disordered proteins using a multi-objective non-dominated sorting genetic algorithm.

    Science.gov (United States)

    Yang, Yu; Fritzsching, Keith J; Hong, Mei

    2013-11-01

    A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a
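
    The core of the NSGA-II approach described in these two records is non-dominated sorting, which repeatedly peels off the Pareto-optimal subset of the remaining candidates. A minimal sketch for minimization objectives (a simple quadratic-time illustration, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split points into successive Pareto fronts, as in NSGA-II."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # Front = members not dominated by any other remaining member.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy two-objective values (e.g. "bad connections" vs "edges", both minimised).
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_sort(pts)
```

    In the assignment problem above, each point would be a candidate assignment scored on the four objectives (good connections, used peaks, bad connections, edges), and the first front is the set of assignments no other assignment improves upon in every respect.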

  7. Comparison of active SIFT-based 3D object recognition algorithms

    CSIR Research Space (South Africa)

    Keaikitse, M

    2013-09-01

    Full Text Available The following is the procedure used for obtaining the dataset by the author of [8]. The training and testing datasets were captured using a Prosilica GE1900C camera. Everyday objects such as cereal and spice boxes were used in compiling the training dataset. ... images satisfies the condition (|xi − xj| ≤ xT = 12) ∧ (|yi − yj| ≤ yT = 4). In our case, however, the camera is fixed and the object is placed on a rotating turntable. As a result...

  8. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  9. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  10. An effective docking strategy for virtual screening based on multi-objective optimization algorithm

    Directory of Open Access Journals (Sweden)

    Kang Ling

    2009-02-01

    Full Text Available Abstract Background Development of a fast and accurate scoring function in virtual screening remains a hot issue in current computer-aided drug research. Different scoring functions focus on diverse aspects of ligand binding, and no single scoring function can satisfy the peculiarities of each target system. Therefore, the idea of a consensus score strategy was put forward. Integrating several scoring functions, consensus scoring re-assesses the docked conformations using a primary scoring function. However, it is not really robust and efficient from the perspective of optimization. Furthermore, to date, the majority of available methods are still based on single-objective optimization design. Results In this paper, two multi-objective optimization methods, called MOSFOM, were developed for virtual screening, which simultaneously consider both the energy score and the contact score. Results suggest that MOSFOM can effectively enhance enrichment and performance compared with a single score. For three different kinds of binding sites, MOSFOM displays an excellent ability to differentiate active compounds through energy and shape complementarity. EFMOGA performed particularly well in the top 2% of the database for all three cases, whereas MOEA_Nrg and MOEA_Cnt performed better than the corresponding individual scoring functions if the appropriate type of binding site was selected. Conclusion The multi-objective optimization method was successfully applied in virtual screening with two different scoring functions that can yield reasonable binding poses and can, furthermore, rank the potentially compromised conformations of each compound, abandoning those conformations that cannot satisfy the overall objective functions.

  11. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    Full Text Available This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability in determining the secondary arc extinction time during transient faults as well as blocking unsuccessful automatic reclosing during permanent faults.
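
    The third harmonic of the zero sequence voltage can be tracked with a single-bin DFT (or, equivalently, a Goertzel filter). The sketch below illustrates only that extraction step; the sampling rate, signal amplitudes and relaying logic are assumptions for illustration, not the paper's design.

```python
import math

def harmonic_magnitude(samples, fs, f0, k):
    """Magnitude of the k-th harmonic of fundamental f0 via direct correlation.

    Exact when the window spans an integer number of cycles of both
    frequencies.  fs is the sampling rate in Hz."""
    n = len(samples)
    w = 2 * math.pi * k * f0 / fs
    re = sum(s * math.cos(w * i) for i, s in enumerate(samples))
    im = sum(s * math.sin(w * i) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

# Assumed zero-sequence voltage: 0.2 pu fundamental plus 0.05 pu third harmonic,
# sampled at 3200 Hz over ten cycles of a 50 Hz system.
fs, f0 = 3200.0, 50.0
t = [i / fs for i in range(640)]
v0 = [0.2 * math.sin(2 * math.pi * 50 * x) +
      0.05 * math.sin(2 * math.pi * 150 * x) for x in t]
h3 = harmonic_magnitude(v0, fs, f0, 3)
```

    An adaptive scheme like the paper's would then compare this third-harmonic magnitude against a threshold that tracks the prevailing conditions rather than a fixed level.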

  12. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    Science.gov (United States)

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, a three-level discrete wavelet transformation is applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm performs better in terms of invisibility and robustness.
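
    SVD-based embedding of this kind is commonly realised by perturbing a sub-band's singular values, S' = S + alpha * S_w. The sketch below shows that step on random stand-in data, with a hand-picked scaling factor in place of the paper's differential evolution search (all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
host_band = rng.standard_normal((8, 8))   # stand-in for a DWT sub-band of Y
watermark = rng.standard_normal((8, 8))   # stand-in for the scrambled watermark
alpha = 0.05                              # scaling factor; the paper tunes this by DE

# Embedding: perturb the sub-band's singular values with the watermark's.
U, S, Vt = np.linalg.svd(host_band)
Uw, Sw, Vtw = np.linalg.svd(watermark)
marked = U @ np.diag(S + alpha * Sw) @ Vt

# Extraction (non-blind: U, S, Vt, Uw, Vtw are kept as side information).
S_marked = np.diag(U.T @ marked @ Vt.T)
w_rec = Uw @ np.diag((S_marked - S) / alpha) @ Vtw
```

    A larger alpha makes the watermark more robust but more visible; choosing it per sub-band is exactly the trade-off the differential evolution step optimizes.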

  13. Multi-objective optimization of water supply network rehabilitation with non-dominated sorting Genetic Algorithm-Ⅱ

    Institute of Scientific and Technical Information of China (English)

    Xi JIN; Jie ZHANG; Jin-liang GAO; Wen-yan WU

    2008-01-01

    Through the transformation of hydraulic constraints into the objective functions associated with a water supply network rehabilitation problem, a non-dominated sorting Genetic Algorithm-II (NSGA-II) can be used to solve the altered multi-objective optimization model. The introduction of NSGA-II into the water supply network optimal rehabilitation problem resolves the conflict between the single fitness value of a standard genetic algorithm (SGA) and the multiple objectives of the rehabilitation problem, and controls the uncertainties brought by using weight coefficients or penalty functions in conventional methods. Furthermore, by introducing the artificial inducement mutation (AIM) operation, the convergence speed of the population is accelerated; this operation not only improves the convergence speed, but also improves the rationality and feasibility of the solutions.

  14. Multi-objective optimization of an industrial penicillin V bioreactor train using non-dominated sorting genetic algorithm.

    Science.gov (United States)

    Lee, Fook Choon; Rangaiah, Gade Pandu; Ray, Ajay Kumar

    2007-10-15

    Bulk of the penicillin produced is used as raw material for semi-synthetic penicillin (such as amoxicillin and ampicillin) and semi-synthetic cephalosporins (such as cephalexin and cefadroxil). In the present paper, an industrial penicillin V bioreactor train is optimized for multiple objectives simultaneously. An industrial train, comprising a bank of identical bioreactors, is run semi-continuously in a synchronous fashion. The fermentation taking place in a bioreactor is modeled using a morphologically structured mechanism. For multi-objective optimization for two and three objectives, the elitist non-dominated sorting genetic algorithm (NSGA-II) is chosen. Instead of a single optimum as in the traditional optimization, a wide range of optimal design and operating conditions depicting trade-offs of key performance indicators such as batch cycle time, yield, profit and penicillin concentration, is successfully obtained. The effects of design and operating variables on the optimal solutions are discussed in detail. Copyright 2007 Wiley Periodicals, Inc.

  15. PROCESSING OF DIGITAL IMAGES OF INDUSTRIAL OBJECT SURFACES DURING NON-DESTRUCTIVE TESTING

    Directory of Open Access Journals (Sweden)

    A. A. Hundzin

    2016-01-01

    Full Text Available The paper presents modern approaches to processing images obtained with the help of industrial equipment. The use of pixel modification in small neighborhoods, uniform image processing with changing brightness levels, possibilities for combining several images, and threshold image processing are described. While processing a series of images of a metal structure containing micro-cracks under strain, the difference between two such images has been determined; this difference forms a contour, and analysis of the contour makes it possible to determine the initial direction of crack propagation in the metal. A threshold binarization value has been determined for images having fields of medium intensity, which disappear and merge with the background during simple binarization due to the rather small drop between the edges. For this purpose, an algorithm of balanced threshold histogram clipping has been selected, based on the following approach: two histogram fractions are "weighed", and if one of them "outweighs" the other, the last column of the heavier fraction is removed and the procedure is repeated. When the threshold value is rather high, a contour break (disappearance of informative pixels) may occur, and when the threshold value is low, noise (non-informative pixels) may appear. The paper shows the implementation of an algorithm for locating contact pads in an image of a semiconductor crystal. Algorithms for morphological processing of production prototype images have been obtained; these algorithms permit detection of defects on the surface of semiconductors and provide filtration and threshold binarization based on balanced threshold histogram clipping. The developed approaches can be used to highlight contours on the surface
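
    The balanced threshold histogram clipping described above repeatedly "weighs" the two sides of the histogram and trims the outer bin of the heavier side until the sides meet. The sketch below is one common formulation of that procedure; the paper's exact variant and the example histogram are not reproduced here.

```python
def balanced_threshold(hist):
    """Balanced histogram thresholding: trim a bin from the heavier side
    until the range collapses; the final midpoint is the threshold."""
    lo, hi = 0, len(hist) - 1
    mid = (lo + hi) // 2
    w_left = sum(hist[lo:mid + 1])      # weight of the left "fraction"
    w_right = sum(hist[mid + 1:hi + 1])  # weight of the right "fraction"
    while lo < hi:
        if w_left > w_right:            # left side outweighs: remove its outer bin
            w_left -= hist[lo]
            lo += 1
        else:                           # right side outweighs (or ties): remove its outer bin
            w_right -= hist[hi]
            hi -= 1
        new_mid = (lo + hi) // 2
        if new_mid < mid:               # midpoint moved left: bin at mid changes sides
            w_left -= hist[mid]
            w_right += hist[mid]
        elif new_mid > mid:             # midpoint moved right: bin at new_mid changes sides
            w_left += hist[new_mid]
            w_right -= hist[new_mid]
        mid = new_mid
    return mid

# Illustrative bimodal histogram: background mode at bin 3, object mode at bin 9.
hist = [0, 0, 5, 10, 5, 0, 0, 0, 2, 4, 2, 0]
t = balanced_threshold(hist)
```

    For a bimodal histogram like this one, the surviving midpoint lands in the valley between the two modes, which is the desired binarization threshold.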

  16. An Efficient Two-Objective Hybrid Local Search Algorithm for Solving the Fuel Consumption Vehicle Routing Problem

    Directory of Open Access Journals (Sweden)

    Weizhen Rao

    2016-01-01

    Full Text Available The classical model of the vehicle routing problem (VRP) generally minimizes either the total vehicle travelling distance or the total number of dispatched vehicles. Due to the increased importance of environmental sustainability, one variant of the VRP that minimizes total vehicle fuel consumption has gained much attention. The resulting fuel consumption VRP (FCVRP) is increasingly important yet difficult. We present a mixed integer programming model for the FCVRP in which fuel consumption is measured through the degree of road gradient. A complexity analysis of the FCVRP is presented through analogy with the capacitated VRP. To tackle the FCVRP’s computational intractability, we propose an efficient two-objective hybrid local search algorithm (TOHLS). TOHLS is based on a hybrid local search algorithm (HLS) that is also used to solve the FCVRP. Based on the Golden CVRP benchmarks, 60 FCVRP instances are generated and tested. Finally, the computational results show that the proposed TOHLS significantly outperforms the HLS.
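
    The TOHLS itself is not specified in the record; purely as an illustration, a gradient-aware route cost together with a plain first-improvement 2-opt move (one common building block of local search for VRPs) can be sketched as follows. The cost model, the matrices, and the parameter `k` are our assumptions, not the paper's:

    ```python
    def fuel_cost(route, dist, grad, base=1.0, k=5.0):
        """Fuel used along a route: each arc's distance is weighted by a factor
        that grows with (positive) road gradient -- an assumed toy model."""
        total = 0.0
        for a, b in zip(route, route[1:]):
            total += dist[a][b] * (base + k * max(0.0, grad[a][b]))
        return total

    def two_opt(route, cost):
        """First-improvement 2-opt: reverse a segment whenever that lowers cost."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 2):
                for j in range(i + 1, len(route) - 1):
                    cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                    if cost(cand) < cost(route) - 1e-12:
                        route, improved = cand, True
        return route

    # Toy instance: 4 nodes, depot 0, flat terrain (all gradients zero).
    dist = [[0, 2, 9, 4], [2, 0, 3, 8], [9, 3, 0, 5], [4, 8, 5, 0]]
    grad = [[0.0] * 4 for _ in range(4)]
    best = two_opt([0, 2, 1, 3, 0], lambda r: fuel_cost(r, dist, grad))
    ```

    On flat terrain the cost reduces to plain distance; a non-zero gradient matrix makes uphill arcs proportionally more expensive, which is the mechanism the FCVRP objective captures.
    
    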

  17. A novel algorithm for fractional resizing of digital image in DCT domain

    Institute of Scientific and Technical Information of China (English)

    Wang Ci; Zhang Wenjun; Zheng Meng

    2005-01-01

    Fractional resizing of digital images is needed in various applications, such as displaying at a resolution matched to the display device, building an image index for an image database, and changing resolution according to the transmission channel bandwidth. With the wide use of JPEG and MPEG, almost all digital images are stored and transferred in DCT-compressed format. In order to save computation and memory cost, it is desirable to do the resizing directly in the DCT domain. This paper presents a fast and efficient method capable of fractional resizing in the DCT domain. Experimental results confirm that this scheme achieves a significant reduction in computation cost while maintaining good quality.
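
    The paper's exact scheme is not reproduced here, but the underlying idea of DCT-domain resizing can be sketched with orthonormal DCT matrices: keep the low-frequency corner of the coefficient block (truncating or zero-padding as needed) and inverse-transform at the new size. The function names and the normalization choice are our assumptions:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II transform matrix of size n x n."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        C[0, :] /= np.sqrt(2.0)
        return C

    def resize_dct(block, out):
        """Resize a square block to out x out by truncating or zero-padding its
        DCT coefficients and inverse-transforming at the new size (a sketch)."""
        n = block.shape[0]
        Cn, Co = dct_matrix(n), dct_matrix(out)
        coeff = Cn @ block @ Cn.T                 # forward 2-D DCT
        resized = np.zeros((out, out))
        s = min(n, out)
        resized[:s, :s] = coeff[:s, :s]           # keep the low-frequency corner
        resized *= out / n                        # renormalize for the size change
        return Co.T @ resized @ Co                # inverse 2-D DCT at the new size

    flat = np.full((8, 8), 100.0)
    half = resize_dct(flat, 4)    # a constant block stays constant after resizing
    ```

    Because the transform is applied at the coefficient level, no pixel-domain reconstruction of the full-size image is needed, which is the computational saving the abstract refers to.
    
    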

  18. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. The parallel imaging configuration is utilised in a few tomosynthesis imaging areas, such as digital chest tomosynthesis, and has recently begun to appear in breast tomosynthesis as well. In this paper, we present an investigation of the computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method to compare and optimise imaging configurations.
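
    For reference, the shift-and-add (SAA) reconstruction named in the title can be sketched in a few lines; the geometry below (integer per-projection shifts along the scan direction, and the impulse-response test data) is an assumption made for illustration:

    ```python
    import numpy as np

    def shift_and_add(projections, shifts):
        """Shift-and-add reconstruction of one plane: each projection is shifted
        so that features lying in the plane of interest come into registration,
        then all projections are averaged."""
        acc = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, shifts):
            acc += np.roll(proj, s, axis=1)       # integer shift, scan direction
        return acc / len(projections)

    # Impulse-response check: a point in the focal plane shows parallax p in each
    # projection; shifting by -p refocuses it, out-of-plane points stay blurred.
    parallax = [-2, -1, 0, 1, 2]
    projections = []
    for p in parallax:
        proj = np.zeros((8, 32))
        proj[4, 10 + p] = 1.0                     # in-plane impulse
        projections.append(proj)
    rec = shift_and_add(projections, [-p for p in parallax])
    ```

    The reconstructed impulse collapses back to a single bright pixel, which is exactly the impulse response characterisation the abstract uses to compare configurations.
    
    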

  19. Semantic Web, Reusable Learning Objects, Personal Learning Networks in Health: Key Pieces for Digital Health Literacy.

    Science.gov (United States)

    Konstantinidis, Stathis Th; Wharrad, Heather; Windle, Richard; Bamidis, Panagiotis D

    2017-01-01

    The knowledge existing in the World Wide Web is expanding exponentially, while continuous advancements in the health sciences contribute to the creation of new knowledge. Many efforts seek to identify how social connectivity can support patient empowerment, while other studies look at the identification and quality of online materials. However, emphasis has not been put on the big picture of connecting the existing resources with patients' "new habits" of learning through their own Personal Learning Networks. In this paper we propose a framework for empowering patients' digital health literacy, adjusted to patients' current needs, by utilizing the contemporary way of learning through Personal Learning Networks, existing high-quality learning resources, and semantic technologies for interconnecting knowledge pieces. The framework is based on the concept of knowledge maps for health as defined in this paper. Digital health literacy definitely needs further enhancement, and use of the proposed concept might lead to useful tools that enable the use of understandable, trusted health resources tailored to each person's needs.

  20. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    Science.gov (United States)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by that candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
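
    Since the evolved extractors are benchmarked against Local Binary Patterns, a minimal 8-neighbour LBP feature (one of the hand-engineered baselines named above, not the evolved method itself) can be sketched as:

    ```python
    import numpy as np

    def lbp_image(img):
        """8-neighbour Local Binary Pattern code for each interior pixel.

        Each neighbour that is >= the centre contributes one bit to an 8-bit
        code, so every interior pixel is summarized by a value in 0..255."""
        # Neighbour offsets in clockwise order starting at the top-left.
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
        centre = img[1:h - 1, 1:w - 1]
        for bit, (dy, dx) in enumerate(offs):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            codes |= (neigh >= centre).astype(np.uint8) << bit
        return codes

    def lbp_histogram(img, bins=256):
        """Normalized LBP histogram used as a fixed-length feature vector."""
        hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
        return hist / hist.sum()
    ```

    The normalized histogram is the fixed-length feature vector that would be fed to the SVM; the evolved extractors generalize this cell-and-combine structure.
    
    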

  1. DIGITAL

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — The Digital Flood Insurance Rate Map (DFIRM) Database depicts flood risk information and supporting data used to develop the risk data. The primary risk...

  2. Intersection signal control multi-objective optimization based on genetic algorithm

    OpenAIRE

    Zhanhong Zhou; Ming Cai

    2014-01-01

    A signal control intersection increases not only vehicle delay, but also vehicle emissions and fuel consumption in that area. As fuel and air pollution problems have grown in recent years, an intersection signal control optimization method that reduces vehicle emissions, fuel consumption and vehicle delay is urgently needed. This paper proposes a signal control multi-objective optimization method to reduce vehicle emissions, fuel consumption and vehicle delay simultaneously at ...

  3. Linking to Scientific Data: Identity Problems of Unruly and Poorly Bounded Digital Objects

    Directory of Open Access Journals (Sweden)

    Laura Wynholds

    2011-03-01

    Full Text Available Within information systems, a significant aspect of search and retrieval across information objects, such as datasets, journal articles, or images, relies on the identity construction of the objects. This paper uses identity to refer to the qualities or characteristics of an information object that make it definable and recognizable, and can be used to distinguish it from other objects. Identity, in this context, can be seen as the foundation from which citations, metadata and identifiers are constructed. In recent years the idea of including datasets within the scientific record has been gaining significant momentum, with publishers, granting agencies and libraries engaging with the challenge. However, the task has been fraught with questions of best practice for establishing this infrastructure, especially in regard to how citations, metadata and identifiers should be constructed. These questions suggest a problem with how dataset identities are formed, such that an engagement with the definition of datasets as conceptual objects is warranted. This paper explores some of the ways in which scientific data is an unruly and poorly bounded object, and goes on to propose that in order for datasets to fulfill the roles expected of them, the following identity functions are essential for scholarly publications: (i) the dataset is constructed as a semantically and logically concrete object; (ii) the identity of the dataset is embedded, inherent and/or inseparable; (iii) the identity embodies a framework of authorship, rights and limitations; and (iv) the identity translates into an actionable mechanism for retrieval or reference.

  4. Anthropology with algorithms? An exploration of online drug knowledge using digital methods

    NARCIS (Netherlands)

    Krieg, L.J.; Berning, M.; Hardon, A.

    2017-01-01

    Based on a study of more than twenty thousand reports on drug experiences from the online drug education portal Erowid, this article argues that the integration of ethnographic methods with computational methods and digital data analysis, including so-called big data, is not only possible but highly

  5. An algorithm for treating flat areas and depressions in digital elevation models using linear interpolation

    Science.gov (United States)

    F. Pan; M. Stieglitz; R.B. McKane

    2012-01-01

    Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Except for slope, none of the other topographic characteristics can be calculated until the flow...

  6. A multi-objective multi-memetic algorithm for network-wide conflict-free 4D flight trajectories planning

    Institute of Scientific and Technical Information of China (English)

    Su YAN; Kaiquan CAI

    2017-01-01

    Under the demand of strategic air traffic flow management and the concept of trajectory based operations (TBO), the network-wide 4D flight trajectories planning (N4DFTP) problem has been investigated with the purpose of safely and efficiently allocating 4D trajectories (4DTs) (3D position and time) for all the flights in the whole airway network. Considering that the introduction of large-scale 4DTs inevitably increases the problem complexity, an efficient model for strategic-level conflict management is developed in this paper. Specifically, a bi-objective N4DFTP problem that aims to minimize both potential conflicts and the trajectory cost is formulated. In consideration of the large-scale, high-complexity, and multi-objective characteristics of the N4DFTP problem, a multi-objective multi-memetic algorithm (MOMMA) that incorporates an evolutionary global search framework together with three problem-specific local search operators is implemented. It is capable of rapidly and effectively allocating 4DTs via rerouting, target time controlling, and flight level changing. Additionally, to balance the ability of exploitation and exploration of the algorithm, a special hybridization scheme is adopted for the integration of local and global search. Empirical studies using real air traffic data in China with different network complexities show that the proposed MOMMA is effective to solve the N4DFTP problem. The solutions achieved are competitive for elaborate decision support under a TBO environment.

  7. Optimization of a Finned Shell and Tube Heat Exchanger Using a Multi-Objective Optimization Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Heidar Sadeghzadeh

    2015-08-01

    Full Text Available Heat transfer rate and cost significantly affect designs of shell and tube heat exchangers. From the viewpoint of engineering, an optimum design is obtained via maximum heat transfer rate and minimum cost. Here, an analysis of a radial, finned, shell and tube heat exchanger is carried out, considering nine design parameters: tube arrangement, tube diameter, tube pitch, tube length, number of tubes, fin height, fin thickness, baffle spacing ratio and number of fins per unit length of tube. The “Delaware modified” technique is used to determine heat transfer coefficients and the shell-side pressure drop. In this technique, the baffle cut is 20 percent and the baffle ratio limits range from 0.2 to 0.4. The optimization of the objective functions (maximum heat transfer rate and minimum total cost) is performed using a non-dominated sorting genetic algorithm (NSGA-II), and compared against a one-objective algorithm, to find the best solutions. The results are depicted as a set of solutions on a Pareto front, and show that the heat transfer rate ranges from 3517 to 7075 kW. Also, the minimum and maximum objective functions are specified, allowing the designer to select the best points among these solutions based on requirements. Additionally, variations of shell-side pressure drop with total cost are depicted, and indicate that the pressure drop ranges from 3.8 to 46.7 kPa.
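
    NSGA-II itself is well documented elsewhere; its core step, sorting candidate designs into successive Pareto fronts, can be sketched as follows. The numeric objective vectors are invented for illustration, with heat transfer rate negated so that both objectives are minimized:

    ```python
    def dominates(a, b):
        """a dominates b when it is no worse in every objective and strictly
        better in at least one (all objectives assumed to be minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated_sort(points):
        """Return the successive Pareto fronts (lists of indices) of the points."""
        remaining = list(range(len(points)))
        fronts = []
        while remaining:
            front = [i for i in remaining
                     if not any(dominates(points[j], points[i])
                                for j in remaining if j != i)]
            fronts.append(front)
            remaining = [i for i in remaining if i not in front]
        return fronts

    # Two objectives per design: (-heat_transfer_rate_kW, total_cost) -- invented.
    pts = [(-7075, 90), (-3517, 20), (-5000, 50), (-3000, 60)]
    fronts = non_dominated_sort(pts)
    ```

    The first front is the set of trade-off solutions a designer would choose among; NSGA-II additionally ranks within fronts by crowding distance to preserve spread.
    
    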

  8. A multi-objective multi-memetic algorithm for network-wide conflict-free 4D flight trajectories planning

    Directory of Open Access Journals (Sweden)

    Su YAN

    2017-06-01

    Full Text Available Under the demand of strategic air traffic flow management and the concept of trajectory based operations (TBO), the network-wide 4D flight trajectories planning (N4DFTP) problem has been investigated with the purpose of safely and efficiently allocating 4D trajectories (4DTs) (3D position and time) for all the flights in the whole airway network. Considering that the introduction of large-scale 4DTs inevitably increases the problem complexity, an efficient model for strategic-level conflict management is developed in this paper. Specifically, a bi-objective N4DFTP problem that aims to minimize both potential conflicts and the trajectory cost is formulated. In consideration of the large-scale, high-complexity, and multi-objective characteristics of the N4DFTP problem, a multi-objective multi-memetic algorithm (MOMMA) that incorporates an evolutionary global search framework together with three problem-specific local search operators is implemented. It is capable of rapidly and effectively allocating 4DTs via rerouting, target time controlling, and flight level changing. Additionally, to balance the ability of exploitation and exploration of the algorithm, a special hybridization scheme is adopted for the integration of local and global search. Empirical studies using real air traffic data in China with different network complexities show that the proposed MOMMA is effective to solve the N4DFTP problem. The solutions achieved are competitive for elaborate decision support under a TBO environment.

  9. Dosimetric quality control of treatment planning systems in external radiation therapy using Digital Test Objects calculated by PENELOPE Monte-Carlo simulations

    International Nuclear Information System (INIS)

    Ben Hdech, Yassine

    2011-01-01

    To ensure the required accuracy and prevent mis-administration, cancer treatments by external radiation therapy are simulated on a Treatment Planning System (TPS) before radiation delivery, in order to ensure that the prescription is achieved both in terms of target volume coverage and healthy tissue protection. The TPS calculates the patient dose distribution and the treatment time per beam required to deliver the prescribed dose. The TPS is a key system in the decision process of treatment by radiation therapy. It is therefore essential that the TPS be subject to a thorough check of its performance (quality control, or QC), and in particular of its ability to accurately compute dose distributions for patients in all clinical situations that may be met. The 'traditional' methods recommended for carrying out dosimetric QC of the algorithms implemented in the TPS are based on comparisons between dose distributions calculated with the TPS and doses measured in physical test objects (PTOs) using the treatment machine. In this thesis we propose to substitute for the reference dosimetric measurements performed in PTOs benchmark dose calculations in Digital Test Objects using the PENELOPE Monte-Carlo code. This method has three advantages: (i) it allows simulation in situations close to the clinic that are often too complex to be experimentally feasible; (ii) due to the digital form of the reference data, the QC process may be automated; (iii) it allows a comprehensive TPS QC without hindering the use of equipment devoted primarily to patient treatments. This new QC method has been tested successfully on the Eclipse TPS from the Varian Medical Systems company. (author) [fr

  10. The Sloan Digital Sky Survey Science Archive: Migrating a Multi-Terabyte Astronomical Archive from Object to Relational DBMS

    CERN Document Server

    Thakar, A R; Kunszt, Peter Z; Gray, J; Thakar, Aniruddha R.; Szalay, Alexander S.; Kunszt, Peter Z.; Gray, Jim

    2004-01-01

    The Sloan Digital Sky Survey Science Archive is the first in a series of multi-Terabyte digital archives in Astronomy and other data-intensive sciences. To facilitate data mining in the SDSS archive, we adapted a commercial database engine and built specialized tools on top of it. Originally we chose an object-oriented database management system due to its data organization capabilities, platform independence, query performance and conceptual fit to the data. However, after using the object database for the first couple of years of the project, it soon began to fall short in terms of its query support and data mining performance. This was as much due to the inability of the database vendor to respond to our demands for features and bug fixes as it was due to their failure to keep up with the rapid improvements in hardware performance, particularly faster RAID disk systems. In the end, we were forced to abandon the object database and migrate our data to a relational database. We describe below the technical issu...

  11. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
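
    The fragment above outlines the classic Tower of Hanoi recursion; a complete version of the scheme it hints at (the rod names and the `moves` list are ours) can be sketched as:

    ```python
    def hanoi(n, src, dst, aux, moves):
        """Move n disks from src to dst: first n-1 disks go to the auxiliary
        rod, then the largest disk moves directly, then the n-1 disks follow."""
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst, moves)   # n-1 disks A -> B using C
        moves.append((src, dst))             # move_disk(A, C): largest disk
        hanoi(n - 1, aux, dst, src, moves)   # n-1 disks B -> C using A

    moves = []
    hanoi(3, "A", "C", "B", moves)
    # A tower of n disks always takes 2**n - 1 moves, so 7 moves here.
    ```
    
    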

  12. Annotation-based enrichment of Digital Objects using open-source frameworks

    Directory of Open Access Journals (Sweden)

    Marcus Emmanuel Barnes

    2017-07-01

    Full Text Available The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery and persistence over time. In addition, new JavaScript libraries provide the ability for users to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.

  13. Multi-objective optimization design of air distribution of grate cooler by entropy generation minimization and genetic algorithm

    International Nuclear Information System (INIS)

    Shao, Wei; Cui, Zheng; Cheng, Lin

    2016-01-01

    Highlights: • A multi-objective optimization model of the air distribution of a grate cooler by genetic algorithm is proposed. • The Pareto front is obtained and validated by comparison with operating data. • Optimal schemes are compared and selected on engineering grounds. • Total power consumption after optimization decreases by 61.10%. • The clinker layer on three grate plates is thinner. - Abstract: The cooling air distributions of a grate cooler exercise a great influence on the clinker cooling efficiency and the power consumption of the cooling fans. A multi-objective optimization model of the air distributions of a grate cooler, using a cross-flow heat exchanger analogy, is proposed in this paper. Firstly, thermodynamic and flow models of the clinker cooling process are developed. Then, based on entropy generation minimization analysis, modified entropy generation numbers caused by heat transfer and pressure drop are chosen as the objective functions, which are optimized by a genetic algorithm. The design variables are the superficial velocities of the air chambers and the thicknesses of the clinker layers on the different grate plates. A set of Pareto optimal solutions, in which the two objectives are optimized simultaneously, is achieved. Scattered distributions of the design variables, resulting from the conflict between the two objectives, are brought out. The final optimal air distribution and clinker layer thicknesses are selected from the Pareto optimal solutions based on minimization of the power consumption of the cooling fans, and validated by measurements. Compared with the actual operating scheme, the total air volume of the optimized scheme decreases by 2.4%, the total power consumption of the cooling fans decreases by 61.1% and the outlet temperature of the clinker decreases by 122.9 °C, which shows a remarkable energy-saving effect.

  14. An Evolutionary Mobility Aware Multi-Objective Hybrid Routing Algorithm for Heterogeneous Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Kulkarni, Nandkumar P.; Prasad, Neeli R.; Prasad, Ramjee

    deliberation. To tackle these two problems, Mobile Wireless Sensor Networks (MWSNs) is a better choice. In MWSN, Sensor nodes move freely to a target area without the need for any special infrastructure. Due to mobility, the routing process in MWSN has become more complicated as connections in the network can...... such as Average Energy consumption, Control Overhead, Reaction Time, LQI, and HOP Count. The authors study the influence of energy heterogeneity and mobility of sensor nodes on the performance of EMRP. The Performance of EMRP compared with Simple Hybrid Routing Protocol (SHRP) and Dynamic Multi-Objective Routing...

  15. Enhancing State-of-the-art Multi-objective Optimization Algorithms by Applying Domain Specific Operators

    DEFF Research Database (Denmark)

    Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2015-01-01

    optimization problems where the environment does not change dynamically. For that reason, the requirement for convergence in static optimization problems is not as timecritical as for dynamic optimization problems. Most MOEAs use generic variables and operators that scale to static multi-objective optimization...... problem. The domain specific operators only encode existing knowledge about the environment. A comprehensive comparative study is provided to evaluate the results of applying the CONTROLEUM-GA compared to NSGAII, e-NSGAII and e- MOEA. Experimental results demonstrate clear improvements in convergence time...

  16. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.

  17. Algorithm of calculation of multicomponent system eutectics using electronic digital computer

    International Nuclear Information System (INIS)

    Posypajko, V.I.; Stratilatov, B.V.; Pervikova, V.I.; Volkov, V.Ya.

    1975-01-01

    A computer algorithm is proposed for determining low-temperature equilibrium regions for existing phases. The algorithm has been used in calculating nonvariant parameters (melting temperatures of the eutectics and the concentrations of their components) for a series of ternary systems, among which are K||Cl,WO4,SO4 (x1 = K2WO4; x2 = K2SO4); Ag,Cd,Pb||Cl (x1 = CdCl2; x2 = PbCl2); and K||F,Cl,I (x1 = KF; x2 = KI). The proposed method of calculating eutectics permits planning the subsequent experiments for determining the parameters of the eutectics of multicomponent systems and forecasting chemical interaction in such systems. The algorithm can be used to calculate systems containing any number of components.

  18. The multi-objective genetic algorithm optimization, of a superplastic forming process, using ansys®

    Directory of Open Access Journals (Sweden)

    Grebenişan Gavril

    2017-01-01

    Full Text Available In industrial practice, the product is intended to be flawless, with no technological difficulty in producing the profile shapes. If the product can be made without defects, then any simulation based on the Finite Element Method (FEM) can support that technology. A technology engineer does not very often propose to analyze the simulation of the designed technology, but rather tries to optimize a solution that appears feasible. Experiments used as the basis for numerical optimization analysis support this research in the field of superplastic forming. The objectives of the research and optimization in this project are: determining the influence of the input parameters on the output parameters, determining the optimal shape of the product and the optimal initial geometry, predicting cracks and possible fractures, and predicting the final thickness of the sheet. The results of the numerical simulations have been compared with measurements made on parts, and sections of parts, obtained by superplastic forming. Of course, the consistency of the results and the costs, benefits, and times required to perform the numerical simulations are evaluated, but they are not objectives of the optimization of the superplastic forming process.

  19. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    Science.gov (United States)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
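
    The record states only that the weight estimates require simple operations on broadband noise-like signals. As an illustration (not the authors' exact estimator), combining weights can be estimated from sample cross-correlations against the strongest channel, which for independent receiver noise approximates maximal-ratio combining. The channel count, gains and noise level below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000
    s = rng.standard_normal(n)                    # broadband thermal-like source
    gains = np.array([1.0, 0.8, 0.5, 0.3])        # per-channel couplings (invented)
    x = gains[:, None] * s + 0.5 * rng.standard_normal((4, n))

    # With independent noise, the cross-correlation of each channel against a
    # reference channel is proportional to that channel's signal gain, so it can
    # serve as a combining-weight estimate (maximal-ratio-like combining).
    w = x @ x[0] / n
    w /= np.linalg.norm(w)                        # scale does not affect SNR
    combined = w @ x

    def snr(y):
        """Signal-to-noise power ratio of y relative to the known source s."""
        a = (y * s).sum() / (s * s).sum()          # signal amplitude by projection
        noise = y - a * s
        return a**2 * (s * s).sum() / (noise * noise).sum()
    ```

    The combined output has a higher SNR than any single channel, which is the gain the array feed compensation system is after.
    
    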

  20. Digital algorithms for early short-circuit detection. Digitale Algorithmen zur fruehzeitigen Kurzschlusserkennung

    Energy Technology Data Exchange (ETDEWEB)

    Lindmayer, M.; Stege, M. (Technische Univ. Braunschweig (Germany, F.R.). Inst. fuer Elektrische Energieanlagen)

    1991-07-01

    Algorithms for early detection and prevention of short circuits are presented. Data on current levels and steepness in the a.c. network to be protected are evaluated by microcomputers. In particular, a simplified low-voltage grid is considered whose load circuit is formed under normal conditions by a series R-L circuit. An optimum short-circuit detection algorithm is proposed for this network, which forecasts a current value from the current and steepness signals and compares this value with a limiting value. (orig.).
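
    The forecasting step described above is simple enough to sketch directly; the prediction horizon `dt` and the limit values below are invented for illustration:

    ```python
    def short_circuit_trip(i_now, di_dt, i_limit, dt=1e-3):
        """Forecast the current a short time ahead from its level and steepness
        (di/dt) and compare the forecast with a limiting value."""
        i_forecast = i_now + di_dt * dt            # linear extrapolation over dt
        return i_forecast > i_limit

    # Normal load: 100 A rising at 5 kA/s forecasts 105 A, below a 1 kA limit.
    normal = short_circuit_trip(100.0, 5e3, 1000.0)
    # Incipient short circuit: 300 A rising at 2 MA/s forecasts 2300 A.
    fault = short_circuit_trip(300.0, 2e6, 1000.0)
    ```

    Using the steepness signal lets the scheme trip before the current itself reaches the limit, which is the point of the early-detection algorithms described.
    
    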

  1. Hybridization between multi-objective genetic algorithm and support vector machine for feature selection in walker-assisted gait.

    Science.gov (United States)

    Martins, Maria; Costa, Lino; Frizera, Anselmo; Ceres, Ramón; Santos, Cristina

    2014-03-01

    Walker devices are often prescribed incorrectly to patients, leading to increased dissatisfaction and the occurrence of several problems, such as discomfort and pain. Thus, it is necessary to objectively evaluate the effects that assisted gait can have on the gait patterns of walker users, compared to non-assisted gait. A gait analysis focusing on spatiotemporal and kinematic parameters was carried out for this purpose. However, gait analysis yields redundant information that is often difficult to interpret. This study addresses the problem of selecting the most relevant gait features required to differentiate between assisted and non-assisted gait. For that purpose, an efficient approach is presented that combines evolutionary techniques, based on genetic algorithms, with support vector machine algorithms to discriminate differences between assisted and non-assisted gait with a walker with forearm supports. For comparison purposes, other classification algorithms are verified. Results with healthy subjects show that the main differences are characterized by balance and joint excursion in the sagittal plane. These results, confirmed by clinical evidence, allow concluding that this technique is an efficient feature selection approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Investigation of trunk muscle activities during lifting using a multi-objective optimization-based model and intelligent optimization algorithms.

    Science.gov (United States)

    Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam

    2016-03-01

    A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in a standing posture. The model was formulated as a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., vector evaluated particle swarm optimization (VEPSO) and the nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then found such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of the VEPSO and the NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimations of muscle activities by considering spinal stability and incorporating the in vivo intradiscal pressure data.

  3. A dynamic programming–enhanced simulated annealing algorithm for solving bi-objective cell formation problem with duplicate machines

    Directory of Open Access Journals (Sweden)

    Mohammad Mohammadi

    2015-04-01

    Full Text Available The cell formation process is one of the first and most important steps in designing cellular manufacturing systems. It consists of identifying part families according to similarities in the design, shape, and processes of parts, and dedicating machines to each part family based on the operations required by the parts. In this study, a hybrid method based on a combination of a simulated annealing algorithm and dynamic programming was developed to solve a bi-objective cell formation problem with duplicate machines. In the proposed hybrid method, each solution is represented as a permutation of parts, created by the simulated annealing algorithm, and dynamic programming is used to partition this permutation into part families and determine the number of machines in each cell, such that the total dissimilarity between the parts and the total machine investment cost are minimized. The performance of the algorithm was evaluated by performing numerical experiments of different sizes. Our computational experiments indicate that the results are very encouraging in terms of computational time and solution quality.
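    The division of labor described above, simulated annealing proposing a permutation and dynamic programming splitting it into contiguous part families, can be sketched as follows; the cost function and data are hypothetical stand-ins for the paper's dissimilarity-plus-machine-investment objective:

```python
def best_partition(perm, cost, max_cells):
    """Split the permutation `perm` into at most `max_cells` contiguous
    part families, minimizing the summed per-family cost (dynamic programming)."""
    n = len(perm)
    INF = float("inf")
    # dp[k][i]: minimum cost of grouping the first i parts into k families
    dp = [[INF] * (n + 1) for _ in range(max_cells + 1)]
    dp[0][0] = 0.0
    choice = {}
    for k in range(1, max_cells + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):
                c = dp[k - 1][j] + cost(perm[j:i])
                if c < dp[k][i]:
                    dp[k][i] = c
                    choice[(k, i)] = j
    # pick the best family count, then walk back through the split points
    best_k = min(range(1, max_cells + 1), key=lambda kk: dp[kk][n])
    families, i, k = [], n, best_k
    while k > 0:
        j = choice[(k, i)]
        families.append(perm[j:i])
        i, k = j, k - 1
    return list(reversed(families))

# Hypothetical dissimilarity: spread within a family, plus a fixed
# machine-investment cost of 2 per family.
cost = lambda g: (max(g) - min(g)) + 2
families = best_partition([1, 2, 3, 10, 11, 12], cost, max_cells=3)
# → [[1, 2, 3], [10, 11, 12]]
```

    In the hybrid method, simulated annealing would perturb the permutation and call a partition step like this to evaluate each candidate.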

  4. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    Science.gov (United States)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely relevant. This problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error-detection system according to a validation criterion is solved. Known methods for solving such problems have an exponential estimate of labor intensity. Thus, with a view to reducing the solution time, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining algorithms [1].

  5. Sustainable Scheduling of Cloth Production Processes by Multi-Objective Genetic Algorithm with Tabu-Enhanced Local Search

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2017-09-01

    Full Text Available The dyeing of textile materials is the most critical process in cloth production because of the strict technological requirements. In addition to the technical aspect, there have been increasing concerns over how to minimize the negative environmental impact of the dyeing industry. The emissions of pollutants are mainly caused by frequent cleaning operations, which are necessary for initializing the dyeing equipment, as well as by idled production capacity, which leads to discharge of unconsumed chemicals. Motivated by these facts, we propose a methodology to reduce pollutant emissions by means of systematic production scheduling. Firstly, we build a three-objective scheduling model that incorporates both the traditional tardiness objective and the environmentally-related objectives. A mixed-integer programming formulation is also provided to accurately define the problem. Then, we present a novel solution method for the sustainable scheduling problem, namely, a multi-objective genetic algorithm with a tabu-enhanced iterated greedy local search strategy (MOGA-TIG). Finally, we conduct extensive computational experiments to investigate the actual performance of the MOGA-TIG. Based on a fair comparison with two state-of-the-art multi-objective optimizers, it is concluded that the MOGA-TIG is able to achieve satisfactory solution quality within a tight computational time budget for the studied scheduling problem.

  6. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure, in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach uses differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems, obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.

  7. Multi-objective optimization of the control strategy of electric vehicle electro-hydraulic composite braking system with genetic algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fengjiao

    2015-03-01

    Full Text Available Optimization of the control strategy plays an important role in improving the performance of electric vehicles. In order to improve braking stability and recover braking energy, a multi-objective genetic algorithm is applied to optimize the key parameters in the control strategy of an electric vehicle electro-hydraulic composite braking system. Various constraints are considered in the optimization process, and the optimization results are verified on a software simulation platform of an electric vehicle regenerative braking system under typical braking conditions. The results show that the optimization objectives achieved good convergence, and the optimized control strategy can increase brake energy recovery effectively while ensuring braking stability.

  8. A computationally efficient depression-filling algorithm for digital elevation models, applied to proglacial lake drainage

    NARCIS (Netherlands)

    Berends, Constantijn J.; Van De Wal, Roderik S W

    2016-01-01

    Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a
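    The "flood-fill algorithm" mentioned above can be sketched in a few lines. This is a generic stack-based fill on a gridded DEM, not the optimized variants the paper evaluates; the DEM and threshold are hypothetical:

```python
def flood_fill(dem, seed, lake_level):
    """Return the set of grid cells connected to `seed` whose elevation
    lies below `lake_level` (4-connectivity, stack-based flood fill)."""
    rows, cols = len(dem), len(dem[0])
    lake, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in lake or not (0 <= r < rows and 0 <= c < cols):
            continue
        if dem[r][c] >= lake_level:
            continue  # cell is above the water level: not part of the lake
        lake.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return lake

# A small synthetic DEM: a depression of elevation 1 ringed by elevation 5.
dem = [
    [5, 5, 5, 5],
    [5, 1, 1, 5],
    [5, 1, 5, 5],
    [5, 5, 5, 5],
]
lake = flood_fill(dem, seed=(1, 1), lake_level=3)
# → {(1, 1), (1, 2), (2, 1)}
```

    A depression-filling algorithm would additionally raise `lake_level` to the lowest spill point on the depression's boundary before filling.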

  9. A homotopy algorithm for digital optimal projection control GASD-HADOC

    Science.gov (United States)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard solution methods exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
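    For context, the full-order discrete-time optimal gain that reduced-order methods start from is obtained from a Riccati equation. A minimal scalar sketch of the standard fixed-point iteration (not the homotopy algorithm itself; the plant parameters are hypothetical) is:

```python
def dare_scalar(a, b, q, r, tol=1e-12, max_iter=10000):
    """Fixed-point iteration of the scalar discrete-time Riccati equation
    for the plant x[k+1] = a*x[k] + b*u[k] with stage cost q*x^2 + r*u^2."""
    p = q
    for _ in range(max_iter):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            p = p_next
            break
        p = p_next
    k_gain = a * b * p / (r + b * b * p)  # optimal state feedback u = -k_gain*x
    return p, k_gain

# Hypothetical unstable plant (a = 1.1): the optimal gain stabilizes it,
# i.e. the closed-loop pole a - b*k_gain has magnitude below 1.
p, k = dare_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
```

    For matrix-valued plants the same recursion runs over matrices, and library routines (e.g. a discrete algebraic Riccati equation solver) are normally used instead of hand-rolled iteration.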

  10. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    Science.gov (United States)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be carefully calibrated against observed streamflow at the outlet of a particular drainage area. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable, so it is difficult to calibrate the model over the many potentially uncertain parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms, which can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied in the San Joaquin Watershed in California, covering 19704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. 
To evaluate the sensitivity of the calibrated parameters three different optimization algorithms (Sequential Uncertainty Fitting- SUFI-2, Generalized Likelihood Uncertainty Estimation- GLUE and Parameter Solution- ParaSol) were used with four different objective functions (coefficient of determination

  11. Optimization of a novel carbon dioxide cogeneration system using artificial neural network and multi-objective genetic algorithm

    International Nuclear Information System (INIS)

    Jamali, Arash; Ahmadi, Pouria; Mohd Jaafar, Mohammad Nazri

    2014-01-01

    In this research study, a combined cycle based on the Brayton power cycle and the ejector expansion refrigeration cycle is proposed. The proposed cycle can provide heating, cooling and power simultaneously. Among the benefits of such a system are that it can be driven by low-temperature heat sources and that it uses CO2 as the working fluid. In order to enhance the understanding of the current work, a comprehensive parametric study and exergy analysis are conducted to determine the effects of the thermodynamic parameters on system performance and on the exergy destruction rate in the components. The suggested cycle can save around 46% of the energy in comparison with a system producing cooling, power and hot water separately. On the other hand, to optimize the system to meet the load requirement, the surface area of the heat exchangers is determined and optimized. The results of this section can be used when system compactness is also an objective. Along with the comprehensive parametric study and exergy analysis, a complete optimization study is carried out using a multi-objective evolutionary genetic algorithm considering two different objective functions: heat exchanger size (to be minimized) and exergy efficiency (to be maximized). The Pareto front of the optimization problem and a correlation between exergy efficiency and total heat exchanger length are presented in order to predict the trend of optimized points. The suggested system can be a promising combined system for buildings and remote regions. - Highlights: •Energy and exergy analysis of a novel CHP system are reported. •A comprehensive parametric study is conducted to enhance the understanding of the system performance. •A multi-objective optimization technique is applied, based on a code developed in Matlab using an evolutionary algorithm

  12. Digital watermarking techniques and trends

    CERN Document Server

    Nematollahi, Mohammad Ali; Rosales, Hamurabi Gamboa

    2017-01-01

    This book presents state-of-the-art applications of digital watermarking in audio, speech, image, video, 3D mesh graphs, text, software, natural language, ontologies, network streams, relational databases, XML, and hardware IPs. It also presents new and recent algorithms in digital watermarking for copyright protection and discusses future trends in the field. Today, the illegal manipulation of genuine digital objects and products represents a considerable problem in the digital world. Offering an effective solution, digital watermarking can be applied to protect intellectual property, as well as for fingerprinting and for enhancing security and proof of authentication over unsecured channels.

  13. Herschel Observations of Protostellar and Young Stellar Objects in Nearby Molecular Clouds: The DIGIT Open Time Key Project

    Science.gov (United States)

    Green, Joel D.; DIGIT OTKP Team

    2010-01-01

    The DIGIT (Dust, Ice, and Gas In Time) Open Time Key Project utilizes the PACS spectrometer (57-210 um) onboard the Herschel Space Observatory to study the colder regions of young stellar objects and protostellar cores, complementary to recent observations from Spitzer and ground-based observatories. DIGIT focuses on 30 embedded sources and 64 disk sources, and includes supporting photometry from PACS and SPIRE, as well as spectroscopy from HIFI, selected from nearby molecular clouds. For the embedded sources, PACS spectroscopy will allow us to address the origin of [CI] and high-J CO lines observed with ISO-LWS. Our observations are sensitive to the presence of cold crystalline water ice, diopside, and carbonates. Additionally, PACS scans are 5x5 maps of the embedded sources and their outflows. Observations of more evolved disk sources will sample low and intermediate mass objects as well as a variety of spectral types from A to M. Many of these sources are extremely rich in mid-IR crystalline dust features, enabling us to test whether similar features can be detected at larger radii, via colder dust emission at longer wavelengths. If processed grains are present only in the inner disk (in the case of full disks) or from the emitting wall surface which marks the outer edge of the gap (in the case of transitional disks), there must be short timescales for dust processing; if processed grains are detected in the outer disk, radial transport must be rapid and efficient. Weak bands of forsterite and clino- and ortho-enstatite in the 60-75 um range provide information about the conditions under which these materials were formed. For the Science Demonstration Phase we are observing an embedded protostar (DK Cha) and a Herbig Ae/Be star (HD 100546), exemplars of the kind of science that DIGIT will achieve over the full program.

  14. Standards for digital computers used in non-safety nuclear power plant applications: objectives and limitations

    International Nuclear Information System (INIS)

    Rorer, D.C.; Long, A.B.

    1977-01-01

    There are currently a number of efforts to develop standards which would apply to digital computers used in nuclear power plants for functions other than those directly involving plant protection (for example, the ANS 4.3.3 Subworking Group in the U.S. and the IEC 45A/WGA1 Subcommittee in Europe). The impetus for this activity is discussed and generally attributed to the realization that non-safety-system computers may affect the assumptions used as the design bases for safety systems, the sizable economic loss which can result from the failure of a critical computer application, and the lingering concern about the misapplication of a still-new technology. At the same time, it is pointed out that these standards may create additional obstacles for the use of this new technology which are not present in the application of more conventional instrumentation and control equipment. Much of the U.S. effort has been directed toward the problem of validation of computer systems in which the potential exists for unplanned interactions between various functions in a multiprogram environment, using common hardware in a time-sharing mode. The goal is to develop procedures for the specification, development, implementation, and documentation of testable, modular systems which, in the absence of proven quantitative techniques for assessing software reliability, are felt to provide reasonable assurance that the computer system will function as planned.

  15. Multi objective Flower Pollination Algorithm for solving capacitor placement in radial distribution system using data structure load flow analysis

    Directory of Open Access Journals (Sweden)

    Tamilselvan V.

    2016-06-01

    Full Text Available The radial distribution system is a rugged system; it is also the most commonly used system, and it suffers from losses and low voltage at the end buses. These losses can be reduced by the use of a capacitor in the system, which injects reactive current and also improves the voltage magnitude at the buses. The real power loss in a distribution line is the I²R loss, which depends on the current and resistance. Connecting a capacitor at a bus reduces the reactive current and the losses. The loss reduction is equivalent to an increase in generation of the electric power provided by utilities. For consumers, the quality of the power supply depends on the voltage magnitude level, which is also considered; hence the objective of the problem becomes the multi-objective one of loss minimization and minimization of voltage deviation. In this paper, the optimal location and size of the capacitor are found using a new computational-intelligence algorithm called the Flower Pollination Algorithm (FPA). To calculate the power flow and losses in the system, a novel data-structure load flow is introduced, in which each bus is considered a node with its associated bus data. Links between the nodes are distribution lines with their own resistance and reactance. To validate the developed FPA solutions, the standard IEEE 33- and IEEE 69-bus radial distribution test systems are considered.
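    The I²R loss argument can be illustrated with a toy backward sweep on a chain feeder. The currents, resistances, and the capacitor's effect below are hypothetical; a real load flow works with complex power and per-unit voltages:

```python
def feeder_loss(load_currents, resistances):
    """Total I^2*R loss on a radial chain feeder.

    load_currents[i]: current (A) drawn at bus i+1 (bus 0 is the source)
    resistances[i]:   resistance (ohm) of the line feeding bus i+1
    The branch current of line i is the sum of all downstream load
    currents (a backward sweep on a simple chain)."""
    total = 0.0
    for i, r in enumerate(resistances):
        branch_current = sum(load_currents[i:])  # everything downstream
        total += branch_current ** 2 * r
    return total

# Hypothetical 3-bus chain: loads of 10 A, 5 A, 5 A; 0.1 ohm per segment.
loss_before = feeder_loss([10, 5, 5], [0.1, 0.1, 0.1])
# 20^2*0.1 + 10^2*0.1 + 5^2*0.1 = 52.5 W

# A capacitor at bus 2 supplying that bus's 5 A of reactive demand locally
# removes that component from the upstream branch currents:
loss_after = feeder_loss([10, 0, 5], [0.1, 0.1, 0.1])
# 15^2*0.1 + 5^2*0.1 + 5^2*0.1 = 27.5 W
```

    An optimizer such as the FPA would search over capacitor locations and sizes, evaluating each candidate with a load flow like this.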

  16. Monitoring mangrove biomass change in Vietnam using SPOT images and an object-based approach combined with machine learning algorithms

    Science.gov (United States)

    Pham, Lien T. H.; Brabyn, Lars

    2017-06-01

    Mangrove forests are well-known for their provision of ecosystem services and capacity to reduce carbon dioxide concentrations in the atmosphere. Mapping and quantifying mangrove biomass is useful for the effective management of these forests and maximizing their ecosystem service performance. The objectives of this research were to model, map, and analyse the biomass change between 2000 and 2011 of mangrove forests in the Cangio region in Vietnam. SPOT 4 and 5 images were used in conjunction with object-based image analysis and machine learning algorithms. The study area included natural and planted mangroves of diverse species. After image preparation, three different mangrove associations were identified using two levels of image segmentation followed by a Support Vector Machine classifier and a range of spectral, texture and GIS information for classification. The overall classification accuracies for the 2000 and 2011 images were 77.1% and 82.9%, respectively. Random Forest regression algorithms were then used for modelling and mapping biomass. The model that integrated spectral, vegetation association type, texture, and vegetation indices obtained the highest accuracy (adjusted R² = 0.73). Among the different variables, vegetation association type was the most important variable identified by the Random Forest model. Based on the biomass maps generated from the Random Forest, total biomass in the Cangio mangrove forest increased by 820,136 tons over this period, although this change varied between the three different mangrove associations.

  17. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    Science.gov (United States)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented, quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured meshes. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects, so that inserting new objects and removing existing objects (nodes, edges and cells) is independent of the number of objects, with complexity of only O(1). In addition, cells of different levels are stored in separate lists. This avoids the recursive calculation of the solution on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, compared to other cell-edge adaptive methods, the separation of nodes reduces the memory requirement for redundant nodes, especially in cases where the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem under structured and unstructured meshes, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.
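    The O(1) insertion and removal described above is the standard property of a doubly linked list with a sentinel node: given a reference to a node, relinking touches only its two neighbors. A minimal sketch (illustrative only, not the authors' implementation) is:

```python
class Node:
    __slots__ = ("data", "prev", "next")
    def __init__(self, data):
        self.data, self.prev, self.next = data, None, None

class DoublyLinkedList:
    """Circular doubly linked list with a sentinel head: insert/remove
    are O(1) given a node reference, independent of the list length."""
    def __init__(self):
        self.head = Node(None)                 # sentinel
        self.head.prev = self.head.next = self.head

    def insert_after(self, node, data):
        new = Node(data)
        new.prev, new.next = node, node.next   # splice between node and node.next
        node.next.prev = new
        node.next = new
        return new

    def remove(self, node):
        node.prev.next = node.next             # unlink: two pointer updates
        node.next.prev = node.prev

    def items(self):
        node = self.head.next
        while node is not self.head:
            yield node.data
            node = node.next

cells = DoublyLinkedList()
a = cells.insert_after(cells.head, "cell-A")
b = cells.insert_after(a, "cell-B")
cells.remove(a)                                # O(1), no traversal
# list(cells.items()) → ["cell-B"]
```

    Keeping cells of each refinement level in their own list, as the paper does, then makes per-level traversal cheap as well.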

  18. Traces of humanity: Echoes of social and cultural experience in physical objects and digital surrogates in the University of Victoria Libraries

    Directory of Open Access Journals (Sweden)

    Robbyn Gordon Lanning

    2016-12-01

    Full Text Available The relationships between primary source materials and their digital surrogates warrant consideration about how different materials translate into digitized forms. Physical primary source materials found in library special collections and archives and their digital surrogates challenge the viewer to consider what these objects are communicating through their materiality or lack thereof. For example, how does a clay tablet represent itself digitally, as compared to a parchment manuscript, or a paper accounts book? What qualities, stories or narratives do these resources communicate in their original forms, as digital surrogates, or when engaged with together, and how do these differ? How do both physical and digital resources serve as archival objects with the ability to reflect our social and cultural experiences—and indeed our humanity—back to us? As more and more library and museum resources are digitized and made open to researchers, such questions must be addressed as the use and reuse of digital surrogates becomes increasingly complex as digital scholarship evolves.

  19. IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS

    Directory of Open Access Journals (Sweden)

    A. Audi

    2017-08-01

    Full Text Available In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, from several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The stacked image obtained on real surveys shows no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation
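    The registration step, finding how a later frame is displaced relative to the first before resampling and stacking, can be illustrated with a brute-force integer-shift estimator. A real pipeline uses FAST features and IMU-aided template matching with a full geometric model rather than this exhaustive SSD search over hypothetical toy images:

```python
def estimate_shift(ref, img, max_shift=2):
    """Integer (dy, dx) that best aligns img to ref by minimizing the
    mean of squared differences over the overlapping region."""
    rows, cols = len(ref), len(ref[0])
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd = n = 0
            for y in range(rows):
                for x in range(cols):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < rows and 0 <= xx < cols:
                        ssd += (ref[y][x] - img[yy][xx]) ** 2
                        n += 1
            if n and ssd / n < best:
                best, best_shift = ssd / n, (dy, dx)
    return best_shift

# Toy frames: a single bright pixel, shifted one column to the right.
ref = [[0] * 3 for _ in range(3)]; ref[1][1] = 9
img = [[0] * 3 for _ in range(3)]; img[1][2] = 9
shift = estimate_shift(ref, img, max_shift=1)
# → (0, 1)
```

    Once each frame's displacement is known, the frames are resampled to a common grid and summed, emulating one long exposure without its motion blur.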

  20. Use of Portable Digital Devices to Analyze Autonomic Stress Response in Psychology Objective Structured Clinical Examination.

    Science.gov (United States)

    Beltrán-Velasco, Ana Isabel; Bellido-Esteban, Alberto; Ruisoto-Palomera, Pablo; Clemente-Suárez, Vicente Javier

    2018-01-12

    The aim of the present study was to explore changes in the autonomic stress response of Psychology students in a Psychology Objective Structured Clinical Examination (OSCE) and their relationship with OSCE performance. Variables of autonomic modulation, obtained from heart rate variability analysis in the temporal, frequency and non-linear domains, subjective perception of distress, and academic performance were measured before and after the two evaluations that composed the OSCE. A psychology objective structured clinical examination composed of two different evaluation scenarios produced a large anticipatory anxiety response, a habituation response in the first evaluation scenario and in the evaluation as a whole, and no habituation response in the second evaluation scenario. Autonomic modulation parameters did not correlate with the academic performance of the students.