Spatial-frequency information in an image can be extracted by associating the grey-level spatial data with one of the well-known spatial/spatial-frequency distributions. The Wigner-Ville distribution (WVD) has the useful property that images can be represented jointly in the spatial and spatial-frequency domains. For ladar intensity and range images, we study the statistical properties of Rényi entropy computed through the pseudo Wigner-Ville distribution (PWVD) with one- or two-dimensional windows. We also analyze how these statistical properties change when man-made objects appear in the ladar intensity and range images. On this foundation, a novel method for generating a saliency map based on the PWVD and Rényi entropy is proposed. Target detection is then completed by segmenting the saliency map with a simple and convenient thresholding method. Experimental results on ladar intensity and range images show that the proposed method can effectively detect military vehicles against complex terrestrial backgrounds with a low false-alarm rate.
Xu, Yuannan; Zhao, Yuan; Deng, Rong; Dong, Yanbing
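A minimal sketch of the Rényi-entropy saliency scoring described above. The window size, entropy order alpha, and the use of normalized local pixel energy as a stand-in for the PWVD are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Rényi entropy (in bits) of a discrete distribution p, alpha != 1."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def local_renyi_saliency(image, win=8, alpha=3.0):
    """Score each non-overlapping window by the Rényi entropy of its
    normalized pixel-energy distribution. Man-made structure tends to
    concentrate energy, changing the local entropy statistics."""
    h, w = image.shape
    rows, cols = h // win, w // win
    sal = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = image[i*win:(i+1)*win, j*win:(j+1)*win].astype(float)
            energy = block ** 2
            total = energy.sum()
            if total > 0:
                sal[i, j] = renyi_entropy((energy / total).ravel(), alpha)
    return sal

img = np.random.rand(64, 64)
print(local_renyi_saliency(img, win=8).shape)  # (8, 8)
```

Thresholding the resulting map (the paper's final step) then reduces to a single comparison against a scalar cutoff.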
Atmospheric turbulence produces intensity modulation, or "scintillation", on both the outward path and the return path, degrading laser radar (ladar) target acquisition, ranging, and imaging. Previous quantitative measurements of ladar scintillation have used tiny flat mirrors and corner-cube retro-reflectors as their objects. In actuality, real finite-sized objects create scintillation averaging on the outgoing path, and finite-sized telescope apertures produce scintillation averaging on the return path. We quantify these effects and compare them to the tiny-mirror and corner-cube retro-reflector data from the literature. Methods for modeling the outward-path and return-path scintillation effects, and the target-produced laser speckle over arbitrary focal-plane-array detector areas, are discussed. The analysis of the ladar receiver operating characteristic (ROC) and the power signal-to-noise ratio (SNRp), i.e., the mean squared divided by the variance, is also discussed.
Youmans, Douglas G.
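The power SNR defined in the abstract (mean squared over variance) can be checked numerically. A minimal sketch, using exponentially distributed intensities as the standard model for fully developed speckle; the sample sizes and averaging factor are illustrative:

```python
import numpy as np

def power_snr(samples):
    """Power SNR as defined in the abstract: mean squared divided by variance."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean() ** 2 / samples.var()

rng = np.random.default_rng(0)

# Fully developed speckle: intensity is exponentially distributed,
# so var(I) = <I>^2 and the power SNR approaches 1.
speckle = rng.exponential(scale=1.0, size=200_000)
print(power_snr(speckle))  # ~1.0

# Averaging N independent speckle cells (the effect of aperture or
# target-size averaging) raises the power SNR toward N.
averaged = rng.exponential(scale=1.0, size=(200_000, 16)).mean(axis=1)
print(power_snr(averaged))  # ~16
```

This is why finite-sized targets and apertures, which average many speckle/scintillation cells, show less fluctuation than tiny mirrors or corner cubes.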
When travelling in unfamiliar environments, a mobile service robot needs to acquire information about its surroundings in order to detect and avoid obstacles and arrive safely at its destination. This dissertation presents a solution to the problem of mapping and obstacle detection in indoor/outdoor structured environments, with particular application to service robots equipped with a LADAR. Since this system was designed for structured environments, off-road terrains are o...
Gomes, Pedro Miguel Barros
LADAR (Laser Detection and Ranging) systems constitute a direct extension of conventional radar techniques. Because they operate at much shorter wavelengths, LADARs have the unique capability to generate 3D images of objects. These laser systems have many applications in both the civilian and defence fields concerning target detection and identification. The extraction of these features depends on the processing algorithms, target properties and 3D image quality. In order to support f...
This study presents a method to classify humans and trees by extracting their geometric and statistical features from data obtained with a 3D LADAR. In a wooded, GPS-denied environment, it is difficult to localize unmanned ground vehicles and to properly recognize the environment in which they move. In this study, using point cloud data obtained via 3D LADAR, a method to extract the features of humans, trees, and other objects within an environment was implemented and verified through the processes of segmentation, feature extraction, and classification. First, for segmentation, the radially bounded nearest neighbor method was applied. Second, for feature extraction, each segmented object was divided into three parts, and their geometric and statistical features were extracted. A human was divided into three parts: the head, trunk, and legs. A tree was also divided into three parts: the top, middle, and bottom. The geometric features were the variances of the x-y data about the center of each part and the distances between the central points of the parts, obtained using K-means clustering. The statistical features were the variances of each part. In this study, three, six, and six features were extracted, respectively, for a total of 15 features. Finally, after training an artificial neural network on the extracted data, new data were classified. This study reports the results of an experiment applying the proposed algorithm with a vehicle equipped with 3D LADAR in a thickly forested area, a GPS-denied environment. A total of 5,158 segments were obtained, and the classification rates for humans and trees were 82.9% and 87.4%, respectively.
Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
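The radially bounded nearest neighbor segmentation used in the study above can be sketched as a region-growing pass over the point cloud. This is a naive O(n²) illustration (real implementations use a k-d tree for the radius query); the radius value is an assumption:

```python
import numpy as np
from collections import deque

def rbnn_segment(points, radius):
    """Radially bounded nearest-neighbor segmentation: any point within
    `radius` of a segment member joins that segment (breadth-first growth)."""
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            near = np.nonzero(np.linalg.norm(points - points[i], axis=1) < radius)[0]
            for j in near:
                if labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# Two well-separated blobs of 3D points segment into two clusters.
rng = np.random.default_rng(0)
pts = np.vstack([rng.random((50, 3)), rng.random((50, 3)) + 10.0])
labels = rbnn_segment(pts, radius=2.0)
print(len(set(labels.tolist())))  # 2
```

Each resulting segment would then be split vertically into three parts for the feature-extraction stage.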
Rockwell International's objective was to develop a robust and state of the art FLIR/LADAR target detection and identification system for the reconnaissance, surveillance, and target acquisition program. The algorithm suite was to be integrated into the U...
The performance of Geiger-mode LAser Detection and Ranging (LADAR) cameras is primarily defined by individual pixel attributes, such as dark count rate (DCR), photon detection efficiency (PDE), jitter, and crosstalk. However, for the expanding range of LADAR imaging applications, other factors, such as image uniformity, component tolerance, manufacturability, reliability, and operational features, have to be considered. Recently we have developed new 32×32 and 32×128 Read-Out Integrated Circuits (ROIC) for LADAR applications. With multiple filter and absorber structures, the 50-µm-pitch arrays demonstrate pixel crosstalk below the 100 ppm level, while maintaining a PDE greater than 40% at 4 V overbias. Besides the improved epitaxial and process uniformity of the APD arrays, the new ROICs implement a Non-uniform Bias (NUB) circuit providing 4-bit bias voltage tunability over a 2.5 V range to individually bias each pixel. All these features greatly increase the performance uniformity of the LADAR camera. Cameras based on these ROICs were integrated with a data acquisition system developed by Boeing DES. The 32×32 version has a range gate of up to 7 µs and can cover a range window of about 1 km with 14-bit, 0.5 ns timing resolution. The 32×128 camera can be operated at a frame rate of up to 20 kHz with 14-bit, 0.3 ns time resolution through a full CameraLink interface. The performance of the 32×32 LADAR camera has been demonstrated in a series of field tests on various vehicles.
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Boisvert, Joseph; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison; van Duyne, Stephen; Pauls, Greg; Gaalema, Stephen
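The timing figures quoted above map directly to range via the round-trip speed of light. A quick check of the 7 µs gate ("about 1 km") and the 0.5 ns timing bin:

```python
# Timing-to-range arithmetic for a time-of-flight ladar.
C = 299_792_458.0  # speed of light, m/s

def range_window(gate_seconds):
    """One-way range covered by a time-of-flight gate (round trip halved)."""
    return C * gate_seconds / 2.0

def range_resolution(bin_seconds):
    """One-way range corresponding to one timing bin."""
    return C * bin_seconds / 2.0

print(range_window(7e-6))        # ~1049 m: the "about 1 km" window of the 7 us gate
print(range_resolution(0.5e-9))  # ~0.075 m per 0.5 ns timing bin
```

The ~7.5 cm per bin is consistent with the "inch-level fidelity" claimed for these cameras elsewhere in this listing.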
Autonomous off-road navigation is central to several important applications of unmanned ground vehicles. This requires the ability to detect obstacles in vegetation. We examine the prospects for doing so with scanning ladar and with a linear array of 2.2 GHz micro-impulse radar transceivers. For ladar, we summarize our work to date on algorithms for detecting obstacles in tall grass with single-axis ladar, then present a simple probabilistic model of the distance into tall grass that ladar-based obstacle detection is possible.
Matthies, Larry; Bergh, Chuck; Castano, Andres; Macedo, Jose; Manduchi, Roberto
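A hedged sketch of the kind of "simple probabilistic model of the distance into tall grass" the abstract mentions: if blade interceptions along the beam are modeled as a Poisson process, the chance of an unobstructed return decays exponentially with depth. The Poisson form and the density value are illustrative assumptions, not necessarily the paper's exact model:

```python
import math

def penetration_probability(depth_m, blade_density_per_m):
    """Probability that a ladar pulse reaches a given depth into grass
    unobstructed, assuming blade interceptions form a Poisson process
    along the beam path (illustrative assumption)."""
    return math.exp(-blade_density_per_m * depth_m)

# With 2 effective blade interceptions per metre, detection odds fall fast:
for d in (0.5, 1.0, 2.0):
    print(f"{d} m: {penetration_probability(d, 2.0):.3f}")
```

Under such a model, the usable obstacle-detection depth scales inversely with the effective blade density.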
The Army Research Laboratory (ARL) is researching a short-range ladar imager for small unmanned ground vehicles for navigation, obstacle/collision avoidance, and target detection and identification. To date, commercial ladars for this application have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. In the prior year we conceived a scanned ladar design based on a newly developed but commercial MEMS mirror and a pulsed Erbium fiber laser. We initiated construction, and performed in-lab tests that validated the basic ladar architecture. This year we improved the transmitter and receiver modules and successfully tested a new low-cost and compact Erbium laser candidate. We further developed the existing software to allow adjustment of operating parameters on-the-fly and display of the imaged data in real-time. For our most significant achievement we mounted the ladar on an iRobot PackBot and wrote software to integrate PackBot and ladar control signals and ladar imagery on the PackBot's computer network. We recently remotely drove the PackBot over an in-lab obstacle course while displaying the ladar data in real-time over a wireless link. The ladar has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or accuracy). This paper will describe the ladar design and update progress in its development and performance.
Stann, Barry L.; Dammann, John F.; Giza, Mark M.; Jian, Pey-Schuan; Lawler, William B.; Nguyen, Hung M.; Sadler, Laurel C.
The Army Research Laboratory (ARL) is researching a short-range ladar imager for navigation, obstacle/collision avoidance, and target detection/identification on small unmanned ground vehicles (UGV). To date, commercial UGV ladars have been flawed by one or more factors including low pixelization, insufficient range or range resolution, image artifacts, no daylight operation, large size, high power consumption, and high cost. ARL built a breadboard ladar based on a newly developed but commercially available micro-electro-mechanical system (MEMS) mirror coupled to a low-cost pulsed Erbium fiber laser transmitter that largely addresses these problems. Last year we integrated the ladar and associated control software on an iRobot PackBot and distributed the ladar imagery data via the PackBot's computer network. The un-tethered PackBot was driven through an indoor obstacle course while displaying the ladar data in real-time on a remote laptop computer over a wireless link. We later conducted additional driving experiments in cluttered outdoor environments. This year ARL partnered with General Dynamics Robotics Systems to start construction of a brassboard ladar design. This paper will discuss refinements and rebuild of the various subsystems including the transmitter and receiver module, the data acquisition and data processing board, and software that will lead to a more compact, lower cost, and better performing ladar. The current ladar breadboard has a 5-6 Hz frame rate, an image size of 256 (h) × 128 (v) pixels, a 60° × 30° field of regard, 20 m range, eyesafe operation, and 40 cm range resolution (with provisions for super-resolution or accuracy).
Stann, Barry L.; Dammann, John F.; Enke, Joseph A.; Jian, Pey-Schuan; Giza, Mark M.; Lawler, William B.; Powers, Michael A.
This paper proposes an object-based VRML interface that can reduce the burden of both users and developers. In order to automatically generate the object menu, a new VRML node type that defines the names of objects in a scene graph is introduced together with the methods for real-time control of an active viewpoint node.
Araya, Shinji; Suzaki, Kenichi; Miyake, Yoshihiro
Object-based video representations are considered to be useful for easing the process of multimedia content production and enhancing user interactivity in multimedia productions. Object-based video presents several new technical challenges, however. Firstly, as with conventional video representations, compression of the video data is a requirement. For object-based representations, it is necessary to compress the shape of each video object as it moves in time. This amounts to the co...
In this study, we investigated whether awareness of objects is necessary for object-based guidance of attention. We used the two-rectangle method (Egly, Driver, & Rafal, 1994) to probe object-based attention and adopted the continuous flash suppression technique (Tsuchiya & Koch, 2005) to control the visibility of the two rectangles. Our results show that object-based attention, as indexed by the same-object advantage (faster response to a target within a cued object than within a noncued object), was obtained regardless of participants' awareness of the objects. This study provides the first evidence of object-based attention under unconscious conditions, showing that the selection unit of attention can be at the object level even when the objects are invisible, a level higher than the previous evidence for a subliminally cued location. We suggest that object-based attentional guidance plays a fundamental role in binding features in both the conscious and unconscious mind. PMID:22237418
Chou, Wei-Lun; Yeh, Su-Ling
The paper presents new techniques for automatic segmentation, classification, and generic pose estimation of ships and boats in laser radar imagery. Segmentation, which primarily involves elimination of water reflections, is based on modeling surface waves and comparing the expected water reflection signature to the ladar intensity image. Shape classification matches a parametric shape representation of a generic ship hull with parameters extracted from the range image. The extracted paramete...
Armbruster, Walter; Hammer, Marcus
We present real-time 3D image processing of flash ladar data using our recently developed GPU parallel processing kernels. Our laboratory and airborne experiences with flash ladar focal planes have shown that, per laser flash, typically only a small fraction of the pixels on the focal plane array actually produce a meaningful range signal. Therefore, to optimize overall data processing speed, the large quantity of uninformative data is filtered out and removed from the data stream prior to the mathematically intensive point cloud transformation processing. This front-end pre-processing, which largely consists of control flow instructions, is specific to the particular type of flash ladar focal plane array being used and is performed by the computer's CPU. The valid signals along with their corresponding inertial and navigation metadata are then transferred to a GPU device to perform range-correction, geo-location, and ortho-rectification on each 3D data point so that data from multiple frames can be properly tiled together either to create a wide-area map or to reconstruct an object from multiple look angles. GPU parallel processing kernels were developed using OpenCL. Post-processing to perform fine registration between data frames via complex iterative steps also benefits greatly from this type of high-performance computing. The performance improvements obtained using GPU processing to create corrected 3D images and for frame-to-frame fine-registration are presented.
Wong, Chung M.; Bracikowski, Christopher; Baldauf, Brian K.; Havstad, Steven A.
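The CPU front-end filtering step described above can be sketched as a mask over the per-flash frame; only the surviving points move on to the heavy per-point transforms. The specific thresholds and the 128×128 frame size here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def prefilter_frame(ranges, intensities, min_intensity, valid_range=(1.0, 5000.0)):
    """Keep only pixels with a meaningful return before the expensive
    per-point transforms (performed on the GPU in the paper's pipeline).
    Returns the surviving pixel coordinates and their range/intensity values."""
    lo, hi = valid_range
    mask = (intensities >= min_intensity) & (ranges >= lo) & (ranges <= hi)
    rows, cols = np.nonzero(mask)
    return rows, cols, ranges[mask], intensities[mask]

rng = np.random.default_rng(1)
ranges = rng.uniform(0.0, 6000.0, size=(128, 128))
intens = rng.uniform(0.0, 1.0, size=(128, 128))
rows, cols, sel_r, sel_i = prefilter_frame(ranges, intens, min_intensity=0.9)
print(sel_r.size, "of", ranges.size, "pixels kept")
```

Because this step is dominated by branching rather than arithmetic, keeping it on the CPU while the GPU handles the dense transforms is the division of labor the paper describes.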
Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments we address the possibility that variability in object-based attention effects across studies reflects l...
Pilz, Karin S.; Roggeveen, Alexa B.; Creighton, Sarah E.; Bennett, Patrick J.; Sekuler, Allison B.
Recently Spectrolab has successfully demonstrated a compact 32×32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWaP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in²/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey large areas more effectively, Spectrolab is developing a 32×128 Geiger-mode LADAR camera with a 43 frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of the Spectrolab 32×32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison
The paper describes the application of Choquet integral filters to automatic object detection in laser radar (LADAR) imagery. Choquet integrals are nonlinear integrals with respect to non-additive measures. These integrals can be used to represent typical nonlinear filters such as order statistic filters, linear combination of order statistic filters, weighted median filters and others. A Choquet integral filter is characterized by a measure. The representation of these filters as integrals with respect to measures provides an opportunity for optimizing the filters by finding optimal measures. Both optimal and heuristic filters are designed and compared on real data.
Hocaoglu, Ali K.; Gader, Paul D.
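A minimal sketch of the discrete Choquet integral underlying the filters described above, restricted to a symmetric (cardinality-based) measure. That restriction is our simplification: it is exactly the special case that reproduces order-statistic and OWA filters, which the abstract names; the paper's general filters allow arbitrary non-additive measures:

```python
import numpy as np

def choquet_symmetric(values, mu):
    """Discrete Choquet integral w.r.t. a symmetric measure: mu[k] is the
    measure of any k-element set, with mu[0] = 0 and mu[n] = 1.
    C = sum_i x_(i) * (mu(n - i + 1) - mu(n - i)) over sorted values x_(i)."""
    x = np.sort(np.asarray(values, dtype=float))   # x_(1) <= ... <= x_(n)
    n = len(x)
    weights = np.array([mu[n - i] - mu[n - i - 1] for i in range(n)])
    return float(np.dot(x, weights))

# Median of 5 values as a Choquet filter: mu jumps from 0 to 1 at k = 3.
mu_median = [0, 0, 0, 1, 1, 1]
print(choquet_symmetric([7, 1, 9, 3, 5], mu_median))  # 5.0

# Arithmetic mean: mu[k] = k / n gives equal weights.
mu_mean = [k / 5 for k in range(6)]
print(choquet_symmetric([7, 1, 9, 3, 5], mu_mean))  # 5.0
```

Optimizing the filter, as the paper does, amounts to searching over the measure values rather than over fixed filter coefficients.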
Objective: To evaluate the effect of electromagnetic irradiation from army ladar on the neurobehavioral function of military personnel. Methods: 40 workers exposed to electromagnetic irradiation and 20 controls were investigated with a questionnaire survey, the Profile of Mood States (POMS), and other neurobehavioral function tests. Results: Of all the reported symptoms, visual fatigue was the most obvious in the irradiation group, and the fatigue score on the POMS form of the irradiation group significantly increased. The scores on the pursuit aiming test and the second self-intercrossing test decreased markedly. Conclusion: Mood state, hand-operation ability, and work efficiency in this occupational population are affected by electromagnetic irradiation. (authors)
The authors investigated 2 effects of object-based attention: the spread of attention within an attended object and the prioritization of search across possible target locations within an attended object. Participants performed a flanker task in which the location of the task-relevant target was fixed and known to participants. A spreading…
Richard, Ashleigh M.; Lee, Hyunkyu; Vecera, Shaun P.
Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research…
Jarmasz, Jerzy; Herdman, Chris M.; Johannsdottir, Kamilla Run
This thesis proposes techniques for object-based audio and music. The work can be divided into two parts corresponding to model-based synthesis of musical instruments and computational analysis of audio and music. The contributions of this work are in signal and auditory analysis, sound object modeling, and sound signal synthesis.
Full Text Available. OBJECTIVE: To evaluate the results of conventional (LADAR, Alcon) and customized (LADARWave, Alcon) retreatment in eyes undergoing conventional primary LASIK. METHODS: Retrospective review of consecutive clinical records of 38 eyes of 38 patients who underwent LASIK retreatment for myopia and astigmatism. The operated eyes were divided into two equal groups. In the first group, customized retreatment was performed; in the other, conventional retreatment. The following variables were compared: high-contrast visual acuity and manifest refraction. Visual quality was estimated and compared using a subjective survey offered to the patients. RESULTS: There was no statistical difference between the groups for the variables studied. The spherical equivalent after retreatment was 0.36 in the conventional group and 0.47 in the customized group (p = 0.079). Snellen visual acuity was 0.91 and 0.87, respectively (p = 0.07). Total aberrations were higher preoperatively than postoperatively in the customized group (p < 0.001). In the conventional group there was no difference for any aberration evaluated. Complaints of glare (p = 0.117), photophobia (p = 0.987) and vision fluctuation (p = 0.545) were statistically similar between the two groups. CONCLUSION: Comparing customized and conventional surgery for primary LASIK retreatment with the LADAR (Alcon) platform, there was no statistical difference in the quantity and quality of vision. Nevertheless, a higher percentage of patients in the conventional group reported complaints about visual quality. Customized surgery seems to have a greater capacity to reduce total aberrations than conventional surgery.
Lucas Monferrari Monteiro Vianna
How we attend to objects and their features that cannot be separated by location is not understood. We presented two temporally and spatially overlapping streams of objects, faces versus houses, and used magnetoencephalography and functional magnetic resonance imaging to separate neuronal responses to attended and unattended objects. Attention to faces versus houses enhanced the sensory responses in the fusiform face area (FFA) and parahippocampal place area (PPA), respectively. The increases in sensory responses were accompanied by induced gamma synchrony between the inferior frontal junction, IFJ, and either FFA or PPA, depending on which object was attended. The IFJ appeared to be the driver of the synchrony, as gamma phases were advanced by 20 ms in IFJ compared to FFA or PPA. Thus, the IFJ may direct the flow of visual processing during object-based attention, at least in part through coupled oscillations with specialized areas such as FFA and PPA. PMID:24763592
Baldauf, Daniel; Desimone, Robert
We present the SAMMI lightweight object detection method which has a high level of accuracy and robustness, and which is able to operate in an environment with a large number of cameras. Background modeling is based on DCT coefficients provided by cameras. Foreground detection uses similarity in temporal characteristics of adjacent blocks of pixels, which is a computationally inexpensive way to make use of object coherence. Scene model updating uses the approximated median method for improved performance. Evaluation at pixel level and application level shows that SAMMI object detection performs better and faster than the conventional Mixture of Gaussians method.
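The approximated median background update mentioned above can be sketched in a few lines: nudge each background value toward the incoming frame by a fixed step, which converges toward the per-pixel temporal median over many frames. The integer model and ±1 step size are illustrative choices:

```python
import numpy as np

def approx_median_update(model, frame, step=1):
    """Approximated median background update: move each background pixel
    one step toward the current frame. Over time the model settles near
    the temporal median of the pixel's values."""
    model = model.copy()
    model[frame > model] += step
    model[frame < model] -= step
    return model

# A constant scene of value 50 pulls the model up from 0 and then holds it.
bg = np.zeros((4, 4), dtype=int)
for _ in range(100):
    bg = approx_median_update(bg, np.full((4, 4), 50, dtype=int))
print(bg[0, 0])  # 50
```

The appeal for a many-camera system like the one described is that the update is branch-light, constant-memory, and touches each pixel only once per frame.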
Raytheon is developing NIR sensor chip assemblies (SCAs) for scanning and staring 3D LADAR systems. High sensitivity is obtained by integrating high-performance detectors with gain (i.e., APDs) with very-low-noise Readout Integrated Circuits. Unique aspects of these designs include independent (non-gated) acquisition of pulse returns and multiple pulse returns with both time and intensity reported, enabling full 3D reconstruction of the image. A recent breakthrough in device design has resulted in HgCdTe APDs operating at 300 K with essentially no excess noise at gains in excess of 100, low NEP (<1 nW), and GHz bandwidths, and linear-mode photon counting has been demonstrated. SCAs utilizing these high-performance APDs have been integrated and have demonstrated excellent spatial and range resolution, enabling detailed 3D imagery at both short and long ranges. In this presentation we will review progress in high-resolution scanning, staring, and ultra-high-sensitivity photon-counting LADAR sensors.
Bailey, Steven; McKeag, William; Wang, Jinxue; Jack, Michael; Amzajerdian, Farzin
The Advanced Ladar Signal Simulator (ALASS) is a comprehensive laser radar simulator that synthesizes ladar signals for complex three-dimensional dynamic diffuse targets in the presence of a dynamic turbulent atmosphere. ALASS provides single-realization random signals (speckle) or the associated mean signals (ensemble rough-target average). ALASS is radiometrically correct, accurately models receiver diffraction and defocus for both coherent and direct detection transceivers with single or multi-element detectors, and generates signals with correct three-dimensional speckle statistics. Signals are computed using the target-plane formalism; for coherent detection this involves calculating the back-propagated local oscillator (BPLO), while for direct detection the back-propagated impulse response (BPIR) is used. ALASS's primary functions are to serve as a laser radar sensor design tool, a data product generator for exploitation, and a decision aid for implementing system designs. This paper provides an overview of ALASS, describes its functionality, presents validation results, and displays example imagery.
Jacob, Don; Gatt, Phillip; Nichols, Terry
Remote sensing imagery needs to be converted into tangible information which can be utilised in conjunction with other data sets, often within widely used Geographic Information Systems (GIS). As long as pixel sizes remained typically coarser than, or at best similar in size to, the objects of interest, emphasis was placed on per-pixel analysis, or even sub-pixel analysis for this conversion, but with increasing spatial resolutions alternative paths have been followed, aimed at deriving objects that are made up of several pixels. This paper gives an overview of the development of object-based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilize spectral and contextual information in an integrative way. The most common approach used for building objects is image segmentation, which dates back to the 1970s. Around the year 2000, GIS and image processing started to grow together rapidly through object-based image analysis (OBIA, or GEOBIA for geospatial object-based image analysis). In contrast to typical Landsat resolutions, high resolution images support several scales within their images. Through a comprehensive literature review several thousand abstracts have been screened, and more than 820 OBIA-related articles, comprising 145 journal papers, 84 book chapters and nearly 600 conference papers, are analysed in detail. It becomes evident that the first years of the OBIA/GEOBIA developments were characterised by the dominance of 'grey' literature, but the number of peer-reviewed journal articles has increased sharply over the last four to five years. The pixel paradigm is beginning to show cracks and the OBIA methods are making considerable progress towards a spatially explicit information extraction workflow, such as is required for spatial planning as well as for many monitoring programmes.
Object based representations of image data enable new content-related functionalities while facilitating management of large image databases. Developing such representations for multi-date and multi-spectral images is one of the objectives of the second phase of the Alexandria Digital Library (ADL) project at UCSB. Image segmentation and image registration are two of the main issues that are to be addressed in creating localized image representations. We present in this paper some of the recent and current work by the ADL's image processing group on robust image segmentation, registration, and the use of image texture for content representation. Built upon these technologies are techniques for managing large repositories of data. A texture thesaurus assists in creating a semantic classification of image regions. An object-based representation is proposed to facilitate data storage, retrieval, analysis, and navigation.
Newsam, Shawn; Bhagavathy, Sitaram; Kenney, Charles; Manjunath, B. S.; Fonseca, Leila
Full Text Available Visual appearance of natural objects is profoundly affected by viewing conditions such as viewpoint and illumination. Human subjects can nevertheless compensate well for variations in these viewing conditions. The strategies that the visual system uses to accomplish this are largely unclear. Previous computational studies have suggested that in principle, certain types of object fragments (rather than whole objects) can be used for invariant recognition. However, whether the human visual system is actually capable of using this strategy remains unknown. Here, we show that human observers can achieve illumination invariance by using object fragments that carry the relevant information. To determine this, we have used novel, but naturalistic, 3-D visual objects called 'digital embryos'. Using novel instances of whole embryos, not fragments, we trained subjects to recognize individual embryos across illuminations. We then tested the illumination-invariant object recognition performance of subjects using fragments. We found that the performance was strongly correlated with the mutual information (MI) of the fragments, provided that the MI value took variations in illumination into consideration. This correlation was not attributable to any systematic differences in task difficulty between different fragments. These results reveal two important principles of invariant object recognition. First, the subjects can achieve invariance at least in part by compensating for the changes in the appearance of small local features, rather than of whole objects. Second, the subjects do not always rely on generic or pre-existing invariance of features (i.e., features whose appearance remains largely unchanged by variations in illumination), and are capable of using learning to compensate for appearance changes when necessary. These psychophysical results closely fit the predictions of earlier computational studies of fragment-based invariant object recognition.
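The fragment-informativeness score used above is plain discrete mutual information. A toy computation, where the binary presence/absence coding of a fragment and the tiny sample are simplifying assumptions for illustration:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information in bits between two discrete sequences,
    estimated from their joint and marginal counts."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts c, px, py over n samples
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# A fragment present (1) exactly when the object is class "A" is
# maximally informative about a two-class identity: 1 bit.
frag = [1, 1, 0, 0]
cls  = ["A", "A", "B", "B"]
print(mutual_information(frag, cls))  # 1.0
```

In the fragment-based framework, fragments are ranked by this quantity, computed with illumination variation included in the sample set.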
REBOL, the Relative Expression-Based Object Language, is a fascinating new scripting language developed at REBOL Technologies by Carl Sassenrath, the architect of the Amiga operating system. REBOL is intended to be used for Internet programming and, among its many features, it contains very easy-to-use networking capabilities. An example is this tiny line of REBOL code which retrieves a web page and emails it to a (fictitious) email address: "send email@example.com read
SciELO Brazil. OBJECTIVE: To evaluate the results of conventional (LADAR, Alcon) and customized (LADARWave, Alcon) retreatment in eyes undergoing conventional primary LASIK. METHODS: Retrospective review of consecutive clinical records of 38 eyes of 38 patients who underwent LASIK retreatment for myopia and astigmatism. The operated eyes were divided into two equal groups. In the first group, customized retreatment was performed; in the other, conventional retreatment. The following variables were compared: high-contrast visual acuity and manifest refraction. Visual quality was estimated and compared through a subjective survey offered to the patients. RESULTS: There was no statistical difference between the groups on the variables studied. The spherical equivalent after retreatment was 0.36 in the conventional group and 0.47 in the customized group (p = 0.079). Snellen visual acuity was 0.91 and 0.87, respectively (p = 0.07). Total aberrations were higher preoperatively than postoperatively in the customized group (p
Vianna, Lucas Monferrari Monteiro; Nascimento, Heloisa Moraes do; Campos, Mauro.
Water bodies are challenging terrain hazards for terrestrial unmanned ground vehicles (UGVs) for several reasons. Traversing through deep water bodies could cause costly damage to the electronics of UGVs. Additionally, a UGV that is either broken down due to water damage or becomes stuck in a water body during an autonomous operation will require rescue, potentially drawing critical resources away from the primary operation and increasing the operation cost. Thus, robust water detection is a critical perception requirement for UGV autonomous navigation. One of the properties useful for detecting still water bodies is that their surface acts as a horizontal mirror at high incidence angles. Still water bodies in wide-open areas can be detected by geometrically locating the exact pixels in the sky that are reflecting on candidate water pixels on the ground, predicting if ground pixels are water based on color similarity to the sky and local terrain features. But in cluttered areas where reflections of objects in the background dominate the appearance of the surface of still water bodies, detection based on sky reflections is of marginal value. Specifically, this software attempts to solve the problem of detecting still water bodies on cross-country terrain in cluttered areas at low cost.
Rankin, Arturo L.; Matthies, Larry H.
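The sky-reflection cue described in the abstract above can be sketched as a simple per-pixel test: a candidate ground pixel is flagged as possible water when its color closely matches the sky pixel that would reflect onto it under the horizontal-mirror geometry. This is a minimal illustration under assumed 8-bit RGB values and a hypothetical threshold, not the actual software described:

```python
def color_similarity(p1, p2):
    # Euclidean distance between two RGB triples, mapped to [0, 1]
    # (255 * sqrt(3) is the maximum possible RGB distance).
    d = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    return 1.0 - d / (255 * 3 ** 0.5)

def is_water_candidate(ground_pixel, sky_pixel, threshold=0.9):
    # A ground pixel whose color closely matches its geometrically
    # corresponding sky pixel is a candidate still-water pixel.
    return color_similarity(ground_pixel, sky_pixel) >= threshold
```

In the full system this cue would be combined with local terrain features; as the abstract notes, in cluttered areas where object reflections dominate, a color-only test like this one is of marginal value.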
A method is presented for retrieving video containing a particular object, given a single image of the object as the query. Local invariant features are obtained for all frames in a sequence and tracked throughout the shot to extract stable features. In the video retrieval system, each video stored in the database has its features extracted and compared to the features of the query image. The proposed work retrieves video from the database given an object as the query. The video is first converted into frames; these frames are then segmented and the object is separated from the image. Features are then extracted from the object image using SIFT features. Features of the video database, obtained by segmentation and SIFT feature extraction, are matched by nearest neighbour search (NNS).
Neetesh Gupta, Shiv K Sahu
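The nearest-neighbour matching step above can be sketched with plain descriptor vectors; this is a generic sketch with Lowe's ratio test (a common companion to SIFT matching, assumed here rather than stated in the abstract), using made-up descriptors instead of real SIFT output:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(query_descs, frame_descs, ratio=0.8):
    """For each query descriptor, return the index of its nearest frame
    descriptor, or None when the match fails the ratio test (i.e. the
    nearest neighbour is not clearly better than the second nearest)."""
    matches = []
    for q in query_descs:
        order = sorted(range(len(frame_descs)),
                       key=lambda i: euclidean(q, frame_descs[i]))
        best, second = order[0], order[1]
        if euclidean(q, frame_descs[best]) < ratio * euclidean(q, frame_descs[second]):
            matches.append(best)
        else:
            matches.append(None)  # ambiguous match rejected
    return matches
```

A real system would run this over 128-dimensional SIFT descriptors for every database frame and rank videos by the number of accepted matches.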
Engagement with objects, either directly or through digital media, has long been recognized as a viable, constructivist pedagogy, capable of mediating significant meaning and context. The increasing uptake of digital technologies in university learning and teaching programs provides a timely opportunity for integrating museum and collection data and metadata in these programs. This project looked at the use of university museum and collection objects in teaching programs through a controlled experiment. A group of students were exposed directly to collection objects while another group was exposed to their digital surrogate. Students were then tested at later stages concerning their recall of didactic information. Results clearly show that students exposed to the original object had far better didactic recall over a longer time period than students exposed to their digital surrogates. This has implications for the development and rapid expansion of online education delivery in the tertiary education sector and elsewhere and the role university collections can play.
Simpson, Andrew; Hammond, Gina
Theories of visual attention argue that attention operates on perceptual objects, and thus that interactions between object formation and selective attention determine how competing sources interfere with perception. In auditory perception, theories of attention are less mature, and no comprehensive framework exists to explain how attention influences perceptual abilities. However, the same principles that govern visual perception can explain many seemingly disparate auditory phenomena. In pa...
Shinn-cunningham, Barbara G.
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives, variables and objectives, as well as the dependencies learnt between variables in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm to find bette...
Karshenas, Hossein; Santana, Roberto; Bielza, Concha; Larrañaga Múgica, Pedro
The use of clustering algorithms to establish a hierarchical structure in a library of object models based on appearance is deployed. The main contribution is a novel and intuitive algorithm for clustering models based on their appearance, closer to "human behavior". It divides the complete set into subclasses and then divides each of these into a number of predefined groups to complete the levels of hierarchy the user wants, the main purpose being a classification competitive with what a human would perform.
In the artificial intelligence field, knowledge representation and reasoning are important areas for intelligent systems, especially knowledge-base systems and expert systems. Knowledge representation methods play an important role in designing such systems. There have been many models for knowledge, such as semantic networks, conceptual graphs, and neural networks. These models are useful tools for designing intelligent systems. However, they are not suitable to represent knowledge in the domains o...
The object-oriented paradigm approaches software development by representing real-world entities as classes of software objects. Object-oriented design patterns facilitate small-scale and large-scale design reuse. This paper presents an object-oriented design pattern, Administrator Object, to address the user-role assignment problem in Role-Based Access Control (RBAC). Two alternative solutions are proposed. The pattern is presented according to the Gang of Four template.
S. R. KODITUWAKKU
Object movie refers to a set of images captured from different perspectives around a 3D object. An object movie provides a good representation of a physical object because it offers a 3D interactive viewing effect without requiring 3D model reconstruction. In this paper, we propose an efficient approach for content-based object movie retrieval. To retrieve the desired object movie from the database, we first map an object movie into the sampling of a manifold in the feature space. Two different layers of feature descriptors, dense and condensed, are designed to sample the manifold for representing object movies. Based on these descriptors, we define the dissimilarity measure between the query and the target in the object movie database. The query can be either an entire object movie or simply a subset of views. We further design a relevance feedback approach to improving retrieved results. Finally, experimental results are presented to show the efficacy of our approach.
Cheng-Chieh Chiang; Li-Wei Chan; Yi-Ping Hung; Lee, Greg C.
The Super-resolution Sensor System (S3) program is an ambitious effort to exploit the maximum information a laser-based sensor can obtain. At Lockheed Martin Coherent Technologies (LMCT), we are developing methods of incorporating multi-function operation (3D imaging, vibrometry, polarimetry, aperture synthesis, etc.) into a single device. The waveforms are matched to the requirements of both hardware (e.g., optical amplifiers, modulators) and the targets being imaged. The first successful demonstrations of this program have produced high-resolution, three-dimensional images at intermediate stand-off ranges. In addition, heavy camouflage penetration has been successfully demonstrated. The resolution of a ladar sensor scales with the bandwidth as dR = c/(2B), with a corresponding scaling of the range precision. Therefore, the ability to achieve large bandwidths is crucial to developing a high-resolution sensor. While there are many methods of achieving the benefit of large bandwidths while using lower bandwidth electronics (e.g., an FMCW implementation), the S3 system produces and detects the full waveform bandwidth, enabling a large set of adaptive waveforms for applications requiring large range search intervals (RSI) and short duration waveforms. This paper highlights the combined three-dimensional imaging and vibrometry demos.
Buck, Joseph; Malm, Andrew; Zakel, Andrew; Krause, Brian; Tiemann, Bruce
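The resolution scaling quoted above, dR = c/(2B), is easy to check numerically; the bandwidth values below are illustrative, not figures from the S3 program:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_resolution(bandwidth_hz):
    """Ladar range resolution dR = c / (2B) for waveform bandwidth B (Hz)."""
    return C / (2.0 * bandwidth_hz)

# A 1 GHz waveform bandwidth gives roughly 15 cm resolution,
# and 10 GHz gives roughly 1.5 cm, which is why large bandwidths
# are crucial for a high-resolution sensor.
```

The same scaling applies to range precision, which is why the S3 approach of producing and detecting the full waveform bandwidth, rather than compressing it electronically, pays off directly in image quality.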
Infants' ability to accurately represent and later recognize previously viewed objects, and conversely, to discriminate novel objects from those previously seen improves remarkably over the first two years of life. During this time, infants acquire extensive experience viewing and manipulating objects and these experiences influence their physical reasoning. Here we posited that infants' observations of object feature stability (rigid versus malleable) can influence the use of those features to individuate two successively viewed objects. We showed 8.5-month-olds a series of objects that could or could not change shape, then assessed their use of shape as a basis for object individuation. Infants who explored rigid objects later used shape differences to individuate objects; however, infants who explored malleable objects did not. This outcome suggests that the latter infants did not take into account shape differences during the physical reasoning task and provides further evidence that infants' attention to object features can be readily modified based on recent experiences. PMID:24561541
Woods, Rebecca J; Schuler, Jena
OBJECTIVE: To understand core curriculum design and involvement of stakeholders. METHODS: Twelve homogeneous focus group interviews with a total of 88 students, house officers, senior doctors and nurses concerning an undergraduate emergency medicine curriculum. Following content coding of transcripts, we analysed by condensation, categorisation and qualitative content analyses. RESULTS: The focus group participants gave a range of reasons for defining objectives or outcomes. They found their involvement in the process essential. Their argumentation and beliefs differed significantly, revealing two opposite perspectives: objectives as context-free, theory-based rules versus objectives as personal, practice-based guidelines. The students favoured theory-based objectives, which should be defined by experts conclusively as minimum levels and checklists. The senior doctors preferred practice-based objectives, which should be decided in a collaborative, local, continuous process, and should be expressed as ideals and expectations. The house officers held both perspectives. Adding to the complexity, participants also interpreted competence inconsistently and mixed concepts such as knowledge, observation, supervision, experience and expertise. DISCUSSION: Participating novices' perspectives on objectives differed completely from those of expert-level participants. These differences in perspectives should not be underestimated, as they can easily lead to misunderstandings among stakeholders, or between stakeholders, educational leaders and curriculum designers. We recommend that concepts are discussed with stakeholders in order to reach a common understanding and point of departure for discussing outcomes. Differences in perspectives, in our opinion, need to be recognised, respected and incorporated into the curriculum design process.
Mørcke, Anne Mette; Wichmann-Hansen, Gitte
This study presents the spatio-temporal constraint relationships of moving objects using Petri net technology. The spatial constraints of moving objects are first presented based on the V4I theory; the temporal constraints are then obtained by applying this theory to the temporal aspect. Through the proposed Moving Object Petri Net (MOPN) and Spatial Constraint Petri Net (SCPN), the spatio-temporal constraint relationships of moving objects are presented.
Yong-shan Liu; Zhong-xiao Hao
A moving-object module matching method based on the Kalman filter (KF) algorithm is proposed to address the shortcomings of traditional moving-object matching methods: a huge search range and weak real-time performance. Compared with traditional module matching, the proposed method effectively improves both the speed and the accuracy of object tracking, tripling the object matching speed of the traditional tracking method.
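The reason a Kalman filter shrinks the search range is that the predicted position can centre a small search window for the next frame. A minimal sketch for one coordinate follows; the scalar covariance and noise values are simplifying assumptions, not the paper's model:

```python
class ScalarKalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate of an
    object's position. The prediction centres the next search window,
    so matching need only scan a small neighbourhood around it."""
    def __init__(self, pos, vel=0.0, p=1.0, q=0.01, r=1.0):
        self.x = [pos, vel]    # state: position, velocity
        self.p = p             # scalar stand-in for the error covariance
        self.q, self.r = q, r  # process / measurement noise variances

    def predict(self, dt=1.0):
        self.x[0] += self.x[1] * dt
        self.p += self.q
        return self.x[0]       # centre of the reduced search window

    def update(self, measured_pos):
        k = self.p / (self.p + self.r)  # Kalman gain
        innov = measured_pos - self.x[0]
        self.x[0] += k * innov
        self.x[1] += k * innov          # crude velocity correction
        self.p *= (1.0 - k)
```

A tracker would run one filter per coordinate, predict before each frame, match the template only near the prediction, and feed the match position back through `update`.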
This paper presents a new object-based video compression approach. It is based on predicting video object motions throughout the scene. Neural networks are used to carry out the prediction step. A multi-step-ahead prediction is performed to predict the video objects' trajectories over the sequence. In order to reduce video data, only the background of the video sequence is transmitted, together with the different detected video objects as well as their initial properties such as placement and dimensio...
We present a novel contour-based object detector using the generalized Hough transform, where each local part casts a vote for the possible locations of the object center. The angles of line segments are extracted as local features to describe the contour of the object, and an improved voting tactic is then applied to detect the location and attitude of the object. Experimental results demonstrate that the algorithm has an encouraging detection performance.
Jiang, Bitao; Ma, Lei
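The classical generalized Hough transform voting that the detector above builds on can be sketched with an R-table indexed by local angle; this is the textbook scheme, not the paper's improved voting tactic, and the point/angle data are invented for illustration:

```python
from collections import defaultdict

def build_r_table(boundary_points, centre, angle_bins=8):
    """R-table: quantised local angle -> offsets from boundary point to centre.
    boundary_points are (x, y, angle_degrees) triples from the model contour."""
    table = defaultdict(list)
    for x, y, angle in boundary_points:
        b = int(angle * angle_bins / 360.0) % angle_bins
        table[b].append((centre[0] - x, centre[1] - y))
    return table

def vote(image_points, r_table, angle_bins=8):
    """Each image edge point casts votes for candidate object centres;
    the accumulator cell with the most votes is the detected centre."""
    acc = defaultdict(int)
    for x, y, angle in image_points:
        b = int(angle * angle_bins / 360.0) % angle_bins
        for dx, dy in r_table[b]:
            acc[(x + dx, y + dy)] += 1
    return max(acc, key=acc.get)
```

Detecting attitude as well, as the abstract describes, would add a rotation dimension to the accumulator (voting over candidate orientations, with the R-table offsets rotated accordingly).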
Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low noise optical receivers to detect fast, weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch for 3D ladar was designed as a gated optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is the key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. Specifically, the preamplifier uses the capacitor-feedback transimpedance amplifier (CTIA) structure, whose two capacitors offer switchable capacitance for passive/active dual-mode imaging. The main column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to ROIC signal processing because of their working characteristics. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the buffer's amplifier is a rail-to-rail design. In active imaging mode, the integration time is 80 ns; for integrated currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; integrated currents from 1 nA to 20 nA likewise show a nonlinearity of less than 1%.
Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun
We demonstrate sub-nanosecond range and unambiguous sub-50-Hz Doppler resolved laser radar (LADAR) measurements using spectral holographic processing in rare-earth ion doped crystals. The demonstration utilizes pseudo-random-noise 2 giga-sample-per-second baseband waveforms modulated onto an optical carrier.
Cole, Z.; Berg, T.; Kaylor, B.; Merkel, K.D.; Reibel, R.R. (S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715, United States); Roos, P.A.; Babbitt, W.R. (Spectrum Lab, Montana State University, P.O. Box 173510, Bozeman, MT 59717, United States)
We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image; no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed from the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge, and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of Sherbrooke city (Canada). An overall total extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.
Sebari, Imane; He, Dong-Chen
This paper presents a LISP-based system for signal and image processing. Using an object-based approach, the system integrates signal and image processing algorithms, supervised and unsupervised neural network algorithms, and mid-level computer vision capabilities into a cohesive framework suitable for prototyping complex algorithms dealing with multiple classes of data. The system, known as VISION, is currently used as a prototyping environment for a wide range of scientific applications internal to LLNL. This paper highlights some of the capabilities of VISION and how they were implemented using the Common LISP Object System, CLOS. 13 refs.
Hernandez, J.E.; Lu, Shin-Yee; Sherwood, R.J.; Clark, G.A.; Lawver, B.S.
To improve the performance of three-dimensional object recognition systems, we propose a view-based method in this study. First we extract wavelet moments, texture features and color moments from the 2D view images of 3D objects. Wavelet moments have the multi-resolution properties in addition to the invariant properties under translation, scaling and rotation. Texture features can distinguish objects which have similar shapes and different appearance. Color moments are robust and insensitive...
Xu Sheng; Peng Qi-Cong
Moving object detection is one of the key research areas of image processing, and much research is underway to provide novel approaches that detect moving objects with less space and time complexity. This paper outlines a novel approach to detecting high-speed moving objects using frame interleaving in the frame differencing operation, together with clustering-based compression for frame and background modeling, to reduce the time complexity of the frame differencing operation.
Sunil Kumar S
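The two ideas above, frame differencing and frame interleaving, can be sketched directly; the grayscale values, threshold, and interleaving step are assumptions for illustration, and the clustering-based compression stage is omitted:

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask: 1 where |curr - prev| > threshold, per
    grayscale pixel (frames are lists of rows of intensities)."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def interleaved_pairs(frames, step=2):
    """Frame interleaving: difference every `step`-th frame rather than
    every consecutive pair. A fast object moves further between the two
    frames (so it is easier to detect), and fewer differencing
    operations are performed per second of video."""
    return [(frames[i], frames[i + step])
            for i in range(0, len(frames) - step, step)]
```

Running `frame_difference` over each pair from `interleaved_pairs` yields the motion masks that the background model would then be updated from.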
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
Qi Tian; Qibin Sun; Dajun He
This paper presents a model of NAND flash SSD utilization and write amplification when the ATA/ATAPI SSD Trim command is incorporated into object-based storage under a variety of user workloads, including a uniform random workload with objects of fixed size and a uniform random workload with objects of varying sizes. We first summarize the existing models for write amplification in SSDs for workloads with and without the Trim command, then propose an alteration of the models...
Frankie, Tasha; Hughes, Gordon; Kreutz-Delgado, Ken
Object-based image analysis (OBIA) is a relatively new form of remote sensing which aims to overcome the failings of traditional pixel-based techniques at providing accurate land-use classification for high-resolution data. The failure of pixel-based techniques is due to the fact that
In this paper a new multichannel object-based audio coding scheme with scalable signal quality is proposed. The novel scheme is based on controlled downmixing and demixing. By means of a dedicated control mechanism, a number of distinct audio objects are mixed into a lower number of channels. The latter is chosen such that the desired quality level is met after demixing. The quality is assessed with two new psychoacoustically motivated metrics. Following the informed source separation approac...
Gorlow, Stanislaw; Habets, Emanuël; Marchand, Sylvain
A method for reconstructing a two-dimensional binary object from its autocorrelation function is discussed. The object consists of a finite set of identical elements. The reconstruction algorithm is based on the concept of a class of element pairs, defined as the set of element pairs with the same separation vector. This concept makes it possible to resolve the redundancy introduced by the element pairs of each class. It is also shown that different objects, consisting of an equal number of elements and the same classes of pairs, produce Fraunhofer diffraction patterns with identical intensity distributions. However, the method predicts all the possible objects that produce the same Fraunhofer pattern.
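The "classes of element pairs" can be computed directly from a point set: group the pairs by their separation vector, which is exactly the information the autocorrelation (and hence the Fraunhofer intensity) encodes. A minimal sketch, with an invented example object:

```python
from collections import defaultdict

def pair_classes(points):
    """Group element pairs of a binary object by separation vector.
    Each key is a separation vector; each value lists the pairs in that
    class. Two objects with identical classes of pairs have identical
    autocorrelations, and so identical Fraunhofer intensity patterns."""
    classes = defaultdict(list)
    pts = sorted(points)
    for i, a in enumerate(pts):
        for b in pts[i + 1:]:
            sep = (b[0] - a[0], b[1] - a[1])
            classes[sep].append((a, b))
    return dict(classes)
```

The reconstruction problem the abstract describes is the inverse: given the multiset of separation vectors (the keys and their multiplicities), enumerate the point sets consistent with it.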
Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC, and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI.
Until recently, landforms such as drumlins have only been manually delineated due to the difficulty in integrating contextual and semantic landform information in per-cell classification approaches. Therefore, in most cases the results of per-cell classifications presented basic landform elements or broad-scale physiographic regions that were only thematically defined. In contrast, object-based analysis provides spatially configured landform objects that are generated by terrain segmentation, the process of merging DTM cells into meaningful terrain objects at multiple scales. Such terrain objects should be favoured for landform modelling for the following reasons: firstly, their outlines potentially better correspond to the spatial limits of landforms as conceptualised by geoscientists; secondly, spatially aware objects enable the integration of semantic descriptions in the classification process. We present a multi-scale object-based study on automated delineation and classification of drumlins for a small test area in Bavaria, Germany. The multi-resolution segmentation algorithm is applied to create statistically meaningful object patterns from selected DTMs, which are derived from a 5 m LiDAR DEM. For the subsequent classification of drumlins a semantics-based approach, which uses the principles of semantic modelling, is employed: initially, a geomorphological concept of the landform type drumlin is developed. The drumlin concept should ideally comprise verbal descriptions of the fundamental morphometric, morphological, hierarchical and contextual properties. Subsequently, the semantic model is built by structuring the conceptualised knowledge facts, and by associating those facts with object- and class-related features, which are available in commonly used object-based software products for the development of classification rules.
For the accuracy assessment we plan an integrated approach, which combines a statistical comparison to field maps and a qualitative evaluation based on expert consultation. The study on drumlins should demonstrate the applicability of the object-based approach for the extraction of specific landforms from DTMs in a multi-scale framework. The provision of meaningful spatial modelling units and the straightforward way for the integration of semantics make object-based analysis superior to field-based methods. However, an explicit representation of geomorphological knowledge - as for example in the form of a semantic model - prior to landform classification is a prerequisite for effective mapping. Such an approach allows the user to delineate and map drumlins in a way that is close to the human cognition of landforms. Once most of the drumlins are recognized by the developed classification system, those objects can further be investigated with respect to their morphometry and morphology in order to improve the understanding of glacial processes.
Eisank, C.; Dragut, L.; Blaschke, T.
The underlying units of attention are often discrete visual objects. Perhaps the clearest form of evidence for this is the same-object advantage: Following a spatial cue, responses are faster to probes occurring on the same object than they are to probes occurring on other objects, while equating brute distance. Is this a fundamentally spatial effect, or can same-object advantages also occur in time? We explored this question using independently normed rhythmic temporal sequences, structured into phrases and presented either visually or auditorily. Detection was speeded when cues and probes both lay within the same rhythmic phrase, compared to when they spanned a phrase boundary, while equating brute duration. This same-phrase advantage suggests that object-based attention is a more general phenomenon than has been previously suspected: Perceptual structure constrains attention, in both space and time, and in both vision and audition. PMID:23586668
De Freitas, Julian; Liverence, Brandon M; Scholl, Brian J
We present a robust object-based watermarking algorithm using the scale-invariant feature transform (SIFT) in conjunction with a data embedding method based on Discrete Cosine Transform (DCT). The message is embedded in the DCT domain of randomly generated blocks in the selected object region. To recognize the object region after being distorted, its SIFT features are registered in advance. In the detection scheme, we extract SIFT features from the distorted image and match them with the registered ones. Then we recover the distorted object region based on the transformation parameters obtained from the matching result using SIFT, and the watermarked message can be detected. Experimental results demonstrated that our proposed algorithm is very robust to distortions such as JPEG compression, scaling, rotation, shearing, aspect ratio change, and image filtering.
Pham, Viet-Quoc; Miyaki, Takashi; Yamasaki, Toshihiko; Aizawa, Kiyoharu
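The registration and matching stage of the scheme above relies on SIFT, but the embedding stage is ordinary block-DCT signaling. As a numpy-only sketch of just that embedding step (the 8x8 block size, coefficient index, and strength `delta` are illustrative assumptions, not the paper's parameters), one message bit can be forced into the sign of a mid-frequency coefficient:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows index frequency).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def embed_bit(block, bit, coeff=(2, 3), delta=8.0):
    # Transform, force the sign of one mid-frequency coefficient, invert.
    C = dct_matrix(block.shape[0])
    D = C @ block @ C.T
    D[coeff] = delta if bit else -delta
    return C.T @ D @ C

def extract_bit(block, coeff=(2, 3)):
    C = dct_matrix(block.shape[0])
    D = C @ block @ C.T
    return int(D[coeff] > 0)

rng = np.random.default_rng(0)
blk = rng.uniform(0, 255, (8, 8))       # stand-in for one image block
marked = embed_bit(blk, 1)
```

A real detector would first undo the geometric distortion using the SIFT correspondences before re-running the block DCT.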
Visual attention can be allocated to either a location or an object, termed location- or object-based attention, respectively. Despite the burgeoning evidence in support of the existence of the two kinds of attention, little is known about their underlying mechanisms in terms of whether they are achieved by enhancing signal strength or excluding external noise. We adopted the noise-masking paradigm in conjunction with the double-rectangle method to probe the mechanisms of location-based and object-based attention. Two rectangles were shown, and one end of one rectangle was cued, followed by the target appearing at (a) the cued location; (b) the uncued end of the cued rectangle; or (c) the equidistant end of the uncued rectangle. Observers were required to detect the target, which was superimposed on different levels of noise contrast. We explored how attention affects performance by assessing the threshold versus external noise contrast (TvC) functions and fitting them with a divisive inhibition model. Results show that location-based attention – a lower threshold at the cued location than at the uncued location – was observed at all noise levels, a signature of signal enhancement. However, object-based attention – a lower threshold at the uncued end of the cued rectangle than at the uncued rectangle – was found only in high-noise conditions, a signature of noise exclusion. These findings shed new light on current theories of object-based attention.
Chou, Wei-Lun; Yeh, Su-Ling; Chen, Chien-Chung
Enterprise software needs to adapt to new requirements arising from continuous change management. Recent development methods have increased the flexibility of software. However, previous studies have ignored the stability of the business object and the particular business relationships that support software development. In this paper, a coarse-grained business-object-based software development method, BOSD, is presented to resolve this problem. By analyzing the characteristics of variable requirement,...
Problem statement: This study deals with object recognition based on image segmentation and clustering. Acquiring prior information of an image is done via two separate processes. Approach: The first process deals with detecting object parts of an image and integration of detected parts into several clusters. All these cluster centers form the visual words. The second process deals with over segmenting the image into super pixels and formation of larger sub region using Mid-level...
Thilagamani, S.; Shanthi, N.
A large body of evidence supports that visual attention – the cognitive process of selectively concentrating on a salient or task-relevant subset of visual information – often works on object-based representation. Recent studies have postulated two possible accounts for the object-specific attentional advantage: attentional spreading and attentional prioritization, each of which modulates a bottom-up signal for sensory processing and a top-down signal for attentional allocation, respectively. It is still unclear which account can explain the object-specific attentional advantage. To address this issue, we examined the influence of object-specific advantage on two types of visual search: parallel search, invoked when a bottom-up signal is fully available at a target location, and serial search, invoked when a bottom-up signal is not enough to guide target selection and a top-down control for shifting of focused attention is required. Our results revealed that the object-specific advantage is given to the serial search but not to the parallel search, suggesting that object-based attention facilitates stimulus processing by affecting the priority of attentional shifts rather than by enhancing sensory signals. Thus, our findings support the notion that the object-specific attentional advantage can be explained by attentional prioritization but not attentional spreading.
Nishida, Satoshi; Shibata, Tomohiro; Ikeda, Kazushi
Full Text Available As a novel optimization method, chaos has attracted much attention and found many applications in the past few years. A chaotic trajectory can traverse every state in a region without repetition, following its own intrinsic dynamics. In this study, chaos is introduced into the optimization strategy to accelerate the search for the optimum. A chaos-based particle swarm optimization strategy is developed to solve multi-objective optimization problems. The proposed approach is validated using several benchmark test functions and metrics from evolutionary multi-objective optimization. Results demonstrate the effectiveness and efficiency of the proposed strategy, which can be considered a viable alternative for solving multi-objective optimization problems.
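As a rough single-objective sketch of the idea (the logistic map, swarm size, and coefficients below are illustrative choices; the paper's multi-objective machinery is omitted), a chaotic sequence can replace uniform random draws when initializing the swarm:

```python
import numpy as np

def logistic_chaos(n, x0=0.7):
    # Logistic map x -> 4x(1-x): dense, non-repeating orbits in (0, 1).
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_pso(f, dim=2, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    rng = np.random.default_rng(seed)
    # Chaotic sequence seeds the positions instead of uniform random draws.
    seq = logistic_chaos(n_particles * dim).reshape(n_particles, dim)
    x = lo + (hi - lo) * seq
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

sphere = lambda z: float(np.sum(z * z))
best, best_val = chaotic_pso(sphere)
```

On the 2-D sphere function this converges to a value near zero; the chaotic initializer simply guarantees well-spread starting positions without a pseudo-random generator.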
Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM). Good-quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated.
Aghito, Shankar Manuel; Stegmann, Mikkel Bille
Full Text Available Usually, video-based object tracking deals with a non-stationary image stream that changes over time. Robust, real-time moving object tracking is a problematic issue in computer vision research. Most existing algorithms are able to track only in predefined and well-controlled environments, and in some cases they do not consider the non-linearity problem. In this paper, we develop a system which considers color information, distance transform (DT)-based shape information and also non-linearity. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. We examine the difficulties of video-based tracking and analyze these issues step by step. In our first approach, we develop a color-based particle filter tracker that relies on the deterministic search of a window whose color content matches a reference histogram model. A simple HSV histogram-based color model is used to develop this observation system. Secondly, we describe a new approach for moving object tracking with a particle filter using shape information. The shape similarity between a template and estimated regions in the video scene is measured by the normalized cross-correlation of their distance-transformed images. This observation system of the particle filter is based on shape from distance-transformed edge features. A template is created instantly by selecting any object from the video scene with a rectangle. Finally, we illustrate how the system is improved by combining these two cues with non-linearity.
Md. Zahidul Islam
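The colour-cue weighting step of such a tracker can be illustrated with a numpy-only toy on a synthetic frame. The HSV conversion, resampling, and motion model of a full particle filter are left out, and all sizes and the weighting constant are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 64x64 "frame": dark background, bright 8x8 target at (30, 40).
frame = rng.uniform(0.0, 0.2, (64, 64))
ty, tx = 30, 40
frame[ty:ty + 8, tx:tx + 8] += 0.8

def patch_hist(img, y, x, size=8, bins=8):
    # Normalized grey-level histogram of a square patch.
    patch = img[y:y + size, x:x + size]
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def bhattacharyya(p, q):
    # Distance between normalized histograms; 0 for identical ones.
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

ref = patch_hist(frame, ty, tx)            # reference colour model
# Particles scattered around a rough prior; weights from histogram match.
particles = np.clip(np.array([ty, tx]) + rng.normal(0, 6, (200, 2)),
                    0, 55).astype(int)
d = np.array([bhattacharyya(ref, patch_hist(frame, y, x))
              for y, x in particles])
w = np.exp(-20.0 * d ** 2)                 # likelihood; 20.0 is an assumed constant
w /= w.sum()
estimate = (w[:, None] * particles).sum(axis=0)
```

The weighted mean of the particles lands on the bright patch; a real tracker would then resample the particles and propagate them with a motion model before the next frame.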
Full Text Available To improve the performance of three-dimensional object recognition systems, we propose a view-based method in this study. First we extract wavelet moments, texture features and color moments from the 2D view images of 3D objects. Wavelet moments have multi-resolution properties in addition to invariance under translation, scaling and rotation. Texture features can distinguish objects which have similar shapes but different appearance. Color moments are robust and insensitive to the size and pose of objects. A Support Vector Machine (SVM) is chosen as the classifier. Then feature subset selection and SVM parameter optimization are accomplished automatically and simultaneously using a Genetic Algorithm (GA) in an evolutionary way. We assessed our method on the original and noise-corrupted 3D object dataset COIL-100. A one hundred percent correct recognition rate was obtained when the number of presented training views for each object was 36 (10-degree interval) or 18 (20-degree interval). When the number of training views was reduced, the correct recognition rate remained satisfactory.
Full Text Available The weighted-sum approach to multi-objective optimization with genetic algorithms aggregates all objectives into a single parameterized objective function. In a multi-objective optimization evaluation index system, the weights assigned to the attributes play a pivotal role, so how the attribute weights are determined scientifically and reasonably directly affects the reliability and validity of the multi-objective optimization results. This paper first focuses on the weighted-sum genetic algorithm: the initial population is created by uniform design, and the objective functions are standardized to construct a new fitness function. We propose a dynamic weight-allocation scheme and, based on this new weight-distribution strategy, design a multi-objective genetic algorithm for solving multi-objective optimization problems. The algorithm can locate sparse regions of the non-dominated frontier and direct the search towards them, yielding a more uniformly distributed set of non-dominated solutions. A composite crossover operator combining uniform crossover and single-point crossover is introduced to compensate for the weak search capability of simulated binary crossover. A proof of convergence of the algorithm is given, and simulations verify its effectiveness.
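The core of any weighted-sum scheme is standardizing the objectives before aggregation, so that no objective dominates by scale alone. A minimal sketch, with illustrative data and Dirichlet-drawn "dynamic" weights standing in for the paper's allocation strategy:

```python
import numpy as np

def standardized_weighted_fitness(F, weights):
    """F: (pop, n_obj) raw objective values (minimization convention).
    Min-max standardization puts every objective on [0, 1] before the
    weighted sum, so differing objective scales cannot dominate."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant objectives
    S = (F - lo) / span
    return S @ weights

rng = np.random.default_rng(0)
# Two objectives on wildly different scales: [0, 1] vs [0, 1000].
F = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1000, 50)])
# "Dynamic allocation" stand-in: redraw normalized weights each generation.
w = rng.dirichlet([1.0, 1.0])
fit = standardized_weighted_fitness(F, w)
```

Because the standardized objectives lie in [0, 1] and the weights sum to one, the aggregated fitness is itself bounded in [0, 1], which keeps selection pressure comparable across generations.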
The results of optimization of inverse treatment plans depend on a choice of the objective function. Even when the optimal solution for a given cost function can be obtained, a better solution may exist for a given clinical scenario and it could be obtained with a revised objective function. In the approach presented in this work mixed integer programming was used to introduce a new volume-based objective function, which allowed for minimization of the number of under- or overdosed voxels in selected structures. By selecting and prioritizing components of this function the user could drive the computations towards the desired solution. This optimization approach was tested using cases of patients treated for prostate and oropharyngeal cancer. Initial solutions were obtained based on minimization/maximization of the dose to critical structures and targets. Subsequently, the volume-based objective functions were used to locate solutions, which satisfied better clinical objectives particular to each of the cases. For prostate cases, these additional solutions offered further improvements in sparing of the rectum or the bladder. For oropharyngeal cases, families of solutions were obtained satisfying an intensity modulated radiation therapy protocol for this disease site, while offering significant improvement in the sparing of selected critical structures, e.g., parotid glands. An additional advantage of the present approach was in providing a convenient mechanism to test the feasibility of the dose-volume histogram constraints.
Bednarz, Greg; Michalski, Darek; Anne, Pramila R; Valicenti, Richard K [Department of Radiation Oncology, Kimmel Cancer Center of the Jefferson Medical College, Thomas Jefferson University, Philadelphia, PA 19107 (United States)
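The volume-based objective counts under- and overdosed voxels in selected structures. The paper embeds this count in a mixed integer program; evaluating it for a candidate plan, however, is straightforward (the dose values and bounds below are illustrative, not clinical recommendations):

```python
import numpy as np

def violation_counts(dose, lower=None, upper=None):
    """Count under- and overdosed voxels in a structure's dose array —
    the quantity the volume-based objective asks the optimizer to minimize."""
    under = int(np.count_nonzero(dose < lower)) if lower is not None else 0
    over = int(np.count_nonzero(dose > upper)) if upper is not None else 0
    return under, over

# Hypothetical target-volume doses (Gy) against an assumed 68-76 Gy window.
target = np.array([70.0, 72.0, 65.0, 74.0, 71.0])
under, over = violation_counts(target, lower=68.0, upper=76.0)
```

In the full formulation each voxel's violation is a binary decision variable, so prioritizing components of the objective amounts to weighting these counts per structure.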
In spite of the recent quick growth of the Evolutionary Multi-objective Optimization (EMO) research field, there have been few attempts to adapt the general variation operators to the particular context of the quest for the Pareto-optimal set. The only exceptions are some mating restrictions that take into account the distance between the potential mates - but contradictory conclusions have been reported. This paper introduces a particular mating restriction for Evolutionary Multi-objective Algorithms, based on the Pareto dominance relation: the partner of a non-dominated individual will preferably be chosen among the individuals of the population that it dominates. Coupled with the BLX crossover operator, two different ways of generating offspring are proposed. This recombination scheme is validated within the well-known NSGA-II framework on three bi-objective benchmark problems and one real-world bi-objective constrained optimization problem. An acceleration of the progress of the population toward the Pareto set...
Roudenko, O; Roudenko, Olga; Schoenauer, Marc
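The mating restriction reduces to a Pareto-dominance test plus a preference rule when drawing a partner. A minimal sketch (minimization convention assumed; the fallback-to-random rule is one plausible reading, not necessarily the paper's exact policy):

```python
import numpy as np

def dominates(a, b):
    # a Pareto-dominates b (minimization): no worse everywhere, better somewhere.
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pick_mate(objs, i, rng):
    """Prefer a partner that individual i dominates; fall back to any other."""
    dominated = [j for j in range(len(objs)) if j != i and dominates(objs[i], objs[j])]
    pool = dominated if dominated else [j for j in range(len(objs)) if j != i]
    return int(rng.choice(pool))

# Toy population of bi-objective values; individual 0 dominates individual 1.
objs = [(1.0, 2.0), (2.0, 3.0), (0.5, 4.0)]
rng = np.random.default_rng(0)
mate = pick_mate(objs, 0, rng)
```

The selected pair would then be recombined with BLX crossover; the restriction only biases who mates with whom.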
Full Text Available We present an object identification methodology applied in a navigation assistance for the visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted on headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision-impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy rule base. The blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.
Rosalyn R. Porle
During the signal acquisition for synthetic aperture imaging ladar, any phase error will degrade the phase-matched filtering results. The phase differential algorithm (PDA) is presented to correct the phase error. The quadratic spatial phase history can be reconstructed from the phase information submerged by the phase error from platform line-of-sight translation-vibration and nonlinear chirp. Theoretical modeling results and experimental results are presented.
Zhi, Ya'nan; Sun, Jianfeng; Hou, Peipei; Dai, Enwen; Zhou, Yu; Liu, Liren
This paper describes the philosophy, development and framework of the body of elements formulated to provide an approach to evidence-based learning sustained by Learning Objects and web-based technology. Due to the demands for continuous improvement in the delivery of healthcare and the continuous endeavour to improve the quality of life, there is a continuous need for practitioners to update their knowledge by accomplishing accredited courses. The rapid advances in medical science have mea...
Zabin Visram; Bruce Elson; Patricia Reynolds
Computer vision techniques have been used to develop a vision-based grasping capability for autonomously picking and placing unknown piled objects. This work is currently being applied to the problem of hazardous waste sorting in support of the Department of Energy's Mixed Waste Operations Program.
In this paper, a stereo vision-based robot servoing control approach is presented for object grasping. Firstly, three-dimensional projective reconstruction with two free-standing CCD cameras and homogeneous transformation are used to specify the goal grasping position and orientation of a robot hand. Secondly, a stereo vision-based servoing problem is formulated, and a stereo vision-based servoing control algorithm which is independent of robotic dynamics is proposed. Using this algorithm, a set of velocity reference inputs can be obtained to control the motions and velocities of the robot hand during the visual servoing. Thirdly, the methods for coping with the time delay of image processing and the CCD camera calibration are put forward. Lastly, the effectiveness of the present approach is verified by carrying out several experiments on object grasping using a 6 degrees of freedom robot. Its stability and robustness as well as flexibility are also confirmed by the experimental results.
Xiao, Nan-Feng; Todo, Isao
Full Text Available This work uses data from the Spanish Tourism Demand Segments Survey (N=6900) conducted by the IESA-CSIC for Turismo Andaluz, SA. The objective of the paper is to develop a statistical segmentation or typology of Spanish tourists based on objective aspects of tourist behaviour measured in the survey, including destinations visited, theme of the trip, lodging, transportation and travel group. The initial categorical data are reduced using multiple correspondence analysis and grouped through cluster analysis. Afterwards, the identified segments are evaluated to analyse their tourist profiles with a view to examining sociological perspectives on tourist behaviour.
Oscar Molina Molina
This paper focuses on the application of rigorous optimization methods to the design of complete satellite structural subsystems. It discusses a software system being developed which is based on a combination of finite-element analysis using MSC/NASTRAN, numerical optimization, approximation concepts, and object-oriented technology. The benefits of object-oriented design, in the context of satellite structural optimization, are identified. In addition, an approach to system optimization, including multilevel and coupled subsystem optimization, is explored. Finally, the application of this software to the actual design of two satellite structures is presented.
Kodiyalam, Srinivas; Graichen, Catherine M.; Connell, Isobel J.; Finnigan, Peter M.
We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distribution, which, combined with a match of the warped intensity template and the image, forms the final criterion used for localization and recognition of a given object. The chosen representation gives the model the ability to model an almost arbitrary object. Besides the actual model, a full general scheme for applying the model is proposed. The scheme includes general methods for initialization, optimization and validation. Experimental results for real data are shown. Compared to related work, the proposed model and the methods for initialization and validation contain a number of interesting features and improved abilities.
Jensen, Rune Fisker; Carstensen, Jens Michael
Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.
Kelsey, R. L. (Robert L.)
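A toy version of an object-based XML knowledge representation for the water-slide kit might look like the following. The element names, attributes, and `Segment` class are invented for illustration; the students' actual schema and their Java code are not reproduced here, and Python's standard library is used instead:

```python
import xml.etree.ElementTree as ET

# Hypothetical block-kit representation: a slide is a sequence of segments.
SLIDE_XML = """
<slide name="demo">
  <segment kind="drop" length="4.0"/>
  <segment kind="curve" length="2.5"/>
  <segment kind="splash" length="1.0"/>
</slide>
"""

class Segment:
    """One building block of the slide, decoded from its XML element."""
    def __init__(self, kind, length):
        self.kind, self.length = kind, float(length)

def load_slide(xml_text):
    root = ET.fromstring(xml_text)
    return [Segment(s.get("kind"), s.get("length")) for s in root.findall("segment")]

segments = load_slide(SLIDE_XML)
total_length = sum(s.length for s in segments)
```

The point of the exercise is the object decomposition: each XML element maps one-to-one onto an object the simulation code can then manipulate.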
Full Text Available In studying multidisciplinary design optimization methods for non-hierarchic systems, a Multidisciplinary Object Compatibility Design Optimization method based on a simulated annealing algorithm is presented. In order to coordinate the independent optimization of subsystems, the compatibility constraint at the system level and the compatibility objective at the subsystem level work together. As the optimization process continues, the coupling relationship between the system level and the different subsystems is gradually improved by a state-accepting function which is embedded in the compatibility constraint. In this way, abnormal program termination and premature convergence are avoided and an ideal global optimal solution is achieved effectively. The method is then used in the optimization design of the TR-B triple-redundancy transmission system. The multidisciplinary object compatibility design optimization model is established and a comprehensive optimal solution is obtained which meets the matching relationship of gear teeth, strength requirements and dynamic requirements, etc.
Graphene-based nano-objects such as nanotrenches, nanowires, nanobelts and nanoscale superstructures have been grown by surface segregation and precipitation on carbon-doped mono- and polycrystalline nickel substrates in ultrahigh vacuum. The dominant morphologies of the nano-objects were nanowire and nanosheet. Nucleation of graphene sheets occurred at surface defects such as step edges and resulted in the directional growth of nanowires. Surface analysis by scanning tunneling microscopy (STM) has clarified the structure and functionality of the novel nano-objects at atomic resolution. Nanobelts were detected consisting of bilayer graphene sheets with a nanoscale width and a length of several microns. Moire patterns and one-dimensional reconstruction were observed on multilayer graphite terraces. As a useful functionality, application to repairable high-resolution STM probes is demonstrated.
Fujita, Daisuke, E-mail: FUJITA.Daisuke@nims.go.jp [International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science 1-2-1 Sengen, Tsukuba 305-0047 (Japan)
Full Text Available Problem statement: This study deals with object recognition based on image segmentation and clustering. Acquiring prior information of an image is done via two separate processes. Approach: The first process deals with detecting object parts of an image and integration of detected parts into several clusters. All these cluster centers form the visual words. The second process deals with over-segmenting the image into super pixels and formation of larger sub-regions using a mid-level clustering algorithm, since it incorporates various information to decide the homogeneity of a sub-region. Results: The outcomes of the two processes are used for the proposed similarity graph representation for object segmentation. In order to model the relationship between the shape and color or texture, a matrix representation has been used. A mask map encodes the probability of each super pixel lying inside an object. Conclusion: The basic aim is to integrate all the priors into a uniform framework. Thus the ORBISC can handle size, color, texture and pose variations better than those methods that focus on the objects only.
Full text: Geological dating with the help of fission track analysis is based on a time-consuming counting of the spontaneous and induced tracks in the minerals. Fission tracks are damage trails in minerals caused by fast charged particles released in nuclear fission. In this study the ζ-method is used for fission-track dating. In order to determine the age, spontaneous tracks in the apatite and induced tracks in the muscovite external detector have to be counted. Automatic extraction and identification would not only improve the speed of track counting but also eliminate the personal factor. Pixel values alone are not enough to distinguish between tracks and background. Traditional pixel-based approaches are therefore inefficient for fission track counting. Image analysis based on objects, which includes shape, texture and contextual information, is a more promising method. A procedure for automatic object-based classification is used to extract the track objects. Resolving the individual tracks in a multi-track object is based on morphological operations. The individual track objects are skeletonized and the number of individual tracks in the object is counted by processing the skeletons. To give the right fission track age, every single user manually counting the tracks has to be calibrated. We calibrate the automatic approach for counting in the same way. Durango apatite standard samples are used to determine the ζ- and Z-calibration factors. The automatic approach is useful for counting tracks in apatite standards and induced tracks in muscovite external detectors, where the quality and quantity of the etched tracks are high. Muscovite detectors irradiated against glasses can also be used to determine the thermal neutron fluence, which is necessary to determine an absolute age. These images are of high quality and free of disturbing background irregularities. Here the automatic approach is a practical alternative.
However, for natural samples with small grain size, low track numbers and background irregularities, the implementation is questionable. The algorithm for the automatic extraction and counting of fission tracks in standard samples of Durango apatite and muscovite external detectors is shown to be self-consistent. (author)
Full Text Available This paper describes the philosophy, development and framework of the body of elements formulated to provide an approach to evidence-based learning sustained by Learning Objects and web-based technology. Due to the demands for continuous improvement in the delivery of healthcare and the continuous endeavour to improve the quality of life, there is a continuous need for practitioners to update their knowledge by accomplishing accredited courses. The rapid advances in medical science have meant that, increasingly, there is a desperate need to adopt wireless schemes, whereby bespoke courses can be developed to help practitioners keep up with the expanding knowledge base. Evidently, without current best evidence, practice risks becoming rapidly out of date, to the detriment of the patient. There is a need to provide a tactical, operational and effective environment, which allows professionals to update their education and complete specialised training, just-in-time, in their own time and location. Following this demand in the marketplace, the information engineering group, in combination with several medical and dental schools, set out to develop and design a conceptual framework which forms the basis of pioneering research that, at last, enables practitioners to adopt a philosophy of lifelong learning. The body and structure of this framework is subsumed under the term Object-oriented approach to Evidence Based learning, Just-in-time, via Internet sustained by Reusable Learning Objects (the OEBJIRLO Progression). The technical pillars which permit this concept of lifelong learning are pivoted by the foundations of object-oriented technology, Learning Objects, just-in-time education, data mining, intelligent agent technology, Flash interconnectivity and remote wireless technology, which allow practitioners to update their professional skills and complete specialised training which leads to accredited qualifications.
This paper sets out to develop and implement a range of teaching and learning strategies that accommodate the flexibility required by such a scheme while satisfying the specific requirements of individual programmes. The body of elements provides an integrated path taking students through the range of operational, tactical and strategic issues involved in Web Based Learning, sustained by a learning object abstract framework and agent technology, within a distance learning context.
Wyss, Gregory D.; Duran, Felicia A.
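The recursive, probability-weighted scenario enumeration the abstract credits to OBEST can be sketched as follows. This is a minimal illustration only: the branch-point representation and the event names are assumptions for the example, not the actual OBEST object model.

```python
def enumerate_scenarios(branch_points):
    """Recursively enumerate every scenario implied by a sequence of
    probabilistic branch points (each a dict: outcome -> probability).
    Returns (scenario, probability) pairs covering the whole probability
    mass, so even extremely rare scenarios are found exactly rather than
    sampled, as in a Monte Carlo approach."""
    if not branch_points:
        return [((), 1.0)]
    head, rest = branch_points[0], branch_points[1:]
    tails = enumerate_scenarios(rest)
    return [((outcome,) + tail, p * q)
            for outcome, p in head.items()
            for tail, q in tails]

# Illustrative two-branch model (hypothetical events, not from the report).
model = [{"pump_fails": 0.01, "pump_runs": 0.99},
         {"valve_sticks": 0.001, "valve_opens": 0.999}]
scenarios = dict(enumerate_scenarios(model))
```

The rare "pump_fails, valve_sticks" path gets probability 1e-5 directly from the solution algorithm, with no sampling variance.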
With the Internet success leading to heavy demands on the network, proxies have become an unavoidable necessity. In this paper we present a new technique to improve caching services for an Internet user group. We propose an alternative to classical LRU, LFU or similar algorithms. Our approach is based on object usage, cross-references and geographic location. With our system we will not only improve on storing performance taking into account preferences and usage of specific user groups, but w...
Rochat, Philippe; Thompson, Stuart
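A usage- and locality-aware eviction rule of the kind the abstract contrasts with LRU and LFU might look like the sketch below. The scoring weights and metadata fields are illustrative assumptions, not the authors' actual policy.

```python
def evict_candidate(cache):
    """Pick the cache entry to evict: the one with the lowest combined
    score of access count, cross-reference count and a locality bonus.
    The weights (1, 2, 5) are assumed for illustration only."""
    def score(meta):
        return meta["hits"] + 2 * meta["xrefs"] + (5 if meta["local"] else 0)
    return min(cache, key=lambda name: score(cache[name]))

# Hypothetical per-object metadata gathered by the proxy.
cache = {
    "a.html": {"hits": 10, "xrefs": 0, "local": False},
    "b.css":  {"hits": 3,  "xrefs": 4, "local": True},
    "c.js":   {"hits": 1,  "xrefs": 0, "local": False},
}
victim = evict_candidate(cache)
```

Under these weights the rarely used, unreferenced, remote object is evicted first, even though plain LFU would agree here only by coincidence of the hit counts.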
Efficient incremental image alignment is a topic of renewed interest in the computer vision community because of its applications in model fitting and model-based object tracking. Successful compositional procedures for aligning 2D and 3D models under weak-perspective imaging conditions have already been proposed. Here we present a mixed compositional and additive algorithm which is applicable to the full projective camera case.
Muñoz, Enrique; Buenaposada Biencinto, José Miguel; Baumela Molina, Luis
We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate s...
Drăguţ, Lucian; Eisank, Clemens
The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.
Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk
The software development for programmable logical controllers is usually based on low-level languages such as the instruction list or the ladder diagram. At the same time, the programmer looks at a machine or an assembly system in a bit-oriented way: he translates the operational sequences into logical and/or time based combinations of binary signals described by the means of boolean algebra. A machine, however, does not only consist of binary signals but of technical components, i.e. objects...
Weule, Hartmut; Spath, Dieter; Schelberg, Hans-joachim
This article explains that most existing vision systems rely on models generated in an ad hoc manner and have no explicit relation to the CAD/CAM system originally used to design and manufacture these objects. The authors desire a more unified system that allows vision models to be automatically generated from an existing CAD database. A CAD system contains an interactive design interface, graphic display utilities, model analysis tools, automatic manufacturing interfaces, etc. Although it is a suitable environment for design purposes, its representations and the models it generates do not contain all the features that are important in robot vision applications. In this article, the authors propose a CAD-based approach for building representations and models that can be used in diverse applications involving 3D object recognition and manipulation. There are two main steps in using this approach. First, they design the object's geometry using a CAD system, or extract its CAD model from the existing database if it has already been modeled. Second, they develop representations from the CAD model and construct features possibly by combining multiple representations that are crucial in 3D object recognition and manipulation.
Bhanu, B.; Ho, C.C.
An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
Duong, Tuan; Duong, Vu; Stubberud, Allen
Image compression based on transform coding appears to be approaching a bit-rate limit for visually acceptable distortion levels. Although an emerging compression technology called object-based compression (OBC) promises significantly improved bit rate and computational efficiency, OBC is epistemologically distinct in a way that renders existing image quality measures (IQMs) for compression transform optimization less suitable for OBC. In particular, OBC segments source image regions, then efficiently encodes each region's content and boundary. During decompression, region contents are often replaced by similar-appearing objects from a codebook, thus producing a reconstructed image that corresponds semantically to the source image, but has pixel-, featural-, and object-level differences that are apparent visually. OBC thus gains the advantage of fast decompression via efficient codebook-based substitutions, albeit at the cost of codebook search in the compression step and significant pixel- or region-level errors in decompression. Existing IQMs are pixel- and region oriented, and thus tend to indicate high error due to OBC's lack of pixel-level correlation between source and reconstructed imagery. Thus, current IQMs do not necessarily measure the semantic correspondence that OBC is designed to produce. This paper presents image quality measures for estimating semantic correspondence between a source image and a corresponding OBC-decompressed image. In particular, we examine the semantic assumptions and models that underlie various approaches to OBC, especially those based on textural as well as high-level name and spatial similarities. We propose several measures that are designed to quantify this type of high-level similarity, and can be combined with existing IQMs for assessing compression transform performance. Discussion also highlights how these novel IQMs can be combined with time and space complexity measures for compression transform optimization.
Schmalz, Mark S.; Ritter, Gerhard X.
In this paper, a fast moving-object detection algorithm based on motion estimation is proposed. Our algorithm consists of three main parts. First, we use the Enhanced Predictive Zonal Search (EPZS) method to match patches between video frames. Then, moving-object contour information is extracted by applying the Tree Structure Moving Compensation (TSMC) technique to the subdivision and matching of patches of interest. Finally, we extract the patches of the target from the statistics of the motion vectors. By applying EPZS we can quickly detect the moving target and obtain its information. Experiments show that the algorithm can quickly and efficiently detect a moving target and its contour from adjacent frames. Compared with other algorithms, it offers better usability and faster processing while remaining unaffected by camera motion.
Li, Zhida; Yu, Changsheng; Xie, Shixiong
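The motion-vector statistics the detector builds on can be illustrated with a brute-force block matcher. Note this exhaustive search is a simplified stand-in for the much faster EPZS search the paper uses, and the tiny frames are synthetic.

```python
def block_motion(prev, curr, bs=2, search=1):
    """Exhaustive block matching between two greyscale frames (lists of
    rows): for each bs x bs block of `curr`, find the (dy, dx) within
    +/-search that minimizes the sum of absolute differences (SAD)
    against `prev`. Ties break toward the smallest (dy, dx)."""
    h, w = len(curr), len(curr[0])
    vectors = {}
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            best = (float("inf"), (0, 0))
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    if not (0 <= by + dy <= h - bs and 0 <= bx + dx <= w - bs):
                        continue  # candidate block would leave the frame
                    sad = sum(abs(curr[by + y][bx + x] -
                                  prev[by + dy + y][bx + dx + x])
                              for y in range(bs) for x in range(bs))
                    best = min(best, (sad, (dy, dx)))
            vectors[(by, bx)] = best[1]
    return vectors

# A horizontal intensity ramp shifted one pixel to the right between frames.
prev = [[1, 2, 3, 4] for _ in range(4)]
curr = [[0, 1, 2, 3] for _ in range(4)]
vectors = block_motion(prev, curr)
```

Blocks whose best displacement is nonzero are the candidates a detector like this aggregates into a moving-object mask.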
This paper discusses object-based graphical user interfaces that may be used as a flexible, device-independent front-end for power system simulation and control. This discussion is illustrated by an experimental prototype GUI suitable for energy management systems or operator training simulators. The GUI is based on the X window environment and employs multiple windows to display differing views of the system and direct mouse manipulations to affect the various power system objects in a consistent fashion, ensuring ease of use. An editing portion of the system allows the dynamic construction of one-line diagrams in a paint-program-like environment, although an advanced automatic display generation feature is also provided which is capable of constructing layouts based on database information and sophisticated routing and layout heuristics. The X window (and C++) basis of the implementation of the system provides for its relative platform and operating system independence, and additionally enables the networked operation of multiple platforms running the same functions. As the industry is already moving in these directions, the aim of this paper is to document the generic features and advantages of such a GUI.
Foley, M.; Bose, A. (Arizona State Univ., Tempe, AZ (United States). Electrical Engineering Dept.); Mitchell, W.; Faustini, A. (Arizona State Univ., Tempe, AZ (United States). Computer Science Dept.)
The purpose of this paper is to describe a new method for tracking trajectories specified in the image space. This method, called movement flow-based visual servoing system, is applied to an eye-in-hand robot and it is shown that it allows the correct tracking of a trajectory, not only in the image but also in the 3-D space. This method is also extended to the case in which the object from which the features are extracted, is in motion. To do so, the estimations obtained, using ...
Pomares Baeza, Jorge; Torres Medina, Fernando
Full Text Available This paper considers the problem of accurately judging the threshold under complicated circumstances. In a detection system the threshold is one of the most important factors; it decides the accuracy of the detection result. Because the circumstances change, the threshold must adapt to those changes, and traditional algorithms can hardly satisfy this need. A Bayesian model is an efficient framework based on statistical rules that can give a better detection result. In order to adapt to lighting changes within a video sequence, a Bayesian decision criterion is used to detect objects: the costs of false alarms and of missed detections are considered together and, combined with the likelihood function and a Bayesian risk assessment, an adaptive threshold is obtained. The threshold is determined by the mean and variance of the image, so it is an optimal threshold that changes with every image. This optimal threshold is used to separate objects from the background. Compared with a traditional fixed threshold, it can suit different circumstances. Experimental results show that background noise can be removed with the dynamic threshold and the moving object can be detected accurately.
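The per-image mean/variance threshold the abstract describes can be sketched in a few lines. The paper derives its threshold from a Bayesian risk criterion; the closed form mu + k*sigma and the constant k below are simplified stand-ins, not the authors' derivation.

```python
import math

def adaptive_threshold(pixels, k=2.0):
    """Per-frame threshold computed from the image's own statistics:
    t = mu + k * sigma, so the cut adapts as lighting changes between
    frames. k is an assumed tuning constant for this sketch."""
    n = len(pixels)
    mu = sum(pixels) / n
    var = sum((p - mu) ** 2 for p in pixels) / n
    return mu + k * math.sqrt(var)

# Synthetic frame: dim background plus one bright moving-object pixel.
frame = [10, 12, 11, 10, 13, 11, 250, 12]
t = adaptive_threshold(frame)
foreground = [p for p in frame if p > t]
```

Because t is recomputed from each frame's mean and variance, a uniformly brighter frame raises the threshold instead of flooding the foreground mask, which is the behavior a fixed threshold cannot provide.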
Rapid change detection is used in cases of natural hazards and disasters. This analysis quickly yields information about areas of damage. In certain cases the lack of information after catastrophic events obstructs support measures within disaster management. Earthquakes, tsunamis, civil war, volcanic eruptions, droughts and floods have much in common: people are directly affected, and landscapes and buildings are destroyed. In every case geospatial data is necessary to gain knowledge as a basis for decision support. Where to go first? Which infrastructure is usable? How much area is affected? These are essential questions which need to be answered before appropriate, eligible help can be established. This study presents an innovative strategy to retrieve post-event information through an object-based change detection approach. Within a transferable framework, the developed algorithms can be applied to a set of remote sensing data across different investigation areas. Several case studies form the basis for the retrieved results. Through a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated normalized temporal change index (NTCI), panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and areas which are newly developing where rebuilding has already started. The results of the study also make the approach feasible for monitoring urban growth.
Thunig, Holger; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter
Full Text Available Object tracking in video sequences is one of the important ongoing research areas in the field of computer vision. Computer vision comprises methods for acquiring, processing and analyzing images, and covers the essential technology of automatic image analysis, which is used in various fields. The aim of object tracking is to find the trajectory of the target objects through a number of frames of an image sequence. Object tracking is the identification of an interesting object, especially the tracking of pedestrians or moving vehicles. Tracking is a challenging problem owing to object occlusion, varying illumination, unexpected object motion and camera motion. Many algorithms have been developed for successful tracking. Object tracking is mainly divided into three stages: object extraction, object recognition and tracking, and decisions about activities. In this paper we have implemented some algorithms and analyzed comparison tables.
Joshan Athanesious, J.; Suresh, P.
The US Department of Energy complex has a significant legacy of equipment which has been used in the production of the nation's nuclear weapons stockpile, and is consequently contaminated with Transuranic (TRU) material. Present methods for decontamination and disposal of these items typically involve surface scrubbing followed by size reduction, and packaging for ultimate disposal in a facility such as the Waste Isolation Pilot Plant (WIPP). The present decontamination methods are often crude, involving significant possibility for exposure of personnel to radiation, and frequently generating large quantities of contaminated liquid waste, which must be processed before re-use or disposal. A dry decontamination process using Reactive Ion Etching (RIE) with a fluorine- and oxygen-based plasma is being developed at Los Alamos National Laboratory as an improved method for removal of TRU material from the surface or near-surface of large metallic objects. Technical issues which remain to be addressed in the development of this process include evaluation of potential cross- and residual contamination, the ability to access hold-up areas such as varying sizes of cracks and crevices in the object being decontaminated, the impact of non-conducting portions of the target, and the efficiency of contaminant control and capture in simple down-stream filters. Details and results of RIE tests using surrogate materials in various geometries will be presented.
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular to search for similar objects in an own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-word model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images of the database highlighting the visual information which is common with the query image. Additionally, new images can be added to the database making it a powerful and interactive tool for mobile content-based image retrieval.
Manger, D.; Pagel, F.; Widak, H.
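The bag-of-words retrieval model the server side relies on can be sketched as follows. The 1-D "descriptors", the tiny vocabulary, and histogram intersection as the ranking score are all illustrative assumptions; real systems quantize e.g. 128-D SIFT descriptors against a large learned vocabulary.

```python
from collections import Counter

def bow_histogram(descriptors, vocab):
    """Quantize each local descriptor to its nearest visual word and
    count word occurrences (the bag-of-words representation)."""
    def nearest(d):
        return min(range(len(vocab)), key=lambda i: abs(vocab[i] - d))
    return Counter(nearest(d) for d in descriptors)

def similarity(h1, h2):
    """Histogram intersection between two bag-of-words histograms
    (Counter returns 0 for absent words)."""
    return sum(min(h1[w], h2[w]) for w in set(h1) | set(h2))

vocab = [0.0, 1.0, 2.0]                      # toy visual vocabulary
db = {"img_a": bow_histogram([0.1, 0.2, 1.9], vocab),
      "img_b": bow_histogram([1.0, 1.1, 0.9], vocab)}
query = bow_histogram([0.0, 2.1], vocab)
best = max(db, key=lambda name: similarity(db[name], query))
```

The query shares visual words 0 and 2 with img_a only, so img_a ranks first; an inverted index over the word IDs makes the same ranking scale to large databases.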
The adjustment of multiple criteria in hit-to-lead identification and lead optimization is a major advance in drug discovery. Thus, the development of approaches able to handle additional criteria for the early simultaneous treatment of the most important properties determining the pharmaceutical profile of a drug candidate is an emergent issue in this area. In this paper, we review a desirability-based multi-objective QSAR method allowing the joint handling of multiple properties of interest in drug discovery: the MOOP-DESIRE methodology. This methodology adapts desirability theory concepts allowing the holistic modeling of the many and conflicting biological properties determining the therapeutic utility of a drug candidate. Here we survey their suitability for key tasks involving the use of chemoinformatics methods in medicinal chemistry and drug discovery. PMID:22420570
Cruz-Monteagudo, Maykel; Cordeiro, M Natalia D S; Tejera, Eduardo; Dominguez, Elena Rosa; Borges, Fernanda
Traditional just-in-time compilers operate at the granularity of methods. Compiling a method in such a compiler is an atomic operation that can require substantial amounts of processing time, resulting in execution pauses in interactive computing environments. We describe a new software architecture for dynamic compilers in which the granularity of compilation steps is much finer, forming a “pipeline” with completely linear runtime behavior, and in which there are only two write barriers. This means that on future many-core platforms, the compiler itself can be parallelized, providing high-throughput dynamic compilation without execution pauses. As our prototype for Java demonstrates, stream-based compilation lends itself very naturally to an object-oriented implementation.
Bebenita, Michael; Chang, Mason; Gal, Andreas; Franz, Michael
Full Text Available In order to overcome the defects of parameter estimation by least squares in the traditional grey Verhulst model, and to enhance the forecasting accuracy of the grey Verhulst model in medium- and long-term load forecasting when load growth follows an S-curve or is saturating, an estimation method based on least absolute deviation, which uses goal programming to estimate the parameters of the grey Verhulst model, is presented. This model is then applied to long-term load forecasting and compared with the traditional grey Verhulst model. The results show that the method inherits the advantages of least absolute deviation: it is little influenced by singular values, and its robustness is good. The model avoids the defects of least-squares parameter estimation in the traditional grey Verhulst model, and its forecasting precision is higher.
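The robustness claim for least absolute deviation can be illustrated with a deliberately tiny case: a one-parameter slope fit through the origin, where the LAD optimum is attained at one of the ratios y_i/x_i. This is an assumption-laden sketch of the estimation principle only, not the paper's goal-programming formulation on the grey Verhulst model.

```python
def lad_slope(xs, ys):
    """Least-absolute-deviation slope for y = b*x through the origin.
    The optimum lies at one of the ratios y_i/x_i, so a brute-force
    search over those candidates suffices here."""
    candidates = [y / x for x, y in zip(xs, ys) if x != 0]
    def loss(b):
        return sum(abs(y - b * x) for x, y in zip(xs, ys))
    return min(candidates, key=loss)

def ls_slope(xs, ys):
    """Ordinary least-squares slope through the origin, for contrast."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 100]   # true slope 2, with one gross outlier
```

LAD recovers the slope 2 despite the singular value, while least squares is dragged above 10 by the single outlier, which mirrors the robustness argument in the abstract.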
Geographic object-based image analysis (GEOBIA) produces results that have both thematic and geometric properties. Classified objects not only belong to particular classes but also have spatial properties such as location and shape. Therefore, any accuracy assessment where quantification of area is required must (but often does not) take into account both thematic and geometric properties of the classified objects. By using location-based and area-based measures to compare classified objects to corresponding reference objects, accuracy information for both thematic and geometric assessment is available. Our methods provide location-based and area-based measures with application to both a single-class feature detection and a multi-class object-based land cover analysis. In each case the classification was compared to a GIS layer of associated reference data using randomly selected sample areas. Error is able to be pin-pointed spatially on per-object, per class and per-sample area bases although there is no indication whether the errors exist in the classification product or the reference data. This work showcases the utility of the methods for assessing the accuracy of GEOBIA derived classifications provided the reference data is accurate and of comparable scale.
Whiteside, Timothy G.; Maier, Stefan W.; Boggs, Guy S.
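One concrete area-based measure of the kind the abstract calls for is the intersection-over-union of a classified object against its reference object. The pixel-set representation below is an illustrative assumption; it is one possible area-based measure, not necessarily the exact one used by the authors.

```python
def area_overlap(classified, reference):
    """Area-based agreement between a classified object and its
    reference object, each given as a set of pixel coordinates:
    intersection area over union area (IoU)."""
    inter = len(classified & reference)
    union = len(classified | reference)
    return inter / union if union else 0.0

# A 4x4 reference object and a classification shifted one column right:
ref = {(x, y) for x in range(4) for y in range(4)}     # 16 pixels
cls = {(x, y) for x in range(1, 5) for y in range(4)}  # 16 pixels, shifted
iou = area_overlap(cls, ref)
```

Here the thematic label may be perfectly correct while the geometric agreement is only 0.6, which is exactly the distinction between thematic and geometric accuracy the abstract emphasizes.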
Wetlands are valuable ecosystems that benefit society. However, throughout history wetlands have been converted to other land uses. For this reason, timely wetland maps are necessary for developing strategies to protect wetland habitat. The goal of this research was to develop a time-efficient, automated, low-cost method to map wetlands in a semi-arid landscape that could be scaled up for use at a county or state level, and could lay the groundwork for expanding to forested areas. Therefore, it was critical that the research project contain two components: accurate automated feature extraction and the use of low-cost imagery. For that reason, we tested the effectiveness of geographic object-based image analysis (GEOBIA) to delineate and classify wetlands using freely available true color aerial photographs provided through the National Agriculture Inventory Program. The GEOBIA method produced an overall accuracy of 89% (khat = 0.81), despite the absence of infrared spectral data. GEOBIA provides the automation that can save significant resources when scaled up while still providing sufficient spatial resolution and accuracy to be useful to state and local resource managers and policymakers.
Halabisky, Meghan; Moskal, L. Monika; Hall, Sonia A.
One of the "hottest" topics in recent information systems and computer science is metadata. Learning Object Metadata (LOM) appears to be a very powerful mechanism for representing metadata, because of the great variety of LOM objects. This is one of the reasons why the LOM standard is repeatedly cited in projects in the field of eLearning systems.…
Holzinger, Andreas; Kleinberger, Thomas; Muller, Paul
This paper proposes a robust object recognition system based on camera control. The objective of our system is to seek the optimum camera position from which an unknown object can be recognized clearly. We define a degree of recognition ambiguity based on basic probabilities, calculated from an input image and model images generated from object model data. Our active vision system iteratively makes an action plan so as to decrease the degree of recognition ambiguity and controls the camera to move to the optimum position. Our proposed active method is able to recognize objects more accurately than conventional passive methods, which analyze only a given input image. Experimental results show the effectiveness of our approach. 12 refs., 8 figs.
Nishikawa, N.; Onishi, M.; Matsumoto, T.; Izumi, M.; Fukunaga, K. [University of Osaka Prefecture, Osaka (Japan). Faculty of Engineering
Full Text Available In this proposal, a flexible business process management approach, centered on the objective concept and covering the process lifecycle, is presented. The main feature of this approach is that the map model is used as the key element to drive the construction and execution of flexible business processes. An analysis phase starts with a model which fully considers the objective and sub-objectives of the business process when defining it. A design phase uses the map model to specify and represent, in a modular manner, the possible plans that are capable of achieving the predefined objective. Examples are presented from a case study in the travel agency Numédia. The architecture of the execution engine for business processes so defined by map modeling is presented, covering their interpretation and execution. Finally, an evaluation of the degree of flexibility brought by the proposed management approach is given.
We present an algorithm for clustering many-dimensional objects in which only the distances between objects are used. Centers of classes are found with the aid of a neuron-like procedure with lateral inhibition. The result of the clustering does not depend on the starting conditions. Our algorithm makes it possible to form an idea of the classes that really exist in the empirical data. The results of computer simulations are presented.
Litinskii, Leonid B.; Romanov, Dmitry E.
The particle filter is an effective solution for tracking objects in video sequences in complex situations. Its key idea is to estimate the density over the possible states of the object using a weighted sample whose elements are called particles. One of its crucial steps is a resampling step, in which particles are resampled to avoid a degeneracy problem. In this paper, we introduce a new resampling method called Combinatorial Resampling that exploits some features of articulated ...
Dubuisson, Severine; Gonzales, Christophe; Nguyen, Xuan Son
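For context, the standard resampling step that methods like Combinatorial Resampling aim to improve can be sketched as systematic resampling. This is the common baseline, not the paper's new method; the particles and weights below are toy values.

```python
import random

def systematic_resample(particles, weights, u0=None):
    """Systematic resampling: one uniform offset u0 in [0, 1/N) yields N
    evenly spaced targets swept through the cumulative normalized
    weights, duplicating heavy particles and dropping light ones."""
    n = len(particles)
    total = sum(weights)
    if u0 is None:
        u0 = random.uniform(0.0, 1.0 / n)
    out, i = [], 0
    cum = weights[0] / total
    for j in range(n):
        target = u0 + j / n
        while cum < target:          # advance to the particle whose
            i += 1                   # cumulative weight covers target
            cum += weights[i] / total
        out.append(particles[i])
    return out

# One dominant particle gets duplicated; the offset is fixed for determinism.
resampled = systematic_resample(["a", "b", "c", "d"],
                                [0.7, 0.1, 0.1, 0.1], u0=0.1)
```

The heavy particle "a" is copied three times while two light particles vanish; this concentration of the sample is precisely the degeneracy-versus-diversity trade-off that motivates alternative resampling schemes.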
This paper describes accurate position control in object coordinates. When the motion control of an industrial robot placed in global coordinates is considered in object coordinates, it is preferable and convenient to determine its motion by teaching performed by the robot operator. However, the teaching procedure requires much time and effort, and whenever the relative position between robot and object changes, the operator needs to repeat the teaching operation. To address this issue, a strategy that determines the robot motion without the teaching operation is required. This paper proposes a control strategy that does not require the teaching operation and realizes the desired motion without being affected by the relative position error between the robot and the target object in object coordinates defined by a PSD (Position Sensitive Detector). In the proposed approach, an algorithm that estimates the kinematic transformation between global and object coordinates from the PSD output is introduced, and the error of the coordinate transformation estimated by the proposed approach is compensated in global coordinates. The validity of the proposed method is shown by simulations and experiments.
Nakano, Keisuke; Murakami, Toshiyuki
Full Text Available In a video retrieval system, each video stored in the database has its features extracted and compared with the features of the query image. Local invariant features are obtained for all frames in a sequence and tracked throughout the shot to extract stable features. The proposed work retrieves videos from the database given an object as the query. The video is first converted into frames; these frames are then segmented and the object is separated from the image. Features are then extracted from the object image using SIFT, and features of the video database, obtained by the same segmentation and SIFT feature extraction, are matched by Nearest Neighbor Search (NNS). In this paper we evaluate the proposed video retrieval system. The proposed method improves on previous video retrieval methods because it is invariant to illumination changes.
Full Text Available Objects play a very important role in comparing software entities, but the role of objects in the simulation field is still undefined. Several standards have been created with the purpose of providing a design solution for simulator projects. The design solution reported in this paper aggregates the principles of both software and simulator architectures. The objective is to invert the top-down strategy of model-driven development with the Simulation Model Portability (SMP) standard into a bottom-up development process of an SMP framework. The paper also provides solutions for two development lines of two different frameworks: the first is the SMP framework, which changes according to the design models, and the second is a framework designed to support the development of reusable behavior implementations.
Object segmentation is nowadays a vital component of special-effect creation. In order to add a virtual object or a virtual background to a video sequence, a mask needs to be constructed. This mask delineates the parts that must remain in the final sequence from the parts where the special effects can be added. Such a mask can easily be constructed by filming people in front of a blue or green screen. The main topic of this thesis is the conception of an automatic method to find ...
The detection of moving object is one of the key techniques for video surveillance. In order to extract the moving object robustly in complex background, this paper presen...
Full Text Available With the rapid development of multimedia technologies, man-made object detection has become an important application. An improved LDA approach was used to learn and recognize man-made and natural scene categories. It represents the image of a scene by a collection of local regions, denoted codewords, each represented as part of a "theme". It learns the theme distributions as well as the codeword distributions over the themes. Finally, a Support Vector Machine (SVM) classifier was applied to the image database for man-made object detection. We report satisfactory categorization performance on a large image database.
Verification has become an integral component of satellite precipitation algorithms and products. A number of object-based verification methods have been proposed to provide diagnostic information regarding a precipitation product's ability to capture the spatial pattern, intensity, and placement of precipitation. However, most object-based methods are not capable of investigating precipitation objects at the storm scale. In this study, an image processing approach known as watershed segmentation was adopted to detect storm-scale rainfall objects. Then, a fuzzy logic-based technique was utilized to diagnose and analyze storm-scale object attributes, including centroid distance, area ratio, intersection area ratio and orientation angle difference. Three verification metrics (i.e., false alarm ratio, missing ratio and overall membership score) were generated for validation and verification. Three satellite-based precipitation products, PERSIANN, CMORPH, and 3B42RT, were evaluated against the NOAA stage IV MPE multi-sensor composite rain analysis at 0.25° by 0.25° on a daily scale in the winter season of 2010 over the contiguous United States. The winter season is dominated by frontal systems, which usually have larger area coverage. All three products and the stage IV observation tend to find large storm objects. With respect to the evaluation attributes, PERSIANN tends to obtain a larger area ratio and consequently has a larger centroid distance to the stage IV observations, while 3B42RT is found to be closer to stage IV in object size. All evaluated products give small orientation angle differences but vary significantly in missing ratio and false alarm ratio. This implies that satellite estimates can fail to detect storms in winter.
The overall membership scores are close for all three different products which indicate that all three satellite-based precipitation products perform well for capturing the spatial and geometric characteristics of the precipitation structure.
Li, J.; Hsu, K.; AghaKouchak, A.; Sorooshian, S.
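Two of the object attributes named in the abstract above, centroid distance and area ratio, can be sketched for binary rain masks on a grid. The grids below are toy examples, not data from the study.

```python
# Sketch of two storm-scale object attributes (centroid distance and
# area ratio) for binary "rain object" masks. Toy grids, illustrative
# only; not the study's fuzzy-logic verification framework.

def centroid(mask):
    """Mean (row, col) of the cells set to 1."""
    cells = [(r, c) for r, row in enumerate(mask)
             for c, v in enumerate(row) if v]
    n = len(cells)
    return (sum(r for r, _ in cells) / n, sum(c for _, c in cells) / n)

def area(mask):
    return sum(v for row in mask for v in row)

def centroid_distance(a, b):
    (r1, c1), (r2, c2) = centroid(a), centroid(b)
    return ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5

def area_ratio(a, b):
    return min(area(a), area(b)) / max(area(a), area(b))

obs = [[1, 1, 0, 0],   # "observed" object (e.g. stage IV)
       [1, 1, 0, 0],
       [0, 0, 0, 0]]
sat = [[0, 0, 1, 1],   # "satellite-estimated" object
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
```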
This paper describes video compression in real time. The aim is to achieve a higher compression ratio in lossless compression. Efficient compression is achieved by separating the moving objects from the stationary background and compactly representing their shape, motion, and content. Video compression techniques are used to make efficient use of the available bandwidth. Lossless means that the output from the decompressor is bit-for-bit identical with the original input to the compressor. The decompre...
In a recent study, two hierarchical multi-objective methods were suggested to include short-term targets in life-cycle production optimization. However, this previous study has two limitations: 1) the adjoint formulation is used to obtain gradient information, requiring simulator source-code access and an extensive implementation effort, and 2) one of the two proposed methods relies on the Hessian matrix, which is obtained by a computationally expensive method. In order to overcome the first of ...
Fonseca, R. M.; Leeuwenburgh, O.; Jansen, J. D.
This paper presents a novel algorithm which registers pressure information from tactile sensors installed over the fingers of a robotic hand in order to perform manipulation tasks with objects. This algorithm receives as an input the joint trajectories of the fingers which have to be executed and adapts it to the real contact pressure of each finger in order to guarantee that undesired slippage or contact-breaking is avoided during the execution of the manipulation task. This algorithm has ...
Juan Antonio Corrales Ramón; Fernando Torres Medina; Véronique Perdereau
The first application of the CERN Unified Industrial Control system (UNICOS) has been developed for the 1.8 K refrigerator at point 1.8 in mid-2001. This paper presents the engineering methods used for application development, in order to reach the objectives of maintainability and reusability, in the context of a development done by an external consortium of engineering firms. It will also review the lessons learned during this first development and the improvements planned for the next applications.
Casas-Cubillos, J; Gomes, P; Pezzetti, M; Sicard, Claude Henri; Varas, F J
Object-orientation has rapidly become accepted as the preferred paradigm for large-scale system design. The product created during a software development effort has to be tested, since bugs may be introduced during its development. In this research work we 1) establish a requirement specification for a comprehensive software testing tool, and 2) study the feature set offered by existing software testing tools and their limitations. This will be able to overcome the limitations ...
This paper reports work in progress on the definition of an object-oriented data model tailored for multimedia applications within the HERMES project. The wide diffusion of multimedia applications that use CD-quality audio, video, high-quality images, etc., and the initial availability of multimedia databases lead to the need to find suitable solutions for the retrieval and manipulation of multimedia data. In this paper we present a multimedia data model that addresses the aspects r...
Amato, Giuseppe; Mainetto, Gianni; Savino, Pasquale; Rabitti, Fausto
In a federated database system, a view mechanism is crucial, since it is used to define exportable subsets of data, to perform a virtual restructuring of a dataset, and to construct the integrated schema. The view service in federated database systems must be capable of retaining as much semantic information as possible. The object-oriented (O-O) model was considered a suitable canonical data model, since it meets the original criteria for canonical model selection. However, with the emerge...
In this paper, we propose a visual object tracking framework, which employs an appearance-based representation of the target object, based on local steering kernel descriptors and color histogram information. This framework takes as input the region of the target object in the previous video frame and a stored instance of the target object, and tries to localize the object in the current frame by finding the frame region that best resembles the input. As the object view changes over time, the...
Full Text Available Object detection applications are associated with real-time performance constraints that originate from the embedded systems they are often deployed in. Various features and classification algorithms have been proposed for object detection, but the sensor and hardware costs of such systems are very high. Our embedded system, built on a 32-bit ARM microcontroller with image/video processing capability, detects partially visible pedestrians with a low false alarm rate and at high speed wherever they enter the camera view. The system takes an image captured by a web camera connected to the ARM microcontroller through USB and processes it using image processing techniques. Image processing is signal processing for which the input is an image, whether a photograph or a video frame; the output may be either an image or a set of characteristics or parameters related to the image. The captured image undergoes spatio-temporal background and foreground estimation and evaluation with a spatial Gaussian kernel to provide high-quality object detection; the detected image is continuously displayed on a display unit and the data is stored on a pen drive connected to the system.
An essential task of diagnosticians is the accurate assessment of behavioral skills. Traditionally, deficit-based behavioral assessments have underscored student social skill deficits. Strength-based assessments delineate student competencies and are useful for individualized education program (IEP) and behavioral intervention plan (BIP)…
Wilder, Lynn K.; Braaten, Sheldon; Wilhite, Kathi; Algozzine, Bob
Full Text Available This paper presents a novel algorithm which registers pressure information from tactile sensors installed over the fingers of a robotic hand in order to perform manipulation tasks with objects. This algorithm receives as an input the joint trajectories of the fingers which have to be executed and adapts it to the real contact pressure of each finger in order to guarantee that undesired slippage or contact-breaking is avoided during the execution of the manipulation task. This algorithm has been applied not only for the manipulation of normal rigid bodies but also for bodies whose centre of mass can be changed during the execution of the manipulation task.
Juan Antonio Corrales Ramón
In this paper the behavior of three combinational rules for temporal/sequential attribute data fusion for target type estimation are analyzed. The comparative analysis is based on: Dempster's fusion rule proposed in Dempster-Shafer Theory; Proportional Conflict Redistribution rule no. 5 (PCR5), proposed in Dezert-Smarandache Theory and one alternative class fusion rule, connecting the combination rules for information fusion with particular fuzzy operators, focusing on the t-norm based Conjun...
Tchamova, Albena; Dezert, Jean; Smarandache, Florentin
Full Text Available The goal of proteomics is the complete characterization of all proteins. Efforts to characterize subcellular location have been limited to assigning proteins to general categories of organelles. We have previously designed numerical features to describe location patterns in microscope images and developed automated classifiers that distinguish major subcellular patterns with high accuracy (including patterns not distinguishable by visual examination). The results suggest the feasibility of automatically determining which proteins share a single location pattern in a given cell type. We describe an automated method that selects the best feature set to describe images for a given collection of proteins and constructs an effective partitioning of the proteins by location. An example for a limited protein set is presented. As additional data become available, this approach can produce for the first time an objective systematics for protein location and provide an important starting point for discovering sequence motifs that determine localization.
We present the results of a new astronomical object detection and deblending algorithm when applied to Sloan Digital Sky Survey data. Our algorithm fits PSF-convolved Sérsic profiles to elliptical isophotes of source candidates. The main advantage of our method is that it minimizes the amount and complexity of real-time user input relative to many commonly used source detection algorithms. Our results are compared with 1D radial profile Sérsic fits. Our long-term goal is to use these techniques in a mixture-model environment to leverage the speed and advantages of machine learning. This approach will have a great impact when re-processing large data-sets and data-streams from next generation telescopes, such as the LSST and the E-ELT.
Cabrera, Guillermo; Miller, C.; Harrison, C.; Vera, E.; Asahi, T.
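The Sérsic profile fitted by the algorithm above has the well-known radial form I(r) = I_e · exp(−b_n((r/R_e)^(1/n) − 1)). A minimal sketch, using the common approximation b_n ≈ 2n − 1/3 and omitting the paper's PSF convolution and elliptical-isophote fitting:

```python
import math

# The Sersic radial intensity profile:
#   I(r) = I_e * exp(-b_n * ((r / R_e)**(1/n) - 1))
# with b_n ~ 2n - 1/3, a common approximation valid for moderate n,
# chosen so that I(R_e) = I_e at the effective radius. PSF convolution
# and isophote fitting from the paper are omitted in this sketch.

def sersic(r, I_e, R_e, n):
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * math.exp(-b_n * ((r / R_e) ** (1.0 / n) - 1.0))

# n = 1 gives an exponential disk, n = 4 the de Vaucouleurs profile.
```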
In this paper the behavior of three combinational rules for temporal/sequential attribute data fusion for target type estimation is analyzed. The comparative analysis is based on: Dempster's fusion rule, proposed in Dempster-Shafer Theory; Proportional Conflict Redistribution rule no. 5 (PCR5), proposed in Dezert-Smarandache Theory; and one alternative class fusion rule connecting the combination rules for information fusion with particular fuzzy operators, focusing on the t-norm based Conjunctive rule as an analog of the ordinary conjunctive rule and the t-conorm based Disjunctive rule as an analog of the ordinary disjunctive rule. How different t-conorm and t-norm functions within the TCN fusion rule influence target type estimation performance is studied and estimated.
Tchamova, Albena; Smarandache, Florentin
Full Text Available Portfolio technology is used to solve project portfolio problems at the strategic and tactical levels, namely, project portfolios based on goals and on similarities, respectively. On the basis of analyzing and proposing the types of project portfolios, we analyzed the relation between project functional goals and projects and introduced the project portfolio technology of functional goals. On this basis, we studied the principle and process of the project portfolio technology based on project functional goals and the corresponding formation of programs.
Colony characteristics are used for evaluating the quality of water and food. Automatically detecting colonies in an image is a hard task. This paper proposes a new multi-scale segmentation technique based on wavelet decompositions and watersheds. First, we remove tiny colonies by using a wavelet-domain median filter. Second, the wavelet transform is used to create multi-resolution images. Then the watershed segmentation algorithm is applied to segment the lowest-resolution image and obtain the initial watershed segmentation result. Finally, we segment the high-resolution image based on the low-resolution result. Experimental results show that colony images can be well segmented by the new algorithm.
Wang, W.; Wang, Z.
Routing is one of the most important challenges in ad hoc networks. Numerous algorithms have been presented, one of the most important of which is AODV. Like many other algorithms, AODV calculates an optimum path while paying no attention to environmental conditions, mobility patterns, and mobile-node status. Several proposed algorithms have considered these conditions and are named environment-aware or mobility-based. However, they have not considered realistic move...
Hamideh Babaei; Morteza Romoozi
The portfolio technology is used to solve project portfolio problems from strategic-level and tactical-level, namely, project portfolios based on goals and similarities, respectively. On the basis of analyzing and proposing the type of portfolio of project, we analyzed the relation between the project functional goals and the project, introduced the project portfolio technology of functional goals. On this basis, we studied the principle and process of the project portfolio technology which i...
Jingchun Feng; Xin Zhang; Zhanjun Liu; Haiyang Li
This paper considers the problem of accurately judging thresholds under complicated circumstances. In a detection system, the threshold is one of the most important factors; it decides the accuracy of the detection result. Because the circumstances change, the threshold must adapt to the change. Traditional algorithms can hardly satisfy the needs of the system. A Bayesian model is an efficient system based on statistical rules, and it can give a better detection result. In order to ada...
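The Bayesian thresholding idea can be sketched as a maximum-posterior decision between two Gaussian classes; when the class statistics are re-estimated as the environment changes, the effective threshold adapts automatically. All parameters below are illustrative, not from the paper.

```python
import math

# Minimal Bayesian decision sketch for adaptive thresholding: model
# background and target as Gaussians and classify an observation by
# the larger (unnormalized) posterior. Re-estimating the means,
# variances, and prior as conditions change moves the implied
# threshold automatically. All numbers are illustrative.

def gauss(x, mu, sigma):
    """Gaussian probability density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, bg=(10.0, 2.0), tgt=(20.0, 3.0), p_tgt=0.3):
    """Maximum-posterior decision between background and target."""
    post_bg = (1 - p_tgt) * gauss(x, *bg)
    post_tgt = p_tgt * gauss(x, *tgt)
    return 'target' if post_tgt > post_bg else 'background'
```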
This paper describes SIMMEK, a computer-based tool for performing analysis of manufacturing systems, developed at the Production Engineering Laboratory, NTH-SINTEF. Its main use will be in the analysis of job-shop type manufacturing, but certain facilities make it suitable for FMS as well as production-line manufacturing. This type of simulation is very useful in the analysis of any type of change that occurs in a manufacturing system. These changes may be investments in new machines or equipme...
Eirik Borgen; Henning Neerland; Strandhagen, Jan O.
In this paper, we introduce a new method to distinguish the principal objects in image datasets using graph-based segmentation and normalized histograms (PODSH). Unlike usual object detection systems, which require the input objects, we propose a new approach to recognize the objects one might focus on when taking images. Motivated by picture-taking habits, we suppose that the position of a main object is located near the image centre and that this object always covers a large area. The normali...
Pham The Bao; Bui Ngoc Nam
In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.
Zhao, Lifeng; Kassim, Ashraf A.
The steadily increasing availability of Earth observation (EO) data from a wide range of sensors facilitates the long-term monitoring of mass movements and retrospective analysis. Pixel-based approaches are most commonly used for detecting changes based on optical remote sensing data. However, single pixels are not suitable for depicting natural phenomena such as landslides in their full complexity and their transformation over time. By applying semi-automated object-based change detection, limitations inherent to pixel-based methods can be overcome to a certain extent. For instance, the problem of varying spectral reflectance for the same pixel location in images from different points in time can be minimized. Therefore, atmospheric and radiometric correction of input data sets - although highly recommended - seems not to be that important for developing a straightforward change detection approach based on object-based image analysis (OBIA). The object-based change detection approach was developed for a subset of the Baichi catchment, which is located in the Shihmen Reservoir watershed in northern Taiwan. The study area is characterized by mountainous terrain with steep slopes and is regularly affected by severe landslides and debris flows. Several optical satellite images, i.e. SPOT images from different years and seasons with a spatial resolution ranging from 2.5 to 6.25 m, have been used for monitoring the past evolution of landslides and landslide-affected areas. A digital elevation model (DEM) with 5 m spatial resolution was integrated in the analysis to support the differentiation of landslides and debris flows. The landslide changes were identified by comparing feature values of segmentation-derived image objects between two subsequent images in eCognition (Trimble) software.
To increase the robustness and transferability of the approach, we identified changes by using the relative difference in values of band-specific relational features, spectral indices and texture, instead of applying absolute spectral thresholds. Especially the Normalized Difference Vegetation Index (NDVI) turned out to be a useful indicator of change. In this way, recent landslides can be differentiated from already existing mass movements. Furthermore, old landslides, which are already overgrown by vegetation, can be identified, as well as reactivated ones. The presented approach can be applied for the regular update of existing landslide inventory maps or for the identification of areas that are potentially susceptible to landslides by analyzing the frequency of past landslide events. This might be of interest for decision makers and local stakeholders, as this kind of information can serve as useful input for disaster prevention and risk analysis.
Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens
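The relative-difference NDVI indicator described above can be sketched as follows; the band values and the 0.4 cutoff are illustrative assumptions, not values from the study.

```python
# Sketch of an NDVI-based relative change indicator: compute
# NDVI = (NIR - Red) / (NIR + Red) per image object for two dates and
# flag a change when the *relative* difference exceeds a ratio, rather
# than applying an absolute spectral threshold. The 0.4 cutoff and the
# reflectance values are illustrative only.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def relative_change(v_old, v_new):
    return abs(v_new - v_old) / max(abs(v_old), 1e-9)

def is_landslide_candidate(obj_old, obj_new, cutoff=0.4):
    """obj_* are (mean NIR, mean Red) tuples for one image object."""
    return relative_change(ndvi(*obj_old), ndvi(*obj_new)) > cutoff

vegetated = (0.50, 0.10)   # healthy vegetation: high NDVI
bare = (0.30, 0.25)        # fresh landslide scar: low NDVI
```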
Full Text Available Video content analysis is essential for efficient and intelligent utilization of vast multimedia databases over the Internet. In video sequences, object-based extraction techniques are important for content-based video processing in many applications. In this paper, a novel technique is developed to extract objects from video sequences based on spatiotemporal independent component analysis (stICA) and multiscale analysis. The stICA is used to extract the preliminary source images containing moving objects in video sequences. The source image data obtained after stICA analysis are further processed using wavelet-based multiscale image segmentation and region detection techniques to improve the accuracy of the extracted object. An automated video object extraction system is developed based on these new techniques. Preliminary results demonstrate great potential for the new stICA- and multiscale-segmentation-based object extraction system in content-based video processing applications.
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed near the located object region...
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
Updating existing knowledge bases is crucial to take into account information that is regularly discovered. However, this is quite tedious, and in practice Semantic Web data are rarely updated by users. This paper presents UTILIS, an approach that helps users create and update objects in RDF(S) bases. While creating a new object o, UTILIS searches for similar objects, found by applying relaxation rules to the description of o, taken as a query. The resulting objects and their properties ser...
Full Text Available In this paper, a new object recognition algorithm employing a curvature-based histogram is presented. Recognition of three-dimensional (3-D) objects using range images remains one of the most challenging problems in 3-D computer vision due to noisy and cluttered scene characteristics. The key breakthroughs for this problem mainly lie in defining unique features that distinguish the similarity among various 3-D objects. In our approach, an object detection scheme is developed to identify targets, underlying an automated search in the range images: an initial process of object segmentation subdivides all possible objects in the scenes, and a process of object recognition based on geometric constraints and a curvature-based histogram is then applied. The developed method has been verified through experimental tests confirming its feasibility.
This paper proposes an automated approach for describing how geospatial objects evolve. We consider geospatial objects whose boundaries and properties change over time, and refer to them as evolving objects. Our approach is to provide a set of rules that describe how objects change, referred to as rule-based evolution. We consider the case where we are given a series of snapshots, each of which contains the status of the objects at a given time. Given this data, we would like to extract the rules that describe how these objects changed. We use the technique of case-based reasoning (CBR) to extract the rules of object evolution, given a few representative examples. The resulting rules are used to elicit the full history of all changes in these objects. This allows finding out how objects evolved, recovering their history. As an example of our proposed approach, we include a case study of how deforestation evolves in the Brazilian Amazonia Tropical Forest.
Mota, Joice Seleme; Câmara, Gilberto; Escada, Maria Isabel Sobral; Bittencourt, Olga; Fonseca, Leila Maria Garcia; Vinas, Lúbia
Resultados a curto prazo de ceratotomia lamelar pediculada (LASIK) para correção de hipermetropia com o sistema Ladar Vision de excimer laser / Short-term results of hyperopic laser in situ keratomileusis (LASIK) with the Ladar Vision excimer laser system
Full Text Available SciELO Brazil | Language: Portuguese. PURPOSE: To analyze the efficacy and safety of hyperopic laser in situ keratomileusis using the Ladar Vision excimer laser system. METHODS: Twenty-eight eyes of 17 patients with hyperopia from +1.00 to +3.00 D (group 1), and 29 eyes of 18 patients with hyperopia from +3.25 to +6.00 D (group 2), that had LASIK for hyperopia with the Ladar Vision were retrospectively analyzed. Uncorrected visual acuity, best spectacle-corrected visual acuity, and cycloplegic refraction were evaluated 1, 3 and 6 months after surgery.
RESULTS: In group 1, the mean preoperative cycloplegic spherical equivalent (SE) was +2.14 ± 0.64 D and 6-month postoperative SE was +0.44 ± 0.38 D. In group 2, the mean preoperative SE was +4.26 ± 0.75 D and the 6-month postoperative SE was +1.14 ± 0.63 D. 3.4% of the eyes in group 2 and none of the eyes in group 1 lost 2 or more lines of best spectacle-corrected visual acuity in the first postoperative month. CONCLUSIONS: LASIK with the Ladar Vision excimer laser system is an effective and safe procedure to correct hyperopia. Patients in group 2 appear to be at greater risk for loss of lines of best spectacle-corrected visual acuity.
Nunes, Larissa Madeira; Francesconi, Cláudia Maria; Campos, Mauro; Schor, Paulo.
Sound-based positioning systems are a potential alternative low-cost navigation system. Recently, we developed such an audible sound-based positioning system, based on a spread spectrum approach. It was shown to accurately localize a stationary object. Here, we extend this localization to a moving object by compensating for the Doppler shift associated with the object movement. Numerical simulations and experiments indicate that by compensating for the Doppler shift, the system can accurately...
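The Doppler compensation idea described above can be sketched for the simple case of a source moving radially toward the receiver; the frequencies and speed below are illustrative, not the system's actual spread-spectrum parameters.

```python
# Doppler sketch for audible-sound localization: a source moving
# toward the receiver at radial speed v raises the received frequency
# by the factor c / (c - v); compensation rescales it back before the
# received signal is correlated with the known code. Numbers are
# illustrative only.

C_SOUND = 343.0  # approximate speed of sound in air, m/s

def doppler_shift(f_emit, v_toward):
    """Observed frequency for a source moving toward the receiver."""
    return f_emit * C_SOUND / (C_SOUND - v_toward)

def compensate(f_observed, v_estimate):
    """Undo the shift given an estimate of the radial speed."""
    return f_observed * (C_SOUND - v_estimate) / C_SOUND
```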
The establishment of a spatio-temporal model of moving objects is the foundation of moving-object management and moving-object information services. Based on research on moving objects and their development, the concept and properties of moving objects are discussed systematically. A moving-object conceptual model called the spatio-temporal tube model is presented. The model consists of five major components: moving points, moving trajectories, a space grid, moving events, and spatio-temporal tube cells. The model's efficiency in dealing with large amounts of spatial data is described in detail, and the relationships between the five elements of the spatio-temporal tube model are also discussed.
Zhou, Li; Zhang, Deli; He, Xuezhao
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changes or the occlusion recovers, and it outperforms traditional particle filter-based tracking methods.
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
Concerned with multi-objective reinforcement learning (MORL), this paper presents MOMCTS, an extension of Monte-Carlo Tree Search to multi-objective sequential decision making, embedding two decision rules based respectively on the hypervolume indicator and the Pareto dominance reward. The MOMCTS approaches are first compared with the MORL state of the art on two artificial problems, the two-objective Deep Sea Treasure problem and the three-objective Resource Gathering problem. The scalabil...
Wang, Weijia; Sebag, Michèle
Aimed at accurate and real-time object tracking under complex backgrounds, an object tracking algorithm based on multi-feature fusion is proposed. Feature-point tracking is used to reduce matching time and improve the real-time performance of tracking; to overcome the inaccuracy of single-feature tracking, the object model is represented by colour and texture features. To address the defects of the traditional "current" statistical model in maneuvering object tracking, an improved algorithm which combined with adap...
Jinhua Wang; Jie Cao; Di Wu; Yabing Yu
This article presents a real-time Unmanned Aerial Vehicle (UAV) 3D pose estimation method using planar object tracking, intended for use in the control system of a UAV. The method exploits the rich information obtained by a projective transformation of planar objects on a calibrated camera. The algorithm obtains the metric and projective components of a reference object (landmark or helipad) with respect to the UAV camera coordinate system, using a robust real time object tracking based on...
Most spam filtering techniques are based on objective methods such as content filtering and DNS/reverse DNS checks. Recently, some cooperative subjective spam filtering techniques have been proposed. Objective methods suffer from false positive and false negative classifications. Objective methods based on content filtering are time consuming and resource demanding. They are inaccurate and require continuous updates to cope with newly invented spammer's tricks. On the o...
Elsagheer Mohamed, Samir A.
The objective was to test GEographic Object-based Image Analysis (GEOBIA) techniques for delineating neighborhoods of Accra, Ghana using QuickBird multispectral imagery. Two approaches to aggregating census enumeration areas (EAs) based on image-derived measures of vegetation objects were tested: (1) merging adjacent EAs according to vegetation measures and (2) image segmentation. Both approaches exploit readily available functions within commercial GEOBIA software. Image-derived neighborhood...
Stow, Douglas A.; Lippitt, Christopher D.; Weeks, John R.
We present a loop-based topological object representation for objects with holes. The representation is used to model object parts suitable for grasping, e.g. handles, and it incorporates local volume information about these. Furthermore, we present a grasp synthesis framework that utilizes this representation for synthesizing caging grasps that are robust under measurement noise. The approach is complementary to a local contact-based force-closure analysis as it depends on global topological...
Stork, Johannes A.; Pokorny, Florian T.; Kragic, Danica
Object selection is a basic procedure in a Geographic Information System (GIS). Most current methods select objects in two phases: create a simple distance-bounded geometric buffer, then intersect it with the available features. This paper introduces a novel and intelligent selection operator based on the autonomy of the agent-based approach. The proposed operator recognizes the objects around an object in only one step. In the proposed approach, each point object acts as an agent-automata object: it senses its vicinity and identifies the surrounding objects. To assess the proposed model, the operator is designed, implemented, and evaluated in a case study. Finally, the results are evaluated and presented in detail in the paper.
Behzadi, S.; Alesheikh, Ali A.
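The two-phase buffer-and-intersect selection that this operator improves on can be sketched as follows; the point-object representation and the `radius` parameter are illustrative assumptions, not the paper's API:

```python
import math

def select_within_buffer(anchor, objects, radius):
    """Traditional two-phase GIS selection: (1) build a distance-bounded
    buffer around the anchor point, (2) keep the objects inside it."""
    ax, ay = anchor
    return [o for o in objects if math.hypot(o[0] - ax, o[1] - ay) <= radius]

pts = [(1.0, 1.0), (4.0, 0.0), (0.5, -0.5)]
print(select_within_buffer((0.0, 0.0), pts, 2.0))  # keeps the two nearby points
```

The agent-based operator replaces this global buffer-then-intersect pass with each point agent sensing its own vicinity directly.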
A computer-graphics-based model, named DIANA, is presented for generation of objects of arbitrary shape and for calculating bidirectional reflectances and scattering from them, in the visible and infrared region. The computer generation is based on a modified Lindenmayer system approach which makes it possible to generate objects of arbitrary shapes and to simulate their growth, dynamics, and movement. Rendering techniques are used to display an object on a computer screen with appropriate shading and shadowing and to calculate the scattering and reflectance from the object. The technique is illustrated with scattering from canopies of simulated corn plants.
Goel, Narendra S.; Rozehnal, Ivan; Thompson, Richard L.
To make full use of spatially contextual information and topological information in Object-based Image Analysis (OBIA), an object-based conditional random field is proposed and used for road extraction. Objects are produced with an initial segmentation, and then their neighbourhoods are constructed. Each object is represented by three kinds of features: the colour, the histogram of gradients and the texture. Formulating road extraction as a binary classification problem, a Conditional Random Field model is learned and used for inference. The experimental results demonstrate that the proposed method is effective.
Huang, Zhijian; Xu, Fanjiang; Lu, Lei; Nie, Hongshan
A method for object motion characteristic estimation based on wavelet Multi-Resolution Analysis (MRA) is proposed. From moving pictures, the motion characteristics, i.e. the direction of translation and roll/pitch/yaw rotations, can be estimated by MRA with an appropriate support length of the wavelet base function. Through a simulation study, a method for determining the appropriate support length of the Daubechies base function is clarified. The proposed method for object motion characteristic estimation is also validated.
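The multi-resolution analysis underlying the abstract above can be illustrated with the shortest-support wavelet (Haar, support length 2); this is a generic sketch, not the authors' Daubechies-based implementation:

```python
import numpy as np

def haar_mra(x, levels):
    """One-dimensional multi-resolution analysis with the Haar wavelet:
    at each level, split the signal into approximation (low-pass) and
    detail (high-pass) coefficients, then recurse on the approximation."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail
        details.append(d)
        approx = a
    return approx, details
```

A constant signal yields zero detail coefficients at every level, which is why motion (change between frames) shows up in the detail bands.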
Based on the research status of objective control theory for new energy power projects, this paper analyses the system components of power projects, proposes a subsystem reliability control theory directed at four objectives, gives reliability control standards and calculation methods for the four objectives, and obtains an objective-integrated method of subsystem reliability. The disjoint minimal path sets method is used to handle the minimal path sets in the project construction process, a system reliability control theory for new energy power projects is proposed, and the known reliability control standards are combined to assess project reliability; finally, an objective-integrated control model of new energy power projects based on reliability theory is established. A simple example proves that the proposed objective-integrated control model is simple and practical.
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method.
Tian, Yuan; Guan, Tao; Wang, Cheng
Detecting static objects in video sequences is highly relevant in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the le...
Heras Evangelio Rubén; Sikora Thomas
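A minimal sketch of the dual-rate background idea described above, using running-average backgrounds in place of the paper's mixtures of Gaussians; the learning rates and threshold are illustrative assumptions:

```python
import numpy as np

def update_backgrounds(frame, bg_fast, bg_slow, a_fast=0.1, a_slow=0.01):
    # Two identical background models that differ only in learning rate:
    # the fast one absorbs a newly static object quickly, the slow one lags.
    bg_fast = (1 - a_fast) * bg_fast + a_fast * frame
    bg_slow = (1 - a_slow) * bg_slow + a_slow * frame
    return bg_fast, bg_slow

def static_mask(frame, bg_fast, bg_slow, thr=20.0):
    # One state of the per-pixel state machine: a pixel already absorbed by
    # the fast model but still foreground in the slow model is "static".
    fg_fast = np.abs(frame - bg_fast) > thr
    fg_slow = np.abs(frame - bg_slow) > thr
    return ~fg_fast & fg_slow
```

After an object stops moving, the fast model converges to it long before the slow one does, and the disagreement between the two flags it as static.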
Learning objects, which are the base component of an m-learning system, are usually subject to modifications in context and format. The device-dependent applications of hand-held devices have proven to be ineffective for creating m-learning courseware. Learning Objects Metadata (LOM) is the most popular standard specification for learning objects but lacks the ability to facilitate platform descriptions. This paper outlines various aspects of design and implementation of Web Services Oriente...
Akram Moh. Alkouz
We research moving object classification in traffic video, aiming to classify moving objects into pedestrians, bicycles and vehicles. Owing to the advantages of the self-organizing feature map (SOM), an unsupervised learning algorithm that is simple and self-organizing, and the common usage of the K-means clustering method, this paper combines SOM with K-means to classify moving objects in traffic video, constructs a system comprising four parts, and proposes a method based ...
Jian Wu; Jie Xia; Jian-ming Chen; Zhi-ming Cui
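The K-means half of the SOM+K-means combination can be sketched generically; the two-dimensional feature vectors here stand in for whatever motion features (e.g. blob size, aspect ratio) the system actually extracts:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's iteration: assign each sample to its nearest centre,
    then move each centre to the mean of its assigned samples."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In the paper's pipeline a SOM first quantises the feature space; K-means then groups the resulting prototypes into the pedestrian/bicycle/vehicle classes.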
Best-so-far ABC is a modified version of the artificial bee colony (ABC) algorithm used for optimization tasks. This algorithm is one of the swarm intelligence (SI) algorithms proposed in recent literature, in which the results demonstrated that the best-so-far ABC can produce higher quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithm. In this work, we aim to apply the best-so-far ABC-based approach for object detection based on template matching by using the difference between the RGB level histograms corresponding to the target object and the template object as the objective function. Results confirm that the proposed method was successful in both detecting objects and optimizing the time used to reach the solution. PMID:24812556
Banharnsakun, Anan; Tanathong, Supannee
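The objective function described above, the difference between the RGB-level histograms of the target and template objects, might look like the following; the bin count and normalisation are assumptions, not the paper's exact settings:

```python
import numpy as np

def rgb_hist(img, bins=16):
    # Per-channel intensity histograms, concatenated and normalised.
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def objective(template, candidate):
    # Fitness minimised by the bee colony: smaller histogram
    # difference means a better template match.
    return np.abs(rgb_hist(template) - rgb_hist(candidate)).sum()

a = np.zeros((8, 8, 3), dtype=np.uint8)
print(objective(a, a))  # identical images give zero difference
```

Each bee's candidate solution is a window position in the search image; the best-so-far update steers the swarm toward windows whose histogram matches the template's.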
Sound-based positioning systems are a potential alternative low-cost navigation system. Recently, we developed such an audible sound-based positioning system, based on a spread spectrum approach. It was shown to accurately localize a stationary object. Here, we extend this localization to a moving object by compensating for the Doppler shift associated with the object movement. Numerical simulations and experiments indicate that by compensating for the Doppler shift, the system can accurately determine the position of an object moving along a non-linear path. When the object moved in a circular path with an angular velocity of 0 to 1.3 rad/s, it could be localized to within 25 mm of the actual position. Experiments also showed the proposed system has a high noise tolerance, down to a −25 dB signal-to-noise ratio (SNR), without compromising accuracy.
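The Doppler compensation step can be sketched for a single known radial velocity; this resampling-based correction is a generic illustration, not the authors' spread-spectrum implementation:

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air, m/s

def doppler_factor(radial_velocity):
    # Frequency scaling observed at the receiver for a source moving
    # with radial_velocity (positive = toward the receiver).
    return C_SOUND / (C_SOUND - radial_velocity)

def compensate(received, radial_velocity):
    # Undo the Doppler time-compression by resampling the received
    # signal at stretched sample positions (linear interpolation).
    k = doppler_factor(radial_velocity)
    n = np.arange(len(received))
    return np.interp(n / k, n, received)
```

Resampling a Doppler-shifted sinusoid with the matching factor recovers the transmitted waveform, which is what restores the spread-spectrum correlation peak for a moving object.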
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
Liburdy, Kathleen A.; Schalkoff, Robert J.
In this paper, we propose a fast feature-points-based object tracking method for robot grasp. In the detection phase, we detect the object with SIFT feature point extraction and matching. Then we compute the object's image position with homography constraints and set up an interest window to accommodate the object. In the tracking phase, we only focus on the interest window, detecting feature points from the window and updating the window's position and size. Our method is of special practical meaning in the case of service robot grasp: when the robot grasps the object, the object's image size is usually small relative to the whole image, so it is unnecessary to detect the whole image. On the other hand, the object is partially occluded by the robot gripper. SIFT is good at dealing with occlusion, but it is time-consuming. Hence, by combining SIFT and an interest window, our method gains the ability to deal with occlusion and can satisfy real-time requirements at the same time. Experiments show that our method exceeds several leading feature-points-based object tracking methods in real-time performance.
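The interest-window update described above, tracking only a window around the matched keypoints, reduces to a padded bounding box; the `margin` parameter is an illustrative assumption:

```python
def update_window(matched_pts, margin=10):
    """New interest window = bounding box of the keypoints matched in the
    current frame, padded so the object stays inside on the next frame.
    Returns (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in matched_pts]
    ys = [p[1] for p in matched_pts]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

Restricting SIFT extraction to this window is what makes the method real-time: the expensive detector runs over a small region rather than the full frame.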
A reference point based multi-objective optimization using a combination between trust region (TR) algorithm and particle swarm optimization (PSO) to solve the multi-objective environmental/economic dispatch (EED) problem is presented in this paper. The EED problem is handled by Reference Point Interactive Approach. One of the main ad...
In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei
Visual sensor networks support surveillance applications, particularly object tracking. This study uses a regularly deployed visual sensor network with rotational cameras to track an object, applied to object tracking in security monitoring. The study provides two types of network architecture for deploying the sensor nodes and utilizes the lines of sight between cameras to form a defense face surrounding the mobile object. Each sensor node has a rotational camera lens and is deployed to form a regular network architecture. The proposed tracking method assigns the sensor nodes to surround an object and provide continuous monitoring of it. The proposed update method ensures that the object is tracked by an updated defense face even if it moves out of the original defense face. The major advantage of this algorithm is that it can solve the lost-object-location problem of object tracking. Finally, this study uses simulations to analyse and estimate the efficacy and performance of object tracking in the proposed network architecture.
The traditional moving object extraction method based on the Gaussian model has defects such as poor anti-noise performance and bad real-time performance. Considering these shortcomings, this paper proposes a new moving object extraction method for colour video images based on a regional kernel histogram. The method first proposes the idea of kernel-histogram description, which utilizes the kernel histogram to describe a region of the video image. Then a new metric function for measuring...
We propose a fully three-dimensional object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3D discrete wavelet transform. The implementation via the lifting scheme allows integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the diffe...
Summary: We present a web-based service, SimCT, which graphically displays the relationships between biological objects (e.g. genes or proteins) based on their annotations to a biomedical ontology. The result is presented as a tree of these objects, which can be viewed and explored through a specific Java applet designed to highlight relevant features. Unlike the numerous tools that search for overrepresented terms, SimCT draws a simplified representation of biological terms present ...
Herrmann, Carl; Bérard, Séverine; Tichit, Laurent
An important question in information retrieval is how to create a database index that can be searched efficiently for the data one seeks. One such technique, the signature-file-based access method, is preferred for its easy handling of insertion and update operations. Most of the proposed methods use either an efficient search method or a tree-based intermediate data structure to filter the data objects matching the query. Use of search techniques retrieves the objects by sequentially compari...
The main idea behind Reconfigurable Object Nets (RONs) is to support the visual specification of controlled rule-based net transformations of place/transition nets (P/T nets). RONs are high-level nets with two types of tokens: object nets (place/transition nets) and net transformation rules (a dedicated type of graph transformation rules). Firing of high-level transitions may involve firing of object net transitions, transporting object net tokens through the high-level net, and applying net ...
Biermann, Enrico; Modica, Tony
In the process of object tracking, the major problem is how to mark the tracking box of the object; moreover, multi-object tracking is also difficult. This paper proposes an efficient fast object-tracking scheme based on motion-vector-located pattern matching, which adopts the MPEG-2 motion vectors to mark moving targets in static video so as to mark and locate the targets automatically and quickly. Then, multi-dimensional characteristics are extracted from the initial targets taken by motion ve...
Software re-engineering and object orientation are two areas of growing interest in recent years. However, while many researchers have focused on object-oriented design methodologies, little attention has been paid to re-engineering towards an object-oriented environment. In this paper we examine the motivations for object-oriented re-engineering (extendibility, robustness and reusability of the code) and the problems found in moving from a process-based to an o...
For Petri nets in object-oriented modeling, a special internal transition, the control transition, is introduced into objects together with an in-object controller, yielding a control structure based on object-oriented Petri nets (the CS-OOPN model); the steps of CS-OOPN modeling are then described. The model overcomes the shortcomings of traditional object-oriented Petri net modeling of flexible processes and systems, which lacks flexibility, and can describe workflows more intuitively and flexibly. Finally, the model is applied to a group management system for equipment procurement modeling, taking the CS-OOPN model of the approval departments as an example; its incidence matrix, coverability tree and P-invariants are found, and the correlation analysis proves that the model built has good performance and meets the requirements of system change and restructuring.
Object-based image coding is drawing great attention for the many opportunities it offers to high-level applications. In terms of rate-distortion performance, however, its value is still uncertain, because the gains provided by an accurate image segmentation are balanced by the inefficiency of coding objects of arbitrary shape, with losses that depend on both the coding scheme and the object geometry. This work aims at measuring rate-distortion costs and gains for a wavelet-based shape-adap...
Marco Cagnazzo; Sara Parrilli; Giovanni Poggi; Luisa Verdoliva
Object recognition is the image processing task of finding a given object in a selected image or video sequence. Object recognition can be divided into two areas: one of these is decision-theoretic and deals with patterns described by quantitative descriptors, such as length, area, shape and texture. With the Graphical User Interface Circuitry (GUIC) methodology employed here being relatively new for object recognition systems, the aim of this work is to identify whether the developed circuitry can detect certain shapes or strings within the target image. A much smaller reference image provides the preset data for identification; tests are conducted for both binary and greyscale images, and additional mathematical morphology to highlight the area within the target image where the object(s) are located is also presented. This provides proof that basic recognition methods are valid and allows progression towards developing decision-theoretic and learning-based approaches using GUICs for use in multidisciplinary tasks.
Tickle, A J; Harvey, P K; Smith, J S [Intelligence Engineering and Industrial Automation Research Group, Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3GJ (United Kingdom); Wu, F, E-mail: email@example.com [RF Engines Ltd, Innovation Centre, St. Cross Business Park, Newport, Isle of Wight, PO30 5WB (United Kingdom)
Functionality-based recognition systems recognize objects at the category level by reasoning about how well the objects support the expected function. Such systems naturally associate a "measure of goodness" or "membership value" with a recognized object. This measure of goodness is the result of combining individual measures, or membership values, from potentially many primitive evaluations of different properties of the object's shape. A membership function is used to compute the membership value when evaluating a primitive of a particular physical property of an object. In previous versions of a recognition system known as Gruff, the membership function for each of the primitive evaluations was hand-crafted by the system designer. In this paper, we provide a learning component for the Gruff system, called Omlet, that automatically learns membership functions given a set of example objects labeled with their desired category measure. The learning algorithm is generally applicable to any problem in which...
Woods, K; Hall, L; Bowyer, K; Stark, L
Object recognition is an important task in image processing and computer vision. This paper presents a method for object recognition with full boundary detection by combining the affine scale-invariant feature transform (ASIFT) and a region merging algorithm. ASIFT is a fully affine invariant algorithm, meaning features are invariant to six affine parameters, namely translation (2 parameters), zoom, rotation and two camera-axis orientations. The features are very reliable and give strong keypoints that can be used for matching between different images of an object. We trained an object in several images with different aspects to find its best keypoints. Then, a robust region merging algorithm is used to recognize and detect the object with its full boundary in other images, based on the ASIFT keypoints and a similarity measure for merging regions in the image. Experimental results show that the presented method is very efficient and powerful in recognizing the object and detecting it with high accuracy.
In many applications of medical image analysis, the density of an object is the most important feature for isolating an area of interest (image segmentation). In this research, an object density-based image segmentation methodology is developed, which incorporates intensity-based, edge-based and texture-based segmentation techniques. The proposed method consists of three main stages: preprocessing, object segmentation and final segmentation. Image enhancement, noise reduction and layer-of-interest extraction are several subtasks of preprocessing. Object segmentation utilizes a marker-controlled watershed technique to identify each object of interest (OI) from the background. A marker estimation method is proposed to minimize over-segmentation resulting from the watershed algorithm. Object segmentation provides an accurate density estimation of OI which is used to guide the subsequent segmentation steps. The final stage converts the distribution of OI into textural energy by using fractal dimension analysis. An energy-driven active contour procedure is designed to delineate the area with desired object density. Experimental results show that the proposed method is 98% accurate in segmenting synthetic images. Segmentation of microscopic images and ultrasound images shows the potential utility of the proposed method in different applications of medical image processing. PMID:19473717
Yu, Jinhua; Tan, Jinglu
In this paper an object recognition system using template matching is implemented. Since objects are represented by either external or internal descriptors, a combination of signature, principal component analysis and colour features is used. The system's efficacy is measured by applying it to recognize an image of a chessboard with a set of objects (pieces). The output of the system includes the piece names, locations and colours. The signature feature is used to distinguish piece types based on their external shape; where it falls short, principal component analysis is used instead. The object colour is also obtained. Matching between features is carried out based on the Euclidean distance metric. The proposed system is implemented, trained, and tested using Matlab on a set of collected samples representing chessboard images. The simulation results show the effectiveness of the proposed method in recognizing the pieces' locations, types, and colours.
Inad A. Aljarrah
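The Euclidean-distance matching step can be sketched as nearest-template classification; the feature vectors and template names below are invented for illustration, standing in for the signature/PCA/colour features the system extracts:

```python
import numpy as np

def classify_piece(feature_vec, templates):
    """Nearest-template classification by Euclidean distance:
    return the name of the stored template whose feature vector
    is closest to the query's."""
    names = list(templates)
    dists = [np.linalg.norm(feature_vec - templates[n]) for n in names]
    return names[int(np.argmin(dists))]
```

Because all features are compared in one metric space, adding a new piece type only requires storing one more template vector.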
Research on Augmented Reality (AR) has recently received attention. With this, the Machine-to-Machine (M2M) market has started to be active and there are numerous efforts to apply it to real life in all sectors of society. To date, the M2M market has applied the existing marker-based AR technology in entertainment, business and other industries. With the existing marker-based AR technology, a designated object can only be loaded on the screen from one marker, and a marker has to be added to load the same object on the screen again. This creates a problem: the relevant marker should be extracted and printed on screen so that loading of multiple objects is enabled. However, since the distance between markers is not measured in the process of detecting and copying markers, the markers can overlap and thus the objects would not be augmented. To solve this problem, a circle having the longest radius needs to be created from the focal point of a marker to be copied, so that no object is copied within the confines of the circle. In this paper, software-based sensing technology for multiple object detection and loading using PPHT has been developed, and overlapping marker control according to multiple object control has been studied using the Bresenham and Mean Shift algorithms. PMID:22163444
Jung, Sungmo; Song, Jae-Gu; Hwang, Dae-Joon; Ahn, Jae Young; Kim, Seoksoo
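The exclusion-circle test described above, no marker copied within the confines of an existing marker's circle, reduces to a distance check; treating the copied marker as a circle with its own radius is an added assumption for illustration:

```python
import math

def can_place(new_center, new_radius, placed):
    """True if a copied marker (centre, radius) stays outside the
    exclusion circle (longest radius from the focal point) of every
    already-placed marker, so the markers cannot overlap."""
    return all(
        math.hypot(new_center[0] - cx, new_center[1] - cy) > r + new_radius
        for (cx, cy), r in placed
    )
```

A placement that fails this test would overlap an existing marker, which is exactly the case in which the object fails to augment.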
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
An object recognition system is described. The system consists of 2-D gray-scale image acquisition, computer-aided design based model construction, image nonorthogonal decomposition, neural network matching, and model based segmentation. Accurate vision systems are needed for robotic applications.
Wang, S.; Ioannou, D.; Dalton, R.; Tulenko, J.S.
This paper discusses the design and implementation of a framework that automatically extracts and monitors the shape deformations of soft objects from a video sequence and maps them with force measurements with the goal of providing the necessary information to the controller of a robotic hand to ensure safe model-based deformable object manipulation. Measurements corresponding to the interaction force at the level of the fingertips and to the position of the fingertips of a three-finger robotic hand are associated with the contours of a deformed object tracked in a series of images using neural-network approaches. The resulting model captures the behavior of the object and is able to predict its behavior for previously unseen interactions without any assumption on the object's material. The availability of such models can contribute to the improvement of a robotic hand controller, therefore allowing more accurate and stable grasp while providing more elaborate manipulation capabilities for deformable objects. Experiments performed for different objects, made of various materials, reveal that the method accurately captures and predicts the object's shape deformation while the object is submitted to external forces applied by the robot fingers. The proposed method is also fast and insensitive to severe contour deformations, as well as to smooth changes in lighting, contrast, and background. PMID:22207640
Cretu, Ana-Maria; Payeur, Pierre; Petriu, Emil M
An approach based on concurrent object-oriented programming (COOP) to build a control system for a mobile robot is presented. A behavior-based control system is decomposed into intercommunicating concurrent objects named Agents. These agents belong to five categories: Primitive Sensor, Virtual Sensor, Behavior, Primitive Actuator and Virtual Actuator. Based on this approach, a C++ tool is developed, where the categories above are implemented as C++ classes in which built-in communication mechanisms are included. Each class has a standard interface and functionality. It is then possible to develop a complex control system by deriving new classes from the base classes and by instantiating objects. These objects are interconnected in a dynamic manner, thereby building a control system with different behavior levels that is able to react to environment changes.
José Eduardo Mendonça, Xavier; Hansjörg Andreas, Schneebeli.
Particle set sampling and weighting are both at the core of particle filter-based object tracking methods. To represent the object's motion state optimally, a large number of particles is a prerequisite in the classical particle method. The high-cost calculation of these particles significantly slows down the convergence of the algorithm. To address this problem, a prior approach originating from the process of video compression and decompression is introduced to optimize the particle sampling phase, making the collected particles centre on and cover the object region in the current image. This advantage dramatically reduces the number of particles required by the regularized particle sampling method, solving the problem of the high computational cost of tracking objects, while the performance of the algorithm remains stable.
EPICS (Experimental Physics and Industrial Control System) is a distributed control system platform which has been widely used to control large scientific devices such as particle accelerators and fusion plants. EPICS has introduced object-oriented (C++) interfaces to most of the core services. But the major part of EPICS, the run-time database, only provides C interfaces, which makes it hard to incorporate EPICS record-related data and routines into an object-oriented software architecture. This paper presents an object-oriented framework containing abstract classes that encapsulate the EPICS record-related data and routines in C++ classes, so that full OOA (Object Oriented Analysis) and OOD (Object Oriented Design) methodologies can be used for EPICS IOC design. We also present a dynamic device management scheme for the hot swap capability of the MicroTCA based control system. (authors)
High-resolution imaging of astronomical objects based on the phase-diversity method is a technique for obtaining estimates of both the object and the wavefront distortion induced by atmospheric turbulence, by exploiting the simultaneous collection of one or more pairs of short-exposure images. One image of each pair is the conventional focal-plane image; the other is formed by further blurring the focal-plane image with a known defocus. The telescopic optical system and image collection system of the phase-diversity method are simulated by computer in this paper. Based on signal estimation theory and optimization theory, the objective function is derived under an additive Gaussian noise model. The resulting large-scale unconstrained optimization problem is solved numerically using a limited-memory BFGS method. The restoration results demonstrate that the phase-diversity method is remarkably efficient at removing the effect of atmospheric turbulence and solving the image restoration problem for extended astronomical objects.
Li, Q.; Shen, M. Z.
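Under the additive Gaussian noise model mentioned above, the data-fidelity part of the phase-diversity objective reduces to a least-squares fit of the focused and defocused images against the object blurred by each channel's PSF. The sketch below shows that term only; the variable names and the FFT-based circular convolution are our assumptions, and the paper's regularisation and L-BFGS iteration are omitted.

```python
import numpy as np

def phase_diversity_objective(obj, psf_focus, psf_defocus, d_focus, d_defocus):
    """Least-squares data term for one focused/defocused image pair:
    ||d_k - obj * psf_k||^2 summed over the two channels, where '*'
    denotes circular convolution implemented via the FFT."""
    def conv(a, b):
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))
    return (np.sum((d_focus - conv(obj, psf_focus)) ** 2)
            + np.sum((d_defocus - conv(obj, psf_defocus)) ** 2))
```

In the full method this term is minimised jointly over the object and the wavefront parameters that generate the two PSFs.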
In this paper, we propose an online key object discovery and tracking system based on visual saliency. We formulate the problem as a temporally consistent binary labelling task on a conditional random field and solve it by using a particle filter. We also propose a context-aware saliency measurement, which can be used to improve the accuracy of any static or dynamic saliency map. Our refined saliency maps provide clearer indications as to where the key object lies. Based on good saliency cues, we can further segment the key object inside the resulting bounding box, considering the spatial and temporal context. We tested our system extensively on different video clips. The results show that our method significantly improves the saliency maps and tracks the key object accurately.
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is ...
Rolls, Edmund T.
Video indexing technique is crucial in multimedia applications. In the case of HD (High Definition) Video, the principle of scalability is of great importance. The wavelet decomposition used in the JPEG2000 standard provides this property. In this paper, we propose a scalable descriptor based on objects. First, a scalable moving object extraction method is constructed. Using the wavelet data, it relies on the combination of a robust global motion estimation with a morphological color segmenta...
Morand, Claire; Benois-pineau, Jenny; Domenger, Jean-philippe
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accura...
The main goal of this exploratory project was to quantify seedling density in post fire regeneration sites, with the following objectives: to evaluate the application of second order image texture (SOIT) in image segmentation, and to apply the object-based image analysis (OBIA) approach to develop a hierarchical classification. With the utilization of image texture we successfully developed a methodology to classify hyperspatial (high-spatial) imagery to fine detail level of tree crowns, shad...
In this paper, the effects of uncertainty on multiple-objective linear programming models are studied using the concepts of fuzzy set theory. The proposed interactive decision support system is based on the interactive exploration of the weight space. The comparative analysis of indifference regions on the various weight spaces (which vary according to intervals of values of the satisfaction degree of objective functions and constraints) enables the study of the stability and evolution of the bas...
Borges, Ana Rosa; Antunes, Carlos Henggeler
The MACE project aims to support architecture students while searching for learning materials by offering advanced graphical metadata-based access to learning resources in architecture across repository boundaries. To this end, the MACE system uses real world object representations which serve as connections between learning materials. This enables the students to explore new and more complete learning paths. In this paper we outline the generation and usage of real world object representations ...
Niemann, K.; Wolpers, M.
The mapping of road environments is an important task, providing important input data for a broad range of scientific disciplines. Pole-like objects, their visibility and their influence on local light and traffic noise conditions are of particular interest for traffic safety, public health and ecological issues. Detailed knowledge can support the improvement of traffic management, noise-reducing infrastructure or the planning of photovoltaic panels. Mobile Mapping Systems coupled with computer-aided mapping work-flows allow effective data acquisition and provision. We present a classification work-flow focussing on pole-like objects. It uses rotation- and scale-invariant point and object features for classification, avoiding planar segmentation and height slicing steps. Single objects are separated by connected component and Dijkstra-path analysis. Trees and artificial objects are separated using a graph-based approach considering the branching levels of the given geometries. For the targeted semantic groups, classification accuracies higher than 0.9 are achieved. This includes both the quality of object aggregation and separation, where the combination of Dijkstra-path aggregation and graph-based classification shows good results. For planar objects the classification accuracies are lower, recommending the usage of planar segmentation for classification and subdivision issues as presented by other authors. The presented work-flow provides sufficient input data for further 3D reconstructions and tree modelling.
Bremer, M.; Wichmann, V.; Rutzinger, M.
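The Dijkstra-path analysis used above to separate single objects rests on standard shortest-path distances over a neighbourhood graph of the point cloud; a minimal implementation, with the adjacency-dict encoding being our assumption, might look like:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a weighted graph given as
    {node: [(neighbour, weight), ...]}; points can then be grouped by
    their path distance to candidate object seeds."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Connected-component labelling first splits the cloud into disjoint clusters; Dijkstra distances within a cluster then separate objects that touch each other.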
The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and suffers from frequent updates and privacy concerns. RFID (Radio Frequency IDentification) devices are now used more and more widely to collect location information. They are cheaper, require fewer updates, and intrude less on privacy. They detect the id of the object and the time when the moving object passes a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied in this paper. The processing includes four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are conducted on simulated data. In the experiments the performance of the index is studied. The precision and recall of the result set are defined, and how the query arguments affect the precision and recall of the result set is also discussed.
Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie
This paper describes how Multiple Test Procedures (MTP) can be used to achieve control over the false alarm rate in a landmine detection system. The system is based on an impulse radar to detect objects buried underground. An impulse radar combined with a detector can detect both metallic and non- metallic objects. To be able to distinguish mines from nuisance objects the detector can be followed by a classifier. One drawback with this setting is that it is very difficult to control the false alarm rate of the complete detector/classifier system. The use of MTPs is a systematic way to control the false alarm rate.
Brunzell, Hakan O.
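The abstract above does not name which multiple test procedure is used; one standard MTP that controls the family-wise false alarm rate is Holm's step-down method, sketched here purely for illustration.

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm step-down procedure: sort p-values ascending and compare the
    k-th smallest against alpha/(m-k); stop at the first failure. This
    controls the family-wise false alarm rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all later ones fail
    return reject
```

Applied to a bank of per-object detector/classifier scores converted to p-values, such a procedure bounds the probability of any false alarm across the whole scan.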
In this paper, we present a comparative study that concerns relevance feedback (RF) algorithms in the context of content-based 3D object retrieval. In this study, we employ RF algorithms which range from query modification and multiple queries to one-class support vector machines (SVM). Furthermore, we employ pseudo relevance feedback (PRF) and show that it can considerably improve the performance of content-based retrieval. Our comparative study is based upon extensive experiments that take ...
Papadakis, Panagiotis; Pratikakis, Ioannis; Theodore, Trafalis; Theoharis, Theoharis; Perantonis, Stavros
The presence of contaminants, such as gasoline, moisture, and coolant, in the engine lubricant indicates mechanical failure within the engine and significantly reduces lubricant quality. This paper describes a novel sensing system, its methodology and experimental verification for analysis of the presence of contaminants in engine lubricants. The sensing methodology is based on statistical shape analysis, utilizing optical analysis of the distortion effect when an object image is obtained through a thin random optical medium. The novelty of the proposed sensing system lies in the employed methodology, in which an object with a known periodic shape is introduced behind a thin film of the contaminated lubricant. In this case, an acquired image represents a combined lubricant-object optical appearance, where the a priori known periodic structure of the object is distorted by the contaminated lubricant. The object, a stainless steel woven wire cloth with a mesh size of 65×65 µm2 and a circular wire diameter of 33 µm, was placed behind a microfluidic channel containing engine lubricant, and optical images of the flowing lubricant with the stationary object were acquired and analyzed. Several parameters of the acquired optical images were proposed, such as the color of the lubricant and the object, the object shape width at the object and lubricant levels, the object relative color, and an object width non-uniformity coefficient. The measured on-line parameters were used for optical analysis of fresh and contaminated lubricants. Estimation of contaminant presence and lubricant condition was performed by comparing parameters for fresh and contaminated lubricants. The developed methodology was verified experimentally, showing the ability to distinguish lubricants with 1%, 4%, 7%, and 10% coolant, gasoline and water contamination individually and in combinations of coolant (0%-5%) and gasoline (0%-5%).
Bordatchev, Evgueni; Aghayan, Hamid; Yang, Jun
Recently, a proactive crash mitigation system was proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of a single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system for real-time object detection and recognition over thousands of image frames.
Ran, Bin; Liu, Henry X.; Martono, Wilfung
The traditional moving object extraction method based on the Gaussian model has defects such as poor anti-noise performance and bad real-time performance. Considering these shortcomings, this paper proposes a new moving object extraction method for color video images based on a regional kernel histogram. The method first introduces the idea of kernel histogram description, which utilizes the kernel histogram to describe a region of the video image. Then a new metric function for comparing kernel histogram models is proposed. According to the features of the measurement values, the model is built using the Gaussian mixture model and the kernel histogram metric. Finally, based on this model, the moving object in the video images is extracted. The experimental results show that the algorithm achieves a better segmentation result and better anti-noise and real-time performance compared with the traditional Gaussian mixture model algorithm.
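A regional kernel histogram typically weights each pixel's histogram contribution by a kernel centred on the region, so that pixels near the centre count more. The sketch below uses an Epanechnikov profile and grey levels in [0, 1), both our assumptions rather than the paper's exact formulation.

```python
import numpy as np

def kernel_histogram(patch, bins=8):
    """Epanechnikov-kernel-weighted grey-level histogram of a 2-D patch
    with values in [0, 1); returns a normalised histogram."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # squared distance from the patch centre, normalised to the half-extent
    r2 = (((ys - (h - 1) / 2) / (h / 2)) ** 2 +
          ((xs - (w - 1) / 2) / (w / 2)) ** 2)
    weights = np.clip(1.0 - r2, 0.0, None)  # Epanechnikov profile k(r) = max(1 - r^2, 0)
    idx = np.clip((patch * bins).astype(int), 0, bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), weights.ravel())  # unbuffered accumulation
    return hist / hist.sum()
```

Two such histograms can then be compared with a similarity measure (e.g. the Bhattacharyya coefficient) to decide whether a region belongs to foreground or background.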
In this paper, an evolutionary multi-objective optimization approach is employed to design a static synchronous series compensator (SSSC)-based controller. The design objective is to improve the transient performance of a power system subjected to a severe disturbance by damping the multi-modal oscillations, namely the local mode, inter-area mode and inter-plant mode. A genetic algorithm (GA)-based solution technique is applied to generate a Pareto set of globally optimal solutions to the given multi-objective optimization problem. Further, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto solution set. Simulation results are presented and compared with a PI controller under various disturbances, namely three-phase fault, line outage, loss of load and unbalanced faults, to show the effectiveness and robustness of the proposed approach. (author)
Panda, Sidhartha [Department of Electrical and Electronics Engineering, National Institute of Science and Technology, Berhampur, Orissa 761008 (India)
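A common fuzzy membership recipe for picking the best compromise solution from a Pareto set (for minimisation) assigns each objective the membership mu = (f_max - f) / (f_max - f_min) and selects the solution with the largest normalised membership sum. This is the standard formulation; the paper's exact assignment may differ in detail.

```python
import numpy as np

def best_compromise(pareto):
    """Index of the best compromise solution in a Pareto set given as
    rows of objective values (smaller is better for every objective)."""
    f = np.asarray(pareto, dtype=float)
    fmax, fmin = f.max(axis=0), f.min(axis=0)
    # linear membership per objective; guard against a constant column
    mu = (fmax - f) / np.where(fmax > fmin, fmax - fmin, 1.0)
    score = mu.sum(axis=1) / mu.sum()
    return int(np.argmax(score))
```

For two conflicting objectives, the middle-of-the-front solution tends to win, which matches the intuitive notion of a compromise.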
Most spam filtering techniques are based on objective methods such as content filtering and DNS/reverse DNS checks. Recently, some cooperative subjective spam filtering techniques have been proposed. Objective methods suffer from false positive and false negative classification. Objective methods based on content filtering are time consuming and resource demanding. They are inaccurate and require continuous updates to cope with newly invented spammer's tricks. On the other hand, the existing subjective proposals have drawbacks, such as attacks from malicious users that make them unreliable, as well as privacy concerns. In this paper, we propose an efficient spam filtering system that is based on a smart cooperative subjective technique for content filtering, in addition to the fastest and most reliable non-content-based objective methods. The system combines several applications. The first is a web-based system that we have developed based on the proposed technique. A server application with extra features suitable for enterprises and closed work groups is the second part of the system. Another part is a set of standard web services that allow any existing email server or email client to interact with the system. They allow email servers to query the system for email filtering, and also allow users, via their mail user agents, to participate in the subjective spam filtering process.
Samir A. Elsagheer Mohamed
This paper introduces an object-oriented knowledge base management technology which has a number of desirable features. First, an object-oriented semantic association model OSAM* provides general structural constructs to model complex objects and their various types of semantic associations. It also allows the user to define the behavioral properties of objects through user-defined operations and knowledge rules, which results in an active knowledge base management system (KBMS). Second, a pattern-based query language, OQL, allows complex search conditions and constraints to be easily specified. Third, a set of intelligent graphical interface tools greatly eases scientists' tasks in defining and querying complex knowledge bases. Fourth, the system can be extended to meet the changing requirements of applications by extending the modeling capabilities of the data model, and by modifying the structure of system components. Lastly, the efficiency of processing large knowledge bases is achieved by using a transputer-based multiprocessor system and some multi-wavefront parallel processing algorithms. A prototype KBMS with the above features has been developed which runs on IBM and SUN workstations.
Su, S.Y.W.; Kamel, N. [Univ. of Florida, Gainesville, FL (United States). Database Systems Research and Development Center
Three-dimensional object extraction and recognition (OER) from geographic data has long been one of the more important topics in photogrammetry. Today, the capability of rapidly generating high-density DSMs increases the supply of geographic information, but the discrete nature of the measurements makes it more difficult to correctly recognize and extract 3D objects from these surfaces. The proposed methodology aims to semi-automate some geographic object clustering operations in order to perform the recognition process. Clustering is a subjective process; the same set of data items often needs to be partitioned differently based on the application. Fuzzy logic makes it possible to use, in a mathematical process, the uncertain information typical of human reasoning. The concept at the base of our proposal is to use the information contained in Image Matching or LiDAR DSMs, typically understood by a human operator, in a fuzzy recognition process able to combine the different inputs in order to perform the classification. The object recognition approach proposed in our workflow thus integrates 3D structural descriptive components of objects, extracted from DSMs, into a fuzzy reasoning process in order to exploit more fully all available information that can contribute to the extraction and recognition process, and to handle the objects' vagueness. The recognition algorithm has been tested with two different data sets and different objectives. An important issue is to apply, within a fuzzy reasoning process, the typical human process that allows objects to be recognized in a range image. The investigations presented here give a first demonstration of the capability of this approach.
This paper deals with an approach to the optimization of enterprise information system (EIS) based on the object-based knowledge mesh (OKM) and binary tree. Firstly, to explore the optimization of EIS by the user’s function requirements, an OKM expression representation based on the user’s satisfaction and binary tree is proposed. Secondly, based on the definitions of the fuzzy function-satisfaction degree relationships on the OKM functions, the optimization model is constructed. ...
Haiwang Cao; Chaogai Xue
In this paper, an approach to integrating color texture invariant information with a neural network approach to object recognition is proposed. A color-texture context for an image retrieval system based on the integral information of an image is represented as one compact representation based on a color histogram approach. A general and efficient design approach using a neural classifier to cope with small training sets of high dimension, a problem frequently encountered in object recognition, is the focus of this paper for general images. The proposed system is tested on various colored image samples and the recognition accuracy is evaluated.
G.Shyama Chandra Prasad
This paper presents a LISP-based system for signal and image processing. Using an object-based approach, the system integrates signal and image processing algorithms, supervised and unsupervised neural network algorithms, and mid-level computer vision capabilities into a cohesive framework. This framework is suitable for prototyping complex algorithms dealing with multiple classes of data. The system, known as VISION, is currently used as a prototyping environment for a wide range of scientific applications internal to LLNL. This paper highlights some of the capabilities of VISION and how they were implemented using the Common LISP Object System, CLOS. 13 refs.
Hernandez, J.E.; Lu, Shin-Yee; Sherwood, R.J.; Clark, G.A.; Lawver, B.S.
In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen, and usually appear with irregular brightness in the observed images. Also, due to the existence of out-of-focus objects in the image, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. In order to solve the irregular brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain, as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on a priori knowledge of the connections of the synapses is introduced to the model to enhance the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic object from a 3-D image stack with irregular brightness.
Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil
Since examination papers generated by computer with random and backtracking algorithms are of inferior quality and generated inefficiently, and since the problem of generating examination papers by computer is multi-objective because of the index system metrics, a genetic algorithm with multi-objective strategy optimization is proposed to solve this problem. The algorithm maps the index system to multi-objective functions and optimizes the computation with a multi-objective strategy. Experiments with the genetic algorithm based on multi-objective strategy optimization show that the result achieves a tradeoff between performance and quality, and that performance and quality can be tuned to meet the user's requirements.
In the process of object tracking, the major problem is how to mark the tracking box of the object; moreover, multi-object tracking is also difficult. This paper proposes an efficient fast object-tracking scheme based on motion-vector-located pattern matching, which adopts the motion vectors of MPEG-2 to mark moving targets in static video in order to mark and locate the targets automatically and quickly. It then extracts multi-dimensional characteristics from the initial targets identified by the motion vectors and builds the model. The algorithm then accurately identifies the particles of larger weight, combined with an inertia factor of velocity, by matching the original data against the observations of the particle filter. The matching of the characteristics of new particles against the original ones is more accurate and faster because of the pattern classification method. The experiments show that the algorithm has good tracking performance and strong robustness.
Learning Objects are key elements within an e-Learning environment because they describe the educational material created for students; besides, they permit reuse and sharing in different Learning Management Systems. Usually, when teachers need to create and structure educational experiences, they turn to repositories to retrieve resources fitted to their interests, reducing effort and computational time. In this paper, a proposal is presented for merging Learning Objects from heterogeneous repositories; the model is based on semantic relationships between Learning Objects retrieved from a meta-search engine, as an alternative for locating educational resources fitted to the teacher's interests. The model exposed in the proposal has been implemented as an initial prototype, which retrieves Learning Objects from open repositories. Initial study results confirm the usefulness of the model.
Visual image interpretation and digital image classification have been used to map and monitor mangrove extent and composition for decades. The availability of a high-spatial-resolution hyperspectral sensor can potentially improve our ability to differentiate mangrove species. However, little research has explored the use of pixel-based and object-based approaches on high-spatial-resolution hyperspectral datasets for this purpose. This study assessed the ability of CASI-2 data for mangrove species mapping using pixel-based and object-based approaches at the mouth of the Brisbane River area, southeast Queensland, Australia. Three mapping techniques were used in this study: spectral angle mapper (SAM) and linear spectral unmixing (LSU) for the pixel-based approaches, and multi-scale segmentation for the object-based image analysis (OBIA). The endmembers for the pixel-based approaches were collected based on an existing vegetation community map. Nine target classes were mapped in the study area with each approach, including three mangrove species: Avicennia marina, Rhizophora stylosa, and Ceriops australis. The mapping results showed that SAM produced accurate class polygons with only a few unclassified pixels (overall accuracy 69%, Kappa 0.57), the LSU resulted in a patchy polygon pattern with many unclassified pixels (overall accuracy 56%, Kappa 0.41), and the object-based mapping produced the most accurate results (overall accuracy 76%, Kappa 0.67). Our results demonstrate that the object-based approach, which combined rule-based and nearest-neighbor classification methods, was the best classifier for mapping mangrove species and their adjacent environments.
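The spectral angle mapper (SAM) classifier mentioned above assigns each pixel to the endmember whose spectrum forms the smallest angle with the pixel's spectrum; the core measure is simply the angle between the two spectral vectors.

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Spectral angle (radians) between a pixel spectrum and a reference
    endmember spectrum; smaller angles mean closer spectral matches,
    independent of overall brightness."""
    cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the angle ignores vector magnitude, SAM is insensitive to illumination differences, which is one reason it is popular for airborne hyperspectral data such as CASI-2.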
This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms. PMID:22201058
Ulrich, Markus; Wiedemann, Christian; Steger, Carsten
A multi-agent model is proposed in which learning styles and a word-analysis technique are used to create a learning object recommendation system. On the basis of a learning style-based design, a concept map combination model is proposed to filter out unsuitable learning concepts from a given course. Our learner model classifies learners into eight styles and implements compatible computational methods consisting of three recommendations: (i) non-personalised, (ii) preferred feature-based, and (iii) neighbour-based collaborative filtering. The analysis of preference error (PE) was performed by comparing the actual preferred learning object with the predicted one. In our experiments, the feature-based recommendation algorithm has the fewest PEs.
Object-based image coding is drawing great attention for the many opportunities it offers to high-level applications. In terms of rate-distortion performance, however, its value is still uncertain, because the gains provided by an accurate image segmentation are balanced by the inefficiency of coding objects of arbitrary shape, with losses that depend on both the coding scheme and the object geometry. This work aims at measuring rate-distortion costs and gains for a wavelet-based shape-adaptive encoder similar to the shape-adaptive texture coder adopted in MPEG-4. The analysis of the rate-distortion curves obtained in several experiments provides insight about what performance gains and losses can be expected in various operative conditions and shows the potential of such an approach for image coding.
This paper describes the aDORe repository architecture, designed and implemented for ingesting, storing, and accessing a vast collection of Digital Objects at the Research Library of the Los Alamos National Laboratory. The aDORe architecture is highly modular and standards-based. In the architecture, the MPEG-21 Digital Item Declaration Language is used as the XML-based format to represent Digital Objects that can consist of multiple datastreams as Open Archival Information System Archival Information Packages (OAIS AIPs). Through an ingestion process, these OAIS AIPs are stored in a multitude of autonomous repositories. A Repository Index keeps track of the creation and location of all the autonomous repositories, whereas an Identifier Locator registers in which autonomous repository a given Digital Object or OAIS AIP resides. A front-end to the complete environment, the OAI-PMH Federator, is introduced for requesting OAIS Dissemination Information Packages (OAIS DIPs). These OAIS DIPs can be the stored OAIS ...
Van de Sompel, Herbert; Bekaert, Jeroen; Liu, Xiaoming; Balakireva, Luda; Schwander, Thorsten
Describing and quantifying the spatial heterogeneity of land cover in urban systems is crucial for developing an ecological understanding of cities. This paper presents a new approach to quantifying the fine-scale heterogeneity in urban landscapes that capitalizes on the strengths of two commonly used approaches: visual interpretation and object-based image analysis. This new approach integrates the ability of humans to detect pattern with an object-based image analysis that accurately and efficiently quantifies the components that give rise to that pattern. Patches that contain a mix of built and natural land cover features were first delineated through visual interpretation. These patches served as pre-defined boundaries for finer-scale segmentation and classification of within-patch land cover features, which were classified using object-based image analysis. Patches were then classified based on the within-patch proportion cover of features. We applied this approach to the Gwynns Falls watershed in Baltimore, Maryland, USA. The object-based classification approach proved to be effective for classifying within-patch land cover features. The overall accuracies of the classification maps of 1999 and 2004 were 92.3% and 93.7%, respectively. This exercise demonstrates that by integrating visual interpretation with object-based classification, the fine-scale spatial heterogeneity in urban landscapes and land cover change can be described and quantified in a more efficient and ecologically meaningful way than either purely automated or visual methods alone. This new approach provides a tool that allows us to quantify the structure of the urban landscape including both built and non-built components that will better accommodate ecological research linking system structure to ecological processes.
As a way of lifetime extension, energy efficiency has become a crucial criterion in the design of object tracking sensor networks (OTSN). In this paper, we propose a sentinel-based sleep scheduling scheme called the Sleep Scheduling Protocol (SSP) to reduce the number of awakened sensor nodes with little performance loss, whereas existing solutions suffer from either increased event-detection delays or computing overhead. SSP is built upon cluster based ...
Tingting Fu; Peng Liu; Fun Hu, Y.
Layered Depth Image (LDI) representations are attractive compact representations for multi-view videos. Any virtual viewpoint can be rendered from LDI by using view synthesis technique. However, rendering from classical LDI leads to annoying visual artifacts, such as cracks and disocclusions. Visual quality gets even worse after a DCT-based compression of the LDI, because of blurring effects on depth discontinuities. In this paper, we propose a novel object-based LDI representation, improving...
Jantet, Vincent; Guillemot, Christine; Morin, Luce
This paper presents a Case-Based Reasoning approach for the personalized recommendation and the students’ authoring tasks in on-line repositories of Learning Objects (LOs). The recommender combines content-based filtering techniques together with collaborative filtering mechanisms. Students’ authoring tasks include the incorporation of ratings of the existing LOs and new LOs, which are peer reviewed. This approach is going to be applied to a repository with more than 200 programming examples written in different programming languages.
Moving object detection is a fundamental step in many vision-based applications. Background subtraction is the typical method, and many background models have been introduced to deal with different problems. The method based on a mixture of Gaussians offers a good balance between accuracy and complexity and is used frequently by many researchers, but it still cannot provide satisfactory results in some cases. In this paper, we solve this problem by introducing a post process to the initial results of mi...
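The mixture-of-Gaussians background model this abstract builds on maintains, per pixel, a few Gaussian components whose weights, means, and variances are updated online; a pixel is background when it matches a high-weight component. A simplified scalar-pixel sketch in the spirit of the Stauffer-Grimson formulation (parameter values and the background test are illustrative, not the paper's):

```python
import numpy as np

class PixelMoG:
    """Simplified per-pixel mixture of K Gaussians over scalar intensities."""
    def __init__(self, k=3, alpha=0.05, var0=36.0, match_sigmas=2.5):
        self.w = np.full(k, 1.0 / k)       # component weights
        self.mu = np.linspace(0, 255, k)   # component means
        self.var = np.full(k, var0)        # component variances
        self.alpha = alpha                 # learning rate
        self.match = match_sigmas          # match threshold in std deviations

    def update(self, x):
        """Update the mixture with intensity x; return True if x is background."""
        d = np.abs(x - self.mu)
        hit = d < self.match * np.sqrt(self.var)
        if hit.any():
            i = int(np.argmax(hit))        # first matching component
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
            self.w += self.alpha * (hit.astype(float) - self.w)
        else:
            i = int(np.argmin(self.w))     # no match: replace weakest component
            self.mu[i], self.var[i], self.w[i] = x, 36.0, 0.05
        self.w /= self.w.sum()
        # background: a matched component whose weight is comfortably large
        return bool(hit.any() and self.w[int(np.argmax(hit))] > 0.5 / len(self.w))

m = PixelMoG()
for _ in range(50):            # a stable pixel gradually becomes background
    m.update(100.0)
print(m.update(100.0))         # -> True
print(m.update(220.0))         # a sudden bright object -> False (foreground)
```

The post-processing step the abstract proposes would then refine the binary foreground mask produced by such a model.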
This paper presents a Case-Based Reasoning approach for the personalized recommendation and the students’ authoring tasks in on-line repositories of Learning Objects (LOs). The recommender combines content-based filtering techniques together with collaborative filtering mechanisms. Students’ authoring tasks include the incorporation of ratings of the existing LOs and new LOs, which are peer reviewed. This approach is going to be applied to a repository with more than 200 programming examp...
Mercedes Gomez-Albarran; Guillermo Jimenez-Diaz
The Accreditation Council for Graduate Medical Education (ACGME) requires U.S. physician training programs to teach and evaluate their trainees in six core competencies. Developing innovative methods to meet the ACGME requirements is an ongoing area of research in medical education. Here we describe the development of the Competency-based Objective Resident Education using Virtual Patients (CORE-VP) system, a web-based virtual patient simulator to teach and measure the ACGME core competencies...
Taylor Sawyer; Alan Stein; Holly Olson; Christopher Becket Mahnke
SATEN is an object-oriented web-based extraction and belief revision engine. It runs on any computer via a Java 1.1 enabled browser such as Netscape 4. SATEN performs belief revision based on the AGM approach. The extraction and belief revision reasoning engines operate on a user-specified ranking of information. One of the features of SATEN is that it can be used to integrate mutually inconsistent commensurate rankings into a consistent ranking.
Williams, Mary-Anne; Sims, Aidan
Multiple user-specific visual interfaces are desirable in any computer-based clinical data-management system that is used by different people with different jobs to perform. The programming and maintenance problems of supporting multiple user interfaces to a single information system can be addressed by separating user-interface functionality from data-management subsystems, and by building user interfaces from object-based software components whose functionality is bound to an underlying ser...
The given paper proposes an object-based procedure for the combined analysis of high-resolution optical, thermal infrared and hyperspectral satellite imagery for different nuclear safeguards-related tasks. Some case studies using Hyperion, Landsat, QuickBird and Ikonos data will demonstrate the advantages of this approach. (author)
Dynamic Multi-objective Optimization (DMO) is very popular nowadays. A new algorithm for DMO called Virus-GEP Dynamic, based on Gene Expression Programming (GEP) and virus evolution, is proposed. Experiments on two test problems have shown that the algorithm has better performance in convergence, diversity and the breadth of the distribution.
Multi-objective image segmentation is a frequently encountered problem. The classical C-V algorithm suffers from many iterative operations, and its computational time is too long to segment large images. On the basis of an analysis of the relationship between the ...
Zhu Lei; Yang Jing
This paper lists objectives for the 39 courses that make up the competency-based elementary teacher education program of the University of Toledo. Courses are divided into three blocks. Block one deals with acquiring skills and includes courses such as "Performance Skills in Inquiry," "AV Equipment Operating," "Systems Design," and "School…
Toledo Univ., OH.
MPEG-4 is the first visual coding standard that allows coding of scenes as a collection of individual audio-visual objects. We present mathematical formulations for modeling object-based scalability and some functionalities that it brings with it. Our goal is to study algorithms that aid in semi-automating the authoring and subsequent selective addition/dropping of objects from a scene to provide content scalability. We start with a simplistic model for object-based scalability using the "knapsack problem"--a problem for which the optimal object set can be found using known schemes such as dynamic programming, the branch and bound method and approximation algorithms. The above formulation is then generalized to model authoring or multiplexing of scalable objects (e.g., objects encoded at various target bit-rates) using the "multiple choice knapsack problem." We relate this model to several problems that arise in video coding, the most prominent of these being the bit allocation problem. Unlike previous approaches to solve the operational bit allocation problem using Lagrangean relaxation, we discuss an algorithm that solves linear programming (LP) relaxation of this problem. We show that for this problem the duality gap for Lagrange and LP relaxations is exactly the same. The LP relaxation is solved using strong duality with dual descent--a procedure that can be completed in "linear" time. We show that there can be at most two fractional variables in the optimal primal solution and therefore this relaxation can be justified for many practical applications. This work reduces problem complexity, guarantees similar performance, is slightly more generic, and provides an alternate LP-duality based proof for earlier work by Shoham and Gersho (1988). In addition, we show how additional constraints may be added to impose inter-dependencies among objects in a presentation and discuss how object aggregation can be exploited in reducing problem complexity. 
The marginal analysis approach of Fox (1966) is suggested as a method of re-allocation with incremental inputs. It helps in efficiently re-optimizing the allocation when a system has user interactivity, appearing or disappearing objects, time-driven events, etc. Finally, we suggest approximation algorithms for the multiple choice knapsack problem, which can be used to quantify the complexity vs. quality tradeoff at the encoder in a tunable and universal way. PMID:18262907
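The simplest model in the abstract above, selecting a subset of scene objects to maximize semantic value under a bit budget, is the classic 0/1 knapsack problem, solvable exactly by dynamic programming. A toy sketch with illustrative rates and values (the paper's full formulation generalizes this to the multiple-choice variant and LP relaxation):

```python
def knapsack(rates, values, budget):
    """0/1 knapsack by DP over integer bit budgets; returns (best value, chosen set)."""
    n = len(rates)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        r, v = rates[i - 1], values[i - 1]
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]          # skip object i-1
            if r <= b:                           # or include it if it fits
                best[i][b] = max(best[i][b], best[i - 1][b - r] + v)
    chosen, b = [], budget                       # backtrack to recover the set
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= rates[i - 1]
    return best[n][budget], sorted(chosen)

rates = [30, 20, 40, 10]    # kbits per object (illustrative)
values = [60, 50, 70, 30]   # semantic importance scores (illustrative)
print(knapsack(rates, values, 60))   # -> (140, [0, 1, 3])
```

In the multiple-choice generalization, each object contributes several encodings at different target bit-rates and exactly one must be picked per object, which is where the LP relaxation and dual descent discussed in the abstract come in.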
We assessed the potential of multi-spectral GeoEye imagery for biodiversity assessment in an urban context in Bangalore, India. Twenty-one grids of 150 by 150 m were randomly located in the city center and all tree species within these grids mapped in the field. The six most common species, collectively representing 43% of the total trees sampled, were selected for mapping using pixel-based and object-based approaches. All pairs of species were separable based on spectral reflectance values in at least one band, with Peltophorum pterocarpum being most distinct from other species. Object-based approaches were consistently superior to pixel-based methods, which were particularly low in accuracy for tree species with small canopy sizes, such as Cocos nucifera and Roystonea regia. There was a strong and significant correlation between the number of trees determined on the ground and from object-based classification. Overall, object-based approaches appear capable of discriminating the six most common species in a challenging urban environment, with substantial heterogeneity of tree canopy sizes.
The emerging application domains in engineering, scientific technology, multimedia, GIS, knowledge management, expert system design, etc. require advanced data models to represent and manipulate data values, because the information residing in these domains is often vague or imprecise in nature and difficult to represent when implementing application software. To fulfill the requirements of such applications, researchers have put forward the innovative concept of the object-based fuzzy database system, extending the object-oriented system and adding fuzzy techniques to handle complex objects and imprecise data together. Some extensions of the OODMS have been proposed in the literature, but what is still lacking is a unifying and systematic formalization of these dedicated concepts. This paper continues our previous work, in which we proposed an effective and formal fuzzy class model to represent all types of fuzzy attributes and objects that can be confined to a fuzzy class. Here, we introduce a generalized definition language for the fuzzy class which can efficiently define the proposed fuzzy class model along with all possible fuzzy data types to describe the structure of the database, and thus serves as a data definition language for the object-based fuzzy database system.
Debasis Dwibedy, Dr. Laxman Sahoo, Sujoy Dutta
J2ME services play an important role in the communication industry. In this paper, we discuss and analyze consumptive behaviour based on an object pool with RMS capabilities. We discuss and analyze different aspects of RMS mining techniques and their behaviour on mobile devices, and we analyze which method or rule of implementing services is more suitable for mobile devices. The method mentioned in this paper is beneficial for analyzing large amounts of data on consumptive behaviours and provides some guidance for improving marketing in the fields concerned. We use J2ME components such as CLDC (Connected Limited Device Configuration) and MIDP (Mobile Information Device Profile) with data mining services (DMS) that provide local storage, a user interface, and networking capabilities on mobile computing devices. We also discuss the need for an object pool on mobile devices to enhance their capability. An object pool model based on RMS is proposed: to address the memory peak problem in J2ME, an object pool model using RMS is designed and implemented on the basis of the object pool design pattern.
Ms. Nandika Sood
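The object pool design pattern the abstract above builds on reuses released instances instead of allocating fresh ones, which is what trims memory peaks on constrained devices. A minimal language-neutral sketch (class and method names are illustrative; the paper's version is backed by J2ME RMS, not an in-memory list):

```python
class ObjectPool:
    """Minimal object pool: hand out a released instance before creating a new one."""
    def __init__(self, factory, max_size=4):
        self._factory = factory   # callable that creates a fresh object
        self._free = []           # released objects awaiting reuse
        self._max = max_size

    def acquire(self):
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        if len(self._free) < self._max:
            self._free.append(obj)   # keep for reuse; otherwise let it be collected

pool = ObjectPool(factory=dict)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(a is b)   # -> True: the released object was handed out again
```

On a J2ME device the `_free` list would be replaced by RMS record-store slots, trading allocation churn for record I/O.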
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
The problem of unsupervised and supervised learning of RBF networks is discussed with Multi-Objective Particle Swarm Optimization (MOPSO). This study presents an evolutionary multi-objective selection method for RBF network structure. The candidate RBF network structures are encoded into particles in PSO. These particles evolve toward the Pareto-optimal front defined by several objective functions covering model accuracy and complexity. This study suggests an approach to RBF network training through simultaneous optimization of architectures and connections with a PSO-based multi-objective algorithm. The present goal is to determine whether MOPSO can train RBF networks, and performance is validated on accuracy and complexity. The experiments are conducted on two benchmark datasets obtained from the machine learning repository. The results show that the best results are obtained by our proposed method, which achieved 100% and 80.21% classification accuracy in the experiments on the breast cancer and diabetes disease databases, respectively. The results also show that our approach provides an effective means to solve multi-objective RBF networks and outperforms a multi-objective genetic algorithm.
We propose a new multi-objective parameter design method that uses a combination of the following data mining techniques: analysis of variance, self-organizing map, decision tree analysis, rough set theory, and association rule. This method first aims to improve multiple objective functions simultaneously using as many predominant main effects of different design variables as possible. Then it resolves the remaining conflicts between the objective functions using predominant interaction effects of design variables. The key to realizing this method is the obtaining of various design rules that quantitatively relate levels of design variables to levels of objective functions. Based on comparative studies of data mining techniques, the systematic processes for obtaining these design rules have been clarified, and the points of combining data mining techniques have also been summarized. This method has been applied to a multi-objective robust optimization problem of an industrial fan, and the results show its superior capabilities for controlling parameters compared to traditional single-objective parameter design methods like the Taguchi method.
Sugimura, Kazuyuki; Obayashi, Shigeru; Jeong, Shinkyu
Human dynamics has attracted much attention in recent years. Quantitative understanding of the statistical mechanics of human behavior in an online network is a new challenge for researchers. In an online network, users’ behaviors can be abstracted and projected into a user-object network. Many complex problems concerning resource diffusion, such as recommendation systems, network flow and social network behavior, can be solved partially by this user-object network. Although some researchers have recently given some statistical description of the network, little work has been done on link prediction in a user-object network. The objective of this paper is to predict new links based on historical ones in a user-object network. When link weight is taken into consideration, we find that both time attenuation and diversion delay play key roles in link prediction in a user-object network. We then combine these two time-effect factors of link weight with users’ lifespans and construct the time-weighted network (TWN) model on the basis of resource allocation. Experimental results show the TWN model can greatly enhance the link prediction accuracy.
Liu, Ji; Deng, Guishi
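The time-attenuation idea behind the TWN model above can be illustrated with a toy scoring rule in which older user-object interactions contribute less to a predicted link weight. The exponential decay form and the parameter `lam` are illustrative stand-ins, not the paper's exact formula:

```python
import math

def time_weighted_score(events, now, lam=0.1):
    """Sum exponentially decayed contributions of past user-object event times."""
    return sum(math.exp(-lam * (now - t)) for t in events)

recent = time_weighted_score([9, 10], now=10)   # fresh interactions
stale = time_weighted_score([0, 1], now=10)     # old interactions
print(recent > stale)   # -> True: recency dominates the predicted link weight
```

A full link predictor would plug such decayed weights into a resource-allocation similarity over the user-object bipartite graph, as the abstract describes.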
With the intense increase in space objects, especially space debris, it is necessary to efficiently track and catalog the extensive dense clusters of space objects. As the main instrument for low earth orbit (LEO) space surveillance, the ground-based radar system is usually limited by its resolution while tracking small space debris with high density. Thus, the obtained measurement information can be seriously incomplete, which makes traditional tracking methods inefficient. To address this issue, we conceived the concept of group tracking. For group tracking, the overall tendency of the group objects is expected to be revealed, and the trajectories of individual objects are simultaneously reconstructed explicitly. By modeling the interaction between the group center and individual trajectories using a Markov random field (MRF) within a Bayesian framework, the number of objects and the individual trajectories can be estimated more accurately under a high missed-alarm probability. The Markov chain Monte Carlo (MCMC)-Particle algorithm was utilized for solving the Bayesian integral problem. Furthermore, we introduced a mechanism for describing the behaviors of groups merging and splitting, which can expand the single-group tracking algorithm to track a variable number of multiple groups. Finally, simulation of the group tracking of space objects was carried out to validate the efficiency of the proposed method.
Huang, Jian; Hu, Weidong
The devastating series of fire events that occurred during the summers of 2007 and 2009 in Greece made evident the need for an operational mechanism to map burned areas in an accurate and timely fashion. In this work, Système pour l’Observation de la Terre (SPOT-4) HRVIR images are introduced into an object-based classification environment in order to develop a classification procedure for burned area mapping. The development of the procedure was based on two images and then tested for its transferability to other burned areas. Results from the SPOT-4 HRVIR burned area mapping showed very high classification accuracies (0.86 kappa coefficient), while the object-based classification procedure that was developed proved to be transferable when applied to other study areas.
With the continued advances in wireless communications, geo-positioning, and consumer electronics, an infrastructure is emerging that enables location-based services that rely on the tracking of the continuously changing positions of entire populations of service users, termed moving objects. This scenario is characterized by large volumes of updates, for which reason location update technologies become important. A setting is assumed in which a central database stores a representation of each moving object's current position. This position is to be maintained so that it deviates from the user's real position by at most a given threshold. To do so, each moving object stores locally the central representation of its position. Then an object updates the database whenever the deviation between its actual position (as obtained from a GPS device) and the database position exceeds the threshold. The main issue considered is how to represent the location of a moving object in a database so that tracking can be done with as few updates as possible. The paper proposes to use the road network within which the objects are assumed to move for predicting their future positions. The paper presents algorithms that modify an initial road-network representation, so that it works better as a basis for predicting an object's position; it proposes to use known movement patterns of the object, in the form of routes; and it proposes to use acceleration profiles together with the routes. Using real GPS-data and a corresponding real road network, the paper offers empirical evaluations and comparisons that include three existing approaches and all the proposed approaches. Publication date: May.
Civilis, A.; Jensen, Christian Søndergaard
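The threshold-based update policy described above can be sketched in a few lines: the object reports its position only when the deviation from the server-side representation exceeds the threshold. The constant-position prediction below is an illustrative stand-in for the paper's road-network predictors, and the 1-D track is toy data:

```python
def simulate_updates(positions, threshold):
    """Count updates an object issues when the server assumes it stays at the
    last reported point (constant-position prediction)."""
    reported = positions[0]
    updates = 1                          # initial report
    for p in positions[1:]:
        if abs(p - reported) > threshold:
            reported = p                 # deviation too large: update the server
            updates += 1
    return updates

track = [0, 2, 4, 6, 8, 10, 12]          # 1-D positions sampled from GPS
print(simulate_updates(track, threshold=5))   # -> 3
```

Better predictors (routes, acceleration profiles) lower the deviation between prediction and truth, and hence the update count, which is exactly the paper's objective.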
In this paper, an object-based image retrieval system for chest CT image databases is proposed. Based on the scheme of the content-based image retrieval method, we propose an image segmentation method which combines anatomical knowledge of the chest with the well-known watershed segmentation algorithm. The purpose of segmentation is to identify the mediastinum and the two lung lobes in a chest CT image. ARGs (attributed relational graphs) are chosen to describe the features of segmented objects. Then, the image database is constructed from the feature vectors of the images. In database searching, two searching modes are provided: "query by example" and "query by object". Our system uses Euclidean distance to measure the similarity between the image in query and the image in the database. The system outputs the 30 most similar images in the chest CT image database as query results. The experimental results show that the average precision of our system is about 80%, which is impressive for a totally automatic medical image retrieval system. Moreover, queries concentrated on certain object features usually show better results than the regular query by example. The possible reasons are discussed. PMID:17271928
Yu, Sung-Nien; Chiang, Chih-Tsung
A number of emerging applications of data management technology involve the monitoring and querying of large quantities of continuous variables, e.g., the positions of mobile service users, termed moving objects. In such applications, large quantities of state samples obtained via sensors are streamed to a database. Indexes for moving objects must support queries efficiently, but must also support frequent updates. Indexes based on minimum bounding regions (MBRs) such as the R-tree exhibit high concurrency overheads during node splitting, and each individual update is known to be quite costly. This motivates the design of a solution that enables the B+-tree to manage moving objects. We represent moving-object locations as vectors that are timestamped based on their update time. By applying a novel linearization technique to these values, it is possible to index the resulting values using a single B+-tree that partitions values according to their timestamp and otherwise preserves spatial proximity. We develop algorithms for range and k nearest neighbor queries, as well as continuous queries. The proposal can be grafted into existing database systems cost effectively. An extensive experimental study explores the performance characteristics of the proposal and also shows that it is capable of substantially outperforming the R-tree based TPR-tree for both single and concurrent access scenarios.
Jensen, Christian Søndergaard; Lin, Dan
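The linearization idea in the abstract above can be sketched as a single integer key that partitions entries by update-time phase and, within a phase, preserves spatial proximity; a B+-tree ordered on this key can then index moving objects. The Z-order (Morton) interleaving and the phase length below are illustrative choices, not necessarily the paper's exact construction:

```python
def z_order(x, y, bits=16):
    """Interleave the bits of x and y (Morton code): nearby points get nearby keys."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def b_tree_key(x, y, timestamp, phase_length=60, bits=16):
    """Partition by time phase, then by spatial Z-order within the phase."""
    phase = timestamp // phase_length
    return (phase << (2 * bits)) | z_order(x, y, bits)

# Nearby objects updated in the same phase get nearby keys; a later phase
# always yields a strictly larger key.
k1 = b_tree_key(100, 200, timestamp=30)
k2 = b_tree_key(101, 200, timestamp=45)
k3 = b_tree_key(100, 200, timestamp=90)
print(k1 < k3, abs(k1 - k2) < 16)   # -> True True
```

Range and nearest-neighbor queries then decompose into a small set of key intervals per time phase, which is what lets a standard B+-tree serve moving-object workloads.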
The paper addresses the problem of grasping moving objects from a vibratory feeder with robotic hand-eye coordination. Since the dynamics of moving targets on the vibratory feeder are highly nonlinear and impractical to model accurately, the problem has been formulated in the context of Prey Capture with the robot as a `pursuer' and a moving object as a passive `prey.' A hierarchical vision-based intelligent controller has been developed and implemented in the Factory-of-the-Future Kitting Cell at Georgia Tech. The first and second levels are based on the principle of fuzzy logic to help the robot search for an object of interest and then pursue it. The third level is based on a backpropagation neural network to predict its position at which the robot gripper grasps it. The feasibility of the concept and the control strategies was verified by two experiments. The first experiment showed that the fuzzy logic controller could command the robot to successfully follow the highly nonlinear motion of a moving object and approach its vicinity. The second experiment demonstrated that the neural network could estimate its position fairly accurately in a finite period of time after the command of grasp operations was issued.
Lee, Kok-Meng; Qian, Yifei
We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting scheme enables integer-to-integer mapping, allowing lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature. PMID:18249726
Menegaz, Gloria; Thiran, Jean-Philippe
We revisit the problem of model-based object recognition for intensity images and attempt to address some of the shortcomings of existing Bayesian methods, such as unsuitable priors and the treatment of residuals with a non-robust error norm. We do so by using a reformulation of the Huber metric and carefully chosen prior distributions. Our proposed method is invariant to 2-dimensional affine transformations and, because it is relatively easy to train and use, it is suited for general object matching problems.
Zografos, Vasileios; 10.1007/11559573_51
Domain-specific question answering allows users to express their queries in natural language, so that users need not know the structure of the information source. For such an application the relational model is not suitable, as it is not a natural way to represent real-world knowledge; using the relational model to represent an information source results in scattered relations of data about real-world objects. In this paper an effective category model is presented to organize information according to its content, based on an object-relational database. The railway domain is used in the paper to illustrate the category model.
Avinash J. Agrawal
This paper presents a fractal-based method for natural scene image segmentation. The main goal is to find artificial objects in complex natural scenes. We propose a set of fractal measurements in order to capture various aspects of the roughness of each part of an image. The performance of the data fitting in the box-dimension estimation is analyzed and an improved algorithm is proposed. Experiments show that the proposed approach is suitable for texture segmentation and artificial object finding in natural-environment images.
Yang, Bo; Xu, Guang-you; Zhu, Zhigang
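The box-dimension estimation discussed above builds on standard box counting. The sketch below shows the baseline estimator (the paper's improved fitting is not reproduced): occupied boxes are counted at several scales and the dimension is the least-squares slope of log(count) versus log(1/size). Function and parameter names are illustrative assumptions.

```python
import math

def box_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set.

    points: iterable of (x, y) in the unit square; sizes: box side lengths.
    """
    xs, ys = [], []
    for s in sizes:
        # Count occupied boxes at this scale using integer grid indices.
        boxes = {(int(px / s), int(py / s)) for px, py in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # Ordinary least-squares slope of log(count) vs log(1/size) = dimension.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))
```

On a densely filled unit square the estimate approaches 2, while rough fractal boundaries yield non-integer values, which is what makes the measure useful for separating natural texture from man-made structure.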
Generative models capable of synthesising complete object images have over the past few years proven their worth in interpreting images. Due to recent developments in computational machinery it has become feasible to model the variation of image intensities and landmark positions over the complete object surface using principal component analysis. This typically involves matrices with a few thousand and up to 100,000+ rows. This paper demonstrates applications of such models applied to colour images of human faces and cardiac magnetic resonance images. Further, we devise methods for alleviating the obvious computational and storage requirements of these large models by means of truncated wavelet bases.
Stegmann, Mikkel Bille; Forchhammer, Søren
A new lossless compression scheme for bilevel images, targeted at binary shapes of image and video objects, is presented. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding. Tested on individual images of binary shapes and on binary layers of digital maps, the algorithm outperforms PWC, JBIG and MPEG-4 CAE. On the binary shapes the code lengths are reduced by 21%, 25%, and 42%...
Aghito, Shankar Manuel; Forchhammer, Søren
In this paper, the implementation techniques of an intelligent nuclear material surveillance system based on COM (Component Object Model) and SOM (Self-Organized Mapping) are described. The surveillance system to be developed consists of CCD cameras, neutron monitors, and a PC for data acquisition. To develop the system, the properties of COM-based software development technology were investigated, and the characteristics of the related platform APIs were summarized. This report could be used by developers who want to build intelligent surveillance systems for various experimental environments based on DVRs and sensors, using Borland C++ Builder.
According to the existing literature, and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To improve on the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems ...
Andrea Baraldi; Luigi Boschetti
Detecting static objects in video sequences has high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on the detections of two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the means for interpreting the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and it can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated on several public datasets.
Heras Evangelio Rubén
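The per-pixel finite-state machine described above can be sketched as a look-up table keyed on the current state and the foreground decisions of the two background models. The state names and transition entries below are illustrative assumptions, not the exact table from the paper; the point is that the whole classifier reduces to one dictionary look-up per pixel.

```python
# Per-pixel states (names are assumptions for illustration).
BACKGROUND, MOVING, STATIC, GHOST = "BG", "MOV", "STAT", "GHOST"

# Transition table: (state, fast_fg, slow_fg) -> next state, where fast_fg /
# slow_fg say whether the fast- / slow-adapting model flags the pixel as
# foreground. A static object is absorbed by the fast model first.
TRANSITIONS = {
    (BACKGROUND, True,  True):  MOVING,      # new object appears
    (BACKGROUND, True,  False): GHOST,       # e.g. a removed background object
    (MOVING,     True,  True):  MOVING,      # still moving
    (MOVING,     False, True):  STATIC,      # absorbed by the fast model only
    (MOVING,     False, False): BACKGROUND,
    (STATIC,     False, True):  STATIC,      # abandoned-object candidate
    (STATIC,     False, False): BACKGROUND,  # absorbed by both models
    (GHOST,      True,  False): GHOST,
    (GHOST,      False, False): BACKGROUND,
}

def step(state, fast_fg, slow_fg):
    """One FSM update; unlisted combinations keep the current state."""
    return TRANSITIONS.get((state, fast_fg, slow_fg), state)
```

Because the table is just a dictionary, adding or refining states (the extensibility noted in the abstract) is a matter of adding entries, with no change to the per-pixel update cost.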
A modified object-tracking algorithm that uses a flexible Metric Distance Transform kernel and multiple features for the mean-shift procedure is proposed and tested. Faithful target separation is obtained from the RGB joint pdf of the target region and that of a neighbourhood surrounding the object. A non-linear log-likelihood function maps the multimodal object/background distribution to positive values for colours associated with the foreground, while negative values mark the background. This replaces the more usual Epanechnikov kernel (E-kernel), improving target representation and localization without increasing the processing time, with the similarity measure based on the Bhattacharyya coefficient. The algorithm is tested on several image sequences and shown to achieve robust and reliable frame-rate tracking.
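The log-likelihood colour mapping described above can be sketched as follows. This is a hedged illustration (histograms are plain dicts over pre-computed colour-bin indices, and the smoothing epsilon is an assumed choice): colours likely under the target histogram map to positive values and colours likely under the background histogram map to negative values.

```python
import math

def log_likelihood_map(pixels, target_hist, background_hist, eps=1e-6):
    """Map each pixel's colour-bin index to log(p_target / p_background).

    pixels: iterable of colour-bin indices; the two histograms map a bin
    index to its (normalised) probability. eps avoids log(0).
    """
    out = []
    for b in pixels:
        pt = target_hist.get(b, 0.0) + eps
        pb = background_hist.get(b, 0.0) + eps
        out.append(math.log(pt / pb))  # >0: foreground-like, <0: background-like
    return out
```

The sign of the mapped value directly gives the foreground/background vote per pixel, which is what lets this map act as a kernel replacement in the mean-shift weight computation.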
We present semiquantitative photoacoustic images of small nanoparticle-containing objects having a wide range of contrast levels relative to the background. The images are obtained by a finite-element reconstruction algorithm that is based on the Helmholtz-like photoacoustic wave equation in the frequency domain. Our reconstruction approach is an iterative Newton method coupled with combined Marquardt and Tikhonov regularizations that can extract the spatial distribution of relative optical absorption property in heterogeneous media. We demonstrate experimental images in single- and multiple-object configurations with a circular scanning photoacoustic tomographic system. The results obtained show that millimeter-size nanoparticle-containing objects can be clearly detected in terms of position, size, and relative optical properties. PMID:16315719
Yuan, Zhen; Wu, Changfeng; Zhao, Hongzhi; Jiang, Huabei
In motion-blur-based speed measurement, a key step is the calculation of the horizontal blur extent. To perform this calculation robustly and accurately when both defocus blur and motion blur occur, and for a moving object with irregular shape edges, we propose a novel scheme using image matting and a transparency map. This scheme can isolate the defocus blur from the motion blur effectively, and can also calculate the horizontal blur extent accurately, regardless of the object's shape. Moreover, our scheme can also perform speed measurement for an object with uniformly accelerated/retarded motion (i.e. rigid-body linear motion with a constant acceleration) by using one interlaced-scan CCD image. Simulation and real experiments show that our scheme not only outperforms the current scan-line algorithm for blur extent computation, but can also perform speed measurement accurately for uniformly accelerated/retarded motion.
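Once the horizontal blur extent is known, the final speed computation is straightforward geometry. The sketch below is an assumed simple-case illustration (object moving parallel to the sensor at a known distance, thin-lens magnification), not the paper's full scheme: the pixel smear is converted to a sensor-plane length, back-projected to object space, and divided by the exposure time.

```python
def speed_from_blur(blur_px, pixel_pitch_m, focal_len_m, distance_m, exposure_s):
    """Speed (m/s) of an object whose image smears blur_px pixels in one exposure."""
    blur_on_sensor = blur_px * pixel_pitch_m                  # smear on the sensor
    magnification = focal_len_m / (distance_m - focal_len_m)  # thin-lens magnification
    displacement = blur_on_sensor / magnification             # object-space motion
    return displacement / exposure_s
```

For example, a 100-pixel smear on a 5 µm-pitch sensor with a 50 mm lens, an object about 10 m away, and a 10 ms exposure corresponds to a speed of 10 m/s.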
In the deregulated environment, transmission congestion is one major problem that needs to be handled in power system operation and network expansion planning. This paper aims to enhance the transmission system capability and have the congestion alleviated using the multi-objective transmission expansion planning (MOTEP) approach. A system congestion index called the congestion surplus is presented to measure the congestion degree of the transmission system. The proposed MOTEP approach optimizes three objectives simultaneously, namely the congestion surplus, investment cost and power outage cost. An improved strength Pareto evolutionary algorithm (SPEA) is adopted to solve the proposed model. A ranking method based on Euclidean distance is presented for decision-making in the Pareto-optimal set. The effectiveness of both the improved SPEA and the proposed multi-objective planning approach has been tested and proven on the 18-bus system and the 77-bus system, respectively. (author)
Wang, Yi; Cheng, Haozhong; Hu, Zechun [Department of Electrical Engineering, Shanghai Jiaotong University, Shanghai (China); Wang, Chun [Department of Electrical Engineering, Shanghai Jiaotong University, Shanghai (China); Department of Electrical Engineering and Automation, Nanchang University, Nanchang 330031 (China); Yao, Liangzhong [AREVA T and D Technology Centre, Stafford ST17 4LX (United Kingdom); Ma, Zeliang; Zhu, Zhonglie [Department of Development Planning, East China Power Grid Co. Ltd., Shanghai (China)
Digital classification of the Earth's surface has significantly benefited from the availability of global DEMs and recent advances in image processing techniques. Such an innovative approach is object-based analysis, which integrates multi-scale segmentation and rule-based classification. Since the classification is based on spatially configured objects and no longer on solely thematically defined cells, the resulting landforms or landform types are represented in a more realistic way. However, up to now, the object-based approach has not been adopted for broad-scale topographic modelling. Existing global to almost-global terrain classification systems have been implemented on per cell schemes, accepting disadvantages such as the speckled character of outputs and the non-consideration of space. We introduce the first object-based method to automatically classify the Earth's surface as represented by the SRTM into a three-level hierarchy of topographic regions. The new method relies on the concept of decomposing land-surface complexity into ever more homogeneous domains. The SRTM elevation layer is automatically segmented and classified at three levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these recognised scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation respectively. Results resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalisation requirements of maximising internal homogeneity while minimising external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. 
The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as an eCognition® customised process, available as free online download. The results are embedded in a web application, where users can visualise and download the data of interest in GIS ready vector format. The method has originally been developed on the SRTM, but may be applied to any other DEM and regional area of interest. The tool allows for modifications in order to meet the requirements of individual research tasks. Both segmentation and class thresholds are relative to the extent and characteristics of the input DEM. Therefore, when applying the tool to regional or national scales, the results should be interpreted within the adequate context.
Eisank, C.; Dragut, L.
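The partitioning rule described above (splitting segmentation objects into sub-domains by thresholds given by the mean of elevation and the standard deviation of elevation) can be sketched as follows. Objects are represented simply as lists of elevation samples, and the class labels are illustrative assumptions, not the paper's nomenclature.

```python
import statistics

def classify_objects(objects):
    """objects: list of per-segment elevation sample lists -> class labels."""
    means = [statistics.mean(o) for o in objects]
    stds = [statistics.pstdev(o) for o in objects]
    # Global thresholds: mean of the per-object means / standard deviations,
    # mirroring the data-driven thresholds described in the abstract.
    t_mean = statistics.mean(means)
    t_std = statistics.mean(stds)
    labels = []
    for m, s in zip(means, stds):
        elev = "high" if m >= t_mean else "low"
        rough = "rough" if s >= t_std else "smooth"
        labels.append(f"{elev}-{rough}")
    return labels
```

Because both thresholds are derived from the data themselves, the same rule adapts to any input DEM extent, which matches the note above that results are relative to the extent and characteristics of the input DEM.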
Infrared object detection is an important technique of digital image processing. It is widely used in automatic navigation, intelligent video surveillance systems, traffic detection, medical image processing, etc. An infrared object detection system requires large storage and high-speed processing technology. The current development trend is a system that can be realized in hardware, in real time, with fewer operations and higher performance. As a mainstream large-scale programmable application-specific integrated circuit, the field programmable gate array (FPGA) can meet all the requirements of high-speed image processing, with the characteristics of simple algorithm realization, easy programming, good portability and inheritability, so better results can be obtained by applying an FPGA to the infrared object detection system. According to these requirements, the infrared object detection system is designed on an FPGA. By analyzing some of the main algorithms for object detection, two new object detection algorithms, called the integral compare algorithm (ICA) and the gradual approach centroid algorithm (GACA), are presented. Implementing the system design in FPGA hardware enables high-speed processing, which brings the advantages of both performance and flexibility. ICA is a new type of denoising algorithm with the advantages of lower computational complexity and shorter execution time; more importantly, this algorithm can be implemented in an FPGA expediently. Based on the image preprocessing of ICA, GACA achieves high positioning precision with the advantages of insensitivity to the initial value and fewer convergence iterations. The experiments indicate that the infrared object detection system can perform high-speed infrared object detection in real time, with high anti-jamming ability and high precision. The Verilog-HDL design and its architecture are introduced in this paper.
Considering the engineering application, this paper gives the particular design idea and the flow of the method's realization in an FPGA device. We also discuss how to describe the hardware system in Verilog-HDL. Based on the hardware architecture of the infrared object detection system, the component units of the system are discussed in detail, such as the image data acquisition unit, the data pre-processing unit and the logical control unit. The FPGA functions are designed and implemented in Verilog-HDL using a top-down method. The paper closes with the prospects of the project.
Zhao, Jianhui; He, Jianwei; Wang, Pengpeng; Li, Fan
Typical multidisciplinary design optimization (MDO) has gradually been adopted to balance the lightweight, noise, vibration and harshness (NVH), and safety performance of the instrument panel (IP) structure in automotive development. Nevertheless, the plastic constitutive relation of polypropylene (PP) under different strain rates has not been taken into consideration in current reliability-based and collaborative IP MDO design. In this paper, based on tensile tests under different strain rates, the constitutive relation of the polypropylene material is studied. Impact simulation tests for the head and knee bolster are carried out to meet the regulations of FMVSS 201 and FMVSS 208, respectively. NVH analysis is performed to obtain mainly the natural frequencies and corresponding mode shapes, while crashworthiness analysis is employed to examine the crash behavior of the IP structure. With consideration of lightweight, NVH, and head and knee bolster impact performance, design of experiments (DOE), response surface models (RSM), and collaborative optimization (CO) are applied to realize the deterministic and reliability-based optimizations, respectively. Furthermore, based on a multi-objective genetic algorithm (MOGA), the optimal Pareto sets are computed to solve the multi-objective optimization (MOO) problem. The proposed research ensures the smoothness of the Pareto set, enhances the ability of engineers to make a comprehensive decision about multiple objectives and choose the optimal design, and improves the quality and efficiency of MDO.
Wang, Ping; Wu, Guangqiang
This paper describes an efficient object-based hybrid image coding (OB-HIC) scheme. The proposed scheme is based on using the discrete wavelet transform (DWT) in conjunction with the discrete cosine transform (DCT) to provide coding performance superior to popular image coders. The proposed method combines object-based DCT coding with the high-performance set partitioning in hierarchical trees (SPIHT) coding. The subband image data in the wavelet domain are modified based on the DCT and the object classification of the coefficients in the low-frequency image subband (LL). The modification process produces new subband image data containing almost the same information as the original but with smaller wavelet-coefficient values. Simulation results demonstrate that, with a small addition to the computational complexity of the coding process, the peak signal-to-noise ratio (PSNR) performance of the proposed algorithm is much higher than that of the SPIHT test coder and some well-known image coding techniques.
Usama S. Mohammed
Climate models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. In this study, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska. Spectral and finite-volume (FV) dynamical cores are considered within the Community Atmosphere Model. We develop model evaluation strategies that identify 'objects' - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes and identify the sources of uncertainty in models without needing to reproduce the time and location of, for example, a particular observed cloud system. The results of our object-based analysis revealed differences between the FV and spectral models in the simulation of different types of orographic precipitation, in terms of the relationship between the resolvable scales of precipitation objects and the resolution. General agreement between the models was observed when the objects are due to large-scale atmospheric dynamics; however, the spectral model exhibited erratic results in the simulation of smaller-scale features due to local evaporation. As a follow-up analysis, the objects identified by our method were subtracted from the precipitation field and a measure of spatial continuity (via geostatistical methods) was calculated to investigate grid-scale variability. This analysis showed that the spectral model also produced higher variability in the grid-scale precipitation.
Yorgun, M. S.; Rood, R. B.
A new method of synthesizing a computer-generated hologram (CGH) of three-dimensional (3D) objects from their projection images is proposed. A series of projection images of the 3D objects is recorded with one-dimensional azimuth scanning. According to the principle of the paraboloid of revolution in 3D Fourier space and the 3D central slice theorem, spectral information of the 3D objects can be gathered from their projection images. Considering the quantization error in the horizontal and vertical directions, the spectral information from each projection image is efficiently extracted in double-circle and four-circle shapes, to enhance the utilization of the projection spectra. The spectral information of the 3D objects from all projection images is then encoded into a computer-generated hologram based on the Fourier transform using conjugate-symmetric extension. The hologram includes the 3D information of the objects. Experimental results for numerical reconstruction of the CGH at different distances validate the proposed method and show its good performance. Electro-holographic reconstruction can be realized by using an electronically addressed reflective liquid-crystal display (LCD) spatial light modulator. The CGH from the computer is loaded onto the LCD. By illuminating the LCD with a reference light from a laser source, the amplitude and phase information included in the CGH is reconstructed through the diffraction of the light modulated by the LCD.
Huang, Sujuan; Wang, Duocheng; He, Chao
Video tracking is the process of locating a moving object over time using a camera; it is widely used in surveillance, animation and robotics. Tracking describes the process of recording movement and translating that movement onto a digital model. The set of constraints that produces the most accurate tracking is the one that best describes the action performed. The key difficulty in video tracking is associating target locations in consecutive video frames, especially when the objects are moving fast relative to the frame rate. Here, a video tracking system is employed in which a motion model describes how the image of the target might change for different possible motions of the object to be tracked. The role of the tracking algorithm adopted for this system is to analyze the video frames to estimate the motion parameters, which characterize the location of the target. In this research, three features are extracted from each moving object: centroid, area and average luminance. Finally, a similarity function is applied for tracking, and the experiments show that the chosen method performs well under dynamic circumstances for real-time tracking. Simulink is integrated with MATLAB to build a model for object tracking, and data transfer is easily handled between the programs. The Simulink-based customizable framework is designed for rapid simulation, implementation, and verification of video and image processing algorithms and systems.
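The three per-object features named above (centroid, area, average luminance) and a frame-to-frame similarity function can be sketched as follows. Images are plain 2-D lists here, and the similarity weights are illustrative assumptions, not values from the paper.

```python
def object_features(mask, gray):
    """mask/gray: 2-D lists of equal shape; mask is 0/1, gray is luminance."""
    xs = ys = lum = area = 0
    for r, row in enumerate(mask):
        for c, m in enumerate(row):
            if m:
                area += 1
                ys += r
                xs += c
                lum += gray[r][c]
    return (xs / area, ys / area), area, lum / area  # centroid, area, mean lum

def similarity(f1, f2, w=(1.0, 0.01, 0.1)):
    """Lower is more similar: weighted distance over the three features."""
    (c1, a1, l1), (c2, a2, l2) = f1, f2
    d_centroid = ((c1[0] - c2[0]) ** 2 + (c1[1] - c2[1]) ** 2) ** 0.5
    return w[0] * d_centroid + w[1] * abs(a1 - a2) + w[2] * abs(l1 - l2)
```

In a tracker, each object in the current frame is associated with the candidate in the next frame that minimises this distance, which is the association step the abstract identifies as the key difficulty.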
Quality assessment of multi-objective optimization algorithms has been a major concern in the scientific field during the last decades. The entropy metric is introduced and highlighted for computing the diversity of multi-objective optimization algorithms. In this paper, the definition of the entropy metric and the approach of diversity measurement based on entropy are presented. This measurement is applied not only to a multi-objective evolutionary algorithm but also to a multi-objective immune algorithm. Besides, the key techniques of the entropy metric, such as an appropriate principle for the grid method, reasonable parameter selection and the simplification of the density function, are discussed and analyzed. Moreover, experimental results prove the validity and efficiency of the entropy metric. The computational effort of the entropy increases at a linear rate with the number of points in the solution set, which is indeed superior to other quality indicators. Compared with generational distance, it is shown that the entropy metric has the capability of describing the diversity performance on a quantitative basis. Therefore, the entropy criterion can serve as a highly efficient diversity criterion for multi-objective optimization algorithms.
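A grid-based entropy diversity measure of the kind described above can be sketched as follows. This is a hedged illustration (the paper's exact grid principle and density function are not reproduced; the grid resolution and normalisation are assumed choices): the objective space is divided into cells, and the Shannon entropy of the cell-occupancy distribution measures how evenly the solution set is spread.

```python
import math

def entropy_diversity(solutions, grid=4):
    """solutions: list of (f1, f2) objective vectors assumed in [0, 1]^2."""
    counts = {}
    for f1, f2 in solutions:
        cell = (min(int(f1 * grid), grid - 1), min(int(f2 * grid), grid - 1))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(solutions)
    # Shannon entropy of the occupancy distribution; higher = more diverse.
    # Cost is linear in the number of solutions, as the abstract notes.
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A front spread across four distinct cells scores log(4), while a set collapsed into a single cell scores zero, making the metric a quantitative diversity indicator.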
Multimodal attention is a key requirement for humanoid robots in order to navigate complex environments and act as social, cognitive human partners. To this end, robots have to incorporate attention mechanisms that focus processing on the potentially most relevant stimuli while controlling the sensor orientation to improve the perception of these stimuli. In this paper, we present our implementation of audio-visual saliency-based attention, which we integrated into a system for knowledge-d...
Schauerte, B.; Kühn, Benjamin; Kroschel, Kristian; Stiefelhagen, Rainer
Self-regulated learning has become an important construct in education research in the last few years. Self-regulated learning, in its simple form, is the learner's ability to monitor and control the learning process. There is increasing research in the literature on how to support students in becoming more self-regulated learners. However, advancement in information technology has led to paradigm changes in the design and development of educational content. The concept of learning-object instructional technology has emerged as a result of this shift in educational technology paradigms. This paper presents the results of a study that investigated the potential educational effectiveness of a pedagogical framework based on self-regulated learning theories to support the design of learning object systems to help computer science students. A prototype learning object system was developed based on contemporary research on self-regulated learning. The system was educationally evaluated in a quasi-experimental study over two semesters in a core programming-language concepts course. The evaluation revealed that a learning object system that takes into consideration contemporary research on self-regulated learning can be an effective learning environment to support computer science education.
This paper describes a new framework for object detection and tracking by an AUV, including underwater acoustic data interpolation, underwater acoustic image segmentation and underwater object tracking. This framework is applied to the design of a vision-based method for an AUV based on a forward-looking sonar sensor. First, the real-time data flow (underwater acoustic images) is pre-processed to form the whole underwater acoustic image, and the relevant position information of objects is extracted and determined. An improved double-threshold segmentation method is proposed to resolve the problem that the threshold cannot be adjusted adaptively in the traditional method. Second, a representation of region information is created in light of the Gaussian particle filter. A weighted integration strategy combining area and invariant moments is proposed to refine the particle weights and to enhance the tracking robustness. Results obtained on the real acoustic vision platform of the AUV during sea trials are displayed and discussed. They show that the proposed method can detect and track moving objects underwater online, and that it is effective and robust.
Zhang, Tie-dong; Wan, Lei; Zeng, Wen-jing; Xu, Yu-ru
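An adaptive double-threshold segmentation in the spirit of the method above can be sketched as follows. This is an assumed illustration, not the paper's algorithm: the high and low thresholds are derived from image statistics (mean plus k·std, with illustrative k values), high-threshold pixels seed objects, and low-threshold pixels are kept only when adjacent to a seed.

```python
import statistics

def double_threshold(img, k_hi=2.0, k_lo=1.0):
    """img: 2-D list of intensities; returns a 0/1 segmentation mask."""
    flat = [v for row in img for v in row]
    mu, sd = statistics.mean(flat), statistics.pstdev(flat)
    t_hi, t_lo = mu + k_hi * sd, mu + k_lo * sd  # thresholds adapt to the image
    rows, cols = len(img), len(img[0])
    strong = [[img[r][c] >= t_hi for c in range(cols)] for r in range(rows)]
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if strong[r][c]:
                mask[r][c] = 1
            elif img[r][c] >= t_lo:
                # Weak pixel: keep it only if an 8-neighbour is a strong pixel.
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and strong[rr][cc]:
                            mask[r][c] = 1
    return mask
```

Deriving both thresholds from the image statistics is what makes the scheme adaptive: isolated weak responses (typical of sonar noise) are discarded, while weak pixels bordering a strong detection survive.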
In this paper, we introduce a C-scan ultrasound prototype and three imaging modalities for the detection of foreign objects inserted in porcine soft tissue. The object materials include bamboo, plastics, glass and aluminum alloys. The images of foreign objects were acquired using the C-scan ultrasound, a portable B-scan ultrasound, film-based radiography, and computerized radiography. The C-scan ultrasound consists of a plane-wave transducer, a compound acoustic lens system, and a newly developed ultrasound sensor array based on complementary metal-oxide semiconductor technology coated with piezoelectric material (PE-CMOS). The contrast-to-noise ratio (CNR) of the images was analyzed to quantitatively evaluate the detectability of the different imaging modalities. The experimental results indicate that the C-scan prototype has better CNR values for 4 out of 7 objects than the other modalities. Specifically, the C-scan prototype provides more detailed information about the soft tissues without the speckle artifacts that are commonly seen with conventional B-scan ultrasound, and has the same orientation as standard radiographs but without ionizing radiation. PMID:20036873
Liu, Chu-Chuan; Lo, Shih-Chung Ben; Freedman, Matthew T; Lasser, Marvin E; Kula, John; Sarcone, Anita; Wang, Yue
In this paper we propose a robust object-based watermarking method, in which the watermark is embedded into the middle-frequency band of the Discrete Fourier Transform (DFT) magnitude of the selected object region, together with the Speeded Up Robust Features (SURF) algorithm to allow correct watermark detection even if the watermarked image has been distorted. To recognize the selected object region after geometric distortions, the SURF features are estimated and stored in advance during the embedding process, to be used during the detection process. In the detection stage, the SURF features of the distorted image are estimated and matched with the stored ones. From the matching result, the SURF features are used to compute the affine-transformation parameters, and the object region is recovered. The quality of the watermarked image is measured using the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM) and the Visual Information Fidelity (VIF). The experimental results show that the proposed method provides robustness against several geometric distortions, signal processing operations and combined distortions. The receiver operating characteristic (ROC) curves also show the desirable detection performance of the proposed method. A comparison with previously reported methods based on different techniques is also provided.
The Internet of Things (IoT) usually refers to a world-wide network of interconnected heterogeneous objects (sensors, actuators, smart devices, smart objects, RFID, embedded computers, etc.) that are uniquely addressable and based on standard communication protocols. Beyond such a definition, a new definition of the IoT is emerging: a loosely coupled, decentralized system of cooperating smart objects (SOs). An SO is an autonomous, physical digital object augmented with sensing/actuating, processing, storing, and networking capabilities. SOs are able to sense/actuate, store, and interpret information created within themselves and around the neighbouring external world where they are situated, act on their own, cooperate with each other, and exchange information with other kinds of electronic devices and human users. However, such an SO-oriented IoT raises many in-the-small and in-the-large issues involving SO programming, IoT system architecture/middleware and methods/methodologies for the development of SO-based applica...
We summarize five studies of our large-scale research program, in which we examined aspects of contour-based object identification and segmentation, and we report on the stimuli we used, the norms and data we collected, and the software tools we developed. The stimuli were outlines derived from the standard set of line drawings of everyday objects by Snodgrass and Vanderwart (1980). We used contour curvature as a major variable in all the studies. The total number of 1,500 participants produced very solid, normative identification rates of silhouettes and contours, straight-line versions, and fragmented versions, and quite reliable benchmark data about saliency of points and object segmentation into parts. We also developed several software tools to generate stimuli and to analyze the data in nonstandard ways. Our stimuli, norms and data, and software tools have great potential for further exploration of factors influencing contour-based object identification, and are also useful for researchers in many different disciplines (including computer vision) on a wide variety of research topics (e.g., priming, agnosia, perceptual organization, and picture naming). The full set of norms, data, and stimuli may be downloaded from www.psychonomic.org/archive/. PMID:15641406
De Winter, Joeri; Wagemans, Johan
In this paper, a mathematical model for multi-objective flexible scheduling problems is established and, combined with the Pareto non-dominated sorting method, a hybrid genetic algorithm based on improved DNA computation is put forward. To ensure the diversity of the optimal solution sets, an RNA quaternary encoding mode and genetic operators based on improved DNA computation were designed, adopting sub-area crossover and dynamic mutation imposed at the molecular level. Through simulation, the performance of the designed algorithm was tested and compared with the results of a standard genetic algorithm. The simulation results showed that the proposed algorithm provides optimum searching with better search ability, and the obtained scheduling results were fairly reasonable. This algorithm can effectively solve multi-objective flexible scheduling optimization problems.
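The Pareto non-dominated sorting step mentioned above rests on a simple dominance test. The sketch below shows that core step only (minimisation of all objectives is assumed; the DNA/RNA encoding and genetic operators of the paper are not reproduced): a solution belongs to the non-dominated front if no other solution is at least as good in every objective and strictly better in one.

```python
def dominates(a, b):
    """a dominates b: no worse in all objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(population):
    """Return the objective vectors not dominated by any other member."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]
```

Repeatedly extracting and removing the front yields the ranked fronts used to drive selection in Pareto-based genetic algorithms.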
Understanding the structure and evolution of web-based user-object networks is a significant task since they play a crucial role in e-commerce nowadays. This letter reports the empirical analysis on two large-scale web sites, audioscrobbler.com and del.icio.us, where users are connected with music groups and bookmarks, respectively. The degree distributions and degree-degree correlations for both users and objects are reported. We propose a new index, named collaborative similarity, to quantify the diversity of tastes based on the collaborative selection. Accordingly, the correlation between degree and selection diversity is investigated. We report some novel phenomena well characterizing the selection mechanism of web users and outline the relevance of these phenomena to the information recommendation problem.
Shang, Ming-Sheng; Lü, Linyuan; Zhang, Yi-Cheng; Zhou, Tao
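The collaborative similarity index in the letter above is not fully specified in the abstract; one plausible reading, sketched below under that assumption, scores a user's taste diversity as the mean pairwise co-selection (cosine) similarity of the objects the user chose. The function names and toy bipartite network are illustrative, not from the paper.

```python
from itertools import combinations

def object_similarity(users_of, a, b):
    """Cosine similarity of two objects from co-selection: shared users
    normalized by the geometric mean of their degrees."""
    ua, ub = users_of[a], users_of[b]
    if not ua or not ub:
        return 0.0
    return len(ua & ub) / (len(ua) * len(ub)) ** 0.5

def collaborative_similarity(users_of, selected):
    """Average pairwise similarity over the objects one user selected.
    Low values suggest diverse tastes, high values narrow tastes."""
    pairs = list(combinations(selected, 2))
    if not pairs:
        return 0.0
    return sum(object_similarity(users_of, a, b) for a, b in pairs) / len(pairs)

# Toy bipartite network: which users selected which objects.
users_of = {
    "rock": {"u1", "u2", "u3"},
    "metal": {"u1", "u2"},
    "jazz": {"u3"},
}
print(collaborative_similarity(users_of, ["rock", "metal"]))
```

A user whose selections cluster around heavily co-selected objects scores high; one mixing rarely co-selected objects scores low, which is the diversity signal the index is meant to capture.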
The assessment of the clinical performance of physicians-in-training is an important task. The critical care rotation is mandatory for most residency training programs and is designed to ensure that graduating trainees are able to initiate lifesaving management during medical emergencies. Ensuring that each resident fulfills the objectives of the rotation is of paramount importance. Unfortunately, the current assessment methods are subjective and suffer from many threats to validity and reliability that make the assessment inaccurate. In this review, the current assessment method is analyzed and causes of inaccuracy are identified. A new model for assessment that is continuous, structured, objective-based and at the point of care (SCOPA) is proposed based on the best available assessment methods. Such a model might be useful for assessing trainee performance in critical care as well as non-critical care rotations.
Accurate mapping of benthic habitats in the Florida Keys is essential to developing effective management strategies for this unique coastal ecosystem. In this study, we evaluated the applicability of hyperspectral imagery collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) for benthic habitat mapping in the Florida Keys. Overall accuracies of 84.3% and 86.7% were achieved for a group-level (3-class) and a code-level (12-class) classification, respectively, by integrating object-based image analysis (OBIA), hyperspectral image processing methods, and machine learning algorithms. Accurate and informative object-based benthic habitat maps were produced. Three commonly used image correction procedures (atmospheric, sun-glint, and water-column corrections) proved unnecessary for small-area mapping in the Florida Keys. Inclusion of bathymetry data in the mapping procedure did not increase the classification accuracy. This study indicates that hyperspectral systems are promising for accurate benthic habitat mapping at a fine level of detail.
Zhang, Caiyun; Selch, Donna; Xie, Zhixiao; Roberts, Charles; Cooper, Hannah; Chen, Ge
In this paper we present the functionality of ODRA (Object Database for Rapid Application development), a database programming methodology currently under development that works fully on object-oriented principles. Its database programming language is called SBQL (Stack-Based Query Language). We discuss several concepts in ODRA: how ODRA works, how the ODRA runtime environment operates, the interoperability of ODRA with .NET and Java, and ODRA's integration with web services and XML. Query optimization is one of the stages currently under development in ODRA, so we present the prior work done in ODRA on query optimization, together with a new fusion algorithm for how ODRA can handle joins over collections such as sets, lists, and arrays for query optimization.
Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and a flexible computational architecture. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is established for reducing the system power consumption of cloud computing and improving the profit of service providers. For this scheduling model, a solving method based on a multi-objective genetic algorithm (MO-GA) is designed, with the research focused on encoding rules, crossover operators, selection operators, and the method of sorting Pareto solutions. Based on the open-source cloud computing simulation platform CloudSim, the proposed algorithm is compared with existing scheduling algorithms; the results show that it obtains a better solution and provides a balance among the performance of multiple objectives.
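The Pareto sorting step mentioned above can be made concrete. The sketch below implements plain non-dominated sorting for minimization objectives (e.g., power consumption and negative provider profit); it is an illustrative baseline, not the paper's MO-GA, and the sample points are invented.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into successive Pareto fronts."""
    remaining = list(points)
    fronts = []
    while remaining:
        # The current front: points dominated by no other remaining point.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Toy objective vectors: (power consumption, negative profit).
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(non_dominated_sort(pts))  # → [[(1, 5), (2, 2), (5, 1)], [(3, 3)], [(4, 4)]]
```

In an MO-GA, the front index would then drive selection pressure, with earlier fronts preferred.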
A moving object database (MOD) engine is the foundation of Location-Based Service (LBS) information systems, and continuous queries are important in the spatial-temporal reasoning of a MOD. Communication costs were the bottleneck for improving query efficiency until the rectangular safe region algorithm partly solved this problem. However, this algorithm can be further improved, as we demonstrate with a dynamic-interval-based continuous query algorithm for moving objects. Our algorithm adopts two components: circular safe regions and dynamic intervals. Theoretical proof and experimental results show that our algorithm substantially outperforms both traditional periodic monitoring and the rectangular safe region algorithm in terms of monitoring accuracy, communication costs, and server CPU time. Moreover, in our algorithm the mobile terminals do not need any computational ability.
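The circular safe region idea can be sketched briefly: as long as an object's reported position stays inside its circle, the server can skip re-evaluating the continuous query. This is a minimal server-side illustration under assumed names; the dynamic-interval component of the paper's algorithm is not reproduced.

```python
import math

class CircularSafeRegion:
    """A circle around an object's last query-relevant position."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius

    def contains(self, pos):
        return math.dist(pos, self.center) <= self.radius

def process_update(region, pos, reevaluate):
    """Skip the expensive query re-evaluation while the object stays
    inside its safe region; otherwise run it."""
    if region.contains(pos):
        return "skipped"
    return reevaluate(pos)

region = CircularSafeRegion(center=(0.0, 0.0), radius=5.0)
print(process_update(region, (1.0, 2.0), lambda p: "reevaluated"))  # inside → skipped
print(process_update(region, (6.0, 0.0), lambda p: "reevaluated"))  # outside → reevaluated
```

A circle needs only a center and radius to check, which is cheaper to maintain than the rectangle's four bounds.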
In this paper, we address a highly enhanced compression scheme for the case of multiple objects in Integral Imaging (InIm), which uses sub-images (SIs) to segment each object and removes the motion vectors (MVs) of the residual image array transformed from the Sub-Image Array (SIA). In the pick-up process, the perspectives passing through a virtual pinhole array are recorded as an Elemental Image Array (EIA), from which the SIA is generated. The similarity enhancement among SIs is expected to improve compression efficiency, but the compression efficiency of an EIA picked up from multiple objects does not match that of an EIA picked up from a single simplified object. In the proposed scheme, the depth of the objects is computed from two adaptive SIs located to the horizontal left and right of a reference SI positioned at the center of the SIA. A depth map image generated from the two adaptive SIs and the reference SI is used to segment each object according to its distance. The resulting adaptive object-segmented SI is motion-estimated from the original SIA based on MSE to generate a motion-compensated object-segmented SIA; the SIAs from the segmented objects are then combined into a motion-compensated SIA, which is transformed into a residual SIA to minimize spatial redundancy and finally compressed with MPEG-4. The proposed algorithm shows better compression efficiency than baseline JPEG and the conventional EIA compression scheme.
Lee, Hyoung-Woo; Lee, Ju-Han; Kang, Ho-Hyun; Kim, Eun-Soo
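The MSE-based motion estimation mentioned in the scheme above boils down to an exhaustive search for the candidate block with the lowest mean squared error. The 1-D sketch below shows only that matching criterion; block names and sample values are invented for illustration.

```python
def mse(a, b):
    """Mean squared error between two equal-length blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def best_match(block, candidates):
    """Exhaustive motion search: index of the candidate minimizing MSE."""
    errors = [mse(block, c) for c in candidates]
    return min(range(len(errors)), key=errors.__getitem__)

reference = [10, 20, 30, 40]
candidates = [[0, 0, 0, 0], [11, 19, 31, 40], [40, 30, 20, 10]]
print(best_match(reference, candidates))  # → 1 (the near-identical block)
```

In practice the candidates would be 2-D pixel blocks drawn from a search window in the original SIA, and the winning offset becomes the motion vector.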
Visual image interpretation and digital image classification have been used to map and monitor mangrove extent and composition for decades. The availability of high-spatial-resolution hyperspectral sensors can potentially improve our ability to differentiate mangrove species. However, little research has explored the use of pixel-based and object-based approaches on high-spatial-resolution hyperspectral datasets for this purpose. This study assessed the ability of CASI-2 data for mangrove species mapping ...
The textile and clothing industry, aware of market evolution, cannot neglect the demand for comfort, an increasingly real exigency of clothing consumers. There is an urgent need to evaluate and quantify the comfort properties of textiles in general. This work studies different types of lightweight wool fabrics based on the objective evaluation of thermophysiological and sensorial comfort, according to real preferences, with the g...
Broega, A. C.; Silva, Maria Elisabete
Object-based stochastic modelling techniques are routinely employed to generate multiple realisations of the spatial distribution of sediment properties in settings where data density is insufficient to construct a unique deterministic facies architecture model. The challenge is to limit the wide range of possible outcomes of the stochastic model. Ideally, this is done by direct validation against the ‘real-world’ sediment distribution. In a reservoir setting this is impossible because of the li...
Geel, C. R.; Donselaar, M. E.
Background: Given the documented physical activity disparities among low-income minority communities and the increased focus on socio-ecological approaches to addressing physical inactivity, efforts aimed at understanding how the built environment supports physical activity are needed. This community-based participatory research (CBPR) project investigates perceptions of walking trails in a high-minority southern community and objectively examines walking trails....
Zoellner Jamie; Hill Jennie L; Zynda Karen; Sample Alicia D; Yadrick Kathleen
J2ME services play an important role in the communication industry. In this paper, we discuss and analyze consumptive behaviour based on an object pool with RMS capabilities. We discuss and analyze different aspects of RMS mining techniques and their behaviour on mobile devices, and we analyze which method or rule for implementing services is more suitable for mobile devices. The method presented in this paper is beneficial for analyzing large amounts of data on consumptive behav...
In this study, the effect of motor expertise on an object-based mental rotation task was investigated. 60 males and 60 females (40 soccer players, 40 gymnasts, and 40 non-athletes, with equal numbers of males and females in each group) solved a psychometric mental rotation task with both cube and human figures. The results revealed that all participants had higher mental rotation accuracy for human figures than for cube figures, and that the gender difference was reduced with human f...
Jansen, Petra; Lehmann, Jennifer
Jing Liu; Xing-Guo Luo; Xing-Ming Zhang; Fan Zhang; Bai-Nan Li
This paper considers how object-based learning (OBL) can be used to complement the reflective skills development systems that are commonplace in UK universities. It describes how some UCL students had difficulty understanding the concept of such a system and choosing skills to develop. We therefore began developing a series of OBL activities that could be used to help students understand how the system should be used and to identify their skill strengths and weaknesses.
In this paper we report the results of an experiment with automated landform delineation and classification from digital elevation models (DEMs) using object-based image analysis (OBIA). Archaeologists rely on accurate and detailed geomorphological maps to predict and interpret the location of archaeological sites. However, they have been using high-resolution DEMs primarily for visual interpretation and expert-judgement classification of landform. OBIA can perform these classifications much ...
In this paper we present a technique for shape similarity estimation for content-based indexing and retrieval over large image databases. Here the high curvature points are detected using wavelet decomposition. The feature set is extracted under the framework of polygonal approximation. It uses simple features extracted at high curvature points. The experimental result and comparisons show the performance of the proposed technique. This technique is also suitable to be extended to the retrieval of 3D objects.
Quddus, Azhar; Cheikh, Faouzi A.; Gabbouj, Moncef
The high spatial resolution of state-of-the-art commercial satellite imagery provides a good basis for recognising and monitoring even small-scale structural changes within nuclear facilities and for planning routine and/or challenge inspections of nuclear sites. Despite the advantages of the improved spatial resolution, some problems may make the interpretation of changes more difficult: firstly, the results of the change analysis can be very complex and unclear at a glance; secondly, shadow formation and off-nadir images due to different sensor and solar conditions at the acquisition times can cause false signals or obscure real changes. In view of the fast-growing amount of data from different sensor types, an effective change detection procedure for safeguards purposes must meet several requirements: i. The techniques involved should possess a certain amount of robustness with respect to small misregistration errors, different atmospheric conditions at the acquisition dates, and off-nadir angles. ii. Given large multisensor data sets, the procedure should operate as automatically as possible. iii. The procedure has to include techniques to initially pinpoint those parts of a scene in which significant changes have taken place. iv. All image areas indicating changes should then be subject to a detailed classification and interpretation procedure. We have already investigated pixel-based change detection methods for routine nuclear verification, based on recently published visualisation and change detection algorithms: canonical correlation analysis (MAD transformation) to enhance the change information in the difference images, and Bayesian techniques for the automatic determination of significant thresholds. Some steps have been taken towards combining pixel- and object-oriented approaches, i.e.
MAD transformation of the image data and object-oriented post-classification of the changes, object extraction for the image data, or MAD transformation of the objects and post-classification of the change objects. Another approach involves a solely object-oriented change detection technique: object extraction, semantic classification, and post-classification comparison by means of a change matrix. The object-oriented change analysis procedures are carried out with a relatively new image analysis technology, eCognition (http://www.definiens-imaging.com)
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
Danyali, Habibiollah; Mertins, Alfred
Moving object detection is a fundamental step in many vision-based applications, and background subtraction is the typical method. Many background models have been introduced to deal with different problems. The mixture-of-Gaussians method offers a good balance between accuracy and complexity and is used frequently by many researchers, but it still cannot provide satisfactory results in some cases. In this paper, we address this problem by introducing a post-process for the initial results of the mixture-of-Gaussians method. An over-segmentation based on color information is used to segment the input frame into patches; the goal of the segmentation is to split each image into regions that are likely to belong to the same object. After moving-shadow suppression, the outputs of the mixture of Gaussians are combined with the color-clustered regions in a module for area confidence measurement. In this way, two major segmentation errors can be corrected. Finally, by connected component labeling, blobs with too small an area are filtered out and the contours of moving objects are extracted. Experimental results show that the proposed approach can significantly enhance segmentation results.
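To make the background-model idea concrete, the sketch below keeps a single running Gaussian per pixel, a deliberate simplification of the mixture-of-Gaussians model the paper builds on (the full mixture, shadow suppression, and color post-processing are not reproduced). Frames are flat lists of grey values; the threshold constant k and learning rate alpha are illustrative choices.

```python
class RunningGaussianBackground:
    """One Gaussian per pixel: flag a pixel as foreground when it deviates
    from the background mean by more than k standard deviations."""
    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k
        self.mean, self.var = None, None

    def apply(self, frame):
        if self.mean is None:
            # First frame initializes the background model.
            self.mean = [float(v) for v in frame]
            self.var = [25.0] * len(frame)
            return [0] * len(frame)
        mask = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            foreground = d * d > (self.k ** 2) * self.var[i]
            mask.append(1 if foreground else 0)
            if not foreground:  # update background statistics only
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

bg = RunningGaussianBackground()
bg.apply([100, 100, 100, 100])          # first frame: builds the model
print(bg.apply([100, 101, 200, 100]))   # → [0, 0, 1, 0]: pixel 2 jumped
```

The real mixture model keeps several such Gaussians per pixel with weights, which handles multimodal backgrounds like waving trees.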
Downed logs on the forest floor provide habitat for species, fuel for forest fires, and function as a key component of forest nutrient cycling and carbon storage. Ground-based field surveying is the conventional method for mapping and characterizing downed logs, but it is limited. In addition, optical remote sensing methods have not been able to map these ground targets due to the lack of optical sensor penetrability into the forest canopy and limited sensor spectral and spatial resolutions. Lidar (light detection and ranging) sensors have become a more viable and common data source in forest science for detailed mapping of forest structure. This study evaluates the utility of discrete, multiple-return airborne lidar-derived data for image object segmentation and classification of downed logs in a disturbed forested landscape, and the efficiency of rule-based object-based image analysis (OBIA) and classification algorithms. Downed log objects were successfully delineated and classified from lidar-derived metrics using an OBIA framework: 73% of digitized downed logs were completely or partially classified correctly. Over-classification occurred in areas with large numbers of logs clustered in close proximity to one another and in areas with vegetation and tree canopy. The OBIA methods were found to be effective but inefficient in terms of automation and analyst time in the delineation and classification of downed logs in the lidar data.
Military land managers and decision makers face an ever increasing challenge to balance maximum flexibility for the mission with a diverse set of multiple land use, social, political, and economic goals. In addition, these goals encompass environmental requirements for maintaining ecosystem health and sustainability over the long term. Spatiotemporal modeling and simulation in support of adaptive ecosystem management can be best accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques through a geographic information system (GIS)-based backbone. Recently, an object-oriented (OO) architectural framework was developed for IDLAMS (OO-IDLAMS). This OO-IDLAMS Prototype was built upon and leverages from the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS is an object-based architectural framework that affords a more integrated, dynamic, and flexible approach to comprehensive ecosystem modeling than was possible with the GIS-based integration approach of the original IDLAMS. The flexibility, dynamics, and interoperability demonstrated through this case study of an object-oriented approach have the potential to provide key technology solutions for many of the military's multiple-use goals and needs for integrated natural resource planning and ecosystem management.
Sydelko, P. J.; Dolph, J. E.; Majerus, K. A.; Taxon, T. N.
Multi-temporal LiDAR DTMs are used for the development and testing of a method for geomorphological change analysis in western Austria. Our test area is located on a mountain slope in the Gargellen Valley in western Austria. Six geomorphological features were mapped by using stratified Object-Based Image Analysis (OBIA) and segmentation optimization using 1m LiDAR DTMs of 2002 and 2005. Based on the 2002 data, the scale parameter for each geomorphological feature was optimized by comparing ma...
The devastating series of fire events that occurred during the summers of 2007 and 2009 in Greece made evident the need to develop an operational mechanism for mapping burned areas in an accurate and timely fashion. In this work, Système pour l’Observation de la Terre (SPOT)-4 HRVIR images are introduced into an object-based classification environment in order to develop a classification procedure for burned area mapping. The development of the procedure was based on two images and then ...
This paper proposes the Multi-Objective Optimization (MOO) of a Vehicle Active Suspension System (VASS) with a hybrid Differential Evolution (DE)-based Biogeography-Based Optimization (BBO) algorithm (DEBBO) for tuning the parameters of a Proportional-Integral-Derivative (PID) controller. First a conventional PID controller, then BBO, a rising nature-inspired global optimization procedure based on the study of the ecological distribution of biological organisms, and finally a hybridized DEBBO algorithm, which inherits the behaviours of BBO and DE, were used to find the tuning parameters of the PID controller to improve the performance of the VASS, with a MOO function as the performance index. Simulations of the passive system and of the active system with a PID controller, with and without optimization, were performed under dual- and triple-bump road disturbances in the MATLAB/Simulink environment. The simulation results show the effectiveness of the DEBBO-based PID (DEBBOPID) in achieving the goal.
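The PID control law being tuned above is compact enough to sketch. The example below runs a discrete PID loop on a toy first-order plant; the plant, gains, and time step are illustrative stand-ins, not the paper's suspension model or its DEBBO-tuned parameters.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, steps=500, dt=0.01):
    """Discrete PID loop on a toy first-order plant dy/dt = -y + u.
    Returns the plant output after `steps` Euler steps."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv  # PID control law
        y += dt * (-y + u)                         # Euler step of the plant
        prev_err = err
    return y

# Hand-picked (not optimizer-tuned) gains drive the output near the setpoint.
print(round(simulate_pid(kp=5.0, ki=2.0, kd=0.1), 3))
```

An optimizer such as DEBBO would wrap this simulation, scoring each candidate (kp, ki, kd) with the multi-objective performance index and evolving the population toward better trade-offs.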
Whisker-based object localization requires activation and plasticity of somatosensory and motor cortex. These parts of the cerebral cortex receive strong projections from the cerebellum via the thalamus, but it is unclear whether and to what extent cerebellar processing may contribute to such a sensorimotor task. Here, we subjected knock-out mice, which suffer from impaired intrinsic plasticity in their Purkinje cells and long-term potentiation at their parallel fiber-to-Purkinje cell synapses (L7-PP2B), to an object localization task with a time response window (RW). Water-deprived animals had to learn to localize an object with their whiskers, and based upon this location they were trained to lick within a particular period ("go" trial) or refrain from licking ("no-go" trial). L7-PP2B mice were not ataxic and showed proper basic motor performance during whisking and licking, but were severely impaired in learning this task compared with wild-type littermates. Significantly fewer L7-PP2B mice were able to learn the task at long RWs. Those L7-PP2B mice that eventually learned the task made unstable progress, were significantly slower in learning, and showed deficiencies in temporal tuning. These differences became greater as the RW became narrower. Trained wild-type mice, but not L7-PP2B mice, showed a net increase in simple spikes and complex spikes of their Purkinje cells during the task. We conclude that cerebellar processing, and potentiation in particular, can contribute to learning a whisker-based object localization task when timing is relevant. This study points toward a relevant role of cerebellum-cerebrum interaction in a sophisticated cognitive task requiring strict temporal processing. PMID:24478374
Rahmati, Negah; Owens, Cullen B; Bosman, Laurens W J; Spanke, Jochen K; Lindeman, Sander; Gong, Wei; Potters, Jan-Willem; Romano, Vincenzo; Voges, Kai; Moscato, Letizia; Koekkoek, Sebastiaan K E; Negrello, Mario; De Zeeuw, Chris I
We assessed the potential of multi-spectral GeoEye imagery for biodiversity assessment in an urban context in Bangalore, India. Twenty-one grids of 150 by 150 m were randomly located in the city center and all tree species within these grids were mapped in the field. The six most common species, collectively representing 43% of the total trees sampled, were selected for mapping using pixel-based and object-based approaches. All pairs of species were separable based on spectral reflectance values i...
Shivani Agarwal; Lionel Sujay Vailshery; Madhumitha Jaganmohan; Harini Nagendra
Nonlinear object tracking from noisy measurements is a basic skill and a challenging task in mobile robotics, especially in dynamic environments. The particle filter is a useful tool for nonlinear object tracking with non-Gaussian noise, and such tracking needs the real-time processing capability of the particle filter. The number of particles in a traditional particle filter is fixed, which can lead to a lot of unnecessary computation. To address this issue, a confidence-level-based new adaptive particle filter (NAPF) algorithm is proposed in this paper. The algorithm utilizes the idea of a confidence interval: the least number of particles for the next time instant is estimated according to the confidence level and the variance of the estimated state. Accordingly, an improved systematic resampling algorithm is utilized in the new particle filter. NAPF can effectively reduce computation while ensuring the accuracy of nonlinear object tracking. The simulation results and the ball-tracking results of the robot verify the effectiveness of the algorithm.
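The systematic resampling step that the algorithm above improves upon can be sketched in its baseline form: one random offset generates evenly spaced pointers over the cumulative weights, giving low-variance O(N) resampling. The weights below are invented, and the paper's improvement to this step is not reproduced.

```python
import random

def systematic_resample(weights, u=None):
    """Baseline systematic resampling: return the indices of the
    particles selected for the next generation."""
    n = len(weights)
    total = sum(weights)
    if u is None:
        u = random.random()  # single random offset in [0, 1)
    cum, acc = [], 0.0
    for w in weights:
        acc += w
        cum.append(acc / total)
    cum[-1] = 1.0  # guard against floating-point drift
    indices, i = [], 0
    for j in range(n):
        pos = (j + u) / n  # evenly spaced pointers
        while cum[i] < pos:
            i += 1
        indices.append(i)
    return indices

# The heavy particle (weight 0.7) is duplicated; light ones tend to vanish.
print(systematic_resample([0.1, 0.1, 0.7, 0.1], u=0.5))  # → [1, 2, 2, 2]
```

Because the pointers are evenly spaced, a particle with weight w is selected either floor(N·w) or ceil(N·w) times, which keeps the resampling variance low.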
Object recognition algorithms are fundamental tools in the automatic matching of geometric shapes within a background scene. Many approaches have been proposed in the past to solve the object recognition problem. Two of the key aspects that distinguish them in terms of their practical usability are: (i) the type of input model description and (ii) the comparison criteria used. In this paper we introduce a novel scheme for 3D object recognition based on line segment representation of the input shapes and comparison using the Hausdorff distance. This choice of model representation provides the flexibility to apply the scheme in different application areas. We define several variants of the Hausdorff distance to compare the models within the framework of well-defined metric spaces. We present a matching algorithm that efficiently finds a pattern in a 3D scene. The algorithm approximates a minimization of the Hausdorff distance; the output error due to the approximation is guaranteed to be within a known constant bound. Practical results are presented for two classes of objects: (i) polyhedral shapes extracted from segmented range images and (ii) secondary structures of large molecules. In both cases the use of our approximate algorithm allows the pattern to be matched correctly against the background while achieving the efficiency necessary for practical use of the scheme. In particular, performance is improved substantially with minor degradation of the matching quality.
Guerra, C; Pascucci, V
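The Hausdorff distance at the core of the matching scheme above is easy to state for finite point sets; the brute-force sketch below shows the symmetric form on 2-D points (the paper works with line segments in 3-D and faster approximate minimization, which are not reproduced here; the sample point sets are invented).

```python
import math

def directed_hausdorff(A, B):
    """h(A, B): worst-case distance from a point of A to its nearest
    neighbour in B (brute force, fine for small point sets)."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.5, 0), (0.5, 1), (1.5, 0), (1.5, 1)]
print(hausdorff(square, shifted))  # → 0.5
```

Matching then amounts to searching over transformations of the pattern for the placement minimizing this distance to the scene.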
Water is a special kind of natural background. Images acquired with a laser imaging system for underwater objects always contain speckle noise caused by the backscattering of water and suspended particles, which makes it inconvenient to extract image features. In this paper, a laser underwater imaging system that uses a range-gated technique to avoid backscatter, with an imaging distance of more than 20 meters, and its experimental results in a boat pool are introduced. According to the inherent mechanism of the underwater laser image, we propose a fractal-characters-based method for segmenting the natural scene to find the artificial object in the image. The method adopts region segmentation by the Hausdorff dimension obtained with the blanket covering method, and relies on the different distributions of texture characteristics and multi-scale analysis to carry out the image segmentation. Experiments show the approach is suitable for texture segmentation and object finding in images acquired by a laser imaging system for underwater objects.
Chang, Yanjun; Peng, Fuyuan; Luo, Lin; Zhang, Ying
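The fractal dimension estimate driving the segmentation above can be illustrated with box counting, a common estimator related to, but simpler than, the blanket covering method the paper actually uses; the point set and scales below are invented for illustration.

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes needed to cover the points."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def box_dimension(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps): the
    box-counting estimate of fractal dimension."""
    xs = [math.log(1 / e) for e in scales]
    ys = [math.log(box_count(points, e)) for e in scales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A densely sampled line segment should have dimension close to 1.
line = [(i / 1000.0, 0.0) for i in range(1000)]
print(round(box_dimension(line, [0.1, 0.05, 0.02, 0.01]), 2))  # → 1.0
```

Natural textures (water clutter) and man-made objects tend to yield different dimension estimates, which is what makes the measure useful as a segmentation feature.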
Previously, urban growth patterns were described and measured by pixel-by-pixel comparison of satellite images: the geographic extent, patterns, and types of urban growth are derived from satellite images separated in time. However, the pixel-by-pixel comparison approach suffers from several drawbacks. Firstly, slight errors in image geo-referencing can cause false detection of changes. Secondly, it is difficult to recognize and correct artifact changes induced by data noise and data processing errors. Thirdly, only limited information can be derived. In this paper, we present a new object-based method to describe and quantify urban growth patterns. The different types of land cover are classified from sequential satellite images as urban objects. The geometric and shape attributes of the objects and the spatial relationships between them are employed to identify the different types of urban growth pattern. The algorithms involved in the object-based method are implemented in C++, and the software user interface is developed using ArcObjects and VB.Net. A simulated example is given to demonstrate the utility and effectiveness of this new method.
Yu, Bailang; Liu, Hongxing; Gao, Yige; Wu, Jianping
Many image watermarking schemes have been proposed in recent years, but they usually embed a watermark into the entire image without considering a particular object in the image that the image owner may be interested in. This paper proposes a watermarking scheme that can embed a watermark into an arbitrarily shaped object in an image. Before embedding, the image owner specifies an object of arbitrary shape that is of concern to him. The object is then transformed into the wavelet domain using the in-place lifting shape-adaptive DWT (SADWT), and a watermark is embedded by modifying the wavelet coefficients. To make the watermark robust and transparent, the watermark is embedded in the averages of wavelet blocks using a visual model based on the human visual system. The wavelet coefficients' n least significant bits (LSBs) are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy compression (e.g., JPEG, JPEG2000), scaling, noise addition, filtering, etc.
Essaouabi, A; Fegragui, F
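The paper embeds the watermark in wavelet-block averages guided by an HVS visual model. As a rough illustration of the block-average idea only, here is a generic quantization-index-modulation sketch that shifts a coefficient block so its mean lands on an even (bit 0) or odd (bit 1) multiple of a step size; the step `DELTA` and the uniform shift across the block are illustrative assumptions, not the authors' SADWT scheme.

```python
import numpy as np

DELTA = 8.0  # quantization step; illustrative, trades robustness against distortion

def embed_bit(block, bit):
    """Shift a coefficient block so its mean sits on an even (bit 0) or
    odd (bit 1) multiple of DELTA (quantization index modulation)."""
    m = float(np.mean(block))
    q = int(round(m / DELTA))
    if q % 2 != bit:
        q += 1 if m / DELTA >= q else -1   # move to the nearer level of the right parity
    return block + (q * DELTA - m)

def extract_bit(block):
    """Recover the bit from the parity of the quantized block mean."""
    return int(round(float(np.mean(block)) / DELTA)) % 2
```

The embedded bit survives any perturbation that moves the block mean by less than DELTA/2, which is the sense in which averaging over a block buys robustness.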
In this paper, we propose N-Dimensional (ND) Tensor Supervised Neighborhood Embedding (ND TSNE) for discriminant feature representation, which is used for view-based object recognition. ND TSNE uses a general Nth-order tensor discriminant and neighborhood-embedding analysis approach for object representation. The benefits of ND TSNE include: (1) a natural way of representing data without losing structure information, i.e., the information about the relative positions of pixels or regions; (2) a reduction in the small sample size problem, which occurs in conventional supervised learning because the number of training samples is much smaller than the dimensionality of the feature space; (3) preservation of the neighborhood structure in tensor feature space for object recognition and a good convergence property in the training procedure. With tensor-subspace features, random forests are used as a multi-way classifier for object recognition, which is much easier to train and test than a multi-way SVM. We demonstrate the performance advantages of our proposed approach over existing techniques using experiments on the COIL-100 and ETH-80 datasets.
Han, Xian-Hua; Chen, Yen-Wei; Ruan, Xiang
Information theory has the potential to provide a common language for the quantification of uncertainty and its reduction by choosing an optimally informative monitoring network layout. Numerous objectives based on information measures have been proposed in the recent literature, often focusing simultaneously on maximum information and minimum dependence between the chosen locations for data collection. We discuss these objective functions and conclude that a single-objective optimization of joint entropy suffices to maximize the collection of information. Minimum dependence is a secondary objective that automatically follows from the first, but has no intrinsic justification. Furthermore, it is demonstrated how the curse of dimensionality complicates the determination of information content for time series. In many cases found in the monitoring-network literature, discrete multivariate joint distributions are estimated from relatively little data, leading to the occurrence of spurious dependencies in the data, which change interpretations of previously published results. The aforementioned numerical challenges stem from inherent difficulties and subjectivity in determining information content. From information-theoretical logic it is clear that the information content of data depends on the state of knowledge prior to obtaining them. Making fewer assumptions in formulating this state of knowledge leads to higher data requirements. We further clarify the role of prior information in information content by drawing an analogy with data compression.
Weijs, S. V.; Huwald, H.; Parlange, M. B.
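The single-objective maximization of joint entropy argued for above can be sketched with a greedy selection over discretized station records. The plug-in entropy estimate from unique-row counts and the greedy strategy are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def joint_entropy(x):
    """Plug-in Shannon entropy (bits) of the joint distribution of the
    columns of x, estimated from the frequencies of unique rows."""
    _, counts = np.unique(np.asarray(x), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def greedy_select(data, k):
    """Greedily add the station (column) whose inclusion maximizes the
    joint entropy of the selected set."""
    chosen, remaining = [], list(range(data.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: joint_entropy(data[:, chosen + [j]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

A duplicated station adds no joint entropy, so the greedy criterion automatically avoids redundant locations, which is exactly why minimum dependence needs no separate objective. Note the plug-in estimate is exactly where the spurious-dependency problem discussed above bites when samples are few.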
The acquisition of 3D point data through both aerial laser scanning (ALS) and matching of aerial stereo images, coupled with advances in image-processing algorithms in recent years, provides opportunities to map land cover types with better precision than before. The present study applies Object-Based Image Analysis (OBIA) to 3D point cloud data obtained from matching of stereo aerial images, together with spectral data, to map land cover types of the Nord-Trøndelag county of Norway. The multi-resolution segmentation algorithm of the Definiens eCognition™ software is used to segment the scenes into homogeneous objects. The objects are then classified into different land cover types using rules created from the definitions given for each land cover type by the Norwegian Forest and Landscape Institute. The quality of the land cover map was evaluated using data collected in the field as part of the Norwegian National Forest Inventory. The results show that the classification has an overall accuracy of about 80% and a kappa index of about 0.65. OBIA is found to be a suitable method for utilizing 3D remote sensing data for land cover mapping in an effort to replace manual delineation methods.
Debella-Gilo, M.; Bjørkelo, K.; Breidenbach, J.; Rahlf, J.
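The reported overall accuracy (~80%) and kappa index (~0.65) both derive from a confusion matrix of classified versus field-reference labels. For reference, Cohen's kappa corrects the observed agreement for the agreement expected by chance:

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference classes,
    columns: classified classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n                                   # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (observed - expected) / (1.0 - expected)
```

A kappa of 0 means the classifier does no better than chance, 1 means perfect agreement; 0.65 with 80% overall accuracy is consistent with moderately unbalanced classes.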
Over the past few decades, clearing for shrimp farming has caused severe losses of mangroves in the Mekong Delta (MD) of Vietnam. Although the increasing importance of shrimp aquaculture in Vietnam has brought significant financial benefits to local communities, the rapid and largely uncontrolled increase in aquacultural area has contributed to a considerable loss of mangrove forests and to environmental degradation. Although different approaches have been used for mangrove classification, no approach to date has addressed the challenges of the special conditions found in the aquaculture-mangrove system of the Ca Mau province of the MD. This paper presents an object-based classification approach for estimating the percentage of mangroves in mixed mangrove-aquaculture farming systems, to assist the government in monitoring the extent of the shrimp farming area. The method comprises multi-resolution segmentation and classification of SPOT5 data using a decision tree approach as well as local knowledge from the region of interest. The results show accuracies higher than 75% for certain classes at the object level. Furthermore, we successfully detect areas with mixed aquaculture-mangrove land cover with high accuracy. Based on these results, mangrove development, especially within shrimp farming-mangrove systems, can be monitored. However, the mangrove forest cover fraction per object is affected by image segmentation and thus does not always correspond to the real farm boundaries. It therefore remains a serious challenge to accurately map mangrove forest cover within mixed systems.
Quoc Tuan Vo
The aim of this study is to propose and test a multi-level methodology for the detection of oil slicks in ENVISAT Advanced Synthetic Aperture Radar (ASAR) imagery, which can be used to support the identification of hydrocarbon seeps. We selected Andrusov Ridge in the Central Black Sea as the test study area, where extensive hydrocarbon seepage was known to occur continuously. Hydrocarbon seepage of tectonic or stratigraphic origin at the sea floor causes oily gas plumes to rise to the sea surface and form thin oil films called oil slicks. Microwave sensors such as synthetic aperture radar (SAR) are very suitable for ocean remote sensing, as they measure the backscattered radiation from the surface and reveal its roughness. Oil slicks dampen the sea waves, creating dark patches in the SAR image. The proposed methodology comprises three levels: visual interpretation, image filtering, and object-based oil spill detection. Level I, after data preparation with visual interpretation, includes dark-spot identification and the creation of subsets/scenes; the procedure then continues with the categorization of subsets/scenes into three cases based on the contrast difference between the dark spots and their surroundings. Level II prepares the subsets/scenes for segmentation by image and morphological filtering. Level III includes segmentation and feature extraction, followed by object-based classification. The object-based classification is applied with fuzzy membership functions defined by the extracted features of the ASAR subsets/scenes, where the parameters of the detection algorithms are tuned specifically for each case group. As a result, oil slicks are discriminated from look-alikes with an overall classification accuracy of 83% for oil slicks and 77% for look-alikes, obtained by averaging the three different cases. PMID:21380923
Akar, Sertac; Süzen, Mehmet Lutfi; Kaymakci, Nuretdin
This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and to explore the engineering potential of rank-order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, and the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark against the 1998 FERET fafc test shows above-average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.
Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.
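Rank-order coding represents a stimulus by the order in which units fire rather than by exact firing rates. A toy sketch, not ROCIT itself, of the encoding and of a geometric-weight decoding (the shrink factor is an illustrative assumption):

```python
import numpy as np

def rank_order_code(x):
    """Encode a stimulus as the order in which units would fire:
    indices sorted by descending activation (ties broken by index)."""
    return tuple(int(i) for i in np.argsort(-np.asarray(x, dtype=float), kind='stable'))

def decode(order, n, shrink=0.5):
    """Coarse reconstruction: each unit's value is a geometrically
    shrinking weight of its firing rank (earlier spike -> larger value)."""
    v = np.empty(n)
    for rank, unit in enumerate(order):
        v[unit] = shrink ** rank
    return v
```

Because only the order is kept, the code is invariant to a uniform scaling of the input intensities, one of the robustness properties that motivates the scheme.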
This paper deals with the comparison of planar parallel manipulator architectures based on a multi-objective design optimization approach. The manipulator architectures are compared with regard to their mass in motion and their regular workspace size, i.e., the objective functions. The optimization problem is subject to constraints on the manipulator dexterity and stiffness. For a given external wrench, the displacements of the moving platform have to be smaller than given values throughout the obtained maximum regular dexterous workspace. The contributions of the paper are highlighted with the study of 3-RPR, 3-RPR and 3-RPR planar parallel manipulator architectures, which are compared by means of their Pareto frontiers obtained with a genetic algorithm.
Chablat, Damien; Ur-Rehman, Raza; Wenger, Philippe
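The Pareto frontiers above collect the non-dominated trade-offs between the two objectives (mass in motion, minimized, and regular workspace size, maximized, e.g. by minimizing its negative). A minimal dominance filter, assuming all objectives are expressed as minimizations:

```python
def dominates(a, b):
    """True if point a dominates point b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A genetic algorithm such as the one used in the paper evolves a population towards this non-dominated set instead of enumerating all candidates.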
Feature selection (FS) is considered an important preprocessing step in machine learning and pattern recognition, and feature evaluation is the key issue in constructing a feature selection algorithm. The feature selection process can also reduce noise and thereby enhance classification accuracy. In this article, a feature selection method based on fuzzy similarity measures and a multi-objective genetic algorithm (FSFSM-MOGA) is introduced, and the performance of the proposed method is evaluated on published data sets from UCI. The results show the efficiency of the method compared with the conventional version, and that using multi-objective genetic algorithms and fuzzy similarity measures in the CFS method can improve it.
Hassan Nosrati Nahook
Moving object detection is an important research topic in computer vision. The optical flow method is a principal approach, but its computational complexity limits its use. A moving object detection algorithm based on three-frame difference and optical flow is proposed. The calculation of optical flow is simplified: Harris corners are detected, and only the corners are selected for computing optical flow information, which reduces the algorithm's complexity. Because the detected moving-target area is incomplete, the three-frame difference method is introduced as a supplement. Experimental results show that the algorithm runs in real time and gives better results than either of the two algorithms alone.
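The three-frame difference supplement can be sketched as follows; the threshold value is an illustrative assumption, and the full algorithm would combine this mask with optical flow computed only at Harris corners.

```python
import numpy as np

def three_frame_mask(f1, f2, f3, thresh=15):
    """Binary motion mask for the middle frame: a pixel counts as moving
    only if it changed both from f1 to f2 and from f2 to f3 (logical AND
    of the two thresholded absolute frame differences)."""
    d1 = np.abs(f2.astype(int) - f1.astype(int)) > thresh
    d2 = np.abs(f3.astype(int) - f2.astype(int)) > thresh
    return d1 & d2
```

The AND of the two differences suppresses the "ghost" left at the object's old position by simple two-frame differencing, keeping only the object's position in the middle frame.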
The anisotropic coefficients of Hill's yield criterion are determined through a novel genetic-algorithm-based multi-objective optimization approach. The classical method of determining the anisotropic coefficients is sensitive to the effective plastic strain. In the present procedure, that limitation is overcome using a genetically evolved meta-model of the entire stress-strain curve, obtained from uniaxial tension tests conducted in the rolling and transverse directions, and from biaxial tension. Then, an effective strain that causes the least error in terms of two theoretically derived objective functions is chosen. The anisotropic constants evolved through genetic algorithms correlate very well with the classical results. This approach is expected to be successful for more complex constitutive equations as well.
Hariharan, Krishnaswamy; Chakraborti, Nirupam; Barlat, Frédéric; Lee, Myoung-Gyu
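The classical identification that the GA results are correlated against derives the Hill'48 coefficients from the Lankford r-values measured at 0°, 45°, and 90° to the rolling direction, at one chosen effective strain. A sketch under the usual G + H = 1 normalization (rolling-direction yield stress as reference):

```python
def hill48_coefficients(r0, r45, r90):
    """Classical identification of the Hill'48 anisotropy coefficients
    F, G, H, N from the Lankford r-values, normalized so that G + H = 1:
        H/G = r0,   H/F = r90,   r45 = N/(F + G) - 1/2.
    """
    G = 1.0 / (1.0 + r0)
    H = r0 / (1.0 + r0)
    F = H / r90
    N = (r45 + 0.5) * (F + G)
    return F, G, H, N
```

Because the r-values themselves vary with plastic strain, these coefficients inherit the strain sensitivity that the paper's meta-model approach is designed to overcome.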
We estimate the fraction of mass that is composed of compact objects in gravitational lens galaxies. This study is based on microlensing measurements (obtained from the literature) of a sample of 29 quasar image pairs seen through 20 lens galaxies. We determine the baseline for no microlensing magnification between two images from the ratios of emission-line fluxes. Relative to this baseline, the ratio between the continua of the two images gives the difference in microlensing magnification. The histogram of observed microlensing events peaks close to no magnification and is concentrated below 0.6 magnitudes, although two events of high magnification, $\Delta m \sim 1.5$, are also present. We study the likelihood of the microlensing measurements using frequency distributions obtained from simulated microlensing magnification maps for different values of the fraction of mass in compact objects, $\alpha$. The concentration of microlensing measurements close to $\Delta m \sim 0$ can be explained only by simulati...
Mediavilla, E; Falco, E; Motta, V; Guerras, E; Canovas, H; Jean, C; Oscoz, A; Mosquera, A M
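The magnification difference between two lensed images follows from comparing the continuum flux ratio with the emission-line baseline, since the extended line-emitting region is essentially immune to microlensing. Schematically (the sign convention here is an assumption):

```python
import math

def microlensing_dm(continuum_ratio, line_ratio):
    """Microlensing magnification difference (magnitudes) between two
    lensed images: the continuum flux ratio of image 2 to image 1,
    measured relative to the no-microlensing baseline set by the
    emission-line flux ratio of the same pair."""
    return 2.5 * math.log10(continuum_ratio / line_ratio)
```

When the continuum ratio equals the line ratio the result is zero magnitudes, the "no microlensing" value around which the observed histogram is concentrated.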
The main goal of this exploratory project was to quantify seedling density in post-fire regeneration sites, with the following objectives: to evaluate the application of second-order image texture (SOIT) in image segmentation, and to apply the object-based image analysis (OBIA) approach to develop a hierarchical classification. Using image texture, we successfully developed a methodology to classify hyperspatial (high spatial resolution) imagery to the fine detail level of tree crowns, shadows, and understory, while still allowing discrimination between density classes and between mature forest and burn classes. At the most detailed hierarchical level (Level I), classification accuracies reached 78.8%; a Level II stand-density classification produced accuracies of 89.1%, and the same accuracy was achieved by the coarse general classification at Level III. Our interpretation of these results suggests that hyperspatial imagery can be applied to post-fire forest density and regeneration mapping.
L. Monika Moskal
Impervious surface is an important component of the urban underlying surface, as well as an important monitoring index for changes in the urban ecological system and environment. However, accurate impervious surface extraction is still a challenge. This paper uses the color, shape, and overall heterogeneity features of high spatial resolution remote sensing images to extract the impervious surface. An edge-based image segmentation algorithm is put forward to fuse heterogeneous objects; it integrates edge features with a multi-scale segmentation algorithm and uses the edge information to guide the generation of image objects. Results showed that this method can greatly improve the accuracy of image segmentation. Accuracy assessment indicated that the overall impervious surface classification accuracy and the Kappa coefficient were 87% and 0.84, respectively.
Liu, Aixia; Zhao, Xiaojie; Wang, Jing; He, Ting
The objective of this thesis is to provide a possible solution to one of the current problems in dermatology: the lack of suitable methods to objectively evaluate the severity of dermatological lesions. An image-based system is developed with the goal of automatically obtaining summary values that characterize the lesion and help to track the evolution of the disease. The thesis starts by analyzing an accurate type of equipment with which to collect dermatological images. Later, a method to segment the different areas embedded in dermatological lesions is developed. The results of the segmentation task are used to obtain values that characterize the lesion. The last part of the thesis considers the possibility of including more bands in the analysis in order to increase the accuracy of the proposed method.
Gomez, David Delgado
Problem statement: Diabetes mellitus, or the diabetes epidemic, is one of the most prevalent diseases worldwide, with an increasing toll of disability, complications, and death. An early diagnosis helps patients and medical practitioners reduce the burden of diabetes. Approach: In this research, we propose a framework for a system using rule-based reasoning and object-oriented methodologies to diagnose both Type 1 and Type 2 diabetes. Results: Extensive literature reviews were carried out, and questionnaires were distributed to medical practitioners to build the knowledge base. This knowledge base stores the rules needed to perform a diagnosis. Conclusion: This study presents only the proposed framework and not the system itself. We believe that its future implementation can provide great benefits to medical practitioners as well as to diabetics.
For an unknown environment, making a mobile robot identify a target object and locate it autonomously is a very challenging problem. In this paper, a novel multi-sensor fusion method based on a camera and a laser range finder (LRF) for mobile manipulation is proposed. Although a camera can acquire large quantities of information, it does not directly provide 3D data about the environment; moreover, camera image processing is complex and easily affected by changes in ambient light. In view of the LRF's ability to directly obtain the 3D coordinates of the environment and its stability against outside influences, and the camera's superiority in acquiring rich color information, the two sensors are combined to exploit their respective advantages, yielding more accurate measurements as well as simpler information processing. To overlay the camera image with the measurement point cloud of the pitching LRF and to reconstruct a 3D image that includes per-pixel depth information, the homogeneous transformation model of the system is built. Then, based on the combination of color features from the camera image and shape features from the LRF measurement data, autonomous identification and location of the target object are achieved. To extract the shape features of the object, a two-step method is introduced, and a sliced point cloud algorithm is proposed for preliminary classification of the LRF measurement data. The effectiveness of the proposed method is validated by experimental testing and analysis carried out on a mobile manipulator platform. The experimental results show that with this method the robot can not only identify the target object autonomously, but also determine whether it can be manipulated, and acquire a proper grasping location.
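The homogeneous transformation model used to overlay LRF points on the camera image amounts to a rigid transform into the camera frame followed by a pinhole projection. A sketch, where the extrinsic matrix `T_cam_lrf` and intrinsics `K` are assumed known from calibration (names are illustrative):

```python
import numpy as np

def project_lrf_points(points_lrf, T_cam_lrf, K):
    """Overlay LRF points on the image: lift the 3D points to homogeneous
    coordinates, move them into the camera frame with the 4x4 extrinsic
    transform, then apply the 3x3 pinhole intrinsics and dehomogenize."""
    pts = np.asarray(points_lrf, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 4) homogeneous points
    cam = (T_cam_lrf @ homo.T)[:3]                    # (3, N) camera-frame points
    uv = K @ cam                                      # (3, N) homogeneous pixels
    return (uv[:2] / uv[2]).T                         # (N, 2) pixel coordinates
```

Each projected LRF point can then be tagged with the color of the pixel it lands on, producing the color-plus-depth reconstruction the fusion method relies on.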
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree-cover cluster feature and an arithmetic shadow/tree ratio feature. We then extended the OFA comparison approach with an OFA matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.
Object-based attention operates on perceptual objects, opening the possibility that the costs and benefits humans pay to move attention between objects might be affected by the nature of the stimuli. The current study reports two experiments with adults and 8-month-old infants investigating whether object-based attention is affected by the type of stimulus (faces vs. non-face stimuli). Using the well-known cueing task developed by Egly et al. (1994) to study the object-based component of attention, in Experiment 1 adult participants were presented with two upright, inverted, or scrambled faces while an eye-tracker measured their saccadic latencies to find a target that could appear on the same object that was just cued or on the other, uncued object. The data showed that an object-based effect (a smaller cost to shift attention within objects than between objects) occurred only with scrambled faces, not with upright or inverted faces. In Experiment 2 the same task was performed with 8-month-old infants, using upright and inverted faces. The data revealed that an object-based effect emerges only for inverted faces, not for upright faces. Overall, these findings suggest that object-based attention is modulated by the type of stimulus and by the experience the viewer has acquired with different objects. PMID:24723860
Valenza, Eloisa; Franchin, Laura; Bulf, Hermann
This paper aims to prove the effectiveness of an operational-objectives-based sports training program, applied to the technical-tactical drills of the junior II male handball players from the Bacau School Sports Club. The aim of this research was to test the subjects throughout the competition season by applying a series of technical-tactical tests. We presumed that after applying the operational-objectives-based athletic training program, the junior II male handball players' progress in the technical-tactical drills would be ascending, recording superior final results, conditioned by an optimal programming of the training. The goal of this study was to prove that the progress of the Bacau School Sports Club junior II male handball players in the technical-tactical drills was correlated with the operational-objectives-based program. The research subjects consisted of an experimental group from the Bacau School Sports Club and a control group from School 3 - the Adjud School Sports Club; both groups comprised 19 junior II male players. In order to highlight the progress recorded in the technical-tactical training, we gathered data from three control drills and from the assessment of the players' actions during the game.
The progress was conditioned by the programming of the athletic training; during the training sessions, we intervened each time through the operational objectives to improve the training process. The more prominent progress during the first part of the competition season is due to the technical training, while the sufficiently prominent progress in the last part of the season is due to the tactical training; this follows from the programming of the operational objectives - those for the technical training are more numerous in the first part of the season, while those for the tactical training are more numerous in the second part. The effectiveness of the programs earned the team first place in the Junior II National Championship.
A method for high-speed measurement of the three-dimensional (3D) shape of spatially isolated objects is proposed. Two sinusoidal fringe patterns with a phase difference of π and an encoded pattern are used to measure the 3D shape. A modified Fourier transform profilometry (FTP) method is used for phase retrieval and for obtaining high-quality texture. The measurable slope of the height variation is larger than for methods based on traditional FTP and the same as for methods based on phase measurement profilometry (PMP). The number of patterns is smaller than for high-speed PMP-based methods capable of measuring isolated objects; consequently, this approach is less sensitive to object motion. In the proposed method, the encoded pattern consists of vertical stripes whose width equals the period of the sinusoidal fringe. Three gray levels are used to form the stripes, and six symbols are encoded with these three gray levels. A pseudorandom sequence is then constructed over an alphabet of these six symbols, and the stripes are arranged according to the sequence to form the pattern. In the phase-unwrapping procedure, strings (subsequences) are constructed from the symbols corresponding to three neighboring periods of the deformed fringe. The position of the subsequence is found by string matching in the pseudorandom sequence; the order number of the fringe is thus identified, and the absolute phase of the deformed fringe is obtained. The 3D shape of the objects is reconstructed by triangulation. A system consisting of a specially designed digital light processing projector and a high-speed camera is presented. A 3D capture speed of 60 frames per second (fps) at a resolution of 640 × 480 points, and of 120 fps at a resolution of 320 × 240 points, was achieved. Preliminary experimental results are given. If the control logic of the digital micromirror device were modified and a higher-speed camera employed, the measurement speed would reach thousands of fps, making it possible to analyze dynamic objects.
Li, Yong; Zhao, Cuifang; Wang, Hui; Jin, Hongzhen
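The string-matching step in the abstract above can be sketched as a toy example (the sequence, alphabet and helper names here are illustrative assumptions, not the paper's): a pseudorandom sequence over six symbols in which every window of three consecutive symbols occurs exactly once, so three decoded stripe symbols around a fringe identify its absolute order.

```python
# Toy sketch of fringe-order lookup by string matching in a pseudorandom
# sequence; the sequence below is hand-made so every 3-symbol window is
# unique (a real system would use a De Bruijn-like sequence).

def window_positions(seq, k=3):
    """Map each k-symbol window of seq to its starting index."""
    table = {}
    for i in range(len(seq) - k + 1):
        table[tuple(seq[i:i + k])] = i
    return table

def fringe_order(decoded_window, table):
    """Recover the absolute fringe number from k decoded symbols."""
    return table[tuple(decoded_window)]

sequence = [0, 1, 2, 3, 4, 5, 0, 2, 4, 1, 3, 5]   # alphabet {0..5}
table = window_positions(sequence)

order = fringe_order([3, 4, 5], table)  # symbols read around one fringe
```

In the paper's procedure, this lookup would run once per period of the deformed fringe after decoding the three-gray-level stripes.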
A case-based reasoning (CBR) knowledge base has been incorporated into a Micro-Electro-Mechanical Systems (MEMS) design tool that uses a multi-objective genetic algorithm (MOGA) to synthesize and optimize conceptual designs. CBR utilizes previously successful MEMS designs and sub-assemblies as building blocks stored in an indexed case library, which serves as the knowledge base for the synthesis process. Designs in the case library are represented in a parameterized object-oriented format, incorporating MEMS domain knowledge into the design synthesis loop as well as restrictions for the genetic operations of mutation and crossover for MOGA optimization. Reasoning tools locate cases in the design library with solved problems similar to the current design problem and suggest promising conceptual designs, which have the potential to serve as starting design populations for a MOGA evolutionary optimization process that generates further MEMS design concepts. Surface micro-machined resonators are used as an example to introduce this integrated MEMS design synthesis process. The results of testing on resonator case studies demonstrate how the combination of CBR and MOGA synthesis tools can help increase the number of optimal design concepts presented to MEMS designers.
Cobb, Corie L.; Zhang, Ying; Agogino, Alice M.
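The retrieval step described above could look roughly like the following sketch (the case library, parameter vectors and distance metric are hypothetical, not from the actual tool): the cases nearest the current problem seed the genetic optimizer's starting population.

```python
# Hypothetical nearest-case retrieval seeding a GA population.
import math

case_library = [
    # (name, parameter vector) -- illustrative resonator designs
    ("res_a", [10.0, 2.0, 0.5]),
    ("res_b", [12.0, 2.5, 0.4]),
    ("res_c", [50.0, 8.0, 1.0]),
]

def distance(p, q):
    """Euclidean distance between two parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def retrieve(problem, k=2):
    """Return the k cases most similar to the problem's parameter vector."""
    ranked = sorted(case_library, key=lambda c: distance(c[1], problem))
    return ranked[:k]

# Seed the MOGA's initial population with the retrieved designs.
seed_population = [params for _, params in retrieve([11.0, 2.2, 0.45])]
```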
AIM: To develop an image-based objective method to precisely evaluate regional ocular bulbar injection. METHODS: Six healthy adult volunteers were photographed in four orientations (superior, inferior, nasal and temporal sides) with and without stimulating eye drops. Six line segments (covering 30°) were drawn 4 mm away from the limbus on each image using ImageJ software. The graph peaks, derived from the areas under the line segments and corresponding to the cross-sectional grey level of the vessels, were analyzed to obtain peak area, peak height/width (PH/PW), and peak numbers. Different-sized areas were selected to calculate the pixels based on an edge-detection algorithm. Conjunctival and superficial scleral vessels were also analyzed separately. RESULTS: This method had a smaller coefficient of variation, especially for PH/PW, in all four orientations. Hyperaemia parameters changed the least after challenge in the superior region. Moreover, 95% of the PH/PW ratios were greater than 0.87 in conjunctival vessels and less than 1.00 in superficial scleral vessels. PH/PW significantly increased in conjunctival vessels and changed less in superficial scleral vessels. CONCLUSION: A new method of objectively assessing bulbar injection based on ocular surface images was developed. This method can be used to quantify ocular regional injection and to distinguish the superficial scleral and conjunctival vessels.
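The PH/PW computation described in the abstract might be sketched as below; the baseline, the width-at-half-height definition and the toy profile are assumptions for illustration, since the exact definitions used in the study are not given.

```python
# Sketch: peak height and width from a 1-D grey-level cross-section of a
# vessel, sampled along one of the line segments drawn in ImageJ.

def peak_height_width(profile, baseline=0.0, half=0.5):
    """Peak height above baseline and width at half height (in samples)."""
    peak = max(profile)
    height = peak - baseline
    level = baseline + half * height
    above = [i for i, v in enumerate(profile) if v >= level]
    width = above[-1] - above[0] + 1
    return height, width

profile = [0, 1, 3, 8, 10, 8, 3, 1, 0]   # toy vessel cross-section
height, width = peak_height_width(profile)
ratio = height / width                    # the PH/PW measure
```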
The purpose of this work is to provide a method for developing data quality objectives (DQOs) based on human health risk criteria to support waste characterization and remediation efforts for single-shell tanks (SSTs) at the Hanford site. Data quality objectives provide decision makers with information on the type, quality, and quantity of data needed to characterize waste and make closure decisions; they also help focus the work and minimize the resources required (worker impact, cost, and schedule). Preliminary DQOs for SST waste characterization and remediation at the Hanford site were developed using a health risk-based code that integrates source term, transport, and exposure models. The large number of analytes under consideration for characterization were prioritized according to their risk contribution, and detection limit goals were developed to identify potentially inadequate analytical methods. These DQOs will help determine the most efficient use of resources to characterize, remediate, and close SSTs.
This paper describes the Run Control system developed for the Obelix experiment at the Low Energy Antiproton Ring of CERN. The adopted approach is based on a State Manager developed as a part of the MODEL project. The State Manager incorporates a model of the different activities and of the way they must be organized. An object-oriented decomposition of the on-line system is performed. A clean separation of the control, logic and operating tasks is achieved. Remote Procedure Call techniques are employed to cope with the problems of a distributed system architecture.
This paper investigates the role of composite filters in reducing the search time for 3D model-based object recognition. When one moves from 2D to 3D, one is faced with a huge amount of information to deal with. A composite filter combined with a multistage scheme is developed for processing this large amount of information. The design scheme for the composite filters is also elaborated. The procedures discussed in this paper demonstrate how detection of these various model images might help formulate a new metric for recognition performance.
Rahman, Mahbuba; Awwal, Abdul Ahad S.; Gudmundsson, Karl S.
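One common way to build such a composite filter, shown here as a hedged sketch rather than the authors' exact design, is to average the spectra of several training views of a model and apply the result by frequency-domain correlation, so a single filter responds to any of the training views.

```python
# Sketch: composite filter from averaged training-view spectra,
# applied by FFT-based correlation.
import numpy as np

def composite_filter(training_images):
    """Conjugate of the mean spectrum of the training views."""
    spectra = [np.fft.fft2(img) for img in training_images]
    return np.conj(np.mean(spectra, axis=0))

def correlate(image, filt):
    """Correlation plane of an input image with the composite filter."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

rng = np.random.default_rng(0)
views = [rng.random((32, 32)) for _ in range(3)]   # toy model views
filt = composite_filter(views)
plane = correlate(views[0], filt)                  # peak marks a match
```

In a multistage scheme, a cheap composite-filter pass like this would prune candidates before more expensive per-view matching.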
This paper presents a hybrid method based on an Interior Point Method (IPM) and a variant of Particle Swarm Optimization (APSO) to solve optimal power flow in a power system incorporating Flexible AC Transmission Systems (FACTS) devices such as the Thyristor Controlled Phase Shifter (TCPS) for minimization of multiple objectives. The proposed IPM-APSO algorithm identifies the optimal values of generator active-power output and the adjustment of reactive power control devices. The proposed optimization process is presented with a case study using the IEEE 30-bus test system to demonstrate its applicability. The results show the feasibility and potential of this new approach.
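A bare-bones particle swarm step for a dispatch-style objective can illustrate the PSO half of such a hybrid (the cost coefficients, demand value and PSO constants are illustrative assumptions; the paper's IPM stage is not reproduced here).

```python
# Minimal PSO sketch: particles are vectors of generator outputs; a
# quadratic fuel cost is minimized with a penalty tying total output
# to an assumed 100 MW demand.
import random

random.seed(1)

def cost(p):
    fuel = 0.01 * p[0] ** 2 + 2.0 * p[0] + 0.02 * p[1] ** 2 + 1.5 * p[1]
    penalty = 100.0 * abs(p[0] + p[1] - 100.0)   # demand constraint
    return fuel + penalty

n, dims, iters = 20, 2, 200
pos = [[random.uniform(0.0, 100.0) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)
initial = cost(gbest)

for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)
```

In the hybrid, an interior point method would supply a good starting point that a swarm like this then refines.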
In this paper, knowledge-based system technology is adopted in the application process of Section III of the ASME Boiler and Pressure Vessel Code to improve the design quality and efficiency of nuclear components. At present, no single knowledge representation method can express all of the ASME code's rules sufficiently, completely and exactly. An object-oriented hybrid knowledge representation method (OOHKRM) is presented in this paper. Through detailed analysis of the organization and rules of the code, the rule expression of the ASME code is divided into three modes: statement, list and graphic illustration. According to differences in knowledge features, the knowledge of the ASME code is classified into three main categories, namely illustrative knowledge, procedural knowledge and meta-knowledge, which are represented by lists, frames, production rules and Petri nets, respectively, to express the knowledge completely and exactly. A knowledge Petri net model is also defined for the same reason. Moreover, several class objects corresponding to the different types of knowledge are defined. The method not only retains the merits of the four underlying representation methods but also possesses the characteristics of object-oriented technologies. Consequently, the method has good universality when used to represent the knowledge of ASME codes or other engineering standards. (author)
One of the most important themes in the development of foods and drinks is the accurate evaluation of taste properties. In general, a sensory evaluation system is frequently used for evaluating food and drink. This method, which is dependent on human senses, is highly sensitive but is influenced by the eating experience and food palatability of individuals, leading to subjective results. Therefore, a more effective method for objectively estimating taste properties is required. Here we show that salivary hemodynamic signals, as measured by near-infrared spectroscopy, are a useful objective indicator for evaluating sour taste stimulus. In addition, the hemodynamic responses of the parotid gland are closely correlated to the salivary secretion volume of the parotid gland in response to basic taste stimuli and respond to stimuli independently of the hedonic aspect. Moreover, we examined the hemodynamic responses to complex taste stimuli in food-based solutions and demonstrated for the first time that the complicated phenomenon of the "masking effect," which decreases taste intensity despite the additional taste components, can be successfully detected by near-infrared spectroscopy. In summary, this study is the first to demonstrate near-infrared spectroscopy as a novel tool for objectively evaluating complex sour taste properties in foods and drinks. PMID:24474216
Hoshi, Ayaka; Aoki, Soichiro; Kouno, Emi; Ogasawara, Masashi; Onaka, Takashi; Miura, Yutaka; Mamiya, Kanji
Understanding the difference between data objects is a major problem, especially in a scientific collaboration that allows scientists to collectively reuse data and to modify and adapt scripts developed by their peers to process data while publishing the results to a centralized data store. Although data provenance has been studied extensively to address the origins of a data item, it does not address changes made to the source code. Systems often comprise a large number of modules, each containing hundreds of lines of code, and it is in general not obvious which parts of the source code contributed to a change in a data object. The paper introduces the Class-Based Object Versioning framework, which overcomes some of the shortcomings of popular versioning systems (e.g. CVS, SVN) in maintaining data and code provenance information in scientific computing environments. The framework automatically identifies and captures useful fine-grained changes in the data and code of scripts that perform scientific experiments, so that important information about intermediate stages (i.e. unrecorded changes in experiment parameters and procedures) can be identified and analyzed. The benefits of such a system include querying specific methods and code attributes for data items of interest, finding missing gaps in data lineage, and implicit storage of intermediate data.
Mwebaze, Johnson; Boxhoorn, Danny; Rai, Idris; Valentijn, Edwin A.
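The fine-grained change capture could be sketched as below; the fingerprinting scheme shown (hashing each method's compiled code and constants) is an assumption for illustration, not the framework's actual mechanism.

```python
# Sketch: per-method fingerprints so a data item can record exactly which
# method versions produced it, exposing unrecorded parameter changes.
import hashlib
import inspect

def method_fingerprints(cls):
    """Map each method name to a hash of its compiled code and constants."""
    prints = {}
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        code = fn.__code__
        blob = code.co_code + repr(code.co_consts).encode()
        prints[name] = hashlib.sha256(blob).hexdigest()
    return prints

class Experiment:
    def calibrate(self, x):
        return x * 2.0
    def reduce(self, xs):
        return sum(xs)

v1 = method_fingerprints(Experiment)

class Experiment:                     # a later, unrecorded edit
    def calibrate(self, x):
        return x * 2.5                # changed calibration factor
    def reduce(self, xs):
        return sum(xs)

v2 = method_fingerprints(Experiment)
changed = sorted(m for m in v1 if v1[m] != v2[m])
```

Comparing fingerprints across script versions pinpoints which method changed between two runs of the same experiment.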
The paper describes an approach to real-time detection and tracking of underwater objects using image sequences from an electrically scanned high-resolution sonar. The use of a high-resolution sonar provides a good estimate of the location of the objects but strains the on-board computers because of the high rate of raw data. The amount of data can be cut down by decreasing the scanned area, but this reduces the possibility of planning an optimal path. The paper describes methods that maintain the wide area of detection without significant loss of precision or speed. This is done by using different scanning patterns for each sample. The detection is based on a two-level threshold, making processing fast. Once detected, the objects are tracked through consecutive sonar images, and by use of an observer the estimation errors on position and velocities are reduced. Intensive use of different on-board sensors also makes it possible to build a map of a larger area of the seabed in world coordinates. The work is carried out in collaboration with partners under MAST-C-T90-0059.
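The two-level threshold idea can be sketched on a single scan line (thresholds and data below are toy values; the real system works on sonar images): samples above the high threshold seed detections, and neighbouring samples above the low threshold are absorbed into them, which keeps processing fast while suppressing isolated noise.

```python
# Sketch: hysteresis-style two-level threshold detection on one scan line.

def detect(samples, low, high):
    """Return index ranges of detected echoes in a 1-D sonar scan line."""
    detections = []
    i = 0
    while i < len(samples):
        if samples[i] >= high:                    # seed a detection
            start = i
            while start > 0 and samples[start - 1] >= low:
                start -= 1                        # grow left over low level
            end = i
            while end + 1 < len(samples) and samples[end + 1] >= low:
                end += 1                          # grow right over low level
            detections.append((start, end))
            i = end + 1
        else:
            i += 1
    return detections

scan = [1, 2, 6, 9, 7, 2, 1, 5, 1, 1]   # toy intensities
hits = detect(scan, low=5, high=8)       # the lone 5 never seeds a hit
```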
Mobile robot operators must make rapid decisions based on information about the robot's surrounding environment, so terrain modeling and photorealistic visualization are required for the remote operation of mobile robots. We have produced a voxel map and textured mesh from the 2D and 3D datasets collected by a robot's array of sensors, but some upper parts of objects are beyond the sensors' measurements and are missing from the terrain reconstruction, leaving an incomplete terrain model. To solve this problem, we present a new ground segmentation method to detect non-ground data in the reconstructed voxel map. Our method uses height histograms to estimate the ground height range, and a Gibbs-Markov random field model to refine the segmentation results. To reconstruct a complete terrain model of the 3D environment, we develop a 3D boundary estimation method for non-ground objects. We apply a boundary detection technique to the 2D image before estimating and refining the actual height values of the non-ground vertices in the reconstructed textured mesh. Our proposed methods were tested in an outdoor environment in which trees and buildings were not completely sensed. Our results show that the time required for ground segmentation is shorter than that for data sensing, which is necessary for a real-time approach. In addition, those parts of objects that were not sensed are accurately recovered to retrieve their real-world appearances.
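The height-histogram stage might look like the following minimal sketch; the bin size and ground margin are assumed values, and the Gibbs-Markov refinement stage is omitted.

```python
# Sketch: the most populated height bin is taken as ground level, and
# points within a margin of it are labelled ground.

def ground_mask(heights, bin_size=0.2, margin=0.3):
    """True for points near the dominant (ground) height."""
    bins = {}
    for h in heights:
        key = round(h / bin_size)
        bins[key] = bins.get(key, 0) + 1
    ground_level = max(bins, key=bins.get) * bin_size
    return [abs(h - ground_level) <= margin for h in heights]

heights = [0.0, 0.1, -0.1, 0.05, 2.1, 2.3, 0.0, 1.8]   # metres, toy data
mask = ground_mask(heights)   # tall returns (trees, walls) are non-ground
```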
Reconstruction of unknown objects by microwave illumination requires efficient inversion for measured electromagnetic scattering data. In the integral equation approach for reconstructing dielectric objects based on the Born iterative method or its variations, the volume integral equations are involved because the imaging domain is fully inhomogeneous. When solving the forward scattering integral equation, the Nyström method is used because the traditional method of moments may be inconvenient due to the inhomogeneity of the imaging domain. The benefits of the Nyström method include the simple implementation without using any basis and testing functions and low requirement on geometrical discretization. When solving the inverse scattering integral equation, the Gauss-Newton minimization approach with a line search method (LSM) and multiplicative regularization method (MRM) is employed. The LSM can optimize the search of step size in each iteration, whereas the MRM may reduce the number of numerical experiments for choosing the regularization parameter. Numerical examples for reconstructing typical dielectric objects under limited observation angles are presented to illustrate the inversion approach. PMID:23996559
Tong, Mei Song; Yang, Kuo; Sheng, Wei Tian; Zhu, Zhen Ying
A reference-point-based multi-objective optimization combining a trust region (TR) algorithm and particle swarm optimization (PSO) to solve the multi-objective environmental/economic dispatch (EED) problem is presented in this paper. The EED problem is handled by a reference point interactive approach. One of the main advantages of the proposed approach is that it integrates the merits of both TR and PSO: TR provides the initial set (as close to the Pareto set as possible) and the reference point of the decision maker, followed by PSO to improve the quality of the solutions and obtain all the points on the Pareto frontier. The performance of the proposed algorithm is tested on the standard IEEE 30-bus 6-generator test system and compared with conventional methods. The results demonstrate the capability of the proposed approach to generate true and well-distributed Pareto-optimal non-dominated solutions in a single run. The comparison with classical methods demonstrates the superiority of the proposed approach and confirms its potential to solve the multi-objective EED problem.
Mohamed A. El-Shorbagy
LiDAR-derived slope models may be used to detect abandoned logging roads in steep forested terrain. An object-based classification approach to abandoned logging road detection was employed in this study. First, a slope model of the study site in Marin County, California was created from a LiDAR-derived DEM. Multiresolution segmentation was applied to the slope model and road seed objects were iteratively grown into candidate objects. A road classification accuracy of 86% was achieved using this fully automated procedure, and post-processing increased this accuracy to 90%. In order to assess the sensitivity of the road classification to LiDAR ground point spacing, the LiDAR ground point cloud was repeatedly thinned by a fraction of 0.5 and the classification procedure was reapplied. The producer's accuracy of the road classification declined from 79% with a ground point spacing of 0.91 to below 50% with a ground point spacing of 2, indicating the importance of high point density for accurate classification of abandoned logging roads.
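The thinning experiment can be sketched as follows (the tile size, point count and density-based spacing estimate are illustrative assumptions, not the study's values): each thinning step keeps a random half of the ground points, and the mean point spacing is re-estimated afterwards.

```python
# Sketch: repeated 0.5 thinning of a ground point cloud, with spacing
# approximated as 1/sqrt(density) for a roughly uniform cloud.
import math
import random

random.seed(0)

def thin(points, fraction=0.5):
    """Keep a random subset of the points."""
    return random.sample(points, int(len(points) * fraction))

def mean_spacing(points, area):
    """Approximate mean point spacing from point density."""
    return math.sqrt(area / len(points))

area = 100.0 * 100.0   # 1-hectare tile, toy numbers
cloud = [(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(12000)]

spacings = [mean_spacing(cloud, area)]
for _ in range(2):                       # two successive 0.5 thinnings
    cloud = thin(cloud)
    spacings.append(mean_spacing(cloud, area))
```

Each halving of the point count increases the estimated spacing by a factor of √2, so reapplying the classifier at each step traces accuracy against spacing.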
The author presents a method to solve for 3-D object position and orientation from single images based on a systematic hierarchy of features: primitive features, generalized features, and compound structures. A virtual viewpoint analysis shows that a feature-centered coordinate system allows us to drastically reduce the mathematical complexity of the inverse projection problem for higher order curves. At the heart of the matching are the matching functions for primitive features: point-to-point correspondence, line-to-line correspondence, and ellipse-to-ellipse correspondence. Parametric matching by generalized features is used to solve for viewpoints at any stage of matching. The generalized features can be viewed as a generalization of the features used by previous researchers. The matching method goes top-down and then bottom-up if necessary. Models of the objects are analyzed first. The major contributions of the system include: (1) solving the problem of ellipse-to-ellipse correspondence, thus greatly extending the class of recognizable objects; (2) generalized features that can assume various configurations, enabling the system to detect all kinds of features; (3) parametric matching instead of point-to-point matching, which makes the system robust; and (4) the top-down-bottom-up strategy, which combines efficiency with robustness. The method is suitable for parallel processing, and it is believed that one-second performance is within reach in the near future by proper arrangement of parallel processors.
Sheu, Dong-Liang D.
Human beings apply a selective attention mechanism, which suggests what a truly intelligent perception system should be: one with the cognitive capability of learning and thinking about how to perceive the environment on its own. Two attention mechanisms are involved, top-down and bottom-up, corresponding to goal-directed and automatic perceptual behaviors, respectively. In this paper we review an artificial system with a goal-directed visual perception approach that uses an object-based top-down visual attention mechanism. This system directs perception to an object of interest according to the current task, context and learned knowledge. The system can be divided into three successive stages: preattentive processing, top-down attentional selection and post-attentive perception. In the first stage, preattentive processing, an input scene is divided into similar proto-objects; one of these is then selected by applying top-down attention and is finally sent to the post-attentive perception stage for analysis and the final outcome.
Aniket D. Pathak, Priti Subramanium
Compared to conventional vehicles, Hybrid Electric Vehicles (HEVs) provide fairly high fuel economy with lower emissions. To enhance HEV performance in terms of fuel economy and emissions, and to ensure user satisfaction with driving performance, simultaneous optimization of the main parameters of the powertrain components and control system is inevitable. However, this problem is challenging due to the large number of coupled design parameters, conflicting design objectives and nonlinear constraints. Considering the shortcomings of methods that convert multi-objective optimization problems into single-objective ones, a comprehensive methodology based on the non-dominated sorting genetic algorithm II (NSGA-II) to optimize the parameters of the powertrain components and control system simultaneously and find the Pareto-optimal solution set is presented in this paper. A case study is simulated in ADVISOR. The simulation results show that this method can produce many Pareto-optimal solutions, from which a satisfactory solution can be selected by decision-makers according to their requirements. The results demonstrate the effectiveness of the algorithms proposed in this paper.
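The core of NSGA-II, non-dominated sorting, can be illustrated with a minimal first-front extraction (the objective values are toy numbers, not HEV simulation output): a solution stays on the first front if no other solution is at least as good on every objective and strictly better on one.

```python
# Sketch: first Pareto front for two minimized objectives.

def dominates(a, b):
    """True if a is no worse than b everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(objectives):
    """Indices of the non-dominated solutions."""
    return [i for i, a in enumerate(objectives)
            if not any(dominates(b, a)
                       for j, b in enumerate(objectives) if j != i)]

# (fuel consumption, emissions) for five candidate designs -- toy values
candidates = [(5.0, 30.0), (6.0, 25.0), (5.5, 40.0), (7.0, 24.0), (6.5, 26.0)]
front = first_front(candidates)
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection to keep the front well spread.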
A computationally efficient approach to identify very small mine-shaped plastic objects, e.g. M56 Anti-Personnel (AP) mines buried in the ground, is presented. The size of the objects equals that of the smallest AP mines in use today, i.e., the most difficult mines to detect with respect to humanitarian mine clearance. Our approach consists of three stages: the phase stepped-frequency radar method, generation of a quaternary image, and template cross-correlation. The phase stepped-frequency radar method belongs to the class of stepped-frequency radar methods. In a two-dimensional mesh grid above the ground, a radar probe is moved automatically to measure in each grid point a set of reflection coefficients from which phase and amplitude information are extracted. Based on simple processing of the phase information, the quaternary image and template cross-correlation, successful detection of metal and non-metal mine-shaped objects is possible. Measurements have been performed on loamy soil containing different mine-shaped objects.
Sørensen, Helge Bjarup Dissing; Jakobsen, Kaj Bjarne
In hazardous applications such as remediation of buried waste and dismantlement of radioactive facilities, robots are an attractive solution. Sensing to recognize and locate objects is a critical need for robotic operations in unstructured environments. An accurate 3-D model of objects in the scene is necessary for efficient high-level control of robots. Drawing upon concepts from supervisory control, the authors have developed an interactive system for creating object models from range data, based on simulated annealing. Site modeling is a task that is typically performed using purely manual or autonomous techniques, each of which has inherent strengths and weaknesses. However, an interactive modeling system combines the advantages of both manual and autonomous methods, to create a system that has high operator productivity as well as high flexibility and robustness. The system is unique in that it can work with very sparse range data, tolerate occlusions, and tolerate cluttered scenes. The authors have performed an informal evaluation with four operators on 16 different scenes, and have shown that the interactive system is superior to either manual or automatic methods in terms of task time and accuracy.
Comprehensive thermodynamic modeling and optimization are reported for a polygeneration energy system for the simultaneous production of heating, cooling, electricity and hot water from a common energy source. This polygeneration system is composed of four major parts: gas turbine (GT) cycle, Rankine cycle, absorption cooling cycle and domestic hot water heater. A multi-objective optimization method based on an evolutionary algorithm is applied to determine the best design parameters for the system. The two objective functions utilized in the analysis are the total cost rate of the system, which is the cost associated with fuel, component purchasing and environmental impact, and the system exergy efficiency. The total cost rate of the system is minimized while the cycle exergy efficiency is maximized by using an evolutionary algorithm. To provide a deeper insight, the Pareto frontier is shown for multi-objective optimization. In addition, a closed form equation for the relationship between exergy efficiency and total cost rate is derived. Finally, a sensitivity analysis is performed to assess the effects of several design parameters on the system total exergy destruction rate, CO2 emission and exergy efficiency.
People have paid more attention to enhancing the voltage stability margin since voltage collapses occurred in several power systems in recent years. This paper proposes an optimal reactive power flow (ORPF) incorporating static voltage stability based on a multi-objective adaptive immune algorithm (MOAIA). The main idea of the proposed algorithm is to add two parts to an existing immune algorithm. The first part defines both partial affinity and global affinity to evaluate the antibody affinity to the multi-objective functions. The second part uses adaptive crossover, mutation and clone rates for antibodies to maintain antibody diversity. Hence, the proposed algorithm can achieve a dynamic balance between individual diversity and population convergence. The paper describes the ORPF multi-objective functional mathematical model and the constraint conditions. The problems associated with the antibody are also discussed in detail. The proposed method has been tested on the IEEE-30 system and compared with an immune genetic algorithm (IGA). The results show that the proposed algorithm has improved performance over the IGA.
There is an increasing need for optimization of energy conversion systems, in particular concerning energy consumption and efficiency, to reduce their environmental impact. Usually, optimization is based on designers' backgrounds, which enable them to analyze system performance and modify appropriate operating parameters. However, if these changes aim to optimize multiple conflicting objectives simultaneously, the task becomes quite complex and the use of sophisticated tools is mandatory. This paper presents a multi-objective optimization method that permits solutions simultaneously satisfying multiple conflicting objectives to be determined. The optimization process is carried out using an evolutionary algorithm developed around an innovative technique that consists of partitioning the solution search space (i.e., a population of solutions) into parallel corridors. Within these corridors, 'header' solutions are trapped and then involved in the reproduction of new populations by means of genetic operators. The proposed methodology is coupled to specific power plant models that are used to optimize two different power plants: (i) a cogeneration thermal plant and (ii) an advanced steam power station. In both cases the proposed technique has proven to be very powerful, robust and reliable. Further, this methodology can be used as an effective tool to find the set of best solutions, thus providing realistic support to decision-making.
This paper deals with an approach to the optimization of enterprise information systems (EIS) based on the object-based knowledge mesh (OKM) and the binary tree. Firstly, to explore the optimization of EIS via the user's function requirements, an OKM expression representation based on user satisfaction and the binary tree is proposed. Secondly, based on the definitions of the fuzzy function-satisfaction degree relationships on the OKM functions, the optimization model is constructed. Thirdly, the OKM multiple set operation expression is optimized by the immune genetic algorithm and the binary tree, with the steps of the OKM optimization presented in detail. Finally, the optimization of EIS is illustrated by an example to verify the proposed approaches.
In this paper we propose a novel Java-based computation and comparison method (JBCCM). In this method we take three types of object-oriented files, belonging to C++, Java and C#, to demonstrate the computation. We first compute classes, inheritance, interfaces, objects and lines of code (LOC). We then assume three databases based on several properties of C++, Java and C#, and compare the three files based on classes (BOC), based on inheritance (BOI), based on interfaces (BOIN), based on objects (BOO) and based on LOC (BOL). We then deduce a comparative result for each particular object-oriented file. The result approximates which platform the file is best suited to, so we can deduce the best platform and coupling measures for the object-oriented paradigm.
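The counting step might be sketched as below; the regular expressions are simplified assumptions (a real tool would need proper per-language parsers to count exactly).

```python
# Sketch: count class declarations, interface declarations and non-blank
# lines of code in a source string.
import re

def measure(source):
    lines = [ln for ln in source.splitlines() if ln.strip()]
    return {
        "classes": len(re.findall(r"\bclass\s+\w+", source)),
        "interfaces": len(re.findall(r"\binterface\s+\w+", source)),
        "loc": len(lines),
    }

java_src = """
interface Shape { double area(); }
class Circle implements Shape {
    double r;
    public double area() { return 3.14159 * r * r; }
}
"""
metrics = measure(java_src)
```

Running the same measurement over a C++, a Java and a C# file yields the per-metric comparison (BOC, BOIN, BOL, ...) the method is built on.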
A comparison of the accuracy of pixel-based and object-based classifications of integrated optical and LiDAR data. Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: pixel-based classification and object-based image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs; test site 2 represented a densely built-up area; and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2 and the cloud contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters. Fifteen homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid the salt-and-pepper effect. Object-based image classification was carried out in eCognition software, which accepts both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage with appropriate weights, so that more precise and unambiguous boundaries of segments (objects) were obtained. As a result of the classification, 5 classes of land cover (buildings, water, high and low vegetation, and others) were extracted. Both pixel-based image analysis and OBIA were conducted with a minimum mapping unit of 10 m2. Results were validated on the basis of manual classification and random points (80 per test area); the reference data set was manually interpreted using orthophotomaps and expert knowledge of the test site areas.
Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna
We present an approach to detect anatomical structures by configurations of interest points, from a single example image. The representation of the configuration is based on Markov Random Fields, and the detection is performed in a single iteration by the MAX-SUM algorithm. Instead of sequentially matching pairs of interest points, the method takes the entire set of points, their local descriptors and the spatial configuration into account to find an optimal mapping of modeled object to target image. The image information is captured by symmetry-based interest points and local descriptors derived from Gradient Vector Flow. Experimental results are reported for two data-sets showing the applicability to complex medical data. PMID:18044601
Donner, René; Micusik, Branislav; Langs, Georg; Szumilas, Lech; Peloschek, Philipp; Friedrich, Klaus; Bischof, Horst
An objective methodology that does not require any user-defined parameter assumptions is introduced to obtain an improved soil moisture product along with associated uncertainty estimates. This new product is obtained by merging model-, thermal infrared remote sensing-, and microwave remote sensing-based soil moisture estimates in a least squares framework, where uncertainty estimates for each product are obtained using triple collocation. The merged anomaly product is validated against in situ soil moisture data and shows higher correlations with observations than the individual input products; however, it is not superior to a naively merged product acquired by averaging the products with equal weighting. The resulting combined soil moisture estimate is an improvement over currently available soil moisture products due to its reduced uncertainty and can be used as a standalone soil moisture product with available uncertainty estimates.
Yilmaz, M. T.; Crow, W. T.; Anderson, M. C.; Hain, C.
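The merging scheme above can be sketched in two steps: triple collocation to estimate each product's error variance, then inverse-error-variance (least squares) weighting. This is a minimal Python illustration on synthetic anomalies, not the authors' implementation; the noise levels and sample size are assumptions for demonstration:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variances of three estimates of the same signal with
    mutually independent errors (classic covariance-notation TC)."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex2, ey2, ez2

def merge_least_squares(products, err_vars):
    """Inverse-error-variance weighted average (least squares merge)."""
    w = 1.0 / np.asarray(err_vars)
    w /= w.sum()
    return w, sum(wi * p for wi, p in zip(w, products))

# Synthetic anomalies: common truth plus independent noise per product
rng = np.random.default_rng(0)
truth = rng.standard_normal(20000)
model = truth + 0.5 * rng.standard_normal(20000)  # model-based
tir   = truth + 1.0 * rng.standard_normal(20000)  # thermal infrared
mw    = truth + 1.5 * rng.standard_normal(20000)  # microwave

ev = triple_collocation(model, tir, mw)
w, merged = merge_least_squares([model, tir, mw], ev)
```

The least-noisy product receives the largest weight, and the merged series has a lower error than any single input, which is the rationale for weighting by TC-estimated uncertainties rather than averaging equally.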
Environmental protection is now an important area of study as well as a global concern. Many teaching and learning activities may involve the environment as part of their content. In engineering there are many topics related to it, but much of the syllabus may be too theoretical, and it is necessary to develop an object-based method to assist students' learning. A demonstration of the environmental protection and energy saving concepts for Outcome-based education has been successfully made through the development of a web site. Vertical and horizontal links between the environmental aspects and the study programme have been used in the development. This enhances students' learning and has provided linkage and correlation among various subject areas.
Ka Wai E. Cheng
Landslide detection and classification is an essential requirement in pre- and post-disaster hazard analysis. In earlier studies landslide detection was often achieved through time-consuming and cost-intensive field surveys and visual orthophoto interpretation. Recent studies show that Earth Observation (EO) data offer new opportunities for fast, reliable and accurate landslide detection and classification, which may contribute to effective landslide monitoring and landslide hazard management. To ensure the fast recognition and classification of landslides at a regional scale, a (semi-)automated object-based landslide detection approach is established for a study site situated in the Huaguoshan catchment, Southern Taiwan. The study site exhibits a high vulnerability to landslides and debris flows, which are predominantly typhoon-induced. Through the integration of optical satellite data (SPOT-5 with 2.5 m GSD), SAR (Synthetic Aperture Radar) data (TerraSAR-X Spotlight with 2.95 m GSD) and digital elevation information (DEM with 5 m GSD), including its derived products (e.g. slope, curvature, flow accumulation), landslides may be examined more efficiently than by relying on a single data source alone. The combination of optical and SAR data in an object-based image analysis (OBIA) domain for landslide detection and classification has not been investigated so far, even though SAR imagery shows valuable properties for landslide detection that differ from those of optical data (e.g. high sensitivity to surface roughness and soil moisture). The main purpose of this study is to recognize and analyze existing landslides by applying object-based image analysis using eCognition software. OBIA provides a framework for examining features defined by spectral, spatial, textural, contextual as well as hierarchical properties.
Objects are derived through image segmentation and serve as input for the classification process, which relies on transparent rule sets representing knowledge. Through class modeling, an iterative process of segmentation and classification, objects can be addressed individually in a region-specific manner. The presented approach is marked by the comprehensive use of available data sets from various sources. This full integration of optical, SAR and DEM data contributes to the development of a robust method, which makes use of the most appropriate characteristics (e.g. spectral, textural, contextual) of each data set. The proposed method contributes to more rapid and accurate landslide mapping in order to assist disaster and crisis management. SAR data in particular prove useful in the aftermath of an event, as radar sensors are largely independent of illumination and weather conditions and data are therefore more likely to be available. The full data integration thus yields a robust approach for the detection and classification of landslides. However, more research is needed to make the best of the integration of SAR data in an object-based environment and to make the approach more easily adaptable to different study sites and data.
Friedl, B.; Hölbling, D.; Füreder, P.
During recent decades, unplanned settlements have appeared around the big cities in most developing countries and, as a consequence, numerous problems have emerged. The identification of different kinds of settlements is thus a major concern and challenge for the authorities of many countries. Very High Resolution (VHR) remotely sensed imagery has proved to be a very promising way to detect different kinds of settlements, especially through the use of new object-based image analysis (OBIA). The key lies in understanding what characteristics make unplanned settlements differ from planned ones: most experts characterize unplanned urban areas by small building sizes at high densities, no orderly road arrangement and a lack of green spaces. Knowledge about different kinds of settlements can be captured as a domain ontology that has the potential to organize knowledge in a formal, understandable and sharable way. In this work we focus on extracting knowledge from VHR images and expert knowledge. We used an object-based strategy, segmenting a VHR image taken over an urban area into regions of homogeneous pixels at an adequate scale level and then computing spectral, spatial and textural attributes for each region to create objects. Genetic-based data mining was applied to generate highly predictive and comprehensible classification rules based on selected samples from the OBIA result. Optimized intervals of relevant attributes are found and linked with land use types to form classification rules. The unplanned areas were separated from the planned ones through analysis of the line segments detected in the input image. Finally a simple ontology was built based on the previous processing steps. The approach has been tested on VHR images of one of the biggest Algerian cities, which has grown considerably in recent decades.
Khelifa, Dejrriri; Mimoun, Malki
Extraction of flower regions from a complex background is a difficult task; it is an important part of flower image retrieval and recognition. Image segmentation denotes a process of partitioning an image into distinct regions, and a large variety of segmentation approaches have been developed. Image segmentation plays an important role in image analysis. According to several authors, segmentation terminates when the observer's goal is satisfied; for this reason, a unique method that can be applied to all possible cases does not yet exist. This paper studies flower image segmentation against complex backgrounds. Based on the differences in visual characteristics between the flower and the surrounding objects, flowers from different backgrounds are separated into a single set of flower image pixels. The segmentation methodology for flower images consists of five steps. First, the original RGB image is transformed into the Lab color space. In the second step the 'a' component of the Lab color space is extracted. Then segmentation by two-dimensional Otsu automatic thresholding on the 'a' channel is performed. Based on the color segmentation result and the texture differences between the background and the required object, we extract the object using the grey-level co-occurrence matrix (GLCM) for texture segmentation. GLCMs essentially represent the joint probability of occurrence of grey levels for pixels with a given spatial relationship in a defined region. Finally, the segmentation result is corrected by mathematical morphology methods. The algorithm was tested on a plague image database and the results prove to be satisfactory. The algorithm was also tested on medical images for nucleus segmentation.
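The thresholding step of the pipeline above can be illustrated with Otsu's method. The paper uses a two-dimensional Otsu on the 'a' channel; the sketch below implements only the simpler one-dimensional variant in numpy, on invented 'a'-channel values (background near 0, flower near 60):

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """One-dimensional Otsu: choose the threshold maximising the
    between-class variance of the histogram. (The paper applies a
    two-dimensional variant; this is the simpler 1-D form.)"""
    hist, edges = np.histogram(channel, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability
    mu = np.cumsum(p * np.arange(bins))   # cumulative mean (bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)             # boundary bins evaluate to NaN
    return edges[k + 1]                   # back to channel units

# Hypothetical 'a'-channel values: two well-separated clusters
a = np.concatenate([np.full(500, 5.0), np.full(300, 60.0)])
a += np.random.default_rng(1).normal(0.0, 2.0, a.size)
t = otsu_threshold(a)
mask = a > t   # foreground (flower) pixels
```

The resulting binary mask would then be combined with the GLCM texture cue and cleaned up morphologically, as the abstract describes.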
Software cost estimation (SCE) of a project is pivotal to the acceptance or rejection of the development of a software project. Various SCE techniques are in practice, each with its own strengths and limitations. The latest of these is the object-oriented one. Currently the object-oriented approach to SCE is based on Lines of Code (LOC), function points, functions and classes, etc. Relatively little attention has been paid to SCE in component-based software engineering (CBSE). So there is a pressing need to search for parameters/variables that have a vital role in SCE using CBSE, which is taken up in this paper. This paper further looks at the level of significance of all the parameters/variables thus found. Time is used as an independent variable because time is a parameter common to almost all previous approaches; therefore this approach may in a way serve as an alternative to all previous approaches. In fact, the underlying research may ultimately lead towards SCE of complex systems, using CBSE, in a scientific, syste...
Ahmed, Nadeem; Qureshi, M Rizwan Jameel
In the last few years it has become clear to the research community that further improvements in classic approaches for solving low-level computer vision and image/video understanding tasks are difficult to obtain. New approaches have started evolving, employing knowledge-based processing, though transforming a priori knowledge into low-level models and rules is far from straightforward. In this paper we examine one of the most popular active contour models, snakes, and propose a snake model that modifies existing terms and introduces a model-based one, eliminating basic problems through the use of prior shape knowledge in the model. A probabilistic, rule-driven utilization of the proposed model follows, able to handle (or cope with) objects of different shapes, contour complexities and motions; different environments, indoor and outdoor; cluttered sequences; and cases where the background is complex (not smooth) and where moving objects get partially occluded. The proposed method has been tested on a variety of sequences and the experimental results verify its efficiency.
The success of the object-based image analysis (OBIA) paradigm can be attributed to the fact that regions obtained by means of the segmentation process are described by a variety of spectral, shape, texture and context characteristics. These representative object attributes can be assigned to different land-cover/land-use types in two ways. The first is to use supervised classifiers such as K-nearest neighbors (KNN) and Support Vector Machines (SVM); the second is to create classification rules. Supervised classifiers perform very well and generally achieve higher accuracies. However, one of their drawbacks is that they provide no explicit knowledge in understandable and interpretable forms. Building the rule set is generally based on domain expert knowledge when dealing with a small number of classes and a small number of attributes, but having dozens of continuously valued attributes attached to each image object makes it a tedious task, and experts quickly get overwhelmed and become totally helpless. This is where data mining techniques for knowledge discovery help to uncover the hidden relationships between classes and their attached attributes. The aim of this paper is to highlight the benefits of using knowledge discovery and data mining tools, especially rule induction algorithms, for useful and accurate information extraction from high spatial resolution remotely sensed imagery.
Djerriri, K.; Malki, M.
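A deliberately tiny stand-in for the rule induction idea above: searching one continuous object attribute for the best single threshold rule. The NDVI attribute and labels below are hypothetical; real rule induction algorithms search many attributes jointly and produce rule sets, but the per-attribute cut-point search is the basic building block:

```python
import numpy as np

def best_threshold_rule(attribute, labels):
    """Scan candidate cut-points on one continuous attribute and
    return (threshold, accuracy) of the best 'value > t -> class 1'
    rule. Candidates are midpoints between sorted unique values."""
    values = np.unique(attribute)
    cuts = (values[:-1] + values[1:]) / 2
    best = (None, 0.0)
    for t in cuts:
        acc = float(np.mean((attribute > t) == labels))
        if acc > best[1]:
            best = (t, acc)
    return best

# Hypothetical image objects: NDVI separating vegetation (True) from rest
ndvi   = np.array([0.10, 0.15, 0.22, 0.55, 0.61, 0.70, 0.48, 0.05])
is_veg = np.array([0, 0, 0, 1, 1, 1, 1, 0], dtype=bool)
t, acc = best_threshold_rule(ndvi, is_veg)
```

The resulting interval ("NDVI > t") is exactly the kind of human-readable condition that rule induction exposes, in contrast to the opaque decision functions of KNN or SVM.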
Software testing and maintenance, being interleaved phases, span much of the software life cycle. Efforts to minimize this span rely largely on testing, since maintenance is inevitable. The features of Object-Oriented (OO) software systems, when compared to classical systems, promise a considerable reduction in maintenance cost, though not necessarily in the need for maintenance itself. It is natural that even such systems evolve for many reasons. Though the specific reasons leading to maintenance differ, the general rationale behind maintenance is to extend the life cycle, and possibly the value, of the existing system. Hence testing effort remains natural and significant even then. Moreover, the salient features of OO software systems extend the testing span despite their claims regarding maintenance. However, the availability of classical OO software metrics aids better early quality testing of OO systems. They exploit the critical parts of OO software systems, thereby offering timely, thorough and effective assurance. There is not yet a common metric model in this regard, however. On the other hand, it is expected that evolved model-based OO software metrics help define the subjective features more objectively, facilitating users in performing metrics activities. The conflation of both classical and model-based metrics mutually alleviates their limitations and brings more synergy to reducing the test costs of OO software systems.
M. Raviraja Holla
Non-negative matrix factorization (NMF) of an input data matrix into a matrix of basis vectors and a matrix of encoding coefficients is a subspace representation method that has attracted the attention of researchers in pattern recognition in recent years. We have explored crucial aspects of NMF in extensive recognition experiments with the ORL database of faces, which contains intuitively clear parts constituting the whole. By principally restructuring the learning stage and formulating NMF problems for each of the a priori given parts separately, we developed a novel modular NMF algorithm. Although this algorithm provides uniquely separated basis vectors which code individual face parts in accordance with the parts-based principle of the NMF methodology applied to object recognition problems, the significant improvement of recognition rates for occluded parts predicted in several papers was not achieved. We claim that using the parts-based concept in NMF as a basis for solving recognition problems with occluded objects has not been justified.
Bajla, Ivan; Soukup, Daniel
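The baseline NMF decomposition discussed above can be sketched with the standard Lee-Seung multiplicative updates. Note this is only the conventional algorithm the paper's modular variant builds on, not the modular algorithm itself; the toy "parts" data are invented:

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W H||_F^2.
    V (m x n) must be non-negative; returns W (m x r), H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy parts-based data: each column mixes two non-negative "parts"
parts = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
coeffs = np.random.default_rng(1).random((2, 30))
V = parts @ coeffs
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates are multiplicative, W and H stay non-negative throughout, which is what yields the additive, parts-based basis vectors the abstract refers to.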
Christopher Becket Mahnke
Background: In order to reduce the time and effort needed to develop microbial strains with better capability of producing desired bioproducts, genome-scale metabolic simulations have proven useful in identifying gene knockout and amplification targets. Constraints-based flux analysis has successfully been employed for such simulation, but is limited in its ability to properly describe the complex nature of biological systems. Gene knockout simulations are relatively straightforward to implement, simply by constraining the flux value of the target reaction to zero, but the identification of reliable gene amplification targets is rather difficult. Here, we report a new algorithm which incorporates physiological data into a model to improve the model's prediction capabilities and to capitalize on the relationships between genes and metabolic fluxes. Results: We developed an algorithm, flux variability scanning based on enforced objective flux (FVSEOF) with grouping reaction (GR) constraints, in an effort to identify gene amplification targets by considering reactions that co-carry flux values based on physiological omics data via "GR constraints". This method scans changes in the variabilities of metabolic fluxes in response to an artificially enforced objective flux of product formation. The gene amplification targets predicted using this method were validated by comparing the predicted effects with previous experimental results obtained for the production of shikimic acid and putrescine in Escherichia coli. Moreover, new gene amplification targets for further enhancing putrescine production were validated through experiments involving the overexpression of each identified target gene under condition-controlled batch cultivation. Conclusions: FVSEOF with GR constraints allows identification of gene amplification targets for metabolic engineering of microbial strains in order to enhance the production of desired bioproducts.
The algorithm was validated through experiments on the enhanced production of putrescine in E. coli, in addition to comparison with previously reported experimental data. The FVSEOF strategy with GR constraints will be generally useful for developing industrially important microbial strains with enhanced capabilities for producing chemicals of interest.
The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is the improved version of basic teaching-learning based optimization (TLBO) algorithm adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concept of number of teachers, adaptive teaching facto...
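The teacher phase of the underlying single-objective TLBO algorithm can be sketched as follows. This is only an illustration of the basic mechanism MO-ITLBO extends (not the multi-objective or improved version); the sphere benchmark, population size and iteration count are assumptions for demonstration:

```python
import numpy as np

def teacher_phase(pop, f, rng):
    """One TLBO teacher phase (minimisation): every learner moves
    toward the current best solution ('teacher') relative to the
    class mean; a move is kept only if it improves that learner."""
    teacher = pop[np.argmin([f(x) for x in pop])]
    tf = rng.integers(1, 3)                 # teaching factor in {1, 2}
    step = rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0))
    new = pop + step
    keep = np.array([f(a) <= f(b) for a, b in zip(new, pop)])
    return np.where(keep[:, None], new, pop)

sphere = lambda x: float(np.sum(x ** 2))    # classic benchmark, optimum at 0
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, (20, 3))
init_best = min(sphere(x) for x in pop)
for _ in range(80):
    pop = teacher_phase(pop, sphere, rng)
best = min(sphere(x) for x in pop)
```

The full TLBO also includes a learner phase (pairwise interaction between learners); MO-ITLBO additionally introduces multiple teachers, an adaptive teaching factor and multi-objective handling, per the abstract.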
Theme of this talk: (1) Net-benefit of activities and decisions derives from objectives (and their priority) -- similarly: need for integration, value of technology/capability. (2) Risk is a lack of confidence that objectives will be met. (2a) Risk-informed decision making requires objectives. (3) Consideration of objectives is central to recent guidance.
Background: Conformation generation is a ubiquitous problem in molecular modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and from systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolutionary algorithm (MOEA). Results: The conformational perturbation is subjected to evolutionary operations on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto-optimal conformers energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105-112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 ± 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A copy of the precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. Conclusion: On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods.
The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms the four other multiple-conformer generators in reproducing the bioactive conformations of the 329 structures. The speed advantage indicates Cyndi is a powerful alternative for extensive conformational sampling and large-scale conformer database preparation.
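The Pareto-optimality selection at the heart of an MOEA such as the one described above can be sketched as a non-dominated filter over objective vectors. A minimal Python illustration; the (energy, diversity-penalty) scores below are invented, and both objectives are minimised:

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows, all objectives minimised.
    Row j dominates row i if it is no worse in every objective
    and strictly better in at least one."""
    obj = np.asarray(objectives, dtype=float)
    n = len(obj)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical conformers scored on (energy, spread penalty)
scores = [(1.0, 3.0), (2.0, 1.0), (3.0, 3.0), (1.5, 2.0), (2.0, 1.0)]
front = pareto_front(scores)   # conformer 2 is dominated by conformer 0
```

In an MOEA the survivors of this filter form the Pareto-optimal set carried into the next generation, which is how low-energy and well-scattered conformers are retained simultaneously.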
An integrated multi-objective method for environmental flow assessment was developed that considers the variability of potential habitats as a critical factor in determining how ecosystems respond to hydrological alterations. Responses of habitat area, and the magnitude of those responses as influenced by salinity and water depth, were established and assessed according to fluctuations in river discharge and tidal currents. The requirements of typical migratory species during pivotal life-stage seasons (e.g., reproduction and juvenile growth) and natural flow variations were integrated into the flow-needs assessment. Critical environmental flows for a typical species were defined based on two primary objectives: (1) a high level of habitat area and (2) low variability of habitat area. After integrating