WorldWideScience

Sample records for content immuno-detection based

  1. Sensitivity improvement of an immuno-detection method for azaspiracids based on the use of microspheres coupled to a flow-fluorimetry system

    Directory of Open Access Journals (Sweden)

    María Fraga Corral

    2014-06-01

These results demonstrate the high sensitivity of the microsphere-based immuno-detection assay for AZAs. The immobilization of AZA-1 instead of the synthetic AZA-2 used in Rodríguez et al. (2014), combined with a lower mAb 8F4 concentration, provided a remarkable improvement in sensitivity. The ON protocol used in Rodríguez et al. (2014) displayed an IC50 similar to that of the new short assay (around 1 nM), while the new ON protocol provided an IC50 5-fold more sensitive (0.3 nM). Therefore, the new short assay allows a reduction of the experimental time. Additionally, the increase in sensitivity could help to avoid shellfish matrix interferences. Previously published works using immunoassays for the detection of phycotoxins present in shellfish avoided matrix interference by further extract dilution in combination with an increase of assay sensitivity (Fraga et al., 2012; Fraga et al., 2013). The extraction protocol described by Rodríguez et al. (2014) will probably be suitable for this newly optimized AZA-detection method, since many reagents are the same and the higher sensitivity will allow higher extract dilution. Considering the extraction protocol recovery, the sensitivity of the current assay and the regulated limit, shellfish extracts could be diluted up to 1:30 or 1:150 (v/v) for detection with the short or long protocols, respectively. Additionally, mAb 8F4 was demonstrated to recognize AZA-2 and AZA-3 with cross-reactivities of 42% and 138%, respectively. Presumably, this optimized assay will detect these analogs with similar cross-reactivity. The sensitivity of the microsphere-based assay for AZAs is sufficient to detect these compounds at the regulated levels in shellfish. This microsphere-based multi-detection method provides an easy-to-perform, highly sensitive and rapid method for the detection of AZAs. It could be included in a multi-detection method, which would allow time and sample volume…

  2. Magnetic bead based immuno-detection of Listeria monocytogenes and Listeria ivanovii from infant formula and leafy green vegetables using the Bio-Plex suspension array system.

    Science.gov (United States)

    Day, J B; Basavanna, U

    2015-04-01

    Listeriosis, a disease contracted via the consumption of foods contaminated with pathogenic Listeria species, can produce severe symptoms and high mortality in susceptible people and animals. The development of molecular methods and immuno-based techniques for detection of pathogenic Listeria in foods has been challenging due to the presence of assay inhibiting food components. In this study, we utilize a macrophage cell culture system for the isolation and enrichment of Listeria monocytogenes and Listeria ivanovii from infant formula and leafy green vegetables for subsequent identification using the Luminex xMAP technique. Macrophage monolayers were exposed to infant formula, lettuce and celery contaminated with L. monocytogenes or L. ivanovii. Magnetic microspheres conjugated to Listeria specific antibody were used to capture Listeria from infected macrophages and then analyzed using the Bio-Plex 200 analyzer. As few as 10 CFU/mL or g of L. monocytogenes was detected in all foods tested. The detection limit for L. ivanovii was 10 CFU/mL in infant formula and 100 CFU/g in leafy greens. Microsphere bound Listeria obtained from infected macrophage lysates could also be isolated on selective media for subsequent confirmatory identification. This method presumptively identifies L. monocytogenes and L. ivanovii from infant formula, lettuce and celery in less than 28 h with confirmatory identifications completed in less than 48 h.

  3. Content-Based Instruction

    Science.gov (United States)

    DelliCarpini, M.; Alonso, O.

    2013-01-01

    DelliCarpini and Alonso's book "Content-Based Instruction" explores different approaches to teaching content-based instruction (CBI) in the English language classroom. They provide a comprehensive overview of how to teach CBI in an easy-to-follow guide that language teachers will find very practical for their own contexts. Topics…

  5. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B. V. Patel

    2012-10-01

Full Text Available Content based video retrieval is an approach for facilitating the searching and browsing of large video collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i database and a user study measured the correctness of responses.
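The record above indexes frames by low-level color features. A minimal sketch of that idea, assuming a joint RGB histogram with 8 quantization levels per channel and the histogram-intersection similarity (illustrative choices, not necessarily the paper's exact features):

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint histogram."""
    q = (frame // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()  # normalize so frames of any size are comparable

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

# Two synthetic "frames": one mostly red, one mostly blue.
red = np.zeros((32, 32, 3), dtype=np.uint8); red[..., 0] = 200
blue = np.zeros((32, 32, 3), dtype=np.uint8); blue[..., 2] = 200
print(histogram_intersection(color_histogram(red), color_histogram(red)))   # 1.0
print(histogram_intersection(color_histogram(red), color_histogram(blue)))  # 0.0
```

A query frame's histogram would be compared against all stored histograms and the most similar videos returned.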

  6. Multiplex microsphere immuno-detection of potato virus Y, X and PLRV

    NARCIS (Netherlands)

    Bergervoet, J.H.W.; Peters, J.; Beckhoven, van J.R.C.M.; Bovenkamp, van den G.W.; Jacobson, J.W.; Wolf, van der J.M.

    2008-01-01

    To monitor seed potatoes for potato virus X, Y and PLRV, a multiplex microsphere immunoassay (MIA) was developed based on the Luminex xMAP technology, as an alternative to ELISA. The xMAP technology allowed detection of a number of antigens simultaneously whereas ELISA only allowed simplex detection

  7. Immuno-detection of OCTN1 (SLC22A4) in HeLa cells and characterization of transport function.

    Science.gov (United States)

    Pochini, Lorena; Scalise, Mariafrancesca; Indiveri, Cesare

    2015-11-01

OCTN1 was immuno-detected in the cervical cancer cell HeLa, in which the complete pattern of acetylcholine metabolizing enzymes is expressed. Comparison of immuno-staining intensity of HeLa OCTN1 with the purified recombinant human OCTN1 allowed measuring the specific OCTN1 concentration in the HeLa cell extract and, hence, calculating the HeLa OCTN1 specific transport activity, which was about 10 nmol·min⁻¹·mg protein⁻¹, measured as uptake of [³H]acetylcholine in proteoliposomes reconstituted with HeLa extract. This value was very similar to the specific activity of the recombinant protein. Acetylcholine transport was suppressed by incubation of the protein or proteoliposomes with the anti-OCTN1 antibody and was strongly inhibited by PLP and MTSEA, known inhibitors of OCTN1. The absence of ATP in the internal side of proteoliposomes strongly impaired transport function of both the HeLa and, as expected, the recombinant OCTN1. HeLa OCTN1 was inhibited by spermine, NaCl (Na⁺), TEA, γ-butyrobetaine, choline, acetylcarnitine and ipratropium but not by neostigmine. Besides acetylcholine, choline was taken up by HeLa OCTN1 proteoliposomes. The transporter catalyzed also acetylcholine and choline efflux which, differently from uptake, was not inhibited by MTSEA. Time course of [³H]acetylcholine uptake in intact HeLa cells was measured. As in proteoliposomes, acetylcholine transport in intact cells was inhibited by TEA and NaCl. Efflux of [³H]acetylcholine occurred in intact cells, as well. The experimental data concur in demonstrating a role of OCTN1 in transporting acetylcholine and choline in HeLa cells.

  8. Applying Content Analysis to Web-based Content

    OpenAIRE

    Kim, Inhwa; Kuljis, Jasna

    2010-01-01

    Using Content Analysis onWeb-based content, in particular the content available onWeb 2.0 sites, is investigated. The relative strengths and limitations of the method are described. To illustrate how content analysis may be used, we provide a brief overview of a case study that investigates cultural impacts on the use of design features with regard to self-disclosure on the blogs of South Korean and United Kingdom’s users. In this study we took a standard approach to conducting the content an...

  9. CONTENT BASED BATIK IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    A. Haris Rangkuti

    2014-01-01

Full Text Available Content Based Batik Image Retrieval (CBBIR) is an area of research that focuses on image processing based on the characteristic motifs of batik. Basically a batik image has a unique motif compared with other images; its uniqueness lies in the characteristics of its texture and shape, which are distinct from those of other images. The study of a batik image must start from a preprocessing stage, in which all color images are converted to grayscale. This is followed by a feature extraction process that captures the characteristic motifs of every kind of batik using edge detection. After obtaining the visually apparent characteristic motifs, four texture characteristic functions are calculated: mean, energy, entropy and standard deviation. Characteristic functions can be added as needed. The results of the characteristic-function calculations are made more specific using the Daubechies type 2 wavelet transform and invariant moments. The result is an index value for every type of batik. Because the same motif can occur in different sizes, each kind of motif is divided into three sizes: small, medium and large. The performance of batik image similarity using this method is about 90-92%.
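The four texture statistics named above (mean, energy, entropy, standard deviation) can be computed from the gray-level distribution of an image. A minimal sketch, assuming first-order statistics over the 8-bit grayscale histogram (the abstract does not specify whether they are taken over raw pixels or wavelet coefficients):

```python
import numpy as np

def texture_features(gray):
    """Compute the four first-order texture statistics named in the abstract
    from an 8-bit grayscale image: mean, energy, entropy, standard deviation."""
    p = np.bincount(gray.ravel(), minlength=256).astype(float)
    p /= p.sum()                          # gray-level probability distribution
    levels = np.arange(256)
    mean = float((levels * p).sum())
    energy = float((p ** 2).sum())        # uniformity of the distribution
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    std = float(np.sqrt(((levels - mean) ** 2 * p).sum()))
    return mean, energy, entropy, std

flat = np.full((16, 16), 128, dtype=np.uint8)   # constant image: no texture
m, e, h, s = texture_features(flat)
print(m, e, h, s)  # 128.0 1.0 0.0 0.0
```

A perfectly flat image has maximal energy and zero entropy and standard deviation, which is why these measures discriminate between smooth and richly textured batik motifs.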

  10. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  11. Content-based vessel image retrieval

    Science.gov (United States)

    Mukherjee, Satabdi; Cohen, Samuel; Gertner, Izidor

    2016-05-01

This paper describes an approach to vessel classification from satellite images using content based image retrieval methodology. Content-based image retrieval is an important problem in both medical imaging and surveillance applications. In many cases the archived reference database is not fully structured, thus making content-based image retrieval a challenging problem. In addition, in surveillance applications, the query image may be affected by weather and/or geometric distortions. Our approach to content-based vessel image retrieval consists of two phases. First, we create a structured reference database; then, for each new query image of a vessel, we find the closest cluster of images in the structured reference database, thus identifying and classifying the vessel. We then update the closest cluster with the new query image.
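The second phase described above, matching a query to the closest cluster, can be sketched as nearest-centroid classification. The feature vectors, cluster count, and running-mean update below are illustrative assumptions; the abstract does not specify the features or the update rule:

```python
import numpy as np

def nearest_cluster(query, centroids):
    """Return the index of the reference cluster whose centroid is closest
    to the query image's feature vector (phase two of the approach above)."""
    d = np.linalg.norm(centroids - query, axis=1)
    return int(d.argmin())

# Hypothetical vessel classes, each summarized by a 2-D feature centroid.
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
query = np.array([4.6, 5.3])
cls = nearest_cluster(query, centroids)
print(cls)  # 1

# One simple way to "update the closest cluster with the new query image":
# fold the query into the matched centroid as a running mean.
n = 10  # assumed current size of the matched cluster
centroids[cls] = (centroids[cls] * n + query) / (n + 1)
```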

  12. Content-based retrieval of visual information

    NARCIS (Netherlands)

    Oerlemans, Adrianus Antonius Johannes

    2011-01-01

    In this dissertation, I investigate new approaches relevant to content-based image retrieval techniques. First, the MOD paradigm is proposed, a method for detecting salient points in images. These salient points are specifically designed to enhance image retrieval accuracy by maximizing distinctive

  13. Material Recognition for Content Based Image Retrieval

    NARCIS (Netherlands)

    Geusebroek, J.M.

    2002-01-01

    One of the open problems in content-based Image Retrieval is the recognition of material present in an image. Knowledge about the set of materials present gives important semantic information about the scene under consideration. For example, detecting sand, sky, and water certainly classifies the im

  15. Information Audit Based on Image Content Filtering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

At present, network information audit systems are almost always based on text information filtering, but harmful information can be embedded directly into images or image files by its providers in order to evade monitoring. This paper realizes an information audit system based on image content filtering. Taking pornographic program identification as an example, the system can monitor video containing abnormal human body information by matching texture characters against those defined in advance, which consist of contrast, energy, correlation and entropy measures, among others.

  16. Content Based Image Retrieval through Clustering

    Directory of Open Access Journals (Sweden)

    Sandhya

    2012-06-01

Full Text Available Content-based image retrieval (CBIR) is a technique used for extracting similar images from an image database. A CBIR system is required to access images effectively and efficiently using information contained in image databases. Here, K-Means is used for image retrieval. The K-Means method can be applied only in those cases when the mean of a cluster is defined. The K-Means method is not suitable for discovering clusters with non-convex shapes or clusters of very different size. In this paper, CBIR, clustering and K-Means are defined. With the help of these, the data consisting of images can be grouped and retrieved.
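The K-Means grouping described above can be sketched with plain Lloyd iterations, which are valid here precisely because the mean of a cluster of real-valued image feature vectors is well defined (as the abstract notes). The toy feature vectors and initialization are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=20, init=None):
    """Plain Lloyd's algorithm over image feature vectors."""
    centers = X[init].copy() if init is not None else X[:k].copy()
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its cluster.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "image features": two well-separated groups of ten images each.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 4)),
               np.random.default_rng(2).normal(5, 0.1, (10, 4))])
labels, centers = kmeans(X, 2, init=[0, 10])
print(labels)  # first ten images share one label, last ten the other
```

At retrieval time, a query would be assigned to its nearest center and only the images of that cluster ranked, avoiding a scan of the full database.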

  17. Client Device Based Content Adaptation Using Rule Base

    Directory of Open Access Journals (Sweden)

    Velammal

    2011-01-01

Full Text Available Problem statement: Content adaptation has been playing an important role in mobile devices, where content display differs from desktop computers in many aspects, such as display screens, processing power and network connection bandwidth. In order to display web contents appropriately on mobile devices and on other types of devices such as handheld computers, PDAs and smart phones, it is important to adapt or transcode them to fit the characteristics of these devices. Approach: Existing content adaptation systems deploy various techniques which have been developed for specific purposes and goals. By exploiting various possible combinations of available resources, an appropriate adaptation process can be carried out on the actual data, so that the information can be assimilated on an end system other than the intended one. In this study, we present a content adaptation system based on rules created for mobile devices. Rules are invoked based on the individual client device information. Results: The adaptation has been performed according to the delivery device, which was formalized through the profiler system. A profile holds information about the hardware and software specifications of the device, thereby enabling the adaptation of web content based on those characteristics, which enables the user to access the web easily on various devices. Conclusion/Recommendation: This study enhances the viability of the information being presented to the user, which is independent of the end system being used for accessing the information. With the help of configurable rules, effective content adaptation can be achieved to provide optimal results.

  18. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing, and the search is then performed on the Hadoop platform in a specially designed parallel way. Extensive experimental results show that it is able to retrieve images based on content effectively on large-scale clusters and image sets.
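The indexing step above pairs SURF descriptors with Locality-Sensitive Hashing so that a query is compared only against a small bucket instead of the whole database. A minimal single-machine sketch of the LSH part, using random-hyperplane (sign) hashing on synthetic descriptors; the paper's exact LSH family and Hadoop job layout are not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NBITS = 64, 16
planes = rng.normal(size=(NBITS, DIM))   # random hyperplanes defining the hash

def lsh_key(v):
    """Bucket key: the sign pattern of v's projections onto the hyperplanes.
    Similar descriptors tend to land in the same bucket."""
    return tuple((planes @ v > 0).astype(int).tolist())

# Build the index: hash every stored descriptor into its bucket.
index = {}
descriptors = rng.normal(size=(100, DIM))
for i, v in enumerate(descriptors):
    index.setdefault(lsh_key(v), []).append(i)

# At query time, only the query's own bucket is searched.
query = descriptors[42]
candidates = index[lsh_key(query)]
print(42 in candidates)  # True
```

On Hadoop, the bucket key would naturally serve as the shuffle key, so each reducer handles one set of colliding descriptors in parallel.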

  19. CONTENT-BASED AUTOFOCUSING IN AUTOMATED MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Peter Hamm

    2010-11-01

Full Text Available Autofocusing is the fundamental step in image acquisition and analysis with automated microscopy devices. Despite all the effort that has been put into developing a reliable autofocus system, recent methods still lack robustness towards different microscope modes and distracting artefacts. This paper presents a novel automated focusing approach that is generally applicable to different microscope modes (bright-field, phase contrast, Differential Interference Contrast (DIC) and fluorescence microscopy). The main innovation consists of a content-based focus search that makes use of a priori knowledge about the observed objects by employing local object features and boosted learning. Hence, this method turns away from common autofocus approaches that rely solely on whole-image frequency measurements to obtain the focus plane. Thus, it is possible to exclude artefacts from the focus calculation, as well as to locate the in-focus layer of specific microscopic objects.

  20. Content Based Image Retrieval Based on Color: A Survey

    Directory of Open Access Journals (Sweden)

    Mussarat Yasmin

    2015-11-01

Full Text Available Digital images have been used very extensively for information sharing, interpretation and meaningful expression over the past couple of decades. This extensive use not only advanced the digital communication world with ease and usability but also produced unwanted difficulties around the use of digital images. Because of their extensive usage it sometimes becomes harder to filter images based on their visual contents. To overcome these problems, Content Based Image Retrieval (CBIR) was introduced as one of the recent ways to find specific images in massive databases of digital images efficiently, or in other words to sustain the use of digital images in information sharing. In the past years, many CBIR systems have been anticipated, developed and brought into usage as an outcome of the extensive research done in the CBIR domain. Based on the contents of images, different approaches to CBIR have different implementations for searching images, resulting in different measures of performance and accuracy. Some of them are in fact very effective approaches for fast and efficient content based image retrieval. This survey highlights the work done by researchers to develop image retrieval techniques based on the color of images. These techniques, along with their pros and cons as well as their applications in relevant fields, are discussed. Moreover, the techniques are also categorized on the basis of the common approach used.

  1. Metadata and API Based Environment Aware Content Delivery Architecture

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

One of the limitations of current content delivery networks is the lack of support for environment aware content delivery. This paper first discusses the requirements of such support, and proposes a new metadata gateway based environment aware content delivery architecture. The paper discusses in some detail the key functions and technologies of the environment aware content delivery architecture, including its APIs and control policies. Finally the paper presents an application to illustrate the advantages of the environment aware content delivery architecture in the context of next generation networks.

  2. Graph Based Segmentation in Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    P. S. Suhasini

    2008-01-01

Full Text Available Problem statement: Traditional image retrieval systems are content based image retrieval systems which rely on low-level features for indexing and retrieval of images. CBIR systems fail to meet user expectations because of the gap between the low-level features used by such systems and the high-level perception of images by humans. To address this, graph based segmentation is used as a preprocessing step in Content Based Image Retrieval (CBIR). Approach: Graph based segmentation has the ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. After segmentation, features are extracted from the segmented images: texture features using the wavelet transform and color features using the histogram model. The segmented query image features are then compared with the features of the segmented database images. The similarity measure used for texture features is the Euclidean distance, and for color features the quadratic distance approach. Results: The experimental results demonstrate about 12% improvement in performance for the color feature with segmentation. Conclusions/Recommendations: Along with this improvement, neural network learning can be embedded in this system to reduce the semantic gap.
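The two similarity measures named above can be sketched side by side. The quadratic-form distance on color histograms uses a bin-similarity matrix A, so perceptually similar colors partially cancel; the 3-bin histograms and the matrix below are illustrative assumptions:

```python
import numpy as np

def euclidean(f1, f2):
    """Euclidean distance, used here for texture feature vectors."""
    return float(np.linalg.norm(f1 - f2))

def quadratic_distance(h1, h2, A):
    """Histogram quadratic-form distance d = sqrt((h1-h2)^T A (h1-h2));
    A[i, j] encodes the perceptual similarity of color bins i and j."""
    d = h1 - h2
    return float(np.sqrt(d @ A @ d))

h1 = np.array([0.5, 0.5, 0.0])
h2 = np.array([0.5, 0.0, 0.5])
# With the identity matrix the quadratic form reduces to Euclidean distance.
assert np.isclose(quadratic_distance(h1, h2, np.eye(3)), euclidean(h1, h2))
# If bins 1 and 2 are perceptually similar, the cross-term shrinks the distance.
A_sim = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.9], [0.0, 0.9, 1.0]])
print(quadratic_distance(h1, h2, A_sim))  # about 0.224, vs about 0.707 Euclidean
```

This is why the quadratic distance is preferred for color: two histograms that differ only between near-identical shades are not penalized as heavily as plain Euclidean distance would penalize them.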

  3. Speech Transduction Based on Linguistic Content

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter; Christiansen, Thomas Ulrich

    material six times the duration of previous investigations. Our results show that the correlation of spectral tilt with information content is relatively constant across time, even if averaged across talkers. This indicates that it is possible to devise a robust method for estimating information density...

  4. Content-Based Instruction and Content and Language Integrated Learning: The Same or Different?

    Science.gov (United States)

    Cenoz, Jasone

    2015-01-01

    This article looks at the characteristics of Content-Based Instruction (CBI) and Content and Language Integrated Learning (CLIL) in order to examine their similarities and differences. The analysis shows that CBI/CLIL programmes share the same essential properties and are not pedagogically different from each other. In fact, the use of an L2 as…

  5. Cobra: A Content-Based Video Retrieval System

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.

    2002-01-01

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  6. Prenatal Care: A Content-Based ESL Curriculum.

    Science.gov (United States)

    Hassel, Elissa Anne

    A content-based curriculum in English as a Second Language (ESL) focusing on prenatal self-care is presented. The course was designed as a solution to the problem of inadequate prenatal care for limited-English-proficient Mexican immigrant women. The first three sections offer background information on and discussion of (1) content-based ESL…

  7. ADAPTIVE CONTENT BASED TEXTUAL INFORMATION SOURCE PRIORITIZATION

    Directory of Open Access Journals (Sweden)

    Nikhil Mitra

    2014-10-01

Full Text Available The world-wide-web offers a plethora of textual information sources which are ready to be utilized for several applications. In fact, given the rapidly evolving nature of online data, there is a real risk of information overload unless we continue to develop and refine techniques to meaningfully segregate these information sources. Specifically, there is a dearth of content-oriented and intelligent techniques which can learn from past search experiences and also adapt to a user’s specific requirements during her current search. In this paper, we tackle the core issue of prioritizing textual information sources on the basis of the relevance of their content to the central theme that a user is currently exploring. We propose a new Source Prioritization Algorithm that adopts an iterative learning approach to assess the proclivity of given information sources towards a set of user-defined seed words in order to prioritize them. The final priorities obtained serve as initial priorities for the next search request. This serves a dual purpose. Firstly, the system learns incrementally from several users’ cumulative search experiences and re-adjusts the source priorities to reflect the acquired knowledge. Secondly, the refreshed source priorities are utilized to direct a user’s current search towards more relevant sources while also adapting to the new set of keywords acquired from that user. Experimental results show that the proposed algorithm progressively improves the system’s ability to discern between different sources, even in the presence of several random sources. Further, it is able to scale well to identify the augmented information source when a new enriched information source is generated by combining existing ones.
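The iterative priority update described above can be sketched as blending each source's old priority with an observed relevance score against the seed words. The seed set, the token-overlap relevance measure, and the learning rate are stand-in assumptions; the abstract does not give the algorithm's exact formulas:

```python
SEEDS = {"retrieval", "image", "content"}   # user-defined seed words (assumed)

def relevance(text):
    """Fraction of tokens that are seed words: a simple stand-in for the
    paper's proclivity measure."""
    toks = text.lower().split()
    return sum(t in SEEDS for t in toks) / max(len(toks), 1)

def update_priorities(priorities, docs, lr=0.5):
    """One iteration: blend old priority with observed relevance, so the
    final priorities can seed the next search session (as the abstract says)."""
    return {s: (1 - lr) * priorities[s] + lr * relevance(d)
            for s, d in docs.items()}

docs = {"A": "content based image retrieval survey",
        "B": "recipes and cooking tips for beginners"}
p = {"A": 0.5, "B": 0.5}                    # both sources start equal
for _ in range(3):
    p = update_priorities(p, docs)
print(p["A"] > p["B"])  # True: the on-theme source has been promoted
```

Because the output priorities feed back in as the next session's starting point, the ranking sharpens cumulatively across users, mirroring the incremental learning the abstract describes.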

  8. Bread Water Content Measurement Based on Hyperspectral Imaging

    DEFF Research Database (Denmark)

    Liu, Zhi; Møller, Flemming

    2011-01-01

Water content is one of the most important properties of bread for tasting assessment or storage monitoring. Traditional bread water content measurement methods are mostly carried out manually, which is destructive and time consuming. This paper proposes an automated water content measurement for bread quality based on near-infrared hyperspectral imaging, evaluated against the conventional manual loss-in-weight method. For this purpose, hyperspectral component unmixing technology is used for measuring the water content quantitatively, and a bread water content index is defined for this measurement. The proposed measurement scheme is relatively inexpensive to implement and easy to set up. The experimental results demonstrate its effectiveness.
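The unmixing step above decomposes each pixel's spectrum into abundances of known component spectra, with the water abundance giving the quantitative estimate. A minimal linear-unmixing sketch; the five-band endmember spectra are invented for illustration, whereas the paper's real endmembers would be measured or learned:

```python
import numpy as np

# Assumed endmember spectra (columns): "water" and "dry matter" signatures
# over five hypothetical NIR bands.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.7, 0.4],
              [0.3, 0.8],
              [0.1, 0.9]])

def unmix(pixel):
    """Least-squares abundances a with pixel ≈ E @ a; the water abundance
    serves as the water-content estimate for that pixel."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    a = np.clip(a, 0, None)     # abundances cannot be negative
    return a / a.sum()          # normalize abundances to sum to one

pixel = E @ np.array([0.7, 0.3])   # synthetic pixel: 70% water signature
print(unmix(pixel))                # recovers [0.7, 0.3]
```

Averaging the per-pixel water abundances over the bread region would then yield an image-level water content index of the kind the abstract defines.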

  9. Mashup Based Content Search Engine for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-05-01

Full Text Available A mashup based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo! JAPAN Web Search API, Yahoo! JAPAN Image Search API, YouTube Data API, and Amazon Product Advertising API. The retrieved results are merged and linked to each other, so different types of content can be referred to once an e-learning content item is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches across a variety of content types: images, documents, PDF files and moving pictures.

  10. Quality-based content delivery over the Internet

    CERN Document Server

    Li, Xiang

    2011-01-01

"Quality-Based Content Delivery over the Internet" mainly discusses the methodology of doing quality-based content delivery in an Internet environment. Because the network is becoming intelligent and active, more and more researchers are talking about achieving personalization and customization in Internet content delivery. As researchers are aware, by introducing intelligence into a web intermediary server, they can make the content delivery more efficient and of higher quality. Still, the detailed methodology of doing so is never illustrated fully. The most critical part will be the active

  11. CONTENT BASED VIDEO RETRIEVAL BASED ON HDWT AND SPARSE REPRESENTATION

    Directory of Open Access Journals (Sweden)

    Sajad Mohamadzadeh

    2016-04-01

Full Text Available Video retrieval has recently attracted a lot of research attention due to the exponential growth of video datasets and the internet. Content based video retrieval (CBVR) systems are very useful for a wide range of applications with several types of data, such as visual, audio and metadata. In this paper, we use only the visual information from the video. Shot boundary detection, key frame extraction, and video retrieval are three important parts of CBVR systems. In this paper, we have modified and proposed new methods for these three important parts of our CBVR system. Meanwhile, the local and global color, texture, and motion features of the video are extracted as features of key frames. To evaluate the applicability of the proposed technique against various methods, the P(1) metric and the CC_WEB_VIDEO dataset are used. The experimental results show that the proposed method provides better performance and less processing time compared to the other methods.

  12. Speech Transduction Based on Linguistic Content

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter; Christiansen, Thomas Ulrich

    Digital hearing aids use a variety of advanced digital signal processing methods in order to improve speech intelligibility. These methods are based on knowledge about the acoustics outside the ear as well as psychoacoustics. This paper investigates the recent observation that speech elements...

  13. AN INVESTIGATION OF TEACHERS’ PEDAGOGICAL SKILLS AND CONTENT KNOWLEDGE IN A CONTENT-BASED INSTRUCTION CONTEXT

    Directory of Open Access Journals (Sweden)

    Tengku Nor Rizan Tengku Mohamad Maasum

    2012-01-01

Full Text Available Advocates of the content-based approach believed that a language can be learnt effectively when it is the medium of instruction rather than just a subject. Integrating English and content as part of instruction has become one of the cornerstones of second language pedagogy. Researchers claim that there are many benefits of integrating English and content instruction. Among the benefits are increased student interest in content themes, meaningful input and understanding. In 2003, the Malaysian Ministry of Education introduced the teaching and learning of science and mathematics in English for Year One, Form One and Lower Six Form in all government public schools. This paper describes the challenges faced by teachers when they are required to teach content subjects such as science and mathematics in English. The focus of the paper is on the teachers' pedagogical skills and content knowledge, which comprise subject matter content, pedagogical approach, classroom management, use of resources, assessment, preparation of teaching materials, managing students, teachers' compensatory communication strategies, use of the first language and teachers' perspectives on teaching content subjects in English. Data were obtained from a self-report questionnaire administered to 495 secondary school teachers in West Malaysia. Results from the study provide implications for school administrators in making decisions on the assignment of capable teachers to teach the various levels of classes. Suggestions for teacher self-development and life-long learning efforts are also provided.

  14. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

    Full Text Available There are a number of prevailing methods for image mining. This paper covers the features of four techniques, i.e. color histogram, color moment, texture, and Edge Histogram Descriptor. The nature of an image is basically based on human perception of the image, while the machine interpretation of the image is based on its contours and surfaces. The study of image mining is a very challenging task because it involves pattern recognition, which is a very important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moment, texture, and Edge Histogram Descriptor, with a provision to add new features in future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances are calculated for each feature, summed, and averaged. The user interface is provided in MATLAB. The image properties analyzed in this work are obtained using computer vision and image processing algorithms: for color, the histograms of the images are computed; for texture, co-occurrence matrix based entropy, energy, etc., are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, the averages of the four techniques are taken and the resulting images are retrieved.
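The distance-averaging step described above can be sketched in Python (a toy illustration with invented feature vectors; the actual system computes these features with MATLAB image processing routines):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_distance(query_feats, image_feats):
    """Average the per-feature Euclidean distances (histogram, moments,
    texture, edge histogram), as the abstract describes."""
    dists = [euclidean(query_feats[k], image_feats[k]) for k in query_feats]
    return sum(dists) / len(dists)

# Toy database: each image described by four feature vectors (values invented).
database = {
    "img_a": {"hist": [1, 0], "moment": [0.2], "texture": [0.5], "edge": [3]},
    "img_b": {"hist": [9, 4], "moment": [0.9], "texture": [0.1], "edge": [7]},
}
query = {"hist": [1, 1], "moment": [0.25], "texture": [0.45], "edge": [3]}

ranking = sorted(database, key=lambda name: combined_distance(query, database[name]))
print(ranking[0])  # img_a is nearest to the query
```

Ranking by the averaged distance returns the images most similar across all four feature types at once.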

  15. Calculation of the debris flow concentration based on clay content

    Institute of Scientific and Technical Information of China (English)

    CHEN Ningsheng; CUI Peng; LIU Zhonggang; WEI Fangqiang

    2003-01-01

    The clay content of a debris flow has a tremendous influence on its concentration (γC). It is reported that the concentration can be calculated by applying a polynomial based on the clay content. Here one polynomial model and one logarithm model for calculating the concentration from the clay content are obtained, for both ordinary debris flow and viscous debris flow. The result derives from statistical analysis of the relationship between debris flow concentration and clay content at 45 debris flow sites located in the southwest of China. The models can be applied to calculate the concentration of debris flows that are impossible to observe. The models are valid because their principles lie in the effects of clay content on debris flow formation, movement and suspended particle diameter; the mechanism relating clay content and concentration is clear and reliable. Analysis of the developing tendency, on the basis of the relationship between clay content and debris flow concentration, shows that a debris flow is usually micro-viscous when the clay content is low (<3%). Indeed, the lower the clay content, the lower the concentration for most debris flows, and the debris flow tends to become a water-rock flow or a hyperconcentrated flow as the clay content decreases. Statistically, soil is apt to transform into a viscous debris flow when the clay content ranges from 3% to 18%. Its concentration increases with increasing clay content when the clay content is between 5% and 10%, but decreases with increasing clay content when the clay content is between 10% and 18%. Soil is apt to transform into a mudflow when the clay content exceeds 18%. The concentration of a mudflow usually decreases as the clay content increases, the reverse of the tendency for micro-viscous debris flows.
There is
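The qualitative regime thresholds reported above can be summarized in a small Python sketch (the boundaries are taken from the abstract; the paper's fitted polynomial and logarithm models and their coefficients are not reproduced here):

```python
def debris_flow_regime(clay_pct):
    """Classify the likely flow type from clay content (%), following the
    qualitative thresholds in the abstract: <3% micro-viscous, 3-18%
    viscous, >18% mudflow. Numeric concentration prediction would need
    the paper's fitted model coefficients, which are not given here."""
    if clay_pct < 3:
        return "micro-viscous debris flow (tending to water-rock flow)"
    elif clay_pct <= 18:
        return "viscous debris flow"
    else:
        return "mudflow"

print(debris_flow_regime(2))
print(debris_flow_regime(8))
print(debris_flow_regime(25))
```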

  16. Web-Based Media Contents Editor for UCC Websites

    Science.gov (United States)

    Kim, Seoksoo

    The purpose of this research is to "design web-based media contents editor for establishing UCC(User Created Contents)-based websites." The web-based editor features user-oriented interfaces and increased convenience, significantly different from previous off-line editors. It allows users to edit media contents online and can be effectively used for online promotion activities of enterprises and organizations. In addition to development of the editor, the research aims to support the entry of enterprises and public agencies to the online market by combining the technology with various UCC items.

  17. Rotational invariant similarity measurement for content-based image indexing

    Science.gov (United States)

    Ro, Yong M.; Yoo, Kiwon

    2000-04-01

    We propose a similarity matching technique for content-based image retrieval that is invariant to image rotation. Since image contents for indexing and retrieval may be arbitrarily extracted from a still image or a key frame of video, rotation invariance of the image feature description is important for general application of content-based image indexing and retrieval. In this paper, we propose a rotation-invariant similarity measurement incorporating texture features based on the human visual system (HVS). To reduce computational complexity, we employed hierarchical similarity distance searching. To verify the method, experiments with the MPEG-7 data set are performed.
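One common way to obtain rotation invariance for a histogram-style descriptor is to take the minimum distance over all cyclic shifts of the orientation bins; the sketch below illustrates that idea in Python and is not the paper's exact HVS-based formulation:

```python
def l1(a, b):
    """L1 distance between two equal-length histograms."""
    return sum(abs(x - y) for x, y in zip(a, b))

def rotation_invariant_distance(h1, h2):
    """Minimum L1 distance over all cyclic shifts of h2's bins.
    Rotating an image cyclically shifts its orientation histogram,
    so minimizing over shifts cancels the rotation."""
    n = len(h2)
    return min(l1(h1, h2[k:] + h2[:k]) for k in range(n))

h = [5, 1, 0, 2]
rotated = [2, 5, 1, 0]  # the same histogram with bins shifted by one
print(rotation_invariant_distance(h, rotated))  # 0: rotation is cancelled
```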

  18. AN INVESTIGATION OF TEACHERS’ PEDAGOGICAL SKILLS AND CONTENT KNOWLEDGE IN A CONTENT-BASED INSTRUCTION CONTEXT

    Directory of Open Access Journals (Sweden)

    Tengku Nor Rizan Tengku Mohamad Maasum

    2012-01-01

    Full Text Available Abstract: Advocates of the content-based approach believed that a language can be learnt effectively when it is the medium of instruction rather than just a subject. Integrating English and content as part of instruction has become one of the cornerstones of second language pedagogy. Researchers claimed that there are many benefits of integrating English and content instruction. Among the benefits are the increase in students’ interest in content themes, meaningful input and understanding. In 2003, the Malaysian Ministry of Education introduced the teaching and learning of science and mathematics in English for Year One, Form One and Lower Six Form in all government public schools. This paper describes the challenges faced by teachers when they are required to teach content subjects such as science and mathematics in English. The focus of the paper is on the teachers’ pedagogical skills and content knowledge, which comprise subject matter content, pedagogical approach, classroom management, use of resources, assessment, preparation of teaching materials, managing students, teachers’ compensatory communication strategies, use of first language and teachers’ perspectives on teaching content subjects in English. Data were obtained from a self-report questionnaire administered to 495 secondary school teachers in West Malaysia. Results from the study provide implications for school administrators in making decisions on the assignment of capable teachers to teach the various levels of classes. Suggestions for teacher self-development and life-long learning efforts are also provided. Key words: Content-based instruction, ESL instruction, second language, first language and second language pedagogy

  19. Rock and Roll English Teaching: Content-Based Cultural Workshops

    Science.gov (United States)

    Robinson, Tim

    2011-01-01

    In this article, the author shares a content-based English as a Second/Foreign Language (ESL/EFL) workshop that strengthens language acquisition, increases intrinsic motivation, and bridges cultural divides. He uses a rock and roll workshop to introduce an organizational approach with a primary emphasis on cultural awareness content and a…

  20. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  1. Privacy-preserving content-based recommender system

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, T.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in e-commerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing r

  2. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  3. Privacy-Preserving Content-Based Recommender System

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, P.J.M.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in e-commerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing r

  4. Privacy-preserving content-based recommendations through homomorphic encryption

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, T.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in ecommerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing ra

  5. Privacy-Preserving Content-Based Recommendations through Homomorphic Encryption

    NARCIS (Netherlands)

    Erkin, Z.; Beye, M.; Veugen, P.J.M.; Lagendijk, R.L.

    2012-01-01

    By offering personalized content to users, recommender systems have become a vital tool in ecommerce and online media applications. Content-based algorithms recommend items or products to users, that are most similar to those previously purchased or consumed. Unfortunately, collecting and storing ra

  6. An Efficient Content Based Image Retrieval Scheme

    Directory of Open Access Journals (Sweden)

    Zukuan WEI

    2013-11-01

    Full Text Available Due to the recent improvements in digital photography and storage capacity, storing large amounts of images has been made possible. Consequently, efficient means to retrieve images matching a user’s query are needed. In this paper, we propose a framework based on a bipartite graph model (BGM) for semantic image retrieval. BGM is a scalable data structure that aids semantic indexing in an efficient manner, and it can also be incrementally updated. Firstly, all the images are segmented into several regions with an image segmentation algorithm, pre-trained SVMs are used to annotate each region, and the final label is obtained by merging all the region labels. Then we use the set of images and the set of region labels to build a bipartite graph. When a query is given, a query node, initially containing a fixed number of labels, is created and attached to the bipartite graph. The node then distributes the labels based on the edge weight between the node and its neighbors. Image nodes receiving the most labels represent the most relevant images. Experimental results demonstrate that our proposed technique is promising.
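The label-distribution step can be illustrated with a minimal Python sketch (the graph, weights and update rule are invented for illustration and are simpler than the paper's BGM algorithm):

```python
# Toy bipartite graph: region labels on one side, images on the other,
# connected by weighted edges (all values invented).
edges = {  # label -> {image: edge weight}
    "sky":   {"img1": 2.0, "img2": 0.5},
    "beach": {"img1": 1.5},
    "car":   {"img3": 3.0},
}

def retrieve(query_labels, top_k=2):
    """Distribute each query label along its weighted edges; images that
    accumulate the most label mass are ranked as most relevant."""
    score = {}
    for label in query_labels:
        neighbors = edges.get(label, {})
        total = sum(neighbors.values()) or 1.0
        for img, w in neighbors.items():
            score[img] = score.get(img, 0.0) + w / total
    return sorted(score, key=score.get, reverse=True)[:top_k]

print(retrieve(["sky", "beach"]))  # img1 receives the most label mass
```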

  7. Sugar content of popular sweetened beverages based on objective laboratory analysis: focus on fructose content.

    Science.gov (United States)

    Ventura, Emily E; Davis, Jaimie N; Goran, Michael I

    2011-04-01

    The consumption of fructose, largely in the form of high fructose corn syrup (HFCS), has risen over the past several decades and is thought to contribute negatively to metabolic health. However, the fructose content of foods and beverages produced with HFCS is not disclosed and estimates of fructose content are based on the common assumption that the HFCS used contains 55% fructose. The objective of this study was to conduct an objective laboratory analysis of the sugar content and composition in popular sugar-sweetened beverages with a particular focus on fructose content. Twenty-three sugar-sweetened beverages along with four standard solutions were analyzed for sugar profiles using high-performance liquid chromatography (HPLC) in an independent, certified laboratory. Total sugar content was calculated as well as percent fructose in the beverages that use HFCS as the sole source of fructose. Results showed that the total sugar content of the beverages ranged from 85 to 128% of what was listed on the food label. The mean fructose content in the HFCS used was 59% (range 47-65%) and several major brands appear to be produced with HFCS that is 65% fructose. Finally, the sugar profile analyses detected forms of sugar that were inconsistent with what was listed on the food labels. This analysis revealed significant deviations in sugar amount and composition relative to disclosures from producers. In addition, the tendency for use of HFCS that is higher in fructose could be contributing to higher fructose consumption than would otherwise be assumed.
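The percent-fructose computation itself is straightforward; a short Python sketch with invented HPLC readings (the study's actual beverage data are not reproduced here):

```python
def percent_fructose(fructose_g, glucose_g, other_sugars_g=0.0):
    """Fructose as a percentage of total sugars in an HPLC sugar profile."""
    total = fructose_g + glucose_g + other_sugars_g
    return 100.0 * fructose_g / total

# A beverage whose HFCS is 59% fructose (the study's observed mean)
# would show roughly this monosaccharide ratio per serving:
print(round(percent_fructose(5.9, 4.1), 1))  # 59.0
```

The common labeling assumption of 55% fructose corresponds to `percent_fructose(5.5, 4.5)`; the study found the measured mean to be higher.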

  8. Content-based multimedia retrieval: indexing and diversification

    NARCIS (Netherlands)

    van Leuken, R.H.

    2009-01-01

    The demand for efficient systems that facilitate searching in multimedia databases and collections is vastly increasing. Application domains include criminology, musicology, trademark registration, medicine and image or video retrieval on the web. This thesis discusses content-based retrieval

  9. Providing content based billing architecture over Next Generation Network

    CERN Document Server

    Lakhtaria, Kamaljit I

    2010-01-01

    The Mobile Communication marketplace has stressed that "content is king" ever since the initial footsteps of Next Generation Networks like 3G, 3GPP and IP Multimedia Subsystem (IMS) services. However, many carriers and content providers have struggled to drive revenue for content services, primarily due to current limitations of certain types of desirable content offerings, simplistic billing models, and the inability to support flexible pricing, charging and settlement. Unlike wireline carriers, wireless carriers have a limit to the volume of traffic they can carry, bounded by the finite wireless spectrum. Event-based services like calling, conferencing, etc., only charge per event, while a content-based charging system attracts Mobile Network Operators (MNOs) to maximize service delivery to customers and achieve the best ARPU. With Next Generation Networks, the number of data-related services that can be offered is increased significantly. The wireless carrier will be able to move from offering wireles...

  10. Text Content Pushing Technology Research Based on Location and Topic

    Science.gov (United States)

    Wei, Dongqi; Wei, Jianxin; Wumuti, Naheman; Jiang, Baode

    2016-11-01

    In the field, geological workers usually want to obtain related geological background information about the working area quickly and accurately. This information exists in massive geological data, in which text data described in natural language accounts for a large proportion. This paper studies a method for extracting location information from mass text data; proposes a geographic location–geological content correlation algorithm based on Spark and MapReduce2; classifies content using KNN; and builds a content pushing system based on location and topic. It is running in the geological survey cloud, and good results were obtained in tests using real geological data.

  11. A study of real-time content marketing : formulating real-time content marketing based on content, search and social media

    OpenAIRE

    Nguyen, Thi Kim Duyen

    2015-01-01

    The primary objective of this research is to develop a profound understanding of a new concept in content marketing – real-time content marketing – from the perspective of digital marketing experts. Particularly, the research focuses on real-time content marketing theories and on how to build a real-time content marketing strategy based on content, search and social media. It also finds out how marketers measure and keep track of conversion rates of their real-time content marketing plan. Practically, th...

  12. Information Theoretic Similarity Measures for Content Based Image Retrieval.

    Science.gov (United States)

    Zachary, John; Iyengar, S. S.

    2001-01-01

    Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

  13. Density-based similarity measures for content based search

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory

    2009-01-01

    We consider the query-by-multiple-example problem, where the goal is to identify database samples whose content is similar to a collection of query samples. To assess the similarity we use a relative content density which quantifies the relative concentration of the query distribution to the database distribution. If the database distribution is a mixture of the query distribution and a background distribution, then it can be shown that database samples whose relative content density is greater than a particular threshold ρ are more likely to have been generated by the query distribution than the background distribution. We describe an algorithm for predicting samples with relative content density greater than ρ that is computationally efficient and possesses strong performance guarantees. We also show empirical results for applications in computer network monitoring and image segmentation.
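A one-dimensional sketch of the relative content density idea, using naive Gaussian kernel density estimates (the paper's algorithm is more efficient and avoids explicit density estimation; the data here are invented):

```python
import math

def gaussian_kde(data, x, bw=1.0):
    """Naive Gaussian kernel density estimate at point x."""
    n = len(data)
    return sum(math.exp(-((x - d) / bw) ** 2 / 2) for d in data) / (
        n * bw * math.sqrt(2 * math.pi))

def query_by_examples(database, query, rho=1.0, bw=1.0):
    """Return database samples whose relative content density
    q(x)/p(x) exceeds the threshold rho: the samples more likely drawn
    from the query distribution than from the background."""
    return [x for x in database
            if gaussian_kde(query, x, bw) > rho * gaussian_kde(database, x, bw)]

db = [0.0, 0.2, 0.5, 5.0, 5.1, 5.3, 9.9]   # database samples
q = [5.0, 5.2]                              # query examples
hits = query_by_examples(db, q, rho=1.0, bw=0.5)
print(hits)  # the database samples near 5 are flagged as query-like
```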

  14. Content Based Image Retrieval : Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-10-01

    Full Text Available In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color correlogram and Gabor texture are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.
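The first three color moments mentioned above can be computed per channel as follows (a minimal Python sketch on an invented four-pixel channel):

```python
import math

def color_moments(channel):
    """First three color moments of one channel: mean, standard
    deviation, and the (signed) cube root of the third central moment,
    a common skewness-style moment in CBIR feature vectors."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    third = sum((v - mean) ** 3 for v in channel) / n
    skew = math.copysign(abs(third) ** (1 / 3), third)
    return mean, math.sqrt(var), skew

pixels = [10, 12, 11, 40]          # one channel of a tiny invented image
m, s, k = color_moments(pixels)
print(round(m, 2))  # 18.25
```

Concatenating the three moments for each of the three color channels yields a compact 9-dimensional color feature per image.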

  15. Content Based Image Retrieval : Classification Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Shereena V.B

    2014-11-01

    Full Text Available In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color correlogram and Gabor texture are compared. The paper reviews the increase in efficiency of image retrieval when the color and texture features are combined. The similarity measures based on which matches are made and images are retrieved are also discussed. For effective indexing and fast searching of images based on visual features, neural network based pattern learning can be used to achieve effective classification.

  16. Mashup Based Content Search Engine for Mobile Devices

    OpenAIRE

    Kohei Arai

    2013-01-01

    A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web Search API, the Yahoo!JAPAN Image Search API, the YouTube Data API, and the Amazon Product Advertising API. The retrieved results are also merged and linked to each other. Therefore, different types of content can be referenced once an e-learning content item is retrieved. The implemented search engine is evaluated with 20 students. The results show usefulness and effectiv...

  17. Content-Based Design and Implementation of Ambient Intelligence Applications

    NARCIS (Netherlands)

    Diggelen, J. van; Grootjen, M.; Ubink, E.M.; Zomeren, M. van; Smets, N.J.J.M.

    2013-01-01

    Optimal support of professionals in complex ambient task environments requires a system that delivers the Right Message at the Right Moment in the Right Modality: (RM)3. This paper describes a content-based design methodology and an agent-based architecture to enable real time decisions of informati

  18. DNA methylation detection based on difference of base content

    Science.gov (United States)

    Sato, Shinobu; Ohtsuka, Keiichi; Honda, Satoshi; Sato, Yusuke; Takenaka, Shigeori

    2016-04-01

    Methylation frequently occurs at cytosines of CpG sites to regulate gene expression. The identification of aberrant methylation of certain genes is important for cancer marker analysis. The aim of this study was to determine the methylation frequency in DNA samples of unknown length and/or concentration. Unmethylated cytosine is known to be converted to thymine following bisulfite treatment and subsequent PCR. For this reason, the AT content of the product increases as the number of methylated sites decreases. In this study, the fluorescein-carrying bis-acridinyl peptide (FKA) molecule was used for the detection of methylation frequency. FKA contains fluorescein and two acridine moieties, which together allow for the determination of the AT content of double-stranded DNA fragments. Methylated and unmethylated human genomes were subjected to bisulfite treatment and subsequent PCR using primers specific for the CFTR, CDH4, DBC1, and NPY genes. The AT content in the resulting PCR products was estimated by FKA, and the AT content estimations were found to be in good agreement with those determined by DNA sequencing. This newly developed method may be useful for determining the methylation frequencies of many PCR products by measuring the fluorescence of samples excited at two different wavelengths.
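The relationship between methylation and AT content can be modeled in a few lines of Python (a simplified model of bisulfite conversion and PCR; the sequence and methylated positions are invented):

```python
def bisulfite_pcr(seq, methylated_positions):
    """Model bisulfite treatment plus PCR: unmethylated C reads out as T,
    while methylated C is protected and stays C."""
    return "".join(
        "T" if base == "C" and i not in methylated_positions else base
        for i, base in enumerate(seq))

def at_content(seq):
    """Fraction of A and T bases in a sequence."""
    return (seq.count("A") + seq.count("T")) / len(seq)

seq = "ACGCGT"                         # two CpG cytosines at positions 1 and 3
unmeth = bisulfite_pcr(seq, set())     # neither cytosine methylated
meth = bisulfite_pcr(seq, {1, 3})      # both cytosines methylated
print(unmeth, meth)  # ATGTGT ACGCGT: fewer methylated sites -> higher AT content
```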

  19. Content

    DEFF Research Database (Denmark)

    Keiding, Tina Bering

    Aim, content and methods are fundamental categories of both theoretical and practical general didactics. A quick glance at recent pedagogical literature on higher education, however, reveals a strong preoccupation with methods, i.e. how teaching should be organized socially (Biggs & Tang, 2007; Race, 2001; Ramsden, 2003). This trend appears closely related to the ‘from-teaching-to-learning’ movement, which has had a strong influence on pedagogy since the early nineties (Keiding, 2007; Terhart, 2003). Another interpretation of the current interest in methodology can be derived from... for selection of content (Klafki, 1985, 2000; Myhre, 1961; Nielsen, 2006). These attempts all share one feature, which is that criteria for selection of content appear very general and often, more or less explicitly, deal with teaching at the first Bologna cycle, i.e. schooling at the primary and lower...

  20. Content-Based Book Recommending Using Learning for Text Categorization

    OpenAIRE

    Mooney, Raymond J.; Roy, Loriene

    1999-01-01

    Recommender systems improve access to relevant products and information by making personalized suggestions based on previous examples of a user's likes and dislikes. Most existing recommender systems use social filtering methods that base recommendations on other users' preferences. By contrast, content-based methods use information about an item itself to make suggestions. This approach has the advantage of being able to recommend previously unrated items to users with unique interests and...

  1. Content Linking for UGC based on Word Embedding Model

    Directory of Open Access Journals (Sweden)

    Zhiqiao Gao

    2015-09-01

    Full Text Available There are huge amounts of User Generated Contents (UGCs), consisting of authors’ articles on different themes and readers’ on-line comments, on social networks every day. An article often gives rise to thousands of readers’ comments, which relate to specific points of the originally published article or of previous comments. This suggests an urgent need for automated methods to implement the content linking task, which can also help other related applications such as information retrieval, summarization and content management. So far content linking is still a relatively new issue. Because traditional approaches based on feature extraction are unsatisfactory, we look to deeper textual semantic analysis. The Word Embedding model based on deep learning has recently performed well in Natural Language Processing (NLP), especially in mining deep semantic information. Therefore, we study the Word Embedding model as trained by different neural network models, from which we can learn the structure, principles and training methods of neural network language models in more depth, to carry out deep semantic feature extraction. With the aid of these semantic features, we investigate content linking between comments and their original articles from social networks, and finally verify the validity of the proposed method by comparison with traditional feature-extraction approaches.
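Once word vectors are available, a simple way to link a comment to an article is to average the vectors and compare by cosine similarity; the sketch below uses tiny hand-made "embeddings" in place of vectors trained by a neural model:

```python
import math

def sentence_vector(tokens, embeddings):
    """Average the word vectors of the tokens (ignoring OOV words)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    dim = len(next(iter(embeddings.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Invented 2-d "embeddings" standing in for trained word vectors.
emb = {"goal": [1.0, 0.0], "match": [0.9, 0.1], "recipe": [0.0, 1.0]}
article = sentence_vector(["goal", "match"], emb)
comment_a = sentence_vector(["goal"], emb)
comment_b = sentence_vector(["recipe"], emb)
print(cosine(article, comment_a) > cosine(article, comment_b))  # True
```

The comment whose averaged vector is closest to the article's is linked to it; real systems would add thresholds and link comments to each other as well.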

  2. Content Based Video Retrieval using trajectory and Velocity features

    Directory of Open Access Journals (Sweden)

    Dr. S. D. Sawarkar

    2012-09-01

    Full Text Available The Internet forms today’s largest source of information, containing a high density of multimedia objects whose content is often semantically related. The identification of relevant media objects in such a vast collection poses a major problem that is studied in the area of multimedia information retrieval. Before the emergence of content-based retrieval, media was annotated with text, allowing the media to be accessed by text-based searching based on the classification of subject or semantics. In typical content-based retrieval systems, the contents of the media in the database are extracted and described by multi-dimensional feature vectors, also called descriptors. In our paper, to retrieve desired data, users submit query examples to the retrieval system. The system then represents these examples with feature vectors. The distances (i.e., similarities) between the feature vectors of the query example and those of the media in the feature dataset are then computed and ranked. Retrieval is conducted by applying an indexing scheme to provide an efficient way to search the video database. Finally, the system ranks the search results and returns those most similar to the query examples. Therefore, a content-based retrieval system has four aspects: feature extraction and representation, dimension reduction of features, indexing, and query specification. With the search engine being developed, the user should be able to initiate a retrieval procedure using video retrieval in a way that gives a better chance of finding the desired content.
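The trajectory and velocity features named in the title can be sketched as finite differences of object positions, compared with a summed Euclidean distance (a toy Python illustration; the paper's descriptors are richer):

```python
def velocity_feature(trajectory):
    """Per-frame velocity vectors from an object trajectory:
    finite differences of consecutive (x, y) positions."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]

def feature_distance(f1, f2):
    """Sum of Euclidean distances between corresponding velocity vectors."""
    return sum(((a - c) ** 2 + (b - d) ** 2) ** 0.5
               for (a, b), (c, d) in zip(f1, f2))

query = velocity_feature([(0, 0), (1, 0), (2, 0)])   # steady rightward motion
clip1 = velocity_feature([(5, 5), (6, 5), (7, 5)])   # same motion, elsewhere
clip2 = velocity_feature([(0, 0), (0, 2), (0, 4)])   # upward motion
print(feature_distance(query, clip1) < feature_distance(query, clip2))  # True
```

Because the velocity feature differences out absolute position, clip1 matches the query despite occurring in a different part of the frame; ranking clips by this distance yields the retrieval order.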

  3. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.

  4. Student Engagement with a Content-Based Learning Design

    Science.gov (United States)

    Padilla Rodriguez, Brenda Cecilia; Armellini, Alejandro

    2013-01-01

    While learning is commonly conceptualised as a social, collaborative process in organisations, online courses often provide limited opportunities for communication between people. How do students engage with content-based courses? How do they find answers to their questions? How do they achieve the learning outcomes? This paper aims to answer…

  5. Student engagement with a content-based learning design

    Directory of Open Access Journals (Sweden)

    Brenda Cecilia Padilla Rodriguez

    2013-09-01

    Full Text Available While learning is commonly conceptualised as a social, collaborative process in organisations, online courses often provide limited opportunities for communication between people. How do students engage with content-based courses? How do they find answers to their questions? How do they achieve the learning outcomes? This paper aims to answer these questions by focusing on students’ experiences in an online content-based course delivered in a large Mexican organisation. Sales supervisors (n=47) participated as students. Four main data sources were used to evaluate engagement with and learning from the course: surveys (n=40), think-aloud sessions (n=8), activity logs (n=47) and exams (n=43). Findings suggest that: (1) Students engage with a content-based course by following the guidance available and attempting to make the materials relevant to their own context. (2) Students are resourceful when trying to find support. If the materials do not provide the answers to their questions, they search for alternatives such as colleagues to talk to. (3) Content-based online learning designs may be engaging and effective. However, broadening the range of support options available to students may result in more meaningful, contextualised and rewarding learning experiences.

  6. Text mining of web-based medical content

    CERN Document Server

    Neustein, Amy

    2014-01-01

    Text Mining of Web-Based Medical Content examines web mining for extracting useful information that can be used for treating and monitoring the healthcare of patients. This work provides methodological approaches to designing mapping tools that exploit data found in social media postings. Specific linguistic features of medical postings are analyzed vis-a-vis available data extraction tools for culling useful information.

  7. Human-Centered Content-Based Image Retrieval

    NARCIS (Netherlands)

    van den Broek, Egon

    2005-01-01

    Retrieval of images that lack (suitable) annotations cannot be achieved through (traditional) Information Retrieval (IR) techniques. Access to such collections can be achieved through the application of computer vision techniques to the IR problem, which is baptized Content-Based Image

  8. An Integrated Approach for Image Retrieval based on Content

    Directory of Open Access Journals (Sweden)

    Kavita Choudhary

    2010-05-01

    Full Text Available The difficulties faced by an image retrieval system used for browsing, searching and retrieving images in image databases cannot be underestimated; moreover, the efficient management of rapidly expanding visual information has become an urgent problem in science and technology. This requirement formed the driving force behind the emergence of image retrieval techniques. Image retrieval based on content, also called content-based image retrieval, is a technique which uses visual contents to search for an image in a large-scale database. This image retrieval technique integrates both low-level visual features, addressing the more detailed perceptual aspects, and high-level semantic features underlying the more general conceptual aspects of visual data. Content-based image retrieval is accordingly being developed to address a variety of application areas: remote sensing, geographic information systems, weather forecasting, architectural and engineering design, and multimedia documents for digital libraries. In this paper we present an approach that significantly automates the retrieval process by relying on image analysis techniques based on visual features such as color with spatial information, texture and shape.

  9. Concept-Based Content of Professional Linguistic Education

    Science.gov (United States)

    Makshantseva, Nataliia Veniaminovna; Bankova, Liudmila Lvovna

    2016-01-01

    The article deals with professional education of future linguists built on the basis of conceptual approach. The topic is exemplified by the Russian language and a successful attempt to implement the concept-based approach to forming the content of professional language education. Within the framework of the proposed research, the concept is…

  10. Content-Based Information Retrieval from Forensic Databases

    NARCIS (Netherlands)

    Geradts, Z.J.M.H.

    2002-01-01

    In forensic science, the number of image databases is growing rapidly. For this reason, it is necessary to have a proper procedure for searching these image databases based on content. The use of image databases results in more solved crimes; furthermore, statistical information can be obtained.

  12. Application of Bayesian Classification to Content-Based Data Management

    Science.gov (United States)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
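The per-pixel classification behind these services can be sketched with a toy naive-Bayes classifier. The class names, priors, and single-band Gaussian parameters below are illustrative assumptions, not actual MODIS statistics:

```python
import math

# Toy Bayesian pixel classifier in the spirit of the scheme above.
CLASSES = {
    "clear-ocean": (0.05, 0.02),   # (mean, std) of a normalized radiance band
    "cloud":       (0.60, 0.15),
    "sun-glint":   (0.30, 0.10),
}
PRIORS = {"clear-ocean": 0.5, "cloud": 0.4, "sun-glint": 0.1}

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def classify_pixel(x):
    # Bayes rule: posterior is proportional to likelihood times prior.
    scores = {c: gaussian_pdf(x, m, s) * PRIORS[c] for c, (m, s) in CLASSES.items()}
    return max(scores, key=scores.get)

def subset_clear(pixels):
    # Content-based subsetting: keep only pixels classified as clear ocean.
    return [x for x in pixels if classify_pixel(x) == "clear-ocean"]
```

A content-based subscription would apply the same test per scene, delivering data only when enough pixels in the user's region of interest pass it.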

  13. Content Analysis of a Computer-Based Faculty Activity Repository

    Science.gov (United States)

    Baker-Eveleth, Lori; Stone, Robert W.

    2013-01-01

    The research presents an analysis of faculty opinions regarding the introduction of a new computer-based faculty activity repository (FAR) in a university setting. The qualitative study employs content analysis to better understand the phenomenon underlying these faculty opinions and to augment the findings from a quantitative study. A web-based…

  14. Reengineering the ESL Practitioner for Content-Based Instruction.

    Science.gov (United States)

    Haynes, Lilith M.

    The idea of content-based instruction (CBI) is at odds with the curricula of most English-as-a-Second-Language (ESL) teacher preparation programs. Nor does it fit easily with the skill-based texts and learning packages that are used widely in the field. There is also little agreement about the methods to be used to effect it at various levels of…

  15. A CONTENT ANALYSIS ON PROBLEM-BASED LEARNING APPROACH

    OpenAIRE

    BİBER, Mahir; Esen ERSOY; KÖSE BİBER, Sezer

    2014-01-01

    Problem Based Learning is one of the learning models that embody the general principles of active learning and in which students can use scientific process skills. This research aimed to investigate in detail the postgraduate theses on the PBL approach conducted in Turkey. The content analysis method was used. The study sample consisted of a total of 64 master's and PhD theses completed between 2012-2013 and available over the web. A “Content Analysis Template” prepared ...

  17. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

    Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
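The compare-and-write cycle described here can be modelled in a few lines of software. The bit-level interpretation below (compare on activated bits, write back the masked bits on a match) is an assumption made for illustration, not the patent's exact semantics:

```python
def cam_search_and_write(cam, key, activate):
    # activate[i] = 1: bit i takes part in the comparison;
    # activate[i] = 0: bit i is "masked" and is written back on a match.
    tags = []
    for row in cam:
        # Compare only the activated bit positions of the key against the row.
        tag = all(r == k for r, k, a in zip(row, key, activate) if a)
        tags.append(int(tag))
        if tag:
            # Tag bit set: write the masked key bits into this row.
            for i, a in enumerate(activate):
                if not a:
                    row[i] = key[i]
    return tags
```

In hardware all rows are searched in parallel; the loop over rows stands in for that parallel match line.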

  18. CONTENTS

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    The Development and Evolution of the Idea of the Mandate of Heaven in the Zhou Dynasty The changes in the idea of Mandate of Heaven during the Shang and Zhou dynasties are of great significance in the course of the development of traditional Chinese culture. The quickening and awakening of the humanistic spirit was not the entire content of the Zhou idea of Mandate of Heaven. In the process of annihilating the Shang dynasty and setting up their state, the Zhou propagated the idea of the Mandate of Heaven out of practical needs. Their idea of the Mandate of Heaven was not very different from that of the Shang. From the Western Zhou on, the Zhou idea of Mandate of Heaven by no means developed in a linear way along a rational track. The intermingling of rationality and irrationality and of awakening and non-awakening remained the overall state of the Zhou intellectual superstructure after their "spiritual awakening".

  19. Content-Based Spam Filtering on Video Sharing Social Networks

    CERN Document Server

    da Luz, Antonio; Araujo, Arnaldo

    2011-01-01

    In this work we are concerned with the detection of spam in video sharing social networks. Specifically, we investigate how much visual content-based analysis can aid in detecting spam in videos. This is a very challenging task, because of the high-level semantic concepts involved; of the assorted nature of social networks, preventing the use of constrained a priori information; and, what is paramount, of the context-dependent nature of spam. Content filtering for social networks is an increasingly demanded task: due to their popularity, the number of abuses also tends to increase, annoying the user base and disrupting their services. We systematically evaluate several approaches for processing the visual information: using static and dynamic (motion-aware) features, with and without considering the context, and with or without latent semantic analysis (LSA). Our experiments show that LSA is helpful, but taking the context into consideration is paramount. The whole scheme shows good results, showing the feasibility...

  20. CONTENTS

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Grassroots Officials Promotion in Differentiated Model Association; The Researching of Rural Clan: Paradigm Should be Shifted; Analysis Based on the Practice of Rural Community Construction in Ganzhou of Jiangxi Province and Yueping of Hubei Province

  1. Application of fuzzy logic in content-based image retrieval

    Institute of Scientific and Technical Information of China (English)

    WANG Xiao-ling; XIE Kang-lin

    2008-01-01

    We propose a fuzzy logic-based image retrieval system, in which image similarity can be inferred in a nonlinear manner, as in human thinking. In the fuzzy inference process, weight assignments for multiple image features are resolved implicitly. Each fuzzy rule embeds the subjectivity of human perception of image contents. A color histogram called the average area histogram is proposed to represent the color features. Experimental results show the efficiency and feasibility of the proposed algorithms.
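A minimal sketch of such fuzzy inference, assuming triangular membership functions and a small three-rule base (not the paper's actual rule set): note that no explicit per-feature weights appear; the relative influence of color and texture emerges from the rules themselves.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b on the interval [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def membership(d):
    # Fuzzify a normalized feature distance d in [0, 1].
    return {
        "small":  tri(d, -0.5, 0.0, 0.5),
        "medium": tri(d, 0.0, 0.5, 1.0),
        "large":  tri(d, 0.5, 1.0, 1.5),
    }

def fuzzy_similarity(color_d, texture_d):
    mc, mt = membership(color_d), membership(texture_d)
    # Rules (min for AND, max to aggregate):
    # IF both distances small THEN similarity high; one small/one medium
    # THEN medium; any large (or both medium) THEN low.
    high = min(mc["small"], mt["small"])
    med = max(min(mc["small"], mt["medium"]), min(mc["medium"], mt["small"]))
    low = max(mc["large"], mt["large"], min(mc["medium"], mt["medium"]))
    total = high + med + low
    if total == 0:
        return 0.0
    # Weighted-average (centroid-style) defuzzification to a crisp score.
    return (1.0 * high + 0.5 * med + 0.0 * low) / total
```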

  2. A New Approach to Coding in Content Based MANETs

    OpenAIRE

    Joy, Joshua; Yu, Yu-Ting; Perez, Victor; Lu, Dennis; Gerla, Mario

    2015-01-01

    In content-based mobile ad hoc networks (CB-MANETs), random linear network coding (NC) can be used to reliably disseminate large files under intermittent connectivity. Conventional NC involves random, unrestricted coding at intermediate nodes. This, however, is vulnerable to pollution attacks. To avoid attacks, a brute-force approach is to restrict the mixing at the source. However, source-restricted NC generally reduces the robustness of the code in the face of errors, losses and mobility induc...
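The coding primitive involved can be sketched over GF(2). Real RLNC deployments typically use a larger field such as GF(2^8); XOR-only coding over GF(2) is an assumption made here to keep the example short:

```python
import random

def random_coeffs(k):
    # In RLNC, each coding node draws its coefficient vector at random.
    return [random.randint(0, 1) for _ in range(k)]

def gf2_mix(packets, coeffs):
    # One coded packet: XOR of the source packets selected by coeffs (GF(2)).
    out = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            out = bytes(a ^ b for a, b in zip(out, p))
    return out

def gf2_rank(coeff_masks):
    # Rank of received coefficient vectors (as bitmask ints) over GF(2);
    # the receiver can decode once rank equals the number of source packets.
    basis = []
    for r in coeff_masks:
        for b in basis:
            r = min(r, r ^ b)   # reduce against the current XOR basis
        if r:
            basis.append(r)
    return len(basis)
```

Restricting mixing to the source corresponds to only ever forwarding the source's own coefficient vectors unchanged, which is exactly what costs robustness when packets are lost.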

  3. Active index for content-based medical image retrieval.

    Science.gov (United States)

    Chang, S K

    1996-01-01

    This paper introduces the active index for content-based medical image retrieval. The dynamic nature of the active index is its most important characteristic. With an active index, we can effectively and efficiently handle smart images that respond to accessing, probing and other actions. The main applications of the active index are to prefetch image and multimedia data, and to facilitate similarity retrieval. The experimental active index system is described.

  4. Content based Image Retrieval from Forensic Image Databases

    Directory of Open Access Journals (Sweden)

    Swati A. Gulhane

    2015-03-01

    Full Text Available Due to the proliferation of video and image data in digital form, content-based image retrieval has become a prominent research topic. In forensic science, digital data such as criminal images, fingerprints and scene images are widely used. The organisation of such large image collections therefore becomes a major issue, for example how to retrieve an image of interest quickly. There is a great need for an efficient technique for finding images. In order to find an image, the image has to be represented by certain features; color, texture and shape are three important visual features of an image, and searching for images using them has attracted much attention. There are many content-based image retrieval techniques in the literature. This paper gives an overview of different existing methods for content-based image retrieval and also suggests an efficient retrieval method for a digital image database of criminal photos, using dynamic dominant color, texture and shape features of an image, which gives an effective retrieval result.

  5. Content-based Image Retrieval by Information Theoretic Measure

    Directory of Open Access Journals (Sweden)

    Madasu Hanmandlu

    2011-09-01

    Full Text Available Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of information content in an image, termed 'an information theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. In this paper, colour and texture features are extracted using the new entropy function and the dominant colour is considered as a visual feature for a particular set of images. Thus colour and texture features constitute the two-dimensional feature vector for indexing the images. The low dimensionality of the feature vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without looking at every image in the database. The entropy values of colour and texture and the dominant colour are considered for measuring the similarity. The utility of the proposed image retrieval system based on the information theoretic measures is demonstrated on a benchmark dataset. Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177
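The indexing idea reads roughly as follows in code. The paper's specific entropy function is not reproduced here; plain Shannon entropy stands in for it, and the histograms are synthetic:

```python
import math

def shannon_entropy(hist):
    # Shannon entropy of a histogram, used as a stand-in for the paper's
    # own entropy function (which is not reproduced here).
    total = sum(hist)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

def feature_vector(color_hist, texture_hist):
    # Two-dimensional index: entropy of the colour feature plus entropy
    # of the texture feature, as in the low-dimensional scheme above.
    return (shannon_entropy(color_hist), shannon_entropy(texture_hist))

def distance(fv_query, fv_db):
    # Smaller Euclidean distance between entropy vectors = more similar.
    return sum((a - b) ** 2 for a, b in zip(fv_query, fv_db)) ** 0.5
```

The two-component vector is what keeps the atomic query cheap: comparing a query against the whole index is a scan over pairs of floats.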

  6. Contents

    Directory of Open Access Journals (Sweden)

    Editor IJRED

    2012-11-01

    Full Text Available International Journal of Renewable Energy Development, www.ijred.com, Volume 1, Number 3, October 2012, ISSN 2252-4940. Contents of articles:
    - Design and Economic Analysis of a Photovoltaic System: A Case Study (pp. 65-73), C.O.C. Oko, E.O. Diemuodeke, N.F. Omunakwe, and E. Nnamdi
    - Development of Formaldehyde Adsorption using Modified Activated Carbon – A Review (pp. 75-80), W.D.P Rengga, M. Sudibandriyo and M. Nasikin
    - Process Optimization for Ethyl Ester Production in Fixed Bed Reactor Using Calcium Oxide Impregnated Palm Shell Activated Carbon (CaO/PSAC) (pp. 81-86), A. Buasri, B. Ksapabutr, M. Panapoy and N. Chaiyut
    - Wind Resource Assessment in Abadan Airport in Iran (pp. 87-97), Mojtaba Nedaei
    - The Energy Processing by Power Electronics and its Impact on Power Quality (pp. 99-105), J. E. Rocha and B. W. D. C. Sanchez
    - First Aspect of Conventional Power System Assessment for High Wind Power Plants Penetration (pp. 107-113), A. Merzic, M. Music, and M. Rascic
    - Experimental Study on the Production of Karanja Oil Methyl Ester and Its Effect on Diesel Engine (pp. 115-122), N. Shrivastava, S.N. Varma and M. Pandey

  7. Estimation of rice leaf nitrogen contents based on hyperspectral LIDAR

    Science.gov (United States)

    Du, Lin; Gong, Wei; Shi, Shuo; Yang, Jian; Sun, Jia; Zhu, Bo; Song, Shalei

    2016-02-01

    Precision agriculture has become a global research hotspot in recent years, so a technique for rapidly monitoring farmland at large scale and for accurately monitoring the growing status of crops needs to be established. In this paper, a novel technique, hyperspectral LIDAR (HL), which works on the basis of wide-spectrum emission and a 32-channel detector, is introduced and its potential for vegetation detection evaluated. The spectra collected by HL were used to classify rice under four different nitrogen content levels and to derive the nitrogen contents with support vector machine (SVM) regression. Meanwhile, the wavelength selection and channel correction method for achieving high spectral resolution are discussed briefly. The analysis results show that: (1) the reflectance intensity of the selected characteristic wavelengths of the HL system correlates highly with the different nitrogen content levels of rice; (2) by increasing the number of wavelengths in the calculation, the classification accuracy is greatly improved (from 54% with 4 wavelengths to 83% with 32 wavelengths), as is the regression coefficient r2 (from 0.51 with 4 wavelengths to 0.75 with 32); (3) support vector machine regression is a useful method for retrieving rice leaf nitrogen contents. These results can help farmers make fertilization strategies more accurately. The receiving channels and characteristic wavelengths of the HL system can be flexibly selected according to different requirements, so the system could also be applied in other fields, such as geologic exploration and environmental monitoring.
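As a rough sketch of the regression step, a linear epsilon-insensitive SVR can be fit by subgradient descent. This is a deliberately simplified stand-in for a full SVM solver, and the (reflectance, nitrogen) training pairs it would consume are not reproduced here; anything trained with it below is synthetic:

```python
def train_linear_svr(X, y, epochs=500, lr=0.01, eps=0.1):
    # Minimal linear epsilon-insensitive SVR via subgradient descent.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            if abs(err) > eps:            # update only outside the eps-tube
                g = 1.0 if err > 0 else -1.0
                w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
                b -= lr * g
    return w, b

def predict(w, b, x):
    return sum(wj * xj for wj, xj in zip(w, x)) + b
```

In the paper's setting, each x would hold the reflectance intensities of the selected characteristic wavelengths (4 up to 32 of them), and y the measured leaf nitrogen content.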

  8. Image content authentication technique based on Laplacian Pyramid

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper proposes a technique of image content authentication based on the Laplacian Pyramid to verify the authenticity of image content. First, the image is decomposed into a Laplacian Pyramid before the transformation. Next, the smooth and detail properties of the original image are analyzed according to the Laplacian Pyramid, and the properties are classified and encoded to get the corresponding characteristic values. Then, the signature derived from the encrypted characteristic values is embedded in the original image as a watermark. After reception, the characteristic values of the received image are compared with the watermark drawn out from the image. The algorithm automatically identifies whether the content has been tampered with by means of morphologic filtration, and the location of the tampering is presented at the same time. Experimental results show that the proposed authentication algorithm can effectively detect the event and location when the original image content is tampered with. Moreover, it can tolerate some distortions produced by compression, filtration and noise degradation.
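The decomposition itself can be illustrated on a 1-D signal. Real images require the 2-D version with proper Gaussian low-pass filtering; pair-averaging stands in for that filter here, and the reconstruction check shows why the pyramid captures the smooth and detail properties losslessly:

```python
def downsample(x):
    # Average adjacent pairs: a crude low-pass filter plus decimation.
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]

def upsample(x, n):
    # Nearest-neighbour expansion back to length n.
    out = []
    for v in x:
        out += [v, v]
    return out[:n]

def laplacian_pyramid(x, levels):
    pyr, cur = [], x
    for _ in range(levels):
        low = downsample(cur)
        # Laplacian (detail) band: current level minus its smoothed version.
        pyr.append([a - b for a, b in zip(cur, upsample(low, len(cur)))])
        cur = low
    pyr.append(cur)            # smooth residual at the top
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = [d + u for d, u in zip(detail, upsample(cur, len(detail)))]
    return cur
```

The authentication scheme encodes characteristic values from the detail and smooth bands rather than the bands themselves.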

  9. CONTENTS

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    In this article, the author presents his opinions on the extent of China's ancient territory, viz. the territory administered by the unified nation and by separate states in Chinese ancient history, based on his former studies. The author also comments on the new academic views advanced since the 1990s, and presents four new opinions on the development phases of China's ancient territory.

  10. Information Theoretical Analysis of Identification based on Active Content Fingerprinting

    OpenAIRE

    Farhadzadeh, F Farzad; Willems, FMJ Frans; Voloshynovskiy, S

    2014-01-01

    Content fingerprinting and digital watermarking are techniques that are used for content protection and distribution monitoring. Over the past few years, both techniques have been well studied and their shortcomings understood. Recently, a new content fingerprinting scheme called active content fingerprinting was introduced to overcome these shortcomings. Active content fingerprinting aims to modify a content to extract more robust fingerprints than conventional content fingerprinting....

  12. System refinement for content based satellite image retrieval

    Directory of Open Access Journals (Sweden)

    NourElDin Laban

    2012-06-01

    Full Text Available We are witnessing a large increase in satellite-generated data, especially in the form of images. Hence intelligent processing of the huge amount of data received by dozens of earth observing satellites, with approaches oriented specifically to satellite images, presents itself as a pressing need. Content-based satellite image retrieval (CBSIR) approaches have so far mainly been driven by approaches dealing with traditional images. In this paper we introduce a novel approach that refines the image retrieval process using properties unique to satellite images. Our approach uses a query-by-polygon (QBP) paradigm for the content of interest instead of the more conventional rectangular query-by-image approach. First, we extract features from the satellite images using multiple tiling sizes. The system then uses these multilevel features within a multilevel retrieval system that refines the retrieval process. Our multilevel refinement approach has been experimentally validated against the conventional one, yielding enhanced precision and recall rates.

  13. NEW APPROACH FOR IMAGE REPRESENTATION BASED ON GEOMETRIC STRUCTURAL CONTENTS

    Institute of Scientific and Technical Information of China (English)

    Jia Xiaomeng; Wang Guoyu

    2003-01-01

    This paper presents a novel approach for the representation of image contents based on edge structural features. Edge detection is carried out for an image in the pre-processing stage. For feature representation, edge pixels are grouped into a set of segments through geometrical partitioning of the whole edge image. Then the invariant feature vector is computed for each edge-pixel segment. Thereby the image is represented with a set of spatially distributed feature vectors, each of which describes the local pattern of edge structures. Matching of two images can be achieved by the correspondence of two sets of feature vectors. By avoiding the difficulty of image segmentation and object extraction in complex real-world images, the proposed approach provides a simple and flexible description of images with complex scenes, in terms of the structural features of the image content. Experiments with real images illustrate the effectiveness of this new method.

  14. Identification and annotation of erotic film based on content analysis

    Science.gov (United States)

    Wang, Donghui; Zhu, Miaoliang; Yuan, Xin; Qian, Hui

    2005-02-01

    The paper brings forward a new method for identifying and annotating erotic films based on content analysis. First, the film is decomposed into video and audio streams. Then, the video stream is segmented into shots and key frames are extracted from each shot. We filter the shots that include potentially erotic content by finding the nude human body in key frames, using a Gaussian model in YCbCr color space for detecting skin regions. An external polygon that covers the skin regions is used as an approximation of the human body. Finally, we estimate the degree of nudity by calculating the ratio of skin area to whole body area with weighted parameters. The results of the experiment show the effectiveness of our method.
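The skin-detection step can be sketched as follows. The RGB-to-CbCr conversion is the standard JPEG one, but the Gaussian mean and (diagonal) variance are rough literature-style values assumed for illustration, not the paper's fitted parameters:

```python
import math

MEAN = (117.4, 156.6)   # assumed skin-tone centre in (Cb, Cr)
VAR = (160.1, 299.5)    # assumed diagonal covariance

def rgb_to_cbcr(r, g, b):
    # Standard JPEG YCbCr chrominance conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_likelihood(r, g, b):
    # Unnormalized Gaussian likelihood of the pixel's chrominance being skin.
    cb, cr = rgb_to_cbcr(r, g, b)
    d = (cb - MEAN[0]) ** 2 / VAR[0] + (cr - MEAN[1]) ** 2 / VAR[1]
    return math.exp(-0.5 * d)

def nudity_ratio(pixels, body_area, threshold=0.3):
    # Ratio of skin pixels to the (externally estimated) body area.
    skin = sum(1 for p in pixels if skin_likelihood(*p) >= threshold)
    return skin / body_area
```

In the paper, `body_area` would come from the external polygon covering the detected skin regions.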

  15. Content Authentication Based on JPEG-to-JPEG Watermarking

    Institute of Scientific and Technical Information of China (English)

    Hong-Xia Wang; Jie Hou; Ke Ding

    2009-01-01

    A content authentication technique based on JPEG-to-JPEG watermarking is proposed in this paper. In this technique, each 8×8 block in a JPEG compressed image is first processed by entropy decoding, and then the quantized discrete cosine transform (DCT) is applied to generate DCT coefficients: one DC coefficient and 63 AC coefficients per block. The DCT AC coefficients are used to form zero planes in which the watermark is embedded by a chaotic map. In this way, the watermark information is embedded into the JPEG compressed domain, and the output watermarked image is still in JPEG format. The proposed method is especially applicable to content authentication of JPEG images, since the quantized coefficients are modified for embedding the watermark and the chaotic system possesses high sensitivity to initial values. Experimental results show that tampered regions are localized accurately when the watermarked JPEG image is maliciously tampered with.

  16. Semantic-Based Requirements Content Management for Cloud Software

    Directory of Open Access Journals (Sweden)

    Jianqiang Hu

    2015-01-01

    Full Text Available Cloud Software is a complex software system whose topology and behavior can evolve dynamically in Cloud-computing environments. Given the unpredictable, dynamic, elastic, and on-demand nature of the Cloud, it would be unrealistic to assume that traditional software engineering can “cleanly” satisfy the behavioral requirements of Cloud Software. In particular, the majority of traditional requirements management approaches are document-centric, with a low degree of automation, coarse-grained management, and limited support for requirements modeling activities. Facing these challenges, and based on the metamodeling frame called RGPS (Role-Goal-Process-Service) international standard, this paper first presents a hierarchical framework of semantic-based requirements content management for Cloud Software. It then focuses on some of the important management techniques in this framework, such as the native storage scheme, an ordered index with keywords, requirements instance classification based on linear conditional random fields (CRFs), and a breadth-first search algorithm for associated instances. Finally, a prototype tool called RGPS-RM for semantic-based requirements content management is implemented to provide supporting services for the open requirements process of Cloud Software. The proposed framework applied to Cloud Software development is demonstrated to show its validity and applicability. RGPS-RM also demonstrates the effect of fine-grained retrieval and of the breadth-first search algorithm for associated instances in visualization.

  17. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy, more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and for follow-up operations. As the endoscope is the “eye of the surgeon”, the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.

  18. The Use of QBIC Content-Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Ching-Yi Wu

    2004-03-01

    Full Text Available The fast increase in digital images has drawn increasing attention to the development of image retrieval technologies. Content-based image retrieval (CBIR) has become an important approach to retrieving image data from large collections. This article reports our results on the use and user study of a CBIR system. Thirty-eight students majoring in art and design were invited to use IBM's QBIC (Query by Image Content) system through the Internet. Data on their information needs, behaviors, and retrieval strategies were collected through in-depth interviews, observation, and a self-described think-aloud process. Important conclusions are: (1) There are four types of information needs for image data: implicit, inspirational, ever-changing, and purposive; the type of need may change during the retrieval process. (2) CBIR is suitable for the example-type query, text retrieval is suitable for the scenario-type query, and image browsing is suitable for the symbolic query. (3) Unlike in text retrieval, detailed description of the query condition may more easily lead to retrieval failure. (4) CBIR is suitable for domain-specific image collections, not for images on the World-Wide Web. [Article content in Chinese]

  19. Content-based Image Retrieval by Spatial Similarity

    Directory of Open Access Journals (Sweden)

    Archana M. Kulkarn

    2002-07-01

    Full Text Available Similarity-based retrieval of images is an important task in image databases. Most user queries ask for database images that are spatially similar to a query image. In defence strategies, one wants to know the number of armoured vehicles, such as battle tanks and portable missile launching vehicles, moving towards a position, so that a counter-strategy can be decided. Content-based spatial-similarity retrieval of images can be used to locate the spatial relationships of various objects in a specific area from aerial photographs and to retrieve images similar to the query image from an image database. A content-based image retrieval system that efficiently and effectively retrieves information from a defence image database is presented, along with an architecture for retrieving images by spatial similarity. A robust algorithm, SIMdef, for retrieval by spatial similarity is proposed that utilises both directional and topological relations for computing similarity between images, retrieves similar images and recognises images even after they undergo modelling transformations (translation, scale and rotation). A case study of some common objects used in defence applications using the SIMdef algorithm has been done.

  20. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercials, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrates the validity of the proposed approach.
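
    MYCIN's inexact reasoning combines the certainty factors (CFs) contributed by independently firing rules. A minimal sketch: the combination formula below is MYCIN's standard one, while the classes and CF values are hypothetical.

```python
from functools import reduce

def combine_cf(x, y):
    """MYCIN combination of two certainty factors in [-1, 1]."""
    if x >= 0 and y >= 0:
        return x + y * (1 - x)
    if x < 0 and y < 0:
        return x + y * (1 + x)
    return (x + y) / (1 - min(abs(x), abs(y)))

def classify(evidence):
    """Fold the CFs contributed by fired rules for each class; pick the best."""
    scores = {cls: reduce(combine_cf, cfs) for cls, cfs in evidence.items()}
    return max(scores, key=scores.get), scores

# hypothetical rule firings (class -> CFs of the rules that matched)
label, scores = classify({
    "news":       [0.6, 0.4],    # e.g. anchor-shot rule and overlay-text rule fired
    "basketball": [0.3, -0.2],   # court-color rule fired, crowd-noise rule against
})
print(label, round(scores["news"], 2))  # news 0.76
```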

  1. Relevance Feedback in Content Based Image Retrieval: A Review

    Directory of Open Access Journals (Sweden)

    Manesh B. Kokare

    2011-01-01

    Full Text Available This paper provides an overview of the technical achievements in the research area of relevance feedback (RF) in content-based image retrieval (CBIR). Relevance feedback is a powerful technique for effectively improving the performance of CBIR systems, and reducing the semantic gap between low-level features and high-level concepts remains an open research area. The paper covers the current state of the art of research on relevance feedback in CBIR; various relevance feedback techniques and open issues are discussed in detail.
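
    As one concrete instance of relevance feedback, a Rocchio-style update (a classic RF technique, named here explicitly; not necessarily the variant any particular surveyed system uses) moves the query's feature vector toward the mean of images the user marked relevant and away from the mean of non-relevant ones.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio feedback on low-level feature vectors."""
    def mean(vectors):
        n = len(vectors)
        return [sum(col) / n for col in zip(*vectors)] if n else [0.0] * len(query)
    r, nr = mean(relevant), mean(nonrelevant)
    return [alpha * q + beta * ri - gamma * nri for q, ri, nri in zip(query, r, nr)]

q = [1.0, 0.0]
updated = rocchio(q, relevant=[[2.0, 0.0], [4.0, 0.0]], nonrelevant=[[0.0, 2.0]])
print(updated)  # pulled toward [3, 0], pushed away from [0, 2]
```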

  2. Natural ingredients based cosmetics. Content of selected fragrance sensitizers

    DEFF Research Database (Denmark)

    Rastogi, Suresh Chandra; Johansen, J D; Menné, T

    1996-01-01

    In the present study, we have investigated 42 cosmetic products based on natural ingredients for content of 11 fragrance substances: geraniol, hydroxycitronellal, eugenol, isoeugenol, cinnamic aldehyde, cinnamic alcohol, alpha-amylcinnamic aldehyde, citral, coumarin, dihydrocoumarin and alpha-hexylcinnamic aldehyde. The study revealed that 91% (20/22) of the natural ingredients based perfumes contained 0.027%-7.706% of 1 to 7 of the target fragrances. Between 1 and 5 of the chemically defined synthetic constituents of fragrance mix were found in 82% (18/22) of the perfumes. 35% (7/20) of the other cosmetic products (shampoos, creams, tonics, etc) were found to contain 0.0003-0.0820% of 1 to 3 of the target fragrances. Relatively high concentrations of hydroxycitronellal, coumarin, cinnamic alcohol and alpha-amyl cinnamic aldehyde were found in some of the investigated products. The detection of hydroxycitronellal and alpha-hexylcinnamic aldehyde in some of the products demonstrates that artificial fragrances, i.e., compounds not yet regarded as natural substances, may be present in products claimed to be based on natural ingredients.

  3. Demographic-Based Content Analysis of Web-Based Health-Related Social Media

    OpenAIRE

    2016-01-01

    Background An increasing number of patients from diverse demographic groups share and search for health-related information on Web-based social media. However, little is known about the content of the posted information with respect to the users’ demographics. Objective The aims of this study were to analyze the content of Web-based health-related social media based on users’ demographics to identify which health topics are discussed in which social media by which demographic groups and to he...

  4. Mobile Web Browsing Based On Content Preserving With Reduced Cost

    Directory of Open Access Journals (Sweden)

    Dr.N.Saravanaselvam

    2015-01-01

    Full Text Available The Internet has changed today's life drastically; in particular, web browsing has become pervasive on compact devices, tempting people to migrate their innovations and skills into this space. It is therefore necessary to concentrate on how web data are accessed and accounted for. Developed countries widely use flat-rate pricing, which is independent of data usage, whereas developing countries still follow the "pay as you use" model, which leads to high usage bills. In an effort to resolve this problem, we propose a cost-effective technique that reduces data consumption in mobile web browsing and hence reduces bills under usage-based pricing. The key idea of our approach is to leverage the user's data plan to compute a cost quota for each web request, and to use a network middle-box to automatically adapt any web page to that quota. We use a simple but effective content adaptation technique that decides which image or data best fits the mobile display at low cost and high resolution. The approach also draws on data mining to extract the requested and required data; the mined data are filtered by the content adaptation technique and fitted to the display effectively. A noticeable feature of this concept is that only the important web contents requested by the user are exhibited. A feedback process retrieves the required data alone and further improves the best-fit resolution. With the proposed system, mobile web browsing becomes cheaper and provides a sound basis for future work in the field of mobile browsing.
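
    A minimal sketch of the quota idea. The abstract does not specify the quota rule or the adaptation policy, so both below are hypothetical: the budget is split evenly across expected requests, and page elements are kept greedily (smallest first) while they fit; a real middle-box would also rank elements by importance.

```python
def cost_quota(remaining_budget_mb, expected_requests):
    """Hypothetical quota rule: split the user's remaining data budget
    evenly across the requests expected in the billing period."""
    return remaining_budget_mb / expected_requests

def adapt_page(elements, quota_mb):
    """Greedy content adaptation: keep as many (name, size_mb) elements
    as fit within the per-request quota, smallest first."""
    chosen, used = [], 0.0
    for name, size in sorted(elements, key=lambda e: e[1]):
        if used + size <= quota_mb:
            chosen.append(name)
            used += size
    return chosen, used

quota = cost_quota(remaining_budget_mb=120, expected_requests=300)  # 0.4 MB/request
page = [("hero_image", 0.30), ("thumbnail", 0.05), ("ad_banner", 0.20)]
print(adapt_page(page, quota))  # the 0.30 MB hero image no longer fits
```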

  5. Content-based image database system for epilepsy.

    Science.gov (United States)

    Siadat, Mohammad-Reza; Soltanian-Zadeh, Hamid; Fotouhi, Farshad; Elisevich, Kost

    2005-09-01

    We have designed and implemented a human brain multi-modality database system with content-based image management, navigation and retrieval support for epilepsy. The system consists of several modules including a database backbone, brain structure identification and localization, segmentation, registration, visual feature extraction, clustering/classification and query modules. Our newly developed anatomical landmark localization and brain structure identification method facilitates navigation through image data and extracts useful information for the segmentation, registration and query modules. The database stores T1-, T2-weighted and FLAIR MRI and ictal/interictal SPECT modalities with associated clinical data. We confine the visual feature extractors within anatomical structures to support semantically rich content-based procedures. The proposed system serves as a research tool to evaluate a vast number of hypotheses regarding the condition, such as resection of a hippocampus with relatively small volume and high average signal intensity on FLAIR. Once the database is populated, data mining tools can discover partially invisible correlations between different modalities of data modeled in the database schema. The design and implementation aspects of the proposed system are the main focus of this paper.

  6. Content-based image hashing using wave atoms

    Institute of Scientific and Technical Information of China (English)

    Liu Fang; Leung Hon-Yin; Cheng Lee-Ming; Ji Xiao-Yong

    2012-01-01

    It is well known that robustness, fragility, and security are three important criteria of image hashing; however, how to build a system that strongly meets all three criteria is still a challenge. In this paper, a content-based image hashing scheme using wave atoms is proposed, which satisfies the above criteria. Compared with traditional transforms like the wavelet transform and the discrete cosine transform (DCT), the wave atom transform is adopted for its sparser expansion and better texture feature extraction, which yields better performance in both robustness and fragility. In addition, multi-frequency detection is presented to provide an application-defined trade-off. To ensure the security of the proposed approach and its resistance to a chosen-plaintext attack, a randomized pixel modulation based on the Rényi chaotic map is employed, combined with the nonlinear wave atom transform. The experimental results reveal that the proposed scheme is robust against content-preserving manipulations and has good discriminative capability against malicious tampering.
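
    The overall pipeline (key-dependent chaotic pixel modulation, then bit extraction, then Hamming-distance comparison) can be illustrated with a toy sketch. The logistic map below stands in for the Rényi map, and thresholding against the global mean is a simplification of the wave-atom feature stage; none of this reproduces the paper's actual construction.

```python
def logistic_sequence(seed, n, r=3.99):
    """Key-dependent chaotic sequence (logistic map as a stand-in
    for the Rényi map used in the paper)."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def image_hash(pixels, key=0.37):
    """Toy content hash: chaotically modulate the pixels, then threshold
    each value against the global mean to get one bit per pixel."""
    chaos = logistic_sequence(key, len(pixels))
    mod = [(p / 255.0 + c) % 1.0 for p, c in zip(pixels, chaos)]
    mean = sum(mod) / len(mod)
    return [1 if v >= mean else 0 for v in mod]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

img = [10, 200, 30, 250, 90, 60, 120, 180]
slightly_brighter = [min(255, p + 3) for p in img]   # content-preserving change
print(hamming(image_hash(img), image_hash(slightly_brighter)))
# 0 — the uniform brightness shift moves the values and the mean together
```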

  7. Content-based microarray search using differential expression profiles

    Directory of Open Access Journals (Sweden)

    Thathoo Rahul

    2010-12-01

    Full Text Available Abstract Background With the expansion of public repositories such as the Gene Expression Omnibus (GEO, we are rapidly cataloging cellular transcriptional responses to diverse experimental conditions. Methods that query these repositories based on gene expression content, rather than textual annotations, may enable more effective experiment retrieval as well as the discovery of novel associations between drugs, diseases, and other perturbations. Results We develop methods to retrieve gene expression experiments that differentially express the same transcriptional programs as a query experiment. Avoiding thresholds, we generate differential expression profiles that include a score for each gene measured in an experiment. We use existing and novel dimension reduction and correlation measures to rank relevant experiments in an entirely data-driven manner, allowing emergent features of the data to drive the results. A combination of matrix decomposition and p-weighted Pearson correlation proves the most suitable for comparing differential expression profiles. We apply this method to index all GEO DataSets, and demonstrate the utility of our approach by identifying pathways and conditions relevant to transcription factors Nanog and FoxO3. Conclusions Content-based gene expression search generates relevant hypotheses for biological inquiry. Experiments across platforms, tissue types, and protocols inform the analysis of new datasets.
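
    A weighted Pearson correlation between two differential-expression profiles can be sketched directly; the weighting scheme below (by score magnitude) is an illustrative stand-in for the paper's p-weighting, and the profiles are toy data.

```python
import math

def weighted_pearson(x, y, w):
    """Pearson correlation in which each gene's contribution is weighted,
    e.g. by the magnitude of its differential-expression score."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# two toy differential-expression profiles over five genes
a = [2.0, -1.0, 0.5, 0.0, -2.0]
b = [1.8, -0.9, 0.4, 0.1, -2.2]
w = [abs(v) for v in a]   # emphasize strong responders; gene 4 gets weight 0
print(round(weighted_pearson(a, b, w), 3))  # close to 1.0
```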

  8. Content patterns in topic-based overlapping communities.

    Science.gov (United States)

    Ríos, Sebastián A; Muñoz, Ricardo

    2014-01-01

    Understanding the underlying community structure is an important challenge in social network analysis. Most state-of-the-art algorithms only consider structural properties to detect disjoint subcommunities, neglecting the fact that people can belong to more than one community and ignoring the information contained in the posts users have made. To tackle this problem, we developed a novel methodology to detect overlapping subcommunities in online social networks and a method to analyze the content patterns of each subcommunity using topic models. This paper presents our main contribution, a hybrid algorithm which combines two different overlapping subcommunity detection approaches: the first considers the graph structure of the network (topology-based subcommunity detection) and the second takes the textual information of the network nodes into consideration (topic-based subcommunity detection). Additionally, we provide a method to analyze and compare the content generated. Tests on real-world virtual communities show that our algorithm outperforms other methods.

  9. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes the extraction scheme of global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing temporal information of videos. And it is more objective and consistent compared to other features such as color, texture, etc. Efficient motion feature extraction is an important step for content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are always proposed on the basis of known object region in the frames. In this paper, a whole picture of the motion information in the video shot has been achieved through analyzing motion of background and foreground respectively and automatically. 6-parameter affine model is utilized as the motion model of background motion, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. Then the center of object region is calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with MPEG-7 parametric motion and motion trajectory descriptors and valid similar measures are defined for the two descriptors. Experimental results indicate that our proposed scheme is reliable and efficient.
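
    Fitting the 6-parameter affine background model to point correspondences reduces to two independent linear least-squares problems, since the x' and y' equations share the same design matrix. A minimal sketch of that step only, without the robust outlier rejection a real global-motion estimator needs:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of the 6-parameter affine model
        x' = a*x + b*y + c,   y' = d*x + e*y + f
    from point correspondences."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.column_stack([src, np.ones(len(src))])       # design matrix [x y 1]
    px, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # a, b, c
    py, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # d, e, f
    return np.concatenate([px, py])

# synthetic background motion: pure translation by (5, -2)
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(x + 5, y - 2) for x, y in src]
a, b, c, d, e, f = estimate_affine(src, dst)
print(round(c, 6), round(f, 6))  # 5.0 -2.0
```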

  10. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    Directory of Open Access Journals (Sweden)

    Jaykrishna Joshi

    2015-02-01

    Full Text Available The need for efficient content-based image retrieval systems has increased enormously. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In the proposed system we investigate a method for describing the contents of images that characterizes them by global descriptor attributes, extracting global features to make the system more efficient: the color features color expectancy (mean), color variance and skewness, and the texture feature correlation.
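
    The three color attributes named above are the first three statistical moments of a channel's pixel values, which can be computed directly (the toy pixel list is illustrative):

```python
def color_moments(channel):
    """First three moments of one color channel: mean (expectancy),
    variance, and skewness."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    skew_num = sum((v - mean) ** 3 for v in channel) / n
    skew = skew_num / (var ** 1.5) if var else 0.0
    return mean, var, skew

pixels = [10, 10, 10, 10, 250]   # mostly dark with one bright outlier
mean, var, skew = color_moments(pixels)
print(round(mean, 1), round(skew, 2))  # 58.0 1.5 — positive skew: long right tail
```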

  11. Content Based Image Recognition by Information Fusion with Multiview Features

    Directory of Open Access Journals (Sweden)

    Rik Das

    2015-09-01

    Full Text Available Substantial research interest has been observed in the field of object recognition as a vital component for modern intelligent systems. Content based image classification and retrieval have been considered as two popular techniques for identifying the object of interest. Feature extraction has played the pivotal role towards successful implementation of the aforesaid techniques. The paper has presented two novel techniques of feature extraction from diverse image categories both in spatial domain and in frequency domain. The multi view features from the image categories were evaluated for classification and retrieval performances by means of a fusion based recognition architecture. The experimentation was carried out with four different popular public datasets. The proposed fusion framework has exhibited an average increase of 24.71% and 20.78% in precision rates for classification and retrieval respectively, when compared to state-of-the art techniques. The experimental findings were validated with a paired t test for statistical significance.

  12. Towards Better Retrievals in Content -Based Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Kumar Vaibhava

    2014-04-01

    Full Text Available This paper presents a Content-Based Image Retrieval (CBIR) system called DEICBIR-2. The system retrieves images similar to a given query image by searching in the provided image database. Standard MPEG-7 image descriptors are used to find the relevant images which are similar to the given query image. Direct use of the MPEG-7 descriptors for creating the image database and retrieval on the basis of nearest neighbor does not yield accurate retrievals. To further improve the retrieval results, B-splines are used for ensuring smooth and continuous edges of the images in the edge-based descriptors. Relevance feedback is also implemented with user intervention. These additional features improve the retrieval performance of DEICBIR-2 significantly. Computational performance on a set of query images is presented, and the performance of the proposed system is much superior to that of DEICBIR[9] on the same database and the same set of query images.

  13. Novel Approach to Content Based Image Retrieval Using Evolutionary Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-08-01

    Full Text Available Content Based Image Retrieval (CBIR) is an active research area in the multimedia domain in this era of information technology. One of the challenges of CBIR is to bridge the gap between low-level features and high-level semantics. In this study we investigate Particle Swarm Optimization (PSO), a stochastic algorithm, and the Genetic Algorithm (GA) for CBIR to overcome this drawback. We propose a new CBIR system based on PSO and GA coupled with a Support Vector Machine (SVM). GA and PSO are both evolutionary algorithms and are used here to increase the number of relevant images, while the SVM performs the final classification. To check the performance of the proposed technique, extensive experiments are performed using the Corel dataset. The proposed technique achieves higher accuracy compared to previously introduced techniques (FEI, FIRM, SIMPLIcity, simple HIST and WH).

  14. Permutation-based Homogeneous Block Content Authentication for Watermarking

    Directory of Open Access Journals (Sweden)

    S.Maruthuperumal

    2013-02-01

    Full Text Available In modern days, digital watermarking has become a popular technique for hiding data in digital images to help guard against copyright infringement. The proposed Permutation-based Homogeneous Block Content authentication (PHBC) method develops a secure and robust watermarking algorithm that combines the advantages of permutation-based homogeneous blocks (PHB) with those of significant and insignificant bit values, using an XOR encryption function with the max coefficient of the least coordinate value for embedding the watermark. The proposed system uses the relationships between permutation blocks to embed a large amount of data into homogeneous blocks without causing serious distortion to the watermarked image. The experimental results show that the proposed system is very efficient in achieving perceptual invisibility with an increase in the Peak Signal to Noise Ratio (PSNR). Moreover, the proposed system is robust to a variety of signal processing operations, such as image cropping, rotation, resizing, added noise, filtering, blurring and motion blurring.

  15. Content-based image retrieval with ontological ranking

    Science.gov (United States)

    Tsai, Shen-Fu; Tsai, Min-Hsuan; Huang, Thomas S.

    2010-02-01

    Images are a much more powerful medium of expression than text; as the adage says, "One picture is worth a thousand words." Compared with text, which consists of an array of words, an image has more degrees of freedom and therefore a more complicated structure. However, this less constrained structure presents researchers in the computer vision community with the tough task of teaching machines to understand and organize images, especially when a limited number of learning examples and little background knowledge are given. The advance of internet and web technology in the past decade has changed the way humans gain knowledge: people can exchange knowledge with others by discussing and contributing information on the web. As a result, web pages have become a living and growing source of information. One is therefore tempted to wonder whether machines can learn from this web knowledge base as well. Indeed, it is possible to make computers learn from the internet and provide humans with more meaningful knowledge. In this work, we explore this novel possibility of image understanding applied to semantic image search. We exploit web resources to obtain links from images to keywords and a semantic ontology constituting humans' general knowledge. The former maps visual content to related text, in contrast to the traditional way of associating images with surrounding text; the latter provides relations between concepts, so that machines can understand to what extent and in what sense an image is close to the image search query. With the aid of these two tools, the resulting image search system is content-based and, moreover, organized: the returned images are ranked and organized such that semantically similar images are grouped together and given a rank based on their semantic closeness to the input query. The novelty of the system is twofold: first, images are retrieved not only based on text cues but on their actual contents as well; second, the grouping

  16. Content-based, Task-based, and Participatory Approaches (I)

    Institute of Scientific and Technical Information of China (English)

    Diane Larsen-Freeman

    2012-01-01

    1. Introduction In this chapter we will be investigating three more approaches that make communication central: content-based instruction, task-based instruction, and the participatory approach. The approaches we examine in this chapter do not begin with functions, or indeed, any other language items. Instead, they give priority to process over predetermined linguistic content. In these approaches, rather than "learning to use English",

  17. Multimedia Content Based Image Retrieval Iii: Local Tetra Pattern

    Directory of Open Access Journals (Sweden)

    Nagaraja G S

    2014-06-01

    Full Text Available Content Based Image Retrieval methods face several challenges in the presentation of results and in precision levels across specific applications. To improve performance and address these problems, a novel algorithm, Local Tetra Pattern (LTrP), is proposed, which is coded in four directions instead of the two directions used in the Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Ternary Pattern (LTP). To retrieve images, the surrounding neighbor pixel value is calculated by gray-level difference, which gives the relation between the various multisorting algorithms using LBP, LDP, LTP and LTrP for sorting the images. This method mainly uses low-level features such as color, texture and shape layout for image retrieval.

  18. Content-Based Publish/Subscribe System for Web Syndication

    Institute of Scientific and Technical Information of China (English)

    Zeinab Hmedeh; Harry Kourdounakis; Vassilis Christophides; Cedric du Mouza; Michel Scholl; Nicolas Travers

    2016-01-01

    Content syndication has become a popular way for timely delivery of frequently updated information on the Web. Today, web syndication technologies such as RSS or Atom are used in a wide variety of applications spreading from large-scale news broadcasting to medium-scale information sharing in scientific and professional communities. However, they exhibit serious limitations for dealing with information overload in Web 2.0. There is a vital need for efficient real-time filtering methods across feeds, to allow users to effectively follow personally interesting information. We investigate in this paper three indexing techniques for users’ subscriptions based on inverted lists or on an ordered trie for exact and partial matching. We present analytical models for memory requirements and matching time and we conduct a thorough experimental evaluation to exhibit the impact of critical parameters of realistic web syndication workloads.
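
    The count-based inverted-list technique for conjunctive keyword subscriptions can be sketched in a few lines (class and method names are illustrative; the paper's trie-based variants and partial matching are not shown): a subscription fires when the number of its terms found in the incoming item reaches the subscription's arity.

```python
from collections import defaultdict

class SubscriptionIndex:
    """Inverted lists from term to subscription id, with per-subscription
    arity counts for conjunctive (all-terms) matching."""
    def __init__(self):
        self.posting = defaultdict(list)   # term -> [sub_id, ...]
        self.arity = {}                    # sub_id -> number of distinct terms

    def subscribe(self, sub_id, terms):
        self.arity[sub_id] = len(set(terms))
        for t in set(terms):
            self.posting[t].append(sub_id)

    def match(self, item_terms):
        hits = defaultdict(int)
        for t in set(item_terms):
            for sub_id in self.posting.get(t, ()):
                hits[sub_id] += 1
        return {s for s, c in hits.items() if c == self.arity[s]}

idx = SubscriptionIndex()
idx.subscribe("s1", ["election", "results"])
idx.subscribe("s2", ["football"])
print(idx.match(["election", "results", "turnout"]))  # {'s1'}
```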

  19. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available The consumer behavior has been observed to be largely influenced by image data with increasing familiarity of smart phones and World Wide Web. Traditional technique of browsing through product varieties in the Internet with text keywords has been gradually replaced by the easy accessible image data. The importance of image data has portrayed a steady growth in application orientation for business domain with the advent of different image capturing devices and social media. The paper has described a methodology of feature extraction by image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely, Wang dataset and Oliva and Torralba (OT-Scene dataset with 3688 images on the whole. It has outclassed the state-of-the-art techniques in performance measure and has shown statistical significance.

  20. Weighted feature fusion for content-based image retrieval

    Science.gov (United States)

    Soysal, Omurhan A.; Sumer, Emre

    2016-07-01

    Feature descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) are among the most commonly used solutions for content-based image retrieval problems. In this paper, a novel approach called "Weighted Feature Fusion" is proposed as a generic solution, instead of applying problem-specific descriptors alone. Experiments were performed on two basic INRIA datasets in order to improve the precision of the retrieval results. In cases where the descriptors were used alone, the proposed approach yielded 10-30% more accurate results than ORB alone. It also yielded 9-22% and 12-29% fewer false positives compared to SIFT alone and SURF alone, respectively.
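
    The fusion idea can be illustrated with simple score-level fusion: since SIFT, SURF and ORB distances live on different scales, each descriptor's distances to the query are normalized before the weighted combination. The weights and distance values below are hypothetical, and the paper's actual fusion rule may differ.

```python
def fuse_rankings(distances, weights):
    """Score-level fusion: min-max normalize each descriptor's distances
    to the query, then combine them with per-descriptor weights."""
    fused = {}
    for name, per_image in distances.items():
        lo, hi = min(per_image.values()), max(per_image.values())
        span = (hi - lo) or 1.0
        for img, d in per_image.items():
            fused[img] = fused.get(img, 0.0) + weights[name] * (d - lo) / span
    return sorted(fused, key=fused.get)     # best (smallest fused distance) first

# hypothetical per-descriptor distances of three database images to one query
distances = {
    "sift": {"a": 0.2, "b": 0.9, "c": 0.5},
    "surf": {"a": 0.3, "b": 0.8, "c": 0.4},
    "orb":  {"a": 40,  "b": 10,  "c": 25},   # different scale — hence normalization
}
print(fuse_rankings(distances, {"sift": 0.4, "surf": 0.4, "orb": 0.2}))
# ['a', 'c', 'b'] — the two heavier descriptors outvote ORB
```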

  1. Web Pages Content Analysis Using Browser-Based Volunteer Computing

    Directory of Open Access Journals (Sweden)

    Wojciech Turek

    2013-01-01

    Full Text Available Existing solutions to the problem of finding valuable information on the Web suffer from several limitations, such as simplified query languages, out-of-date information, or arbitrary sorting of results. In this paper a different approach to this problem is described. It is based on the idea of distributed processing of Web pages content. To provide sufficient performance, the idea of browser-based volunteer computing is utilized, which requires the implementation of text processing algorithms in JavaScript. In this paper the architecture of the Web pages content analysis system is presented, details concerning the implementation of the system and the text processing algorithms are described, and test results are provided.

  2. Approach to extracting hot topics based on network traffic content

    Institute of Scientific and Technical Information of China (English)

    Yadong ZHOU; Xiaohong GUAN; Qindong SUN; Wei LI; Jing TAO

    2009-01-01

    This article presents the formal definition and description of popular topics on the Internet, analyzes the relationship between popular words and topics, and finally introduces a method that uses statistics and correlation of the popular words in traffic content and network flow characteristics as input for extracting popular topics on the Internet. Based on this, this article adapts a clustering algorithm to extract popular topics and gives formalized results. The test results show that this method has an accuracy of 16.7% in extracting popular topics on the Internet. Compared with web mining and topic detection and tracking (TDT), it can provide a more suitable data source for effective recovery of Internet public opinions.

  3. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

    Full Text Available Content-based image retrieval (CBIR) technology benefits not only the management of large image collections, but also clinical care, biomedical research, and education. Digital images from X-ray, MRI and CT examinations are used for diagnosing and planning treatment schedules, so visual information management is challenging given the huge quantity of available data. Currently, utilization of the available medical databases is limited by image retrieval issues. Retrieval of archived digital medical images remains challenging and continues to be researched, as images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.
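
    A Type-I Discrete Sine Transform and a truncated-coefficient feature can be written directly from the transform's definition. The 1-D setting, the truncation length, and the L2 comparison are illustrative simplifications of what a full scheme would do on 2-D images.

```python
import math

def dst(signal):
    """Type-I Discrete Sine Transform of a 1-D signal (e.g. a row of
    pixel intensities)."""
    n = len(signal)
    return [sum(x * math.sin(math.pi * (i + 1) * (k + 1) / (n + 1))
                for i, x in enumerate(signal))
            for k in range(n)]

def feature(signal, keep=4):
    """Keep only the lowest-frequency coefficients as the retrieval feature."""
    return dst(signal)[:keep]

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

row   = [10, 20, 40, 80, 40, 20, 10, 5]
noisy = [v + 1 for v in row]             # mildly perturbed copy of the same content
other = [80, 5, 80, 5, 80, 5, 80, 5]     # very different content
print(l2(feature(row), feature(noisy)) < l2(feature(row), feature(other)))  # True
```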

  4. Incorporating Semantics into Data Driven Workflows for Content Based Analysis

    Science.gov (United States)

    Argüello, M.; Fernandez-Prieto, M. J.

    Finding meaningful associations between text elements and knowledge structures within clinical narratives in a highly verbal domain, such as psychiatry, is a challenging goal. The research presented here uses a small corpus of case histories and brings into play pre-existing knowledge, and therefore, complements other approaches that use large corpus (millions of words) and no pre-existing knowledge. The paper describes a variety of experiments for content-based analysis: Linguistic Analysis using NLP-oriented approaches, Sentiment Analysis, and Semantically Meaningful Analysis. Although it is not standard practice, the paper advocates providing automatic support to annotate the functionality as well as the data for each experiment by performing semantic annotation that uses OWL and OWL-S. Lessons learnt can be transmitted to legacy clinical databases facing the conversion of clinical narratives according to prominent Electronic Health Records standards.

  5. Efficient content-based P2P image retrieval using peer content descriptions

    Science.gov (United States)

    Muller, Wolfgang T.; Eisenhardt, Martin; Henrich, Andreas

    2003-12-01

    Peer-to-peer (P2P) networks are overlay networks that connect independent computers (also called nodes or peers). In contrast to client/server solutions, all nodes offer and request services from other peers in a P2P network. P2P networks are very attractive in that they harness the computing power of many common desktop machines and necessitate little administrative overhead. While the resulting computing power is impressive, efficiently looking up data is still the major challenge in P2P networks. Current work comprises fast lookup of one-dimensional values (Distributed Hash Tables, DHT) and retrieval of texts using few keywords. However, the lookup of multimedia data in P2P networks is still addressed by very few groups. In this paper, we present experiments with efficient Content Based Image Retrieval in a P2P environment, thus a P2P-CBIR system. The challenge in such systems is to limit the number of messages sent, and to maximize the usefulness of each peer contacted in the query process. We achieve this by distributing peer data summaries over the network. Obviously, the data summaries have to be compact in order to limit the communication overhead. We propose a CBIR scheme based on a compact peer data summary. This peer data summary relies on cluster frequencies. To obtain the compact representation of a peer's collection, a global clustering of the data is efficiently calculated in a distributed manner. After that, each peer publishes how many of its images fall into each cluster. These cluster frequencies are then used by the querying peer to contact only those peers that have the largest number of images present in the cluster given by the query. In our paper we further detail the various challenges that have to be met by the designers of such a P2P-CBIR system, and we present experiments with varying degrees of data replication (duplicates of images), as well as quality of clustering within the network.
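
    The cluster-frequency summary can be sketched in a few lines: each peer publishes a histogram of its images over the global clusters, and the querying peer contacts only the peers with the highest count in the query's cluster. One-dimensional feature values and given centroids keep the sketch short; the paper computes the clustering itself in a distributed manner.

```python
def peer_summaries(collections, centroids):
    """Each peer publishes how many of its images fall into each
    global cluster (nearest-centroid assignment on 1-D features)."""
    def nearest(v):
        return min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))
    summaries = {}
    for peer, feats in collections.items():
        freq = [0] * len(centroids)
        for v in feats:
            freq[nearest(v)] += 1
        summaries[peer] = freq
    return summaries

def peers_to_contact(summaries, query_cluster, k=1):
    """The querying peer contacts only the k peers with the most
    images in the query's cluster."""
    ranked = sorted(summaries, key=lambda p: summaries[p][query_cluster],
                    reverse=True)
    return ranked[:k]

centroids = [0.1, 0.5, 0.9]
collections = {
    "peer_a": [0.09, 0.12, 0.11, 0.52],
    "peer_b": [0.88, 0.91, 0.49],
    "peer_c": [0.51, 0.48, 0.53],
}
print(peers_to_contact(peer_summaries(collections, centroids), query_cluster=1))
# ['peer_c'] — it holds the most images near centroid 0.5
```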

  6. AN EFFICIENT CONTENT AND SEGMENTATION BASED VIDEO COPY DETECTION

    Directory of Open Access Journals (Sweden)

    N. Kalaiselvi

    2015-10-01

    Full Text Available The field of multimedia technology has made it easier to store, create and access large amounts of video data, but editing and duplication of video data can lead to violations of digital rights. In this work we implement an efficient content- and segmentation-based video copy detection scheme to detect illegal manipulation of video. Instead of SIFT matching alone, a combination of the SIFT and SURF matching algorithms is used to detect matching features in images: SIFT is slow and not robust to illumination changes, although it is invariant to rotation, scale changes and affine transformations, while SURF is fast and performs well but is not stable under rotation and affine transformations. The two algorithms are therefore combined to extract image features. An auto dual-threshold method is used to segment the video into segments, extract key frames from each segment, and eliminate redundant frames. SIFT and SURF features based on SVD are used to compare the feature point sets of two frames, where the features are extracted from the key frames of the segments. A graph-based video sequence matching method matches the sequences of the query video and the reference video, skillfully converting the video sequence matching result into a matching result graph.
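
    A dual-threshold pass over inter-frame differences can be sketched as follows. The fixed thresholds and the difference values are hypothetical; the "auto" variant in the paper derives the thresholds from the video itself.

```python
def segment_video(frame_diffs, low=10.0, high=40.0):
    """Dual-threshold sketch: a difference above `high` starts a new
    segment; a difference below `low` marks the frame as redundant
    (near-duplicate of its predecessor) so it can be dropped."""
    boundaries, redundant = [0], []
    for i, d in enumerate(frame_diffs, start=1):
        if d > high:
            boundaries.append(i)      # new segment begins at frame i
        elif d < low:
            redundant.append(i)       # frame i adds almost nothing
    return boundaries, redundant

# inter-frame difference magnitudes for a short 9-frame clip
diffs = [5, 8, 55, 12, 3, 60, 7, 20]
print(segment_video(diffs))  # ([0, 3, 6], [1, 2, 5, 7])
```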

  7. Natural ingredients based cosmetics. Content of selected fragrance sensitizers.

    Science.gov (United States)

    Rastogi, S C; Johansen, J D; Menné, T

    1996-06-01

    In the present study, we have investigated 42 cosmetic products based on natural ingredients for content of 11 fragrance substances: geraniol, hydroxycitronellal, eugenol, isoeugenol, cinnamic aldehyde, cinnamic alcohol, alpha-amylcinnamic aldehyde, citral, coumarin, dihydrocoumarin and alpha-hexylcinnamic aldehyde. The study revealed that 91% (20/22) of the natural ingredients based perfumes contained 0.027%-7.706% of 1 to 7 of the target fragrances. Between 1 and 5 of the chemically defined synthetic constituents of fragrance mix were found in 82% (18/22) of the perfumes. 35% (7/20) of the other cosmetic products (shampoos, creams, tonics, etc.) were found to contain 0.0003-0.0820% of 1 to 3 of the target fragrances. Relatively high concentrations of hydroxycitronellal, coumarin, cinnamic alcohol and alpha-amylcinnamic aldehyde were found in some of the investigated products. The detection of hydroxycitronellal and alpha-hexylcinnamic aldehyde in some of the products demonstrates that artificial fragrances, i.e., compounds not yet regarded as natural substances, may be present in products claimed to be based on natural ingredients.

  8. Content Based Image Retrieval Using Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    K. Harshini

    2012-10-01

    Full Text Available A computer application that automatically identifies or verifies a person from a digital image or a video frame can do so by comparing selected facial features from the image with a facial database. Content-based image retrieval (CBIR) is a technique for retrieving images on the basis of automatically derived features. This paper focuses on a low-dimensional feature-based indexing technique for achieving efficient and effective retrieval performance. An appearance-based face recognition method using singular value decomposition (SVD) is proposed. It differs from principal component analysis (PCA), which considers only the Euclidean structure of face space and therefore classifies poorly under large facial variations such as expression, lighting, and occlusion, because the gray-value matrices it manipulates are very sensitive to these variations. Every image matrix has a singular value decomposition and can be regarded as a composition of a set of base images generated by the SVD, and we further point out that these base images are sensitive to the composition of the face image. Our experimental results show that SVD provides a better representation and achieves lower error rates in face recognition, although it degrades performance evaluation. To overcome this, we conducted experiments with a controlling parameter α ranging from 0 to 1, and achieved the best results at α = 0.4 compared with the other values of α. Key words: singular value decomposition (SVD), Euclidean distance, original gray value matrix (OGVM).
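    A minimal sketch of an SVD-based image signature follows. The abstract does not specify the exact role of the controlling parameter α, so applying it as an exponent on the singular values is an assumption, as is the top-k truncation:

```python
import numpy as np

def svd_signature(img, alpha=0.4, k=8):
    """Image signature from the top-k singular values, weighted by a
    controlling parameter alpha in [0, 1] (the exact role of alpha in the
    paper is an assumption here)."""
    s = np.linalg.svd(img.astype(float), compute_uv=False)[:k]
    return s ** alpha

def svd_distance(img_a, img_b, alpha=0.4, k=8):
    """Euclidean distance between SVD signatures of two images."""
    return float(np.linalg.norm(svd_signature(img_a, alpha, k)
                                - svd_signature(img_b, alpha, k)))
```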

  9. Content-Based Image Retrieval for Semiconductor Process Characterization

    Directory of Open Access Journals (Sweden)

    Kenneth W. Tobin

    2002-07-01

    Full Text Available Image data management in the semiconductor manufacturing environment is becoming more problematic as the size of silicon wafers continues to increase, while the dimension of critical features continues to shrink. Fabricators rely on a growing host of image-generating inspection tools to monitor complex device manufacturing processes. These inspection tools include optical and laser scattering microscopy, confocal microscopy, scanning electron microscopy, and atomic force microscopy. The number of images being generated is on the order of 20,000 to 30,000 each week in some fabrication facilities today. Manufacturers currently maintain on the order of 500,000 images in their data management systems for extended periods of time. Gleaning the historical value from these large image repositories for yield improvement is difficult to accomplish using the standard database methods currently associated with these data sets (e.g., performing queries based on time and date, lot numbers, wafer identification numbers, etc.). Researchers at the Oak Ridge National Laboratory have developed and tested a content-based image retrieval technology that is specific to manufacturing environments. In this paper, we describe the feature representation of semiconductor defect images along with methods of indexing and retrieval, and results from initial field-testing in the semiconductor manufacturing environment.

  10. Content-Based Object Movie Retrieval and Relevance Feedbacks

    Directory of Open Access Journals (Sweden)

    Lee Greg C

    2007-01-01

    Full Text Available Object movie refers to a set of images captured from different perspectives around a 3D object. Object movie provides a good representation of a physical object because it can provide 3D interactive viewing effect, but does not require 3D model reconstruction. In this paper, we propose an efficient approach for content-based object movie retrieval. In order to retrieve the desired object movie from the database, we first map an object movie into the sampling of a manifold in the feature space. Two different layers of feature descriptors, dense and condensed, are designed to sample the manifold for representing object movies. Based on these descriptors, we define the dissimilarity measure between the query and the target in the object movie database. The query we considered can be either an entire object movie or simply a subset of views. We further design a relevance feedback approach to improving retrieved results. Finally, some experimental results are presented to show the efficacy of our approach.

  11. A transport protocol for best-effort content-based networks

    OpenAIRE

    Malekpour, Amirhossein; Carzaniga, Antonio ; Pedone, Fernando

    2013-01-01

    Content-based publish/subscribe (or simply content-based) networking is a relatively new communication paradigm compared to IP networking, with a different approach to addressing network hosts. In content-based networking, addressing as well as information dissemination centers around information and interests. A host's address is represented by its interest, and information is routed by a network of brokers to hosts with relevant interests. We advocate the idea that content-based network...

  12. Implementing a Web-based content management system.

    Science.gov (United States)

    Pomeroy, Brian; Crawford, Evan; Sinisi, Albert

    2003-01-01

    The Children's Hospital of Philadelphia transformed its web site to enhance patient satisfaction and attract new patients, as well as meet the needs of clinicians and the hospital's business plans. They accomplished this by implementing a content management system that would allow content creation and updating to be delegated to appropriate department staffers, thereby eliminating bottlenecks and unnecessary steps and ensuring that the web site receives fresh content much more quickly.

  13. Socioscientific Issues-Based Instruction: An Investigation of Agriscience Students' Content Knowledge Based on Student Variables

    Science.gov (United States)

    Shoulders, Catherine W.; Myers, Brian E.

    2013-01-01

    Numerous researchers in science education have reported student improvement in areas of scientific literacy resulting from socioscientific issues (SSI)-based instruction. The purpose of this study was to describe student agriscience content knowledge following a six-week SSI-based instructional unit focusing on the introduction of cultured meat…

  14. Content-based versus semantic-based retrieval: an LIDC case study

    Science.gov (United States)

    Jabon, Sarah A.; Raicu, Daniela S.; Furst, Jacob D.

    2009-02-01

    Content based image retrieval is an active area of medical imaging research. One use of content based image retrieval (CBIR) is presentation of known, reference images similar to an unknown case. These comparison images may reduce the radiologist's uncertainty in interpreting that case. It is, therefore, important to present radiologists with systems whose computed-similarity results correspond to human perceived-similarity. In our previous work, we developed an open-source CBIR system that inputs a computed tomography (CT) image of a lung nodule as a query and retrieves similar lung nodule images based on content-based image features. In this paper, we extend our previous work by studying the relationships between the two types of retrieval, content-based and semantic-based, with the final goal of integrating them into a system that will take advantage of both retrieval approaches. Our preliminary results on the Lung Image Database Consortium (LIDC) dataset using four types of image features, seven radiologists' rated semantic characteristics and two simple similarity measures show that a substantial number of nodules identified as similar based on image features are also identified as similar based on semantic characteristics. Furthermore, by integrating the two types of features, the similarity retrieval improves with respect to certain nodule characteristics.

  15. Content-based, Task-based, and Participatory Approaches (II)

    Institute of Scientific and Technical Information of China (English)

    Diane Larsen-Freeman

    2012-01-01

    4. Participatory Approach Although it originated in the early sixties with the work of Paulo Freire, and therefore antedates modern versions of content-based and task-based approaches, it was not until the 1980s that the participatory approach started being widely discussed in the language teaching literature.

  16. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low-level features, such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining information from the image features color, shape and texture. The color feature is extracted using color histograms for image blocks, the Canny edge detection algorithm is used for the shape feature, and block-wise HSB extraction is used for texture feature extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that the fusion of multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to CBIR in which the image features color, shape and texture are used.
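    The multi-feature combination can be sketched as follows. Gray-level histograms stand in for the per-block color histograms, a mean gradient magnitude stands in for the Canny-based shape feature, and the feature weights are illustrative assumptions:

```python
import numpy as np

def block_color_hist(img, blocks=2, bins=4):
    """Per-block gray-level histogram (a stand-in for the paper's per-block
    color histograms; block count and bin count are illustrative)."""
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(blk, bins=bins, range=(0, 256))
            feats.append(hist / blk.size)
    return np.concatenate(feats)

def edge_density(img):
    """Crude edge feature: mean gradient magnitude, a simple proxy for the
    Canny-based shape feature of the paper."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def combined_distance(a, b, w_color=0.7, w_edge=0.3):
    """Weighted fusion of the per-feature distances (weights are assumed)."""
    dc = np.linalg.norm(block_color_hist(a) - block_color_hist(b))
    de = abs(edge_density(a) - edge_density(b))
    return w_color * dc + w_edge * de
```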

  17. Content-Based Human Motion Retrieval with Scene Description Language

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    As commercial motion capture systems are widely used, more and more 3D motion libraries become available, reinforcing the demand for efficient indexing and retrieval methods. Usually, the user will only have a sketchy idea of which kind of motion to look for in the motion database. As a result, how to clearly describe the user's demands is a bottleneck for motion retrieval systems. This paper presents a framework that handles this problem effectively for motion retrieval. The content-based retrieval system supports two kinds of query modes: textual query mode and query-by-example mode. In both query modes, the user's input is first translated into a scene description language, which can be processed by the system efficiently. By using various kinds of qualitative features and adaptive segmentation of the motion capture data stream, indexing and retrieval are carried out at the segment level rather than at the frame level, making them quite efficient. Some experimental examples are given to demonstrate the effectiveness and efficiency of the proposed algorithms.

  18. Content-Based Analysis of Bumper Stickers in Jordan

    Directory of Open Access Journals (Sweden)

    Abdullah A. Jaradat

    2016-12-01

    Full Text Available This study has set out to investigate bumper stickers in Jordan, focusing mainly on the themes of the stickers. The study hypothesized that bumper stickers in Jordan reflect a wide range of topics, including social, economic, and political ones. As the first study of this phenomenon, it adopted content-based analysis to determine the basic topics. The study has found that the purpose of most bumper stickers is fun and humor; most of them are not serious and do not carry any biting messages. They do not present any criticism of the most dominant problems at the level of society, including racism, nepotism, anti-feminism, inflation, high taxes, and refugees. Another finding is that politics is still a taboo; no political bumper stickers were found in Jordan. Finally, the themes the stickers targeted are: lessons of life 28.85%; challenging or warning other drivers 16%; funny notes about social issues 12%; religious sayings 8%; treating the car as a female 7%; the low economic status of the driver 6%; love and treachery 5.5%; the prestigious status of the car 5%; envy 4%; nicknames for the car or the driver 4%; irony 3%; and English sayings 1.5%. Keywords: bumper stickers, themes, politics

  19. Soil Moisture Content Monitoring Based on ERS Wind Scatterometer Data

    Institute of Scientific and Technical Information of China (English)

    WANG Jian-ming; SHI Jian-cheng; SHAO Yun; LIU Wei

    2005-01-01

    The ERS-1/2 wind scatterometer (WSC) has a low resolution cell of about 50 km but provides a high repetition rate (<4 d) and can make measurements at multiple incidence angles. In order to estimate effective surface reflectivity (related to soil moisture content) over bare soil using WSC data, an original methodology based on the advanced integral equation model (AIEM) is presented, which takes advantage of its multiple view angular characteristics. This method includes two steps. First, a simplified two-parameter surface scattering model is calibrated against an AIEM-simulated database covering a wide parameter space. Second, regression analyses are carried out on the simulated database to build the relation between the parameters of our model at the different incidence angles of the two observations from the Mid and Fore beams. On the model-simulated database, our technique works quite well in estimating Γ0. The possibility of applying the model to retrieve soil moisture is investigated using a set of data collected during the 1998 Intensive Observation Period field campaign of the Asian Monsoon Experiment Tibet (GAME-Tibet). The retrieved values obtained for the bare land surface are consistent with ground measurements collected in these areas, and the correlation coefficient between retrieved and measured soil moisture reaches 0.65.

  20. Automatic indexing of news video for content-based retrieval

    Science.gov (United States)

    Yang, Myung-Sup; Yoo, Cheol-Jung; Chang, Ok-Bae

    1998-06-01

    Since it is impossible to automatically parse a general video, we investigated an integrated solution for content-based news video indexing and retrieval. A specifically structured video such as news can be parsed because it exhibits both temporal and spatial regularities: news events with an anchor-person appear repeatedly, and a news icon and a caption appear in certain frames. To extract key frames automatically using this structured knowledge of news, the model used in this paper consists of a news event segmentation module, a caption recognition module, and a search browser module. The three main modules presented in this paper are: (1) the news event segmentation module (NESM) for recognizing and dividing anchor-person shots; (2) the caption recognition module (CRM) for detecting caption frames in a news event, extracting their caption regions with a split-merge method, and recognizing the regions as text with OCR software; and (3) the search browser module (SBM) for displaying the list of news events and the news captions included in a selected news event. The SBM can support various search mechanisms.

  1. Copyright Protection for Modifiable Digital Content Based on Distributed Environment

    Science.gov (United States)

    Park, Heejae; Kim, Jong

    Today, users themselves are becoming subjects of content creation. The popularity of blogs, wikis, and UCC shows that users want to participate in creating and modifying digital content. Users who participate in composing content also want to hold copyrights on the parts they modify. Thus, a copyright protection system is required for content which can be modified by multiple users. However, conventional DRM (Digital Rights Management) systems like OMA DRM are not suitable for modifiable content because they do not support content created and modified by different users. Therefore, in this paper, we propose a new copyright protection system which allows each modifier of content created and modified by multiple users to hold one's own copyright. We propose data formats and protocols, and analyze the proposed system in terms of correctness and security. Performance evaluation shows that the response time of the proposed system is 2 to 18 times shorter than that of other comparable schemes.

  2. Teaching Concepts of Natural Sciences to Foreigners through Content-Based Instruction: The Adjunct Model

    Science.gov (United States)

    Satilmis, Yilmaz; Yakup, Doganay; Selim, Guvercin; Aybarsha, Islam

    2015-01-01

    This study investigates three models of content-based instruction in teaching concepts and terms of natural sciences in order to increase the efficiency of teaching these kinds of concepts in realization and to prove that the content-based instruction is a teaching strategy that helps students understand concepts of natural sciences. Content-based…

  3. Development of Content Management System-based Web Applications

    NARCIS (Netherlands)

    Souer, J.

    2012-01-01

    Web engineering is the application of systematic and quantifiable approaches (concepts, methods, techniques, tools) to cost-effective requirements analysis, design, implementation, testing, operation, and maintenance of high quality web applications. Over the past years, Content Management Systems (

  4. A personalized web page content filtering model based on segmentation

    CERN Document Server

    Kuppusamy, K S; 10.5121/ijist.2012.2104

    2012-01-01

    In view of the massive content explosion on the World Wide Web through diverse sources, content filtering tools have become mandatory. Filtering the contents of web pages holds particular significance when pages are accessed by minors. Traditional web page blocking systems follow a Boolean methodology of either displaying the full page or blocking it completely. With the increased dynamism of web pages, it has become common for different portions of a web page to hold different types of content at different time instances. This paper proposes a model to block contents at a fine-grained level, i.e., instead of completely blocking a page, it blocks only those segments which hold the contents to be blocked. The advantages of this method over traditional methods are the fine-grained level of blocking and the automatic identification of the portions of the page to be blocked. The experiments conducted on the proposed model indicate 88% of accuracy in filter...
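    Segment-level blocking, as opposed to all-or-nothing page blocking, can be sketched in a few lines. The term-matching rule, the blocked-terms set, and the placeholder text are illustrative; the paper's actual segment classifier is more elaborate:

```python
def filter_page(segments, blocked_terms):
    """Segment-level filtering sketch: instead of blocking the whole page,
    replace only the segments that contain blocked terms. The substring
    match and placeholder string are illustrative assumptions."""
    out = []
    for seg in segments:
        if any(term in seg.lower() for term in blocked_terms):
            out.append("[segment blocked]")   # block only the offending segment
        else:
            out.append(seg)                   # the rest of the page stays visible
    return out
```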

  5. Content-Based Filtering for Video Sharing Social Networks

    CERN Document Server

    Valle, Eduardo; Luz, Antonio da; de Souza, Fillipe; Coelho, Marcelo; Araújo, Arnaldo

    2011-01-01

    In this paper we compare the use of several features in the task of content filtering for video social networks, a very challenging task, not only because the unwanted content is related to very high-level semantic concepts (e.g., pornography, violence, etc.) but also because videos from social networks are extremely assorted, preventing the use of constrained a priori information. We propose a simple method, able to combine diverse evidence, coming from different features and various video elements (entire video, shots, frames, keyframes, etc.). We evaluate our method in three social network applications, related to the detection of unwanted content - pornographic videos, violent videos, and videos posted to artificially manipulate popularity scores. Using challenging test databases, we show that this simple scheme is able to obtain good results, provided that adequate features are chosen. Moreover, we establish a representation using codebooks of spatiotemporal local descriptors as critical to the success o...

  6. An E-Commerce Recommender System Based on Content-Based Filtering

    Institute of Scientific and Technical Information of China (English)

    HE Weihong; CAO Yi

    2006-01-01

    A content-based filtering E-commerce recommender system is discussed fully in this paper. Users' unique features are first extracted by means of a vector space model; then, based on the qualitative value of product information, recommendation lists are obtained. Since the system adapts to user feedback automatically, its performance is comprehensively enhanced. Finally, the evaluation of the system and the experimental results are presented.

  7. Hybrid Content-Based Collaborative-Filtering Music Recommendations

    NARCIS (Netherlands)

    Siles Del Castillo, H.

    2007-01-01

    Recommendation of music is emerging with force nowadays due to the huge amount of music content and because users normally do not have the time to search through these collections looking for new items. The main purpose of a recommendation system is to estimate the user’s preferences and present him

  8. Content Design Patterns for Game-Based Learning

    Science.gov (United States)

    Maciuszek, Dennis; Ladhoff, Sebastian; Martens, Alke

    2011-01-01

    To address the lack of documented best practices in the development of digital educational games, the authors have previously proposed a reference software architecture. One of its components is the rule system specifying learning and gameplay content. It contains quest, player character, non-player character, environment, and item rules.…

  9. Text Indexing of Images Based on Graphical Image Content.

    Science.gov (United States)

    Patrick, Timothy B.; Sievert, MaryEllen C.; Popescu, Mihail

    1999-01-01

    Describes an alternative method for indexing images in an image database. The method consists of manually indexing a selected reference image, and then using retrieval by graphical content to automatically transfer the manually assigned index terms from the reference image to the images to be indexed. (AEF)

  10. Empirically Based Recommendations for Content of Graduate Nursing Administration Programs.

    Science.gov (United States)

    Scalzi, Cynthia C.; Wilson, David L.

    1990-01-01

    To determine content for graduate programs in nursing administration, 184 nurse executives from acute care, home care, long-term care, and occupational health rated their job functions. All respondents spend time on activities requiring knowledge of law, health care policy, and organizational behavior. Ethics ranked lowest in terms of time spent.…

  11. Surface Electromyographic Onset Detection Based On Statistics and Information Content

    Science.gov (United States)

    López, Natalia M.; Orosco, Eugenio; di Sciascio, Fernando

    2011-12-01

    The correct detection of the onset of muscular contraction is a diagnostic tool for neuromuscular diseases and an action trigger to control myoelectric devices. In this work, entropy and information content concepts were applied in algorithmic methods for automatic onset detection in surface electromyographic signals.
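    An entropy-based onset detector in the spirit of this abstract can be sketched as follows; the histogram entropy, window length, and threshold are illustrative assumptions rather than the authors' exact statistics:

```python
import numpy as np

def window_entropy(x, bins=16):
    """Shannon entropy (bits) of a signal window's amplitude histogram."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / len(x)
    return float(-np.sum(p * np.log2(p)))

def detect_onset(signal, win=64, threshold=2.0):
    """Return the start index of the first window whose entropy exceeds the
    threshold, taken as the muscular contraction onset; None if no window
    qualifies. Window length and threshold are illustrative."""
    for start in range(0, len(signal) - win + 1, win):
        if window_entropy(signal[start:start + win]) > threshold:
            return start
    return None
```

A real surface EMG baseline is noisy rather than exactly flat, so in practice the threshold would be calibrated against a rest segment.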

  12. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Full Text Available Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research to methods for retrieving the images most relevant to the query image. This paper presents a novel algorithm for increasing the precision of content-based image retrieval based on an electromagnetism optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism by considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and the electromagnetism optimization technique. It is implemented on a database of 8,000 images spread across 80 classes with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach have shown significant improvement in retrieval performance with regard to precision.
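    The attraction-repulsion step of an electromagnetism-like mechanism can be sketched generically: charges are derived from fitness, and each point is attracted to better points and repelled by worse ones. How the paper maps images to charges is not reproduced here; the formulas below follow the standard EM-like mechanism:

```python
import numpy as np

def charges(fitness):
    """EM-like charge: better (lower) fitness yields a larger charge."""
    f = np.asarray(fitness, dtype=float)
    best, worst = f.min(), f.max()
    if worst == best:
        return np.ones_like(f)
    return np.exp(-len(f) * (f - best) / (worst - best))

def em_forces(points, fitness):
    """Total force on each point: attraction toward points with better
    fitness, repulsion from points with worse fitness, scaled by the
    product of charges over squared distance."""
    q = charges(fitness)
    F = np.zeros_like(points, dtype=float)
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            d = points[j] - points[i]
            mag = q[i] * q[j] / (np.dot(d, d) + 1e-12)
            F[i] += d * mag if fitness[j] < fitness[i] else -d * mag
    return F
```

An optimizer would then move each point a small step along its force vector and re-evaluate fitness.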

  13. A tree-based paradigm for content-based video retrieval and management

    Science.gov (United States)

    Fang, H.; Yin, Y.; Jiang, J.

    2006-01-01

    As video databases become increasingly important for full exploitation of multimedia resources, this paper describes our recent feasibility studies towards building a content-based, high-level video retrieval/management system. The study is focused on constructing a semantic tree structure via a combination of low-level image processing techniques and high-level interpretation of visual content. Specifically, two separate algorithms were developed to organise input videos in terms of two layers: the shot layer and the key-frame layer. While the shot layer is derived by a multi-featured shot cut detection algorithm, the key-frame layer is extracted automatically by a genetic algorithm. This paves the way for applying pattern recognition techniques to analyse those key frames and thus extract high-level information to interpret the visual content or objects. Correspondingly, content-based video retrieval can be conducted in three stages. The first stage is to browse the digital video via the semantic tree at the structural level; the second stage is to match key frames in terms of low-level features such as colour, object shape, and texture; finally, the third stage is to match high-level information, such as a conversation against an indoor background, or vehicles moving along a seaside road. Extensive experiments are reported in this paper for shot cut detection and key frame extraction, enabling the tree structure to be constructed.

  14. Reputation-based content dissemination for user generated wireless podcasting

    DEFF Research Database (Denmark)

    Hu, Liang; Dittmann, Lars; Le Boudec, J.-Y.

    2009-01-01

    User-generated podcasting service over human-centric opportunistic networks can facilitate user-generated content sharing while humans are on the move beyond the coverage of infrastructure networks. We focus on designing efficient forwarding and cache replacement schemes for such a service under the constraints of the limited capability of handheld devices and limited network capacity. In particular, the design of those schemes is challenged by the lack of podcast channel popularity information at each node, which is crucial for forwarding and caching decisions. We design a distributed...

  15. Location based content delivery solution using iBeacon

    OpenAIRE

    2015-01-01

    There is a growing trend of using mobile devices in retail stores. Access to instant product related information is desirable by consumers in making a purchase decision. This information may include offers or sales discount associated with a product, comparing prices, ingredients or materials composition and many more. Retail business owners have seen this as a big opportunity to increase their in-store sales and they are adapting various technologies to deliver helpful contents and informati...

  16. Content-Based Video Description for Automatic Video Genre Categorization

    OpenAIRE

    Ionescu, Bogdan; Seyerlehner, Klaus; Rasche, Christoph; Vertan, Constantin; Lambert, Patrick

    2012-01-01

    International audience; In this paper, we propose an audio-visual approach to video genre categorization. Audio information is extracted at block-level, which has the advantage of capturing local temporal information. At temporal structural level, we asses action contents with respect to human perception. Further, color perception is quantified with statistics of color distribution, elementary hues, color properties and relationship of color. The last category of descriptors determines statis...

  17. Semantic Based Cluster Content Discovery in Description First Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    MUHAMMAD WASEEM KHAN

    2017-01-01

    Full Text Available In the field of data analytics, grouping similar documents in textual data is a serious problem. A lot of work has been done in this field and many algorithms have been proposed. One category of algorithms first groups the documents on the basis of similarity and then assigns meaningful labels to those groups. Description-first clustering algorithms belong to the category in which a meaningful description is deduced first and then relevant documents are assigned to that description. LINGO (Label Induction Grouping Algorithm) is a description-first clustering algorithm used for the automatic grouping of documents obtained from search results. It uses LSI (Latent Semantic Indexing), an IR (Information Retrieval) technique, for the induction of meaningful labels for clusters, and VSM (Vector Space Model) for cluster content discovery. In this paper we present LINGO using LSI during both the cluster label induction and the cluster content discovery phases. Finally, we compare the results obtained from the algorithm when it uses VSM versus LSI during the cluster content discovery phase.
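    The LSI-based cluster content discovery phase can be sketched with a truncated SVD: documents and candidate label vectors are folded into the reduced space and matched by cosine similarity. The toy term-document matrix and label vectors below are illustrative, not LINGO's actual label induction:

```python
import numpy as np

def lsi_space(term_doc, k=2):
    """Fold the term-document matrix into a k-dimensional LSI space via
    truncated SVD; returns the term-side basis and document coordinates."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    Uk = U[:, :k]
    return Uk, (Uk.T @ term_doc).T        # one row per document

def assign_docs(term_doc, label_vecs, k=2):
    """Cluster content discovery sketch: assign each document to the label
    whose LSI-space representation it is most similar to (cosine). The
    label vectors live in term space, as in LINGO's label induction."""
    Uk, docs = lsi_space(term_doc, k)
    labels = [Uk.T @ lv for lv in label_vecs]
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return [max(range(len(labels)), key=lambda li: cos(d, labels[li]))
            for d in docs]
```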

  18. Content Sharing over Smartphone-Based Delay-Tolerant Networks

    Directory of Open Access Journals (Sweden)

    L. Ramya Rekha

    2014-10-01

    Full Text Available With the growing number of smartphone users, peer-to-peer ad hoc content sharing is likely to occur often. New content sharing mechanisms must therefore be developed, since traditional content delivery schemes are not efficient for content sharing given the sporadic connectivity between smartphones. To achieve data delivery in such challenging environments, researchers have proposed the use of store-carry-forward methodologies, in which a node stores a message and holds it until a forwarding opportunity arises through an encounter with other nodes. Most previous work in this field has focused on predicting whether two nodes will encounter each other, without considering the place and time of the encounter. In this paper, we propose discover-predict-deliver as an efficient content sharing scheme for delay-tolerant smartphone networks. In the proposed scheme, contents are shared using the mobility information of individuals. Specifically, our strategy employs a mobility learning algorithm to identify significant places, both indoors and outdoors.

  19. Content-based retrieval based on binary vectors for 2-D medical images

    Institute of Scientific and Technical Information of China (English)

    龚鹏; 邹亚东; 洪海

    2003-01-01

    In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate decision-making in health care and related areas, a two-step content-based medical image retrieval algorithm is proposed in this paper. Firstly, in the preprocessing step, image segmentation is performed to distinguish image objects, and on the basis of the ...

  20. A Content-Based Search Algorithm for Motion Estimation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The basic search algorithm to implement Motion Estimation (ME) in the H.263 encoder is a full search. It is simple but time-consuming. Traditional fast search algorithms are quick, but may cause a fall in image quality or an increase in bit-rate in low bit-rate applications. A fast search algorithm for ME that takes image content into consideration is proposed in this paper. Experiments show that the proposed algorithm can offer up to 70 percent savings in execution time with almost no sacrifice in PSNR and bit-rate, compared with the full search.
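The full-search baseline the abstract compares against can be sketched as exhaustive block matching under a sum-of-absolute-differences (SAD) cost; the block size, search radius, and the tiny frames below are arbitrary toy values, not H.263 parameters:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block_at(frame, y, x, size):
    """Extract a size x size block with top-left corner (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(cur, ref, y, x, size=2, radius=1):
    """Exhaustive motion search: test every candidate displacement within
    the search radius and return the (dy, dx) with minimal SAD."""
    target = block_at(cur, y, x, size)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - size and 0 <= rx <= len(ref[0]) - size:
                cost = sad(target, block_at(ref, ry, rx, size))
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
cur = [[9, 8, 0, 0],   # the 2x2 block moved up-left by one pixel
       [7, 6, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(full_search(cur, ref, 0, 0))  # (1, 1): the matching block lies one step down-right in ref
```

Content-based fast search methods prune this candidate set; the full search's cost grows with the square of the radius, which is what the reported 70 percent time savings are measured against.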

  1. Web content adaptation for mobile device: A fuzzy-based approach

    Directory of Open Access Journals (Sweden)

    Frank C.C. Wu

    2012-03-01

    Full Text Available While HTML will continue to be used to develop Web content, how to effectively and efficiently transform HTML-based content automatically into formats suitable for mobile devices remains a challenge. In this paper, we introduce a concept of coherence set and propose an algorithm to automatically identify and detect coherence sets based on quantified similarity between adjacent presentation groups. Experimental results demonstrate that our method enhances Web content analysis and adaptation on the mobile Internet.

  2. Content-Based Hierarchical Analysis of News Video Using Audio and Visual Information

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A schema for content-based analysis of broadcast news video is presented. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video Library is generated, from which users can retrieve definite news story according to their demands.

  3. Efficient stereoscopic contents file format on the basis of ISO base media file format

    Science.gov (United States)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    A lot of 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available in the market. Recently, however, diverse types of display devices for stereoscopic video content have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and, as a content creator, to take stereoscopic images or record stereoscopic video. However, a user can only store and display such acquired stereoscopic content on his/her own devices due to the non-existence of a common file format for this content. This limitation prevents users from sharing their content with other users, which hinders the expansion of the market for stereoscopic content. Therefore, this paper proposes a common file format, on the basis of the ISO base media file format, for stereoscopic content, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.

  4. Design and realisation of an efficient content based music playlist generation system

    NARCIS (Netherlands)

    Balkema, Jan Wietse

    2009-01-01

    This thesis is on the subject of content-based music playlist generation systems. The primary aim is to develop algorithms for content-based music playlist generation that are faster than the current state of technology while keeping the quality of the playlists at a level that is at least comparable.

  5. The Integration of Language and Content: Form-Focused Instruction in a Content-Based Language Program

    Directory of Open Access Journals (Sweden)

    Antonella Valeo

    2013-06-01

    Full Text Available This comparative, classroom-based study investigated the effect and effectiveness of introducing a focus-on-form approach to a content-based, occupation-specific language program for adults. Thirty-six adults in two classes participated in a 10-week study. One group of 16 adults received content-based instruction that included a focus-on-form component, while the other group of 20 adults received the same content-based instruction with a focus on meaning only. Pre-tests, post-tests, and delayed post-tests measured the learning of two grammatical forms, the present conditional and the simple past tense, as well as occupational content knowledge. Results indicated significant gains on most of the language measures for both learner groups but significant advantages for the form-focused group on the content knowledge tests. The results are discussed in relation to the impact of specific strategies designed to focus on form and the relationship between attention to form and comprehension of content in the context of content-based language programs.

  6. RETRIEVAL TIME RESEARCH IN TEMPORAL KNOWLEDGE BASES WITH DYNAMIC CONTENT

    Directory of Open Access Journals (Sweden)

    J. A. Koroleva

    2015-07-01

    Full Text Available Results are presented from research on the retrieval time of actual data in temporal knowledge bases built on the basis of event states. This type of knowledge base provides quick access to relevant states as well as to history based on the chronology of events. It is shown that storing data for a deep retrospective significantly increases the search time due to the growth of the decision tree. The search time for temporal knowledge bases has been investigated as a function of the average number of events prior to the current state. Experimental results confirm the advantage of knowledge bases built on event states over traditional methods for the design of intelligent systems.
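The event-state idea can be sketched as follows: the latest state is materialized for fast access, while any historical state is rebuilt by replaying the event chronology up to the requested time, so deeper retrospectives cost more, which is the effect the study measures. A minimal, hypothetical store, not the paper's implementation:

```python
import bisect

class TemporalStore:
    """Minimal event-state store: the latest state is kept materialized for
    fast access, while historical states are rebuilt by replaying the event
    chronology up to the requested time (hence deeper history costs more)."""
    def __init__(self):
        self.events = []   # (timestamp, key, value), kept sorted by time
        self.current = {}

    def record(self, ts, key, value):
        bisect.insort(self.events, (ts, key, value))
        self.current[key] = value  # assumes events arrive in time order

    def state_at(self, ts):
        """Replay all events up to and including ts to rebuild that state."""
        state = {}
        for t, key, value in self.events:
            if t > ts:
                break
            state[key] = value
        return state

kb = TemporalStore()
kb.record(1, "valve", "open")
kb.record(5, "valve", "closed")
print(kb.current["valve"])      # fast path: the materialized current state
print(kb.state_at(3)["valve"])  # replayed history: state as of time 3
```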

  7. Outcomes-based Teaching for Brain-based Learning Vis-à-vis Pedagogical Content Knowledge

    Directory of Open Access Journals (Sweden)

    Reynaldo B. Inocian

    2016-05-01

    Full Text Available The study determined the essential elements of an Outcomes-based Teaching and Learning (OBTL) component of an Outcomes-based Education (OBE) cycle. It sought to answer these objectives: (1) extrapolate notable teaching attributes based on the actual teaching demonstrations of the 7 subjects; (2) describe each of the OBTL's quadrant elements; (3) design a prototype for an integrated arts-based OBTL. This study utilized a case analysis of the actual observation of recurring subtleties exhibited by the seven subject demonstrators during the In-service Training (INSET) held on October 27, 2015 in one of the city divisions in Cebu, Philippines. Each of them was rated based on the specific skills used according to Hermann's Learning Quadrants, after a short lecture on pedagogical content knowledge (PCK). A documentation of a sample Lesson Plan (LP) for Quadrant Modelling for Teaching (QMT) was juxtaposed as a noble exemplar. The quest for outcomes-based teaching for brain-based learning vis-à-vis pedagogical content knowledge (PCK) cascaded into more brain-based inspired learning activities among teacher-demonstrators, with less emphasis on creativity, so a teaching exemplar was created as part of its modelling. Although an INSET in the public schools enhanced opportunities to exhibit teaching attributes such as vivacity, sense of humor, creativity, inquisitiveness, concentration, cautiousness, and dynamism in the achievement of the 21st century skills, these attributes remained uniquely apparent in every individual teacher. Dreaming to acquire many of these attributes propelled individual teachers' authentic experience and sincerity to integrate appropriate OBTL activities, which emphasized four spiral elements by which the learners would: own knowledge in discovering experiences (sarili), master skills in critical evaluation (husay), engage understanding and reflection in dialogical abstraction (saysay), and achieve wonderment in

  8. Image Mining using Content Based Image Retrieval System

    OpenAIRE

    Rajshree S. Dubey; Niket Bhargava; Rajnish Choubey

    2010-01-01

    The image depends on human perception and is also based on the machine vision system. Image retrieval is based on the color histogram and texture. The human perception of an image is based on the human neurons, which hold on the order of 10^12 units of information; the human brain continuously learns with sensory organs like the eye, which transmits the image to the brain, which in turn interprets the image. The research challenge is that how the brain processes the information in a semantic manner is hot...

  9. WormBase: new content and better access

    OpenAIRE

    Bieri, Tamberlyn; Blasiar, Darin; Ozersky, Philip; Antoshechkin, Igor; Bastiani, Carol; Canaran, Payan; Chan, Juancarlos; Chen, Nansheng; Chen, Wen J.; Davis, Paul; Fiedler, Tristan J.; Girard, Lisa; Han, Michael; Harris, Todd W.; Kishore, Ranjana

    2006-01-01

    WormBase (http://wormbase.org), a model organism database for Caenorhabditis elegans and other related nematodes, continues to evolve and expand. Over the past year WormBase has added new data on C.elegans, including data on classical genetics, cell biology and functional genomics; expanded the annotation of closely related nematodes with a new genome browser for Caenorhabditis remanei; and deployed new hardware for stronger performance. Several existing datasets including phenotype descripti...

  10. Demographic-Based Content Analysis of Web-Based Health-Related Social Media

    Science.gov (United States)

    Shahbazi, Moloud; Wiley, Matthew T; Hristidis, Vagelis

    2016-01-01

    Background An increasing number of patients from diverse demographic groups share and search for health-related information on Web-based social media. However, little is known about the content of the posted information with respect to the users’ demographics. Objective The aims of this study were to analyze the content of Web-based health-related social media based on users’ demographics to identify which health topics are discussed in which social media by which demographic groups and to help guide educational and research activities. Methods We analyze 3 different types of health-related social media: (1) general Web-based social networks Twitter and Google+; (2) drug review websites; and (3) health Web forums, with a total of about 6 million users and 20 million posts. We analyzed the content of these posts based on the demographic group of their authors, in terms of sentiment and emotion, top distinctive terms, and top medical concepts. Results The results of this study are: (1) Pregnancy is the dominant topic for female users in drug review websites and health Web forums, whereas for male users, it is cardiac problems, HIV, and back pain, but this is not the case for Twitter; (2) younger users (0-17 years) mainly talk about attention-deficit hyperactivity disorder (ADHD) and depression-related drugs, users aged 35-44 years discuss about multiple sclerosis (MS) drugs, and middle-aged users (45-64 years) talk about alcohol and smoking; (3) users from the Northeast United States talk about physical disorders, whereas users from the West United States talk about mental disorders and addictive behaviors; (4) Users with higher writing level express less anger in their posts. Conclusion We studied the popular topics and the sentiment based on users' demographics in Web-based health-related social media. 
Our results provide valuable information, which can help create targeted and effective educational campaigns and guide experts to reach the right users on Web-based

  11. Proto-object based rate control for JPEG2000: an approach to content-based scalability.

    Science.gov (United States)

    Xue, Jianru; Li, Ce; Zheng, Nanning

    2011-04-01

    The JPEG2000 system provides scalability with respect to quality, resolution and color component in the transfer of images. However, scalability with respect to semantic content is still lacking. We propose a biologically plausible salient-region-based bit allocation mechanism within the JPEG2000 codec for the purpose of augmenting scalability with respect to semantic content. First, an input image is segmented into several salient proto-objects (a region that possibly contains a semantically meaningful physical object) and background regions (a region that contains no object of interest) by modeling visual focus of attention on salient proto-objects. Then, a novel rate control scheme distributes a target bit rate to each individual region according to its saliency, and constructs quality layers of proto-objects for the purpose of more precise truncation comparable to original quality layers in the standard. Empirical results show that the suggested approach adds to the JPEG2000 system scalability with respect to content as well as the functionality of selectively encoding, decoding, and manipulating each individual proto-object in the image, with only slight modifications to the JPEG2000 standard. Furthermore, the proposed rate control approach efficiently reduces the computational complexity and memory usage, as well as maintains the high quality of the image to a level comparable to the conventional post-compression rate distortion (PCRD) optimum truncation algorithm for JPEG2000.
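The described rate control distributes a target bit rate to regions according to saliency. A minimal sketch of such proportional allocation, with an invented floor fraction so background regions keep a small share, is shown below; it is not the paper's actual scheme:

```python
def allocate_bits(saliency, total_bits, floor=0.02):
    """Split a target bit budget across regions in proportion to saliency,
    guaranteeing each region a small floor fraction so background regions
    are never starved entirely."""
    n = len(saliency)
    reserved = total_bits * floor * n   # the guaranteed minimum for all regions
    pool = total_bits - reserved        # remainder, split by saliency weight
    weight = sum(saliency)
    return [total_bits * floor + pool * s / weight for s in saliency]

# Three proto-objects and one background region (hypothetical saliency scores).
bits = allocate_bits([0.5, 0.3, 0.15, 0.05], total_bits=10000)
print([round(b) for b in bits])  # [4800, 2960, 1580, 660]
```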

  12. Watermarking Digital Images Based on a Content Based Image Retrieval Technique

    CERN Document Server

    Tsolis, Dimitrios K; Papatheodorou, Theodore S

    2008-01-01

    The current work focuses on the implementation of a robust watermarking algorithm for digital images, which is based on an innovative spread spectrum analysis algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. Highly robust watermark algorithms apply "detectable watermarks", for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the detection algorithm to apply all the keys to all the images, which is inefficient for very large image databases. On the other hand, "readable" watermarks may prove weaker but are easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both "detectable" and "readable" watermarks. The result is a fast and robust watermarking algor...
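Spread-spectrum watermarking with correlation-based Boolean detection can be sketched generically: a key-seeded pseudo-random sequence is added to host coefficients, and detection correlates the signal with the sequence regenerated from the key. The strength, threshold, and stand-in host signal are illustrative assumptions, not the paper's parameters:

```python
import random

def embed(signal, key, strength=3.0):
    """Spread-spectrum embedding: add a key-seeded pseudo-random +/-1
    sequence, scaled by the strength, onto the host signal coefficients."""
    rng = random.Random(key)
    chips = [rng.choice((-1, 1)) for _ in signal]
    return [s + strength * c for s, c in zip(signal, chips)]

def detect(signal, key, threshold=1.5):
    """Correlation detector: a Boolean decision on whether the watermark
    generated from this key is present in the signal."""
    rng = random.Random(key)
    chips = [rng.choice((-1, 1)) for _ in signal]
    corr = sum(s * c for s, c in zip(signal, chips)) / len(signal)
    return corr > threshold

host = [float(i % 7) for i in range(512)]   # stand-in for DCT coefficients
marked = embed(host, key=42)
print(detect(marked, key=42), detect(marked, key=7))  # True False
```

The abstract's scalability problem is visible here: checking a library against many keys means rerunning this correlation once per key per image.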

  13. User-Based Interaction for Content-Based Image Retrieval by Mining User Navigation Patterns.

    Directory of Open Access Journals (Sweden)

    A. Srinagesh

    2013-09-01

    Full Text Available In Internet, multimedia, and image databases, image searching is a necessity, and Content-Based Image Retrieval (CBIR) is an approach for image retrieval. When user interaction is included in CBIR through Relevance Feedback (RF) techniques, obtaining results requires a large number of feedback iterations, which is not efficient for large databases in real-time applications. We therefore propose a new approach which converges rapidly and can aptly be called Navigation Pattern-Based Relevance Feedback (NPRF) with a user-based interaction mode. We combined NPRF with three RF techniques, viz. Query Re-weighting (QR), Query Expansion (QEX), and Query Point Movement (QPM). By using these three techniques, efficient results are obtained with a small number of feedbacks. The efficiency of the proposed method is demonstrated by calculating precision, recall, and further evaluation measures.
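Of the three combined techniques, query point movement is commonly realized in Rocchio form: the query vector is pulled toward the centroid of relevant examples and pushed away from irrelevant ones. A generic sketch with conventional default weights, not necessarily those used by NPRF:

```python
def move_query(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Query point movement in Rocchio form: pull the query vector toward
    the centroid of relevant feature vectors and away from irrelevant ones."""
    def centroid(vectors):
        if not vectors:
            return [0.0] * len(query)
        return [sum(col) / len(vectors) for col in zip(*vectors)]
    rel, irr = centroid(relevant), centroid(irrelevant)
    return [alpha * q + beta * r - gamma * i
            for q, r, i in zip(query, rel, irr)]

query = [1.0, 0.0]
relevant = [[2.0, 2.0], [4.0, 2.0]]   # feature vectors the user marked relevant
irrelevant = [[0.0, -2.0]]
print([round(v, 2) for v in move_query(query, relevant, irrelevant)])  # [3.25, 1.8]
```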

  14. Identifying content for simulation-based curricula in urology

    DEFF Research Database (Denmark)

    Nayahangan, Leizl Joy; Bølling Hansen, Rikke; Lindorff-Larsen, Karen

    2017-01-01

    OBJECTIVE: Simulation-based training is well recognized in the transforming field of urological surgery; however, integration into the curriculum is often unstructured. Development of simulation-based curricula should follow a stepwise approach starting with a needs assessment. This study aimed to identify technical procedures in urology that should be included in a simulation-based curriculum for residency training. MATERIALS AND METHODS: A national needs assessment was performed using the Delphi method involving 56 experts with significant roles in the education of urologists. Round 1 identified technical procedures that newly qualified urologists should perform. Round 2 included a survey using an established needs assessment formula to explore: the frequency of procedures; the number of physicians who should be able to perform the procedure; the risk and/or discomfort to patients when a procedure...

  15. Natural ingredients based cosmetics. Content of selected fragrance sensitizers

    DEFF Research Database (Denmark)

    Rastogi, Suresh Chandra; Johansen, J D; Menné, T

    1996-01-01

    -hexylcinnamic aldehyde. The study revealed that 91% (20/22) of the natural-ingredients-based perfumes contained 0.027%-7.706% of 1 to 7 of the target fragrances. Between 1 and 5 of the chemically defined synthetic constituents of the fragrance mix were found in 82% (18/22) of the perfumes. 35% (7/20) of the other...

  16. Evaluation of a Problem Based Learning Curriculum Using Content Analysis

    Science.gov (United States)

    Prihatiningsih, Titi Savitri; Qomariyah, Nurul

    2016-01-01

    Faculty of Medicine UGM has implemented Problem Based Learning (PBL) since 1985. Seven jump tutorial discussions are applied. A scenario is used as a trigger to stimulate students to identify learning objectives (LOs) in step five which are used as the basis for self study in step six. For each scenario, the Block Team formulates the LOs which are…

  17. GreenDelivery: Proactive Content Caching and Push with Energy-Harvesting-based Small Cells

    OpenAIRE

    Zhou, Sheng; Gong, Jie; ZHOU, Zhenyu; Chen, Wei; Niu, Zhisheng

    2015-01-01

    The explosive growth of mobile multimedia traffic calls for scalable wireless access with high quality of service and low energy cost. Motivated by the emerging energy harvesting communications, and the trend of caching multimedia contents at the access edge and user terminals, we propose a paradigm-shift framework, namely GreenDelivery, enabling efficient content delivery with energy harvesting based small cells. To resolve the two-dimensional randomness of energy harvesting and content requ...

  18. A NEW CONTENT BASED IMAGE RETRIEVAL SYSTEM USING GMM AND RELEVANCE FEEDBACK

    Directory of Open Access Journals (Sweden)

    N. Shanmugapriya

    2014-01-01

    Full Text Available Content-Based Image Retrieval (CBIR), also known as Query By Image Content (QBIC), is the application of computer vision techniques that gives a solution to the image retrieval problem, such as searching for digital images in large databases. The need for a versatile and general-purpose CBIR system for very large image databases has attracted the focus of many researchers at information-technology giants and leading academic institutions towards the development of CBIR techniques. Due to the development of network and multimedia technologies, users are no longer satisfied by traditional information retrieval techniques, so nowadays CBIR is becoming a source of exact and fast retrieval. Texture and color are the important features of content-based image retrieval systems. In the proposed method, images can be retrieved using color-based, texture-based, or combined color-and-texture features: algorithms such as the auto color correlogram and correlation are used for extracting color features, and Gaussian mixture models for texture features. In this study, query point movement is used as a relevance feedback technique for CBIR systems. Thus the proposed method achieves better performance and accuracy in retrieving images.

  19. Evolution-based Virtual Content Insertion with Visually Virtual Interactions in Videos

    Science.gov (United States)

    Chang, Chia-Hu; Wu, Ja-Ling

    With the development of content-based multimedia analysis, virtual content insertion has been widely used and studied for video enrichment and multimedia advertising. However, how to automatically insert a user-selected virtual content into personal videos in a less-intrusive manner, with an attractive representation, is a challenging problem. In this chapter, we present an evolution-based virtual content insertion system which can insert virtual contents into videos with evolved animations according to predefined behaviors emulating the characteristics of evolutionary biology. The videos are considered not only as carriers of message conveyed by the virtual content but also as the environment in which the lifelike virtual contents live. Thus, the inserted virtual content will be affected by the videos to trigger a series of artificial evolutions and evolve its appearances and behaviors while interacting with video contents. By inserting virtual contents into videos through the system, users can easily create entertaining storylines and turn their personal videos into visually appealing ones. In addition, it would bring a new opportunity to increase the advertising revenue for video assets of the media industry and online video-sharing websites.

  20. Mineral content prediction for unconventional oil and gas reservoirs based on logging data

    Science.gov (United States)

    Maojin, Tan; Youlong, Zou; Guoyue

    2012-09-01

    Coal bed methane and shale oil & gas are both important unconventional oil and gas resources, whose reservoirs are typically non-linear with complex and various mineral components; logging data interpretation models for calculating the mineral contents are difficult to establish, and empirical formulas cannot be constructed because of the mineral variety. Radial basis function (RBF) network analysis is a new method developed in recent years; the technique can generate a smooth continuous function of several variables to approximate the unknown forward model. Firstly, the basic principles of the RBF network are discussed, including the network construction and basis functions, and the network training by the adjacent clustering algorithm is described in detail. For multi-mineral content in coal bed methane and shale oil & gas reservoirs, the RBF interpolation method uses a number of well logging measurements to predict the mineral component contents. For coal-bed methane reservoir parameter prediction, the RBF method is used to calculate mineral contents such as ash, volatile matter, and carbon content, which achieves a mapping from various logging data to multi-mineral contents. For shale gas reservoirs, the RBF method can be used to predict the clay, quartz, feldspar, carbonate, and pyrite contents. Various tests in coal beds and gas shales show the method is effective and applicable for mineral component content prediction.
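RBF interpolation of a mineral content from logging data can be sketched generically: fit weights so that a sum of Gaussian basis functions passes through every training sample, then evaluate at new log readings. The sample log readings and clay contents below are invented for illustration, and the fit here is a plain linear solve rather than the paper's adjacent clustering training:

```python
import math

def gaussian_rbf(r, eps=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(points, values, eps=1.0):
    """Fit RBF weights so the interpolant passes through every sample:
    a toy version of mapping log responses to one mineral content."""
    A = [[gaussian_rbf(math.dist(p, q), eps) for q in points] for p in points]
    w = solve(A, values)
    return lambda x: sum(wi * gaussian_rbf(math.dist(x, p), eps)
                         for wi, p in zip(w, points))

# Hypothetical samples: (gamma-ray, density) log readings -> clay content (%).
logs = [(1.0, 2.0), (2.0, 2.5), (3.0, 2.2), (1.5, 2.8)]
clay = [10.0, 35.0, 20.0, 40.0]
predict = rbf_fit(logs, clay)
print(round(predict(logs[1])))  # 35: the interpolant reproduces training samples
```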

  1. Content Based Image Retrieval Using Local Color Histogram

    Directory of Open Access Journals (Sweden)

    Metty Mustikasari, Eri Prasetyo, Suryadi Harmanto

    2014-01-01

    Full Text Available This paper proposes a technique to retrieve images based on color features using local histograms. The image is divided into nine sub-blocks of equal size. The color of each sub-block is extracted by quantizing the HSV color space into a 12x6x6 histogram. In this retrieval system, Euclidean distance and city block distance are used to measure the similarity of images. The algorithm is tested using the Corel image database. The performance of the retrieval system is measured in terms of its recall and precision. The effectiveness of the retrieval system is also measured based on AVRR (Average Rank of Relevant Images) and IAVRR (Ideal Average Rank of Relevant Images), which were proposed by Faloutsos. The experimental results show that the retrieval system has a good performance and that city block distance achieves higher retrieval performance than Euclidean distance.
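The described pipeline (nine equal sub-blocks, a 12x6x6 HSV histogram per block, city block distance) can be sketched directly; the toy 6x6 "images" and the use of Python's colorsys conventions are assumptions of this illustration, not details from the paper:

```python
import colorsys

def block_histogram(pixels):
    """Flattened, normalized 12x6x6 HSV histogram of one sub-block of RGB pixels."""
    hist = [0] * (12 * 6 * 6)
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hb = min(int(h * 12), 11)   # 12 hue bins
        sb = min(int(s * 6), 5)     # 6 saturation bins
        vb = min(int(v * 6), 5)     # 6 value bins
        hist[(hb * 6 + sb) * 6 + vb] += 1
    total = len(pixels) or 1
    return [c / total for c in hist]

def local_histograms(image, blocks=3):
    """Split the image into blocks x blocks sub-blocks and concatenate
    their normalized histograms into one feature vector."""
    h, w = len(image), len(image[0])
    feature = []
    for by in range(blocks):
        for bx in range(blocks):
            pix = [image[y][x]
                   for y in range(by * h // blocks, (by + 1) * h // blocks)
                   for x in range(bx * w // blocks, (bx + 1) * w // blocks)]
            feature.extend(block_histogram(pix))
    return feature

def city_block(u, v):
    """L1 (city block) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

red = [[(200, 0, 0)] * 6 for _ in range(6)]
blue = [[(0, 0, 200)] * 6 for _ in range(6)]
print(city_block(local_histograms(red), local_histograms(red)))   # 0.0
print(city_block(local_histograms(red), local_histograms(blue)))  # 18.0, the maximum for 9 blocks
```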

  2. Situational Requirements Engineering for the Development of Content Management System-based Web Applications

    OpenAIRE

    2005-01-01

    Web applications are evolving towards strongly content-centered Web applications. The development processes and implementation of these applications are unlike the development and implementation of traditional information systems. In this paper we propose the WebEngineering Method, a method for developing content management system (CMS) based Web applications. Critical to the successful development of CMS-based Web applications is adaptation to the dynamic business. We first define CMS-based Web...

  3. The application of content based teaching method in English writing in Senior High School

    Institute of Scientific and Technical Information of China (English)

    李素霞

    2016-01-01

    Content-based instruction (CBI) refers to an approach to language teaching based on subject integration, in which the teaching is organized around the subject matter or information that students will acquire. Based on the deficiencies of previous studies, the thesis proposes a Content-Based Instruction approach in order to improve the quality of English writing teaching in senior high school and students' actual level of writing.

  4. Content-based Image Retrieval Using Color Histogram

    Institute of Scientific and Technical Information of China (English)

    HUANG Wen-bei; HE Liang; GU Jun-zhong

    2006-01-01

    This paper introduces the principles of using color histogram to match images in CBIR. And a prototype CBIR system is designed with color matching function. A new method using 2-dimensional color histogram based on hue and saturation to extract and represent color information of an image is presented. We also improve the Euclidean-distance algorithm by adding Center of Color to it. The experiment shows modifications made to Euclidean-distance significantly elevate the quality and efficiency of retrieval.

  5. Texture based feature extraction methods for content based medical image retrieval systems.

    Science.gov (United States)

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems used for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The presented study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated in this study depend on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, accepted as spatial methods. In the experiments, a database was built including hundreds of medical images of the brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, it is observed that the Gabor wavelet has been the most effective and accurate method.
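A GLCM for one displacement, together with two common statistics derived from it (contrast and energy), can be sketched as follows; the displacement, gray-level count, and toy textures are illustrative choices, not the study's experimental settings:

```python
def glcm(image, levels, dy=0, dx=1):
    """Normalized gray-level co-occurrence matrix for one displacement (dy, dx)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(len(image)):
        for x in range(len(image[0])):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
                counts[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast(P):
    """Local intensity variation: large for rapidly changing textures."""
    return sum(P[i][j] * (i - j) ** 2
               for i in range(len(P)) for j in range(len(P)))

def energy(P):
    """Textural uniformity: 1.0 for a perfectly uniform region."""
    return sum(p * p for row in P for p in row)

flat = [[0, 0, 0, 0]] * 4      # uniform texture
stripes = [[0, 1, 0, 1]] * 4   # alternating columns
print(contrast(glcm(flat, 2)), energy(glcm(flat, 2)))
print(contrast(glcm(stripes, 2)), energy(glcm(stripes, 2)))
```

In practice several displacements (0°, 45°, 90°, 135°) are computed and their statistics concatenated into the feature vector used for retrieval.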

  6. A Novel Optimization-Based Approach for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Manyu Xiao

    2013-01-01

    Full Text Available Content-based image retrieval is nowadays one of the possible and promising solutions to manage image databases effectively. However, with the large number of images, there still exists a great discrepancy between the users' expectations (accuracy and efficiency) and the real performance in image retrieval. In this work, new optimization strategies are proposed for vocabulary tree building, retrieval, and matching methods. More precisely, a new clustering strategy combining classification and the conventional K-Means method is first defined. Then a new matching technique is built to eliminate the error caused by the large-scale scale-invariant feature transform (SIFT). Additionally, a new unit mechanism is proposed to reduce the cost of indexing time. Finally, the numerical results show that excellent performance is obtained in both accuracy and efficiency based on the proposed improvements for image retrieval.

  7. Content-Based Instruction in Primary and Secondary School Settings. Case Studies in TESOL Practice Series

    Science.gov (United States)

    Kaufman, Dorit, Ed.; Crandall, JoAnn, Ed.

    2005-01-01

    Content-based instruction (CBI) challenges English language educators to teach English using materials that learners encounter in their regular subject-area classes. This volume helps ESL and EFL teachers meet that challenge by providing them with creative ways to integrate English language learning with the content that students study at primary…

  8. CBRecSys 2015. New Trends on Content-Based Recommender Systems

    DEFF Research Database (Denmark)

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either in...

  9. CBRecSys 2016. New Trends on Content-Based Recommender Systems

    DEFF Research Database (Denmark)

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either in...

  10. Third Workshop on New Trends in Content-based Recommender Systems (CBRecSys 2016)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn; Musto, Cataldo

    2016-01-01

    While content-based recommendation has been applied successfully in many different domains, it has not seen the same level of attention as collaborative filtering techniques have. However, there are many recommendation domains and applications where content and metadata play a key role, either in...

  11. Way Forward in the Twenty-First Century in Content-Based Instruction: Moving towards Integration

    Science.gov (United States)

    Ruiz de Zarobe, Yolanda; Cenoz, Jasone

    2015-01-01

    The aim of this paper is to reflect on the theoretical and methodological underpinnings that provide the basis for an understanding of Content-Based Instruction/Content and Language Integrated Learning (CBI/CLIL) in the field and its relevance in education in the twenty-first century. It is argued that the agenda of CBI/CLIL needs to move towards…

  12. Content-Based Curriculum for High-Ability Learners, Second Edition

    Science.gov (United States)

    VanTassel-Baska, Joyce, Ed.; Little, Catherine A., Ed.

    2011-01-01

    The newly updated "Content-Based Curriculum for High-Ability Learners" provides a solid introduction to curriculum development in gifted and talented education. Written by experts in the field of gifted education, this text uses cutting-edge design techniques and aligns the core content with national and state standards. In addition to a revision…

  14. Content Based Mammogram Retrieval based on Breast Tissue Characterization using Statistical Features

    Directory of Open Access Journals (Sweden)

    K. Vaidehi

    2014-08-01

    Full Text Available The aim of the study is to retrieve similar mammographic images based on the type of breast tissue density of a given query image. Statistical descriptors were extracted from candidate blocks of the breast parenchyma. The means of the extracted features are fed into an SVM classifier for classification of the tissue density into one of three classes, namely dense, glandular and fatty, and the classification accuracy obtained is 91.54%. After classification, the mammogram images along with their feature vectors are stored in three separate databases based on tissue type. Then the K-means clustering algorithm is used to divide each database into 2 clusters. For content-based retrieval of mammograms given a query image, the query image is first classified into one of the three tissue classes. Then the feature vector of the query image is compared with the two cluster centroids of the corresponding class, so as to confine the search to the closest cluster. The top 5 similar images are retrieved from the corresponding class database. Euclidean-distance-based k-NN is used for mammogram retrieval, and this study obtained precision rates between 98 and 99%.
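
    The retrieval stage described above (classify the query, pick the closer of the class's two centroids, then run Euclidean k-NN inside that cluster only) can be sketched as follows; the data layout and names are illustrative stand-ins, not the paper's implementation:

    ```python
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve(query_vec, query_class, class_dbs, class_centroids, k=5):
        """Confined k-NN search: the query is assumed already classified
        into a tissue class; only the closer of that class's two k-means
        clusters is searched."""
        c0, c1 = class_centroids[query_class]
        cluster = 0 if euclidean(query_vec, c0) <= euclidean(query_vec, c1) else 1
        # Each cluster holds (image_id, feature_vector) pairs.
        candidates = class_dbs[query_class][cluster]
        ranked = sorted(candidates, key=lambda item: euclidean(query_vec, item[1]))
        return [image_id for image_id, _ in ranked[:k]]
    ```

    Restricting the scan to one cluster roughly halves the comparisons per query at the cost of missing matches that fall in the other cluster.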

  15. What Combinations of Contents is Driving Popularity in IPTV-based Social Networks?

    Science.gov (United States)

    Bhatt, Rajen

    IPTV-based social networks are gaining popularity, with TV programs delivered over an IP connection and internet-like applications available on the home TV. One such application is rating TV programs over predefined genres. In this paper, we suggest an approach for building a recommender system to be used by content distributors, publishers, and motion picture producers and directors to decide which combinations of content drive popularity or unpopularity. This can then be used to create a mixture of media content likely to achieve high popularity, or to cater customized content to groups of users with similar tastes, for whom the content combinations driving popularity are also similar. We use a novel formulation based on fuzzy decision trees. Computational experiments performed on a real-world program review database show that the proposed approach is effective for understanding content combinations.

  16. Visible Light Image-Based Method for Sugar Content Classification of Citrus.

    Science.gov (United States)

    Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki

    2016-01-01

    Visible light imaging of citrus fruit from Mie Prefecture, Japan, was performed to determine whether an algorithm could be developed to predict sugar content. This nondestructive classification showed that accurate segmentation of different images can be achieved by a correlation analysis based on a threshold for the coefficient of determination. There is a clear correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an addition algorithm, and the sugar content of citrus fruit was predicted by the dummy variable method. The results showed that small but orange citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit and to classify it by sugar content using light in the visible spectrum, without the need for an additional light source.

  17. [Estimation of forest canopy chlorophyll content based on PROSPECT and SAIL models].

    Science.gov (United States)

    Yang, Xi-guang; Fan, Wen-yi; Yu, Ying

    2010-11-01

    The forest canopy chlorophyll content directly reflects the health and stress of a forest, and its accurate estimation is a significant foundation for researching forest ecosystem cycle models. In the present paper, the forest canopy chlorophyll content was inverted with the PROSPECT and SAIL models, i.e., from the angle of the physical mechanism. First, leaf and canopy spectra were simulated by the PROSPECT and SAIL models, respectively, and a leaf chlorophyll content look-up table was established for leaf chlorophyll retrieval. Then leaf chlorophyll content was converted into canopy chlorophyll content using the Leaf Area Index (LAI). Finally, canopy chlorophyll content was estimated from a Hyperion image. The results indicated that the main effective bands for chlorophyll content were 400-900 nm; the leaf and canopy spectra simulated by the PROSPECT and SAIL models fit the measured spectra well, with 7.06% and 16.49% relative error, respectively; the RMSE of the LAI inversion was 0.5426; and the forest canopy chlorophyll content was estimated well by the PROSPECT and SAIL models, with a precision of 77.02%.
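
    The look-up-table inversion described above can be sketched under stated assumptions: a toy forward model stands in for PROSPECT/SAIL, and the leaf-to-canopy conversion is taken as a simple scaling by LAI, as the abstract suggests:

    ```python
    def build_lut(forward_model, chl_grid):
        """Simulate one spectrum per candidate chlorophyll value.
        forward_model is a stand-in for PROSPECT here."""
        return [(chl, forward_model(chl)) for chl in chl_grid]

    def invert(measured, lut):
        """Pick the table entry whose simulated spectrum is closest (RMSE)."""
        def rmse(a, b):
            return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
        return min(lut, key=lambda entry: rmse(measured, entry[1]))[0]

    def canopy_chlorophyll(leaf_chl, lai):
        # The abstract scales leaf content to canopy content with LAI.
        return leaf_chl * lai
    ```

    The inversion accuracy is bounded by the grid spacing of the table, so the grid is usually chosen finer than the retrieval precision required.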

  18. Building ESP Content-Based Materials to Promote Strategic Reading

    Directory of Open Access Journals (Sweden)

    Bautista Barón Myriam Judith

    2013-04-01

    Full Text Available This article reports on an action research project that proposes to improve the reading comprehension and vocabulary of undergraduate students of English for specific purposes (explosives majors) at a police training institute in Colombia. I used the qualitative research method to explore and reflect upon the teaching-learning processes during implementation. Being the teacher of an English for specific purposes course without appropriate didactic resources, I designed six reading comprehension workshops based on the cognitive language learning approach, not only to improve students' reading skills but also their autonomy through the use of learning strategies. The data were collected from field notes, artifacts, progress reviews, surveys, and photographs. This article reports on a qualitative research project that proposes to improve the reading comprehension and vocabulary of university students of English who specialize in explosives-related subjects at a police academy in Colombia. Because this is a specific-purposes English course lacking appropriate didactic resources, I designed six reading comprehension workshops based on the cognitive language learning approach to improve both their reading comprehension and their autonomy in using learning strategies. Field notes, artifacts, progress tests, surveys, and photographs were used for data collection.

  19. High content cell-based assay for the inflammatory pathway

    Science.gov (United States)

    Mukherjee, Abhishek; Song, Joon Myong

    2015-07-01

    Cellular inflammation is a non-specific immune response to tissue injury that takes place via cytokine network orchestration to maintain normal tissue homeostasis. However, chronic inflammation that lasts for a longer period plays a key role in human diseases such as neurodegenerative disorders and cancer development. Understanding the cellular and molecular mechanisms underlying the inflammatory pathways may be effective in targeting and modulating their outcome. Tumor necrosis factor alpha (TNF-α) is a pro-inflammatory cytokine that effectively combines pro-inflammatory features with pro-apoptotic potential. Increased levels of TNF-α observed during acute and chronic inflammatory conditions are believed to induce adverse phenotypes like glucose intolerance and an abnormal lipid profile. Natural products, e.g., amygdalin, cinnamic acid, jasmonic acid and aspirin, have proven efficacy in minimizing TNF-α induced inflammation in vitro and in vivo. Cell lysis-free quantum dot (QDot) imaging is an emerging technique to identify the cellular mediators of a signaling cascade with a single assay in one run. In comparison to organic fluorophores, inorganic QDots are bright, resistant to photobleaching and possess tunable optical properties that make them suitable for long-term and multicolor imaging of various components in cellular crosstalk. Hence we tested some components of the mitogen-activated protein kinase (MAPK) pathway during TNF-α induced inflammation and the effects of aspirin in HepG2 cells by a QDot multicolor imaging technique. Results demonstrated that aspirin showed significant protective effects against TNF-α induced cellular inflammation. The developed cell-based assay provides a platform for the analysis of cellular components in a smooth and reliable way.

  20. Content-Based Digital Image Retrieval based on Multi-Feature Amalgamation

    Directory of Open Access Journals (Sweden)

    Linhao Li

    2013-12-01

    Full Text Available In practice, digital image retrieval faces many problems, and difficulties remain in its measures and methods. Currently there is no unambiguous algorithm that directly captures the salient features of image content while simultaneously satisfying color, scale, and rotation invariance of those features. We therefore analyze content-based image retrieval technology, focusing on global features such as the seven Hu invariant moments, the edge direction histogram, and eccentricity; a method for blocked images is also discussed. During image matching, the extracted image features are treated as points in a vector space, the similarity of two images is measured by the closeness of these points, and the similarity is computed with the Euclidean distance and the histogram intersection distance. A novel method based on multi-feature amalgamation is then proposed to overcome the limitations of retrieval based on a single global or local feature. It extracts eccentricity, the seven Hu invariant moments, and the edge direction histogram; computes a similarity distance for each feature of the images; and normalizes these distances. Within the global features, a weighted feature distance forms the similarity measurement function for retrieval. The features of blocked images are extracted with a partitioning method based on polar coordinates. Finally, using hierarchical retrieval over global and local features, the results produced by global features such as the invariant moments serve as input to a second-layer local-feature match, which effectively improves retrieval accuracy.
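
    The normalization and weighted fusion of per-feature distances described above can be sketched as follows; the min-max normalization and the equal weights are illustrative assumptions, since the abstract does not give the exact weighting:

    ```python
    def fused_similarity(per_feature_dists, weights):
        """Combine per-feature distance tables into one fused score per image.

        per_feature_dists maps a feature name (e.g. "hu", "edge_hist",
        "eccentricity") to {image_id: raw_distance}.  Each feature is
        min-max normalized across the candidate set so no single feature
        dominates, then the normalized distances are combined linearly.
        Lower fused score = more similar to the query."""
        names = list(per_feature_dists)
        norm = {}
        for name in names:
            d = per_feature_dists[name]
            lo, hi = min(d.values()), max(d.values())
            span = (hi - lo) or 1.0  # guard against a constant feature
            norm[name] = {img: (v - lo) / span for img, v in d.items()}
        images = next(iter(per_feature_dists.values())).keys()
        return {img: sum(weights[n] * norm[n][img] for n in names) for img in images}
    ```

    Ranking the images by ascending fused score then gives the retrieval order for the first (global-feature) layer.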

  1. Retrieving Biomedical Images through Content-Based Learning from Examples Using Fine Granularity

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Songhua [ORNL]; Jiang, Hao [The University of Hong Kong]; Lau, Francis [The University of Hong Kong]

    2012-01-01

    Traditional content-based image retrieval methods based on learning from examples analyze and attempt to understand high-level semantics of an image as a whole. They typically apply certain case-based reasoning technique to interpret and retrieve images through measuring the semantic similarity or relatedness between example images and search candidate images. The drawback of such a traditional content-based image retrieval paradigm is that the summation of imagery contents in an image tends to lead to tremendous variation from image to image. Hence, semantically related images may only exhibit a small pocket of common elements, if at all. Such variability in image visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images. In this study, we explore a new content-based image retrieval algorithm that mines visual patterns of finer granularities inside a whole image to identify visual instances which can more reliably and generically represent a given search concept. We performed preliminary experiments to validate our new idea for content-based image retrieval and obtained very encouraging results.

  2. Retrieving biomedical images through content-based learning from examples using fine granularity

    Science.gov (United States)

    Jiang, Hao; Xu, Songhua; Lau, Francis C. M.

    2012-02-01

    Traditional content-based image retrieval methods based on learning from examples analyze and attempt to understand high-level semantics of an image as a whole. They typically apply certain case-based reasoning technique to interpret and retrieve images through measuring the semantic similarity or relatedness between example images and search candidate images. The drawback of such a traditional content-based image retrieval paradigm is that the summation of imagery contents in an image tends to lead to tremendous variation from image to image. Hence, semantically related images may only exhibit a small pocket of common elements, if at all. Such variability in image visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images. In this study, we explore a new content-based image retrieval algorithm that mines visual patterns of finer granularities inside a whole image to identify visual instances which can more reliably and generically represent a given search concept. We performed preliminary experiments to validate our new idea for content-based image retrieval and obtained very encouraging results.

  3. Ad-hoc Content-based Queries and Data Analysis for Virtual Observatories Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aquilent, Inc. proposes to support ad-hoc, content-based query and data retrieval from virtual observatories (VxO) by developing 1) Higher Order Query Services that...

  4. OneWeb: web content adaptation platform based on W3C Mobile Web Initiative guidelines

    National Research Council Canada - National Science Library

    Francisco O Martínez P; Gustavo A Uribe G; Fabián L Mosquera P

    2011-01-01

    .... This article presents the main features and functional modules of OneWeb, an MWI-based Web content adaptation platform developed by Mobile Devices Applications Development Interest Group's (W@PColombia...

  5. Single-labelled music genre classification using content-based features

    CSIR Research Space (South Africa)

    Ajoodha, R

    2015-11-01

    Full Text Available In this paper we use content-based features to perform automatic classification of music pieces into genres. We categorise these features into four groups: features extracted from the Fourier transform’s magnitude spectrum, features designed...

  6. Revisiting the Content-Based Instruction in Language Teaching in relation with CLIL: Implementation and Outcome

    Directory of Open Access Journals (Sweden)

    Abdul Karim

    2016-12-01

    Full Text Available This article reviews the literature on Content-Based Instruction (CBI) and Content and Language Integrated Learning (CLIL) in language teaching in light of recent developments in the field, including learning principles, the factors responsible for the successful implementation of CBI/CLIL, and their prospects and outcomes. The paper draws on secondary data from articles providing exploratory accounts of observed contexts, attention to the views and practices of participants, and reviews of previous studies. The goal is to understand the aspects of CBI, its relation to CLIL, and the successes and shortcomings of its implementation in language teaching. Keywords: Overview, Content-Based Instruction (CBI), Content and Language Integrated Learning (CLIL), Immersion

  7. Measurements of water content in hydroxypropyl-methyl-cellulose based hydrogels via texture analysis.

    Science.gov (United States)

    Lamberti, Gaetano; Cascone, Sara; Cafaro, Maria Margherita; Titomanlio, Giuseppe; d'Amore, Matteo; Barba, Anna Angela

    2013-01-30

    In this work, a fast and accurate method to evaluate the water content in a cellulose derivative-based matrix subjected to controlled hydration was proposed and tuned. The method is based on evaluating the work of penetration required in a needle compression test. The work of penetration was successfully related to the hydrogel water content, assayed by a gravimetric technique. Moreover, a fitting model was proposed to correlate the two variables (the water content and the work of penetration). The availability of a reliable tool is useful both for quantifying water uptake phenomena and for managing the testing of novel pharmaceutical solid dosage forms.
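
    The abstract reports a fitting model linking work of penetration to water content without giving its functional form; the sketch below uses an ordinary least-squares straight line purely as a stand-in calibration:

    ```python
    def linear_fit(xs, ys):
        """Ordinary least-squares fit y = a*x + b (stand-in for the
        paper's unspecified fitting model)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return a, my - a * mx

    def predict_water_content(work, a, b):
        """Map a measured work of penetration to an estimated water content."""
        return a * work + b
    ```

    In practice the calibration pairs would come from the gravimetric assay mentioned above, and the fitted curve then replaces the slow gravimetric step for routine measurements.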

  8. Obtaining Application-based and Content-based Internet Traffic Statistics

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Pedersen, Jens Myrup

    2012-01-01

    the Volunteer-Based System for Research on the Internet, developed at Aalborg University, is capable of providing detailed statistics of Internet usage. Since an increasing amount of HTTP traffic has been observed during the last few years, the system also supports creating statistics of different kinds of HTTP...... traffic, like audio, video, file transfers, etc. All statistics can be obtained for individual users of the system, for groups of users, or for all users altogether. This paper presents results with real data collected from a limited number of real users over six months. We demonstrate that the system can...... be useful for studying characteristics of computer network traffic in an application-oriented or content-type-oriented way, and is now ready for a larger-scale implementation. The paper is concluded with a discussion about various applications of the system and possibilities of further enhancement....

  9. [Vegetation index estimation by chlorophyll content of grassland based on spectral analysis].

    Science.gov (United States)

    Xiao, Han; Chen, Xiu-Wan; Yang, Zhen-Yu; Li, Huai-Yu; Zhu, Han

    2014-11-01

    Comparing existing remote sensing methods for estimating chlorophyll content, this paper confirms that the vegetation index is one of the most practical and popular approaches; meanwhile, grassland degradation has become an increasingly serious problem in recent years. This paper first analyzes the measured reflectance spectral curves and their first derivative curves in the grasslands of Songpan, Sichuan and Gongger, Inner Mongolia, conducts correlation analysis between these two spectral curves and chlorophyll content, and finds a regular relation between the red edge position and grassland chlorophyll content: the higher the chlorophyll content, the higher the REIP (red-edge inflection point) value. This paper then constructs the GCI (grassland chlorophyll index) and selects the most suitable bands for retrieval. Finally, this paper calculates the GCI from satellite hyperspectral imagery and conducts verification and accuracy analysis of the results against chlorophyll content data collected in two field experiments. The results show that, for grassland chlorophyll content, GCI is more sensitive than other chlorophyll indices and has higher estimation accuracy. GCI is proposed here for the first time to estimate grassland chlorophyll content and has wide application potential for the remote sensing retrieval of grassland chlorophyll content. In addition, the remote-sensing-based estimation method in this paper provides new research ideas for estimating other vegetation biochemical parameters, evaluating vegetation growth status, and monitoring changes in the grassland ecological environment.

  10. RSAD: A Robust Distributed Contention-Based Adaptive Mechanism for IEEE 802.11 Wireless LANs

    Institute of Scientific and Technical Information of China (English)

    Yong Peng; Shi-Duan Cheng; Jun-Liang Chen

    2005-01-01

    Previous research has shown that the Distributed Coordination Function (DCF) access mode of IEEE 802.11 performs poorly in heavy-contention environments. Based on an in-depth analysis of IEEE 802.11 DCF, NSAD (New Self-adapt DCF-based protocol) was proposed to improve system saturation throughput under heavy contention. The initial contention window tuning algorithm of NSAD is proved effective in an error-free environment; however, problems with exchanging the initial contention window occur in error-prone environments. Based on an analysis of NSAD's performance in error-prone environments, RSAD is proposed to further enhance performance. Simulations in a more realistic shadowing error-prone environment compare the performance of NSAD and RSAD, and the results show that RSAD achieves the expected further improvement over NSAD in the error-prone environment (i.e., a better goodput and fairness index).

  11. REMOTE SENSING OF WATER VAPOR CONTENT USING GROUND-BASED GPS DATA

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Spatial and temporal resolution of water vapor content is useful for improving the accuracy of short-term weather prediction. Dense, continuously tracking regional GPS arrays will play an important role in remote sensing of atmospheric water vapor content. In this study, a piecewise linear solution method was proposed to estimate the precipitable water vapor (PWV) content from ground-based GPS observations in Hong Kong. To evaluate the solution accuracy of the GPS-sensed water vapor content, locally collected upper-air sounding data (radiosonde) were used to calculate the precipitable water vapor over the same period. One month of PWV results from the ground-based GPS sensing technique and the radiosonde method agree within 1~2 mm. This encouraging result will motivate GPS meteorology applications based on the establishment of a dense GPS array in Hong Kong.

  12. The Effect Of Implementing Content-Based Instruction For Young Learners.

    Directory of Open Access Journals (Sweden)

    Ima Isnaini Taufiqur Rohmah

    2015-07-01

    Full Text Available The integration of language and content instruction has become a new phenomenon in the field of language education. The aim of this research is to explore the implementation of Content-Based Instruction and its effect on young learners. This research used a qualitative method in order to observe and obtain detailed information on how students react and interact in various situations. The research was conducted with fifth-grade students, and the data were collected through interviews, observation, and document analysis. The results indicate that (1) the implementation of Content-Based Instruction in the fifth grade was well executed: English was used as the instructional language, although this was not supported by appropriate teaching documents; and (2) Content-Based Instruction had a significant effect on the students' speaking ability: students could answer the teacher's questions, and the use of the mother tongue was reduced. It also improved the class situation: the atmosphere in the whole class became lively, there were many chances for students to practice their speaking skills, students were highly motivated, and the learning process became easy and fun. Key words: Content-Based Instruction, Young Learners.

  13. [The new method monitoring crop water content based on NIR-Red spectrum feature space].

    Science.gov (United States)

    Cheng, Xiao-juan; Xu, Xin-gang; Chen, Tian-en; Yang, Gui-jun; Li, Zhen-hai

    2014-06-01

    Moisture content is an important index of crop water stress, and timely, effective monitoring of crop water content is of great significance for evaluating crop water deficit and guiding agricultural irrigation. This paper builds a new crop water index for winter wheat vegetation water content based on the NIR-Red spectral space. First, narrow-band canopy spectra of winter wheat were resampled according to the relative spectral response functions of HJ-CCD and ZY-3. Then a new index (PWI) was constructed to estimate the vegetation water content of winter wheat by improving the PDI (perpendicular drought index) and PVI (perpendicular vegetation index) in the NIR-Red spectral feature space. The results showed that the relationship between PWI and VWC (vegetation water content) was stable for the simulated wide-band multispectral data of HJ-CCD and ZY-3, with R2 of 0.684 and 0.683, respectively. VWC was then estimated using PWI, with R2 of 0.764 and 0.764 and RMSE of 3.837% and 3.840%, respectively. The results indicate that estimating crop water content with PWI is feasible, and it provides a new method for monitoring crop water content using HJ-CCD and ZY-3 remote sensing data.
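
    The abstract does not give the PWI formula itself; the sketch below shows only the generic geometric construction shared by perpendicular indices such as PVI and PDI in the NIR-Red feature space, namely the signed distance from a pixel's (red, NIR) point to a reference line (in PVI that line is the soil line; the slope and intercept here are illustrative inputs):

    ```python
    import math

    def perpendicular_distance(red, nir, slope, intercept):
        """Signed perpendicular distance from the point (red, nir) to the
        reference line nir = slope * red + intercept in NIR-Red space.
        This is the shared geometry behind PVI/PDI-style indices, not the
        paper's PWI formula."""
        return (nir - slope * red - intercept) / math.sqrt(1.0 + slope ** 2)
    ```

    Pixels on the reference line score zero; the farther a pixel sits from the line (e.g. dense, well-watered vegetation above a soil line), the larger the magnitude of the index.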

  14. Lithium bis(fluorosulfonyl)imide based low ethylene carbonate content electrolyte with unusual solvation state

    Science.gov (United States)

    Uchida, Satoshi; Ishikawa, Masashi

    2017-08-01

    We prepared a lithium bis(fluorosulfonyl)imide (LiFSI)-based low-ethylene-carbonate (EC) content electrolyte as a new electrolyte. LiFSI dissociates sufficiently in mixed solvents containing only a small amount of EC, and the LiFSI-based low-EC-content electrolyte shows a high ionic conductivity comparable to that of a conventional LiPF6-based high-EC-content electrolyte. In addition, the LiFSI-based low-EC-content electrolyte has an unusual solvation state of the Li ion, and we consider that the desolvation process of the Li ion in our new electrolyte system differs from that in conventional high-EC-content systems. A graphite half-cell assembled with our new electrolyte shows a quite low Li-ion transfer resistance and outstanding charge and discharge rate performance compared with conventional high-EC-content systems. A graphite/LiNi1/3Mn1/3Co1/3O2 cell assembled with our new electrolyte also shows superior charge and discharge rate performance and excellent long-term cycle stability.

  15. The waterborne polyurethane dispersions based on polycarbonate diol: Effect of ionic content

    Energy Technology Data Exchange (ETDEWEB)

    Cakić, Suzana M., E-mail: suzana.cakic@yahoo.com [University of Niš, Faculty of Technology, Bulevar oslobodjenja 124, 16000 Leskovac (Serbia); Špírková, Milena [Institute of Macromolecular Chemistry AS CR v.v.i., Heyrovskeho Nam. 2, 16206 Prague (Czech Republic); Ristić, Ivan S.; B-Simendić, Jaroslava K. [University of Novi Sad, Faculty of Technology, Bulevar cara Lazara 1, 21000 Novi Sad (Serbia); M-Cincović, Milena [University of Belgrade, Vinča Institute of Nuclear Science, P.O. Box 522, 11001 Belgrade (Serbia); Poręba, Rafał [Institute of Macromolecular Chemistry AS CR v.v.i., Heyrovskeho Nam. 2, 16206 Prague (Czech Republic)

    2013-02-15

    Three water-based polyurethane dispersions (PUD) were synthesized by a modified dispersing procedure using polycarbonate diol (PCD), isophorone diisocyanate (IPDI), dimethylolpropionic acid (DMPA), triethylamine (TEA) and ethylenediamine (EDA). The ionic group content in the polyurethane-ionomer structure was varied by changing the amount of the internal emulsifier, DMPA (4.5, 7.5 and 10 wt.% relative to the prepolymer weight). The expected structures of the obtained materials were confirmed by FTIR spectroscopy. The effect of the DMPA content on the thermal properties of the polyurethane films was measured by TGA, DTA, DSC and DMTA methods. Increased DMPA amounts result in higher hard segment contents and an increase in the weight loss corresponding to the degradation of the hard segments. The reduction of hard segment content led to an elevated decomposition temperature and to a decrease of the glass transition temperature and thermoplasticity. The atomic force microscopy (AFM) results indicated that phase separation between the hard and soft segments of the PUD with higher DMPA content is more significant than in the PUD with lower DMPA content. The physico-mechanical properties, such as hardness, adhesion and gloss of the dried films, were also determined considering the effect of DMPA content on coating properties. Highlights: ► Polyurethane dispersions (PUD) were synthesized from polycarbonate diol. ► The effect of the DMPA content on the thermal properties of PUD films was measured. ► The thermal stability of PUD was increased by decreasing the DMPA content. ► T{sub g} values of PUD were increased by increasing ionic content. ► The PUD with the highest content of DMPA showed more significant phase separation, confirmed by the AFM results.

  16. Intimate evolution of proteins. Proteome atomic content correlates with genome base composition.

    Science.gov (United States)

    Baudouin-Cornu, Peggy; Schuerer, Katja; Marlière, Philippe; Thomas, Dominique

    2004-02-13

    Discerning the significant relations that exist within and among genome sequences is a major step toward the modeling of biopolymer evolution. Here we report a systematic analysis of the atomic composition of proteins encoded by organisms representative of each kingdom. Protein atomic contents are shown to vary widely among species, with the largest variations observed for the main architectural component of proteins, the carbon atom. These variations apply to the bulk proteins as well as to subsets of ortholog proteins. A pronounced correlation between proteome carbon content and genome base composition is further evidenced, with high genomic G+C content related to low protein carbon content. The generation of random proteomes and the examination of the canonical genetic code provide arguments for the hypothesis that natural selection might have driven genome base composition.
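
    The two quantities the abstract correlates can be computed directly from sequences. This is a minimal sketch: the per-residue carbon counts are the standard amino-acid compositions, while the sequence inputs are purely illustrative:

    ```python
    # Carbon atoms per amino-acid residue (backbone plus side chain),
    # keyed by one-letter code.
    CARBON = {"G": 2, "A": 3, "S": 3, "C": 3, "T": 4, "V": 5, "L": 6, "I": 6,
              "P": 5, "M": 5, "D": 4, "N": 4, "E": 5, "Q": 5, "K": 6, "R": 6,
              "H": 6, "F": 9, "Y": 9, "W": 11}

    def mean_carbon(protein):
        """Average carbon atoms per residue: the proteome-side quantity
        the abstract correlates with genome base composition."""
        return sum(CARBON[aa] for aa in protein) / len(protein)

    def gc_content(dna):
        """Fraction of G+C bases: the genome-side quantity."""
        return sum(dna.count(base) for base in "GC") / len(dna)
    ```

    Computed over whole proteomes and genomes, the abstract's finding is that these two values are negatively correlated: GC-rich genomes tend to encode proteins with fewer carbon atoms per residue.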

  17. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen;

    Purpose: Design and validation of an MR-based method that allows calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1-map-based water content MR sequences were used on a system stabilized at 37 degrees Celsius. The T1 map intensity signal was analyzed on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data were analyzed and the water content calculated. Then, the same samples were freeze-dried (a technique that removes all the water a tissue contains) and the water they contained was measured. Results: The 37-degree-Celsius system and the analysis can be reproduced in a similar way. MR T1...

  18. FFT-based Network Coding For Peer-To-Peer Content Delivery

    CERN Document Server

    Soro, Alexandre

    2009-01-01

    In this paper, we propose a structured peer-to-peer (P2P) distribution scheme based on Fast Fourier Transform (FFT) graphs. We build a peer-to-peer network that reproduces the FFT graph initially designed for hardware FFT codecs. This topology allows content delivery with a maximum diversity level for a minimum global complexity. The resulting FFT-based network is a structured architecture with an adapted network coding that brings flexibility in content distribution and robustness against the dynamic nature of the network. This structure can achieve optimal capacity in terms of content recovery while solving the problem of the last remaining blocks, even for large networks.
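    The abstract does not give the authors' exact construction, but the wiring of a generic radix-2 butterfly (the FFT graph that hardware codecs use, and that the overlay reproduces) can be enumerated as below; `butterfly_edges` is a hypothetical helper name.

```python
def butterfly_edges(n_log2: int):
    """Edges of a radix-2 FFT (butterfly) graph on 2**n_log2 nodes per stage.

    At stage s, node i exchanges data with its partner i XOR 2**s; a P2P
    overlay mimicking an FFT codec would place one peer link per butterfly.
    """
    n = 1 << n_log2
    edges = []
    for stage in range(n_log2):
        for i in range(n):
            partner = i ^ (1 << stage)
            if i < partner:  # record each butterfly pair once
                edges.append((stage, i, partner))
    return edges

# An 8-node FFT graph has 3 stages of 4 butterfly pairs each.
print(len(butterfly_edges(3)))  # 12 edges
```

Every node thus keeps constant degree per stage while any block can reach any peer in log2(n) hops, which is the diversity-for-complexity trade-off the abstract claims.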

  19. Design of Content-Based Retrieval System in Remote Sensing Image Database

    Institute of Scientific and Technical Information of China (English)

    LI Feng; ZENG Zhiming; HU Yanfeng; FU Kun

    2006-01-01

    To retrieve object regions efficiently from a massive remote sensing image database, a model for content-based retrieval of remote sensing images is first given according to the characteristics of remote sensing image applications; the algorithms this model adopts for feature extraction, multidimensional indexing and relevance feedback are then analyzed in detail. Finally, the topics that remain to be researched within this model are proposed.

  20. Content Based Image Retrieval using Hierarchical and K-Means Clustering Techniques

    Directory of Open Access Journals (Sweden)

    V.S.V.S. Murthy

    2010-03-01

    In this paper we present an image retrieval system that takes an image as the input query and retrieves images based on image content. Content-based image retrieval is an approach for retrieving semantically relevant images from an image database based on automatically derived image features. The unique aspect of the system is its use of hierarchical and k-means clustering techniques. The proposed procedure consists of two stages: first, most of the database is filtered out by hierarchical clustering, and then the remaining clustered images are passed to k-means, yielding better-ranked image results.
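    A minimal sketch of the two-stage idea, under stated assumptions: toy 2-D colour features stand in for real image descriptors, and a simple radius filter around the query stands in for the paper's hierarchical stage; the k-means stage is plain Lloyd iteration.

```python
import math
import random

def coarse_filter(features, query, radius):
    """Stage 1 (stand-in for hierarchical clustering): keep only features
    within `radius` of the query, discarding most of the database."""
    return [f for f in features if math.dist(f, query) <= radius]

def kmeans(points, k, iters=20, seed=0):
    """Stage 2: plain Lloyd k-means on the surviving features."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[nearest].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Toy colour features: two tight clusters plus one far-away outlier.
db = [(0.1, 0.1), (0.12, 0.09), (0.9, 0.9), (0.88, 0.93), (5.0, 5.0)]
survivors = coarse_filter(db, query=(0.5, 0.5), radius=1.0)
centers, groups = kmeans(survivors, k=2)
print(sorted(centers))
```

The coarse stage drops the outlier before k-means ever sees it, which is the cost saving the two-stage design is after.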

  1. Towards case-based medical learning in radiological decision making using content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Günther Rolf W

    2011-10-01

    Background: Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. Methods: We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) Adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) Case-based reasoning (CBR) parallels the human problem-solving process; (iii) Content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. Results: We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system.
Conclusions: The IBCR-RE paradigm

  2. Towards case-based medical learning in radiological decision making using content-based image retrieval.

    Science.gov (United States)

    Welter, Petra; Deserno, Thomas M; Fischer, Benedikt; Günther, Rolf W; Spreckelsen, Cord

    2011-10-27

    Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) Adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) Case-based reasoning (CBR) parallels the human problem-solving process; (iii) Content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. 
The IBCR-RE paradigm incorporates a novel combination of essential aspects

  3. Newspaper Content Analysis in Evaluation of a Community-Based Participatory Project to Increase Physical Activity

    Science.gov (United States)

    Granner, Michelle L.; Sharpe, Patricia A.; Burroughs, Ericka L.; Fields, Regina; Hallenbeck, Joyce

    2010-01-01

    This study conducted a newspaper content analysis as part of an evaluation of a community-based participatory research project focused on increasing physical activity through policy and environmental changes, which included activities related to media advocacy and media-based community education. Daily papers (May 2003 to December 2005) from both…

  4. Evaluation of the Professional Development Program on Web Based Content Development

    Science.gov (United States)

    Yurdakul, Bünyamin; Uslu, Öner; Çakar, Esra; Yildiz, Derya G.

    2014-01-01

    The aim of this study is to evaluate the professional development program on web based content development (WBCD) designed by the Ministry of National Education (MoNE). Based on the theoretical CIPP model by Stufflebeam and Guskey's levels of evaluation, the study was carried out as a case study. The study group consisted of the courses that…

  5. Brain Based Learning in Science Education in Turkey: Descriptive Content and Meta Analysis of Dissertations

    Science.gov (United States)

    Yasar, M. Diyaddin

    2017-01-01

    This study aimed at performing content analysis and meta-analysis on dissertations related to brain-based learning in science education to find out the general trend and tendency of brain-based learning in science education and find out the effect of such studies on achievement and attitude of learners with the ultimate aim of raising awareness…

  6. NUNI (New User and New Item) Problem for SRSs Using Content Aware Multimedia-Based Approach

    DEFF Research Database (Denmark)

    Chaudhary, Pankaj; Deshmukh, Aaradhana A.; Mihovska, Albena Dimitrova

    2015-01-01

    Recommendation systems suggest items and users of interest based on preferences of items or users and item or user attributes. In social media-based services of dynamic content (such as news, blog, video, movies, books, etc.), recommender systems face the problem of discovering new items, new use...

  7. Content Analysis of Conceptually Based Physical Education in Southeastern United States Universities and Colleges

    Science.gov (United States)

    Williams, Suzanne Ellen; Greene, Leon; Satinsky, Sonya; Neuberger, John

    2016-01-01

    Purpose: The purposes of this study were to explore PE in higher education through the offering of traditional activity- and skills-based physical education (ASPE) and conceptually-based physical education (CPE) courses, and to conduct an exploratory content analysis on the CPE available to students in randomized colleges and universities in the…

  8. Multi Attribute Content Distribution and Replication Based Video Streaming in Wireless Networks for Qos Improvement

    Directory of Open Access Journals (Sweden)

    A. Balakrishnan

    2015-10-01

    The growth of information technology has introduced various functionalities and services to support video delivery, such as on-demand and live streaming. Many approaches to content delivery in wireless networks have been discussed, but they suffer from latency and streaming-quality problems: delivery takes more time, and the retransmission frequency is high. To solve these problems, we propose a multi-attribute location selection and distribution approach that selects the location from which the video content is to be fetched; based on a multi-attribute replication scheme, new copies of the video content are replicated to further locations according to various quality-of-service factors. The proposed method maintains a number of replicas of each video content item at different locations in the wireless network. It selects the location of the video content, i.e. the node holding the requested data, according to the delay present in the network and the user location. The number of replicas maintained is determined by the spatial request factor, which represents the number of requests received from users in a specific spatial region, together with factors such as delay, the number of users and the traffic incurred in the network towards a video content item. The proposed method reduces the overall latency in the network and increases the efficiency of content delivery in support of multimedia data transfer; it also reduces the overall time complexity and the overhead introduced by data transfer.
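    The abstract names the attributes (delay, user location, traffic) but publishes no explicit formula, so the weighted cost below is an assumption for illustration only; `location_score`, `pick_replica` and the weights are hypothetical names and values.

```python
def location_score(delay_ms, user_distance, traffic_load,
                   w_delay=0.5, w_dist=0.3, w_traffic=0.2):
    """Weighted cost of fetching a video replica from one location.

    Lower is better. Attributes and weights are illustrative stand-ins
    for the multi-attribute selection the paper describes.
    """
    return w_delay * delay_ms + w_dist * user_distance + w_traffic * traffic_load

def pick_replica(locations):
    """Choose the replica location with the lowest multi-attribute cost."""
    return min(locations, key=lambda loc: location_score(*loc[1]))

replicas = [
    ("edge-A", (20, 1, 80)),   # low delay, near the user, but busy
    ("edge-B", (90, 3, 10)),   # far from the user, but idle
    ("core",   (40, 5, 50)),
]
print(pick_replica(replicas)[0])
```

The same score, aggregated per region, could drive the replication side: regions whose requests keep producing high costs are the ones that warrant a new copy.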

  9. Semantic query processing and annotation generation for content-based retrieval of histological images

    Science.gov (United States)

    Tang, Lilian H.; Hanka, Rudolf; Ip, Horace H. S.; Cheung, Kent K. T.; Lam, Ringo

    2000-05-01

    In this paper we present a semantic content representation scheme and the associated techniques for supporting (1) query by image examples or by natural language in a histological image database and (2) automatic annotation generation for images through image semantic analysis. In this research, various types of query are analyzed by either a semantic analyzer or a natural language analyzer to extract high level concepts and histological information, which are subsequently converted into an internal semantic content representation structure code-named 'Papillon.' Papillon serves not only as an intermediate representation scheme but also stores the semantic content of the image that will be used to match against the semantic index structure within the image database during query processing. During the image database population phase, all images that are going to be put into the database will go through the same processing so that every image would have its semantic content represented by a Papillon structure. Since the Papillon structure for an image contains high level semantic information of the image, it forms the basis of the technique that automatically generates textual annotation for the input images. Papillon bridges the gap between different media in the database, allows complicated intelligent browsing to be carried out efficiently, and also provides a well- defined semantic content representation scheme for different content processing engines developed for content-based retrieval.

  10. Multi Feature Content Based Video Retrieval Using High Level Semantic Concept

    Directory of Open Access Journals (Sweden)

    Hamdy K. Elminir

    2012-07-01

    Content-based retrieval allows finding information by searching its content rather than its attributes. The challenge facing content-based video retrieval (CBVR) is to design systems that can accurately and automatically process large amounts of heterogeneous videos. A content-based video retrieval system must first segment the video stream into separate shots; features are then extracted to represent the video shots; finally, a similarity/distance metric and an algorithm efficient enough to retrieve query-related videos are chosen. There are two main issues in this process: the first is how to determine the best way to perform video segmentation and key frame selection; the second is which features to use for video representation. Various features can be extracted for this purpose, either low or high level, and a key issue is how to bridge the gap between them. This paper proposes a content-based video retrieval system that addresses the aforementioned issues by using an adaptive threshold for video segmentation and key frame selection, and by using low-level features together with high-level semantic object annotation for video representation. Experimental results show that the use of multiple features increases both precision and recall rates by about 13% to 19% compared with a traditional system that uses only a color feature for video retrieval.
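    One common adaptive-threshold rule for shot segmentation declares a cut where the inter-frame histogram difference exceeds the mean plus k standard deviations of all differences; the paper's exact rule may differ, and the 4-bin histograms and the constant k=1.5 below are illustrative assumptions.

```python
import statistics

def hist_diff(h1, h2):
    """L1 distance between two (already normalised) frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_shots(histograms, k=1.5):
    """Adaptive-threshold shot boundary detection.

    A cut is declared where the inter-frame histogram difference exceeds
    mean + k * stdev of all differences, so the threshold adapts to the
    video rather than being a fixed constant.
    """
    diffs = [hist_diff(a, b) for a, b in zip(histograms, histograms[1:])]
    threshold = statistics.mean(diffs) + k * statistics.stdev(diffs)
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

# Toy 4-bin colour histograms: one abrupt cut between frames 2 and 3.
frames = [(1, 0, 0, 0), (0.9, 0.1, 0, 0), (0.95, 0.05, 0, 0),
          (0, 0, 1, 0), (0, 0.1, 0.9, 0), (0, 0, 0.95, 0.05)]
print(detect_shots(frames))  # [3]
```

The frame opening each detected shot can then serve as that shot's key frame.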

  11. Performance Evaluation of Content Based Image Retrieval on Feature Optimization and Selection Using Swarm Intelligence

    Directory of Open Access Journals (Sweden)

    Kirti Jain

    2016-03-01

    The diversity and applicability of swarm intelligence are increasing every day in the fields of science and engineering. Swarm intelligence supports dynamic feature optimization. We have used swarm intelligence for feature optimization and feature selection in content-based image retrieval. The performance of content-based image retrieval is measured by precision and recall, whose values depend on the retrieval capacity of the image features. The basic raw image content has visual features such as color, texture, shape and size. The partial feature extraction technique is based on a geometric invariant function. Three swarm intelligence algorithms were used for the optimization of features: ant colony optimization, particle swarm optimization (PSO), and the glowworm optimization algorithm. The Corel image dataset and MATLAB software were used for evaluating performance.

  12. Segmentation and Content-Based Watermarking for Color Image and Image Region Indexing and Retrieval

    Directory of Open Access Journals (Sweden)

    Nikolaos V. Boulgouris

    2002-04-01

    In this paper, an entirely novel approach to image indexing is presented using content-based watermarking. The proposed system uses color image segmentation and watermarking in order to facilitate content-based indexing, retrieval and manipulation of digital images and image regions. A novel segmentation algorithm is applied on reduced images and the resulting segmentation mask is embedded in the image using watermarking techniques. In each region of the image, indexing information is additionally embedded. In this way, the proposed system is endowed with content-based access and indexing capabilities which can be easily exploited via a simple watermark detection process. Several experiments have shown the potential of this approach.

  13. A simple method for determining water content in organic solvents based on cobalt(II) complexes

    Institute of Scientific and Technical Information of China (English)

    Lin Zhou; Xiao Hua Liu; Hai Xin Bai; Hong Juan Wang

    2011-01-01

    A method to determine water content in organic solvents was developed based on the color change of cobalt(II) nitrate in different solvents. The color-change mechanism and the optimal conditions for determining the water content were investigated. The results showed a good linear relationship between the absorbance of the cobalt(II) complexes in organic solvents and the water content, with correlation coefficients in the range 0.9989~0.9994. The method has the advantages of low cost, good reproducibility, good sensitivity, simple operation, fast detection and environmental friendliness, with no limitation on the linear range for determining water content. It was used to determine water in samples with satisfactory recoveries of 97.81%~101.24%.
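    The linear calibration underlying any such absorbance-based assay can be sketched as an ordinary least-squares fit inverted to read off unknowns; the calibration standards below are hypothetical numbers, not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration standards: water content (%) vs. absorbance.
water = [0.5, 1.0, 2.0, 4.0]
absorbance = [0.11, 0.20, 0.41, 0.80]
a, b = fit_line(water, absorbance)

def water_content(abs_measured):
    """Invert the fitted line to read a sample's water content off the curve."""
    return (abs_measured - b) / a

print(round(water_content(0.30), 2))
```

With real data, the correlation coefficient of the fit is the figure of merit the abstract reports (0.9989~0.9994).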

  14. The Flexural Behavior of Denture Base Reinforced by Different Contents of Ultrahigh-Modulus Polyethylene Fiber

    Institute of Scientific and Technical Information of China (English)

    XU Dong-xuan; CHENG Xiang-rong; ZHANG Yu-feng; WANG Jun; CHENG Han-ting

    2003-01-01

    Denture base made from acrylic resin (polymethyl methacrylate, PMMA) was reinforced by different contents of ultrahigh-modulus polyethylene fiber (UHMPEF). The flexural strength of the denture base was tested, and the failure modes and microstructures were investigated with a scanning electron microscope (SEM). The results indicate that 3.5 wt.% UHMPEF increased the ultimate flexural strength of the denture base.

  15. An Internet content overview and implementation on an IP based set-top box

    OpenAIRE

    Widborg, Linus

    2006-01-01

    This thesis covers the investigation of different content sources on the Internet and the analysis of the requirements they put on a set-top box. It also covers the adaptation of the set-top box to one of these sources. An IP based set-top box (IP-STB) is mainly constructed for access to TV and video distributed over a high speed network. The IP-STB is also connected to the Internet and it potentially has access to all of the Internet based content. This could provide the user of the IP-STB ...

  16. Conceptualizing In-service Secondary School Science Teachers' Knowledge Base for Climate Change Content

    Science.gov (United States)

    Campbell, K. M.; Roehrig, G.; Dalbotten, D. M.; Bhattacharya, D.; Nam, Y.; Varma, K.; Wang, J.

    2011-12-01

    The need to deepen teachers' knowledge of the science of climate change is crucial under a global climate change (GCC) scenario. With effective collaboration between researchers, scientists and teachers, conceptual frameworks can be developed for creating climate change content for classroom implementation. Here, we discuss how teachers' conceptualized content knowledge about GCC changes over the course of a professional development program in which they are provided with place-based and culturally congruent content. The NASA-funded Global Climate Change Education (GCCE) project, "CYCLES: Teachers Discovering Climate Change from a Native Perspective", is a 3-year teacher professional development program designed to develop culturally sensitive approaches for GCCE in Native American communities using traditional knowledge, data and tools. As a part of this program, we assessed the progression in the content knowledge of participating teachers about GCC. Teachers were provided thematic GCC content focused on the elements of the medicine wheel (Earth, Fire, Air, Water, and Life) during a one-week summer workshop. Content was organized to emphasize explanations of the natural world as interconnected and cyclical processes and to align with the Climate and Earth Science Literacy Principles and NASA resources. Year 1 workshop content focused on the theme of "Earth", and teacher knowledge was progressively built up under the themes of 1) understanding of timescale, 2) understanding of local and global perspectives, 3) understanding of proxy data and 4) ecosystem connectivity. We used a phenomenographic approach for data analysis to qualitatively investigate the different ways in which the teachers experienced and conceptualized GCC. We analyzed categories of teachers' climate change knowledge using information generated by tools such as photo-elicitation interviews, concept maps and reflective journal perceptions. Preliminary findings from the pre

  17. A Simulation for Content-based and Utility-based Recommendation of Candidate Coalitions in Virtual Creativity Teams

    NARCIS (Netherlands)

    Sie, Rory; Bitter-Rijpkema, Marlies; Sloep, Peter

    2010-01-01

    Sie, R. L. L., Bitter-Rijpkema, M. E., Sloep, P. B. (2010). A Simulation for Content-based and Utility-based Recommendation of Candidate Coalitions in Virtual Creativity Teams. 1st Workshop on Recommender Systems for Technology Enhanced Learning (RecSysTEL 2010). September, 29 -30, 2010, Barcelona,

  18. Sugar and inorganic anions content in mineral and spring water-based beverages.

    Science.gov (United States)

    Bilek, Maciej; Matłok, Natalia; Kaniuczak, Janina; Gorzelany, Józef

    2014-01-01

    Carbonated and non-carbonated beverages manufactured from mineral and spring waters have been on the Polish market only a short time, and their production and sales are growing steadily. The products have become commonly known as flavoured waters. The aim of the work was to identify and assess the content of the carbohydrates used for sweetening mineral and spring water-based beverages and to estimate the concentrations of inorganic anions. Fifteen mineral and spring water-based beverages were analysed for fructose, glucose and sucrose content by high-performance liquid chromatography (HPLC) with ELSD detection, and for chloride, nitrate and sulphate content by ion chromatography. Chromatographic analysis confirmed the total sugar contents declared by the manufacturers. The carbohydrates identified included fructose, glucose and sucrose (added sugar). Chlorides and sulphates were found in all the analysed beverages, while nitrates were not detected in only one of the 15 examined beverages. Mass consumption of mineral and spring water-based beverages should be considered an important source of sugar, and their excessive consumption may be disadvantageous to human health. A consumer should be informed by the manufacturer about the daily dose of sugar in a portion of a drink, in per cent, and the easiest way to do this is to provide GDA marks on the label. Mineral and spring water-based beverages do not pose threats to consumer health in terms of their contents of inorganic ions: chlorides, nitrates and sulphates.

  19. Data Structures and Algorithms for Graph Based Remote Sensed Image Content Storage and Retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Grant, C W

    2004-06-24

    The Image Content Engine (ICE) project at Lawrence Livermore National Laboratory (LLNL) extracts, stores and allows queries of image content on multiple levels. ICE is designed for multiple application domains; the domain explored in this work is aerial and satellite surveillance imagery. The highest level of semantic information used in ICE is graph based. After objects are detected and classified, they are grouped based on their interrelations. The graph representing a locally related set of objects is called a 'graphlet'. Graphlets are interconnected into a larger graph which covers an entire set of images. Queries based on graph properties are notoriously difficult due to the inherent complexity of the graph isomorphism and sub-graph isomorphism problems. ICE exploits limitations in graph and query structure and uses a set of auxiliary data structures to quickly process a useful set of graph-based queries. These queries could not be processed using semantically lower-level (tile- and object-based) queries.
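    One way auxiliary structures can sidestep sub-graph isomorphism, sketched under stated assumptions: index each graphlet by the multiset of its node labels, so that cheap multiset containment prunes candidates before any expensive graph matching. The object labels, relation names, and helper `candidates` are invented for illustration and are not ICE's actual schema.

```python
from collections import Counter

# A graphlet: labelled objects (nodes) plus their relations (edges) found
# in one image region. Labels and relations here are purely illustrative.
graphlets = {
    "img1_g0": {"labels": ["truck", "truck", "building"],
                "edges": [(0, 1, "near"), (1, 2, "adjacent")]},
    "img2_g0": {"labels": ["aircraft", "runway"],
                "edges": [(0, 1, "on")]},
    "img3_g0": {"labels": ["truck", "building"],
                "edges": [(0, 1, "adjacent")]},
}

# Auxiliary index: graphlet id -> multiset of its object labels.
index = {gid: Counter(g["labels"]) for gid, g in graphlets.items()}

def candidates(query_labels):
    """Graphlets whose label multiset contains the query's labels.

    Multiset containment is linear-time, unlike sub-graph isomorphism,
    so it cheaply prunes what a full structural matcher must examine.
    """
    want = Counter(query_labels)
    return sorted(gid for gid, have in index.items()
                  if all(have[label] >= n for label, n in want.items()))

print(candidates(["truck", "building"]))
```

A full structural check (edge patterns, relation names) then runs only on the survivors, which is exactly the role of a pruning index.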

  20. A Content-Based Parallel Image Retrieval System on Cluster Architectures

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing; SHEN Jun-yi; PENG Qin-ke

    2004-01-01

    We propose a content-based parallel image retrieval system to achieve high responsiveness. Our system is developed on cluster architectures. It has several retrieval servers to supply the service of content-based image retrieval, and it adopts the Browser/Server (B/S) mode, so users can visit the system through web pages. It uses symmetrical color-spatial features (SCSF) to represent the content of an image. SCSF is effective and efficient for image matching because it is independent of image distortions such as rotation and flip, and it increases the matching accuracy. The SCSF are organized in an M-tree, which speeds up the searching procedure. Our experiments show that image matching is quick and efficient with the use of SCSF, and with the support of several retrieval servers the system can respond to many users at the same time.

  1. A hierarchical P2P overlay network for interest-based media contents lookup

    Science.gov (United States)

    Lee, HyunRyong; Kim, JongWon

    2006-10-01

    We propose a P2P (peer-to-peer) overlay architecture, called IGN (interest grouping network), for contents lookup in the DHC (digital home community), which aims to provide a formalized home-network-extended construction of the current P2P file-sharing community. The IGN utilizes Chord and the de Bruijn graph for its hierarchical overlay network construction. By combining the two schemes and inheriting their features, the IGN supports contents lookup efficiently. More specifically, by introducing metadata-based lookup keywords, the IGN offers detailed contents lookup that can reflect user interests. Moreover, the IGN tries to reflect the home network environments of the DHC by utilizing the HG (home gateway) of each home network as a participating node of the IGN. Through experiments and analysis, we show that the IGN is more efficient than Chord, a well-known DHT (distributed hash table)-based lookup protocol.
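    The de Bruijn half of such a construction can be illustrated with its successor rule: shifting a node id left by one base-k digit and appending any new digit yields the k outgoing links, so any node is reachable in at most m hops with constant degree. The helper `de_bruijn_successors` is a hypothetical name, not code from the paper.

```python
def de_bruijn_successors(node: int, k: int, m: int):
    """Successors of `node` in a base-k de Bruijn graph on k**m nodes.

    Each hop shifts in one digit of the target id, which is why de Bruijn
    overlays resolve lookups in at most m hops with out-degree k.
    """
    return [(node * k + d) % (k ** m) for d in range(k)]

# Binary de Bruijn graph on 8 nodes: node 3 (011) links to 6 (110) and 7 (111).
print(de_bruijn_successors(3, k=2, m=3))  # [6, 7]
```

Chord, by contrast, halves the remaining id distance per hop with log-size finger tables; combining the two, as the IGN does, trades table size against hop count at different levels of the hierarchy.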

  2. Mediating Language Learning: Teacher Interactions with ESL Students in a Content-Based Classroom.

    Science.gov (United States)

    Gibbons, Pauline

    2003-01-01

    Draws on constructs of "mediation" from sociocultural theory and "mode continuum" from systemic functional linguistics to investigate how student-teacher talk in a content-based classroom contributes to learners' language development. Shows how teachers mediate between students' linguistic levels in English and their…

  3. Content-Based Recreational Book Reading and Taiwanese Adolescents' Academic Achievement

    Science.gov (United States)

    Chen, Su-Yen; Chang, Hsing-Yu; Yang, Shih Ruey

    2017-01-01

    The linkage between reading for pleasure and language ability has been well established, but the relationship between content-based recreational reading and academic achievement in various subject areas has rarely been explored. To investigate whether reading literature, social studies, and science trade books for pleasure is related to students'…

  4. Approaches to inclusive English classrooms a teacher's handbook for content-based instruction

    CERN Document Server

    Mastruserio Reynolds, Kate

    2015-01-01

    This accessible book takes a critical approach towards content-based instruction methods, bridging the gap between theory and practice in order to allow teachers to make an informed decision about best practices for an inclusive classroom. It is a resource for both educators and ESL teachers working within an English learner inclusion environment.

  5. Comparison of color representations for content-based image retrieval in dermatology

    NARCIS (Netherlands)

    Bosman, Hedde H.W.J.; Petkov, Nicolai; Jonkman, Marcel F.

    2010-01-01

    Background/purpose: We compare the effectiveness of 10 different color representations in a content-based image retrieval task for dermatology. Methods: As features, we use the average colors of healthy and lesion skin in an image. The extracted features are used to retrieve similar images from a database.

  6. Automation of a Local Table of Contents Service Using dBase III.

    Science.gov (United States)

    Bellamy, Lois M.; Guyton, Joanne

    1987-01-01

    Automation of a table of contents service at the Methodist Hospital School of Nursing Library using dBase III facilitates matching patrons with journals. The program is also used for journal check-in and mailing labels. Future applications may include production of a journal holdings list, statistics, and reporting. (21 references) (MES)

  7. A picture is worth a thousand words : content-based image retrieval techniques

    NARCIS (Netherlands)

    Thomée, Bart

    2010-01-01

    In my dissertation I investigate techniques for improving the state of the art in content-based image retrieval. To place my work into context, I highlight the current trends and challenges in my field by analyzing over 200 recent articles. Next, I propose a novel paradigm called ‘artificial imagination’.

  8. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    2011-01-01

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn favorable color features.

  9. Negotiating Curricular Transitions: Foreign Language Teachers' Learning Experience with Content-Based Instruction

    Science.gov (United States)

    Cammarata, Laurent

    2009-01-01

    Content-based instruction (CBI) has been touted as an effective curricular approach in a wide range of educational contexts, including immersion and English as a second language. Yet this approach to curriculum design is rarely implemented in conventional K-16 foreign language (FL) programs in the United States today. The phenomenological study…

  10. New formulations of sunflower based bio-lubricants with high oleic acid content – VOSOLUB project

    Directory of Open Access Journals (Sweden)

    Leao J. D.

    2016-09-01

    The VOSOLUB project is a demonstration project supported by the Executive Agency for Small and Medium-sized Enterprises (EASME) that aims at testing, under real operating conditions, new formulations of sunflower-based biolubricants with high oleic acid content. These biolubricant formulations (including hydraulic fluids, greases, and neat oil metal-working fluids) will be tested at three European demonstration sites. Their technical performance will be evaluated and compared to that of corresponding mineral lubricants. In order to cover the demand for the sunflower base oil, a European SME network will be established to ensure the supply of the base at a competitive market price. The results presented concern the base oil quality, confirmed to be in accordance with the required specification, in particular on free fatty acid content, phosphorus content, Rancimat induction time and oleic acid content (ITERG). The oil characteristics specific to lubricant application, analyzed by BfB Oil Research under normalized methods, match lubricant specification requirements such as viscosity, cold and hot properties, surface properties, anti-oxidant properties and thermal stability, anti-wear and EP properties, and anti-corrosion properties. Performance of the new biolubricants has been assessed by the formulators and TEKNIKER. First results on the use of the new lubricants under real conditions for rail grease (produced by RS CLARE and tested with Sheffield Supertram), hydraulic oil (produced by BRUGAROLAS) and cutting oil (produced by MOTUL TECH and tested with innovative machining and turning) are described.

  11. Task-Based Learning and Content and Language Integrated Learning Materials Design: Process and Product

    Science.gov (United States)

    Moore, Pat; Lorenzo, Francisco

    2015-01-01

    Content and language integrated learning (CLIL) represents an increasingly popular approach to bilingual education in Europe. In this article, we describe and discuss a project which, in response to teachers' pleas for materials, led to the production of a significant bank of task-based primary and secondary CLIL units for three L2s (English,…

  12. Pivot Points: Direct Measures of the Content and Process of Community-Based Learning

    Science.gov (United States)

    Wickersham, Carol; Westerberg, Charles; Jones, Karen; Cress, Margaret

    2016-01-01

    This research is an initial investigation into the ways community-based learning increases the cognitive skills central to the exercise of the sociological imagination. In addition to identifying a means to reveal that learning had occurred, we looked for evidence that the students were mastering sociological content, especially the concepts and…

  13. Design and realisation of an efficient content based music playlist generation system

    NARCIS (Netherlands)

    Balkema, Jan Wietse

    2009-01-01

    This thesis presents research in the field of content based music playlist generation. The focus is on speeding up music similarity calculations at playlist generation time. Two improvements on the current state of technology are presented. Furthermore, a study on user preferences and requirements o

  14. An Overview of Data Models and Query Languages for Content-based Video Retrieval

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.

    2000-01-01

    As a large amount of video data becomes publicly available, the need to model and query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem addressing areas such as video modelling, indexing, querying, etc

  15. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2004-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  16. Indexing, learning and content-based retrieval for special purpose image databases

    NARCIS (Netherlands)

    Huiskes, M.J.; Pauwels, E.J.

    2005-01-01

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing of image databases becomes increasingly urgent. We provide an overview of the current state-of-the art by t

  17. Density-independent algorithm for sensing moisture content of sawdust based on reflection measurements

    Science.gov (United States)

    A density-independent algorithm for moisture content determination in sawdust, based on a one-port reflection measurement technique, is proposed for the first time. Performance of this algorithm is demonstrated through measurement of the dielectric properties of sawdust with an open-ended half-mode s...

  18. Caravaggio: A Design for an Interdisciplinary Content-Based EAP/ESP Unit.

    Science.gov (United States)

    Kirschner, Michal; Wexler, Carol

    2002-01-01

    Presents a detailed design for a content-based unit, the focus of which is the film "Caravaggio." The unit also includes readings in art history and film and is part of a specialized English for academic purposes/English for special purposes reading comprehension course for first-year students majoring in art history and in a…

  19. The Impact of Content-Based Network Technologies on Perceptions of Nutrition Literacy

    Science.gov (United States)

    Brewer, Hannah; Church, E. Mitchell; Brewer, Steven L.

    2016-01-01

    Background: Consumers are exposed to obesogenic environments on a regular basis. Building nutrition literacy is critical for sustaining healthy dietary habits for a lifetime and reducing the prevalence of chronic disease. Purpose: There is a need to investigate the impact of content-based network (CBN) technologies on perceptions of nutrition…

  20. Technology-Based Content through Virtual and Physical Modeling: A National Research Study

    Science.gov (United States)

    Ernst, Jeremy V.; Clark, Aaron C.

    2009-01-01

    Visualization is becoming more prevalent as an application in science, engineering, and technology related professions. The analysis of static and dynamic graphical visualization provides data solutions and understandings that go beyond traditional forms of communication. The study of technology-based content and the application of conceptual…

  1. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  2. Content-Based Language Instruction: A New Window of Opportunity in Geography Education

    Science.gov (United States)

    Hardwick, Susan W.; Davis, Robert L.

    2009-01-01

    The use of content-based language instruction (CBI) offers an innovative and effective method for teaching core geographic concepts and skills while students study a second language. This article focuses on a collaborative initiative developed and tested by university and high school level geography and second-language educators. The goal of the…

  3. Experienced Teachers' Pedagogical Content Knowledge of Teaching Acid-Base Chemistry

    Science.gov (United States)

    Drechsler, Michal; Van Driel, Jan

    2008-01-01

    We investigated the pedagogical content knowledge (PCK) of nine experienced chemistry teachers. The teachers took part in a teacher training course on students' difficulties and the use of models in teaching acid-base chemistry, electrochemistry, and redox reactions. Two years after the course, the teachers were interviewed about their PCK of (1)…

  4. Learning effective color features for content based image retrieval in dermatology

    NARCIS (Netherlands)

    Bunte, Kerstin; Biehl, Michael; Jonkman, Marcel F.; Petkov, Nicolai

    We investigate the extraction of effective color features for a content-based image retrieval (CBIR) application in dermatology. Effectiveness is measured by the rate of correct retrieval of images from four color classes of skin lesions. We employ and compare two different methods to learn

  5. Online Problem-Based Learning in Postgraduate Medical Education--Content Analysis of Reflection Comments

    Science.gov (United States)

    Gonzalez, Maria L.; Salmoni, Alan J.

    2008-01-01

    We developed the Med-e-Conference, an online tool to teach clinical skills to medical students, which integrated problem-based learning with collaborative group tasks. The final task asked students to consider what they had done (reflection). These comments were analysed using content analysis, and 10 themes were elicited. The number of agreements…

  6. A Model-Based Method for Content Validation of Automatically Generated Test Items

    Science.gov (United States)

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  7. Content-Based Internet-Assisted ESP Teaching to Ukrainian University Students Majoring in Psychology

    Science.gov (United States)

    Tarnopolsky, Oleg

    2009-01-01

    This article discusses the issues of teaching ESP to Ukrainian tertiary students majoring in psychology. The suggested approach is based on teaching English through the content matter of special subjects included in the program of training practical psychologists. The example of an ESP textbook for psychologists is used for demonstrating the…

  10. Content-Based Image Retrieval Benchmarking: Utilizing color categories and color distributions

    NARCIS (Netherlands)

    van den Broek, Egon; Kisters, Peter M.F.; Vuurpijl, Louis G.

    From a human centered perspective three ingredients for Content-Based Image Retrieval (CBIR) were developed. First, with their existence confirmed by experimental data, 11 color categories were utilized for CBIR and used as input for a new color space segmentation technique. The complete HSI color

  12. Content-Based Multimedia Retrieval in the Presence of Unknown User Preferences

    DEFF Research Database (Denmark)

    Beecks, Christian; Assent, Ira; Seidl, Thomas

    2011-01-01

    Content-based multimedia retrieval requires an appropriate similarity model which reflects user preferences. When these preferences are unknown or when the structure of the data collection is unclear, retrieving the most preferable objects the user has in mind is challenging, as the notion of sim...

  13. A Novel Technique for Shape Feature Extraction Using Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Dhanoa Jaspreet Singh

    2016-01-01

    With the advent of technology and multimedia information, digital images are increasing very quickly. Various techniques are being developed to retrieve/search digital information or data contained in the image. The traditional Text Based Image Retrieval system is not sufficient, since it is time consuming as it requires manual image annotation; also, the annotation differs between different people. An alternative to this is the Content Based Image Retrieval (CBIR) system. It retrieves/searches for images using their contents rather than text, keywords etc. A lot of exploration has been carried out in the area of Content Based Image Retrieval (CBIR) with various feature extraction techniques. Shape is a significant image feature as it reflects human perception. Moreover, shape is quite simple for the user to use to define objects in an image as compared to other features such as color, texture etc. Over and above, if applied alone, no descriptor will give fruitful results. Further, by combining it with an improved classifier, one can use the positive features of both the descriptor and the classifier. So, an attempt will be made to establish an algorithm for accurate feature (shape) extraction in Content Based Image Retrieval (CBIR). The main objectives of this project are: (a) to propose an algorithm for shape feature extraction using CBIR, (b) to evaluate the performance of the proposed algorithm and (c) to compare the proposed algorithm with state of the art techniques.
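    The shape-descriptor retrieval idea above can be sketched with rotation- and translation-invariant moment features. This is an illustrative example, not the paper's algorithm: the hypothetical helpers `hu_moments` and `retrieve` compute the first two Hu-style invariant moments of binary shapes and rank a tiny database by descriptor distance.

```python
import math

def hu_moments(img):
    """First two Hu invariant moments of a binary image (list of rows of 0/1)."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v; m10 += x * v; m01 += y * v
    cx, cy = m10 / m00, m01 / m00          # centroid
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v
            mu02 += (y - cy) ** 2 * v
            mu11 += (x - cx) * (y - cy) * v
    # Normalized central moments eta_pq = mu_pq / m00^(1 + (p+q)/2), p+q = 2.
    n20, n02, n11 = mu20 / m00 ** 2, mu02 / m00 ** 2, mu11 / m00 ** 2
    return (n20 + n02, (n20 - n02) ** 2 + 4 * n11 ** 2)

def retrieve(query, database):
    """Rank (name, image) pairs by Euclidean distance between Hu descriptors."""
    q = hu_moments(query)
    return sorted(database, key=lambda item: math.dist(q, hu_moments(item[1])))

square = [[1] * 4 for _ in range(4)]
bar = [[1] * 8 for _ in range(2)]
big_square = [[1] * 6 for _ in range(6)]
db = [("bar", bar), ("big_square", big_square)]
print(retrieve(square, db)[0][0])  # the square ranks closer to the bigger square than to the bar
```

    The descriptor is invariant to translation and (approximately, on a discrete grid) to scale, which is why the differently sized squares match despite the size change.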

  14. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    Directory of Open Access Journals (Sweden)

    Wei Liu

    One important method to obtain the continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the data produced by the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but it also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
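    The trend-plus-residual design of EL-SP can be illustrated with a toy stand-in: fit a simple trend, then interpolate the residuals with a bagged ensemble of inverse-distance-weighted k-NN predictors. This is a minimal sketch under assumed simplifications (a global-mean trend, a synthetic potassium field, no real covariates); the function names are hypothetical, not from the paper.

```python
import math
import random

def knn_predict(train, x, k=3):
    """Inverse-distance-weighted average of the k nearest training samples."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    weights = [1.0 / (math.dist(p[0], x) + 1e-9) for p in nearest]
    return sum(w * p[1] for w, p in zip(weights, nearest)) / sum(weights)

def ensemble_interpolate(train, x, n_models=25, k=3, seed=0):
    """Bagging: average k-NN predictions over bootstrap resamples of the point set."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]
        preds.append(knn_predict(boot, x, k))
    return sum(preds) / len(preds)

# Synthetic potassium field: a smooth regional trend plus a local sine residual.
def true_field(x, y):
    return 20.0 + 0.1 * x + 0.05 * y + 3.0 * math.sin(x / 10.0)

rng = random.Random(42)
pts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(200)]
obs = [true_field(x, y) for x, y in pts]
trend = sum(obs) / len(obs)                          # crude stand-in for the trend model
train = [((x, y), o - trend) for (x, y), o in zip(pts, obs)]
est = trend + ensemble_interpolate(train, (50.0, 50.0))
```

    In the paper the residual model is driven by soil, geology, land use and slope covariates; here the ensemble only sees coordinates, which is the simplest form the trend-plus-residual decomposition can take.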

  15. Ensemble learning for spatial interpolation of soil potassium content based on environmental information.

    Science.gov (United States)

    Liu, Wei; Du, Peijun; Wang, Dongchen

    2015-01-01

    One important method to obtain the continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil types, geology types, land use types, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the data produced by the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but it also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.

  16. Quantitative Prediction of Coalbed Gas Content Based on Seismic Multiple-Attribute Analyses

    Directory of Open Access Journals (Sweden)

    Renfang Pan

    2015-09-01

    Accurate prediction of gas planar distribution is crucial to the selection and development of new CBM exploration areas. Based on seismic attributes, well logging and testing data, we found that seismic absorption attenuation, after eliminating the effects of burial depth, shows an evident correlation with CBM gas content; positive structure curvature has a negative correlation with gas content; and density has a negative correlation with gas content. It is feasible to use the hydrocarbon index (P*G) and pseudo-Poisson ratio attributes for detection of gas enrichment zones. Based on seismic multiple-attribute analyses, a multiple linear regression equation was established between the seismic attributes and gas content at the drilling wells. Application of this equation to the seismic attributes at locations other than the drilling wells yielded a quantitative prediction of planar gas distribution. Prediction calculations were performed for two different models, one using pre-stack inversion and the other disregarding pre-stack inversion. A comparison of the results indicates that both models predicted a similar trend for gas content distribution, except that the model using pre-stack inversion yielded a prediction result with considerably higher precision than the other model.
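    The calibration step described, a multiple linear regression from seismic attributes to gas content at the wells, can be sketched as follows. The attribute values and coefficients below are synthetic stand-ins, not data from the paper, and `fit_linear` is a hypothetical helper that solves the normal equations.

```python
def fit_linear(X, y):
    """Least-squares coefficients b solving (A^T A) b = A^T y, with A = [1 | X]."""
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    n = len(A[0])
    M = [[sum(a[i] * a[j] for a in A) for j in range(n)] for i in range(n)]
    v = [sum(a[i] * yk for a, yk in zip(A, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    b = [0.0] * n
    for i in reversed(range(n)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, n))) / M[i][i]
    return b

# Synthetic wells: attributes (absorption attenuation, structure curvature),
# with gas content generated as gas = 2 + 3*attenuation - 4*curvature.
attrs = [(2.1, 0.30), (3.5, 0.10), (1.2, 0.50), (4.0, 0.05), (2.8, 0.20)]
gas = [2 + 3 * a - 4 * c for a, c in attrs]
b = fit_linear(attrs, gas)
pred = b[0] + b[1] * 3.0 + b[2] * 0.15   # apply the equation away from the wells
print([round(x, 6) for x in b])  # recovers [2.0, 3.0, -4.0] on this noise-free data
```

    Applying the fitted equation to attribute maps away from the wells is exactly the "quantitative prediction of planar gas distribution" step the abstract describes; with real data the fit would of course carry residual error.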

  17. Identifying content-based and relational techniques to change behaviour in motivational interviewing.

    Science.gov (United States)

    Hardcastle, Sarah J; Fortier, Michelle; Blake, Nicola; Hagger, Martin S

    2017-03-01

    Motivational interviewing (MI) is a complex intervention comprising multiple techniques aimed at changing health-related motivation and behaviour. However, MI techniques have not been systematically isolated and classified. This study aimed to identify the techniques unique to MI, classify them as content-related or relational, and evaluate the extent to which they overlap with techniques from the behaviour change technique taxonomy version 1 [BCTTv1; Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., … Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81-95]. Behaviour change experts (n = 3) content-analysed MI techniques based on Miller and Rollnick's [(2013). Motivational interviewing: Preparing people for change (3rd ed.). New York: Guildford Press] conceptualisation. Each technique was then coded for independence and uniqueness by independent experts (n = 10). The experts also compared each MI technique to those from the BCTTv1. Experts identified 38 distinct MI techniques with high agreement on clarity, uniqueness, preciseness, and distinctiveness ratings. Of the identified techniques, 16 were classified as relational techniques. The remaining 22 techniques were classified as content based. Sixteen of the MI techniques were identified as having substantial overlap with techniques from the BCTTv1. The isolation and classification of MI techniques will provide researchers with the necessary tools to clearly specify MI interventions and test the main and interactive effects of the techniques on health behaviour. The distinction between relational and content-based techniques within MI is also an important advance, recognising that changes in motivation and behaviour in MI are a function of both intervention content and the interpersonal style

  18. Content-Based Video Quality Prediction for MPEG4 Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2009-08-01

    There are many parameters that affect video quality, but their combined effect is not well identified and understood when video is transmitted over mobile/wireless networks. In addition, video content has an impact on video quality under the same network conditions. The main aim of this paper is the prediction of video quality combining the application and network level parameters for all content types. Firstly, video sequences are classified into groups representing different content types using cluster analysis. The classification of contents is based on temporal (movement) and spatial (edges, brightness) feature extraction. Second, to study and analyze the behaviour of video quality for wide-range variations of a set of selected parameters. Finally, to develop two learning models based on (1) ANFIS, to estimate the visual perceptual quality in terms of the Mean Opinion Score (MOS) and decodable frame rate (Q value), and (2) regression modeling, to estimate the visual perceptual quality in terms of the MOS. We trained three ANFIS-based ANNs and regression-based models for the three distinct content types using a combination of network and application level parameters and tested the two models using an unseen dataset. We confirmed that video quality is more sensitive to network level than to application level parameters. Preliminary results show that good prediction accuracy was obtained from both models. However, the regression-based model performed better in terms of the correlation coefficient and the root mean squared error. The work should help in the development of a reference-free video prediction model and Quality of Service (QoS) control methods for video over wireless/mobile networks.
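    The first step described, clustering sequences into content types from temporal and spatial features, can be sketched with a plain two-cluster k-means. The feature values below are invented for illustration, and `two_means` is a hypothetical helper, not the paper's clustering procedure.

```python
import math
import random

def two_means(points, iters=50):
    """k-means for k=2, deterministically seeded with the extreme points."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            i = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep old center if empty).
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return sorted(centers)

# Invented per-sequence features: (temporal movement activity, spatial edge density).
rng = random.Random(7)
low = [(rng.uniform(0.0, 0.2), rng.uniform(0.1, 0.3)) for _ in range(20)]   # e.g. slight-movement content
high = [(rng.uniform(0.7, 1.0), rng.uniform(0.5, 0.9)) for _ in range(20)]  # e.g. rapid-movement content
c_low, c_high = two_means(low + high)
```

    The resulting centers separate the two invented content groups along the movement axis; with real sequences each cluster would then get its own trained quality model, as the abstract describes.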

  19. Petalz: Search-based Procedural Content Generation for the Casual Gamer

    DEFF Research Database (Denmark)

    Risi, S.; Lehman, J.; D'Ambrosio, D.B

    2015-01-01

    The impact of game content on the player experience is potentially more critical in casual games than in competitive games because of the diminished role of strategic or tactical diversions. Interestingly, until now procedural content generation (PCG) has nevertheless been investigated almost exclusively in the context of competitive, skills-based gaming. This paper therefore opens a new direction for PCG by placing it at the center of an entirely casual flower-breeding game platform called Petalz. That way, the behavior of players and their reactions to different game mechanics in a casual...

  20. Content subscribing mechanism in P2P streaming based on gamma distribution prediction

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    P2P systems are categorized into tree-based and mesh-based systems according to their topologies. Mesh-based systems are considered more suitable for large-scale Internet applications, but require optimization on the latency issue. This paper proposes a content subscribing mechanism (CSM) to eliminate unnecessary time delays during data relaying. A node can send content data to its neighbors as soon as it receives the data segment. No additional time is taken during the interactive stages prior to data segment transmission of streaming content. CSM consists of three steps. First, every node records its historical segment latencies and adopts the gamma distribution, which has powerful expressive ability, to model the latency statistics. Second, a node predicts the subscribing success ratio of every neighbor by comparing the gamma distribution parameters of the node and its neighbors before selecting a neighbor node from which to subscribe to a data segment. The above steps do not increase latency, as they are executed before the data segments are ready at the neighbor nodes. Finally, the node that was subscribed to sends the subscribed data segment to the subscriber immediately when it has the data segment. Experiments show that CSM significantly reduces content data transmission latency.
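    The neighbor-selection idea can be sketched as follows: fit a gamma distribution to each neighbor's historical segment latencies and prefer the neighbor with the higher probability of delivering within a deadline. The method-of-moments estimator, the 40 ms deadline, and the latency data are all illustrative assumptions; the paper does not specify them.

```python
import math
import random

def fit_gamma(samples):
    """Method-of-moments estimates of the gamma shape k and scale theta."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean ** 2 / var, var / mean

def p_within(k, theta, deadline, steps=10000):
    """P(latency <= deadline) for Gamma(k, theta), via midpoint integration of the pdf."""
    norm = math.gamma(k) * theta ** k
    dx = deadline / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x ** (k - 1) * math.exp(-x / theta) * dx
    return total / norm

rng = random.Random(1)
fast = [rng.gammavariate(2.0, 10.0) for _ in range(2000)]   # neighbor A latencies (ms)
slow = [rng.gammavariate(2.0, 25.0) for _ in range(2000)]   # neighbor B latencies (ms)
pa = p_within(*fit_gamma(fast), deadline=40.0)
pb = p_within(*fit_gamma(slow), deadline=40.0)
best = "A" if pa > pb else "B"
print(best)  # A: higher estimated probability of delivery within 40 ms
```

    Because the fit and the comparison use only already-recorded history, they can run before any segment is ready at a neighbor, which matches the abstract's claim that the prediction steps add no latency.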

  1. Stiffness of a granular base under optimum and saturated water contents

    Directory of Open Access Journals (Sweden)

    Fausto Andrés Molina Gómez

    2016-07-01

    Objective: This research work addressed the comparison of the stiffness of a granular base under optimum water content and total saturation conditions. Methodology: The methodology focused on the development of an experimental program and the computation of a function which permits assessment of the elastic moduli of the material. A triaxial cell equipped with local LVDT transducers, capable of managing different stress paths, was used to measure the small-strain stiffness of a granular base under two different moisture conditions. The material was compacted at optimum water content and subjected to a series of loading-unloading cycles under isotropic conditions. In addition, identical specimens were prepared to be saturated, and the experimental procedure was repeated to obtain the moduli under these new circumstances. The moduli were assessed by a hyperbolic model, and their relationship with the confining pressure was computed. Results: The results indicated that the numerical model fit the experimental results. In addition, it was found that the elastic moduli decrease by 3% to 8% under total saturation compared with the optimum water content condition. Conclusions: The small-strain stiffness of the granular base depends on the water content, and the moisture can affect the deformation in pavement structures.

  2. Relationship between oxygen content and seebeck coefficient of Bi-based superconducting oxides

    Science.gov (United States)

    Miura, N.; Sakata, F.; Shimizu, Y.; Deshimaru, Y.; Yamazoe, N.

    1994-12-01

    The correlations among Seebeck coefficient, oxygen content and superconducting properties were examined for four Bi-based oxides (2223 and 2212 phases). Each oxide underwent reversible sorption and desorption of small amounts of oxygen (ca. 3×10⁻⁵ mol/g) in the temperature range 100-600 °C. In good agreement with such behavior, the Seebeck coefficient (Q) of each oxide was found to change reversibly with changing temperature, suggesting that Q is a reversible function of oxygen content. It was further found that the highest Tc was reached at the oxygen content at which Q happened to be around zero at 100 °C for each oxide.

  3. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen

    2012-01-01

    system (the closest to the body temperature) we measured, using the modified MR sequences, the T1 map intensity signal on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer...... was customized and programmed. Finally, we validated the method after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data was analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique allows to take out all the water that a tissue...... contains) and we measured the water they contained. Results We could reproduce twice the 37 Celsius degree system and could perform the measurements in a similar way. We found that the MR T1 map based water content sequences can provide information that, after being analyzed with a special software, can...

  4. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    map based water content sequences can provide information that, after being analyzed using a T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained with the freeze-dry procedure...... cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis a T1 intensity signal map software analyzer was used. Finally, the method was validated after measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained...... data was analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique allows to take out all the water that a tissue contains) and we measured the water they contained. Results:The 37 Celsius degree system and the analysis can be reproduced in a similar way. MR T1...

  5. Automatic Ferrite Content Measurement based on Image Analysis and Pattern Classification

    Directory of Open Access Journals (Sweden)

    Hafiz Muhammad Tanveer

    2015-05-01

    The existing manual point counting technique for ferrite content measurement is a difficult, time-consuming method with limited accuracy, due to limited human perception and the error induced by points on the boundaries of the grid spacing. In this paper, we present a novel algorithm, based on image analysis and pattern classification, to evaluate the volume fraction of ferrite in a microstructure containing ferrite and austenite. The prime focus of the proposed algorithm is to solve the problem of ferrite content measurement using an automatic binary classification approach. Classification of image data into two distinct classes, using an optimum threshold finding method, is the key idea behind the new algorithm. Automation of the process to measure the ferrite content and to speed up the specimen testing procedure is the main feature of the newly developed algorithm. The obtained results reflect an improved performance index achieved by reducing error sources, validated through comparison with the well-known method of Otsu.
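    The "optimum threshold finding" step can be illustrated with Otsu's classic between-class-variance criterion on a gray-level histogram, followed by counting the fraction of pixels on the darker side as the ferrite-like phase. This is a generic sketch, not the authors' implementation; the synthetic bimodal image and the assumption that ferrite appears dark are illustrative.

```python
import random

def otsu_threshold(hist):
    """Gray level that maximizes the between-class variance (Otsu's criterion)."""
    total = sum(hist)
    intensity_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(len(hist)):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (intensity_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # unnormalized, fine for argmax
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def ferrite_fraction(pixels, levels=256):
    """Volume fraction of the dark phase in a flat list of gray values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    t = otsu_threshold(hist)
    return sum(1 for p in pixels if p <= t) / len(pixels)

# Synthetic micrograph: 30% dark (ferrite-like) pixels, 70% bright (austenite-like).
random.seed(0)
pixels = [random.gauss(60, 10) for _ in range(300)] + [random.gauss(180, 10) for _ in range(700)]
pixels = [min(255, max(0, int(p))) for p in pixels]
print(round(ferrite_fraction(pixels), 2))  # → 0.3, the dark fraction of the synthetic image
```

    On a well-separated bimodal histogram like this one the threshold lands in the valley between the modes, which is exactly the binary classification the abstract relies on.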

  6. MR-based Water Content Estimation in Cartilage: Design and Validation of a Method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sofie; Ringgaard, Steffen;

    2012-01-01

    system (the closest to the body temperature) we measured, using the modified MR sequences, the T1 map intensity signal on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer...... was customized and programmed. Finally, we validated the method by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained data was analyzed and the water content calculated. Then, the same samples were freeze-dried (this technique removes all the water that a tissue...... contains) and we measured the water they contained. Results: We could reproduce the 37-degree-Celsius system twice and could perform the measurements in a similar way. We found that the MR T1 map based water content sequences can provide information that, after being analyzed with special software, can...

  7. Graph cut and image intensity-based splitting improves nuclei segmentation in high-content screening

    Science.gov (United States)

    Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Yli-Harja, Olli; Dehio, Christoph

    2013-02-01

    Quantification of phenotypes in high-content screening experiments depends on the accuracy of single cell analysis. In such analysis workflows, cell nuclei segmentation is typically the first step and is followed by cell body segmentation, feature extraction, and subsequent data analysis. Therefore, it is of utmost importance that the first steps of high-content analysis are done accurately in order to guarantee correctness of the final analysis results. In this paper, we present a novel cell nuclei image segmentation framework which exploits the robustness of graph cut to obtain an initial segmentation, which an image intensity-based clump-splitting method then refines into an accurate overall segmentation. By using quantitative benchmarks and qualitative comparison with real images from high-content screening experiments with complicated multinucleate cells, we show that our method outperforms other state-of-the-art nuclei segmentation methods. Moreover, we provide a modular and easy-to-use implementation of the method for a widely used platform.
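    The graph-cut and clump-splitting stages of record 7 are too involved for a short sketch, but the front end of any such pipeline, thresholding followed by connected-component labeling to count candidate nuclei, can be illustrated compactly. The toy mask and the choice of 4-connectivity below are assumptions for illustration, not the paper's method:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary 2-D mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and labels[sy][sx] == 0:
                current += 1                    # start a new component
                q = deque([(sy, sx)])
                labels[sy][sx] = current
                while q:                        # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

# Toy binary image with two separated "nuclei".
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
labels, n_nuclei = label_components(mask)
```

    In a real pipeline, touching nuclei would remain fused after this step, which is exactly the problem the intensity-based clump splitting in the record addresses.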

  8. Implementation of a Text-Based Content Intervention in Secondary Social Studies Classes.

    Science.gov (United States)

    Wanzek, Jeanne; Vaughn, Sharon

    2016-12-01

    We describe teacher fidelity (adherence to the components of the treatment as specified by the research team) based on a series of studies of a multicomponent intervention, Promoting Acceleration of Comprehension and Content Through Text (PACT), with middle and high school social studies teachers and their students. Findings reveal that even with highly specified materials and implementing practices that are aligned with effective reading comprehension and content instruction, teachers' fidelity was consistently low for some components and high for others. Teachers demonstrated consistently high implementation fidelity and quality for the instructional components of building background knowledge (comprehension canopy) and teaching key content vocabulary (essential words), whereas we recorded consistently lower fidelity and quality of implementation for the instructional components of critical reading and knowledge application.

  9. Social media use by community-based organizations conducting health promotion: a content analysis.

    Science.gov (United States)

    Ramanadhan, Shoba; Mendez, Samuel R; Rao, Megan; Viswanath, Kasisomayajula

    2013-12-05

    Community-based organizations (CBOs) are critical channels for the delivery of health promotion programs. Much of their influence comes from the relationships they have with community members and other key stakeholders and they may be able to harness the power of social media tools to develop and maintain these relationships. There are limited data describing if and how CBOs are using social media. This study assesses the extent to which CBOs engaged in health promotion use popular social media channels, the types of content typically shared, and the extent to which the interactive aspects of social media tools are utilized. We assessed the social media presence and patterns of usage of CBOs engaged in health promotion in Boston, Lawrence, and Worcester, Massachusetts. We coded content on three popular channels: Facebook, Twitter, and YouTube. We used content analysis techniques to quantitatively summarize posts, tweets, and videos on these channels, respectively. For each organization, we coded all content put forth by the CBO on the three channels in a 30-day window. Two coders were trained and conducted the coding. Data were collected between November 2011 and January 2012. A total of 166 organizations were included in our census. We found that 42% of organizations used at least one of the channels of interest. Across the three channels, organization promotion was the most common theme for content (66% of posts, 63% of tweets, and 93% of videos included this content). Most organizations updated Facebook and Twitter content at rates close to recommended frequencies. We found limited interaction/engagement with audience members. Much of the use of social media tools appeared to be uni-directional, a flow of information from the organization to the audience. By better leveraging opportunities for interaction and user engagement, these organizations can reap greater benefits from the non-trivial investment required to use social media well. Future research should

  10. The method of soft sensor modeling for fly ash carbon content based on ARMA deviation prediction

    Science.gov (United States)

    Yang, Xiu; Yang, Wei

    2017-03-01

    The carbon content of fly ash is an important parameter in the boiler combustion process. Aiming at the existing problems of fly ash detection, a soft-sensing model was established based on PSO-SVM, and on this basis a deviation-correction method based on an ARMA model was put forward; the soft-sensing model was calibrated using values obtained by off-line analysis at intervals. A 600 MW supercritical sliding-pressure boiler was taken as the research object, the auxiliary variables were selected, and the data collected by the DCS were used in simulations. The results show that the PSO-SVM prediction model for the carbon content of fly ash fits the data well, and introducing the correction module helps to improve the prediction accuracy.
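    As a minimal sketch of the deviation-correction idea in record 10 (not the authors' PSO-SVM model), the code below fits a first-order autoregressive coefficient to the soft-sensor residuals (model output minus off-line lab analysis) and adds the predicted next deviation to the model output. The residual values and the AR(1) simplification of the ARMA model are assumptions for illustration:

```python
def fit_ar1(residuals):
    """Least-squares AR(1) coefficient for a zero-mean residual series."""
    num = sum(residuals[t] * residuals[t - 1] for t in range(1, len(residuals)))
    den = sum(r * r for r in residuals[:-1])
    return num / den if den else 0.0

def corrected_prediction(model_output, last_residual, phi):
    """Add the one-step-ahead predicted deviation to the soft-sensor output."""
    return model_output + phi * last_residual

# Hypothetical residuals showing the persistent bias an ARMA term can capture.
residuals = [0.40, 0.32, 0.26, 0.20, 0.17, 0.13]
phi = fit_ar1(residuals)
corrected = corrected_prediction(2.50, residuals[-1], phi)
```

    Because the residuals are positively autocorrelated, the correction shifts the next prediction toward the lab values instead of letting the bias accumulate.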

  11. Video segmentation and classification for content-based storage and retrieval using motion vectors

    Science.gov (United States)

    Fernando, W. A. C.; Canagarajah, Cedric N.; Bull, David R.

    1998-12-01

    Video parsing is an important step in content-based indexing techniques, where the input video is decomposed into segments with uniform content. In video parsing, detection of scene changes is one of the approaches widely used for extracting key frames from the video sequence. In this paper, an algorithm based on motion vectors is proposed to detect sudden scene changes and gradual scene changes (camera movements such as panning, tilting and zooming). Unlike some of the existing schemes, the proposed scheme is capable of detecting both sudden and gradual changes in uncompressed as well as compressed domain video. It is shown that the resultant motion vector can be used to identify and classify gradual changes due to camera movements. Results show that the algorithm performed as well as the histogram-based schemes with uncompressed video. The performance of the algorithm was also investigated with H.263 compressed video. The detection and classification of both sudden and gradual scene changes was successfully demonstrated.
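    The distinction record 11 draws, between sudden cuts and gradual camera movements, can be sketched from block motion vectors alone: a pan produces large but coherent vectors, while a cut produces large, incoherent ones. The thresholds and toy vectors below are invented for illustration, not the paper's parameters:

```python
import math

def classify_frame(motion_vectors, mag_thresh=4.0, coherence_thresh=0.8):
    """Classify a frame from its block motion vectors.

    Returns 'static', 'gradual' (coherent camera motion such as a pan),
    or 'sudden' (large, incoherent motion typical of a shot cut).
    """
    mags = [math.hypot(dx, dy) for dx, dy in motion_vectors]
    mean_mag = sum(mags) / len(mags)
    if mean_mag < 1.0:
        return "static"
    # Resultant vector length over total length: 1.0 means all vectors agree.
    rx = sum(dx for dx, _ in motion_vectors)
    ry = sum(dy for _, dy in motion_vectors)
    coherence = math.hypot(rx, ry) / sum(mags)
    if coherence >= coherence_thresh:
        return "gradual"
    if mean_mag >= mag_thresh:
        return "sudden"
    return "static"

pan = [(5, 0)] * 8                           # all blocks move right together
cut = [(6, 1), (-5, 4), (2, -6), (-4, -5)]   # large but incoherent motion
```

    The "resultant motion vector" mentioned in the abstract plays the role of the coherence numerator here.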

  12. An optimized fast image resizing method based on content-aware

    Science.gov (United States)

    Lu, Yan; Gao, Kun; Wang, Kewang; Xu, Tingfa

    2014-11-01

    In traditional image resizing based on interpolation, a prominent object may be distorted; image resizing based on content-awareness has therefore become a research focus in image processing, because it takes the prominent content and structural features of images into account. In this paper, we present an optimized fast image resizing method based on content-awareness. Firstly, an appropriate energy function model is constructed on the basis of image meshes, and multiple energy constraint templates are established. In addition, this paper derives image saliency constraints and reformulates the image resizing problem as a convex quadratic program. Secondly, a neural-network-based method is presented for solving this convex quadratic program. The corresponding neural network model is constructed, and some sufficient conditions for the network's stability are given. Compared with traditional numerical algorithms such as iterative methods, the neural network method is essentially parallel and distributed, which can speed up the calculation. Finally, the effects of image resizing by the proposed method and the traditional interpolation-based method are compared using MATLAB. Experimental results show that this method identifies the prominent object more reliably, and the prominent features are preserved effectively after the image is resized. It also has the advantages of high portability and good real-time performance with low visual distortion.

  13. Addressing the systems-based practice requirement with health policy content and educational technology.

    Science.gov (United States)

    Nagler, Alisa; Andolsek, Kathryn; Dossary, Kristin; Schlueter, Joanne; Schulman, Kevin

    2010-01-01

    Duke University Hospital Office of Graduate Medical Education and Duke University's Fuqua School of Business collaborated to offer a Health Policy lecture series to residents and fellows across the institution, addressing the "Systems-based Practice" competency. During the first year, content was offered in two formats: live lecture and web/podcast. Participants could elect the modality that was most convenient for them. In Year Two, the format was changed so that all content was web/podcast and a quarterly live panel discussion was led by module presenters or content experts. Lecture evaluations, qualitative focus group feedback, and post-test data were analyzed. A total of 77 residents and fellows from 8 (of 12) Duke Graduate Medical Education departments participated. In the first year, post-test results were the same for those who attended the live lectures and those who participated via web/podcast. A greater number of individuals participated in Year Two. Participants from both years expressed the need for health policy content in their training programs. Participants in both years valued a hybrid format for content delivery, recognizing a desire for live interaction with the convenience of accessing web/podcasts at times and locations convenient for them. A positive unintended consequence of the project was participant networking with residents and fellows from other specialties.

  14. A Fuzzy Color-Based Approach for Understanding Animated Movies Content in the Indexing Task

    Directory of Open Access Journals (Sweden)

    Vasile Buzuloiu

    2008-04-01

    Full Text Available This paper proposes a method for detecting and analyzing the color techniques used in animated movies. Each animated movie uses a specific color palette, which makes its color distribution one major feature in analyzing the movie content. The color palette is specially tuned by the author in order to convey certain feelings or to express artistic concepts. Deriving semantic or symbolic information from the color concepts or the visual impression induced by the movie should be an ideal way of accessing its content in a content-based retrieval system. The proposed approach is carried out in two steps. The first processing step is the low-level analysis. The movie color content gets represented with several global statistical parameters computed from the movie global weighted color histogram. The second step is the symbolic representation of the movie content. The numerical parameters obtained from the first step are converted into meaningful linguistic concepts through a fuzzy system. They concern mainly the predominant hues of the movie, some of Itten’s color contrasts and harmony schemes, color relationships and color richness. We use the proposed linguistic concepts to characterize given animated movies according to their color techniques. In order to make the retrieval task easier, we also propose to represent color properties in a graphical manner which is similar to the color gamut representation. Several tests have been conducted on an animated movie database.
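    The conversion from a color histogram to a linguistic concept in record 14 can be illustrated with a crisp (non-fuzzy) simplification: bin pixel hues into six named sectors and label the dominant one. The sector boundaries, the share threshold, and the hue values below are invented for illustration, not the paper's fuzzy system:

```python
def dominant_hue_concept(hues):
    """Map a list of pixel hues (degrees, 0-360) to a coarse linguistic label."""
    names = ["red", "yellow", "green", "cyan", "blue", "magenta"]
    bins = {name: 0 for name in names}
    for h in hues:
        bins[names[int(h % 360) // 60]] += 1   # 60-degree hue sectors
    name = max(bins, key=bins.get)
    share = bins[name] / len(hues)
    strength = "predominantly" if share > 0.5 else "partly"
    return f"{strength} {name}"

# A warm palette: most pixel hues fall in the 0-60 degree (red) sector.
concept = dominant_hue_concept([10, 25, 40, 5, 50, 200, 130])
```

    A fuzzy system would replace the hard sector boundaries with overlapping membership functions, so a hue of 58 degrees contributes partly to "red" and partly to "yellow" instead of falling entirely into one bin.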

  16. Leaf Chlorophyll Content Estimation of Winter Wheat Based on Visible and Near-Infrared Sensors.

    Science.gov (United States)

    Zhang, Jianfeng; Han, Wenting; Huang, Lvwen; Zhang, Zhiyong; Ma, Yimian; Hu, Yamin

    2016-03-25

    The leaf chlorophyll content is one of the most important factors for the growth of winter wheat. Visible and near-infrared sensors provide a quick, non-destructive technique for the estimation of crop leaf chlorophyll content. In this paper, a new approach is developed for leaf chlorophyll content estimation of winter wheat based on visible and near-infrared sensors. First, sliding window smoothing (SWS) was integrated with multiplicative scatter correction (MSC) or the standard normal variate transformation (SNV) to preprocess the reflectance spectra of wheat leaves. Then, a model for the relationship between the leaf relative chlorophyll content and the reflectance spectra was developed using partial least squares (PLS) and a back-propagation neural network. A total of 300 samples from areas surrounding Yangling, China, were used for the experimental studies. The visible and near-infrared spectra in the 450-900 nm wavelength range were preprocessed using SWS, MSC and SNV. The experimental results indicate that preprocessing using SWS and SNV followed by modeling with PLS achieves the most accurate estimation, with a correlation coefficient of 0.8492 and a root mean square error of 1.7216. Thus, the proposed approach can be widely used for winter wheat chlorophyll content analysis.
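    The two preprocessing steps record 16 found most effective, sliding window smoothing followed by the standard normal variate transformation, are simple enough to sketch directly. The window size and the toy reflectance values are assumptions for illustration:

```python
def sliding_window_smooth(spectrum, window=3):
    """Moving-average smoothing with edge truncation (odd window size)."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        seg = spectrum[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def snv(spectrum):
    """Standard normal variate: center each spectrum and scale to unit std."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = (sum((x - mean) ** 2 for x in spectrum) / n) ** 0.5
    return [(x - mean) / std for x in spectrum]

# Hypothetical leaf reflectance values at consecutive wavelengths.
raw = [0.42, 0.45, 0.43, 0.60, 0.58, 0.61, 0.59]
pre = snv(sliding_window_smooth(raw))
```

    SNV removes per-sample multiplicative and additive scatter effects, which is why it (or MSC) is applied before fitting the PLS model.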

  17. [Quantitative inversion of rock SiO2 content based on thermal infrared emissivity spectrum].

    Science.gov (United States)

    Yang, Hang; Zhang, Li-Fu; Huang, Zhao-Qiang; Zhang, Xue-Wen; Tong, Qing-Xi

    2012-06-01

    The present paper used the emissivity of non-processed rocks measured by M304, a hyperspectral Fourier transform infrared (FTIR) spectroradiometer, and SiO2 content measured by X-ray fluorescence spectrometry. After continuum removal and normalization, a stepwise regression method was employed to select the feature bands of rock emissivity, and the quantitative relationship between SiO2 content and the continuum-removed emissivity of the feature bands was analysed. On this basis, by comparing twelve SiO2 index models, the optimal model for predicting SiO2 content was built. The results showed that the SiO2 indices can predict SiO2 content efficiently; in particular, the normalized silicon dioxide index (NSDI) at 11.18 and 12.36 μm is the best. Compared with the regression models, NSDI is simpler and more practical; the result has important application value in rock classification and high-precision SiO2 content extraction.
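    Record 17 does not give the NSDI formula; assuming it takes the usual normalized-difference form over the two stated bands, a sketch looks as follows. The emissivity values are invented, chosen only so that the silica-rich sample shows the deeper restrahlen feature near 11 μm:

```python
def normalized_index(e_a, e_b):
    """Normalized-difference index of two band emissivities (assumed form)."""
    return (e_a - e_b) / (e_a + e_b)

# Hypothetical continuum-removed emissivities near 11.18 um and 12.36 um.
granite_like = normalized_index(0.93, 0.97)   # silica-rich: deeper 11 um feature
basalt_like = normalized_index(0.97, 0.98)    # silica-poor: shallower feature
```

    A regression of SiO2 content (from XRF) against such an index over a rock suite would then give the predictive model the abstract describes.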

  18. Near infrared spectroscopy detection of the content of wheat based on improved deep belief network

    Science.gov (United States)

    Li, Wenwen; Lin, Min; Huang, Yongmei; Liu, Huijun; Zhou, Xinqi

    2017-08-01

    In order to simplify the traditionally complicated detection of wheat composition, a method for predicting the content of wheat components by near-infrared spectroscopy based on an improved deep belief network is proposed. In this paper, a wavelet transform is used to preprocess the near-infrared spectra of wheat, and a quantitative analysis model of wheat moisture, protein and ash content is then established using a deep belief network. Combined with the random hidden algorithm, the network model is sparsified so as to improve the accuracy and stability of the network. The experimental results show that, using the improved deep belief network to establish the quantitative analysis model of wheat content, the correlation coefficients for moisture, protein and ash content were 0.9978, 0.9928 and 0.9920, and the standard errors of prediction were 0.0069, 0.0628 and 0.0535, respectively. Compared with the traditional deep belief network (DBN) and the traditional shallow-learning BP neural network algorithm, the prediction results are significantly improved.
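    Record 18 does not say which wavelet family is used for preprocessing; a one-level Haar transform, the simplest case, illustrates the step. The toy spectrum values are invented:

```python
def haar_dwt(signal):
    """One-level Haar wavelet transform of an even-length signal.

    Returns (approximation, detail) coefficient lists; the approximation
    is a denoised half-length version of the spectrum, the detail holds
    the high-frequency content.
    """
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    return approx, detail

# Hypothetical NIR absorbance values at consecutive wavelengths.
spectrum = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 7.0]
approx, detail = haar_dwt(spectrum)
```

    The transform is orthonormal, so signal energy is preserved across the two coefficient sets; denoising typically thresholds the detail coefficients before reconstruction.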

  19. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the GLCM (gray level co-occurrence matrix) and the local binary pattern (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. From our experiment, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and a recall of 30%, whereas GLCM in texture-based retrieval for MRI images shows a precision of 70% and a recall of 20%. Shape-based retrieval for MRI images shows a precision of 50% and a recall of 25%. The retrieval results show that this simple approach is successful.
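    Of the texture descriptors in record 19, the local binary pattern is the most compact to sketch: each pixel gets an 8-bit code from comparing its eight neighbours against the center, and a histogram of codes serves as the texture feature. The neighbour ordering and the toy image below are assumptions for illustration:

```python
def lbp_code(image, y, x):
    """8-neighbour local binary pattern code for pixel (y, x)."""
    center = image[y][x]
    # Clockwise from top-left; each neighbour contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

# Toy 3x3 patch: a bright edge along the top row.
img = [
    [9, 9, 9],
    [1, 5, 1],
    [1, 1, 1],
]
code = lbp_code(img, 1, 1)
```

    In a retrieval system, the normalized histogram of codes over a region is compared between query and database images, e.g. with a chi-squared distance.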

  20. Report on RecSys 2015 Workshop on New Trends in Content-Based Recommender Systems (CBRecSys 2015)

    DEFF Research Database (Denmark)

    Bogers, Toine; Koolen, Marijn

    2016-01-01

    This article reports on the CBRecSys 2015 workshop, the second edition of the workshop on new trends in content-based recommender systems, co-located with RecSys 2015 in Vienna, Austria. Content-based recommendation has been applied successfully in many different domains, but it has not seen the ...... venue for work dedicated to all aspects of content-based recommender systems....

  1. Social Content Recommendation Based on Spatial-Temporal Aware Diffusion Modeling in Social Networks

    Directory of Open Access Journals (Sweden)

    Farman Ullah

    2016-09-01

    Full Text Available User interactions in online social networks (OSNs enable the spread of information and enhance the information dissemination process, but at the same time they exacerbate the information overload problem. In this paper, we propose a social content recommendation method based on spatial-temporal aware controlled information diffusion modeling in OSNs. Users interact more frequently when they are close to each other geographically, have similar behaviors, and fall into similar demographic categories. Considering these facts, we propose multicriteria-based social ties relationship and temporal-aware probabilistic information diffusion modeling for controlled information spread maximization in OSNs. The proposed social ties relationship modeling takes into account user spatial information, content trust, opinion similarity, and demographics. We suggest a ranking algorithm that considers the user ties strength with friends and friends-of-friends to rank users in OSNs and select highly influential injection nodes. These nodes are able to improve social content recommendations, minimize information diffusion time, and maximize information spread. Furthermore, the proposed temporal-aware probabilistic diffusion process categorizes the nodes and diffuses the recommended content to only those users who are highly influential and can enhance information dissemination. The experimental results show the effectiveness of the proposed scheme.
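    The controlled diffusion process in record 1 is probabilistic; a standard simplification is the independent cascade model, in which each newly activated user gets one chance to activate each inactive neighbour. The graph, probability, and seed choice below are invented for illustration, not the paper's spatial-temporal model:

```python
import random

def independent_cascade(graph, seeds, p=0.3, rng=None):
    """Simulate independent-cascade diffusion from seed (injection) nodes.

    graph: dict node -> list of neighbours; each newly activated node gets
    one chance to activate each inactive neighbour with probability p.
    """
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
spread = independent_cascade(graph, ["a"], p=1.0)
```

    Selecting injection nodes then amounts to choosing seeds that maximize the expected size of `spread` over many simulated cascades, which is where the record's ties-strength ranking comes in.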

  2. Influence of Ta content on hot corrosion behaviour of a directionally solidified nickel base superalloy

    Energy Technology Data Exchange (ETDEWEB)

    Han, F.F. [Superalloy Division, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016 (China); Chang, J.X., E-mail: jxchang11s@imr.ac.cn [Superalloy Division, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016 (China); Li, H.; Lou, L.H. [Superalloy Division, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016 (China); Zhang, J. [Superalloy Division, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016 (China); Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, 72 Wenhua Road, Shenyang 110016 (China)

    2015-01-15

    Highlights: • Three nickel-base superalloys containing different Ta contents were subjected to Na2SO4-induced hot corrosion. • Ta improved the hot corrosion resistance. • Ta decreased the diffusion rate of alloying elements. • Ta promoted the formation of a (Cr, Ti)TaO4 layer. - Abstract: The hot corrosion behaviour of a directionally solidified nickel-base superalloy with different tantalum (Ta) additions in fused sodium sulphate (Na2SO4) under an oxidizing atmosphere at 900 °C has been studied. It was shown that the hot corrosion resistance was improved by increasing the Ta content. The hot corrosion kinetics of the alloy with lower Ta content deviated from the parabolic law after 60 h of corrosion testing, whereas the corrosion kinetics of the alloy with high Ta content followed the parabolic law before 60 h, with less mass change afterwards. A detailed microstructure study using scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS), transmission electron microscopy (TEM) and X-ray diffraction (XRD) was performed to investigate the corrosion products and mechanisms. The beneficial effect of Ta was found to result from a Ta-enriched (Cr, Ti)TaO4 layer inside the corrosion scale, which retarded element diffusion and thus slowed the hot corrosion kinetics.

  3. Optimal Relay Selection using Efficient Beaconless Geographic Contention-Based Routing Protocol in Wireless Adhoc Networks

    Directory of Open Access Journals (Sweden)

    G. Srimathy

    2012-04-01

    Full Text Available In wireless ad hoc networks, cooperation of nodes can be achieved through interactions at higher protocol layers; in particular, the MAC (Medium Access Control) and network layers play a vital role. The MAC facilitates a routing protocol based on the position of nodes at the network layer, known as beaconless geographic routing (BLGR), using a contention-based selection process. This paper proposes a two-level cross-layer framework: a MAC-network cross-layer design for forwarder selection (routing) and a MAC-PHY design for relay selection. CoopGeo, the proposed cross-layer protocol, provides an efficient, distributed approach to selecting next hops and optimal relays to form a communication path. Wireless networks suffer from a large number of simultaneous communications, which leads to increased collisions and energy consumption; hence we focus on a new contention access method that dynamically adjusts the channel access probability, reducing the number of contention rounds and collisions. Simulation results demonstrate the best relay selection and compare the direct mode with cooperative networks, together with a performance evaluation of contention probability with collision avoidance.

  4. Face and content validity of a novel, web-based otoscopy simulator for medical education.

    Science.gov (United States)

    Wickens, Brandon; Lewis, Jordan; Morris, David P; Husein, Murad; Ladak, Hanif M; Agrawal, Sumit K

    2015-02-24

    Despite the fact that otoscopy is a widely used and taught diagnostic tool during medical training, errors in diagnosis are common. Physical otoscopy simulators have high fidelity, but they can be expensive and only a limited number of students can use them at a given time. 1) To develop a purely web-based otoscopy simulator that can easily be distributed to students over the internet. 2) To assess face and content validity of the simulator by surveying experts in otoscopy. An otoscopy simulator, OtoTrain™, was developed at Western University using web-based programming and Unity 3D. Eleven experts from academic institutions in North America were recruited to test the simulator and respond to an online questionnaire. A 7-point Likert scale was used to answer questions related to face validity (realism of the simulator), content validity (expert evaluation of subject matter and test items), and applicability to medical training. The mean responses for the face validity, content validity, and applicability to medical training portions of the questionnaire were all ≤3, falling between the "Agree", "Mostly Agree", and "Strongly Agree" categories. The responses suggest good face and content validity of the simulator. Open-ended questions revealed that the primary drawbacks of the simulator were the lack of a haptic arm for force feedback, a need for increased focus on pneumatic otoscopy, and few rare disorders shown on otoscopy. OtoTrain™ is a novel, web-based otoscopy simulator that can be easily distributed and used by students on a variety of platforms. Initial face and content validity was encouraging, and a skills transference study is planned following further modifications and improvements to the simulator.

  5. Ionospheric Slant Total Electron Content Analysis Using Global Positioning System Based Estimation

    Science.gov (United States)

    Sparks, Lawrence C. (Inventor); Mannucci, Anthony J. (Inventor); Komjathy, Attila (Inventor)

    2017-01-01

    A method, system, apparatus, and computer program product provide the ability to analyze ionospheric slant total electron content (TEC) using global navigation satellite systems (GNSS)-based estimation. Slant TEC is estimated for a given set of raypath geometries by fitting historical GNSS data to a specified delay model. The accuracy of the specified delay model is estimated by computing delay estimate residuals and plotting a behavior of the delay estimate residuals. An ionospheric threat model is computed based on the specified delay model. Ionospheric grid delays (IGDs) and grid ionospheric vertical errors (GIVEs) are computed based on the ionospheric threat model.

  6. Topsoil organic carbon content of Europe, a new map based on a generalised additive model

    Science.gov (United States)

    de Brogniez, Delphine; Ballabio, Cristiano; Stevens, Antoine; Jones, Robert J. A.; Montanarella, Luca; van Wesemael, Bas

    2014-05-01

    There is an increasing demand for up-to-date spatially continuous organic carbon (OC) data for global environment and climatic modeling. Whilst the current map of topsoil organic carbon content for Europe (Jones et al., 2005) was produced by applying expert-knowledge-based pedo-transfer rules on large soil mapping units, the aim of this study was to replace it by applying digital soil mapping techniques on the first European harmonised geo-referenced topsoil (0-20 cm) database, which arises from the LUCAS (land use/cover area frame statistical survey) survey. A generalized additive model (GAM) was calibrated on 85% of the dataset (ca. 17 000 soil samples) and a backward stepwise approach selected slope, land cover, temperature, net primary productivity, latitude and longitude as environmental covariates (500 m resolution). The validation of the model (applied on 15% of the dataset) gave an R2 of 0.27. We observed that most organic soils were under-predicted by the model and that soils of Scandinavia were also poorly predicted. The model showed an RMSE of 42 g kg-1 for mineral soils and of 287 g kg-1 for organic soils. The map of predicted OC content showed the lowest values in Mediterranean countries and in croplands across Europe, whereas the highest OC contents were predicted in wetlands, woodlands and in mountainous areas. The map of standard error of the OC model predictions showed high values in northern latitudes, wetlands, moors and heathlands, whereas low uncertainty was mostly found in croplands. A comparison of our results with the map of Jones et al. (2005) showed a general agreement on the prediction of mineral soils' OC content, most probably because the models use some common covariates, namely land cover and temperature. Our model however failed to predict values of OC content greater than 200 g kg-1, which we explain by the imposed unimodal distribution of our model, whose mean is tilted towards the majority of soils, which are mineral.

  7. Method for Learning Efficiency Improvements Based on Gaze Location Notifications on e-learning Content Screen Display

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2012-06-01

    Full Text Available A method for learning-efficiency improvement based on gaze-location notifications on an e-learning content screen display is proposed. Experimental results with two types of e-learning content (content with relatively small motion, and content with moving pictures and annotation marks) show that R-squared values of 0.8038 to 0.9615 are observed between the duration of proper gaze location and the achievement test score.

  8. Plant leaf chlorophyll content retrieval based on a field imaging spectroscopy system.

    Science.gov (United States)

    Liu, Bo; Yue, Yue-Min; Li, Ru; Shen, Wen-Jing; Wang, Ke-Lin

    2014-10-23

    A field imaging spectrometer system (FISS; 380-870 nm and 344 bands) was designed for agriculture applications. In this study, FISS was used to gather spectral information from soybean leaves. The chlorophyll content was retrieved using a multiple linear regression (MLR), partial least squares (PLS) regression and support vector machine (SVM) regression. Our objective was to verify the performance of FISS in a quantitative spectral analysis through the estimation of chlorophyll content and to determine a proper quantitative spectral analysis method for processing FISS data. The results revealed that the derivative reflectance was a more sensitive indicator of chlorophyll content and could extract content information more efficiently than the spectral reflectance, which is more significant for FISS data compared to ASD (analytical spectral devices) data, reducing the corresponding RMSE (root mean squared error) by 3.3%-35.6%. Compared with the spectral features, the regression methods had smaller effects on the retrieval accuracy. A multivariate linear model could be the ideal model to retrieve chlorophyll information with a small number of significant wavelengths used. The smallest RMSE of the chlorophyll content retrieved using FISS data was 0.201 mg/g, a relative reduction of more than 30% compared with the RMSE based on a non-imaging ASD spectrometer, which represents a high estimation accuracy compared with the mean chlorophyll content of the sampled leaves (4.05 mg/g). Our study indicates that FISS could obtain both spectral and spatial detailed information of high quality. Its image-spectrum-in-one merit promotes the good performance of FISS in quantitative spectral analyses, and it can potentially be widely used in the agricultural sector.
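The two processing steps the abstract describes, a derivative (difference) spectrum as the chlorophyll-sensitive feature and a linear model fit on a sensitive band, can be sketched as follows. All numbers are hypothetical; this is not the FISS processing chain or its data.

```python
def derivative_spectrum(reflectance, step=1.0):
    """First-difference approximation of the derivative reflectance."""
    return [(reflectance[i + 1] - reflectance[i]) / step
            for i in range(len(reflectance) - 1)]

def fit_line(x, y):
    """Ordinary least squares for a univariate model y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical reflectance of one leaf at consecutive bands:
refl = [0.08, 0.10, 0.15, 0.30, 0.52]
deriv = derivative_spectrum(refl)          # ~ [0.02, 0.05, 0.15, 0.22]

# Hypothetical calibration: derivative value at one band vs chlorophyll (mg/g)
band_values = [0.10, 0.15, 0.20, 0.25]
chlorophyll = [2.0, 3.0, 4.0, 5.0]
a, b = fit_line(band_values, chlorophyll)  # slope ~ 20, intercept ~ 0
```

A multivariate model (MLR, PLS or SVM regression, as in the study) would simply use several such band values per sample instead of one.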

  9. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications.

    Directory of Open Access Journals (Sweden)

    Titipat Achakulvisut

    Full Text Available Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they help for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society of Neuroscience Conference 2015. Human curated topics are used to cross validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.

  10. Science Concierge: A Fast Content-Based Recommendation System for Scientific Publications.

    Science.gov (United States)

    Achakulvisut, Titipat; Acuna, Daniel E; Ruangrong, Tulakan; Kording, Konrad

    2016-01-01

    Finding relevant publications is important for scientists who have to cope with exponentially increasing numbers of scholarly material. Algorithms can help with this task as they help for music, movie, and product recommendations. However, we know little about the performance of these algorithms with scholarly material. Here, we develop an algorithm, and an accompanying Python library, that implements a recommendation system based on the content of articles. Design principles are to adapt to new content, provide near-real time suggestions, and be open source. We tested the library on 15K posters from the Society of Neuroscience Conference 2015. Human curated topics are used to cross validate parameters in the algorithm and produce a similarity metric that maximally correlates with human judgments. We show that our algorithm significantly outperformed suggestions based on keywords. The work presented here promises to make the exploration of scholarly material faster and more accurate.

  11. Fast Measurement of Soluble Solid Content in Mango Based on Visible and Infrared Spectroscopy Technique

    Science.gov (United States)

    Yu, Jiajia; He, Yong

    Mango is a popular tropical fruit, and soluble solid content is an important quality index for it. In this study, a visible and short-wave near-infrared spectroscopy (VIS/SWNIR) technique was applied to investigate the feasibility of measuring the soluble solid content in mango and to validate the performance of selected sensitive bands. The calibration set was formed by 135 mango samples, while the remaining 45 mango samples formed the prediction set. The combination of partial least squares and back-propagation artificial neural networks (PLS-BP) was used to build the prediction model based on the raw spectrum data. With PLS-BP, the determination coefficient for prediction (Rp) was 0.757, and the process is simple and easy to operate. Compared with the partial least squares (PLS) result, the performance of PLS-BP is better.

  12. A Protocol for Content-Based Communication in Disconnected Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Julien Haillot

    2010-01-01

    Full Text Available In content-based communication, information flows towards interested hosts rather than towards specifically set destinations. This new style of communication perfectly fits the needs of applications dedicated to information sharing, news distribution, service advertisement and discovery, etc. In this paper we address the problem of supporting content-based communication in partially or intermittently connected mobile ad hoc networks (MANETs. The protocol we designed leverages on the concepts of opportunistic networking and delay-tolerant networking in order to account for the absence of end-to-end connectivity in disconnected MANETs. The paper provides an overview of the protocol, as well as simulation results that show how this protocol can perform in realistic conditions.

  13. ACBRAAM: A Content Based Routing Algorithm Using Ant Agents for MANETs

    Directory of Open Access Journals (Sweden)

    Ramkumar K. R.

    2011-01-01

    Full Text Available A mobile ad hoc network (MANET) is a temporary network which is formed by a group of wireless mobile devices without the aid of any centralized infrastructure. In such environments, finding the identity of a mobile device and maintaining the paths between any two nodes are challenging tasks; in practice, the limited propagation range of mobile devices restricts their identity to their neighbors, and a new host entering a MANET does not know the complete details of that instantaneous MANET. This paper analyses the possibility of content-based route discovery and proposes a framework for request-based route discovery and path maintenance using ant agents. The ant agents fetch routing information along with content relevancy, which has a major influence on the pheromone value. The pheromone value is used to find the probability of goodness. The proposed framework consists of ant structures and algorithms for route discovery and path maintenance.

  14. Image Content Based Retrieval System using Cosine Similarity for Skin Disease Images

    Directory of Open Access Journals (Sweden)

    Sukhdeep Kaur

    2013-09-01

    Full Text Available A content-based image retrieval (CBIR) system is proposed to assist dermatologists in the diagnosis of skin diseases. First, after collecting various skin disease images and their text information (disease name, symptoms, cure, etc.), a test database (for query images) and a training database of approximately 460 images (for image matching) are prepared. Second, features are extracted by calculating descriptive statistics. Third, similarity matching using cosine similarity and Euclidean distance based on the extracted features is discussed. Fourth, for better results the first four images are selected during indexing and their related text information is shown in the text file. Last, the results are compared according to the doctor's description and according to image content, in terms of precision and recall and also in terms of a self-developed scoring system.
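The matching step in such a system can be sketched as below: each image is reduced to a vector of descriptive statistics, and the database is ranked against the query by cosine similarity (Euclidean distance works the same way). The feature values and file names are invented for illustration; this is not the paper's implementation.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical (mean, std) per colour channel for a query and a database.
query = [120.0, 30.0, 80.0, 25.0, 60.0, 20.0]
database = {
    "eczema_03.jpg":    [118.0, 28.0, 82.0, 24.0, 61.0, 19.0],
    "psoriasis_11.jpg": [200.0, 10.0, 40.0, 35.0, 90.0, 50.0],
}

# Rank database images by cosine similarity to the query (best first).
ranked = sorted(database, key=lambda k: cosine_similarity(query, database[k]),
                reverse=True)
```

In a full system the top-ranked images' associated text records would then be shown to the dermatologist.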

  15. Content-Based Persian Language Instruction at the University of Maryland: A Field-Report

    Directory of Open Access Journals (Sweden)

    Ali Reza Abasi

    2014-01-01

    Full Text Available Content-based language instruction (CBI has been increasingly gaining prominence in foreign language education. There is, however, a paucity of reports on less commonly taught language programs in the USA that have adopted this approach. This paper reports on the introduction of CBI in a Persian language program at the University of Maryland. The paper begins with an overview of the most common CBI models in higher education settings. Next, a description of a particular CBI model developed in response to the program needs is presented, followed by a description of an offered course based on this model and a discussion of the views of the students, content faculty, and the language instructor. In conclusion, key considerations and the lessons learned in the process of implementing CBI are discussed.

  16. BRINGING CULTURAL CONTENT AND AUTHENTIC MATERIALS TO ENHANCE PROBLEM-BASED LEARNING IN EFL CLASSES

    OpenAIRE

    2012-01-01

    In a class where elements and filters of Problem-Based Learning are used, students are engaged in language learning through organized and purposeful activities with authentic materials and collaborative learning models. Research has shown that this approach is effective in raising students' motivation, enhancing their problem-solving and critical-thinking skills, and deepening their understanding of the subject contents. This paper aims to answer the questions of when and how authentic materia...

  17. Support for Interactive Features of E-learning Content Based on the Formal Theory

    Directory of Open Access Journals (Sweden)

    Oleg BISIKALO

    2014-03-01

    Full Text Available A formal theory based on a binary operator of directional associative relation is constructed in the article, and an understanding of an associative normal form of image constructions is introduced. A model of a commutative semigroup, which provides a presentation of a sentence as three components of an interrogative linguistic image construction, is considered. The given examples demonstrate the development of interactive features of e-Learning content.

  18. Feature-based watermarking for digital right management in content distribution

    Science.gov (United States)

    Zhou, Wensheng; Xie, Hua; Sagetong, Phoom

    2004-10-01

    More and more digital services provide the capability of distributing digital content to end-users through high-bandwidth networks, such as satellite systems. In such systems, Digital Rights Management (DRM) has become more and more important and is encountering great challenges. Digital watermarking is proposed as a possible solution for digital copyright tracking and enforcement. The nature of DRM systems puts high requirements on the watermark's robustness, uniqueness, easy detection, accurate retrieval and convenient management. We have developed a series of feature-based watermarking algorithms for digital video for satellite transmission. In this paper, we will first describe a general secure digital content distribution system model and the requirements on watermarking as one mechanism of DRM in digital content distribution applications. Then we will present a few feature-based digital watermarking methods in detail which are integrated with a dynamic watermarking schema to protect the digital content in a dynamic environment. For example, a watermark which is embedded in the DFT feature domain is invariant to rotation, scale and translation. Our proposed DFT-domain watermarking schemas, which exploit the magnitude property of the DFT feature domain, allow both robust and easy watermark tracking and detection in the case of copyright infringement using cameras or camcorders. This DFT feature-based watermarking algorithm is able to tolerate large-angle rotation, and there is no need to search for possible rotated angles, which reduces the complexity of the watermark detection process and allows fast retrieval and easy management. We will then present a wavelet feature-based watermark algorithm for dynamic watermark key updates and key management, and we will conclude the paper with the summary, pointing out future research directions.
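The property the DFT-magnitude approach relies on can be shown in a toy 1-D example: a cyclic shift (translation) of the signal changes only the phases of its DFT coefficients, leaving the magnitudes untouched, so information embedded in the magnitude spectrum survives that distortion. This is a sketch of the underlying property only, not the authors' video watermarking algorithm.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (adequate for a toy example)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def magnitudes(x):
    """Magnitude spectrum of a real-valued signal."""
    return [abs(c) for c in dft(x)]

signal = [1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 0.0, 2.0]
shifted = signal[3:] + signal[:3]        # cyclic translation by 3 samples

m1, m2 = magnitudes(signal), magnitudes(shifted)
# The magnitude spectra coincide despite the translation.
same = all(abs(a - b) < 1e-9 for a, b in zip(m1, m2))
```

For 2-D images the same argument applies to the 2-D DFT, which is why magnitude-domain embedding is robust to translation; rotation and scale invariance require the additional mappings mentioned in the abstract.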

  19. Comparison of color representations for content-based image retrieval in dermatology

    OpenAIRE

    Bosman, Hedde H.W.J.; Petkov, Nicolai; Jonkman, Marcel F.

    2010-01-01

    Background/purpose: We compare the effectiveness of 10 different color representations in a content-based image retrieval task for dermatology. Methods: As features, we use the average colors of healthy and lesion skin in an image. The extracted features are used to retrieve similar images from a database using a k-nearest-neighbor search and Euclidean distance. The images in the database are divided into four different color categories. We measure the effectiveness of retrieval by the averag...

  20. A semantic middleware to enhance current multimedia retrieval systems with content-based functionalities

    OpenAIRE

    2011-01-01

    210 p. : graf. [EN] This work reviews information retrieval theory and focuses on the revolution experienced in that field driven by digitalization and the widespread use of multimedia information. After analyzing the trends and promising results in the main disciplines surrounding the content-based information retrieval field, this thesis proposes a reference model for Multimedia Information Retrieval that aims to contextualize the thesis contributions. According to this ref...

  1. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    Directory of Open Access Journals (Sweden)

    Kai Lin

    2016-07-01

    Full Text Available With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC. The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.

  2. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network.

    Science.gov (United States)

    Lin, Kai; Wang, Di; Hu, Long

    2016-07-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods.
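The Dempster-Shafer ingredient named above, fusing several pieces of evidence about a node's class, centres on Dempster's rule of combination. The sketch below applies the rule to two invented mass functions over a two-class frame of discernment; it illustrates the rule itself, not the CMNC fusion-driven model.

```python
def combine(m1, m2):
    """Dempster's rule: fuse two basic mass assignments (dicts of frozenset -> mass)."""
    raw = {}
    conflict = 0.0
    for x, mx in m1.items():
        for y, my in m2.items():
            inter = x & y
            if inter:
                raw[inter] = raw.get(inter, 0.0) + mx * my
            else:
                conflict += mx * my        # mass falling on the empty set
    k = 1.0 - conflict                     # normalisation constant
    return {s: m / k for s, m in raw.items()}

# Frame of discernment: two hypothetical data classes A and B.
A, B = frozenset({"A"}), frozenset({"B"})
AB = frozenset({"A", "B"})                 # "either" (ignorance)

# Two sensors' evidence about which class a node's data belongs to.
m1 = {A: 0.6, B: 0.1, AB: 0.3}
m2 = {A: 0.5, B: 0.2, AB: 0.3}
fused = combine(m1, m2)                    # mass concentrates on class A
```

In a CMNC-style scheme, the fused masses would then drive the classification of sensor nodes and, from that, the channel assignment.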

  3. Content-based audio authentication using a hierarchical patchwork watermark embedding

    Science.gov (United States)

    Gulbis, Michael; Müller, Erika

    2010-05-01

    Content-based audio authentication watermarking techniques extract perceptual relevant audio features, which are robustly embedded into the audio file to protect. Manipulations of the audio file are detected on the basis of changes between the original embedded feature information and the anew extracted features during verification. The main challenges of content-based watermarking are on the one hand the identification of a suitable audio feature to distinguish between content preserving and malicious manipulations. On the other hand the development of a watermark, which is robust against content preserving modifications and able to carry the whole authentication information. The payload requirements are significantly higher compared to transaction watermarking or copyright protection. Finally, the watermark embedding should not influence the feature extraction to avoid false alarms. Current systems still lack a sufficient alignment of watermarking algorithm and feature extraction. In previous work we developed a content-based audio authentication watermarking approach. The feature is based on changes in DCT domain over time. A patchwork algorithm based watermark was used to embed multiple one bit watermarks. The embedding process uses the feature domain without inflicting distortions to the feature. The watermark payload is limited by the feature extraction, more precisely the critical bands. The payload is inverse proportional to segment duration of the audio file segmentation. Transparency behavior was analyzed in dependence of segment size and thus the watermark payload. At a segment duration of about 20 ms the transparency shows an optimum (measured in units of Objective Difference Grade). Transparency and/or robustness are fast decreased for working points beyond this area. Therefore, these working points are unsuitable to gain further payload, needed for the embedding of the whole authentication information. 
In this paper we present a hierarchical extension

  4. Application of Content-Based Approach in Research Paper Recommendation System for a Digital Library

    Directory of Open Access Journals (Sweden)

    Simon Philip

    2014-10-01

    Full Text Available Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations; the major ones are collaborative filtering, content-based filtering, and hybrid algorithms. The motivation stems from the need to integrate a recommendation feature in digital libraries in order to reduce information overload. The content-based technique is adopted because of its suitability in domains or situations where items outnumber users. TF-IDF (Term Frequency Inverse Document Frequency) and cosine similarity were used to determine how relevant or similar a research paper is to a user's query or profile of interest. Research papers and the user's query were represented as vectors of weights using a keyword-based vector space model. The weights indicate the degree of association between a research paper and a user's query. This paper also presents an algorithm to provide or suggest recommendations based on the user's query. The algorithm employs both the TF-IDF weighting scheme and the cosine similarity measure. Based on the results of the system, integrating a recommendation feature in digital libraries will help library users find the research papers most relevant to their needs.
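The TF-IDF weighting and cosine matching described here can be sketched on a toy corpus as follows. Real systems add tokenisation, stemming, stop-word removal and sparse-matrix storage; all of that is omitted, and the corpus strings are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Represent each document as {term: tf * idf} over the corpus vocabulary."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc.split()))
    vecs = []
    for doc in docs:
        tf = Counter(doc.split())
        vecs.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse {term: weight} vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "content based recommendation for digital library",
    "collaborative filtering for movie recommendation",
    "digital library metadata search",
]
query = "digital library recommendation"
vecs = tfidf_vectors(corpus + [query])    # weight the query with the corpus
qv = vecs[-1]
ranked = sorted(range(len(corpus)), key=lambda i: cosine(vecs[i], qv),
                reverse=True)             # paper indices, most relevant first
```

The first-ranked paper is the one sharing the most (and rarest) query terms, which is exactly the behaviour the weighting scheme is designed to produce.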

  5. Determining the bio-based content of bio-plastics used in Thailand by radiocarbon analysis

    Science.gov (United States)

    Ploykrathok, T.; Chanyotha, S.

    2017-06-01

    Presently, there is an increased interest in the development of bio-plastic products from agricultural materials which are biodegradable in order to reduce the problem of waste disposal. Since the amount of modern carbon in bio-plastics can indicate how much the amount of agricultural materials are contained in the bio-plastic products, this research aims to determine the modern carbon in bio-plastic using the carbon dioxide absorption method. The radioactivity of carbon-14 contained in the sample is measured by liquid scintillation counter (Tri-carb 3110 TR, PerkinElmer). The percentages of bio-based content in the samples were determined by comparing the observed modern carbon content with the values contained in agricultural raw materials. The experimental results show that only poly(lactic acid) samples have the modern carbon content of 97.4%, which is close to the agricultural materials while other bio-plastics types are found to have less than 50% of the modern carbon content. In other words, most of these bio-plastic samples were mixed with other materials which are not agriculturally originated.
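The comparison step described here, relating a sample's modern-carbon signal to that of a fully agricultural reference, is a one-line ratio (the approach standardised in ASTM D6866). The percent-modern-carbon (pMC) values below are illustrative, not the paper's measurements.

```python
def bio_based_percent(pmc_sample, pmc_reference):
    """Bio-based content (%) from percent-modern-carbon values.

    pmc_reference is the pMC of a 100% bio-based (agricultural) material.
    """
    return 100.0 * pmc_sample / pmc_reference

# Illustrative: a sample reading 48.7 pMC against a 97.4 pMC reference
# would be half bio-based material, half fossil-derived material.
share = bio_based_percent(48.7, 97.4)
```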

  6. Cationic content effects of biodegradable amphoteric chitosan-based flocculants on the flocculation properties

    Institute of Scientific and Technical Information of China (English)

    Zhen Yang; Hu Yang; Rongshi Cheng; Yabo Shang; Xin Huang; Yichun Chen; Yaobo Lu; Aimin Chen; Yuxiang Jiang; Wei Gu; Xiaozhi Qian

    2012-01-01

    A series of biodegradable amphoteric chitosan-based flocculants (3-chloro-2-hydroxypropyl trimethyl ammonium chloride (CTA) modified carboxymethyl chitosan, denoted as CMC-CTA) with different substitution degrees of CTA were prepared successfully. The content of carboxymethyl groups in each CMC-CTA sample was kept almost constant. The solubility of the various flocculants showed that a higher cationic content of the flocculants caused better solubility. The flocculation experiments using kaolin suspension as synthetic water at the laboratory scale indicated that the substitution degree of CTA was one of the key factors for the flocculation properties. With the increase of cationic content, the flocculants demonstrated better flocculation performance and a lower dosage requirement. A flocculation kinetics model of particle collisions, combining zeta potential and turbidity measurements, was employed to investigate in detail the effects of the cationic content of the flocculants on the flocculation properties from the viewpoint of the flocculation mechanism. Furthermore, flocculation performance using raw water from the Zhenjiang part of the Yangtze River at the pilot scale showed effects similar to those at the laboratory scale.

  7. Cationic content effects of biodegradable amphoteric chitosan-based flocculants on the flocculation properties.

    Science.gov (United States)

    Yang, Zhen; Shang, Yabo; Huang, Xin; Chen, Yichun; Lu, Yaobo; Chen, Aimin; Jiang, Yuxiang; Gu, Wei; Qian, Xiaozhi; Yang, Hu; Cheng, Rongshi

    2012-01-01

    A series of biodegradable amphoteric chitosan-based flocculants (3-chloro-2-hydroxypropyl trimethyl ammonium chloride (CTA) modified carboxymethyl chitosan, denoted as CMC-CTA) with different substitution degrees of CTA were prepared successfully. The content of carboxymethyl groups in each CMC-CTA sample was kept almost constant. The solubility of the various flocculants showed that, higher cationic content of flocculants caused a better solubility. The flocculation experiments using kaolin suspension as synthetic water at the laboratory scale indicated that the substitution degree of CTA was one of the key factors for the flocculation properties. With the increase of cationic content, the flocculants were demonstrated better flocculation performance and lower dosage requirement. Flocculation kinetics model of particles collisions combining zeta potential and turbidity measurements was employed to investigate the effects of the cationic content of the flocculants on the flocculation properties from the viewpoint of flocculation mechanism in detail. Furthermore, flocculation performance using raw water from Zhenjiang part of Yangtze River at the pilot scale showed the similar effects to those at the laboratory scale.

  8. Novel Approach to Classify Plants Based on Metabolite-Content Similarity

    Directory of Open Access Journals (Sweden)

    Kang Liu

    2017-01-01

    Full Text Available Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes. This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward’s method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.

  9. Inorganic arsenic contents in rice-based infant foods from Spain, UK, China and USA.

    Science.gov (United States)

    Carbonell-Barrachina, Angel A; Wu, Xiangchun; Ramírez-Gandolfo, Amanda; Norton, Gareth J; Burló, Francisco; Deacon, Claire; Meharg, Andrew A

    2012-04-01

    Spanish gluten-free rice, cereals with gluten, and pureed baby foods were analysed for total (t-As) and inorganic As (i-As) using ICP-MS and HPLC-ICP-MS, respectively. In addition, pure infant rice from China, the USA, the UK and Spain was analysed. The i-As contents were significantly higher in gluten-free rice than in cereal mixtures with gluten, placing infants with celiac disease at high risk. All rice-based products displayed a high i-As content, with values above 60% of the t-As content and the remainder being dimethylarsinic acid (DMA). Approximately 77% of the pure infant rice samples showed contents below 150 μg kg(-1) (Chinese limit). When the daily intake of i-As by infants (4-12 months) was estimated and expressed on a bodyweight basis (μg d(-1) kg(-1)), it was higher in all infants aged 8-12 months than drinking water maximum exposures predicted for adults (assuming 1 L consumption per day for a 10 μg L(-1) standard). Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Novel Approach to Classify Plants Based on Metabolite-Content Similarity.

    Science.gov (United States)

    Liu, Kang; Abdullah, Azian Azamimi; Huang, Ming; Nishioka, Takaaki; Altaf-Ul-Amin, Md; Kanaya, Shigehiko

    2017-01-01

    Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that the similarity in metabolite content is applicable to assess phylogenic similarity of higher plants. However, such a chemical taxonomic approach has limitations of incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward's method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutional relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.
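The data-representation stage described here, plants as binary vectors over metabolite-structure clusters, from which a pairwise distance matrix feeds hierarchical clustering, can be sketched as below. The species names and cluster labels are invented, and the distance matrix is the input Ward's method would consume; the Ward linkage itself (e.g. scipy's implementation) is not reproduced here.

```python
import math

# Hypothetical clusters of structurally similar metabolites (in the paper
# these come from the DPClus network clustering algorithm).
metabolite_clusters = ["flavonoids", "alkaloids", "terpenes", "phenolics"]

plants = {  # which structural clusters each (hypothetical) plant contains
    "sp_one":   {"flavonoids", "phenolics"},
    "sp_two":   {"flavonoids", "terpenes", "phenolics"},
    "sp_three": {"alkaloids"},
}

def binary_vector(contents):
    """Presence/absence vector over the metabolite clusters."""
    return [1 if c in contents else 0 for c in metabolite_clusters]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

vectors = {name: binary_vector(c) for name, c in plants.items()}
names = sorted(vectors)
dist = {(a, b): euclidean(vectors[a], vectors[b])
        for a in names for b in names if a < b}
# sp_one and sp_two share most clusters, so their distance is smallest,
# and Ward's method would merge them first.
```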

  11. A visual detection of protein content based on titration of moving reaction boundary electrophoresis.

    Science.gov (United States)

    Wang, Hou-Yu; Guo, Cheng-Ye; Guo, Chen-Gang; Fan, Liu-Yin; Zhang, Lei; Cao, Cheng-Xi

    2013-04-24

    A visual electrophoretic titration method was first developed from the concept of a moving reaction boundary (MRB) for protein content analysis. In the developed method, when voltage was applied, the hydroxide ions in the cathodic vessel moved towards the anode and neutralized the carboxyl groups of protein immobilized via highly cross-linked polyacrylamide gel (PAG), generating an MRB between the alkali and the immobilized protein. The boundary moving velocity (V(MRB)) was a function of protein content, and an acid-base indicator was used to denote the boundary displacement. As a proof of concept, standard model proteins and biological samples were chosen to study the feasibility of the developed method. The experiments revealed good linear calibration functions between V(MRB) and protein content (correlation coefficients R>0.98). The experiments further demonstrated the following merits of the developed method: (1) weak influence of non-protein nitrogen additives (e.g., melamine) adulterated in protein samples; (2) good agreement with the classic Kjeldahl method (R=0.9945); (3) fast measuring speed in total protein analysis of large numbers of samples from the same source; and (4) low limit of detection (0.02-0.15 mg mL(-1) for protein content), good precision (intra-day R.S.D. less than 1.7% and inter-day less than 2.7%), and high recoveries (105-107%).

  12. Objective Functions for Information-Content-Based Optimal Monitoring Network Design

    Science.gov (United States)

    Weijs, S. V.; Huwald, H.; Parlange, M. B.

    2013-12-01

    Information theory has the potential to provide a common language for the quantification of uncertainty and its reduction by choosing an optimally informative monitoring network layout. Numerous different objectives based on information measures have been proposed in recent literature, often focusing simultaneously on maximum information and minimum dependence between the chosen locations for data collection. We discuss these objective functions and conclude that a single-objective optimization of joint entropy suffices to maximize the collection of information. Minimum dependence is a secondary objective that automatically follows from the first, but has no intrinsic justification. Furthermore it is demonstrated how the curse of dimensionality complicates the determination of information content for time series. In many cases found in the monitoring network literature, discrete multivariate joint distributions are estimated from relatively little data, leading to the occurrence of spurious dependencies in data, which change interpretations of previously published results. The aforementioned numerical challenges stem from inherent difficulties and subjectivity in determining information content. From information-theoretical logic it is clear that the information content of data depends on the state of knowledge prior to obtaining them. Making fewer assumptions in formulating this state of knowledge leads to higher data requirements. We further clarify the role of prior information in information content by drawing an analogy with data compression.
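The single-objective criterion argued for here, picking stations so as to maximize the joint entropy of their records, can be sketched with a greedy selection over discretised toy series. The station records below are invented, and, as the abstract warns, real series require careful discretisation before entropy estimates are meaningful.

```python
import math
from collections import Counter

def joint_entropy(series):
    """Shannon entropy (bits) of the joint distribution of discrete series."""
    tuples = list(zip(*series))            # one symbol tuple per time step
    counts = Counter(tuples)
    n = len(tuples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

stations = {
    "s1": [0, 0, 1, 1, 0, 1, 0, 1],
    "s2": [0, 0, 1, 1, 0, 1, 0, 1],        # duplicate of s1: adds nothing
    "s3": [0, 1, 0, 1, 1, 0, 0, 1],
}

def greedy_select(stations, k):
    """Greedily add the station that most increases joint entropy."""
    chosen = []
    while len(chosen) < k:
        best = max((s for s in stations if s not in chosen),
                   key=lambda s: joint_entropy(
                       [stations[c] for c in chosen] + [stations[s]]))
        chosen.append(best)
    return chosen

picked = greedy_select(stations, 2)        # skips the redundant duplicate
```

Because maximizing joint entropy already penalises redundancy, the greedy step never picks the duplicate station, which is the point the abstract makes about minimum dependence being implied rather than a separate objective.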

  13. Uplink Contention-based CSI Feedback with Prioritized Layers for a Multi-Carrier System

    DEFF Research Database (Denmark)

    Kaneko, Megumi; Hayashi, Kazunori; Popovski, Petar

    2012-01-01

    In multi-carrier (MC) systems, several works have considered contention-based CSI feedback in the UL control channel. We propose such a feedback scheme for a generic MC system, based on the idea of variable collision protection, where the probability that a piece of feedback information experiences a collision depends on its importance. We provide a performance analysis of the proposed scheme, assuming a Maximum CSI (Max CSI) and a normalized Proportional Fair Scheduler (PFS), where a tight approximation of the achievable throughput is obtained under discrete Adaptive Modulation (AM) and CSI feedback, which are relevant for practical systems.

  14. A P2P Service Discovery Strategy Based on Content Catalogues

    Directory of Open Access Journals (Sweden)

    Lican Huang

    2007-08-01

    This paper presents a framework for distributed service discovery based on VIRGO P2P technologies. The services are classified into multi-layer, hierarchical catalogue domains according to their contents. The service providers, which have their own service registries such as UDDIs, register the services they provide and establish a virtual tree in a VIRGO network according to the domain of their service. Service location using the proposed strategy is effective and guaranteed. This paper also discusses a preliminary implementation of service discovery based on Tomcat/Axis and jUDDI.

  15. Medium-Contention Based Energy-Efficient Distributed Clustering (MEDIC) for Wireless Sensor Networks

    OpenAIRE

    Liang Zhao; Qilian Liang

    2007-01-01

    In this paper, we utilize clustering to organize wireless sensors into an energy-efficient hierarchy. We propose a Medium-contention based Energy-efficient DIstributed Clustering (MEDIC) scheme, through which sensors self-organize into energy-efficient clusters by bidding for cluster headship. This scheme is based on a new criterion that can be used by each sensor node to make a distributed decision on whether to elect itself as a cluster head or remain a non-head member, a fully distributed process.

  16. AN INTELLIGENT CONTENT BASED IMAGE RETRIEVAL SYSTEM FOR MAMMOGRAM IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. VAIDEHI

    2015-11-01

    An automated segmentation method is proposed that dynamically selects the parenchymal region of interest (ROI) based on the patient's breast size, from which statistical features are derived. An SVM classifier is used to model the derived features and classify the breast tissue as dense, glandular or fatty. Then k-NN with different distance metrics, namely city-block, Euclidean and Chebyshev, is used to retrieve the k images closest to the given query image. The proposed method was tested on the MIAS database and achieves an average precision of 86.15%. The results reveal that the proposed method could be employed for effective content-based mammogram retrieval.
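    The retrieval step can be sketched as ranking database feature vectors by distance to the query under the three metrics named above and returning the k nearest. The image identifiers and feature values below are invented for illustration.

```python
def cityblock(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def knn_retrieve(query, database, k, metric):
    """database: list of (image_id, feature_vector) pairs."""
    ranked = sorted(database, key=lambda item: metric(query, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

# Toy statistical features derived from the parenchymal ROI
database = [
    ("mdb001", (0.62, 0.10, 0.30)),
    ("mdb002", (0.15, 0.40, 0.90)),
    ("mdb003", (0.60, 0.12, 0.28)),
]
hits = knn_retrieve((0.61, 0.11, 0.25), database, 2, euclidean)
```

    Swapping `euclidean` for `cityblock` or `chebyshev` reproduces the metric comparison the study performs.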

  17. Combining Block and Corner Features for Content-Based Trademark Retrieval

    Institute of Scientific and Technical Information of China (English)

    HONG Zhiling; JIANG Qingshan; WU Meihong

    2007-01-01

    In order to retrieve similar-looking trademarks from a large trademark database, an automatic content-based trademark retrieval method using block hit statistics and corner Delaunay Triangulation features was proposed. The block features are derived from hit statistics on a series of concentric ellipses. The corners are detected with an enhanced SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm, and the Delaunay Triangulation of the corner points is used as the corner feature. Experiments were conducted on the MPEG-7 Core Experiment CE-Shape-1 database of 1,400 images and a trademark database of 2,000 images. The retrieval results are very encouraging.

  18. Content-Based Image Retrieval Using Texture Color Shape and Region

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    2016-01-01

    Interest in accurately retrieving required images from databases of digital images is growing day by day. Images are represented by certain features to facilitate accurate retrieval of the required images. These features include texture, color, shape and region. This is a hot research area, and researchers have developed many techniques that use these features for accurate retrieval of required images from databases. In this paper we present a literature survey of Content Based Image Retrieval (CBIR) techniques based on texture, color, shape and region. We also review some of the state-of-the-art tools developed for CBIR.

  19. Speckle tracking and speckle content based composite strain imaging for solid and fluid filled lesions.

    Science.gov (United States)

    Rabbi, Md Shifat-E; Hasan, Md Kamrul

    2017-02-01

    Although strain imaging of solid lesions provides an effective way to determine their pathologic condition by displaying tissue stiffness contrast, such imaging for fluid filled lesions is still an open problem. In this paper, we propose a novel speckle content based strain imaging technique for visualization and classification of fluid filled lesions in elastography after automatic identification of the presence of fluid filled lesions. Speckle content based strain, defined as a function of speckle density based on the relationship between strain and speckle density, gives an indirect strain value for fluid filled lesions. To measure the speckle density of the fluid filled lesions, two new criteria based on the oscillation count of the windowed radio frequency signal and the local variance of the normalized B-mode image are used. An improved speckle tracking technique is also proposed for strain imaging of the solid lesions and background. A wavelet-based integration technique is then proposed for combining the strain images from these two techniques, visualizing both the solid and fluid filled lesions in a common framework. The final output of our algorithm is a high quality composite strain image which can effectively visualize both solid and fluid filled breast lesions, in addition to the speckle content of the fluid filled lesions for their discrimination. The performance of our algorithm is evaluated using in vivo patient data and compared with recently reported techniques. The results show that both the solid and fluid filled lesions can be better visualized using our technique and the fluid filled lesions can be classified with good accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Dynamic Contention Window Control Scheme in IEEE 802.11e EDCA-Based Wireless LANs

    Science.gov (United States)

    Abeysekera, B. A. Hirantha Sithira; Matsuda, Takahiro; Takine, Tetsuya

    In the IEEE 802.11 MAC protocol, access points (APs) are given the same priority as wireless terminals in terms of acquiring the wireless link, even though they aggregate several downlink flows. This feature leads to a serious throughput degradation of downlink flows compared with uplink flows. In this paper, we propose a dynamic contention window control scheme for IEEE 802.11e EDCA-based wireless LANs, in order to achieve fairness between uplink and downlink TCP flows while guaranteeing QoS requirements for real-time traffic. The proposed scheme first determines the minimum contention window size in the best-effort access category at APs, based on the number of TCP flows. It then determines the minimum and maximum contention window sizes in higher priority access categories, such as voice and video, so as to guarantee the QoS requirements of this real-time traffic. Note that the proposed scheme does not require any modification to the MAC protocol at wireless terminals. Through simulation experiments, we show the effectiveness of the proposed scheme.

  1. A rapid automatic analyzer and its methodology for effective bentonite content based on image recognition technology

    Directory of Open Access Journals (Sweden)

    Wei Long

    2016-09-01

    Fast and accurate determination of the effective bentonite content in used clay bonded sand is very important for selecting the correct mixing ratio and mixing process to obtain high-performance molding sand. Currently, the effective bentonite content is determined by testing the methylene blue absorbed in used clay bonded sand, which is usually a manual operation with several disadvantages, including a complicated process, long testing time and low accuracy. A rapid automatic analyzer of the effective bentonite content in used clay bonded sand was developed based on image recognition technology. The instrument consists of auto stirring, auto liquid removal, auto titration, step-rotation and image acquisition components, and a processor. The principle of the image recognition method is first to decompose the color images into three-channel gray images, based on the photosensitivity difference of light blue and dark blue in the red, green and blue channels, then to perform gray-value subtraction and gray-level transformation of the gray images, and finally to extract the outer-circle light blue halo and the inner-circle dark blue spot and calculate their area ratio. The titration process is judged to have reached the end-point when the area ratio is higher than the set value.
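    The end-point test described above can be sketched as follows: classify each pixel of the titration-spot photo as halo (light blue) or spot (dark blue) from its channel difference, then compare their areas. The classification thresholds here are invented for illustration; the real instrument calibrates them.

```python
def endpoint_reached(pixels, ratio_threshold=1.0):
    """pixels: list of (r, g, b) tuples from the titration spot photo."""
    halo = spot = 0
    for r, g, b in pixels:
        diff = b - r  # blue dominance differs for light vs dark blue
        if diff > 100:          # strongly blue, little red: dark-blue spot
            spot += 1
        elif 30 < diff <= 100:  # weakly blue: light-blue halo
            halo += 1
    if spot == 0:
        return False
    # End-point: the halo area exceeds the spot area by the set ratio
    return halo / spot > ratio_threshold

# Mostly halo pixels -> large area ratio -> titration end-point reached
photo = [(150, 180, 220)] * 60 + [(20, 40, 180)] * 20
```

    In the actual analyzer the same decision is made on each captured frame after the channel subtraction and gray-level transformation steps.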

  2. Information content-based gene ontology semantic similarity approaches: toward a unified framework theory.

    Science.gov (United States)

    Mazandu, Gaston K; Mulder, Nicola J

    2013-01-01

    Several approaches have been proposed for computing term information content (IC) and semantic similarity scores within the gene ontology (GO) directed acyclic graph (DAG). These approaches have contributed to improving protein analyses at the functional level. Considering the recent proliferation of these approaches, a unified theory in a well-defined mathematical framework is necessary in order to provide a basis for validating them. We review the existing IC-based ontological similarity approaches developed in the context of the biomedical and bioinformatics fields, and propose a general framework and unified description of all these measures. We conducted an experimental evaluation to assess the impact of IC approaches, different normalization models, and correction factors on the performance of a functional similarity metric. Results reveal that considering only parents or only children of terms when assessing information content or semantic similarity scores negatively impacts the approach under consideration. This study produces a unified framework for current and future GO semantic similarity measures and provides a theoretical basis for comparing different approaches. The experimental evaluation of different approaches based on different term information content models paves the way towards a solution to the issue of scoring a term's specificity in the GO DAG.
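    The classic annotation-based IC model that this family of measures builds on can be sketched compactly: p(t) is the fraction of annotations reaching term t (its own plus its descendants'), IC(t) = -log p(t), and the Resnik similarity of two terms is the IC of their most informative common ancestor. The tiny DAG and annotation counts below are invented.

```python
from math import log

parents = {            # child -> parents (GO is a DAG; a tree here for brevity)
    "root": [],
    "binding": ["root"],
    "catalytic": ["root"],
    "dna_binding": ["binding"],
    "rna_binding": ["binding"],
}
direct_counts = {"root": 0, "binding": 2, "catalytic": 4,
                 "dna_binding": 3, "rna_binding": 1}

def ancestors(term):
    """The term itself plus all terms reachable through parent links."""
    result = {term}
    for p in parents[term]:
        result |= ancestors(p)
    return result

def cumulative_count(term):
    """Annotations of the term plus all of its descendants."""
    return sum(c for t, c in direct_counts.items() if term in ancestors(t))

TOTAL = cumulative_count("root")

def ic(term):
    return -log(cumulative_count(term) / TOTAL)

def resnik(t1, t2):
    """IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(a) for a in common)
```

    The normalization models and correction factors the survey compares all modify either `ic` or the way `resnik`-style scores are combined.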

  3. Interaction-Aware Video Community-Based Content Delivery in Wireless Mobile Networks

    Directory of Open Access Journals (Sweden)

    Lujie Zhong

    2016-01-01

    The increase in the demand for content quality and the growing number of mobile users bring new challenges for multimedia streaming services in wireless mobile networks. Virtual community technologies are promising: by grouping users with common characteristics, they yield gains in resource lookup performance and system scalability. In this paper, we propose a novel interaction-aware video community-based content delivery (IVCCD) scheme for wireless mobile networks. IVCCD collects and analyzes the interaction information between users to construct a user interaction model and capture the common characteristics in the request and delivery of video content. IVCCD employs a partition-based community discovery scheme to group mobile users by these common characteristics, and uses a community member management mechanism and a resource sharing scheme to achieve low-cost community maintenance and high search performance. Extensive tests show that IVCCD achieves much better performance than other state-of-the-art solutions.

  5. Determination of Component Contents of Blend Oil Based on Characteristics Peak Value Integration.

    Science.gov (United States)

    Xu, Jing; Hou, Pei-guo; Wang, Yu-tian; Pan, Zhao

    2016-01-01

    The edible blend oil market is currently in disarray, with problems such as confused concepts, arbitrary naming, shoddy products and, above all, fuzzy standards for the compositions and ratios in blend oil; the national standard has failed to appear after eight years. The basic reason is the lack of qualitative and quantitative detection of the vegetable oils in blend oil. Edible blend oil is mixed from different vegetable oils in certain proportions and is nutritionally rich and eaten frequently in daily life. Each vegetable oil contains certain components; mixing them makes full use of their nutrients and balances them, which is beneficial to health. Accurate determination of the content of each single vegetable oil in blend oil is therefore an effective way to monitor the blend oil market. Since the types of oil in a blend are known, only their contents need to be determined. Three-dimensional fluorescence spectra are used to measure the contents in blend oil. A new data processing method is proposed that integrates characteristic peak values over a chosen characteristic area based on the Quasi-Monte Carlo method, combined with a neural network method to solve the nonlinear equations and obtain the content of each single vegetable oil in the blend. Peanut oil, soybean oil and sunflower oil were used to prepare edible blend oils, with each single oil regarded as a whole rather than considering its individual components. Recovery rates for ten configurations of edible blend oil were measured to verify the validity of the characteristic peak value integration method. The method provides an effective, highly sensitive means of detecting the component contents of a complex mixture, and improves the accuracy of recovery rates compared with the common approach of solving linear equations. It can be used to test the kinds and contents of edible vegetable oils in blend oil for food quality detection.
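    The baseline "linear equations" route the abstract compares against can be sketched directly: if each pure oil contributes its characteristic-peak integrals in proportion to its mass fraction, the blend's integrals give a linear system A x = b for the fractions x. The peak values below are invented toy numbers.

```python
def solve(A, b):
    """Solve a small square linear system by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[pivot] = M[pivot], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):   # back substitution
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Columns: peanut, soybean, sunflower; rows: three characteristic peak
# integrals of the pure oils (invented values)
A = [[0.80, 0.10, 0.05],
     [0.15, 0.70, 0.10],
     [0.05, 0.20, 0.85]]
blend = [0.5, 0.3, 0.2]  # true mass fractions of the prepared blend
b = [sum(A[i][j] * blend[j] for j in range(3)) for i in range(3)]
fractions = solve(A, b)
```

    The paper's point is that real peak responses are not perfectly linear in the fractions, which is why it replaces this step with a neural network solving the nonlinear system.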

  6. Social media use by community-based organizations conducting health promotion: a content analysis

    Science.gov (United States)

    2013-01-01

    Background Community-based organizations (CBOs) are critical channels for the delivery of health promotion programs. Much of their influence comes from the relationships they have with community members and other key stakeholders, and they may be able to harness the power of social media tools to develop and maintain these relationships. There are limited data describing if and how CBOs are using social media. This study assesses the extent to which CBOs engaged in health promotion use popular social media channels, the types of content typically shared, and the extent to which the interactive aspects of social media tools are utilized. Methods We assessed the social media presence and patterns of usage of CBOs engaged in health promotion in Boston, Lawrence, and Worcester, Massachusetts. We coded content on three popular channels: Facebook, Twitter, and YouTube. We used content analysis techniques to quantitatively summarize posts, tweets, and videos on these channels, respectively. For each organization, we coded all content put forth by the CBO on the three channels in a 30-day window. Two coders were trained and conducted the coding. Data were collected between November 2011 and January 2012. Results A total of 166 organizations were included in our census. We found that 42% of organizations used at least one of the channels of interest. Across the three channels, organization promotion was the most common theme for content (66% of posts, 63% of tweets, and 93% of videos included this content). Most organizations updated Facebook and Twitter content at rates close to recommended frequencies. We found limited interaction/engagement with audience members. Conclusions Much of the use of social media tools appeared to be uni-directional: a flow of information from the organization to the audience. By better leveraging opportunities for interaction and user engagement, these organizations can reap greater benefits from the non-trivial investment required to use these tools.

  7. High Water Content Material Based on Ba-Bearing Sulphoaluminate Cement

    Institute of Scientific and Technical Information of China (English)

    CHANG Jun; CHENG Xin; LU Lingchao; HUANG Shifeng; YE Zhengmao

    2005-01-01

    A new type of high water content material made up of two pastes is prepared: one paste is made from lime and gypsum, and the other is based on Ba-bearing sulphoaluminate cement. It has excellent properties such as slow solidification of a single paste, fast solidification of the combined pastes, fast coagulation and hardening, high early strength, good suspension at a high W/C ratio and low cost. The properties and hydration mechanism of the material were analyzed using XRD, DTA-TG and SEM. The hydrated products of the new high water content material are Ba-bearing ettringite, BaSO4, aluminum gel and C-S-H gel.

  8. Audio-based Age and Gender Identification to Enhance the Recommendation of TV Content

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Tan, Zheng-Hua; Jensen, Søren Holdt

    2013-01-01

    Recommending TV content to groups of viewers is best carried out when relevant information such as the demographics of the group is available. However, it can be difficult and time consuming to extract such information for every user in the group. This paper shows how an audio analysis of the age and gender of a group of users watching the TV can be used to recommend a sequence of N short TV content items for the group. First, a state-of-the-art audio-based classifier determines the age and gender of each user in an M-user group and creates a group profile. A genetic recommender algorithm then selects a sequence of content items matching the group profile. In the evaluation, half of the advertisements shown were selected at random and the other half using the audio-derived demographics. The recommended advertisements received a significantly higher median rating of 7.75, as opposed to 4.25 for the randomly selected advertisements.

  9. Content based Zero-Watermarking Algorithm for Authentication of Text Documents

    CERN Document Server

    Jalil, Zunera; Sabir, Maria

    2010-01-01

    Copyright protection and authentication of digital content has become a significant issue in the current digital epoch, with efficient communication mediums such as the internet. Plain text is the most widely used medium for information exchange over the internet, and it is crucial to verify the authenticity of information. Very few techniques are available for plain text watermarking and authentication. This paper presents a novel zero-watermarking algorithm for authentication of plain text. The algorithm generates a watermark based on the text contents, and this watermark can later be extracted using the extraction algorithm to prove the authenticity of the text document. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks, reporting watermark accuracy and distortion rate on 10 text samples of varying length.
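    The paper's exact feature extraction is not reproduced here, so the following is a hypothetical zero-watermarking sketch in its spirit: nothing is embedded in the text; instead a watermark is generated from content features (here, word lengths and initial letters) and registered with a trusted party, and authentication regenerates and compares it.

```python
import hashlib

def generate_watermark(text):
    """Derive a watermark from content features; the text is left untouched."""
    words = text.split()
    # Illustrative features: length and initial character of every word
    features = "".join(f"{len(w)}{w[0].lower()}" for w in words)
    return hashlib.sha256(features.encode("utf-8")).hexdigest()

def authenticate(text, registered_watermark):
    """Regenerate the watermark and compare with the registered one."""
    return generate_watermark(text) == registered_watermark

original = "The quick brown fox jumps over the lazy dog"
mark = generate_watermark(original)        # stored with a trusted authority
tampered = "The quick brown cat jumps over the lazy dog"
```

    Because the watermark is derived from the content rather than inserted into it, the scheme imposes zero distortion on the protected text, which is the defining property of zero-watermarking.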

  10. Damage healing ability of a shape-memory-polymer-based particulate composite with small thermoplastic contents

    Science.gov (United States)

    Nji, Jones; Li, Guoqiang

    2012-02-01

    The purpose of this study is to investigate the potential of a shape-memory-polymer (SMP)-based particulate composite to heal structural-length scale damage with small thermoplastic additive contents through a close-then-heal (CTH) self-healing scheme introduced in a previous study (Li and Uppu 2010 Compos. Sci. Technol. 70 1419-27). The idea is to achieve reasonable healing efficiencies with minimal sacrifice in structural load capacity. By first closing cracks, the gap between the two crack surfaces is narrowed and a smaller amount of thermoplastic particles is required to achieve healing. The particulate composite was fabricated by dispersing copolyester thermoplastic particles in a shape memory polymer matrix. It is found that, for small thermoplastic contents of less than 10%, the CTH scheme heals structural-length scale damage in the SMP particulate composite to a meaningful extent and with less sacrifice of structural capacity.

  11. OneWeb: web content adaptation platform based on W3C Mobile Web Initiative guidelines

    Directory of Open Access Journals (Sweden)

    Francisco O. Martínez P.

    2011-01-01

    Restrictions regarding navigability and user-friendliness are the main challenges the Mobile Web faces in being accepted worldwide. W3C has recently developed the Mobile Web Initiative (MWI), a set of directives for the suitable design and presentation of mobile Web interfaces. This article presents the main features and functional modules of OneWeb, an MWI-based Web content adaptation platform developed through the research activities of the Mobile Devices Applications Development Interest Group (W@PColombia), part of the Universidad del Cauca's Telematics Engineering Group. Some performance measurement results and a comparison with other Web content adaptation platforms are presented. Tests have shown suitable response times for Mobile Web environments; MWI guidelines were applied to over twenty Web pages selected for testing purposes.

  12. Content relatedness in the social web based on social explicit semantic analysis

    Science.gov (United States)

    Ntalianis, Klimis; Otterbacher, Jahna; Mastorakis, Nikolaos

    2017-06-01

    In this paper a novel content relatedness algorithm for social media content is proposed, based on the Explicit Semantic Analysis (ESA) technique. The proposed scheme takes social interactions into consideration. In particular, starting from the vector space representation model, similarity is expressed as a summation of term weight products. Term weights are estimated by a social computing method, where the strength of each term is calculated from the attention the term receives. For this purpose, each post is split into two parts, the title and the comments area, while attention is defined by the number of social interactions such as likes and shares. The overall approach is named Social Explicit Semantic Analysis (S-ESA). Experimental results on real data show the advantages and limitations of the proposed approach, and an initial comparison between ESA and S-ESA is very promising.
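    The weighting idea can be sketched as follows: each term's weight is its frequency scaled by the attention (likes and shares) of the post it occurs in, with title terms weighted more than comment terms, and relatedness is the cosine of the resulting vectors. The attention formula and the title/comment factor are illustrative assumptions, not the paper's exact choices.

```python
from collections import Counter
from math import sqrt

def social_vector(title, comments, likes, shares):
    """Term-weight vector scaled by the post's social attention."""
    attention = 1 + likes + shares
    weights = Counter()
    for term in title.lower().split():
        weights[term] += 2 * attention   # title terms assumed more visible
    for term in comments.lower().split():
        weights[term] += attention
    return weights

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

post_a = social_vector("solar power storage", "great battery idea", 10, 5)
post_b = social_vector("solar power batteries", "storage is the future", 3, 1)
post_c = social_vector("cat pictures", "so cute", 50, 20)
```

    Posts sharing attention-weighted vocabulary score high; unrelated posts score zero regardless of how much attention they attract.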

  13. Content based image retrieval using local binary pattern operator and data mining techniques.

    Science.gov (United States)

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from images. These feature vectors globally describe the visual content of an image, e.g., its texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant was then used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is widely used today.
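    The basic 3x3 LBP operator underlying the variants the study compares can be sketched directly: each pixel's 8 neighbours are thresholded against it, the bits are read in a fixed clockwise order as a byte, and the codes over the image are histogrammed into a 256-bin feature vector.

```python
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left

def lbp_code(img, y, x):
    """LBP byte for the pixel at (y, x): one bit per 8-neighbour."""
    center = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(OFFSETS):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

# Tiny grayscale image: dark centre surrounded by a bright border
img = [
    [9, 9, 9, 9],
    [9, 5, 5, 9],
    [9, 5, 5, 9],
    [9, 9, 9, 9],
]
hist = lbp_histogram(img)
```

    The histogram, not the raw codes, serves as the feature vector; LBP variants differ in neighbourhood radius, sampling and code grouping (e.g. uniform patterns), but share this thresholding core.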

  14. Branded webseries. Strategic actions of the advertiser based on corporate online fiction and marketing content

    Directory of Open Access Journals (Sweden)

    Jesús Segarra-Saavedra

    2016-05-01

    In the past, advertisers and brands attached themselves to audiovisual content, film and television through production sponsorship, product or brand placement and bartering, a strategy basically accessible only to prominent budgets. But changes in communication and the democratization of content creation and distribution have opened this tactic to other advertisers, whose advertising seeks to join brand values to entertainment through branded webseries. We present an exploratory study of the creation, dissemination, promotion, reception and socialization of webseries: brand stories based on fiction and the Internet. Using methodological triangulation, we address the case study of Risi and the three seasons of the webserie ¿Por qué esperar?, with in-depth interviews with the creatives, a descriptive analysis, and a quantitative analysis of the audience and their interactions. The comparative results describe the use of creativity, famous characters related to the target audience, and the engagement generated by this type of story.

  15. Content-Based Discovery for Web Map Service using Support Vector Machine and User Relevance Feedback

    Science.gov (United States)

    Cheng, Xiaoqiang; Qi, Kunlun; Zheng, Jie; You, Lan; Wu, Huayi

    2016-01-01

    Many discovery methods for geographic information services have been proposed: approaches for finding and matching geographic information services, methods for constructing geographic information service classification schemes, and automatic geographic information discovery. Overall, the efficiency of geographic information discovery keeps improving. There are, however, still two problems in Web Map Service (WMS) discovery that must be solved. Mismatches between the graphic contents of a WMS and the semantic descriptions in its metadata make discovery difficult for human users, and end-users and computers comprehend WMSs differently, creating semantic gaps in human-computer interaction. To address these problems, we propose an improved query process for WMSs based on the graphic contents of WMS layers, combining a Support Vector Machine (SVM) with user relevance feedback. Our experiments demonstrate that the proposed method can improve the accuracy and efficiency of WMS discovery. PMID:27861505
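    The relevance-feedback half of such an approach is commonly realized with Rocchio's rule: after the user marks results relevant or non-relevant, the query feature vector is moved toward the relevant centroid and away from the non-relevant one. This is a generic sketch; the weights alpha/beta/gamma are conventional defaults, not values from the paper.

```python
def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Shift the query vector toward relevant and away from non-relevant results."""
    dim = len(query)
    def centroid(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    rel_c = centroid(relevant)
    non_c = centroid(non_relevant)
    return [alpha * query[i] + beta * rel_c[i] - gamma * non_c[i]
            for i in range(dim)]

query = [0.2, 0.8]
relevant = [[0.9, 0.1], [0.7, 0.3]]        # user-approved WMS layers
non_relevant = [[0.1, 0.9]]
new_query = rocchio(query, relevant, non_relevant)
```

    Iterating this update over a few feedback rounds re-ranks the layer candidates toward what the user, rather than the metadata, considers relevant.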

  16. High content screening for G protein-coupled receptors using cell-based protein translocation assays

    DEFF Research Database (Denmark)

    Grånäs, Charlotta; Lundholt, Betina Kerstin; Heydorn, Arne

    2005-01-01

    G protein-coupled receptors (GPCRs) have been one of the most productive classes of drug targets for several decades, and new technologies for GPCR-based discovery promise to keep this field active for years to come. While molecular screens for GPCR agonist- and antagonist-based drugs have served as valuable discovery tools for several years, the application of high content cell-based screening to GPCR discovery has opened up additional possibilities, such as direct tracking of GPCRs, G proteins and other signaling pathway components using intracellular translocation assays. These assays provide the capability to probe GPCR function at the cellular level with better resolution than has previously been possible, and offer practical strategies for more definitive selectivity evaluation and counter-screening in the early stages of drug discovery. The potential of cell-based translocation assays for GPCR drug discovery is discussed.

  17. A candidate gene-based association study of tocopherol content and composition in rapeseed (Brassica napus)

    Directory of Open Access Journals (Sweden)

    Steffi Fritsche

    2012-06-01

    Rapeseed (Brassica napus L.) is the most important oil crop of temperate climates. Rapeseed oil contains tocopherols, also known as vitamin E, which is an indispensable nutrient for humans and animals due to its antioxidant and radical scavenging abilities. Moreover, tocopherols are also important for the oxidative stability of vegetable oils. Therefore, seed oil with increased tocopherol content or altered tocopherol composition is a target for breeding. We investigated the role of nucleotide variations within candidate genes from the tocopherol biosynthesis pathway. Field trials were carried out with 229 accessions from a worldwide B. napus collection, which was divided into two panels of 96 and 133 accessions. Seed tocopherol content and composition were measured by HPLC. High heritabilities were found for both traits, ranging from 0.62 to 0.94. We identified polymorphisms by sequencing selected regions of the tocopherol genes from the 96-accession panel. Subsequently, we determined the population structure (Q) and relative kinship (K) by genotyping with genome-wide distributed SSR markers. Association studies were performed using two models, the structure-based GLM+Q and the PK mixed model. Between 12 and 26 polymorphisms within two genes (BnaX.VTE3.a, BnaA.PDS1.c) were significantly associated with tocopherol traits. The SNPs explained up to 16.93% of the genetic variance for tocopherol composition and up to 10.48% for total tocopherol content. Based on the sequence information we designed CAPS markers for genotyping the 133 accessions of the second panel. Significant associations with various tocopherol traits confirmed the results from the first experiment. We demonstrate that the polymorphisms within the tocopherol genes clearly impact tocopherol content and composition in B. napus seeds. We suggest that these nucleotide variations may be used as selectable markers for breeding rapeseed with enhanced tocopherol quality.

  18. A fast and scalable content transfer protocol (FSCTP) for VANET based architecture

    Science.gov (United States)

    Santamaria, A. F.; Scala, F.; Sottile, C.; Tropea, M.; Raimondo, P.

    2016-05-01

    In modern Vehicular Ad-hoc Network (VANET) based systems, more and more applications require large amounts of data to be exchanged among vehicles and infrastructure entities. Due to mobility issues and unplanned events that may occur, it is important that contents be transferred as fast as possible while taking into account the consistency of the exchanged data and the reliability of the connections. To address these issues, in this work we propose a new data transfer protocol called the Fast and Scalable Content Transfer Protocol (FSCTP). This protocol performs data transfer over a bidirectional channel between content suppliers and receivers, exploiting several cooperative sessions. Each session is based on the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP) to start and manage the data transfer. In urban areas the VANET scenario is often composed of several vehicles and infrastructure points. The main idea is to exploit ad-hoc connections between vehicles to reach content suppliers. Moreover, to obtain a faster data transfer, more than one session is exploited to achieve a higher transfer rate. It is, of course, important to manage data transfer between suppliers to avoid redundancy and wasted resources. The main goal is to instantiate an efficiently managed cooperative multi-session layer in a VANET environment, exploiting the wide coverage area while avoiding the common issues known in this kind of scenario. High mobility and unstable connections between nodes are among the most common issues to address, so a cooperative design across the network, transport and application layers is needed.

  19. Content-based image retrieval for interstitial lung diseases using classification confidence

    Science.gov (United States)

    Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan

    2013-02-01

    A Content-Based Image Retrieval (CBIR) system can exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images, assisting radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular, etc.) based on their texture-like appearance. Therefore, the analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of a query image. The degree of confidence of the NN classifier is analyzed using a Naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average precision of 92.60% with the lowest standard deviation of 20.82%.

  20. BI-LEVEL CLASSIFICATION OF COLOR INDEXED IMAGE HISTOGRAMS FOR CONTENT BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Karpagam Vilvanathan

    2013-01-01

    Full Text Available This dissertation proposes content-based image classification and retrieval with Classification and Regression Trees (CART). A simple CBIR system (WH) is designed and shown to be efficient even in the presence of distorted and noisy images. WH exhibits good performance in terms of precision without using any intensive image-processing feature extraction techniques. A unique indexed color histogram and wavelet-decomposition-based horizontal, vertical and diagonal image attributes have been chosen as the primary attributes in the design of the retrieval system. The output feature vectors of the WH method serve as input to the proposed decision-tree-based image classification and retrieval system. The performance of the proposed content-based image classification and retrieval system is evaluated on the standard SIMPLIcity dataset, which has been used in several previous works, with precision as the metric. Holdout validation and k-fold cross-validation are used to validate the results. The proposed system performs markedly better than SIMPLIcity and all the other compared methods.

  1. Content-Based Image Retrieval using Color Moment and Gabor Texture Feature

    Directory of Open Access Journals (Sweden)

    K. Hemachandran

    2012-09-01

    Full Text Available Content-based image retrieval (CBIR) has become one of the most active research areas in the past few years. Many indexing techniques are based on global feature distributions. However, these global distributions have limited discriminating power because they are unable to capture local image information. In this paper, we propose a content-based image retrieval method which combines color and texture features. To improve the discriminating power of color indexing techniques, we encode a minimal amount of spatial information in the color index. For the color features, an image is divided horizontally into three equal non-overlapping regions. From each region, we extract the first three moments of the color distribution from each color channel and store them in the index; i.e., for the HSV color space, we store 27 floating-point numbers per image. For the texture features, Gabor texture descriptors are adopted. We assign a weight to each feature and calculate the similarity of the combined color and texture features using the Canberra distance as the similarity measure. Experimental results show that the proposed method has higher retrieval accuracy than other conventional methods combining color moments and texture features based on a global feature approach.
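
    The color part of the feature vector described above (three moments per channel for each of three horizontal regions, 27 numbers for an HSV image) and the Canberra similarity measure can be sketched as follows; the Gabor texture descriptors and the per-feature weights are omitted for brevity:

    ```python
    import numpy as np

    def color_moments(img):
        """First three moments (mean, std, skewness) per channel, per horizontal
        third of the image. img: H x W x 3 array -> 27 floating-point numbers."""
        thirds = np.array_split(np.arange(img.shape[0]), 3)
        feats = []
        for rows in thirds:
            region = img[rows].reshape(-1, 3).astype(float)
            mean = region.mean(axis=0)
            std = region.std(axis=0)
            skew = np.cbrt(((region - mean) ** 3).mean(axis=0))  # cube root of 3rd moment
            feats.extend(np.concatenate([mean, std, skew]))
        return np.array(feats)

    def canberra(a, b):
        """Canberra distance used as the similarity measure."""
        denom = np.abs(a) + np.abs(b)
        mask = denom > 0  # skip terms where both components are zero
        return np.sum(np.abs(a - b)[mask] / denom[mask])

    img1 = np.random.default_rng(1).random((60, 40, 3))
    img2 = np.random.default_rng(2).random((60, 40, 3))
    f1, f2 = color_moments(img1), color_moments(img2)
    print(f1.shape, canberra(f1, f2))
    ```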

  2. Channel Allocation Based on Content Characteristics for Video Transmission in Time-Domain-Based Multichannel Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Md. Jalil Piran

    2015-01-01

    Full Text Available This paper proposes a method for channel allocation based on video content requirements and the quality of the available channels in cognitive radio networks (CRNs). Our objective is to save network bandwidth and achieve high-quality video delivery. In this method, the content is divided into clusters based on scene complexity and PSNR. To allocate channels to the clusters over multichannel CRNs, we first need to identify the licensee's activity and then maximize the opportunistic usage accordingly. Therefore, we classify short and long transmission opportunities based on the licensee's activities using a Bayesian nonparametric inference model. Furthermore, to prevent transmission interruption, we consider the underlay mode for transmission of the clusters with a lower bitrate. Next, we map the available spectrum opportunities to the content clusters according to both the quality of the channels and the requirements of the clusters. Then, a distortion optimization model is constructed according to the network transmission mechanism. Finally, to maximize the average quality of the delivered video, an optimization problem is defined to determine the best bitrate for each cluster by maximizing the sum of the logarithms of the frame rates. Our extensive simulation results demonstrate the superior performance of the proposed method in terms of spectrum efficiency and the quality of the delivered video.

  3. A comparison of literature-based and content-based guided reading materials on elementary student reading and science achievement

    Science.gov (United States)

    Guns, Christine

    Guided reading, as developed by Fountas and Pinnell (2001), has been a staple of elementary reading programs for the past decade. Teachers in the elementary school setting utilize this small-group, tailored instruction to differentiate and meet the instructional needs of their students. The literature shows academic benefit for students who have special needs, such as learning disabilities, autism, and hearing impairments, but the academic impact on regular education students has not been investigated. The purpose of this quasi-experimental study was to investigate the academic impact of the use of content-related (Group C) and traditional literature-based (Group L) reading materials. During the Living Systems and Life Processes unit in science, two teachers self-selected to utilize science-related materials for guided reading instruction while the other three teacher participants utilized their normal literature-based guided reading materials. The two groups were compared using an ANCOVA in this pre-test/post-test design. The dependent variables included the Reading for Application and Instruction assessment (RAI) and a Living Systems and Life Processes assessment (LSA). Further analysis compared students of different reading levels and gender. The data analyses revealed a practically but not statistically significant difference in science performance. Below-level male and female students performed better on the LSA when provided with content-related guided reading materials. As far as reading achievement is concerned, students in both groups had comparable results. The teachers provided guided reading instruction to their students with fidelity and adjusted their practices to the needs of their students. The content-related teachers utilized a larger number of expository texts than the literature-based teachers. 
These teachers expressed the desire to continue the practice of providing the students with

  4. Textual and visual content-based anti-phishing: a Bayesian approach.

    Science.gov (United States)

    Zhang, Haijun; Liu, Gang; Chow, Tommy W S; Liu, Wenyin

    2011-10-01

    A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases.
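
    A minimal sketch of the naive Bayes text side of such a framework, computing P(phishing | words) via Bayes' rule in log space with Laplace smoothing; the earth-mover's-distance image classifier and the Bayesian threshold estimation are omitted, and the token lists below are purely illustrative:

    ```python
    import math
    from collections import Counter

    def train_nb(docs, labels):
        """Word-count naive Bayes; labels are 'phish'/'legit'."""
        counts = {"phish": Counter(), "legit": Counter()}
        priors = Counter(labels)
        for words, lab in zip(docs, labels):
            counts[lab].update(words)
        vocab = set().union(*counts.values())
        return counts, priors, vocab

    def posterior_phish(words, counts, priors, vocab):
        """P(phish | words) with Laplace-smoothed word likelihoods."""
        logp = {}
        total = sum(priors.values())
        for lab in ("phish", "legit"):
            n = sum(counts[lab].values())
            logp[lab] = math.log(priors[lab] / total)
            for w in words:
                logp[lab] += math.log((counts[lab][w] + 1) / (n + len(vocab)))
        m = max(logp.values())
        z = sum(math.exp(v - m) for v in logp.values())
        return math.exp(logp["phish"] - m) / z

    docs = [["verify", "account", "password"], ["weekly", "team", "meeting"],
            ["urgent", "password", "reset"], ["project", "status", "report"]]
    labels = ["phish", "legit", "phish", "legit"]
    model = train_nb(docs, labels)
    print(posterior_phish(["password", "urgent"], *model))
    ```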

  5. Mutual Perception of USA and China based on Content-Analysis of Media

    Directory of Open Access Journals (Sweden)

    Farida Halmuratovna Autova

    2015-12-01

    Full Text Available The article evaluates the mutual perception of the United States and China in the XXI century, based on a content analysis of American and Chinese media. The research methodology includes both content and event analysis. To conduct the content analysis we used leading weekly news magazines of the US and China, “Newsweek” and “Beijing Review”. The events limiting the time frame of the analysis are Barack Obama's re-election to a second term in 2012 and Xi Jinping taking office as President of China in 2013. Accordingly, we analyzed the issues of each magazine one year before and after these events. The thematic areas covered by the articles (politics, economy, culture), as well as the stylistic coloring of article titles, are examined. According to the results of the analysis, China perceives itself confidently in the international arena. In turn, the US media, while emphasizing the speed and power of China's rise, point out the negative consequences of such a jump (“growing pains”) and the challenges facing China in domestic and foreign policy, in order to create a negative image of China in the minds of American citizens.

  6. Research on algorithm about content-based segmentation and spatial transformation for stereo panorama

    Science.gov (United States)

    Li, Zili; Xia, Xuezhi; Zhu, Guangxi; Zhu, Yaoting

    2004-03-01

    The principle of constructing a G&IBMR virtual scene based on a stereo panorama with binocular stereovision is put forward. Closed cubic B-splines are used for content-based segmentation of the virtual objects of the stereo panorama, and all objects in the current viewing frustum are ordered in a current object linked list (COLL) by their depth information. A formula is derived to calculate the depth of a point in the virtual scene from its parallax, based on a parallel binocular vision model. A bilinear interpolation algorithm is presented to deform the segmentation template and perform image splicing between three key positions. We also use the positional and directional transformation of the binocular virtual camera bound to the user avatar to drive the transformation of the stereo panorama, so as to achieve real-time consistency of perspective relationships and image masking. The experimental results show that the algorithm in this paper is effective and feasible.
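
    Under the parallel binocular model mentioned above, the standard depth-from-parallax relation is Z = f·B/d (focal length times camera baseline over disparity). The abstract does not reproduce the paper's exact formula, so the following is a generic sketch of that standard relation:

    ```python
    def depth_from_parallax(focal_px, baseline, disparity_px):
        """Depth of a scene point under the parallel binocular camera model:
        Z = f * B / d, where f is the focal length (pixels), B the baseline
        between the two cameras, and d the horizontal parallax (pixels)."""
        if disparity_px <= 0:
            raise ValueError("point at infinity or invalid disparity")
        return focal_px * baseline / disparity_px

    # A point with 8 px disparity, 1000 px focal length, 6.5 cm baseline:
    print(depth_from_parallax(1000, 0.065, 8))  # -> 8.125 (metres)
    ```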

  7. A COMPARATIVE STUDY OF DIMENSION REDUCTION TECHNIQUES FOR CONTENT-BASED IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    G. Sasikala

    2010-08-01

    Full Text Available Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. This paper discusses a method for dimensionality reduction called Maximum Margin Projection (MMP). MMP aims at maximizing the margin between positive and negative samples in each neighborhood. It is designed to discover the local manifold structure. Therefore, MMP is likely to be more suitable for image retrieval systems, where nearest-neighbor search is usually involved. The performance of these approaches is measured by a user evaluation. It is found that the MMP-based technique provides more functionality and capability to support the features of information-seeking behavior and produces better performance in searching images.

  8. Automatic video shot detection and characterization for content-based video retrieval

    Science.gov (United States)

    Sun, Jifeng; Cui, Songye; Xu, Xing; Luo, Ying

    2001-09-01

    In this paper, several video shot detection technologies are first discussed. An edited video contains two kinds of shot boundaries, known as straight cuts and optical cuts. Experimental results using a variety of videos are presented to demonstrate that the moving-window detection algorithm and the 10-step difference histogram comparison algorithm are effective for detecting both kinds of shot cuts. After shot isolation, methods for shot characterization were investigated. We present a detailed discussion of key-frame extraction and review the visual features of key-frames, particularly the color feature based on the HSV model. Video retrieval methods based on key-frames are presented at the end of this section. This paper also presents an integrated system solution for computer-assisted video parsing and content-based video retrieval. The application software package was programmed on the Visual C++ development platform.
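
    A pairwise histogram-difference cut detector illustrates the basic idea behind histogram comparison (the moving-window and 10-step difference variants in the paper are more robust; the threshold and bin count here are arbitrary choices for the sketch):

    ```python
    import numpy as np

    def gray_hist(frame, bins=32):
        """Normalized intensity histogram of one frame (values in 0..255)."""
        h, _ = np.histogram(frame, bins=bins, range=(0, 256))
        return h / h.sum()

    def detect_cuts(frames, threshold=0.5):
        """Flag a straight cut between consecutive frames whose histogram
        difference (sum of absolute bin differences) exceeds the threshold."""
        cuts = []
        for i in range(1, len(frames)):
            d = np.abs(gray_hist(frames[i]) - gray_hist(frames[i - 1])).sum()
            if d > threshold:
                cuts.append(i)
        return cuts

    rng = np.random.default_rng(0)
    dark = [rng.integers(0, 80, (48, 64)) for _ in range(5)]       # shot 1
    bright = [rng.integers(170, 256, (48, 64)) for _ in range(5)]  # shot 2
    print(detect_cuts(dark + bright))  # cut detected at frame index 5
    ```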

  9. Content Based Image Retrieval using Novel Gaussian Fuzzy Feed Forward-Neural Network

    Directory of Open Access Journals (Sweden)

    C. R.B. Durai

    2011-01-01

    Full Text Available Problem statement: With the extensive digitization of images, diagrams and paintings, traditional keyword-based search has been found to be inefficient for retrieval of the required data. A Content-Based Image Retrieval (CBIR) system responds to image queries as input and relies on image content, using techniques from computer vision and image processing to interpret and understand it, while using techniques from information retrieval and databases to rapidly locate and retrieve images suiting an input query. CBIR finds extensive applications in the field of medicine, as it assists doctors in making better decisions by consulting the CBIR system to gain confidence. Approach: Various methods have been proposed for CBIR using low-level image features like histograms, color layout and texture, and analysis of the image in the frequency domain. Similarly, various classification algorithms like the Naïve Bayes classifier, Support Vector Machines, decision tree induction algorithms and neural-network-based classifiers have been studied extensively. We propose to extract features from an image using the Discrete Cosine Transform, select relevant features using information gain, and classify with a Gaussian Fuzzy Feed-Forward Neural Network algorithm. Results and Conclusion: We applied the proposed procedure to 180 brain MRI images, of which 72 images were used for testing and the remaining for training. The classification accuracy obtained was 95.83% for a three-class problem. This research focused on a narrow search; further investigation is needed to evaluate larger classes.
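
    The DCT feature extraction step can be sketched as follows: compute the 2-D DCT of the image and keep the low-frequency coefficients as a compact feature vector. The information-gain selection and the Gaussian fuzzy network itself are omitted, and the number of coefficients kept is an arbitrary choice for the sketch:

    ```python
    import numpy as np

    def dct2(block):
        """2-D DCT-II of a square block (orthonormal), via the 1-D DCT matrix."""
        n = block.shape[0]
        k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        C[0, :] = np.sqrt(1.0 / n)
        return C @ block @ C.T

    def dct_features(img, keep=10):
        """Low-frequency DCT coefficients (ordered by frequency index sum,
        approximating zig-zag order) as a compact feature vector."""
        coeffs = dct2(img.astype(float))
        idx = sorted(((u + v, u, v) for u in range(img.shape[0])
                      for v in range(img.shape[1])))[:keep]
        return np.array([coeffs[u, v] for _, u, v in idx])

    img = np.random.default_rng(0).random((8, 8))
    print(dct_features(img).shape)  # (10,)
    ```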

  10. Coherence and content of conflict-based narratives: associations to family risk and maladjustment.

    Science.gov (United States)

    Müller, Eva; Perren, Sonja; Wustmann Seiler, Corina

    2014-10-01

    This study examined the role of structural and content characteristics of children's conflict-based narratives (coherence, positive and aggressive themes) in the association between early childhood family risk and children's internalizing and externalizing problems in a sample of 193 children (97 girls, 96 boys) aged 3 to 5 years (M = 3.85, SD = .48). Parents participated in an interview on family-related risk factors; teachers and parents completed the Strengths and Difficulties Questionnaire; children completed conflict-based narratives based on the MacArthur Story Stem Battery (MSSB). We specifically investigated the mediating and moderating role of narrative coherence and content themes in the association between family risk and children's internalizing and externalizing problems. Children's narrative coherence was associated with better adjustment and had a buffering effect on the negative relation between family risk and children's internalizing problems. Positive themes were negatively associated with externalizing problems. Telling narratives with many positive and negative themes buffered the negative association between family risk and teacher-reported externalizing problems. In sum, the findings suggest that children who are able to tell coherent and enriched narratives may be buffered from the impact of family risk on their symptoms, and that producing positive rather than aggressive themes is associated with fewer externalizing problems.

  11. Web Video Categorization based on Wikipedia Categories and Content-Duplicated Open Resources

    CERN Document Server

    Chen, Zhineng; Song, Yicheng; Zhang, Yongdong; Li, Jintao

    2010-01-01

    This paper presents a novel approach for web video categorization by leveraging Wikipedia categories (WikiCs) and open resources describing the same content as the video, i.e., content-duplicated open resources (CDORs). Note that current approaches only collect CDORs within one or a few media forms and ignore CDORs of other forms. We explore all these resources by utilizing WikiCs and commercial search engines. Given a web video, its discriminative Wikipedia concepts are first identified and classified. Then a textual query is constructed, from which CDORs are collected. Based on these CDORs, we propose to categorize web videos in the space spanned by WikiCs rather than that spanned by raw tags. Experimental results demonstrate the effectiveness of both the proposed CDOR collection method and the WikiC voting categorization algorithm. In addition, the categorization model built based on both WikiCs and CDORs achieves better performance compared with the models built based on only one of them as well as ...

  12. Detection of spam web page using content and link-based techniques: A combined approach

    Indian Academy of Sciences (India)

    Rajendra Kumar Roul; Shubham Rohan Asthana; Mit Shah; Dhruvesh Parikh

    2016-02-01

    Web spam is a technique through which irrelevant pages get a higher rank than relevant pages in a search engine's results. Spam pages are generally insufficient and inappropriate results for the user. Many researchers are working in this area to detect spam pages. However, no universally efficient technique has been developed so far that can detect all spam pages. This paper is an effort in that direction, where we propose a combined approach of content- and link-based techniques to identify spam pages. The content-based approach uses a term density and Part of Speech (POS) ratio test, and in the link-based approach we explore collaborative detection using personalized page ranking to classify a Web page as spam or non-spam. For experimental purposes, the WEBSPAM-UK2006 dataset has been used. The results have been compared with some of the existing approaches. A good and promising F-measure of 75.2% demonstrates the applicability and efficiency of our approach.
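
    The term density test on the content side can be sketched as a simple ratio; an unusually high density of suspicious terms is one content-based spam signal. The spam-term list below is purely illustrative, and the POS ratio test and personalized-PageRank link analysis are omitted:

    ```python
    def term_density(page_words, spam_terms):
        """Fraction of words on the page drawn from a set of suspicious terms."""
        if not page_words:
            return 0.0
        hits = sum(1 for w in page_words if w.lower() in spam_terms)
        return hits / len(page_words)

    spam_terms = {"cheap", "free", "casino", "viagra"}  # illustrative list
    page = "buy cheap cheap watches free free free shipping casino".split()
    print(round(term_density(page, spam_terms), 2))  # -> 0.67
    ```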

  13. A Kinect-Based Framework For Better User Experience in Real-Time Audiovisual Content Manipulation

    DEFF Research Database (Denmark)

    Potetsianakis, Emmanouil; Ksylakis, Emmanouil; Triantafyllidis, Georgios

    2014-01-01

    the inputs from Microsoft Kinect as a controller interface. Originally introduced as a peripheral of the Xbox, Kinect is a multimodal device equipped with an RGB camera, a depth sensor and a microphone array. We use those inputs in order to provide a non-tactile controller abstraction to the user, targeting multimedia...... content creation. Current Kinect-based solutions try to recognize natural gestures of the user and classify them as controller actions. The novelty of our implementation is that instead of extracting gesture features, we directly map the inputs from the Kinect to a suitable set of values......

  14. Overcoming PCR Inhibition During DNA-Based Gut Content Analysis of Ants.

    Science.gov (United States)

    Penn, Hannah J; Chapman, Eric G; Harwood, James D

    2016-10-01

    Generalist predators play an important role in many terrestrial systems, especially within agricultural settings, and ants (Hymenoptera: Formicidae) often constitute important linkages of these food webs, as they are abundant and influential in these ecosystems. Molecular gut content analysis provides a means of delineating food web linkages of ants based on the presence of prey DNA within their guts. Although this method can provide insight, its use on ants has been limited, potentially due to inhibition when amplifying gut content DNA. We designed a series of experiments to determine the ant organs responsible for inhibition and identified variation in inhibition among three species (Tetramorium caespitum (L.), Solenopsis invicta Buren, and Camponotus floridanus (Buckley)). No body segment, other than the gaster, caused significant inhibition. Following dissection, we determined that within the gaster, the digestive tract and crop cause significant levels of inhibition. We found significant differences in the frequency of inhibition among the three species tested, with inhibition most evident in T. caespitum. The most effective method to prevent inhibition before DNA extraction was to exude crop contents and crop structures onto UV-sterilized tissue. However, if extracted samples exhibit inhibition, addition of bovine serum albumin to PCR reagents will overcome this problem. These methods will circumvent gut content inhibition within selected species of ants, thereby allowing more detailed and reliable studies of ant food webs. As little is known about the prevalence of this inhibition in other species, it is recommended that the protocols in this study are used until otherwise shown to be unnecessary. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Appropriate Tealeaf Harvest Timing Determination Referring Fiber Content in Tealeaf Derived from Ground based Nir Camera Images

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2015-08-01

    Full Text Available A method is proposed for determining the most appropriate tealeaf harvest timing with reference to the fiber content of tealeaves, which can be estimated from ground-based Near Infrared (NIR) camera images. In the proposed method, NIR camera images of tealeaves are used to estimate both nitrogen content and fiber content. Nitrogen content is highly correlated with theanine (amino acid) content in tealeaves, and theanine-rich tealeaves taste good. Meanwhile, the age of tealeaves depends on fiber content: as tealeaves age, their fiber content increases, and tealeaf shape volume also increases with fiber content. Fiber-rich tealeaves generally do not taste as good. There is a negative correlation between fiber content and the NIR reflectance of tealeaves. Therefore, tealeaf quality in terms of nitrogen and fiber content can be estimated from NIR camera images. Also, the shape volume of tealeaves is highly correlated with the NIR reflectance of the tealeaf surface, so not only tealeaf quality but also harvest amount can be estimated from NIR camera images. Experimental results show that the proposed method works well for estimating the appropriate tealeaf harvest timing from the fiber content estimated with NIR camera images.
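
    The reported negative correlation between fiber content and NIR reflectance suggests a simple linear calibration. The following sketch fits such a line on hypothetical calibration pairs; all numbers are invented for illustration and are not from the study:

    ```python
    import numpy as np

    # Hypothetical calibration pairs: NIR reflectance vs measured fiber content (%).
    # Higher fiber -> lower reflectance, matching the reported negative correlation.
    reflectance = np.array([0.62, 0.58, 0.55, 0.50, 0.46, 0.41])
    fiber_pct = np.array([18.0, 20.5, 22.0, 25.0, 27.5, 30.0])

    slope, intercept = np.polyfit(reflectance, fiber_pct, 1)

    def estimate_fiber(nir_reflectance):
        """Estimate fiber content (%) from NIR reflectance via the linear fit."""
        return slope * nir_reflectance + intercept

    print(slope < 0, round(estimate_fiber(0.52), 1))
    ```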

  16. Engineering web maps with gradual content zoom based on streaming vector data

    Science.gov (United States)

    Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter

    2016-04-01

    Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. 
Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual

  17. Adapting content-based image retrieval techniques for the semantic annotation of medical images.

    Science.gov (United States)

    Kumar, Ashnil; Dyer, Shane; Kim, Jinman; Li, Changyang; Leong, Philip H W; Fulham, Michael; Feng, Dagan

    2016-04-01

    The automatic annotation of medical images is a prerequisite for building comprehensive semantic archives that can be used to enhance evidence-based diagnosis, physician education, and biomedical research. Annotation also has important applications in the automatic generation of structured radiology reports. Much of the prior research work has focused on annotating images with properties such as the modality of the image, or the biological system or body region being imaged. However, many challenges remain for the annotation of high-level semantic content in medical images (e.g., presence of calcification, vessel obstruction, etc.) due to the difficulty in discovering relationships and associations between low-level image features and high-level semantic concepts. This difficulty is further compounded by the lack of labelled training data. In this paper, we present a method for the automatic semantic annotation of medical images that leverages techniques from content-based image retrieval (CBIR). CBIR is a well-established image search technology that uses quantifiable low-level image features to represent the high-level semantic content depicted in those images. Our method extends CBIR techniques to identify or retrieve a collection of labelled images that have similar low-level features and then uses this collection to determine the best high-level semantic annotations. We demonstrate our annotation method using weighted nearest-neighbour retrieval and multi-class classification to show that our approach is viable regardless of the underlying retrieval strategy. We experimentally compared our method with several well-established baseline techniques (classification and regression) and showed that our method achieved the highest accuracy in the annotation of liver computed tomography (CT) images.
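
    The weighted nearest-neighbour annotation strategy can be sketched as follows: retrieve the k most similar labelled images and let each vote for its annotation with a weight inversely proportional to its feature distance. The feature vectors and labels below are toy examples, not the paper's liver CT features:

    ```python
    import numpy as np

    def annotate(query, database, k=3):
        """Weighted nearest-neighbour annotation: the k closest labelled images
        vote for their label, weighted by inverse feature distance."""
        dists = [(np.linalg.norm(query - feats), label) for feats, label in database]
        votes = {}
        for d, label in sorted(dists)[:k]:
            votes[label] = votes.get(label, 0.0) + 1.0 / (d + 1e-9)
        return max(votes, key=votes.get)

    # Toy database of (feature vector, annotation) pairs.
    db = [(np.array([0.10, 0.20]), "calcification"),
          (np.array([0.15, 0.25]), "calcification"),
          (np.array([0.90, 0.80]), "vessel obstruction"),
          (np.array([0.85, 0.75]), "vessel obstruction")]
    print(annotate(np.array([0.12, 0.22]), db))  # -> calcification
    ```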

  18. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that further enhances safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs the ability to perform logical operations in order to adjust to inputs received either from users or from real-time plant status databases. Without the ability to perform logical operations, the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions that create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the
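
    A sketch of the idea: each step is a basic XML element whose attributes tell the CBPS what functionality to generate (show text, request a decision, accept input). The element and attribute names below are illustrative, not the actual INL schema:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical XML for two procedure steps.
    PROCEDURE_XML = """
    <procedure id="OP-101">
      <step number="1" type="instruction">
        <text>Verify pump P-1 is running.</text>
      </step>
      <step number="2" type="decision">
        <text>Is discharge pressure above 50 psig?</text>
        <onYes next="3"/>
        <onNo next="5"/>
      </step>
    </procedure>
    """

    def load_steps(xml_text):
        """Parse steps; the 'type' attribute drives the functionality the
        CBPS generates for each step."""
        root = ET.fromstring(xml_text)
        steps = []
        for step in root.findall("step"):
            steps.append({
                "number": int(step.get("number")),
                "type": step.get("type"),
                "text": step.findtext("text").strip(),
            })
        return steps

    for s in load_steps(PROCEDURE_XML):
        print(s["number"], s["type"], "-", s["text"])
    ```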

  19. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval.

    Science.gov (United States)

    Névéol, Aurélie; Deserno, Thomas M; Darmoni, Stéfan J; Güld, Mark Oliver; Aronson, Alan R

    2008-09-18

    One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the "gold standard" codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system.

  20. A Survey On: Content Based Image Retrieval Systems Using Clustering Techniques For Large Data sets

    Directory of Open Access Journals (Sweden)

    Monika Jain

    2011-12-01

    Full Text Available Content-based image retrieval (CBIR) is a new but widely adopted method for finding images from vast and unannotated image databases. As networks and multimedia technologies become more popular, users are not satisfied with traditional information retrieval techniques, so nowadays content-based image retrieval (CBIR) is becoming a source of exact and fast retrieval. In recent years, a variety of techniques have been developed to improve the performance of CBIR. Data clustering is an unsupervised method for extracting hidden patterns from huge data sets. With large data sets there is a possibility of high dimensionality, and achieving both accuracy and efficiency for high-dimensional data sets with an enormous number of samples is a challenging arena. In this paper the clustering techniques are discussed and analysed. We also propose a method, HDK, that uses more than one clustering technique to improve the performance of CBIR. This method makes use of hierarchical and divide-and-conquer K-Means clustering techniques with equivalency and compatible relation concepts to improve the performance of K-Means for use in high-dimensional datasets. It also introduces features like color, texture and shape for an accurate and effective retrieval system.
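
The two-level idea behind HDK (a coarse pass splits the database, then K-Means runs independently inside each partition) can be sketched in miniature. The `kmeans` and `two_level_kmeans` functions and the toy 2-D "feature" blobs below are illustrative, not the authors' implementation; a deterministic initialization (first k points) keeps the sketch reproducible.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D feature vectors; deterministic init (first k points)."""
    centers = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute each non-empty cluster's centroid
                centers[i] = [sum(coord) / len(cl) for coord in zip(*cl)]
    return centers, clusters

def two_level_kmeans(points, k_coarse, k_fine):
    """HDK-style divide and conquer: a coarse pass partitions the database,
    then k-means runs independently inside each partition."""
    _, coarse = kmeans(points, k_coarse)
    fine_centers = []
    for cl in coarse:
        if len(cl) >= k_fine:
            centers, _ = kmeans(cl, k_fine)
            fine_centers.extend(centers)
    return fine_centers

# Two well-separated blobs of toy "image features"
blob_a = [[i * 0.01, 0.0] for i in range(10)]
blob_b = [[5.0 + i * 0.01, 5.0] for i in range(10)]
centers = two_level_kmeans(blob_a + blob_b, k_coarse=2, k_fine=2)
```

The payoff for high-dimensional CBIR is that each fine-level K-Means only ever sees its own partition, so per-iteration cost drops even though the total number of retrieval centroids grows.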

  1. A novel evolutionary approach for optimizing content-based image indexing algorithms.

    Science.gov (United States)

    Saadatmand-Tarzjan, Mahdi; Moghaddam, Hamid Abrishami

    2007-02-01

    Optimization of content-based image indexing and retrieval (CBIR) algorithms is a complicated and time-consuming task since each time a parameter of the indexing algorithm is changed, all images in the database should be indexed again. In this paper, a novel evolutionary method called evolutionary group algorithm (EGA) is proposed for complicated time-consuming optimization problems such as finding optimal parameters of content-based image indexing algorithms. In the new evolutionary algorithm, the image database is partitioned into several smaller subsets, and each subset is used by an updating process as training patterns for each chromosome during evolution. This is in contrast to genetic algorithms that use the whole database as training patterns for evolution. Additionally, for each chromosome, a parameter called age is defined that implies the progress of the updating process. Similarly, the genes of the proposed chromosomes are divided into two categories: evolutionary genes that participate to evolution and history genes that save previous states of the updating process. Furthermore, a new fitness function is defined which evaluates the fitness of the chromosomes of the current population with different ages in each generation. We used EGA to optimize the quantization thresholds of the wavelet-correlogram algorithm for CBIR. The optimal quantization thresholds computed by EGA improved significantly all the evaluation measures including average precision, average weighted precision, average recall, and average rank for the wavelet-correlogram method.

  2. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Science.gov (United States)

    Sridhar, Akshay; Doyle, Scott; Madabhushi, Anant

    2015-01-01

    Context: Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity of a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims: In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. Settings and Design: BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods: The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used: We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. 
Results: BoSE outperformed SE both in terms of

  3. Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces

    Directory of Open Access Journals (Sweden)

    Akshay Sridhar

    2015-01-01

    Full Text Available Context: Content-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity of a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important. Aims: In this paper we present boosted spectral embedding (BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space. Settings and Design: BoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images. Materials and Methods: The following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. high levels of lymphocytic infiltration). Statistical Analysis Used: We plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier. 
Results: BoSE outperformed SE both

  4. In-depth Evaluation of Content-Based Phishing Detection to Clarify Its Strengths and Limitations

    Science.gov (United States)

    Komiyama, Koichiro; Seko, Toshinori; Ichinose, Yusuke; Kato, Kei; Kawano, Kohei; Yoshiura, Hiroshi

    Zhang et al. proposed a method for content-based phishing detection (CBD) and reported its high performance in detecting phishing sites written in English. However, the evaluations of the CBD method performed by Zhang et al. and others were small-scale and simply measured the detection and error rates, i.e., they did not analyze the causes of the detection errors. Moreover, the effectiveness of the CBD method with non-English sites, such as Japanese and Chinese language sites, has never been tested. This paper reports our in-depth evaluation and analysis of the CBD method using 843 actual phishing sites (including 475 English and 368 Japanese sites), and explains both the strengths of the CBD method and its limitations. Our work provides a base for using the CBD method in the real world.

  5. Prospective Study for Semantic Inter-Media Fusion in Content-Based Medical Image Retrieval

    CERN Document Server

    Teodorescu, Roxana; Leow, Wee-Kheng; Cretu, Vladimir

    2008-01-01

    One important challenge in modern Content-Based Medical Image Retrieval (CBMIR) approaches is represented by the semantic gap, related to the complexity of the medical knowledge. Among the methods that are able to close this gap in CBMIR, the use of medical thesauri/ontologies offers interesting perspectives, since relevant, continually updated web services can be accessed online to extract structured medical semantic information in real time. The CBMIR approach proposed in this paper uses the Unified Medical Language System's (UMLS) Metathesaurus to perform a semantic indexing and fusion of medical media. This fusion operates before the query processing (retrieval) and works at a UMLS-compliant conceptual indexing level. Our purpose is to study various techniques related to semantic data alignment, preprocessing, fusion, clustering and retrieval, by evaluating the various techniques and highlighting future research directions. The alignment and the preprocessing are based on partial text/image retrieval feedb...

  6. Content-based similarity for 3D model retrieval and classification

    Institute of Scientific and Technical Information of China (English)

    Ke Lü; Ning He; Jian Xue

    2009-01-01

    With the rapid development of 3D digital shape information, content-based 3D model retrieval and classification has become an important research area. This paper presents a novel 3D model retrieval and classification algorithm. For feature representation, a method combining a distance histogram and moment invariants is proposed to improve the retrieval performance. The major advantage of using a distance histogram is its invariance to the transforms of scaling, translation and rotation. Based on the premise that two similar objects should have high mutual information, a query over 3D data should convey a great deal of information on the shape of the two objects, and so we propose a mutual information distance measurement to perform the similarity comparison of 3D objects. The proposed algorithm is tested with a 3D model retrieval and classification prototype, and the experimental evaluation demonstrates satisfactory retrieval results and classification accuracy.
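
A distance histogram of the kind described is straightforward to sketch. Normalizing the bin counts and scaling by the largest pairwise distance makes the feature invariant to translation, rotation and uniform scaling; the cube vertices below are a toy stand-in for points sampled from a 3D model's surface.

```python
import math
from itertools import combinations

def distance_histogram(vertices, bins=4):
    """Normalized histogram of pairwise vertex distances, scaled by the largest
    distance: invariant to translation, rotation and uniform scaling."""
    dists = [math.dist(a, b) for a, b in combinations(vertices, 2)]
    dmax = max(dists)
    hist = [0] * bins
    for d in dists:
        hist[min(int(bins * d / dmax), bins - 1)] += 1
    return [h / len(dists) for h in hist]

# A unit cube and a translated, uniformly scaled copy give identical features.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
moved = [(2 * x + 5, 2 * y - 3, 2 * z) for x, y, z in cube]
h1 = distance_histogram(cube)
h2 = distance_histogram(moved)
```

For the cube, the 28 pairwise distances fall into three groups (edges, face diagonals, space diagonals), and the transformed copy reproduces the histogram exactly because only distance ratios enter the feature.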

  7. Colloidal processing of Fe-based metal ceramic composites with high content of ceramic reinforcement

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, J. A.; Ferrari, B.; Alvaredo, P.; Gordo, E.; Sanchez-Herencia, A. J.

    2013-07-01

    A major difficulty in processing metal-matrix composites by means of conventional powder metallurgy techniques is the poor dispersion of the phases within the final microstructure. In this work, processing through colloidal techniques of Fe-based metal-matrix composites, with a high content of a ceramic reinforcement (Ti(C,N)), is presented for the first time in the literature. The colloidal approach allows greater control of the powder packing and a better homogenization of phases, since the powders are mixed in a liquid medium. The chemical stability of Fe in aqueous medium determines the dispersion conditions of the mixture. The Fe slurries were formulated by optimising their zeta potential and their rheology, in order to shape bulk pieces by slip-casting. Preliminary results demonstrate the viability of this procedure, also opening new paths to the microstructural design of fully sintered Fe-based hard metal, with 50 vol.% of Ti(C,N) in its composition. (Author)

  8. Determination of the biodiesel content in diesel/biodiesel blends: a method based on fluorescence spectroscopy.

    Science.gov (United States)

    Scherer, Marisa D; Oliveira, Samuel L; Lima, Sandro M; Andrade, Luis H C; Caires, Anderson R L

    2011-05-01

    Blends of biodiesel and diesel are being used increasingly worldwide because of environmental, economic, and social considerations. Several countries use biodiesel blends with different blending limits. Therefore, it is necessary to develop or improve methods to quantify the biodiesel level in a diesel/biodiesel blend, to ensure compliance with legislation. The optical technique based on the absorption of light in the mid-infrared has been successful for this application. However, this method presents some challenges that must be overcome. In this paper, we propose a novel method, based on fluorescence spectroscopy, to determine the biodiesel content in the diesel/biodiesel blend, which allows in loco measurements by using portable systems. The results showed that this method is both practical and more sensitive than the standard optical method. © Springer Science+Business Media, LLC 2011
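
The core of any such calibration, whatever the sensing modality, is fitting measured intensity against known blend levels and then inverting the fit for unknown samples. The sketch below uses ordinary least squares on hypothetical fluorescence intensities (the paper's actual spectra are not reproduced here); a real instrument would need a validated calibration set.

```python
def ols(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical calibration set: biodiesel volume fraction (%) versus integrated
# fluorescence intensity (arbitrary units), assumed near-linear.
blend_pct = [0, 5, 10, 20, 50, 100]
intensity = [2.0, 12.1, 21.8, 41.5, 101.0, 201.3]
slope, intercept = ols(blend_pct, intensity)

def biodiesel_content(measured_intensity):
    """Invert the calibration line to estimate blend percentage."""
    return (measured_intensity - intercept) / slope

estimate = biodiesel_content(61.0)  # an intensity between B20 and B50
```

The inversion step is what makes in loco measurement with a portable system practical: once slope and intercept are stored, each field reading maps directly to a blend percentage.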

  9. Feature Extraction with Ordered Mean Values for Content Based Image Classification

    Directory of Open Access Journals (Sweden)

    Sudeep Thepade

    2014-01-01

    Full Text Available Categorization of images into meaningful classes by efficient extraction of feature vectors from image datasets has been dependent on feature selection techniques. Traditionally, feature vector extraction has been carried out using different methods of image binarization with a global, local, or mean threshold. This paper has proposed a novel technique for feature extraction based on ordered mean values. The proposed technique was combined with feature extraction using the discrete sine transform (DST) for better classification results using multitechnique fusion. The novel methodology was compared to the traditional techniques used for feature extraction for content-based image classification. Three benchmark datasets, namely, the Wang dataset, the Oliva and Torralba (OT-Scene) dataset, and the Caltech dataset, were used for evaluation purposes. The performance measures after evaluation clearly revealed the superiority of the proposed fusion technique with ordered mean values and discrete sine transform over the popular single-view feature extraction methodologies for classification.
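
As a rough illustration of a fused descriptor of this kind, the sketch below sorts row means ("ordered mean values") and appends the leading coefficients of a naive DST-II of that mean signal. The `feature_vector` helper and the fusion recipe are hypothetical; the authors' exact construction is not reproduced here.

```python
import math

def dst2(signal):
    """Naive discrete sine transform (DST-II) of a 1-D signal."""
    n = len(signal)
    return [sum(x * math.sin(math.pi * (i + 0.5) * (k + 1) / n)
                for i, x in enumerate(signal)) for k in range(n)]

def feature_vector(image, keep=4):
    """Hypothetical fused descriptor: sorted ('ordered') row means concatenated
    with the leading DST coefficients of that mean signal."""
    means = sorted(sum(row) / len(row) for row in image)
    return means + dst2(means)[:keep]

# A tiny 4x4 grayscale "image"
tiny = [
    [10, 12, 11, 13],
    [200, 198, 202, 199],
    [90, 95, 92, 93],
    [40, 42, 41, 43],
]
fv = feature_vector(tiny)
```

Sorting the means discards row order (a cheap form of spatial invariance) while the DST coefficients retain the overall shape of the intensity distribution, which is the intuition behind fusing the two views.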

  10. Neighbourhood search feature selection method for content-based mammogram retrieval.

    Science.gov (United States)

    Chandy, D Abraham; Christinal, A Hepzibah; Theodore, Alwyn John; Selvan, S Easter

    2017-03-01

    Content-based image retrieval plays an increasing role in the clinical process for supporting diagnosis. This paper proposes a neighbourhood search method to select near-optimal feature subsets for the retrieval of mammograms from the Mammographic Image Analysis Society (MIAS) database. Features based on the grey-level co-occurrence matrix, the Daubechies-4 wavelet, Gabor filters, the Cohen-Daubechies-Feauveau 9/7 wavelet and Zernike moments are extracted from mammograms available in the MIAS database to form the combined or fused feature set for testing various feature selection methods. The performance of feature selection methods is evaluated using precision, storage requirement and retrieval time measures. Using the proposed method, a significant improvement is achieved in mean precision rate and feature dimension. The results show that the proposed method outperforms the state-of-the-art feature selection methods.
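
A neighbourhood search over feature subsets can be sketched generically: greedily move to the best subset reachable by flipping a single feature in or out, until no flip improves the criterion. The `toy_score` criterion below is a stand-in for the precision/storage/time trade-off evaluated in the paper, not the authors' objective.

```python
def neighbourhood_search(n_features, score, max_iters=50):
    """Greedy neighbourhood search over feature subsets: repeatedly flip the
    single feature (in or out) that improves the selection criterion."""
    current, best = set(), score(set())
    for _ in range(max_iters):
        moved = False
        for f in range(n_features):
            cand = current ^ {f}  # neighbour differing in exactly one feature
            s = score(cand)
            if s > best:
                current, best, moved = cand, s, True
        if not moved:  # local optimum: no single flip helps
            break
    return current, best

# Toy criterion standing in for retrieval precision minus a dimensionality
# penalty: features 1 and 3 are the informative ones.
def toy_score(subset):
    return sum(1.0 for f in subset if f in (1, 3)) - 0.1 * len(subset)

selected, value = neighbourhood_search(6, toy_score)
```

The dimensionality penalty is what drives the reported reduction in feature dimension: a feature only survives if its contribution to precision outweighs its storage and retrieval-time cost.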

  11. Language-Building Activities and Interaction Variations with Mixed-Ability ESL University Learners in a Content-Based Course

    Science.gov (United States)

    Serna Dimas, Héctor Manuel; Ruíz Castellanos, Erika

    2014-01-01

    The preparation of both language-building activities and a variety of teacher/student interaction patterns increases both oral language participation and content learning in a course of manual therapy with mixed-language-ability students. In this article, the researchers describe their collaboration in a content-based course in English with English…

  12. Textbooks Content Analysis of Social Studies and Natural Sciences of Secondary School Based on Emotional Intelligence Components

    Science.gov (United States)

    Babaei, Bahare; Abdi, Ali

    2014-01-01

    The aim of this study is to analyze the content of social studies and natural sciences textbooks of the secondary school on the basis of the emotional intelligence components. In order to determine and inspect the emotional intelligence components all of the textbooks content (including texts, exercises, and illustrations) was examined based on…

  13. Design of a Content Addressable Memory-based Parallel Processor implementing (−1+j)-based Binary Number System

    Directory of Open Access Journals (Sweden)

    Tariq Jamil

    2014-11-01

    Full Text Available Contrary to the traditional base 2 binary number system, used in today’s computers, in which a complex number is represented by two separate binary entities, one for the real part and one for the imaginary part, Complex Binary Number System (CBNS), a binary number system with base (−1+j), is used to represent a given complex number in single binary string format. In this paper, CBNS is reviewed and arithmetic algorithms for this number system are presented. The design of a CBNS-based parallel processor utilizing content-addressable memory for implementation of associative dataflow concept has been described and software-related issues have also been explained.
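
The base-(−1+j) encoding itself is easy to sketch: a Gaussian integer z is divisible by (−1+j) exactly when Re(z)+Im(z) is even, so each digit in {0, 1} is forced and the quotient again has integer parts. This is a generic CBNS conversion, not the paper's CAM-based hardware design.

```python
def to_cbns(z):
    """Encode a Gaussian integer in base (-1+j) with digits {0, 1},
    most significant digit first."""
    a, b = int(z.real), int(z.imag)
    if a == 0 and b == 0:
        return "0"
    digits = []
    while a or b:
        d = (a + b) & 1                      # z - d must be divisible by (-1+j)
        digits.append(str(d))
        a -= d
        a, b = (b - a) // 2, -(a + b) // 2   # exact division by (-1+j)
    return "".join(reversed(digits))

def from_cbns(s):
    """Evaluate a base-(-1+j) digit string back to a complex number."""
    z = 0
    for d in s:
        z = z * (-1 + 1j) + int(d)
    return z
```

For example, the real number 2 encodes as "1100", since (−1+j)³ + (−1+j)² = (2+2j) + (−2j) = 2, which illustrates how a single binary string carries both real and imaginary parts.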

  14. Classifying content-based Images using Self Organizing Map Neural Networks Based on Nonlinear Features

    Directory of Open Access Journals (Sweden)

    Ebrahim Parcham

    2014-07-01

    Full Text Available Classifying similar images is one of the most interesting and essential image processing operations. Existing methods have some disadvantages, such as low accuracy in the analysis step and low speed in the feature extraction process. In this paper, a new method for image classification is proposed in which the similarity weight is revised by means of information in related and unrelated images. Most real-world similarity measurement systems are nonlinear, so traditional linear methods are not capable of recognizing the nonlinear relationships and correlations in such systems. Self-Organizing Map neural networks are among the strongest networks for data mining and nonlinear analysis of sophisticated spaces. In our proposed method, we obtain the images with the highest similarity measure by extracting features of our target image and comparing them with the features of other images. We took advantage of the NLPCA algorithm for feature extraction, a nonlinear algorithm that has the ability to recognize the smallest variations even in noisy images. Finally, we compare the run time and efficiency of our proposed method with previously proposed methods.

  15. [Research on Resistant Starch Content of Rice Grain Based on NIR Spectroscopy Model].

    Science.gov (United States)

    Luo, Xi; Wu, Fang-xi; Xie, Hong-guang; Zhu, Yong-sheng; Zhang, Jian-fu; Xie, Hua-an

    2016-03-01

    A new method based on near-infrared reflectance spectroscopy (NIRS) analysis was explored to determine the resistant starch content of rice, instead of the common chemical method, which is time-consuming and costly. First, we collected 62 spectra from rice samples differing widely in resistant starch content, and then the spectral data and the chemically determined values were imported into chemometrics software. A near-infrared spectroscopy calibration model for rice resistant starch content was then constructed with the partial least squares (PLS) method. Results are as follows. For internal cross validation, the coefficients of determination (R2) of the untreated data, pretreatment with MSC+1thD, and pretreatment with 1thD+SNV were 0.920 2, 0.967 0 and 0.976 7, respectively; the root mean square errors of prediction (RMSEP) were 1.533 7, 1.011 2 and 0.837 1, respectively. For external validation, the coefficients of determination (R2) were 0.805, 0.976 and 0.992, respectively, and the average absolute errors were 1.456, 0.818 and 0.515, respectively. There was no significant difference between chemical and predicted values (Tukey multiple comparison), so we conclude that near-infrared spectroscopy analysis is a feasible alternative to chemical measurement. Among the different pretreatments, the first derivative combined with standard normal variate (1thD+SNV) gave a higher coefficient of determination (R2) and lower error values in both internal and external validation. In other words, the calibration model pretreated with 1thD+SNV has higher precision and less error.
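
The reported figures (R2, RMSEP, average absolute error) are standard regression diagnostics and can be computed as below. The validation values shown are hypothetical, since the paper's spectra are not reproduced here; only the formulas match the quantities the abstract reports.

```python
import math

def r_squared(actual, predicted):
    """Coefficient of determination between reference values and predictions."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def rmsep(actual, predicted):
    """Root mean square error of prediction."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Hypothetical external-validation set: chemically determined resistant-starch
# values against NIR-model predictions.
chem = [2.1, 5.4, 8.0, 12.3, 15.7]
pred = [2.5, 5.1, 8.6, 11.9, 15.2]
```

A model comparison like the paper's then reduces to recomputing these two numbers per pretreatment and preferring the variant with higher R2 and lower RMSEP.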

  16. Exploring access to scientific literature using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citations Report (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. This included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e., the figure combines different images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, 95.5% of articles could already be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.

  17. Analyzing the Content of Social Training in 1404 Outlook Based on Education System Evolutionary Revolution Document

    Directory of Open Access Journals (Sweden)

    Ladan Najafi

    2014-06-01

    Full Text Available The education system is the most important organization for public training and education. It has a duty to provide, systematically and effectively, the ground for students' access to a good standard of living in both its individual and social dimensions. Based on this organization's important mission in the 1404 outlook, the necessity of formulating an evolutionary document for the Islamic Republic was confirmed, and after a decade of effort the document was presented in Ordibehesht 2013. It is therefore necessary that specialists conduct research on this document, so that the training practitioners of Islamic society become more familiar with it and can put it into practice, because the right training of the new generation is one of the training system's necessities. This study, "Analyzing the Content of Social Training in 1404 Outlook Based on Education System Evolutionary Revolution Document", examines the evolutionary revolution document in relation to social training. Using content analysis of the theoretical principles of the philosophy of training in the Islamic Revolution of Iran (from the revolutionary document), those fundamentals of ontology, anthropology, axiology, cognitive value and theology that relate to the individual's social life and the necessity of his social training were first extracted; after that, the document's definition of training, its general perspectives, and the specific goals of social training were presented; and finally, based on the proposed principles, a model for the individual's and society's role in a good life has been provided, in the hope that it can guide practical action toward achieving a successful Islamic society in 1404.

  18. Designing and Implementing Content-Based Courses in English with a Non-Language Faculty at a Public Colombian University

    National Research Council Canada - National Science Library

    Fabio Alberto Arismendi Gómez; Claudia Patricia Díaz Mosquera; Leidy Natalia Salazar Valencia

    2008-01-01

    ... participated in a multi-site study to implement content-based (CB) courses in English. The professors, who had a high level of proficiency in English, worked in collaboration with language faculty...

  19. [Evaluation by case managers dementia : An explorative practice based study on types and content].

    Science.gov (United States)

    Ketelaar, Nicole A B M; Jukema, Jan S; van Bemmel, Marlies; Adriaansen, Marian J M; Smits, Carolien H M

    2017-06-01

    This practice-based explorative study aims to provide insight into the ways in which case managers shape and fill in the evaluation phase of their support of the informal care network of persons with dementia. A combination of quantitative and qualitative research methods was used. A group of 57 case managers of persons with dementia in three different organisational networks took part in this study. Results from the quantitative and qualitative data are organized into four themes: (1) attitude towards evaluation, (2) forms of evaluation, (3) implementation of evaluation and (4) content of evaluation. There are different ways of shaping evaluation and its content. The importance of interim and final evaluation is recognized, but is difficult to realize in a methodical way. Barriers experienced by the case managers include various factors associated with both clients and professionals. Case managers evaluate continuously and informally to assess whether the extent of their assistance is meeting the needs of the client and the informal network. They do not use systematic evaluation to measure the quality of care they offer to persons with dementia and their caregivers. The findings call for a discussion, at the level of clients as well as at the professional and societal levels, about the way case managers should evaluate their support.

  20. Steganalysis of content-adaptive JPEG steganography based on Gauss partial derivative filter bank

    Science.gov (United States)

    Zhang, Yi; Liu, Fenlin; Yang, Chunfang; Luo, Xiangyang; Song, Xiaofeng; Lu, Jicang

    2017-01-01

    A steganalysis feature extraction method based on a Gauss partial derivative filter bank is proposed in this paper to improve the detection performance for content-adaptive JPEG steganography. Considering that the embedding changes of content-adaptive steganographic schemes are performed in the texture and edge regions, the proposed method generates filtered images comprising rich texture and edge information using the Gauss partial derivative filter bank, and histograms of absolute values of filtered subimages are extracted as steganalysis features. The Gauss partial derivative filter bank can represent texture and edge information in multiple orientations with less computation load than conventional methods and prevents redundancy across the filtered images. These two properties are beneficial in the extraction of low-complexity sensitive features. The results of experiments conducted on three selected modern JPEG steganographic schemes (uniform embedding distortion, JPEG universal wavelet relative distortion, and side-informed UNIWARD) indicate that the proposed feature set is superior to the prior-art feature sets (discrete cosine transform residual, phase-aware rich model, and Gabor filter residual).
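
The feature pipeline (filter with a Gaussian partial derivative, then histogram the absolute responses) can be sketched in one dimension. The kernel below is a sampled analytic first derivative of a Gaussian; kernel size, bin count and cap value are illustrative choices, not the authors' parameters.

```python
import math

def gauss_deriv_kernel(sigma=1.0, radius=2):
    """Sampled first-order Gaussian partial derivative, g'(x) = -x/sigma^2 * g(x)
    (unnormalized; the constant factor does not matter for histogram features)."""
    return [-(x / sigma ** 2) * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-radius, radius + 1)]

def convolve_rows(image, kernel):
    """'Valid'-mode 1-D convolution of each image row with the kernel."""
    r = len(kernel) // 2
    return [[sum(row[j + k] * kernel[r - k] for k in range(-r, r + 1))
             for j in range(r, len(row) - r)]
            for row in image]

def abs_histogram(filtered, bins=4, cap=4.0):
    """Steganalysis-style feature: histogram of absolute filter responses,
    binned into `bins` equal cells over [0, cap]."""
    hist = [0] * bins
    for row in filtered:
        for v in row:
            hist[min(int(abs(v) * bins / cap), bins - 1)] += 1
    return hist

flat = [[7.0] * 6 for _ in range(4)]                       # no texture: zero response
edge = [[0.0, 0.0, 0.0, 4.0, 4.0, 4.0] for _ in range(4)]  # strong edge response
h_flat = abs_histogram(convolve_rows(flat, gauss_deriv_kernel()))
h_edge = abs_histogram(convolve_rows(edge, gauss_deriv_kernel()))
```

Smooth regions pile up in the low bins while texture and edges populate the high bins, which is exactly where content-adaptive embedding hides and why these histograms are sensitive features.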

  1. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews the research advances of VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water index from the observation data and the RTM. Focusing on two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, the measured information and the dataset are used to estimate VWC; the results show there are significant correlations among three kinds of vegetation water indices (i.e., WSI, NDII, NDWI1640, WI/NDVI) and canopy FMC of winter wheat (n=45). Finally, the future development directions of VWC detection based on optical remote sensing techniques are also summarized.
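
A spectral water index such as NDWI1640 is a simple band ratio. The sketch below shows the standard normalized-difference definition with illustrative reflectance values; leaf water absorbs strongly near 1640 nm, so a wetter canopy has lower SWIR reflectance and a higher index.

```python
def ndwi_1640(nir, swir_1640):
    """NDWI1640 = (R_NIR - R_SWIR1640) / (R_NIR + R_SWIR1640)."""
    return (nir - swir_1640) / (nir + swir_1640)

# Illustrative canopy reflectances (not measurements from the paper).
wet_canopy = ndwi_1640(nir=0.45, swir_1640=0.15)
dry_canopy = ndwi_1640(nir=0.45, swir_1640=0.35)
```

Retrieval methods of the kind reviewed then regress FMC or EWT against such index values over a calibration dataset.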

  2. Advances in gas content based on outburst control technology in Huainan, China

    Institute of Scientific and Technical Information of China (English)

    Xue Sheng; Yuan Liang; Xie Jun; Wang Yucang

    2014-01-01

    The sudden and violent nature of coal and gas outbursts continues to pose a serious threat to coal mine safety in China. One of the key issues is to predict the occurrence of outbursts. Current methods used for predicting outbursts in China are considered to be inadequate, inappropriate or impractical in some seam conditions. In recent years, Huainan Mining Industry Group (Huainan) in China and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia have been jointly developing technology based on the gas content of coal seams to predict the occurrence of outbursts in Huainan. Significant progress in the technology development has been made, including the development of a more rapid and accurate system for determining the gas content of coal seams, the invention of a sampling-while-drilling unit for fast and targeted coal sampling, and the coupling of DEM and LBM codes for advanced numerical simulation of outburst initiation and propagation. These advances are described in this paper.

  3. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    Science.gov (United States)

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging.
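The stained-versus-unstained colour discrimination described above can be reduced, for illustration, to a per-cell mean-RGB rule; the RGB triples and the threshold below are illustrative assumptions, not values from the paper:

```python
# Sketch: separating Trypan blue-stained cells from unstained ones by
# colour, reduced to a rule on each cell's mean RGB. Threshold and RGB
# values are invented for illustration.

def is_stained(mean_rgb, blue_margin=20):
    """True if the blue channel clearly dominates red and green."""
    r, g, b = mean_rgb
    return b - (r + g) / 2 > blue_margin

assert is_stained((60, 70, 160))        # blue-dominant -> stained
assert not is_stained((200, 200, 210))  # near-white -> unstained
```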

  4. Information retrieval for OCR documents: a content-based probabilistic correction model

    Science.gov (United States)

    Jin, Rong; Zhai, ChangXiang; Hauptmann, Alexander

    2003-01-01

    The difficulty with information retrieval for OCR documents lies in the fact that OCR documents contain a significant amount of erroneous words and unfortunately most information retrieval techniques rely heavily on word matching between documents and queries. In this paper, we propose a general content-based correction model that can work on top of an existing OCR correction tool to "boost" retrieval performance. The basic idea of this correction model is to exploit the whole content of a document to supplement any other useful information provided by an existing OCR correction tool for word corrections. Instead of making an explicit correction decision for each erroneous word as typically done in a traditional approach, we consider the uncertainties in such correction decisions and compute an estimate of the original "uncorrupted" document language model accordingly. The document language model can then be used for retrieval with a language modeling retrieval approach. Evaluation using the TREC standard testing collections indicates that our method significantly improves the performance compared with simple word correction approaches such as using only the top ranked correction.
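The core idea above, keeping correction uncertainty instead of committing to one correction per word, can be sketched as an expected language model; the candidate lists and probabilities below are invented for illustration, not taken from the paper:

```python
from collections import defaultdict

# Sketch: rather than hard-correcting each OCR token, spread its count
# over candidate corrections weighted by correction probability, giving
# an estimate of the "uncorrupted" document language model.
# Candidates and probabilities here are invented for illustration.

def expected_language_model(ocr_tokens, candidates):
    """candidates: token -> list of (correction, probability)."""
    counts = defaultdict(float)
    for tok in ocr_tokens:
        for word, p in candidates.get(tok, [(tok, 1.0)]):
            counts[word] += p
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

lm = expected_language_model(
    ["informatlon", "retrieval"],
    {"informatlon": [("information", 0.9), ("informal", 0.1)]},
)
assert abs(sum(lm.values()) - 1.0) < 1e-9
assert lm["information"] > lm["informal"]
```

The resulting distribution can then be plugged into a language-modeling retrieval approach in place of raw term counts.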

  5. Tag Based Client Side Detection of Content Sniffing Attacks with File Encryption and File Splitter Technique

    Directory of Open Access Journals (Sweden)

    Syed Imran Ahmed Qadri

    2012-09-01

    Full Text Available In this paper we provide a security framework for the server and client side, with prevention methods applied on the server side and alert replication on the client side. Content sniffing attacks occur if browsers render non-HTML files embedded with malicious HTML contents or JavaScript code as HTML files; such attacks can lead to effects such as the stealing of sensitive information through the execution of malicious JavaScript code. In this framework, the client accesses data that is encrypted on the server side. On the server, data is encrypted using private-key cryptography and the file is sent after splitting, so that execution time is reduced. We also add a tag-bit concept for checking alteration: if an alteration is performed, the tag bit changes. The tag bit is generated by a message digest algorithm. We have implemented our approach in a Java-based environment that can be integrated into web applications written in various languages.
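The tag idea can be sketched as a digest-based integrity check; the abstract only says "a message digest algorithm", so the choice of SHA-256 below is our assumption, and the payloads are illustrative:

```python
import hashlib

# Sketch: derive a digest "tag" of the served content on the server,
# send it with the payload, and have the client recompute it to detect
# alteration. SHA-256 is our choice; the paper names no specific hash.

def make_tag(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def verify_tag(payload: bytes, tag: str) -> bool:
    return make_tag(payload) == tag

original = b"<p>benign content</p>"
tag = make_tag(original)
assert verify_tag(original, tag)
assert not verify_tag(b"<script>evil()</script>", tag)
```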

  6. Deeply learnt hashing forests for content based image retrieval in prostate MR images

    Science.gov (United States)

    Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-03-01

    A deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt Convolutional Neural Networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deeply learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called the Parts Histogram, which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and does not depend on any external image standardization such as image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities and anatomies.

  7. [Refractometric determination of the fat base content in emulsion ointments (author's transl)].

    Science.gov (United States)

    Rudischer, S; Bauer, H J

    1981-01-01

    The Pharmacopoeia of the GDR provides no direct method for the determination of the fat base content in ointments. The indirect determination, which consists in calculating the difference between 100% and the percentages of all the other constituents, is tedious and inexact. For this reason, an attempt was made to find a direct method that would be reliable, rapid and easy. In principle, the refractometric method devised by Rudischer [2-5, 9] for the determination of fat in meat and meat products, which has been compulsory since 1965, is suitable for this purpose. Consequently, this method was adapted to the requirements of ointment analysis and subjected to modifications which resulted in variant A for non-ionic ointments and variant B for ointments containing ionic fatty alcohol sulphates. The results obtained with the refractometric method for all kinds of ointments tested were in full agreement with the actual fat contents. Parallel determinations differed by 0.5% at most.

  8. Image Content in Location-Based Shopping Recommender Systems For Mobile Users

    Directory of Open Access Journals (Sweden)

    Tranos Zuva

    2012-08-01

    Full Text Available This paper shows how image content can be used to realize a shopping recommender system for intuitively supporting mobile users in decision making. A mobile user equipped with a camera-enabled smart phone with Global Positioning System (GPS) capabilities would benefit from using such a recommender system. The recommender system is queried with an image sent from a smart phone together with the phone's GPS coordinates; the system then returns a recommended retail shop together with its GPS coordinates, the image most similar to the query image and other items on special offer. The recommender system shows a drastic reduction, if not elimination, of text entry by mobile users accessing the system from mobile devices. This paper presents the proposed recommender system and its simulated results. In summary, the main contribution of this paper is to show how image retrieval, image content and a camera-enabled smart mobile device with GPS capabilities can be used to realize a location-based shopping recommender system for mobile users.

  9. Empirical models of Total Electron Content based on functional fitting over Taiwan during geomagnetic quiet condition

    Directory of Open Access Journals (Sweden)

    Y. Kakinami

    2009-08-01

    Full Text Available Empirical models of Total Electron Content (TEC) based on functional fitting over Taiwan (120° E, 24° N) have been constructed using Global Positioning System (GPS) data from 1998 to 2007 during geomagnetically quiet conditions (Dst > −30 nT). The models provide TEC as functions of local time (LT), day of year (DOY) and the solar activity (F), which is represented by 1–162 day means of F10.7 and EUV. Other models based on median values have also been constructed and compared with the models based on functional fitting. Under the same values of the F parameter, the models based on functional fitting show better accuracy than those based on median values in all cases. The functional fitting model using daily EUV is the most accurate, with a root mean square error (RMS) of 9.2 TECu, compared with the 15-day running median with 10.4 TECu RMS and the International Reference Ionosphere 2007 (IRI2007) model with 14.7 TECu RMS. IRI2007 overestimates TEC when solar activity is low, and underestimates TEC when solar activity is high. Although the average of the 81-day centered running mean of F10.7 and daily F10.7 is often used as an indicator of EUV, our results suggest that the mean of F10.7 over the period from 54 days prior through the current day reproduces TEC better than the 81-day centered running mean. This paper compares, for the first time, the median-based model with the functional fitting model; the results indicate that the functional fitting model yields better performance than the median-based one. We also find that EUV radiation is essential for deriving optimal TEC.
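The contrast drawn above between functional fitting and a median-based model can be illustrated with a deliberately tiny sketch; the linear TEC-versus-F10.7 form and all data values are synthetic stand-ins (the paper's actual model also depends on local time and day of year):

```python
# Sketch: fit TEC = a + b*F by least squares on synthetic data and
# compare its RMS error against simply predicting the median TEC.
# All numbers are synthetic; the real model uses LT/DOY harmonics too.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rms(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

f107 = [70, 90, 110, 130, 150, 170]        # solar activity proxy
tec = [8.2, 14.1, 19.8, 26.0, 31.9, 38.1]  # synthetic TEC (TECu)

a, b = fit_linear(f107, tec)
fit_rms = rms([a + b * x - y for x, y in zip(f107, tec)])
median = sorted(tec)[len(tec) // 2]
median_rms = rms([median - y for y in tec])
assert fit_rms < median_rms  # the functional fit wins on this toy data
```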

  10. Making generic tutorials content specific: recycling evidence-based practice (EBP) tutorials for two disciplines.

    Science.gov (United States)

    Jeffery, Keven M; Maggio, Lauren; Blanchard, Mary

    2009-01-01

    Librarians at the Boston University Medical Center constructed two interactive online tutorials, "Introduction to EBM" and "Formulating a Clinical Question (PICO)," for a Family Medicine Clerkship and then quickly repurposed the existing tutorials to support an Evidence-based Dentistry course. Adobe's ColdFusion software was used to populate the tutorials with course-specific content based on the URL used to enter each tutorial, and a MySQL database was used to collect student input. Student responses were viewable immediately by course faculty on a password-protected Web site. The tutorials ensured that all students received the same baseline training and allowed librarians to tailor a subsequent library skills workshop to student tutorial answers. The tutorials were well-received by the medical and dental schools and have been added to mandatory first-year Evidence-based Medicine (EBM) and Evidence-based Dentistry (EBD) courses, meaning that every medical and dental student at BUMC will be expected to complete these tutorials.

  11. Analysis of base content in in-service oils by fourier transform infrared spectroscopy.

    Science.gov (United States)

    Ehsan, Sadia; Sedman, Jacqueline; van de Voort, Frederick R; Akochi-Koblé, Emmanuel; Yuan, Tao; Takouk, Djaouida

    2012-06-01

    An automated FTIR method for the determination of the base content (BC(pKa)) of oils at rates of >120 samples/h has been developed. The method uses a 5% solution of trifluoroacetic acid in 1-propanol (TFA/P) added to heptane-diluted oil to react with the base present and measures the ν(COO−) absorption of the TFA anion produced, with calibrations devised by gravimetrically adding 1-methylimidazole to a heptane-TFA/P mixture. To minimize spectral interferences, all spectra are transformed to 2nd derivative spectra using a gap-segment algorithm. Any solvent displacement effects resulting from sample miscibility are spectrally accounted for by measurement of the changes in the 1-propanol overtone band at 1936 cm−1. A variety of oils were analyzed for BC(0.5), expressed as mEq base/g oil as well as converted to base number (BN) units (mg KOH/g oil) to facilitate direct comparison with ASTM D2896 and ASTM D974 results for the same samples. Linear relationships were obtained between FTIR and D2896 and D974, with the ASTM methods producing higher BN values by factors of ~1.5 and ~1.3, respectively. Thus, the FTIR BC method correlates well with ASTM potentiometric procedures and, with its much higher throughput, promises to be a useful alternative means of rapidly determining reserve alkalinity in commercial oil condition monitoring laboratories.
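As a rough illustration of two steps mentioned above, the sketch below applies a simple gap second derivative (a reduced form of the gap-segment idea; the paper's exact algorithm is not reproduced) and converts base content (mEq base/g oil) to base-number units (mg KOH/g oil) via the molar mass of KOH, 56.1 g/mol; the spectrum values are synthetic:

```python
# Sketch: a plain gap second derivative (simplified stand-in for the
# gap-segment algorithm) and the mEq/g -> mg KOH/g unit conversion.
# The "spectrum" is a synthetic absorbance peak, not real FTIR data.

def gap_second_derivative(spectrum, gap=2):
    return [
        (spectrum[i - gap] - 2 * spectrum[i] + spectrum[i + gap]) / gap**2
        for i in range(gap, len(spectrum) - gap)
    ]

def base_number(bc_meq_per_g: float) -> float:
    """Base number in mg KOH/g oil; 56.1 is the molar mass of KOH."""
    return bc_meq_per_g * 56.1

spectrum = [0.0, 0.1, 0.4, 1.0, 0.4, 0.1, 0.0]  # synthetic peak
d2 = gap_second_derivative(spectrum)
assert min(d2) == d2[len(d2) // 2]  # peak maximum -> most negative d2
assert abs(base_number(0.1) - 5.61) < 1e-9
```

The sign flip at the peak is what makes second-derivative spectra useful for isolating overlapping bands.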

  12. From Content-Based Image Retrieval by Shape to Image Annotation

    Directory of Open Access Journals (Sweden)

    MOCANU, I.

    2010-11-01

    Full Text Available In many areas such as commerce, medical investigations, and others, large collections of digital images are being created. Search operations inside these collections of images are usually based on low-level features of objects contained in an image: color, shape, texture. Although such techniques of content-based image retrieval are useful, they are strongly limited by their inability to consider the meaning of images. Moreover, specifying a query in terms of low level features may not be very simple. Image annotation, in which images are associated with keywords describing their semantics, is a more effective way of image retrieval and queries can be naturally specified by the user. The paper presents a combined set of methods for image retrieval, in which both low level features and semantic properties are taken into account when retrieving images. First, it describes some methods for image representation and retrieval based on shape, and proposes a new such method, which overcomes some of the existing limitations. Then, it describes a new method for image semantic annotation based on a genetic algorithm, which is further improved from two points of view: the obtained solution value - using an anticipatory genetic algorithm, and the execution time - using a parallel genetic algorithm.

  13. Improvement of medical content in the curriculum of biomedical engineering based on assessment of students outcomes.

    Science.gov (United States)

    Abdulhay, Enas; Khnouf, Ruba; Haddad, Shireen; Al-Bashir, Areen

    2017-08-04

    Improvement of medical content in Biomedical Engineering curricula based on a qualitative assessment process, or on a comparison with another high-standard program, has been approached by a number of studies. However, quantitative assessment tools have not been emphasized. Quantitative assessment tools can be more accurate and robust in challenging multidisciplinary fields like Biomedical Engineering, which mixes biomedicine elements with technology aspects. The major limitations of previous research are the high dependence on surveys or purely qualitative approaches, as well as the absence of a strong focus on medical outcomes without implicit confusion with the technical ones. The proposed work presents the development and evaluation of an accurate and robust quantitative approach to the improvement of the medical content in the challenging multidisciplinary BME curriculum. The work presents quantitative assessment tools and the subsequent improvement of curriculum medical content applied, as an illustrative example, to the ABET (Accreditation Board for Engineering and Technology, USA) accredited biomedical engineering (BME) department at Jordan University of Science and Technology. The quantitative results of the assessment of curriculum/course, capstone, exit exam and course assessment by student (CAS), as well as of surveys filled by alumni, seniors, employers and training supervisors, were first mapped to the expected students' outcomes related to the medical field (SOsM). The collected data were then analyzed and discussed to find curriculum weakness points by tracking shortcomings in the degree of achievement of every outcome. Finally, actions were taken to fill in the gaps of the curriculum. Actions were also mapped to the students' medical outcomes (SOsM).
Weighted averages of the obtained quantitative values, mapped to SOsM, accurately indicated the achievement levels of all outcomes as well as the necessary improvements to be performed in the curriculum

  14. The effect of wind erosion on toxic element content of soils based on wind tunnel trials

    Science.gov (United States)

    Tatárvári, Károly; Négyesi, Gábor

    2016-04-01

    Wind erosion causes enormous problems in many parts of the world. It damages the fertile layer of soils, and it can transport materials and pathogens that may cause medical problems in the respiratory system. Numerous international and Hungarian surveys have proved that wind erosion does not affect only loose-textured soils: during droughts, wind erosion may cause great damage in bound clay soils as well, if these are over-cultivated and dusty. As an effect of climate change, the duration and frequency of drought periods will grow. In our investigation, samples were taken from the upper 10 cm of soils of five different mechanical compositions (according to physical characteristics: sand, clay, clay loam, loam, sandy loam) in Györ-Moson-Sopron County, Hungary. According to the map of the areas of Hungary potentially affected by wind erosion, the sand physical soil type is strongly endangered by wind erosion, while the other areas are moderately endangered. According to the most recent international classification, areas belonging to the sand physical soil type are categorized as "endangered by wind erosion" and the others as "not endangered by wind erosion", but these data were not based on local trials. When selecting the sampling areas it was taken into account that opencast sand and gravel mines are in operation in the area, because of which significant wind erosion related phenomena have recently been observed; the area is also the windiest in the country. The mechanical composition, CaCO3 content, pH value (H2O, KCl) and humus content of the samples were determined. The wind erosion experiments were conducted in the wind tunnel of the University of Debrecen. The threshold velocities of the soils were measured, and the quantity of soil transported by the wind was analyzed at four wind velocity ranges. The transported material was intercepted at different wind velocities at heights of 0-10 cm and 10-35 cm. The As, Ba, Cd, Co, Cr, Cu, Ni, Pb, and Zn

  15. Effect of soft-hard segment content on properties of palm oil polyol based shape memory polyurethane

    Science.gov (United States)

    Darman, Amina; Ali, Ernie Suzana; Zubir, Syazana Ahmad

    2017-07-01

    Shape memory polymers (SMP) are smart materials with the ability to change shape when subjected to external stimuli. In this work, shape memory polyurethane (SMPU) has been synthesized via a two-step bulk polymerization method by replacing up to a 40% molar ratio of petroleum-based polyol with palm oil-based polyols (POP). This was done with the purpose of reducing the usage of petroleum-based polyol due to environmental awareness. The main objective is to investigate the effects of different polyol/isocyanate/1,4-butanediol molar ratios, in relation to soft-hard segment content, on the mechanical and shape memory properties of the resulting SMPU. The mechanical properties were improved with POP addition, and optimum tensile performance was obtained within 35 to 40% hard segment content. Tensile strength increased with increasing POP content, but beyond 40% hard segment content the properties decreased. On the other hand, the modulus was significantly reduced with an increase of hard segment content. Crystallinity also decreased with decreasing polycaprolactone diol (PCL) content as more POP was added. The shape memory properties of PU 165 are better than those of PU 154 in terms of the ability to return to the original shape, since all PU 165 samples showed 100% recovery. In general, the addition of palm oil-based polyol showed improvement in mechanical and shape memory properties as compared to pristine SMPU.

  16. Kernel Density Feature Points Estimator for Content-Based Image Retrieval

    CERN Document Server

    Zuva, Tranos; Ojo, Sunday O; Ngwira, Seleman M

    2012-01-01

    Research is taking place to find effective algorithms for content-based image representation and description. A substantial number of algorithms are available that use visual features (color, shape, texture). The shape feature has attracted so much attention from researchers that there are many shape representation and description algorithms in the literature. These shape image representation and description algorithms are usually not application independent or robust, making them undesirable for generic shape description. This paper presents an object shape representation using a Kernel Density Feature Points Estimator (KDFPE). In this method, the density of feature points within defined rings around the centroid of the image is obtained. The KDFPE is then applied to the vector of the image. KDFPE is invariant to translation, scale and rotation. This method of image representation shows an improved retrieval rate when compared to the Density Histogram Feature Points (DHFP) method. Analytic analysis is done to justify our m...

  17. Novel polyclonal-monoclonal-based ELISA utilized to examine lupine (Lupinus species) content in food products.

    Science.gov (United States)

    Holden, Lise; Moen, Lena Haugland; Sletten, Gaynour B G; Dooper, Maaike M B W

    2007-04-04

    Sweet lupines are increasingly used in food production. Cause for concern has been expressed due to the increase in reported lupine-induced allergic incidents and the association between lupine and peanut allergies. In the current study, a polyclonal-monoclonal antibody-based sandwich ELISA for the detection of lupine proteins in foods was developed. The assay was sensitive to both native and processed proteins from Lupinus angustifolius and Lupinus albus and had a detection limit of 1 µg/g. Intra- and interassay coefficients of variation were determined, and food products, with and without lupine declaration, were evaluated for their content of lupine. The data showed that the majority were in agreement with the respective labeling. However, some inconsistency was seen, typically in bread/rolls and soy flours.

  18. Design Approach for Content-based Image Retrieval using Gabor-Zernike features

    Directory of Open Access Journals (Sweden)

    Abhinav Deshpande

    2012-04-01

    Full Text Available The process of extraction of different features from an image is known as content-based image retrieval. Color, texture and shape are the major features of an image and play a vital role in its representation. In this paper, a novel method is proposed to extract the region of interest (ROI) from an image prior to the extraction of its salient features. The image is subjected to normalization so that noise components due to Gaussian or other types of noise present in the image are eliminated and the successful extraction of the various features of the image can be accomplished. Gabor filters are used to extract the texture feature of an image, whereas Zernike moments can be used to extract the shape feature. Gabor and Zernike features can be combined to extract Gabor-Zernike features from an image.
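The Gabor texture step can be illustrated by constructing a single real Gabor kernel; a full implementation would use a bank of orientations and scales plus true Zernike moments, and all parameter values below are illustrative assumptions:

```python
import math

# Sketch: one real (cosine-phase) Gabor kernel, the building block of
# the texture features mentioned above. Size, wavelength and sigma are
# illustrative; a real system convolves a whole filter bank.

def gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0):
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)  # rotated axis
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel()
assert len(k) == 7 and len(k[0]) == 7
assert k[3][3] == 1.0  # centre: full Gaussian envelope, zero phase
```

Convolving the image with kernels at several orientations and taking the response energies yields the texture part of a Gabor-Zernike descriptor.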

  19. Prototype content-based image retrieval for skin disease detection using an edge detection method

    Directory of Open Access Journals (Sweden)

    Erick Fernando

    2016-05-01

    Full Text Available Dermatologists examine the affected skin visually, capture the object with a digital camera and ask about the history of the patient's disease, without making any comparison with previously recorded signs and symptoms; the examination and the estimate of the type of skin disease are therefore subjective. Processing image data in digital form, especially medical images, is greatly needed, together with pre-processing. Many patients served in hospitals are still documented with analogue image data, which requires a dedicated storage room to avoid mechanical damage. To address these problems, medical images are produced in digital form and stored in a database system, so that the similarity of a new skin image can be examined. Images can be displayed after pre-processing, with similarity identified through Content-Based Image Retrieval (CBIR), which works by measuring the similarity of a query image to all images in the database, so that the query cost is proportional to the number of images in the database.

  1. Efficient content-based low-altitude images correlated network and strips reconstruction

    Science.gov (United States)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry, but manual intervention is clearly not a viable route to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach that requires no manual intervention or external information for strip reconstruction. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. An image correlation network is then reconstructed through similarity measurement, image matching and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and gradually growing adjacent images. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
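The TF-IDF encoding and similarity-measure step described above can be sketched as follows; the "visual words" are plain strings standing in for SIFT descriptor cluster IDs, and both the weighting scheme and the toy data are illustrative assumptions:

```python
import math
from collections import Counter

# Sketch: each image reduced to a bag of visual words (strings standing
# in for SIFT cluster IDs), scored by TF-IDF and compared by cosine
# similarity, as in building the image correlation network.

def tfidf_vectors(docs):
    df = Counter(w for d in docs for w in set(d))  # document frequency
    n = len(docs)
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(d).items()}
            for d in docs]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

imgs = [["w1", "w2", "w2"], ["w1", "w2"], ["w3", "w4"]]
vecs = tfidf_vectors(imgs)
# Images sharing visual words correlate; disjoint ones score zero.
assert cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]) == 0.0
```

Edges above a similarity threshold would then link images into the correlation network from which strips are grown.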

  2. Creating a large-scale content-based airphoto image digital library.

    Science.gov (United States)

    Zhu, B; Ramsey, M; Chen, H

    2000-01-01

    This paper describes a content-based image retrieval digital library that supports geographical image retrieval over a testbed of 800 aerial photographs, each 25 megabytes in size. In addition, this paper also introduces a methodology to evaluate the performance of the algorithms in the prototype system. There are two major contributions: we suggest an approach that incorporates various image processing techniques including Gabor filters, image enhancement and image compression, as well as information analysis techniques such as the self-organizing map (SOM) into an effective large-scale geographical image retrieval system. We present two experiments that evaluate the performance of the Gabor-filter-extracted features along with the corresponding similarity measure against that of human perception, addressing the lack of studies in assessing the consistency between an image representation algorithm or an image categorization method and human mental model.

  3. A Probabilistic Framework for Content-Based Diagnosis of Retinal Disease

    Energy Technology Data Exchange (ETDEWEB)

    Tobin Jr, Kenneth William [ORNL; Abdelrahman, Mohamed A [ORNL; Chaum, Edward [ORNL; Muthusamy Govindasamy, Vijaya Priya [ORNL; Karnowski, Thomas Paul [ORNL

    2007-01-01

    Diabetic retinopathy is the leading cause of blindness in the working age population around the world. Computer assisted analysis has the potential to assist in the early detection of diabetes by regular screening of large populations. The widespread availability of digital fundus cameras today is resulting in the accumulation of large image archives of diagnosed patient data that captures historical knowledge of retinal pathology. Through this research we are developing a content-based image retrieval method to verify our hypothesis that retinal pathology can be identified and quantified from visually similar retinal images in an image archive. We will present diagnostic results for specificity and sensitivity on a population of 395 fundus images representing the normal fundus and 14 stratified disease states.

  4. Inter-rater Reliability of Criteria-Based Content Analysis of Children's Statements of Abuse.

    Science.gov (United States)

    Niveau, Gérard; Lacasa, Marie-Josée; Berclaz, Michel; Germond, Michèle

    2015-09-01

    The evaluation of children's statements in sexual abuse cases in forensic settings is critically important and must be reliable. Criteria-based content analysis (CBCA) is the main component of the statement validity assessment (SVA), which is the most frequently used approach in this setting. This study investigated the inter-rater reliability (IRR) of CBCA in a forensic context. Three independent raters evaluated the transcripts of 95 statements of sexual abuse. IRR was calculated for each criterion, the total score and the overall evaluation. The IRR was variable across criteria, with several being unsatisfactory, but high IRR was found for the total CBCA scores (Kendall's W=0.84) and for the overall evaluation (Kendall's W=0.65). Despite some shortcomings, SVA remains a robust method to be used in the comprehensive evaluation of children's statements of sexual abuse in the forensic setting. However, the low IRR of some CBCA criteria could justify some technical improvements. © 2015 American Academy of Forensic Sciences.
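Kendall's coefficient of concordance (W), the agreement statistic reported above, is straightforward to compute for tie-free rankings; the ranks below are invented for three raters scoring five statements, not data from the study:

```python
# Sketch: Kendall's W for m raters ranking n items, W = 12*S / (m^2*(n^3-n)),
# where S is the sum of squared deviations of the rank totals from their
# mean. No tie correction is applied; the rankings are invented.

def kendalls_w(rankings):
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m * m * (n ** 3 - n))

perfect = [[1, 2, 3, 4, 5]] * 3  # identical rankings -> full concordance
assert kendalls_w(perfect) == 1.0
mixed = [[1, 2, 3, 4, 5], [2, 1, 3, 5, 4], [1, 3, 2, 4, 5]]
assert 0.0 < kendalls_w(mixed) < 1.0
```

W ranges from 0 (no agreement) to 1 (complete agreement), which is why the reported 0.84 for total scores counts as high IRR.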

  5. Standardising Maritime English Training And Assessment through International Coordination of Content-Based Instruction

    Directory of Open Access Journals (Sweden)

    Annamaria Gabrielli

    2016-05-01

    Full Text Available The current provisions of the International Maritime Organization (IMO) Standards of Training, Certification and Watchkeeping (STCW, Manila; IMO, 2010) for language proficiency and communication skills require standard levels for cadets' communication skills worldwide, but do not suggest how to coordinate standardised Maritime English (ME) training and assessment across the globe in order to consistently meet these requirements. The responsibility for globally standardised assessment of cadet ME skills at Maritime Education and Training (MET) institutions around the world is therefore shouldered by the trainers only. This inevitably leads to differences in local interpretations of the ME standards. The central interest of the International Maritime Lecturers Association (IMLA) and the International Maritime English Conference (IMEC) is therefore to develop consistent assessment methods for cadets' ME skills which can be implemented worldwide. This paper explores current ME training practice worldwide, and suggests cross-curricular, content-based instruction as a solution for globally unified and coordinated standards of ME skills assessment.

  6. Content Based Radiographic Images Indexing and Retrieval Using Pattern Orientation Histogram

    Directory of Open Access Journals (Sweden)

    Abolfazl Lakdashti

    2008-06-01

    Full Text Available Introduction: Content Based Image Retrieval (CBIR) is a method of image searching and retrieval in a database. In medical applications, CBIR is a tool used by physicians to compare the previous and current medical images associated with patients' pathological conditions. As the volume of pictorial information stored in medical image databases continues to grow, efficient image indexing and retrieval is increasingly becoming a necessity. Materials and Methods: This paper presents a new content based radiographic image retrieval approach based on a histogram of pattern orientations, namely the pattern orientation histogram (POH). POH represents the spatial distribution of five different pattern orientations: vertical, horizontal, diagonal down/left, diagonal down/right and non-orientation. In this method, a given image is first divided into image-blocks and the frequency of each type of pattern is determined in each image-block. Then, local pattern histograms for each of these image-blocks are computed. Results: The method was compared to two well known texture-based image retrieval methods: Tamura and Edge Histogram Descriptors (EHD) in the MPEG-7 standard. Experimental results based on the 10000-image IRMA radiography dataset demonstrate that POH provides better precision and recall rates compared to Tamura and EHD. For some images, the recall and precision rates obtained by POH are, respectively, 48% and 18% better than the best of the two above mentioned methods. Discussion and Conclusion: Since we exploit the absolute location of the pattern in the image as well as its global composition, the proposed matching method can retrieve semantically similar medical images.
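
    The block-wise construction described above can be sketched as follows. This is a plausible reconstruction, not the authors' code: the five orientation classes are assigned here from gradient direction (the paper derives them from local patterns), and the block size and magnitude threshold are arbitrary choices:

    ```python
    import numpy as np

    def pattern_orientation_histogram(img, block=8, mag_thresh=1e-3):
        """POH-style descriptor sketch: per-block counts of five classes
        (non-oriented plus four gradient-orientation bins), concatenated."""
        img = np.asarray(img, dtype=float)
        gy, gx = np.gradient(img)
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180.0
        # 0: non-oriented, 1-4: gradient orientation quantised to 45° bins
        labels = np.zeros(img.shape, dtype=int)
        oriented = mag > mag_thresh
        labels[oriented & ((ang < 22.5) | (ang >= 157.5))] = 1
        labels[oriented & (ang >= 67.5) & (ang < 112.5)] = 2
        labels[oriented & (ang >= 22.5) & (ang < 67.5)] = 3
        labels[oriented & (ang >= 112.5) & (ang < 157.5)] = 4
        h, w = img.shape
        hist = []
        for r in range(0, h - block + 1, block):      # per-block histograms
            for c in range(0, w - block + 1, block):
                blk = labels[r:r + block, c:c + block]
                hist.append(np.bincount(blk.ravel(), minlength=5))
        return np.concatenate(hist)
    ```

    Because the histograms are kept per block rather than pooled, the descriptor retains the absolute location of each pattern, which is the property the abstract credits for the semantic quality of the matches.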

  7. A SYSTEM FOR ACCESSING A COLLECTION OF HISTOLOGY IMAGES USING CONTENT-BASED STRATEGIES

    Directory of Open Access Journals (Sweden)

    Camargo J

    2010-12-01

    Full Text Available Histology images are an important resource for research, education and medical practice. The availability of image collections with reference purposes is limited to printed formats such as books and specialized journals. When histology image sets are published in digital formats, they are composed of some tens of images that do not represent the wide diversity of biological structures that can be found in fundamental tissues. Making a complete histology image collection available to the general public would have a great impact on research and education in different areas such as medicine, biology and natural sciences. This work presents the acquisition process of a histology image collection with 20,000 samples in digital format, from tissue processing to digital image capturing. The main purpose of collecting these images is to make them available as reference material to the academic community. In addition, this paper presents the design and architecture of a system to query and explore the image collection, using content-based image retrieval tools and text-based search on the annotations provided by experts. The system also offers novel image visualization methods to allow easy identification of interesting images among hundreds of possible pictures. The system has been developed using a service-oriented architecture and allows web-based access at http://www.informed.unal.edu.co

  8. Content-Based Image Retrieval Using Support Vector Machine in digital image processing techniques

    Directory of Open Access Journals (Sweden)

    G.V.Hari Prasad

    2012-04-01

    Full Text Available The rapid growth of computer technologies and the advent of the World Wide Web have increased the amount and the complexity of multimedia information. A content-based image retrieval (CBIR) system has been developed as an efficient image retrieval tool, whereby the user can provide their query to the system to allow it to retrieve the user's desired image from the image database. However, the traditional relevance feedback of CBIR has some limitations that will decrease the performance of the CBIR system, such as the imbalanced training-set problem, the classification problem, the limited-information-from-user problem, and the insufficient training-set problem. Therefore, in this study, we proposed an enhanced relevance-feedback method to support the user query based on representative image selection and weight ranking of the images retrieved. The support vector machine (SVM) has been used to support the learning process to reduce the semantic gap between the user and the CBIR system. From these experiments, the proposed learning method has enabled users to improve their search results based on the performance of the CBIR system. In addition, the experiments also proved that by solving the imbalanced training-set issue, the performance of CBIR could be improved.
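
    An SVM relevance-feedback loop of this kind can be sketched as follows: train a classifier on the user-labelled relevant/irrelevant images and re-rank the whole database by its decision value. A minimal Pegasos-style linear SVM stands in for the kernel SVM a real CBIR system would use, and all feature vectors and names are illustrative:

    ```python
    import numpy as np

    def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
        """Pegasos-style sub-gradient training of a linear SVM.
        X: (n, d) feature matrix; y: labels in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w, t = np.zeros(d), 0
        for _ in range(epochs):
            for i in rng.permutation(n):
                t += 1
                eta = 1.0 / (lam * t)
                if y[i] * X[i].dot(w) < 1:        # hinge-loss violation
                    w = (1 - eta * lam) * w + eta * y[i] * X[i]
                else:
                    w = (1 - eta * lam) * w
        return w

    def rank_by_feedback(db_feats, relevant, irrelevant):
        """Rank database images by SVM decision value, most relevant first."""
        X = np.vstack([relevant, irrelevant])
        y = np.r_[np.ones(len(relevant)), -np.ones(len(irrelevant))]
        w = train_linear_svm(X, y)
        return np.argsort(-db_feats.dot(w))
    ```

    Each feedback round adds the newly labelled images to the training set, which is also where the imbalanced training-set problem the abstract mentions arises: users typically mark far fewer relevant than irrelevant images.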

  9. Religion-Based User Generated Content in Online Newspapers Covering the Colectiv Nightclub Fire

    Directory of Open Access Journals (Sweden)

    Radu Cristian Răileanu

    2016-08-01

    Full Text Available The high degree of interactivity of the Internet, combined with the almost ubiquitous presence of forums on online media publications, has offered everybody the possibility to express their opinions and beliefs on websites. This paper uses content analysis to examine the religion-based comments that were posted on 8 Romanian mainstream news websites in reply to articles regarding a fire that broke out during a rock concert in Bucharest, killing over 50 people and injuring more than 100. The analysis also included the answers to these comments. Among the findings, we have discovered that the highest percentage of religion-based comments made some type of reference to Satanism and that very few of them expressed compassion towards the victims. On the other hand, counter-speech strategies managed to halt hate speech in almost half of the cases where they were employed. However, personal attacks against religion-based commentators were the most commonly used form of counter-speech, contributing to an unfriendly climate on the forums.

  10. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers, over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
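
    The 2-state Markov loss model mentioned above (often called a Gilbert model) is straightforward to simulate: a "good" state where packets get through and a "bad" state where they are dropped, with fixed transition probabilities. The probabilities below are illustrative, not the values used in the paper:

    ```python
    import random

    def simulate_gilbert_losses(n, p_gb=0.02, p_bg=0.20, loss_in_bad=1.0, seed=42):
        """Two-state Markov (Gilbert) packet-loss model: the good state never
        drops packets; the bad state drops them with probability loss_in_bad."""
        rng = random.Random(seed)
        state = "good"
        losses = []
        for _ in range(n):
            losses.append(state == "bad" and rng.random() < loss_in_bad)
            if state == "good":
                if rng.random() < p_gb:
                    state = "bad"
            elif rng.random() < p_bg:
                state = "good"
        return losses

    losses = simulate_gilbert_losses(100_000)
    loss_rate = sum(losses) / len(losses)
    # long-run loss rate tends to p_gb / (p_gb + p_bg), here about 0.09
    ```

    Unlike independent (Bernoulli) loss, this model produces bursts of consecutive losses, which is exactly why it is preferred for modelling radio links.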

  11. Investigating elementary education and physical therapy majors' perceptions of an inquiry-based physics content course

    Science.gov (United States)

    Hilton, John Martin

    This study investigates why physical therapy assistant majors engage and perform better than elementary education majors in an inquiry-based conceptual physics course at Mid-Atlantic Community College. The students from each major are demographically similar, both courses are similar in depth and structure, and each course supports the students' program. However, there is an observed difference in the levels of engagement with the curriculum and performance on writing-based assessments between the two groups. To explore possible explanations for the difference, I examine students' affinity for science, their beliefs about the nature of science and scientific knowledge in the classroom, and their perception of the usefulness of science to their program. During semi-structured interviews, students from both majors displayed nearly identical weak affinities for science, epistemological beliefs, and uncertainty about the usefulness of the class. However, the physical therapy majors' ability to see the relevance of the physics course experience to their program enhanced their interest and motivation. In contrast, the elementary education students did not see connections between the course and their program, and did not see a purpose for their learning of physics content. To improve the program, I propose a two-pronged approach: designing a faded-scaffolded-inquiry approach for both classes, and developing a field-based/seminar class for the elementary education majors. The scaffolded inquiry will help both groups develop better orientations toward lab activities, and the structured observations and reflection will help the elementary group connect the material to their program.

  12. Local texton XOR patterns: A new feature descriptor for content-based image retrieval

    Directory of Open Access Journals (Sweden)

    Anu Bala

    2016-03-01

    Full Text Available In this paper, a novel feature descriptor, local texton XOR patterns (LTxXORP), is proposed for content-based image retrieval. The proposed method collects the texton XOR pattern, which captures the structure of the query image or database image. First, the RGB (red, green, blue) color image is converted into the HSV (hue, saturation, value) color space. Second, the V channel is divided into overlapping subblocks of size 2 × 2 and textons are collected based on their shape. Then, an exclusive OR (XOR) operation is performed on the texton image between the center pixel and its surrounding neighbors. Finally, the feature vector is constructed based on the LTxXORPs and HSV histograms. The performance of the proposed method is evaluated by testing on the benchmark databases Corel-1K, Corel-5K and Corel-10K in terms of precision, recall, average retrieval precision (ARP) and average retrieval rate (ARR). The results after investigation show a significant improvement as compared to the state-of-the-art features for image retrieval.
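
    The XOR step between a centre pixel and its eight neighbours can be illustrated with a heavily simplified stand-in. A global-mean binarisation replaces the paper's texton construction on the HSV value channel, so this sketches only the XOR coding idea:

    ```python
    import numpy as np

    def local_xor_pattern(img):
        """Simplified local XOR pattern: binarise the image, then XOR each
        centre bit with its 8 neighbours to build an 8-bit code per pixel."""
        img = np.asarray(img, dtype=float)
        bits = (img >= img.mean()).astype(np.uint8)
        h, w = bits.shape
        codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
        centre = bits[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for k, (dr, dc) in enumerate(offsets):
            nb = bits[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
            codes |= (centre ^ nb).astype(np.uint16) << k
        return codes

    def lxp_histogram(img):
        """256-bin histogram of the local XOR codes, usable as a feature."""
        return np.bincount(local_xor_pattern(img).ravel(), minlength=256)
    ```

    In the paper the resulting pattern histogram is concatenated with HSV histograms to form the final feature vector.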

  13. Web image retrieval using an effective topic and content-based technique

    Science.gov (United States)

    Lee, Ching-Cheng; Prabhakara, Rashmi

    2005-03-01

    There has been an exponential growth in the amount of image data available on the World Wide Web since the early development of the Internet. With such a large amount of information and imagery available, and given its usefulness, an effective image retrieval system is greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improves on existing integrated image retrieval methods. This technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. In the first phase, topic-based image retrieval is performed by using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This technique consists of a focused crawler that allows the user to enter not only the keywords for the topic-based search but also the scope in which the user wants to find the images. In the second phase, we use query-by-example specification to perform a low-level content-based image match in order to retrieve a smaller set of results closer to the example image. From this, information related to the image features is automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.
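
    Exploiting HTML structure for topic-based indexing can be sketched with the standard-library parser: terms are scored more highly when they appear in structurally important tags or in image alt text. The tag weights below are invented for illustration; the paper does not specify its weighting scheme:

    ```python
    from collections import Counter
    from html.parser import HTMLParser

    # Hypothetical weights: titles and headings count more than body text.
    TAG_WEIGHTS = {"title": 5, "h1": 3, "h2": 2}

    class TermScorer(HTMLParser):
        """Accumulate per-term scores weighted by the enclosing HTML tag."""

        def __init__(self):
            super().__init__()
            self.stack = []
            self.scores = Counter()

        def handle_starttag(self, tag, attrs):
            if tag == "img":                      # alt text describes the image
                alt = dict(attrs).get("alt") or ""
                for word in alt.lower().split():
                    self.scores[word] += 4
                return                            # void element: don't push
            self.stack.append(tag)

        def handle_endtag(self, tag):
            if self.stack and self.stack[-1] == tag:
                self.stack.pop()

        def handle_data(self, data):
            weight = TAG_WEIGHTS.get(self.stack[-1] if self.stack else "", 1)
            for word in data.lower().split():
                self.scores[word] += weight

    scorer = TermScorer()
    scorer.feed('<html><title>mars rover</title><body>'
                '<img alt="rover photo">the rover landed</body></html>')
    ```

    Here "rover" scores 10 (title 5 + alt 4 + body 1), so a topic-based index built from these scores would rank this page highly for that query term.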

  14. Local tetra patterns: a new feature descriptor for content-based image retrieval.

    Science.gov (United States)

    Murala, Subrahmanyam; Maheshwari, R P; Balasubramanian, R

    2012-05-01

    In this paper, we propose a novel image indexing and retrieval algorithm using local tetra patterns (LTrPs) for content-based image retrieval (CBIR). The standard local binary pattern (LBP) and local ternary pattern (LTP) encode the relationship between the referenced pixel and its surrounding neighbors by computing gray-level difference. The proposed method encodes the relationship between the referenced pixel and its neighbors, based on the directions that are calculated using the first-order derivatives in vertical and horizontal directions. In addition, we propose a generic strategy to compute nth-order LTrP using (n - 1)th-order horizontal and vertical derivatives for efficient CBIR and analyze the effectiveness of our proposed algorithm by combining it with the Gabor transform. The performance of the proposed method is compared with the LBP, the local derivative patterns, and the LTP based on the results obtained using benchmark image databases viz., Corel 1000 database (DB1), Brodatz texture database (DB2), and MIT VisTex database (DB3). Performance analysis shows that the proposed method improves the retrieval result from 70.34%/44.9% to 75.9%/48.7% in terms of average precision/average recall on database DB1, and from 79.97% to 85.30% and 82.23% to 90.02% in terms of average retrieval rate on databases DB2 and DB3, respectively, as compared with the standard LBP.
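
    The direction coding that distinguishes LTrP from LBP/LTP can be sketched as follows: each pixel is assigned one of four directions from the signs of its first-order horizontal and vertical derivatives. The exact derivative operators and sign conventions in the paper may differ, so treat this as illustrative:

    ```python
    import numpy as np

    def ltrp_directions(img):
        """First-order derivative directions in the spirit of local tetra
        patterns: direction in {1,2,3,4} from the signs of Ih and Iv."""
        img = np.asarray(img, dtype=float)
        # forward differences; last row/column dropped so shapes match
        ih = img[:, 1:] - img[:, :-1]     # horizontal derivative
        iv = img[1:, :] - img[:-1, :]     # vertical derivative
        ih, iv = ih[:-1, :], iv[:, :-1]
        d = np.empty(ih.shape, dtype=int)
        d[(ih >= 0) & (iv >= 0)] = 1
        d[(ih < 0) & (iv >= 0)] = 2
        d[(ih < 0) & (iv < 0)] = 3
        d[(ih >= 0) & (iv < 0)] = 4
        return d
    ```

    The first-order pattern then compares each neighbour's direction with the centre pixel's direction, rather than comparing raw gray levels as LBP does.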

  15. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method, selected by the best objective function values. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems in medical image retrieval applications are used to assist physicians in clinical decision support and research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using HSA-based Otsu MLT and conventional Otsu MLT methods are compared. Precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
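
    The similarity-matching and evaluation steps described above are standard and can be sketched directly. The feature vectors here are toy values, not the statistical/textural/structural features the study extracts:

    ```python
    import numpy as np

    def retrieve(query_feat, db_feats, k):
        """Rank database images by Euclidean distance to the query feature."""
        dists = np.linalg.norm(db_feats - query_feat, axis=1)
        return np.argsort(dists)[:k]

    def precision_recall(retrieved, relevant_set, total_relevant):
        """Precision and recall of a retrieved list against the relevant set."""
        hits = len(set(retrieved) & relevant_set)
        return hits / len(retrieved), hits / total_relevant

    # toy database: images 0, 1 and 3 share the query's feature neighbourhood
    db = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [0.0, 0.2], [6.0, 5.0]])
    top = retrieve(np.array([0.0, 0.0]), db, k=3)
    p, r = precision_recall(top.tolist(), {0, 1, 3}, total_relevant=3)
    ```

    Averaging these per-query values over a query set gives the precision and recall figures reported in the abstract.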

  16. The Pediatrics Milestones Assessment Pilot: Development of Workplace-Based Assessment Content, Instruments, and Processes.

    Science.gov (United States)

    Hicks, Patricia J; Margolis, Melissa; Poynter, Sue E; Chaffinch, Christa; Tenney-Soeiro, Rebecca; Turner, Teri L; Waggoner-Fountain, Linda; Lockridge, Robin; Clyman, Stephen G; Schwartz, Alan

    2016-05-01

    To report on the development of content and user feedback regarding the assessment process and utility of the workplace-based assessment instruments of the Pediatrics Milestones Assessment Pilot (PMAP). One multisource feedback instrument and two structured clinical observation instruments were developed and refined by experts in pediatrics and assessment to provide evidence for nine competencies based on the Pediatrics Milestones (PMs) and chosen to inform residency program faculty decisions about learners' readiness to serve as pediatric interns in the inpatient setting. During the 2012-2013 PMAP study, 18 U.S. pediatric residency programs enrolled interns and subinterns. Faculty, residents, nurses, and other observers used the instruments to assess learner performance through direct observation during a one-month rotation. At the end of the rotation, data were aggregated for each learner, milestone levels were assigned using a milestone classification form, and feedback was provided to learners. Learners and site leads were surveyed and/or interviewed about their experience as participants. Across the sites, 2,338 instruments assessing 239 learners were completed by 630 unique observers. Regarding end-of-rotation feedback, 93% of learners (128/137) agreed the assessments and feedback "helped me understand how those with whom I work perceive my performance," and 85% (117/137) agreed they were "useful for constructing future goals or identifying a developmental path." Site leads identified several benefits and challenges to the assessment process. PM-based instruments used in workplace-based assessment provide a meaningful and acceptable approach to collecting evidence of learner competency development. Learners valued feedback provided by PM-based assessment.

  17. Development of an automatic measuring device for total sugar content in chlortetracycline fermenter based on STM32

    Science.gov (United States)

    Liu, Ruochen; Chen, Xiangguang; Yao, Minpu; Huang, Suyi; Ma, Deshou; Zhou, Biao

    2017-01-01

    Because fermented liquid in chlortetracycline fermenter has high viscosity and complex composition, conventional instruments can't directly measure its total sugar content of fermented liquid. At present, offline artificial sampling measurement is usually the way to measuring total sugar content in chlortetracycline Fermenter. it will take too much time and manpower to finish the measurement., and the results will bring the lag of control process. To realize automatic measurement of total sugar content in chlortetracycline fermenter, we developed an automatic measuring device for total sugar content based on STM32 microcomputer. It can not only realize the function of automatic sampling, filtering, measuring of fermented liquid and automatic washing of the device, but also can make the measuring results display in the field and finish data communication. The experiment results show that the automatic measuring device of total sugar content in chlortetracycline fermenter can meet the demand of practical application.

  18. Lymph node content of supraclavicular and thoracodorsal-based axillary flaps for vascularized lymph node transfer.

    Science.gov (United States)

    Gerety, Patrick A; Pannucci, Christopher J; Basta, Marten N; Wang, Amber R; Zhang, Paul; Mies, Carolyn; Kanchwala, Suhail K

    2016-01-01

    Microvascular transfer of lymph node flaps has recently gained popularity as a treatment for secondary lymphedema often occurring after axillary, groin, or pelvic lymph node dissections. This study aimed to delineate the lymph node contents and pedicle characteristics of the supraclavicular (SC) and thoracodorsal (TD)-based axillary flaps as well as to compare lymph node quantification of surgeon vs pathologist. SC and TD flaps were dissected from fresh female cadavers. The surgeon assessed pedicle characteristics, lymph node content, and anatomy. A pathologist assessed all flaps for gross and microscopic lymph node contents. The κ statistic was used to compare surgeon and pathologist. Ten SC flaps and 10 TD flaps were harvested and quantified. In comparing the SC and TD flaps, there were no statistical differences between artery diameter (3.1 vs 3.2 mm; P = .75) and vein diameter (2.8 vs 3.5 mm; P = .24). The TD flap did have a significantly longer pedicle than the SC flap (4.2 vs 3.2 cm; P = .03). The TD flap was found to be significantly heavier than the SC flap (17.0 ± 4.8 vs 12.9 ± 3.3 g; P = .04). Gross lymph node quantity was similar in the SC and TD flaps (2.5 ± 1.7 vs 1.8 ± 1.2; P = .33). There was good agreement between the surgeon and pathologist in detecting gross lymph nodes in the flaps (SC κ = 0.87, TD κ = 0.61). The SC and TD flaps have similar lymph node quantity, but the SC flap has higher lymphatic density. A surgeon's estimation of lymph node quantity is reliable and has been verified in this study by comparison to a pathologist's examination. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
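
    The κ statistic used above to compare surgeon and pathologist node counts is Cohen's kappa, which corrects the observed agreement for the agreement expected by chance. A minimal sketch, with invented ratings:

    ```python
    def cohens_kappa(a, b):
        """Cohen's kappa for two raters labelling the same items."""
        assert len(a) == len(b)
        n = len(a)
        labels = set(a) | set(b)
        po = sum(x == y for x, y in zip(a, b)) / n               # observed
        pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
        return (po - pe) / (1 - pe)

    # hypothetical binary calls: does each flap contain a gross lymph node?
    surgeon =     [1, 1, 0, 0, 1, 0, 1, 1]
    pathologist = [1, 1, 0, 1, 1, 0, 1, 0]
    kappa = cohens_kappa(surgeon, pathologist)
    ```

    Values around 0.61-0.80 are conventionally read as "good" agreement, which is how the study characterises its SC κ = 0.87 and TD κ = 0.61.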

  19. Evaluation of Web-Based Consumer Medication Information: Content and Usability of 4 Australian Websites.

    Science.gov (United States)

    Raban, Magdalena Z; Tariq, Amina; Richardson, Lauren; Byrne, Mary; Robinson, Maureen; Li, Ling; Westbrook, Johanna I; Baysari, Melissa T

    2016-07-21

    Medication is the most common intervention in health care, and written medication information can affect consumers' medication-related behavior. Research has shown that a large proportion of Australians search for medication information on the Internet. To evaluate the medication information content, based on consumer medication information needs, and usability of 4 Australian health websites: Better Health Channel, myDr, healthdirect, and NPS MedicineWise. To assess website content, the most common consumer medication information needs were identified using (1) medication queries to the healthdirect helpline (a telephone helpline available across most of Australia) and (2) the most frequently used medications in Australia. The most frequently used medications were extracted from Australian government statistics on use of subsidized medicines in the community and the National Census of Medicines Use. Each website was assessed to determine whether it covered or partially covered information and advice about these medications. To assess website usability, 16 consumers participated in user testing wherein they were required to locate 2 pieces of medication information on each website. Brief semistructured interviews were also conducted with participants to gauge their opinions of the websites. Information on prescription medication was more comprehensively covered on all websites (3 of 4 websites covered 100% of information) than nonprescription medication (websites covered 0%-67% of information). Most websites relied on consumer medicines information leaflets to convey prescription medication information to consumers. Information about prescription medication classes was less comprehensive, with no website providing all information examined about antibiotics and antidepressants. Participants (n=16) were able to locate medication information on websites in most cases (accuracy ranged from 84% to 91%). However, a number of usability issues relating to website

  20. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers

    Science.gov (United States)

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179

  1. Quick supramolecular solvent-based microextraction for quantification of low curcuminoid content in food.

    Science.gov (United States)

    Caballero-Casero, Noelia; Ocak, Miraç; Ocak, Ümmüham; Rubio, Soledad

    2014-03-01

    There is a need to monitor the consumption of curcuminoids, an EU-permitted natural colour in food, to ensure that acceptable daily intakes are not exceeded, especially by young children. This paper describes a sensitive method able to quantify low contents of curcumin (CUR), demethoxycurcumin (DMC) and bis-demethoxycurcumin (BDMC) in foodstuffs. The method was based on a single-step extraction by use of a supramolecular solvent (SUPRAS) made up of reverse aggregates of decanoic acid, and direct analysis of the extract by use of liquid chromatography-photodiode array (PDA) detection. The extraction involved the stirring of 200 mg foodstuff with 600 μL SUPRAS for 15 min. No cleanup or concentration of the extracts was required. Curcuminoid solubilisation occurred via dispersion and hydrogen bonding. The method was used for the determination of curcuminoids in different types of foodstuff (snack, gelatine, yoghurt, mayonnaise, butter, candy and fish products) that encompassed a wide range of protein, fat, carbohydrate, sugar and water contents (0.85-11.04, 0-81.11, 0.06-75, 0.06-79.48, and 10.08-85.10 g, respectively, in each 100 g of food). Method quantification limits for the foodstuffs analysed were in the ranges 2.9-7.7, 2.8-11.2 and 3.3-9.0 μg kg⁻¹ for CUR, DMC and BDMC, respectively. The concentrations of curcuminoids detected in the foodstuffs and the recoveries obtained from fortified samples were in the ranges ND-284, ND-201 and ND-61.3 μg kg⁻¹, and 82-106, 89-106 and 90-102 %, for CUR, DMC and BDMC, respectively. The relative standard deviations were in the range 2-7 %. This method enabled quick and simple microextraction of curcuminoids with minimal solvent consumption, while delivering accurate and precise data.

  2. No-reference multiscale blur detection tool for content based image retrieval

    Science.gov (United States)

    Ezekiel, Soundararajan; Stocker, Russell; Harrity, Kyle; Alford, Mark; Ferris, David; Blasch, Erik; Gorniak, Mark

    2014-06-01

    In recent years, digital cameras have been widely used for image capturing. These devices are equipped in cell phones, laptops, tablets, webcams, etc. Image quality is an important component of digital image analysis. To assess image quality for these mobile products, a standard image is required as a reference image. In this case, Root Mean Square Error and Peak Signal to Noise Ratio can be used to measure the quality of the images. However, these methods are not possible if there is no reference image. In our approach, a discrete-wavelet transformation is applied to the blurred image, which decomposes into the approximate image and three detail sub-images, namely horizontal, vertical, and diagonal images. We then focus on noise-measuring the detail images and blur-measuring the approximate image to assess the image quality. We then compute noise mean and noise ratio from the detail images, and blur mean and blur ratio from the approximate image. The Multi-scale Blur Detection (MBD) metric provides both an assessment of the noise and blur content. These values are weighted based on a linear regression against full-reference y values. From these statistics, we can compare to normal useful image statistics for image quality without needing a reference image. We then test the validity of our obtained weights by R² analysis as well as using them to estimate image quality of an image with a known quality measure. The result shows that our method provides acceptable results for images containing low to mid noise levels and blur content.
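
    A one-level wavelet split of the kind described can be sketched with the Haar transform in plain NumPy (using averaging rather than orthonormal scaling). The threshold and the way the noise statistics are formed below are guesses at the authors' metrics, not their actual definitions:

    ```python
    import numpy as np

    def haar_dwt2(img):
        """One-level 2-D Haar transform: approximation LL plus horizontal,
        vertical and diagonal detail sub-images (LH, HL, HH)."""
        img = np.asarray(img, dtype=float)
        a = (img[::2, :] + img[1::2, :]) / 2.0   # row pairs: average
        d = (img[::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
        ll = (a[:, ::2] + a[:, 1::2]) / 2.0
        lh = (a[:, ::2] - a[:, 1::2]) / 2.0
        hl = (d[:, ::2] + d[:, 1::2]) / 2.0
        hh = (d[:, ::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    def noise_stats(img, thresh=1.0):
        """Noise mean / noise ratio measured on the detail sub-images,
        one plausible reading of the metrics described above."""
        _, lh, hl, hh = haar_dwt2(img)
        details = np.abs(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
        noisy = details > thresh
        noise_mean = details[noisy].mean() if noisy.any() else 0.0
        noise_ratio = noisy.mean()
        return noise_mean, noise_ratio
    ```

    Blur statistics would be computed analogously on the LL approximation image, and the four statistics combined via the regression weights.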

  3. A novel cellular automata based technique for visual multimedia content encryption

    Science.gov (United States)

    Chatzichristofis, Savvas A.; Mitzias, Dimitris A.; Sirakoulis, Georgios Ch.; Boutalis, Yiannis S.

    2010-11-01

    This paper proposes a new method for visual multimedia content encryption using Cellular Automata (CA). The encryption scheme is based on the application of an attribute of the CLF XOR filter, according to which the original content of a cellular neighborhood can be reconstructed following a predetermined number of repeated applications of the filter. The encryption is achieved using a key image of the same dimensions as the image being encrypted. This technique is accompanied by the one-time pad (OTP) encryption method, rendering the proposed method reasonably powerful, given the very large number of resultant potential security keys. The method presented here makes encryption possible in cases where there is more than one image with the use of just one key image. A further significant characteristic of the proposed method is that it demonstrates how techniques from the field of image retrieval can be used in the field of image encryption. The proposed method is further strengthened by the fact that the resulting encrypted image for a given key image is different each time. The encryption result depends on the structure of an artificial image produced by the superposition of four 1-D CA time-space diagrams as well as from a CA random number generator. A semi-blind source separation algorithm is used to decrypt the encrypted image. The result of the decryption is a lossless representation of the encrypted image. Simulation results demonstrate the effectiveness of the proposed encryption method. The proposed method is implemented in C# and is available online through the img(Rummager) application.
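
    The one-time-pad component of the scheme, XOR-ing the image with a key image of the same dimensions, is easy to demonstrate. The cellular-automata machinery that generates the key stream and the CLF XOR filter are beyond this sketch, so a random key image stands in:

    ```python
    import numpy as np

    def xor_encrypt(image, key_image):
        """One-time-pad-style XOR of an 8-bit image with a key image of the
        same shape; applying the same operation twice restores the original
        losslessly, matching the decryption property described above."""
        assert image.shape == key_image.shape
        return np.bitwise_xor(image, key_image)

    rng = np.random.default_rng(7)
    img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    key = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in key
    cipher = xor_encrypt(img, key)
    restored = xor_encrypt(cipher, key)
    ```

    In the paper the key stream additionally depends on CA time-space diagrams and a CA random number generator, so the ciphertext differs on each encryption even for a fixed key image.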

  4. A content analysis of smartphone-based applications for hypertension management.

    Science.gov (United States)

    Kumar, Nilay; Khunger, Monica; Gupta, Arjun; Garg, Neetika

    2015-02-01

    Smartphone-based medical applications (apps) can facilitate self-management of hypertension (HTN). The content and consumer interaction metrics of HTN-related apps are unknown. In this cross-sectional study to ascertain the content of medical apps designed for HTN management, we queried Google Play and Apple iTunes using the search terms "hypertension" and "high blood pressure." The top 107 apps were analyzed. Major app functionalities, including tracking (for blood pressure [BP], pulse, weight, body mass index), medical device function (to measure pulse or BP), general information on HTN, and medication adherence tools, were recorded along with consumer engagement parameters. Data were collected from May 28 to May 30, 2014. A total of 72% of the apps had a tracking function, 22% had tools to enhance medication adherence, 37% contained general information on HTN, and 8% contained information on the Dietary Approaches to Stop Hypertension (DASH) diet. These data showed that a majority of apps for HTN are designed primarily for health management functions. However, 14% of Google Android apps could transform the smartphone into a medical device to measure BP. None of these apps employed the use of a BP cuff or had any documentation of validation against a gold standard. Only 3% of the apps were developed by healthcare agencies such as universities or professional organizations. In regression models, the medical device function was highly predictive of a greater number of downloads (odds ratio, 97.08). Most apps designed for HTN serve health management functions such as tracking blood pressure, weight, or body mass index. Consumers have a strong tendency to download and favorably rate apps that are advertised to measure blood pressure and heart rate, despite a lack of validation for these apps. There is a need for greater oversight in medical app development for HTN, especially when they qualify as a medical device. Copyright © 2015 American Society of Hypertension. Published by Elsevier

  5. High-throughput retrotransposon-based fluorescent markers: improved information content and allele discrimination

    Directory of Open Access Journals (Sweden)

    Baker David

    2009-07-01

    Full Text Available Abstract Background Dense genetic maps, together with the efficiency and accuracy of their construction, are integral to genetic studies and marker-assisted selection for plant breeding. High-throughput multiplex markers that are robust and reproducible can contribute to both efficiency and accuracy. Multiplex markers are often dominant and so have low information content; this, coupled with the pressure to find alternatives to radio-labelling, has led us to adapt the SSAP (sequence-specific amplified polymorphism) marker method from a 33P labelling procedure to fluorescently tagged markers analysed on an automated ABI 3730 xl platform. This method is illustrated for multiplexed SSAP markers based on retrotransposon insertions of pea and is applicable for the rapid and efficient generation of markers from genomes where repetitive-element sequence information is available for primer design. We cross-reference SSAP markers previously generated using the 33P manual PAGE system to fluorescent peaks, and use these high-throughput fluorescent SSAP markers for further genetic studies in Pisum. Results The optimal conditions for the fluorescent-labelling method used a triplex set of primers in the PCR: a fluorescently labelled specific primer together with its unlabelled counterpart, plus an adapter-based primer with two bases of selection on the 3' end. The introduction of the unlabelled specific primer helped to optimise the fluorescent signal across the range of fragment sizes expected, and eliminated the need for extensive dilutions of PCR amplicons. The software (GeneMarker Version 1.6) used for the high-throughput data analysis provided an assessment of amplicon size in nucleotides, peak areas, and fluorescence intensity in a table format, so providing additional information content for each marker. 
The method has been tested in a small-scale study with 12 pea accessions resulting in 467 polymorphic fluorescent SSAP markers of which

  6. Computer-aided diagnostics of screening mammography using content-based image retrieval

    Science.gov (United States)

    Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo

    2012-03-01

    Breast cancer is one of the main causes of death among women in occidental countries. In the last years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue pattern based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on in total 10,509 radiographs that have been collected from different sources. From this, 3,375 images are provided with one and 430 radiographs with more than one chain code annotation of cancerous regions. In different experiments, this data is divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology and in the 20 class problem two categories of different types of lesions. Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of a SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for the 12 and the 20 class problem, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of a SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with a smarter patch extraction, the CBIR approach might reach precision rates that are helpful for the physicians. This, however, needs more comprehensive evaluation on clinical data.
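The pipeline in the abstract above (features from fixed-size patches, dimensionality reduction, then SVM classification) can be sketched in a few lines. This is a minimal illustration on synthetic data: standard PCA stands in for the paper's two-dimensional PCA, and the patch data, class structure, and all parameter values are invented for the example, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened 128 x 128 tissue patches: two fake
# "tissue classes" whose pixel statistics differ slightly.
n_per_class, patch_dim = 60, 128 * 128
X = np.vstack([
    rng.normal(0.0, 1.0, (n_per_class, patch_dim)),
    rng.normal(0.4, 1.0, (n_per_class, patch_dim)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Reduce each flattened patch to a compact feature vector, then classify.
pca = PCA(n_components=20).fit(X_train)
clf = SVC(kernel="rbf").fit(pca.transform(X_train), y_train)
accuracy = clf.score(pca.transform(X_test), y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On real mammography patches the class overlap is far larger, which is why the reported raw accuracies sit near 52-62% rather than near the ceiling this toy setup reaches.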

  7. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    Science.gov (United States)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
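The design choice discussed above, multiple binary SVMs rather than one multi-class SVM, matters because an image can contain several materials at once. The sketch below shows that multi-label behaviour with one binary classifier per material; the colour-histogram features, the synthetic images, and all names are illustrative assumptions (the study additionally used Gabor texture features, omitted here).

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

MATERIALS = ["brick", "grass", "wood"]

def color_histogram(img, bins=8):
    """Concatenate per-channel histograms of an RGB image (values in [0, 1])."""
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 1), density=True)[0]
        for c in range(3)])

def fake_image(channel):
    """Synthetic single-material image with one dominant colour channel."""
    img = rng.uniform(0, 0.3, (16, 16, 3))
    img[..., channel] += 0.6
    return np.clip(img, 0, 1)

X, Y = [], []
for label, channel in enumerate([0, 1, 2]):
    for _ in range(30):
        X.append(color_histogram(fake_image(channel)))
        Y.append(label)
X, Y = np.array(X), np.array(Y)

# One binary detector per material: an image may then be tagged with several
# materials, which a single multi-class SVM cannot express.
detectors = {m: LinearSVC().fit(X, (Y == i).astype(int))
             for i, m in enumerate(MATERIALS)}

query = color_histogram(fake_image(1))  # a "grass"-coloured test image
tags = [m for m, clf in detectors.items() if clf.predict([query])[0] == 1]
print(tags)
```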

  8. Physiologically based pharmacokinetic modeling of PLGA nanoparticles with varied mPEG content

    Directory of Open Access Journals (Sweden)

    Avgoustakis K

    2012-03-01

    Full Text Available Mingguang Li1, Zoi Panagi2, Konstantinos Avgoustakis2, Joshua Reineke1 1Department of Pharmaceutical Sciences, Eugene Applebaum College of Pharmacy and Health Sciences, Wayne State University, Detroit, MI, USA; 2Pharmaceutical Technology Laboratory, Department of Pharmacy, University of Patras, Rion, Patras, Greece. Abstract: Biodistribution of nanoparticles is dependent on their physicochemical properties (such as size, surface charge, and surface hydrophilicity). A clear and systematic understanding of the effects of nanoparticle properties on their in vivo performance is of fundamental significance in nanoparticle design, development and optimization for medical applications, and in toxicity evaluation. In the present study, a physiologically based pharmacokinetic model was utilized to interpret the effects of nanoparticle properties on previously published biodistribution data. Biodistribution data for five poly(lactic-co-glycolic acid) (PLGA) nanoparticle formulations prepared with varied content of monomethoxypoly(ethylene glycol) (mPEG) (PLGA, PLGA-mPEG256, PLGA-mPEG153, PLGA-mPEG51, PLGA-mPEG34) were collected in mice after intravenous injection. A physiologically based pharmacokinetic model was developed and evaluated to simulate the mass-time profiles of nanoparticle distribution in tissues. In anticipation that the biodistribution of new nanoparticle formulations could be predicted from the physiologically based pharmacokinetic model, multivariate regression analysis was performed to build the relationship between nanoparticle properties (size, zeta potential, and number of PEG molecules per unit surface area) and biodistribution parameters. Based on these relationships, the characterized physicochemical properties of PLGA-mPEG495 nanoparticles (a sixth formulation) were used to calculate (predict) biodistribution profiles. 
For all five initial formulations, the developed model adequately simulates the experimental data indicating that the model is suitable for
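The prediction step described above, regressing a fitted model parameter on measured particle properties and then extrapolating to an unseen formulation, can be sketched with ordinary least squares. Every number below (property values, the "liver uptake rate" response, the sixth formulation's properties) is a made-up placeholder, not data from the paper.

```python
import numpy as np

# Hypothetical property matrix for five formulations:
# columns = [size (nm), zeta potential (mV), PEG chains per nm^2].
props = np.array([
    [110.0, -35.0, 0.00],
    [ 95.0, -20.0, 0.15],
    [ 90.0, -15.0, 0.25],
    [ 85.0, -10.0, 0.40],
    [ 80.0,  -8.0, 0.55],
])
# Hypothetical fitted PBPK parameter (e.g. a liver uptake rate, 1/h).
k_liver = np.array([1.9, 1.3, 1.0, 0.7, 0.5])

# Multivariate linear regression with an intercept column: k = X @ beta.
X = np.column_stack([np.ones(len(props)), props])
beta, *_ = np.linalg.lstsq(X, k_liver, rcond=None)

# Predict the parameter for a sixth, unseen formulation from its properties.
new_props = np.array([1.0, 88.0, -12.0, 0.35])
k_pred = float(new_props @ beta)
print(f"predicted uptake rate: {k_pred:.2f} 1/h")
```

The real study feeds such predicted parameters back into the PBPK compartment model to generate full tissue concentration-time profiles; the regression above is only the property-to-parameter link.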

  9. AN EFFICIENT/ENHANCED CONTENT BASED IMAGE RETRIEVAL FOR A COMPUTATIONAL ENGINE

    Directory of Open Access Journals (Sweden)

    K. V. Shriram

    2014-01-01

    Full Text Available A picture is worth a thousand words, an adage particularly pertinent to the field of image processing. In recent years, advances in VLSI technology have made powerful processors abundantly available, and with the falling price of RAM, databases can now store information about art works, medical images such as CT scans, satellite images, nature photography, album images, and images of convicts for security purposes, giving rise to massive databases with diverse image collections. This leads to the problem of retrieving relevant images from a huge, diverse database. Web search engines are expected to deliver flawless results, in terms of both accuracy and speed, within a short span of time, and an image search engine comes under the same roof: the results of an image search should match the best available image in the database. Content-Based Image Retrieval (CBIR) has been proposed to enable image search engines to deliver such results. In CBIR, using only color and texture as parameters for zeroing in on an image may not fetch the best result, and most existing systems use keyword-based search, which can yield inappropriate results. This research addresses the above-mentioned drawbacks of CBIR. A complete analysis of CBIR, including a combination of features, has been carried out, implemented and tested.
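The core CBIR operation the abstract alludes to, comparing a combined feature descriptor of a query image against every database image, can be sketched as follows. The specific descriptor (an intensity histogram plus a crude gradient-based texture histogram) and the toy image groups are assumptions for illustration, not the features the paper combines.

```python
import numpy as np

rng = np.random.default_rng(2)

def features(img):
    """Combined descriptor: intensity histogram plus a crude texture statistic
    (gradient-magnitude histogram), concatenated and L2-normalised."""
    color = np.histogram(img, bins=8, range=(0, 1), density=True)[0]
    grad = np.abs(np.diff(img, axis=0))
    texture = np.histogram(grad, bins=8, range=(0, 1), density=True)[0]
    v = np.concatenate([color, texture])
    return v / (np.linalg.norm(v) + 1e-12)

# Toy database: five smooth dark images, then five noisy bright images.
db = [rng.uniform(0.0, 0.3, (32, 32)) for _ in range(5)] + \
     [rng.uniform(0.5, 1.0, (32, 32)) for _ in range(5)]
db_feats = np.array([features(img) for img in db])

query = rng.uniform(0.5, 1.0, (32, 32))          # resembles the bright group
dists = np.linalg.norm(db_feats - features(query), axis=1)
ranking = np.argsort(dists)                      # best matches first
print(ranking[:3])
```

With only a colour histogram the two groups would already separate here; combining features matters precisely in the harder cases where colour alone is ambiguous.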

  10. Content-based numerical report searching for image enabled case retrieval

    Science.gov (United States)

    Xue, Liang; Ling, Tonghui; Zhang, Jianguo

    2010-03-01

    One way to improve accuracy of diagnosis and provide better medical treatment to patients is to recall or find records of previous patients with similar disease features, and already-confirmed diagnostic results, from healthcare information systems. In many situations, features of disease are described by other kinds of information or data types, such as numerical reports or simple or complicated SR (Structured Reports) generated from an Ultrasound Information System (USIS), from computer-assisted detection (CAD) components, or from a laboratory information system (LIS). In this presentation, we describe a new approach to search and retrieve numerical reports based on the contents of their parameters from a large database of numerical reports. We tested this approach using numerical data from an ultrasound information system (USIS) and obtained the desired results in both accuracy and performance. The system can be wrapped as a web service and is being integrated into a USIS and EMR for clinical evaluation without interrupting the normal operations of USIS/RIS/PACS. We give the design architecture and implementation strategy of this novel framework to provide feature-based case retrieval capability in an integrated healthcare information system.

  11. Synthesis and high content cell-based profiling of simplified analogues of the microtubule stabilizer (+)-discodermolide.

    Science.gov (United States)

    Minguez, Jose M; Giuliano, Kenneth A; Balachandran, Raghavan; Madiraju, Charitha; Curran, Dennis P; Day, Billy W

    2002-12-01

    (+)-Discodermolide, a C24:4, trihydroxylated, octamethyl, carbamate-bearing fatty acid lactone originally isolated from a Caribbean sponge, has proven to be the most potent of the microtubule-stabilizing agents. Recent studies suggest that it or its analogues may have advantages over other classes of microtubule-stabilizing agents. (+)-Discodermolide's complex molecular architecture has made structure-activity relationship analysis in this class of compounds a formidable task. The goal of this study was to prepare simplified analogues of (+)-discodermolide and to analyze their biological activities to expand structure-activity relationships. A small library of analogues was prepared wherein the (+)-discodermolide methyl groups at C-14 and C-16 and the C-7 hydroxyl were removed, and the lactone was replaced by simple esters. The library components were analyzed for microtubule-stabilizing actions in vitro, antiproliferative activity against a small panel of human carcinoma cells, and cell signaling, microtubule architecture and mitotic spindle alterations by a multiparameter fluorescence cell-based screening technique. The results show that even drastic structural simplification can lead to analogues with actions related to microtubule targeting and signal transduction, but that these subtle effects were illuminated only through the high information content cell-based screen.

  12. Electrical properties of multiphase composites based on carbon nanotubes and an optimized clay content

    Science.gov (United States)

    Egiziano, Luigi; Lamberti, Patrizia; Spinelli, Giovanni; Tucci, Vincenzo; Guadagno, Liberata; Vertuccio, Luigi

    2016-05-01

    Experimental results are presented for the characterization of multiphase nanocomposite systems based on an epoxy matrix loaded with different amounts of multi-walled carbon nanotubes (MWCNTs) and an optimized hydrotalcite (HT) clay content (0.6 wt%), identified in our previous theoretical study based on Design of Experiments (DoE). Dynamic-mechanical analysis (DMA) reveals that even the introduction of higher HT loadings (up to 1 wt%) does not significantly affect the mechanical properties of the nanocomposites, while morphological investigations show an effective synergy between clay and carbon nanotubes that leads to peculiar micro/nanostructures favoring the creation of an electrically conductive network inside the insulating resin. An electrical characterization is carried out in terms of DC electrical conductivity, percolation threshold (EPT), and frequency response in the range 10 Hz-1 MHz. In particular, the DC conductivity measurements yield the typical "percolation" curve also found for classical CNT-polymer mixtures, and an electrical conductivity of about 2 S/m is achieved at the highest considered CNT concentration (1 wt%). The results suggest that multiphase nanocomposites obtained by incorporating dispersive nanofillers, in addition to the conductive one, may be a valid alternative to polymer blends for improving the properties of polymeric materials, particularly the mechanical and thermal stability and the electrical features required in aircraft engineering.
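The "percolation curve" mentioned above is conventionally fitted with the power law sigma = sigma0 * (phi - phi_c)^t above the percolation threshold phi_c. A minimal sketch of that fit, on synthetic data loosely mimicking an epoxy/MWCNT composite reaching ~2 S/m at 1 wt% (all values invented, not the measured data):

```python
import numpy as np

# Classical percolation law: sigma = sigma0 * (phi - phi_c)**t for phi > phi_c.
phi = np.array([0.3, 0.4, 0.5, 0.7, 1.0])          # filler content, wt%
phi_c, t_true, sigma0 = 0.25, 2.0, 3.5              # assumed "true" values
sigma = sigma0 * (phi - phi_c) ** t_true            # synthetic conductivities

# Linearise: log(sigma) = log(sigma0) + t * log(phi - phi_c), then fit t.
x = np.log(phi - phi_c)
y = np.log(sigma)
t_fit, log_sigma0 = np.polyfit(x, y, 1)
print(f"fitted exponent t = {t_fit:.2f}, sigma0 = {np.exp(log_sigma0):.2f} S/m")
```

In practice phi_c is not known in advance and is itself estimated, typically by scanning candidate thresholds and keeping the one that makes the log-log plot most linear.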

  13. Content-based high-resolution remote sensing image retrieval with local binary patterns

    Science.gov (United States)

    Wang, A. P.; Wang, S. G.

    2006-10-01

    Texture is a very important feature in image analysis, including content-based image retrieval (CBIR). A common way of retrieving images is to calculate the similarity of features between a sample image and the other images in a database. This paper applies a novel texture analysis approach, the local binary patterns (LBP) operator, to the retrieval of 1 m Ikonos images, and presents an improved LBP histogram, the spatially enhanced LBP (SEL) histogram, which adds spatial information by dividing the LBP-labeled images into k*k regions. First, different neighborhood sizes P and scale factors R were chosen to scan the whole images, and the labeled LBP and local variance (VAR) images were calculated, from which the LBP, LBP/VAR, VAR, and SEL histograms were obtained. These histograms were used as the features for CBIR, with the non-parametric statistical test G-statistic as the similarity measure. The results showed that LBP/VAR-based features achieved a very high retrieval rate for certain values of P and R, and that SEL features, which are more robust to illumination changes than LBP/VAR, also obtained a higher retrieval rate than plain LBP histograms. A comparison with Gabor filters confirmed the effectiveness of the presented approach for CBIR.
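The basic machinery above, LBP codes, their histogram, and a G-statistic similarity, can be sketched compactly. This is a simplified variant for illustration: a plain 8-neighbour LBP on the immediate pixel ring (not the general P, R interpolated operator), a log-likelihood-ratio G-statistic on normalised histograms, and synthetic textures in place of Ikonos imagery.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code from
    thresholding its neighbours against the centre; return the normalised
    histogram of codes."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= ((nb >= c) << bit).astype(np.uint8)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def g_statistic(h_sample, h_model, eps=1e-12):
    """Simplified G-statistic (log-likelihood ratio) between two normalised
    histograms; smaller means more similar."""
    return 2.0 * np.sum(h_sample * np.log((h_sample + eps) / (h_model + eps)))

rng = np.random.default_rng(3)

def box_smooth(n):
    return (n[:-1, :-1] + n[1:, :-1] + n[:-1, 1:] + n[1:, 1:]) / 4.0

fine = rng.normal(size=(65, 65))[:-1, :-1]       # uncorrelated "fine" texture
coarse = box_smooth(rng.normal(size=(65, 65)))   # locally averaged "coarse" texture
query = box_smooth(rng.normal(size=(65, 65)))    # same family as `coarse`

d_coarse = g_statistic(lbp_histogram(query), lbp_histogram(coarse))
d_fine = g_statistic(lbp_histogram(query), lbp_histogram(fine))
print(d_coarse < d_fine)  # query matches the coarse texture better
```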

  14. Non destructive determination of the free chloride content in cement based materials

    Energy Technology Data Exchange (ETDEWEB)

    Elsener, B. [Department of Inorganic and Analytical Chemistry, University of Cagliari, I-09128 Cagliari (Italy); Institute of Materials Chemistry and Corrosion, Swiss Federal Institute of Technology, ETH Hoenggerberg, CH-8093 Zuerich (Switzerland); Zimmermann, L.; Boehni, H. [Institute of Materials Chemistry and Corrosion, Swiss Federal Institute of Technology, ETH Hoenggerberg, CH-8093 Zuerich (Switzerland)

    2003-06-01

    A non-destructive chloride-sensitive sensor element for use in cement-based porous materials is presented. The sensor element determines the activity of the free chloride ions in solutions and in porous cement-based materials such as cement paste, mortar or concrete. Calibration in synthetic pore solution showed a response according to the Nernst law over three decades of chloride concentration. The sensor element has shown excellent reproducibility and long-term stability, and has been used to monitor chloride uptake into mortar specimens. The results show good agreement between the free chloride content determined by the sensor and by pore water expression. Applications in the monitoring of reinforced concrete structures, and their limitations, are discussed. (Abstract Copyright [2003], Wiley Periodicals, Inc.)
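The "response according to the Nernst law" mentioned above means the electrode potential varies linearly with the logarithm of chloride activity, E = E0 - (RT/F) ln(a_Cl), giving about 59 mV per decade at 25 degC. A quick numerical sketch (the calibration intercept E0 is a made-up value, not from the paper):

```python
import math

R, F = 8.314, 96485.0           # J/(mol*K), C/mol
T = 298.15                       # K (25 degC)
E0 = 0.100                       # V, assumed calibration intercept
slope = R * T / F                # ~0.0257 V per ln-unit

def chloride_activity(E_measured):
    """Invert the Nernst equation to recover chloride activity (mol/L)."""
    return math.exp((E0 - E_measured) / slope)

# A 10x increase in activity shifts the potential by ~59 mV at 25 degC.
E1 = E0 - slope * math.log(0.01)   # potential at a_Cl = 0.01 mol/L
E2 = E0 - slope * math.log(0.10)   # potential at a_Cl = 0.10 mol/L
print(f"delta E per decade: {(E1 - E2) * 1000:.1f} mV")
```

In a real calibration, E0 and the slope are both fitted from measurements in solutions of known chloride activity, and a sub-Nernstian fitted slope signals sensor degradation.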

  15. Accelerating Content-Based Image Retrieval via GPU-Adaptive Index Structure

    Directory of Open Access Journals (Sweden)

    Lei Zhu

    2014-01-01

    Full Text Available A tremendous amount of work has been conducted in content-based image retrieval (CBIR) on designing effective index structures to accelerate the retrieval process. Most of these improve retrieval efficiency via complex index structures, and few take into account their parallel implementation on the underlying hardware, leaving existing index structures with a low degree of parallelism. In this paper, a novel graphics processing unit (GPU) adaptive index structure, termed plane semantic ball (PSB), is proposed to simultaneously reduce the work of the retrieval process and exploit the parallel acceleration of the underlying hardware. In PSB, semantics are embedded into the generation of representative pivots, and multiple balls are selected to cover more informative reference features. With PSB, the online retrieval of CBIR is factorized into independent components that are implemented efficiently on the GPU. Comparative experiments with a GPU-based brute-force approach demonstrate that the proposed approach achieves high speedup with little information loss. Furthermore, PSB is compared with a state-of-the-art approach, random ball cover (RBC), on two standard image datasets, Corel 10K and GIST 1M. Experimental results show that our approach achieves a higher speedup than RBC at the same accuracy level.
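The common idea behind ball-cover indexes such as RBC (and, by the abstract's description, PSB) is to group database points around a small set of pivots so a query only scans the balls of its nearest pivots, and each ball is an independent chunk of work suited to parallel hardware. A CPU-side sketch of that pruning, with all sizes and the probe count chosen arbitrarily for illustration (this is not the PSB or RBC algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy database of image feature vectors and a ball-cover style index:
# a few pivots, each "owning" the points nearest to it.
db = rng.normal(size=(2000, 32)).astype(np.float32)
n_pivots = 20
pivots = db[rng.choice(len(db), n_pivots, replace=False)]
owner = np.argmin(np.linalg.norm(db[:, None] - pivots[None], axis=2), axis=1)

def query_index(q, n_probe=3):
    """Search only the balls of the n_probe closest pivots instead of all
    points. Each ball is an independent chunk, which is what makes the
    scheme amenable to parallel (e.g. GPU) evaluation."""
    d_piv = np.linalg.norm(pivots - q, axis=1)
    probes = np.argsort(d_piv)[:n_probe]
    candidates = np.flatnonzero(np.isin(owner, probes))
    d = np.linalg.norm(db[candidates] - q, axis=1)
    return int(candidates[np.argmin(d)]), len(candidates)

q = rng.normal(size=32).astype(np.float32)
approx_idx, n_checked = query_index(q)
exact_idx = int(np.argmin(np.linalg.norm(db - q, axis=1)))
print(f"checked {n_checked}/{len(db)} points; exact match: {approx_idx == exact_idx}")
```

The approximation trades a small chance of missing the true nearest neighbour for examining only a fraction of the database, which is the "little information loss" versus brute force that the experiments quantify.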

  16. Searching for document contents in an IHE-XDS EHR architecture via archetype-based indexing of document types.

    Science.gov (United States)

    Rinner, Christoph; Kohler, Michael; Saboor, Samrend; Huebner-Bloder, Gudrun; Ammenwerth, Elske; Duftschmid, Georg

    2013-01-01

    The shared EHR (electronic health record) system architecture IHE XDS is widely adopted internationally. It ensures a high level of data privacy via distributed storage of EHR documents. Its standard search capabilities, however, are limited: it only allows retrieval of complete documents by querying a restricted set of document metadata. Existing approaches that aim to extend XDS queries to document contents typically employ a central index of document contents, thereby undermining XDS's basic characteristic of distributed data storage. To avoid data privacy concerns, we propose querying EHR contents in XDS by indexing document types based on archetypes instead. We successfully tested our approach within the ISO/EN 13606 standard.

  17. Popularity based distribution schemes for P2P assisted streaming of VoD contents

    OpenAIRE

    Gramatikov, Sasho; Jaureguizar Núñez, Fernando; Cabrera Quesada, Julian; García Santos, Narciso

    2012-01-01

    The Video on Demand (VoD) service is becoming a dominant service in the telecommunication market due to the great convenience regarding the choice of content items and their independent viewing time. However, it comes with the downsides of high server storage and capacity demands because of the large variety of content items and the high amount of traffic generated for serving all requests. Storing part of the popular contents on the peers brings certain advantages but, it still has issues re...

  18. Developing E-Learning Based on Animation Content for Improving Mathematical Connection Abilities in High School Students

    OpenAIRE

    Dedi Rohendi

    2012-01-01

    The purpose of this paper is to develop e-learning based on animation content for improving mathematical connection abilities in senior high school students. The e-learning was developed using Moodle, and the animation content was developed using Macromedia Flash. Students' mathematical connection abilities were measured with mathematical tests before and after the teaching and learning process. The data were analyzed using a t-test and a gain value test. The study found that ...

  19. Identify Web-page Content meaning using Knowledge based System for Dual Meaning Words

    OpenAIRE

    Sinha, Sukanta; Dattagupta, Rana; Mukhopadhyay, Debajyoti

    2012-01-01

    The meaning of Web-page content plays a big role in producing search results from a search engine. In most cases the Web-page meaning is stored in the title or meta-tag area, but those meanings do not always match the Web-page content. To overcome this, we need to go through the Web-page content itself to identify the Web-page meaning. Where the content holds dual-meaning words, it is really difficult to identify the meaning of the Web-page. In this paper, we are introdu...

  20. PetroSPIRE: a multimodal content-based retrieval system for petroleum applications

    Science.gov (United States)

    Bergman, Lawrence D.; Castelli, Vittorio; Li, Chung-Sheng; Tilke, Peter; Bryant, Ian

    1999-08-01

    In this paper we present a novel content-based search application for petroleum exploration and production. The target application is the specification of, and search for, geologically significant features extracted from 2D imagery acquired from oil well bores, in conjunction with 1D parameter traces. The PetroSPIRE system permits a user to define rock strata using image examples in conjunction with parameter constraints. Similarity retrieval is based on multimodal search and relies on texture-matching techniques using pre-extracted texture features, employing high-dimensional indexing and nearest-neighbor search. Special-purpose visualization techniques allow a user to evaluate object definitions, which can then be iteratively refined by supplying multiple positive and negative image examples as well as multiple parameter constraints. Higher-level semantic constructs can be created from simpler entities by specifying sets of inter-object constraints. A delta-lobe riverbed, for example, might be specified as a layer of siltstone which is above and within 10 feet of a layer of sandstone, with an intervening layer of shale. These 'compound objects', along with simple objects, form a library of searchable entities that can be used in an operational setting. Both object definition and search are accomplished using a web-based Java client supporting image and parameter browsing, drag-and-drop query specification, and thumbnail viewing of query results. Initial results from this search engine have been deemed encouraging by oil-industry E and P researchers. A more ambitious pilot is underway to evaluate the efficacy of this approach on a large database from a North Sea drilling site.

  1. Increases in synthetic cannabinoids-related harms: Results from a longitudinal web-based content analysis.

    Science.gov (United States)

    Lamy, Francois R; Daniulaityte, Raminta; Nahhas, Ramzi W; Barratt, Monica J; Smith, Alan G; Sheth, Amit; Martins, Silvia S; Boyer, Edward W; Carlson, Robert G

    2017-06-01

    Synthetic Cannabinoid Receptor Agonists (SCRA), also known as "K2" or "Spice," have drawn considerable attention due to their potential for abuse and harmful consequences. More research is needed to understand user experiences of SCRA-related effects. We use semi-automated information processing techniques through the eDrugTrends platform to examine SCRA-related effects and their variations through a longitudinal content analysis of web-forum data. English language posts from three drug-focused web-forums were extracted and analyzed between January 1st 2008 and September 30th 2015. Search terms were based on the Drug Use Ontology (DAO) created for this study (189 SCRA-related and 501 effect-related terms). eDrugTrends NLP-based text processing tools were used to extract posts mentioning SCRA and their effects. Generalized linear regression was used to fit restricted cubic spline functions of time to test whether the proportion of drug-related posts that mention SCRA (and no other drug) and the proportion of these "SCRA-only" posts that mention SCRA effects have changed over time, with an adjustment for multiple testing. 19,052 SCRA-related posts (Bluelight (n=2782), Forum A (n=3882), and Forum B (n=12,388)) posted by 2543 international users were extracted. The most frequently mentioned effects were "getting high" (44.0%), "hallucinations" (10.8%), and "anxiety" (10.2%). The frequency of SCRA-only posts declined steadily over the study period. The proportions of SCRA-only posts mentioning positive effects (e.g., "High" and "Euphoria") steadily decreased, while the proportions of SCRA-only posts mentioning negative effects (e.g., "Anxiety," "Nausea," "Overdose") increased over the same period. This study's findings indicate that the proportion of negative effects mentioned in web forum posts and linked to SCRA has increased over time, suggesting that recent generations of SCRA generate more harms. 
This is also one of the first studies to conduct automated content analysis

  2. A Review of Research on Content-Based Foreign/Second Language Education in US K-12 Contexts

    Science.gov (United States)

    Tedick, Diane J.; Wesely, Pamela M.

    2015-01-01

    This review of the extant research literature focuses on research about content-based language instruction (CBI) programmes in K-12 foreign/second language education in the USA. The review emphasises studies on one-way language immersion (OWI) and two-way language immersion (TWI) programmes, which are school-based and subject matter-driven. OWI…

  3. [Disinfection efficacy of hand hygiene based on chlorhexidine gluconate content and usage of alcohol-based hand-rubbing solution].

    Science.gov (United States)

    Tanaka, Ippei; Watanabe, Kiyoshi; Nakaminami, Hidemasa; Azuma, Chihiro; Noguchi, Norihisa

    2014-01-01

    Recently, the procedure for surgical hand hygiene has been switching to a two-stage method and hand-rubbing method from the traditional hand-scrubbing method. Both the two-stage and hand-rubbing methods use alcohol-based hand-rubbing after hand washing. The former requires 5 min of antiseptic hand washing, and the latter 1 min of nonantiseptic hand washing. For a prolonged bactericidal effect in terms of surgical hand hygiene, chlorhexidine gluconate (CHG) has been noted due to its residual activity. However, no detailed study comparing the disinfection efficacy and prolonged effects according to different contents of CHG and the usage of alcohol-based hand-rubbing has been conducted. The glove juice method is able to evaluate disinfection efficacy and prolonged effects of the disinfectants more accurately because it can collect not only transitory bacteria but also normal inhabitants on hands. In the present study, we examined the disinfection efficacy and prolonged effects on alcohol-based hand-rubbing containing CHG by six hand-rubbing methods and three two-stage methods using the glove juice method. In both methods, 3 mL (one pump dispenser push volume) alcohol-based hand-rubbing solution containing 1% (w/v) CHG showed the highest disinfection efficacy and prolonged effects, and no significant difference was found between the hand-rubbing and two-stage methods. In the two methods of hand hygiene, the hand-rubbing method was able to save time and cost. Therefore, the data strongly suggest that the hand-rubbing method using a one pump dispenser push volume of alcohol-based hand-rubbing solution containing 1% (w/v) CHG is suitable for surgical hand hygiene.

  4. Design of Open Content Social Learning Based on the Activities of Learner and Similar Learners

    Science.gov (United States)

    John, Benneaser; Jayakumar, J.; Thavavel, V.; Arumugam, Muthukumar; Poornaselvan, K. J.

    2017-01-01

    Teaching and learning are increasingly taking advantage of the rapid growth in Internet resources, open content, mobile technologies and social media platforms. However, due to the generally unstructured nature and overwhelming quantity of learning content, effective learning remains challenging. In an effort to close this gap, the authors…

  5. "UML Quiz": Automatic Conversion of Web-Based E-Learning Content in Mobile Applications

    Science.gov (United States)

    von Franqué, Alexander; Tellioglu, Hilda

    2014-01-01

    Many educational institutions use Learning Management Systems to provide e-learning content to their students. This often includes quizzes that can help students to prepare for exams. However, the content is usually web-optimized and not very usable on mobile devices. In this work a native mobile application ("UML Quiz") that imports…

  6. A Learning Content Authoring Approach Based on Semantic Technologies and Social Networking: An Empirical Study

    Science.gov (United States)

    Nesic, Sasa; Gasevic, Dragan; Jazayeri, Mehdi; Landoni, Monica

    2011-01-01

    Semantic web technologies have been applied to many aspects of learning content authoring including semantic annotation, semantic search, dynamic assembly, and personalization of learning content. At the same time, social networking services have started to play an important role in the authoring process by supporting authors' collaborative…

  7. Petalz: Search-based Procedural Content Generation for the Casual Gamer

    DEFF Research Database (Denmark)

    Risi, S.; Lehman, J.; D'Ambrosio, D.B;

    2015-01-01

    The impact of game content on the player experience is potentially more critical in casual games than in competitive games because of the diminished role of strategic or tactical diversions. Interestingly, until now procedural content generation (PCG) has nevertheless been investigated almost...

  8. Residual monomer content determination in some acrylic denture base materials and possibilities of its reduction

    Directory of Open Access Journals (Sweden)

    Kostić Milena

    2009-01-01

    Background/Aim. Polymethyl methacrylate, a material made by the polymerization of methyl methacrylate, is used for producing denture bases. Regardless of the polymerization type, a certain amount of free methyl methacrylate (residual monomer) remains incorporated in the denture and can cause irritation of the oral mucosa. The aim of this study was to determine the amount of residual monomer in four different denture base acrylic resins by liquid chromatography, and the possibility of its reduction. Methods. After the polymerization, a postpolymerization treatment was performed in three different ways: in boiling water for thirty minutes, with 500 W microwaves for three minutes, and in a steam bath at 22º C for one to thirty days. Results. The obtained results showed that the amount of residual monomer is significantly higher in cold-polymerizing acrylates (9.1-11%). The amount of residual monomer after hot polymerization was in the tolerance range (0.59-0.86%). Conclusion. The obtained results indicate a low content of residual monomer in the samples which underwent postpolymerization treatment. A lower percentage of residual monomer was established in samples that underwent hot polymerization.

  9. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, no existing similarity learning method has been proposed to optimize the top precision measure. To fill this gap, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem whose objective function combines the losses of the relevant images ranked behind the top-ranked irrelevant image with the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. Experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when top precision is used as the performance measure.
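
    The objective described above (hinge losses for relevant images ranked behind the top-ranked irrelevant image, plus a squared Frobenius-norm regularizer) can be sketched in a few lines. The function and numbers below are illustrative assumptions, not the authors' QP formulation or data:

```python
# Sketch of the loss described above. All names and numbers are illustrative
# assumptions, not the authors' formulation or data.

def top_precision_loss(rel_scores, irrel_scores, params, lam=0.1, margin=1.0):
    """Hinge losses of relevant images ranked behind the top irrelevant image,
    plus a squared Frobenius-norm penalty on the similarity parameters."""
    top_irrel = max(irrel_scores)                 # top-ranked irrelevant image
    hinge = sum(max(0.0, margin - (s - top_irrel)) for s in rel_scores)
    frob2 = sum(w * w for row in params for w in row)
    return hinge + lam * frob2

# Toy example: one of the two relevant images scores below the top irrelevant.
loss = top_precision_loss(rel_scores=[2.0, 0.5],
                          irrel_scores=[1.0, 0.2],
                          params=[[1.0, 0.0], [0.0, 1.0]])
```

Minimizing this objective over the similarity parameters (subject to the hinge constraints) is what the paper casts as a quadratic program.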

  10. Color Histogram and DBC Co-Occurrence Matrix for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    K. Prasanthi Jasmine

    2014-12-01

    This paper presents the integration of a color histogram and a DBC co-occurrence matrix for content-based image retrieval. The existing DBC collects the directional edges which are calculated by applying first-order derivatives in the 0º, 45º, 90º and 135º directions. The feature vector length of DBC for a particular direction is 512, which is too long for efficient image retrieval. To avoid this problem, we collect the directional edges by excluding the center pixel and further apply the rotation-invariant property. We then calculate the co-occurrence matrix to form the feature vector. Finally, the HSV color histogram and the DBC co-occurrence matrix are integrated to form the feature database. The retrieval results of the proposed method have been tested by conducting three experiments on the Brodatz and MIT VisTex texture databases and the Corel-1000 natural database. The results show a significant improvement in terms of the evaluation measures as compared to LBP, DBC and other transform-domain features.
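
    As a rough illustration of the two ingredients mentioned (directional first-order derivative codes and a co-occurrence matrix), here is a toy sketch; the paper's exact DBC definition differs in detail, and the image, offsets and sizes below are assumptions:

```python
# Toy sketch of directional-edge codes plus a co-occurrence matrix. The exact
# DBC definition differs in detail; image, offsets and sizes are assumptions.

def directional_code(img, y, x):
    """4-bit code: one derivative-sign bit per direction (0, 45, 90, 135 deg),
    comparing each directional neighbour against the centre pixel."""
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= img[y][x]:
            code |= 1 << bit
    return code

def cooccurrence(codes, levels=16):
    """Count horizontally adjacent code pairs into a levels x levels matrix."""
    mat = [[0] * levels for _ in range(levels)]
    for row in codes:
        for a, b in zip(row, row[1:]):
            mat[a][b] += 1
    return mat

img = [[5, 3, 8, 2],
       [2, 7, 1, 6],
       [6, 4, 9, 3]]
codes = [[directional_code(img, 1, x) for x in (1, 2)]]  # interior pixels only
mat = cooccurrence(codes)
```

In the paper this co-occurrence feature is concatenated with an HSV color histogram to form the final feature vector.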

  11. Content-Based Image Retrieval using Local Features Descriptors and Bag-of-Visual Words

    Directory of Open Access Journals (Sweden)

    Mohammed Alkhawlani

    2015-09-01

    Image retrieval is still an active research topic in the computer vision field. Several techniques exist to retrieve visual data from large databases. Bag-of-Visual-Words (BoVW) is a visual feature descriptor that can be used successfully in Content-Based Image Retrieval (CBIR) applications. In this paper, we present an image retrieval system that uses local feature descriptors and the BoVW model to retrieve similar images efficiently and accurately from standard databases. The proposed system uses the SIFT and SURF techniques as local descriptors to produce image signatures that are invariant to rotation and scale. It also uses K-Means as a clustering algorithm to build a visual vocabulary from the feature descriptors obtained by the local descriptor techniques. To efficiently retrieve more images relevant to the query, an SVM algorithm is used. The performance of the proposed system is evaluated by calculating both precision and recall. The experimental results reveal that the system performs well on two different standard datasets.
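
    The BoVW representation described above can be sketched as follows: each local descriptor is assigned to its nearest visual word, and the image is represented by the histogram of word counts. The vocabulary and descriptors below are toy 2-D values; a real system would use K-Means-learned centroids over SIFT/SURF descriptors:

```python
# Sketch of the BoVW assignment step; vocabulary and descriptors are toy
# 2-D values standing in for K-Means centroids over SIFT/SURF descriptors.

def nearest_word(desc, vocab):
    """Index of the closest visual word by squared Euclidean distance."""
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocab)), key=lambda i: d2(desc, vocab[i]))

def bovw_histogram(descriptors, vocab):
    """Histogram of visual-word counts representing one image."""
    hist = [0] * len(vocab)
    for d in descriptors:
        hist[nearest_word(d, vocab)] += 1
    return hist

vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]             # 3 visual words
descs = [(0.1, 0.1), (0.9, 1.0), (0.2, 0.0), (0.1, 0.9)]  # local descriptors
hist = bovw_histogram(descs, vocab)
```

These histograms are what the paper then feeds to an SVM for relevance classification.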

  12. Recommendations for Structure and Content for a School-Based Adolescent Immunization Curriculum.

    Science.gov (United States)

    Salazar, Kelsey R; Seib, Katherine G; Underwood, Natasha L; Gargano, Lisa M; Sales, Jessica M; Morfaw, Christopher; Murray, Dennis; Diclemente, Ralph J; Hughes, James M

    2016-07-01

    Despite high utilization of childhood vaccinations, adolescent immunization coverage rates lag behind recommended coverage levels. The four vaccines recommended for adolescents ages 11 to 18 years are tetanus, diphtheria, and pertussis vaccine; human papillomavirus vaccine; meningococcal conjugate vaccine; and an annual influenza vaccine. The Healthy People 2020 goal is 80% coverage for each recommended immunization, but coverage rates in Georgia among adolescents fall below those goals for all but the tetanus, diphtheria, and pertussis vaccine. We developed a multicomponent intervention that included a school-based, teacher-delivered educational curriculum to increase adolescent vaccination coverage rates in Richmond County, Georgia. We facilitated focus group discussions with middle- and high school science teachers who delivered the immunization curriculum in two consecutive school years. The objective of the focus group was to understand teachers' perspectives about the curriculum impact and to synthesize recommendations for optimal dissemination of the curriculum content, structure, and packaging. Teachers provided recommendations for curriculum fit within existing classes, timing of delivery, and dosage of delivery and recommended creating a flexible tool kit, such as a downloadable online package. Teachers also recommended increasing emphasis on disease transmission and symptoms to keep students engaged. These findings can be applied to the development of an online, cost-effective tool kit geared toward teaching adolescents about the immune system and adolescent vaccinations. © 2016 Society for Public Health Education.

  13. A framing theory-based content analysis of a Turkish newspaper's coverage of nanotechnology

    Science.gov (United States)

    Şenocak, Erdal

    2017-07-01

    This study examines how nanotechnology is covered in Turkish print media. As an initial part of this objective, a total of 76 articles derived from a widespread national newspaper were analyzed based on framing theory. These articles were analyzed using both quantitative and qualitative traditions of content analysis, with the quantitative method as the primary form of investigation. The analyses showed that the first news about nanotechnology appeared in 1991 and the frequency of articles increased in the subsequent years, but the number of articles decreased after a while. The findings demonstrated a remarkably positive tone in the articles; there were only a few articles with negative tones, and these were published in the first years of nanotechnology news. It was further found that the articles were mostly concerned with implementations of nanotechnology, such as research and education centers, medicine, and electronics. The study also investigated the presentation style of nanotechnology news, in other words, how the articles were framed. The results showed that the articles were mostly framed around scientific research or discoveries and future expectations.

  14. DESIGN OF PARAMETER EXTRACTOR IN LOW POWER PRECOMPUTATION BASED CONTENT ADDRESSABLE MEMORY

    Directory of Open Access Journals (Sweden)

    Saroja Pasumarti

    2011-07-01

    Content-addressable memory (CAM) is frequently used in applications such as lookup tables, databases, associative computing, and networking that require high-speed searches, due to its ability to improve application performance by using parallel comparison to reduce search time. Although the use of parallel comparison results in reduced search time, it also significantly increases power consumption. In this paper, we propose a Block-XOR approach to improve the efficiency of low-power precomputation-based CAM (PB-CAM). Through mathematical analysis, we found that our approach can effectively reduce the number of comparison operations by 50% on average as compared with the ones-count approach for 15-bit-long inputs. In our experiment, we used Synopsys Nanosim to estimate the power consumption in TSMC 0.35-µm CMOS technology. Compared with the ones-count PB-CAM system, the experimental results show that our proposed approach can achieve on average 30% in power reduction and 32% in power-performance reduction. The major contribution of this paper is that it presents theoretical and practical proofs to verify that our proposed Block-XOR PB-CAM system can achieve greater power reduction without the need for a special CAM cell design. This implies that our approach is more flexible and adaptive for general designs.
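
    A minimal sketch of the precomputation idea, assuming a Block-XOR parameter extractor that XOR-reduces each block of the input word to one parameter bit (the paper's hardware circuit is not reproduced here): a search first compares the short parameter, and only matching entries undergo the full-word comparison.

```python
# Sketch of a Block-XOR parameter extractor: split the input word into
# fixed-size blocks and XOR-reduce each block to one parameter bit.
# Block size and the sample word are illustrative assumptions.

def block_xor_parameter(bits, block_size):
    """XOR-reduce each block of bits into a single parameter bit."""
    params = []
    for i in range(0, len(bits), block_size):
        p = 0
        for b in bits[i:i + block_size]:
            p ^= b
        params.append(p)
    return params

word = [1, 0, 1, 1, 0, 0, 1, 0, 1]      # 9-bit input word
param = block_xor_parameter(word, 3)    # 3 parameter bits precomputed per entry
```

Entries whose stored parameter differs from the query's parameter can be skipped without a full comparison, which is the source of the power savings.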

  15. A Content based CT Lung Image Retrieval by DCT Matrix and Feature Vector Technique

    Directory of Open Access Journals (Sweden)

    J.Bridget Nirmala

    2012-03-01

    Most image retrieval systems are still incapable of providing retrieval results with high retrieval accuracy and low computational complexity. This paper presents an image retrieval technique to retrieve similar and relevant computed tomography (CT) images of the lung from a large database of images. During the process of retrieval, a query image which contains the affected/abnormal region is given as input, and similar images containing affected/abnormal regions are retrieved from the database. The DCT Matrix (DCTM) is a commonly used feature representation in image retrieval. This paper describes a content-based image retrieval (CBIR) approach that represents each image in the database by a vector of feature values called the DCT vector matrix (8×8). The DCTM row and column feature vector values of a query image are compared with those in the existing database to cull out the most similar and relevant images. The experimental result shows that 97% of images can be retrieved correctly using this technique.
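
    The DCTM feature can be illustrated with a plain 2-D DCT-II over an 8×8 block; this is a generic DCT sketch under standard definitions, not the paper's exact feature pipeline:

```python
import math

# Generic 2-D DCT-II over an 8x8 block, as a sketch of how a DCTM could be
# formed; the flat test block and sizes are illustrative assumptions.

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (pure-Python, O(n^4))."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

block = [[10.0] * 8 for _ in range(8)]   # flat 8x8 block
dctm = dct2(block)                        # DC coefficient carries the mean
```

Rows and columns of such an 8×8 coefficient matrix can then serve as compact feature vectors for comparison between query and database images.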

  16. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods, for example SLAM, can easily be kidnapped by collision or disturbed by similar objects. Therefore, a keyframes global map establishing method for robot localization in multiple rooms and corridors is needed. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and deduced, and an image matching solution is presented, consisting of the extraction of overlap regions of keyframes and overlap-region rebuilding through subblock matching. To improve accuracy, ceiling-point detection and mismatching-subblock checking methods are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; they have large spatial separation and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames with our keyframes map. Even with many similar objects or backgrounds in the environment, or when the robot is kidnapped, robot localization is achieved with a position RMSE < 0.5 m.
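
    One hedged way to picture keyframe extraction with mutual overlap is a similarity threshold against the last keyframe: a frame becomes a new keyframe when its similarity to the previous keyframe drops below a threshold, so consecutive keyframes still overlap. This toy criterion and the 1-D "frames" are assumptions, not the authors' exact rule:

```python
# Toy keyframe selection: start a new keyframe when similarity to the last
# keyframe falls below a threshold. Criterion and data are assumptions.

def select_keyframes(frames, similarity, threshold=0.5):
    """Indices of frames kept as keyframes for the global map."""
    keyframes = [0]
    for i in range(1, len(frames)):
        if similarity(frames[keyframes[-1]], frames[i]) < threshold:
            keyframes.append(i)
    return keyframes

# 1-D stand-ins for frames: similarity decays with camera displacement.
frames = [0.0, 0.2, 0.5, 1.1, 1.3, 2.4]
sim = lambda a, b: max(0.0, 1.0 - abs(a - b))
keys = select_keyframes(frames, sim)
```

Only the selected keyframes are stored in the global map, which is why far fewer than all frames (under 5% in the paper's experiments) need to be kept.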

  17. OMNeT++-Based Cross-Layer Simulator for Content Transmission over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Massin R

    2010-01-01

    Flexibility and deployment simplicity are among the numerous advantages of wireless links when compared to standard wired communications. However, challenges remain high for wireless communications, in particular due to the inherent unreliability of the wireless medium and to the desired flexibility, which entails complex protocol procedures. In that context, simulation is an important tool to understand and design the protocols that manage wireless networks. This paper introduces a new simulation framework based on the OMNeT++ simulator whose goal is to enable the study of data and multimedia content transmission over hybrid wired/wireless ad hoc networks, as well as the design of innovative radio access schemes. To achieve this goal, the complete protocol stack from the application to the physical layer is simulated, and the real bits and bytes of the messages transferred on the radio channel are exchanged. To ensure that this framework is reusable and extensible in future studies and projects, a modular software and protocol architecture has been defined. Although still in progress, our work has already provided some valuable results concerning cross-layer HARQ/MAC protocol performance and video transmission over the wireless channel, as illustrated by example results.

  18. Proposed Technique for Content Based Sound Analysis and Ordering Using CASA and PAMIR Algorithm

    Directory of Open Access Journals (Sweden)

    Senthil Kumar T K

    2011-03-01

    Making machines hear as humans do is one of the emerging technologies of the current technical world. If we can make machines hear as humans, then we can use them to easily distinguish speech from music and background noises, to separate out speech and music for special treatment, to know from which direction sounds are coming, and to learn which noises are typical and which are noteworthy. These machines should be able to listen and react in real time, to take appropriate action on hearing noteworthy events, and to participate in ongoing activities, whether in factories, in musical performances, or in phone conversations. Existing auditory models for automatic speech recognition (ASR) have not been entirely successful, due to the highly evolved state of ASR system technologies, which are finely tuned to existing representations and to how the phonetic properties of speech are manifest in those representations. One particularly promising area of machine hearing research is computational auditory scene analysis (CASA). To the extent that we can analyze sound scenes into separate meaningful components, we gain an advantage in tasks involving processing of those components separately. Separating speech from interference is one such application. This paper deals with the retrieval of sound from text queries using the CASA and PAMIR algorithms with a pole-zero filter cascade peripheral model. It presents a content-based sound ranking system that uses biologically inspired auditory features and successfully learns a matching between acoustics and known text.

  19. The influence of carbon fibre content on the tribological properties of polyarylate-based composite materials

    Institute of Scientific and Technical Information of China (English)

    Burya, A.I.; Chigvintseva, O.P.

    2001-01-01

    The analysis of scientific-technical literature has shown the promise of applying high-temperature thermoplastic polymers - among which are complex aromatic polyesters - as constructive materials. Mixed polyarylates of the DV mark, based on diphenylolpropane and the mixture of iso- and terephthalic acid, are of the most valuable practical interest. To improve technological and exploitation properties, the authors of the article have suggested reinforcing the polymer with carbon fibre of the uglen-9 mark. Combination of the composition components was realized within a rotating electromagnetic field with the help of non-equiaxial ferromagnetic elements. The study of tribotechnical characteristics (coefficient of friction, intensity of linear wear, temperature in the contact zone "polymer specimen - counterbody") of the elaborated carbon plastics was made on a disc friction machine. Investigation of the influence of the exploitation regimes (specific pressure and slip velocity) on the mentioned properties of the initial polymer has shown that polyarylate can be recommended for work at values of the PV criterion not greater than 1.2 MPa·m/s. Hardening the exploitation regimes is accompanied by catastrophic wear of the plastic. Reinforcement of polyarylate with carbon fibre is noted to enable significant improvement of the tribotechnical characteristics of carbon plastics (a decreased coefficient of friction and increased wear resistance). The optimal content of carbon fibre in polyarylate is 25 mass %.

  20. Framing Autism: A Content Analysis of Five Major News Frames in U.S.-Based Newspapers.

    Science.gov (United States)

    Wendorf Muhamad, Jessica; Yang, Fan

    2017-03-01

    The portrayal of child autism-related news stories has become a serious issue in the United States, yet few studies address this from a media framing perspective. To fill this gap in the literature, this study examined the applicability of a media framing scale (Semetko & Valkenburg, 2000) for the deductive examination of autism-related news stories in U.S.-based newspapers. Under the theoretical framework of framing theory, a content analysis of news stories (N = 413) was conducted to investigate the presence of the five news frames using an established questionnaire. Differentiating between local and national news outlets, the following five news frames were measured: (a) attribution of responsibility, (b) human interest, (c) conflict, (d) morality, and (e) economic consequences. Findings revealed that news stories about autism most frequently fell within the human interest frame. Furthermore, the study shed light on how local and national newspapers might differ in framing autism-related news pieces and in their placement of the autism-related story within the newspaper (e.g., front page section, community section).

  1. An Extended Image Hashing Concept: Content-Based Fingerprinting Using FJLT

    Directory of Open Access Journals (Sweden)

    Xudong Lv

    2009-01-01

    Dimension reduction techniques, such as singular value decomposition (SVD) and nonnegative matrix factorization (NMF), have been successfully applied in image hashing by retaining the essential features of the original image matrix. However, a concern of great importance in image hashing is that no single solution is optimal and robust against all types of attacks. The contribution of this paper is threefold. First, we introduce a recently proposed dimension reduction technique, referred to as the Fast Johnson-Lindenstrauss Transform (FJLT), and propose the use of FJLT for image hashing. FJLT shares the low-distortion characteristics of a random projection, but requires much lower computational complexity. Secondly, we incorporate the Fourier-Mellin transform into FJLT hashing to improve its performance under rotation attacks. Thirdly, we propose a new concept, namely the content-based fingerprint, as an extension of image hashing by combining different hashes. Such a combined approach is capable of tackling all types of attacks and thus can yield a better overall performance in multimedia identification. To demonstrate the superior performance of the proposed schemes, receiver operating characteristic analysis over a large image database and a large class of distortions is performed and compared with state-of-the-art image hashing using NMF.
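
    The FJLT has the standard structure P·H·D: random sign flips (D), a fast Walsh-Hadamard transform (H), then a sparse random projection (P). A minimal sketch under that standard definition follows; the sizes, seed, and sparse-sampling scheme are illustrative assumptions, not the paper's exact parameters:

```python
import random

# Sketch of the FJLT structure (sparse projection x Hadamard x random signs).
# Sizes, the seed, and the sparse-sampling scheme are illustrative assumptions.

def hadamard(vec):
    """Fast Walsh-Hadamard transform; input length must be a power of two."""
    v = list(vec)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def fjlt(x, k, rng):
    """Random sign flip (D), Hadamard mix (H), then k sparse samples (P)."""
    signs = [rng.choice((-1, 1)) for _ in x]
    mixed = hadamard([s * xi for s, xi in zip(signs, x)])
    return [rng.choice((-1, 1)) * mixed[rng.randrange(len(mixed))]
            for _ in range(k)]

rng = random.Random(0)
hashed = fjlt([4.0, 1.0, 3.0, 2.0], k=2, rng=rng)  # 2-dim hash of 4-dim input
```

The Hadamard mixing spreads the input's energy across all coordinates, which is why a very sparse projection afterwards still preserves distances with high probability.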

  2. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.

  3. Predicting In Vivo Anti-Hepatofibrotic Drug Efficacy Based on In Vitro High-Content Analysis

    Science.gov (United States)

    Zheng, Baixue; Tan, Looling; Mo, Xuejun; Yu, Weimiao; Wang, Yan; Tucker-Kellogg, Lisa; Welsch, Roy E.; So, Peter T. C.; Yu, Hanry

    2011-01-01

    Background/Aims Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. Methods High-content analysis (HCA) was performed with 49 drugs on hepatic stellate cells (HSCs) LX-2 stained with 10 fibrotic markers. ∼0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A systematic literature search on the in vivo effects of all 49 drugs on hepatofibrotic rats yields 28 papers with histological scores. The in vivo and in vitro datasets were used to compute a single efficacy predictor (Epredict). Results We used in vivo data from one context (CCl4 rats with drug treatments) to optimize the computation of Epredict. This optimized relationship was independently validated using in vivo data from two different contexts (treatment of DMN rats and prevention of CCl4 induction). A linear in vitro-in vivo correlation was consistently observed in all the three contexts. We used Epredict values to cluster drugs according to efficacy; and found that high-efficacy drugs tended to target proliferation, apoptosis and contractility of HSCs. Conclusions The Epredict statistic, based on a prioritized combination of in vitro features, provides a better correlation between in vitro and in vivo drug response than any of the traditional in vitro markers considered. PMID:22073152
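
    The reported linear in vitro-in vivo correlation can be pictured with an ordinary least-squares line mapping a combined in vitro efficacy score to an in vivo outcome; the data points below are hypothetical, not the study's measurements:

```python
# Least-squares sketch of a linear in vitro-to-in vivo mapping; the data
# points are hypothetical, not taken from the study.

def fit_line(xs, ys):
    """Closed-form simple linear regression: slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

e_predict = [0.1, 0.2, 0.3, 0.4]   # hypothetical combined in vitro scores
in_vivo = [1.0, 2.0, 3.0, 4.0]     # hypothetical in vivo histology scores
slope, intercept = fit_line(e_predict, in_vivo)
```

Fitting such a relationship in one context (here, analogous to the CCl4 rats) and checking it on held-out contexts mirrors the validation strategy the abstract describes.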

  4. Role of oxygen content on micro-whiskers in mercury based superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Aslan Çataltepe, Ö., E-mail: ozdenaslan@yahoo.com [Faculty of Engineering, Gedik University, 34876 İstanbul (Turkey); Güven Özdemir, Z. [Department of Physics, Yıldız Technical University, 34210 İstanbul (Turkey); Onbaşlı, Ü. [Department of Physics, Marmara University, Rıdvanpaşa cad.3.sok., 85/12, 34730 İstanbul (Turkey)

    2015-01-01

    In this study, the formation of micro-whiskers in mercury-based cuprate superconductors, synthesized by the solid-state reaction technique, has been investigated for both oxygen- and argon-annealed samples. In this context, the superconducting samples were annealed in oxygen or argon gas at the same rate (pressure) of 150 bar. Moreover, the over-doped sample was subjected to oxygen annealing twice at the same oxygen rate. Micro-whiskers in the mercury cuprates grew spontaneously on the over-oxygen-annealed sample; no whisker growth was intended. The whiskers grown in the mercury-based cuprate superconductor have been investigated by Scanning Electron Microscopy, X-Ray Diffraction analysis and Superconducting Quantum Interference Device measurements for the first time. It was determined that the whiskers grown on the over-doped sample, which are of micrometer dimensions, were observed only on surfaces of the bulk sample. Moreover, the formation of whiskers was examined for the optimally oxygen- and argon-doped samples. Neither the optimally oxygen-doped nor the argon-doped samples at the same gas rate displayed any whisker structures. Hence, it was concluded that the type of gas, the density of gas flow and the bulk properties of the superconductor play a crucial role in the formation of the whisker structure in the system. Moreover, it was revealed that, in order to obtain rich whisker content, the oxygen process should be applied to the powder form of the superconductor so as to reach the over-oxygen doping rate for the superconducting system investigated. For further work, the magnetic and transport properties of the grown mercury-based whiskers are planned to be determined. - Highlights: • Effect of gas type on whiskers has been investigated for Hg-based superconductor. • Concentration of the gas has a crucial role in whisker formation. • Shape of the superconducting

  5. Life cycle assessment of microalgae-based aviation fuel: Influence of lipid content with specific productivity and nitrogen nutrient effects.

    Science.gov (United States)

    Guo, Fang; Zhao, Jing; A, Lusi; Yang, Xiaoyi

    2016-12-01

    The aim of this work is to compare the life cycle assessments of low-N and normal culture conditions for a balance between lipid content and specific productivity. To assess the potential contribution of lipid content to the life cycle assessment, this study established relationships between lipid content (nitrogen effect) and specific productivity based on three microalgae strains: Chlorella, Isochrysis and Nannochloropsis. For microalgae-based aviation fuel, the effects of lipid content on fossil fuel consumption and greenhouse gas (GHG) emissions are similar. The fossil fuel consumption (0.32-0.68 MJ·MJ⁻¹ MBAF) and GHG emissions (17.23-51.04 g CO2e·MJ⁻¹ MBAF) increase (59.70-192.22%) with increased lipid content. The total energy input decreases (2.13-3.08 MJ·MJ⁻¹ MBAF, 14.91-27.95%) with increased lipid content. The LCA indicators increased (0-47.10%) with decreased nitrogen recovery efficiency (75-50%). Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Knowledge-based method for determining the meaning of ambiguous biomedical terms using information content measures of similarity.

    Science.gov (United States)

    McInnes, Bridget T; Pedersen, Ted; Liu, Ying; Melton, Genevieve B; Pakhomov, Serguei V

    2011-01-01

    In this paper, we introduce a novel knowledge-based word sense disambiguation method that determines the sense of an ambiguous word in biomedical text using semantic similarity or relatedness measures. These measures quantify the degree of similarity between concepts in the Unified Medical Language System (UMLS). The objective of this work was to develop a method that can disambiguate terms in biomedical text by exploiting similarity information extracted from the UMLS and to evaluate the efficacy of information content-based semantic similarity measures, which augment path-based information with probabilities derived from biomedical corpora. We show that information content-based measures obtain a higher disambiguation accuracy than path-based measures because they weight the path based on where it exists in the taxonomy coupled with the probability of the concepts occurring in a corpus of text.
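
    An information-content measure in the spirit described (Resnik-style: similarity as the IC of the most informative common subsumer, where IC(c) = -log p(c)) can be sketched over a toy taxonomy; the concepts and probabilities below are invented for illustration and are not UMLS data:

```python
import math

# Resnik-style information-content similarity over a toy taxonomy; concepts,
# hierarchy and probabilities are invented for illustration only.

parents = {"cold": "disease", "flu": "disease", "disease": "entity"}
prob = {"cold": 0.1, "flu": 0.1, "disease": 0.3, "entity": 1.0}

def ancestors(c):
    """The concept itself plus all of its taxonomy ancestors."""
    out = {c}
    while c in parents:
        c = parents[c]
        out.add(c)
    return out

def ic(c):
    """Information content: -log of the concept's corpus probability."""
    return -math.log(prob[c])

def resnik(c1, c2):
    """IC of the most informative common subsumer of the two concepts."""
    return max(ic(c) for c in ancestors(c1) & ancestors(c2))

sim = resnik("cold", "flu")   # IC of "disease", their common subsumer
```

This shows why IC-based measures weight a shared path by where it sits in the taxonomy: a rare (low-probability) common subsumer yields a higher similarity than a generic one like "entity".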

  7. Pattern recognition of acorns from different Quercus species based on oil content and fatty acid profile

    Directory of Open Access Journals (Sweden)

    Abreu, José M.F.

    2003-12-01

    Full Text Available The aim of this study was (i to characterize different species of Quercus genus and (ii to discriminate among them on the basis of the content and fatty acid composition of the oil in their fruits and/or their morphological aspects via pattern recognition techniques (Principal Component Analysis, PCA, Cluster Analysis, CA, and Discriminant Analysis, DA. Quercus rotundifolia Lam., Quercus suber L. and Quercus pyrenaica Willd., grown in the same stand in the centre of Portugal, were investigated. When oil content and respective fatty acid composition were used to characterize samples, well-separated groups corresponding to each of the species were observed by PCA and confirmed by CA and DA. The ‘‘width’’ and ‘‘length’’ of acorns exhibited a low discriminant power. Acorns from Q. rotundifolia showed the highest average oil content followed by Q. suber and Q. pyrenaica acorns (9.1, 5.2 and 3.8%, respectively. Fatty acid profiles of Q. rotundifolia and Q. suber oils are similar to olive oil while the oil from Q. pyrenaica acorns is more unsaturated.El objetivo de este estudio fué (i la caracterización de diferentes especies del género Quercus y (ii la clasificación de las mismas en base al contenido y composición de ácidos grasos del aceite de sus frutos y/o en sus caracteres morfológicos, via técnicas de patrón de reconocimiento (Análisis de Componentes Principales, ACP, Análisis de Cluster, AC, y Análisis Discriminante, AD. Se han estudiado Quercus rotundifolia Lam., Quercus suber L. y Quercus pyrenaica Willd., pertenecientes a la misma zona del centro de Portugal. Al emplear el contenido de aceite y sus respectivas composiciones de ácidos grasos para caracterizar a las muestras, el ACP reveló grupos bien separados correspondientes a cada especie, los cuales, a su vez, se confirmarón con el AC y el AD. El ‘‘ancho’’ y ‘‘longitud’’ de las bellotas

  8. Cloud-based application for rice moisture content measurement using image processing technique and perceptron neural network

    Science.gov (United States)

    Cruz, Febus Reidj G.; Padilla, Dionis A.; Hortinela, Carlos C.; Bucog, Krissel C.; Sarto, Mildred C.; Sia, Nirlu Sebastian A.; Chung, Wen-Yaw

    2017-02-01

    This study concerns the determination of the moisture content of milled rice using an image processing technique and a perceptron neural network algorithm. The algorithm takes several inputs and produces an output, the moisture content of the milled rice. Several types of milled rice are used in this study, namely: Jasmine, Kokuyu, 5-Star, Ifugao, Malagkit, and NFA rice. The captured images are processed using MATLAB R2013a software. A USB dongle connected to the router provides the internet connection for online web access. The GizDuino IOT-644 is used for handling the temperature and humidity sensor, and for sending and receiving data between the computer and the cloud storage. The result is compared to the actual moisture content range using a moisture tester for milled rice. Based on the results, this study provided accurate data in determining the moisture content of the milled rice.
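
    The perceptron component can be pictured with the classic single-layer training rule: image and sensor features feed a weighted sum whose thresholded output is a moisture class. The features, labels, and hyperparameters below are assumptions for illustration, not the study's trained network:

```python
# Classic single-layer perceptron training rule as a sketch of the study's
# classifier; features, labels, and hyperparameters are illustrative.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Online perceptron updates: w += lr * (target - output) * x."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            for i in range(len(w)):
                w[i] += lr * (t - y) * x[i]
            b += lr * (t - y)
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy features (e.g. mean greyness, humidity reading); 1 = "high moisture".
xs = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
ys = [1, 1, 0, 0]
w, b = train_perceptron(xs, ys)
```

On linearly separable data like this toy set, the update rule converges to weights that classify all training samples correctly.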

  9. Sensory acceptability of slow fermented sausages based on fat content and ripening time.

    Science.gov (United States)

    Olivares, Alicia; Navarro, José Luis; Salvador, Ana; Flores, Mónica

    2010-10-01

    Low-fat dry fermented sausages were manufactured using controlled ripening conditions and a slow fermentation process. The effect of fat content and ripening time on chemical, colour and texture parameters and on sensory acceptability was studied. Fat reduction in slow fermented sausages produced a more pronounced pH decline during the first stage of the process, favoured by the higher water content of the low-fat sausages. Fat reduction did not affect the external appearance and produced no defects, but lower fat content resulted in lower sausage lightness. In low-fat sausages, fat reduction increased chewiness and, at longer ripening times, hardness. The sensory acceptability of the fermented sausages, analyzed by internal preference mapping, depended on the different preference patterns of consumers. One group of consumers preferred sausages with high and medium fat content and long ripening times. A second group preferred sausages with short ripening times regardless of fat content, except for appearance, for which these consumers preferred sausages with long ripening times. Finally, the limit for producing highly acceptable low-fat fermented sausages was a 16% fat content in the raw mixture, which is half the usual content of dry fermented sausages.

  10. Design and Development of an E-Learning Content Based on a Multimedia Game

    Directory of Open Access Journals (Sweden)

    Thongchai Kaewkiriya

    2013-11-01

    Full Text Available This paper aims to develop e-learning contents for a multimedia technology lesson with the purpose of assisting students in learning the subject. A multimedia game was used to make the lesson more interesting and at the same time to provide students with a real example of how multimedia works. The effectiveness of the developed contents was studied by comparing results of the same test from students taking conventional classroom lectures and those using the developed e-learning contents. We found that the latter performed better at the statistical significance level of 0.05.
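
    A group comparison at the 0.05 significance level of the kind reported here is typically done with a two-sample t-test. The sketch below computes Welch's t-statistic for two hypothetical score groups; the scores are invented, not the study's data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Invented test scores for the two groups
lecture = [62, 70, 65, 68, 64]
elearning = [75, 78, 72, 80, 74]
t = welch_t(elearning, lecture)
print(round(t, 2))  # compare |t| against the critical value at the 0.05 level
```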

  11. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    Science.gov (United States)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently
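
    At its core, content-based retrieval ranks database images by visual similarity to a query. A minimal sketch using grayscale histogram intersection, one of the simplest CBIR features, follows; the pixel arrays are toy stand-ins, not ImageCLEF data.

```python
# Rank "images" by normalised-histogram intersection with a query.

def histogram(pixels, bins=4, lo=0, hi=256):
    h = [0] * bins
    width = (hi - lo) / bins
    for p in pixels:
        h[min(int((p - lo) / width), bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def intersection(h1, h2):
    return sum(min(a, b) for a, b in zip(h1, h2))

database = {
    "img_a": [10, 20, 30, 200, 210, 220],
    "img_b": [100, 110, 120, 130, 140, 150],
    "img_c": [5, 15, 25, 35, 45, 230],
}
query = [12, 22, 32, 205, 215, 225]
qh = histogram(query)
ranked = sorted(database,
                key=lambda k: intersection(qh, histogram(database[k])),
                reverse=True)
print(ranked[0])  # → img_a (closest brightness distribution)
```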

  12. Facebook apps for smoking cessation: a review of content and adherence to evidence-based guidelines.

    Science.gov (United States)

    Jacobs, Megan A; Cobb, Caroline O; Abroms, Lorien; Graham, Amanda L

    2014-09-09

    Facebook is the most popular social network site, with over 1 billion users globally. There are millions of apps available within Facebook, many of which address health and health behavior change. Facebook may represent a promising channel to reach smokers with cessation interventions via apps. To date, there have been no published reports about Facebook apps for smoking cessation. The purpose of this study was to review the features and functionality of Facebook apps for smoking cessation and to determine the extent to which they adhere to evidence-based guidelines for tobacco dependence treatment. In August 2013, we searched Facebook and three top Internet search engines using smoking cessation keywords to identify relevant Facebook apps. Resultant apps were screened for eligibility (smoking cessation-related, English language, and functioning). Eligible apps were reviewed by 2 independent coders using a standardized coding scheme. Coding included content features (interactive, informational, and social) and adherence to an established 20-item index (possible score 0-40) derived from the US Public Health Service's Clinical Practice Guidelines for Treating Tobacco Use and Dependence. We screened 22 apps for eligibility; of these, 12 underwent full coding. Only 9 apps were available on Facebook. Facebook apps fell into three broad categories: public pledge to quit (n=3), quit-date-based calculator/tracker (n=4), or a multicomponent quit smoking program (n=2). All apps incorporated interactive, informational, and social features except for two quit-date-based calculator/tracker apps, which lacked an informational component. All apps allowed app-related posting within Facebook (ie, on one's own or another's Facebook profile), and four had a within-app "community" feature to enable app users to communicate with each other. Adherence index summary scores among Facebook apps were low overall (mean 15.1, SD 7.8, range 7-30), with multicomponent apps scoring the highest. There are few

  13. The Technology of Extracting Content Information from Web Page Based on DOM Tree

    Science.gov (United States)

    Yuan, Dingrong; Mo, Zhuoying; Xie, Bing; Xie, Yangcai

    Web pages contain huge amounts of information, which includes content information and other, useless information such as navigation, advertisements and flash animations. To reduce the toil of Web users, we established a technique to extract the content information from a web page. First, we analyzed the semantics of web documents with Google's V8 engine and parsed each web document into a DOM tree. We then traversed the DOM tree and pruned it in light of the characteristics of the Web page's markup language. Finally, we extracted the content information from the Web page. Analysis and experiments showed that the technique can simplify a web page, present the content information to web users, and supply clean data for application areas such as retrieval, KDD and DM from the web.
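
    The prune-then-extract idea can be sketched with a stdlib HTML parser. The list of "useless" subtree tags below is an assumption for illustration; the paper's own pruning rules derive from the page's markup characteristics and a V8-parsed DOM.

```python
from html.parser import HTMLParser

PRUNE_TAGS = {"nav", "script", "style", "aside", "footer"}  # assumed boilerplate

class ContentExtractor(HTMLParser):
    """Collects text that lies outside of pruned (boilerplate) subtrees."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0  # >0 while inside a pruned subtree
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in PRUNE_TAGS:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in PRUNE_TAGS and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

html = ("<html><body><nav>Home | About</nav>"
        "<p>Main article text.</p>"
        "<script>var x = 1;</script></body></html>")
parser = ContentExtractor()
parser.feed(html)
print(" ".join(parser.chunks))  # → Main article text.
```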

  14. Component Content Soft-Sensor of SVM Based on Ions Color Characteristics

    Directory of Open Access Journals (Sweden)

    Zhang Kunpeng

    2012-10-01

    Full Text Available Considering the different characteristic colors of ions in the P507-HCl Pr/Nd extraction separation system, the ion color image features H, S and I, which are closely related to the element component contents, are extracted using image processing methods. The Principal Component Analysis algorithm is employed to determine the statistical means of H, S and I that have the strongest correlation with element component content, and the auxiliary variables are obtained. With a support vector machine algorithm, a component content soft-sensor model of the Pr/Nd extraction process is established. Finally, simulations and tests verify the rationality and feasibility of the proposed method. The research results provide a theoretical foundation for online measurement of the component content in the Pr/Nd countercurrent extraction separation process.
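
    The H, S, I features come from a standard RGB-to-HSI color space conversion, which can be sketched as follows; the sample colour is hypothetical, not a measured Pr/Nd solution value.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalised RGB (0..1) to H (degrees), S and I (standard HSI formula)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:  # hue lies in the lower half-plane
        h = 360.0 - h
    return h, s, i

# Hypothetical mean colour of an extraction-solution image patch
h, s, i = rgb_to_hsi(0.2, 0.8, 0.3)
print(round(h, 1), round(s, 2), round(i, 2))
```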

  15. Key-frame retrieval from MPEG video based on linear approximation of content curve

    Science.gov (United States)

    Kim, Tae-hee; Lee, Woong-hee; Jeong, Dong-seok

    2003-01-01

    In general, video is too lengthy to browse in full, so many efforts have been made to browse video content quickly and effectively. Video summarization is one such technique: a video summary comprises a number of key-frames. We therefore propose a method to extract key-frames from video in the MPEG compressed domain. The proposed method extracts a simple 2D content curve reflecting the variation of the video content directly from the compressed MPEG video, approximates the curve by polygonal lines, and then extracts key-frames from the approximated lines effectively and rapidly. The proposed method also lets the user set the number of key-frames.
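
    Polygonal (piecewise-linear) approximation of a 1-D content curve can be done with a Ramer-Douglas-Peucker-style recursive split, keeping the curve vertices as candidate key-frame positions. This is a generic sketch under that assumption, not the authors' exact algorithm; the curve values are synthetic.

```python
def rdp_indices(curve, eps):
    """Indices kept by a Ramer-Douglas-Peucker pass over points (i, curve[i])."""
    def recurse(lo, hi, keep):
        x1, y1, x2, y2 = lo, curve[lo], hi, curve[hi]
        length = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        worst, dist = None, eps
        for i in range(lo + 1, hi):
            # perpendicular distance from (i, curve[i]) to the chord
            d = abs((y2 - y1) * i - (x2 - x1) * curve[i] + x2 * y1 - y2 * x1) / length
            if d > dist:
                worst, dist = i, d
        if worst is not None:
            recurse(lo, worst, keep)
            keep.add(worst)
            recurse(worst, hi, keep)

    keep = {0, len(curve) - 1}
    recurse(0, len(curve) - 1, keep)
    return sorted(keep)

# Synthetic content-variation curve: flat start, sharp rise, plateau, decline
content_curve = [0, 1, 2, 3, 10, 11, 12, 12, 12, 5, 4, 3]
print(rdp_indices(content_curve, eps=1.0))  # kept vertices ~ key-frame positions
```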

  16. UNDEUTSCH HYPOTHESIS AND CRITERIA BASED CONTENT ANALYSIS: A META-ANALYTIC REVIEW

    Directory of Open Access Journals (Sweden)

    Bárbara G. Amado

    2015-01-01

    Full Text Available The credibility of a testimony is a crucial component of judicial decision-making. Checklists of testimony credibility criteria are extensively used by forensic psychologists to assess the credibility of a testimony, and in many countries they are admitted as valid scientific evidence in a court of law. These checklists are based on the Undeutsch hypothesis, which asserts that statements derived from the memory of real-life experiences differ significantly in content and quality from fabricated or fictitious accounts. Nevertheless, there is considerable controversy regarding the degree to which these checklists comply with the legal standards for scientific evidence to be admitted in a court of law (e.g., the Daubert standards). In several countries, these checklists are not admitted as valid evidence in court, particularly in view of the inconsistent results reported in the scientific literature. Bearing in mind these issues, a meta-analysis was designed to test the Undeutsch hypothesis using the CBCA checklist of criteria to discern between memories of self-experienced real-life events and fabricated or fictitious accounts. As the original hypothesis was formulated for populations of children, only quantitative studies with samples of children were considered for this study. In line with the Undeutsch hypothesis, the results showed a significant positive effect size that is generalizable to the total CBCA score, δ = 0.79. Moreover, a significant positive effect size was observed in each of the credibility criteria. In conclusion, the results corroborated the validity of the Undeutsch hypothesis and the CBCA criteria for discriminating between the memory of real self-experienced events and false or invented accounts. The results are discussed in terms of the implications for forensic practice.
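
    The reported δ is a standardized mean difference. For illustration, a pooled-SD Cohen's d computed from hypothetical group statistics (the numbers are invented, chosen only to land near the reported effect size):

```python
def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference with a pooled standard deviation."""
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

# Hypothetical CBCA total scores: truthful vs fabricated accounts
d = cohens_d(m1=24.0, s1=5.0, n1=40, m2=20.0, s2=5.0, n2=40)
print(round(d, 2))  # → 0.8
```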

  17. Learning science content through socio-scientific issues-based instruction: a multi-level assessment study

    Science.gov (United States)

    Sadler, Troy D.; Romine, William L.; Sami Topçu, Mustafa

    2016-07-01

    Science educators have presented numerous conceptual and theoretical arguments in favor of teaching science through the exploration of socio-scientific issues (SSI). However, the empirical knowledge base regarding the extent to which SSI-based instruction supports student learning of science content is limited both in terms of the number of studies that have been conducted in this area and the quality of research. This research sought to answer two questions: (1) To what extent does SSI-based instruction support student learning of science content? and (2) How do assessments at variable distances from the curriculum reveal patterns of learning associated with SSI-based instruction? Sixty-nine secondary students taught by three teachers participated in the study. The teachers implemented an SSI intervention focused on the use of biotechnology for identifying and treating sexually transmitted diseases. We found that students demonstrated statistically and practically significant gains in content knowledge as measured by both proximal and distal assessments. These findings support the claim that SSI-based teaching can foster content learning and improved performance on high-stakes tests.

  18. Criteria-Based Content Analysis (CBCA reality criteria in adults: A meta-analytic review

    Directory of Open Access Journals (Sweden)

    Bárbara G. Amado

    2016-01-01

    Full Text Available Background/Objective: Criteria-Based Content Analysis (CBCA) is the most widely used tool worldwide for assessing the credibility of testimony. It was originally created for the testimony of child victims of sexual abuse, for which it enjoys scientific support. However, its use has been generalized to adult populations and other contexts without support in the literature for such a generalization. Method: We therefore carried out a meta-analytic review with the aim of testing the Undeutsch hypothesis and the CBCA reality criteria, to establish their potential for discriminating between memories of self-experienced and fabricated events in adults. Results: The results confirm the Undeutsch hypothesis and validate the CBCA as a technique. Nevertheless, the results are not generalizable, and the criteria self-deprecation and pardoning the perpetrator do not discriminate between the two types of memories. In addition, the technique can be complemented with additional reality criteria. The study of moderators showed that discriminative efficacy was significantly higher in field studies of sexual and gender violence cases. Conclusions: The usefulness, as well as the limitations and conditions for transferring these results to forensic practice, are discussed.

  19. Continuous-flow sorting of microalgae cells based on lipid content by high frequency dielectrophoresis

    Directory of Open Access Journals (Sweden)

    Doug Redelman

    2016-08-01

    Full Text Available This paper presents a continuous-flow cell screening device to isolate and separate microalgae cells (Chlamydomonas reinhardtii) based on lipid content using high frequency (50 MHz) dielectrophoresis. This device enables screening of microalgae through the balance between lateral DEP forces and hydrodynamic forces. A positive DEP force, along with an amplitude-modulated electric field exerted on the cells flowing over the planar interdigitated electrodes, manipulated low-lipid cell trajectories in a zigzag pattern. Theoretical modelling confirmed cell trajectories during sorting. Separation quantification and sensitivity analysis were conducted with time-course experiments, and collected samples were analysed by flow cytometry. Experimental testing with nitrogen-starved dw15-1 (high-lipid, HL) and pgd1 mutant (low-lipid, LL) strains was carried out at different time periods, and clear separation of the two populations was achieved. Experimental results demonstrated that three populations were produced during nitrogen starvation: HL, LL and low-chlorophyll (LC) populations. The presence of the LC population can affect the binary separation performance. The continuous-flow micro-separator can separate 74% of the HL and 75% of the LL cells out of the starting sample using a 50 MHz, 30 V peak-to-peak AC electric field at Day 6 of nitrogen starvation. Separation occurred between LL (low-lipid: 86.1%) at Outlet #1 and LC (88.8%) at Outlet #2 at Day 9 of nitrogen starvation. This device can be used for onsite monitoring; therefore, it has the potential to reduce biofuel production costs.
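
    Whether a cell experiences positive or negative DEP at a given frequency is governed by the real part of the Clausius-Mossotti factor. A sketch of that computation follows; the permittivity and conductivity values are illustrative only, not measured properties of C. reinhardtii.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(eps_p, sig_p, eps_m, sig_m, freq):
    """Clausius-Mossotti factor for a homogeneous sphere.
    eps_*: relative permittivities; sig_*: conductivities (S/m); freq in Hz."""
    w = 2 * math.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w  # complex permittivity of particle
    em = eps_m * EPS0 - 1j * sig_m / w  # complex permittivity of medium
    return (ep - em) / (ep + 2 * em)

# Illustrative values at the paper's 50 MHz operating frequency
k = cm_factor(eps_p=60, sig_p=0.5, eps_m=78, sig_m=0.01, freq=50e6)
print(round(k.real, 2))  # positive real part -> positive DEP (attraction)
```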

  20. Predicting in vivo anti-hepatofibrotic drug efficacy based on in vitro high-content analysis.

    Directory of Open Access Journals (Sweden)

    Baixue Zheng

    Full Text Available BACKGROUND/AIMS: Many anti-fibrotic drugs with high in vitro efficacies fail to produce significant effects in vivo. The aim of this work is to use a statistical approach to design a numerical predictor that correlates better with in vivo outcomes. METHODS: High-content analysis (HCA) was performed with 49 drugs on LX-2 hepatic stellate cells (HSCs) stained with 10 fibrotic markers. ~0.3 billion feature values from all cells in >150,000 images were quantified to reflect the drug effects. A systematic literature search on the in vivo effects of all 49 drugs on hepatofibrotic rats yielded 28 papers with histological scores. The in vivo and in vitro datasets were used to compute a single efficacy predictor (Epredict). RESULTS: We used in vivo data from one context (CCl4 rats with drug treatments) to optimize the computation of Epredict. This optimized relationship was independently validated using in vivo data from two different contexts (treatment of DMN rats and prevention of CCl4 induction). A linear in vitro-in vivo correlation was consistently observed in all three contexts. We used Epredict values to cluster drugs according to efficacy and found that high-efficacy drugs tended to target proliferation, apoptosis and contractility of HSCs. CONCLUSIONS: The Epredict statistic, based on a prioritized combination of in vitro features, provides a better correlation between in vitro and in vivo drug response than any of the traditional in vitro markers considered.
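
    The linear in vitro-in vivo agreement the authors report can be quantified with a Pearson correlation, sketched below with invented predictor and outcome values (not the study's data):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

e_predict = [0.1, 0.3, 0.5, 0.7, 0.9]      # hypothetical in vitro composite scores
in_vivo = [0.15, 0.35, 0.45, 0.75, 0.85]   # hypothetical in vivo efficacy scores
print(round(pearson_r(e_predict, in_vivo), 3))  # close to 1 -> strong linear fit
```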