Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.
This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
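As a concrete illustration of the kind of phrase-oriented, stop-word-delimited extraction summarized above, the following is a minimal Python sketch; the stop-word list and the degree/frequency scoring are illustrative assumptions, not the authors' exact configuration.

```python
import re
from collections import defaultdict

# Minimal sketch of stop-word-delimited keyword extraction (RAKE-style).
# The stop-word list and the degree/frequency scoring are illustrative assumptions.
STOP_WORDS = {"a", "an", "and", "the", "of", "for", "on", "in", "to", "we", "is", "are", "over"}

def candidate_phrases(text):
    """Split text on punctuation and stop words to obtain candidate phrases."""
    words = re.findall(r"[a-zA-Z0-9-]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    return phrases

def score_phrases(phrases):
    """Score each word by degree/frequency, then sum word scores per phrase."""
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # co-occurrence degree within the phrase
    return {p: sum(degree[w] / freq[w] for w in p) for p in set(phrases)}

if __name__ == "__main__":
    text = ("Compatibility of systems of linear constraints over the set "
            "of natural numbers is considered.")
    for phrase, score in sorted(score_phrases(candidate_phrases(text)).items(),
                                key=lambda kv: -kv[1])[:3]:
        print(" ".join(phrase), round(score, 2))
```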
This invention concerns an automatic liquid-liquid extraction system ensuring great reproducibility over a number of samples: stirring and decanting of the two liquid phases, then quantitative removal of the entire liquid phase present in the extraction vessel at the end of the operation. This type of system has many applications, particularly in carrying out analytical processes comprising a stage for the extraction, by means of an appropriate solvent, of certain components of the sample under analysis.
The automatic system described is suitable for multi-element separations by solvent extraction techniques with organic solvents heavier than water. The analysis is run automatically by a central control unit and includes steps such as pH regulation and reduction or oxidation. As an example, the separation of radioactive Hg2+, Cu2+, Mo6+, Cd2+, As5+, Sb5+, Fe3+, and Co3+ by means of diethyldithiocarbamate complexes is reported. (Auth.)
Full Text Available Automatic keyword extraction is the task of identifying a small set of words, key phrases, keywords, or key segments from a document that can describe its meaning. Keywords are useful tools as they give the shortest summary of the document. This paper concentrates on automatic keyword extraction for Punjabi language text. It includes various phases such as removing stop words, identification of Punjabi nouns and noun stemming, calculation of Term Frequency and Inverse Sentence Frequency (TF-ISF), selection of Punjabi keywords as nouns with a high TF-ISF score, and a title/headline feature for Punjabi text. The extracted keywords are very helpful in automatic indexing, text summarization, information retrieval, classification, clustering, topic detection and tracking, and web searches.
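To make the TF-ISF scoring mentioned above concrete, here is a minimal sketch; sentence splitting and tokenization are simplified, and the Punjabi-specific phases (stop-word removal, noun identification, stemming, the title/headline feature) are omitted.

```python
import math
import re
from collections import Counter

def tf_isf_scores(document):
    """Minimal sketch of Term Frequency x Inverse Sentence Frequency scoring.
    Sentence splitting and tokenization are simplified; language-specific steps
    (stop-word removal, noun identification, stemming) are omitted here."""
    sentences = [s for s in re.split(r"[.!?]+", document) if s.strip()]
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    tf = Counter(w for sent in tokenized for w in sent)
    n_sentences = len(tokenized)
    scores = {}
    for word, freq in tf.items():
        sent_freq = sum(1 for sent in tokenized if word in sent)
        scores[word] = freq * math.log(n_sentences / sent_freq)
    return scores

doc = "Keywords summarize a document. Keywords help indexing. Retrieval uses indexing."
print(sorted(tf_isf_scores(doc).items(), key=lambda kv: -kv[1])[:3])
```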
Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.
National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and the management of human resources have also been important. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
Luks, Wojciech; Tkachuk, Oksana; Buschnell, David
Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF project options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.
Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data that often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.
With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan
Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…
Haijian Chen; Dongmei Han; Yonghui Dai; Lina Zhao
In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course ...
A description and results of tests of a device for automatic extraction of ions from a high-frequency ion source are presented. The automatic regime is realized by introducing feedback with respect to the current of the source cathode and requires low sinusoidal modulation of the extracting voltage. By varying the power of the discharge, the beam current was controlled in the 90-1470 μA range with automatic preservation of the optimal conditions in the extraction system. The device was used on a 210-kV neutron generator.
Coronary arteriography is a clinically important diagnostic tool for the evaluation of coronary artery disease, and can provide detailed information for the quantitative assessment of coronary arteriograms. Several studies concerning the extraction of vessel edges have been published, and automatic extraction of vessel edges has been used in clinical diagnostic systems. However, these methods are not satisfactory, because manual modification by the operator is unavoidable in some cases. To reduce manual operation, accurate and automatic extraction of the coronary arteries is necessary. In this paper, we propose a new technique for automatic extraction of the coronary arteries using morphological operators. This method includes the following steps: contrast enhancement using a morphological top-hat operator, enhancement of thin vessels and reduction of impulse noise using a morphological erosion operator, elimination of obvious background pixels by semi-binary thresholding, and extraction of the coronary arteries by labeling and counting region areas. (author)
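The morphological pipeline outlined above can be sketched with standard SciPy operators as follows; the structuring-element sizes, the threshold and the minimum region area are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def extract_vessels(image, tophat_size=15, erosion_size=3, threshold=20, min_area=200):
    """Sketch of a morphological pipeline in the spirit of the abstract:
    top-hat contrast enhancement, erosion to suppress impulse noise,
    thresholding, then labeling and keeping large connected regions.
    All parameter values here are illustrative assumptions."""
    enhanced = ndimage.white_tophat(image, size=tophat_size)      # contrast enhancement
    smoothed = ndimage.grey_erosion(enhanced, size=erosion_size)  # thin-vessel/noise handling
    binary = smoothed > threshold                                 # drop obvious background
    labels, _ = ndimage.label(binary)                             # connected components
    areas = np.bincount(labels.ravel())
    good = np.nonzero(areas >= min_area)[0]
    good = good[good != 0]                                        # drop the background label
    return np.isin(labels, good)

if __name__ == "__main__":
    demo = np.zeros((128, 128))
    demo[60:68, :] = 100.0                 # a bright horizontal "vessel"
    demo += np.random.normal(0, 2, demo.shape)
    print(extract_vessels(demo).sum(), "pixels retained")
```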
The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)
LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)
A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
In the MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from the MR cine images. The ROI extractions had to be done by manual operations, so the examination efficiency and data analysis reproducibility were poor in diagnoses on site. In this paper, we outline an automatic extraction method for the left ventricular contours from MR cine images to improve cardiac function diagnosis. With this method, the operator needs to manually indicate only 3 points on the 1st image, and can then get all the contours from the total sequence of images automatically. (author)
Guo, Xichao; Wang, Cheng; Wen, Chenglu; Cheng, Ming
In order to perform highly precise calibration of a camera against a complex background, a novel design of planar composite target and the corresponding automatic extraction algorithm are presented. Unlike other commonly used target designs, the proposed target simultaneously encodes feature point coordinates and feature point serial numbers. Then, based on the original target, templates are prepared by three geometric transformations and used as the input of template matching based on shape context. Finally, parity check and region growing methods are used to extract the target as the final result. The experimental results show that the proposed method for automatic extraction and recognition of the proposed target is effective, accurate and reliable.
Liu, Peilei; Wang, Ting
Protein-protein interaction extraction is a key precondition for the construction of protein knowledge networks, and it is very important for research in biomedicine. This paper extracted directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction and features related to the interaction w...
An automatic method of extracting the left ventricle from SPECT myocardial perfusion data was introduced. This method was based on least squares analysis of the positions of all short-axis slice pixels from the half sphere-cylinder myocardial model, and used an iterative reconstruction technique to automatically cut off non-left-ventricular tissue from the perfusion images. Thereby, this technique provided the basis for further quantitative analysis.
Gemert, J.C. van; Schavemaker, J.G.M.; Bonenkamp, C.W.B.
Amateur soccer statistics have interesting applications such as providing insights to improve team performance, individual coaching, monitoring team progress and personal or team entertainment. Professional soccer statistics are extracted with labor-intensive, expensive manual effort, which is not rea
LIU Rujie; YUAN Baozong
This paper presents a fuzzy-based method to locate the position and the size of irises in a head-shoulder image with a plain background. This method is composed of two stages: the face region estimation stage and the eye feature extraction stage. In the first stage, a region growing method is adopted to estimate the face region. In the second stage, the coarse eye area is first extracted based on the location of the nasion, and the deformable template algorithm is then applied to the eye area to determine the position and the size of the irises. Experimental results show the efficiency and robustness of this method.
Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula
In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.
Abdulgabbar M. Saif
Full Text Available Problem statement: The identification of collocations is a very important part of natural language processing applications that require some degree of semantic interpretation, such as machine translation, information retrieval and text summarization. Because of the complexities of Arabic, collocations undergo variations such as morphological, graphical and syntactic variation, which constitute the difficulty of identifying collocations. Approach: We used a hybrid method for extracting collocations from an Arabic corpus, based on linguistic information and association measures. Results: This method extracted bi-gram candidates of Arabic collocations from the corpus and evaluated the association measures by using the n-best evaluation method. We reported the precision values for each association measure in each n-best list. Conclusion: The experimental results showed that the log-likelihood ratio is the best association measure, achieving the highest precision.
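For reference, a minimal sketch of the log-likelihood ratio association measure for bigram candidates (in Dunning's standard formulation) is given below; the toy token stream and the absence of linguistic filtering are simplifications of the hybrid approach described above.

```python
import math
from collections import Counter

def _ll(k, n, p):
    """Binomial log-likelihood, guarding the degenerate p = 0 and p = 1 cases."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def log_likelihood_ratio(c12, c1, c2, n):
    """Dunning-style log-likelihood ratio for a bigram (w1, w2):
    c12 = count of the bigram, c1/c2 = counts of the words, n = number of bigrams."""
    p = c2 / n
    p1 = c12 / c1
    p2 = (c2 - c12) / (n - c1)
    return 2 * (_ll(c12, c1, p1) + _ll(c2 - c12, n - c1, p2)
                - _ll(c12, c1, p) - _ll(c2 - c12, n - c1, p))

def rank_bigrams(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = sum(bigrams.values())
    return sorted(((w1, w2, log_likelihood_ratio(c, unigrams[w1], unigrams[w2], n))
                   for (w1, w2), c in bigrams.items()),
                  key=lambda t: -t[2])

tokens = "machine translation needs collocations and machine translation needs corpora".split()
for w1, w2, score in rank_bigrams(tokens)[:3]:
    print(w1, w2, round(score, 2))
```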
Considering worldwide increasing and devastating flood events, the issue of flood defence and prediction becomes more and more important. Conventional methods for the observation of water levels, for instance gauging stations, provide reliable information. However, they are rather expensive to purchase, install and maintain and are hence mostly limited to monitoring large streams only. Thus, small rivers with noticeably increasing flood hazard risks are often neglected. State-of-the-art smartphones with powerful camera systems may act as affordable, mobile measuring instruments. Reliable and effective image processing methods may allow the use of smartphone-taken images for mobile shoreline detection and thus for water level monitoring. The paper focuses on automatic methods for the determination of waterlines by spatio-temporal texture measures. Besides the considerable challenge of dealing with a wide range of smartphone cameras providing different hardware components, resolution, image quality and programming interfaces, there are several limits in mobile device processing power. For test purposes, an urban river in Dresden, Saxony was observed. The results show the potential of deriving the waterline with subpixel accuracy by a column-by-column four-parameter logistic regression and polynomial spline modelling. After a transformation into object space via suitable landmarks (which is not addressed in this paper), this corresponds to an accuracy in the order of a few centimetres when processing mobile device images taken from small rivers at typical distances.
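The column-by-column four-parameter logistic regression mentioned above can be sketched as follows; using a plain intensity profile instead of spatio-temporal texture measures, and the initial parameter guesses, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(y, lower, upper, slope, inflection):
    """Four-parameter logistic curve over image row index y."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (y - inflection)))

def waterline_row(column_profile):
    """Sketch of column-wise subpixel waterline estimation: fit a four-parameter
    logistic function to a per-column profile and return the inflection row.
    Initial guesses and the use of plain intensity (instead of spatio-temporal
    texture measures) are illustrative assumptions."""
    y = np.arange(len(column_profile), dtype=float)
    p0 = [column_profile.min(), column_profile.max(), 1.0, len(column_profile) / 2.0]
    params, _ = curve_fit(logistic4, y, column_profile, p0=p0, maxfev=5000)
    return params[3]  # subpixel row of the transition (waterline candidate)

if __name__ == "__main__":
    rows = np.arange(200, dtype=float)
    synthetic = 10 + 80 / (1 + np.exp(-0.5 * (rows - 123.4)))   # transition near row 123.4
    synthetic += np.random.normal(0, 1.0, rows.shape)
    print("estimated waterline row:", round(waterline_row(synthetic), 2))
```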
Chuqing Cao; Ying Sun
Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data....
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES for the biomedical research domain by improving the ontology-guided information extraction
Full Text Available Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color feature with road GPS data to detect road centerline seed points. After global alignment of road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerline from remotely sensed imagery.
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of "C programming language" are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate. PMID:26448738
Full Text Available In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate.
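A minimal sketch of the TF-IDF/VSM scoring step described above is given below; Chinese word segmentation and POS tagging are omitted (whitespace tokens are used instead), and the similarity-based re-weighting is a simplified assumption rather than the exact AECKP formulation.

```python
# Sketch of TF-IDF scoring with a VSM similarity-based weight. Chinese word
# segmentation and POS tagging are omitted, and the re-weighting below is a
# simplified assumption, not the authors' exact AECKP algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

documents = [
    "pointer arithmetic and arrays in the C programming language",
    "functions and recursion in the C programming language",
    "arrays functions and structures",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(documents)            # document-term matrix
terms = vectorizer.get_feature_names_out()

# Weight each term by its best TF-IDF value, boosted by how central its
# documents are to the corpus (mean cosine similarity), then keep top terms.
doc_centrality = cosine_similarity(tfidf).mean(axis=1)          # one weight per document
term_scores = (tfidf.toarray() * doc_centrality[:, None]).max(axis=0)

top = np.argsort(term_scores)[::-1][:5]
for i in top:
    print(terms[i], round(float(term_scores[i]), 3))
```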
刘志坚; 李建军; 王义林; 李材元; 肖祥芷
With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts; classification and graph-based representation of the sheet-metal features are used to extract the features embodied in a sheet-metal part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship determination. Since the extracted features include abundant geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.
Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan
Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the significant data is segmented piecewise. Then, a self-adaptive spatial region growing method is used to extract power lines parallel with the rails. Finally, PCA (Principal Component Analysis) combined with an information entropy method is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. The least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.
Jemaa, Yousra Ben
We present in this paper a biometric system for face detection and recognition in color images. The face detection technique is based on skin color information and fuzzy classification. A new algorithm is proposed in order to automatically detect face features (eyes, mouth and nose) and extract their corresponding geometrical points. These fiducial points are described by sets of wavelet components which are used for recognition. To achieve face recognition, we use neural networks and we study their performance for different inputs. We compare the two types of features used for recognition: geometric distances and Gabor coefficients, which can be used either independently or jointly. This comparison shows that Gabor coefficients are more powerful than geometric distances. We show with experimental results how the high recognition ratio makes our system an effective tool for automatic face detection and recognition.
Kiranyaz, Serkan; Ferreira, Miguel; Gabbouj, Moncef
In this work, we focus on automatic extraction of object boundaries from Canny edge field for the purpose of content-based indexing and retrieval over image and video databases. A multiscale approach is adopted where each successive scale provides further simplification of the image by removing more details, such as texture and noise, while keeping major edges. At each stage of the simplification, edges are extracted from the image and gathered in a scale-map, over which a perceptual subsegment analysis is performed in order to extract true object boundaries. The analysis is mainly motivated by Gestalt laws and our experimental results suggest a promising performance for main objects extraction, even for images with crowded textural edges and objects with color, texture, and illumination variations. Finally, integrating the whole process as feature extraction module into MUVIS framework allows us to test the mutual performance of the proposed object extraction method and subsequent shape description in the context of multimedia indexing and retrieval. A promising retrieval performance is achieved, and especially in some particular examples, the experimental results show that the proposed method presents such a retrieval performance that cannot be achieved by using other features such as color or texture. PMID:17153949
The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context, the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system, since an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, and only with improved detail recognizability and sufficient contrast difference. (orig./MM)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
Berman Jules J
nomenclature. Results A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
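The doublet idea can be sketched as follows: adjacent word pairs (doublets) are collected from a reference nomenclature, and candidate terms are maximal word runs from abstract titles in which every consecutive pair is a known doublet. The tiny nomenclature and title below are illustrative only; the published implementation is in Perl.

```python
# Sketch of the doublet method: candidate terms are maximal word runs in which
# every consecutive word pair already occurs in the reference nomenclature.
def build_doublets(nomenclature_terms):
    doublets = set()
    for term in nomenclature_terms:
        words = term.lower().split()
        doublets.update(zip(words, words[1:]))
    return doublets

def candidate_terms(title, doublets, min_words=3):
    words = title.lower().split()
    candidates, run = [], []
    for i in range(len(words) - 1):
        if (words[i], words[i + 1]) in doublets:
            if not run:
                run = [words[i]]
            run.append(words[i + 1])
        else:
            if len(run) >= min_words:
                candidates.append(" ".join(run))
            run = []
    if len(run) >= min_words:
        candidates.append(" ".join(run))
    return candidates

nomenclature = ["invasive ductal carcinoma", "ductal carcinoma in situ"]
title = "A case of invasive ductal carcinoma in situ of the breast"
print(candidate_terms(title, build_doublets(nomenclature)))
```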
Martin Andrew CR
Full Text Available Abstract Background There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results Large scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and
Boyd, Joseph; Rajman, Martin
Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...
Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
Sui, Wei; Wang, Lingfeng; Fan, Bin; Xiao, Hongfei; Wu, Huaiyu; Pan, Chunhong
Urban building reconstruction is an important step for urban digitization and realistic visualization. In this paper, we propose a novel automatic method to recover urban building geometry from 3D point clouds. The proposed method is suitable for buildings composed of planar polygons and aligned with the gravity direction, which are quite common in the city. Our key observation is that the building shapes are usually piecewise constant along the gravity direction and determined by several dominant shapes. Based on this observation, we formulate building reconstruction as an energy minimization problem under the Markov Random Field (MRF) framework. Specifically, point clouds are first cut into a sequence of slices along the gravity direction. Then, floorplans are reconstructed by extracting boundaries of these slices, among which dominant floorplans are extracted and propagated to other floors via MRF. To guarantee correct propagation, a new distance measurement for floorplans is designed, which first encodes floorplans into strings and then calculates distances between their corresponding strings. Additionally, an image based editing method is also proposed to recover detailed window structures. Experimental results on both synthetic and real data sets have validated the effectiveness of our method. PMID:26661472
Awrangjeb, M.; Lu, G.; Fraser, C.
This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the
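The second wall-removal rule described above (grid projection, then discarding intermediate points in cells holding three or more points) can be sketched as follows; the 0.5 m cell size is an illustrative assumption.

```python
import numpy as np
from collections import defaultdict

def remove_wall_points(points, cell_size=0.5):
    """Sketch of the grid-based wall filter described above: project non-ground
    LIDAR points onto a dense horizontal grid; where three or more points fall
    into the same cell, drop the intermediate points and keep only the lowest
    and highest ones. The 0.5 m cell size is an illustrative assumption."""
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(idx)
    keep = []
    for indices in cells.values():
        if len(indices) < 3:
            keep.extend(indices)
        else:
            by_height = sorted(indices, key=lambda i: points[i][2])
            keep.extend([by_height[0], by_height[-1]])   # lowest and highest survive
    return np.asarray(sorted(keep))

points = np.array([[1.0, 1.0, 2.0], [1.1, 1.05, 5.0], [1.05, 1.1, 8.0],  # a wall column
                   [3.0, 3.0, 7.9]])                                     # an isolated roof point
print(remove_wall_points(points))
```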
Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra
Mangroves, the intertidal halophytic vegetation, are one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increased destruction and degradation in the current scenario, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation methods, digital classification approaches, or a combination of both (hybrid approaches) using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using varied algorithms. In the current study we tried to delineate mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this we made an attempt to use a segmentation method, which characterizes the mangrove vegetation based on tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches are validated using maps available for the selected region, and good accuracy was obtained with respect to their delineation. The main focus of this paper is the simplicity of the methods and the availability of the data on which these methods are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.
Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.
The author mainly describes the working condition of the automatic control system for high uranium concentration solvent extraction with a pulse sieve-plate column in a large-scale test. The use of automatic instruments and meters, the automatic control circuit, and the best feedback control point of the solvent extraction process with the pulse sieve-plate column are discussed in detail. The authors point out the success of this experiment on automation, and also present some questions concerning automatic control, instruments and meters that should be attended to in future production.
Full Text Available Abstract Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
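As a rough stand-in for the finite-state-machine recognizers of phase (2), the following sketch uses a regular expression to detect runs of IUPAC nucleotide codes; the minimum length and the handling of whitespace and 5'/3' decorations are illustrative assumptions, not the authors' rules.

```python
import re

# Simplified stand-in for the finite-state-machine recognizers of phase (2): a
# regular expression that detects runs of IUPAC nucleotide codes, optionally
# flanked by 5'/3' decorations. The 15-character minimum length and the
# whitespace handling are illustrative assumptions.
IUPAC = "ACGTURYSWKMBDHVN"
SEQ_PATTERN = re.compile(
    r"(?:5'\s*-?\s*)?([{c}][{c}\s-]{{13,}}[{c}])(?:\s*-?\s*3')?".format(c=IUPAC))

def extract_candidate_sequences(text):
    candidates = []
    for match in SEQ_PATTERN.finditer(text.upper()):
        cleaned = re.sub(r"[\s-]", "", match.group(1))
        if len(cleaned) >= 15:          # discard short accidental matches in prose
            candidates.append(cleaned)
    return candidates

paragraph = ("The forward primer 5'-ACT GGT CAA TTC GGA TCC-3' was used together "
             "with a probe targeting the 16S rRNA gene.")
print(extract_candidate_sequences(paragraph))
```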
Jahjah, Munzer; Ulivieri, Carlo
Archaeological applications need a methodological approach on a variable scale able to satisfy the intra-site (excavation) and the inter-site (survey, environmental research). The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in the visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI module SW, were used in order to compare the results. These techniques were
Artemova, Svetlana; Jaillet, Léonard; Redon, Stephane
The Universal Force Field (UFF) is a classical force field applicable to almost all atom types of the periodic table. Such flexibility makes this force field a potentially good candidate for simulations involving a large spectrum of systems and, indeed, UFF has been applied to various families of molecules. Unfortunately, initializing UFF, that is, performing molecular structure perception to determine which parameters should be used to compute the UFF energy and forces, appears to be a difficult problem. Although many perception methods exist, they mostly focus on organic molecules, and are thus not well-adapted to the diversity of systems potentially considered with UFF. In this article, we propose an automatic perception method for initializing UFF that includes the identification of the system's connectivity, the assignment of bond orders as well as UFF atom types. This perception scheme is proposed as a self-contained UFF implementation integrated in a new module for the SAMSON software platform for computational nanoscience (http://www.samson-connect.net). We validate both the automatic perception method and the UFF implementation on a series of benchmarks. PMID:26927616
Zhou, Ruohua; Mattavelli, Marco
The purpose of this thesis is to develop new methods for automatic transcription of melody and harmonic parts of real-life music signal. Music transcription is here defined as an act of analyzing a piece of music signal and writing down the parameter representations, which indicate the pitch, onset time and duration of each pitch, loudness and instrument applied in the analyzed music signal. The proposed algorithms and methods aim at resolving two key sub-problems in automatic music transcrip...
In the MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extractions had to be done by manual operations, because automation of the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The above-mentioned difficulty is due to the low contrast of the cardiac wall part and the disappearance of edges at the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the 1st image to extract all the contours from the total sequence of images. At first, candidate points of a contour are detected by edge detection. Then, selecting the best matched combination of candidate points by Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the 1st image by indicating both end points, and is automatically extracted for the rest of the images by utilizing the aortic valve motion characteristics throughout a cardiac cycle. (author)
Ibrahim Missaoui; Zied Lachiri
In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits the redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words which are extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that ...
Full Text Available In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits the redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words which are extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that the proposed feature extraction method outperforms the classic methods such as Perceptual Linear Prediction, Linear Predictive Coding, Linear Prediction Cepstral coefficients and Mel Frequency Cepstral Coefficients.
Al-Khalifa, Hend S.; Davis, Hugh C.
This paper reports on an evaluation of the keywords produced by the Yahoo API context-based term extractor compared to a folksonomy set for the same website. The evaluation is carried out in two ways: automatically, by measuring the percentage of overlap between the folksonomy set and the Yahoo keywords set; and subjectively, by asking a human indexer to rate the quality of the generated keywords from both systems. The result of the experiment will be considered as evidence for the rich semantics...
The project of the beam extraction system for the Cracow AIC-144 cyclotron is described. The problems of the increase of beam emittance and the change of the magnetic field in the cyclotron chamber are discussed. The expected extraction coefficient of the beam is about 0.7. (S.B.)
Mohammad Awrangjeb; Fraser, Clive S.
Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the b...
One key aspect of local fault diagnosis is how to effectively extract abrupt features from the vibration signals. This paper proposes a method to automatically extract abrupt information based on singular value decomposition and higher-order statistics. In order to observe the distribution law of singular values, a numerical analysis to simulate the noise, periodic signal, abrupt signal and singular value distribution is conducted. Based on higher-order statistics and spectrum analysis, a method to automatically choose the upper and lower borders of the singular value interval reflecting the abrupt information is built. And the selected singular values derived from this method are used to reconstruct abrupt signals. It is proven that the method is able to obtain accurate results by processing the rub-impact fault signal measured from the experiments. The analytical and experimental results indicate that the proposed method is feasible for automatically extracting abrupt information caused by faults like the rotor–stator rub-impact. (paper)
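The following sketch illustrates the general idea of combining SVD with higher-order statistics: the signal is embedded in a Hankel matrix, decomposed, and the singular components whose reconstructions are impulsive (high kurtosis) are kept. Using a fixed per-component kurtosis threshold is a simplification of the interval-border selection described above, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def hankel_matrix(signal, window):
    n = len(signal) - window + 1
    return np.array([signal[i:i + window] for i in range(n)])

def diagonal_average(component):
    """Reconstruct a 1-D signal from a rank-one component by anti-diagonal averaging."""
    rows, cols = component.shape
    flipped = np.fliplr(component)
    return np.array([np.mean(np.diag(flipped, cols - 1 - s))
                     for s in range(rows + cols - 1)])

def extract_abrupt_part(signal, window=50, kurtosis_threshold=3.0):
    """Sketch of the SVD / higher-order-statistics idea: embed the signal in a
    Hankel matrix, decompose it, and keep the singular components whose
    reconstructions are impulsive (high kurtosis)."""
    H = hankel_matrix(signal, window)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    reconstruction = np.zeros(len(signal))
    for k in range(len(s)):
        component = s[k] * np.outer(U[:, k], Vt[k])
        series = diagonal_average(component)
        if kurtosis(series) > kurtosis_threshold:   # impulsive -> likely abrupt/fault related
            reconstruction += series
    return reconstruction

if __name__ == "__main__":
    t = np.linspace(0, 1, 1000)
    signal = np.sin(2 * np.pi * 20 * t) + 0.2 * np.random.randn(t.size)
    signal[500:505] += 5.0                      # simulated rub-impact burst
    abrupt = extract_abrupt_part(signal)
    print("peak of extracted abrupt part near sample", int(np.argmax(np.abs(abrupt))))
```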
ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng
Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction of motion capture data, and it is a common approach to extract key-frames using quaternions. Here, one main difficulty is due to the fact that previous algorithms often need to manually set various parameters. In addition, it is problematic to predefine the appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method. Key-frames can be found according to quaternion distance. We propose a simple and efficient algorithm to extract key-frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion capture sequences with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the process of extraction from several minutes to a couple of seconds.
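A minimal sketch of quaternion-distance key-frame selection with an adaptive threshold is given below; deriving the threshold from the mean and standard deviation of successive-frame distances is an illustrative assumption, not the paper's exact adaptive rule.

```python
import numpy as np

def quaternion_distance(q1, q2):
    """Distance between unit quaternions, insensitive to the q/-q ambiguity."""
    return 1.0 - abs(float(np.dot(q1, q2)))

def extract_keyframes(rotations, sigma_factor=1.0):
    """Sketch of adaptive-threshold key-frame selection: the threshold is derived
    from the statistics of successive-frame quaternion distances (mean plus
    sigma_factor * std), which is an illustrative assumption. `rotations` is an
    (N, 4) array of unit quaternions, one per frame."""
    successive = np.array([quaternion_distance(rotations[i], rotations[i + 1])
                           for i in range(len(rotations) - 1)])
    threshold = successive.mean() + sigma_factor * successive.std()
    keyframes, last = [0], 0
    for i in range(1, len(rotations)):
        if quaternion_distance(rotations[last], rotations[i]) > threshold:
            keyframes.append(i)
            last = i
    return keyframes

def quat_about_z(angle):
    return np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])

# A slow drift with one sudden turn: the turn should be picked as a key-frame.
angles = np.concatenate([np.linspace(0, 0.2, 50), np.linspace(1.2, 1.4, 50)])
rotations = np.array([quat_about_z(a) for a in angles])
print(extract_keyframes(rotations))
```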
YUAN Hong-chun; CHEN Ying; SUN Yue-fu
The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
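The locate/extract/filter steps can be sketched with BeautifulSoup as follows; the simple filter (dropping ragged rows) and the sample HTML are illustrative assumptions, not the tool's actual algorithms.

```python
# Minimal sketch of locating, extracting and filtering tabular data from HTML.
# The filter rule and the sample HTML below are illustrative assumptions.
from bs4 import BeautifulSoup

def extract_tables(html):
    soup = BeautifulSoup(html, "html.parser")
    tables = []
    for table in soup.find_all("table"):
        rows = []
        for tr in table.find_all("tr"):
            cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
            if cells:
                rows.append(cells)
        if rows:
            width = len(rows[0])
            tables.append([r for r in rows if len(r) == width])  # filter ragged rows
    return tables

sample = """
<table>
  <tr><th>Product</th><th>Price (CNY/kg)</th></tr>
  <tr><td>Shrimp</td><td>86</td></tr>
  <tr><td>Crab</td><td>112</td></tr>
</table>
"""
print(extract_tables(sample))
```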
Corney, DPA; Clark, JY; Tang, HL; Wilkin, P
Herbarium specimens are a vital resource in botanical taxonomy. Many herbaria are undertaking large-scale digitization projects to improve access and to preserve delicate specimens, and in doing so are creating large sets of images. Leaf characters are important for describing taxa and distinguishing between them and they can be measured from herbarium specimens. Here, we demonstrate that herbarium images can be analysed using suitable software and that leaf characters can be extracted automa...
This paper presents an automatic extraction method for soft tissues from 3D MRI images of the head. A 3D region growing algorithm is used to extract soft tissues such as the cerebrum, cerebellum and brain stem. Four information sources are used to control the 3D region growing. A model of each soft tissue has been constructed in advance and provides a 3D region growing space. The head skin area, which is automatically extracted from the input image, provides an unsearchable area. Zero-crossing points are detected by applying a Laplacian operator and examining sign changes between neighborhoods; they are used as a control condition in the 3D region growing process. Gray levels of voxels are also directly used as a control condition to extract each tissue region. Experimental results on 19 samples show that the method is successful. (author)
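Constrained 3D region growing of the kind described above can be sketched as follows; the 6-connected neighbourhood, the gray-level bounds and the synthetic volume are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low, high, forbidden=None, edge_mask=None):
    """Sketch of constrained 3D region growing: grow from a seed voxel over
    6-connected neighbours whose gray level lies in [low, high], never entering
    the forbidden (e.g. skin) area and stopping at edge voxels (e.g. Laplacian
    zero-crossings). All masks and bounds here are illustrative assumptions."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                continue
            if grown[nz, ny, nx]:
                continue
            if forbidden is not None and forbidden[nz, ny, nx]:
                continue
            if edge_mask is not None and edge_mask[nz, ny, nx]:
                continue
            if low <= volume[nz, ny, nx] <= high:
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown

volume = np.full((20, 20, 20), 40.0)
volume[5:15, 5:15, 5:15] = 100.0                       # a bright "tissue" block
mask = region_grow_3d(volume, seed=(10, 10, 10), low=80, high=120)
print(mask.sum(), "voxels extracted")
```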
Kazuma, Akamine; Kimura, Akisato; Takagi, Shigeru
Automatic video segmentation plays an important role in a wide range of computer vision and image processing applications. Recently, various methods have been proposed for this purpose. The problem is that most of these methods are far from real-time processing even for low-resolution videos due to the complex procedures. To this end, we propose a new and quite fast method for automatic video segmentation with the help of 1) efficient optimization of Markov random fields, in time polynomial in the number of pixels, by introducing graph cuts, 2) automatic, computationally efficient but stable derivation of segmentation priors using visual saliency and a sequential update mechanism, and 3) an implementation strategy based on the principle of stream processing with graphics processing units (GPUs). Test results indicate that our method extracts appropriate regions from videos as precisely as, and much faster than, previous semi-automatic methods, even though no supervision is incorporated.
Fontecave Jallon, Julie; Berthommier, Frédéric
Despite the development of new imaging techniques, existing X-ray data remain an appropriate tool for studying speech production phenomena. However, to exploit these images, the shapes of the vocal tract articulators must first be extracted. This task, usually performed manually, is long and laborious. This paper describes a semi-automatic technique for facilitating the extraction of vocal tract contours from complete sequences of large existing cineradiographic databases in the context of continuous speech ...
Hong YU; Hatzivassiloglou, Vasileios; Friedman, Carol; Rzhetsky, Andrey; Wilbur, W. John
Genes and proteins are often associated with multiple names, and more names are added as new functional or structural information is discovered. Because authors often alternate between these synonyms, information retrieval and extraction benefit from identifying these synonymous names. We have developed a method to automatically extract synonymous gene and protein names from MEDLINE and journal articles. We first identified patterns authors use to list synonymous gene and protein names. We d...
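The kind of surface pattern referred to above ("X (also known as Y)", "X, also called Y") can be illustrated with a few regular expressions. The pattern list below is hypothetical and far smaller than whatever the authors identified from MEDLINE; it only shows the mechanics of pattern-based synonym pairing.

```python
import re

# Illustrative surface patterns of the kind described in the abstract;
# the actual patterns in the paper were identified from MEDLINE text.
PATTERNS = [
    re.compile(r"(?P<name>\w[\w-]*) \(also known as (?P<syn>[^)]+)\)"),
    re.compile(r"(?P<name>\w[\w-]*), also called (?P<syn>\w[\w-]*)"),
    re.compile(r"(?P<name>\w[\w-]*) \(formerly (?P<syn>[^)]+)\)"),
]

def extract_synonyms(sentence):
    pairs = []
    for pattern in PATTERNS:
        for m in pattern.finditer(sentence):
            pairs.append((m.group("name"), m.group("syn").strip()))
    return pairs

print(extract_synonyms(
    "The tumor suppressor TP53 (also known as p53) regulates apoptosis."))
# [('TP53', 'p53')]
```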
Jiang, Guangxiang; Gu, Lixu
This paper introduces a new refined centerline extraction algorithm, which is based on, and significantly improved from, distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
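A distance-mapping centerline of the sort described above can be sketched as follows: a Euclidean distance transform (via SciPy) turns the segmentation into a cost that is cheap far from the boundary, and Dijkstra's algorithm then finds a near-central path between two given endpoints. The BVC speed-up that is the paper's contribution is not reproduced here, and the assumption that the endpoints are known and connected within the mask is ours.

```python
import heapq
import numpy as np
from scipy.ndimage import distance_transform_edt

def centerline(mask, start, end):
    """mask: 2D/3D boolean segmentation; start/end: index tuples inside it
    (assumed connected).  Returns a list of points approximating the centerline."""
    dist = distance_transform_edt(mask)
    cost = 1.0 / (1.0 + dist)            # cheap to travel far from the boundary
    visited = np.zeros(mask.shape, dtype=bool)
    prev, best = {}, {start: 0.0}
    heap = [(0.0, start)]
    offsets = [off for off in np.ndindex(*(3,) * mask.ndim)
               if any(o != 1 for o in off)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if visited[node]:
            continue
        visited[node] = True
        for off in offsets:
            nb = tuple(n + o - 1 for n, o in zip(node, off))
            if all(0 <= c < s for c, s in zip(nb, mask.shape)) and mask[nb]:
                nd = d + cost[nb]
                if nd < best.get(nb, np.inf):
                    best[nb] = nd
                    prev[nb] = node
                    heapq.heappush(heap, (nd, nb))
    path, node = [], end
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```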
Jones, Steve; Paynter, Gordon W.
Discussion of finding relevant documents in electronic document collections focuses on an evaluation of the Kea automatic keyphrase extraction algorithm which was developed by members of the New Zealand Digital Library Project. Results are based on evaluations by human assessors of the quality and appropriateness of Kea keyphrases. (Author/LRW)
Dvořák, P.; Bartušek, Karel; Gescheidtová, E.
Cambridge: The Electromagnetics Academy, 2014, pp. 1885-1889. ISBN 978-1-934142-28-8. [PIERS 2014: Progress In Electromagnetics Research Symposium (35th), Guangzhou (CN), 25.08.2014-28.08.2014]. R&D Projects: GA ČR GAP102/12/1104. Institutional support: RVO:68081731. Keywords: brain tumor; MRI; automatic extraction. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering.
Mohammed G.H. Al Zamil
The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features are abstracted and enriched with formal descriptions for the purpose of generalizing the resulting rules. The rules then form a classifier that accepts Arabic text, analyzes it, and displays related concepts labeled with their designated relationships. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part-of-speech tagging algorithms has been reused. We performed extensive experiments to measure the effectiveness of the proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.
Diagnosis from CT images is increasing, but X-ray images are still often used because of scanning time and cost. It is difficult to extract tumor regions from X-ray images, and pathologists have to diagnose many images of tumors; demand is therefore increasing for computer-aided detection (CAD) systems to support pathologists. The images used in this work are dog images: human images have been widely studied, but animal images have received little attention. In this paper, the automatic extraction of tumor regions is studied. We used a normalized correlation operation whose template has a mountain-like shape, together with a Quoit filter, to detect regions that may contain tumors. We also calculated tumor edges so that tumors can be inspected easily, and our method detected several candidate tumor edges. As future work, the bone region should be extracted, and some fixed values, including the filter size, should be determined automatically. (author)
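A hedged sketch of the normalized-correlation step with a "mountain-shaped" template (here a 2D Gaussian) might look as follows. The template size, sigma and the 0.6 response threshold are illustrative choices of ours, and the Quoit-filter and edge-calculation steps are omitted.

```python
import numpy as np

def gaussian_template(size=15, sigma=3.0):
    # A 2D Gaussian "mountain-shaped" template.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def normalized_correlation(image, template):
    """Dense normalized cross-correlation map (valid region only)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum()) * t_norm
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

# Candidate tumor centres: responses above an illustrative threshold.
image = np.random.rand(64, 64)
ncc = normalized_correlation(image, gaussian_template())
candidates = np.argwhere(ncc > 0.6)
```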
Yang, Guanyu; Kitslaar, Pieter; Frenay, Michel; Broersen, Alexander; Boogers, Mark J; Bax, Jeroen J; Reiber, Johan H C; Dijkstra, Jouke
Coronary computed tomographic angiography (CCTA) is a non-invasive imaging modality for the visualization of the heart and coronary arteries. To fully exploit the potential of the CCTA datasets and apply it in clinical practice, an automated coronary artery extraction approach is needed. The purpose of this paper is to present and validate a fully automatic centerline extraction algorithm for coronary arteries in CCTA images. The algorithm is based on an improved version of Frangi's vesselness filter which removes unwanted step-edge responses at the boundaries of the cardiac chambers. Building upon this new vesselness filter, the coronary artery extraction pipeline extracts the centerlines of main branches as well as side-branches automatically. This algorithm was first evaluated with a standardized evaluation framework named Rotterdam Coronary Artery Algorithm Evaluation Framework used in the MICCAI Coronary Artery Tracking challenge 2008 (CAT08). It includes 128 reference centerlines which were manually delineated. The average overlap and accuracy measures of our method were 93.7% and 0.30 mm, respectively, which ranked at the 1st and 3rd place compared to five other automatic methods presented in the CAT08. Secondly, in 50 clinical datasets, a total of 100 reference centerlines were generated from lumen contours in the transversal planes which were manually corrected by an expert from the cardiology department. In this evaluation, the average overlap and accuracy were 96.1% and 0.33 mm, respectively. The entire processing time for one dataset is less than 2 min on a standard desktop computer. In conclusion, our newly developed automatic approach can extract coronary arteries in CCTA images with excellent performances in extraction ability and accuracy. PMID:21637981
Full Text Available Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper, an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.
Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik; Kim, Won Yong; Wiggers, Henrik; Frøkiær, Jørgen; Sørensen, Jens
TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was then calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured in the left ventricular outflow tract by cardiovascular magnetic resonance using phase-contrast velocity mapping within two weeks of PET imaging. Results ...
Full Text Available Automatic identification and extraction of bone contours from x-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated x-ray images. The automatic initialization to align the 3D model with the x-ray images is solved by an Estimation of Bayesian Network Algorithm that fits a simplified multiple-component geometrical model of the proximal femur to the x-ray data. Landmarks can be extracted from the geometrical model for the initialization of the 3D statistical model. The contour extraction is then accomplished by a joint registration and segmentation procedure: we iteratively update the extracted bone contours and an instanced 3D model to fit the x-ray images. Taking the projected silhouettes of the instanced 3D model on the registered x-ray images as templates, bone contours can be extracted by graphical model based Bayesian inference. The 3D model can then be updated by a non-rigid 2D/3D registration between the 3D statistical model and the extracted bone contours. Preliminary experiments on clinical data sets verified its validity.
Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid
A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying growing neural gas (GNG), a neural-network-based cluster-seeking algorithm. Using the modified GNG (MGNG), corresponding dominant points of contours extracted from two corresponding images are found. These contours are the borders of segmented anatomical regions from the brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for extracting dominant points of ordered curves). It is shown that the proposed algorithm has a lower distortion error, is able to extract landmarks from two corresponding curves simultaneously, and generates the best match according to five medical experts. PMID:12834162
Muhammad Zuhair Qadir; Atif Nisar Jilani; Hassam Ullah Sheikh
Android has recently become a popular software platform for mobile devices, which now offer almost the same functionality as personal computers. Malware has therefore become a big concern. As the number of new Android applications is expected to increase rapidly in the near future, there is a need for automatic malware detection that is quick and efficient. In this paper, we define a simple static analysis approach that first extracts the features of the Android application based on intents and categori...
Henß, Stefan; Mezini, Mira
Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.
Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær; Kero, Tanja; Orndahl, Lovisa Holm; Kim, Won Yong; Bjerner, Tomas; Bouchelouche, Kirsten; Wiggers, Henrik; Frøkiær, Jørgen; Sørensen, Jens
Background: The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods: 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically ...
Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun
Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the important organs and their boundaries in images with very low brightness contrast. Our fuzzy sigma binarization process is one answer to that problem. Another difficulty is compensating for the information loss that occurs during such image processing procedures; many morphologically motivated image processing algorithms are applied for that purpose. The proposed method is verified as successful in extracting DCFs and measuring their thicknesses in an experiment using two hundred 800 × 600 DICOM ultrasonography images, with a 98.5% extraction rate. Also, the DCF thickness automatically measured by this software differs by less than 0.3 cm for 89.8% of the extracted DCFs. PMID:26949411
Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L
Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...
Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim to make automatic feature extraction more reliable and robust and to construct natural 3D features from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.
In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes. (paper)
Arsalan A. Othman
Full Text Available This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulated surface of 32 km2. The HFZ implicates 2,900 landslides with an overall coverage of about 26 km2. We first analyzed cumulative landslide number-size distributions using the inventory map. We then proposed a very simple and robust algorithm for automatic landslide extraction using specific band ratios selected upon the spectral signatures of bare surfaces, as well as a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, whereas the landslides have high reflections in the green and red bands. We applied the slope threshold map to remove low slope areas, which have high reflectance in red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of automatically extracted landslides is very similar to the one based on visual mapping. The automatic extraction is therefore adapted for the quantitative analysis of landslides and thus can contribute to the assessment of hazards in similar regions.
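The thresholding logic described above (bright in the green and red bands, low NDVI, non-flat slope) can be sketched in a few lines of NumPy. All threshold values below are illustrative placeholders, not the values used in the study, and band names are assumed.

```python
import numpy as np

def landslide_mask(green, red, nir, slope,
                   brightness_thresh=0.25, ndvi_thresh=0.2, slope_thresh=10.0):
    """All inputs are 2D arrays of the same shape (reflectances in [0, 1],
    slope in degrees).  Threshold values here are purely illustrative."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    bright = (green > brightness_thresh) & (red > brightness_thresh)   # bare, bright surfaces
    bare = ndvi < ndvi_thresh                 # non-vegetated surfaces
    steep = slope > slope_thresh              # remove flat, bright areas
    return bright & bare & steep
```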
Rahimi, S.; Arefi, H.; Bahmanyar, R.
In recent years, the rapid increase in the demand for road information together with the availability of large volumes of high resolution Earth Observation (EO) images, have drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. Considering the proposed methods, the focus is usually to improve the road network detection, while the roads' precise delineation has been less attended to. In this paper, we propose a new unsupervised fully-automatic road extraction method, based on the integration of the high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene; and then precisely delineates them. Hough transform is then applied to the integrated information to extract straight lines; which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with a high accuracy.
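As an illustration of the Hough-transform step mentioned above for extracting straight lines, the following is a small NumPy-only accumulator. The integration with LiDAR and PCA, and the subsequent precise edge localization, are not reproduced, and the parameters are arbitrary.

```python
import numpy as np

def hough_lines(edges, n_theta=180, top_k=10):
    """edges: 2D boolean edge map.  Returns the top_k (rho, theta) pairs
    of the straight-line Hough accumulator."""
    h, w = edges.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        # One vote per orientation for every edge pixel.
        r = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[r, np.arange(n_theta)] += 1
    flat = np.argsort(acc.ravel())[::-1][:top_k]
    peaks = [(rhos[i // n_theta], thetas[i % n_theta]) for i in flat]
    return peaks
```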
Sebari, Imane; He, Dong-Chen
We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed based on the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge, and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of Sherbrooke city (Canada). An overall total extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.
For the simulation of surgical operations, the extraction of a selected region from MR images is useful. However, this segmentation requires a high level of skill and experience from the technicians. We have developed a unique automatic extraction algorithm for extracting three-dimensional brain parenchyma from MR head images, named the "three-dimensional gray scale clumsy painter method". In this method, a template having the shape of a pseudo-circle, the so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages compared with morphological filtering and the region growing method. Previously, this method was applied to binary images, but there were some problems in that the extraction results varied with the threshold level. We introduced gray-level information of the images to decide the threshold, depending upon the change of image density between the brain parenchyma and CSF. We decided the threshold level by the vector of a map of templates, and changed the map according to the change of image density. As a result, the over-extraction ratio was improved by 36%, and the under-extraction ratio was improved by 20%. (author)
Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian
Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extractions from aerial LiDAR data. First, segmenting height information generated from LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and the sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
Recent developments in medical imaging equipment have made it possible to acquire large amounts of image data and to perform detailed diagnosis. However, it is difficult for physicians to evaluate all of the image data obtained. To address this problem, computer-aided detection (CAD) and expert systems have been investigated. In these investigations, as the types of images used for diagnosis have expanded, the requirements for image processing have become more complex. We therefore propose a new method, which we call Automatic Construction of Tree-structural Image Transformation (3D-ACTIT), to perform various 3D image processing procedures automatically using instance-based learning. We have conducted research on diffusion-weighted image (DWI) data and its processing. In this report, we describe how 3D-ACTIT performs processing to extract only abnormal signal regions from 3D-DWI data. (author)
Instrumentation based on continuous segmented flow analysis is suggested for the control of uranium loading in the amine phase of solvent extraction processing of sulfate leach liquors. It can be installed with relatively little capital outlay, and operational costs are expected to be low. The uranium(VI) in up to 60 samples of extract (approximately 0.1 to 5 g l-1 U) per hour can be determined. Application of spectrophotometry to the analysis of various process streams is discussed, and it is concluded that it compares favourably in several important respects with the use of alternative techniques. (orig.)
Semenov, Sergey A; Reznik, Aleksandr M
A method for optimising the structure of new extractants using the desirability function has been developed. The desirability function was proposed earlier by Harrington (Ind Qual Control 21: 494-498, 1965) for the optimisation of processes with several response functions. The developed method of optimising new extractant structures has been used for the construction of phenolic type extractants (PTE) (a class of N-(2-hydroxy-5-nonylbenzil)-dialkylamines). It has been proposed to use the charge o...
Arup Sarkar, Ujjal Marjit, Utpal Biswas
Full Text Available DBpedia is one of the very well known live projects from the Semantic Web; it is like a mirror version of the Wikipedia site in the Semantic Web. Initially, it publishes the information collected from Wikipedia, but only the part that is relevant to the Semantic Web. Collecting information for the Semantic Web from Wikipedia is described as the extraction of structured data. DBpedia normally does this by using a specially designed framework called the DBpedia Information Extraction Framework. This extraction framework does its work through the evaluation of similar properties from the DBpedia Ontology and the Wikipedia template; this step is known as DBpedia mapping. At present, most of the mapping jobs are done completely manually. In this paper, a new framework is introduced that considers the issues related to template-to-ontology mapping. A semi-automatic mapping tool for the DBpedia project is proposed, with the capability of automatic suggestion generation for end users so that users can identify similar Ontology and template properties. The proposed framework is useful since, after selection of similar properties, the necessary code to maintain the mapping between Ontology and template is generated automatically.
Full Text Available Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging data. Using the ground height from a DEM (digital elevation model, the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information, such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and, so, offers a high success rate for building detection (about 90% correctness and completeness and roof plane extraction (about 80% correctness and completeness, when LIDAR point density is as low as four points/m2. Thus, the proposed method can be exploited in various applications.
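The first step above, splitting raw LiDAR returns into ground and non-ground points using ground heights from a DEM, might be sketched as follows. The nearest-cell DEM lookup, the 1 m height threshold, and the assumption that DEM rows increase with y are simplifications of ours, not the paper's specification.

```python
import numpy as np

def split_ground(points, dem, origin, cell_size, height_thresh=1.0):
    """points: (N, 3) array of x, y, z LiDAR returns.
    dem: 2D array of ground heights; origin: (x0, y0) of the DEM grid
    (row index assumed to increase with y).  Points within `height_thresh`
    metres of the DEM surface are labelled ground."""
    cols = ((points[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell_size).astype(int)
    rows = np.clip(rows, 0, dem.shape[0] - 1)
    cols = np.clip(cols, 0, dem.shape[1] - 1)
    ground_height = dem[rows, cols]           # nearest-cell DEM lookup
    is_ground = (points[:, 2] - ground_height) <= height_thresh
    return points[is_ground], points[~is_ground]
```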
In MR images, the contrast between tumors and surrounding soft tissues is not always clear, and it may be difficult to determine the tumor region. In this report, we propose a method for the automatic and objective extraction of tumors based on the correlations among multiple MR images. First, a map reflecting the correlations of three types of MR images (Gd-enhanced, T1-weighted, and T2-weighted images) is created by training of Self-Organizing Maps (SOM). Second, the SOM are grouped into a number of clusters determined in advance, and the original MR images are divided into clusters according to the clustered SOM. Finally, the tumor region in the clustered MR images is refined by reclassification, improving the accuracy of extraction. This method was applied to 10 cases in a clinical study, and in 8 of these cases, the tumor could be distinguished from other regions as an independent cluster. The proposed method is expected to be useful for the automatic extraction of tumors in MR images. (author)
Borowiecki, Karol; O'Hagan, John
The purpose of this paper is to demonstrate the potential for generating interesting aggregate data on certain aspects of the lives of thousands of composers, and indeed other creative groups, from large on-line dictionaries and to be able to do so relatively quickly. A purpose-built java application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music and of the ...
Li, Y.; Hu, X.; Guan, H.; Liu, P.
Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
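Step (2) above, local principal component analysis for centerline primitives, can be illustrated with a brute-force NumPy sketch: for each candidate road-centre point, the dominant eigenvector of the covariance of its neighbours gives the local line direction. The neighbourhood radius and the omission of the least-squares refinement and grouping steps are our simplifications.

```python
import numpy as np

def local_directions(points, radius=2.0):
    """points: (N, 2) array of candidate road-centre points.
    Returns a unit direction vector per point from local PCA."""
    directions = np.zeros_like(points, dtype=float)
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        centred = nbrs - nbrs.mean(axis=0)
        cov = centred.T @ centred / max(len(nbrs) - 1, 1)
        eigvals, eigvecs = np.linalg.eigh(cov)
        directions[i] = eigvecs[:, np.argmax(eigvals)]   # dominant axis
    return directions
```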
Full Text Available Abstract Background: This paper describes and evaluates a sentence selection engine that extracts a GeneRiF (Gene Reference into Functions) as defined in ENTREZ-Gene based on a MEDLINE record. Inputs for this task include both a gene and a pointer to a MEDLINE reference. In the suggested approach we merge two independent sentence extraction strategies. The first proposed strategy (LASt) uses argumentative features, inspired by discourse-analysis models. The second extraction scheme (GOEx) uses an automatic text categorizer to estimate the density of Gene Ontology categories in every sentence, thus providing a full ranking of all possible candidate GeneRiFs. A combination of the two approaches is proposed, which also aims at reducing the size of the selected segment by filtering out non-content-bearing rhetorical phrases. Results: Based on the TREC-2003 Genomics collection for GeneRiF identification, the LASt extraction strategy is already competitive (52.78%). When used in a combined approach, the extraction task clearly shows improvement, achieving a Dice score of over 57% (+10%). Conclusions: Argumentative representation levels and conceptual density estimation using Gene Ontology contents appear complementary for functional annotation in proteomics.
Harms, Hans; Hansson, Nils Henrik Stubkjær; Tolbod, Lars Poulsen;
Dynamic cardiac positron emission tomography (PET) is used to quantify molecular processes in vivo. However, measurements of left-ventricular (LV) mass and volumes require electrocardiogram (ECG)-gated PET data. The aim of this study was to explore the feasibility of measuring LV geometry using non-gated ... generated from non-gated dynamic data. Using software-based structure recognition, the LV wall was automatically segmented from K1 images to derive mLV and wall thickness (WT). End-systolic (ESV) and end-diastolic (EDV) volumes were calculated using blood pool images and used to obtain stroke volume (SV) and ...
Kumar, Nishant; De Beer, Jan; Vanthienen, Jan; Moens, Marie-Francine
We report an evaluation of the automatic named entity extraction feature of IR tools on Dutch, French, and English text. The aim is to analyze the competency of off-the-shelf information extraction tools in recognizing entity types including person, organization, location, vehicle, time, and currency from unstructured text. Within such an evaluation, one can compare the effectiveness of different approaches for identifying named entities.
Khandait, S P; Khandait, P D
In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of the supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance, with 100% accuracy for the training set and 95.26% accuracy for the test set.
Yan, Wai Yeung; Morsy, Salem; Shaker, Ahmed; Tulloch, Mark
Mobile LiDAR has recently been demonstrated as a viable technique for pole-like object detection and classification. Although a desirable accuracy (around 80%) has been reported in existing studies, the majority of them were conducted at street level with relatively flat ground, and very few addressed how to extract the entire pole structure from the ground or curb surface. This paper therefore attempts to fill the research gap by presenting a workflow for the automatic extraction of light poles and towers from a mobile LiDAR point cloud, with a particular focus on municipal highways. The data processing workflow includes (1) an automatic ground filtering mechanism to separate aboveground and ground features, (2) an unsupervised clustering algorithm to cluster the aboveground point cloud, (3) a set of decision rules to identify and classify potential light poles and towers, and (4) a least-squares circle fitting algorithm to fit the circular pole structure so as to remove the ground points. The workflow was tested with a set of mobile LiDAR data collected for a section of Highway 401 located in Toronto, Ontario, Canada. The results showed that the proposed method can achieve an over 91% detection rate for five types of light poles and towers along the study area.
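Step (4), the least-squares circle fit, is commonly done with the algebraic (Kåsa) formulation sketched below; whether this matches the authors' exact formulation is an assumption on our part, and the test data here are synthetic.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit.  xy: (N, 2) points of a
    pole cross-section.  Returns centre (cx, cy) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    # Circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

# Synthetic test: points sampled from a noisy 0.15 m circle centred at (2, 3).
theta = np.linspace(0, 2 * np.pi, 50)
pts = np.column_stack([2 + 0.15 * np.cos(theta), 3 + 0.15 * np.sin(theta)])
pts += np.random.normal(scale=0.005, size=pts.shape)
print(fit_circle(pts))
```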
Albers, Bastian; Kada, Martin; Wichmann, Andreas
Building outlines are needed for various applications like urban planning, 3D city modelling and updating cadaster. Their automatic reconstruction, e.g. from airborne laser scanning data, as regularized shapes is therefore of high relevance. Today's airborne laser scanning technology can produce dense 3D point clouds with high accuracy, which makes it an eligible data source to reconstruct 2D building outlines or even 3D building models. In this paper, we propose an automatic building outline extraction and regularization method that implements a trade-off between enforcing strict shape restriction and allowing flexible angles using an energy minimization approach. The proposed procedure can be summarized for each building as follows: (1) an initial building outline is created from a given set of building points with the alpha shape algorithm; (2) a Hough transform is used to determine the main directions of the building and to extract line segments which are oriented accordingly; (3) the alpha shape boundary points are then repositioned to both follow these segments, but also to respect their original location, favoring long line segments and certain angles. The energy function that guides this trade-off is evaluated with the Viterbi algorithm.
An automatic processing program system for the molecular replacement method, AUTMR, is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all of these cases. (orig.)
Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.
Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on a host of factors, including climate, soil, topography, cropping and land management practices, among others. Most models for soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for time periods of less than 24 hours are known back to 1924, and many studies are limited by this. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large amounts of rainfall strip charts has been implemented by means of computer software. A method has been developed that largely automates the labour-intensive extraction work, based on van Piggelen et al. (2011). The method consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta-information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added to improve on the previous work and to automate some auxiliary processes: readjust the bands properly, merge bands when ...
Kelly, Colin; Devereux, Barry; Korhonen, Anna
Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134
At the previous European TRIGA Users Conference in Vienna, a paper was given describing a new handling tool for irradiated samples at the L.E.N.A. plant. This tool was the first part of an automatic device for the management of samples to be irradiated in the TRIGA MARK II reactor and successively extracted and stored. So far, sample insertion into and extraction from the irradiation facilities available on the reactor top (central thimble, rotary specimen rack and channel F) has been carried out manually by reactor and health-physics operators using the "traditional" fishing pole provided by General Atomic, thus exposing reactor personnel to "unjustified" radiation doses. The present paper describes the design and operation of a new device, a "robot"-type machine which, remotely operated, takes care of sample insertion into the different irradiation facilities, sample extraction after irradiation, and connection to the storage pits already described. The extraction of irradiated samples does not require the presence of reactor personnel on the reactor top and, therefore, radiation doses are strongly reduced. All work from design to construction has been carried out by the personnel of the electronics group of the L.E.N.A. plant. (orig.)
Goncalves, G.; Duro, N.; Sousa, E.; Figueiredo, I.
Due to both natural and anthropogenic causes, coastlines keep changing their shape, position and extent dynamically and continuously over time. In this paper we propose an approach to derive a tide-coordinated shoreline from two extracted instantaneous shorelines corresponding to nearly low-tide and high-tide events. First, all the multispectral images are pansharpened to the 15-meter spatial resolution of the panchromatic images. Second, using the Modification of Normalized Difference Water Index (MNDWI) and the k-means clustering method, we extract the raster shoreline for each image acquisition time. Third, each raster shoreline is smoothed and vectorized using a penalized least-squares method. Fourth, a 2D constrained Delaunay triangulation is built from the two extracted instantaneous shorelines, with their respective heights interpolated from a tidal gauge station. Finally, the desired tide-coordinated shoreline is interpolated from the resulting triangular intertidal surface. The results show that an automatic tide-coordinated shoreline extraction method can be efficiently implemented using freely available remote sensing imagery (Landsat 8), open source software (QGIS and Orfeo Toolbox), and Python scripting for task automation and software integration.
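The water-index step above can be sketched as follows: MNDWI from the green and SWIR bands, followed by a tiny two-cluster 1D k-means that splits water from non-water pixels. The band names and the cluster rule are our assumptions; pansharpening, smoothing, vectorization and the Delaunay step are not reproduced.

```python
import numpy as np

def mndwi(green, swir):
    # Modification of Normalized Difference Water Index.
    return (green - swir) / (green + swir + 1e-9)

def two_means_mask(index, iters=20):
    """Simple 1D k-means (k=2) on the index values; the cluster with the
    higher mean MNDWI is taken as water."""
    values = index.ravel().astype(float)
    c = np.array([values.min(), values.max()])       # initial centres
    for _ in range(iters):
        labels = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = values[labels == k].mean()
    water_label = int(np.argmax(c))
    return (labels == water_label).reshape(index.shape)

# Usage sketch: water = two_means_mask(mndwi(green_band, swir_band))
```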
Full Text Available Abstract Background: Uncovering the cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through the topology of the networks of inter-protein interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having general biological significance, such a demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interaction datasets. Results: We developed a method for automatic extraction of protein functional annotation from scientific text based on Natural Language Processing (NLP) technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of the manual curation used in public Gene Ontology (GO) annotation. In the second part of our presentation, we report a large-scale investigation into the correspondence between communities in the literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology.
Rohollah Moosavi Tayebi
Full Text Available Coronary arterial tree extraction in angiograms is an essential component of every cardiac image processing system. Once physicians decide to examine the coronary arteries from X-ray angiograms, the extraction must be done precisely, quickly, automatically, and for the whole arterial tree, to help diagnosis or treatment during cardiac surgical operations. This application is very helpful for the surgeon in deciding the target vessels prior to coronary artery bypass graft surgery. Several techniques and algorithms have been proposed for extracting coronary arteries in angiograms. However, most of them suffer from disadvantages such as time complexity, low accuracy, extracting only parts of the main arteries instead of the full coronary arterial tree, requiring manual segmentation, the appearance of artifacts, and so forth. This study presents a new method for extracting the whole coronary arterial tree in angiography images using the Starlet wavelet transform. To this end, we first remove noise from the raw angiograms and then sharpen the coronary arteries. The coronary arterial tree is then extracted by applying a modified Starlet wavelet transform, and afterwards the residual noise and artifacts are removed. For evaluation, we measured the proposed method's performance on a data set created from 4932 Left Coronary Artery (LCA) and Right Coronary Artery (RCA) angiograms and compared it with some state-of-the-art approaches. The proposed method shows a much higher accuracy of 96% for LCA and 97% for RCA, higher sensitivity of 86% for LCA and 89% for RCA, higher specificity of 98% for LCA and 99% for RCA, and higher precision of 87% for LCA and 93% for RCA angiograms.
Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.
Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, which is needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation and road classes.
Yu, Kun; Ji, Guangrong; Zheng, Haiyong
Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border-correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multicell objects. In this process, similarity detection is introduced to eliminate the pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model, and executes the gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.
Thaer M. Dieb
Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and a list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as correct identification, i.e., loose agreement (in many cases, appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with the results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, the recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for ...
H.G. Sui; Chen, G.; Hua, L.
Automatic water-body extraction from remote sensing images is a challenging problem. Using GIS data to update and extract water bodies is an old but active topic. However, automatic registration and change detection between the two data sets often present difficulties. In this paper, a novel automatic water-body extraction method is proposed. The core idea is to integrate image segmentation, image registration and change detection with GIS data into a single processing chain. A new iterative segmentat...
DEHiBA (N,N-di-(ethyl-2-hexyl)isobutyramide), a monoamide, was chosen as a selective extractant for the recovery of uranium in the first cycle of the GANEX process, which aims to realize the grouped extraction of actinides in the second step of the process. The aim of this work is an improved description of monoamide organic solutions in alkane diluent after extraction of solutes: water, nitric acid and uranyl nitrate. A parametric study was undertaken to characterize species at the molecular scale (by IR spectroscopy, UV-visible spectroscopy, time-resolved laser-induced fluorescence spectroscopy, and electrospray ionisation mass spectrometry) and at the supramolecular scale (by vapor pressure osmometry and small-angle X-ray scattering coupled to molecular dynamics simulations). Extraction isotherms were modelled taking into account the molecular and supramolecular speciation. This work showed that the organization of the organic solution depends on the amide concentration and on the nature and concentration of the extracted solute. Three regimes can be distinguished. 1/ For extractant concentrations of less than 0.5 mol/L, monomers are the predominant species. 2/ For extractant concentrations between 0.5 and 1 mol/L, small aggregates are formed containing 2 to 4 molecules of monoamide. 3/ For more concentrated solutions (greater than 1 mol/L), slightly larger species can be formed after water or nitric acid extraction. Concerning uranyl nitrate extraction, an important and strong organization of the organic phase is observed, which no longer allows the formation of well-defined spherical aggregates. At the molecular scale, the complexes are not sensitive to the organization of the solution: the same species are observed regardless of the solute and extractant concentrations in the organic phase. (author)
Lingua, Andrea; Marenchino, Davide; Nex, Francesco
In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336
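As an illustration of the kind of tie-point extraction discussed above, the following minimal Python sketch detects SIFT keypoints in two overlapping frames and keeps the matches that pass Lowe's ratio test. It uses OpenCV's stock SIFT implementation, not the auto-adaptive A(2) SIFT variant developed by the authors; the file names and ratio threshold are placeholders.

```python
import cv2

# Load two overlapping aerial frames (placeholder file names).
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                     # stock SIFT detector/descriptor
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep reliable tie points.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
              for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points")
```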
Brazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [Los Alamos National Laboratory]; Soille, Pierre [EC - JRC]
This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images - in a robust way. For that purpose, we improve a generic method, derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
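A minimal sketch of the minimum-cost path idea underlying such semi-automatic tracking is given below, using scikit-image's route_through_array on a cost image derived directly from pixel intensity. The cost design and the seed/end coordinates are illustrative assumptions; the anisotropic, structure-tensor-based metric described above is not reproduced here.

```python
import numpy as np
from skimage import io
from skimage.graph import route_through_array

# Placeholder input: a grayscale image in which the network is dark (e.g. vessels).
img = io.imread("retina.png", as_gray=True)

# Simple isotropic cost: cheap to travel along dark (low-intensity) structures.
cost = img + 1e-3                              # avoid zero cost

seed, end = (120, 40), (300, 410)              # illustrative pixel coordinates
path, total_cost = route_through_array(cost, seed, end,
                                       fully_connected=True, geometric=True)
path = np.asarray(path)                        # (N, 2) array of row/col indices
print(f"path length: {len(path)} pixels, cost: {total_cost:.2f}")
```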
Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M
Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly, and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. PMID:26518205
Full Text Available We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original method for cyberbullying detection by Nitta et al. We conduct an experiment in exactly the same settings and find that the method, based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. Thus, we hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We find that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.
Hiremath, P. S.; Kodge, B. G.
Full Text Available In the 21st century, aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require fast and reliable extraction of open space area from high resolution satellite imagery. In this paper we study an efficient and reliable automatic extraction algorithm to find the open space area in high resolution urban satellite imagery. The algorithm applies filtering, segmentation and grouping to the satellite images. The resulting images may be used to calculate the total available open space area and the built-up area. They may also be used to compare the difference between present and past open space area using historical urban satellite images of the same projection.
Yeo, Boon-Lock; Liu, Bede
Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
Muhammad Zuhair Qadir
Full Text Available Android has recently become a popular software platform for mobile devices, which now offer almost the same functionality as personal computers. Malware has also become a big concern. As the number of new Android applications is expected to increase rapidly in the near future, there is a need to detect malware automatically, quickly and efficiently. In this paper, we define a simple static analysis approach that first extracts the features of an Android application based on its intents, categorises the application into a known major category, and then maps it against the permissions requested by the application, also comparing it with the most common intents of that category. As a result, we can identify which apps use features that they are not supposed to use or do not need.
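A minimal sketch of this kind of static feature extraction is shown below. It assumes the APK's AndroidManifest.xml has already been decoded to plain XML (for example with apktool) and simply lists the requested permissions and declared intent actions; the category mapping and comparison logic described above are not reproduced, and the file path is a placeholder.

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def extract_static_features(manifest_path):
    """Return requested permissions and declared intent actions from a decoded manifest."""
    root = ET.parse(manifest_path).getroot()
    permissions = [p.get(ANDROID_NS + "name")
                   for p in root.iter("uses-permission")]
    intent_actions = [a.get(ANDROID_NS + "name")
                      for a in root.iter("action")]
    return permissions, intent_actions

# Example usage with a placeholder path to a decoded manifest.
perms, actions = extract_static_features("decoded_apk/AndroidManifest.xml")
print("permissions:", perms)
print("intent actions:", actions)
```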
Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi
Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each location has been changed by public works. The purpose of this research is to collect, at reasonable cost, the road update information provided by various road managers, in order to realize efficient updating of maps such as car navigation maps. In particular, we develop a system that extracts the relevant public works from the public work order outlooks released by each local government and automatically registers a summary, including position information, in a database, by combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.
The use of three-dimensional data has become very important for many mapping applications. DEMs are applied for modelling purposes, e.g. 3D city model generation, but principally for imagery orthorectification. In aerial photogrammetry the use of stereo imagery to produce an accurate DEM is well established, but the limits of the process (cost, data collection schedule, highly technical staff) and new advanced digital image processing algorithms have opened this work to remote sensing data. This research investigates the possibility of obtaining accurate DEMs by means of the automatic terrain extraction algorithms implemented in Leica Photogrammetry Suite (LPS) from stereoscopic remote sensing images collected by DigitalGlobe's WorldView-2 (WV2) satellite. The DEM of Rio de Janeiro (Brazil) and the corresponding digital orthoimages are the results.
Arastounia, M.; Lichti, D. D.
A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix whose elements are the range of points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determine the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for
Li, H.; di, L.; Huang, X.; Li, D.
In recent years, there has been much interest in information extraction from Lidar point cloud data. Many automatic edge detection algorithms have been applied to extracting information from Lidar data. Generally they can be divided into three major categories: early vision gradient operators, optimal detectors and operators using parametric fitting models. A Lidar point cloud includes both intensity information and geographic information. Thus, traditional edge detectors used on remotely sensed images can take advantage of the coordinate information provided by the point data. However, the derivation of complex terrain features from Lidar data points depends on the intensity properties and topographic relief of each scene. Take roads for example: in some urban areas, roads have intensity similar to that of buildings, but the topographic relationship of roads is distinct. The edge detector for roads in urban areas is therefore different from the detector for buildings. In Lidar extraction, each kind of scene has its own suitable edge detector. This paper compares the application of different edge detectors to various terrain areas, in order to determine the proper algorithm for each terrain type. The Canny, EDISON and SUSAN algorithms were applied to data points with the intensity character and topographic relationship of Lidar data. The Lidar data used for testing cover different terrain areas, such as an urban area with a mass of buildings, a rural area with vegetation, an area with slope, or an area with a bridge, etc. Results using these edge detectors are compared to determine which algorithm is suitable for a specific terrain area. Key words: Edge detector, Extraction, Lidar, Point data
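The sketch below illustrates the first of the compared detector families: Lidar returns are rasterized into an intensity grid and a Canny edge map is computed with OpenCV. The grid resolution, file format and Canny thresholds are illustrative assumptions, not values from the study above.

```python
import numpy as np
import cv2

# Placeholder input: a Lidar tile with columns x, y, z, intensity.
pts = np.loadtxt("tile.xyz")
x, y, inten = pts[:, 0], pts[:, 1], pts[:, 3]

# Rasterize intensity onto a 1 m grid (mean intensity per cell).
res = 1.0
cols = ((x - x.min()) / res).astype(int)
rows = ((y.max() - y) / res).astype(int)
grid = np.zeros((rows.max() + 1, cols.max() + 1))
count = np.zeros_like(grid)
np.add.at(grid, (rows, cols), inten)
np.add.at(count, (rows, cols), 1)
grid = np.divide(grid, count, out=grid, where=count > 0)

# Normalize to 8 bit and apply the Canny detector.
img = cv2.normalize(grid, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
edges = cv2.Canny(img, 50, 150)
cv2.imwrite("intensity_edges.png", edges)
```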
This paper presents an automatic extraction system (called TOPS-3D: Top Down Parallel Pattern Recognition System for 3D Images) that extracts soft tissues from 3D MRI head images using a model-driven analysis algorithm. As in the system TOPS that we developed previously, two concepts have been considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level, and the other is a parallel image processing structure used to extract multiple candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels. The first and second levels form the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data thanks to the use of model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
Extraction of dodecamolybdophosphoric acid H3PMo12O40 by nitrates of some high-molecular-weight amines (di-(2-ethylene-hexyl)-amine, diponylamine, diisoamyloctylamine) in dichloroethane solution has been studied. The composition of the associates in the organic phase may be represented as (BH)3PMo12O40, where BH+ is the protonated form of the amine. The overall conventional equilibrium constant of complex formation and extraction equals (1.51 ± 0.35) × 10^11.
Pakade, Vusumzi; Cukrowska, Ewa; Lindahl, Sofia; Turner, Charlotta; Chimuka, Luke
A molecularly imprinted polymer produced using quercetin as the imprinting compound was applied for the extraction of flavonol aglycones (quercetin and kaempferol) from Moringa oleifera methanolic extracts obtained using a heated reflux extraction method. Identification and quantification of these flavonols in the Moringa extracts were achieved using high performance liquid chromatography with ultraviolet detection. The breakthrough volume and retention capacity of the molecularly imprinted polymer SPE were investigated using a mixture of myricetin, quercetin and kaempferol. The calculated theoretical number of plates was found to be 14, 50 and 8 for myricetin, quercetin and kaempferol, respectively. Calculated adsorption capacities were 2.0, 3.4 and 3.7 μmol/g for myricetin, quercetin and kaempferol, respectively. No myricetin was observed in the Moringa methanol extracts. Recoveries of quercetin and kaempferol from Moringa methanol extracts of leaves and flowers ranged from 77 to 85% and 75 to 86%, respectively, demonstrating the feasibility of using the developed molecularly imprinted SPE method for quantitative clean-up of both of these flavonoids. Using heated reflux extraction combined with molecularly imprinted SPE, quercetin concentrations of 975 ± 58 and 845 ± 32 mg/kg were determined in Moringa leaves and flowers, respectively. However, the concentrations of kaempferol found in leaves and flowers were 2100 ± 176 and 2802 ± 157 mg/kg, respectively. PMID:23255435
Agarwalla, Swapna; Sarma, Kandarpa Kumar
Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behaviour and thereby aid ASR modelling and processing. The current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time
Full Text Available We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and limited by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other based on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process. In contrast, objects on the road can terminate the process. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out with a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. For the completeness and correctness of the region growing result, values between 92% and 95% are achieved.
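A much simplified, grid-based sketch of the height-difference criterion is given below: starting from a seed cell on the road, a four-neighbourhood region growing accepts neighbours whose height jump stays below an assumed minimum curbstone height. The original method works on the irregular point cloud and adds gap bridging and spline approximation, which are omitted here; the threshold and grid are illustrative.

```python
import numpy as np
from collections import deque

def grow_road(height, seed, curb_height=0.07):
    """Four-neighbourhood region growing on a height grid (metres).

    A neighbour is accepted while the height jump to the current cell
    stays below the assumed minimum curbstone height.
    """
    rows, cols = height.shape
    road = np.zeros_like(height, dtype=bool)
    road[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not road[nr, nc]:
                if abs(height[nr, nc] - height[r, c]) < curb_height:
                    road[nr, nc] = True
                    queue.append((nr, nc))
    return road

# Illustrative usage on a synthetic height grid with a raised kerb.
h = np.zeros((100, 100))
h[:, 60:] += 0.12                      # curbstone jump of 12 cm
mask = grow_road(h, seed=(50, 10))
print("road cells:", mask.sum())       # growth stops at the kerb
```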
Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.
Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the area under study are one of the most important sources for this purpose. Although old databases and maps are a great resource, it is more than likely that the training data extracted from them contain errors, which affects the classification procedure; as a result, the editing of training samples is an essential matter. Due to the urban nature of the studied area and the problems caused by pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Due to the age of the map, these samples are uncertain and may contain wrong data. To handle this issue, an editing process is proposed based on the K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, this classified image is compared with the existing map and the changed areas are detected.
Full Text Available Unstructured Arabic text documents are an important source of geographical and temporal information. The possibility of automatically tracking spatio-temporal information, capturing changes relating to events from text documents, is a new challenge in the fields of geographic information retrieval (GIR), temporal information retrieval (TIR) and natural language processing (NLP). There has been a great deal of work on information extraction in other languages that use the Latin alphabet, such as English, French, or Spanish; in contrast, the Arabic language is still not well supported in GIR and TIR and more research needs to be conducted. In this paper, we present an approach that supports automated exploration and extraction of spatio-temporal information from Arabic text documents, in order to capture and model such information before it can be utilized in search and exploration tasks. The system has been successfully tested on 50 documents that include a mixture of types of spatial/temporal information. The results achieved 91.01% recall and 80% precision. This illustrates that our approach is effective and its performance is satisfactory.
Zhang, Yanfeng; Zhang, Yongjun; Zhang, Yi; Li, Xin
Automatically extracting a DTM from DSM or LiDAR data by distinguishing non-ground points from ground points is an important issue. Many algorithms have been developed for this task; however, most of them target dense LiDAR data and lack the ability to derive a DTM from a low resolution DSM. This is caused by the reduced distinction in elevation variation between steep terrain and surface objects. In this paper, a method called two-step semi-global filtering (TSGF) is proposed to extract the DTM from a low resolution DSM. Firstly, the DSM slope map is calculated and smoothed by SGF (semi-global filtering), which is then binarized and used as the mask of flat terrain. Secondly, the DSM is segmented under the restriction of the flat terrain mask. Lastly, each segment is filtered with the semi-global algorithm in order to remove non-ground points, which produces the final DTM. The first SGF is based on the global distribution characteristics of large slopes, which distinguish steep terrain from flat terrain. The second SGF is used to filter non-ground points of the DSM within flat terrain segments. Therefore, by the two-step SGF, non-ground points are removed robustly while the shape of steep terrain is kept. Experiments on DSMs generated from ZY3 imagery with a resolution of 10-30 m demonstrate the effectiveness of the proposed method.
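The first step of the pipeline, the slope map and flat-terrain mask, can be sketched with NumPy as follows. The grid spacing, smoothing and threshold values are illustrative assumptions, and the semi-global smoothing itself is replaced here by a simple Gaussian filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flat_terrain_mask(dsm, cell_size=10.0, slope_thresh_deg=15.0, sigma=2.0):
    """Return a boolean mask of flat terrain from a DSM grid (heights in metres)."""
    gy, gx = np.gradient(dsm, cell_size)           # height change per metre
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    slope_smooth = gaussian_filter(slope, sigma)   # stand-in for the semi-global smoothing
    return slope_smooth < slope_thresh_deg

# Illustrative usage with a synthetic DSM tile (gentle ramp plus noise).
dsm = np.random.rand(200, 200) * 3 + np.linspace(0, 50, 200)
mask = flat_terrain_mask(dsm)
print("flat fraction:", mask.mean())
```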
Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi
To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316
Full Text Available Users of geospatial data in government, military, industry, research, and other sectors need accurate display of roads and other terrain information in areas where there are ongoing operations or locations of interest. Hence, road extraction that is significantly more automated than the employment of costly and scarce human resources has become a challenging technical issue for the geospatial community. An automatic road extraction method based on Extended Kalman Filtering (EKF) and a variable-structured multiple model particle filter (VS-MMPF) from satellite images is addressed. EKF traces the median axis of a single road segment, while VS-MMPF traces all road branches initializing at the intersection. In the case of the Local Linearization Particle Filter (LLPF), a large number of particles are used and therefore high computational expense is usually required in order to attain a certain accuracy and robustness. The basic idea is to reduce the whole sampling space of the multiple model system to the mode subspace by marginalization over the target subspace and to choose a better importance function for mode state sampling. The core of the system is based on profile matching. During the estimation, new reference profiles are generated and stored in the road template memory for future correlation analysis, thus covering the space of road profiles.
Full Text Available Triploid Populus tomentosa Carr. (Salicaceae) is a good alternative to meet the increasing needs of the global pulp and paper industry. Meanwhile, the xylem of this species could be a useful bioresource for developing low molecular weight extractives with significant bioactive potential. In the present work, a phytochemical investigation of the aqueous EtOH extractives of triploid P. tomentosa xylem, by systematic performance of Sephadex LH-20 open column chromatography and thin layer chromatography (TLC), resulted in the isolation of two phenolic acids (p-coumaric acid (I) and caffeic acid (II)), two flavonoids (apigenin (III) and luteolin (IV)), and three phenolic glucosides (salicortin (V), salireposide (VI) and populoside (VII)). The structure elucidation and determination of the isolated extractives were based on their spectroscopic data and physicochemical evidence. This is the first report of the low molecular weight extractives of triploid P. tomentosa. Various low molecular weight extractives from triploid P. tomentosa xylem exhibited significant antioxidative activities in DPPH and hydroxyl radical scavenging assays.
Syed Ali Naqi Gilani
Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences between urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on object size, height, area, and orientation are generally employed, which adversely affect the detection performance. Buildings that are small, under shadows or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point clouds and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex data sets (Australian), in contrast to the ISPRS benchmark, where it does better than or equal to its counterparts.
Dussol, David; Druault, Philippe; Mallat, Bachar; Delacroix, Sylvain; Germain, Grégory
When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed in oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, of a solid-liquid interface and of bubbles in the PIV frame leads to numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. The unsteady surface determination consists of the following steps: i) edge detection of the gradient magnitude in the PIV frame, ii) extraction of the particles by filtering high-intensity large areas related to the bubbles and/or hull reflections, iii) extraction of the rough region containing these particles and their reflections, and iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated by Fourier analysis and by visualizing selected PIV images containing numerous spurious high intensity areas. This paper demonstrates how this data analysis process leads to a PIV image database without reflections and to an automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
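The final interpolation step can be sketched as follows: given the pixel coordinates of candidate free-surface points extracted from a PIV frame, a fifth-order polynomial is fitted with NumPy and evaluated across the frame width. Steps i)-iv) above are assumed to have been performed already, and the input arrays and frame width are placeholders.

```python
import numpy as np

# Placeholder input: column (x) and row (y) pixel coordinates of candidate
# free-surface points detected in one PIV frame (steps i-iv of the process).
x_pts = np.array([12, 80, 150, 230, 310, 395, 470, 555, 610])
y_pts = np.array([88, 92, 97, 95, 101, 104, 99, 103, 107])

# Fifth-order polynomial fit of the free surface, as in the described method.
coeffs = np.polyfit(x_pts, y_pts, deg=5)
surface = np.poly1d(coeffs)

frame_width = 640
x_full = np.arange(frame_width)
y_surface = surface(x_full)            # free-surface row index for every column

# Everything above the fitted surface (smaller row index) would be masked out.
print("surface height at mid-frame:", y_surface[frame_width // 2])
```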
Karon L. Smith
Full Text Available Phenological metrics are of potential value as direct indicators of climate change. Usually they are obtained via either satellite imaging or ground based manual measurements; both are bespoke and therefore costly and have problems associated with scale and quality. An increase in the use of camera networks for monitoring infrastructure offers a means of obtaining images for use in phenological studies, where the only necessary outlay would be for data transfer, storage, processing and display. Here a pilot study is described that uses image data from a traffic monitoring network to demonstrate that it is possible to obtain usable information from the data captured. There are several challenges in using this network of cameras for automatic extraction of phenological metrics, not least, the low quality of the images and frequent camera motion. Although questions remain to be answered concerning the optimal employment of these cameras, this work illustrates that, in principle, image data from camera networks such as these could be used as a means of tracking environmental change in a low cost, highly automated and scalable manner that would require little human involvement.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
Introduction: It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. Materials and Methods: This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood source images. Results: The accuracy of the technique was found to be encouraging, especially in view of the practical difficulties with vendor-specific protocols. Conclusion: It may be used as a preprocessing step while calculating the uniformity parameters of the gamma camera in less time and with fewer constraints. PMID:27095858
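A minimal NumPy sketch of the preprocessing idea is shown below: the flood image is cropped to its exposed field of view and a central region is taken as the CFOV, after which a NEMA-style integral uniformity can be computed. The 75% CFOV fraction, the background threshold and the omission of the NEMA smoothing kernel are simplifying assumptions, and the original work was implemented in MATLAB, not Python.

```python
import numpy as np

def crop_ufov(flood, background_fraction=0.05):
    """Crop the flood image to the useful field of view (UFOV)."""
    exposed = flood > background_fraction * flood.max()
    rows = np.where(exposed.any(axis=1))[0]
    cols = np.where(exposed.any(axis=0))[0]
    return flood[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

def central_fov(ufov, fraction=0.75):
    """Take the central region (assumed 75% of UFOV dimensions) as the CFOV."""
    h, w = ufov.shape
    dh, dw = int(h * (1 - fraction) / 2), int(w * (1 - fraction) / 2)
    return ufov[dh:h - dh, dw:w - dw]

def integral_uniformity(region):
    """Integral uniformity in percent: (max - min) / (max + min) * 100."""
    cmax, cmin = float(region.max()), float(region.min())
    return (cmax - cmin) / (cmax + cmin) * 100.0

# Illustrative usage on a simulated flood image with a blank border.
flood = np.zeros((256, 256))
flood[30:226, 30:226] = np.random.poisson(1000, (196, 196))
ufov = crop_ufov(flood)
cfov = central_fov(ufov)
print("IU (UFOV): %.2f%%, IU (CFOV): %.2f%%"
      % (integral_uniformity(ufov), integral_uniformity(cfov)))
```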
Saporito, Salvatore; Herold, Ingeborg HF; Houthuizen, Patrick; van den Bosch, Harrie CM; Korsten, Hendrikus HM; van Assen, Hans C.; Mischi, Massimo
Indicator dilution theory provides a framework for the measurement of several cardiovascular parameters. Recently, dynamic imaging and contrast agents have been proposed to apply the method in a minimally invasive way. However, the use of contrast-enhanced sequences requires the definition of regions of interest (ROIs) in the dynamic image series; a time-consuming and operator dependent task, commonly performed manually. In this work, we propose a method for the automatic extraction of indicator dilution curves, exploiting the time domain correlation between pixels belonging to the same region. Individual time intensity curves were projected into a low dimensional subspace using principal component analysis; subsequently, clustering was performed to identify the different ROIs. The method was assessed on clinically available DCE-MRI and DCE-US recordings, comparing the derived IDCs with those obtained manually. The robustness to noise of the proposed approach was shown on simulated data. The tracer kinetic parameters derived on real images were in agreement with those obtained from manual annotation. The presented method is a clinically useful preprocessing step prior to further ROI-based cardiac quantifications.
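A minimal sketch of the core idea, projecting per-pixel time-intensity curves onto a few principal components and clustering them to form ROIs, is given below with scikit-learn. The array shapes, file name, component count and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder input: a dynamic contrast-enhanced series of shape (time, rows, cols).
frames = np.load("dce_series.npy")
n_t, n_r, n_c = frames.shape
curves = frames.reshape(n_t, -1).T          # one time-intensity curve per pixel

# Project curves onto a low dimensional subspace, then cluster into candidate ROIs.
features = PCA(n_components=3).fit_transform(curves)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
roi_map = labels.reshape(n_r, n_c)

# Indicator dilution curve of one ROI: mean intensity over its pixels per frame.
roi_id = 0
idc = curves[labels == roi_id].mean(axis=0)
print("IDC length:", idc.shape[0])
```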
Bey, Benjamin S; Fichot, Erin B; Norman, R Sean
Successful and accurate analysis and interpretation of metagenomic data is dependent upon the efficient extraction of high-quality, high molecular weight (HMW) community DNA. However, environmental mat samples often pose difficulties to obtaining large concentrations of high-quality, HMW DNA. Hypersaline microbial mats contain high amounts of extracellular polymeric substances (EPS)1 and salts that may inhibit downstream applications of extracted DNA. Direct and harsh methods are often used in DNA extraction from refractory samples. These methods are typically used because the EPS in mats, an adhesive matrix, binds DNA during direct lysis. As a result of harsher extraction methods, DNA becomes fragmented into small sizes. The DNA thus becomes inappropriate for large-insert vector cloning. In order to circumvent these limitations, we report an improved methodology to extract HMW DNA of good quality and quantity from hypersaline microbial mats. We employed an indirect method involving the separation of microbial cells from the background mat matrix through blending and differential centrifugation. A combination of mechanical and chemical procedures was used to extract and purify DNA from the extracted microbial cells. Our protocol yields approximately 2 μg of HMW DNA (35-50 kb) per gram of mat sample, with an A(260/280) ratio of 1.6. Furthermore, amplification of 16S rRNA genes suggests that the protocol is able to minimize or eliminate any inhibitory effects of contaminants. Our results provide an appropriate methodology for the extraction of HMW DNA from microbial mats for functional metagenomic studies and may be applicable to other environmental samples from which DNA extraction is challenging. PMID:21775955
Yanjun Zhang; Xiangmin Zhang; Wenhui Liu; Yuxi Luo; Enjia Yu; Keju Zou; Xiaoliang Liu
This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of the subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of the EEG, EOG and EMG in the time and frequency domains to construct feature vectors according to the existing literature as well as cl...
Improving the speed and quality of eddy current non-destructive testing of steam generator tubes leads to automating all the processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs.
The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach were experimentally validated using an MR database of fat-suppressed spoiled gradient recalled images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
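For reference, the Dice similarity coefficient reported above can be computed from two binary masks as in the following sketch; the volumes shown are synthetic placeholders, not data from the study.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic example: an automatic and a manual segmentation of the same volume.
auto = np.zeros((64, 64, 64), dtype=bool)
manual = np.zeros_like(auto)
auto[20:44, 20:44, 20:44] = True
manual[22:46, 20:44, 20:44] = True
print("DSC: %.3f" % dice_coefficient(auto, manual))
```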
Aluir P. Dal-Poz
Full Text Available This article presents an automatic methodology for the extraction of road seeds from high-resolution aerial images. The method is based on a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, in which each sequence of this type can be geometrically structured as a chain of contiguous quadrilaterals. Experiments carried out with high-resolution aerial images showed that the proposed methodology is very promising for extracting road seeds. This article presents the fundamentals of the method and the experimental results.
Full Text Available Pure surface materials, denoted by endmembers, play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies involving the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many different versions of N-FINDR algorithms, computational complexity issues still remain, and these algorithms cannot handle the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember extraction-based algorithm may be more effective when the number of endmembers to be extracted is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such a method. The proposed method consists of three steps for determining the proper number of endmembers and for removing endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) and spectral discrimination measurements. A synthetic hyperspectral image and two different airborne images, namely Airborne Imaging Spectrometer for Application (AISA) and Compact Airborne Spectrographic Imager (CASI) data, were tested using the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers or errors from mixed materials.
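The RMSE image used to screen endmembers can be sketched as below: each pixel is unmixed against the current endmember set by unconstrained least squares and the residual RMSE is mapped back to the image grid. The array shapes are illustrative, and the abundance non-negativity/sum-to-one constraints used in practice are omitted.

```python
import numpy as np

def rmse_image(cube, endmembers):
    """RMSE of unconstrained least-squares unmixing.

    cube       : hyperspectral data of shape (rows, cols, bands)
    endmembers : candidate endmember spectra of shape (n_endmembers, bands)
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).T                 # (bands, n_pixels)
    E = endmembers.T                                   # (bands, n_endmembers)
    abundances, *_ = np.linalg.lstsq(E, pixels, rcond=None)
    residual = pixels - E @ abundances
    rmse = np.sqrt((residual ** 2).mean(axis=0))
    return rmse.reshape(rows, cols)

# Illustrative usage with random data standing in for an AISA/CASI cube.
cube = np.random.rand(50, 60, 100)
endmembers = np.random.rand(4, 100)
print("mean RMSE:", rmse_image(cube, endmembers).mean())
```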
Ahmadi, Salman; Zoej, M. J. Valadan; Ebadi, Hamid; Moghaddam, Hamid Abrishami; Mohammadzadeh, Ali
The main objective of this research is to present a new method for building boundary detection and extraction based on the active contour model. Classical models of this type have several shortcomings: they require extensive initialization, they are sensitive to noise, and adjustment issues often become problematic with complex images. In this research a new active contour model has been proposed that is optimized for automatic building extraction. This new active contour model, in comparison to the classical ones, can detect and extract building boundaries more accurately, and is capable of avoiding detection of the boundaries of features in the neighbourhood of buildings such as streets and trees. Finally, the detected building boundaries are generalized to obtain a regular shape for the building boundaries. Tests with our proposed model demonstrate excellent accuracy in terms of building boundary extraction. However, due to the radiometric similarity between building roofs and the image background, our system fails to recognize a few buildings.
Kwang Baek Kim; Doo Heon Song; Hyun Jun Park
Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the important organs and their boundaries in a very low brightness contrast environment. Our fuzzy sigma binarization process is one of t...
Farrington, Keith [School of Chemical Sciences, Dublin City University, Glasnevin, Dublin 9 (Ireland); Magner, Edmond [Materials and Surface Science Institute, Chemical and Environmental Sciences Department, University of Limerick, Limerick (Ireland); Regan, Fiona [School of Chemical Sciences, Dublin City University, Glasnevin, Dublin 9 (Ireland)]. E-mail: firstname.lastname@example.org
A rational design approach was taken to the planning and synthesis of a molecularly imprinted polymer capable of extracting caffeine (the template molecule) from a standard solution of caffeine and further from a food sample containing caffeine. Data from NMR titration experiments in conjunction with a molecular modelling approach was used in predicting the relative ratios of template to functional monomer and furthermore determined both the choice of solvent (porogen) and the amount used for the study. In addition the molecular modelling program yielded information regarding the thermodynamic stability of the pre-polymerisation complex. Post-polymerisation analysis of the polymer itself by analysis of the pore size distribution by BET yielded significant information regarding the nature of the size and distribution of the pores within the polymer matrix. Here is proposed a stepwise procedure for the development and testing of a molecularly imprinted polymer using a well-studied compound-caffeine as a model system. It is shown that both the physical characteristics of a molecularly imprinted polymer (MIP) and the analysis of the pre-polymerisation complex can yield vital information, which can predict how well a given MIP will perform.
Ma, Run-Tian; Shi, Yan-Ping
New magnetic molecularly imprinted polymers (MMIPs) for quercetagetin were prepared by a surface molecular imprinting method using superparamagnetic core-shell nanoparticles as the support. Acrylamide as the functional monomer, ethylene glycol dimethacrylate as the crosslinker and acetonitrile as the porogen were applied in the preparation process. Fourier transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD) and vibrating sample magnetometry (VSM) were applied to characterize the MMIPs, and high performance liquid chromatography (HPLC) was utilized to analyze the target analytes. The selectivity of the quercetagetin MMIPs was evaluated according to their recognition of the template and its analogues. Excellent binding of quercetagetin was observed in the MMIP adsorption experiments, and analysis of the adsorption isotherm models showed that homogeneous binding sites were distributed on the surface of the MMIPs. The MMIPs were employed as adsorbents in solid phase extraction for the determination of quercetagetin in Calendula officinalis extracts. Furthermore, this method is fast and simple and could fulfil the determination and extraction of quercetagetin from herbal extracts. PMID:25618718
Full Text Available Automatic vehicle extraction from an airborne laser scanning (ALS) point cloud is very useful for many applications, such as digital elevation model generation and 3D building reconstruction. In this article, an object-based point cloud analysis (OBPCA) method is proposed for vehicle extraction from an ALS point cloud. First, a segmentation-based progressive TIN (triangular irregular network) densification is employed to detect the ground points, and the potential vehicle points are detected based on the normalized heights of the non-ground points. Second, 3D connected component analysis is performed to group the potential vehicle points into segments. Finally, vehicle segments are detected based on three features: area, rectangularity and elongatedness. Experiments suggest that the proposed method is capable of achieving higher accuracy than the existing mean-shift-based method for vehicle extraction from an ALS point cloud. Moreover, the larger the point density is, the higher the achieved accuracy.
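The three segment features named above can be sketched as follows for a single segment's ground-plan points, using OpenCV's minimum-area rectangle. The exact feature definitions used by the authors are not given in the abstract, so the formulas below (footprint area over bounding-rectangle area, and rectangle side ratio) are reasonable stand-ins, and the grid cell size is an assumption.

```python
import numpy as np
import cv2

def segment_features(xy, cell=0.25):
    """Area, rectangularity and elongatedness of a segment's 2D footprint.

    xy   : (n_points, 2) array of planimetric coordinates in metres
    cell : grid cell size used to approximate the footprint area
    """
    # Footprint area approximated by the number of occupied grid cells.
    cells = {(int(x / cell), int(y / cell)) for x, y in xy}
    area = len(cells) * cell * cell

    # Minimum-area bounding rectangle of the points.
    rect = cv2.minAreaRect(xy.astype(np.float32))
    w, h = rect[1]
    long_side, short_side = max(w, h), max(min(w, h), 1e-6)

    rectangularity = area / max(long_side * short_side, 1e-6)
    elongatedness = long_side / short_side
    return area, rectangularity, elongatedness

# Illustrative usage with a synthetic car-sized segment (about 4.5 m x 1.8 m).
pts = np.random.rand(400, 2) * np.array([4.5, 1.8])
print(segment_features(pts))
```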
Full Text Available The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuations (RMSF) of the structures. These are computed after the alignment of the atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, possibly resulting in a poor quantification of the structural fluctuations and, often, in overlooking important fluctuations associated with biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO) strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of the structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free software at: http://leandro.iqm.unicamp.br/mdlovofit.
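A simplified sketch of the iterative idea behind this kind of alignment is given below: each frame is superposed (Kabsch fit) on the currently least-mobile fraction of atoms, the per-atom RMSF is recomputed, and the least-mobile set is reselected. This is not the MDLovoFit implementation; the fraction, iteration count and synthetic trajectory are illustrative assumptions.

```python
import numpy as np

def kabsch_transform(mobile_sub, ref_sub):
    """Rotation R and translation t that best map mobile_sub onto ref_sub (Kabsch)."""
    mc, rc = mobile_sub.mean(axis=0), ref_sub.mean(axis=0)
    H = (mobile_sub - mc).T @ (ref_sub - rc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rc - mc @ R.T
    return R, t

def lovo_like_alignment(traj, fraction=0.7, n_iter=10):
    """Iteratively superpose frames on the currently least-mobile fraction of atoms.

    traj : array of shape (n_frames, n_atoms, 3)
    Returns the aligned trajectory and the final per-atom RMSF.
    """
    n_frames, n_atoms, _ = traj.shape
    n_core = max(3, int(fraction * n_atoms))
    core = np.arange(n_atoms)                       # start from all atoms
    aligned = traj.copy()
    for _ in range(n_iter):
        ref = aligned.mean(axis=0)
        for i in range(n_frames):
            R, t = kabsch_transform(aligned[i, core], ref[core])
            aligned[i] = aligned[i] @ R.T + t
        rmsf = np.sqrt(((aligned - aligned.mean(axis=0)) ** 2).sum(axis=2).mean(axis=0))
        core = np.argsort(rmsf)[:n_core]            # keep the least mobile atoms
    return aligned, rmsf

# Illustrative usage: a quasi-rigid core plus a highly mobile tail.
rng = np.random.default_rng(0)
base = rng.normal(size=(50, 3))
traj = np.repeat(base[None], 200, axis=0)
traj[:, :40] += rng.normal(scale=0.05, size=(200, 40, 3))
traj[:, 40:] += rng.normal(scale=1.5, size=(200, 10, 3))
aligned, rmsf = lovo_like_alignment(traj, fraction=0.8)
print("core RMSF: %.2f  tail RMSF: %.2f" % (rmsf[:40].mean(), rmsf[40:].mean()))
```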
Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard
This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186
Using self-designed automatic extraction software for brain functional areas, the grey scale distribution of 18F-FDG imaging and the relationship between the 18F-FDG accumulation of each brain anatomic functional area and the injected 18F-FDG dose, the level of glucose, age, etc., were studied. According to the Talairach coordinate system, after rotation, translation and deformation, the 18F-FDG PET images were registered to the Talairach coordinate atlas, and then the ratio of the average grey value of each individual brain anatomic functional area to that of the whole brain was calculated. Furthermore, the relationship between the 18F-FDG accumulation of every brain anatomic functional area and the injected 18F-FDG dose, the level of glucose and age was tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortical areas of the 18F-FDG PET brain images could be successfully localized and extracted, such as the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratios to the inner reference of every brain anatomic functional area were 1.01 ± 0.15. By multiple stepwise regression, with the exception of the thalamus and hippocampus, the grey scale of all the brain functional areas was negatively correlated with age, but showed no correlation with blood sugar or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and also successfully support brain blood flow and metabolic studies, but extraction of more detailed areas needs further investigation
BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
Mugnai, Mauro L; Elber, Ron
We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: the diffusion along the backbone torsions of a solvated alanine dipeptide. PMID:25573551
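The spirit of the approach can be illustrated on the simplest case: a minimal sketch, assuming a one-dimensional overdamped Langevin trajectory, that estimates position-dependent drift and diffusion from the first two Kramers-Moyal coefficients. It is not the Milestoning estimator described above; function and variable names are illustrative.

```python
import numpy as np

def kramers_moyal_1d(x, dt, n_bins=50):
    """Binned estimates of drift D1(x) ~ <dx>/dt and diffusion D2(x) ~ <dx^2>/(2 dt)."""
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.digitize(x[:-1], bins) - 1
    centres = 0.5 * (bins[:-1] + bins[1:])
    drift = np.full(n_bins, np.nan)
    diff = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = idx == b
        if m.sum() > 10:                                  # require enough samples per bin
            drift[b] = dx[m].mean() / dt                  # first Kramers-Moyal coefficient
            diff[b] = (dx[m] ** 2).mean() / (2.0 * dt)    # second Kramers-Moyal coefficient
    return centres, drift, diff
```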
Maltezos, Evangelos; Ioannidis, Charalabos
This study aims to automatically extract building roof planes from airborne LiDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection for each building is performed by applying extensions of the RHT that add constraint criteria during the random selection of the 3 points, aiming at the optimum adaptation to the building rooftops, together with a simple accumulator design that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison is implemented to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
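A bare-bones version of the randomized Hough voting step might look like the following sketch, which omits the paper's additional constraint criteria and tailored accumulator design; it simply draws random point triplets, computes plane parameters and votes in a quantized accumulator. Names and bin sizes are illustrative.

```python
import numpy as np
from collections import Counter

def rht_planes(points, n_draws=20000, angle_step=5.0, dist_step=0.5, rng=None):
    """Vote for plane hypotheses (theta, phi, d) from random 3-point samples."""
    rng = np.random.default_rng(rng)
    acc = Counter()
    for _ in range(n_draws):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n /= norm
        if n[2] < 0:                          # fix hemisphere so (n, d) is unique
            n = -n
        d = np.dot(n, p1)
        theta = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))   # polar angle of normal
        phi = np.degrees(np.arctan2(n[1], n[0])) % 360.0          # azimuth of normal
        cell = (round(theta / angle_step), round(phi / angle_step), round(d / dist_step))
        acc[cell] += 1
    return acc.most_common(10)                # the most voted plane cells
```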
Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut
Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.
Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel;
different market sources, even including different market types; machine learning approach for automatic definition of downloads periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database with a...... scenarios generation tool, based on knowledge discovery techniques, provides a framework to study real market scenarios allowing simulators improvement and validation....
A new method to obtain a spline outline description of Chinese fonts based on stroke extraction is presented. It has two primary advantages: (1) the quality of Chinese output is greatly improved; (2) the memory requirement is reduced. The method for stroke extraction is discussed in detail and experimental results are presented.
In this paper, we propose a new method of extracting heart wall contours using the Active Contour Model (snakes). We use an adaptive contrast enhancing method, which makes it possible to extract both the inner and outer contours of the left ventricle of the heart. Experimental results showed the efficiency of this method. (author)
Kawanaka, Kaichiro; Uetsuji, Yasutomo; Tsuchiya, Kazuyoshi; Nakamachi, Eiji
In this study, a portable HMS (Health Monitoring System) device is newly developed. Its features are 1) puncturing a blood vessel by using a minimally invasive micro-needle, 2) extracting and transferring human blood, and 3) measuring the blood glucose level. This miniature SMBG (Self-Monitoring of Blood Glucose) device employs a syringe reciprocal blood extraction system equipped with an electro-mechanical control unit for accurate and steady operation. The device consists of a) a disposable syringe unit, b) a non-disposable body unit, and c) a glucose enzyme sensor. The syringe unit consists of the syringe itself, its cover, a piston and a titanium alloy micro-needle, whose inner diameter is about 100 µm. The body unit consists of a linear-driven stepping motor, a piston jig, which connects directly to the shaft of the stepping motor, a syringe jig, which is driven in combination with the piston jig, and a slider, which fixes the syringe jig. The required thrust to drive the slider is designed to be greater than the blood extraction force. Because of this driving mechanism, the automatic blood extraction and discharging processes are completed by only one linear-driven stepping motor. The experimental results confirmed that our miniature SMBG device achieves more than 90% volumetric efficiency at a piston driving speed of 1.0 mm/s. Further, the blood sugar level was measured successfully by using the glucose enzyme sensor.
Mitani, Constantina; Anthemidis, Aristidis N
A novel and versatile automatic sequential injection countercurrent liquid-liquid microextraction (SI-CC-LLME) system coupled with flame atomic absorption spectrometry (FAAS) is presented for metal determination. The extraction procedure is based on the countercurrent flow of the aqueous and organic phases, which takes place in a newly designed, lab-made microextraction chamber. A noteworthy feature of the extraction chamber is that it can be utilized for organic solvents heavier or lighter than water. The proposed method was successfully demonstrated for on-line lead determination and applied to environmental water samples using 120 μL of chloroform as extractant and ammonium diethyldithiophosphate as chelating reagent. The effects of the major experimental parameters, including the volume of extractant and the flow rates of the aqueous and organic phases, were studied and optimized. Under the optimum conditions, for 6 mL sample consumption an enhancement factor of 130 was obtained. The detection limit was 1.5 μg L(-1) and the precision of the method, expressed as relative standard deviation (RSD), was 2.7% at the 40.0 μg L(-1) Pb(II) concentration level. The proposed method was evaluated by analyzing certified reference materials and spiked environmental water samples. PMID:25435230
Cherkauer, Keith; Hearst, Anthony
Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before the flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
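The two accuracy measures used above are straightforward to compute; the following sketch (assuming numpy and shapely, with illustrative function names) evaluates the horizontal RMSE of checkpoint targets and the plot extraction accuracy as the share of a desired plot polygon covered by the extracted polygon.

```python
import numpy as np
from shapely.geometry import Polygon

def horizontal_rmse(measured_xy, true_xy):
    """Horizontal RMSE (same units as the coordinates) over checkpoint targets."""
    err = np.asarray(measured_xy) - np.asarray(true_xy)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

def plot_extraction_accuracy(extracted_corners, desired_corners):
    """Percentage of the desired plot area covered by the extracted plot polygon."""
    extracted, desired = Polygon(extracted_corners), Polygon(desired_corners)
    return 100.0 * extracted.intersection(desired).area / desired.area
```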
Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang
Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180
Radiation protection of the patient in computed tomography (CT) is a priority for several reasons: the dose received during a scan is relatively high, CT is the diagnostic modality that contributes most to the collective patient dose, and the frequency of CT examinations has been increasing rapidly over the past few years. On the other hand, systems are now becoming commercially available that automatically register patient doses by receiving the dosimetric parameters of all scans performed on the equipment connected to the system. In this communication, the first results are presented from two CT scanners connected to an automatic system of this kind recently installed at our Center. (Author)
M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono
This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used for constructing a sentence classifier as the model. The sentence classifier is trained by using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best performer (80.68%) is the SVM classifier with all features.
Gábor, Kata; Apidianaki, Marianna; Sagot, Benoît; Villemonte De La Clergerie, Éric
An important trend in recent works on lexical semantics has been the development of learning methods capable of extracting semantic information from text corpora. The majority of these methods are based on the distributional hypothesis of meaning and acquire semantic information by identifying distributional patterns in texts. In this article, we present a distributional analysis method for extracting nominalization relations from monolingual corpora. The acquisition method makes use of distr...
Ben Abacha Asma; Zweigenbaum Pierre
Background: Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) semantically exploring the produced RDF annotations. Results: Our annotation approach relies on linguistic patterns and domain knowledge and consists in two steps: (i) r...
Li, Y.; Hu, X.; Guan, H.; Liu, P.
Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these...
In this study, we extracted whole-brain and temporal lobe images from MR images (26 healthy elderly controls and 34 Alzheimer-type dementia patients) by means of binarization, mask processing, template matching, Hough transformation, and boundary tracing. We assessed the extraction accuracy by comparing the extracted images to images extracted by a radiological technologist. The results of the assessment, expressed as concordance rates, were: whole brain 91.3±4.3%, right temporal lobe 83.3±6.9%, and left temporal lobe 83.7±7.6%. Furthermore, discriminant analysis using 6 textural features demonstrated a sensitivity and specificity of 100% when the healthy elderly controls were compared to the Alzheimer-type dementia patients. Our research showed the possibility of automatic objective diagnosis of temporal lobe abnormalities using automatically extracted images of the temporal lobes. (author)
Mayunga, Selassie David
The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, the creation of geographical information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its use on unstructured objects would negatively affect their actual shapes. The external constraint energy term was removed from the original snake model formulation, thereby giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which have been formed illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance
Full Text Available Recent developments in laser scanning devices have increased the capability of representing rock outcrops at very high resolution. An accurate 3D point cloud model with rock joint information can help geologists to estimate the stability of a rock slope on-site or off-site. An automatic plane extraction method was developed by computing point normal directions and grouping those with similar directions. Point normals were calculated by the moving least squares (MLS) method, considering every point within a given distance so as to minimize the error with respect to the fitting plane. Normal directions were classified into a number of dominant clusters by fuzzy K-means clustering. A region growing approach was exploited to discriminate joints in the point cloud. The overall procedure was applied to a point cloud with about 120,000 points, and successfully extracted joints together with their joint information. The extraction procedure was implemented so as to minimize the number of input parameters and to attach the plane information to the existing point cloud for less redundancy and higher usability of the point cloud itself.
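A compact sketch of this pipeline is given below: per-point normals are estimated from a local plane fit and then clustered into dominant joint-set orientations. Plain SVD-based fitting and k-means are used here in place of the paper's MLS weighting and fuzzy K-means, and the region growing stage is omitted; names and radii are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def estimate_normals(points, radius=0.1):
    """Estimate a unit normal per point from a least-squares plane fit to its neighbourhood."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        nbrs = points[tree.query_ball_point(p, radius)]
        if len(nbrs) < 3:
            continue
        # normal = singular vector of the centred neighbourhood with the smallest singular value
        _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n   # keep all normals in one hemisphere
    return normals

def dominant_orientations(normals, k=3):
    """Cluster normals into k dominant joint-set orientations."""
    valid = normals[np.linalg.norm(normals, axis=1) > 0]   # skip points without a normal
    return KMeans(n_clusters=k, n_init=10).fit(valid).cluster_centers_
```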
Full Text Available Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step, including a description, trialed implementation, and a list of case studies applying individual methods. Furthermore, accuracy assessment methods are outlined. Advantages and drawbacks of each approach are discussed in terms of their applicability to UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.
Purpose: In this study automatic detection of implanted gold markers in megavoltage portal images for on-line position verification was investigated. Methods and Materials: A detection method for fiducial gold markers, consisting of a marker extraction kernel (MEK), was developed. The detection success rate was determined for different markers using this MEK. The localization accuracy was investigated by measuring distances between markers, which were fixed on a perspex template. In order to generate images comparable to images of patients with implanted markers, this template was placed on the skin of patients before the start of the treatment. Portal images were taken of lateral prostate fields at 18 MV within 1-2 monitor units (MU). Results: The detection success rates for markers of 5 mm length and 1.2 and 1.4 mm diameter were 0.95 and 0.99 respectively when placed at the beam entry and 0.39 and 0.86 when placed at the beam exit. The localization accuracy appears to be better than 0.6 mm for all markers. Conclusion: Automatic marker detection with an acceptable accuracy at the start of a radiotherapy fraction is feasible. Further minimization of marker diameters may be achieved with the help of an a-Si flat panel imager and may increase the clinical acceptance of this technique
In this paper, we propose a method of endocardium extraction from chest MRI images. The proposed procedure, constructed with three-dimensional digital image processing techniques, is executed without manual intervention. A digital figure of the endocardium is obtained as two components: the left chambers and the right chambers. The shape of the extracted endocardium was verified by observing a voxel-expression image displayed with depth-coded shading. Volume change curves of the left and right chambers were calculated to show the feasibility of using the results for the measurement of cardiac function. (author)
Mirvahabi, S. S.; Abbaspour, R. A.
Navigation has become an essential component of human life and a necessary component in many fields. Because of the increasing size and complexity of buildings, a unified data model is needed for navigation analysis and the exchange of information. IndoorGML describes an appropriate data model and XML schema of indoor spatial information that focuses on modelling indoor spaces. Collecting spatial data through professional and commercial providers often requires high cost and time, which is the major reason that Volunteered Geographic Information (VGI) emerged. One of the most popular VGI projects is OpenStreetMap (OSM). In this paper, a new approach is proposed for the automatic generation of an IndoorGML core data file from an OSM data file. The output of this approach is the core data model file, which can be used alongside the navigation data model for navigation applications in indoor space.
Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren
Laser scanning is an effective way to acquire geometric data of cultural heritage objects with complex architecture. After generating the 3D model of the object, it is difficult to perform exact texture mapping for the real object. We therefore take efforts to create seamless texture maps for a virtual heritage model of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions that are kept as uniform as possible. After preprocessing, the images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.
Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;
Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated and is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to...
Ji, Wenhua; Xie, Hongkai; Zhou, Jie; Wang, Xiao; Ma, Xiuli; Huang, Luqi
Specific molecularly imprinted polymers for dencichine were developed for the first time in this study by bulk polymerization, using phenylpyruvic acid and dl-tyrosine as multi-templates. The photographs confirmed that molecularly imprinted polymers prepared using N,N'-methylene diacrylamide as cross-linker and glycol dimethyl ether as porogen displayed excellent hydrophilicity. Selectivity, the adsorption isotherm and the adsorption kinetics were investigated. The sample loading-washing-eluting solvent was optimized to evaluate the properties of the molecularly imprinted solid-phase extraction. Compared with LC/WCX-SPE, the water-compatible molecularly imprinted solid-phase extraction displayed better specific adsorption performance. Dencichine was extracted from Panax notoginseng with a purity of 98.5% and an average recovery of 85.6% (n=3). PMID:26680322
The preferential selectivity of Zr4+ over Hf4+ ions towards mixed alkyl organophosphorus extractants is predicted using molecular modelling and solvent extraction studies. Density functional theory successfully captures the higher complexation stability of mixed alkyl phosphine oxide (MAPO) over mixed alkyl substituted phosphine oxide (MSAPO) for both Zr4+ and Hf4+ ions, as observed in the solvent extraction experiments. Further, the extraction energy for the Zr4+ ion is higher than for the Hf4+ ion with MAPO over MSAPO. The calculated extraction energies follow the same order as the distribution constants determined by solvent extraction, which shows that MAPO is the better extractant in terms of higher distribution constant and selectivity over MSAPO. (author)
To assess the validity of automatic extraction of left ventricular inner contours based on contrast-enhanced ultrafast cine-MR images, phantom (n=15) and clinical (n=60) studies were performed. In the phantom study, left ventricular volumes obtained by the biplane modified Simpson's method based on automatic extraction of the left ventricular inner contour were significantly correlated with the phantom volumes (r=0.991). Contrast-enhanced breath-hold ultrafast cine MR imaging was shown to provide accurate cardiac images with a high success rate (89% in the horizontal long-axis section and 88% in the vertical long-axis section) in the clinical study. However, conventional extraction of the left ventricular inner contour depends on the operator's manual tracing, and the time required for data analysis is long. The automatic extraction time of the left ventricular inner contour was 4 seconds/frame, whereas conventional manual tracing took 60-90 seconds/frame. Comparison of left ventricular volumes showed a high correlation between contrast-enhanced ultrafast cine MR imaging (monoplane area-length and biplane modified Simpson's methods based on automatic extraction of the left ventricle) and digital subtraction left ventriculography (biplane area-length method). (author)
Yang, Ronggen; Zhang, Yue; Gong, Lejun
With the rapid increase of biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from the huge amount of biomedical literature has become a major challenge. GDRMS is developed as a tool that extracts disease-gene and gene-gene relationships from the biomedical literature using text mining technology. It is a rule-based system which also provides disease-centred network visualization, constructs a disease-gene database, and provides a gene engine for understanding the function of genes. The main focus of GDRMS is to provide a valuable opportunity for the research community to explore the relationship between diseases and genes in studying the etiology of disease.
Cardiac MR imaging is a non-invasive technique that allows the acquisition of a series of short-axis slices of the heart. These images encompass the entire left ventricle in the different phases of the cardiac cycle. The principal physiological parameters extracted from this series are the ejection fraction and the wall thickness. To this end, the determination of both the endocardial and the epicardial contour is required. Following the extraction of three parameters for each pixel, the fuzzy set of cardiac contour points is defined. The first parameter depends on the pixel grey-level value, the second on the presence of an edge and the third on the information retrieved from the previous slice. The calculation of the membership degree in the fuzzy set of cardiac contour points for each pixel yields a matrix of membership degrees. The cardiac contours are determined on this matrix with the aid of a dynamic programming technique, graph searching. (authors)
Guo, X.; Chen, Y.; Wang, C.; Cheng, M.; Wen, C.; Yu, J.
In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded target and the corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.
Del Rio Vera, J.; Coiras, E.; Groen, J.; Evans, B.
This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.
Fiszman, M.; Haug, P. J.; Frederick, P. R.
Free-text documents are the main type of data produced by a radiology department in a hospital information system. While this type of data is readily accessible for clinical data review, it cannot be accessed by other applications to perform medical decision support, quality assurance, and outcome studies. In an attempt to solve this problem, natural language processing systems have been developed and tested against chest x-ray reports to extract relevant clinical information and make it acc...
Lawrence C Lee
Full Text Available Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from the biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method differs from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.
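For context, the word-proximity baseline that the graph bigram metric is compared against can be sketched in a few lines: each candidate protein is scored by the distance of its nearest mention to the mutation token, and the closest protein wins. This toy version ignores the frequency information used by the graph bigram method and is purely illustrative; names and the example sentence are invented.

```python
import re

def associate_mutation(tokens, mutation_idx, protein_names):
    """Return the protein whose nearest mention is closest to the mutation token."""
    best, best_dist = None, float("inf")
    for name in protein_names:
        positions = [i for i, tok in enumerate(tokens) if tok.lower() == name.lower()]
        if not positions:
            continue
        dist = min(abs(i - mutation_idx) for i in positions)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

tokens = re.findall(r"\w+", "The D123A mutant of rhodopsin reduced binding in GRK1 assays")
print(associate_mutation(tokens, tokens.index("D123A"), ["rhodopsin", "GRK1"]))
```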
Current search engines use sentence extraction techniques to produce snippet result summaries, which users may find less than ideal for determining the relevance of pages. Unlike extracting, abstracting programs analyse the context of documents and rewrite them into informative summaries. Our project aims to produce abstracting summaries which are coherent and easy to read thereby lessening users’ time in judging the relevance of pages. However, automatic abstracting technique has its domain ...
Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon; Grudpan, Kate; Hansen, Elo Harald
Solid-phase extraction (SPE) is the most versatile sample-processing method for removal of interfering species and/or analyte enrichment. Although significant advances have been made over the past two decades in automating the entire analytical protocol involving SPE via flow-injection approaches...... overcoming the above shortcomings, so-called bead-injection (BI) analysis, based on automated renewal of the sorbent material per assay exploiting the various generations of flow-injection analysis. It addresses novel instrumental developments for implementing BI and a number of alternatives for online...
Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.
A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence interval of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional
Full Text Available Currently, many recycling activities adopt manual sorting for plastic recycling, relying on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into the respective containers. Manual sorting may not be a suitable option for recycling facilities with high throughput. It has also been noted that high turnover among sorting line workers has caused difficulties in achieving consistency in the plastic separation process. As a result, an intelligent system for automated sorting is greatly needed to replace the manual sorting system. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm used is discussed in detail, since it is the core component of the overall system and determines the success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results obtained showed an accuracy of more than 80%.
A mammogram is the standard modality used for breast cancer screening. Computer-aided detection (CAD) approaches are helpful for improving breast cancer detection rates when applied to mammograms. However, automated analysis of a mammogram often leads to inaccurate results in the presence of the pectoral muscle. Therefore, it is necessary to handle pectoral muscle segmentation separately before any further analysis of a mammogram. One difficulty to overcome when segmenting out the pectoral muscle is its strong overlap with dense glandular tissue, which hampers its extraction. This paper introduces an automated two-step approach for pectoral muscle extraction. The pectoral region is first estimated through segmentation by means of a modified Fuzzy C-Means clustering algorithm. After contour validation, the final boundary is delineated through iterative refinement of edge points using the average gradient. The proposed method is quite simple to implement and yields accurate results. It was tested on a set of images from the MIAS database and yielded results that were better than those of some state-of-the-art approaches. (paper)
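The clustering step can be illustrated with a small self-contained fuzzy C-means on pixel intensities; this sketch does not include the paper's modifications or the contour refinement stage, and all names and defaults are illustrative.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means on a 1-D set of pixel intensities."""
    x = np.asarray(values, dtype=float).ravel()[:, None]          # (N, 1) intensities
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                             # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]            # weighted cluster means
        d = np.abs(x - centres.T) + 1e-12                         # distances to centres
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        u_new /= u_new.sum(axis=1, keepdims=True)                 # standard FCM update
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centres.ravel(), u                                     # centres and memberships
```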
Flavio Vasconcelos da Silva
Full Text Available In this work, bromelain was recovered from ground pineapple stem and rind by means of precipitation with alcohol at low temperature. Bromelain is the name of a group of powerful protein-digesting, or proteolytic, enzymes that are particularly useful for reducing muscle and tissue inflammation and as a digestive aid. Temperature control is crucial to avoid irreversible protein denaturation and consequently to improve the quality of the enzyme recovered. The process was carried out alternately in two fed-batch pilot tanks: a glass tank and a stainless steel tank. Aliquots containing 100 mL of pineapple aqueous extract were fed into the tank. Inside the jacketed tank, the protein was exposed to unsteady operating conditions during the addition of the precipitating agent (ethanol 99.5%), because the dilution ratio of aqueous extract to ethanol and the heat transfer area changed. The coolant flow rate was manipulated through a variable-speed pump. Fine-tuned conventional and adaptive PID controllers were implemented on-line using a fieldbus digital control system. The processing performance efficiency was enhanced, and so was the quality (enzyme activity) of the product.
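A minimal discrete PID loop of the kind used to manipulate the coolant flow rate from the measured temperature is sketched below; the gains, limits and variable names are illustrative only and do not reflect the tuning or control system reported in the work.

```python
class PIDCooler:
    """Direct-acting PID: temperature above setpoint -> larger coolant flow command."""

    def __init__(self, kp, ki, kd, setpoint, dt, u_min=0.0, u_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, measured_temperature):
        error = measured_temperature - self.setpoint      # positive when too warm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(u, self.u_min), self.u_max)        # clamp pump speed command (%)

# Illustrative use: controller = PIDCooler(kp=8.0, ki=0.5, kd=1.0, setpoint=5.0, dt=1.0)
# coolant_flow = controller.update(measured_temperature)
```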
A fuel pellet extraction device for spent fuel rods is described. The device consists of a cutting device for the spent fuel rods and a decladding device for the fuel pellets. The cutting device cuts a spent fuel rod to an optimal size for fast decladding operation. To design the device, the fuel rod properties were investigated, including the dimensions and materials of fuel rod tubes and pellets. Also, various existing cutting methods were investigated. The design concepts accommodate remote operability for operation in the Hot-Cell (radioactive) area. Also, modularization of the device structure is considered for easy maintenance. The decladding device extracts the fuel pellets from the rod cuts. To design this device, existing methods were investigated, including chemical and mechanical decladding. From the viewpoint of fuel recovery and feasibility of implementation, it was concluded that the chemical decladding method is not appropriate due to the mass production of radioactive liquid wastes, in spite of its high fuel recovery. Hence, in this paper, the mechanical decladding method is adopted and the device is designed to be applicable to various lengths of rod cuts. As with the cutting device, the concepts of remote operability and maintainability are considered. Both devices were fabricated and their performance was investigated through a series of experiments. From the experimental results, the optimal operational conditions of the devices were established.
Full Text Available It is very common for a customer to read reviews about a product before making a final decision to buy it. Customers are always eager to get the best and most objective information about the product they wish to purchase, and reviews are the major source of this information. Although reviews are easily accessible from the web, since most of them carry ambiguous opinions and differing structures, it is often very difficult for a customer to filter the information he actually needs. This paper suggests a framework which provides a single user-interface solution to this problem based on sentiment analysis of reviews. First, it extracts all the reviews from different websites with varying structures and gathers information about the relevant aspects of the product. Next, it performs sentiment analysis around those aspects and gives them sentiment scores. Finally, it ranks all extracted aspects and clusters them into positive and negative classes. The final output is a graphical visualization of all positive and negative aspects, which provides the customer with easy, comparable, and visual information about the important aspects of the product. The experimental results on five different products with 5000 reviews show 78% accuracy. Moreover, the paper also explains the effect of negation, valence shifters, and diminishers with a sentiment lexicon on sentiment analysis, and concludes that they are all independent of the case problem and have no effect on the accuracy of sentiment analysis.
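The aspect scoring step can be illustrated with a toy lexicon-based scorer in which nearby opinion words contribute to an aspect's score and a preceding negator flips the sign; the lexicon, window size and function names below are illustrative stubs, not the resources used in the paper.

```python
# Tiny illustrative lexicons; a real system would use a full sentiment lexicon.
LEXICON = {"good": 1, "great": 2, "poor": -1, "terrible": -2, "expensive": -1}
NEGATORS = {"not", "no", "never"}

def score_aspect(tokens, aspect, window=4):
    """Sum the (negation-adjusted) sentiment of opinion words near each aspect mention."""
    score = 0
    for i, tok in enumerate(tokens):
        if tok != aspect:
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            s = LEXICON.get(tokens[j], 0)
            if s and NEGATORS & set(tokens[max(0, j - 2):j]):   # negator just before the word
                s = -s
            score += s
    return score

tokens = "the battery life is not good but the screen is great".split()
print(score_aspect(tokens, "battery"), score_aspect(tokens, "screen"))
```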
Vauchel, Peggy; Arhaliass, Abdellah; Legrand, Jack; Kaas, Raymond; Baron, Regis
Alginates are natural polysaccharides that are extracted from brown seaweeds and widely used for their rheological properties. The central step in the extraction protocol used in the alginate industry is the alkaline extraction, which requires several hours. In this study, a significant decrease in alginate dynamic viscosity was observed after 2 h of alkaline treatment. Intrinsic viscosity and average molecular weight of alginates from alkaline extractions 1-4 h in duration were determined, i...
Han, Di; Fang, Yangfu; Du, Deyang; Huang, Gaoshan; Qiu, Teng; Mei, Yongfeng
We design and fabricate a simple self-powered system to collect analyte molecules in fluids for surface-enhanced Raman scattering (SERS) detection. The system is based on catalytic Au/SiO/Ti/Ag-layered microengines by employing rolled-up nanotechnology. Pronounced SERS signals are observed on microengines with more carrier molecules compared with the same structure without automatic motions. Electronic supplementary information (ESI) available: Experimental procedures, characterization, SERS enhancement factor calculation and videos. See DOI: 10.1039/c6nr00117c
de Lorenzo Victor
Full Text Available Background: For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. Also, the characterization must include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results: EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also, a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion: EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
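For comparison, if paired bilateral landmarks are already available, a mirror plane can be estimated directly without ICP; the sketch below (not the authors' ICP variant or its closed-form reflection fit) takes the plane normal as the mean direction between left-right pairs and anchors the plane at the mean of the pair midpoints. Function and variable names are illustrative.

```python
import numpy as np

def mirror_plane_from_pairs(left, right):
    """left, right: (N, 3) arrays of corresponding bilateral landmarks."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    diffs = left - right
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)   # unit left-to-right directions
    normal = diffs.mean(axis=0)
    normal /= np.linalg.norm(normal)                        # averaged plane normal
    point = 0.5 * (left + right).mean(axis=0)               # mean of the pair midpoints
    return normal, point                                    # plane: dot(normal, x - point) = 0
```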
Contrast-enhanced breath-hold ultrafast cine MR imaging was shown to provide accurate cardiac images with a high success rate (89% in the horizontal long-axis view and 88% in the vertical long-axis view). However, the data analysis method still depends on the operator's manual tracing of left ventricular (LV) contours, which cannot exclude subjectivity, so problems remain both in the operator's workload and in the reproducibility of the data analysis results. We propose an automatic extraction method for LV contours on cine MR images which needs only 3 manually input points at the 1st cardiac frame and requires no manual operation for the other frames. The automatic LV edge extraction time was 4 seconds/frame with this method, whereas conventional manual tracing took 60-90 seconds/frame. Comparison of LV volumes showed a high correlation (r=0.953 for EDVI, r=0.962 for ESVI) between manual and automatic tracing of LV contours on the horizontal long-axis view. We have developed an automatic extraction method for LV contours on long-axis views in contrast-enhanced ultrafast cine MR images. This is an accurate, highly reproducible method for evaluating LV volumetry and volume curves. (author)
Meyer, Heiko; Garofalakis, Anikitos; Zacharakis, Giannis; Psycharakis, Stylianos; Mamalaki, Clio; Kioussis, Dimitris; Economou, Eleftherios N.; Ntziachristos, Vasilis; Ripoll, Jorge
During the past decade, optical imaging combined with tomographic approaches has proved its potential in offering quantitative three-dimensional spatial maps of chromophore or fluorophore concentration in vivo. Due to its direct application in biology and biomedicine, diffuse optical tomography (DOT) and its fluorescence counterpart, fluorescence molecular tomography (FMT), have benefited from an increase in devoted research and new experimental and theoretical developments, giving rise to a new imaging modality. The most recent advances in FMT and DOT are based on the capability of collecting large data sets by using CCDs as detectors, and on the ability to include multiple projections through recently developed noncontact approaches. For these to be implemented, we have developed an imaging setup that enables three-dimensional imaging of arbitrary shapes in fluorescence or absorption mode that is appropriate for small animal imaging. This is achieved by implementing a noncontact approach both for sources and detectors and coregistering surface geometry measurements using the same CCD camera. A thresholded shadowgrammetry approach is applied to the geometry measurements to retrieve the surface mesh. We present the evaluation of the system and method in recovering three-dimensional surfaces from phantom data and live mice. The approach is used to map the measured in vivo fluorescence data onto the tissue surface by making use of the free-space propagation equations, as well as to reconstruct fluorescence concentrations inside highly scattering tissuelike phantom samples. Finally, the potential use of this setup for in vivo small animal imaging and its impact on biomedical research is discussed.
Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. The study produced two major findings. First, short-duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not require the solvers' attention to embody the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, when these short-duration visual cues were administered repeatedly over multiple training problems, participants became more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes
Full Text Available This paper employed clinical Polysomnographic (PSG) data, mainly including all-night Electroencephalogram (EEG), Electrooculogram (EOG) and Electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of the EEG, EOG and EMG in the time and frequency domains to construct feature vectors according to the existing literature as well as clinical experience. By adopting sleep sample self-learning, the linear combination weights and parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were comparable, which well reflects the sleep structure. The staging algorithm proposed in this paper is transparent and worth further investigation.
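One typical family of such features, relative EEG band powers per epoch computed from the Welch spectrum, is sketched below; the band limits follow common conventions, and the sampling rate, channel selection and classifier are not part of the sketch, so the names are illustrative rather than those used in the paper.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs):
    """epoch: 1-D EEG samples for one 30 s epoch; returns relative power per band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), int(4 * fs)))
    in_range = (freqs >= 0.5) & (freqs <= 30)
    total = np.trapz(psd[in_range], freqs[in_range])       # total power in 0.5-30 Hz
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = float(np.trapz(psd[mask], freqs[mask]) / total)
    return feats
```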
Full Text Available The exploration of social conversations for addressing patients' needs is an important analytical task in which many scholarly publications are contributing to fill the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes the pharmaceutical industry can leverage in order to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study is based on the work by Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations in health-related sites. The mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating the category labels and determining an optimal number of categories is presented, which extends Scott and Jeffrey's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.
He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia
Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, so that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced because it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by maximization of the fault feature ratio, a new quantitative measure of periodic fault signatures derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. Numerical simulations verify that ESW can match the oscillatory behavior of signals without artificially specified parameters. The proposed method is applied to two engineering cases, with signals collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings than the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
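The envelope-demodulation part of such an analysis can be sketched with standard tools. The snippet below computes a Hilbert envelope spectrum of a synthetic bearing-like signal and a simple fault-feature ratio, defined here as an assumption (the paper's exact definition is not reproduced) as the envelope-spectrum energy near the first few fault harmonics divided by the total envelope-spectrum energy; the TQWT basis optimization itself is not shown.

```python
# Sketch of Hilbert envelope demodulation and a simplified fault-feature ratio.
# Sampling rate, fault frequency and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import hilbert

FS = 12_000            # sampling rate (assumed)
F_FAULT = 105.0        # bearing fault characteristic frequency (assumed)

t = np.arange(0, 1.0, 1 / FS)
carrier = np.sin(2 * np.pi * 3000 * t)
impacts = (np.sin(2 * np.pi * F_FAULT * t) > 0.99).astype(float)  # crude periodic impulses
signal = impacts * carrier + 0.5 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(signal))
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / FS)

def feature_ratio(spec, freqs, f0, n_harm=3, tol=2.0):
    """Energy near the first n_harm fault harmonics over total envelope-spectrum energy."""
    idx = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harm + 1):
        idx |= np.abs(freqs - k * f0) < tol
    return spec[idx].sum() / spec.sum()

print("fault feature ratio:", feature_ratio(spec, freqs, F_FAULT))
```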
This thesis presents broadly the applications of molecularly imprinted polymers in sensors and solid-phase extraction. Sensors for creatine and creatinine have been reported using a novel method of rational design of molecularly imprinted polymers (MIPs), and solid-phase extraction of aflatoxin-B1 has also been described in the thesis. A method for the selective detection of creatine and creatinine is reported in this thesis, which is based on the reaction between polymeri...
Xie, Xiaoyu; Wei, Fen; Chen, Liang; Wang, Sicen
In this study, highly selective core-shell molecularly imprinted polymers on the surface of magnetic nanoparticles were prepared using protocatechuic acid as the template molecule. The resulting magnetic molecularly imprinted polymers were characterized by transmission electron microscopy, Fourier transform infrared spectroscopy, X-ray diffraction, and vibrating sample magnetometry. The binding performances of the prepared materials were evaluated by static and selective adsorption. The binding isotherms were obtained for protocatechuic acid and fitted by the Langmuir isotherm model and Freundlich isotherm model. Furthermore, the resulting materials were used as the solid-phase extraction materials coupled to high-performance liquid chromatography for the selective extraction and detection of protocatechuic acid from the extracts of Homalomena occulta and Cynomorium songaricum with the recoveries in the range 86.3-102.2%. PMID:25641806
Carboxylic acids are cation-exchange extractants which extract metal ions from weakly acidic solutions by an ion-exchange mechanism. They are present as dimers (H2A2) in non-polar organic diluents. High-molecular-weight carboxylic acids such as versatic 10 acid and naphthenic acid are used for the separation of high-purity yttrium from the heavy fraction of the rare earths. The extraction behavior of rare earths with different types of carboxylic acids is also reported. A literature survey revealed that data on the extraction of uranium from aqueous solutions with carboxylic acids are scanty. An attempt has been made in the present work to examine the extraction behavior of U(VI) with three different high-molecular-weight carboxylic acids, namely cekanoic acid, neoheptanoic acid and versatic 10 acid, dissolved in xylene. Extraction of the metal ions depends strongly on the pH of the solution.
Bykov, A. D.; Pshenichnikov, A. M.; Sinitsa, L. N.; Shcherbakov, A. P.
An expert system has been developed for the initial analysis of a recorded spectrum, namely, for line search and the determination of line positions and intensities. The expert system is based on pattern recognition algorithms. Object recognition learning allows the system to achieve the needed flexibility and automatically detect groups of overlapping lines whose profiles should be fit together. Gauss, Lorentz, and Voigt profiles are used as model profiles to which spectral lines are fit. The expert system was applied to processing of the Fourier transform spectrum of the D2O molecule in the region 3200-4200 cm-1; it detected 4670 lines in the spectrum, which consisted of 439,000 data points. No experimentally observed line exceeding the noise level was missed.
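Line fitting with a Voigt model profile, the last step the abstract mentions, can be illustrated as follows for a single synthetic line; the wavenumber grid, line parameters, and noise level are invented for the example and are not taken from the D2O spectrum.

```python
# Minimal sketch of fitting one spectral line with a Voigt profile.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt_line(nu, center, area, sigma, gamma):
    """Voigt line with integrated intensity `area`, centred at `center` (cm^-1)."""
    return area * voigt_profile(nu - center, sigma, gamma)

nu = np.linspace(3499.0, 3501.0, 400)              # wavenumber grid, cm^-1 (invented)
true = dict(center=3500.0, area=1.0, sigma=0.02, gamma=0.03)
rng = np.random.default_rng(1)
y = voigt_line(nu, **true) + 0.2 * rng.standard_normal(nu.size)

p0 = [nu[np.argmax(y)], y.max() * 0.1, 0.01, 0.01]   # crude initial guesses
popt, _ = curve_fit(voigt_line, nu, y, p0=p0)
print("fitted position, intensity, sigma, gamma:", popt)
```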
Zhao, Song-Feng; Huang, Fang; Wang, Guo-Li; Zhou, Xiao-Xin
We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so-called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using density functional theory. We examine the alignment-dependent tunneling ionization probabilities from the MO-ADK model for several molecules by comparing with molecular strong-field approximation (MO-SFA) calculations. We show that the molecular Perelomov-Popov-Terent'ev (MO-PPT) model successfully gives the laser-wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules whose valence orbitals have antibonding character (i.e., Cl2, Ne2) show strong ionization suppression compared with their closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institutions of Higher Learning of Gansu Province
Full Text Available The purpose of Biomedical Natural Language Processing (BioNLP) is to capture biomedical phenomena from textual data by extracting relevant entities, information and relations between biomedical entities (e.g., proteins and genes). In general, most published papers have extracted only binary relations. Recently, the focus has shifted towards extracting more complex relations in the form of bio-molecular events that may include several entities or other relations. In this paper we propose an approach that enables trigger extraction for relatively complex bio-molecular events. We approach this problem as detection of bio-molecular event triggers using the well-known Conditional Random Field (CRF) algorithm. We run our experiments on the development set, obtaining overall average recall, precision and F-measure values of 64.27504%, 69.97559% and 67.00429%, respectively, for event detection.
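A toy version of trigger detection as sequence labelling with a CRF might look like the sketch below, using the third-party sklearn-crfsuite package (assumed available); the sentences, labels, and token features are deliberately minimal compared with a real BioNLP system and are not the authors' feature set.

```python
# Toy sketch: event-trigger detection as BIO sequence labelling with a CRF.
import sklearn_crfsuite

def token_features(sent, i):
    """Very small feature set per token; a real system would use far richer features."""
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "suffix3": w[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

sents = [["MEK", "phosphorylates", "ERK", "in", "vitro"],
         ["p53", "expression", "was", "measured"]]
labels = [["O", "B-Phosphorylation", "O", "O", "O"],
          ["O", "B-Gene_expression", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```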
Full Text Available A specific DNA extraction method for sea anemones is described in which extraction of total DNA from eight species of sea anemones and one species of corallimorpharian was achieved by modifying standard extraction protocols. DNA extraction from sea anemone tissue is made more difficult both by the tissue consistency and by the presence of symbiotic zooxanthellae. The technique described here is an efficient way to avoid problems of DNA contamination and to obtain large amounts of purified, intact DNA which can be used in different kinds of molecular analyses.
Syed, Deeba N.; Chamcheu, Jean-Christopher; Adhami, Vaqar M.; Mukhtar, Hasan
There is increased appreciation by the scientific community that dietary phytochemicals can be potential weapons in the fight against cancer. Emerging data has provided new insights into the molecular and cellular framework needed to establish novel mechanism-based strategies for cancer prevention by selective bioactive food components. The unique chemical composition of the pomegranate fruit, rich in antioxidant tannins and flavonoids has drawn the attention of many investigators. Polyphenol...
Peng, Wanxi; Lin, Zhi; Wang, Lansheng; Chang, Junbo; Gu, Fangliang; Zhu, Xiangwei
Illicium verum, whose extractives can activate the acquired immune response, is an expensive medicinal plant. However, the rich extractives in I. verum biomass have been largely wasted because of inefficient extraction and separation processes. In order to further utilize this biomedical resource for the acquired immune response, four extractives were obtained by SJYB extraction, and the immunologically relevant molecules in the SJYB extractives were then identified and analyzed by GC-MS. The results showed that the first-stage extractives contained 108 components including anethole (40.27%), 4-methoxy-benzaldehyde (4.25%), etc.; the second-stage extractives had 5 components including anethole (84.82%), 2-hydroxy-2-(4-methoxy-phenyl)-n-methyl-acetamide (7.11%), etc.; the third-stage extractives contained one component, namely anethole (100%); and the fourth-stage extractives contained 5 components including cyclohexyl-benzene (64.64%), 1-(1-methylethenyl)-3-(1-methylethyl)-benzene (17.17%), etc. The SJYB extractives of I. verum biomass had a main retention time between 10 and 20 min. Moreover, the SJYB extractives contained many biomedically relevant molecules, such as anethole, eucalyptol, [1S-(1α,4aα,10aβ)]-1,2,3,4,4a,9,10,10a-octahydro-1,4a-dimethyl-7-(1-methylethyl)-1-phenanthrenecarboxylic acid, stigmast-4-en-3-one, γ-sitosterol, and so on. The functional analysis therefore suggests that the SJYB extractives of I. verum can activate the acquired immune response and have considerable potential in biomedicine. PMID:27081359
Jailson F. B. Querido
Full Text Available Dicistroviridae is a new family of small, nonenveloped, +ssRNA viruses pathogenic to both beneficial arthropods and insect pests. Triatoma virus (TrV), a dicistrovirus, is a pathogen of Triatoma infestans (Hemiptera: Reduviidae), one of the main vectors of Chagas disease. In this work, we report a single-step method to identify TrV isolated from fecal samples of triatomines. The identification method proved to be quite sensitive, even without extraction and purification of the viral RNA.
Marin Rodenas, Alfonso
Nowadays email is commonly used by citizens to establish communication with their government. Many of the received emails concern common queries and subjects that handling officers have to answer manually. Automatic classification of the incoming emails can increase communication efficiency by decreasing the delay between a query and its response. This thesis is part of the IMAIL project, which aims to provide an automatic answering solution to th...
Objective: To establish a method that can automatically extract functional areas of the brain basal ganglia. Methods: 18F-fluorodeoxyglucose (FDG) PET images were spatially normalized to the Talairach atlas space in two steps, image registration and image deformation. The functional areas were extracted from the three-dimensional PET images based on coordinates obtained from the atlas; the caudate and putamen were extracted and rendered, and the grey value of each area was normalized to the whole brain. Results: The normalized ratios of the left caudate head, body and tail were 1.02 ± 0.04, 0.92 ± 0.07 and 0.71 ± 0.03, and those of the right were 0.98 ± 0.03, 0.87 ± 0.04 and 0.71 ± 0.01, respectively. The normalized ratios of the left and right putamen were 1.20 ± 0.06 and 1.20 ± 0.04. The mean grey values of the left and right basal ganglia did not differ significantly (P>0.05). Conclusion: The automatic functional-area extraction method based on the Talairach atlas is feasible. (authors)
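The ratio computation described in the Methods can be sketched schematically: after spatial normalization, an atlas label volume acts as a mask, and each structure's mean grey value is divided by the whole-brain mean. The arrays below are synthetic stand-ins for the normalized PET volume and the Talairach label volume.

```python
# Schematic sketch of atlas-masked uptake ratios; volumes are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
pet = rng.random((64, 64, 64))               # spatially normalized PET volume (stand-in)
atlas = np.zeros_like(pet, dtype=int)        # 0=background, 1=brain, 2=caudate, 3=putamen
atlas[10:54, 10:54, 10:54] = 1
atlas[20:26, 30:36, 30:36] = 2
atlas[30:36, 30:36, 30:36] = 3

brain_mean = pet[atlas > 0].mean()
for label, name in [(2, "caudate"), (3, "putamen")]:
    ratio = pet[atlas == label].mean() / brain_mean
    print(f"{name}: normalized uptake ratio = {ratio:.2f}")
```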
Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra;
The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have the..., organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual... and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/
This thesis focuses on the use of molecularly imprinted polymers as selective sorbents for solid-phase extraction (MISPE). The MISPE methods developed were mainly intended for use with biological samples, such as human urine and blood plasma. These body fluids are complex samples, which often need an effective clean-up step before analysis to reduce the levels of possible interfering substances from the matrix, especially if the analytes are present in trace amounts. Solid-phase extraction (S...
Shah Parantu K; Wattarujeekrit Tuangthong; Collier Nigel
Abstract Background The exploitation of information extraction (IE), a technology aiming to provide instances of structured representations from free-form text, has been rapidly growing within the molecular biology (MB) research community to keep track of the latest results reported in literature. IE systems have traditionally used shallow syntactic patterns for matching facts in sentences but such approaches appear inadequate to achieve high accuracy in MB event extraction due to complex sen...
A liquid-liquid extraction step has been incorporated into an automatic method for determination of europium in the presence of other lanthanides, yttrium and scandium. Europium(III) is selectively reduced on a Jones reductor and the europium(II) reacted with molybdophosphoric acid to produce a molybdenum blue which is extracted into isoamyl alcohol for spectrophotometric determination. Incorporation of the extraction step increases the sensitivity of the method by a factor of 5 enabling from 2 to 50 μg of europium per ml of aqueous sample solution to be determined but reduces the sampling rate from 20 to 10 samples per hour. The method has been applied to the determination of europium in lanthanide oxides and in the minerals bastnasite and monazite following a lanthanide group separation. (orig.)
We propose a new application of information technology to recognize and extract expressions of atomic and molecular states from electronic forms of scientific abstracts. The present results will help scientists understand the atomic states as well as the physics discussed in the articles. Combined with internet search engines, it will make it possible to collect not only atomic and molecular data but also broader scientific information over a wide range of research fields. (author)
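A purely illustrative (and speculative) way to recognize atomic term-symbol expressions in abstract text is a regular-expression pass like the one below; the actual system's recognition grammar is not described in the abstract, so the pattern is an assumption for demonstration only.

```python
# Speculative sketch: find term-symbol-like expressions such as "2P3/2" in text.
import re

TERM_SYMBOL = re.compile(r"\b\d[SPDFGH](?:\d+/\d+)?\b")  # toy pattern, not the authors' grammar

text = "Transitions from the 2P3/2 level to the 2S1/2 ground state were observed."
print(TERM_SYMBOL.findall(text))
```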
Becker, Holger; Schattschneider, Sebastian; Klemm, Richard; Hlawatsch, Nadine; Gärtner, Claudia
The continuous monitoring of the environment for lethal pathogens is a central task in the field of biothreat detection. Typical scenarios involve air sampling in locations such as public transport systems or large public events and subsequent analysis of the samples by a portable instrument. Lab-on-a-chip technologies are one of the promising technological candidates for such a system. We have developed an integrated microfluidic system with automatic sampling for the detection of CBRNE-related pathogens. The chip implements a two-pronged analysis strategy: on the one hand an immunological track using antibodies immobilized on a frit with subsequent photometric detection, and on the other hand a molecular biology approach using continuous-flow PCR with fluorescence end-point detection. The cartridge contains a two-component molded rotary valve to allow active fluid control and switching between channels. The accompanying instrument contains all elements for fluidic and valve actuation and thermal control, as well as the two detection modalities. Reagents are stored in dedicated reagent packs which are connected directly to the cartridge. With this system, we have been able to demonstrate the detection of a variety of pathogen species.
Chen, Fang-Fang; Wang, Guo-Ying; Shi, Yan-Ping
Molecularly imprinted polymers (MIPs) were prepared by a precipitation polymerization method using acrylamide as the functional monomer, ethylene glycol dimethacrylate as the cross-linker, acetonitrile as the porogen solvent and protocatechuic acid (PA), one of the phenolic acids, as the template molecule. The MIPs were characterized by scanning electron microscopy and Fourier transform infrared spectroscopy, and their performance relative to non-imprinted polymers was assessed by equilibrium binding experiments. Six structurally similar phenolic acids, including p-hydroxybenzoic acid, gallic acid, salicylic acid, syringic acid, vanillic acid and ferulic acid, were selected to assess the selectivity and recognition capability of the MIPs. The MIPs were applied as a solid-phase extraction sorbent to extract PA from traditional Chinese medicines. The resultant cartridge showed that the MIPs have good extraction performance and were able to selectively extract almost 82% of the PA from the extract of Rhizoma homalomenae. Thus, the proposed molecularly imprinted solid-phase extraction-high performance liquid chromatography method can be successfully used to extract and analyse PA in traditional Chinese medicines. PMID:21809445
He, Wei; Chen, Meilian; Park, Jae-Eun; Hur, Jin
Few studies have examined the spatial heterogeneity of riverine sediment organic matter (SOM) at the molecular level. The present study explored the chemical and molecular heterogeneity of alkaline-extractable SOM from riverine sediments via multiple analytical tools including molecular composition, absorption and fluorescence spectra, and molecular size distributions. The riverine SOM revealed complex and diverse characteristics, exhibiting a great number of non-redundant formulas and high spatial variation. The molecular diversity was more pronounced for sediments affected by a higher degree of anthropogenic activity. Unlike the case of aquatic dissolved organic matter, highly unsaturated structures with oxygen (HUSO) of SOM were more associated with the spectral and size features of humic-like (or terrestrial) substances than aromatic molecules were, which calls for caution in interpreting which SOM molecules are responsible for apparent indicators. Noting that a higher detection rate (DR) produces fewer common molecules, the common molecules of 23 different SOMs were determined at a reasonable DR value of 0.35, which accounted for a small portion (5.8%) of all detected molecules. They were mainly CHO compounds (>98%), which correlated positively with spectral indicators of biological production. Despite their low abundance, however, the ratios of aromatic to aliphatic substances could be used to classify the common molecules into several geochemical molecular groups with different degrees of association with the apparent spectral and size indicators. PMID:27192357
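The "common molecule" selection can be sketched as a simple detection-rate filter: the DR of each molecular formula is the fraction of samples in which it is detected, and formulas at or above the chosen threshold (0.35 in the study) are retained. The presence/absence table below is a random placeholder, not the study's data.

```python
# Sketch of detection-rate filtering of molecular formulas across samples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
formulas = [f"C{c}H{h}O{o}" for c, h, o in rng.integers(5, 30, size=(200, 3))]
presence = pd.DataFrame(rng.random((200, 23)) < 0.3,   # 200 formulas x 23 sediments (placeholder)
                        index=formulas)

detection_rate = presence.mean(axis=1)                  # fraction of samples containing each formula
common = detection_rate[detection_rate >= 0.35]
print(f"{len(common)} of {len(presence)} formulas are 'common' at DR >= 0.35")
```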
We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from the midsagittal head MR images. Next, Alzheimer-type dementia patients were compared with the healthy elderly individuals using shape-factor features and six co-occurrence matrix features of the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of the 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using the 7 features demonstrated a sensitivity of 82.4%, specificity of 89.3%, and overall accuracy of 85.5%. These results indicate the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on changes in the corpus callosum. (author)
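The texture-feature step of such an approach can be approximated with grey-level co-occurrence matrix (GLCM) features followed by linear discriminant analysis; the image patches and group labels below are synthetic stand-ins, and the feature list is generic rather than the authors' exact nine features.

```python
# Sketch: GLCM texture features from segmented regions + linear discriminant analysis.
# Assumes scikit-image >= 0.19 (graycomatrix/graycoprops names).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity", "ASM"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # stand-ins for extracted regions
labels = rng.integers(0, 2, size=60)                               # 0 = control, 1 = dementia (stand-in)

X = np.array([glcm_features(p) for p in patches])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```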
Kopec, Wojciech; Telenius, Jelena; Khandelia, Himanshu
Several small drugs and medicinal plant extracts, such as the Indian spice extract curcumin, have a wide range of useful pharmacological properties that cannot be ascribed to binding to a single protein target alone. The lipid bilayer membrane is thought to mediate the effects of many such molecules directly via perturbation of the plasma membrane structure and dynamics, or indirectly by modulating transmembrane protein conformational equilibria. Furthermore, for bioavailability, drugs must interact with and eventually permeate the lipid bilayer barrier on the surface of cells. Biophysical studies of the interactions of drugs and plant extracts are therefore of interest. Molecular dynamics simulations, which can access time and length scales that are not simultaneously accessible by other experimental methods, are often used to obtain quantitative molecular and thermodynamic descriptions of...
Hale, Scott A
At least two software packages---DARWIN, Eckerd College, and FinScan, Texas A&M---exist to facilitate the identification of cetaceans---whales, dolphins, porpoises---based upon the naturally occurring features along the edges of their dorsal fins. Such identification is useful for biological studies of population, social interaction, migration, etc. The process whereby fin outlines are extracted in current fin-recognition software packages is manually intensive and represents a major user input bottleneck: it is both time consuming and visually fatiguing. This research aims to develop automated methods (employing unsupervised thresholding and morphological processing techniques) to extract cetacean dorsal fin outlines from digital photographs thereby reducing manual user input. Ideally, automatic outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans. Various transformations from color to gray space were examined to determine whi...
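A rough sketch of the automated outline idea, unsupervised (Otsu) thresholding, morphological clean-up, and contour extraction, is shown below; the synthetic bright blob stands in for a photograph of a dorsal fin, and the specific operations are illustrative rather than the package's actual pipeline.

```python
# Sketch: Otsu thresholding + morphology + contour extraction of the largest object.
# Assumes scikit-image >= 0.19 (footprint keyword).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, remove_small_objects, disk
from skimage.measure import find_contours

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (200, 200))
rr, cc = np.ogrid[:200, :200]
img[((rr - 100) ** 2 + (cc - 100) ** 2) < 40 ** 2] += 0.6   # bright "fin" region (stand-in)

mask = img > threshold_otsu(img)                 # unsupervised threshold
mask = binary_closing(mask, footprint=disk(3))   # close small gaps
mask = remove_small_objects(mask, min_size=100)  # drop speckle

contours = find_contours(mask.astype(float), level=0.5)
outline = max(contours, key=len)                 # longest contour = candidate outline
print("outline points:", outline.shape)
```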
Zarejousheghani, Mashaalah; Schrader, Steffi; Möder, Monika; Lorenz, Pierre; Borsdorf, Helko
Acesulfame is a known indicator used to identify the introduction of domestic wastewater into water systems. It is negatively charged and highly water-soluble at environmental pH values. In this study, a molecularly imprinted polymer (MIP) was synthesized for negatively charged acesulfame and successfully applied to the selective solid-phase extraction (SPE) of acesulfame from influent and effluent wastewater samples. (Vinylbenzyl)trimethylammonium chloride (VBTA) was used as a novel phase-transfer reagent, which enhanced the solubility of negatively charged acesulfame in the organic solvent (porogen) and served as a functional monomer in MIP synthesis. Different molecularly imprinted polymers were synthesized to optimize the extraction capability for acesulfame. The different materials were evaluated using equilibrium rebinding experiments, selectivity experiments and scanning electron microscopy (SEM). The most efficient MIP was used in a molecularly imprinted solid-phase extraction (MISPE) protocol to extract acesulfame from wastewater samples. Using high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS-MS) analysis, detection and quantification limits of 0.12 μg L(-1) and 0.35 μg L(-1), respectively, were achieved. A certain cross-selectivity for compounds containing a negatively charged sulfonamide functional group was observed during the selectivity experiments. PMID:26256920
We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia patients were compared with the healthy elderly individuals using 5 features of the straight-line method, 5 features of the run-length matrix, and 6 features of the co-occurrence matrix from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)
Sequential injection microcolumn extraction (SI-MCE) based on the implementation of a soil-containing microcartridge as external reactor in a sequential injection network is, for the first time, proposed for dynamic fractionation of macronutrients in environmental solids, as exemplified by the partitioning of inorganic phosphorus in agricultural soils. The on-line fractionation method capitalises on the accurate metering and sequential exposure of the various extractants to the solid sample by application of programmable flow as precisely coordinated by a syringe pump. Three different soil phase associations for phosphorus, that is, exchangeable, Al- and Fe-bound, and Ca-bound fractions, were elucidated by accommodation in the flow manifold of the three steps of the Hieltjes-Lijklema (HL) scheme involving the use of 1.0 M NH4Cl, 0.1 M NaOH and 0.5 M HCl, respectively, as sequential leaching reagents. The precise timing and versatility of SI for tailoring various operational extraction modes were utilized for investigating the extractability and the extent of phosphorus re-distribution for variable partitioning times. Automatic spectrophotometric determination of soluble reactive phosphorus in soil extracts was performed by a flow injection (FI) analyser based on the Molybdenum Blue (MB) chemistry. The 3σ detection limit was 0.02 mg P L-1 while the linear dynamic range extended up to 20 mg P L-1 regardless of the extracting media. Despite the variable chemical composition of the HL extracts, a single FI set-up was assembled with no need for either manifold re-configuration or modification of chemical composition of reagents. The mobilization of trace elements, such as Cd, often present in grazed pastures as a result of the application of phosphate fertilizers, was also explored in the HL fractions by electrothermal atomic absorption spectrometry
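The reported 3σ detection limit follows from the calibration in the usual way, LOD = 3·s_blank / slope; the sketch below shows the arithmetic with invented absorbance values, not the paper's calibration data.

```python
# Sketch of a linear calibration and 3-sigma detection limit; all values invented.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 5.0, 10.0, 20.0])                 # mg P L^-1 standards (invented)
absorbance = np.array([0.002, 0.031, 0.060, 0.298, 0.601, 1.195])  # invented readings
blank_replicates = np.array([0.0020, 0.0015, 0.0024, 0.0018, 0.0021, 0.0017])

slope, intercept = np.polyfit(conc, absorbance, 1)
lod = 3 * blank_replicates.std(ddof=1) / slope
print(f"slope = {slope:.4f} AU per mg P L^-1, 3-sigma LOD = {lod:.3f} mg P L^-1")
```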
Tang, Fu-Ching; Wu, Fu-Chiao; Yen, Chia-Te; Chang, Jay; Chou, Wei-Yang; Gilbert Chang, Shih-Hui; Cheng, Horng-Long
In the optimization of organic solar cells (OSCs), a key problem lies in maximizing the transfer of charge carriers from the active layer to the electrodes. Hence, this study focused on the interfacial molecular configurations involved in efficient OSC charge extraction, through theoretical investigations and experiments on small-molecule-based bilayer-heterojunction (sm-BLHJ) and polymer-based bulk-heterojunction (p-BHJ) OSCs. We first examined a well-defined sm-BLHJ model OSC composed of p-type pentacene, an n-type perylene derivative, and a nanogroove-structured poly(3,4-ethylenedioxythiophene) (NS-PEDOT) hole extraction layer. The OSC with NS-PEDOT shows a 230% increase in short-circuit current density compared with that of the conventional planar PEDOT layer. Our theoretical calculations indicated that small variations in the microscopic intermolecular interaction among these interfacial configurations could induce significant differences in charge extraction efficiency. Experimentally, different interfacial configurations were generated between the photo-active layer and the nanostructured charge extraction layer with periodic nanogroove structures. In addition to pentacene, poly(3-hexylthiophene), the most commonly used electron-donor material in p-BHJ OSCs, was also explored in terms of its possible use as a photo-active layer. Local conductive atomic force microscopy was used to measure the nanoscale charge extraction efficiency at different locations within the nanogroove, thus highlighting the importance of interfacial molecular configurations in efficient charge extraction. This study enriches understanding of how to optimize the photovoltaic properties of several types of OSCs by appropriate interfacial engineering based on organic/polymer molecular orientations. A power conversion efficiency beyond 15% can be expected when the best state-of-the-art p-BHJ OSCs are combined with the present findings.
Objective To study the molecular mechanisms of Curcuma Wenyujin extract-mediated inhibitory effects on human esophageal carcinoma cells. Methods The Curcuma Wenyujin extract was obtained by supercritical carbon dioxide extraction. TE-1 cells were divided into 4 groups after adherence.
Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald;
Recently a novel approach to perform sequential extractions (SE) of elements in solid samples was developed by this group, based upon the use of a sequential injection (SI) system incorporating a specially designed extraction microcolumn. Entailing a number of distinct advantages as compared to c...... CRM483 soil which exhibits inhomogeneity in the particle size distribution....
Khomami, Bamin [Univ. of Tennessee, Knoxville, TN (United States); Cui, Shengting [Univ. of Tennessee, Knoxville, TN (United States); de Almeida, Valmor F. [Oak Ridge National Lab., Oak Ridge, TN (United States); Felker, Kevin [Oak Ridge National Lab., Oak Ridge, TN (United States)
The purpose of this project is to quantify the interfacial transport of water into the most prevalent nuclear reprocessing solvent extractant mixture, namely tri-butyl-phosphate (TBP) and dodecane, via massively parallel molecular dynamics simulations on the most powerful machines available for open research. Specifically, we will accomplish this objective by evolving the water/TBP/dodecane system up to 1 ms of elapsed time, and validate the simulation results by direct comparison with experimentally measured water solubility in the organic phase. The significance of this effort is to demonstrate for the first time that the combination of emerging simulation tools and state-of-the-art supercomputers can provide quantitative information on par with experimental measurements for solvent extraction systems of relevance to the nuclear fuel cycle. Results: Initially, the isolated single-component and single-phase systems were studied, followed by the two-phase, multicomponent counterpart. Specifically, the systems studied were: pure TBP; pure n-dodecane; the TBP/n-dodecane mixture; and the complete extraction system, the water-TBP/n-dodecane two-phase system, to gain deep insight into the water extraction process. We have fully achieved our goal of simulating the molecular extraction of water molecules into the TBP/n-dodecane mixture up to the saturation point, and obtained favorable comparison with experimental data. Many insights into fundamental molecular-level processes and physics were obtained in the process. Most importantly, we found that the dipole moment of the extracting agent is crucially important in affecting the interface roughness and the extraction rate of water molecules into the organic phase. In addition, we have identified shortcomings in the existing OPLS-AA force field potential for long-chain alkanes. The significance of this force field is that it is supposed to be optimized for molecular liquid simulations. We found that it failed for dodecane and
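The quantity the report singles out, the dipole moment of the extracting agent, is straightforward to compute once a force field assigns point partial charges: μ = Σ_i q_i r_i for a neutral molecule. The charges and coordinates below are an invented toy fragment, not TBP force-field parameters.

```python
# Back-of-the-envelope sketch: molecular dipole moment from point partial charges.
import numpy as np

E_TO_DEBYE = 1.0 / 0.2081943   # conversion factor: e*Angstrom -> Debye

charges = np.array([-0.8, 0.4, 0.4])                  # e, toy neutral fragment (invented)
coords = np.array([[0.0, 0.0, 0.0],                   # Angstrom (invented geometry)
                   [0.96, 0.0, 0.0],
                   [-0.24, 0.93, 0.0]])

dipole_vec = (charges[:, None] * coords).sum(axis=0)   # e*Angstrom
print("dipole moment:", np.linalg.norm(dipole_vec) * E_TO_DEBYE, "D")
```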
Dick Deborah Pinheiro; Burba Peter
In the present study, the extraction behaviour of humic substances (HS) from an Oxisol and a Mollisol from South Brazil, by using 0.1 and 0.5 mol L-1 NaOH and 0.15 mol L-1 neutral pyrophosphate solutions, respectively, was systematically studied. The kinetics and efficiency of HS extraction were evaluated by means of UV/Vis spectroscopy. The isolated humic acids (HA) and fulvic acids (FA) were size-classified by multistage ultrafiltration (six fractions) in the molecular weight range of 1 to ...
Jeddi, Fakhri; Piarroux, Renaud; Mary, Charles
During the last 20 years, molecular biology techniques have propelled the diagnosis of parasitic diseases into a new era, as regards assay speed, sensitivity, and parasite characterization. However, DNA extraction remains a critical step and should be adapted for diagnostic and epidemiological studies. The aim of this report was to document the constraints associated with DNA extraction for the diagnosis of parasitic diseases and illustrate the adaptation of an automated extraction system, Nu...
Tract-specific analysis (TSA) measures diffusion parameters along a specific fiber that has been extracted by fiber tracking using manual regions of interest (ROIs), but TSA is limited by its requirement for manual operation, poor reproducibility, and high time consumption. We aimed to develop a fully automated extraction method for the cingulum bundle (CB) and to apply the method to TSA in neurobehavioral disorders such as Parkinson's disease (PD). We introduce the voxel classification (VC) and auto diffusion tensor fiber-tracking (AFT) methods of extraction. The VC method directly extracts the CB, skipping the fiber-tracking step, whereas the AFT method uses fiber tracking from automatically selected ROIs. We compared the results of VC and AFT to those obtained by manual diffusion tensor fiber tracking (MFT) performed by 3 operators. We quantified the Jaccard similarity index among the 3 methods in data from 20 subjects (10 normal controls [NC] and 10 patients with Parkinson's disease dementia [PDD]). We used all 3 extraction methods (VC, AFT, and MFT) to calculate the fractional anisotropy (FA) values of the anterior and posterior CB for 15 NC subjects, 15 with PD, and 15 with PDD. The Jaccard index between the results of AFT and MFT, 0.72, was similar to the inter-operator Jaccard index of MFT. However, the Jaccard indices between VC and MFT and between VC and AFT were lower. Nevertheless, the VC method discriminated among all 3 groups (NC, PD, and PDD), whereas the other methods discriminated only 2 groups (NC, PD or PDD). For TSA in Parkinson's disease, the VC method can be more useful than the AFT and MFT methods for extracting the CB. In addition, the results of the patient data analysis suggest that a reduction of FA in the posterior CB may represent a useful biological index for monitoring PD and PDD. (author)
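The voxel-wise Jaccard similarity used to compare extraction methods is simply the intersection of two binary masks over their union; a minimal sketch with random stand-in masks follows.

```python
# Minimal sketch of the Jaccard index between two binary extraction masks.
import numpy as np

def jaccard(mask_a, mask_b):
    """J(A, B) = |A ∩ B| / |A ∪ B| for boolean voxel masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(0)
vc_mask = rng.random((64, 64, 30)) > 0.97     # stand-in for a voxel-classification result
aft_mask = rng.random((64, 64, 30)) > 0.97    # stand-in for an auto fiber-tracking result
print("Jaccard index:", jaccard(vc_mask, aft_mask))
```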
van Hage Willem; Katrenko Sophia; Meij Edgar; Schuemie Martijn; Gibson Andrew P; Marshall M Scott; Roos Marco; Krommydas Konstantinos; Adriaans Pieter W
Abstract Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable ...
Jed A. Fuhrman; Comeau, Dorothy E.; Hagström, Åke; Chan, Amy M.
We developed a simple technique for the high-yield extraction of purified DNA from mixed populations of natural planktonic marine microbes (primarily bacteria). This is a necessary step for several molecular biological approaches to the study of microbial communities in nature. The microorganisms from near-shore marine and brackish water samples, ranging in volume from 8 to 40 liters, were collected on 0.22-μm-pore-size fluorocarbon-based filters, after prefiltration through glass fiber filte...
Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun
Accurate diagnosis of acute appendicitis is a difficult problem in practice, especially when the patient is very young or pregnant. In this paper, we propose a fully automatic appendix extraction method for ultrasonography that applies a series of image processing algorithms and an unsupervised neural learning algorithm, the self-organizing map. Following the suggestions of clinical practitioners, we define four shape patterns of the appendix, and the self-organizing map learns those patterns in the pixel clustering phase. In an experiment designed to test the performance on those four frequently found shape patterns, our method was successful for 3 of the types (1 failure out of 45 cases) but remains questionable for one shape pattern (80% correct).
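A toy version of the pixel-clustering step with a self-organizing map is sketched below using the third-party MiniSom package (assumed available) rather than a custom SOM; real inputs would be intensity or texture features of ultrasound pixels, and the 2x2 map size is arbitrary.

```python
# Toy sketch: cluster pixel feature vectors with a small self-organizing map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
pixel_features = rng.random((5000, 3))        # e.g. intensity, local mean, local std (stand-ins)

som = MiniSom(2, 2, input_len=3, sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(pixel_features, num_iteration=2000)

# Assign each pixel to its best-matching SOM node (cluster).
clusters = np.array([som.winner(v) for v in pixel_features])
print("pixels per node:", {tuple(c): int((clusters == c).all(axis=1).sum())
                           for c in np.unique(clusters, axis=0)})
```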