WorldWideScience

Sample records for publication summarizes approximately

  1. Bayesian Query-Focused Summarization

    CERN Document Server

    Daumé, Hal

    2009-01-01

    We present BayeSum (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

  2. Summarizing Expository Texts

    Science.gov (United States)

    Westby, Carol; Culatta, Barbara; Lawrence, Barbara; Hall-Kenyon, Kendra

    2010-01-01

    Purpose: This article reviews the literature on students' developing skills in summarizing expository texts and describes strategies for evaluating students' expository summaries. Evaluation outcomes are presented for a professional development project aimed at helping teachers develop new techniques for teaching summarization. Methods: Strategies…

  3. Arabic summarization in Twitter

    Directory of Open Access Journals (Sweden)

    Nawal El-Fishawy

    2014-06-01

    Full Text Available Twitter, an online microblogging service, enables its users to write and read text-based posts known as “tweets”. It has become one of the most commonly used social networks. However, an important problem arises: the tweets returned when searching for a topic phrase are sorted only by recency, not relevancy. This forces the user to read through the tweets manually in order to understand what they are primarily saying about the particular topic. Some strategies have been developed for summarizing English micro blogs, but Arabic micro blog summarization is still an active research area. This paper presents a machine learning based solution for summarizing Arabic micro blogging posts, and more specifically for Egyptian dialect summarization. The goal is to produce a short summary for Arabic tweets related to a specific topic in less time and with less effort. The proposed strategy is evaluated and the results are compared with those obtained by the well-known multi-document summarization algorithms, including SumBasic, TF-IDF, PageRank, and MEAD, and with human summaries.

  4. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations for automatic document summarization (ADS) and the different algorithms used, surveying the recent state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS as well as current applications and trends. The approaches covered are statistical, linguistic, and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent. Powerful algorithms have been developed...

  5. Gesture Recognition Summarization

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ting-fang; FENG Zhi-quan; SU Yuan-yuan; JIANG Yan

    2014-01-01

    Gesture recognition is an important research topic in the field of human-computer interaction. Hand gestures are highly variable and flexible, so gesture recognition has always been an important challenge for researchers. In this paper, we first outline the development of gesture recognition and the different classifications of gestures based on different purposes. Then we introduce the common methods used in the stages of gesture segmentation, feature extraction, and recognition. Finally, the state of gesture recognition is summarized and prospects for further study are given.

  6. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
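
    A minimal sketch of the coarse-to-fine idea, assuming key-frames are represented by color histograms and that only temporally adjacent clusters may merge; the paper's exact pairwise K-means procedure is not reproduced here:

        # Illustrative sketch (not the authors' implementation): build a
        # coarse-to-fine key-frame hierarchy by repeatedly merging the closest
        # pair of temporally adjacent clusters (temporal consecutiveness).
        import numpy as np

        def hierarchical_keyframe_summary(histograms, level_sizes):
            """histograms: (n, d) color histograms of finest-level key-frames,
            in temporal order; level_sizes: cluster counts, e.g. [16, 8, 4].
            Returns one representative frame index per cluster, per level."""
            histograms = np.asarray(histograms, dtype=float)
            clusters = [[i] for i in range(len(histograms))]
            levels = {}
            for target in sorted(level_sizes, reverse=True):
                while len(clusters) > target:
                    cents = [histograms[c].mean(axis=0) for c in clusters]
                    dists = [np.linalg.norm(cents[i] - cents[i + 1])
                             for i in range(len(cents) - 1)]
                    i = int(np.argmin(dists))  # closest adjacent pair
                    clusters[i:i + 2] = [clusters[i] + clusters[i + 1]]
                # representative = member closest to the cluster centroid
                reps = []
                for c in clusters:
                    cent = histograms[c].mean(axis=0)
                    d = np.linalg.norm(histograms[c] - cent, axis=1)
                    reps.append(c[int(np.argmin(d))])
                levels[target] = reps
            return levels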

  7. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies.

    Directory of Open Access Journals (Sweden)

    Asad Abdi

    Full Text Available Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers conduct this task more effectively. This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm for the automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing.

  8. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies

    Science.gov (United States)

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M.; Aliguliyev, Ramiz M.

    2016-01-01

    Background Summarization is a process to select important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, a computer-assisted assessment can be used to help teachers conduct this task more effectively. Design/Results This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm for the automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing. PMID:26735139

  9. Video summarization using motion descriptors

    Science.gov (United States)

    Divakaran, Ajay; Peker, Kadir A.; Sun, Huifang

    2001-01-01

    We describe a technique for video summarization that uses motion descriptors computed in the compressed domain to speed up conventional color-based video summarization techniques. The basic hypothesis of the work is that the intensity of motion activity of a video segment is a direct indication of its 'summarizability.' We present experimental verification of this hypothesis. We are thus able to quickly identify easy-to-summarize segments of a video sequence, since they have a low intensity of motion activity. Moreover, the compressed-domain extraction of motion activity intensity is much simpler than the color-based calculations. We are able to easily summarize these segments by simply choosing a key-frame at random from each low-activity segment. We can then apply conventional color-based summarization techniques to the remaining segments. We are thus able to speed up color-based summarization by reducing the number of segments on which the computationally more expensive color-based computation is needed.
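
    The triage logic reads as below; the shot segmentation, the activity measure, and the color-based summarizer are assumed given, as the abstract does not specify them:

        # Sketch of the motion-activity triage (assumed interface, not the
        # authors' code): low-activity segments get a random key-frame, the
        # rest are deferred to the more expensive color-based summarizer.
        import random

        def summarize_by_motion_activity(segments, activity, threshold,
                                         color_summarizer):
            """segments: list of (start, end) frame ranges; activity:
            per-segment motion-activity intensity from the compressed domain."""
            keyframes = []
            for (start, end), act in zip(segments, activity):
                if act < threshold:
                    keyframes.append(random.randint(start, end))  # cheap case
                else:
                    keyframes.extend(color_summarizer((start, end)))
            return sorted(keyframes)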

  10. Inferring Advisor-Student Relationships from Publication Networks Based on Approximate MaxConfidence Measure

    Directory of Open Access Journals (Sweden)

    Yongjun Li

    2017-01-01

    Full Text Available A publication network contains abundant knowledge about advisor-student relationships. However, these relationship labels are not explicitly shown and need to be identified based on the hidden knowledge. The exploration of such relationships can benefit many interesting applications such as expert finding and research community analysis, and has already drawn many scholars’ attention. In this paper, based on the common knowledge that a student usually coauthors his papers with his advisor, we propose an approximate MaxConfidence measure and present an advisor-student relationship identification algorithm based on the proposed measure. Based on a comparison of two authors’ publication lists, we first employ the proposed measure to determine the time interval that a potential advising relationship lasts and then infer the likelihood of this potential advising relationship. Our algorithm suggests an advisor for each student based on the inference results. The experimental results show that our algorithm can infer advisor-student relationships efficiently and achieve better accuracy than the time-constrained probabilistic factor graph (TPFG) model without any supervised information. Also, we apply some reasonable restrictions on the dataset to reduce the search space significantly.
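
    As a loose illustration only (the paper's approximate MaxConfidence measure is not defined in this abstract), a candidate advisor can be scored by the best year interval in which they co-author a large fraction of the student's papers:

        # Hypothetical simplification: confidence of advisor a for student s =
        # the maximum, over year intervals, of the fraction of s's papers in
        # the interval that are coauthored with a.
        def advisor_confidence(student_papers, advisor):
            """student_papers: list of (year, set_of_coauthors) tuples."""
            years = sorted({y for y, _ in student_papers})
            best = 0.0
            for i, y0 in enumerate(years):
                for y1 in years[i:]:
                    window = [co for y, co in student_papers if y0 <= y <= y1]
                    frac = sum(advisor in co for co in window) / len(window)
                    best = max(best, frac)
            return best

        # e.g. advisor_confidence([(2014, {"Li", "Wu"}), (2015, {"Li"})], "Li") -> 1.0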

  11. Video summarization: methods and landscape

    Science.gov (United States)

    Barbieri, Mauro; Agnihotri, Lalitha; Dimitrova, Nevenka

    2003-11-01

    The ability to summarize and abstract information will be an essential part of intelligent behavior in consumer devices. Various summarization methods have been the topic of intensive research in the content-based video analysis community. Summarization in traditional information retrieval is a well understood problem. While there has been a lot of research in the multimedia community there is no agreed upon terminology and classification of the problems in this domain. Although the problem has been researched from different aspects there is usually no distinction between the various dimensions of summarization. The goal of the paper is to provide the basic definitions of widely used terms such as skimming, summarization, and highlighting. The different levels of summarization: local, global, and meta-level are made explicit. We distinguish among the dimensions of task, content, and method and provide an extensive classification model for the same. We map the existing summary extraction approaches in the literature into this model and we classify the aspects of proposed systems in the literature. In addition, we outline the evaluation methods and provide a brief survey. Finally we propose future research directions based on the white spots that we identified by analysis of existing systems in the literature.

  12. Summarization by domain ontology navigation

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik

    2013-01-01

    ... of the subject. In between these two extremes, conceptual summaries encompass selected concepts derived using background knowledge. We address in this paper an approach where conceptual summaries are provided through a conceptualization as given by an ontology. The ontology guiding the summarization can be a simple taxonomy or a generative domain ontology. A domain ontology can be provided by a pre-analysis of a domain corpus and can be used to condense improved summaries that better reflect the conceptualization of a given domain.

  13. Public Spending on Private Security Services in El Salvador: Preliminary Descriptive Approximation

    Directory of Open Access Journals (Sweden)

    Augusto Rigoberto Lopez Ramírez

    2014-11-01

    Full Text Available This is a descriptive study of the cost that engaging private security service providers represents for the Salvadoran government as an ongoing expense. This relates to the situation of public finances, and to the implications of security and surveillance of public facilities being provided by private companies. A desk review of institutional data was undertaken, measuring the central tendency of this expense. Finally, the conclusions describe the issue in its relevant dimensions, and other pertinent topics of research also come to light. DOI: http://dx.doi.org/10.5377/rpsp.v4i1.1582

  14. Monitoring social media: Summarization, classification and recommendation

    NARCIS (Netherlands)

    Ren, Zhaochun

    2016-01-01

    In this dissertation, we continue previous research on understanding social media documents along three lines: summarization, classification and recommendation. Our first line of work is the summarization of social media documents. Considering the task of time-aware tweets summarization, we first fo

  15. Text summarization as a decision support aid

    Directory of Open Access Journals (Sweden)

    Workman T

    2012-05-01

    Full Text Available Abstract Background PubMed data potentially can provide decision support information, but PubMed was not exclusively designed to be a point-of-care tool. Natural language processing applications that summarize PubMed citations hold promise for extracting decision support information. The objective of this study was to evaluate the efficiency of a text summarization application called Semantic MEDLINE, enhanced with a novel dynamic summarization method, in identifying decision support data. Methods We downloaded PubMed citations addressing the prevention and drug treatment of four disease topics. We then processed the citations with Semantic MEDLINE, enhanced with the dynamic summarization method. We also processed the citations with a conventional summarization method, as well as with a baseline procedure. We evaluated the results using clinician-vetted reference standards built from recommendations in a commercial decision support product, DynaMed. Results For the drug treatment data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.848 and 0.377, while conventional summarization produced 0.583 average recall and 0.712 average precision, and the baseline method yielded average recall and precision values of 0.252 and 0.277. For the prevention data, Semantic MEDLINE enhanced with dynamic summarization achieved average recall and precision scores of 0.655 and 0.329. The baseline technique resulted in recall and precision scores of 0.269 and 0.247. No conventional Semantic MEDLINE method accommodating summarization for prevention exists. Conclusion Semantic MEDLINE with dynamic summarization outperformed conventional summarization in terms of recall, and outperformed the baseline method in both recall and precision. This new approach to text summarization demonstrates potential in identifying decision support data for multiple needs.

  16. A Survey of Unstructured Text Summarization Techniques

    Directory of Open Access Journals (Sweden)

    Sherif Elfayoumy

    2014-05-01

    Full Text Available Due to the explosive amounts of text data being created and organizations' increased desire to leverage their data corpora, especially with the availability of Big Data platforms, there is not usually enough time to read and understand each document and make decisions based on document contents. Hence, there is a great demand for summarizing text documents to provide a representative substitute for the original documents. By improving summarizing techniques, the precision of document retrieval through search queries against summarized documents is expected to improve in comparison to querying against the full spectrum of original documents. Several generic text summarization algorithms have been developed, each with its own advantages and disadvantages. For example, some algorithms are particularly good for summarizing short documents but not for long ones. Others perform well in identifying and summarizing single-topic documents but their precision degrades sharply with multi-topic documents. In this article we present a survey of the literature in text summarization. We also survey some of the most common evaluation methods for the quality of automated text summarization techniques. Lastly, we identify some of the challenging problems that are still open, in particular the need for a universal approach that yields good results for mixed types of documents.

  17. Using Text Messaging to Summarize Text

    Science.gov (United States)

    Williams, Angela Ruffin

    2012-01-01

    Summarizing is an academic task that students are expected to have mastered by the time they enter college. However, experience has revealed quite the contrary. Summarization is often difficult to master as well as teach, but instructors in higher education can benefit greatly from the rapid advancement in mobile wireless technology devices, by…

  18. Summarizing Software Artifacts: A Literature Review

    Institute of Scientific and Technical Information of China (English)

    Najam Nazar; Yan Hu; He Jiang

    2016-01-01

    This paper presents a literature review in the field of summarizing software artifacts, focusing on bug reports, source code, mailing lists and developer discussions. From Jan. 2010 to Apr. 2016, numerous summarization techniques, approaches, and tools have been proposed to satisfy the ongoing demand of improving software performance and quality and facilitating developers in understanding the problems at hand. Since the aforementioned artifacts contain both structured and unstructured data at the same time, researchers have applied different machine learning and data mining techniques to generate summaries. Therefore, this paper first intends to provide a general perspective on the state of the art, describing the types of artifacts, approaches for summarization, as well as the common portions of experimental procedures shared among these artifacts. Moreover, we discuss the applications of summarization, i.e., what tasks at hand have been achieved through summarization. Next, this paper presents tools that are generated for summarization tasks or employed during summarization tasks. In addition, we present the different summarization evaluation methods employed in the selected studies, as well as other important factors used for the evaluation of generated summaries such as adequacy and quality. Moreover, we briefly present modern communication channels and the complementarities and commonalities among different software artifacts. Finally, some thoughts about the challenges applicable to the existing studies in general, as well as future research directions, are also discussed. The survey of existing studies will allow future researchers to have a wide and useful background knowledge on the main and important aspects of this research field.

  19. A Statistical Approach to Automatic Speech Summarization

    Science.gov (United States)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
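
    A simplified sketch of the extraction step: a dynamic program keeping the highest-scoring word subset at a target compression ratio. The paper's actual score also involves SDCFG-based word-concatenation probabilities, which are omitted here:

        # Illustrative DP (not the authors' two-level algorithm): select
        # k = ratio * n words, in original order, maximizing total score.
        def extract_summary(words, scores, ratio):
            n = len(words)
            k = max(1, int(ratio * n))
            NEG = float("-inf")
            dp = [[NEG] * (k + 1) for _ in range(n + 1)]      # best score
            pick = [[False] * (k + 1) for _ in range(n + 1)]  # word i kept?
            dp[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(0, min(i, k) + 1):
                    dp[i][j] = dp[i - 1][j]                   # skip word i
                    if j and dp[i - 1][j - 1] + scores[i - 1] > dp[i][j]:
                        dp[i][j] = dp[i - 1][j - 1] + scores[i - 1]
                        pick[i][j] = True                     # keep word i
            out, j = [], k
            for i in range(n, 0, -1):                         # backtrack
                if pick[i][j]:
                    out.append(words[i - 1])
                    j -= 1
            return list(reversed(out))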

  20. A Statistical Approach to Automatic Speech Summarization

    Directory of Open Access Journals (Sweden)

    Chiori Hori

    2003-02-01

    Full Text Available This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.

  1. Automatic Text Summarization: Past, Present and Future

    OpenAIRE

    Saggion, Horacio; Poibeau, Thierry

    2012-01-01

    International audience; Automatic text summarization, the computer-based production of condensed versions of documents, is an important technology for the information society. Without summaries it would be practically impossible for human beings to get access to the ever growing mass of information available online. Although research in text summarization is over fifty years old, some efforts are still needed given the insufficient quality of automatic summaries and the number of interesting ...

  2. Learning to Summarize and Summarizing for Learning: Some Computer-Based Supports

    NARCIS (Netherlands)

    Dessus, Philippe

    2008-01-01

    Dessus, P. (2008). Learning to Summarize and Summarizing for Learning: Some Computer-Based Supports. Workshop presentation at the symposium Learning networks for professional. November, 14, 2008, Heerlen, Nederland: Open Universiteit Nederland.

  3. The Effect of Summarizing and Presentation Strategies

    Directory of Open Access Journals (Sweden)

    Hooshang Khoshsima

    2014-07-01

    Full Text Available The present study aimed to find out the effect of summarizing and presentation strategies on Iranian intermediate EFL learners’ reading comprehension. 61 students were selected and divided into two groups, experimental and control. The homogeneity of their proficiency level was established using a TOEFL proficiency test. The experimental group used the two strategies three sessions each week for twenty weeks, while the control group was not trained on the strategies. After every two-week instruction period, an immediate posttest was administered. At the end of the study, a post-test was administered to both groups. Paired-sample t-tests and independent-sample t-tests were used for analysis. The results of the study revealed that the summarizing and presentation strategies had a significant effect on promoting the reading comprehension of intermediate EFL learners. They also indicated that the presentation strategy was significantly more effective on students’ reading comprehension. Keywords: reading strategy, summarizing, presentation, reading comprehension, EFL learners

  4. Summarization on variable liquid thrust rocket engines

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The state of the technology and development trends of variable-thrust rocket engines in China and abroad are summarized. Key technologies for developing variable-thrust rocket engines are analyzed. Recommendations are put forward for developing variable-thrust rocket engines adapted to the situation of our country.

  5. A novel tool for assessing and summarizing the built environment

    Directory of Open Access Journals (Sweden)

    Kroeger Gretchen L

    2012-10-01

    Full Text Available Abstract Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying

  6. A novel tool for assessing and summarizing the built environment

    Science.gov (United States)

    2012-01-01

    Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying scales of community and

  7. Guided Structure-Aware Review Summarization

    Institute of Scientific and Technical Information of China (English)

    Feng Jin; Min-Lie Huang; Xiao-Yan Zhu

    2011-01-01

    Although the goal of traditional text summarization is to generate summaries with diverse information, most of those applications have no explicit definition of the information structure. Thus, it is difficult to generate truly structure-aware summaries because the information structure to guide summarization is unclear. In this paper, we present a novel framework to generate guided summaries for product reviews. The guided summary has an explicitly defined structure which comes from the important aspects of products. The proposed framework attempts to maximize expected aspect satisfaction during summary generation. The importance of an aspect to a generated summary is modeled using Labeled Latent Dirichlet Allocation. Empirical experimental results on consumer reviews of cars show the effectiveness of our method.

  8. Summarizing Vocabularies in the Global Semantic Web

    Institute of Scientific and Technical Information of China (English)

    Xiang Zhang; Gong Cheng; Wei-Yi Ge; Yu-Zhong Qu

    2009-01-01

    In the Semantic Web, vocabularies are defined and shared among knowledge workers to describe linked data for scientific, industrial or daily life usage. With the rapid growth of online vocabularies, there is an emergent need for approaches helping users understand vocabularies quickly. In this paper, we study the summarization of vocabularies to help users understand vocabularies. Vocabulary summarization is based on the structural analysis and pragmatics statistics in the global Semantic Web. Local Bipartite Model and Expanded Bipartite Model of a vocabulary are proposed to characterize the structure in a vocabulary and links between vocabularies. A structural importance for each RDF sentence in the vocabulary is assessed using link analysis. Meanwhile, pragmatics importance of each RDF sentence is assessed using the statistics of instantiation of its terms in the Semantic Web. Summaries are produced by extracting important RDF sentences in vocabularies under a re-ranking strategy. Preliminary experiments show that it is feasible to help users understand a vocabulary through its summary.

  9. Data summarization method for chronic disease tracking.

    Science.gov (United States)

    Aleksić, Dejan; Rajković, Petar; Vučković, Dušan; Janković, Dragan; Milenković, Aleksandar

    2017-05-01

    Bearing in mind the rising prevalence of chronic medical conditions, the chronic disease management is one of the key features required by medical information systems used in primary healthcare. Our research group paid a particular attention to this specific area by offering a set of custom data collection forms and reports in order to improve medical professionals' daily routine. The main idea was to provide an overview of history for chronic diseases, which, as it seems, had not been properly supported in previous administrative workflows. After five years of active use of medical information systems in more than 25 primary healthcare institutions, we were able to identify several scenarios that were often end-user-action dependent and could result in the data related to chronic diagnoses being loosely connected. An additional benefit would be a more effective identification of potentially new patients suffering from chronic diseases. For this particular reason, we introduced an extension of the existing data structures and a summarizing method along with a specific tool that should help in connecting all the data related to a patient and a diagnosis. The summarization method was based on the principle of connecting all of the records pertaining to a specific diagnosis for the selected patient, and it was envisaged to work in both automatic and on-demand mode. The expected results were a more effective identification of new potential patients and a completion of the existing histories of diseases associated with chronic diagnoses. The current system usage analysis shows that a small number of doctors used functionalities specially designed for chronic diseases affecting less than 6% of the total population (around 11,500 out of more than 200,000 patients). In initial tests, the on-demand data summarization mode was applied in general practice and 89 out of 155 users identified more than 3000 new patients with a chronic disease over a three-month test period

  10. TREC 2014 Temporal Summarization Track Overview

    Science.gov (United States)

    2015-02-17

    TREC 2014 Temporal Summarization Track Overview. Javed Aslam, Fernando Diaz, Matthew Ekstrand-Abueg, Richard McCreadie, Virgil Pavlu, Tetsuya Sakai. [Fragment from the report:] An update is a (string, time) pair: u = (u.string, u.t). For example, u = ("The hurricane was upgraded to category 4", 1330169580) represents an update describing the hurricane category, now 4, pushed out by system S at UNIX time 1330169580 (i.e. 1330169580 seconds after 0:00 UTC on January 1, 1970). In this year's ...

  11. Vortex core timelines and ribbon summarizations: flow summarization over time and simulation ensembles

    Science.gov (United States)

    Chan, Alexis Y. L.; Lee, Joohwi; Taylor, Russell M.

    2013-01-01

    We present two new vortex-summarization techniques designed to portray vortex motion over an entire simulation and over an ensemble of simulations in a single image. Linear "vortex core timelines" with cone glyphs summarize flow over all time steps of a single simulation, with color varying to indicate time. Simplified "ribbon summarizations" with hue nominally encoding ensemble membership and saturation encoding time enable direct visual comparison of the distribution of vortices in time and space for a set of simulations.

  12. Web Search Results Summarization Using Similarity Assessment

    Directory of Open Access Journals (Sweden)

    Sawant V.V.

    2014-06-01

    Full Text Available Nowadays the Internet has become part of our life; the WWW is its most important service because it allows presenting information such as documents, images, etc. The WWW grows rapidly and caters to diversified levels and categories of users. Web search results are extracted for a user-specified query. With millions of pieces of information pouring online, users have no time to surf the contents completely; moreover, the available information is often repeated or duplicated. This issue has created the necessity to restructure search results so that they yield summarized results. The proposed approach comprises extraction of different features of web pages. Web page visual similarity assessment has been employed to address problems in different fields including phishing, web archiving, and web search engines. In this approach, the search results returned for a user query are first stored. The Earth Mover's Distance (EMD) is used to assess web page visual similarity: each web page is taken as a low-resolution image, and a signature of that web page image is created from color and coordinate features; the distance between web pages is then calculated by applying the EMD method. The layout similarity value is computed using a tag comparison algorithm and a template comparison algorithm. Textual similarity is computed using cosine similarity, and hyperlink analysis is performed to compare outward links. The final similarity value is calculated by fusing the layout, text, hyperlink, and EMD values. Once the similarity matrix is found, clustering is employed with the help of connected components. Finally, groups of similar web pages, i.e., summarized results, are displayed to the user. Experiments were conducted to demonstrate the effectiveness of the four methods in generating summarized results for different web pages and user queries.
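
    A sketch of the final fusion step under assumed equal weights (the abstract does not give the weighting), with textual similarity as cosine similarity over bag-of-words counts:

        # Illustrative fusion of the four component similarities; weights and
        # normalization are assumptions, not values from the paper.
        import math
        from collections import Counter

        def cosine_similarity(text_a, text_b):
            a, b = Counter(text_a.split()), Counter(text_b.split())
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def fused_similarity(layout, text, hyperlink, emd,
                             weights=(0.25, 0.25, 0.25, 0.25)):
            """Components assumed normalized to [0, 1]; EMD is a distance,
            so 1 - emd serves as the visual similarity."""
            wl, wt, wh, we = weights
            return wl * layout + wt * text + wh * hyperlink + we * (1.0 - emd)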

  13. Sociometry Based Multiparty Audio Recordings Summarization

    OpenAIRE

    Vinciarelli, Alessandro

    2006-01-01

    This paper shows how Social Network Analysis, the study of relational data in specific social environments, can be used to summarize multiparty radio news recordings. A social network is extracted from each recording and it is analyzed in order to detect the role of each speaker (e.g. anchorman, guest, etc.). The role is then used as a criterion to select the segments that are more representative of the recording content. The results show that the length of the recordings can be reduced by mo...

  14. Method for gathering and summarizing internet information

    Science.gov (United States)

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  15. Method for gathering and summarizing internet information

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  16. Figure-associated text summarization and evaluation.

    Science.gov (United States)

    Polepalli Ramesh, Balaji; Sethi, Ricky J; Yu, Hong

    2015-01-01

    Biomedical literature incorporates millions of figures, which are a rich and important knowledge resource for biomedical researchers. Scientists need access to the figures and the knowledge they represent in order to validate research findings and to generate new hypotheses. By themselves, these figures are nearly always incomprehensible to both humans and machines and their associated texts are therefore essential for full comprehension. The associated text of a figure, however, is scattered throughout its full-text article and contains redundant information content. In this paper, we report the continued development and evaluation of several figure summarization systems, the FigSum+ systems, that automatically identify associated texts, remove redundant information, and generate a text summary for every figure in an article. Using a set of 94 annotated figures selected from 19 different journals, we conducted an intrinsic evaluation of FigSum+. We evaluate the performance by precision, recall, F1, and ROUGE scores. The best FigSum+ system is based on an unsupervised method, achieving F1 score of 0.66 and ROUGE-1 score of 0.97. The annotated data is available at figshare.com (http://figshare.com/articles/Figure_Associated_Text_Summarization_and_Evaluation/858903).
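
    For reference, the standard definitions behind the reported scores (general definitions, not formulas taken from the FigSum+ paper), where c_sys and c_ref count a word's occurrences in the system and reference summaries:

        \[
          P = \frac{TP}{TP + FP}, \qquad
          R = \frac{TP}{TP + FN}, \qquad
          F_1 = \frac{2PR}{P + R}
        \]
        \[
          \text{ROUGE-1} =
          \frac{\sum_{S \in \mathrm{Refs}} \sum_{w \in S}
                \min\bigl(c_{\mathrm{sys}}(w),\, c_{\mathrm{ref}}(w)\bigr)}
               {\sum_{S \in \mathrm{Refs}} \sum_{w \in S} c_{\mathrm{ref}}(w)}
        \]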

  17. Summarizing Audiovisual Contents of a Video Program

    Directory of Open Access Journals (Sweden)

    Gong Yihong

    2003-01-01

    Full Text Available In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, seminars, etc., and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.

  18. Dynamic summarization of bibliographic-based data

    Directory of Open Access Journals (Sweden)

    Hurdle John F

    2011-02-01

    Full Text Available Abstract Background Traditional information retrieval techniques typically return excessive output when directed at large bibliographic databases. Natural Language Processing applications strive to extract salient content from the excessive data. Semantic MEDLINE, a National Library of Medicine (NLM) natural language processing application, highlights relevant information in PubMed data. However, Semantic MEDLINE implements manually coded schemas, accommodating few information needs. Currently, there are only five such schemas, while many more would be needed to realistically accommodate all potential users. The aim of this project was to develop and evaluate a statistical algorithm that automatically identifies relevant bibliographic data; the new algorithm could be incorporated into a dynamic schema to accommodate various information needs in Semantic MEDLINE, and eliminate the need for multiple schemas. Methods We developed a flexible algorithm named Combo that combines three statistical metrics, the Kullback-Leibler Divergence (KLD), Riloff's RlogF metric (RlogF), and a new metric called PredScal, to automatically identify salient data in bibliographic text. We downloaded citations from a PubMed search query addressing the genetic etiology of bladder cancer. The citations were processed with SemRep, an NLM rule-based application that produces semantic predications. SemRep output was processed by Combo, in addition to the standard Semantic MEDLINE genetics schema and independently by the two individual KLD and RlogF metrics. We evaluated each summarization method using an existing reference standard within the task-based context of genetic database curation. Results Combo asserted 74 genetic entities implicated in bladder cancer development, whereas the traditional schema asserted 10 genetic entities; the KLD and RlogF metrics individually asserted 77 and 69 genetic entities, respectively. Combo achieved 61% recall and 81% precision, with an F...
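
    Of the three metrics Combo combines, the Kullback-Leibler Divergence has a standard closed form (RlogF and PredScal are defined in the paper and not reproduced here); for distributions P and Q over the same vocabulary:

        \[
          D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
        \]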

  19. Document summarization using positive pointwise mutual information

    CERN Document Server

    S, Aji

    2012-01-01

    The degree of success in document summarization processes depends on the performance of the method used in identifying significant sentences in the documents. The collection of unique words characterizes the major signature of the document, and forms the basis for Term-Sentence-Matrix (TSM). The Positive Pointwise Mutual Information, which works well for measuring semantic similarity in the Term-Sentence-Matrix, is used in our method to assign weights for each entry in the Term-Sentence-Matrix. The Sentence-Rank-Matrix generated from this weighted TSM, is then used to extract a summary from the document. Our experiments show that such a method would outperform most of the existing methods in producing summaries from large documents.
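
    A minimal sketch of PPMI weighting over a term-sentence count matrix (illustrative, not the authors' implementation):

        # ppmi_ij = max(0, log( p(i,j) / (p(i) * p(j)) )), computed from the
        # raw term-sentence counts; zero counts map to a PPMI of 0.
        import numpy as np

        def ppmi(tsm):
            """tsm: (terms x sentences) count matrix -> PPMI-weighted matrix."""
            tsm = np.asarray(tsm, dtype=float)
            p_ij = tsm / tsm.sum()
            p_i = p_ij.sum(axis=1, keepdims=True)   # term marginals
            p_j = p_ij.sum(axis=0, keepdims=True)   # sentence marginals
            with np.errstate(divide="ignore", invalid="ignore"):
                pmi = np.log(p_ij / (p_i * p_j))
            pmi[~np.isfinite(pmi)] = 0.0            # handle log(0) cells
            return np.maximum(pmi, 0.0)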

  20. AUTOMATIC TEXT SUMMARIZATION BASED ON TEXTUAL COHESION

    Institute of Scientific and Technical Information of China (English)

    Chen Yanmin; Liu Bingquan; Wang Xiaolong

    2007-01-01

    This paper presents two different algorithms that derive the cohesion structure in the form of lexical chains from two kinds of language resources, HowNet and TongYiCiCiLin. The research that connects the cohesion structure of a text to the derivation of its summary is presented. A novel model of automatic text summarization is devised, based on the data provided by lexical chains from original texts. Moreover, the construction rules of lexical chains are modified according to characteristics of the knowledge database in order to be more suitable for Chinese summarization. Evaluation results show that high-quality indicative summaries are produced from Chinese texts.

  1. Summarization of an online medical encyclopedia.

    Science.gov (United States)

    Fiszman, Marcelo; Rindflesch, Thomas C; Kilicoglu, Halil

    2004-01-01

    We explore a knowledge-rich (abstraction) approach to summarization and apply it to multiple documents from an online medical encyclopedia. A semantic processor functions as the source interpreter and produces a list of predications. A transformation stage then generalizes and condenses this list, ultimately generating a conceptual condensate for a given disorder topic. We provide a preliminary evaluation of the quality of the condensates produced for a sample of four disorders. The overall precision of the disorder conceptual condensates was 87%, and the compression ratio from the base list of predications to the final condensate was 98%. The conceptual condensate could be used as input to a text generator to produce a natural language summary for a given disorder topic.

  2. Video summarization and semantics editing tools

    Science.gov (United States)

    Xu, Li-Qun; Zhu, Jian; Stentiford, Fred

    2001-01-01

    This paper describes a video summarization and semantics editing tool that is suited for content-based video indexing and retrieval with appropriate human operator assistance. The whole system has been designed with a clear focus on the extraction and exploitation of motion information inherent in the dynamic video scene. The dominant motion information has been used explicitly for shot boundary detection, camera motion characterization, visual content variation description, and for key frame extraction. Various contributions have been made to ensure that the system works robustly with complex scenes and across different media types. A window-based graphical user interface has been designed to make the task very easy for interactive analysis and editing of semantic events and episodes where appropriate.

  3. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constant working surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial analysis systems and useless data makes the final results of such systems noisy, unstable, and erroneous. Thus, there is a need for a mechanism to summarize the original video sequence to a set of the most expressive images of the sequence. The proposed system in this paper uses a face quality assessment technique for this purpose. The summarized results of this technique have been used in three different facial analysis systems and the experimental results on real video sequences are promising.

  4. Hierarchical video summarization for medical data

    Science.gov (United States)

    Zhu, Xingquan; Fan, Jianping; Elmagarmid, Ahmed K.; Aref, Walid G.

    2001-12-01

    To provide users with an overview of medical video content at various levels of abstraction which can be used for more efficient database browsing and access, a hierarchical video summarization strategy has been developed and is presented in this paper. To generate an overview, the key frames of a video are preprocessed to extract special frames (black frames, slides, clip art, sketch drawings) and special regions (faces, skin or blood-red areas). A shot grouping method is then applied to merge the spatially or temporally related shots into groups. The visual features and knowledge from the video shots are integrated to assign the groups into predefined semantic categories. Based on the video groups and their semantic categories, video summaries for different levels are constructed by group merging, hierarchical group clustering and semantic category selection. Based on this strategy, a user can select the layer of the summary to access. The higher the layer, the more concise the video summary; the lower the layer, the greater the detail contained in the summary.

  5. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constant working surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial analysis systems and useless data makes the final results of such systems noisy, unstable, and erroneous. Thus, there is a need for a mechanism to summarize the original video sequence to a set of the most expressive images of the sequence. The proposed system in this paper uses a face quality assessment...

  6. Summarize to learn: summarization and visualization of text for ubiquitous learning

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Last, Mark; Verbeke, Mathias;

    2013-01-01

    Visualizations can stand in many relations to texts – and, as research into learning with pictures has shown, they can become particularly valuable when they transform the contents of the text (rather than just duplicate its message or structure it). But what kinds of transformations can be particularly helpful in the learning process? In this paper, we argue that interacting with, and creating, summaries of texts is a key transformation technique, and we investigate how textual and graphical summarization approaches, as well as automatic and manual summarization, can complement one another...

  7. Summarize to learn: summarization and visualization of text for ubiquitous learning

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Last, Mark; Verbeke, Mathias

    2013-01-01

    Visualizations can stand in many relations to texts – and, as research into learning with pictures has shown, they can become particularly valuable when they transform the contents of the text (rather than just duplicate its message or structure it). But what kinds of transformations can be particularly helpful in the learning process? In this paper, we argue that interacting with, and creating, summaries of texts is a key transformation technique, and we investigate how textual and graphical summarization approaches, as well as automatic and manual summarization, can complement one another to support effective learning.

  8. Video Summarization: Survey on Event Detection and Summarization in Soccer Videos

    Directory of Open Access Journals (Sweden)

    Yasmin S. Khan

    2015-11-01

    Full Text Available In today's world, the rapid development of digital video and editing technology has led to fast growth of video data, creating the need for effective and advanced techniques for analysis and video retrieval, as multimedia repositories have made browsing, delivery of contents (video), and video retrieval very slow. Hence, video summarization proposes various ways to enable faster browsing of a large amount of data and also content indexing. Many people spend their free time watching or playing different sports like soccer, cricket, etc., but it is not possible to watch each and every game due to the length of the games. In such cases, users may just want to view a summary of the video, an abstract of the original video, instead of watching the whole video; the summary provides information about the occurrence of various incidents in the video. It is often preferable to watch just the highlights of a game or just the review/trailer of a movie. Clearly, summarizing a video is an important process. In this paper, video summarization approaches that can generate static or dynamic summaries are discussed. We present different techniques for each mode from the literature, and we discuss some features used for generating video summaries. As soccer is the world's most played and watched game, it is taken as a case study. Research done in this domain is discussed. We conclude that there is a broad scope for further research in this field.

  9. Graph-based models for multi-document summarization

    OpenAIRE

    Ercan, Canhasi

    2014-01-01

    This thesis is about automatic document summarization, with experimental results on general, query, update and comparative multi-document summarization (MDS). We describe prior work and our own improvements on some important aspects of a summarization system, including text modeling by means of a graph and sentence selection via archetypal analysis. The centerpiece of this work is a novel method for summarization that we call “Archetypal Analysis Summarization”. Archetypal Analysis (AA) is...

  10. An Optimization Model and DPSO-EDA for Document Summarization

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliev

    2011-11-01

    Full Text Available We model document summarization as a nonlinear 0-1 programming problem where the objective function is defined as the Heronian mean of two objective functions enforcing coverage and diversity. The proposed model was implemented on a multi-document summarization task. Experiments on the DUC2001 and DUC2002 datasets showed that the proposed model outperforms the other summarization methods.
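
    Assuming the standard two-argument Heronian mean, the objective plausibly takes the form below, with f_cov and f_div denoting the coverage and diversity components (their exact definitions are in the paper, not here):

        \[
          F(x) = \tfrac{1}{3}\Bigl( f_{\mathrm{cov}}(x)
                 + \sqrt{f_{\mathrm{cov}}(x)\, f_{\mathrm{div}}(x)}
                 + f_{\mathrm{div}}(x) \Bigr),
          \qquad x \in \{0,1\}^{n}
        \]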

  11. Approximate Representations and Approximate Homomorphisms

    CERN Document Server

    Moore, Cristopher

    2010-01-01

    Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f: G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min, where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f: G -> H can be an approximate homomorphism, where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensi...

  12. Cat swarm optimization based evolutionary framework for multi document summarization

    Science.gov (United States)

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

Today, the World Wide Web has brought us an enormous quantity of on-line information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast amounts of documents. Based on the number of documents considered for summarization, it is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization for researchers seeking an accurate summary from multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO based model is also compared with two other nature-inspired summarizers, a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. With respect to the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.

  13. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  14. Automatic summarization of audio-visual soccer feeds

    OpenAIRE

    Chen F; De Vleeschouwer C; Duxans Barrobes H.; Gregorio Escalada J.; Conejero D.

    2010-01-01

    This paper presents a fully automatic system for soccer game summarization. The system takes audio-visual content as an input, and builds on the integration of two independent but complementary contributions (i) to identify crucial periods of the soccer game in a fully automatic way, and (ii) to summarize the soccer game as a function of individual narrative preferences of the user. The process involves both audio and video analysis, and handles the personalized summarization challenge as a r...

  15. Automated methods for the summarization of electronic health records.

    Science.gov (United States)

    Pivovarov, Rimma; Elhadad, Noémie

    2015-09-01

This review examines work on automated summarization of electronic health record (EHR) data and in particular, individual patient record summarization. We organize the published research and highlight methodological challenges in the area of EHR summarization implementation. The target audience for this review includes researchers, designers, and informaticians who are concerned about the problem of information overload in the clinical setting as well as both users and developers of clinical summarization systems. Automated summarization has been a long-studied subject in the fields of natural language processing and human-computer interaction, but the translation of summarization and visualization methods to the complexity of the clinical workflow is slow moving. We assess work in aggregating and visualizing patient information with a particular focus on methods for detecting and removing redundancy, describing temporality, determining salience, accounting for missing data, and taking advantage of encoded clinical knowledge. We identify and discuss open challenges critical to the implementation and use of robust EHR summarization systems.

  16. Interestingness-Driven Diffusion Process Summarization in Dynamic Networks

    DEFF Research Database (Denmark)

    Qu, Qiang; Liu, Siyuan; Jensen, Christian S.

    2014-01-01

The widespread use of social networks enables the rapid diffusion of information, e.g., news, among users in very large communities. It is a substantial challenge to be able to observe and understand such diffusion processes, which may be modeled as networks that are both large and dynamic. A key tool in this regard is data summarization. However, few existing studies aim to summarize graphs/networks for dynamics. Dynamic networks raise new challenges not found in static settings, including time sensitivity and the needs for online interestingness evaluation and summary traceability, which render existing techniques inapplicable. We study the topic of dynamic network summarization: how to summarize dynamic networks with millions of nodes by only capturing the few most interesting nodes or edges over time, and we address the problem by finding interestingness-driven diffusion processes...

  17. Towards App-based Formative Feedback to Support Summarizing Skills

    NARCIS (Netherlands)

    Van Rosmalen, Peter; Kester, Liesbeth; Boshuizen, Els

    2013-01-01

    Van Rosmalen, P., Kester, L., & Boshuizen, H. P. A. (2013, 18 September). Towards App‐based Formative Feedback to Support Summarizing Skills. Presentation given at ECTEL 2013: Workshop on Technology-Enhanced Formative Assessment (TEFA), Paphos, Cyprus.

  18. EXPLOITING RHETORICAL RELATIONS TO MULTIPLE DOCUMENTS TEXT SUMMARIZATION

    Directory of Open Access Journals (Sweden)

    N. Adilah Hanin Zahri

    2015-03-01

Full Text Available Much previous research has proven that the use of rhetorical relations can enhance many applications, such as text summarization, question answering, and natural language generation. This work proposes an approach that extends the benefit of rhetorical relations to address the redundancy problem in cluster-based text summarization of multiple documents. We exploited the rhetorical relations that exist between sentences to group similar sentences into multiple clusters and identify themes of common information. Candidate summary sentences were extracted from these clusters. Then, cluster-based text summarization was performed using a Conditional Markov Random Walk Model to measure the saliency scores of the candidate summary sentences. We evaluated our method by measuring the cohesion and separation of the clusters constructed by exploiting rhetorical relations, and the ROUGE scores of the generated summaries. The experimental results show that our method performed well, which indicates the promising potential of applying rhetorical relations to text clustering for the benefit of multi-document text summarization.

  19. Figure summarizer browser extensions for PubMed Central

    OpenAIRE

    Agarwal, Shashank; Yu, Hong

    2011-01-01

    Summary: Figures in biomedical articles present visual evidence for research facts and help readers understand the article better. However, when figures are taken out of context, it is difficult to understand their content. We developed a summarization algorithm to summarize the content of figures and used it in our figure search engine (http://figuresearch.askhermes.org/). In this article, we report on the development of web browser extensions for Mozilla Firefox, Google Chrome and Apple Saf...

  20. Extractive summarization using complex networks and syntactic dependency

    Science.gov (United States)

    Amancio, Diego R.; Nunes, Maria G. V.; Oliveira, Osvaldo N.; Costa, Luciano da F.

    2012-02-01

    The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, so far only a few metrics of complex networks have been used and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method the Rouge score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable to produce good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the Rouge score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and treating text in general.
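As a rough illustration of the network-based ranking idea, the sketch below builds a sentence similarity graph and ranks sentences by weighted betweenness, one of the metrics the paper studies. The similarity callable, the threshold, and the adjacency construction are assumptions; the paper's best results actually come from diversity-based strategies.

```python
import itertools
import networkx as nx

def rank_by_betweenness(sentences, similarity, threshold=0.1):
    """Build a sentence graph from pairwise similarity and rank
    sentences by betweenness centrality (illustrative sketch)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = similarity(sentences[i], sentences[j])
        if w >= threshold:                    # keep only sufficiently similar pairs
            g.add_edge(i, j, weight=w)
    scores = nx.betweenness_centrality(g, weight="weight")
    return sorted(range(len(sentences)), key=scores.get, reverse=True)
```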

  1. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches.
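The "subspace via sampling" idea can be pictured with a generic Nyström-style kernel approximation, which avoids materializing the full n-by-n kernel matrix. This is a standard construction used here as a stand-in, not the authors' exact AKCL algorithm.

```python
import numpy as np

def nystrom_kernel(X, m, kernel, seed=None):
    """Approximate the full kernel matrix from m sampled landmarks:
    K ~ C @ pinv(W) @ C.T, where C holds kernel values against the
    landmarks and W is the landmark-landmark block."""
    rng = np.random.default_rng(seed)
    n = len(X)
    idx = rng.choice(n, size=m, replace=False)              # landmark sample
    C = np.array([[kernel(X[i], X[j]) for j in idx] for i in range(n)])
    W = C[idx]                                              # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T
```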

  2. WEB DOCUMENT SEGMENTATION USING FREQUENT TERM SETS FOR SUMMARIZATION

    Directory of Open Access Journals (Sweden)

    Sarukesi Karunakaran

    2012-01-01

Full Text Available Query-sensitive summarization aims at extracting the query-relevant contents from web documents. Web page segmentation reduces the run-time overhead of summarization systems by grouping the related contents of a web page into segments. At query time, query-relevant segments of the web page are identified, and important sentences from these segments are extracted to compose the summary. The DOM tree structures of the web documents are utilized to perform the segmentation of the contents. Leaf nodes of DOM trees are merged to form segments according to a statistical and linguistic similarity measure. The proposed system has been evaluated by an intrinsic approach making use of a user satisfaction index. The performance of the system is compared with summarization without preprocessed segments. The performance of this system is more promising than that of measures like cosine similarity and the Jaccard measure, which make use of sparse term-frequency vectors, since the most frequent term sets are considered to measure relevance. Only relevant segments need to be processed at run time for summarization, which reduces the time complexity of the summarization process.
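A minimal sketch of the segmentation step, assuming a generic boolean similarity test in place of the paper's frequent-term-set measure: text-bearing DOM leaves are collected in document order and adjacent similar leaves are merged into segments.

```python
from bs4 import BeautifulSoup

def segment_dom(html, similar):
    """Collect text-bearing leaf nodes and merge adjacent similar ones.
    `similar(a, b)` is any boolean similarity test (assumed)."""
    soup = BeautifulSoup(html, "html.parser")
    leaves = [str(el).strip() for el in soup.find_all(string=True) if str(el).strip()]
    segments = []
    for text in leaves:
        if segments and similar(segments[-1], text):
            segments[-1] += " " + text        # merge into the previous segment
        else:
            segments.append(text)             # start a new segment
    return segments
```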

  3. Syntactic and Sentence Feature Based Hybrid Approach for Text Summarization

    Directory of Open Access Journals (Sweden)

    D.Y. Sakhare

    2014-02-01

Full Text Available Recently, there has been significant research in automatic text summarization using feature-based techniques, most of which utilize one of the soft computing techniques. However, making use of the syntactic structure of the sentences for text summarization has not been widely applied, because it is difficult to handle in the summarization process. On the other hand, feature-based techniques in the literature have shown efficient results. Combining syntactic structure with feature-based techniques should therefore smooth the summarization process and improve efficiency. With the intention of combining the two techniques, we present an approach to text summarization that combines features and the syntactic structure of the sentences. Two neural networks are trained, one on the feature scores and one on the syntactic structure of the sentences. Finally, the two neural networks are combined with a weighted average to compute the score of each sentence. The experimentation is carried out using the DUC 2002 dataset for various compression ratios. The results show that the proposed approach achieved an F-measure of 80% for a compression ratio of 50%, which is better than existing techniques.

  4. An answer summarization method based on keyword extraction

    Directory of Open Access Journals (Sweden)

    Fan Qiaoqing

    2017-01-01

Full Text Available In order to reduce the redundancy of answer summaries generated from community Q&A datasets without topic tags, we propose an answer summarization algorithm based on keyword extraction. We combine tf-idf with word vectors to change the influence-transfer ratio equation in TextRank. Then, during summarization, we take the ratio of the number of sentences containing any keyword to the total number of candidate sentences as an adaptive factor for AMMR. Meanwhile, we reuse the keyword scores generated by TextRank as a weight factor for computing sentence similarity. Experimental results show that the proposed answer summarization is better than the traditional MMR and AMMR.
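The adaptive factor can be pictured with the following sketch: the trade-off weight in a greedy MMR loop is set to the fraction of candidate sentences containing any extracted keyword. The helper names and the exact weighting are assumptions for illustration, not the paper's precise equations.

```python
def adaptive_mmr(candidates, relevance, sim, keywords, top_k=3):
    """Greedy MMR selection whose relevance/redundancy balance `lam`
    adapts to keyword coverage among the candidates (illustrative)."""
    has_kw = sum(any(k in s for k in keywords) for s in candidates)
    lam = has_kw / max(len(candidates), 1)          # adaptive factor
    selected, pool = [], list(candidates)
    while pool and len(selected) < top_k:
        def mmr(s):
            redundancy = max((sim(s, t) for t in selected), default=0.0)
            return lam * relevance(s) - (1.0 - lam) * redundancy
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected
```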

  5. Text Summarization Using FrameNet-Based Semantic Graph Model

    Directory of Open Access Journals (Sweden)

    Xu Han

    2016-01-01

Full Text Available Text summarization is the generation of a condensed version of an original document. The major issues in text summarization are eliminating redundant information, identifying important differences among documents, and recovering the informative content. This paper proposes a FrameNet-based Semantic Graph Model (FSGM) that exploits the semantic information of sentences. FSGM treats sentences as vertexes and the semantic relationships between them as edges. It uses FrameNet and word embeddings to calculate the similarity of sentences. The method assigns weights to both sentence nodes and edges. It then applies an improved method to rank the sentences, considering both internal and external information. The experimental results show that the model is feasible and effective for text summarization.

  6. Video Analytics for Indexing, Summarization and Searching of Video Archives

    Energy Technology Data Exchange (ETDEWEB)

    Trease, Harold E.; Trease, Lynn L.

    2009-08-01

This paper will be submitted to the proceedings of The Eleventh IASTED International Conference on Signal and Image Processing. Given a video or video archive, how does one effectively and quickly summarize, classify, and search the information contained within the data? This paper addresses these issues by describing a process for the automated generation of a table-of-contents and keyword, topic-based index tables that can be used to catalogue, summarize, and search large amounts of video data. Having the ability to index and search the information contained within the videos, beyond just metadata tags, provides a mechanism to extract and identify "useful" content from image and video data.

7. A list of tables summarizing various Cmap analyses, on which the final tables in the manuscript are based

    Data.gov (United States)

U.S. Environmental Protection Agency — Various Cmap analyses within and across species and microarray platforms were conducted and summarized to generate the tables in the publication. This dataset is...

  8. Mining Sequential Update Summarization with Hierarchical Text Analysis

    Directory of Open Access Journals (Sweden)

    Chunyun Zhang

    2016-01-01

Full Text Available The outbreak of unexpected news events such as major accidents or natural disasters brings about a new information access problem where traditional approaches fail. News of these events is typically sparse early on and redundant later. Hence, it is very important to get updates and provide individuals with timely and important information about these incidents as they develop, especially in wireless and mobile Internet of Things (IoT) applications. In this paper, we define the problem of sequential update summarization extraction and present a new hierarchical update mining system which can broadcast useful, new, and timely sentence-length updates about a developing event. The new system proposes a novel method which incorporates techniques from topic-level and sentence-level summarization. To evaluate the performance of the proposed system, we apply it to the sequential update summarization task of the temporal summarization (TS) track at the Text Retrieval Conference (TREC) 2013 and compute four measurements of the update mining system: expected gain, expected latency gain, comprehensiveness, and latency comprehensiveness. Experimental results show that our proposed method has good performance.

  9. Use of Laplacian Projection Technique for Summarizing Likert Scale Annotations

    OpenAIRE

    Tanveer, M. Iftekhar

    2015-01-01

    Summarizing Likert scale ratings from human annotators is an important step for collecting human judgments. In this project we study a novel, graph theoretic method for this purpose. We also analyze a few interesting properties for this approach using real annotation datasets.

  10. Enhancing biomedical text summarization using semantic relation extraction.

    Directory of Open Access Journals (Sweden)

    Yue Shang

Full Text Available Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amounts of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: (1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; (2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; and (3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.

11. p-adic Gauss integrals from the Poisson summation formula

    CERN Document Server

    Prokhorenko, D V

    2011-01-01

In the present paper we show how to obtain the well-known formula for Gauss sums and the Gauss reciprocity law from the Poisson summation formula by using some ideas from renormalization and ergodic theories. We also apply our method to obtain a new simple derivation of the standard formula for p-adic Gauss integrals.

  12. Towards App-based Formative Feedback to Support Summarizing Skills

    NARCIS (Netherlands)

    Van Rosmalen, Peter; Kester, Liesbeth; Boshuizen, Els

    2013-01-01

    Van Rosmalen, P., Kester, L., & Boshuizen, H. P. A. (2013). Towards App‐based Formative Feedback to Support Summarizing Skills. ECTEL 2013: Workshop on Technology-Enhanced Formative Assessment (TEFA). September, 17-18, 2013, Paphos, Cyprus. Available online at: http://www.kbs.uni-hannover.de/tefa201

  13. QCS : a system for querying, clustering, and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.

    2006-08-01

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of

  14. QCS: a system for querying, clustering and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O'Leary, Dianne P. (University of Maryland, College Park, MD); Conroy, John M. (Center for Computing Sciences, Bowie, MD)

    2006-10-01

Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design

  15. Mendelian randomization analysis with multiple genetic variants using summarized data.

    Science.gov (United States)

    Burgess, Stephen; Butterworth, Adam; Thompson, Simon G

    2013-11-01

Genome-wide association studies, which typically report regression coefficients summarizing the associations of many genetic variants with various traits, are potentially a powerful source of data for Mendelian randomization investigations. We demonstrate how such coefficients from multiple variants can be combined in a Mendelian randomization analysis to estimate the causal effect of a risk factor on an outcome. The bias and efficiency of estimates based on summarized data are compared to those based on individual-level data in simulation studies. We investigate the impact of gene-gene interactions, linkage disequilibrium, and 'weak instruments' on these estimates. Both an inverse-variance weighted average of variant-specific associations and a likelihood-based approach for summarized data give similar estimates and precision to the two-stage least squares method for individual-level data, even when there are gene-gene interactions. However, these summarized data methods overstate precision when variants are in linkage disequilibrium. If the P-value in a linear regression of the risk factor for each variant is less than 1×10⁻⁵, then weak instrument bias will be small. We use these methods to estimate the causal association of low-density lipoprotein cholesterol (LDL-C) on coronary artery disease using published data on five genetic variants. A 30% reduction in LDL-C is estimated to reduce coronary artery disease risk by 67% (95% CI: 54% to 76%). We conclude that Mendelian randomization investigations using summarized data from uncorrelated variants are similarly efficient to those using individual-level data, although the necessary assumptions cannot be so fully assessed.
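The inverse-variance weighted combination described here has a compact closed form. The sketch below implements the standard textbook version for uncorrelated variants (variant-specific ratio estimates weighted by the outcome precisions); it should be checked against the paper before being relied on.

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Inverse-variance weighted causal estimate from summarized data:
    beta = sum(bx*by/se^2) / sum(bx^2/se^2), with standard error
    sqrt(1 / sum(bx^2/se^2)). Assumes uncorrelated variants."""
    beta_x, beta_y, se_y = (np.asarray(a, dtype=float) for a in (beta_x, beta_y, se_y))
    w = beta_x**2 / se_y**2                      # per-variant weights
    beta = np.sum(beta_x * beta_y / se_y**2) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se
```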

  16. Generalized minimum dominating set and application in automatic text summarization

    CERN Document Server

    Xu, Yi-Zhi

    2016-01-01

    For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method to this automatic text summarization problem.
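For intuition about the problem definition (not the paper's belief-propagation solver), a naive greedy baseline might look as follows; `weight` is assumed to be a dict-of-dicts of edge weights, 0 when absent.

```python
def greedy_generalized_mds(nodes, weight, threshold):
    """Grow a set D until every outside vertex has summed edge weight
    into D of at least `threshold` (greedy heuristic, illustrative only)."""
    dominating = set()
    def uncovered():
        return [v for v in nodes if v not in dominating
                and sum(weight[v].get(u, 0.0) for u in dominating) < threshold]
    needy = uncovered()
    while needy:
        # add the candidate contributing the most weight to uncovered vertices
        best = max((c for c in nodes if c not in dominating),
                   key=lambda c: sum(weight[v].get(c, 0.0) for v in needy))
        dominating.add(best)
        needy = uncovered()
    return dominating
```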

  17. Multimodal Stereoscopic Movie Summarization Conforming to Narrative Characteristics.

    Science.gov (United States)

    Mademlis, Ioannis; Tefas, Anastasios; Nikolaidis, Nikos; Pitas, Ioannis

    2016-10-05

Video summarization is a timely and rapidly developing research field with broad commercial interest, due to the increasing availability of massive video data. Relevant algorithms face the challenge of needing to achieve a careful balance between summary compactness, enjoyability, and content coverage. The specific case of stereoscopic 3D theatrical films has become more important over the past years, but has not received corresponding research attention. In the present work, a multi-stage, multimodal summarization process for such stereoscopic movies is proposed that is able to extract a short, representative video skim conforming to narrative characteristics from a 3D film. At the initial stage, a novel, low-level video frame description method is introduced (Frame Moments Descriptor, or FMoD), that compactly captures informative image statistics from luminance, color, optical flow and stereoscopic disparity video data, both at a global and at a local scale. Thus, scene texture, illumination, motion and geometry properties may succinctly be contained within a single frame feature descriptor, which can subsequently be employed as a building block in any key-frame extraction scheme, e.g., for intra-shot frame clustering. The computed key-frames are then used to construct a movie summary in the form of a video skim, which is post-processed in a manner that also takes into account the audio modality. The next stage of the proposed summarization pipeline essentially performs shot pruning, controlled by a user-provided shot retention parameter, that removes segments from the skim based on the narrative prominence of movie characters in both the visual and the audio modalities. This novel process (Multimodal Shot Pruning, or MSP) is algebraically modelled as a multimodal matrix Column Subset Selection Problem, which is solved using an evolutionary computing approach. Subsequently, disorienting editing effects induced by summarization are dealt with, through manipulation of

  18. Leveraging Usage Data for Linked Data Movie Entity Summarization

    CERN Document Server

    Thalhammer, Andreas; Roa-Valverde, Antonio; Fensel, Dieter

    2012-01-01

Novel research in the field of Linked Data focuses on the problem of entity summarization. This field addresses the problem of ranking features according to their importance for the task of identifying a particular entity. Beyond a more human-friendly presentation, such summarizations can play a central role for semantic search engines and semantic recommender systems. Current approaches have tried to apply entity summarization based on patterns that are inherent to the regarded data. The approach proposed in this paper focuses on the movie domain. It utilizes usage data in order to support measuring the similarity between movie entities. Using this similarity it is possible to determine the k-nearest neighbors of an entity. This leads to the idea that features that entities share with their nearest neighbors can be considered significant or important for these entities. Additionally, we introduce a downgrading factor (similar to TF-IDF) in order to overcome the high number of commonly occurri...

  19. Summarization-based image resizing by intelligent object carving.

    Science.gov (United States)

    Dong, Weiming; Zhou, Ning; Lee, Tong-Yee; Wu, Fuzhang; Kong, Yan; Zhang, Xiaopeng

    2014-01-01

Image resizing can be achieved more effectively with a better understanding of image semantics. In this paper, similar patterns that exist in many real-world images are analyzed. By interactively detecting similar objects in an image, the image content can be summarized rather than simply distorted or cropped. This method enables the manipulation of image pixels or patches, as well as semantic objects in the scene, during the image resizing process. Given the special nature of similar objects in a general image, the integration of a novel object carving (OC) operator with the multi-operator framework is proposed for summarizing similar objects. The object removal sequence in the summarization strategy directly affects resizing quality. We demonstrate how to evaluate the visual importance of objects and how to optimally select candidates for object carving. To achieve practical resizing applications for general images, a template matching-based method is developed. This method can detect similar objects even when they are of various colors, transformed in terms of perspective, or partially occluded. To validate the proposed method, comparisons with state-of-the-art resizing techniques and a user study were conducted. Convincing visual results demonstrate the effectiveness of the proposed method.

  20. Personalized summarization using user preference for m-learning

    Science.gov (United States)

    Lee, Sihyoung; Yang, Seungji; Ro, Yong Man; Kim, Hyoung Joong

    2008-02-01

As Internet and multimedia technology advances, digital multimedia content is also becoming abundant in the learning area. In order to facilitate access to digital knowledge and to meet the need for lifelong learning, e-learning can be a helpful alternative to conventional learning paradigms. E-learning is known as a unifying term for online, web-based, technology-delivered learning. Mobile learning (m-learning) is defined as e-learning through mobile devices using wireless transmission. In a survey, more than half of the respondents remarked that re-consumption was one of the convenient features of e-learning. However, it is not easy to find a user's preferred segment in a full version of lengthy e-learning content. Especially in m-learning, a content-summarization method is strongly required because mobile devices are limited in processing power and battery capacity. In this paper, we propose a new user preference model to construct personalized summaries for re-consumption. The user preference for re-consumption is modeled statistically based on user actions. Based on this user preference model with personalized user actions, our method discriminates preferred parts over the entire content. Experimental results demonstrated successful personalized summarization.

  1. Validity of the Eikonal Approximation

    OpenAIRE

    Kabat, Daniel

    1992-01-01

    We summarize results on the reliability of the eikonal approximation in obtaining the high energy behavior of a two particle forward scattering amplitude. Reliability depends on the spin of the exchanged field. For scalar fields the eikonal fails at eighth order in perturbation theory, when it misses the leading behavior of the exchange-type diagrams. In a vector theory the eikonal gets the exchange diagrams correctly, but fails by ignoring certain non-exchange graphs which dominate the asymp...

  2. Applying a sunburst visualization to summarize user navigation sequences.

    Science.gov (United States)

    Rodden, Kerry

    2014-01-01

    For many Web-based applications, it's important to be able to analyze the paths users have taken through a site--for example, to understand how they're discovering engaging content. These paths are difficult to summarize visually because of the underlying data's complexity. A Google researcher applied a sunburst visualization to this problem, after simplifying the data into a hierarchical format. The resulting visualization was successful in YouTube and is widely referenced and accessed. The code for the visualization is available as open source.
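The simplification into a hierarchical format can be sketched as follows: raw navigation sequences are collapsed into nested counts in the common {name, children, size} JSON convention that sunburst implementations typically consume. The exact schema of the original visualization may differ; this is an illustrative assumption.

```python
def paths_to_hierarchy(paths):
    """Collapse navigation sequences (lists of page names) into a
    nested tree, with `size` counting sequences ending at each node."""
    root = {"name": "root", "children": {}}
    for path in paths:                      # e.g. ["home", "video", "share"]
        node = root
        for step in path:
            node = node["children"].setdefault(
                step, {"name": step, "children": {}, "size": 0})
        node["size"] += 1
    def finalize(node):                     # convert child dicts to lists for JSON
        kids = [finalize(c) for c in node.pop("children").values()]
        if kids:
            node["children"] = kids
        return node
    return finalize(root)
```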

  3. Algorithms and estimators for summarization of unaggregated data streams

    DEFF Research Database (Denmark)

    Cohen, Edith; Duffield, Nick; Kaplan, Haim

    2014-01-01

Statistical summaries of IP traffic are at the heart of network operation and are used to recover aggregate information on subpopulations of flows. It is therefore of great importance to collect the most accurate and informative summaries given the router's resource constraints...

4. [Summarization of studies on Chinese marine medicinal animal Syngnathus acus].

    Science.gov (United States)

    Li, C; Zou, G; Bian, H; Ju, X

    2001-09-01

Syngnathus acus L. is a very important traditional Chinese medicine from the sea. It is rich in amino acids, protein, trace elements, polyunsaturated fatty acids, etc. Syngnathus acus has sex-hormone-like, anti-cancer, and anti-fatigue effects. It can also improve organism immunity and enhance the systolic strength of the heart muscle. It can be widely used in many fields such as food, medicine, and the aquatic products industry. Its classification, resources, chemical composition, and medical value are summarized in this article.

  5. Towards an Automatic Forum Summarization to Support Tutoring

    Science.gov (United States)

    Carbonaro, Antonella

The process of summarizing information is becoming increasingly important in the light of recent advances in resource creation and distribution and the resulting influx of large amounts of information into everyday life. These advances are also challenging educational institutions to adopt the opportunities of distributed knowledge sharing and communication. Among the most recent trends, the availability of social communication networks, knowledge representation, and active learning gives rise to a new landscape of learning as a networked, situated, contextual, and life-long activity. In this scenario, new perspectives on learning and teaching processes must be developed and supported, relating learning models, content-based tools, social organization, and knowledge sharing.

  6. Summarizing health inequalities in a Balanced Scorecard. Methodological considerations.

    Science.gov (United States)

    Auger, Nathalie; Raynault, Marie-France

    2006-01-01

    The association between social determinants and health inequalities is well recognized. What are now needed are tools to assist in disseminating such information. This article describes how the Balanced Scorecard may be used for summarizing data on health inequalities. The process begins by selecting appropriate social groups and indicators, and is followed by the measurement of differences across person, place, or time. The next step is to decide whether to focus on absolute versus relative inequality. The last step is to determine the scoring method, including whether to address issues of depth of inequality.

  7. Hierarchical clustering techniques for image database organization and summarization

    Science.gov (United States)

    Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

This paper investigates clustering techniques as a method of organizing image databases to support popular visual management functions such as searching, browsing, and navigation. Different types of hierarchical agglomerative clustering techniques are studied as methods of organizing the feature space as well as summarizing image groups through the selection of a few appropriate representatives. Retrieval performance using both single-level and multiple-level hierarchies is evaluated, and the algorithms show an interesting relationship between the top k correct retrievals and the number of comparisons required. Some arguments are given to support the use of such cluster-based techniques for managing distributed image databases.

  8. Improving readability through extractive summarization for learners with reading difficulties

    Directory of Open Access Journals (Sweden)

    K. Nandhini

    2013-11-01

Full Text Available In this paper, we describe the design and evaluation of an extractive summarization approach to assist learners with reading difficulties. Whereas existing summarization approaches inherently assign more weight to the important sentences, our approach predicts summary sentences that are both important and readable to the target audience, with good accuracy. We used a supervised machine learning technique for summary extraction from science and social-studies texts in educational material. Various independent features from the existing literature for predicting important sentences, together with proposed learner-dependent features for predicting readable sentences, are extracted from texts and used for automatic classification. We performed both extrinsic and intrinsic evaluation of this approach; the intrinsic evaluation is carried out using F-measure and readability analysis. The extrinsic evaluation comprises learner feedback on a Likert scale and an ANOVA analysis of the effect of the assistive summary on improving readability for learners with reading difficulties. The results show a significant improvement in readability for the target audience using the assistive summary.

9. AUTOMATIC PATENT DOCUMENT SUMMARIZATION FOR COLLABORATIVE KNOWLEDGE SYSTEMS AND SERVICES

    Institute of Scientific and Technical Information of China (English)

Amy J.C. TRAPPEY; Charles V. TRAPPEY; Chun-Yi WU

    2009-01-01

Engineering and research teams often develop new products and technologies by referring to inventions described in patent databases. Efficient patent analysis builds R&D knowledge, reduces new product development time, increases market success, and reduces potential patent infringement. Thus, it is beneficial to automatically and systematically extract information from patent documents in order to improve knowledge sharing and collaboration among R&D team members. In this research, patents are summarized using a combined ontology-based and TF-IDF concept clustering approach. The ontology captures the general knowledge and core meaning of patents in a given domain. The proposed methodology then extracts, clusters, and integrates the content of a patent to derive a summary and a cluster tree diagram of key terms. Patents from the International Patent Classification (IPC) codes B25C, B25D, and B25F (categories for power hand tools) and B24B, C09G, and H011 (categories for chemical mechanical polishing) are used as case studies to evaluate the compression ratio, retention ratio, and classification accuracy of the summarization results. The evaluation uses statistics to represent the summary generation and its compression ratio, the ontology-based keyword extraction retention ratio, and the summary classification accuracy. The results show that the ontology-based approach yields about the same compression ratio as previous non-ontology-based research, but yields on average an 11% improvement in retention ratio and a 14% improvement in classification accuracy.

  10. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging video. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information, and it automatically finds the number of key-frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.

  11. Summarizing Large-Scale Database Schema Using Community Detection

    Institute of Scientific and Technical Information of China (English)

    Xue Wang; Xuan Zhou; Shan Wang

    2012-01-01

Schema summarization on large-scale databases is a challenge. In a typical large database schema, a great proportion of the tables are closely connected through a few high-degree tables. It is thus difficult to separate these tables into clusters that represent different topics. Moreover, as a schema can be very big, the schema summary needs to be structured into multiple levels to further improve usability. In this paper, we introduce a new schema summarization approach utilizing techniques of community detection in social networks. Our approach contains three steps. First, we use a community detection algorithm to divide a database schema into subject groups, each representing a specific subject. Second, we cluster the subject groups into abstract domains to form a multi-level navigation structure. Third, we discover representative tables in each cluster to label the schema summary. We evaluate our approach on Freebase, a real-world large-scale database. The results show that our approach can identify subject groups precisely. The generated abstract schema layers are very helpful for users exploring the database.
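The first step can be sketched with an off-the-shelf community detection routine: tables become nodes, foreign-key links become edges, and each detected community is a candidate subject group. Greedy modularity and the highest-degree representative below are stand-ins for the paper's actual community algorithm and labeling step.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def subject_groups(foreign_keys):
    """foreign_keys: iterable of (table_a, table_b) link pairs.
    Returns {representative_table: sorted member tables} per community."""
    g = nx.Graph(foreign_keys)
    groups = greedy_modularity_communities(g)
    # crude labeling: pick the highest-degree table in each community
    return {max(grp, key=g.degree): sorted(grp) for grp in groups}
```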

  12. Summarization on Evaluation of Ecological Value of Artificial Forest

    Institute of Scientific and Technical Information of China (English)

LIU Tao; ZHANG Huaxing

    2005-01-01

This paper is a summarization of the evaluation of the ecological value of artificial forests. The main contents include: (i) the difference in concepts between the ecological function, ecological efficiency, and ecological benefits of artificial forests; (ii) the motive for, and several channels of, economic feedback or compensation for ecological benefits; (iii) the ecological efficiencies of artificial forests and the main correlated factors affecting them; (iv) the basic mathematical correlations between the ecological efficiencies of artificial forests and the related factors; (v) the service range of the ecological efficiencies of artificial forests; and (vi) the basic principles of measuring the ecological efficiencies of artificial forests. At the end, the basic methods of measuring the main ecological efficiencies of artificial forests are expounded.

  13. A Graph Summarization Algorithm Based on RFID Logistics

    Science.gov (United States)

    Sun, Yan; Hu, Kongfa; Lu, Zhipeng; Zhao, Li; Chen, Ling

Radio Frequency Identification (RFID) applications are set to play an essential role in object tracking and supply chain management systems. The volume of data generated by a typical RFID application will be enormous, as each item generates a complete history of all the individual locations that it occupied at every point in time. The movement trails of such RFID data form a gigantic commodity flow graph representing the locations and durations of the path stages traversed by each item. In this paper, we use graphs to construct a warehouse of RFID commodity flows and introduce a database-style operation to summarize graphs, which produces a summary graph by grouping nodes based on user-selected node attributes and further allows users to control the hierarchy of summaries. It can cut down the size of graphs and provide convenience for users who want to study just the shrunk graph they are interested in. Through extensive experiments, we demonstrate the effectiveness and efficiency of the proposed method.

  14. The interplay between autonomy and dignity: summarizing patients voices.

    Science.gov (United States)

    Delmar, Charlotte

    2013-11-01

Patients have to be respected with dignity as the masters of their own lives. The problem, however, is that autonomy may become so dominant over the fundamental value of caring in professional nursing that the patient's dignity is affected. The aim of this article is to point out some of the issues in the interplay between autonomy, also called self-management, and dignity. Giving voice to the patient perspective, the background is provided by cases from research conducted through qualitative interviews with patients and expanded by summarizing empirical research concerning the interplay between autonomy and dignity. The search strategy and the research question yielded five empirical research papers and three theoretical studies and concept analyses. A concise overview of the relevant research contains information about all the major elements of the studies. The background research and an interpretative summary address new issues to be taken into account in dignity-conserving care.

  15. Disease Related Knowledge Summarization Based on Deep Graph Search

    Directory of Open Access Journals (Sweden)

    Xiaofang Wu

    2015-01-01

Full Text Available The volume of published biomedical literature on disease-related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summarization from biomedical literature. In this approach, Kullback-Leibler divergence combined with a mutual information metric is first used to extract disease-salient information. Then, deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method.
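The salience step can be pictured with per-term Kullback-Leibler contributions, ranking terms by how much they shift the disease-specific distribution away from the background. The smoothing and the paper's mutual-information combination are simplified away in this sketch.

```python
import math
from collections import Counter

def kl_salience(topic_tokens, background_tokens):
    """Rank terms by their contribution p*log(p/q) to KL(topic || background),
    with add-one smoothing (illustrative simplification)."""
    t, b = Counter(topic_tokens), Counter(background_tokens)
    nt, nb = sum(t.values()), sum(b.values())
    vocab = set(t) | set(b)
    score = {}
    for w in vocab:
        p = (t[w] + 1) / (nt + len(vocab))
        q = (b[w] + 1) / (nb + len(vocab))
        score[w] = p * math.log(p / q)        # per-term KL contribution
    return sorted(vocab, key=score.get, reverse=True)
```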

  16. Dynamic key-frame extraction for video summarization

    Science.gov (United States)

    Ciocca, Gianluigi; Schettini, Raimondo

    2005-01-01

We propose an innovative approach to the selection of representative frames of a video shot for video summarization. By analyzing the differences between two consecutive frames of a video sequence, the algorithm determines the complexity of the sequence in terms of visual content changes. Three descriptors are used to express a frame's visual content: a color histogram, wavelet statistics, and an edge direction histogram. Similarity measures are computed for each descriptor and combined to form a frame difference measure. The use of multiple descriptors provides a more precise representation, capturing even small variations in the frame sequence. This method can dynamically and rapidly select a variable number of key frames within each shot, and does not exhibit the complexity of existing methods based on clustering strategies.
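A single-descriptor simplification of this idea, assuming OpenCV and treating the threshold as a tuning parameter: consecutive frames are compared with a color histogram only (the paper combines color, wavelet statistics, and edge directions), and frames where the content changes sharply are kept.

```python
import cv2

def key_frames(video_path, threshold=0.4):
    """Keep frame indices where the color-histogram distance to the
    previous frame exceeds `threshold` (Bhattacharyya distance)."""
    cap = cv2.VideoCapture(video_path)
    keys, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev is None or cv2.compareHist(prev, hist,
                                           cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keys.append(idx)               # content changed enough: keep this frame
        prev, idx = hist, idx + 1
    cap.release()
    return keys
```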

  17. Summarized data to achieve population-wide anonymized wellness measures.

    Science.gov (United States)

    Clarke, Andrew; Steele, Robert

    2012-01-01

    The growth in smartphone market share has seen the increasing emergence of individuals collecting quantitative wellness data. Beyond the potential health benefits for the individual in regards to managing their own health, the data is highly related to preventative and risk factors for a number of lifestyle related diseases. This data has often been a component of public health data collection and epidemiological studies due to its large impact on the health system with chronic and lifestyle diseases increasingly being a major burden for the health service. However, collection of this kind of information from large segments of the community in a usable fashion has not been specifically explored in previous work. In this paper we discuss some of the technologies that increase the ease and capability of gathering quantitative wellness data via smartphones, how specific and detailed this data needs to be for public health use and the challenges of such anonymized data collection for public health. Additionally, we propose a conceptual architecture that includes the necessary components to support this approach to data collection.

  18. VISUAL ATTENTION BASED KEYFRAMES EXTRACTION AND VIDEO SUMMARIZATION

    Directory of Open Access Journals (Sweden)

P. Geetha

    2012-05-01

Full Text Available Recent developments in digital video and the drastic increase of Internet use have increased the number of people searching for and watching videos online. In order to make the search for videos easy, a summary may be provided along with each video. The video summary should be effective, so that the user comes to know the content of the video without having to watch it fully. The summary produced should consist of the key frames that effectively express the content and context of the video. This work suggests a method to extract key frames that express most of the information in the video. This is achieved by quantifying the visual attention each frame commands. The visual attention of each frame is quantified using a descriptor called the attention quantifier. This quantification of visual attention is based on the human attention mechanism, which indicates that color conspicuousness and motion attract more attention. Based on the color conspicuousness and the motion involved, each frame is given an attention parameter. Based on the attention quantifier value, the key frames are extracted and summarized adaptively. This framework produces a meaningful video summary.

  19. Approximate Public Key Authentication with Information Hiding

    Energy Technology Data Exchange (ETDEWEB)

    THOMAS,EDWARD V.; DRAELOS,TIMOTHY J.

    2000-10-01

    This paper describes a solution for the problem of authenticating the shapes of statistically variant gamma spectra while simultaneously concealing the shapes and magnitudes of the sensitive spectra. The shape of a spectrum is given by the relative magnitudes and positions of the individual spectral elements. Class-specific linear orthonormal transformations of the measured spectra are used to produce output that meet both the authentication and concealment requirements. For purposes of concealment, the n-dimensional gamma spectra are transformed into n-dimensional output spectra that are effectively indistinguishable from Gaussian white noise (independent of the class). In addition, the proposed transformations are such that statistical authentication metrics computed on the transformed spectra are identical to those computed on the original spectra.

  20. Applying Semantics in Dataset Summarization for Solar Data Ingest Pipelines

    Science.gov (United States)

    Michaelis, J.; McGuinness, D. L.; Zednik, S.; West, P.; Fox, P. A.

    2012-12-01

    for supporting the following use cases: (i) Temporal alignment of time-stamped MLSO observations with raw data gathered at MLSO. (ii) Linking of multiple visualization entries to common (and structurally complex) workflow structures - designed to capture the visualization generation process. To provide real-world use cases for the described approach, a semantic summarization system is being developed for data gathered from HAO's Coronal Multi-channel Polarimeter (CoMP) and Chromospheric Helium-I Imaging Photometer (CHIP) pipelines. Web Links: [1] http://mlso.hao.ucar.edu/ [2] http://www.w3.org/TR/vocab-data-cube/

  1. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    Let (X,d) be a metric space and (Omega, d) a compact subspace of X which supports a non-atomic finite measure m. We consider `natural' classes of badly approximable subsets of Omega. Loosely speaking, these consist of points in Omega which `stay clear' of some given set of points in X. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework, as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension.

  2. Validity of the eikonal approximation

    CERN Document Server

    Kabat, D

    1992-01-01

    We summarize results on the reliability of the eikonal approximation in obtaining the high energy behavior of a two-particle forward scattering amplitude. Reliability depends on the spin of the exchanged field. For scalar fields the eikonal fails at eighth order in perturbation theory, when it misses the leading behavior of the exchange-type diagrams. In a vector theory the eikonal gets the exchange diagrams correctly, but fails by ignoring certain non-exchange graphs which dominate the asymptotic behavior of the full amplitude. For spin-2 tensor fields the eikonal captures the leading behavior of each order in perturbation theory, but the sum of eikonal terms is subdominant to graphs neglected by the approximation. We also comment on the eikonal for Yang-Mills vector exchange, where the additional complexities of the non-abelian theory may be absorbed into Regge-type modifications of the gauge boson propagators.

  3. Extractive text summarization system to aid data extraction from full text in systematic review development.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Hurdle, John F; Jonnalagadda, Siddhartha

    2016-12-01

    Extracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort, which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process. We developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables. At the sentence level, the computer-generated summaries covered more information than the human summaries for systematic reviews (recall 91.2% vs. 83.8%). Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Optimal Belief Approximation

    CERN Document Server

    Leike, Reimar H

    2016-01-01

    In Bayesian statistics probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically and approximations of beliefs are needed. We seek a ranking function that quantifies how "embarrassing" it is to communicate a given approximation. We show that there is only one ranking under the requirements that (1) the best ranked approximation is the non-approximated belief and (2) that the ranking judges approximations only by their predictions for actual outcomes. We find that this ranking is equivalent to the Kullback-Leibler divergence that is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. We hope that our elementary derivation settles the apparent confusion. We show for example that when approximating beliefs with Gaussian distributions the optimal approximation is given by moment matching. This is in contrast to many su...
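
    The two points above — that the order of the Kullback-Leibler arguments matters, and that the optimal Gaussian approximation matches moments — can be illustrated with a short sketch (the discrete distributions and the exponential stand-in for a non-Gaussian belief are arbitrary choices):

        import numpy as np

        def kl(p, q):
            """KL divergence between discrete distributions (strictly positive)."""
            return float(np.sum(p * np.log(p / q)))

        p = np.array([0.70, 0.20, 0.10])
        q = np.array([0.10, 0.20, 0.70])
        print(kl(p, q), kl(q, p))  # not equal: the argument order matters

        # Moment matching: the Gaussian q minimizing KL(p||q) has p's mean/variance.
        rng = np.random.default_rng(1)
        samples = rng.exponential(2.0, size=100_000)  # samples from a belief p
        print(samples.mean(), samples.std())          # parameters of the optimal q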

  5. Approximate flavor symmetries

    OpenAIRE

    Rašin, Andrija

    1994-01-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  6. On Element SDD Approximability

    CERN Document Server

    Avron, Haim; Toledo, Sivan

    2009-01-01

    This short communication shows that in some cases scalar elliptic finite element matrices cannot be approximated well by an SDD matrix. We also give a theoretical analysis of a simple heuristic method for approximating an element by an SDD matrix.

  7. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  8. Approximation of distributed delays

    CERN Document Server

    Lu, Hao; Eberard, Damien; Simon, Jean-Pierre

    2010-01-01

    We address in this paper the approximation problem of distributed delays. Such elements are convolution operators with kernel having bounded support, and appear in the control of time-delay systems. From the rich literature on this topic, we propose a general methodology to achieve such an approximation. For this, we enclose the approximation problem in the graph topology, and work with the norm defined over the convolution Banach algebra. The class of rational approximates is described, and a constructive approximation is proposed. Analysis in time and frequency domains is provided. This methodology is illustrated on the stabilization control problem, for which simulation results show the effectiveness of the proposed methodology.

  10. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  11. Canonical Sets of Best L1-Approximation

    Directory of Open Access Journals (Sweden)

    Dimiter Dryanov

    2012-01-01

    Full Text Available In mathematics, the term approximation usually means either interpolation on a point set or approximation with respect to a given distance. There is a concept, which joins the two approaches together, and this is the concept of characterization of the best approximants via interpolation. It turns out that for some large classes of functions the best approximants with respect to a certain distance can be constructed by interpolation on a point set that does not depend on the choice of the function to be approximated. Such point sets are called canonical sets of best approximation. The present paper summarizes results on canonical sets of best L1-approximation with emphasis on multivariate interpolation and best L1-approximation by blending functions. The best L1-approximants are characterized as transfinite interpolants on canonical sets. The notion of a Haar-Chebyshev system in the multivariate case is discussed also. In this context, it is shown that some multivariate interpolation spaces share properties of univariate Haar-Chebyshev systems. We study also the problem of best one-sided multivariate L1-approximation by sums of univariate functions. Explicit constructions of best one-sided L1-approximants give rise to well-known and new inequalities.

  12. Approximation techniques for engineers

    CERN Document Server

    Komzsik, Louis

    2006-01-01

    Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.

  13. Theory of approximation

    CERN Document Server

    Achieser, N I

    2004-01-01

    A pioneer of many modern developments in approximation theory, N. I. Achieser designed this graduate-level text from the standpoint of functional analysis. The first two chapters address approximation problems in linear normalized spaces and the ideas of P. L. Tchebysheff. Chapter III examines the elements of harmonic analysis, and Chapter IV, integral transcendental functions of the exponential type. The final two chapters explore the best harmonic approximation of functions and Wiener's theorem on approximation. Professor Achieser concludes this exemplary text with an extensive section of pr

  14. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  15. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  16. Approximate Modified Policy Iteration

    CERN Document Server

    Scherrer, Bruno; Ghavamzadeh, Mohammad; Geist, Matthieu

    2012-01-01

    Modified policy iteration (MPI) is a dynamic programming (DP) algorithm that contains the two celebrated policy and value iteration methods. Despite its generality, MPI has not been thoroughly studied, especially its approximation form which is used when the state and/or action spaces are large or infinite. In this paper, we propose three approximate MPI (AMPI) algorithms that are extensions of the well-known approximate DP algorithms: fitted-value iteration, fitted-Q iteration, and classification-based policy iteration. We provide an error propagation analysis for AMPI that unifies those for approximate policy and value iteration. We also provide a finite-sample analysis for the classification-based implementation of AMPI (CBMPI), which is more general than (and in some sense contains) the analysis of the other presented AMPI algorithms. An interesting observation is that the MPI's parameter allows us to control the balance of errors (in value function approximation and in estimating the greedy policy) in the fina...
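
    A minimal sketch of the exact MPI scheme on a tiny random MDP follows; the approximate variants discussed above replace the evaluation and greedy steps with fitted or estimated versions, and the state/action counts, discount factor, and m below are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        S, A, gamma, m = 6, 3, 0.9, 5
        P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
        R = rng.normal(size=(S, A))                 # immediate rewards

        v = np.zeros(S)
        for _ in range(50):
            q = R + gamma * (P @ v)                 # Q-values from current estimate
            pi = q.argmax(axis=1)                   # greedy improvement step
            for _ in range(m):                      # m Bellman backups: partial evaluation
                v = (R + gamma * (P @ v))[np.arange(S), pi]
        print(v)  # m=1 recovers value iteration; large m approaches policy iteration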

  17. Approximate calculation of integrals

    CERN Document Server

    Krylov, V I

    2006-01-01

    A systematic introduction to the principal ideas and results of the contemporary theory of approximate integration, this volume approaches its subject from the viewpoint of functional analysis. In addition, it offers a useful reference for practical computations. Its primary focus lies in the problem of approximate integration of functions of a single variable, rather than the more difficult problem of approximate integration of functions of more than one variable.The three-part treatment begins with concepts and theorems encountered in the theory of quadrature. The second part is devoted to t

  18. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  19. Approximating Stationary Statistical Properties

    Institute of Scientific and Technical Information of China (English)

    Xiaoming WANG

    2009-01-01

    It is well-known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.

  20. Rational approximations to fluid properties

    Science.gov (United States)

    Kincaid, J. M.

    1990-05-01

    The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function p~(T,rho) that contains a set of parameters gamma_i; the gamma_i are chosen such that p~(T,rho) provides a good fit to the experimental data. (Here p is the pressure, T the temperature, and rho the density.) In most cases, a nonlinear least-squares numerical method is used to determine the gamma_i. There are several drawbacks to this method: one has essentially to guess what p~(T,rho) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular, it lets the data choose the function p~(T,rho), and its numerical implementation involves only linear algorithms.
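
    The "only linear algorithms" point can be sketched as follows: multiplying the rational model N(x)/D(x) through by the denominator turns the fit into a single linear least-squares solve. A single variable x stands in for the (T, rho) dependence, and the target function and polynomial degrees are made up for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.1, 2.0, 200)
        y = np.exp(x) / (1.5 - 0.5 * x)  # stand-in "experimental" data

        deg_n, deg_d = 3, 2
        # y * (1 + b1 x + b2 x^2) = a0 + a1 x + a2 x^2 + a3 x^3 is linear in (a, b):
        A = np.hstack([x[:, None] ** np.arange(deg_n + 1),
                       -y[:, None] * x[:, None] ** np.arange(1, deg_d + 1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        a, b = coef[:deg_n + 1], coef[deg_n + 1:]

        num = x[:, None] ** np.arange(deg_n + 1) @ a
        den = 1.0 + x[:, None] ** np.arange(1, deg_d + 1) @ b
        print(np.max(np.abs(num / den - y)))  # maximum fit residual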

  1. Approximation of irrationals

    Directory of Open Access Journals (Sweden)

    Malvina Baica

    1985-01-01

    Full Text Available The author uses a new modification of the Jacobi-Perron Algorithm which holds for complex fields of any degree (abbr. ACF), and defines it as the Generalized Euclidean Algorithm (abbr. GEA) to approximate irrationals.

  2. Approximations in Inspection Planning

    DEFF Research Database (Denmark)

    Engelund, S.; Sørensen, John Dalsgaard; Faber, M. H.

    2000-01-01

    Planning of inspections of civil engineering structures may be performed within the framework of Bayesian decision analysis. The effort involved in a full Bayesian decision analysis is relatively large. Therefore, the actual inspection planning is usually performed using a number of approximations. One of the more important of these approximations is the assumption that all inspections will reveal no defects. Using this approximation, the optimal inspection plan may be determined on the basis of conditional probabilities, i.e. the probability of failure given no defects have been found by the inspection. In this paper the quality of this approximation is investigated. The inspection planning is formulated both as a full Bayesian decision problem and on the basis of the assumption that the inspection will reveal no defects.

  3. The Karlqvist approximation revisited

    CERN Document Server

    Tannous, C

    2015-01-01

    The Karlqvist approximation, signaling the historical beginning of magnetic recording head theory, is reviewed and compared to various approaches progressing from Green's function and Fourier methods to conformal mapping, which obeys the Sommerfeld edge condition at angular points and leads to exact results.

  5. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  6. Approximation Behooves Calibration

    DEFF Research Database (Denmark)

    da Silva Ribeiro, André Manuel; Poulsen, Rolf

    2013-01-01

    Calibration based on an expansion approximation for option prices in the Heston stochastic volatility model gives stable, accurate, and fast results for S&P500-index option data over the period 2005–2009.

  7. Approximation and supposition

    Directory of Open Access Journals (Sweden)

    Maksim Duškin

    2015-11-01

    Full Text Available This article compares exponents of approximation (expressions like Russian около, примерно, приблизительно, более, свыше) and words expressing supposition (for example Russian скорее всего, наверное, возможно). These words are often confused in research; in particular, researchers often mention exponents of supposition when discussing exponents of approximation. Such an approach arouses some objections. The author intends to demonstrate in this article the notional difference between approximation and supposition, and therefore the difference between the exponents of these two notions. This difference can be described by specifying the different attitudes of approximation and supposition to the notion of knowledge. Supposition implies the speaker's ignorance of the exact number, while approximation does not imply such ignorance. The article offers examples proving this point of view.

  8. Approximation methods in gravitational-radiation theory

    Science.gov (United States)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  9. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
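
    The cost-saving idea — an unbiased estimator built from many cheap approximate measurements plus a few exact corrections — can be shown in a toy setting; the functions below are stand-ins, not lattice propagators:

        import numpy as np

        rng = np.random.default_rng(0)
        exact = lambda x: np.sin(x)        # "expensive" observable
        approx = lambda x: x - x**3 / 6.0  # "cheap" approximation (truncated series)

        x_many = rng.uniform(0.0, 1.0, size=10_000)  # cheap evaluations, many samples
        x_few = rng.uniform(0.0, 1.0, size=100)      # exact corrections, few samples

        # E[approx] + E[exact - approx] = E[exact], so the estimator is unbiased:
        estimate = approx(x_many).mean() + (exact(x_few) - approx(x_few)).mean()
        print(estimate, 1.0 - np.cos(1.0))  # both close to E[sin x] for x ~ U(0, 1)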

  10. Diophantine approximations on fractals

    CERN Document Server

    Einsiedler, Manfred; Shapira, Uri

    2009-01-01

    We exploit dynamical properties of diagonal actions to derive results in Diophantine approximations. In particular, we prove that the continued fraction expansion of almost any point on the middle third Cantor set (with respect to the natural measure) contains all finite patterns (hence is well approximable). Similarly, we show that for a variety of fractals in [0,1]^2, possessing some symmetry, almost any point is not Dirichlet improvable (hence is well approximable) and has property C (after Cassels). We then settle by similar methods a conjecture of M. Boshernitzan saying that there are no irrational numbers x in the unit interval such that the continued fraction expansions of {nx mod 1 : n is a natural number} are uniformly eventually bounded.
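
    For reference, the basic object of this record — the continued fraction expansion of a real number — is computed by iterating the Gauss map (floating-point precision limits the number of reliable partial quotients):

        def continued_fraction(x, terms=10):
            cf = []
            for _ in range(terms):
                a = int(x)          # integer part = next partial quotient
                cf.append(a)
                frac = x - a
                if frac < 1e-12:
                    break
                x = 1.0 / frac      # Gauss map
            return cf

        print(continued_fraction(2 ** 0.5))  # [1, 2, 2, 2, ...]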

  11. Monotone Boolean approximation

    Energy Technology Data Exchange (ETDEWEB)

    Hulme, B.L.

    1982-12-01

    This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
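
    The best monotone bounds described here admit a simple brute-force sketch for small n: the tightest monotone increasing function above f takes the maximum of f over all points below x, and dually for the lower bound (the XOR example is illustrative):

        from itertools import product

        def monotone_bounds(f, n):
            pts = list(product((0, 1), repeat=n))
            below = lambda y, x: all(yi <= xi for yi, xi in zip(y, x))
            upper = {x: max(f[y] for y in pts if below(y, x)) for x in pts}
            lower = {x: min(f[y] for y in pts if below(x, y)) for x in pts}
            return upper, lower

        # A non-monotone example: XOR of two variables.
        f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
        upper, lower = monotone_bounds(f, 2)
        print(upper)  # the OR function: best monotone upper bound of XOR
        print(lower)  # constant 0: best monotone lower bound of XOR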

  12. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations that are free of such singularities while remaining highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
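
    To illustrate the Padé construction invoked here — a denominator one order below the numerator — the sketch below applies SciPy's pade helper to the exponential series as a stand-in for the DSR expansion:

        import numpy as np
        from math import factorial
        from scipy.interpolate import pade

        an = [1.0 / factorial(k) for k in range(4)]  # Taylor coefficients of exp(x)
        p, q = pade(an, 1)                           # denominator order 1 -> [2/1]
        x = 0.5
        print(p(x) / q(x), np.exp(x))                # close near the expansion point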

  13. On Convex Quadratic Approximation

    NARCIS (Netherlands)

    den Hertog, D.; de Klerk, E.; Roos, J.

    2000-01-01

    In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of

  14. Local spline approximants

    OpenAIRE

    Norton, Andrew H.

    1991-01-01

    Local spline approximants offer a means for constructing finite difference formulae for numerical solution of PDEs. These formulae seem particularly well suited to situations in which the use of conventional formulae leads to non-linear computational instability of the time integration. This is explained in terms of frequency responses of the FDF.

  17. Exploring the style-technique interaction in extractive summarization of broadcast news

    OpenAIRE

    Kolluru, BalaKrishna; Christensen, Heidi; Gotoh, Yoshihiko; Renals, Steve

    2003-01-01

    In this paper we seek to explore the interaction between the style of a broadcast news story and its summarization technique. We report the performance of three different summarization techniques on broadcast news stories, which are split into planned speech and spontaneous speech. The initial results indicate that some summarization techniques work better for the documents with spontaneous speech than for those with planned speech. Even for human beings some documents are inherently difficul...

  18. Topology, calculus and approximation

    CERN Document Server

    Komornik, Vilmos

    2017-01-01

    Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...

  19. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.

  20. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  1. Topics in Metric Approximation

    Science.gov (United States)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  2. Approximate option pricing

    Energy Technology Data Exchange (ETDEWEB)

    Chalasani, P.; Saias, I. [Los Alamos National Lab., NM (United States); Jha, S. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
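
    The binomial model mentioned above prices an n-period American put by backward induction; a compact sketch follows (all parameters are illustrative):

        import numpy as np

        def american_put_binomial(S0, K, r, sigma, T, n):
            dt = T / n
            u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
            p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up-move probability
            disc = np.exp(-r * dt)
            S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
            V = np.maximum(K - S, 0.0)          # payoff at maturity
            for step in range(n, 0, -1):
                V = disc * (p * V[:-1] + (1 - p) * V[1:])  # continuation value
                S = S[:-1] / u                             # prices one period earlier
                V = np.maximum(V, K - S)                   # early-exercise check
            return V[0]

        print(american_put_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))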

  3. Finite elements and approximation

    CERN Document Server

    Zienkiewicz, O C

    2006-01-01

    A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o

  4. Effects of a Summarizing Strategy on Written Summaries of Children with Emotional and Behavioral Disorders

    Science.gov (United States)

    Saddler, Bruce; Asaro-Saddler, Kristie; Moeyaert, Mariola; Ellis-Robinson, Tammy

    2017-01-01

    In this single-subject study, we examined the effects of a summarizing strategy on the written summaries of children with emotional and behavioral disorders (EBDs). Six students with EBDs in fifth and sixth grades learned a mnemonic-based strategy for summarizing taught through the self-regulated strategy development (SRSD) approach. Visual…

  5. A Study of Cognitive Mapping as a Means to Improve Summarization and Comprehension of Expository Text.

    Science.gov (United States)

    Ruddell, Robert B.; Boyle, Owen F.

    1989-01-01

    Investigates the effects of cognitive mapping on written summarization and comprehension of expository text. Concludes that mapping appears to assist students in: (1) developing procedural knowledge resulting in more effective written summarization and (2) identifying and using supporting details in their essays. (MG)

  6. The Relative Effectiveness of Structured Questions and Summarizing on Near and Far Transfer Tasks.

    Science.gov (United States)

    Wang, Weimin

    The purpose of this study was to compare the effect of two learning strategies: summarizing and structured questions on near and far transfer tasks. The study explored the possible way to activate metacognitive strategies and critical thinking skills through the use of reflective activities, like summarizing or answering structured questions after…

  7. Text summarization in the biomedical domain: a systematic review of recent research.

    Science.gov (United States)

    Mishra, Rashmi; Bian, Jiantao; Fiszman, Marcelo; Weir, Charlene R; Jonnalagadda, Siddhartha; Mostafa, Javed; Del Fiol, Guilherme

    2014-12-01

    The amount of information for clinicians and clinical researchers is growing exponentially. Text summarization reduces information as an attempt to enable users to find and understand relevant source texts more quickly and effortlessly. In recent years, substantial research has been conducted to develop and evaluate various summarization techniques in the biomedical domain. The goal of this study was to systematically review recent published research on summarization of textual documents in the biomedical domain. MEDLINE (2000 to October 2013), IEEE Digital Library, and the ACM Digital Library were searched. Investigators independently screened and abstracted studies that examined text summarization techniques in the biomedical domain. Information is derived from selected articles on five dimensions: input, purpose, output, method and evaluation. Of 10,786 studies retrieved, 34 (0.3%) met the inclusion criteria. Natural language processing (17; 50%) and a hybrid technique comprising statistical, natural language processing and machine learning methods (15; 44%) were the most common summarization approaches. Most studies (28; 82%) conducted an intrinsic evaluation. This is the first systematic review of text summarization in the biomedical domain. The study identified research gaps and provides recommendations for guiding future research on biomedical text summarization. Recent research has focused on a hybrid technique comprising statistical, language processing and machine learning techniques. Further research is needed on the application and evaluation of text summarization in real research or patient care settings. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Assessment of home-based behavior modification programs for autistic children: reliability and validity of the behavioral summarized evaluation.

    Science.gov (United States)

    Oneal, Brent J; Reeb, Roger N; Korte, John R; Butter, Eliot J

    2006-01-01

    Since the publication of Lovaas' (1987) impressive findings, there has been a proliferation of home-based behavior modification programs for autistic children. Parents and other paraprofessionals often play key roles in the implementation and monitoring of these programs. The Behavioral Summarized Evaluation (BSE) was developed for professionals and paraprofessionals to use in assessing the severity of autistic symptoms over the course of treatment. This paper examined the psychometric properties of the BSE (inter-item consistency, factorial composition, convergent validity, and sensitivity to parents' perceptions of symptom change over time) when used by parents of autistic youngsters undergoing home-based intervention. Recommendations for future research are presented.

  10. Summarization based on physical features and logical structure of multi documents

    Institute of Scientific and Technical Information of China (English)

    Qin Bing; Liu Ting; Li Sheng

    2005-01-01

    With the rapid development of the Internet, multi-document summarization is becoming a very hot research topic. In order to generate a summary that effectively characterizes the original information in the documents, this paper proposes a multi-document summarization approach based on the physical features and logical structure of the document set. The method first clusters similar sentences into several Logical Topics (LTs), and then orders these topics according to the physical features of the documents. After that, sentences used for the summary are extracted from these LTs, and finally the summary is generated via certain sorting algorithms. Our experiments show that the information coverage rate of our method is 8.83% higher than that of methods based solely on logical structures, and 14.31% higher than the Top-N method.
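
    A toy sketch of the cluster-then-extract idea — group similar sentences into topics and keep the most central sentence of each — might look as follows, with TF-IDF and k-means standing in for the paper's own clustering and ordering steps:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        sentences = [
            "The storm closed three major highways on Monday.",
            "Highways reopened after the storm damage was cleared.",
            "The city council approved a new transit budget.",
            "Budget talks for transit funding continued this week.",
        ]
        X = TfidfVectorizer().fit_transform(sentences)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

        summary = []
        for topic in range(2):  # one representative sentence per logical topic
            idx = np.where(km.labels_ == topic)[0]
            dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[topic], axis=1)
            summary.append(sentences[int(idx[dists.argmin()])])
        print(summary)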

  11. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
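
    The simplest member of this class, rejection ABC, makes the likelihood-free idea concrete: draw parameters from the prior, simulate data, and keep draws whose simulated summary falls close to the observed one. In the sketch below, the model, prior, summary statistic, and tolerance are all illustrative choices:

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.normal(loc=3.0, scale=1.0, size=100)  # "data", true mean = 3
        s_obs = observed.mean()                              # summary statistic

        accepted = []
        for _ in range(20_000):
            theta = rng.uniform(-10.0, 10.0)                 # draw from the prior
            simulated = rng.normal(theta, 1.0, size=100)     # simulate; no likelihood
            if abs(simulated.mean() - s_obs) < 0.1:          # tolerance check
                accepted.append(theta)
        print(np.mean(accepted), np.std(accepted))           # approximate posterior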

  12. Studying the correlation between different word sense disambiguation methods and summarization effectiveness in biomedical texts

    Directory of Open Access Journals (Sweden)

    Díaz Alberto

    2011-08-01

    Full Text Available Background: Word sense disambiguation (WSD) attempts to solve lexical ambiguities by identifying the correct meaning of a word based on its context. WSD has been demonstrated to be an important step in knowledge-based approaches to automatic summarization. However, the correlation between the accuracy of the WSD methods and the summarization performance has never been studied. Results: We present three existing knowledge-based WSD approaches and a graph-based summarizer. Both the WSD approaches and the summarizer employ the Unified Medical Language System (UMLS) Metathesaurus as the knowledge source. We first evaluate WSD directly, by comparing the predictions of the WSD methods to two reference sets: the NLM WSD dataset and the MSH WSD collection. We next apply the different WSD methods as part of the summarizer, to map documents onto concepts in the UMLS Metathesaurus, and evaluate the summaries that are generated. The results obtained by the different methods in both evaluations are studied and compared. Conclusions: It has been found that the use of WSD techniques has a positive impact on the results of our graph-based summarizer, and that, when both the WSD and summarization tasks are assessed over large and homogeneous evaluation collections, there exists a correlation between the overall results of the WSD and summarization tasks. Furthermore, the best WSD algorithm in the first task tends to be also the best one in the second. However, we also found that the improvement achieved by the summarizer is not directly correlated with the WSD performance. The most likely reason is that the errors in disambiguation are not equally important but depend on the relative salience of the different concepts in the document to be summarized.

  13. Approximate strip exchanging.

    Science.gov (United States)

    Roy, Swapnoneel; Thakur, Ashok Kumar

    2008-01-01

    Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a given permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.

  14. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. Application of the method in ship design is given.

  15. S-Approximation: A New Approach to Algebraic Approximation

    Directory of Open Access Journals (Sweden)

    M. R. Hooshmandasl

    2014-01-01

    Full Text Available We intend to study a new class of algebraic approximations, called S-approximations, and their properties. We have shown that S-approximations can be used for applied problems which cannot be modeled by inclusion-based approximations. Also, in this work, we studied a subclass of S-approximations, called Sℳ-approximations, and showed that this subclass preserves most of the properties of inclusion-based approximations but is not necessarily inclusion-based. The paper concludes by studying some basic operations on S-approximations and counting the number of S-min functions.

  16. Offsite radiation doses summarized from Hanford environmental monitoring reports for the years 1957-1984. [Contains glossary]

    Energy Technology Data Exchange (ETDEWEB)

    Soldat, J.K.; Price, K.R.; McCormack, W.D.

    1986-02-01

    Since 1957, evaluations of offsite impacts from each year of operation have been summarized in publicly available, annual environmental reports. These evaluations included estimates of potential radiation exposure to members of the public, either in terms of percentages of the then permissible limits or in terms of radiation dose. The estimated potential radiation doses to maximally exposed individuals from each year of Hanford operations are summarized in a series of tables and figures. The applicable standard for radiation dose to an individual for whom the maximum exposure was estimated is also shown. Although the estimates address potential radiation doses to the public from each year of operations at Hanford between 1957 and 1984, their sum will not produce an accurate estimate of doses accumulated over this time period. The estimates were the best evaluations available at the time to assess potential dose from the current year of operation as well as from any radionuclides still present in the environment from previous years of operation. There was a constant striving for improved evaluation of the potential radiation doses received by members of the public, and as a result the methods and assumptions used to estimate doses were periodically modified to add new pathways of exposure and to increase the accuracy of the dose calculations. Three conclusions were reached from this review: radiation doses reported for the years 1957 through 1984 for the maximum individual did not exceed the applicable dose standards; radiation doses reported over the past 27 years are not additive because of the changing and inconsistent methods used; and results from environmental monitoring and the associated dose calculations reported over the 27 years from 1957 through 1984 do not suggest a significant dose contribution from the buildup in the environment of radioactive materials associated with Hanford operations.

  17. Summarization and Matching of Density-Based Clusters in Streaming Environments

    CERN Document Server

    Yang, Di; Ward, Matthew O

    2011-01-01

    Density-based cluster mining is known to serve a broad range of applications ranging from stock trade analysis to moving object monitoring. Although methods for efficient extraction of density-based clusters have been studied in the literature, the problem of summarizing and matching of such clusters with arbitrary shapes and complex cluster structures remains unsolved. Therefore, the goal of our work is to extend the state-of-art of density-based cluster mining in streams from cluster extraction only to now also support analysis and management of the extracted clusters. Our work solves three major technical challenges. First, we propose a novel multi-resolution cluster summarization method, called Skeletal Grid Summarization (SGS), which captures the key features of density-based clusters, covering both their external shape and internal cluster structures. Second, in order to summarize the extracted clusters in real-time, we present an integrated computation strategy C-SGS, which piggybacks the generation of...

  18. Prestack traveltime approximations

    KAUST Repository

    Alkhalifah, Tariq Ali

    2012-05-01

    Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle, and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.

  19. Multi-layered graph-based multi-document summarization model

    OpenAIRE

    Canhasi, Ercan

    2014-01-01

    Multi-document summarization is a process of automatic generation of a compressed version of the given collection of documents. Recently, the graph-based models and ranking algorithms have been actively investigated by the extractive document summarization community. While most work to date focuses on homogeneous connectedness of sentences and heterogeneous connectedness of documents and sentences (e.g. sentence similarity weighted by document importance), in this paper we present a novel 3-lay...

  20. A Unified Framework for Event Summarization and Rare Event Detection from Multiple Views.

    Science.gov (United States)

    Kwon, Junseok; Lee, Kyoung Mu

    2015-09-01

    A novel approach for event summarization and rare event detection is proposed. Unlike conventional methods that deal with event summarization and rare event detection independently, our method solves them in a single framework by transforming them into a graph editing problem. In our approach, a video is represented by a graph, each node of which indicates an event obtained by segmenting the video spatially and temporally. The edges between nodes describe the relationship between events. Based on the degree of relations, edges have different weights. After learning the graph structure, our method finds subgraphs that represent event summarization and rare events in the video by editing the graph, that is, merging its subgraphs or pruning its edges. The graph is edited to minimize a predefined energy model with the Markov Chain Monte Carlo (MCMC) method. The energy model consists of several parameters that represent the causality, frequency, and significance of events. We design a specific energy model that uses these parameters to satisfy each objective of event summarization and rare event detection. The proposed method is extended to obtain event summarization and rare event detection results across multiple videos captured from multiple views. For this purpose, the proposed method independently learns and edits each graph of individual videos for event summarization or rare event detection. Then, the method matches the extracted multiple graphs to each other, and constructs a single composite graph that represents event summarization or rare events from multiple views. Experimental results show that the proposed approach accurately summarizes multiple videos in a fully unsupervised manner. Moreover, the experiments demonstrate that the approach is advantageous in detecting rare transitions of events.

  1. Using Confidence Interval to Summarize the Evaluating Results of DSM Systems

    Institute of Scientific and Technical Information of China (English)

    SHI Weisong; TANG Zhimin; SHI Jinsong

    2000-01-01

    Distributed Shared Memory (DSM) systems have gained popular acceptance by combining the scalability and low cost of distributed systems with the ease of use of a single address space. Many new hardware DSM and software DSM systems have been proposed in recent years. In general, benchmarking is widely used to demonstrate the performance advantages of new systems. However, the common method used to summarize the measured results is the arithmetic mean of ratios, which is incorrect in some cases. Furthermore, many published papers merely list a lot of data without summarizing them effectively, which greatly confuses users. In fact, many users want a single number as a conclusion, which older summarizing techniques do not provide. Therefore, a new data-summarizing technique based on confidence intervals is proposed in this paper. The new technique includes two data-summarizing methods: (1) the paired confidence interval method; (2) the unpaired confidence interval method. With this new technique, one can conclude at a given confidence level that one system is better than another. Four examples are shown to demonstrate the advantages of this new technique. Furthermore, with the help of confidence levels, it is proposed to standardize the benchmarks used for evaluating DSM systems so that convincing results can be obtained. In addition, the new summarizing technique is suitable not only for evaluating DSM systems, but also for evaluating other systems, such as memory systems and communication systems.
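
    A minimal sketch of the paired method, assuming per-benchmark runtimes of two systems measured on the same workloads (function names and sample data are ours, not the paper's): form the paired differences and report a Student-t interval; if the interval excludes zero, one system can be declared better at that confidence level.

      # Paired confidence interval for summarizing benchmark results.
      import numpy as np
      from scipy import stats

      def paired_ci(times_a, times_b, confidence=0.95):
          """t-based confidence interval for the mean paired difference A - B."""
          d = np.asarray(times_a) - np.asarray(times_b)
          n = len(d)
          mean = d.mean()
          sem = d.std(ddof=1) / np.sqrt(n)
          half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
          return mean - half, mean + half

      lo, hi = paired_ci([10.2, 8.1, 12.5, 9.8], [11.0, 9.0, 12.9, 10.4])
      print(f"95% CI for A - B: ({lo:.2f}, {hi:.2f})")  # entirely below 0 => A is faster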

  2. Summarization strategies of hearing-impaired and normally hearing college students.

    Science.gov (United States)

    Peterson, L N; French, L

    1988-09-01

    The purpose of this study was to compare the summary writing skills of hearing-impaired and normally hearing college students. Summarization was defined in terms of the following measures: deletion of trivial text information, inclusion of most important ideas, selection of topic sentences, creation of topic statements, and integration of information within and among several paragraphs. Inclusion of opinionated, incorrect, and uninterpretable information was also measured. Thirty hearing-impaired and 30 normally hearing students read and summarized two expository science passages that were controlled for the number of topic (main idea) sentences and that had been rated previously for the importance of "idea units." Students' factual comprehension was also assessed. Hearing-impaired and normally hearing students exhibited a similar pattern of use among several measured summarization strategies, except for the inclusion of opinions or comments in their summaries. Hearing-impaired students were not as sensitive as normally hearing students to the importance of ideas and used the following summarization strategies significantly less often: inclusion of important ideas, selection of topic sentences, creation of topic statements, and integration of ideas within and among paragraphs. The results indicated that hearing-impaired college students have basic summarization skills but do not apply summarization strategies as effectively as normally hearing students.

  3. THE EFFECT OF SUMMARIZATION INSTRUCTIONAL STRATEGIES AND PRESENTATION FORMATS ON THE OUTCOMES OF HISTORICAL ARGUMENTATIVE REASONING

    Directory of Open Access Journals (Sweden)

    Susanto Yunus Alfian

    2014-07-01

    Full Text Available The purpose of this research is to examine the effects of summarization instructional strategies and presentation formats on the learning outcomes of historical argumentative reasoning. The study is designed as a factorial design. The subjects were students enrolled in four state-owned senior high schools in Malang Regency. The main conclusions are as follows: (1) a significant difference existed for students who used the cause-effect graphic organizer summarization strategy to answer historical argumentative reasoning post-test questions when compared to the written summarizing strategy; (2) there was no difference between those who were presented with the present-subheadings presentation format and those who were presented with the absent-subheadings format; and (3) there was a significant interaction between the summarization instructional strategies and the presentation formats. The students who used the cause-effect graphic organizer summarization strategy together with the present-subheadings presentation format significantly outperformed the other groups (graphic organizer with absent-subheadings, written summarizing with subheadings, and written summarizing without subheadings) on the historical argumentative reasoning post-test. Key words: summarization instructional strategy, presentation format, cause-effect graphic organizer, written summarizing, present-subheadings, historical argumentative reasoning.

  4. Theory and implementation of summarization: Improving sensor interpretation for spacecraft operations

    Science.gov (United States)

    Swartwout, Michael Alden

    New paradigms in space missions require radical changes in spacecraft operations. In the past, operations were insulated from competitive pressures of cost, quality and time by system infrastructures, technological limitations and historical precedent. However, modern demands now require that operations meet competitive performance goals. One target for improvement is the telemetry downlink, where significant resources are invested to acquire thousands of measurements for human interpretation. This cost-intensive method is used because conventional operations are not based on formal methodologies but on experiential reasoning and incrementally adapted procedures. Therefore, to improve the telemetry downlink it is first necessary to invent a rational framework for discussing operations. This research explores operations as a feedback control problem, develops the conceptual basis for the use of spacecraft telemetry, and presents a method to improve performance. The method is called summarization, a process to make vehicle data more useful to operators. Summarization enables rational trades for telemetry downlink by defining and quantitatively ranking these elements: all operational decisions, the knowledge needed to inform each decision, and all possible sensor mappings to acquire that knowledge. Summarization methods were implemented for the Sapphire microsatellite; conceptual health management and system models were developed and a degree-of-observability metric was defined. An automated tool was created to generate summarization methods from these models. Methods generated using a Sapphire model were compared against the conventional operations plan. Summarization was shown to identify the key decisions and isolate the most appropriate sensors. Secondly, a form of summarization called beacon monitoring was experimentally verified. Beacon monitoring automates the anomaly detection and notification tasks and migrates these responsibilities to the space segment.

  5. Operators of Approximations and Approximate Power Set Spaces

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xian-yong; MO Zhi-wen; SHU Lan

    2004-01-01

    Boundary inner and outer operators are introduced, and union, intersection, and complement operators of approximations are redefined. The approximation operators preserve union, intersection, and complement, so rough set theory is enriched from both the operator-oriented and set-oriented views. Approximate power set spaces are defined, and it is proved that the approximation operators are epimorphisms from the power set space to the approximate power set spaces. Some basic properties of the approximate power set space are obtained via these epimorphisms, in contrast to the power set space.
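
    For reference, the classical Pawlak approximation operators on which such operator-oriented treatments build are, for an equivalence relation $R$ on a universe $U$ with equivalence classes $[x]_R$ and a subset $X \subseteq U$:

      \[
      \underline{R}(X) = \{\, x \in U : [x]_R \subseteq X \,\}, \qquad
      \overline{R}(X) = \{\, x \in U : [x]_R \cap X \neq \emptyset \,\},
      \]

    and the two are dual in the sense that $\overline{R}(X) = U \setminus \underline{R}(U \setminus X)$.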

  6. Constructing a taxonomy to support multi-document summarization of dissertation abstracts

    Institute of Scientific and Technical Information of China (English)

    OU Shi-yan; KHOO Christopher S.G.; GOH Dion H.

    2005-01-01

    This paper reports part of a study to develop a method for automatic multi-document summarization. The current focus is on dissertation abstracts in the field of sociology. The summarization method uses macro-level and micro-level discourse structure to identify important information that can be extracted from dissertation abstracts, and then uses a variable-based framework to integrate and organize extracted information across dissertation abstracts. This framework focuses more on research concepts and their research relationships found in sociology dissertation abstracts and has a hierarchical structure. A taxonomy is constructed to support the summarization process in two ways: (1) helping to identify important concepts and relations expressed in the text, and (2) providing a structure for linking similar concepts in different abstracts. This paper describes the variable-based framework and the summarization process, and then reports the construction of the taxonomy for supporting the summarization process. An example is provided to show how to use the constructed taxonomy to identify important concepts and integrate the concepts extracted from different abstracts.

  7. A new graph based text segmentation using Wikipedia for automatic text summarization

    Directory of Open Access Journals (Sweden)

    Mohsen Pourvali

    2012-01-01

    Full Text Available The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is the process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization aims to produce a summary delivering the majority of the information content from a set of documents about an explicit or implicit main topic. In this paper we use the knowledge base of Wikipedia and the words of the input text to create independent graphs. We then determine the importance of each graph and identify the sentences whose topics carry high importance. Finally, we extract the sentences with the highest importance. The experimental results on open benchmark datasets from DUC01 and DUC02 show that our proposed approach can improve the performance compared to state-of-the-art summarization approaches.

  8. Twitter and public health.

    Science.gov (United States)

    Bartlett, Catherine; Wurtz, Rebecca

    2015-01-01

    Twitter can serve as a powerful communication modality to both "push" and "pull" public health data; each user is a potential public health sensor and actor. However, in 2012, only 8% of local health departments had Twitter accounts. We outline how Twitter works, describe how to access public tweets for public health surveillance purposes, review the literature on Twitter's current and potential role supporting public health's essential services, summarize Twitter's limitations, and make recommendations for health department use.

  9. Linguistic Summarization of Video for Fall Detection Using Voxel Person and Fuzzy Logic.

    Science.gov (United States)

    Anderson, Derek; Luke, Robert H; Keller, James M; Skubic, Marjorie; Rantz, Marilyn; Aud, Myra

    2009-01-01

    In this paper, we present a method for recognizing human activity from linguistic summarizations of temporal fuzzy inference curves representing the states of a three-dimensional object called voxel person. A hierarchy of fuzzy logic is used, where the output from each level is summarized and fed into the next level. We present a two level model for fall detection. The first level infers the states of the person at each image. The second level operates on linguistic summarizations of voxel person's states and inference regarding activity is performed. The rules used for fall detection were designed under the supervision of nurses to ensure that they reflect the manner in which elders perform these activities. The proposed framework is extremely flexible. Rules can be modified, added, or removed, allowing for per-resident customization based on knowledge about their cognitive and physical ability.
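
    As a toy illustration of the two-level idea, here is a minimal sketch assuming a per-frame voxel-person height estimate; the membership function, window length, and threshold are invented for illustration and are not the nurse-designed rules used by the authors.

      # Minimal two-level fuzzy sketch inspired by the paper's hierarchy.
      def mu_on_ground(height_m: float) -> float:
          """Level 1: membership of the 'on the ground' state from voxel-person height."""
          if height_m <= 0.3:
              return 1.0
          if height_m >= 0.9:
              return 0.0
          return (0.9 - height_m) / 0.6      # linear ramp between 0.3 m and 0.9 m

      def fall_confidence(heights, fps=5):
          """Level 2: summarize the per-frame curve; sustained membership suggests a fall."""
          mus = [mu_on_ground(h) for h in heights]
          window = mus[-3 * fps:]            # linguistic summary of the last 3 seconds
          return sum(window) / len(window) if window else 0.0

      heights = [1.7] * 10 + [0.2] * 20      # standing, then lying on the floor
      print(fall_confidence(heights))        # close to 1.0 => likely fall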

  10. International Conference Approximation Theory XV

    CERN Document Server

    Schumaker, Larry

    2017-01-01

    These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...

  11. BioDARA: Data Summarization Approach to Extracting Bio-Medical Structuring Information

    Directory of Open Access Journals (Sweden)

    Chung S. Kheau

    2011-01-01

    Full Text Available Problem statement: Due to the ever-growing amount of biomedical datasets stored in multiple tables, Information Extraction (IE) from these datasets is increasingly recognized as one of the crucial technologies in bioinformatics. However, for IE to be practically applicable, adaptability of a system is crucial, considering the extremely diverse demands in biomedical IE applications. One should be able to extract a set of hidden patterns from these biomedical datasets at low cost. Approach: In this study, a new method is proposed, called Bio-medical Data Aggregation for Relational Attributes (BioDARA), for automatic structuring information extraction for biomedical datasets. BioDARA summarizes biomedical data stored in multiple tables in order to facilitate data modeling efforts in a multi-relational setting. BioDARA has the capability to transform biomedical data stored in multiple tables or databases into a Vector Space model, summarize biomedical data using Information Retrieval theory, and finally extract frequent patterns that describe the characteristics of these biomedical datasets. Results: The results show that data summarization performed by DARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in multiple levels of one-to-many relationships, and also in the case of datasets stored in more than one one-to-many relationship with non-target tables. Conclusion: This study concludes that data summarization performed by BioDARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in multiple levels of one-to-many relationships.

  12. Research on multi-document summarization based on latent semantic indexing

    Institute of Scientific and Technical Information of China (English)

    QIN Bing; LIU Ting; ZHANG Yu; LI Sheng

    2005-01-01

    A multi-document summarization method based on Latent Semantic Indexing (LSI) is proposed. The method combines several reports on the same issue into a matrix of terms and sentences, and uses a Singular Value Decomposition (SVD) to reduce the dimension of the matrix and extract features; the sentence similarity is then computed. The sentences are clustered according to their similarity, and centroid sentences are selected from each class. Finally, the selected sentences are ordered to generate the summarization. The evaluation and results are presented, showing that the proposed method is effective.
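
    A compact sketch of this pipeline using off-the-shelf tools is shown below, assuming plain TF-IDF weighting for the term-sentence matrix, TruncatedSVD as the SVD step, and k-means for sentence clustering; the paper's own matrix construction and sentence-ordering heuristics are not reproduced.

      # LSI-style extractive summarization: SVD, cluster, pick centroid sentences.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.cluster import KMeans
      import numpy as np

      def lsi_summary(sentences, k_topics=2, n_clusters=2):
          X = TfidfVectorizer().fit_transform(sentences)            # term-sentence matrix
          Z = TruncatedSVD(n_components=k_topics).fit_transform(X)  # latent space
          km = KMeans(n_clusters=n_clusters, n_init=10).fit(Z)
          chosen = []
          for c in range(n_clusters):                               # centroid sentence per cluster
              idx = np.where(km.labels_ == c)[0]
              dists = np.linalg.norm(Z[idx] - km.cluster_centers_[c], axis=1)
              chosen.append(idx[np.argmin(dists)])
          return [sentences[i] for i in sorted(chosen)]             # restore document order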

  13. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
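
    The building block that such rank-structured formats generalize to many dimensions is the truncated SVD; the following sketch (ours, not the talk's algorithm) shows the matrix case, which by the Eckart-Young theorem gives the best rank-r approximation in the Frobenius norm.

      # Best rank-r approximation of a matrix via truncated SVD.
      import numpy as np

      def low_rank(A: np.ndarray, r: int) -> np.ndarray:
          """Return the rank-r matrix closest to A in the Frobenius norm."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          return (U[:, :r] * s[:r]) @ Vt[:r, :]

      A = np.random.rand(100, 80)
      print(np.linalg.norm(A - low_rank(A, 10)))  # approximation error at rank 10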

  14. Nonlinear Approximation Using Gaussian Kernels

    CERN Document Server

    Hangelbroek, Thomas

    2009-01-01

    It is well-known that non-linear approximation has an advantage over linear schemes in the sense that it provides comparable approximation rates to those of the linear schemes, but to a larger class of approximands. This was established for spline approximations and for wavelet approximations, and more recently for homogeneous radial basis function (surface spline) approximations. However, no such results are known for the Gaussian function. The crux of the difficulty lies in the necessity to vary the tension parameter in the Gaussian function spatially according to local information about the approximand: error analysis of Gaussian approximation schemes with varying tension is, by and large, an elusive target for approximators. We introduce and analyze in this paper a new algorithm for approximating functions using translates of Gaussian functions with varying tension parameters. Our scheme is sophisticated in that it employs Gaussians whose tension varies even locally, and it resolves local ...

  15. Empirical Analysis of Exploiting Review Helpfulness for Extractive Summarization of Online Reviews

    Science.gov (United States)

    Xiong, Wenting; Litman, Diane

    2014-01-01

    We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…

  16. Making Use of Semantic Concept Detection for Modelling Human Preferences in Visual Summarization

    NARCIS (Netherlands)

    Rudinac, S.; Worring, M.

    2014-01-01

    In this paper we investigate whether and how the human choice of images for summarizing a visual collection is influenced by the semantic concepts depicted in them. More specifically, by analysing a large collection of human-created visual summaries obtained through crowdsourcing, we aim at automati

  17. Enhancing Summarization Skills Using Twin Texts: Instruction in Narrative and Expository Text Structures

    Science.gov (United States)

    Furtado, Leena; Johnson, Lisa

    2010-01-01

    This action-research case study endeavors to enhance the summarization skills of first grade students who are reading at or above the third grade level during the first trimester of the academic school year. Students read "twin text" sources, meaning, fiction and nonfiction literary selections focusing on a common theme to help identify and…

  18. Death rate due to horseshoe crab poisoning: summarization on Thai reports

    Institute of Scientific and Technical Information of China (English)

    Beuy Joob; Viroj Wiwanitkit

    2015-01-01

    Horseshoe crabs can be poisonous, and intoxication due to their intake is possible. Horseshoe crab intoxication is seen in many countries with seacoasts, including Thailand. Here, the authors summarize the death rate due to horseshoe crab poisoning in Thailand.

  19. Summarizing as retrieval strategy versus re-reading. Which learning activity works

    NARCIS (Netherlands)

    Dirkx, Kim; Kester, Liesbeth; Kirschner, Paul A.

    2011-01-01

    Dirkx, K. J. H., Kester, L., & Kirschner, P. A. (2011, 30 August). Summarizing as retrieval strategy versus re-reading: Which learning strategy works best? Paper presented at the annual meeting of the Junior Researchers of the European Association for Research on Learning and Instruction, Exeter, United Kingdom.

  20. A Qualitative Study on the Use of Summarizing Strategies in Elementary Education

    Science.gov (United States)

    Susar Kirmizi, Fatma; Akkaya, Nevin

    2011-01-01

    The objective of this study is to reveal how well summarizing strategies are used by Grade 4 and Grade 5 students as a reading comprehension strategy. This study was conducted in Buca, Izmir and the document analysis method, a qualitative research strategy, was employed. The study used a text titled "Environmental Pollution" and an…

  1. Utilizing Marzano's Summarizing and Note Taking Strategies on Seventh Grade Students' Mathematics Performance

    Science.gov (United States)

    Jeanmarie-Gardner, Charmaine

    2013-01-01

    A quasi-experimental research study was conducted that investigated the academic impact of utilizing Marzano's summarizing and note taking strategies on mathematics achievement. A sample of seventh graders from a middle school located on Long Island's North Shore was tested to determine whether significant differences existed in mathematics test…

  2. Science Text Comprehension: Drawing, Main Idea Selection, and Summarizing as Learning Strategies

    Science.gov (United States)

    Leopold, Claudia; Leutner, Detlev

    2012-01-01

    The purpose of two experiments was to contrast instructions to generate drawings with two text-focused strategies--main idea selection (Exp. 1) and summarization (Exp. 2)--and to examine whether these strategies could help students learn from a chemistry science text. Both experiments followed a 2 x 2 design, with drawing strategy instructions…

  3. Clustering cliques for graph-based summarization of the biomedical research literature

    DEFF Research Database (Denmark)

    Zhang, Han; Fiszman, Marcelo; Shin, Dongwook

    2013-01-01

    Background: Graph-based notions are increasingly used in biomedical data mining and knowledge discovery tasks. In this paper, we present a clique-clustering method to automatically summarize graphs of semantic predications produced from PubMed citations (titles and abstracts). Results: Sem...

  4. GeneLibrarian: an effective gene-information summarization and visualization system

    Directory of Open Access Journals (Sweden)

    Liu Heng-Hui

    2006-08-01

    Full Text Available Abstract Background Abundant information about gene products is stored in online searchable databases such as annotation databases and the literature. To efficiently obtain and digest such information, there is a pressing need for automated information-summarization and functional-similarity clustering of genes. Results We have developed a novel method for semantic measurement of annotation and integrated it with a biomedical literature summarization system to establish a platform, GeneLibrarian, to provide users well-organized information about any specific group of genes (e.g., one cluster of genes from a microarray chip) they might be interested in. The GeneLibrarian generates a summarized viewgraph of candidate genes for a user based on his/her preference and delivers the desired background information effectively to the user. The summarization technique involves optimizing the text mining algorithm and Gene Ontology-based clustering method to enable the discovery of gene relations. Conclusion GeneLibrarian is a Java-based web application that automates the process of retrieving critical information from the literature and expanding the number of potential genes for further analysis. This study concentrates on providing well-organized information to users, which we believe will be useful in their research. GeneLibrarian is available on http://gen.csie.ncku.edu.tw/GeneLibrarian/

  5. Forms of Approximate Radiation Transport

    CERN Document Server

    Brunner, G

    2002-01-01

    Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.

  6. Approximation by Multivariate Singular Integrals

    CERN Document Server

    Anastassiou, George A

    2011-01-01

    Approximation by Multivariate Singular Integrals is the first monograph to illustrate the approximation of multivariate singular integrals to the identity-unit operator. The basic approximation properties of the general multivariate singular integral operators are presented quantitatively; particular special cases such as the multivariate Picard, Gauss-Weierstrass, Poisson-Cauchy and trigonometric singular integral operators are examined thoroughly. This book studies the rate of convergence of these operators to the unit operator as well as the related simultaneous approximation. The last cha

  7. Approximations of fractional Brownian motion

    CERN Document Server

    Li, Yuqiang; 10.3150/10-BEJ319

    2012-01-01

    Approximations of fractional Brownian motion using Poisson processes whose parameter sets have the same dimensions as the approximated processes have been studied in the literature. In this paper, a special approximation to the one-parameter fractional Brownian motion is constructed using a two-parameter Poisson process. The proof involves the tightness and identification of finite-dimensional distributions.

  8. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  9. Research on Automatic Summarization Methods

    Institute of Scientific and Technical Information of China (English)

    卫佳君; 宋继华

    2011-01-01

    It summarizes the main automatic abstracting research methods and strategies, dividing them into three major categories: automatic extraction, automatic summarization based on information extraction, and summarization based on understanding. The extraction method selects important sentences from the article to form a digest. The information-extraction method fills a prepared frame with information extracted from the article and then outputs the content through a template. The understanding-based method uses natural language processing technology to generate abstracts. The paper focuses on extractive summarization of single-topic and multi-topic articles. After comparing the advantages and disadvantages of a variety of algorithms, a new multi-topic partitioning method is proposed.

  10. Diplomat/certified knowledge alternatives in the public scene: an approximation to the quackery from the written press of the cities of Córdoba and Buenos Aires, Argentina in the 1920s

    Directory of Open Access Journals (Sweden)

    María Dolores Rivero

    2017-07-01

    Full Text Available This paper aims to study how the written press of the cities of Córdoba and Buenos Aires (Argentina) defined the "continuous advance of quackery and charlatanry" at different moments of the 1920s. In a context of limited national historiography on empirical healing practices, we probe this problem in two different geographic spaces that formed part, and still form part, of a country with strong regional inequalities. We ask how journalistic representations defined "quackery" as a puzzle whose pieces refer to the State, graduate professionals, society, and the empirical healers themselves. In our reconstruction, we recognize that these discourses were influenced by the editorial line and the ideological affiliation of the newspapers of two of the most important metropolises of Argentina. Based on a qualitative methodology, we place in analytical perspective a set of news items with the aim of recovering the "voices and looks" that helped construct empirical healing practices as a social problem of the public sphere during the years under study.

  11. International Conference Approximation Theory XIV

    CERN Document Server

    Schumaker, Larry

    2014-01-01

    This volume developed from papers presented at the international conference Approximation Theory XIV,  held April 7–10, 2013 in San Antonio, Texas. The proceedings contains surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.

  12. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  13. MRST-An Efficient Monitoring Technology of Summarization on Stream Data

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bo Fan; Ting-Ting Xie; Cui-Ping Li; Hong Chen

    2007-01-01

    Monitoring data streams is an efficient method of acquiring their characteristics. However, the resources available for each data stream are limited, so how to use limited resources to process an infinite data stream is an open and challenging problem. In this paper, we adopt wavelet and sliding-window methods to design a multi-resolution summarization data structure, the Multi-Resolution Summarization Tree (MRST), which can be updated incrementally with the incoming data and can support point queries, range queries, and multi-point queries while keeping the precision of queries. We use both synthetic data and real-world data to evaluate our algorithm. The experimental results indicate that the query efficiency and adaptability of MRST exceed those of current algorithms, while its implementation is simpler.
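
    A toy version of the wavelet-plus-sliding-window idea, assuming a window whose length is a power of two: one Haar step per level keeps averages for the next level and differences as detail coefficients. MRST's incremental tree maintenance and query machinery are not shown here.

      # Per-level Haar decomposition of a sliding window: coarser averages
      # answer range queries cheaply; details allow finer reconstruction.
      def haar_levels(window, levels=3):
          """Return per-level (averages, details); len(window) must be a power of 2."""
          out, cur = [], list(window)
          for _ in range(levels):
              avg = [(cur[i] + cur[i + 1]) / 2 for i in range(0, len(cur), 2)]
              det = [(cur[i] - cur[i + 1]) / 2 for i in range(0, len(cur), 2)]
              out.append((avg, det))
              cur = avg
          return out

      for avg, det in haar_levels([4, 6, 10, 12, 8, 6, 5, 5]):
          print(avg, det)  # each level halves the resolution of the summary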

  14. A sentence scoring method for extractive text summarization based on Natural language queries

    Directory of Open Access Journals (Sweden)

    R.V.V Murali Krishna

    2012-05-01

    Full Text Available The developments in storage devices and computer networks have given the scope for the world to become a paperless community, for example Digital news paper systems and digital library systems. A paperless community is heavily dependent on information retrieval systems. Text summarization is an area that supports the cause of information retrieval systems by helping the users to get their needed information. This paper discusses on the relevance of using traditional stoplists for text summarization and the use of Statistical analysis for sentence scoring. A new methodology is proposed for implementing the stoplist concept and statistical analysis concept based on parts of speech tagging. A sentence scoring mechanism has been developed by combining the above methodologies with semantic analysis. This sentence scoring method has given good results when applied to find out the relation between natural language queries and the sentences in a document.

  15. Final Technical Report summarizing Purdue research activities as part of the DOE JET Topical Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Molnar, Denes [Purdue Univ., West Lafayette, IN (United States). Dept. of Physics and Astronomy

    2015-09-01

    This report summarizes research activities at Purdue University done as part of the DOE JET Topical Collaboration. These mainly involve calculation of covariant radiative energy loss in the (Djordjevic-)Gyulassy-Levai-Vitev ((D)GLV) framework for relativistic A+A reactions at RHIC and LHC energies using realistic bulk medium evolution with both transverse and longitudinal expansion. The single PDF file provided also includes a report from the entire JET Collaboration.

  16. Semisupervised Learning Based Opinion Summarization and Classification for Online Product Reviews

    Directory of Open Access Journals (Sweden)

    Mita K. Dalal

    2013-01-01

    Full Text Available The growth of E-commerce has led to the invention of several websites that market and sell products as well as allow users to post reviews. It is typical for an online buyer to refer to these reviews before making a buying decision. Hence, automatic summarization of users' reviews has great commercial significance. However, since the product reviews are written by nonexperts in unstructured, natural language text, the task of summarizing them is challenging. This paper presents a semisupervised approach for mining online user reviews to generate comparative feature-based statistical summaries that can guide a user in making an online purchase. It includes several phases: preprocessing, feature extraction and pruning, followed by feature-based opinion summarization and overall opinion sentiment classification. Empirical studies indicate that the approach used in the paper can identify opinionated sentences from blog reviews with a high average precision of 91% and can classify the polarity of the reviews with a good average accuracy of 86%.

  17. Evaluation of a gene information summarization system by users during the analysis process of microarray datasets.

    Science.gov (United States)

    Yang, Jianji; Cohen, Aaron; Hersh, William

    2009-02-05

    Summarization of gene information in the literature has the potential to help genomics researchers translate basic research into clinical benefits. Gene expression microarrays have been used to study biomarkers for disease and discover novel types of therapeutics and the task of finding information in journal articles on sets of genes is common for translational researchers working with microarray data. However, manually searching and scanning the literature references returned from PubMed is a time-consuming task for scientists. We built and evaluated an automatic summarizer of information on genes studied in microarray experiments. The Gene Information Clustering and Summarization System (GICSS) is a system that integrates two related steps of the microarray data analysis process: functional gene clustering and gene information gathering. The system evaluation was conducted during the process of genomic researchers analyzing their own experimental microarray datasets. The clusters generated by GICSS were validated by scientists during their microarray analysis process. In addition, presenting sentences in the abstract provided significantly more important information to the users than just showing the title in the default PubMed format. The evaluation results suggest that GICSS can be useful for researchers in genomic area. In addition, the hybrid evaluation method, partway between intrinsic and extrinsic system evaluation, may enable researchers to gauge the true usefulness of the tool for the scientists in their natural analysis workflow and also elicit suggestions for future enhancements. GICSS can be accessed online at: http://ir.ohsu.edu/jianji/index.html.

  18. An Efficient Technique for Network Traffic Summarization using Multiview Clustering and Statistical Sampling

    Directory of Open Access Journals (Sweden)

    Mohiuddin Ahmed

    2015-07-01

    Full Text Available There is significant interest in the data mining and network management communities to efficiently analyse huge amounts of network traffic, given the amount of network traffic generated even in small networks. Summarization is a primary data mining task for generating a concise yet informative summary of the given data, and it is a research challenge to create such a summary from network traffic data. Existing clustering-based summarization techniques lack the ability to create a suitable summary for further data mining tasks such as anomaly detection and require the summary size as an external input. Additionally, for complex and high dimensional network traffic datasets, there is often no single clustering solution that explains the structure of the given data. In this paper, we investigate the use of multiview clustering to create a meaningful summary using original data instances from network traffic data in an efficient manner. We develop a mathematically sound approach to select the summary size using a sampling technique. We compare our proposed approach with regular clustering-based summarization incorporating the summary size calculation method and a random approach. We validate our proposed approach using the benchmark network traffic dataset and state-of-the-art summary evaluation metrics.
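
    One plausible reading of the sampling-based summary-size step is the classical sample-size formula for estimating a mean to within error e at a given confidence; the paper's exact derivation may differ, and sigma here would come from a pilot pass over the traffic data.

      # Normal-approximation sample size: n >= (z * sigma / e)^2.
      from scipy import stats

      def summary_size(sigma: float, error: float, confidence: float = 0.95) -> int:
          """Smallest n so the mean is estimated within +/- error at the given confidence."""
          z = stats.norm.ppf(0.5 + confidence / 2)
          return int((z * sigma / error) ** 2) + 1

      print(summary_size(sigma=12.0, error=1.5))  # e.g. 246 flow records to sample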

  19. Text Feature Weighting For Summarization Of Document Bahasa Indonesia Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Aristoteles.

    2012-05-01

    Full Text Available This paper aims to perform text feature weighting for summarization of documents in Bahasa Indonesia using a genetic algorithm. There are eleven text features, i.e., sentence position (f1), positive keywords in sentence (f2), negative keywords in sentence (f3), sentence centrality (f4), sentence resemblance to the title (f5), sentence inclusion of name entity (f6), sentence inclusion of numerical data (f7), sentence relative length (f8), bushy path of the node (f9), summation of similarities for each node (f10), and latent semantic feature (f11). We investigate the effect of the first ten sentence features on the summarization task. Then, we use the latent semantic feature to increase the accuracy. All feature score functions are used to train a genetic algorithm model to obtain a suitable combination of feature weights. Evaluation of text summarization uses the F-measure, which is directly related to the compression rate. The results showed that adding f11 increases the F-measure by 3.26% and 1.55% for compression ratios of 10% and 30%, respectively. On the other hand, it decreases the F-measure by 0.58% for a compression ratio of 20%. Analysis of the text feature weights showed that using only f2, f4, f5, and f11 can deliver a performance similar to using all eleven features.
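
    A compact sketch of GA-based feature weighting under stated assumptions: sentences are scored as a weighted sum of f1..f11, and fitness is a precision-style proxy computed against labeled reference sentences. The paper optimizes the F-measure on a Bahasa Indonesia corpus; the operators below are generic stand-ins.

      # Genetic algorithm over feature-weight vectors for sentence scoring.
      import random

      N_FEATURES = 11  # f1..f11 as listed in the abstract

      def score_sentence(features, weights):
          return sum(f * w for f, w in zip(features, weights))

      def fitness(weights, corpus):
          # corpus: list of (feature_vector, in_reference_summary) pairs
          ranked = sorted(corpus, key=lambda s: -score_sentence(s[0], weights))
          top = ranked[: max(1, len(ranked) // 10)]          # ~10% compression
          return sum(label for _, label in top) / len(top)   # precision proxy

      def evolve(corpus, pop_size=30, generations=50):
          pop = [[random.random() for _ in range(N_FEATURES)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda w: -fitness(w, corpus))
              elite = pop[: pop_size // 2]
              children = []
              while len(elite) + len(children) < pop_size:
                  a, b = random.sample(elite, 2)
                  cut = random.randrange(N_FEATURES)
                  child = a[:cut] + b[cut:]                  # one-point crossover
                  i = random.randrange(N_FEATURES)           # Gaussian mutation
                  child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
                  children.append(child)
              pop = elite + children
          return max(pop, key=lambda w: fitness(w, corpus))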

  20. BDD Minimization for Approximate Computing

    OpenAIRE

    Soeken, Mathias; Grosse, Daniel; Chandrasekharan, Arun; Drechsler, Rolf

    2016-01-01

    We present Approximate BDD Minimization (ABM) as a problem that has application in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation but meets a threshold w.r.t. an error metric. We present operators to derive approximated functions and present algorithms to exactly compute the error metrics directly on the BDD representation. An experimental evaluation demonstrates the app...

  1. Tree wavelet approximations with applications

    Institute of Scientific and Technical Information of China (English)

    XU Yuesheng; ZOU Qingsong

    2005-01-01

    We construct a tree wavelet approximation by using a constructive greedy scheme (CGS). We define a function class which contains the functions whose piecewise polynomial approximations generated by the CGS have a prescribed global convergence rate and establish embedding properties of this class. We provide sufficient conditions on a tree index set and on bi-orthogonal wavelet bases which ensure optimal order of convergence for the wavelet approximations encoded on the tree index set using the bi-orthogonal wavelet bases. We then show that if we use the tree index set associated with the partition generated by the CGS to encode a wavelet approximation, it gives optimal order of convergence.

  2. Dissecting Causal Pathways Using Mendelian Randomization with Summarized Genetic Data: Application to Age at Menarche and Risk of Breast Cancer.

    Science.gov (United States)

    Burgess, Stephen; Thompson, Deborah J; Rees, Jessica M B; Day, Felix R; Perry, John R; Ong, Ken K

    2017-08-23

    Mendelian randomization is the use of genetic variants as instrumental variables to estimate causal effects of risk factors on outcomes. The total causal effect of a risk factor is the change in the outcome resulting from intervening on the risk factor. This total causal effect may potentially encompass multiple mediating mechanisms. For a proposed mediator, the direct effect of the risk factor is the change in the outcome resulting from a change in the risk factor keeping the mediator constant. A difference between the total effect and the direct effect indicates that the causal pathway from the risk factor to the outcome acts at least in part via the mediator (an indirect effect). Here, we show that Mendelian randomization estimates of total and direct effects can be obtained using summarized data on genetic associations with the risk factor, mediator, and outcome, potentially from different data sources. We perform simulations to test the validity of this approach when there is unmeasured confounding and/or bidirectional effects between the risk factor and mediator. We illustrate this method using the relationship between age at menarche and risk of breast cancer, with body mass index (BMI) as a potential mediator. We show an inverse direct causal effect of age at menarche on risk of breast cancer (independent of BMI) and a positive indirect effect via BMI. In conclusion, multivariable Mendelian randomization using summarized genetic data provides a rapid and accessible analytic strategy that can be undertaken using publicly-available data to better understand causal mechanisms. Copyright © 2017, Genetics.
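
    In the simplest single-variant ratio form, the summarized-data estimates combine as follows (inverse-variance weighting over multiple variants works analogously); here $\hat\beta_X$, $\hat\beta_M$ and $\hat\beta_Y$ denote the summarized associations of a variant with the risk factor, mediator and outcome:

      \[
      \hat\theta_{\mathrm{total}} = \frac{\hat\beta_Y}{\hat\beta_X}, \qquad
      \hat\theta_{\mathrm{indirect}} = \hat\theta_{\mathrm{total}} - \hat\theta_{\mathrm{direct}},
      \]

    where $\hat\theta_{\mathrm{direct}}$ is obtained from a multivariable model fitted across variants $j$, $\hat\beta_{Y,j} \approx \theta_{\mathrm{direct}}\,\hat\beta_{X,j} + \theta_{M}\,\hat\beta_{M,j}$.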

  3. Diophantine approximation and automorphic spectrum

    CERN Document Server

    Ghosh, Anish; Nevo, Amos

    2010-01-01

    The present paper establishes quantitative estimates on the rate of Diophantine approximation in homogeneous varieties of semisimple algebraic groups. The estimates established generalize and improve previous ones, and are sharp in a number of cases. We show that the rate of Diophantine approximation is controlled by the spectrum of the automorphic representation, and is thus subject to the generalised Ramanujan conjectures.

  4. Some results in Diophantine approximation

    DEFF Research Database (Denmark)

    the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...

  5. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density...

  6. Uniform approximation by (quantum) polynomials

    NARCIS (Netherlands)

    Drucker, A.; de Wolf, R.

    2011-01-01

    We show that quantum algorithms can be used to re-prove a classical theorem in approximation theory, Jackson's Theorem, which gives a nearly-optimal quantitative version of Weierstrass's Theorem on uniform approximation of continuous functions by polynomials. We provide two proofs, based respectivel

  7. Formalization and separation: A systematic basis for interpreting approaches to summarizing science for climate policy.

    Science.gov (United States)

    Sundqvist, Göran; Bohlin, Ingemar; Hermansen, Erlend A T; Yearley, Steven

    2015-06-01

    In studies of environmental issues, the question of how to establish a productive interplay between science and policy is widely debated, especially in relation to climate change. The aim of this article is to advance this discussion and contribute to a better understanding of how science is summarized for policy purposes by bringing together two academic discussions that usually take place in parallel: the question of how to deal with formalization (structuring the procedures for assessing and summarizing research, e.g. by protocols) and separation (maintaining a boundary between science and policy in processes of synthesizing science for policy). Combining the two dimensions, we draw a diagram onto which different initiatives can be mapped. A high degree of formalization and separation are key components of the canonical image of scientific practice. Influential Science and Technology Studies analysts, however, are well known for their critiques of attempts at separation and formalization. Three examples that summarize research for policy purposes are presented and mapped onto the diagram: the Intergovernmental Panel on Climate Change, the European Union's Science for Environment Policy initiative, and the UK Committee on Climate Change. These examples bring out salient differences concerning how formalization and separation are dealt with. Discussing the space opened up by the diagram, as well as the limitations of the attraction to its endpoints, we argue that policy analyses, including much Science and Technology Studies work, are in need of a more nuanced understanding of the two crucial dimensions of formalization and separation. Accordingly, two analytical claims are presented: one concerning trajectories (how organizations represented in the diagram move over time) and one concerning mismatches (how organizations fail to handle the two dimensions well in practice).

  8. Evaluation of a gene information summarization system by users during the analysis process of microarray datasets

    Directory of Open Access Journals (Sweden)

    Cohen Aaron

    2009-02-01

    Full Text Available Abstract Background Summarization of gene information in the literature has the potential to help genomics researchers translate basic research into clinical benefits. Gene expression microarrays have been used to study biomarkers for disease and discover novel types of therapeutics and the task of finding information in journal articles on sets of genes is common for translational researchers working with microarray data. However, manually searching and scanning the literature references returned from PubMed is a time-consuming task for scientists. We built and evaluated an automatic summarizer of information on genes studied in microarray experiments. The Gene Information Clustering and Summarization System (GICSS is a system that integrates two related steps of the microarray data analysis process: functional gene clustering and gene information gathering. The system evaluation was conducted during the process of genomic researchers analyzing their own experimental microarray datasets. Results The clusters generated by GICSS were validated by scientists during their microarray analysis process. In addition, presenting sentences in the abstract provided significantly more important information to the users than just showing the title in the default PubMed format. Conclusion The evaluation results suggest that GICSS can be useful for researchers in genomic area. In addition, the hybrid evaluation method, partway between intrinsic and extrinsic system evaluation, may enable researchers to gauge the true usefulness of the tool for the scientists in their natural analysis workflow and also elicit suggestions for future enhancements. Availability GICSS can be accessed online at: http://ir.ohsu.edu/jianji/index.html

  9. Global approximation of convex functions

    CERN Document Server

    Azagra, D

    2011-01-01

    We show that for every (not necessarily bounded) open convex subset $U$ of $\mathbb{R}^n$, every (not necessarily Lipschitz or strongly) convex function $f: U \to \mathbb{R}$ can be approximated by real analytic convex functions, uniformly on all of $U$. In doing so we provide a technique which transfers results on uniform approximation on bounded sets to results on uniform approximation on unbounded sets, in such a way that not only convexity and $C^k$ smoothness, but also local Lipschitz constants, minimizers, order, and strict or strong convexity, are preserved. This transfer method is quite general and it can also be used to obtain new results on approximation of convex functions defined on Riemannian manifolds or Banach spaces. We also provide a characterization of the class of convex functions which can be uniformly approximated on $\mathbb{R}^n$ by strongly convex functions.

  10. Approximate circuits for increased reliability

    Energy Technology Data Exchange (ETDEWEB)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
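
    A behavioral sketch of the voting idea follows, with three hypothetical 2-input AND approximations that each err on a different input; because a majority agrees with the reference circuit on every input, the voted output is exact.

      # Majority voting over approximate circuits: per-input errors cancel as
      # long as no input makes a majority of the circuits wrong at once.
      def majority(bits):
          return int(sum(bits) * 2 > len(bits))

      def voted_circuit(approx_circuits, inputs):
          """Evaluate every approximate circuit and vote on the outputs."""
          return majority([c(inputs) for c in approx_circuits])

      # Three hypothetical AND approximations, each wrong on one distinct input:
      c1 = lambda x: int(x[0] and x[1]) ^ (x == (0, 0))   # wrong only on (0,0)
      c2 = lambda x: int(x[0] and x[1]) ^ (x == (0, 1))   # wrong only on (0,1)
      c3 = lambda x: int(x[0] and x[1]) ^ (x == (1, 0))   # wrong only on (1,0)
      for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
          assert voted_circuit([c1, c2, c3], x) == int(x[0] and x[1])  # vote is exact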

  11. Approximate circuits for increased reliability

    Energy Technology Data Exchange (ETDEWEB)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  12. Using LSA and text segmentation to improve automatic Chinese dialogue text summarization

    Institute of Scientific and Technical Information of China (English)

    LIU Chuan-han; WANG Yong-cheng; ZHENG Fei; LIU De-rong

    2007-01-01

    Automatic Chinese text summarization for dialogue style is a relatively new research area. In this paper, Latent Semantic Analysis (LSA) is first used to extract semantic knowledge from a given document and all question paragraphs are identified; an automatic text segmentation approach analogous to TextTiling is exploited to improve the precision of correlating question paragraphs and answer paragraphs; and finally some "important" sentences are extracted from the generic content and the question-answer pairs to generate a complete summary. Experimental results showed that our approach is highly efficient and significantly improves the coherence of the summary while not compromising informativeness.

  13. Decommissioning of the ASTRA research reactor: Planning, executing and summarizing the project

    Directory of Open Access Journals (Sweden)

    Meyer Franz

    2010-01-01

    Full Text Available The decommissioning of the ASTRA research reactor at the Austrian Research Centres Seibersdorf was described in three technical papers published in Nuclear Technology & Radiation Protection in 2003, 2006, and 2008. Following a suggestion from the IAEA, the project was revisited well after the files were closed, this time with respect to administrative rather than technical matters: starting with the project mission, explaining the project structure, and identifying the key factors and key performance indicators. The continuous documentation and reporting system, implemented to fulfil the informational needs of stakeholders, management, and project staff alike, is described. Finally, the project is summarized in relation to the performance indicators.

  14. Summarizing background report for Energy Strategy 2025; Sammenfattende baggrundsrapport for Energistrategi 2025

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-06-01

    The Danish Government's long-term energy strategy follows up on the political agreement of 29 March 2004. The energy strategy is a coherent formulation of the Government's long-term energy policy. Its pivotal points are liberalized energy markets and market-based tools for obtaining goals such as efficiency, security of supply, and environmental protection. The focus is increasingly on the substantial business potential in developing new and more efficient energy technology, in which Denmark holds several globally strong positions. Furthermore, transportation energy consumption is included directly in an energy strategy for the first time. Together with the energy strategy, a summarizing background report from the Danish Energy Agency with facts, analyses, and evaluations is published, as well as a report from energinet.dk that summarizes the system operator's input to the part of the energy strategy that deals with power infrastructure. (BA)

  15. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. A proposed patch-based image dissimilarity measure is then used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
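
    The exact HIP definition is not given in the abstract; the sketch below illustrates one plausible entropy-based heterogeneity measure over image patches (patch size, binning, and the use of patch means are all assumptions):

    ```python
    import numpy as np

    def heterogeneity_index(frame: np.ndarray, patch: int = 8, bins: int = 32) -> float:
        """Shannon entropy of the distribution of patch means in a grayscale frame."""
        h, w = frame.shape
        means = [frame[i:i+patch, j:j+patch].mean()
                 for i in range(0, h - patch + 1, patch)
                 for j in range(0, w - patch + 1, patch)]
        hist, _ = np.histogram(means, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    flat = np.full((64, 64), 128.0)          # homogeneous frame -> entropy 0
    noisy = rng.uniform(0, 255, (64, 64))    # heterogeneous frame -> higher
    print(heterogeneity_index(flat), heterogeneity_index(noisy))
    ```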

  16. Automatic video summarization driven by a spatio-temporal attention model

    Science.gov (United States)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified into two categories by the nature of their output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images selected from the original video that have significant semantic content. The difference between these two categories narrows when we consider automatic procedures. Most published approaches are based on the image signal and use pixel characterization, histogram techniques, or block-based image decomposition. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced, simulating human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color, and temporal contrasts. For each channel, the variations of salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation, which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.

  17. Ethics and Scientific Publication

    Science.gov (United States)

    Benos, Dale J.; Fabres, Jorge; Farmer, John; Gutierrez, Jessica P.; Hennessy, Kristin; Kosek, David; Lee, Joo Hyoung; Olteanu, Dragos; Russell, Tara; Wang, Kai

    2005-01-01

    This article summarizes the major categories of ethical violations encountered during submission, review, and publication of scientific articles. We discuss data fabrication and falsification, plagiarism, redundant and duplicate publication, conflict of interest, authorship, animal and human welfare, and reviewer responsibility. In each section,…

  19. Rytov approximation in electron scattering

    Science.gov (United States)

    Krehl, Jonas; Lubk, Axel

    2017-06-01

    In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects, which are the most pressing limitation of conventional models. While predominantly used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.
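
    As a textbook-level reminder (not taken from the cited paper), the first-order Rytov approximation writes the total field as a complex phase perturbation of the incident field:

    ```latex
    \[
      \psi(\mathbf r) = \psi_0(\mathbf r)\, e^{\phi_1(\mathbf r)},
      \qquad
      \phi_1(\mathbf r) = \frac{1}{\psi_0(\mathbf r)}
        \int G(\mathbf r,\mathbf r')\, V(\mathbf r')\, \psi_0(\mathbf r')\, \mathrm d^3 r',
    \]
    ```

    where $\psi_0$ is the unperturbed incident wave, $G$ the Green's function of the free equation, and $V$ the scattering potential; the linearity of $\phi_1$ in $V$ is what makes the approximation attractive for tomographic reconstruction.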

  20. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  1. Approximate common divisors via lattices

    CERN Document Server

    Cohn, Henry

    2011-01-01

    We analyze the multivariate generalization of Howgrave-Graham's algorithm for the approximate common divisor problem. In the m-variable case with modulus N and approximate common divisor of size N^beta, this improves the size of the error tolerated from N^(beta^2) to N^(beta^((m+1)/m)), under a commonly used heuristic assumption. This gives a more detailed analysis of the hardness assumption underlying the recent fully homomorphic cryptosystem of van Dijk, Gentry, Halevi, and Vaikuntanathan. While these results do not challenge the suggested parameters, a 2^sqrt(n) approximation algorithm for lattice basis reduction in n dimensions could be used to break these parameters. We have implemented our algorithm, and it performs better in practice than the theoretical analysis suggests. Our results fit into a broader context of analogies between cryptanalysis and coding theory. The multivariate approximate common divisor problem is the number-theoretic analogue of noisy multivariate polynomial interpolation, and we ...

  2. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

  3. Binary nucleation beyond capillarity approximation

    NARCIS (Netherlands)

    Kalikmanov, V.I.

    2010-01-01

    Large discrepancies between binary classical nucleation theory (BCNT) and experiments result from adsorption effects and inability of BCNT, based on the phenomenological capillarity approximation, to treat small clusters. We propose a model aimed at eliminating both of these deficiencies. Adsorption

  4. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form $w^n P_n$. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics for some $L_p$ extremal problems on the real line with exponential weights, which, for the case $p=2$, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics for some $L_p$ extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, the asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  5. Nonlinear approximation with redundant dictionaries

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, M.; Gribonval, R.

    2005-01-01

    In this paper we study nonlinear approximation and data representation with redundant function dictionaries. In particular, approximation with redundant wavelet bi-frame systems is studied in detail. Several results for orthonormal wavelets are generalized to the redundant case. In general, for a wavelet bi-frame system the approximation properties are limited by the number of vanishing moments of the system. In some cases this can be overcome by oversampling, but at the price of replacing the canonical expansion by another linear expansion. Moreover, for special non-oversampled wavelet bi-frames we can obtain good approximation properties not restricted by the number of vanishing moments, but again without using the canonical expansion.

  6. Private well water in Colorado: collaboration, data use, and public health outreach.

    Science.gov (United States)

    Brown, Eric M; Van Dyke, Mike; Kuhn, Stephanie; Mitchell, Jane; Dalton, Hope

    2015-01-01

    As a result of participating in the Centers for Disease Control and Prevention's Private Well Initiative and Environmental Public Health Tracking Network (Tracking), the Colorado Department of Public Health and Environment was able to inventory private well water quality data, prioritize potential health concerns associated with drinking water from these wells, and create a Web portal for sharing public health information regarding private well water. The Colorado Department of Public Health and Environment collaborated with a local health department to pilot the project prior to a public implementation. Approximately 18 data sets were identified and inventoried. The Colorado Department of Public Health and Environment also participated in development and pilot testing of best practices for display of well water quality data with other Tracking states. Available data sets were compiled and summarized, and the data made available on the Colorado Tracking portal using geographic information system technology to support public health outreach regarding private wells.

  7. Mathematical algorithms for approximate reasoning

    Science.gov (United States)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single, often ad hoc, strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The paper focuses on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. The approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
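
    The combination rules named in the abstract can be stated concretely for two assertions with probabilities p and q; the values below are illustrative only:

    ```python
    def independent_and(p, q):        # statistically independent assertions
        return p * q

    def exclusive_or(p, q):           # mutually exclusive assertions (disjunction)
        return min(p + q, 1.0)

    def fuzzy_and(p, q):              # maximum overlap: fuzzy-logic conjunction
        return min(p, q)

    def pessimistic_and(p, q):        # minimum overlap: worst-case conjunction
        return max(p + q - 1.0, 0.0)  # Frechet lower bound

    p, q = 0.7, 0.6
    for rule in (independent_and, exclusive_or, fuzzy_and, pessimistic_and):
        print(rule.__name__, round(rule(p, q), 3))
    ```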

  8. Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events

    CERN Document Server

    Afantenos, Stergos D; Stamatopoulos, P; Halatsis, C

    2007-01-01

    In this paper we present a fresh look at the problem of summarizing evolving events from multiple sources. After a discussion concerning the nature of evolving events, we introduce a distinction between linearly and non-linearly evolving events. We then present a general methodology for the automatic creation of summaries from evolving events. At its heart lie the notions of Synchronic and Diachronic cross-document Relations (SDRs), whose aim is the identification of similarities and differences between sources from a synchronic and diachronic perspective. SDRs do not connect documents or textual elements found therein, but structures one might call messages. Applying this methodology yields a set of messages and the relations, SDRs, connecting them, that is, a graph which we call a grid. We show how such a grid can be considered as the starting point of a Natural Language Generation system. The methodology is evaluated in two case studies, one for linearly evolving events (descriptions of football matc...

  9. Towards Real-Time Summarization of Scheduled Events from Twitter Streams

    CERN Document Server

    Zubiaga, Arkaitz; Amigó, Enrique; Gonzalo, Julio

    2012-01-01

    This paper explores the real-time summarization of scheduled events such as soccer games from torrential flows of Twitter streams. We propose and evaluate an approach that substantially shrinks the stream of tweets in real-time, and consists of two steps: (i) sub-event detection, which determines if something new has occurred, and (ii) tweet selection, which picks a representative tweet to describe each sub-event. We compare the summaries generated in three languages for all the soccer games in "Copa America 2011" to reference live reports offered by Yahoo! Sports journalists. We show that simple text analysis methods which do not involve external knowledge lead to summaries that cover 84% of the sub-events on average, and 100% of key types of sub-events (such as goals in soccer). Our approach should be straightforwardly applicable to other kinds of scheduled events such as other sports, award ceremonies, keynote talks, TV shows, etc.
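
    A rough sketch of the two-step pipeline described here, with hypothetical windowed tweets and a simple volume-spike heuristic standing in for the paper's sub-event detector:

    ```python
    from collections import Counter

    def detect_subevents(windows, ratio=2.0):
        """windows: list of tweet lists, one per time slot; flag volume spikes."""
        return [t for t in range(1, len(windows))
                if len(windows[t]) >= ratio * max(1, len(windows[t - 1]))]

    def representative(tweets):
        """Pick the tweet whose words best match the window's vocabulary."""
        vocab = Counter(w for tw in tweets for w in tw.lower().split())
        return max(tweets, key=lambda tw: sum(vocab[w] for w in tw.lower().split()))

    windows = [["kick off"],
               ["GOAL for the home side!", "goal!!", "what a goal", "1-0 goal"]]
    for t in detect_subevents(windows):
        print(f"slot {t}: {representative(windows[t])}")
    ```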

  10. Query sensitive comparative summarization of search results using concept based segmentation

    CERN Document Server

    Chitra, P; Sarukesi, K

    2012-01-01

    Query sensitive summarization aims at providing users with a summary of the contents of single or multiple web pages based on the search query. This paper proposes a novel idea of generating a comparative summary from a set of URLs in the search results. The user selects a set of web page links from the results produced by a search engine, and a comparative summary of these selected web sites is generated. The method makes use of the HTML DOM tree structure of the web pages. HTML documents are segmented into sets of concept blocks, and a sentence score for each concept block is computed with respect to the query and feature keywords. The important sentences from the concept blocks of different web pages are extracted to compose the comparative summary on the fly. This system reduces the time and effort required for the user to browse various web sites to compare information, and the resulting comparative summary helps users make quick decisions.
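
    A minimal sketch of query-sensitive scoring (hypothetical pages and query; a naive term-overlap score stands in for the paper's sentence scoring):

    ```python
    def score(sentence: str, query_terms: set) -> int:
        # Count query terms appearing in the sentence.
        return sum(1 for w in sentence.lower().split() if w in query_terms)

    pages = {
        "site-a": ["battery lasts ten hours", "ships with a charger"],
        "site-b": ["battery life is about six hours", "screen is excellent"],
    }
    query = {"battery", "hours"}

    # One best sentence per page, composed into a comparative summary.
    comparative_summary = {
        url: max(sentences, key=lambda s: score(s, query))
        for url, sentences in pages.items()
    }
    for url, sentence in comparative_summary.items():
        print(url, "->", sentence)
    ```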

  11. Twisted inhomogeneous Diophantine approximation and badly approximable sets

    CERN Document Server

    Harrap, Stephen

    2010-01-01

    For any real pair $i, j \geq 0$ with $i+j=1$, let $\mathrm{Bad}(i, j)$ denote the set of $(i, j)$-badly approximable pairs. That is, $\mathrm{Bad}(i, j)$ consists of irrational vectors $x:=(x_1, x_2) \in \R^2$ for which there exists a positive constant $c(x)$ such that $\max\{\|qx_1\|^{1/i}, \|qx_2\|^{1/j}\} > c(x)/q$ for all $q \in \mathbb{N}$. Building on a result of Kurzweil, a new characterization of the set $\mathrm{Bad}(i, j)$ in terms of `well-approximable' vectors in the area of `twisted' inhomogeneous Diophantine approximation is established. In addition, it is shown that $\mathrm{Bad}^x(i, j)$, the `twisted' inhomogeneous analogue of $\mathrm{Bad}(i, j)$, has full Hausdorff dimension 2 when $x$ is chosen from the set $\mathrm{Bad}(i, j)$.

  12. A spatio-temporal mining approach towards summarizing and analyzing protein folding trajectories

    Directory of Open Access Journals (Sweden)

    Ucar Duygu

    2007-04-01

    Full Text Available Understanding the protein folding mechanism remains a grand challenge in structural biology. In the past several years, computational theories in molecular dynamics have been employed to shed light on the folding process. Coupled with high computing power and large-scale storage, researchers can now computationally simulate the protein folding process in atomistic detail at femtosecond temporal resolution. Such simulations often produce a large number of folding trajectories, each consisting of a series of 3D conformations of the protein under study. As a result, effectively managing and analyzing such trajectories is becoming increasingly important. In this article, we present a spatio-temporal mining approach to analyze protein folding trajectories. It exploits the simplicity of contact maps, while also integrating 3D structural information in the analysis. It characterizes the dynamic folding process by first identifying spatio-temporal association patterns in contact maps, then studying how such patterns evolve along a folding trajectory. We demonstrate that such patterns can be leveraged to summarize folding trajectories, and to facilitate the detection and ordering of important folding events along a folding path. We also show that such patterns can be used to identify a consensus partial folding pathway across multiple folding trajectories. Furthermore, we argue that such patterns can capture both local and global structural topology in a 3D protein conformation, thereby facilitating effective structural comparison among conformations. We apply this approach to analyze the folding trajectories of two small synthetic proteins, BBA5 and GSGS (or Beta3S). We show that this approach is promising for addressing the above issues, namely, folding trajectory summarization, folding event detection and ordering, and consensus partial folding pathway identification across trajectories.

  13. FUSE: a profit maximization approach for functional summarization of biological networks

    Directory of Open Access Journals (Sweden)

    Seah Boon-Siew

    2012-03-01

    Full Text Available Background The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher-level organization and modularity within the protein interaction network (PPI) using graph-theoretic analysis. Despite recent progress, systems-level analysis of PPIs remains a daunting task, as it is challenging to make sense of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high-level views of their functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits the Minimum Description Length (MDL) principle to maximize the information gain of the summary graph while satisfying the level-of-detail constraint. Results We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using the AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. Conclusion By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.

  14. Formative evaluation of a patient-specific clinical knowledge summarization tool.

    Science.gov (United States)

    Del Fiol, Guilherme; Mostafa, Javed; Pu, Dongqiuye; Medlin, Richard; Slager, Stacey; Jonnalagadda, Siddhartha R; Weir, Charlene R

    2016-02-01

    To iteratively design a prototype of a computerized clinical knowledge summarization (CKS) tool aimed at helping clinicians find answers to their clinical questions, and to conduct a formative assessment of the usability, usefulness, efficiency, and impact of the CKS prototype on physicians' perceived decision quality compared with standard search of UpToDate and PubMed. Mixed-methods observations of the interactions of 10 physicians with the CKS prototype vs. standard search in an effort to solve clinical problems posed as case vignettes. The CKS tool automatically summarizes patient-specific and actionable clinical recommendations from PubMed (high-quality randomized controlled trials and systematic reviews) and UpToDate. Two thirds of the study participants completed 15 out of 17 usability tasks. The median time to task completion was less than 10 seconds for 12 of the 17 tasks. The difference in search time between the CKS and standard search was not significant (median = 4.9 vs. 4.5 min). Physicians' perceived decision quality was significantly higher with the CKS than with manual search (mean = 16.6 vs. 14.4; p = 0.036). The CKS prototype was well accepted by physicians in terms of both usability and usefulness. Physicians perceived better decision quality with the CKS prototype compared to standard search of PubMed and UpToDate within a similar search time. Due to the formative nature of this study and the small sample size, conclusions regarding efficiency and efficacy are exploratory.

  15. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    Science.gov (United States)

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube, and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activity while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely work with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed, and a total of 120 min of surveillance video was recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.

  16. Assumption- versus data-based approaches to summarizing species' ranges.

    Science.gov (United States)

    Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro

    2016-08-04

    For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely include many areas that do not hold populations. Refinement steps involve typological assumptions about the habitat preferences and elevational ranges of species, which can introduce substantial error into estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies go unnoticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable, given their reliance on assumptions and their lack of real spatial resolution and of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer solutions based on a minimum of assumptions that can be evaluated and validated quantitatively, providing a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications.

  17. Compressibility Corrections to Closure Approximations for Turbulent Flow Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Cloutman, L D

    2003-02-01

    We summarize some modifications to the usual closure approximations for statistical models of turbulence that are necessary for use with compressible fluids at all Mach numbers. We concentrate here on the gradient-flux approximation for the turbulent heat flux, on the buoyancy production of turbulence kinetic energy, and on a modification of the Smagorinsky model to include buoyancy. In all cases, there are pressure-gradient terms that do not appear in the incompressible models and are usually omitted in compressible-flow models. Omission of these terms allows unphysical rates of entropy change.

  18. Reinforcement Learning via AIXI Approximation

    CERN Document Server

    Veness, Joel; Hutter, Marcus; Silver, David

    2010-01-01

    This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. This approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a Monte Carlo Tree Search algorithm along with an agent-specific extension of the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a number of stochastic, unknown, and partially observable domains.

  19. Approximate Matching of Hierarchical Data

    DEFF Research Database (Denmark)

    Augsten, Nikolaus

    The goal of this thesis is to design, develop, and evaluate new methods for the approximate matching of hierarchical data represented as labeled trees. In approximate matching scenarios two items should be matched if they are similar. Computing the similarity between labeled trees is hard, as in addition to the data values the structure must also be considered. A well-known measure for comparing trees is the tree edit distance. It is computationally expensive and leads to a prohibitively high run time. Our solution for the approximate matching of hierarchical data is pq-grams. We formally prove that the pq-gram index can be incrementally updated based on the log of edit operations without reconstructing intermediate tree versions. The incremental update is independent of the data size and scales to a large number of changes in the data. We introduce windowed pq...

  20. Concept Approximation between Fuzzy Ontologies

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Fuzzy ontologies are efficient tools for handling fuzzy and uncertain knowledge on the semantic web, but heterogeneity problems arise when seeking interoperability among different fuzzy ontologies. This paper uses concept approximation between fuzzy ontologies, based on instances, to solve these heterogeneity problems. It first proposes an instance selection technique based on instance clustering and weighting to unify the fuzzy interpretations of different ontologies and to reduce the number of instances for greater efficiency. The paper then reduces the problem of computing the approximations of concepts to that of computing the least upper approximations of atom concepts. It optimizes the search strategies by extending atom concept sets and defining the least upper bounds of concepts to reduce the search space of the problem. An efficient algorithm for searching the least upper bounds of concepts is given.

  1. Approximating Graphic TSP by Matchings

    CERN Document Server

    Mömke, Tobias

    2011-01-01

    We present a framework for approximating the metric TSP based on a novel use of matchings. Traditionally, matchings have been used to add edges in order to make a given graph Eulerian, whereas our approach also allows for the removal of certain edges leading to a decreased cost. For the TSP on graphic metrics (graph-TSP), the approach yields a 1.461-approximation algorithm with respect to the Held-Karp lower bound. For graph-TSP restricted to a class of graphs that contains degree three bounded and claw-free graphs, we show that the integrality gap of the Held-Karp relaxation matches the conjectured ratio 4/3. The framework allows for generalizations in a natural way and also leads to a 1.586-approximation algorithm for the traveling salesman path problem on graphic metrics where the start and end vertices are prespecified.

  2. Diophantine approximation and Dirichlet series

    CERN Document Server

    Queffélec, Hervé

    2013-01-01

    This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...

  3. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  4. Approximate Sparse Regularized Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Chengzhi Deng

    2014-01-01

    Full Text Available Sparse regression based unmixing has recently been proposed to estimate the abundance of materials present in hyperspectral image pixels. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely approximate sparse unmixing (ASU), is first proposed to perform the unmixing task for hyperspectral remote sensing imagery. A variable splitting and augmented Lagrangian algorithm is then introduced to tackle the optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it is sparser than the l1 regularizer and much easier to solve than the l0 regularizer. Three simulated and one real hyperspectral image were used to evaluate the performance of the proposed algorithm in comparison to the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.

  5. AUTOMATIC SUMMARIZATION OF WEB FORUMS AS SOURCES OF PROFESSIONALLY SIGNIFICANT INFORMATION

    Directory of Open Access Journals (Sweden)

    K. I. Buraya

    2016-07-01

    Full Text Available Subject of Research. The competitive advantage of a modern specialist is the widest possible coverage of information sources useful for obtaining relevant, professionally significant information (PSI). Among these sources, professional web forums occupy a significant place. The paper considers the problem of automatic forum text summarization, i.e. identification of those fragments that contain professionally relevant information. Method. The research is based on statistical analysis of forum texts by means of machine learning. Six web forums covering technologies of various subject domains were selected for the research. The marking of forums was carried out by experts. Using various machine learning methods, models were designed reflecting the functional relationship between the estimated characteristics of PSI extraction quality and features of posts. The cumulative NDCG metric and its dispersion were used to assess model quality. Main Results. We have shown that request context plays an important role in assessing PSI extraction efficiency. Request contexts characteristic of PSI extraction have been selected, reflecting different interpretations of users' information needs, designated by the terms relevance and informational content, and estimation scales corresponding to worldwide approaches have been designed for them. We have experimentally confirmed that the results of forum summarization carried out manually by experts depend significantly on request context. We have shown that, in the general assessment of PSI extraction efficiency, relevance is rather well described by a linear combination of features, while the informational content assessment requires a nonlinear combination. At the same time, in the relevance assessment the leading role is played by features connected with keywords, and in the informational content
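
    The abstract leans on the NDCG metric without defining it; as a reminder, here is a minimal implementation (the graded relevance values are invented for illustration):

    ```python
    import math

    def dcg(relevances):
        """Discounted cumulative gain of a ranked list (0-based ranks)."""
        return sum(rel / math.log2(rank + 2)
                   for rank, rel in enumerate(relevances))

    def ndcg(relevances):
        """DCG normalized by the DCG of the ideal (sorted) ranking."""
        ideal = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / ideal if ideal > 0 else 0.0

    print(ndcg([3, 2, 3, 0, 1]))   # graded relevance of ranked forum posts
    ```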

  6. Approximate Bayesian Computation in hydrologic modeling: equifinality of formal and informal approaches

    Directory of Open Access Journals (Sweden)

    M. Sadegh

    2013-04-01

    Full Text Available In recent years, a strong debate has emerged in the hydrologic literature about how to properly treat non-traditional error residual distributions and quantify parameter and predictive uncertainty. In particular, there is strong disagreement whether such an uncertainty framework should have its roots in a proper statistical (Bayesian) context using Markov chain Monte Carlo (MCMC) simulation techniques, or whether it should be based on a quite different philosophy that implements informal likelihood functions and simplistic search methods to summarize parameter and predictive distributions. In this paper we introduce an alternative framework, called Approximate Bayesian Computation (ABC), that summarizes the differing viewpoints of formal and informal Bayesian approaches. This methodology has recently emerged in the fields of biology and population genetics and relaxes the need for an explicit likelihood function in favor of one or more summary statistics that measure the distance of each model simulation to the data. This paper is a follow-up to the recent publication of Nott et al. (2012) and further studies the theoretical and numerical equivalence of formal (DREAM) and informal (GLUE) Bayesian approaches using data from different watersheds in the United States. We demonstrate that the limits-of-acceptability approach of GLUE is a special variant of ABC in which each discharge observation of the calibration data set is used as a summary diagnostic.
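
    As a rough illustration of the ABC idea described here (not the paper's DREAM/GLUE setup), the following sketch accepts prior draws whose simulated summary statistic lands within a tolerance of the observed one; the Gaussian toy model, prior range, and tolerance are all assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    observed = rng.normal(5.0, 1.0, 100)        # stand-in for discharge data
    obs_stat = observed.mean()                  # chosen summary statistic

    def simulate(theta, n=100):
        return rng.normal(theta, 1.0, n)

    accepted = []
    for _ in range(20000):
        theta = rng.uniform(0.0, 10.0)          # draw from the prior
        if abs(simulate(theta).mean() - obs_stat) < 0.1:   # tolerance epsilon
            accepted.append(theta)              # keep draws close to the data

    print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
    ```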

  7. Transfinite Approximation of Hindman's Theorem

    CERN Document Server

    Beiglböck, Mathias

    2010-01-01

    Hindman's Theorem states that in any finite coloring of the integers, there is an infinite set all of whose finite sums belong to the same color. This is much stronger than the corresponding finite form, stating that in any finite coloring of the integers there are arbitrarily long finite sets with the same property. We extend the finite form of Hindman's Theorem to a "transfinite" version for each countable ordinal, and show that Hindman's Theorem is equivalent to the appropriate transfinite approximation holding for every countable ordinal. We then give a proof of Hindman's Theorem by directly proving these transfinite approximations.

  8. How to present, summarize, and defend your poster at the meeting.

    Science.gov (United States)

    Campbell, Robert S

    2004-10-01

    For many people public speaking induces stress and fear, but with adequate planning, practice, and understanding of the "dos and don'ts" you can deliver presentations that will communicate your research clearly, succinctly, and with a professional and confident demeanor. This article provides a guide for the novice researcher to develop the skills to deliver several types of presentation and to minimize (and even make use of) the stress and fear. Planning and practice are the key to success.

  9. Tree wavelet approximations with applications

    Institute of Scientific and Technical Information of China (English)

    2005-01-01


  10. Summarizing US Wildlife Trade with an Eye Toward Assessing the Risk of Infectious Disease Introduction.

    Science.gov (United States)

    Smith, K M; Zambrana-Torrelio, C; White, A; Asmussen, M; Machalaba, C; Kennedy, S; Lopez, K; Wolf, T M; Daszak, P; Travis, D A; Karesh, W B

    2017-03-01

    The aim of this study was to characterize the role of the USA in the global exchange of wildlife and to describe high-volume trade with an eye toward prioritizing health risk assessment questions for further analysis. Here we summarize nearly 14 years (2000-2013) of the most comprehensive data available (the USFWS LEMIS system), involving 11 billion individual specimens and an additional 977 million kilograms of wildlife. The majority of shipments contained mammals (27%), while the majority of specimens imported were shells (57%) and tropical fish (25%). Most imports were facilitated by the aquatic and pet industries, resulting in one-third of all shipments containing live animals. The importer-reported origin of wildlife was 77.7% wild-caught and 17.7% captive-reared. Indonesia was the leading exporter of legal shipments, while Mexico was the leading source reported for illegal shipments. At the specimen level, China was the leading exporter of legal and illegal wildlife imports. The number of annual declared shipments doubled during the period examined, illustrating continually increasing demand, which reinforces the need to scale up capacity for border inspections, risk management protocols, and disease surveillance.

  11. A supertree pipeline for summarizing phylogenetic and taxonomic information for millions of species

    Science.gov (United States)

    Redelings, Benjamin D.

    2017-01-01

    We present a new supertree method that enables rapid estimation of a summary tree on the scale of millions of leaves. This supertree method summarizes a collection of input phylogenies and an input taxonomy. We introduce formal goals and criteria for such a supertree to satisfy in order to transparently and justifiably represent the input trees. In addition to producing a supertree, our method computes annotations that describe which groupings in the input trees support and conflict with each group in the supertree. We compare our supertree construction method to a previously published one by assessing their performance on the input trees used to construct the Open Tree of Life version 4, and find that our method increases the number of displayed input splits from 35,518 to 39,639 and decreases the number of conflicting input splits from 2,760 to 1,357. The new method also improves on the previous one in that it produces no unsupported branches and avoids unnecessary polytomies. This pipeline, called "propinquity", is currently used by the Open Tree of Life project to produce all versions of the project's "synthetic tree" starting at version 5. It relies heavily on "otcetera", a set of C++ tools that perform most of the steps of the pipeline. All of the components are free software and are available on GitHub.

  12. LexRank: Graph-based Lexical Centrality as Salience in Text Summarization

    CERN Document Server

    Erkan, G; 10.1613/jair.1523

    2011-01-01

    We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluatio...
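
    A minimal sketch of the core LexRank computation, assuming toy bag-of-words vectors and the damped continuous variant (the paper also considers a thresholded similarity graph):

    ```python
    import numpy as np

    def cosine_matrix(X):
        """Pairwise cosine similarities between row vectors."""
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        Xn = X / np.where(norms == 0, 1, norms)
        return Xn @ Xn.T

    def lexrank(X, damping=0.85, iters=50):
        S = cosine_matrix(X)
        P = S / S.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
        n = len(X)
        r = np.full(n, 1.0 / n)
        for _ in range(iters):                  # power iteration for the
            r = (1 - damping) / n + damping * (P.T @ r)   # stationary vector
        return r

    # Rows: toy term-frequency vectors for four sentences.
    X = np.array([[2, 1, 0], [1, 2, 0], [0, 1, 2], [2, 1, 1]], float)
    print(np.argsort(lexrank(X))[::-1])         # sentence indices by salience
    ```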

  13. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    Science.gov (United States)

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of the entities present in it. Entity Discovery is the task of building an appearance model for each entity (e.g., a person) and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at the tracklet level. We extend the Chinese Restaurant Process (CRP) to TC-CRP, and further to the Temporally Coherent Chinese Restaurant Franchise (TC-CRF), to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity, and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos, unlike existing approaches, and can automatically reject false tracklets. Finally, we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities to create a semantically meaningful summary.
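
    The paper's TC-CRP/TC-CRF inference is not reproduced here; the toy sketch below only illustrates the underlying idea of a CRP-style sequential assignment biased toward the previous tracklet's entity (the concentration and bonus weights are assumptions):

    ```python
    import random

    random.seed(0)

    def crp_tc_assign(n_tracklets, alpha=1.0, tc_bonus=3.0):
        counts, labels = [], []
        for _ in range(n_tracklets):
            weights = counts[:] + [alpha]         # existing entities + a new one
            if labels:                            # temporal coherence: boost the
                weights[labels[-1]] += tc_bonus   # previous tracklet's entity
            r, acc = random.random() * sum(weights), 0.0
            for k, w in enumerate(weights):
                acc += w
                if r <= acc:
                    break
            if k == len(counts):                  # sampled the "new entity" slot
                counts.append(0)
            counts[k] += 1
            labels.append(k)
        return labels

    print(crp_tc_assign(12))   # entity label per tracklet, in temporal order
    ```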

  14. Summarizing and visualizing structural changes during the evolution of biomedical ontologies using a Diff Abstraction Network.

    Science.gov (United States)

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Haendel, Melissa; Brush, Matthew; Arabandi, Sivaram; Tu, Samson

    2015-08-01

    Biomedical ontologies are a critical component in biomedical research and practice. As an ontology evolves, its structure and content change in response to additions, deletions, and updates. When editing a biomedical ontology, small local updates may affect large portions of the ontology, leading to unintended and potentially erroneous changes. Such unwanted side effects often go unnoticed, since biomedical ontologies are large and complex knowledge structures. Abstraction networks, which provide compact summaries of an ontology's content and structure, have been used to uncover structural irregularities, inconsistencies, and errors in ontologies. In this paper, we introduce Diff Abstraction Networks ("Diff AbNs"), compact networks that summarize and visualize global structural changes due to ontology editing operations that result in a new ontology release. A Diff AbN can be used to support curators in identifying unintended and unwanted ontology changes. The derivation of two Diff AbNs, the Diff Area Taxonomy and the Diff Partial-area Taxonomy, is explained, and Diff Partial-area Taxonomies are derived and analyzed for the Ontology of Clinical Research, the Sleep Domain Ontology, and the eagle-i Research Resource Ontology. Diff Taxonomy usage for identifying unintended erroneous consequences of quality assurance and ontology merging is demonstrated.

  15. A classification and summarization method for analysis of research activities in an academic faculty

    Directory of Open Access Journals (Sweden)

    Eduardo Rocha Loures

    Full Text Available Nowadays, more and more scientific research activities are carried out in different laboratories and universities; these not only play an important role in the development of science and technology, but also have a significant influence on education. Improving the research capability of an academic faculty can directly impact the quality of education, bring innovations to Industrial Engineering curriculum proposals, and guarantee that subjects are kept up to date. Investigating the existing issues in current research activities is usually considered the primary, and challenging, step. As the output of research activities, academic articles are often considered a kind of evidence-based resource for this investigation. Although existing article review methods have made some methodological efforts, less attention has been paid to discovering the implicit academic relationships among academic staff and to investigating their research expertise. The objective of this study is to address this drawback through the proposition of an Academic Information Classification and Summarization method. A case study is carried out in the Industrial and Systems Engineering Graduate Program (PPGEPS, PUCPR, Brazil). The result not only highlights the advantages that can be obtained from this proposition from the education perspective related to Industrial Engineering, but can also be used as evidence to balance and compare an academic staff member's research expertise against his/her teaching disciplines.

  16. A supertree pipeline for summarizing phylogenetic and taxonomic information for millions of species

    Directory of Open Access Journals (Sweden)

    Benjamin D. Redelings

    2017-03-01

    Full Text Available We present a new supertree method that enables rapid estimation of a summary tree on the scale of millions of leaves. This supertree method summarizes a collection of input phylogenies and an input taxonomy. We introduce formal goals and criteria for such a supertree to satisfy in order to transparently and justifiably represent the input trees. In addition to producing a supertree, our method computes annotations that describe which groupings in the input trees support and conflict with each group in the supertree. We compare our supertree construction method to a previously published one by assessing their performance on the input trees used to construct the Open Tree of Life version 4, and find that our method increases the number of displayed input splits from 35,518 to 39,639 and decreases the number of conflicting input splits from 2,760 to 1,357. The new method also improves on the previous one in that it produces no unsupported branches and avoids unnecessary polytomies. This pipeline, called "propinquity", is currently used by the Open Tree of Life project to produce all versions of the project's "synthetic tree" starting at version 5. It relies heavily on "otcetera", a set of C++ tools that perform most of the steps of the pipeline. All of the components are free software and are available on GitHub.

  17. ARABIC TEXT SUMMARIZATION BASED ON LATENT SEMANTIC ANALYSIS TO ENHANCE ARABIC DOCUMENTS CLUSTERING

    Directory of Open Access Journals (Sweden)

    Hanane Froud

    2013-01-01

    Full Text Available Arabic documents clustering is an important task for obtaining good results with traditional Information Retrieval (IR) systems, especially with the rapid growth of the number of online documents available in the Arabic language. Document clustering aims to automatically group similar documents into one cluster using different similarity/distance measures. This task is often affected by document length: useful information in documents is often accompanied by a large amount of noise, so it is necessary to eliminate this noise while keeping the useful information to boost clustering performance. In this paper, we propose to evaluate the impact of text summarization using the Latent Semantic Analysis model on Arabic document clustering in order to solve the problems cited above, using five similarity/distance measures: Euclidean distance, cosine similarity, Jaccard coefficient, Pearson correlation coefficient and averaged Kullback-Leibler divergence, each run twice: with and without stemming. Our experimental results indicate that the proposed approach effectively solves the problems of noisy information and document length, and thus significantly improves clustering performance.
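    The five measures named above are standard; as a quick illustration (a sketch, not the authors' code), they can be computed on term-frequency vectors as follows, where the averaged Kullback-Leibler divergence is taken in its common equal-weight symmetrised form and the smoothing constant is an assumption:

        # Python sketch of the five similarity/distance measures on term vectors.
        import numpy as np

        def euclidean(a, b):
            return np.linalg.norm(a - b)

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        def jaccard(a, b):
            # extended Jaccard (Tanimoto) coefficient for real-valued vectors
            return a @ b / (a @ a + b @ b - a @ b)

        def pearson(a, b):
            return np.corrcoef(a, b)[0, 1]

        def averaged_kl(a, b, eps=1e-12):
            # equal-weight symmetrised KL between the normalised distributions
            p, q = a / a.sum(), b / b.sum()
            m = (p + q) / 2
            def kl(x, y):
                return float(np.sum(x * np.log((x + eps) / (y + eps))))
            return (kl(p, m) + kl(q, m)) / 2

        doc1 = np.array([2.0, 0.0, 1.0, 3.0])
        doc2 = np.array([1.0, 1.0, 0.0, 2.0])
        print(cosine(doc1, doc2), jaccard(doc1, doc2), averaged_kl(doc1, doc2))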

  18. Development of a Summarized Health Index (SHI) for use in predicting survival in sea turtles.

    Directory of Open Access Journals (Sweden)

    Tsung-Hsien Li

    Full Text Available Veterinary care plays an influential role in sea turtle rehabilitation, especially for endangered species. Physiological characteristics, including hematological and plasma biochemistry profiles, are useful references for clinical management, especially during the convalescence period. In this study, the factors associated with sea turtle survival were analyzed. Blood samples were collected while the sea turtles were alive, and the animals were then followed up for survival status. The results indicated a significantly negative correlation between buoyancy disorders (BD) and sea turtle survival (p < 0.05). Furthermore, non-surviving sea turtles had significantly higher levels of aspartate aminotransferase (AST), creatine kinase (CK), creatinine and uric acid (UA) than surviving sea turtles (all p < 0.05). After further analysis with a multiple logistic regression model, only the factors BD, creatinine and UA were included in the equation for calculating a summarized health index (SHI) for each individual. Evaluation by receiver operating characteristic (ROC) curve indicated that the area under the curve was 0.920 ± 0.037, and a cut-off SHI value of 2.5244 showed 80.0% sensitivity and 86.7% specificity in predicting survival. Therefore, the developed SHI could be a useful index to evaluate the health status of sea turtles and to improve veterinary care at rehabilitation facilities.
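    As an illustration of the modeling step described in the abstract, the following sketch fits a logistic regression on the three retained predictors and picks a cut-off from the ROC curve; the data here are synthetic and the coefficient choices are assumptions, not the study's records:

        # Python sketch: SHI-style score from logistic regression plus ROC cut-off.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)
        n = 100
        bd = rng.integers(0, 2, n)                # buoyancy disorder present?
        creatinine = rng.lognormal(0.0, 0.4, n)
        uric_acid = rng.lognormal(0.5, 0.4, n)
        X = np.column_stack([bd, creatinine, uric_acid])
        # synthetic survival labels: worse odds with BD, creatinine, uric acid
        logit = 2.0 - 1.5 * bd - 0.8 * creatinine - 0.6 * uric_acid
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        model = LogisticRegression().fit(X, y)
        index = model.decision_function(X)        # SHI-like linear score
        print("AUC:", roc_auc_score(y, index))
        fpr, tpr, thr = roc_curve(y, index)
        best = np.argmax(tpr - fpr)               # Youden's J picks the cut-off
        print("cut-off:", thr[best], "sens:", tpr[best], "spec:", 1 - fpr[best])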

  19. WKB Approximation in Noncommutative Gravity

    Directory of Open Access Journals (Sweden)

    Maja Buric

    2007-12-01

    Full Text Available We consider the quasi-commutative approximation to a noncommutative geometry defined as a generalization of the moving frame formalism. The relation which exists between noncommutativity and geometry is used to study the properties of the high-frequency waves on the flat background.

  20. Approximation properties of haplotype tagging

    Directory of Open Access Journals (Sweden)

    Dreiseitl Stephan

    2006-01-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊆ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/4) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in the computational effort expended can only be expected if the computational effort is distributed and done in parallel.
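    The 1 + ln((n² − n)/2) bound is the classic greedy set-cover guarantee, with the (n² − n)/2 haplotype pairs playing the role of elements to cover. A minimal sketch of that greedy view (toy data, not the paper's implementation):

        # Python sketch: greedy choice of tagging SNPs as set cover over pairs.
        from itertools import combinations

        def greedy_tag_snps(haplotypes):
            n, m = len(haplotypes), len(haplotypes[0])
            uncovered = set(combinations(range(n), 2))   # (n^2 - n)/2 pairs
            chosen = []
            while uncovered:
                best = max(range(m), key=lambda s: sum(
                    haplotypes[i][s] != haplotypes[j][s] for i, j in uncovered))
                newly = {(i, j) for i, j in uncovered
                         if haplotypes[i][best] != haplotypes[j][best]}
                if not newly:            # remaining haplotypes are identical
                    break
                chosen.append(best)
                uncovered -= newly
            return chosen

        haps = ["0011", "0101", "1001", "1110"]
        print(greedy_tag_snps(haps))     # indices of the tagging SNPs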

  1. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...

  2. Approximate Reasoning with Fuzzy Booleans

    NARCIS (Netherlands)

    Broek, van den P.M.; Noppen, J.A.R.

    2004-01-01

    This paper introduces, in analogy to the concept of fuzzy numbers, the concept of fuzzy booleans, and examines approximate reasoning with the compositional rule of inference using fuzzy booleans. It is shown that each set of fuzzy rules is equivalent to a set of fuzzy rules with singleton crisp ante

  3. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh, [No Value; Nguyen, Thi; von Haeseler, Arndt

    2013-01-01

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  4. On badly approximable complex numbers

    DEFF Research Database (Denmark)

    Esdahl-Schou, Rune; Kristensen, S.

    We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...

  5. Rational approximation of vertical segments

    Science.gov (United States)

    Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte

    2007-08-01

    In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
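    The constraint "intersect every uncertainty interval" linearizes nicely: with q(x_i) kept positive, it becomes l_i·q(x_i) ≤ p(x_i) ≤ u_i·q(x_i). The following is only a rough sketch of that linearized formulation under a generic convex objective, solved with a general-purpose optimizer rather than the authors' dedicated quadratic programming construction:

        # Python sketch: rational function through vertical segments (linearized).
        import numpy as np
        from scipy.optimize import minimize

        def fit_rational(x, lo, hi, dp=2, dq=1):
            Vp = np.vander(x, dp + 1, increasing=True)   # basis for p
            Vq = np.vander(x, dq + 1, increasing=True)   # basis for q

            def split(c):
                return c[:dp + 1], c[dp + 1:]

            cons = []
            for i in range(len(x)):
                cons.append({'type': 'ineq', 'fun': lambda c, i=i:
                             Vp[i] @ split(c)[0] - lo[i] * (Vq[i] @ split(c)[1])})
                cons.append({'type': 'ineq', 'fun': lambda c, i=i:
                             hi[i] * (Vq[i] @ split(c)[1]) - Vp[i] @ split(c)[0]})
                cons.append({'type': 'ineq', 'fun': lambda c, i=i:
                             Vq[i] @ split(c)[1] - 1.0})  # q(x_i) >= 1
            c0 = np.zeros(dp + dq + 2)
            c0[dp + 1] = 1.0                              # start from q = 1
            res = minimize(lambda c: c @ c, c0, constraints=cons, method='SLSQP')
            return split(res.x)

        x = np.linspace(0.0, 1.0, 8)
        mid = 1.0 / (1.0 + x)                  # intervals centered on 1/(1+x)
        p, q = fit_rational(x, mid - 0.05, mid + 0.05)
        print("p coefficients:", p, "q coefficients:", q)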

  6. Approximation on the complex sphere

    OpenAIRE

    Alsaud, Huda; Kushpel, Alexander; Levesley, Jeremy

    2012-01-01

    We develop new elements of harmonic analysis on the complex sphere on the basis of which Bernstein's, Jackson's and Kolmogorov's inequalities are established. We apply these results to get order sharp estimates of $m$-term approximations. The results obtained are a synthesis of new results on classical orthogonal polynomials, harmonic analysis on manifolds and geometric properties of Euclidean spaces.

  8. Pythagorean Approximations and Continued Fractions

    Science.gov (United States)

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…

  9. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  10. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  11. Low Rank Approximation in $G_0W_0$ Approximation

    CERN Document Server

    Shao, Meiyue; Yang, Chao; Liu, Fang; da Jornada, Felipe H; Deslippe, Jack; Louie, Steven G

    2016-01-01

    The single particle energies obtained in a Kohn--Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green's function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depends on how a self energy term that properly accounts for dynamic screening of electrons is approximated. The $G_0W_0$ approximation is a widely used technique in which the self energy is expressed as the convolution of a non-interacting Green's function ($G_0$) and a screened Coulomb interaction ($W_0$) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating $W_0$ at multiple frequencies. In this paper, we discuss how the cos...

  12. Summarizing motion contents of the video clip using moving edge overlaid frame (MEOF)

    Science.gov (United States)

    Yu, Tianli; Zhang, Yujin

    2001-12-01

    How to quickly and effectively convey video information to the user is a major task for a video search engine's user interface. In this paper, we propose using a Moving Edge Overlaid Frame (MEOF) image to summarize both the local object motion and the global camera motion of a video clip in a single image. MEOF supplements the motion information that is generally dropped by key frame representations, and it enables faster perception for the user than viewing the actual video. The key technology of our MEOF generating algorithm is global motion estimation (GME). In order to extract a precise global motion model from general video, our GME module takes two stages: match-based initial GME and gradient-based GME refinement. The GME module also maintains a sprite image that is aligned with the new input frame in the background after the global motion compensation transform. The difference between the aligned sprite and the new frame is used to extract masks that help pick out the moving objects' edges. The sprite is updated with each input frame and the moving edges are extracted at a constant interval. After all the frames are processed, the extracted moving edges are overlaid onto the sprite according to their global motion displacement relative to the sprite and their temporal distance from the last frame, thus creating our MEOF image. Experiments show that the MEOF representation helps the user acquire the motion knowledge of the clip much faster, while remaining compact enough to serve the needs of online applications.

  13. Automated Text Summarization Based on Lexical Chains and Graphs Using the WordNet and Wikipedia Knowledge Bases

    CERN Document Server

    Pourvali, Mohsen

    2012-01-01

    The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is a process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization aims to produce a summary delivering the majority of the information content from a set of documents about an explicit or implicit main topic. The lexical cohesion structure of a text can be exploited to determine the importance of a sentence or phrase. Lexical chains are useful tools for analyzing the lexical cohesion structure in a text. In this paper we consider the effect of using lexical cohesion features in summarization, and present an algorithm based on the knowledge base. Ours...

  14. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but several problems remain in those systems. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to automatically improve the quality of the summary.
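    To make the graph-ranking step concrete, here is a small sketch (assuming networkx and scikit-learn are available; not the system's actual code) that builds a sentence similarity graph and scores nodes with both PageRank and HITS:

        # Python sketch: sentence graph ranked with PageRank (TextRank-style) and HITS.
        import networkx as nx
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        sentences = [
            "Graph based ranking scores every sentence in the document graph.",
            "PageRank propagates sentence importance through the document graph.",
            "HITS assigns hub and authority scores to the graph nodes.",
            "Unrelated filler text about cooking pasta.",
        ]
        tfidf = TfidfVectorizer().fit_transform(sentences)
        sim = cosine_similarity(tfidf)

        g = nx.Graph()
        g.add_nodes_from(range(len(sentences)))
        for i in range(len(sentences)):
            for j in range(i + 1, len(sentences)):
                if sim[i, j] > 0:
                    g.add_edge(i, j, weight=float(sim[i, j]))

        pr = nx.pagerank(g, weight="weight")       # TextRank-style scores
        hubs, auth = nx.hits(nx.DiGraph(g))        # HITS hub/authority scores
        best = max(pr, key=pr.get)
        print("top sentence:", sentences[best])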

  15. Approximate Inference for Wireless Communications

    DEFF Research Database (Denmark)

    Hansen, Morten

    This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computationally complexity of these, where the primary focus area is cellular systems such as Global System for Mobile communications (GSM) (and extensions...... complexity can potentially lead to limited power consumption, which translates into longer battery life-time in the handsets. The scope of the thesis is more specifically to investigate approximate (nearoptimal) detection methods that can reduce the computationally complexity significantly compared...... to the optimal one, which usually requires an unacceptable high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum...

  16. Hydrogen Beyond the Classic Approximation

    CERN Document Server

    Scivetti, I

    2003-01-01

    The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered the nuclei to be distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.

  17. Approximate Privacy: Foundations and Quantification

    CERN Document Server

    Feigenbaum, Joan; Schapira, Michael

    2009-01-01

    Increasing use of computers and networks in business, government, recreation, and almost all aspects of daily life has led to a proliferation of online sensitive data about individuals and organizations. Consequently, concern about the privacy of these data has become a top priority, particularly those data that are created and used in electronic commerce. There have been many formulations of privacy and, unfortunately, many negative results about the feasibility of maintaining privacy of sensitive data in realistic networked environments. We formulate communication-complexity-based definitions, both worst-case and average-case, of a problem's privacy-approximation ratio. We use our definitions to investigate the extent to which approximate privacy is achievable in two standard problems: the second-price Vickrey auction and the millionaires problem of Yao. For both the second-price Vickrey auction and the millionaires problem, we show that not only is perfect privacy impossible or infeasibly costly to achieve...

  18. Approximate Counting of Graphical Realizations.

    Science.gov (United States)

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.

  19. Approximate Counting of Graphical Realizations.

    Directory of Open Access Journals (Sweden)

    Péter L Erdős

    Full Text Available In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.

  20. Many Faces of Boussinesq Approximations

    CERN Document Server

    Vladimirov, Vladimir A

    2016-01-01

    The \\emph{equations of Boussinesq approximation} (EBA) for an incompressible and inhomogeneous in density fluid are analyzed from a viewpoint of the asymptotic theory. A systematic scaling shows that there is an infinite number of related asymptotic models. We have divided them into three classes: `poor', `reasonable' and `good' Boussinesq approximations. Each model can be characterized by two parameters $q$ and $k$, where $q =1, 2, 3, \\dots$ and $k=0, \\pm 1, \\pm 2,\\dots$. Parameter $q$ is related to the `quality' of approximation, while $k$ gives us an infinite set of possible scales of velocity, time, viscosity, \\emph{etc.} Increasing $q$ improves the quality of a model, but narrows the limits of its applicability. Parameter $k$ allows us to vary the scales of time, velocity and viscosity and gives us the possibility to consider any initial and boundary conditions. In general, we discover and classify a rich variety of possibilities and restrictions, which are hidden behind the routine use of the Boussinesq...

  1. Summarizing Relational Data Using Semi-Supervised Genetic Algorithm-Based Clustering Techniques

    Directory of Open Access Journals (Sweden)

    Rayner Alfred

    2010-01-01

    dispersion and the cluster purity, by putting more weight on the cluster purity measurement. Conclusion: This study showed that semi-supervised genetic algorithm-based clustering techniques can be applied to summarize relational data more effectively and efficiently.

  2. Public Finance, Public Economics, and Public Choice: A Survey of Undergraduate Textbooks.

    Science.gov (United States)

    Hewett, Roger S.

    1987-01-01

    Reviews undergraduate public finance textbooks for content, difficulty, and ideology. Includes tables summarizing the percentage space devoted to specific topics in 13 popular textbooks. Offers suggestions for supplementary materials. (Author/DH)

  3. Publicity and public relations

    Science.gov (United States)

    Fosha, Charles E.

    1990-01-01

    This paper addresses approaches to using publicity and public relations to meet the goals of the NASA Space Grant College. Methods universities and colleges can use to publicize space activities are presented.

  4. Rollout Sampling Approximate Policy Iteration

    CERN Document Server

    Dimitrakakis, Christos

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.

  5. Approximate Deconvolution Reduced Order Modeling

    CERN Document Server

    Xie, Xuping; Wang, Zhu; Iliescu, Traian

    2015-01-01

    This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3}).

  6. Approximation for Bayesian Ability Estimation.

    Science.gov (United States)

    1987-02-18

    [Abstract garbled in extraction; only fragments are recoverable.] The marginal posterior pdfs of the item parameters and abilities are given by integral expressions, numbered (4) and (5) in the source. As indicated by the reference to Tsutakawa and Lin, the approximation uses the inverse Hessian of the log-posterior; under regularity conditions, the marginal posterior pdf of the ability parameter is then obtained. Legible citations include Journal of Educational Statistics, 11, 33-56, and Lindley, D.V. (1980), Approximate Bayesian methods, Trabajos de Estadística, 31.

  7. Pharmacist-provided immunization compensation and recognition: white paper summarizing APhA/AMCP stakeholder meeting.

    Science.gov (United States)

    Skelton, Jann B

    2011-01-01

    To identify the current challenges and opportunities in compensation and recognition for pharmacist-provided immunizations across the lifespan and to establish guiding principles for pharmacist-provided immunization compensation and recognition. 22 stakeholders gathered on June 29, 2011, at the American Pharmacists Association (APhA) headquarters in Washington, DC, for a meeting on immunization compensation that was convened by APhA and the Academy of Managed Care Pharmacy. Participants included representatives from community pharmacy practices (chain, grocery, and independent), employers, national consumer health and advocacy organizations, national pharmacy and public health organizations, health plan representatives, pharmacy benefit managers, and health information technology, standards, and safety organizations. Key immunization leaders from TRICARE Management Activity, the Centers for Medicare & Medicaid Services, the National Vaccine Program Office of the Department of Health & Human Services, and the Centers for Disease Control and Prevention (CDC) also participated in the meeting. The number of pharmacists providing vaccination services and the availability of pharmacist-provided immunizations to populations in need of vaccines have continued to increase. This has resulted in a rise in the percentage of patients who receive vaccines at pharmacies. Pharmacists are now working to leverage their ability to identify people with key risk factors (e.g., diabetes, heart disease or previous myocardial infarction), encourage them to receive their CDC-recommended vaccinations, and administer the required vaccines. Challenges and opportunities in compensation and recognition for pharmacist-provided immunizations across the adult lifespan persist. Variability in state practice acts, reimbursement and compensation processes and systems, and mechanisms for documentation of vaccine services create substantial differences in how pharmacist-provided immunizations

  8. Plasma Physics Approximations in Ares

    Energy Technology Data Exchange (ETDEWEB)

    Managan, R. A.

    2015-01-08

    Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ or ζ = ln(1 + e^(μ/θ)), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for Aα(ζ), Aβ(ζ), ζ, f(ζ) = (1 + e^(−μ/θ))F_{1/2}(μ/θ), F′_{1/2}/F_{1/2}, F_cα, and F_cβ. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.

  9. Toetsen als Leerinterventie. Samenvatten in het Testing Effect Paradigma [Tests as learning interventions: summarization in the testing-effect paradigm investigated]

    NARCIS (Netherlands)

    Dirkx, Kim; Kester, Liesbeth; Kirschner, Paul A.

    2011-01-01

    Dirkx, K. J. H., Kester, L., & Kirschner, P. A. (2011, July). Toetsen als leerinterventie. Samenvatten in het testing effect paradigma onderzocht [Tests as learning interventions: summarization in the testing-effect paradigm investigated]. Presentation for Erasmus University Rotterdam, Rotterdam.

  11. Video summarization using descriptors of motion activity: a motion activity based approach to key-frame extraction from video shots

    Science.gov (United States)

    Divakaran, Ajay; Radhakrishnan, Regunathan; Peker, Kadir A.

    2001-10-01

    We describe a video summarization technique that uses motion descriptors computed in the compressed domain. It can either speed up conventional color-based video summarization techniques, or rapidly generate a key-frame based summary by itself. The basic hypothesis of the work is that the intensity of motion activity of a video segment is a direct indication of its `summarizability,' which we experimentally verify using the MPEG-7 motion activity descriptor and the fidelity measure proposed in H. S. Chang, S. Sull, and S. U. Lee, `Efficient video indexing scheme for content-based retrieval,' IEEE Trans. Circuits Syst. Video Technol. 9(8), (1999). Note that the compressed domain extraction of motion activity intensity is much simpler than the color-based calculations. We are thus able to quickly identify easy to summarize segments of a video sequence since they have a low intensity of motion activity. We are able to easily summarize these segments by simply choosing their first frames. We can then apply conventional color-based summarization techniques to the remaining segments. We thus speed up color-based summarization by reducing the number of segments processed. Our results also motivate a simple and novel key-frame extraction technique that relies on a motion activity based nonuniform sampling of the frames. Our results indicate that it can either be used by itself or to speed up color-based techniques as explained earlier.
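    The nonuniform sampling idea can be sketched in a few lines: treat the per-frame activity intensities as a density and place key frames at equal quanta of cumulative activity. The activity values below are placeholders, not MPEG-7 descriptor output:

        # Python sketch: key-frame selection by cumulative motion activity.
        import numpy as np

        def keyframes_by_activity(activity, k):
            cum = np.cumsum(activity, dtype=float)
            targets = np.linspace(0.0, cum[-1], k + 1)[1:]   # k activity quanta
            return sorted({int(np.searchsorted(cum, t)) for t in targets})

        activity = [0.1, 0.1, 0.1, 2.0, 2.5, 0.2, 0.1, 1.8, 0.1, 0.1]
        print(keyframes_by_activity(activity, 3))   # clusters near high motion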

  12. Dodgson's Rule Approximations and Absurdity

    CERN Document Server

    McCabe-Dansted, John C

    2010-01-01

    With the Dodgson rule, cloning the electorate can change the winner, which Young (1977) considers an "absurdity". Removing this absurdity results in a new rule (Fishburn, 1977) for which we can compute the winner in polynomial time (Rothe et al., 2003), unlike the traditional Dodgson rule. We call this rule DC and introduce two new related rules (DR and D&). Dodgson did not explicitly propose the "Dodgson rule" (Tideman, 1987); we argue that DC and DR are better realizations of the principle behind the Dodgson rule than the traditional Dodgson rule. These rules, especially D&, are also effective approximations to the traditional Dodgson's rule. We show that, unlike the rules we have considered previously, the DC, DR and D& scores differ from the Dodgson score by no more than a fixed amount given a fixed number of alternatives, and thus these new rules converge to Dodgson under any reasonable assumption on voter behaviour, including the Impartial Anonymous Culture assumption.

  13. Approximation by double Walsh polynomials

    Directory of Open Access Journals (Sweden)

    Ferenc Móricz

    1992-01-01

    Full Text Available We study the rate of approximation by rectangular partial sums, Cesàro means, and de la Vallée Poussin means of double Walsh-Fourier series of a function in a homogeneous Banach space X. In particular, X may be L^p(I²), where 1 ≤ p < ∞ and I² = [0,1) × [0,1), or C_W(I²), the latter being the collection of uniformly W-continuous functions on I². We extend the results by Watari, Fine, Yano, Jastrebova, Bljumin, Esfahanizadeh and Siddiqi from univariate to multivariate cases. As by-products, we deduce sufficient conditions for convergence in L^p(I²)-norm and uniform convergence on I² as well as characterizations of Lipschitz classes of functions. At the end, we raise three problems.

  14. Interplay of approximate planning strategies.

    Science.gov (United States)

    Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P

    2015-03-10

    Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."

  15. Approximate reduction of dynamical systems

    CERN Document Server

    Tabuada, Paulo; Julius, Agung; Pappas, George J

    2007-01-01

    The reduction of dynamical systems has a rich history, with many important applications related to stability, control and verification. Reduction of nonlinear systems is typically performed in an exact manner - as is the case with mechanical systems with symmetry--which, unfortunately, limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction, in order to extend the class of systems that can be reduced. Using notions related to incremental stability, we give conditions on when a dynamical system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e., when it is behaviorally similar to a dynamical system on a lower dimensional space. These concepts are illustrated on a series of examples.

  16. Diophantine approximations and Diophantine equations

    CERN Document Server

    Schmidt, Wolfgang M

    1991-01-01

    "This book by a leading researcher and masterly expositor of the subject studies diophantine approximations to algebraic numbers and their applications to diophantine equations. The methods are classical, and the results stressed can be obtained without much background in algebraic geometry. In particular, Thue equations, norm form equations and S-unit equations, with emphasis on recent explicit bounds on the number of solutions, are included. The book will be useful for graduate students and researchers." (L'Enseignement Mathematique) "The rich Bibliography includes more than hundred references. The book is easy to read, it may be a useful piece of reading not only for experts but for students as well." Acta Scientiarum Mathematicarum

  17. Truthful approximations to range voting

    DEFF Research Database (Denmark)

    Filos-Ratsika, Aris; Miltersen, Peter Bro

    We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...... maximization in this setting. With m being the number of alternatives, we exhibit a randomized truthful-in-expectation ordinal mechanism implementing an outcome whose expected social welfare is at least an Omega(m^{-3/4}) fraction of the social welfare of the socially optimal alternative. On the other hand, we...... show that for sufficiently many agents and any truthful-in-expectation ordinal mechanism, there is a valuation profile where the mechanism achieves at most an O(m^{-2/3}) fraction of the optimal social welfare in expectation. We get tighter bounds for the natural special case of m = 3...

  18. Approximation of Surfaces by Cylinders

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1998-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...... projection of the surface onto this plane, a reference curve is determined by use of methods for thinning of binary images. Finally, the cylinder surface is constructed as follows: the directrix of the cylinder surface is determined by a least squares method minimizing the distance to the points...... in the projection within a tolerance given by the reference curve, and the rulings are lines perpendicular to the projection plane. Application of the method in ship design is given....

  19. Analytical approximations for spiral waves

    Energy Technology Data Exchange (ETDEWEB)

    Löber, Jakob, E-mail: jakob@physik.tu-berlin.de; Engel, Harald [Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstrasse 36, EW 7-1, 10623 Berlin (Germany)

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  20. On quantum and approximate privacy

    CERN Document Server

    Klauck, H

    2001-01-01

    This paper studies privacy in communication complexity. The focus is on quantum versions of the model and on protocols with only approximate privacy against honest players. We show that the privacy loss (the minimum divulged information) in computing a function can be decreased exponentially by using quantum protocols, while the class of privately computable functions (i.e., those with privacy loss 0) is not increased by quantum protocols. Quantum communication combined with small information leakage on the other hand makes certain functions computable (almost) privately which are not computable using quantum communication without leakage or using classical communication with leakage. We also give an example of an exponential reduction of the communication complexity of a function by allowing a privacy loss of o(1) instead of privacy loss 0.

  1. IONIS: Approximate atomic photoionization intensities

    Science.gov (United States)

    Heinäsmäki, Sami

    2012-02-01

    A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large. Program summary: Program title: IONIS. Catalogue identifier: AEKK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1149. No. of bytes in distributed program, including test data, etc.: 12 877. Distribution format: tar.gz. Programming language: Fortran 95. Computer: Workstations. Operating system: GNU/Linux, Unix. Classification: 2.2, 2.5. Nature of problem: Photoionization intensities for atoms. Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states. Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state. Running time: Few seconds for a

  2. Approximate analytic solutions to the NPDD: Short exposure approximations

    Science.gov (United States)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the coupled integro-differential equations of the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short, low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  3. Public Policy Agenda, 2009

    Science.gov (United States)

    American Association of State Colleges and Universities, 2009

    2009-01-01

    The 2009 Public Policy Agenda summarizes the American Association of State Colleges and Universities' (AASCU's) principles and priorities in key areas of higher education policy. The document is intended to serve as a point of reference for the association's members and other interested organizations as well as federal and state policymakers.…

  4. Public Policy Agenda, 2007

    Science.gov (United States)

    American Association of State Colleges and Universities, 2007

    2007-01-01

    The 2007 Public Policy Agenda summarizes the American Association of State Colleges and Universities' (AASCU's) principles and priorities in key areas of higher education policy. The document is intended to serve as a point of reference for federal and state policymakers, the association's members, and other interested organizations and…

  5. Public Policy Agenda, 2010

    Science.gov (United States)

    American Association of State Colleges and Universities, 2010

    2010-01-01

    The 2010 Public Policy Agenda summarizes the American Association of State Colleges and Universities' (AASCU's) principles and priorities in key areas of higher education policy. This paper is intended to serve as a point of reference for the association's members and other interested organizations, as well as federal and state policymakers.…

  6. Public Policy Agenda, 2008

    Science.gov (United States)

    American Association of State Colleges and Universities, 2008

    2008-01-01

    The 2008 Public Policy Agenda summarizes the American Association of State Colleges and Universities' (AASCU's) principles and priorities in key areas of higher education policy. The document is intended to serve as a point of reference for federal and state policymakers, the association's members, and other interested organizations and…

  7. Ecological, Pedagogical, Public Rhetoric

    Science.gov (United States)

    Rivers, Nathaniel A.; Weber, Ryan P.

    2011-01-01

    Public rhetoric pedagogy can benefit from an ecological perspective that sees change as advocated not through a single document but through multiple mundane and monumental texts. This article summarizes various approaches to rhetorical ecology, offers an ecological read of the Montgomery bus boycotts, and concludes with pedagogical insights on a…

  12. Randomized approximate nearest neighbors algorithm.

    Science.gov (United States)

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
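    A stripped-down sketch of the two main ingredients, a random rotation followed by a divide-and-conquer split with brute force in small leaves, is given below; the local graph search refinement is omitted and all parameters are illustrative assumptions:

        # Python sketch: random rotation + recursive splitting for approximate NN.
        import numpy as np

        def random_rotation(d, rng):
            q, _ = np.linalg.qr(rng.normal(size=(d, d)))
            return q

        def ann_candidates(points, query, idx=None, depth=0, leaf=32):
            if idx is None:
                idx = np.arange(len(points))
            if len(idx) <= leaf:
                return idx                        # brute-force leaf
            coord = depth % points.shape[1]
            med = np.median(points[idx, coord])
            mask = points[idx, coord] <= med
            side = idx[mask] if query[coord] <= med else idx[~mask]
            if len(side) in (0, len(idx)):        # degenerate split; stop here
                return idx
            return ann_candidates(points, query, side, depth + 1, leaf)

        rng = np.random.default_rng(1)
        pts = rng.normal(size=(1000, 8))
        rot = random_rotation(8, rng)
        rpts, q = pts @ rot, rng.normal(size=8) @ rot
        cand = ann_candidates(rpts, q)
        best = cand[np.argmin(np.linalg.norm(rpts[cand] - q, axis=1))]
        print("approximate nearest neighbor index:", int(best))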

  13. Statistical methods used in the public health literature and implications for training of public health professionals.

    Science.gov (United States)

    Hayat, Matthew J; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L

    2017-01-01

    Statistical literacy and knowledge are needed to read and understand the public health literature. The purpose of this study was to quantify the basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top-tier general public health journals. Studies were reviewed by two readers and a standardized data collection form was completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for the statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference was reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on the statistical methods being used is useful for curriculum development in graduate health sciences education, as well as for making informed decisions about continuing education for public health professionals.

  14. Public Education, Public Good.

    Science.gov (United States)

    Tomlinson, John

    1986-01-01

    Criticizes policies which would damage or destroy a public education system. Examines the relationship between government-provided education and democracy. Concludes that privatization of public education would emphasize self-interest and selfishness, further jeopardizing the altruism and civic mindedness necessary for the public good. (JDH)

  15. A New Approach To Focused Crawling: Combination of Text summarizing With Neural Networks and Vector Space Model

    Directory of Open Access Journals (Sweden)

    Fahim Mohammadi

    2013-07-01

    Full Text Available Focused crawlers are programs designed to browse the Web and download pages on a specific topic. They are used for answering user queries or for building digital libraries on a topic specified by the user. In this article we show how summarizing web pages improves the performance of a crawler which uses a vector space model to rank the web pages. A neural network is trained to learn the relevant characteristics of sentences that should be included in the summary of a web page. The neural network is then used as a filter to summarize web pages. Finally, the crawler uses the vector space model to rank summaries instead of full web pages.
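    The ranking step is plain vector-space retrieval; a small sketch (toy summaries, assuming scikit-learn is available) of scoring candidate page summaries against the topic by tf-idf cosine similarity:

        # Python sketch: rank page summaries against the topic with tf-idf cosine.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        topic = "solar energy photovoltaic panels"
        summaries = [
            "Advances in photovoltaic panel efficiency for solar energy capture.",
            "A recipe blog post about baking sourdough bread at home.",
            "Grid storage options for renewable solar installations.",
        ]
        m = TfidfVectorizer().fit_transform([topic] + summaries)
        scores = cosine_similarity(m[0], m[1:]).ravel()
        for s, text in sorted(zip(scores, summaries), reverse=True):
            print(round(float(s), 3), text)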

  16. Obtaining exact value by approximate computations

    Institute of Scientific and Technical Information of China (English)

    Jing-zhong ZHANG; Yong FENG

    2007-01-01

    Numerical approximate computations can solve large and complex problems fast. They have the advantage of high efficiency. However, they only give approximate results, whereas we need exact results in some fields. There is a gap between approximate computations and exact results. In this paper, we build a bridge by which exact results can be obtained by numerical approximate computations.

  17. Fuzzy Set Approximations in Fuzzy Formal Contexts

    Institute of Scientific and Technical Information of China (English)

    Mingwen Shao; Shiqing Fan

    2006-01-01

    In this paper, a kind of multi-level formal concept is introduced. Based on the proposed multi-level formal concept, we present a pair of rough fuzzy set approximations within fuzzy formal contexts. With the proposed rough fuzzy set approximations, we can approximate a fuzzy set at different precision levels. We discuss the properties of the proposed approximation operators in detail.

  19. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation...... with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove...

  20. Nonlinear approximation with dictionaries, I: Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    We study various approximation classes associated with $m$-term approximation by elements from a (possibly redundant) dictionary in a Banach space. The standard approximation class associated with the best $m$-term approximation is compared to new classes defined by considering $m$-term approximation with algorithmic constraints: thresholding and Chebychev approximation classes are studied respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space...

  1. An Algorithm for Summarization of Paragraph Up to One Third with the Help of Cue Words Comparison

    Directory of Open Access Journals (Sweden)

    Noopur Srivastava

    2014-06-01

    Full Text Available In the fast-growing information era, technology completes tasks more precisely than manual work. Digital information technology creates a knowledge-based society with a high-tech global economy that influences the corporate and service sectors to operate more efficiently and conveniently. Here, an attempt is made at a research-based extract technology, in which data can be refined and sourced with certainty and relevance. The application of artificial intelligence, combined with machine learning, proves effective for this task. Sometimes a summary of a single paragraph is required rather than of a page or pages. The Auto Summarization Model is a content-agnostic summarization technology that automatically parses news, documents and other texts into relevant and contextually accurate abbreviated summaries; the goal is to reduce a whole paragraph to one third of its length. The technology reads a document and weights keywords and key phrases as they are found in the document, text or web page.
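
    A minimal sketch of the paragraph-to-one-third idea (the cue list and scoring are illustrative assumptions, not the authors' exact weighting):

    ```python
    # Keep the highest-scoring third of the sentences, in their original order,
    # scoring each sentence by how many cue words it contains.
    import re

    CUE_WORDS = {"significant", "conclude", "results", "important", "propose"}

    def summarize_one_third(paragraph):
        sentences = re.split(r'(?<=[.!?])\s+', paragraph.strip())
        scored = [(sum(w.lower().strip('.,;:') in CUE_WORDS for w in s.split()), i, s)
                  for i, s in enumerate(sentences)]
        keep = max(1, len(sentences) // 3)
        top = sorted(scored, reverse=True)[:keep]
        return ' '.join(s for _, i, s in sorted(top, key=lambda t: t[1]))
    ```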

  2. The Effect of a Summarization-Based Cumulative Retelling Strategy on Listening Comprehension of College Students with Visual Impairments

    Science.gov (United States)

    Tuncer, A. Tuba; Altunay, Banu

    2006-01-01

    Because students with visual impairments need auditory materials in order to access information, listening comprehension skills are important to their academic success. The present study investigated the effectiveness of summarization-based cumulative retelling strategy on the listening comprehension of four visually impaired college students. An…

  3. The Effect of Summarization on Intermediate EFL Learners' Reading Comprehension and Their Performance on Display, Referential and Inferential Questions

    Science.gov (United States)

    Ghabanchi, Zargham; Mirza, Fateme Haji

    2010-01-01

    This study examined the effect of summarization as a generative learning strategy of the readers' performance on reading comprehension, in general, and reading comprehension display, referential and inferential questions in particular. The subjects in this study were 61 high school students. They were assigned to two groups--control and…

  4. Comparing Effect of 'Summarizing', 'Question-Answer Relationship', and 'Syntactic Structure Identification' on the Reading Comprehension of Iranian EFL students

    Directory of Open Access Journals (Sweden)

    Fatemeh Hemmati

    2013-01-01

    Full Text Available This study aimed at comparing the effects of 'question-answer relationship strategy', 'summarizing', and 'syntactic structure identification training' on the reading comprehension of Iranian EFL learners. The participants were sixty (34 women and 26 men) intermediate students who answered an English reading comprehension test consisting of three reading passages as the pretest. During the treatment, the students in the first group were asked to summarize the passages. The subjects in the second group were familiarized with the syntactic structure identification strategy, and the ones in the third group were taught the question-answer relationship strategy. At the end of the treatment, an English reading comprehension test similar to the pretest was administered to the groups as a posttest. The results suggested that there is a statistically significant difference between the reading comprehension abilities of the three classes. Furthermore, the use of the QAR strategy led to better comprehension of reading texts than syntactic structure training and summarizing, between which there was no significant difference. Keywords: Reading comprehension, summarization, question-answer relationship, syntactic structure

  5. Statement Summarizing Research Findings on the Issue of the Relationship Between Food-Additive-Free Diets and Hyperkinesis in Children.

    Science.gov (United States)

    Lipton, Morris; Wender, Esther

    The National Advisory Committee on Hyperkinesis and Food Additives paper summarized some research findings on the issue of the relationship between food-additive-free diets and hyperkinesis in children. Based on several challenge studies, it is concluded that the evidence generally refutes Dr. B. F. Feingold's claim that artificial colorings in…

  6. Network histograms and universality of blockmodel approximation

    Science.gov (United States)

    Olhede, Sofia C.; Wolfe, Patrick J.

    2014-01-01

    In this paper we introduce the network histogram, a statistical summary of network interactions to be used as a tool for exploratory data analysis. A network histogram is obtained by fitting a stochastic blockmodel to a single observation of a network dataset. Blocks of edges play the role of histogram bins and community sizes that of histogram bandwidths or bin sizes. Just as standard histograms allow for varying bandwidths, different blockmodel estimates can all be considered valid representations of an underlying probability model, subject to bandwidth constraints. Here we provide methods for automatic bandwidth selection, by which the network histogram approximates the generating mechanism that gives rise to exchangeable random graphs. This makes the blockmodel a universal network representation for unlabeled graphs. With this insight, we discuss the interpretation of network communities in light of the fact that many different community assignments can all give an equally valid representation of such a network. To demonstrate the fidelity-versus-interpretability tradeoff inherent in considering different numbers and sizes of communities, we analyze two publicly available networks—political weblogs and student friendships—and discuss how to interpret the network histogram when additional information related to node and edge labeling is present. PMID:25275010
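
    A rough illustration of the histogram analogy (a sketch under simplifying assumptions: nodes already ordered by community, and a hand-picked uniform "bandwidth", whereas the paper selects the bandwidth automatically):

    ```python
    # The network histogram: one bin per pair of node groups, whose height is
    # the average edge density of the corresponding block of the adjacency matrix.
    import numpy as np

    def block_densities(adjacency, group_size):
        n = adjacency.shape[0]
        k = n // group_size
        dens = np.zeros((k, k))
        for a in range(k):
            for b in range(k):
                block = adjacency[a*group_size:(a+1)*group_size,
                                  b*group_size:(b+1)*group_size]
                dens[a, b] = block.mean()
        return dens

    rng = np.random.default_rng(0)
    A = (rng.random((100, 100)) < 0.1).astype(int)   # toy random graph
    print(block_densities(A, 20))
    ```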

  7. APPROXIMATE SAMPLING THEOREM FOR BIVARIATE CONTINUOUS FUNCTION

    Institute of Scientific and Technical Information of China (English)

    杨守志; 程正兴; 唐远炎

    2003-01-01

    An approximate solution of the refinement equation was given by its mask, and the approximate sampling theorem for bivariate continuous functions was proved by applying the approximate solution. The approximate sampling function, defined uniquely by the mask of the refinement equation, is the approximate solution of the equation, a piecewise linear function, and possesses an explicit computation formula. Therefore the mask of the refinement equation can be selected according to one's requirements, so that one may control the decay speed of the approximate sampling function.

  8. Bernstein-type approximations of smooth functions

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available The Bernstein-type approximation for smooth functions is proposed and studied. We propose the Bernstein-type approximation with definitions that directly apply the binomial distribution and the multivariate binomial distribution. The Bernstein-type approximations generalize the corresponding Bernstein polynomials, by considering definitions that depend on a convenient approximation coefficient in linear kernels. In the Bernstein-type approximations, we study the uniform convergence and the degree of approximation. The Bernstein-type estimators of smooth functions of population means are also proposed and studied.
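
    For reference, the classical Bernstein polynomial that these operators generalize can be read as an expectation of $f(k/n)$ under a Binomial$(n,x)$ distribution (standard definition, not taken from the paper):

    ```latex
    B_n(f; x) = \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right)
                \binom{n}{k} x^{k} (1 - x)^{n-k}, \qquad x \in [0, 1]
    ```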

  9. Public Speech.

    Science.gov (United States)

    Green, Thomas F.

    1994-01-01

    Discusses the importance of public speech in society, noting the power of public speech to create a world and a public. The paper offers a theory of public speech, identifies types of public speech, and types of public speech fallacies. Two ways of speaking of the public and of public life are distinguished. (SM)

  10. Some Reflections on the Task of Content Determination in the Context of Multi-Document Summarization of Evolving Events

    CERN Document Server

    Afantenos, Stergos D

    2007-01-01

    Despite its importance, the task of summarizing evolving events has received little attention from researchers in the field of multi-document summarization. In a previous paper (Afantenos et al. 2007) we presented a methodology for the automatic summarization of documents, emitted by multiple sources, which describe the evolution of an event. At the heart of this methodology lies the identification of similarities and differences between the various documents, along two axes: the synchronic and the diachronic. This is achieved by the introduction of the notion of Synchronic and Diachronic Relations. Those relations connect the messages that are found in the documents, thus resulting in a graph which we call a grid. Although the creation of the grid completes the Document Planning phase of a typical NLG architecture, it can be the case that the number of messages contained in a grid is very large, thus exceeding the required compression rate. In this paper we provide some initial thoughts on a probabilistic model ...

  11. Usability evaluation of an experimental text summarization system and three search engines: implications for the reengineering of health care interfaces.

    Science.gov (United States)

    Kushniruk, Andre W; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L

    2002-01-01

    This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser, and three conventional search engines: Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.

  12. Applications of Discrepancy Theory in Multiobjective Approximation

    CERN Document Server

    Glaßer, Christian; Witek, Maximilian

    2011-01-01

    We apply a multi-color extension of the Beck-Fiala theorem to show that the multiobjective maximum traveling salesman problem is randomized 1/2-approximable on directed graphs and randomized 2/3-approximable on undirected graphs. Using the same technique we show that the multiobjective maximum satisfiability problem is 1/2-approximable.

  13. Fractal Trigonometric Polynomials for Restricted Range Approximation

    Science.gov (United States)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  14. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators; IVF rough approximation operators are then characterized by these axioms.

  15. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A nice estimate relating entropy and approximation numbers of noncompact maps is also given.

  16. Nonlinear approximation with dictionaries, I: Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    $-term approximation with algorithmic constraints: thresholding and Chebychev approximation classes are studied respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space...

  17. Operator approximant problems arising from quantum theory

    CERN Document Server

    Maher, Philip J

    2017-01-01

    This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.

  18. Advanced Concepts and Methods of Approximate Reasoning

    Science.gov (United States)

    1989-12-01

    E. Trillas and L. Valverde. On mode and implication in approximate reasoning. In M.M. Gupta, A. Kandel, W. Bandler, J.B. Kiszka, editors, Approximate Reasoning and... 190, 1981.

  19. NONLINEAR APPROXIMATION WITH GENERAL WAVE PACKETS

    Institute of Scientific and Technical Information of China (English)

    L. Borup; M. Nielsen

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete characterization of the approximation spaces is derived.

  20. Approximate Nearest Neighbor Queries among Parallel Segments

    DEFF Research Database (Denmark)

    Emiris, Ioannis Z.; Malamatos, Theocharis; Tsigaridas, Elias

    2010-01-01

    We develop a data structure for answering efficiently approximate nearest neighbor queries over a set of parallel segments in three dimensions. We connect this problem to approximate nearest neighbor searching under weight constraints and approximate nearest neighbor searching on historical data...

  1. Nonlinear approximation with general wave packets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2005-01-01

    We study nonlinear approximation in the Triebel-Lizorkin spaces with dictionaries formed by dilating and translating one single function g. A general Jackson inequality is derived for best m-term approximation with such dictionaries. In some special cases where g has a special structure, a complete characterization of the approximation spaces is derived.

  2. Nonlinear approximation with bi-framelets

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten; Gribonval, Rémi

    2005-01-01

    We study the approximation in Lebesgue spaces of wavelet bi-frame systems given by translations and dilations of a finite set of generators. A complete characterization of the approximation spaces associated with best m-term approximation of wavelet bi-framelet systems is given...

  3. Approximation properties of fine hyperbolic graphs

    Indian Academy of Sciences (India)

    Benyin Fu

    2016-05-01

    In this paper, we propose a definition of an approximation property, called the metric invariant translation approximation property, for a countable discrete metric space. Moreover, we use Ozawa's techniques to prove that a fine hyperbolic graph has the metric invariant translation approximation property.

  4. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    Directory of Open Access Journals (Sweden)

    Irfan Mehmood

    2014-09-01

    Full Text Available Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
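
    A minimal sketch of the redundancy-elimination step (the divergence form and threshold are illustrative assumptions; the full framework adds Boolean-series correlation, multi-fractal texture features, and an ensemble classifier):

    ```python
    # Keep a frame as a keyframe only if its color histogram differs enough,
    # in Jeffrey (symmetrized KL) divergence, from the last kept frame.
    import numpy as np

    def jeffrey_divergence(p, q, eps=1e-12):
        p, q = p + eps, q + eps          # avoid log(0) on empty histogram bins
        m = (p + q) / 2.0
        return float(np.sum(p * np.log(p / m) + q * np.log(q / m)))

    def keyframes(histograms, threshold=0.1):
        kept = [0]
        for i in range(1, len(histograms)):
            if jeffrey_divergence(histograms[kept[-1]], histograms[i]) > threshold:
                kept.append(i)
        return kept
    ```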

  5. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    Directory of Open Access Journals (Sweden)

    Nedunchelian Ramanujam

    2016-01-01

    Full Text Available Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they still have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp approach combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered structure, which yields a coherent-looking summary, and the method extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate word scores from word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison of the proposed method with the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach.
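
    A minimal sketch of the timestamp strategy (a plain word-frequency score stands in for the Naïve Bayesian sentence classification used in the paper):

    ```python
    # Select the top-scoring sentences across documents, then order the summary
    # chronologically by document timestamp (and position within the document).
    from collections import Counter

    def summarize(docs, n_sentences=5):
        """docs: list of (timestamp, [sentences]) pairs."""
        freq = Counter(w.lower() for _, sents in docs for s in sents for w in s.split())
        scored = []
        for ts, sents in docs:
            for pos, s in enumerate(sents):
                words = s.split()
                score = sum(freq[w.lower()] for w in words) / max(len(words), 1)
                scored.append((score, ts, pos, s))
        top = sorted(scored, reverse=True)[:n_sentences]
        return [s for _, ts, pos, s in sorted(top, key=lambda t: (t[1], t[2]))]
    ```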

  6. [The general psychological concept in the later work of Eugen Bleuler. Comparison with a summarized description of a forgotten theory 60 years after the final publication (1939)].

    Science.gov (United States)

    Möller, A; Hell, D

    1999-04-01

    Documents by Eugen Bleuler from 1921 to 1939 that address general psychological topics--the meaning of consciousness and the formation of motive and will--are presented. An effort toward integrating seemingly incompatible, unrelated biological and psychological concepts, most likely explainable by the contemporary background of ideas, is recognizable. In this context, Eugen Bleuler refers to a theory called "mnemism", already systematically developed (especially by Richard Semon), which he interpreted and applied to the psychological circumstances mentioned above. This theory of mnemism, most adequately described as a biogenetic-vitalistic theory, assumes that all organic life--independent of the possibility of a self-reflecting consciousness--is able to learn from experiences made by analyzing the environment and to pass them on to following generations. In the sense of this theory, patterns of stimulus reactions are memorized ("engraphiert") and reactivated under similar situational circumstances ("ekphoriert") through the psychological mode of association. It can be shown that Bleuler pursued this theory for more than ten years. It represents the benchmark for Bleuler's standpoints, for example on the question of the determination of human action, which are already known from his earlier documents but here found a more theoretically based explanation. The assumption of the efficacy of specific, not necessarily consciously remembered "engrams" of memory suggests the hypothesis of the existence of the unconscious; in this context, textual points of contact with the psychological concepts of S. Freud and C. G. Jung--mnemic memory and the collective unconscious--are shown.

  7. An introduction to relativistic magnetohydrodynamics I. The force-free approximation

    Science.gov (United States)

    Karas, Vladimír

    2005-12-01

    This lecture summarizes basic equations of relativistic magnetohydrodynamics (MHD). The aim of the lecture is to present important relations and approximations that have been often employed and found useful in the astrophysical context, namely, in situations when plasma motion is governed by magnetohydrodynamic and gravitational effects competing with each other near a black hole.

  8. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  9. Resonant-state expansion Born Approximation

    CERN Document Server

    Doost, M B

    2015-01-01

    The Born approximation is a fundamental formula in physics: it allows the calculation of weak scattering via the Fourier transform of the scattering potential. I extend the Born approximation by including in the formula the Fourier transform of a truncated basis of the infinite number of appropriately normalised resonant states. This extension of the Born approximation is named the Resonant-State Expansion Born approximation, or RSE Born approximation. The resonant states of the system can be calculated using the recently discovered RSE perturbation theory for electrodynamics and normalised correctly to appear in spectral Green's functions via the flux volume normalisation.
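
    For reference, the standard first Born approximation referred to above: up to constants, the scattering amplitude is the Fourier transform of the potential at the momentum transfer $\mathbf{q}$ (textbook form, not taken from the paper):

    ```latex
    f^{(1)}(\mathbf{q}) = -\frac{m}{2\pi\hbar^{2}}
        \int V(\mathbf{r})\, e^{-i\mathbf{q}\cdot\mathbf{r}} \,\mathrm{d}^{3}r
    ```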

  10. Approximate dynamic programming solving the curses of dimensionality

    CERN Document Server

    Powell, Warren B

    2007-01-01

    Warren B. Powell, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored over 100 refereed publications on stochastic optimization, approximate dynamic programming, and dynamic resource management.

  11. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  12. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  13. On Galerkin approximations for the quasigeostrophic equations

    CERN Document Server

    Rocha, Cesar B; Grooms, Ian

    2015-01-01

    We study the representation of approximate solutions of the three-dimensional quasigeostrophic (QG) equations using Galerkin series with standard vertical modes. In particular, we show that standard modes are compatible with nonzero buoyancy at the surfaces and can be used to solve the Eady problem. We extend two existing Galerkin approaches (A and B) and develop a new Galerkin approximation (C). Approximation A, due to Flierl (1978), represents the streamfunction as a truncated Galerkin series and defines the potential vorticity (PV) that satisfies the inversion problem exactly. Approximation B, due to Tulloch and Smith (2009b), represents the PV as a truncated Galerkin series and calculates the streamfunction that satisfies the inversion problem exactly. Approximation C, the true Galerkin approximation for the QG equations, represents both streamfunction and PV as truncated Galerkin series, but does not satisfy the inversion equation exactly. The three approximations are fundamentally different unless the b...

  14. GeneBase 1.1: a tool to summarize data from NCBI gene datasets and its application to an update of human gene statistics

    Science.gov (United States)

    Piovesan, Allison; Caracausi, Maria; Antonaros, Francesca; Pelleri, Maria Chiara; Vitale, Lorenza

    2016-01-01

    We release GeneBase 1.1, a local tool with a graphical interface useful for parsing, structuring and indexing data from the National Center for Biotechnology Information (NCBI) Gene data bank. Compared to its predecessor GeneBase (1.0), GeneBase 1.1 now allows dynamic calculation and summarization in terms of median, mean, standard deviation and total for many quantitative parameters associated with genes, gene transcripts and gene features (exons, introns, coding sequences, untranslated regions). GeneBase 1.1 thus offers the opportunity to perform analyses of the main gene structure parameters also following the search for any set of genes with the desired characteristics, allowing unique functionalities not provided by the NCBI Gene itself. In order to show the potential of our tool for local parsing, structuring and dynamic summarizing of publicly available databases for data retrieval, analysis and testing of biological hypotheses, we provide as a sample application a revised set of statistics for human nuclear genes, gene transcripts and gene features. In contrast with previous estimations strongly underestimating the length of human genes, a ‘mean’ human protein-coding gene is 67 kbp long, has eleven 309 bp long exons and ten 6355 bp long introns. Median, mean and extreme values are provided for many other features offering an updated reference source for human genome studies, data useful to set parameters for bioinformatic tools and interesting clues to the biomedical meaning of the gene features themselves. Database URL: http://apollo11.isto.unibo.it/software/ PMID:28025344

  15. GeneBase 1.1: a tool to summarize data from NCBI gene datasets and its application to an update of human gene statistics.

    Science.gov (United States)

    Piovesan, Allison; Caracausi, Maria; Antonaros, Francesca; Pelleri, Maria Chiara; Vitale, Lorenza

    2016-01-01

    We release GeneBase 1.1, a local tool with a graphical interface useful for parsing, structuring and indexing data from the National Center for Biotechnology Information (NCBI) Gene data bank. Compared to its predecessor GeneBase (1.0), GeneBase 1.1 now allows dynamic calculation and summarization in terms of median, mean, standard deviation and total for many quantitative parameters associated with genes, gene transcripts and gene features (exons, introns, coding sequences, untranslated regions). GeneBase 1.1 thus offers the opportunity to perform analyses of the main gene structure parameters also following the search for any set of genes with the desired characteristics, allowing unique functionalities not provided by the NCBI Gene itself. In order to show the potential of our tool for local parsing, structuring and dynamic summarizing of publicly available databases for data retrieval, analysis and testing of biological hypotheses, we provide as a sample application a revised set of statistics for human nuclear genes, gene transcripts and gene features. In contrast with previous estimations strongly underestimating the length of human genes, a 'mean' human protein-coding gene is 67 kbp long, has eleven 309 bp long exons and ten 6355 bp long introns. Median, mean and extreme values are provided for many other features offering an updated reference source for human genome studies, data useful to set parameters for bioinformatic tools and interesting clues to the biomedical meaning of the gene features themselves. Database URL: http://apollo11.isto.unibo.it/software/.

  16. Choctaw National Wildlife Refuge : Public Use Development Plan

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This Public Use Development Plan for Choctaw National Wildlife Refuge summarizes the Refuge’s public use goals, how the Refuge will project a positive attitude, how...

  17. Public Use Plan : DeSoto National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The DeSoto NWR Public Use Plan summarizes public use activities on the Refuge. Background information about the Refuge is provided along with information about...

  18. Improving biconnectivity approximation via local optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ka Wong Chong; Tak Wah Lam [Univ. of Hong Kong (Hong Kong)]

    1996-12-31

    The problem of finding the minimum biconnected spanning subgraph of an undirected graph is NP-hard. A lot of effort has been made to find biconnected spanning subgraphs that approximate the minimum one as closely as possible. Recently, new polynomial-time (sequential) approximation algorithms have been devised to improve the approximation factor from 2 to 5/3, then 3/2, while NC algorithms have also been known to achieve 7/4 + ε. This paper presents a new technique which can be used to further improve parallel approximation factors to 5/3 + ε. In the sequential context, the technique yields an algorithm with a factor of α + 1/5, where α is the approximation factor of any 2-edge connectivity approximation algorithm.

  19. Frankenstein's Glue: Transition functions for approximate solutions

    CERN Document Server

    Yunes, N

    2006-01-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the...

  20. Floating-Point $L^2$-Approximations

    OpenAIRE

    Brisebarre, Nicolas; Hanrot, Guillaume

    2007-01-01

    Computing good polynomial approximations to usual functions is an important topic for the computer evaluation of those functions. These approximations can be good under several criteria, the most desirable probably being that the relative error is as small as possible in the $L^{\infty}$ sense, i.e. everywhere on the interval under study. In the present paper, we investigate a simpler criterion, the $L^2$ case. Though finding a best polynomial $L^2$-approximation with ...
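
    A minimal numerical sketch of the $L^2$ criterion (discrete least squares on a dense grid as a stand-in for the exact continuous $L^2$ projection; not the paper's algorithm):

    ```python
    # Degree-5 polynomial minimizing the (discretized) L2 error to exp on [-1, 1],
    # computed in the numerically well-behaved Legendre basis.
    import numpy as np

    x = np.linspace(-1.0, 1.0, 2001)
    coeffs = np.polynomial.legendre.legfit(x, np.exp(x), deg=5)
    p = np.polynomial.legendre.legval(x, coeffs)
    print(np.sqrt(np.mean((np.exp(x) - p) ** 2)))  # RMS error, on the order of 1e-5
    ```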

  1. Metric Diophantine approximation on homogeneous varieties

    CERN Document Server

    Ghosh, Anish; Nevo, Amos

    2012-01-01

    We develop the metric theory of Diophantine approximation on homogeneous varieties of semisimple algebraic groups and prove results analogous to the classical Khinchin and Jarnik theorems. In full generality our results establish simultaneous Diophantine approximation with respect to several completions, and Diophantine approximation over general number fields using S-algebraic integers. In several important examples, the metric results we obtain are optimal. The proof uses quantitative equidistribution properties of suitable averaging operators, which are derived from spectral bounds in automorphic representations.

  2. Approximately linear phase IIR digital filter banks

    OpenAIRE

    J. D. Ćertić; M. D. Lutovac; L. D. Milić

    2013-01-01

    In this paper, uniform and nonuniform digital filter banks based on approximately linear phase IIR filters and frequency response masking technique (FRM) are presented. Both filter banks are realized as a connection of an interpolated half-band approximately linear phase IIR filter as a first stage of the FRM design and an appropriate number of masking filters. The masking filters are half-band IIR filters with an approximately linear phase. The resulting IIR filter banks are compared with li...

  3. A Note on Generalized Approximation Property

    Directory of Open Access Journals (Sweden)

    Antara Bhar

    2013-01-01

    Full Text Available We introduce a notion of generalized approximation property, which we refer to as --AP, possessed by a Banach space, corresponding to an arbitrary Banach sequence space and a convex subset of the class of bounded linear operators on the space. This property includes the approximation property studied by Grothendieck, the -approximation property considered by Sinha and Karn and Delgado et al., and also the approximation property studied by Lissitsin et al. We characterize a Banach space having --AP with the help of -compact operators, -nuclear operators, and quasi--nuclear operators. A particular case has also been characterized.

  4. Upper Bounds on Numerical Approximation Errors

    DEFF Research Database (Denmark)

    Raahauge, Peter

    2004-01-01

    This paper suggests a method for determining rigorous upper bounds on approximation errors of numerical solutions to infinite horizon dynamic programming models. Bounds are provided for approximations of the value function and the policy function as well as the derivatives of the value function. The bounds apply to more general problems than existing bounding methods do. For instance, since strict concavity is not required, linear models and piecewise linear approximations can be dealt with. Despite the generality, the bounds perform well in comparison with existing methods even when applied to approximations of a standard (strictly concave) growth model. KEYWORDS: Numerical approximation errors, Bellman contractions, Error bounds...

  5. TMB: Automatic differentiation and Laplace approximation

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte

    2016-01-01

    computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (approximately 10^6) and parameters (approximately 10...

  6. Public Values

    DEFF Research Database (Denmark)

    Beck Jørgensen, Torben; Rutgers, Mark R.

    2015-01-01

    This article provides the introduction to a symposium on contemporary public values research. It is argued that the contributions to this symposium represent a Public Values Perspective, distinct from other specific lines of research that also use public value as a core concept. Public administration is approached in terms of processes guided or restricted by public values and as public value creating: public management and public policy-making are both concerned with establishing, following and realizing public values. To study public values a broad perspective is needed. The article suggests a research agenda for this encompassing kind of public values research. Finally the contributions to the symposium are introduced.

  7. Lattice QCD simulations beyond the quenched approximation

    Energy Technology Data Exchange (ETDEWEB)

    Ukawa, A. (European Organization for Nuclear Research, Geneva (Switzerland). Theory Div.)

    1989-07-01

    Present status of lattice QCD simulations incorporating the effects of dynamical quarks is presented. After a brief review of the formalism of lattice QCD, the dynamical fermion algorithms in use today are described. Recent attempts at the hadron mass calculation are discussed in relation to the quenched results, and current understanding on the finite temperature behavior of QCD is summarized. (orig.).

  8. Research and Development on a Public Attitude Instrument for Stuttering

    Science.gov (United States)

    St. Louis, Kenneth O.

    2012-01-01

    This paper summarizes research associated with the development of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S"), a survey instrument designed to provide a worldwide standard measure of public attitudes toward stuttering. Pilot studies with early experimental prototypes of the "POSHA-S" are summarized that relate to…

  9. Research and Development on a Public Attitude Instrument for Stuttering

    Science.gov (United States)

    St. Louis, Kenneth O.

    2012-01-01

    This paper summarizes research associated with the development of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S"), a survey instrument designed to provide a worldwide standard measure of public attitudes toward stuttering. Pilot studies with early experimental prototypes of the "POSHA-S" are summarized that relate to…

  10. Inversion and approximation of Laplace transforms

    Science.gov (United States)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  11. Computing Functions by Approximating the Input

    Science.gov (United States)

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  12. Non-Linear Approximation of Bayesian Update

    KAUST Repository

    Litvinenko, Alexander

    2016-06-23

    We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.
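
    For reference, the linear-Gaussian special case mentioned above is the classical Kalman update (standard notation, not taken from the paper):

    ```latex
    % H: observation operator, P: prior covariance, R: observation-noise covariance
    K = P H^{\top} \left( H P H^{\top} + R \right)^{-1}, \qquad
    x^{a} = x + K \,(y - H x), \qquad
    P^{a} = (I - K H)\, P
    ```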

  13. Random Attractors of Stochastic Modified Boussinesq Approximation

    Institute of Scientific and Technical Information of China (English)

    郭春晓

    2011-01-01

    The Boussinesq approximation is a reasonable model for describing processes in planetary interiors. We refer to [1] and [2] for a derivation of the Boussinesq approximation, and to [3] for some related results on the existence and uniqueness of solutions.

  14. Approximating a harmonizable isotropic random field

    Directory of Open Access Journals (Sweden)

    Randall J. Swift

    2001-01-01

    Full Text Available The class of harmonizable fields is a natural extension of the class of stationary fields. This paper considers a stochastic series approximation of a harmonizable isotropic random field. This approximation is useful for numerical simulation of such a field.

  15. On approximating multi-criteria TSP

    NARCIS (Netherlands)

    Manthey, Bodo; Albers, S.; Marion, J.-Y.

    2009-01-01

    We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP), whose performances are independent of the number $k$ of criteria and come close to the approximation ratios obtained for TSP with a single objective function. We present randomized app

  16. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by...

  17. A case where BO Approximation breaks down

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Born-Oppenheimer (BO) approximation is ubiquitous in molecular physics, quantum physics and quantum chemistry. However, CAS researchers recently observed a breakdown of the approximation in the reaction of fluorine with deuterium atoms. The result was published in the August 24 issue of Science.

  18. Two Point Pade Approximants and Duality

    CERN Document Server

    Banks, Tom

    2013-01-01

    We propose the use of two point Pade approximants to find expressions valid uniformly in coupling constant for theories with both weak and strong coupling expansions. In particular, one can use these approximants in models with a strong/weak duality, when the symmetries do not determine exact expressions for some quantity.

  19. Function Approximation Using Probabilistic Fuzzy Systems

    NARCIS (Netherlands)

    J.H. van den Berg (Jan); U. Kaymak (Uzay); R.J. Almeida e Santos Nogueira (Rui Jorge)

    2011-01-01

    We consider function approximation by fuzzy systems. Fuzzy systems are typically used for approximating deterministic functions, in which the stochastic uncertainty is ignored. We propose probabilistic fuzzy systems in which the probabilistic nature of uncertainty is taken into account.

  20. Approximation of the Inverse g-Frame Operator

    Indian Academy of Sciences (India)

    M R Abdollahpour; A Najati

    2011-05-01

    In this paper, we introduce the concept of a (strong) projection method for g-frames which works for all conditional g-Riesz frames. We also derive a method for approximation of the inverse g-frame operator which is efficient for all g-frames. We show how the inverse of the g-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  1. Nonlinear approximation with dictionaries I. Direct estimates

    DEFF Research Database (Denmark)

    Gribonval, Rémi; Nielsen, Morten

    2004-01-01

    with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove...

  2. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem/Wim; Kallenberg, W.C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  3. Quirks of Stirling's Approximation

    Science.gov (United States)

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln n! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
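
    For reference, the truncation at issue (standard formulas): the form typically taught drops the $\tfrac{1}{2}\ln(2\pi n)$ term of the fuller asymptotic expansion, which is harmless for thermodynamically large $n$ but misleading for small $n$:

    ```latex
    \ln n! \approx n \ln n - n, \qquad
    \ln n! = n \ln n - n + \tfrac{1}{2} \ln(2\pi n) + O\!\left(\tfrac{1}{n}\right)
    ```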

  4. INVARIANT RANDOM APPROXIMATION IN NONCONVEX DOMAIN

    Directory of Open Access Journals (Sweden)

    R. Shrivastava

    2012-05-01

    Full Text Available Random fixed point results in the setup of compact and weakly compact domains of Banach spaces which are not necessarily starshaped have been obtained in the present work. Invariant random approximation results have also been determined as an application. In this way, random versions of the invariant approximation results due to Mukherjee and Som [13] and Singh [17] have been given.

  5. Approximability and Parameterized Complexity of Minmax Values

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Hansen, Thomas Dueholm; Miltersen, Peter Bro;

    2008-01-01

    We consider approximating the minmax value of a multi player game in strategic form. Tightening recent bounds by Borgs et al., we observe that approximating the value with a precision of ε log n digits (for any constant ε > 0) is NP-hard, where n is the size of the game. On the other hand...

  6. Hardness of approximation for strip packing

    DEFF Research Database (Denmark)

    Adamaszek, Anna Maria; Kociumaka, Tomasz; Pilipczuk, Marcin

    2017-01-01

    [SODA 2016] have recently proposed a (1.4 + ϵ)-approximation algorithm for this variant, thus showing that strip packing with polynomially bounded data can be approximated better than when exponentially large values are allowed in the input. Their result has subsequently been improved to a (4/3 + ϵ...

  7. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, Rajko; Albers, Willem; Kallenberg, Wilbert C.M.

    2005-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  8. Approximations for stop-loss reinsurance premiums

    NARCIS (Netherlands)

    Reijnen, R.; Albers, W.; Kallenberg, W.C.M.

    2003-01-01

    Various approximations of stop-loss reinsurance premiums are described in literature. For a wide variety of claim size distributions and retention levels, such approximations are compared in this paper to each other, as well as to a quantitative criterion. For the aggregate claims two models are use

  9. Lifetime of the Nonlinear Geometric Optics Approximation

    DEFF Research Database (Denmark)

    Binzer, Knud Andreas

    The subject of the thesis is to study a certain approximation method for highly oscillatory solutions to nonlinear partial differential equations.

  10. Simple Lie groups without the approximation property

    DEFF Research Database (Denmark)

    Haagerup, Uffe; de Laat, Tim

    2013-01-01

    For a locally compact group G, let A(G) denote its Fourier algebra, and let M0A(G) denote the space of completely bounded Fourier multipliers on G. The group G is said to have the Approximation Property (AP) if the constant function 1 can be approximated by a net in A(G) in the weak-∗ topology...

  11. Text Summarization and Discovery of Frames and Relationship from Natural Language Text - A R&D Methodology

    Directory of Open Access Journals (Sweden)

    P. Chakrabarti

    2010-05-01

    Full Text Available The paper deals with the concept of data mining, whereby data resources can be fetched and accessed with reduced time complexity. Resource sharing is an important aspect in the field of information science. The retrieval techniques are pointed out based on the ideas of binary search trees, Gantt charts, and text summarization. A theorem has been cited regarding the summation of the total length of codes of each leaf search term. Summarization is a hard problem of Natural Language Processing because, to do it properly, one has to really understand the point of a text. This requires semantic analysis, discourse processing, and inferential interpretation (grouping of the content using world knowledge). The last step, especially, is complex, because systems without a great deal of world knowledge simply cannot do it. Therefore, attempts so far at performing true abstraction--creating abstracts as summaries--have not been very successful. Fortunately, however, an approximation called extraction is more feasible today. To create an extract, a system need simply identify the most important/topical/central topic(s) of the text and return them to the reader. Although the summary is not necessarily coherent, the reader can form an opinion of the content of the original. Most automated summarization systems today produce extracts only. Another purpose of this paper is to address the problem of information discovery in large collections of text. For users, one of the key problems in working with such collections is determining where to focus their attention. Text documents often contain valuable structured data that is hidden in regular English sentences. This data is best exploited if available as a relational table that we could use for answering precise queries or for running data mining tasks. We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. In this paper we have

  12. An improved proximity force approximation for electrostatics

    CERN Document Server

    Fosco, C D; Mazzitelli, F D

    2012-01-01

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called "proximity force approximation" the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful to discuss the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction i...

  13. Approximate Furthest Neighbor in High Dimensions

    DEFF Research Database (Denmark)

    Pagh, Rasmus; Silvestri, Francesco; Sivertsen, Johan von Tangen;

    2015-01-01

    Much recent work has been devoted to approximate nearest neighbor queries. Motivated by applications in recommender systems, we consider approximate furthest neighbor (AFN) queries. We present a simple, fast, and highly practical data structure for answering AFN queries in high-dimensional Euclidean space. We build on the technique of Indyk (SODA 2003), storing random projections to provide sublinear query time for AFN. However, we introduce a different query algorithm, improving on Indyk's approximation factor and reducing the running time by a logarithmic factor. We also present a variation...

  14. Trajectory averaging for stochastic approximation MCMC algorithms

    CERN Document Server

    Liang, Faming

    2010-01-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...

  15. Approximating maximum clique with a Hopfield network.

    Science.gov (United States)

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
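
For contrast, here is a sketch of the kind of "naive" greedy heuristic that one of the paper's dynamics is said to emulate (the adjacency encoding is an illustrative choice, not the Hopfield network itself):

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic; adj maps each vertex to its neighbor set."""
    clique = set()
    # visit vertices by descending degree, keeping those adjacent to all chosen
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if clique <= adj[v]:           # v is connected to every clique member
            clique.add(v)
    return clique

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_clique(adj))              # {0, 1, 2}
```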

  16. A systematic sequence of relativistic approximations.

    Science.gov (United States)

    Dyall, Kenneth G

    2002-06-01

    An approach to the development of a systematic sequence of relativistic approximations is reviewed. The approach depends on the atomically localized nature of relativistic effects, and is based on the normalized elimination of the small component in the matrix modified Dirac equation. Errors in the approximations are assessed relative to four-component Dirac-Hartree-Fock calculations or other reference points. Projection onto the positive energy states of the isolated atoms provides an approximation in which the energy-dependent parts of the matrices can be evaluated in separate atomic calculations and implemented in terms of two sets of contraction coefficients. The errors in this approximation are extremely small, of the order of 0.001 pm in bond lengths and tens of microhartrees in absolute energies. From this approximation it is possible to partition the atoms into relativistic and nonrelativistic groups and to treat the latter with the standard operators of nonrelativistic quantum mechanics. This partitioning is shared with the relativistic effective core potential approximation. For atoms in the second period, errors in the approximation are of the order of a few hundredths of a picometer in bond lengths and less than 1 kJ mol(-1) in dissociation energies; for atoms in the third period, errors are a few tenths of a picometer and a few kilojoule/mole, respectively. A third approximation for scalar relativistic effects replaces the relativistic two-electron integrals with the nonrelativistic integrals evaluated with the atomic Foldy-Wouthuysen coefficients as contraction coefficients. It is similar to the Douglas-Kroll-Hess approximation, and is accurate to about 0.1 pm and a few tenths of a kilojoule/mole. The integrals in all the approximations are no more complicated than the integrals in the full relativistic methods, and their derivatives are correspondingly easy to formulate and evaluate.

  17. Training in summarizing notes: Effects of teaching students a self-regulation study strategy in science learning

    Science.gov (United States)

    Nebres, Michelle

    The last two decades of national data assessments reveal a sharp decline in nationwide standardized test scores. International assessment data show that in 2012 a low proportion of American students were performing at or above proficiency in science literacy. Research in science literacy education suggests that students benefit most when they are self-regulated (SR) learners. Unfortunately, SR poses a challenge for many students because they lack these skills. The effects of having learned few SR strategies at an early age may lead to long-term learning difficulties--preventing students from achieving academic success in college and beyond. As a result, some researchers have begun to investigate how best to support students' SR skills. In order for studying to be successful, students need to know which SR study strategies to implement. This can be tricky for struggling students because they need study strategies that are well defined. This needs to be addressed through effective classroom instruction, and should be addressed prior to entering high school in order for students to be prepared for higher-level learning. In this study, students underwent a treatment in which they were taught an SR study strategy called summarizing notes. A crossover repeated measures design was employed to assess the effectiveness of the treatment. Results indicated a weak but positive correlation between how well students summarized notes and how well they performed on science tests. Self-regulation skills are needed because these are the types of skills young adults will use as they enter the workforce. As young adults begin working in a professional setting, they will be expected to know how to observe and become proficient on their own. This study is pertinent to the educational field because it is an opportunity for students to increase SR, which affords students the skills needed to be lifelong learners.

  18. Stylistic Features of English Public Speaking

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-hui

    2001-01-01

    Public speaking has syntactic, lexical, phonological and rhetorical features. The stylistic features of public speaking can be summarized as follows: The language used in public speaking is formal. Public speaking requires that the language and style be standard and neither too frozen nor too intimate. The use of rhetorical devices makes a speech effective and convincing. The delivery mode of a public speech has the characteristics of both oral English and written English.

  19. Frankenstein's glue: transition functions for approximate solutions

    Science.gov (United States)

    Yunes, Nicolás

    2007-09-01

    Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
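
For reference, the classical C-infinity transition function below is the standard textbook construction for this kind of gluing; the sufficient conditions proposed in the paper are additional requirements imposed on functions of this type:

```latex
% Standard smooth transition function: T == 0 for x <= 0, T == 1 for x >= 1,
% with all derivatives vanishing at the endpoints of the transition region.
\[
  f(x) =
  \begin{cases}
    0, & x \le 0,\\[2pt]
    e^{-1/x}, & x > 0,
  \end{cases}
  \qquad
  T(x) = \frac{f(x)}{f(x) + f(1-x)}.
\]
```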

  20. The tendon approximator device in traumatic injuries.

    Science.gov (United States)

    Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S

    2015-01-01

    Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluate the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with laceration of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, as well as patients' outcomes over a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons by the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group, 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (14-60 months). Time required for repair of each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min). Outcomes of tendon repair were identical in the two groups and were not significantly different. 1% of tendons in group A and 1.2% in group B had rupture, which was not significantly different. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair with sustained outcomes comparable to the conventional methods.

  1. Entanglement in the Born-Oppenheimer Approximation

    CERN Document Server

    Izmaylov, Artur F

    2016-01-01

    The role of electron-nuclear entanglement on the validity of the Born-Oppenheimer (BO) approximation is investigated. While nonadiabatic couplings generally lead to entanglement and to a failure of the BO approximation, surprisingly the degree of electron-nuclear entanglement is found to be uncorrelated with the degree of validity of the BO approximation. This is because while the degree of entanglement of BO states is determined by their deviation from the corresponding states in the crude BO approximation, the accuracy of the BO approximation is dictated, instead, by the deviation of the BO states from the exact electron-nuclear states. In fact, in the context of a minimal avoided crossing model, extreme cases are identified where an adequate BO state is seen to be maximally entangled, and where the BO approximation fails but the associated BO state remains approximately unentangled. Further, the BO states are found to not preserve the entanglement properties of the exact electron-nuclear eigenstates, and t...

  2. DIFFERENCE SCHEMES BASING ON COEFFICIENT APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    MOU Zong-ze; LONG Yong-xing; QU Wen-xiao

    2005-01-01

    For variable-coefficient differential equations, schemes built on approximating the coefficient function are more accurate than those freezing the coefficient as a constant in every discrete subinterval. Usually, difference schemes constructed from a Taylor expansion of the solution do not suit solutions with sharp variation. By introducing local bases combined with coefficient function approximation, the difference scheme can depict more complex physical phenomena, for example boundary layers as well as highly oscillatory, sharp behavior. The numerical test shows the method is more effective than the traditional one.

  3. Approximate equivalence in von Neumann algebras

    Institute of Scientific and Technical Information of China (English)

    DING Huiru; Don Hadwin

    2005-01-01

    One formulation of D. Voiculescu's theorem on approximate unitary equivalence is that two unital representations π and ρ of a separable C*-algebra are approximately unitarily equivalent if and only if rank o π = rank o ρ. We study the analog when the ranges of π and ρ are contained in a von Neumann algebra R, the unitaries inducing the approximate equivalence must come from R, and "rank" is replaced with "R-rank" (defined as the Murray-von Neumann equivalence of the range projection).

  4. Approximation of free-discontinuity problems

    CERN Document Server

    Braides, Andrea

    1998-01-01

    Functionals involving both volume and surface energies have a number of applications ranging from Computer Vision to Fracture Mechanics. In order to tackle numerical and dynamical problems linked to such functionals many approximations by functionals defined on smooth functions have been proposed (using high-order singular perturbations, finite-difference or non-local energies, etc.) The purpose of this book is to present a global approach to these approximations using the theory of gamma-convergence and of special functions of bounded variation. The book is directed to PhD students and researchers in calculus of variations, interested in approximation problems with possible applications.

  5. Mathematical analysis, approximation theory and their applications

    CERN Document Server

    Gupta, Vijay

    2016-01-01

    Designed for graduate students, researchers, and engineers in mathematics, optimization, and economics, this self-contained volume presents theory, methods, and applications in mathematical analysis and approximation theory. Specific topics include: approximation of functions by linear positive operators with applications to computer aided geometric design, numerical analysis, optimization theory, and solutions of differential equations. Recent and significant developments in approximation theory, special functions and q-calculus along with their applications to mathematics, engineering, and social sciences are discussed and analyzed. Each chapter enriches the understanding of current research problems and theories in pure and applied research.

  6. Regression with Sparse Approximations of Data

    DEFF Research Database (Denmark)

    Noorzad, Pardis; Sturm, Bob L.

    2012-01-01

    We propose sparse approximation weighted regression (SPARROW), a method for local estimation of the regression function that uses sparse approximation with a dictionary of measurements. SPARROW estimates the regression function at a point with a linear combination of a few regressands selected by a sparse approximation of the point in terms of the regressors. We show SPARROW can be considered a variant of k-nearest neighbors regression (k-NNR), and more generally, local polynomial kernel regression. Unlike k-NNR, however, SPARROW can adapt the number of regressors to use based...
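
A bare k-NNR baseline of the kind SPARROW is compared against (SPARROW's sparse-approximation selection step is omitted here; names and data are illustrative):

```python
import numpy as np

def knn_regress(x_query, X, y, k=5):
    """k-nearest-neighbors regression: average the regressands of the k closest points."""
    dists = np.linalg.norm(X - x_query, axis=1)
    nearest = np.argsort(dists)[:k]            # indices of the k closest regressors
    return y[nearest].mean()                   # average their regressands

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
print(knn_regress(np.array([1.0]), X, y))      # roughly sin(1) ~ 0.84
```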

  7. Orthorhombic rational approximants for decagonal quasicrystals

    Indian Academy of Sciences (India)

    S Ranganathan; Anandh Subramaniam

    2003-10-01

    An important exercise in the study of rational approximants is to derive their metric, especially in relation to the corresponding quasicrystal or the underlying clusters. Kuo’s model has been the widely accepted model to calculate the metric of the decagonal approximants. Using an alternate model, the metric of the approximants and other complex structures with the icosahedral cluster are explained elsewhere. In this work a comparison is made between the two models bringing out their equivalence. Further, using the concept of average lattices, a modified model is proposed.

  8. Approximation of the semi-infinite interval

    Directory of Open Access Journals (Sweden)

    A. McD. Mercer

    1980-01-01

    Full Text Available The approximation of a function f ∈ C[a,b] by Bernstein polynomials is well known. It is based on the binomial distribution. O. Szász has shown that there are analogous approximations on the interval [0,∞) based on the Poisson distribution. Recently R. Mohapatra has generalized Szász' result to the case in which the approximating function is
    $$\alpha e^{-ux}\sum_{k=N}^{\infty}\frac{(ux)^{k\alpha+\beta-1}}{\Gamma(k\alpha+\beta)}\,f\!\left(\frac{k\alpha}{u}\right).$$
    The present note shows that these results are special cases of a Tauberian theorem for certain infinite series having positive coefficients.
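
The classical Szász operator underlying these results, (S_u f)(x) = e^{-ux} Σ_{k≥0} (ux)^k / k! · f(k/u), can be evaluated numerically; the truncation bound K below is an implementation convenience, not part of the note:

```python
import math

def szasz(f, x, u=50.0, K=500):
    """Evaluate the Szasz-Mirakyan operator (S_u f)(x), truncating the series at K terms."""
    total = 0.0
    log_term = -u * x                                  # log of e^{-ux} (ux)^0 / 0!
    for k in range(K):
        total += math.exp(log_term) * f(k / u)
        log_term += math.log(u * x) - math.log(k + 1)  # advance to the k+1 term
    return total

print(szasz(math.sin, 1.2))                            # close to sin(1.2)
```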

  9. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
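
A textbook rejection-ABC sketch of the likelihood-free idea (prior, summary statistic, and tolerance are all illustrative choices, not taken from the overview):

```python
import numpy as np

rng = np.random.default_rng(0)
observed_mean = 2.0                                 # observed summary statistic

def simulate(theta, n=100):
    """Forward model: simulate data given theta and return its summary statistic."""
    return rng.normal(theta, 1.0, size=n).mean()

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                      # draw from the prior
    if abs(simulate(theta) - observed_mean) < 0.1:  # keep draws within tolerance
        accepted.append(theta)

print(np.mean(accepted))                            # approximate posterior mean, near 2
```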

  10. Trigonometric Approximations for Some Bessel Functions

    OpenAIRE

    Muhammad Taher Abuelma'atti

    1999-01-01

    Formulas are obtained for approximating the tabulated Bessel functions Jn(x), n = 0–9 in terms of trigonometric functions. These formulas can be easily integrated and differentiated and are convenient for personal computers and pocket calculators.
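
The paper's specific fits are not reproduced here; as a stand-in, the snippet below compares J0 with the standard large-argument trigonometric asymptotic Jn(x) ≈ sqrt(2/(πx)) cos(x − nπ/2 − π/4):

```python
import math
from scipy.special import jv          # reference Bessel function J_n

def jn_asymptotic(n, x):
    """Standard large-x trigonometric asymptotic for J_n(x)."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - n * math.pi / 2 - math.pi / 4)

for x in (5.0, 10.0, 20.0):
    print(x, jv(0, x), jn_asymptotic(0, x))   # agreement improves as x grows
```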

  11. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
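
The unstructured baseline behind all of this is the truncated SVD (Eckart-Young); the methods in the book constrain the factors further to respect Toeplitz, Hankel, or Sylvester structure. A minimal sketch:

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A in the 2-norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

A = np.random.default_rng(2).normal(size=(50, 30))
A_r = low_rank(A, 5)
print(np.linalg.matrix_rank(A_r))    # 5
```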

  12. Asynchronous stochastic approximation with differential inclusions

    Directory of Open Access Journals (Sweden)

    David S. Leslie

    2012-01-01

    Full Text Available The asymptotic pseudo-trajectory approach to stochastic approximation of Benaïm, Hofbauer and Sorin is extended for asynchronous stochastic approximations with a set-valued mean field. The asynchronicity of the process is incorporated into the mean field to produce convergence results which remain similar to those of an equivalent synchronous process. In addition, this allows many of the restrictive assumptions previously associated with asynchronous stochastic approximation to be removed. The framework is extended for a coupled asynchronous stochastic approximation process with set-valued mean fields. Two-timescales arguments are used here in a similar manner to the original work in this area by Borkar. The applicability of this approach is demonstrated through learning in a Markov decision process.

  13. An approximate Expression for Viscosity of Nanosuspensions

    CERN Document Server

    Domostroeva, N G

    2009-01-01

    We consider liquid suspensions with dispersed nanoparticles. Using two-point Padé approximants and combining results of both hydrodynamic and molecular dynamics methods, we obtain the effective viscosity for any diameter of nanoparticles.

  14. On Approximating Four Covering and Packing Problems

    CERN Document Server

    Ashley, Mary; Berman, Piotr; Chaovalitwongse, Wanpracha; DasGupta, Bhaskar; Kao, Ming-Yang; 10.1016/j.jcss.2009.01.002

    2011-01-01

    In this paper, we consider approximability issues of the following four problems: triangle packing, full sibling reconstruction, maximum profit coverage and 2-coverage. All of them are generalized or specialized versions of set-cover and have applications in biology ranging from full-sibling reconstruction in wild populations to biomolecular clusterings; however, as this paper shows, their approximability properties differ considerably. Our inapproximability constant for the triangle packing problem improves upon previous results; it is obtained by directly transforming the inapproximability gap of Håstad for the problem of maximizing the number of satisfied equations for a set of equations over GF(2) and is interesting in its own right. Our approximability results on the full sibling reconstruction problem answer questions originally posed by Berger-Wolf et al., and our results on the maximum profit coverage problem provide almost matching upper and lower bounds on the approximation ratio, answering a...

  15. Staying thermal with Hartree ensemble approximations

    Energy Technology Data Exchange (ETDEWEB)

    Salle, Mischa E-mail: msalle@science.uva.nl; Smit, Jan E-mail: jsmit@science.uva.nl; Vink, Jeroen C. E-mail: jcvink@science.uva.nl

    2002-03-25

    We study thermal behavior of a recently introduced Hartree ensemble approximation, which allows for non-perturbative inhomogeneous field configurations as well as for approximate thermalization, in the φ⁴ model in 1+1 dimensions. Using ensembles with a free field thermal distribution as out-of-equilibrium initial conditions we determine thermalization time scales. The time scale for which the system stays in approximate quantum thermal equilibrium is an indication of the time scales for which the approximation method stays reasonable. This time scale turns out to be two orders of magnitude larger than the time scale for thermalization, in the range of couplings and temperatures studied. We also discuss simplifications of our method which are numerically more efficient and make a comparison with classical dynamics.

  16. Approximations of solutions to retarded integrodifferential equations

    Directory of Open Access Journals (Sweden)

    Dhirendra Bahuguna

    2004-11-01

    Full Text Available In this paper we consider a retarded integrodifferential equation and prove existence, uniqueness and convergence of approximate solutions. We also give some examples to illustrate the applications of the abstract results.

  17. APPROXIMATE DEVELOPMENTS FOR SURFACES OF REVOLUTION

    Directory of Open Access Journals (Sweden)

    Mădălina Roxana Buneci

    2016-12-01

    Full Text Available The purpose of this paper is to provide a set of Maple procedures to construct approximate developments of a general surface of revolution, generalizing the well-known gore method for the sphere.

  18. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in the approximation theory. The articles of this collection were originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory in the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  19. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with the help of a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.

  20. Public surveys at ESO

    Science.gov (United States)

    Arnaboldi, Magda; Delmotte, Nausicaa; Hilker, Michael; Hussain, Gaitee; Mascetti, Laura; Micol, Alberto; Petr-Gotzens, Monika; Rejkuba, Marina; Retzlaff, Jörg; Mieske, Steffen; Szeifert, Thomas; Ivison, Rob; Leibundgut, Bruno; Romaniello, Martino

    2016-07-01

    ESO has a strong mandate to survey the Southern Sky. In this article, we describe the ESO telescopes and instruments that are currently used for ESO Public Surveys, and the future plans of the community with the new wide-field-spectroscopic instruments. We summarize the ESO policies governing the management of these projects on behalf of the community. The on-going ESO Public Surveys and their science goals, their status of completion, and the new projects selected during the second ESO VISTA call in 2015/2016 are discussed. We then present the impact of these projects in terms of current numbers of refereed publications and the scientific data products published through the ESO Science Archive Facility by the survey teams, including the independent access and scientific use of the published survey data products by the astronomical community.

  1. Approximate Flavor Symmetry in Supersymmetric Model

    OpenAIRE

    Tao, Zhijian

    1998-01-01

    We investigate the maximal approximate flavor symmetry in the framework of the generic minimal supersymmetric standard model. We consider the low energy effective theory of the flavor physics with all the possible operators included. Spontaneous flavor symmetry breaking leads to the approximate flavor symmetry in the Yukawa sector and the supersymmetry breaking sector. Fermion mass and mixing hierarchies are the results of the hierarchy of the flavor symmetry breaking. It is found that in this theory i...

  2. Pointwise approximation by elementary complete contractions

    CERN Document Server

    Magajna, Bojan

    2009-01-01

    A complete contraction on a C*-algebra A, which preserves all closed two-sided ideals J, can be approximated pointwise by elementary complete contractions if and only if the induced map on the tensor product of B with A/J is contractive for every C*-algebra B, ideal J in A and C*-tensor norm on the tensor product. A lifting obstruction for such an approximation is also obtained.

  3. Polynomial approximation of functions in Sobolev spaces

    Science.gov (United States)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  4. Parallel local approximation MCMC for expensive models

    OpenAIRE

    Conrad, Patrick; Davis, Andrew; Marzouk, Youssef; Pillai, Natesh; Smith, Aaron

    2016-01-01

    Performing Bayesian inference via Markov chain Monte Carlo (MCMC) can be exceedingly expensive when posterior evaluations invoke the evaluation of a computationally expensive model, such as a system of partial differential equations. In recent work [Conrad et al. JASA 2015, arXiv:1402.1694] we described a framework for constructing and refining local approximations of such models during an MCMC simulation. These posterior--adapted approximations harness regularity of the model to reduce the c...

  5. The Actinide Transition Revisited by Gutzwiller Approximation

    Science.gov (United States)

    Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel

    2015-03-01

    We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates with the electron correlations in the 5f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.

  6. Intuitionistic Fuzzy Automaton for Approximate String Matching

    Directory of Open Access Journals (Sweden)

    K.M. Ravi

    2014-03-01

    Full Text Available This paper introduces an intuitionistic fuzzy automaton model for computing the similarity between pairs of strings. The model details the possible edit operations needed to transform any input (observed) string into a target (pattern) string by providing a membership and non-membership value between them. In the end, an algorithm is given for approximate string matching, and the proposed model computes the similarity and dissimilarity between the pair of strings, leading to better approximation.
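
The crisp core of approximate string matching is the Levenshtein edit distance; the automaton above augments each edit operation with membership/non-membership degrees, which this plain sketch does not model:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classical row-by-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (0 if match)
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))                # 3
```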

  7. Approximations for the Erlang Loss Function

    DEFF Research Database (Denmark)

    Mejlbro, Leif

    1998-01-01

    Theoretically, at least three formulae are needed for arbitrarily good approximations of the Erlang Loss Function. In the paper, for convenience, five formulae are presented guaranteeing a relative error < 1E-2, and methods are indicated for improving this bound.
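
As a reference point for such approximation formulae, the exact Erlang loss (Erlang B) probability follows from its standard stable recursion B(0) = 1, B(m) = aB(m−1) / (m + aB(m−1)):

```python
def erlang_b(a, m):
    """Exact blocking probability for offered load a (erlangs) and m servers."""
    b = 1.0                          # B(0)
    for k in range(1, m + 1):
        b = a * b / (k + a * b)      # stable recursion in the number of servers
    return b

print(erlang_b(10.0, 12))            # about 0.12 blocking probability
```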

  8. Staying Thermal with Hartree Ensemble Approximations

    CERN Document Server

    Salle, M; Vink, Jeroen C

    2000-01-01

    Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimension, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.

  9. Lattice quantum chromodynamics with approximately chiral fermions

    Energy Technology Data Exchange (ETDEWEB)

    Hierl, Dieter

    2008-05-15

    In this work we present Lattice QCD results obtained with approximately chiral fermions. We use the CI fermions in the quenched approximation to investigate the excited baryon spectrum and to search for the Θ⁺ pentaquark on the lattice. Furthermore we developed an algorithm for dynamical simulations using the FP action. Using FP fermions we calculate some LECs of chiral perturbation theory applying the epsilon expansion. (orig.)

  10. Nonlinear approximation in alpha-modulation spaces

    DEFF Research Database (Denmark)

    Borup, Lasse; Nielsen, Morten

    2006-01-01

    The α-modulation spaces are a family of spaces that contain the Besov and modulation spaces as special cases. In this paper we prove that brushlet bases can be constructed to form unconditional and even greedy bases for the α-modulation spaces. We study m-term nonlinear approximation with brushlet bases, and give complete characterizations of the associated approximation spaces in terms of α-modulation spaces.

  11. On surface approximation using developable surfaces

    DEFF Research Database (Denmark)

    Chen, H. Y.; Lee, I. K.; Leopoldseder, s.

    1999-01-01

    We introduce a method for approximating a given surface by a developable surface. It will be either a G(1) surface consisting of pieces of cones or cylinders of revolution or a G(r) NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding. (C) 1999 Academic Press.

  12. On surface approximation using developable surfaces

    DEFF Research Database (Denmark)

    Chen, H. Y.; Lee, I. K.; Leopoldseder, S.

    1998-01-01

    We introduce a method for approximating a given surface by a developable surface. It will be either a G_1 surface consisting of pieces of cones or cylinders of revolution or a G_r NURBS developable surface. Our algorithm will also deal properly with the problems of reverse engineering and produce robust approximation of given scattered data. The presented technique can be applied in computer aided manufacturing, e.g. in shipbuilding.

  13. Differential geometry of proteins. Helical approximations.

    Science.gov (United States)

    Louie, A H; Somorjai, R L

    1983-07-25

    We regard a protein molecule as a geometric object, and in a first approximation represent it as a regular parametrized space curve passing through its alpha-carbon atoms (the backbone). In an earlier paper we argued that the regular patterns of secondary structures of proteins (morphons) correspond to geodesics on minimal surfaces. In this paper we discuss methods of recognizing these morphons on space curves that represent the protein backbone conformation. The mathematical tool we employ is the differential geometry of curves and surfaces. We introduce a natural approximation of backbone space curves in terms of helical approximating elements and present a computer algorithm to implement the approximation. Simple recognition criteria are given for the various morphons of proteins. These are incorporated into our helical approximation algorithm, together with more non-local criteria for the recognition of beta-sheet topologies. The method and the algorithm are illustrated with several examples of representative proteins. Generalizations of the helical approximation method are considered and their possible implications for protein energetics are sketched.

  14. Public lighting.

    NARCIS (Netherlands)

    Schreuder, D.A.

    1986-01-01

    The function of public lighting and the relationship between public lighting and accidents are considered briefly as aspects of effective countermeasures. Research needs and recent developments in installation and operation are described. Public lighting is an efficient accident countermeasure, but...

  15. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using simultaneous perturbation gradient approximation for optimization under inequality constraints where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions...
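
The heart of the simultaneous perturbation gradient approximation: two loss evaluations per iteration estimate the whole gradient. The projection step for the inequality constraints is reduced here to a box clip, an illustrative stand-in for the algorithm analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def spsa_step(loss, theta, a=0.1, c=0.1):
    """One SPSA iteration: simultaneous perturbation gradient estimate plus projection."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
    ghat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c * delta)
    theta = theta - a * ghat
    return np.clip(theta, -5.0, 5.0)                      # toy projection onto constraints

theta = np.array([4.0, -4.0])
for _ in range(200):
    theta = spsa_step(lambda th: ((th - 1.0) ** 2).sum(), theta)
print(theta)                                              # near the minimizer [1, 1]
```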

  16. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    Science.gov (United States)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  17. Legendre-tau approximation for functional differential equations. III - Eigenvalue approximations and uniform stability

    Science.gov (United States)

    Ito, K.

    1985-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  18. Dimensionless parameters to summarize the influence of microbial growth and inhibition on the bioremediation of groundwater contaminants.

    Science.gov (United States)

    Mohamed, M; Hatfield, K

    2011-09-01

    Monod expressions are preferred over zero- and first-order decay expressions in modeling contaminants biotransformation in groundwater because they better represent complex conditions. However, the wide-range of values reported for Monod parameters suggests each case-study is unique. Such uniqueness restricts the usefulness of modeling, complicates an interpretation of natural attenuation and limits the utility of a bioattenuation assessment to a small number of similar cases. In this paper, four Monod-based dimensionless parameters are developed that summarize the effects of microbial growth and inhibition on groundwater contaminants. The four parameters represent the normalized effective microbial growth rate (η), the normalized critical contaminant/substrate concentration (S*), the critical contaminant/substrate inhibition factor (N), and the bioremediation efficacy (η*). These parameters enable contaminated site managers to assess natural attenuation or augmented bioremediation at multiple sites and then draw comparisons between disparate remediation activities, sites and target contaminants. Simulation results are presented that reveal the sensitivity of these dimensionless parameters to Monod parameters and varying electron donor/acceptor loads. These simulations also show the efficacy of attenuation (η*) varying over space and time. Results suggest electron donor/acceptor amendments maintained at relative concentrations S* between 0.5 and 1.5 produce the highest remediation efficiencies. Implementation of the developed parameters in a case study proves their usefulness.
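
A Monod-type rate with substrate inhibition (the Haldane/Andrews form) is the kind of expression these dimensionless parameters normalize; the parameter values below are purely illustrative:

```python
def monod_inhibited(S, mu_max=1.0, Ks=0.5, Ki=10.0):
    """Specific growth rate mu = mu_max * S / (Ks + S + S^2/Ki) with substrate inhibition."""
    return mu_max * S / (Ks + S + S**2 / Ki)

for S in (0.1, 0.5, 1.0, 5.0, 20.0):
    print(S, round(monod_inhibited(S), 3))   # rate rises, peaks near sqrt(Ks*Ki), then declines
```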

  19. Summarization of robot gait planning in RoboCup

    Institute of Scientific and Technical Information of China (English)

    马亮; 杨超; 操凤萍

    2016-01-01

    In this paper, the robot gait planning in RoboCup is summarized. On the basis of an introduction to the NAO model and robot kinematics, and through analysis of the NAO model used in RoboCup3D, a robot kinematic model is established and the trajectory of the robot's walking process is planned; according to the stability criterion for humanoids, constraint conditions are then added to the trajectory. The theories of Center of Gravity and Zero Moment Point (ZMP) are presented as well.

  20. Neutron and X-ray effects on small intestine summarized by using a mathematical model or paradigm

    Energy Technology Data Exchange (ETDEWEB)

    Carr, K.E.; McCullough, J.S.; Nunn, S. (Queen' s Univ., Belfast, Northern Ireland (United Kingdom). School of Biomedical Science/Anatomy); Hume, S.P. (Hammersmith Hospital, London (United Kingdom). M.R.C. Cyclotron Unit); Nelson, A.C. (Washington Univ., Seattle, WA (United States). Center for Bioengineering)

    1991-01-01

    The responses of intestinal tissues to ionizing radiation can be described by comparing irradiated cell populations qualitatively or quantitatively with corresponding controls. This paper describes quantitative data obtained from resin-embedded sections of neutron-irradiated mouse small intestine at different times after treatment. Information is collected by counting cells or structures present per complete circumference. The data are assessed by using standard statistical tests, which show that early mitotic arrest precedes changes in goblet, absorptive, endocrine and stromal cells and a decrease in crypt numbers. The data can also produce ratios of irradiated:control figures for cells or structural elements. These ratios, along with tissue area measurements, can be used to summarize the structural damage as a composite graph and table, including a total figure, known as the Morphological Index. This is used to quantify the temporal response of the wall as a whole and to compare the effects of different qualities of radiation, here X-ray and cyclotron-produced neutron radiations. It is possible that such analysis can be used predictively along with other reference data to identify the treatment, dose and time required to produce observed tissue damage. (author).

  1. Summarized Costs, Placement Of Quality Stars, And Other Online Displays Can Help Consumers Select High-Value Health Plans.

    Science.gov (United States)

    Greene, Jessica; Hibbard, Judith H; Sacks, Rebecca M

    2016-04-01

    Starting in 2017, all state and federal health insurance exchanges will present quality data on health plans in addition to cost information. We analyzed variations in the current design of information on state exchanges to identify presentation approaches that encourage consumers to take quality as well as cost into account when selecting a health plan. Using an online sample of 1,025 adults, we randomly assigned participants to view the same comparative information on health plans, displayed in different ways. We found that consumers were much more likely to select a high-value plan when cost information was summarized instead of detailed, when quality stars were displayed adjacent to cost information, when consumers understood that quality stars signified the quality of medical care, and when high-value plans were highlighted with a check mark or blue ribbon. These approaches, which were equally effective for participants with higher and lower numeracy, can inform the development of future displays of plan information in the exchanges.

  2. Tree-fold loop approximation of AMD

    Energy Technology Data Exchange (ETDEWEB)

    Ono, Akira [Tohoku Univ., Sendai (Japan). Faculty of Science

    1997-05-01

    AMD (antisymmetrized molecular dynamics) is a framework for describing the wave function of a nucleon many-body system by a Slater determinant of Gaussian wave packets, and a theory for describing in a unified way a wide range of nuclear reactions, such as intermediate-energy heavy ion reactions, nucleon-induced reactions, and so forth. The aim of this study is to derive an approximation to the expected value, ν, of the correlation that can be computed in time proportional to A³ (or less), and so make AMD applicable to heavier systems such as Au+Au. Since the characteristics of AMD must not be broken, only the ν-value is approximated. However, for this approximation to be meaningful, its error has to be sufficiently small in comparison with the bond energy of the atomic nucleus, i.e., smaller than 1 MeV/nucleon. As the absolute expected value of the correlation may be larger than 50 MeV/nucleon, the approximation is required to have a high accuracy, within 2 percent. (G.K.)

  3. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  4. Approximation of Bivariate Functions via Smooth Extensions

    Science.gov (United States)

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316

  5. Approximate Bayesian computation for machine learning, inverse problems and big data

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2017-06-01

    This paper summarizes my tutorial talk at the MaxEnt 2016 workshop. Starting from the basics of the Bayesian approach and a simple example of low dimensional parameter estimation where almost all the computations can be done easily, we move quickly to the high dimensional case. In those real-world cases, even for the simple case of a linear model with Gaussian prior, where the posterior law is also Gaussian, the cost of computing the posterior covariance becomes significant and requires approximate and fast algorithms. Different approximation methods for model comparison and model selection in machine learning problems are presented in summary. Among the existing methods, we mention the Laplace approximation, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and Variational Bayesian Approximation (VBA) methods. Finally, through two examples of inverse problems in imaging systems, X-ray and Diffraction wave Computed Tomography (CT), we show how to handle real high-dimensional problems.

  6. Discrete Ordinates Approximations to the First- and Second-Order Radiation Transport Equations

    CERN Document Server

    Fan, W C; Powell, J L

    2002-01-01

    The conventional discrete ordinates approximation to the Boltzmann transport equation can be described in a matrix form. Specifically, the within-group scattering integral can be represented by three components: a moment-to-discrete matrix, a scattering cross-section matrix and a discrete-to-moment matrix. Using and extending these entities, we derive and summarize the matrix representations of the second-order transport equations.

  7. Approximation Limits of Linear Programs (Beyond Hierarchies)

    CERN Document Server

    Braun, Gábor; Pokutta, Sebastian; Steurer, David

    2012-01-01

    We develop a framework for approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any linear program as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations for CLIQUE require linear programs of size 2^{n^\\Omega(eps)}. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.

  8. Discontinuous Galerkin Methods with Trefftz Approximation

    CERN Document Server

    Kretzschmar, Fritz; Tsukerman, Igor; Weiland, Thomas

    2013-01-01

    We present a novel Discontinuous Galerkin Finite Element Method for wave propagation problems. The method employs space-time Trefftz-type basis functions that satisfy the underlying partial differential equations and the respective interface boundary conditions exactly in an element-wise fashion. The basis functions can be of arbitrary high order, and we demonstrate spectral convergence in the $L_2$-norm. In this context, spectral convergence is obtained with respect to the approximation error in the entire space-time domain of interest, i.e. in space and time simultaneously. Formulating the approximation in terms of a space-time Trefftz basis makes high order time integration an inherent property of the method and clearly sets it apart from methods that employ a high order approximation in space only.

  9. Approximating light rays in the Schwarzschild field

    CERN Document Server

    Semerak, Oldrich

    2014-01-01

    A short formula is suggested which approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely with those following from exact formulas for small $M$, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behaviour is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable--and very accurate--for practical solving of the ray-deflection exercise.

  10. On the approximate zero of Newton method

    Institute of Scientific and Technical Information of China (English)

    黄正达

    2003-01-01

    A judgment criterion guaranteeing that a point is a Chen's approximate zero of Newton's method for solving a nonlinear equation is sought by dominating sequence techniques. The criterion is based on the fact that the dominating function may have only one simple positive zero, assuming that the operator is weak Lipschitz continuous, which is much more relaxed and can be checked much more easily than Lipschitz continuity in practice. It is demonstrated that a Chen's approximate zero may not be a Smale's approximate zero. The error estimate obtained indicates the convergence order when |f(x)| < ε is used to stop computation in software. The result can also be applied to solving partial derivative and integration equations.
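
Newton's method with the |f(x)| < ε stopping rule whose consequences the paper analyzes; the test function is illustrative:

```python
def newton(f, df, x0, eps=1e-10, max_iter=50):
    """Newton iteration, stopping when the residual |f(x)| falls below eps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:            # the stopping criterion discussed above
            return x
        x -= fx / df(x)              # Newton step
    return x

print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))   # close to sqrt(2)
```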

  12. Optical pulse propagation with minimal approximations

    Science.gov (United States)

    Kinsler, Paul

    2010-01-01

    Propagation equations for optical pulses are needed to assist in describing applications in ever more extreme situations—including those in metamaterials with linear and nonlinear magnetic responses. Here I show how to derive a single first-order propagation equation using a minimum of approximations and a straightforward “factorization” mathematical scheme. The approach generates exact coupled bidirectional equations, after which it is clear that the description can be reduced to a single unidirectional first-order wave equation by means of a simple “slow evolution” approximation, where the optical pulse changes little over the distance of one wavelength. It also allows a direct term-to-term comparison of an exact bidirectional theory with the approximate unidirectional theory.

  13. Rough interfaces beyond the Gaussian approximation

    CERN Document Server

    Caselle, M; Fiore, R; Gliozzi, F; Hasenbusch, M; Pinn, K; Vinti, S

    1994-01-01

    We compare predictions of the Capillary Wave Model beyond its Gaussian approximation with Monte Carlo results for the energy gap and the surface energy of the 3D Ising model in the scaling region. Our study reveals that the finite size effects of these quantities are well described by the Capillary Wave Model, expanded to two-loop order (one order beyond the Gaussian approximation).

  14. Implementing regularization implicitly via approximate eigenvector computation

    CERN Document Server

    Mahoney, Michael W

    2010-01-01

    Regularization is a powerful technique for extracting useful information from noisy data. Typically, it is implemented by adding some sort of norm constraint to an objective function and then exactly optimizing the modified objective function. This procedure typically leads to optimization problems that are computationally more expensive than the original problem, a fact that is clearly problematic if one is interested in large-scale applications. On the other hand, a large body of empirical work has demonstrated that heuristics, and in some cases approximation algorithms, developed to speed up computations sometimes have the side-effect of performing regularization implicitly. Thus, we consider the question: What is the regularized optimization objective that an approximation algorithm is exactly optimizing? We address this question in the context of computing approximations to the smallest nontrivial eigenvector of a graph Laplacian; and we consider three random-walk-based procedures: one based on the heat ...

  15. On approximation of Markov binomial distributions

    CERN Document Server

    Xia, Aihua; 10.3150/09-BEJ194

    2010-01-01

    For a Markov chain $\mathbf{X}=\{X_i, i=1,2,\ldots,n\}$ with the state space $\{0,1\}$, the random variable $S:=\sum_{i=1}^n X_i$ is said to follow a Markov binomial distribution. The exact distribution of $S$, denoted $\mathcal{L}S$, is very computationally intensive for large $n$ (see Gabriel [Biometrika 46 (1959) 454--460] and Bhat and Lal [Adv. in Appl. Probab. 20 (1988) 677--680]) and this paper concerns suitable approximate distributions for $\mathcal{L}S$ when $\mathbf{X}$ is stationary. We conclude that the negative binomial and binomial distributions are appropriate approximations for $\mathcal{L}S$ when $\operatorname{Var} S$ is greater than and less than $\mathbb{E}S$, respectively. Also, due to the unique structure of the distribution, we are able to derive explicit error estimates for these approximations.

  16. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies of Huckle and Grote and of Chow and Saad showed that sparse approximate inverses could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  17. Numerical approximation of partial differential equations

    CERN Document Server

    Bartels, Sören

    2016-01-01

    Finite element methods for approximating partial differential equations have reached a high degree of maturity, and are an indispensable tool in science and technology. This textbook aims at providing a thorough introduction to the construction, analysis, and implementation of finite element methods for model problems arising in continuum mechanics. The first part of the book discusses elementary properties of linear partial differential equations along with their basic numerical approximation, the functional-analytical framework for rigorously establishing existence of solutions, and the construction and analysis of basic finite element methods. The second part is devoted to the optimal adaptive approximation of singularities and the fast iterative solution of linear systems of equations arising from finite element discretizations. In the third part, the mathematical framework for analyzing and discretizing saddle-point problems is formulated, corresponding finite element methods are analyzed, and particular ...

  18. On Born approximation in black hole scattering

    Energy Technology Data Exchange (ETDEWEB)

    Batic, D. [University of West Indies, Department of Mathematics, Kingston (Jamaica); Kelkar, N.G.; Nowakowski, M. [Universidad de los Andes, Departamento de Fisica, Bogota (Colombia)

    2011-12-15

    A massless field propagating on spherically symmetric black hole metrics such as the Schwarzschild, Reissner-Nordstroem and Reissner-Nordstroem-de Sitter backgrounds is considered. In particular, explicit formulae in terms of transcendental functions for the scattering of massless scalar particles off black holes are derived within a Born approximation. It is shown that the conditions on the existence of the Born integral forbid a straightforward extraction of the quasinormal modes using the Born approximation for the scattering amplitude. Such a method has been used in the literature. We suggest a novel, well-defined method to extract the large imaginary part of quasinormal modes via the Coulomb-like phase shift. Furthermore, we compare the numerically evaluated exact scattering amplitude with the Born one to find that the approximation is not very useful for the scattering of massless scalar, electromagnetic as well as gravitational waves from black holes. (orig.)

  19. Time Stamps for Fixed-Point Approximation

    DEFF Research Database (Denmark)

    Damian, Daniela

    2001-01-01

    Time stamps were introduced in Shivers's PhD thesis for approximating the result of a control-flow analysis. We show them to be suitable for computing program analyses where the space of results (e.g., control-flow graphs) is large. We formalize time-stamping as a top-down, fixed-point approximation algorithm which maintains a single copy of intermediate results. We then prove the correctness of this algorithm.

  20. Two Courses in Public Relations.

    Science.gov (United States)

    Phifer, Gregg; Gee, Gerry

    At Florida State University in Tallahassee, 30 students a year are enrolled in the public relations major, beginning with the junior year, so that at any one time there are approximately 60 majors, all of whom have at least a B average. The basic course--PUR 3000, Introduction to Public Relations--enrolls over 150 students a semester in two…

  1. Exponential Approximations Using Fourier Series Partial Sums

    Science.gov (United States)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, $2\pi$-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within $O(N^{-M-2})$, and the associated jump of the $k$-th derivative of f is approximated to within $O(N^{-M-1+k})$, as N approaches infinity, and that the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
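
    The first step, recovering jump data from Fourier coefficients, can be illustrated with a much cruder device than the paper's least-squares fit. For a sawtooth with a single jump [f] at x0 = 0, the sine coefficients satisfy b_k ~ [f]/(pi k), so pi*k*b_k estimates the jump; the code below is our toy illustration of that classical fact, not the paper's method.

    ```python
    # Toy illustration (ours): estimate a jump magnitude from the asymptotics
    # of the Fourier sine coefficients of a sampled sawtooth.
    import numpy as np

    n = 4096
    x = 2 * np.pi * np.arange(n) / n
    f = (np.pi - x) / 2                 # sawtooth on (0, 2*pi); jump of pi at 0
    c = np.fft.rfft(f) / n              # one-sided normalized FFT coefficients
    b = -2 * c.imag                     # sine-series coefficients b_k
    for k in (1, 2, 4, 8, 16):
        print(k, np.pi * k * b[k])      # approaches the jump magnitude pi
    ```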

  2. Public Broadcasting.

    Science.gov (United States)

    Shooshan, Harry M.; Arnheim, Louise

    This paper, the second in a series exploring future options for public policy in the communications and information arenas, examines some of the issues underlying public broadcasting, primarily public television. It advances two reasons why quality local public television programming is scarce: funds for the original production of programming have…

  3. Extending the Eikonal Approximation to Low Energy

    CERN Document Server

    Capel, Pierre; Ogata, Kazuyuki

    2014-01-01

    E-CDCC and DEA, two eikonal-based reaction models, are compared to CDCC at low energy (e.g. 20A MeV) to study their behaviour in the regime where the eikonal approximation is supposed to fail. We confirm that these models lack the Coulomb deflection of the projectile by the target. We show that a hybrid model, built on the CDCC framework at low angular momenta and the eikonal approximation at larger angular momenta, gives perfect agreement with CDCC. An empirical shift in impact parameter can also be used reliably to simulate this missing Coulomb deflection.

  4. Approximately-Balanced Drilling in Daqing Oilfield

    Institute of Scientific and Technical Information of China (English)

    Xia Bairu; Zheng Xiuhua; Li Guoqing; Tian Tuo

    2004-01-01

    The Daqing oilfield is a multilayered heterogeneous oil field where pressures differ within the same vertical profile, causing many difficulties for adjustment-well drilling. The approximately-balanced drilling technique has been developed and proved to be efficient and successful in the Daqing oilfield. This paper discusses the application of the approximately-balanced drilling technique under the condition of multilayered pressures in the Daqing oilfield, including the prediction of formation pressure, the pressure discharge technique for the drilling well, and the control of the density of the drilling fluid.

  5. Faddeev Random Phase Approximation for Molecules

    CERN Document Server

    Degroote, Matthias; Barbieri, Carlo

    2010-01-01

    The Faddeev Random Phase Approximation is a Green's function technique that makes use of Faddeev-equations to couple the motion of a single electron to the two-particle--one-hole and two-hole--one-particle excitations. This method goes beyond the frequently used third-order Algebraic Diagrammatic Construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are described at the level of the Random Phase Approximation. This paper presents the first results for diatomic molecules at equilibrium geometry. The behavior of the method in the dissociation limit is also investigated.

  6. An Approximate Bayesian Fundamental Frequency Estimator

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2012-01-01

    Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm of these quantities using Bayesian inference. The inference about the fundamental frequency and the model order is based on a probability model which corresponds to a minimum of prior information. From this probability model, we give the exact posterior distributions on the fundamental frequency and the model order, and we also present analytical approximations of these distributions which lower ...

  7. Approximate Controllability of Fractional Integrodifferential Evolution Equations

    Directory of Open Access Journals (Sweden)

    R. Ganesh

    2013-01-01

    This paper addresses the issue of approximate controllability for a class of control systems represented by nonlinear fractional integrodifferential equations with nonlocal conditions. By using semigroup theory, p-mean continuity and fractional calculus, a set of sufficient conditions is formulated and proved for the nonlinear fractional control systems. More precisely, the results are established under the assumption that the corresponding linear system is approximately controllable and that the functions satisfy non-Lipschitz conditions. The results generalize and improve some known results.

  8. Excluded-Volume Approximation for Supernova Matter

    CERN Document Server

    Yudin, A V

    2014-01-01

    A general scheme of the excluded-volume approximation, as applied to multicomponent systems with an arbitrary degree of degeneracy, has been developed. The scheme also allows for additional interactions between the components of a system. A specific form of the excluded-volume approximation for investigating supernova matter at subnuclear densities has been found by comparison with the hard-sphere model. The possibility of describing the phase transition to uniform nuclear matter in terms of the formalism under consideration is discussed.

  9. Generalized companion matrix for approximate GCD

    CERN Document Server

    Boito, Paola

    2011-01-01

    We study a variant of the univariate approximate GCD problem, where the coefficients of one polynomial f(x) are known exactly, whereas the coefficients of the second polynomial g(x) may be perturbed. Our approach relies on the properties of the matrix which describes the operator of multiplication by g in the quotient ring C[x]/(f). In particular, the structure of the null space of the multiplication matrix contains all the essential information about GCD(f, g). Moreover, the multiplication matrix exhibits a displacement structure that allows us to design a fast algorithm for approximate GCD computation with quadratic complexity w.r.t. the polynomial degrees.
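
    The key property is easy to verify numerically: the multiplication-by-g matrix is g evaluated at the companion matrix of f, and its rank deficiency equals the degree of GCD(f, g). The sketch below is ours and ignores the paper's displacement structure and fast algorithm; it simply checks the rank statement on a small example.

    ```python
    # Small sketch (ours) of the linear algebra behind approximate GCD via the
    # multiplication matrix in C[x]/(f).
    import numpy as np

    def companion(f):
        """Companion matrix of a monic polynomial (coefficients, highest first)."""
        f = np.asarray(f, dtype=float) / f[0]
        d = len(f) - 1
        C = np.zeros((d, d))
        C[1:, :-1] = np.eye(d - 1)
        C[:, -1] = -f[:0:-1]        # negated low-order coefficients a_0..a_{d-1}
        return C

    def polyval_matrix(p, A):
        """Evaluate polynomial p (highest degree first) at a square matrix A."""
        R = np.zeros_like(A)
        for c in p:
            R = R @ A + c * np.eye(A.shape[0])   # Horner's scheme with matrices
        return R

    f = np.polymul([1, -1], [1, -2])   # f = (x - 1)(x - 2)
    g = np.polymul([1, -1], [1, -3])   # g = (x - 1)(x - 3); GCD(f, g) = x - 1
    Mg = polyval_matrix(g, companion(f))
    sigma = np.linalg.svd(Mg, compute_uv=False)
    deg_gcd = int(np.sum(sigma < 1e-8 * sigma[0]))   # numerical rank deficiency
    print("degree of approximate GCD:", deg_gcd)     # -> 1
    ```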

  10. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach ...

  11. Static correlation beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian Sommer

    2014-01-01

    We investigate various approximations to the correlation energy of a H2 molecule in the dissociation limit, where the ground state is poorly described by a single Slater determinant. The correlation energies are derived from the density response function, and it is shown that response functions derived from Hedin's equations (Random Phase Approximation (RPA), Time-Dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly ...

  12. Approximate formulas for moderately small eikonal amplitudes

    Science.gov (United States)

    Kisselev, A. V.

    2016-08-01

    We consider the eikonal approximation for moderately small scattering amplitudes. To find numerical estimates of these approximations, we derive formulas that contain no Bessel functions and consequently no rapidly oscillating integrands. To obtain these formulas, we study improper integrals of the first kind containing products of the Bessel functions $J_0(z)$. We generalize the expression with four functions $J_0(z)$ and also find expressions for the integrals with the product of five and six Bessel functions. We generalize a known formula for the improper integral with two functions $J_\nu(az)$ to the case with noninteger $\nu$ and complex $a$.
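
    The practical difficulty that such formulas avoid can be seen directly. In the sketch below (our demo, not the paper's derivation), the textbook integral of J0 over the half-line, whose exact value is 1, is computed by naive quadrature between consecutive zeros of J0, and a crude averaging of partial sums is needed to tame the oscillation.

    ```python
    # Demo (ours): naive quadrature of an oscillating Bessel integrand converges
    # slowly; integrating between zeros of J0 and averaging partial sums helps.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, jn_zeros

    zeros = np.concatenate(([0.0], jn_zeros(0, 60)))   # 0 plus 60 zeros of J0
    pieces = [quad(j0, a, b)[0] for a, b in zip(zeros[:-1], zeros[1:])]
    partial = np.cumsum(pieces)                        # alternating partial sums
    accelerated = 0.5 * (partial[-1] + partial[-2])    # crude Euler averaging
    print("raw:", partial[-1], "accelerated:", accelerated, "exact:", 1.0)
    ```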

  13. The exact renormalization group and approximate solutions

    CERN Document Server

    Morris, T R

    1994-01-01

    We investigate the structure of Polchinski's formulation of the flow equations for the continuum Wilson effective action. Reinterpretations in terms of IR cutoff Green's functions are given. A promising non-perturbative approximation scheme is derived by carefully taking the sharp cutoff limit and expanding in the 'irrelevancy' of operators. We illustrate with two simple models of four-dimensional $\lambda\varphi^4$ theory: the cactus approximation, and a model incorporating the first irrelevant correction to the renormalized coupling. The qualitative and quantitative behaviour give confidence in a fuller use of this method for obtaining accurate results.

  14. Approximating W projection as a separable kernel

    Science.gov (United States)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
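
    The notion of a separable approximation has a convenient linear-algebra form: the best separable (rank-1) approximation of a sampled 2D kernel is given by its leading singular triple. The sketch below is ours and uses a Gaussian stand-in rather than an actual W kernel.

    ```python
    # Generic illustration (ours, not the paper's W kernel): test a sampled 2D
    # kernel for separability via the SVD; a rank-1 truncation is separable.
    import numpy as np

    x = np.linspace(-1, 1, 65)
    X, Y = np.meshgrid(x, x)
    K = np.exp(-(X**2 + Y**2))            # Gaussian: exactly separable
    U, s, Vt = np.linalg.svd(K)
    K1 = s[0] * np.outer(U[:, 0], Vt[0])  # best separable (rank-1) approximation
    print("relative error:", np.linalg.norm(K1 - K) / np.linalg.norm(K))
    # Near machine precision here, since exp(-x^2-y^2) = exp(-x^2) * exp(-y^2);
    # for a genuine W kernel, the ratio s[1]/s[0] measures how nearly separable
    # the kernel is.
    ```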

  15. BEST APPROXIMATION BY DOWNWARD SETS WITH APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    H.Mohebi; A. M. Rubinov

    2006-01-01

    We develop a theory of downward sets for a class of normed ordered spaces. We study best approximation in a normed ordered space X by elements of downward sets, and give necessary and sufficient conditions for any element of best approximation by a closed downward subset of X. We also characterize strictly downward subsets of X, and prove that a downward subset of X is strictly downward if and only if each of its boundary points is Chebyshev. The results obtained are used for the examination of some Chebyshev pairs (W, x), where x ∈ X and W is a closed downward subset of X.

  16. Local density approximations from finite systems

    CERN Document Server

    Entwistle, Mike; Wetherell, Jack; Longstaff, Bradley; Ramsden, James; Godby, Rex

    2016-01-01

    The local density approximation (LDA) constructed through quantum Monte Carlo calculations of the homogeneous electron gas (HEG) is the most common approximation to the exchange-correlation functional in density functional theory. We introduce an alternative set of LDAs constructed from slab-like systems of one, two and three electrons that resemble the HEG within a finite region, and illustrate the concept in one dimension. Comparing with the exact densities and Kohn-Sham potentials for various test systems, we find that the LDAs give a good account of the self-interaction correction, but are less reliable when correlation is stronger or currents flow.

  17. Summarizing quality management of energy-saving retrofit of existing buildings

    Institute of Scientific and Technical Information of China (English)

    焦江辉; 郭汉丁; 师旭燕; 韩新娜

    2011-01-01

    Energy-saving retrofit of existing buildings is an inevitable requirement of a low-carbon economy and sustainable development. Because theoretical and practical experience is still insufficient, the quality risk of energy-saving retrofit projects for existing buildings is relatively large. This paper reviews China's engineering quality management system and its practical evolution from three perspectives: government supervision of construction quality, supervision by professional institutions and society, and quality assurance by the parties to construction. It then summarizes the state of domestic theoretical research on engineering quality management from the viewpoints of process control methods, knowledge bases, management concepts, information technology and quality evaluation. By analyzing the inherent characteristics of quality formation in energy-saving retrofit projects for existing buildings, the paper seeks research directions for the quality management of such retrofits, with the aim of promoting the healthy development of energy-saving retrofits of existing public buildings in China.

  18. A summary of the function and preparation of miraculin

    Institute of Scientific and Technical Information of China (English)

    刘祺; 马雪梅; 赵鹏翔

    2012-01-01

    Miraculin (MIR) is a taste-modifying glycoprotein first discovered in the miracle fruit (Synsepalum dulcificum). Miraculin consists of 191 amino acids, and its main active forms are dimers and tetramers, in which His-30 and His-60 are the active centers; Cys also plays a role in its structural formation and folding to some degree. The glycosylation sites of miraculin are Asn-42 and Asn-186. Miraculin itself does not taste sweet, and at neutral pH it shows no activity, but under acidic conditions it changes the taste of food from sour to sweet, an effect that may last for 1-2 h. In addition, miraculin can increase the sensitivity of diabetic animals to insulin. The latest preparation methods for miraculin are direct extraction from Synsepalum dulcificum and expression in transgenic plants or microorganisms. In this paper, we summarize recent research on the structure, function and preparation of miraculin.

  19. Rational approximations and quantum algorithms with postselection

    NARCIS (Netherlands)

    Mahadev, U.; de Wolf, R.

    2015-01-01

    We study the close connection between rational functions that approximate a given Boolean function, and quantum algorithms that compute the same function using postselection. We show that the minimal degree of the former equals (up to a factor of 2) the minimal query complexity of the latter. We give ...

  20. Kravchuk functions for the finite oscillator approximation

    Science.gov (United States)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
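
    A hedged sketch of the construction: orthonormalizing monomials against the symmetric binomial weight on {0, ..., N} produces, up to normalization conventions, the Kravchuk polynomials for p = 1/2, and multiplying by the square root of the weight then gives the Kravchuk functions. The code below is ours and simply verifies the weighted orthonormality.

    ```python
    # Sketch (ours): build discrete orthonormal polynomials on x = 0..N with the
    # symmetric binomial weight by Gram-Schmidt; these match the Kravchuk
    # polynomials for p = 1/2 up to signs and normalization.
    import numpy as np
    from math import comb

    N = 16
    x = np.arange(N + 1, dtype=float)
    w = np.array([comb(N, k) for k in range(N + 1)], dtype=float) / 2**N

    def gram_schmidt(vectors, w):
        """Orthonormalize vectors with respect to the weighted inner product."""
        basis = []
        for v in vectors:
            for b in basis:
                v = v - np.sum(w * v * b) * b
            basis.append(v / np.sqrt(np.sum(w * v * v)))
        return np.array(basis)

    K = gram_schmidt([x**n for n in range(4)], w)
    G = (K * w) @ K.T                 # Gram matrix under the binomial weight
    print(np.allclose(G, np.eye(4)))  # True: orthonormal
    ```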

  1. Optical bistability without the rotating wave approximation

    Energy Technology Data Exchange (ETDEWEB)

    Sharaby, Yasser A., E-mail: Yasser_Sharaby@hotmail.co [Physics Department, Faculty of Applied Sciences, Suez Canal University, Suez (Egypt); Joshi, Amitabh, E-mail: ajoshi@eiu.ed [Department of Physics, Eastern Illinois University, Charleston, IL 61920 (United States); Hassan, Shoukry S., E-mail: Shoukryhassan@hotmail.co [Mathematics Department, College of Science, University of Bahrain, P.O. Box 32038 (Bahrain)

    2010-04-26

    Optical bistability for a two-level atomic system in a ring cavity is investigated outside the rotating wave approximation (RWA), using non-autonomous Maxwell-Bloch equations with a Fourier decomposition up to the first harmonic. The first-harmonic output field component exhibits reversed or closed-loop bistability simultaneously with the usual (anti-clockwise) bistability in the fundamental field component.

  2. Improved Approximations for Multiprocessor Scheduling Under Uncertainty

    CERN Document Server

    Crutchfield, Christopher; Fineman, Jeremy T; Karger, David R; Scott, Jacob

    2008-01-01

    This paper presents improved approximation algorithms for the problem of multiprocessor scheduling under uncertainty, or SUU, in which the execution of each job may fail probabilistically. This problem is motivated by the increasing use of distributed computing to handle large, computationally intensive tasks. In the SUU problem we are given n unit-length jobs and m machines, a directed acyclic graph G of precedence constraints among jobs, and unrelated failure probabilities q_{ij} for each job j when executed on machine i for a single timestep. Our goal is to find a schedule that minimizes the expected makespan, which is the expected time at which all jobs complete. Lin and Rajaraman gave the first approximations for this NP-hard problem for the special cases of independent jobs, precedence constraints forming disjoint chains, and precedence constraints forming trees. In this paper, we present asymptotically better approximation algorithms. In particular, we give an O(log log min(m,n))-approximation for indep ...

  3. Markov operators, positive semigroups and approximation processes

    CERN Document Server

    Altomare, Francesco; Leonessa, Vita; Rasa, Ioan

    2015-01-01

    In recent years several investigations have been devoted to the study of large classes of (mainly degenerate) initial-boundary value evolution problems, in connection with the possibility of obtaining a constructive approximation of the associated positive C_0-semigroups. In this research monograph we present the main lines of a theory which finds its roots in the above-mentioned research field.

  4. Image Compression Via a Fast DCT Approximation

    NARCIS (Netherlands)

    Bayer, F. M.; Cintra, R. J.

    2010-01-01

    Discrete transforms play an important role in digital signal processing. In particular, due to its transform-domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based ...

  5. Approximation algorithms for planning and control

    Science.gov (United States)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  6. Large hierarchies from approximate R symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Kappl, Rolf; Ratz, Michael [Technische Univ. Muenchen, Garching (Germany). Physik Dept. T30; Nilles, Hans Peter [Bonn Univ. (Germany). Bethe Zentrum fuer Theoretische Physik und Physikalisches Inst.; Ramos-Sanchez, Saul; Schmidt-Hoberg, Kai [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Vaudrevange, Patrick K.S. [Ludwig-Maximilians-Univ. Muenchen (Germany). Arnold Sommerfeld Zentrum fuer Theoretische Physik

    2008-12-15

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales. (orig.)

  7. Strong washout approximation to resonant leptogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Garbrecht, Bjoern; Gautier, Florian; Klaric, Juraj [Physik Department T70, James-Franck-Strasse, Techniche Universitaet Muenchen, 85748 Garching (Germany)

    2016-07-01

    We study resonant leptogenesis with two sterile neutrinos with masses $M_1$ and $M_2$, Yukawa couplings $Y_1$ and $Y_2$, and a single active flavor. Specifically, we focus on the strong washout regime, where the decay width dominates the mass splitting of the two sterile neutrinos. We show that one can approximate the effective decay asymmetry by its late-time limit $\varepsilon = X \sin(2\phi)/(X^2 + \sin^2\phi)$, where $X = 8\pi\Delta/(|Y_1|^2 + |Y_2|^2)$, $\Delta = 4(M_1 - M_2)/(M_1 + M_2)$ and $\phi = \arg(Y_2/Y_1)$, and we establish criteria for the validity of this approximation. We compare the approximate results with numerical ones, obtained by solving for the mixing and oscillations of the sterile neutrinos. We generalize the formula to the case of several active flavors, and demonstrate how it can be used to calculate the lepton asymmetry in phenomenological scenarios which are in agreement with the neutrino oscillation data. We find that using the late-time limit is an applicable approximation throughout the phenomenologically viable parameter space.
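
    The quoted late-time limit is a closed-form expression and can be evaluated directly; the sketch below does exactly that, with parameter values that are illustrative only and not taken from the paper.

    ```python
    # Direct evaluation of the late-time decay asymmetry quoted in the abstract.
    import numpy as np

    def late_time_asymmetry(Y1, Y2, M1, M2):
        """epsilon = X sin(2 phi) / (X^2 + sin^2 phi), with X, Delta, phi
        defined as in the abstract above."""
        delta = 4 * (M1 - M2) / (M1 + M2)
        X = 8 * np.pi * delta / (abs(Y1)**2 + abs(Y2)**2)
        phi = np.angle(Y2 / Y1)
        return X * np.sin(2 * phi) / (X**2 + np.sin(phi)**2)

    # Nearly degenerate masses and an O(1) relative Yukawa phase (made-up values):
    print(late_time_asymmetry(Y1=1e-4, Y2=1e-4 * np.exp(0.7j), M1=1.001, M2=1.000))
    ```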

  8. Lower Bound Approximation for Elastic Buckling Loads

    NARCIS (Netherlands)

    Vrouwenvelder, A.; Witteveen, J.

    1975-01-01

    An approximate method for the elastic buckling analysis of two-dimensional frames is introduced. The method can conveniently be explained with reference to a physical interpretation: in the frame, every member is replaced by two new members: a flexural member without extensional rigidity to transmit ...

  9. Approximate Equilibrium Problems and Fixed Points

    Directory of Open Access Journals (Sweden)

    H. Mazaheri

    2013-01-01

    We find a common element of the set of fixed points of a map and the set of solutions of an approximate equilibrium problem in a Hilbert space. Then, we show that one of the sequences converges weakly. We also obtain some theorems about equilibrium problems and fixed points.

  10. Approximations in diagnosis: motivations and techniques

    NARCIS (Netherlands)

    Harmelen, van F.A.H.; Teije, A. ten

    1995-01-01

    We argue that diagnosis should not be seen as solving a problem with a unique definition, but rather that there exists a whole space of reasonable notions of diagnosis. These notions can be seen as mutual approximations. We present a number of reasons for choosing among different notions of diagnosis ...

  11. Eignets for function approximation on manifolds

    CERN Document Server

    Mhaskar, H N

    2009-01-01

    Let $\mathbb{X}$ be a compact, smooth, connected, Riemannian manifold without boundary, and $G:\mathbb{X}\times\mathbb{X}\to\mathbb{R}$ be a kernel. Analogous to a radial basis function network, an eignet is an expression of the form $\sum_{j=1}^M a_j G(\circ,y_j)$, where $a_j\in\mathbb{R}$, $y_j\in\mathbb{X}$, $1\le j\le M$. We describe a deterministic, universal algorithm for constructing an eignet for approximating functions in $L^p(\mu;\mathbb{X})$ for a general class of measures $\mu$ and kernels $G$. Our algorithm yields linear operators. Using the minimal separation amongst the centers $y_j$ as the cost of approximation, we give modulus of smoothness estimates for the degree of approximation by our eignets, and show by means of a converse theorem that these are the best possible for every individual function. We also give estimates on the coefficients $a_j$ in terms of the norm of the eignet. Finally, we demonstrate that if any sequence of eignets satisfies the optimal estimates for the degree of approximation of a smooth function, measured in ter ...

  13. Empirical progress and nomic truth approximation revisited

    NARCIS (Netherlands)

    Kuipers, Theodorus

    2014-01-01

    In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of ...

  14. Faddeev Random Phase Approximation applied to molecules

    CERN Document Server

    Degroote, Matthias

    2012-01-01

    This Ph.D. thesis derives the equations of the Faddeev Random Phase Approximation (FRPA) and applies the method to a set of small atoms and molecules. The occurrence of RPA instabilities in the dissociation limit is addressed in molecules and through the study of the Hubbard molecule as a test system with reduced dimensionality.

  15. Auction analysis by normal form game approximation

    NARCIS (Netherlands)

    Kaisers, Michael; Tuyls, Karl; Thuijsman, Frank; Parsons, Simon

    2008-01-01

    Auctions are pervasive in today's society and provide a variety of real markets. This article facilitates a strategic choice between a set of available trading strategies by introducing a methodology to approximate heuristic payoff tables by normal form games. An example from the auction domain ...

  16. Fostering Formal Commutativity Knowledge with Approximate Arithmetic.

    Directory of Open Access Journals (Sweden)

    Sonja Maria Hansen

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school.

  17. Approximate fixed point of Reich operator

    Directory of Open Access Journals (Sweden)

    M. Saha

    2013-01-01

    In the present paper, we study the existence of approximate fixed points for the Reich operator, together with the property that the ε-fixed points are concentrated in a set whose diameter tends to zero as ε → 0.

  18. Approximation of Aggregate Losses Using Simulation

    Directory of Open Access Journals (Sweden)

    Mohamed A. Mohamed

    2010-01-01

    Problem statement: The modeling of aggregate losses is one of the main objectives in actuarial theory and practice, especially in the process of making important business decisions regarding various aspects of insurance contracts. The aggregate losses over a fixed time period are often modeled by mixing the distributions of loss frequency and severity, whereby the distribution resulting from this approach is called a compound distribution. However, in many cases, realistic probability distributions for loss frequency and severity cannot be combined mathematically to derive the compound distribution of aggregate losses. Approach: This study aimed to approximate the aggregate loss distribution using a simulation approach. In particular, the approximation of aggregate losses was based on a compound Poisson-Pareto distribution. The effects of deductible and policy limit on the individual loss as well as the aggregate losses were also investigated. Results: Based on the results, the approximation of the compound Poisson-Pareto distribution via the simulation approach agreed with the theoretical mean and variance of each of the loss frequency, loss severity and aggregate losses. Conclusion: This study approximated the compound distribution of aggregate losses using a simulation approach. The investigation of retained losses and insurance claims allows an insured or a company to select an insurance contract that fulfills its requirements. In particular, if a company wants additional risk reduction, it can compare alternative policies by considering the worthiness of the additional expected total cost, which can be estimated via the simulation approach.
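
    A minimal version of the described simulation approach might look as follows; this is our sketch, and the parameter values and the per-loss treatment of deductible and limit are assumptions, not the paper's.

    ```python
    # Monte Carlo sketch (ours) of a compound Poisson-Pareto aggregate loss with
    # a deductible and a policy limit applied to each individual loss.
    import numpy as np

    rng = np.random.default_rng(42)
    lam, alpha, theta = 5.0, 3.0, 100.0        # Poisson rate; Pareto shape/scale
    deductible, limit = 50.0, 500.0            # applied per individual loss

    def aggregate_losses(reps=100_000):
        totals = np.empty(reps)
        for r in range(reps):
            n = rng.poisson(lam)                                  # loss count
            x = theta * (rng.random(n) ** (-1.0 / alpha) - 1.0)   # Pareto II draws
            retained = np.clip(x - deductible, 0.0, limit)        # insurer's share
            totals[r] = retained.sum()
        return totals

    S = aggregate_losses()
    print(f"mean {S.mean():.1f}, std {S.std():.1f}, "
          f"P(S > 500) = {(S > 500).mean():.4f}")
    ```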

  19. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
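
    The mechanism behind the log-linear cost can be illustrated in a few lines: for well-separated point sets, the off-diagonal block of an exponential (Matérn-1/2) covariance is numerically low rank, which is what the H-matrix format exploits. The sketch below is our small 2D toy, not the paper's H-matrix code, and the length scale is illustrative.

    ```python
    # Toy sketch (ours): measure the numerical rank of an off-diagonal block of
    # a Matern-1/2 (exponential) covariance between two separated point clusters.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    left = pts[:100] * [0.4, 1.0]               # cluster in [0, 0.4] x [0, 1]
    right = pts[100:] * [0.4, 1.0] + [0.6, 0]   # cluster in [0.6, 1] x [0, 1]
    D = np.linalg.norm(left[:, None, :] - right[None, :, :], axis=2)
    B = np.exp(-D / 0.3)                        # off-diagonal covariance block
    s = np.linalg.svd(B, compute_uv=False)
    k = int((s / s[0] > 1e-8).sum())            # rank needed for ~1e-8 accuracy
    print("numerical rank of the 100x100 block:", k)   # far below 100
    ```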

  20. Approximations in the PE-method

    DEFF Research Database (Denmark)

    Arranz, Marta Galindo

    1996-01-01

    Two different sources of error may occur in the implementation of the PE method: a phase error introduced in the approximation of a pseudo-differential operator, and an amplitude error generated by the starting field. First, the inherent phase errors introduced in the solution are analyzed ...