WorldWideScience

Sample records for modeling user querying

  1. User perspectives on query difficulty

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Schütze, Hinrich

    2011-01-01

    The difficulty of a user query can affect the performance of Information Retrieval (IR) systems. What makes a query difficult and how one may predict this is an active research area, focusing mainly on factors relating to the retrieval algorithm, to the properties of the retrieval data, or to statistical and linguistic features of the queries that may render them difficult. This work addresses query difficulty from a different angle, namely the users’ own perspectives on query difficulty. Two research questions are asked: (1) Are users aware that the query they submit to an IR system may ... for synthesising the user-assessed causes of query difficulty through opinion fusion into an overall assessment of query difficulty. The resulting assessments of query difficulty are found to agree notably more with the TREC categories than the direct user assessments.

  3. Improving the User Query for the Boolean Model Using Genetic Algorithms

    CERN Document Server

    Nassar, Mohammad Othman; Mashagba, Eman Al

    2011-01-01

    The use of genetic algorithms (GA) in the Information Retrieval (IR) area, especially for optimizing user queries over Arabic data collections, is presented in this paper. Very little research has been carried out on Arabic text collections. The Boolean model has been used in this research. To optimize the query using a GA, we used different fitness functions and different mutation strategies to find the best strategy and fitness function to use with the Boolean model when the data collection is in Arabic. Our results show that the best GA strategy for the Boolean model is the GA (M2, Precision) method.
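
The paper does not include its implementation, but the core loop of such a GA can be sketched briefly. Everything below (the toy collection, the AND-semantics Boolean matcher, the single-bit mutation) is an illustrative assumption; precision is used as the fitness function, echoing the paper's best-performing (M2, Precision) variant:

```python
import random

# Toy collection: each document is a set of terms; a subset is marked relevant.
DOCS = [
    {"information", "retrieval", "arabic"},
    {"arabic", "grammar"},
    {"information", "systems"},
    {"retrieval", "evaluation", "arabic"},
]
RELEVANT = {0, 3}  # indices of the relevant documents
TERMS = ["information", "retrieval", "arabic", "grammar", "systems"]

def retrieve(query_terms):
    """AND-semantics Boolean retrieval: a doc matches if it contains all terms."""
    return {i for i, d in enumerate(DOCS) if query_terms and query_terms <= d}

def precision(query_terms):
    hits = retrieve(query_terms)
    return len(hits & RELEVANT) / len(hits) if hits else 0.0

def mask_to_terms(mask):
    """A query is encoded as a bitmask over the candidate term list."""
    return {t for i, t in enumerate(TERMS) if mask & (1 << i)}

def mutate(mask):
    """Flip one random term in/out of the query (one simple mutation strategy)."""
    return mask ^ (1 << random.randrange(len(TERMS)))

def optimize(generations=200, pop_size=8, seed=42):
    random.seed(seed)
    population = [random.randrange(1, 2 ** len(TERMS)) for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fittest half, refill with their mutants.
        population.sort(key=lambda m: precision(mask_to_terms(m)), reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(m) for m in survivors]
    best = max(population, key=lambda m: precision(mask_to_terms(m)))
    return mask_to_terms(best), precision(mask_to_terms(best))
```

Because the survivors are carried over unchanged, the best fitness in the population never decreases across generations; swapping in recall or F-measure as the fitness function is a one-line change.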

  4. Boolean queries for news monitoring: Suggesting new query terms to expert users

    NARCIS (Netherlands)

    Verberne, S.; Wabeke, T.; Kaptein, R.

    2016-01-01

    In this paper, we evaluate query suggestion for Boolean queries in a news monitoring system. Users of this system receive news articles that match their running query on a daily basis. Because the news for a topic continuously changes, the queries need regular updating. We first investigated the

  5. A comparison of user and system query performance predictions

    NARCIS (Netherlands)

    Hauff, C.; Kelly, Diane; Azzopardi, Leif

    2010-01-01

    Query performance prediction methods are usually applied to estimate the retrieval effectiveness of queries, where the evaluation is largely system sided. However, little work has been conducted to understand query performance prediction from the user's perspective. The question we consider is,

  6. XML Multidimensional Modelling and Querying

    CERN Document Server

    Boucher, Serge; Zimányi, Esteban

    2009-01-01

    As XML becomes ubiquitous and XML storage and processing becomes more efficient, the range of use cases for these technologies widens daily. One promising area is the integration of XML and data warehouses, where an XML-native database stores multidimensional data and processes OLAP queries written in the XQuery interrogation language. This paper explores issues arising in the implementation of such a data warehouse. We first compare approaches for multidimensional data modelling in XML, then describe how typical OLAP queries on these models can be expressed in XQuery. We then show how, regardless of the model, the grouping features of XQuery 1.1 improve performance and readability of these queries. Finally, we evaluate the performance of query evaluation in each modelling choice using the eXist database, which we extended with a grouping clause implementation.

  7. Relaxing RDF queries based on user and domain preferences

    DEFF Research Database (Denmark)

    Dolog, Peter; Stueckenschmidt, Heiner; Wache, Holger

    2009-01-01

    ... knowledge and user preferences. We describe a framework for information access that combines query refinement and relaxation in order to provide robust, personalized access to heterogeneous Resource Description Framework (RDF) data, as well as an implementation in terms of rewriting rules, and explain its ...

  8. A natural language user interface for fuzzy scope queries

    Institute of Scientific and Technical Information of China (English)

    黄艳; 俞宏峰; 耿卫东; 潘云鹤

    2003-01-01

    This paper presents a two-agent framework to build a natural language query interface for an IC information system, focusing on scope queries in a single English sentence. The first agent, the parsing agent, syntactically processes and semantically interprets the natural language sentence to construct a fuzzy structured query language (SQL) statement. The second agent, the defuzzifying agent, defuzzifies the imprecise part of the fuzzy SQL statement into its equivalent executable precise SQL statement based on fuzzy rules. The first agent can also actively ask the user necessary questions when it must disambiguate vague retrieval requirements. The adaptive defuzzification approach employed in the defuzzifying agent is discussed in detail. A prototype interface has been implemented to demonstrate its effectiveness.
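
As a rough illustration of the defuzzifying agent's job, the sketch below turns a fuzzy predicate such as "price is low" into a crisp SQL range via an alpha-cut of a trapezoidal membership function. The term definitions, the column name, and the alpha-cut rule are assumptions made for illustration, not the paper's actual fuzzy rules:

```python
# Hypothetical fuzzy term definitions: a trapezoidal membership function over a
# numeric column, given as the four corner points (a, b, c, d) of the trapezoid.
FUZZY_TERMS = {
    ("price", "low"): (0, 0, 50, 100),
    ("price", "high"): (150, 200, 1e9, 1e9),
}

def alpha_cut(trap, alpha):
    """Crisp [lo, hi] interval on which the trapezoidal membership >= alpha."""
    a, b, c, d = trap
    lo = a + alpha * (b - a)
    hi = d - alpha * (d - c)
    return lo, hi

def defuzzify(column, term, alpha=0.5):
    """Rewrite a fuzzy predicate into an executable, precise SQL condition."""
    lo, hi = alpha_cut(FUZZY_TERMS[(column, term)], alpha)
    return f"{column} BETWEEN {lo:g} AND {hi:g}"
```

Raising `alpha` tightens the interval toward the trapezoid's plateau, which is one simple way an adaptive defuzzification step could trade recall for precision.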

  9. MOCQL: A Declarative Language for Ad-Hoc Model Querying

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    This paper starts from the observation that existing model query facilities are not easy to use, and are thus not suitable for users without a substantial IT/Computer Science background. In an attempt to highlight this issue and explore alternatives, we have created the Model Constraint and Query Language (MOCQL). ... with MOCQL than when working with OCL. While MOCQL is currently only implemented and validated for the different notations defined by UML, its concepts should be universally applicable.

  10. jQuery UI 1.7 the user interface library for jQuery

    CERN Document Server

    Wellman, Dan

    2009-01-01

    An example-based approach leads you step-by-step through the implementation and customization of each library component and its associated resources in turn. To emphasize the way that jQuery UI takes the difficulty out of user interface design and implementation, each chapter ends with a 'fun with' section that puts together what you've learned throughout the chapter to make a usable and fun page. In these sections you'll often get to experiment with the latest associated technologies like AJAX and JSON. This book is for front-end designers and developers who need to quickly learn how to use t...

  11. Querying Business Process Models with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald; Acretoaie, Vlad

    2013-01-01

    The Visual Model Query Language (VMQL) has been invented with the objectives (1) to make it easier for modelers to query models effectively, and (2) to be universally applicable to all modeling languages. In previous work, we have applied VMQL to UML, and validated the first of these two claims. ...

  12. Automatic Building Information Model Query Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy-efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges of data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, as it can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. Through a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts use BIM to drive building design with less labour and lower overhead cost.

  13. jQuery UI 1.10 the user interface library for jQuery

    CERN Document Server

    Libby, Alex

    2013-01-01

    This book consists of an easy-to-follow, example-based approach that leads you step-by-step through the implementation and customization of each library component.This book is for frontend designers and developers who need to learn how to use jQuery UI quickly. To get the most out of this book, you should have a good working knowledge of HTML, CSS, and JavaScript, and should ideally be comfortable using jQuery.

  14. Using temporal bursts for query modeling

    NARCIS (Netherlands)

    Peetz, M.H.; Meij, E.; de Rijke, M.

    2014-01-01

    We present an approach to query modeling that leverages the temporal distribution of documents in an initially retrieved set of documents. In news-related document collections such distributions tend to exhibit bursts. Here, we define a burst to be a time period where unusually many documents are published ...
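
A burst in this sense can be detected with very little machinery. The sketch below bins documents by day and flags days whose count exceeds the mean by a multiple of the standard deviation; this simple threshold rule is a stand-in for the paper's burst model, and the parameter `k` is an assumption:

```python
from collections import Counter
from statistics import mean, pstdev

def find_bursts(days, k=1.5):
    """Return the day offsets whose document count exceeds mean + k * std.

    `days` is one integer day offset per document in the initially
    retrieved set; empty days in the covered range count as zero.
    """
    counts = Counter(days)
    full = [counts.get(d, 0) for d in range(min(days), max(days) + 1)]
    mu, sigma = mean(full), pstdev(full)
    return [d for d, c in counts.items() if c > mu + k * sigma]
```

In a query-modeling setting, terms drawn from the documents published inside the flagged days would then be weighted up when building the expanded query model.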

  15. User Profile-Driven Data Warehouse Summary for Adaptive OLAP Queries

    OpenAIRE

    Rym Khemiri; Fadila Bentayeb

    2013-01-01

    Data warehousing is an essential element of decision support systems. It aims at enabling users to make better and faster daily business decisions. To improve this decision support system and to give ever more relevant information to the user, the need to integrate users' profiles into the data warehouse process becomes crucial. In this paper, we propose to exploit users' preferences as a basis for adapting OLAP (On-Line Analytical Processing) queries to the user. For this, w...

  16. MQ-2 A Tool for Prolog-based Model Querying

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald

    2012-01-01

    MQ-2 integrates a Prolog console into the MagicDraw1 modeling environment and equips this console with features targeted specifically to the task of querying models. The vision of MQ-2 is to make Prolog-based model querying accessible to both student and expert modelers by offering powerful query...

  17. Effective Filtering of Query Results on Updated User Behavioral Profiles in Web Mining.

    Science.gov (United States)

    Sadesh, S; Suganthe, R C

    2015-01-01

    The Web, with its tremendous volume of information, retrieves results for user queries. With the rapid growth of web page recommendation, results retrieved by data mining techniques did not offer a high filtering rate, because relationships between user profiles and queries were not analyzed extensively. At the same time, existing user-profile-based prediction in web data mining is not exhaustive in producing personalized results. To improve the query result rate on the dynamics of user behavior over time, the Hamilton Filtered Regime Switching User Query Probability (HFRS-UQP) framework is proposed. The HFRS-UQP framework is split into two processes, filtering and switching. The data-mining-based filtering in our research work uses the Hamilton filtering framework to filter user results based on personalized information from automatically updated profiles through the search engine. Maximized results are fetched, that is, filtered with respect to user behavior profiles. The switching performs accurate filtering of updated profiles using regime switching. The regime switches on profile changes in the HFRS-UQP framework identify second- and higher-order associations between query results and the updated profiles. Experiments are conducted on factors such as personalized information search retrieval rate, filtering efficiency, and precision ratio.

  18. A Comprehensive Trainable Error Model for Sung Music Queries

    CERN Document Server

    Birmingham, W P; 10.1613/jair.1334

    2011-01-01

    We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications.

  19. Categorical and Specificity Differences between User-Supplied Tags and Search Query Terms for Images. An Analysis of "Flickr" Tags and Web Image Search Queries

    Science.gov (United States)

    Chung, EunKyung; Yoon, JungWon

    2009-01-01

    Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…

  20. Applying GA for Optimizing the User Query in Image and Video Retrieval

    Directory of Open Access Journals (Sweden)

    Ehsan Lotfi

    2014-12-01

    In an information retrieval system, the query can be made by a user sketch. The new method presented here optimizes the user sketch and applies the optimized query to retrieve the information. This optimization may be used in Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR) based on trajectory extraction. To optimize the retrieval process, one stage of retrieval is performed using the user sketch. The retrieval criterion is based on the proposed distance metric from the user query. Retrieved answers are considered as the primary population for evolutionary optimization. The optimized query may be achieved by reproducing and minimizing the proposed measurement using a Genetic Algorithm (GA). The optimized query can then be used for the retrieval of concepts from a given database (DB). The proposed algorithms are evaluated for trajectory retrieval from urban traffic surveillance video and image retrieval from a DB. Practical implementations have demonstrated the high efficiency of this system in trajectory retrieval and image indexing.

  1. A Grammar Analysis Model for the Unified Multimedia Query Language

    Institute of Scientific and Technical Information of China (English)

    Zhong-Sheng Cao; Zong-Da Wu; Yuan-Zhen Wang

    2008-01-01

    The unified multimedia query language (UMQL) is a powerful general-purpose multimedia query language, and it is very suitable for multimedia information retrieval. The paper proposes a grammar analysis model to implement effective grammatical processing for the language. It separates the grammar analysis of a UMQL query specification into two phases, syntactic analysis and semantic analysis, and then uses extended Backus-Naur form (EBNF) and logical algebra, respectively, to specify the restrictive grammar rules. As a result, the model can present error-guiding information for a query specification whose grammar is incorrect. The model not only suits the processing of UMQL queries well, but also offers guidance for other projects concerning query processing for descriptive query languages.

  2. Modeling and Querying Business Data with Artifact Lifecycle

    Directory of Open Access Journals (Sweden)

    Danfeng Zhao

    2015-01-01

    Business data has become one of the current and future research frontiers, with such big data characteristics as high volume, high velocity, high privacy, and so forth. Most corporations view their business data as a valuable asset and make efforts toward the development and optimal utilization of these data. Unfortunately, present data management technology lags behind the requirements of the business big data era. Based on previous business process knowledge, a lifecycle of business data is modeled to achieve a consistent description of the data and the processes. On this basis, a business data partition method based on user interest is proposed, which aims to minimize the number of interferential tuples. Then, to balance data privacy and data transmission cost, our strategy is to execute SQL queries over encrypted business data, split the computation of queries across the server and the client, and optimize the queries with a syntax tree. Finally, an instance is provided to verify the usefulness and availability of the proposed method.

  3. Medical Information Retrieval Enhanced with User's Query Expanded with Tag-Neighbors

    DEFF Research Database (Denmark)

    Durao, Frederico; Bayyapu, Karunakar Reddy; Xu, Guandong

    2013-01-01

    Under-specified queries often lead to undesirable search results that do not contain the information needed. This problem gets worse when it comes to medical information, a natural human demand everywhere. Existing search engines on the Web are often unable to handle medical search well because they do not consider its special requirements. Often a medical information searcher is uncertain about his exact questions and unfamiliar with medical terminology. To overcome the limitations of under-specified queries, we utilize tags to enhance information retrieval capabilities by expanding users’ original queries with context-relevant information. We compute a set of significant tag neighbor candidates based on the neighbor frequency and weight, and utilize the qualified tag neighbors to expand an entry query. The proposed approach is evaluated using the MedWorm medical article collection ...

  4. A Model of a Generic Natural Language Interface for Querying Database

    Directory of Open Access Journals (Sweden)

    Hanane Bais

    2016-02-01

    Extracting information from a database is typically done using a structured language such as SQL (Structured Query Language), but non-expert users cannot use such a language. Therefore, using Natural Language (NL) to communicate with a database can be a powerful tool. Without help, however, computers cannot understand this language; that is why it is essential to develop an interface able to translate a user's query given in NL into an equivalent one in a Database Query Language (DBQL). This paper presents a model of a generic natural language query interface for querying a database. The model is based on a machine learning approach which allows the interface to automatically improve its knowledge base through experience. The advantage of this interface is that it functions independently of the database language, content, and model. Experiments are carried out to study the performance of this interface and make the corrections necessary for its improvement.
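
To make the NL-to-DBQL translation step concrete, the sketch below maps two English question shapes onto SQL with fixed regular-expression rules. The paper's interface learns its mappings from experience, so these hand-written patterns and the table names they capture are purely illustrative:

```python
import re

# Deliberately tiny, hand-written rules; a learning interface would acquire
# and refine such mappings automatically rather than hard-code them.
PATTERNS = [
    (re.compile(r"show all (\w+)", re.I),
     lambda m: f"SELECT * FROM {m.group(1)}"),
    (re.compile(r"how many (\w+) are there", re.I),
     lambda m: f"SELECT COUNT(*) FROM {m.group(1)}"),
]

def nl_to_sql(question):
    """Translate a natural language question into SQL via the first matching rule."""
    for pattern, build in PATTERNS:
        m = pattern.search(question)
        if m:
            return build(m)
    raise ValueError(f"no rule matches: {question!r}")
```

A learning-based interface would, in effect, grow and re-rank this rule table from user interactions instead of failing on unseen phrasings.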

  5. Automatic Spell Correction of User query with Semantic Information Retrieval and Ranking of Search Results using WordNet Approach

    Directory of Open Access Journals (Sweden)

    Kirthi J

    2011-03-01

    The proposed semantic information retrieval system handles the following: (i) automatic spell correction of the user query; (ii) analysis and determination of the semantic features of the content, and development of a semantic pattern that represents them; (iii) analysis of the user's query and extension of its implied semantics through semantic expansion, to identify more semantic features for matching; (iv) generation of contents with approximate semantics, by clustering the documents and matching them against the extended query to provide correct contents to the querist; (v) a ranking method which computes the relevance of documents for actual queries by computing a quantitative document-query semantic distance.
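
The spell-correction and expansion steps can be sketched together in a few lines. The vocabulary, the similarity cutoff, and the synonym table below are assumptions; in particular, the synonym dictionary is a stand-in for the WordNet lookups the system actually uses:

```python
from difflib import get_close_matches

VOCAB = ["semantic", "retrieval", "ranking", "document", "query"]
# Stand-in for WordNet synsets; the real system would consult WordNet here.
SYNONYMS = {"document": ["record", "file"], "query": ["request"]}

def correct(word):
    """Return the closest vocabulary word, or the word itself if none is close."""
    matches = get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.75)
    return matches[0] if matches else word.lower()

def expand_query(raw_query):
    """Spell-correct each term, then append synonyms of the corrected terms."""
    corrected = [correct(w) for w in raw_query.split()]
    expanded = list(corrected)
    for w in corrected:
        expanded.extend(SYNONYMS.get(w, []))
    return expanded
```

The same expanded term list then feeds the clustering and ranking stages, which score documents by their semantic distance to it.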

  6. Managing user queries using cloud services: KAUST library experience

    KAUST Repository

    Ramli, Rindra M.

    2017-03-01

    The provision of reference and information services is one of the major activities of academic libraries. Answering questions and providing relevant and timely answers for library users is one such service. Questions come in many formats: in person, by phone, by email, and even on social media platforms. The types of questions may also differ, from simple, directional ones to complicated ones. One of the challenges for libraries is capturing and managing these inquiries. Libraries need to address some of these points:
    • How the questions will be captured
    • How the questions will be answered
    • Who will answer these questions
    • What the turn-around time for answering these questions is
    • What kind of statistics to monitor
    • How these statistics are communicated to internal library staff and other stakeholders
    This paper describes the initiatives undertaken by KAUST, a brand new graduate research library located in Saudi Arabia. These initiatives include the implementation of LibAnswers to assist the library in capturing and managing all inquiries. We are tracking inquiries coming in via email or widgets (such as an online form), converting received questions into FAQ entries, and creating and maintaining a public knowledge base for our users. In addition, the paper describes future plans to expand reference services for our library users. KAUST (King Abdullah University of Science and Technology) is a graduate research university located along the shores of the Red Sea. The university was inaugurated in September 2009. The main areas of study are Mathematics and Computer Science, Physical Sciences, and Life Sciences. The university library is situated at the heart of the campus. It is a digitally born library with collections comprising print and electronic resources. The library has:
    • 310,000 e-book titles
    • Over 50,000 e-journal titles
    • Over 30 scientific databases
    • About 3,500 print titles

  7. Enhancing user privacy in SARG04-based private database query protocols

    Science.gov (United States)

    Yu, Fang; Qiu, Daowen; Situ, Haozhen; Wang, Xiaoming; Long, Shun

    2015-11-01

    The well-known SARG04 protocol can be used in a private query application to generate an oblivious key. By use of the key, the user can retrieve one out of N items from a database without revealing which one he/she is interested in. However, the existing SARG04-based private query protocols are vulnerable to attacks using faked data from the database since, in its canonical form, the SARG04 protocol lacks means for one party to defend against attacks from the other. While such attacks can cause significant loss of user privacy, a variant of the SARG04 protocol is proposed in this paper with new mechanisms designed to help the user protect his/her privacy in private query applications. In the protocol, it is the user who starts the session with the database, trying to learn from it the bits of a raw key in an oblivious way. An honesty test is used to detect a cheating database that has transmitted faked data. The whole private query protocol has O(N) communication complexity for conveying at least N encrypted items. Compared with the existing SARG04-based protocols, it is efficient in communication for per-bit learning.
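
The quantum key distribution itself cannot be reproduced here, but the classical tail of such a protocol, where all N items are sent encrypted and the user can decrypt only the position whose key value he/she holds, can be illustrated with a toy XOR mask. This is purely a classical stand-in for the oblivious-key idea, not the SARG04 mechanics:

```python
def mask_items(items, key):
    """Database side: XOR-mask every item with its key byte.

    Sending the full masked list is what gives the O(N) communication cost:
    all N encrypted items travel to the user regardless of which is wanted.
    """
    return [item ^ k for item, k in zip(items, key)]

def recover(masked, index, known_key_byte):
    """User side: only the position whose key byte is known can be unmasked."""
    return masked[index] ^ known_key_byte
```

The privacy properties hinge on the key generation being oblivious (the database never learns which key byte the user obtained), which is exactly the part the quantum protocol supplies.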

  9. User Profile-Driven Data Warehouse Summary for Adaptive OLAP Queries

    Directory of Open Access Journals (Sweden)

    Rym Khemiri

    2013-01-01

    Data warehousing is an essential element of decision support systems. It aims at enabling users to make better and faster daily business decisions. To improve this decision support system and to give ever more relevant information to the user, the need to integrate users' profiles into the data warehouse process becomes crucial. In this paper, we propose to exploit users' preferences as a basis for adapting OLAP (On-Line Analytical Processing) queries to the user. For this, we present a user-profile-driven data warehouse approach that allows defining a user's profile composed of his/her identifier and a set of his/her preferences. Our approach is based on a general data warehouse architecture and an adaptive OLAP analysis system. Our main idea consists in creating a data warehouse materialized view for each user with respect to his/her profile. This task is performed off-line when the user defines his/her profile for the first time. Then, when a user query is submitted to the data warehouse, the system deals with his/her data warehouse materialized view instead of the whole data warehouse. In other words, the data warehouse view summarizes the data warehouse content for the user by taking into account his/her preferences. Moreover, we are implementing our data warehouse personalization approach under the SQL Server 2005 DBMS (DataBase Management System).
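
The per-user materialized view idea can be demonstrated with an in-memory SQLite database. The schema (`sales` with `region` and `amount`), the preference model (a list of preferred regions), and the `sales_<user>` naming convention are all illustrative assumptions, not the paper's design; the user name is assumed to be a sanitized identifier since it is interpolated into the table name:

```python
import sqlite3

def build_profile_view(conn, user, preferred_regions):
    """Materialize a per-user summary table restricted to the user's preferences."""
    placeholders = ",".join("?" for _ in preferred_regions)
    conn.execute(f"DROP TABLE IF EXISTS sales_{user}")
    conn.execute(
        f"CREATE TABLE sales_{user} AS "
        f"SELECT region, SUM(amount) AS total FROM sales "
        f"WHERE region IN ({placeholders}) GROUP BY region",
        preferred_regions,
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("eu", 10.0), ("eu", 5.0), ("us", 7.0), ("asia", 3.0)])

# Built off-line when the profile is defined; later queries hit the small
# per-user table instead of the whole warehouse.
build_profile_view(conn, "alice", ["eu", "us"])
rows = dict(conn.execute("SELECT region, total FROM sales_alice"))
```

Subsequent OLAP queries for this user would be rewritten against `sales_alice`, trading view-refresh cost for faster, preference-scoped answers.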

  10. The Effects of Normalisation on the Satisfaction of Novice End-Users Querying Databases

    Directory of Open Access Journals (Sweden)

    Conrad Benedict

    1997-05-01

    This paper reports the results of an experiment that investigated the effects that different structural characteristics of relational databases have on the information satisfaction of end-users querying databases. The results show that unnormalised tables adversely affect end-user satisfaction. The adverse effect on end-user satisfaction is attributable primarily to the use of non-atomic data. In this study, the effect of repeating fields on end-user satisfaction was not significant. The study contributes to the further development of theories of individual adjustment to information technology in the workplace by alerting organisations and, in particular, database designers to the ways in which the structural characteristics of relational databases may affect end-user satisfaction. More importantly, the results suggest that database designers need to clearly identify the domains for each item appearing in their databases. These issues are of increasing importance because of the growth in the amount of data available to end-users in relational databases.

  11. Quantum private query with perfect user privacy against a joint-measurement attack

    Science.gov (United States)

    Yang, Yu-Guang; Liu, Zhi-Chao; Li, Jian; Chen, Xiu-Bo; Zuo, Hui-Juan; Zhou, Yi-Hua; Shi, Wei-Min

    2016-12-01

    The joint-measurement (JM) attack is the most powerful threat to the database security for existing quantum-key-distribution (QKD)-based quantum private query (QPQ) protocols. Wei et al. (2016) [28] proposed a novel QPQ protocol against the JM attack. However, their protocol relies on two-way quantum communication thereby affecting its real implementation and communication efficiency. Moreover, it cannot ensure perfect user privacy. In this paper, we present a new one-way QPQ protocol in which the special way of classical post-processing of oblivious key ensures the security against the JM attack. Furthermore, it realizes perfect user privacy and lower complexity of communication.

  12. Using Description Logics to Model Context Aware Query Preferences

    NARCIS (Netherlands)

    van Bunningen, A.H.; Feng, L.; Apers, Peter M.G.

    Users’ preferences have traditionally been exploited in query personalization to better serve their information needs. With the emerging ubiquitous computing technologies, users will be situated in an Ambient Intelligence (AmI) environment, where users’ database access will not occur at a single ...

  13. Marathi-English CLIR using detailed user query and unsupervised corpus-based WSD

    Directory of Open Access Journals (Sweden)

    Savita C. Mayanale

    2015-06-01

    With the rapid growth of multilingual information on the Internet, Cross-Language Information Retrieval (CLIR) is becoming a need of the day. It helps users query in their native language and retrieve information in any language. But the performance of CLIR is poor compared to monolingual retrieval, due to lexical ambiguity, mismatching of query terms, and out-of-vocabulary words. In this paper, we propose an algorithm for improving the performance of a Marathi-English CLIR system. The system first finds possible translations of the input query in the target language, disambiguates them, and then gives the English queries to a search engine for relevant document retrieval. The disambiguation is based on an unsupervised corpus-based method which uses an English dictionary as an additional resource. The experiment is performed on the FIRE 2011 (Forum of Information Retrieval Evaluation) dataset using the “Title” and “Description” fields as inputs. The experimental results show that the proposed approach gives better performance for the Marathi-English CLIR system, with a good precision level.

  14. User Satisfaction Evaluation of the EHR4CR Query Builder: A Multisite Patient Count Cohort System

    Directory of Open Access Journals (Sweden)

    Iñaki Soto-Rey

    2015-01-01

    The Electronic Health Records for Clinical Research (EHR4CR) project aims to develop services and technology to leverage the reuse of Electronic Health Records with the purpose of improving the efficiency of clinical research processes. A pilot program was implemented to generate evidence of the value of using the EHR4CR platform. The user acceptance of the platform is a key success factor in driving its adoption; thus, it was decided to evaluate user satisfaction. In this paper, we present the results of a user satisfaction evaluation for the EHR4CR multisite patient count cohort system. This study examined the ability of testers (n=22 and n=16) from 5 countries to perform three main tasks (around 20 minutes per task), after a 30-minute period of self-training. The System Usability Scale score obtained was 55.83 (SD: 15.37), indicating moderate user satisfaction. The responses to an additional satisfaction questionnaire were positive about the design of the interface and the procedure required to design a query. Nevertheless, the most complex of the three tasks proposed in this test was rated as difficult, indicating a need to improve the system regarding complicated queries.

  15. Quantum private query with perfect user privacy against a joint-measurement attack

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yu-Guang, E-mail: yangyang7357@bjut.edu.cn [College of Computer Science and Technology, Beijing University of Technology, Beijing 100124 (China); State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093 (China); Liu, Zhi-Chao [College of Computer Science and Technology, Beijing University of Technology, Beijing 100124 (China); Li, Jian [School of Computer, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Chen, Xiu-Bo [Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876 (China); Zuo, Hui-Juan [College of Mathematics and Information Science, Hebei Normal University, Shijiazhuang 050024 (China); Zhou, Yi-Hua; Shi, Wei-Min [College of Computer Science and Technology, Beijing University of Technology, Beijing 100124 (China)

    2016-12-16

    The joint-measurement (JM) attack is the most powerful threat to database security for existing quantum-key-distribution (QKD)-based quantum private query (QPQ) protocols. Wei et al. (2016) [28] proposed a novel QPQ protocol against the JM attack. However, their protocol relies on two-way quantum communication, which affects its practical implementation and communication efficiency. Moreover, it cannot ensure perfect user privacy. In this paper, we present a new one-way QPQ protocol in which a special classical post-processing of the oblivious key ensures security against the JM attack. Furthermore, it realizes perfect user privacy and lower communication complexity. - Highlights: • A special classical post-processing ensures security against the JM attack. • It ensures perfect user privacy. • It ensures lower communication complexity. Alice's conclusive key rate is 1/6.

  16. Superfund Query

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Superfund Query allows users to retrieve data from the Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) database.

  17. VMQL: A Visual Language for Ad-Hoc Model Querying

    DEFF Research Database (Denmark)

    Störrle, Harald

    2011-01-01

    In large scale model based development, analysis level models are more like knowledge bases than engineering artifacts. Their effectiveness depends, to a large degree, on the ability of domain experts to retrieve information from them ad hoc. For large scale models, however, existing query facili...

  18. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    Evolving customer requirements and increasing competition force business organizations to store increasing amounts of data and query them for information at any given time. Due to the current growth of data volumes, timely extraction of relevant information becomes more and more difficult...... these issues, compression techniques have been introduced in many areas of data processing. In this paper, we outline a new system that does not query complete datasets but instead utilizes models to extract the requested information. For time series data we use Fourier and Cosine transformations and piece...

  19. Improving the Usability of OCL as an Ad-hoc Model Querying Language

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    The OCL is often perceived as difficult to learn and use. In previous research, we have defined experimental query languages exhibiting higher levels of usability than OCL. However, none of these alternatives can rival OCL in terms of adoption and support. In an attempt to leverage the lessons learned...... from our research and make it accessible to the OCL community, we propose the OCL Query API (OQAPI), a library of query-predicates to improve the user-friendliness of OCL for ad-hoc querying. The usability of OQAPI is studied using controlled experiments. We find considerable evidence to support our...... claim that OQAPI facilitates user querying using OCL....

  20. Boolean Queries and Term Dependencies in Probabilistic Retrieval Models.

    Science.gov (United States)

    Croft, W. Bruce

    1986-01-01

    Proposes approach to integrating Boolean and statistical systems where Boolean queries are interpreted as a means of specifying term dependencies in relevant set of documents. Highlights include series of retrieval experiments designed to test retrieval strategy based on term dependence model and relation of results to other work. (18 references)…

  1. Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data

    Science.gov (United States)

    2014-11-01

    ...as well as causal queries of the form P(y|do(x)). We show that causal queries may be recoverable even when the factors in their identifying estimands are not... (Karthika Mohan and Judea Pearl, Cognitive Systems Laboratory)

  2. Automatic modeling of the linguistic values for database fuzzy querying

    Directory of Open Access Journals (Sweden)

    Diana STEFANESCU

    2007-12-01

    Full Text Available In order to evaluate vague queries, each linguistic term is considered according to its fuzzy model. Usually, the linguistic terms are defined as fuzzy sets during a classical off-line knowledge acquisition process. But they can also be automatically extracted from the actual content of the database by an online process. In at least two situations, automatically modeling the linguistic values would be very useful: first, to simplify the knowledge engineer's task by extracting the definitions from the database content; and second, where mandatory, to dynamically define the linguistic values in the evaluation of complex-criteria queries. Procedures to automatically extract the fuzzy model of the linguistic values from the existing data are presented in this paper.

  3. Scalable Social Coordination using Enmeshed Queries

    CERN Document Server

    Chen, Jianjun; Varghese, George

    2012-01-01

    Social coordination allows users to move beyond awareness of their friends to efficiently coordinating physical activities with others. While specific forms of social coordination can be seen in tools such as Evite, Meetup and Groupon, we introduce a more general model using what we call enmeshed queries. An enmeshed query allows users to declaratively specify an intent to coordinate by specifying social attributes such as the desired group size and who/what/when, and the database returns matching queries. Enmeshed queries are continuous, but new queries (and not data) answer older queries; the variable group size also makes enmeshed queries different from entangled queries, publish-subscribe systems, and dating services. We show that even offline group coordination using enmeshed queries is NP-hard. We then introduce efficient heuristics that use selective indices such as location and time to reduce the space of possible matches; we also add refinements such as delayed evaluation and using the relative...

  4. A Database Query Processing Model in Peer-To-Peer Network ...

    African Journals Online (AJOL)

    This paper presents an extensive evaluation of a database query processing model in peer-to-peer networks using a top-k query processing technique, implemented in Java and MySQL.

  5. Querying and Serving N-gram Language Models with Python

    Directory of Open Access Journals (Sweden)

    2009-06-01

    Full Text Available Statistical n-gram language modeling is a very important technique in Natural Language Processing (NLP) and Computational Linguistics, used to assess the fluency of an utterance in any given language. It is widely employed in several important NLP applications such as Machine Translation and Automatic Speech Recognition. However, the most commonly used toolkit to build such language models on a large scale, SRILM, is written entirely in C++, which presents a challenge to an NLP developer or researcher whose primary language of choice is Python. This article first provides a gentle introduction to statistical language modeling. It then describes how to build a native and efficient Python interface (using SWIG) to the SRILM toolkit such that language models can be queried and used directly in Python code. Finally, it also demonstrates an effective use case of this interface by showing how to leverage it to build a Python language model server. Such a server can prove to be extremely useful when the language model needs to be queried by multiple clients over a network: the language model must only be loaded into memory once by the server and can then satisfy multiple requests. This article includes only those listings of source code that are most salient. To conserve space, some are only presented in excerpted form. The complete set of full source code listings may be found in Volume 1 of The Python Papers Source Codes Journal.
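The kind of query API such an interface exposes can be illustrated with a minimal, self-contained bigram model. This is a stand-in sketch, not the SRILM binding the article builds; real SRILM models use smoothed backoff n-grams rather than the add-one smoothing shown here.

```python
from collections import defaultdict
import math

class BigramLM:
    """Minimal bigram language model with add-one smoothing, illustrating
    the score-a-sentence query an LM server would expose."""

    def __init__(self, sentences):
        self.unigrams = defaultdict(int)
        self.bigrams = defaultdict(int)
        for s in sentences:
            tokens = ["<s>"] + s.split() + ["</s>"]
            for w in tokens:
                self.unigrams[w] += 1
            for a, b in zip(tokens, tokens[1:]):
                self.bigrams[(a, b)] += 1
        self.vocab = len(self.unigrams)

    def logprob(self, sentence):
        """Add-one smoothed log10 probability of the sentence."""
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        lp = 0.0
        for a, b in zip(tokens, tokens[1:]):
            lp += math.log10((self.bigrams[(a, b)] + 1) /
                             (self.unigrams[a] + self.vocab))
        return lp

lm = BigramLM(["the cat sat", "the cat ran", "a dog ran"])
print(lm.logprob("the cat sat") > lm.logprob("sat the cat"))  # fluent order scores higher
```

A server wrapping a model like this would load it once and answer `logprob` requests from many clients, which is the memory-saving pattern the article describes.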

  6. Declarative Visualization Queries

    Science.gov (United States)

    Pinheiro da Silva, P.; Del Rio, N.; Leptoukh, G. G.

    2011-12-01

    In an ideal interaction with machines, scientists may prefer to write declarative queries saying "what" they want from a machine rather than code stating "how" the machine is going to address the user request. For example, in relational databases, users have long relied on specifying queries using Structured Query Language (SQL), a declarative language to request data results from a database management system. In the context of visualizations, we see that users are still writing code based on complex visualization toolkit APIs. With the goal of improving scientists' experience of using visualization technology, we have applied this query-answering pattern to a visualization setting, where scientists specify what visualizations they want generated using a declarative SQL-like notation. A knowledge-enhanced management system ingests the query and knows the following: (1) how to translate the query into visualization pipelines; and (2) how to execute the visualization pipelines to generate the requested visualization. We define visualization queries as declarative requests for visualizations specified in an SQL-like language. Visualization queries specify what category of visualization to generate (e.g., volumes, contours, surfaces) as well as associated display attributes (e.g., color and opacity), without any regard for implementation, thus allowing scientists to remain partially unaware of a wide range of visualization toolkit (e.g., Generic Mapping Tools and Visualization Toolkit) specific implementation details. Implementation details are only a concern for our knowledge-based visualization management system, which uses both the information specified in the query and knowledge about visualization toolkit functions to construct visualization pipelines. Knowledge about the use of visualization toolkits includes what data formats the toolkit operates on, what formats they output, and what views they can generate. Visualization knowledge, which is not
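A minimal sketch of the query-to-pipeline translation described above, under an invented grammar and invented stage names (the abstract's actual notation and pipeline vocabulary are not reproduced here):

```python
import re

# Hypothetical grammar loosely modeled on the SQL-like notation described:
#   GENERATE <view> FROM <dataset> WHERE color='<c>' AND opacity=<o>
QUERY_RE = re.compile(
    r"GENERATE (\w+) FROM (\w+)(?: WHERE color='(\w+)' AND opacity=([\d.]+))?")

def compile_query(query):
    """Translate a declarative visualization query into an illustrative
    pipeline spec a backend could execute; all stage names are invented."""
    m = QUERY_RE.fullmatch(query)
    if not m:
        raise ValueError("unrecognized query")
    view, dataset, color, opacity = m.groups()
    return {
        "pipeline": ["load:" + dataset, "render:" + view],
        "display": {"color": color or "default",
                    "opacity": float(opacity) if opacity else 1.0},
    }

spec = compile_query("GENERATE contours FROM gravity_grid WHERE color='red' AND opacity=0.5")
print(spec)
```

The point of the pattern is that the user states only the view category and display attributes; the knowledge-based system, not the user, decides which toolkit calls realize each pipeline stage.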

  7. Technologies for conceptual modelling and intelligent query formulation

    CSIR Research Space (South Africa)

    Alberts, R

    2008-11-01

    Full Text Available ...modelling and intelligent query formulation. R. Alberts (1), K. Britz (1), A. Gerber (1), K. Halland (1,2), T. Meyer (1), L. Pretorius (1,2). (1) Knowledge Systems Group, Meraka Institute, CSIR, Pretoria, Gauteng, South Africa; (2) School of Computing, University of South... ...this problem still more pressing: in this case, an excess of information can be equivalent to an absence of information. It is therefore necessary to use tools that organise data into intelligible and easily-accessible structures and return answers...

  8. Outside users payload model

    Science.gov (United States)

    1985-01-01

    The outside users payload model, which is a continuation of prior documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.

  10. Query-Optimization Based Virtual Data Warehouse Model%基于查询优化的虚拟数据仓库模型

    Institute of Scientific and Technical Information of China (English)

    徐强; 李文烨; 顾毓清

    2003-01-01

    The technology of virtual data warehouses is attracting more and more interest. In this paper, a Query-Optimization Based Virtual Data Warehouse (QVDW) model is presented, which not only has all the advantages that a virtual data warehouse has, but also can change bottom-layer data structures and the model itself according to users' various query requests. The aim is to offer customized service to OLAP (on-line analytical processing) users.

  11. Personal lifelong user model clouds

    DEFF Research Database (Denmark)

    Dolog, Peter; Kay, Judy; Kummerfeld, Bob

    This paper explores an architecture for very long term user modelling, based upon personal user model clouds. These ensure that the individual's applications can access their model whenever it is needed. At the same time, the user can control the use of their user model. So, they can ensure...... it is accessed only when and where they wish, by applications that they wish. We consider the challenges of representing user models so that they can be reused by multiple applications. We indicate potential synergies between distributed and centralised user modelling architectures, proposing an architecture...

  12. A Hierarchical Approach to Model Web Query Interfaces for Web Source Integration

    OpenAIRE

    Kabisch, Thomas; Dragut, Eduard; Yu, Clement; Leser, Ulf

    2009-01-01

    Much data in the Web is hidden behind Web query interfaces. In most cases the only means to "surface" the content of a Web database is by formulating complex queries on such interfaces. Applications such as Deep Web crawling and Web database integration require an automatic usage of these interfaces. Therefore, an important problem to be addressed is the automatic extraction of query interfaces into an appropriate model. We hypothesize the existence of a set of domain-independent "commonsense...

  13. An Efficient Query Rewriting Approach for Web Cached Data Management

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the development of the Internet, querying data on the Web has become a problem of growing attention, involving information from distributed, and often dynamically related, Web sources. Basically, some sub-queries can be effectively cached from previous queries or materialized views in order to achieve better query performance, based on the notion of query rewriting. In this paper, we propose a novel query-rewriting model, called the Hierarchical Query Tree, for representing Web queries. A Hierarchical Query Tree is a labeled tree that is suitable for representing the inherent hierarchical features of data on the Web. Based on the Hierarchical Query Tree, we use a case-based approach to determine what the query results should be. The definitions of queries and query results are both represented as labeled trees. Thus, we can use the same model for representing cases, and the intermediate query results can also be dynamically updated by the user queries. We show that our case-based method can be used to answer a new query based on the combination of previous queries, including changes of requirements and various information sources.
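The core reuse test, deciding whether a cached labeled query tree can answer a new one, can be sketched as subtree subsumption over nested dictionaries. This is a simplified illustration, not the paper's full model:

```python
def covers(cached, query):
    """True if the cached labeled tree contains every label (and, recursively,
    every child) of the new query tree, so the cached result can answer it.
    A simplified reading of the subsumption test; the paper's model is richer."""
    if cached["label"] != query["label"]:
        return False
    cached_children = {c["label"]: c for c in cached.get("children", [])}
    return all(c["label"] in cached_children and covers(cached_children[c["label"]], c)
               for c in query.get("children", []))

# A cached query over books, and a narrower new query it should subsume.
cached = {"label": "books", "children": [
    {"label": "fiction", "children": [{"label": "price"}, {"label": "author"}]},
    {"label": "science"}]}
new_q = {"label": "books", "children": [
    {"label": "fiction", "children": [{"label": "price"}]}]}

print(covers(cached, new_q))  # the cached result can serve the new query
```

Note the asymmetry: the narrow query is covered by the broad cached one, but not the reverse, which is what lets previous results answer later, more specific requests.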

  14. An Overview of Data Models and Query Languages for Content-based Video Retrieval

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.

    2000-01-01

    As a large amount of video data becomes publicly available, the need to model and query this data efficiently becomes significant. Consequently, content-based retrieval of video data turns out to be a challenging and important problem, addressing areas such as video modelling, indexing, querying, etc.

  15. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  16. User Modeling for Contextual Suggestion

    Science.gov (United States)

    2014-11-01

    ...per day without charge. Each Yelp search query can return up to 20 results. The results are in JSON (JavaScript Object Notation) format and contain... RAMA performance on metric P@5. 4.4.1 Comparison with Average Track Median: Table 8 compares the two RAMA runs with the Track Median on all three... whereas RUN1 on MRR shows the least improvement (24%). (Figure 6: RAMA performance compared with Track Median by user.) 4.4.2 Comparison with Track

  17. UCVM: An Open Source Software Package for Querying and Visualizing 3D Velocity Models

    Science.gov (United States)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2015-12-01

    Three-dimensional (3D) seismic velocity models provide foundational data for ground motion simulations that calculate the propagation of earthquake waves through the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) package for both Linux and OS X. This unique framework provides a cohesive way of querying and visualizing 3D models. UCVM v14.3.0 supports many Southern California velocity models, including CVM-S4, CVM-H 11.9.1, and CVM-S4.26. The last model was derived from 26 full-3D tomographic iterations on CVM-S4. Recently, UCVM has been used to deliver a prototype of a new 3D model of central California (CCA), also based on full-3D tomographic inversions. UCVM was used to provide initial plots of this model and will be used to deliver CCA to users when the model is publicly released. Visualizing models is also possible with UCVM. Integrated within the platform are plotting utilities that can generate 2D cross-sections, horizontal slices, and basin depth maps. UCVM can also export models in NetCDF format for easy import into IDV and ParaView. UCVM has also been prototyped to export models that are compatible with IRIS' new Earth Model Collaboration (EMC) visualization utility. This capability allows user-specified horizontal slices and cross-sections to be plotted in the same 3D Earth space. UCVM was designed to help a wide variety of researchers. It is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. It is also used to provide the initial input to SCEC's CyberShake platform. For those interested in specific data points, the software framework makes it easy to extract P- and S-wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth. Also included in the last release was the ability to add small...
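The common query interface described above can be illustrated with a toy lookup. The grid points and values below are invented, and the real UCVM dispatches to registered velocity models (with interpolation) rather than doing a nearest-node lookup:

```python
# Hypothetical material-property query in the spirit of UCVM's common
# interface: look up Vp/Vs at a (lat, lon, depth) point.
GRID = {  # invented sample nodes: (lat, lon, depth_km) -> (vp_km_s, vs_km_s)
    (34.0, -118.0, 0.0): (1.8, 0.5),
    (34.0, -118.0, 5.0): (5.5, 3.2),
    (34.5, -118.5, 5.0): (6.0, 3.5),
}

def query_model(lat, lon, depth):
    """Return (vp, vs) at the nearest stored node; depth is down-weighted
    so horizontal distance dominates, purely for this toy example."""
    key = min(GRID, key=lambda p: (p[0] - lat) ** 2 + (p[1] - lon) ** 2
                                   + ((p[2] - depth) / 100.0) ** 2)
    return GRID[key]

print(query_model(34.1, -118.1, 4.0))
```

The value of a uniform interface like this is that wave-propagation codes can mesh any registered model through one query function instead of coding against each model's native format.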

  18. Reasoning about geological space: Coupling 3D GeoModels and topological queries as an aid to spatial data selection

    Science.gov (United States)

    Pouliot, Jacynthe; Bédard, Karine; Kirkwood, Donna; Lachance, Bernard

    2008-05-01

    Topological relationships between geological objects are of great interest for mining and petroleum exploration. Indeed, adjacency, inclusion and intersection are common relationships between geological objects such as faults, geological units, fractures, mineralized zones and reservoirs. However, in the context of 3D modeling, actual geometric data models used to store those objects are not designed to manage explicit topological relationships. For example, with Gocad© software, topological analyses are possible but they require a series of successive manipulations and are time consuming. This paper presents the development of a 3D topological query prototype, TQuery, compatible with Gocad© modeling platform. It allows the user to export Gocad© objects to a data storage model that regularizes the topological relationships between objects. The development of TQuery was oriented towards the use of volumetric objects that are composed of tetrahedrons. Exported data are then retrieved and used for 3D topological and spatial queries. One of the advantages of TQuery is that different types of objects can be queried at the same time without restricting the operations to voxel regions. TQuery allows the user to analyze data more quickly and efficiently and does not require a 3D modeling specialist to use it, which is particularly attractive in the context of a decision-making aid. The prototype was tested on a 3D GeoModel of a continental red-bed copper deposit in the Silurian Robitaille Formation (Transfiguration property, Québec, Canada).

  19. A model for the evaluation of search queries in medical bibliographic sources

    NARCIS (Netherlands)

    Verhoeven, A A; Boendermaker, P M; Boerma, Edzard J.; Meyboom-de Jong, B

    1999-01-01

    The aim of this study was to develop a model to evaluate the retrieval quality of search queries performed by Dutch general practitioners using the printed Index Medicus, MEDLINE on CD-ROM, and MEDLINE through GRATEFUL MED. Four search queries related to general practice were formulated for a contin

  20. Aspects of data modeling and query processing for complex multidimensional data

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach

    by existing OLAP systems, enable practical pre-aggregation. The algorithms have low computational complexity and may be applied incrementally to reduce the cost of updating OLAP structures. The transformations can be made transparently to the user. A prototype implementation of the techniques is reported......This thesis is about data modeling and query processing for complex multidimensional data. Multidimensional data has become the subject of much attention in both academia and industry in recent years, fueled by the popularity of data warehousing and On-Line Analytical Processing (OLAP) applications....... One application area where complex multidimensional data is common is within medical informatics, an area that may benefit significantly from the functionality offered by data warehousing and OLAP. However, the special nature of clinical applications poses different and new requirements to data...

  1. Search Result Diversification Based on Query Facets

    Institute of Scientific and Technical Information of China (English)

    胡莎; 窦志成; 王晓捷; 继荣

    2015-01-01

    In search engines, different users may search for different information by issuing the same query. To satisfy more users with limited search results, search result diversification re-ranks the results to cover as many user intents as possible. Most existing intent-aware diversification algorithms recognize user intents as subtopics, each of which is usually a word, a phrase, or a piece of description. In this paper, we leverage query facets to understand user intents in diversification, where each facet contains a group of words or phrases that explain an underlying intent of a query. We generate subtopics based on query facets and propose faceted diversification approaches. Experimental results on the public TREC 2009 dataset show that our faceted approaches outperform state-of-the-art diversification models.
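A minimal greedy version of facet-based diversification might look like the following. The facets and documents are invented, and real models also balance relevance against intent coverage rather than maximizing coverage alone:

```python
def diversify(results, facets, k):
    """Greedy re-ranking: repeatedly pick the result covering the most
    not-yet-covered facet terms (an illustrative reduction of intent-aware
    diversification)."""
    terms = {t for facet in facets for t in facet}
    covered, ranked, pool = set(), [], list(results)
    while pool and len(ranked) < k:
        best = max(pool, key=lambda r: len((set(r["terms"]) & terms) - covered))
        covered |= set(best["terms"]) & terms
        ranked.append(best["id"])
        pool.remove(best)
    return ranked

# Two facets, i.e. two underlying intents for the same query.
facets = [{"price", "review"}, {"manual", "driver"}]
results = [
    {"id": "d1", "terms": ["price"]},
    {"id": "d2", "terms": ["price", "review"]},
    {"id": "d3", "terms": ["driver"]},
]
print(diversify(results, facets, 2))
```

With only two slots, the greedy pass picks one document per intent instead of two near-duplicates about price, which is the behavior diversification is after.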

  2. Browsing and Querying in Online Documentation:A Study of User Interfaces and the Interaction Process

    DEFF Research Database (Denmark)

    Hertzum, Morten; Frøkjær, Erik

    1996-01-01

    A user interface study concerning the usage effectiveness of selected retrieval modes was conducted using an experimental text retrieval system, TeSS, giving access to online documentation of certain programming tools. Four modes of TeSS were compared: (1) browsing, (2) conventional Boolean...

  3. Provenance tracking and querying in the ViroLab Virtual Laboratory

    NARCIS (Netherlands)

    Balis, B.; Bubak, M.; Pelczar, M.; Wach, J.; Priol, T.; Lefevre, L.; Buyya, R.

    2008-01-01

    We present an approach to provenance tracking and querying which enables end-users to construct complex queries over provenance records. The use of ontologies for modeling provenance, data and applications enables query construction in an end-user oriented manner, i.e. by using terms of the scientif

  6. Operators for reclassification queries in a temporal multidimensional model

    Directory of Open Access Journals (Sweden)

    Francisco Moreno

    2011-05-01

    Full Text Available Data warehouse dimensions are usually considered to be static because their schema and data tend not to change; however, both dimension schema and dimension data can change. This paper focuses on a type of dimension data change called reclassification, which occurs when a member of a certain level becomes a member of a higher level in the same dimension, e.g., when a product changes category (it is reclassified). This type of change gives rise to the notion of a classification period and to a type of query that can be useful for decision support. For example: What were total chess-set sales during the first classification period in the Toy category? A set of operators has been proposed to facilitate formulating this type of query, and it is shown how to incorporate them in SQL, a language familiar to database developers. The operators' expressivity is also shown, because formulating such queries without them usually leads to complex and non-intuitive solutions.
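One way to picture the proposed classification-period operators is as a lookup over a reclassification history. The data and function name below are invented for illustration; the paper expresses the operators as SQL extensions:

```python
from datetime import date

# Hypothetical reclassification history: (member, category, start, end).
history = [
    ("chess-set", "Toy",  date(2020, 1, 1), date(2020, 6, 30)),
    ("chess-set", "Game", date(2020, 7, 1), date(2020, 12, 31)),
    ("chess-set", "Toy",  date(2021, 1, 1), date(2021, 12, 31)),
]
sales = [("chess-set", date(2020, 3, 1), 100),
         ("chess-set", date(2020, 8, 1), 40),
         ("chess-set", date(2021, 2, 1), 25)]

def sales_in_classification_period(member, category, n, history, sales):
    """Total sales of `member` during its n-th (1-based) classification
    period under `category`: the kind of query the proposed operators
    make easy to express."""
    periods = [(s, e) for m, c, s, e in history if m == member and c == category]
    start, end = periods[n - 1]
    return sum(a for m, d, a in sales if m == member and start <= d <= end)

print(sales_in_classification_period("chess-set", "Toy", 1, history, sales))
```

The example mirrors the abstract's chess-set query: only the sale dated inside the first Toy period counts, not the later Toy period or the intervening Game period.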

  7. The Imposed Query: Implications for Library Service Evaluation.

    Science.gov (United States)

    Gross, Melissa

    1998-01-01

    Explores the potential impact of imposed query, a new model of information-seeking behavior, on current approaches to library service and system evaluation. Discusses reference service evaluation, user studies, output measures, and relevance as an evaluation tool. Argues that imposed query broadens understanding of the user and of the role that…

  8. Optimizing Temporal Queries

    DEFF Research Database (Denmark)

    Toman, David; Bowman, Ivan Thomas

    2003-01-01

    Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often......, these query languages are implemented by translating temporal queries into standard relational queries. However, the compiled queries are often quite cumbersome and expensive to execute even using state-of-the-art relational products. This paper presents an optimization technique that produces more efficient...... translated SQL queries by taking into account the properties of the encoding used for temporal attributes. For concreteness, this translation technique is presented in the context of SQL/TP; however, these techniques are also applicable to other temporal query languages....

  9. Optimizing Temporal Queries

    DEFF Research Database (Denmark)

    Toman, David; Bowman, Ivan Thomas

    2003-01-01

    translated SQL queries by taking into account the properties of the encoding used for temporal attributes. For concreteness, this translation technique is presented in the context of SQL/TP; however, these techniques are also applicable to other temporal query languages......., these query languages are implemented by translating temporal queries into standard relational queries. However, the compiled queries are often quite cumbersome and expensive to execute even using state-of-the-art relational products. This paper presents an optimization technique that produces more efficient......Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often...

  10. Efficient Query Rewrite for Structured Web Queries

    CERN Document Server

    Gollapudi, Sreenivas; Ntoulas, Alexandros; Paparizos, Stelios

    2011-01-01

    Web search engines and specialized online verticals are increasingly incorporating results from structured data sources to answer semantically rich user queries. For example, the query "Samsung 50 inch led tv" can be answered using information from a table of television data. However, the users are not domain experts and quite often enter values that do not match precisely the underlying data. Samsung makes 46- or 55-inch led tvs, but not 50-inch ones. So a literal execution of the above-mentioned query will return zero results. For optimal user experience, a search engine would prefer to return at least a minimum number of results as close to the original query as possible. Furthermore, due to typical fast retrieval speeds in web search, a search engine query execution is time-bound. In this paper, we address these challenges by proposing algorithms that rewrite the user query in a principled manner, surfacing at least the required number of results while satisfying the low-latency constraint. We f...
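The nearest-value rewrite motivating the paper can be sketched as follows. The relaxation strategy shown (substitute the closest available sizes for the same brand) is an illustrative simplification of the proposed algorithms, and the table is invented:

```python
def rewrite(query, table, min_results=1):
    """Relax the numeric constraint to the nearest available values when a
    literal execution would return fewer than `min_results` rows."""
    brand, size = query
    exact = [row for row in table if row == (brand, size)]
    if len(exact) >= min_results:
        return exact
    # No exact match: substitute the closest sizes offered by the same brand.
    sizes = sorted({s for b, s in table if b == brand}, key=lambda s: abs(s - size))
    best = set(sizes[:2])
    return [row for row in table if row[0] == brand and row[1] in best]

tvs = [("Samsung", 46), ("Samsung", 55), ("LG", 50)]
print(rewrite(("Samsung", 50), tvs))  # nearest Samsung sizes instead of zero results
```

A literal execution of the 50-inch query returns nothing; the rewrite surfaces the 46- and 55-inch models, matching the paper's motivating example.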

  11. VMTL: a language for end-user model transformation

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2016-01-01

    Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently... these guidelines. VMTL draws on our previous work on the usability-oriented Visual Model Query Language. We implement VMTL using the Henshin model transformation engine, and empirically investigate its learnability via two user experiments and a think-aloud protocol analysis. Our experiments, although conducted on computer science students exhibiting only some of the characteristics of end-user modelers, show that VMTL compares favorably in terms of learnability with two state-of-the-art model transformation languages: Epsilon and Henshin. Our think-aloud protocol analysis confirms many of the design decisions......

  12. Mental models and user training

    Directory of Open Access Journals (Sweden)

    Saša Zupanič

    1997-01-01

    Full Text Available One of the functions of the reference service is user training, which means teaching users how to use the library and its information sources (nowadays mainly computerized systems). While the scientific understanding of the teaching/learning process is shifting, changes also affect the methods of user training in libraries. Human-computer interaction (HCI) is an interdisciplinary and very active research area which studies how humans use computers - their mental and behavioral characteristics. The applications of psychological theories to HCI are especially significant in three areas: psychological (mental, conceptual) models, individual differences, and error behavior. The mental models theory is a powerful tool for understanding the ways in which users interact with an information system. Claims based on this theory can affect the methods (conceptualization) of user training and the overall design of information systems.

  13. Modeling User Behavior and Attention in Search

    Science.gov (United States)

    Huang, Jeff

    2013-01-01

    In Web search, query and click log data are easy to collect but they fail to capture user behaviors that do not lead to clicks. As search engines reach the limits inherent in click data and are hungry for more data in a competitive environment, mining cursor movements, hovering, and scrolling becomes important. This dissertation investigates how…

  15. Spatial Keyword Querying

    DEFF Research Database (Denmark)

    Cao, Xin; Chen, Lisi; Cong, Gao;

    2012-01-01

    The web is increasingly being used by mobile users. In addition, it is increasingly becoming possible to accurately geo-position mobile users and web content. This development gives prominence to spatial web data management. Specifically, a spatial keyword query takes a user location and user-sup...... different kinds of functionality as well as the ideas underlying their definition....

  16. Unobtrusive user modeling for adaptive hypermedia

    NARCIS (Netherlands)

    Holz, H.J.; Hofmann, K.; Reed, C.; Uchyigit, G.; Ma, M.Y.

    2008-01-01

    We propose a technique for user modeling in Adaptive Hypermedia (AH) that is unobtrusive at both the level of observable behavior and that of cognition. Unobtrusive user modeling is complementary to transparent user modeling. Unobtrusive user modeling induces user models appropriate for Educational

  17. Time-Dependent Networks as Models to Achieve Fast Exact Time-Table Queries

    DEFF Research Database (Denmark)

    Brodal, Gert Stølting; Jacob, Rico

    2003-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries for travelers using a train system. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.
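The earliest-arrival semantics behind such time-table queries can be sketched with a Dijkstra variant on a time-dependent network. The timetable, station names, and connection times below are toy assumptions, not data from the paper:

```python
import heapq

# Toy time-dependent network: each edge carries timetable connections
# (departure_time, arrival_time), so the cost of traversing an edge
# depends on when the traveler reaches its source node.
timetable = {
    ("A", "B"): [(0, 10), (15, 25), (30, 40)],
    ("B", "C"): [(12, 20), (26, 34)],
    ("A", "C"): [(5, 50)],
}

def earliest_arrival(source, target, start_time):
    """Dijkstra variant on a time-dependent network: a node's label is
    the earliest known arrival time there."""
    graph = {}
    for (u, v), conns in timetable.items():
        graph.setdefault(u, []).append((v, conns))
    best = {source: start_time}
    heap = [(start_time, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale heap entry
        for v, conns in graph.get(u, []):
            # earliest connection departing at or after arrival time t
            arrivals = [arr for dep, arr in conns if dep >= t]
            if arrivals:
                arr = min(arrivals)
                if arr < best.get(v, float("inf")):
                    best[v] = arr
                    heapq.heappush(heap, (arr, v))
    return None  # target unreachable from this start time

print(earliest_arrival("A", "C", 0))  # A->B (0,10), then B->C (12,20): 20
```

Starting later changes the answer: departing A at time 11 forces the (15, 25) train and a time-21... no, a later B-to-C connection, illustrating why edge costs must be functions of time rather than constants.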

  18. Time-dependent Networks as Models to Achieve Fast Exact Time-table Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jacob, Rico

    2001-01-01

    We consider efficient algorithms for exact time-table queries, i.e. algorithms that find optimal itineraries. We propose to use time-dependent networks as a model and show advantages of this approach over space-time networks as models.

  19. QueryArch3D: Querying and Visualising 3D Models of a Maya Archaeological Site in a Web-Based Interface

    Directory of Open Access Journals (Sweden)

    Giorgio Agugiaro

    2011-12-01

    Full Text Available Constant improvements in the field of surveying, computing and distribution of digital content are reshaping the way Cultural Heritage can be digitised and virtually accessed, even remotely via the web. A traditional 2D approach for data access, exploration and retrieval may generally suffice; however, more complex analyses concerning spatial and temporal features require 3D tools, which, in some cases, have not yet been implemented or are not yet generally commercially available. Efficient organisation and integration strategies applicable to the wide array of heterogeneous data in the field of Cultural Heritage represent a hot research topic nowadays. This article presents a visualisation and query tool (QueryArch3D conceived to deal with multi-resolution 3D models. Geometric data are organised in successive levels of detail (LoD, provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The visualisation and query front-end enables the 3D navigation of the models in a virtual environment, as well as the interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application, or served through the web. The characteristics of the research work, along with some implementation issues and the developed QueryArch3D tool, will be discussed and presented.

  20. Research on Enhancing Web User Experience with jQuery

    Institute of Scientific and Technical Information of China (English)

    侯海平

    2013-01-01

    More and more Internet companies focus on user experience. jQuery is a lightweight JavaScript framework that emerged in response to this trend; it offers great advantages for enhancing the Web user experience, and its impact continues to grow. This paper introduces the technical features of the jQuery framework and analyzes how it can be used to improve the user experience.

  1. Indexing for summary queries

    DEFF Research Database (Denmark)

    Yi, Ke; Wang, Lu; Wei, Zhewei

    2014-01-01

    ), of a particular attribute of these records. Aggregation queries are especially useful in business intelligence and data analysis applications where users are interested not in the actual records, but some statistics of them. They can also be executed much more efficiently than reporting queries, by embedding...

  3. FORSPAN Model Users Guide

    Science.gov (United States)

    Klett, T.R.; Charpentier, Ronald R.

    2003-01-01

    The USGS FORSPAN model is designed for the assessment of continuous accumulations of crude oil, natural gas, and natural gas liquids (collectively called petroleum). Continuous (also called "unconventional") accumulations have large spatial dimensions and lack well defined down-dip petroleum/water contacts. Oil and natural gas therefore are not localized by buoyancy in water in these accumulations. Continuous accumulations include "tight gas reservoirs," coalbed gas, oil and gas in shale, oil and gas in chalk, and shallow biogenic gas. The FORSPAN model treats a continuous accumulation as a collection of petroleum-containing cells for assessment purposes. Each cell is capable of producing oil or gas, but the cells may vary significantly from one another in their production (and thus economic) characteristics. The potential additions to reserves from continuous petroleum resources are calculated by statistically combining probability distributions of the estimated number of untested cells having the potential for additions to reserves with the estimated volume of oil and natural gas that each of the untested cells may potentially produce (total recovery). One such statistical method for combination of number of cells with total recovery, used by the USGS, is called ACCESS.

  4. Provide a New Model for Query Processing in P2P Integration Systems

    Directory of Open Access Journals (Sweden)

    Hassan Shojaee Mend

    2014-07-01

    Full Text Available Most organizations and companies meet their software needs by acquiring distinct software systems over time. Each of these systems uses its own way of storing information, which is not necessarily similar to that of other systems. As a result, an organization's information ends up scattered across a collection of heterogeneous databases. Moreover, with the emergence of networks and the Internet, the number of available and interrelated databases grows daily. Information must be extracted from these data sources, which can be done by posing queries; however, combining information from different data sources is a difficult and tedious task because of differing data structures, schemas, and access protocols, and because the sources are distributed across the network. The purpose of integration systems is to provide an integrated query interface over a set of databases, such that users can pose queries through this interface and receive results over the network without needing to know the location of the data or how to access it. In this study, an architecture is provided for querying a P2P integration system. The suggested architecture takes into account the features of a P2P environment and the changeability of the network. The query-processing mechanism is designed to minimize network traffic and response time. Experimental results indicate that the suggested method performs close to the optimum.

  5. Orthogonal Query Expansion

    CERN Document Server

    Ackerman, Margareta; Lopez-Ortiz, Alejandro

    2011-01-01

    Over the last fifteen years, web searching has seen tremendous improvements. Starting from a nearly random collection of matching pages in 1995, today, search engines tend to satisfy the user's informational need on well-formulated queries. One of the main remaining challenges is to satisfy the users' needs when they provide a poorly formulated query. When the pages matching the user's original keywords are judged to be unsatisfactory, query expansion techniques are used to alter the result set. These techniques find keywords that are similar to the keywords given by the user, which are then appended to the original query leading to a perturbation of the result set. However, when the original query is sufficiently ill-posed, the user's informational need is best met using entirely different keywords, and a small perturbation of the original result set is bound to fail. We propose a novel approach that is not based on the keywords of the original query. We intentionally seek out orthogonal queries, which are r...

  6. Topic Level Disambiguation for Weak Queries

    Directory of Open Access Journals (Sweden)

    Zhang, Hui

    2013-09-01

    Full Text Available Despite limited success, today's information retrieval (IR systems are not intelligent or reliable. IR systems return poor search results when users formulate their information needs into incomplete or ambiguous queries (i.e., weak queries. Therefore, one of the main challenges in modern IR research is to provide consistent results across all queries by improving the performance on weak queries. However, existing IR approaches such as query expansion are not overly effective because they make little effort to analyze and exploit the meanings of the queries. Furthermore, word sense disambiguation approaches, which rely on textual context, are ineffective against weak queries that are typically short. Motivated by the demand for a robust IR system that can consistently provide highly accurate results, the proposed study implemented a novel topic detection that leveraged both the language model and structural knowledge of Wikipedia and systematically evaluated the effect of query disambiguation and topic-based retrieval approaches on TREC collections. The results not only confirm the effectiveness of the proposed topic detection and topic-based retrieval approaches but also demonstrate that query disambiguation does not improve IR as expected.

  7. Query auto completion in information retrieval

    NARCIS (Netherlands)

    Cai, Fei

    2016-01-01

    Query auto completion is an important feature embedded into today's search engines. It helps users formulate queries that other people have searched for as they type a query prefix. Today's most sophisticated query auto completion approaches are based on the collected query logs.
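The log-based baseline described here, often called most-popular completion, can be sketched in a few lines. The query log and frequencies below are made-up illustration data:

```python
from collections import Counter

# Hypothetical query log: past queries as submitted by users.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web search", "weather radar", "web search",
]

log_counts = Counter(query_log)

def auto_complete(prefix, k=3):
    """Rank log queries matching the typed prefix by past popularity
    (most-popular completion), the common QAC baseline."""
    matches = [(q, c) for q, c in log_counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda qc: (-qc[1], qc[0]))  # frequency desc, then alphabetical
    return [q for q, _ in matches[:k]]

print(auto_complete("wea"))  # ['weather today', 'weather radar', 'weather tomorrow']
```

Production systems replace the linear scan with a prefix trie over the log and blend in personalization or recency signals, but the ranking principle is the same.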

  8. Lower Bounds for Sorted Geometric Queries in the I/O Model

    DEFF Research Database (Denmark)

    Afshani, Peyman; Zeh, Norbert

    2012-01-01

    problems is to construct a small data structure that can answer queries efficiently. We study sorted geometric query problems in the I/O model and prove that, when limited to linear space, the naïve approach of sorting the elements in S in the desired output order from scratch is the best possible....... This is highly relevant in an I/O context because storing a massive data set in a superlinear-space data structure is often infeasible. We also prove that answering queries using I/Os requires space, where N is the input size, B is the block size, and M is the size of the main memory. This bound is unlikely...

  9. An Effective Information Retrieval for Ambiguous Query

    CERN Document Server

    Roul, R K

    2012-01-01

    A search engine returns thousands of web pages for a single user query, most of which are not relevant. In this context, effective information retrieval from the expanding web is a challenging task, particularly if the query is ambiguous. The major question here is how to get the relevant pages for an ambiguous query. We propose an approach for effectively answering an ambiguous query by forming community vectors based on the association concept of data mining, using the vector space model and the freedictionary. We develop clusters by computing the similarity between community vectors and document vectors formed from the web pages extracted by the search engine. We use the Gensim package to implement the algorithm because of its simplicity and robust nature. Analysis shows that our approach is an effective way to form clusters for an ambiguous query.
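The core step of comparing retrieved pages against sense-specific "community vectors" can be sketched with plain term-frequency vectors and cosine similarity (the paper uses Gensim and the vector space model; the sense vectors, query, and page texts below are illustrative assumptions):

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0) for t in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical community vectors for two senses of the ambiguous query "jaguar".
communities = {
    "animal": tf_vector("jaguar big cat wildlife habitat predator"),
    "car": tf_vector("jaguar car engine vehicle luxury speed"),
}

def assign_to_community(page_text):
    """Cluster a retrieved page with the most similar sense vector."""
    scores = {name: cosine(tf_vector(page_text), vec)
              for name, vec in communities.items()}
    return max(scores, key=scores.get)

page = "the jaguar is a large cat that lives in the wildlife of south america"
print(assign_to_community(page))  # animal
```

Grouping all retrieved pages by their winning community yields the clusters that disambiguate the query's result list.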

  10. Improve Query Performance On Hierarchical Data. Adjacency List Model Vs. Nested Set Model

    Directory of Open Access Journals (Sweden)

    Cornelia Gyorödi

    2016-04-01

    Full Text Available Hierarchical data are found in a variety of database applications, including content management categories, forums, business organization charts, and product categories. In this paper, we examine two models that deal with hierarchical data in relational databases, namely the adjacency list model and the nested set model. We analysed these models by executing various operations and queries in a web application for the management of categories, thus highlighting the results obtained during performance comparison tests. The purpose of this paper is to present the advantages and disadvantages of using an adjacency list model compared to a nested set model in a relational database integrated into an application for the management of categories, which needs to manipulate a large amount of hierarchical data.
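The two models compared in this record can be sketched side by side on a toy category tree in SQLite. The table contents are made-up illustration data; the point is that a subtree query needs recursion under the adjacency list model but only a single range predicate under the nested set model:

```python
import sqlite3

# The same category tree stored two ways: adjacency list (parent_id)
# and nested set (lft/rgt interval per node).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE adjacency (id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER);
CREATE TABLE nested (id INTEGER PRIMARY KEY, name TEXT, lft INTEGER, rgt INTEGER);
INSERT INTO adjacency VALUES (1,'Electronics',NULL),(2,'TVs',1),(3,'LED',2),(4,'Audio',1);
INSERT INTO nested VALUES (1,'Electronics',1,8),(2,'TVs',2,5),(3,'LED',3,4),(4,'Audio',6,7);
""")

# Adjacency list: fetching a subtree requires recursion
# (here via a recursive common table expression).
subtree_adj = conn.execute("""
WITH RECURSIVE sub(id, name) AS (
  SELECT id, name FROM adjacency WHERE name = 'Electronics'
  UNION ALL
  SELECT a.id, a.name FROM adjacency a JOIN sub s ON a.parent_id = s.id
) SELECT name FROM sub
""").fetchall()

# Nested set: the same subtree is one non-recursive range query.
subtree_nested = conn.execute("""
SELECT c.name FROM nested c, nested p
WHERE p.name = 'Electronics' AND c.lft BETWEEN p.lft AND p.rgt
""").fetchall()

print(sorted(subtree_adj) == sorted(subtree_nested))  # True
```

The trade-off the paper measures follows directly: nested set reads subtrees cheaply, while inserts and moves must renumber lft/rgt intervals, which the adjacency list model avoids.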

  11. Moving Spatial Keyword Queries

    DEFF Research Database (Denmark)

    Wu, Dingming; Yiu, Man Lung; Jensen, Christian S.

    2013-01-01

    Web users and content are increasingly being geo-positioned. This development gives prominence to spatial keyword queries, which involve both the locations and textual descriptions of content. We study the efficient processing of continuously moving top-k spatial keyword (MkSK) queries over spatial...... text data. State-of-the-art solutions for moving queries employ safe zones that guarantee the validity of reported results as long as the user remains within the safe zone associated with a result. However, existing safe-zone methods focus solely on spatial locations and ignore text relevancy. We...

  12. Provenance Storage, Querying, and Visualization in PBase

    Energy Technology Data Exchange (ETDEWEB)

    Kianmajd, Parisa [University of California, Davis; Ludascher, Bertram [University of California, Davis; Missier, Paolo [Newcastle University, UK; Chirigati, Fernando [New York University; Wei, Yaxing [ORNL; Koop, David [New York University; Dey, Saumen [University of California, Davis

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.

  13. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    Science.gov (United States)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time by different means of technical equipment, methods and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform bringing spatial and non-spatial databases together and providing visualization and analysis tools. Especially the 3D components of the platform use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is further implemented in a spatial database which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As a service for the delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  14. Time-sensitive personalized query auto-completion

    NARCIS (Netherlands)

    Cai, F.; Liang, S.; de Rijke, M.; Li, J.; Wang, X.S.

    2014-01-01

    Query auto-completion (QAC) is a prominent feature of modern search engines. It is aimed at saving users' time and enhancing the search experience. Current QAC models mostly rank matching QAC candidates according to their past popularity, i.e., frequency. However, query popularity changes over time
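The contrast between raw frequency and time-sensitive popularity can be sketched with exponential recency decay (one common way to model it; the log entries and half-life below are illustrative assumptions, not the paper's model):

```python
from collections import defaultdict

# Hypothetical timestamped query log: (query, days_ago). "world news" was
# popular a while back; "world cup" spiked in the last two days.
log = [("world news", 60), ("world news", 61), ("world news", 62),
       ("world cup", 1), ("world cup", 2)]

def time_sensitive_scores(log, half_life_days=7.0):
    """Score each query by exponentially decayed occurrence counts,
    so recent popularity outweighs stale frequency."""
    scores = defaultdict(float)
    for query, days_ago in log:
        scores[query] += 0.5 ** (days_ago / half_life_days)
    return scores

def complete(prefix, log, k=2):
    scores = time_sensitive_scores(log)
    candidates = [q for q in scores if q.startswith(prefix)]
    return sorted(candidates, key=lambda q: -scores[q])[:k]

# Raw frequency would rank "world news" (3 hits) above "world cup" (2 hits);
# recency-weighted scoring reverses the order.
print(complete("world", log))  # ['world cup', 'world news']
```

A personalized variant would additionally weight each log entry by its similarity to the current user's profile before summing.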

  15. Multi-threaded Query Agent and Engine for a Very Large Astronomical Database

    Science.gov (United States)

    Thakkar, A. R.; Kunszt, P. Z.; Szalay, A. S.; Szokoly, G. P.

    We describe the Query Agent and Query Engine for the Science Archive of the Sloan Digital Sky Survey. In our client-server model, a GUI client communicates with a Query Agent that retrieves the requested data from the repository (a commercial ODBMS). The multi-threaded Agent is able to maintain multiple concurrent user sessions as well as multiple concurrent queries within each session. We describe the parallel, distributed design of the Query Agent and present the results of performance benchmarks that we have run using typical queries on our test data. We also report on our experiences with loading large amounts of data into Objectivity.

  16. Cross Language Information Retrieval Model for Discovering WSDL Documents Using Arabic Language Query

    Directory of Open Access Journals (Sweden)

    Prof. Dr. Torkey I.Sultan

    2013-09-01

    Full Text Available Web service discovery is the process of finding a suitable Web service for a given user's query by analyzing the Web service's WSDL content and finding the best match for the query. The service query should be written in the same language as the WSDL, for example English. Cross-Language Information Retrieval (CLIR) techniques do not exist in the web service discovery process. The absence of CLIR methods limits the search language to English keywords only, which raises the following question: how do people who do not know English find a web service? This paper proposes the application of CLIR techniques and IR methods to support a bilingual Web service discovery process; the second language proposed here is Arabic. Text mining techniques were applied to the WSDL content and the user's query to prepare them for the CLIR methods. The proposed model was tested on a curated catalogue of Life Science Web Services (http://www.biocatalogue.org/) and solved the research problem with 99.87% accuracy and 95.06 precision.

  17. Flexible Query Answering Systems 2006

    DEFF Research Database (Denmark)

    submissions, relating to the topic of users posing queries and systems producing answers. The papers cover the fields: Database Management, Information Retrieval, Domain Modeling, Knowledge Representation and Ontologies, Knowledge Discovery and Data Mining, Artificial Intelligence, Classical and Non-classical Logics, Computational Linguistics and Natural Language Processing, Multimedia Information Systems, and Human-Computer Interaction, including reports of interesting applications. We wish to thank the contributors for their excellent papers and the referees, publisher, and sponsors for their effort...

  18. Efficient Probabilistic Inference with Partial Ranking Queries

    CERN Document Server

    Huang, Jonathan; Guestrin, Carlos E

    2012-01-01

    Distributions over rankings are used to model data in various settings such as preference analysis and political elections. The factorial size of the space of rankings, however, typically forces one to make structural assumptions, such as smoothness, sparsity, or probabilistic independence about these underlying distributions. We approach the modeling problem from the computational principle that one should make structural assumptions which allow for efficient calculation of typical probabilistic queries. For ranking models, "typical" queries predominantly take the form of partial ranking queries (e.g., given a user's top-k favorite movies, what are his preferences over remaining movies?). In this paper, we argue that riffled independence factorizations proposed in recent literature [7, 8] are a natural structural assumption for ranking distributions, allowing for particularly efficient processing of partial ranking queries.

  19. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
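The record-and-generalize behavior of such a relevance network can be sketched as follows. The storage scheme, the Jaccard similarity rule, and all names are illustrative assumptions, not the paper's actual design:

```python
from collections import defaultdict

# Feedback store: votes on references, keyed by (query terms, user profile).
feedback = defaultdict(lambda: defaultdict(float))

def record_feedback(query, profile, reference, vote):
    """Record user feedback (positive or negative) on a reference
    for a specific (query, profile) pair."""
    feedback[(frozenset(query.split()), profile)][reference] += vote

def relevant_refs(query, profile, threshold=0.5):
    """Generalize stored feedback to a new query: aggregate votes from
    stored queries of the same profile whose term overlap (Jaccard
    similarity) exceeds a threshold."""
    terms = frozenset(query.split())
    scores = defaultdict(float)
    for (q_terms, prof), refs in feedback.items():
        if prof != profile:
            continue
        sim = len(terms & q_terms) / len(terms | q_terms)
        if sim >= threshold:
            for ref, vote in refs.items():
                scores[ref] += sim * vote
    ranked = sorted(scores, key=lambda r: -scores[r])
    return [r for r in ranked if scores[r] > 0]

record_feedback("engine repair manual", "mechanic", "doc-12", 1.0)
record_feedback("engine repair manual", "mechanic", "doc-40", -0.5)
record_feedback("engine repair guide", "pilot", "doc-77", 1.0)

print(relevant_refs("engine repair", "mechanic"))  # ['doc-12']
```

As in the paper's model, no prior training is needed: the store starts empty, and users retain control because recommendations only ever reflect feedback they chose to give.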

  20. INTEGRATIVE METHOD OF TEACHING INFORMATION MODELING IN PRACTICAL HEALTH SERVICE BASED ON MICROSOFT ACCESS QUERIES

    Directory of Open Access Journals (Sweden)

    Svetlana A. Firsova

    2016-06-01

    Full Text Available Introduction: this article explores the pedagogical technology employed to teach medical students the foundations of working with MICROSOFT ACCESS databases. The technology is based on an integrative approach to information modeling in public health practice, drawing upon basic didactic concepts pertaining to the objects and query tools of databases created in MICROSOFT ACCESS. The article examines the successive steps in teaching the topic "Queries in MICROSOFT ACCESS", from simple queries to complex ones. The main attention is paid to such components of the methodological system as the principles and teaching methods, classified according to the degree of learners' active cognitive activity. Of particular interest is the diagram of the relationships between learning principles, teaching methods and specific types of queries. Materials and Methods: the authors used comparative analysis of literature, syllabi and curricula in medical informatics taught at leading medical universities in Russia. Results: an original technique for teaching query construction in MICROSOFT ACCESS databases is presented for the analysis of information models in practical health care. Discussion and Conclusions: it is argued that the proposed pedagogical technology will significantly improve the effectiveness of teaching the course "Medical Informatics", which includes the development and application of models to simulate the operation of certain facilities and services of the health system and, in turn, increases the level of information culture of practitioners.

  1. Proposing a New Method for Query Processing Adaption in DataBase

    CERN Document Server

    Feizi-Derakhshi, Mohammad-Reza; Asil, Amir

    2010-01-01

    This paper proposes a multi-agent system combining two technologies, query processing optimization and software agents, which offers personalized queries and adapts to changing requirements. The system uses a new algorithm based on modeling users' long-term requirements, together with a genetic algorithm (GA) to gather users' query data. Experimental results show greater adaptation capability for the presented algorithm in comparison with classic algorithms.

  2. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers......

  3. jQuery cookbook

    CERN Document Server

    2010-01-01

    jQuery simplifies building rich, interactive web frontends. Getting started with this JavaScript library is easy, but it can take years to fully realize its breadth and depth; this cookbook shortens the learning curve considerably. With these recipes, you'll learn patterns and practices from 19 leading developers who use jQuery for everything from integrating simple components into websites and applications to developing complex, high-performance user interfaces. Ideal for newcomers and JavaScript veterans alike, jQuery Cookbook starts with the basics and then moves to practical use cases w

  4. Design of Intelligent layer for flexible querying in databases

    CERN Document Server

    Nihalani, Mrs Neelu; Motwani, Dr Mahesh

    2009-01-01

    Computer-based information technologies have been extensively used to help organizations, private companies, and academic and educational institutions manage their processes, and information systems have hereby become their nerve centre. The explosion of massive data sets created by businesses, science and governments necessitates intelligent and more powerful computing paradigms so that users can benefit from this data. Therefore, most new-generation database applications demand intelligent information management to enhance efficient interactions between databases and users. Database systems support only a Boolean query model: a selection query on an SQL database returns all those tuples that satisfy the conditions in the query.

  5. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2005-01-01

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical...... and uncertain data, e.g., data from the location-based services domain such as transportation infrastructures and the attached static and dynamic content such as speed limits and vehicle positions. The paper also presents algebraic operators that support querying of such data. The work is motivated...

  6. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical...... and uncertain data, e.g., data from the location-based services domain such as transportation infrastructures and the attached static and dynamic content such as speed limits and vehicle positions. The paper also presents algebraic operators that support querying of such data. Use of pre...

  7. Moving Object Trajectories Meta-Model And Spatio-Temporal Queries

    CERN Document Server

    Boulmakoul, Azedine; Lbath, Ahmed

    2012-01-01

    In this paper, a general framework for moving object trajectories is put forward to give independent applications that process trajectory data a high level of interoperability and information sharing, as well as efficient answers to a wide range of complex trajectory queries. The proposed meta-model is based on an ontology and event approach, incorporates existing representations of trajectories, and integrates new patterns such as the space-time path to describe activities in geographical space-time. We introduce recursive Region of Interest concepts and handle mobile object trajectories with diverse spatio-temporal sampling protocols and different available sensors, which traditional data models alone cannot support.

  8. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2005-01-01

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical...... and uncertain data, e.g., data from the location-based services domain such as transportation infrastructures and the attached static and dynamic content such as speed limits and vehicle positions. The paper also presents algebraic operators that support querying of such data. The work is motivated...... with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services....

  9. Modeling and interoperability of heterogeneous genomic big data for integrative processing and querying.

    Science.gov (United States)

    Masseroli, Marco; Kaitoua, Abdulrahman; Pinoli, Pietro; Ceri, Stefano

    2016-12-01

    While a huge amount of (epi)genomic data of multiple types is becoming available by using Next Generation Sequencing (NGS) technologies, the most important emerging problem is the so-called tertiary analysis, concerned with sense making, e.g., discovering how different (epi)genomic regions and their products interact and cooperate with each other. We propose a paradigm shift in tertiary analysis, based on the use of the Genomic Data Model (GDM), a simple data model which links genomic feature data to their associated experimental, biological and clinical metadata. GDM encompasses all the data formats which have been produced for feature extraction from (epi)genomic datasets. We specifically describe the mapping to GDM of SAM (Sequence Alignment/Map), VCF (Variant Call Format), NARROWPEAK (for called peaks produced by NGS ChIP-seq or DNase-seq methods), and BED (Browser Extensible Data) formats, but GDM also supports all the formats describing experimental datasets (e.g., including copy number variations, DNA somatic mutations, or gene expressions) and annotations (e.g., regarding transcription start sites, genes, enhancers or CpG islands). We downloaded and integrated samples of all the above-mentioned data types and formats from multiple sources. The GDM is able to homogeneously describe semantically heterogeneous data and lays the groundwork for data interoperability, e.g., achieved through the GenoMetric Query Language (GMQL), a high-level, declarative query language for genomic big data. The combined use of the data model and the query language allows comprehensive processing of multiple heterogeneous data, and supports the development of domain-specific data-driven computations and bio-molecular knowledge discovery. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance

    Science.gov (United States)

    Chan, Emily H.; Sahai, Vikram; Conrad, Corrie; Brownstein, John S.

    2011-01-01

    Background A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Methodology/Principal Findings Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003–2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Conclusions/Significance Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance. PMID:21647308
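The modeling step this record describes can be illustrated with a minimal sketch: fit a univariate linear model mapping a weekly search-query fraction to official case counts on a training subset, then check the correlation on the full series. All numbers below are synthetic, and the helper names are illustrative, not from the paper.

```python
# Hedged sketch of the study's univariate linear model (synthetic data).

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Synthetic weekly data: dengue-related query fraction vs. reported cases.
query_frac = [0.10, 0.12, 0.18, 0.25, 0.22, 0.15, 0.11, 0.09]
cases      = [105,  118,  180,  260,  235,  150,  112,   95]

train_x, train_y = query_frac[:6], cases[:6]   # training subset
a, b = fit_line(train_x, train_y)

predicted = [a + b * x for x in query_frac]
print(round(pearson(predicted, cases), 2))     # validation correlation
```

Because the prediction is an increasing affine function of the query fraction, the validation correlation equals the raw correlation between query volume and case counts, which is the quantity the study reports (0.82 to 0.99 across countries).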

  11. Using web search query data to monitor dengue epidemics: a new model for neglected tropical disease surveillance.

    Directory of Open Access Journals (Sweden)

    Emily H Chan

    2011-05-01

    BACKGROUND: A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. METHODOLOGY/PRINCIPAL FINDINGS: Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003-2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. CONCLUSIONS/SIGNIFICANCE: Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance.

  12. User Modeling for Personalized Recommender Systems

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper models a user's interest- and value-related characteristics by extending the concept of the user model in recommender systems with neural networks and econometric techniques. A two-dimensional taxonomy of user models is proposed in terms of the content and persistence of user characteristics. A user interest model and a customer lifetime value model were developed for the proposed taxonomy framework, to capture the time-dependent, evolving nature of a user's interests and his/her long-term profitability. The proposed models were empirically validated using real customer data from a bee-product company in the health care industry. The experimental results show that these models provide effective assistant tools for the company to target its most valuable customers and implement one-to-one personalized services.

  13. Comparing relational model transformation technologies: implementing Query/View/Transformation with Triple Graph Grammars

    DEFF Research Database (Denmark)

    Greenyer, Joel; Kindler, Ekkart

    2010-01-01

    The Model Driven Architecture (MDA) is an approach to develop software based on different models. There are separate models for the business logic and for platform specific details. Moreover, code can be generated automatically from these models. This makes transformations a core technology for MDA...... and for model-based software engineering approaches in general. QVT (Query/View/Transformation) is the transformation technology recently proposed for this purpose by the OMG. TGGs (Triple Graph Grammars) are another transformation technology proposed in the mid-nineties, used for example in the FUJABA CASE...... tool. In contrast to many other transformation technologies, both QVT and TGGs declaratively define the relation between two models. With this definition, a transformation engine can execute a transformation in either direction and, based on the same definition, can also propagate changes from one...

  14. Bottom-up mining of XML query patterns to improve XML querying

    Institute of Scientific and Technical Information of China (English)

    Yi-jun BEI; Gang CHEN; Jin-xiang DONG; Ke CHEN

    2008-01-01

    Querying XML data is a computationally expensive process due to the complex nature of both the XML data and the XML queries. In this paper we propose an approach to expedite XML query processing by caching the results of frequent queries. We discover frequent query patterns from user-issued queries using an efficient bottom-up mining approach called VBUXMiner. VBUXMiner consists of two main steps. First, all queries are merged into a summary structure named "compressed global tree guide" (CGTG). Second, a bottom-up traversal scheme based on the CGTG is employed to generate frequent query patterns. We use the frequent query patterns in a cache mechanism to improve the XML query performance. Experimental results show that our proposed mining approach outperforms the previous mining algorithms for XML queries, such as XQPMinerTID and FastXMiner, and that by caching the results of frequent query patterns, XML query performance can be dramatically improved.
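The caching idea in this record can be sketched independently of the mining algorithm itself: count issued path queries, treat those above a support threshold as frequent patterns, and serve repeats of those from a result cache. This is a toy illustration with made-up queries, not VBUXMiner or the CGTG structure.

```python
# Illustrative sketch (not VBUXMiner): cache results of frequent XML queries.
from collections import Counter

issued = ["/book/title", "/book/author", "/book/title",
          "/journal/issue", "/book/title", "/book/author"]

support = 2   # minimum count for a query to be considered "frequent"
freq = {q for q, c in Counter(issued).items() if c >= support}

cache, hits = {}, 0

def run_query(q):
    global hits
    if q in freq and q in cache:
        hits += 1
        return cache[q]                 # answered from the cache
    result = f"<result of {q}>"         # stand-in for real XML evaluation
    if q in freq:
        cache[q] = result               # cache only frequent patterns
    return result

for q in issued:
    run_query(q)
print(hits, sorted(freq))
```

In a real system the frequent patterns would be mined offline from a query log (as the paper does bottom-up over a tree guide), and the cache keyed on structural query patterns rather than raw path strings.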

  15. Cooperative Answering of Fuzzy Queries

    Institute of Scientific and Technical Information of China (English)

    Narjes Hachani; Mohamed Ali Ben Hassine; Hanène Chettaoui; Habib Ounelli

    2009-01-01

    The majority of existing information systems deal with crisp data through crisp database systems. Traditional Database Management Systems (DBMSs) do not take imprecision into account, so they suffer from a lack of flexibility: queries retrieve only elements that precisely match the given Boolean query. That is, an element belongs to the result if the query is true for this element; otherwise, no answers are returned to the user. The aim of this paper is to present a cooperative approach to handling empty answers of fuzzy conjunctive queries by drawing on Formal Concept Analysis (FCA) theory and fuzzy logic. We present an architecture which combines FCA and databases. The processing of fuzzy queries allows detecting the minimal reasons for empty answers. We also use the concept lattice to provide the user with the nearest answers in the case of a query failure.
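The contrast between Boolean and fuzzy conjunctive querying, and the cooperative reporting of why a tuple fails, can be shown in a few lines. The predicates, thresholds and data below are invented for illustration; the paper's actual machinery (FCA, concept lattices) is far richer.

```python
# Minimal sketch of fuzzy conjunctive querying with a cooperative fallback.

def young(age):          # fuzzy predicate: degree of "young"
    return max(0.0, min(1.0, (40 - age) / 15))

def well_paid(salary):   # fuzzy predicate: degree of "well paid"
    return max(0.0, min(1.0, (salary - 30000) / 30000))

employees = [
    {"name": "Ana",  "age": 28, "salary": 52000},
    {"name": "Bo",   "age": 55, "salary": 70000},
    {"name": "Caro", "age": 33, "salary": 31000},
]

def fuzzy_query(rows, threshold=0.3):
    """Conjunctive fuzzy query: overall degree = min of predicate degrees."""
    answers, reasons = [], []
    for r in rows:
        degrees = {"young": young(r["age"]),
                   "well_paid": well_paid(r["salary"])}
        deg = min(degrees.values())
        if deg >= threshold:
            answers.append((r["name"], round(deg, 2)))
        else:
            # cooperative part: record which conditions caused the failure
            failed = [c for c, d in degrees.items() if d < threshold]
            reasons.append((r["name"], failed))
    return answers, reasons

answers, reasons = fuzzy_query(employees)
print(answers)   # graded matches instead of an empty Boolean result
print(reasons)   # minimal failure reasons for the rejected tuples
```

A crisp Boolean query would simply return nothing for near-miss tuples; here the failing conditions are surfaced so the user can relax exactly those.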

  16. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  17. Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.

    Science.gov (United States)

    Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung

    2017-01-01

    The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, our approach fuses multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with Feedforward Neural Networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. DL with FNN appears to deliver the most competitive predictive performance among the four individual models considered. Combining all four models in a comprehensive BMA framework further improves predictive evaluation metrics such as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited, and the proposed methodology is easily tractable.
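The fusion step can be sketched with a simplified stand-in for BMA: weight each model's forecast by a score derived from its validation error, so better-performing models dominate the ensemble. The exponential-of-negative-RMSE weight below is a crude surrogate for a posterior model probability, and all forecasts are synthetic; this is not the authors' code.

```python
# Simplified stand-in for BMA-style forecast fusion (synthetic data).
import math

def rmse(pred, actual):
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)) ** 0.5

validation_actual = [10.0, 12.0, 15.0, 11.0]
model_preds = {                       # per-model validation forecasts
    "GLM":   [11.0, 12.5, 14.0, 10.5],
    "ARIMA": [ 9.5, 12.2, 15.3, 11.1],
    "DL":    [10.1, 11.9, 15.0, 11.2],
}

# Weight ~ exp(-RMSE): a crude surrogate for posterior model probability.
scores = {m: math.exp(-rmse(p, validation_actual)) for m, p in model_preds.items()}
total = sum(scores.values())
weights = {m: s / total for m, s in scores.items()}

next_week = {"GLM": 13.0, "ARIMA": 14.0, "DL": 13.5}   # one-step forecasts
fused = sum(weights[m] * next_week[m] for m in next_week)
print({m: round(w, 3) for m, w in weights.items()}, round(fused, 2))
```

True BMA computes weights from marginal model likelihoods rather than point errors, but the structure (normalized weights, convex combination of forecasts) is the same.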

  18. The Semantics of Query Modification

    NARCIS (Netherlands)

    Hollink, V.; Tsikrika, T.; Vries, A.P. de

    2010-01-01

    We present a method that exploits `linked data' to determine semantic relations between consecutive user queries. Our method maps queries onto concepts in linked data and searches the linked data graph for direct or indirect relations between the concepts. By comparing relations between large number

  19. Query responses

    Directory of Open Access Journals (Sweden)

    Paweł Łupkowski

    2017-05-01

    In this article we consider the phenomenon of answering a query with a query. Although such answers are common, no large scale, corpus-based characterization exists, with the exception of clarification requests. After briefly reviewing different theoretical approaches on this subject, we present a corpus study of query responses in the British National Corpus and develop a taxonomy for query responses. We point at a variety of response categories that have not been formalized in previous dialogue work, particularly those relevant to adversarial interaction. We show that different response categories have significantly different rates of subsequent answer provision. We provide a formal analysis of the response categories in the framework of KoS.

  20. Query Monitoring and Analysis for Database Privacy - A Security Automata Model Approach.

    Science.gov (United States)

    Kumar, Anand; Ligatti, Jay; Tu, Yi-Cheng

    2015-11-01

    Privacy and usage restriction issues are important when valuable data are exchanged or acquired by different organizations. Standard access control mechanisms either restrict or completely grant access to valuable data. On the other hand, data obfuscation limits the overall usability and may result in loss of total value. There are no standard policy enforcement mechanisms for data acquired through mutual and copyright agreements. In practice, many different types of policies can be enforced in protecting data privacy. Hence there is the need for a unified framework that encapsulates multiple suites of policies to protect the data. We present our vision of an architecture named security automata model (SAM) to enforce privacy-preserving policies and usage restrictions. SAM analyzes the input queries and their outputs to enforce various policies, liberating data owners from the burden of monitoring data access. SAM allows administrators to specify various policies and enforces them to monitor queries and control the data access. Our goal is to address the problems of data usage control and protection through privacy policies that can be defined, enforced, and integrated with the existing access control mechanisms using SAM. In this paper, we lay out the theoretical foundation of SAM, which is based on a class of automata named Mandatory Result Automata. We also discuss the major challenges of implementing SAM in a real-world database environment as well as ideas to meet such challenges.
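The security-automaton idea can be illustrated with a toy monitor: an automaton inspects the columns each query touches and transitions to a state in which further queries that would violate a policy are denied. The policy, column names, and two-state design below are invented for illustration; SAM itself, built on Mandatory Result Automata, also analyzes query outputs.

```python
# Toy sketch of a security automaton monitoring query column access.

class QueryMonitor:
    """Deny sensitive reads once a session has read quasi-identifiers."""
    SENSITIVE = {"ssn", "salary"}
    QUASI = {"zip", "birthdate"}

    def __init__(self):
        self.state = "fresh"   # becomes "tainted" after quasi-ids are read

    def check(self, columns):
        cols = set(columns)
        if self.state == "tainted" and cols & self.SENSITIVE:
            return "deny"      # would link quasi-ids with sensitive data
        if cols & self.QUASI:
            self.state = "tainted"
        return "allow"

m = QueryMonitor()
print(m.check(["name", "zip"]))   # allowed, but taints the session
print(m.check(["salary"]))        # denied: re-identification risk
```

The point of the automaton formulation is that policies like this are stateful: whether a query is permitted depends on the history of earlier queries, which plain access control lists cannot express.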

  1. Local Mapping Based Modeling Sensor Network Data Algorithm for Query Processing

    Directory of Open Access Journals (Sweden)

    Xin Song

    2012-09-01

    In a Wireless Sensor Network (WSN), sensors are devices used to collect data from the environment related to the detection or measurement of physical phenomena. Raw sensor network data commonly arrive in large volumes and are high-dimensional, and spatial data mining techniques can be used effectively to retrieve high-dimensional spatial information from a WSN. Because the sensors in a WSN have limited energy, energy-efficient query evaluation is critical to prolonging the system lifetime. In this paper, we propose a local mapping based algorithm for modeling sensor network data during query processing, to render effective spatial data retrieval services in the resource-constrained WSN. The algorithm associates the features present in the spatial data and projects high-dimensional data to a feature space by using local mapping linear embedding, reducing the computation required for spatial data retrieval. As the experimental results show, the algorithm preserves important spatial feature information and provides effective preprocessing analysis results. Furthermore, it preserves features more effectively than locally linear embedding (LLE) when the sample space of the source data is sparse.

  2. Probabilistic Data Modeling and Querying for Location-Based Data Warehouses

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to handle complex, dynamic, uncertain multidimensional data in location-based warehouses, this paper proposes a novel probabilistic data model that can address the complexities of such data. The model provides a foundation for handling complex hierarchical...... and uncertain data, e.g., data from the location-based services domain such as transportation infrastructures and the attached static and dynamic content such as speed limits and vehicle positions. The paper also presents algebraic operators that support querying of such data. Use of pre......-aggregation for implementation of the operators is also discussed. The work is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services....

  3. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context...... of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism...... efficiently combines distributed learner models without the need to exchange internal structure of local Bayesian networks, nor local evidence between the involved platforms....

  4. B.E.A.R. GeneInfo: A tool for identifying gene-related biomedical publications through user modifiable queries

    Directory of Open Access Journals (Sweden)

    Zhou Guohui

    2004-04-01

    Background Once specific genes are identified through high throughput genomics technologies there is a need to sort the final gene list to a manageable size for validation studies. The triaging and sorting of genes often relies on the use of supplemental information related to gene structure, metabolic pathways, and chromosomal location. Yet in disease states where the genes may not have identifiable structural elements, poorly defined metabolic pathways, or limited chromosomal data, flexible systems for obtaining additional data are necessary. In these situations, a tool for searching the biomedical literature using the list of identified genes while simultaneously defining additional search terms would be useful. Results We have built a tool, BEAR GeneInfo, that allows flexible searches based on the investigator's knowledge of the biological process, thus allowing for data mining that is specific to the scientist's strengths and interests. This tool allows a user to upload a series of GenBank accession numbers, Unigene Ids, Locuslink Ids, or gene names. BEAR GeneInfo takes these IDs and identifies the associated gene names, and uses the lists of gene names to query PubMed. The investigator can add additional modifying search terms to the query. The subsequent output provides a list of publications, along with the associated reference hyperlinks, for reviewing the identified articles for relevance and interest. An example of the use of this tool in the study of human prostate cancer cells treated with Selenium is presented. Conclusions This tool can be used to further define a list of genes that have been identified through genomic or genetic studies. Through the use of targeted searches with additional search terms the investigator can limit the list to genes that match their specific research interests or needs. The tool is freely available on the web at http://prostategenomics.org, and the authors will provide scripts and

  5. Extending OLAP Querying to External Object

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Shoshani, Arie; Gu, Junmin

    inherent in data in nonstandard applications are not accommodated well by OLAP systems. In contrast, object database systems are built to handle such complexity, but do not support OLAP-type querying well. This paper presents the concepts and techniques underlying a flexible, multi-model federated system...... that enables OLAP users to exploit simultaneously the features of OLAP and object systems. The system allows data to be handled using the most appropriate data model and technology: OLAP systems for dimensional data and object database systems for more complex, general data. Additionally, physical data...... integration can be avoided. As a vehicle for demonstrating the capabilities of the system, a prototypical OLAP language is defined and extended to naturally support queries that involve data in object databases. The language permits selection criteria that reference object data, queries that return...

  6. Learning jQuery

    CERN Document Server

    Chaffer, Jonathan

    2013-01-01

    Step through each of the core concepts of the jQuery library, building an overall picture of its capabilities. Once you have thoroughly covered the basics, the book returns to each concept to cover more advanced examples and techniques. This book is for web designers who want to create interactive elements for their designs, and for developers who want to create the best user interface for their web applications. Basic JavaScript programming and knowledge of HTML and CSS are required. No knowledge of jQuery is assumed, nor is experience with any other JavaScript libraries.

  7. Information Filtering via Collaborative User Clustering Modeling

    CERN Document Server

    Zhang, Chu-Xu; Yu, Lu; Liu, Chuang; Liu, Hao; Yan, Xiao-Yong

    2013-01-01

    The past few years have witnessed the great success of recommender systems, which can significantly help users find personalized items in the information era. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most research on this topic has focused on mining the direct relationships between users and items. In this paper, we optimize standard MF by integrating a user clustering regularization term. Our model considers not only the user-item rating information, but also takes the user's interests into account. We compared the proposed model with three other typical methods: User-Mean (UM), Item-Mean (IM) and standard MF. Experimental results on a real-world dataset, MovieLens, show that our method performs much better than the other three methods in recommendation accuracy.

  8. Information filtering via collaborative user clustering modeling

    Science.gov (United States)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Yu, Lu; Liu, Chuang; Liu, Hao; Yan, Xiao-Yong

    2014-02-01

    The past few years have witnessed the great success of recommender systems, which can significantly help users find personalized items in the information era. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most research on this topic has focused on mining the direct relationships between users and items. In this paper, we optimize standard MF by integrating a user clustering regularization term. Our model considers not only the user-item rating information but also the user information. In addition, we compared the proposed model with three other typical methods: User-Mean (UM), Item-Mean (IM) and standard MF. Experimental results on two real-world datasets, MovieLens 1M and MovieLens 100k, show that our method performs better than the other three methods in recommendation accuracy.
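The idea shared by the two records above can be sketched as standard matrix factorization by SGD with one extra regularizer: each user's latent vector is pulled toward the centroid of that user's cluster. The ratings, cluster assignments, and hyperparameters below are synthetic (the papers derive clusters from user interests), so this is a structural sketch, not the authors' model.

```python
# Sketch: MF by SGD with a user-clustering regularization term (toy data).
import random

random.seed(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0),
           (2, 1, 4.0), (2, 2, 5.0), (3, 0, 2.0), (3, 2, 4.0)]
cluster_of = {0: 0, 1: 0, 2: 1, 3: 1}          # users 0,1 vs. users 2,3
K, lr, reg, creg, epochs = 2, 0.05, 0.02, 0.05, 200

U = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(4)]  # users
V = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(3)]  # items

def centroid(c):
    members = [U[u] for u, cc in cluster_of.items() if cc == c]
    return [sum(col) / len(members) for col in zip(*members)]

for _ in range(epochs):
    for u, i, r in ratings:
        err = r - sum(U[u][k] * V[i][k] for k in range(K))
        cen = centroid(cluster_of[u])
        for k in range(K):
            # gradient step: MF error + L2 shrinkage + cluster-cohesion pull
            U[u][k] += lr * (err * V[i][k] - reg * U[u][k]
                             - creg * (U[u][k] - cen[k]))
            V[i][k] += lr * (err * U[u][k] - reg * V[i][k])

rmse = (sum((r - sum(U[u][k] * V[i][k] for k in range(K))) ** 2
            for u, i, r in ratings) / len(ratings)) ** 0.5
print(round(rmse, 3))   # training RMSE after fitting
```

The `creg` term is what distinguishes this from plain regularized MF: users assigned to the same cluster are encouraged to share similar latent factors, which is how the papers inject user-interest information into the factorization.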

  9. A new approach to query expansion in information retrieval

    Institute of Scientific and Technical Information of China (English)

    Li Weijiang; Zhao Tiejun; Wang Xiangang

    2008-01-01

    To eliminate the mismatch between the words of relevant documents and a user's query, and the serious negative effects it has on the performance of information retrieval, a method of query expansion based on a new representation of term co-occurrence was put forward by analyzing the process of query production. The expansion terms were selected according to their correlation with the whole query, and the positional information between terms was also considered. Experimental results on Text REtrieval Conference (TREC) data collections show that the proposed method consistently improves over the language modeling method without expansion by 5%-19%. Compared to pseudo feedback, the popular approach to query expansion, the precision of the proposed method is competitive.
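The core idea, ranking expansion candidates by their correlation with the whole query rather than with any single term, can be sketched as follows. The corpus, query, and scoring are toy examples invented for illustration; the paper's actual co-occurrence representation also uses positional information.

```python
# Sketch: whole-query co-occurrence ranking for expansion terms (toy corpus).
from collections import Counter

docs = [
    "java virtual machine bytecode interpreter",
    "java coffee arabica beans roast",
    "python interpreter bytecode compiler",
    "java bytecode compiler optimization",
]
query = ["java", "bytecode"]

def expansion_terms(docs, query, top_n=2):
    tokenized = [set(d.split()) for d in docs]
    # only documents containing the *whole* query contribute evidence,
    # so terms correlated with a single query word (e.g. "coffee" with
    # "java") are filtered out
    relevant = [t for t in tokenized if all(q in t for q in query)]
    counts = Counter(w for t in relevant for w in t if w not in query)
    return [w for w, _ in counts.most_common(top_n)]

print(expansion_terms(docs, query))
```

Note how "coffee" never becomes a candidate even though it co-occurs strongly with "java" alone; conditioning on the full query is what avoids that kind of topic drift.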

  10. Research of User Profile Model in Personalized Search

    Institute of Scientific and Technical Information of China (English)

    宋毅; 徐志明

    2011-01-01

    The goal of this research is to mine users' interest preferences in a search engine and realize personalized search. The method identifies the user's query string and mines the user's interest categories from the query; however, queries are often short or ambiguous. Since a query returns a set of documents, classifying the returned documents makes it possible to identify the user's interest preferences more clearly. The results show that, through a document relation matrix, a query can be mapped to its corresponding categories, revealing the user's interests; queries that fall into several categories can be resolved through query expansion. In conclusion, the model takes the user's query and the related documents as input and outputs the user's interest preferences, which provides a foundation for search-engine ranking and recommendation techniques.

  11. Bayesian Query-Focused Summarization

    CERN Document Server

    Daumé, Hal

    2009-01-01

    We present BayeSum (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

  12. Query optimization over crowdsourced data

    KAUST Repository

    Park, Hyunjung

    2013-08-26

    Deco is a comprehensive system for answering declarative queries posed over stored relational data together with data obtained on-demand from the crowd. In this paper we describe Deco's cost-based query optimizer, building on Deco's data model, query language, and query execution engine presented earlier. Deco's objective in query optimization is to find the best query plan to answer a query, in terms of estimated monetary cost. Deco's query semantics and plan execution strategies require several fundamental changes to traditional query optimization. Novel techniques incorporated into Deco's query optimizer include a cost model distinguishing between "free" existing data versus paid new data, a cardinality estimation algorithm coping with changes to the database state during query execution, and a plan enumeration algorithm maximizing reuse of common subplans in a setting that makes reuse challenging. We experimentally evaluate Deco's query optimizer, focusing on the accuracy of cost estimation and the efficiency of plan enumeration.
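The idea of a cost model that separates "free" existing data from paid crowd data can be sketched as follows. The task prices, cardinalities, and plan shapes are hypothetical illustrations, not Deco's actual formulas:

```python
# Illustrative sketch: a monetary cost model that, like Deco's, charges only
# for tuples that must be crowdsourced, while stored tuples are free.

def estimate_plan_cost(required_tuples, stored_tuples, price_per_task,
                       answers_per_task=1):
    """Estimated monetary cost of a plan that needs `required_tuples` rows."""
    missing = max(0, required_tuples - stored_tuples)  # existing data is free
    tasks = -(-missing // answers_per_task)            # ceiling division
    return tasks * price_per_task

# Plan A: a selective filter leaves little stored data, so the crowd fills in more.
plan_a = estimate_plan_cost(required_tuples=50, stored_tuples=10, price_per_task=0.05)
# Plan B: reuses a common subplan whose results are already stored.
plan_b = estimate_plan_cost(required_tuples=50, stored_tuples=45, price_per_task=0.05)
best = min(("A", plan_a), ("B", plan_b), key=lambda p: p[1])
print(best)  # the optimizer picks the cheaper plan
```

The real optimizer additionally has to re-estimate `stored_tuples` as crowd answers arrive, since the database state changes during execution.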

  13. Recommendation Sets and Choice Queries

    DEFF Research Database (Denmark)

    Viappiani, Paolo Renato; Boutilier, Craig

    2011-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and offer recommendations based on the system's belief about the user's utility function. We analyze the connection between...... the problem of generating optimal recommendation sets and the problem of generating optimal choice queries, considering both Bayesian and regret-based elicitation. Our results show that, somewhat surprisingly, under very general circumstances, the optimal recommendation set coincides with the optimal query....

  14. Modelling Spatio-Temporal Relevancy in Urban Context-Aware Pervasive Systems Using Voronoi Continuous Range Query and Multi-Interval Algebra

    Directory of Open Access Journals (Sweden)

    Najmeh Neysani Samany

    2013-01-01

    Full Text Available Space and time are two dominant factors in context-aware pervasive systems which determine whether an entity is related to the moving user or not. This paper specifically addresses the use of spatio-temporal relations for detecting spatio-temporally relevant contexts to the user. The main contribution of this work is that the proposed model is sensitive to the velocity and direction of the user and applies customized Multi Interval Algebra (MIA) with Voronoi Continuous Range Query (VCRQ) to introduce spatio-temporally relevant contexts according to their arrangement in space. In this implementation the Spatio-Temporal Relevancy Model for Context-Aware Systems (STRMCAS) helps the tourist to find his/her preferred areas that are spatio-temporally relevant. The experimental results in a scenario of tourist navigation were evaluated with respect to the accuracy of the model, performance time, and satisfaction of users in 30 iterations of the algorithm. The evaluation process demonstrated the efficiency of the model in real-world applications.

  15. Privacy Preserving Moving KNN Queries

    CERN Document Server

    Hashem, Tanzima; Zhang, Rui

    2011-01-01

    We present a novel approach that protects trajectory privacy of users who access location-based services through a moving k nearest neighbor (MkNN) query. An MkNN query continuously returns the k nearest data objects for a moving user (query point). Simply updating a user's imprecise location such as a region instead of the exact position to a location-based service provider (LSP) cannot ensure privacy of the user for an MkNN query: continuous disclosure of regions enables the LSP to follow a user's trajectory. We identify the problem of trajectory privacy that arises from the overlap of consecutive regions while requesting an MkNN query and provide the first solution to this problem. Our approach allows a user to specify the confidence level that represents a bound of how much more the user may need to travel than the actual kth nearest data object. By hiding a user's required confidence level and the required number of nearest data objects from an LSP, we develop a technique to prevent the LSP from tracking...

  16. Optimizing Phylogenetic Queries for Performance.

    Science.gov (United States)

    Jamil, Hasan M

    2017-08-24

    The vast majority of phylogenetic databases do not support declarative querying through which their contents could be flexibly and conveniently accessed, and the template-based query interfaces they provide do not allow arbitrary speculative queries. They therefore also do not support query optimization that leverages unique phylogeny properties. While a small number of graph query languages such as XQuery, Cypher and GraphQL exist for computer-savvy users, most are too general and complex to be useful for biologists, and too inefficient for querying large phylogenies. In this paper, we discuss a recently introduced visual query language, called PhyQL, that leverages phylogeny-specific properties to support essential and powerful constructs for a large class of phylogenetic queries. We develop a range of pruning aids, and propose a substantial set of query optimization strategies using these aids suitable for large phylogeny querying. A hybrid optimization technique that exploits a set of indices and "graphlet" partitioning is discussed. A "fail soonest" strategy is used to avoid hopeless processing and is shown to produce dividends. Possible novel optimization techniques yet to be explored are also discussed.

  17. Querying Archetype-Based Electronic Health Records Using Hadoop and Dewey Encoding of openEHR Models.

    Science.gov (United States)

    Sundvall, Erik; Wei-Kleiner, Fang; Freire, Sergio M; Lambrix, Patrick

    2017-01-01

    Archetype-based Electronic Health Record (EHR) systems using generic reference models from e.g. openEHR, ISO 13606 or CIMI should be easy to update and reconfigure with new types (or versions) of data models or entries, ideally with very limited programming or manual database tweaking. Exploratory research (e.g. epidemiology) leading to ad-hoc querying on a population-wide scale can be a challenge in such environments. This publication describes the implementation and testing of an archetype-aware Dewey encoding optimization that can be used to produce such systems in environments supporting relational operations, e.g. RDBMSs and distributed map-reduce frameworks like Hadoop. Initial testing was done using a nine-node 2.2 GHz quad-core Hadoop cluster querying a dataset consisting of targeted extracts from 4+ million real patient EHRs; query results with sub-minute response times were obtained.
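The core of a Dewey encoding is that every node stores its path of sibling ordinals, so subtree ("containment") queries reduce to prefix comparisons that relational engines and map-reduce jobs handle efficiently. A minimal sketch, with invented paths and archetype names:

```python
# Toy Dewey-encoded EHR fragment: (dewey_path, node_description) rows.
# Paths and archetype names are invented examples, not real openEHR data.
rows = [
    ("1",     "COMPOSITION openEHR-EHR-COMPOSITION.encounter.v1"),
    ("1.1",   "OBSERVATION openEHR-EHR-OBSERVATION.blood_pressure.v1"),
    ("1.1.1", "ELEMENT systolic"),
    ("1.1.2", "ELEMENT diastolic"),
    ("1.2",   "OBSERVATION openEHR-EHR-OBSERVATION.body_weight.v1"),
]

def descendants(rows, dewey):
    """All nodes inside the subtree rooted at `dewey` (a prefix match)."""
    prefix = dewey + "."
    return [(p, n) for p, n in rows if p == dewey or p.startswith(prefix)]

for path, node in descendants(rows, "1.1"):
    print(path, node)
```

In SQL the same query is `WHERE path = '1.1' OR path LIKE '1.1.%'`, which is exactly the kind of relational operation both an RDBMS and a Hadoop join can execute at scale.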

  18. Query Language for Complex Similarity Queries

    CERN Document Server

    Budikova, Petra; Zezula, Pavel

    2012-01-01

    For complex data types such as multimedia, traditional data management methods are not suitable. Instead of attribute matching approaches, access methods based on object similarity are becoming popular. Recently, this resulted in an intensive research of indexing and searching methods for the similarity-based retrieval. Nowadays, many efficient methods are already available, but using them to build an actual search system still requires specialists that tune the methods and build the system manually. Several attempts have already been made to provide a more convenient high-level interface in a form of query languages for such systems, but these are limited to support only basic similarity queries. In this paper, we propose a new language that allows to formulate content-based queries in a flexible way, taking into account the functionality offered by a particular search engine in use. To ensure this, the language is based on a general data model with an abstract set of operations. Consequently, the language s...

  19. Workforce Transition Modeling Environment user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Stahlman, E.J.; Oens, M.A.; Lewis, R.E.

    1993-10-01

    The Pacific Northwest Laboratory (PNL) was tasked by the US Department of Energy Albuquerque Field Office (DOE-AL) to develop a workforce assessment and transition planning tool to support integrated decision making at a single DOE installation. The planning tool permits coordinated, integrated workforce planning to manage growth, decline, or transition within a DOE installation. The tool enhances the links and provides commonality between strategic, programmatic, and operations planners and human resources. Successful development and subsequent complex-wide implementation of the model will also facilitate planning at the national level by enforcing a consistent format on data that are now collected by installations in corporate-specific formats that are not amenable to national-level analyses. The workforce assessment and transition planning tool, the Workforce Transition Modeling Environment (WFTME), consists of two components: the Workforce Transition Model and the Workforce Budget Constraint Model. The Workforce Transition Model, the principal of the two, helps decision makers identify and evaluate alternatives for transitioning the current workforce to meet the skills required to support projected workforce requirements. The Workforce Budget Constraint Model helps estimate the number of personnel that will be affected by a given workforce budget increase or decrease and assists in identifying how the corresponding hirings or layoffs should be distributed across the Common Occupation Classification System (COCS) occupations. This user's guide describes the use and operation of the WFTME, including the functions of modifying data and running models, the interpretation of output reports, and an approach for using the WFTME to evaluate various workforce transition scenarios.

  20. Tag Correspondence Model for User Tag Suggestion

    Institute of Scientific and Technical Information of China (English)

    涂存超; 刘知远; 孙茂松

    2015-01-01

    Some microblog services encourage users to annotate themselves with multiple tags, indicating their attributes and interests. User tags play an important role for personalized recommendation and information retrieval. In order to better understand the semantics of user tags, we propose the Tag Correspondence Model (TCM) to identify complex correspondences of tags from the rich context of microblog users. The correspondence of a tag is a unique element in the context which is semantically correlated with this tag. In TCM, we divide the context of a microblog user into various sources (such as short messages, user profile, and neighbors). With a collection of users with annotated tags, TCM can automatically learn the correspondences of user tags from multiple sources. With the learned correspondences, we are able to interpret implicit semantics of tags. Moreover, for users who have not annotated any tags, TCM can suggest tags according to the users' context information. Extensive experiments on a real-world dataset demonstrate that our method can efficiently identify correspondences of tags, which may eventually represent the semantic meanings of tags.

  1. A topological framework for interactive queries on 3D models in the Web.

    Science.gov (United States)

    Figueiredo, Mauro; Rodrigues, José I; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications.

  2. Format SPARQL Query Results into HTML Report

    Directory of Open Access Journals (Sweden)

    Dr Sunitha Abburu

    2013-07-01

    Full Text Available SPARQL is a powerful language for querying semantic data, recognized by the W3C as the query language for RDF. The SPARQL specification defines several query result formats, such as CSV, TSV and XML. These formats are neither attractive nor easily readable: the results need to be converted into an appropriate format so that users can understand them easily, and the formats above require additional transformations or tool support to present the query result in a user-readable form. The main aim of this paper is to propose a method to build an HTML report dynamically from SPARQL query results. This enables SPARQL query results to be displayed easily as an HTML report, in an attractive, understandable format, without the support of any additional or external tools or transformations.
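One straightforward way to realize such a report can be sketched under the assumption that results arrive in the standard SPARQL 1.1 JSON results format; the sample bindings below are invented:

```python
import json
from html import escape

# An invented SPARQL 1.1 JSON result (head.vars + results.bindings).
sparql_json = json.loads("""{
  "head": {"vars": ["name", "email"]},
  "results": {"bindings": [
    {"name":  {"type": "literal", "value": "Alice"},
     "email": {"type": "literal", "value": "alice@example.org"}}
  ]}
}""")

def to_html_report(res):
    """Render a SPARQL JSON result set as a plain HTML table."""
    vars_ = res["head"]["vars"]
    out = ["<table>",
           "<tr>" + "".join(f"<th>{escape(v)}</th>" for v in vars_) + "</tr>"]
    for binding in res["results"]["bindings"]:
        # Unbound variables are rendered as empty cells.
        cells = (escape(binding.get(v, {}).get("value", "")) for v in vars_)
        out.append("<tr>" + "".join(f"<td>{c}</td>" for c in cells) + "</tr>")
    out.append("</table>")
    return "\n".join(out)

print(to_html_report(sparql_json))
```

Escaping every value with `html.escape` matters because RDF literals may contain `<`, `>` or `&`, which would otherwise corrupt the generated report.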

  3. A query index for continuous queries on RFID streaming data

    Institute of Scientific and Technical Information of China (English)

    Jaekwan PARK; Bonghee HONG; Chaehoon BAN

    2008-01-01

    RFID middleware collects and filters RFID streaming data to process applications' requests called continuous queries, so named because they are executed continuously during tag movement. Several approaches to building an index on queries rather than on data records, called a query index, have been proposed to evaluate continuous queries over streaming data. EPCglobal proposed the Event Cycle Specification (ECSpec) model, which is a de facto standard query interface for RFID applications. Continuous queries based on ECSpec consist of a large number of segments that represent the query conditions. The problem when using any of the existing query indexes on these continuous queries is that it takes a long time to build the index, because a large number of segments must be inserted into it. To solve this problem, we propose a transform method that converts a group of segments into compressed data. We also propose an efficient query index scheme for the transformed space. Compared with existing query indexes, the proposed index outperforms the others on various datasets.
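The role of a query index can be illustrated with a toy one-dimensional version: each continuous query contributes segments of attribute values, and every incoming tag reading performs a stabbing query to find all matching queries. This linear-scan sketch (with invented segments) deliberately ignores the paper's transform and compression steps:

```python
# Each continuous query registers segments (lo, hi) of an attribute, e.g. a
# reader-location or EPC range. Segment values and query ids are invented.
segments = [
    (0, 10, "Q1"),
    (5, 15, "Q2"),
    (20, 30, "Q3"),
]

def stab(segments, value):
    """Stabbing query: all continuous queries whose segment contains `value`."""
    return sorted(q for lo, hi, q in segments if lo <= value <= hi)

print(stab(segments, 7))   # a reading at 7 matches Q1 and Q2
```

A real query index replaces the linear scan with an interval structure built once over all registered segments, which is exactly where the cost of inserting many ECSpec segments, and the benefit of compressing them first, shows up.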

  4. An Approach to Improve the Representation of the User Model in the Web-Based Systems

    Directory of Open Access Journals (Sweden)

    Yasser A. Nada

    2011-12-01

    Full Text Available A major shortcoming of content-based approaches lies in the representation of the user model. Content-based approaches often employ term vectors to represent each user's interests. In doing so, they ignore the semantic relations between terms of the vector space model, in which indexed terms are not orthogonal and often have semantic relatedness to one another. In this paper, we improve the representation of the user model built in content-based approaches by performing the following steps. The first is domain concept filtering, in which concepts and items of interest are compared to the domain ontology, using ontology-based semantic similarity, to check which items are relevant to our domain. The second is incorporating semantic content into the term vectors: we use word definitions and relations provided by WordNet to perform word sense disambiguation and employ domain-specific concepts as category labels for the semantically enhanced user models. The implicit information pertaining to user behavior is extracted from click-stream data or web usage sessions captured within the web server logs. Our approach also aims to keep the user model up to date by analyzing the user's history of query keywords: for a given keyword, we extract the words that have semantic relationships with it and add them to the user interest model as nodes, according to the semantic relationships in WordNet.

  5. GREAT Process Modeller user manual

    OpenAIRE

    Rueda, Urko; España, Sergio; Ruiz, Marcela

    2015-01-01

    This report contains instructions to install, uninstall and use GREAT Process Modeller, a tool that supports Communication Analysis, a communication-oriented business process modelling method. GREAT allows creating communicative event diagrams (i.e. business process models), specifying message structures (which describe the messages associated to each communicative event), and automatically generating a class diagram (representing the data model of an information system that would support suc...

  6. Condorcet query engine: A query engine for coordinated index terms

    NARCIS (Netherlands)

    van der Vet, P.E.; Mars, Nicolaas

    1999-01-01

    On-line information retrieval systems often offer their users some means to tune the query to match the level of granularity of the information request. Users can be offered a far greater range of possibilities, however, if documents are indexed with coordinated index concepts. Coordinated index

  7. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion

    DEFF Research Database (Denmark)

    Sordoni, Alessandro; Bengio, Yoshua; Vahabi, Hossein

    2015-01-01

    Users may strive to formulate an adequate textual query for their information need. Search engines assist the users by presenting query suggestions. To preserve the original search intent, suggestions should be context-aware and account for the previous queries issued by the user. Achieving context awareness is challenging due to data sparsity. We present a probabilistic suggestion model that is able to account for sequences of previous queries of arbitrary lengths. Our novel hierarchical recurrent encoder-decoder architecture allows the model to be sensitive to the order of queries in the context while avoiding data sparsity. Additionally, our model can suggest for rare, or long-tail, queries. The produced suggestions are synthetic and are sampled one word at a time, using computationally cheap decoding techniques. This is in contrast to current synthetic suggestion models relying upon machine learning pipelines and hand-engineered feature sets. Results show that it outperforms existing context-aware approaches in a next query prediction setting. In addition to query suggestion, our model is general enough to be used in a variety of other applications.
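The next-query prediction task itself can be illustrated without any neural machinery. The count-based stand-in below (with invented session logs) only shows the input/output contract of the task, not the paper's hierarchical recurrent encoder-decoder:

```python
from collections import Counter, defaultdict

# Invented search sessions: each list is one user's consecutive queries.
sessions = [
    ["cleveland", "cleveland gallery", "lake erie art"],
    ["cleveland", "cleveland gallery", "cleveland gallery hours"],
    ["lake erie", "lake erie art"],
]

# Count, for each query, which query tends to follow it (a bigram model).
next_query = defaultdict(Counter)
for session in sessions:
    for prev, nxt in zip(session, session[1:]):
        next_query[prev][nxt] += 1

def suggest(prev, k=2):
    """Suggest up to k likely next queries given the previous query."""
    return [q for q, _ in next_query[prev].most_common(k)]

print(suggest("cleveland gallery"))
```

This simple model conditions on only one previous query and cannot suggest anything for unseen (long-tail) queries; those two weaknesses are precisely what the hierarchical, word-by-word generative model in the paper addresses.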

  8. UCLA at TREC 2014 Clinical Decision Support Track: Exploring Language Models, Query Expansion, and Boosting

    Science.gov (United States)

    2014-11-01

    [Garbled extract: only fragments of the paper's Table 1 survive here. The table lists the query terms for an example patient summary, combining raw terms with stopwords removed (e.g. "decreased appetite", "enlarged appendix", "abdominal ultrasound"), MetaMap concepts such as "Right lower quadrant pain" and "Decrease in appetite" with boost weights (^20), and query expansion terms for the topic type (treat, treatment, manage, management).]

  9. EquiX-A Search and Query Language for XML.

    Science.gov (United States)

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  10. Modelling User Needs: Students as Enterprise Analysts

    Institute of Scientific and Technical Information of China (English)

    Gianmario Motta; Thiago Barroero; Daniele Sacco

    2012-01-01

    We illustrate a case study in which students designed enterprise architectures that were not only welcomed but successfully implemented. The key to success was threefold. First, the analysis framework, which integrates all the aspects of the system that are relevant to users, namely user interface, rules, and information. Second, the analysis approach, which guides analysts, through confirmatory sessions, to elicit real requirements from users. Third, the model-to-model transformation, which assures consistency from the highest aggregate abstraction down to an executable model.

  11. HTGR Cost Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Cost Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Cost Model calculates an estimate of the capital costs, annual operating and maintenance costs, and decommissioning costs for a high-temperature gas-cooled reactor. The user can generate these costs for multiple reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for a single or four-pack configuration; and for a reactor size of 350 or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Cost Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Cost Model. The model was designed for users who are familiar with the HTGR design and Excel. Modification of the HTGR Cost Model should only be performed by users familiar with Excel and Visual Basic.

  12. CLASCN: Candidate Network Selection for Efficient Top-k Keyword Queries over Databases

    Institute of Scientific and Technical Information of China (English)

    Jun Zhang; Zhao-Hui Peng; Shan Wang; Hui-Jing Nie

    2007-01-01

    Keyword Search Over Relational Databases (KSORD) enables casual or Web users to easily access databases through free-form keyword queries. Improving the performance of KSORD systems is a critical issue in this area. In this paper, a new approach, CLASCN (Classification, Learning And Selection of Candidate Networks), is developed to efficiently perform top-k keyword queries in schema-graph-based online KSORD systems. In this approach, the Candidate Networks (CNs) from trained keyword queries or executed user queries are classified and stored in the databases, and top-k results from the CNs are learned for constructing CN Language Models (CNLMs). The CNLMs are used to compute the similarity scores between a new user query and the CNs from the query. The CNs with relatively large similarity scores, which are the most promising ones for producing top-k results, are selected and executed. Currently, CLASCN is only applicable to past queries and New All-keyword-Used (NAU) queries, which are frequently submitted queries. Extensive experiments also show the efficiency and effectiveness of our CLASCN approach.

  13. Parallel community climate model: Description and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H. [and others]

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written, and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed, along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of the reproducibility of results is included. A user's guide for PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup, and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  14. Explanations for Skyline Query Results

    DEFF Research Database (Denmark)

    Chester, Sean; Assent, Ira

    2015-01-01

    Skyline queries are a well-studied problem for multidimensional data, wherein points are returned to the user iff no other point is preferable across all attributes. This leaves only the points most likely to appeal to an arbitrary user. However, some dominated points may still be interesting, an...

  15. Object-based modeling, identification, and labeling of medical images for content-based retrieval by querying on intervals of attribute values

    Science.gov (United States)

    Thies, Christian; Ostwald, Tamara; Fischer, Benedikt; Lehmann, Thomas M.

    2005-04-01

    The classification and measuring of objects in medical images is important in radiological diagnostics and education, especially when using large databases as knowledge resources, for instance a picture archiving and communication system (PACS). The main challenge is the modeling of medical knowledge and the diagnostic context to label the sought objects. This task is referred to as closing the semantic gap between low-level pixel information and high-level application knowledge. This work describes an approach which allows labeling of a-priori unknown objects in an intuitive way. Our approach consists of four main components. First, an image is completely decomposed into all visually relevant partitions on different scales, providing a hierarchically organized set of regions. Afterwards, a set of descriptive features is computed for each of the obtained regions. In this data structure, objects are represented by regions with characteristic attributes. The actual object identification is the formulation of a query, which consists of attributes on which intervals are defined, describing those regions that correspond to the sought objects. Since the objects are a-priori unknown, they are described by a medical expert by means of an intuitive graphical user interface (GUI). This GUI is the fourth component. It enables complex object definitions by browsing the data structure and examining the attributes to formulate the query. The query is executed, and if the sought objects have not been identified, its parameterization is refined. Using this heuristic approach, object models for hand radiographs have been developed to extract bones from a single hand in different anatomical contexts. This demonstrates the applicability of the labeling concept. Using a rule for metacarpal bones on a series of 105 images, this type of bone could be retrieved with a precision of 0.53% and a recall of 0.6%.
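Querying on intervals of attribute values amounts to a conjunctive range filter over the computed region features. A minimal sketch with invented feature names, values, and thresholds:

```python
# Each region of the hierarchical decomposition carries descriptive features.
# All ids, feature names, and values below are invented for illustration.
regions = [
    {"id": 1, "area": 120.0, "elongation": 3.1, "mean_gray": 0.72},
    {"id": 2, "area": 15.0,  "elongation": 1.1, "mean_gray": 0.30},
    {"id": 3, "area": 140.0, "elongation": 2.8, "mean_gray": 0.68},
]

# A hypothetical object model: attribute intervals describing a long,
# fairly bright bone-like region (e.g. a metacarpal).
metacarpal_query = {
    "area": (100.0, 200.0),
    "elongation": (2.5, 4.0),
    "mean_gray": (0.6, 0.9),
}

def match(region, query):
    """A region matches iff every attribute lies within its interval."""
    return all(lo <= region[attr] <= hi for attr, (lo, hi) in query.items())

hits = [r["id"] for r in regions if match(r, metacarpal_query)]
print(hits)  # regions labelled as the sought object
```

In the paper's workflow the expert adjusts these intervals interactively through the GUI until the returned regions coincide with the intended anatomical object.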

  16. Improving Web Search for Difficult Queries

    Science.gov (United States)

    Wang, Xuanhui

    2009-01-01

    Search engines have now become essential tools in all aspects of our life. Although a variety of information needs can be served very successfully, there are still many queries that search engines cannot answer effectively, and these queries leave users frustrated. Since it is quite often that users encounter such "difficult…

  17. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  18. An Efficient Algorithm for Query Transformation in Semantic Query Optimization

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Semantic query optimization (SQO) is a comparatively recent approach for transforming a given query into equivalent alternative queries using matching rules, in order to select an optimal query based on the costs of executing the alternatives. The key aspect of the algorithm proposed here is that previously proposed SQO techniques can be treated uniformly within a single cost model, so that optimization opportunities are not missed. At the same time, the authors use the implication closure to guarantee that no matching rule is lost. The authors implemented their algorithm for the optimization of decomposed sub-queries in local databases in the Multi-Database Integrator (MDBI), a multidatabase project. The experimental results verify that this algorithm is effective in the process of SQO.
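The transform-then-choose structure of SQO can be sketched as follows; the integrity-constraint rule and the cost function are toy inventions, not MDBI's actual rules or cost model:

```python
# Sketch of semantic query optimization: semantic rules rewrite a query into
# equivalent alternatives, then the cheapest alternative is selected.

# Toy rule: assume an integrity constraint says every 'shipped' row has
# weight > 0, so that predicate is redundant and can be dropped.
query = {"predicates": {("status", "=", "shipped"), ("weight", ">", 0)}}

def apply_rules(q):
    """Generate the query plus its semantically equivalent alternatives."""
    alternatives = [q]
    preds = q["predicates"]
    if ("status", "=", "shipped") in preds and ("weight", ">", 0) in preds:
        alternatives.append({"predicates": preds - {("weight", ">", 0)}})
    return alternatives

def cost(q):
    # Assumed cost model: a fixed scan cost plus a per-predicate charge.
    return 1.0 + 0.2 * len(q["predicates"])

best = min(apply_rules(query), key=cost)
print(sorted(best["predicates"]))
```

With a uniform cost model like this, rewrites that add predicates (e.g. to enable an index) and rewrites that remove redundant ones compete on equal footing, which is the point the paper makes about not missing optimization opportunities.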

  19. Optimizing Temporal Queries: Efficient Handling of Duplicates

    DEFF Research Database (Denmark)

    Toman, David; Bowman, Ivan Thomas

    2001-01-01

    Recent research in the area of temporal databases has proposed a number of query languages that vary in their expressive power and the semantics they provide to users. These query languages represent a spectrum of solutions to the tension between clean semantics and efficient evaluation. Often, these query languages are implemented by translating temporal queries into standard relational queries. However, the compiled queries are often quite cumbersome and expensive to execute even using state-of-the-art relational products. This paper presents an optimization technique that produces more efficient translated SQL queries by taking into account the properties of the encoding used for temporal attributes. For concreteness, this translation technique is presented in the context of SQL/TP; however, these techniques are also applicable to other temporal query languages.
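As a hedged illustration of the general technique (not the SQL/TP compiler itself): a common encoding stores validity time as a half-open [start, end) interval in two columns, so a "valid at time t" query compiles to a plain range predicate. Table and column names are assumptions.

```python
# Sketch: compile a point-in-time temporal query to SQL over a
# [v_start, v_end) interval encoding of the temporal attribute.
def valid_at(table, t):
    return (f"SELECT * FROM {table} "
            f"WHERE v_start <= {t} AND {t} < v_end")

# e.g. all employee rows valid at time 42
sql = valid_at("employee", 42)
print(sql)
```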

  1. GEOS-5 Chemistry Transport Model User's Guide

    Science.gov (United States)

    Kouatchou, J.; Molod, A.; Nielsen, J. E.; Auer, B.; Putman, W.; Clune, T.

    2015-01-01

    The Goddard Earth Observing System version 5 (GEOS-5) General Circulation Model (GCM) makes use of the Earth System Modeling Framework (ESMF) to enable model configurations with many functions. One of the options of the GEOS-5 GCM is the GEOS-5 Chemistry Transport Model (GEOS-5 CTM), an offline simulation of chemistry and constituent transport driven by a specified meteorology and other model output fields. This document describes the basic components of the GEOS-5 CTM and serves as a user's guide on how to obtain and run simulations on the NCCS Discover platform. In addition, we provide information on how to change the model configuration input files to meet users' needs.

  2. User behavioral model in hypertext environment

    Science.gov (United States)

    Moskvin, Oleksii M.; Sailarbek, Saltanat; Gromaszek, Konrad

    2015-12-01

    Users traversing Internet resources, and their activities, play an important role that in practice is rarely considered by resource owners when adjusting and optimizing hypertext structure. An optimal hypertext structure allows users to locate pages of interest, the goals of an informational search, more quickly. The paper presents a model that analyses the behavior of a user audience in order to infer their goals within a particular hypertext segment, and finds optimal routes for reaching those goals in terms of route length and informational value. The main potential application of the proposed model is in systems that evaluate hypertext networks and optimize their referential structure for faster information retrieval.
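The notion of an optimal route can be sketched with a plain breadth-first search over a toy link graph. The graph and page names below are invented, and the paper's informational-value weighting is omitted — this only shows the "shortest click-path" part of the idea.

```python
from collections import deque

# Illustrative hypertext link graph: page -> outgoing links.
LINKS = {
    "home": ["news", "products"],
    "news": ["home"],
    "products": ["specs", "home"],
    "specs": [],
}

def shortest_path(graph, start, goal):
    """Return the shortest click-path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(LINKS, "home", "specs"))
```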

  3. User Requirements and Domain Model Engineering

    NARCIS (Netherlands)

    Specht, Marcus; Glahn, Christian

    2006-01-01

    Specht, M., & Glahn, C. (2006). User requirements and domain model engineering. Presentation at International Workshop in Learning Networks for Lifelong Competence Development. March, 30-31, 2006. Sofia, Bulgaria: TENCompetence Conference. Retrieved June 30th, 2006, from http://dspace.learningnetwor

  5. Object-Extended OLAP Querying

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Gu, Junmin; Shoshani, Arie

    2009-01-01

    The complexity inherent in data in non-standard applications is not accommodated well by OLAP systems. In contrast, object database systems are built to handle such complexity, but do not support OLAP-type querying well. This paper presents the concepts and techniques underlying a flexible, "multi-model" federated system that enables OLAP users to exploit simultaneously the features of OLAP and object systems. The system allows data to be handled using the most appropriate data model and technology: OLAP systems for dimensional data and object database systems for more complex, general data. This allows data analysis on the OLAP data to be significantly enriched by the use of additional object data. Additionally, physical integration of the OLAP and the object data can be avoided. As a vehicle for demonstrating the capabilities of the system, a prototypical OLAP language is defined and extended to naturally...

  6. Internet User Behaviour Model Discovery Process

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The Academy of Economic Studies has more than 45,000 students and about 5,000 computers with Internet access connected to the AES network. Students access the Internet on these computers through a proxy server, which stores information about the way the Internet is accessed. In this paper, we describe the process of discovering Internet user behavior models by analyzing raw proxy server data, and we emphasize the importance of such models for the e-learning environment.
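A minimal sketch of the first step of such a discovery process, assuming a simplified squid-like log line format (timestamp, user, method, URL) rather than the actual AES proxy data: aggregate per-user domain counts, which can then feed a behavior model.

```python
import re
from collections import Counter

# Illustrative proxy log (format is an assumption for the sketch).
LOG = """\
1112 user1 GET http://example.com/a
1113 user1 GET http://example.com/b
1114 user2 GET http://news.org/x
"""

def domain_counts(log):
    """Count accessed domains per user from raw proxy log lines."""
    counts = {}
    for line in log.splitlines():
        _, user, _, url = line.split()
        domain = re.match(r"https?://([^/]+)", url).group(1)
        counts.setdefault(user, Counter())[domain] += 1
    return counts

print(domain_counts(LOG))
```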

  8. A New User Model based Interactive Product Retrieval Process for improved eBuying

    Directory of Open Access Journals (Sweden)

    Oshadi Alahakoon

    2015-09-01

    Full Text Available When searching for items online, e-buyers commonly encounter three problems: null retrieval, retrieval of an unmanageable number of items, and retrieval of unsatisfactory items. In the past, information retrieval systems or recommender systems were used as solutions. With information retrieval systems, overly rigorous filtering based on the user query, intended to reduce an unmanageable number of items, results in either null retrieval or the filtering out of items users prefer. Recommender systems, on the other hand, do not provide sufficient opportunity for users to communicate their needs. As a solution, this paper introduces a novel method combining a user model with an interactive product retrieval process. The new layered user model has the potential of being applied across multiple product and service domains and is able to adapt to changing user preferences. The new product retrieval algorithm is integrated with the user model and successfully addresses null retrieval, retrieval of an unmanageable number of items, and retrieval of unsatisfactory items. The process is demonstrated using a benchmark dataset and a case study. Finally, the product retrieval process is evaluated against a set of guidelines to illustrate its suitability to current eBuying environments.
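One simple way to avoid null retrieval, sketched here as an assumption rather than the paper's algorithm: when strict filtering returns nothing, progressively relax the least important constraint instead of returning an empty result set. Items, fields, and the priority ordering are all invented for the example.

```python
ITEMS = [
    {"brand": "A", "price": 120, "color": "red"},
    {"brand": "B", "price": 80, "color": "red"},
]

def retrieve(items, constraints, priority):
    """priority lists constraint keys from least to most important."""
    active = dict(constraints)
    order = [k for k in priority if k in active]
    while True:
        hits = [i for i in items
                if all(i.get(k) == v for k, v in active.items())]
        if hits or not order:
            return hits
        active.pop(order.pop(0))  # relax the least important constraint

# Strict match is empty, so "brand" is dropped and the price match wins.
print(retrieve(ITEMS, {"brand": "A", "price": 80}, ["brand", "price"]))
```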

  9. Wake Vortex Inverse Model User's Guide

    Science.gov (United States)

    Lai, David; Delisi, Donald

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. 
An example of an inversion input...
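The stopping rule described above — improvement in the rms deviation below 1 percent for two consecutive iterations — can be sketched as follows. The list of rms values stands in for real forward-model runs; this is schematic, not NWRA's code.

```python
def invert(rms_per_iteration, tol=0.01, needed=2):
    """Return the iteration index at which the stopping rule fires:
    relative rms improvement < tol for `needed` consecutive iterations."""
    streak = 0
    prev = rms_per_iteration[0]
    for i, rms in enumerate(rms_per_iteration[1:], start=1):
        improvement = (prev - rms) / prev
        streak = streak + 1 if improvement < tol else 0
        prev = rms
        if streak >= needed:
            return i
    return len(rms_per_iteration) - 1  # budget exhausted

print(invert([10, 5, 2, 1.99, 1.985]))
```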

  10. The PANTHER User Experience

    Energy Technology Data Exchange (ETDEWEB)

    Coram, Jamie L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Morrow, James D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perkins, David Nikolaus [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geo-spatial and temporal visualizations. Development of this application has made user experience an explicit driver for project and algorithmic level decisions that will affect how analysts one day make use of PANTHER technologies.

  11. A solution of spatial query processing and query optimization for spatial databases

    Institute of Scientific and Technical Information of China (English)

    YUAN Jie; XIE Kun-qing; MA Xiu-jun; ZHANG Min; SUN Le-bin

    2004-01-01

    Recently, attention has been focused on spatial query languages, which are used to query spatial databases. This paper presents a design of a spatial query language obtained by extending the standard relational database query language SQL. It recognizes the significantly different requirements of spatial data handling and overcomes the inherent problems of applying conventional database query languages. The design is based on an extended spatial data model, including spatial data types and the spatial operators on them. The processing and optimization of spatial queries are also discussed in this design. Finally, an implementation of the design is given in a spatial query subsystem.

  12. Query-By-Keywords (QBK): Query Formulation Using Semantics and Feedback

    Science.gov (United States)

    Telang, Aditya; Chakravarthy, Sharma; Li, Chengkai

    The staples of information retrieval have been querying and search, respectively, for structured and unstructured repositories. Processing queries over known, structured repositories (e.g., databases) is well understood, and search has become ubiquitous for unstructured repositories (e.g., the Web). Searching structured repositories has also been explored, to a limited extent; however, there is not much work on querying unstructured sources. We argue that querying unstructured sources is the next step in performing focused retrievals. This paper proposes a new approach to generate queries from search-like inputs for unstructured repositories. Instead of burdening the user with schema details, we believe that pre-discovered semantic information in the form of taxonomies, relationships of keywords based on context, and attribute and operator compatibility can be used to generate query skeletons. Furthermore, progressive feedback from users can be used to improve the accuracy of the query skeletons generated.

  13. A Quaternary-Stage User Interest Model Based on User Browsing Behavior and Web Page Classification

    Institute of Scientific and Technical Information of China (English)

    Zongli Jiang; Hang Su

    2012-01-01

    The key to a personalized search engine lies in its user model. In traditional personalized models, the results of a secondary search are biased towards long-term interests; moreover, forgetting applied to long-term interests prevents effective recollection of user interests. This paper presents a quaternary-stage user interest model based on user browsing behavior and web page classification. Drawing on the principles of the cache and the recycle bin in operating systems, it adds an illuminating text stage and a recycle-bin interest stage in front of and behind the traditional interest model, respectively, to constitute the quaternary-stage user interest model. The model better reflects user interests by using an adaptive natural weight and its calculation method, and by efficiently integrating user browsing behavior and web document content.

  14. Analyzing Medical Image Search Behavior: Semantics and Prediction of Query Results.

    Science.gov (United States)

    De-Arteaga, Maria; Eggel, Ivan; Kahn, Charles E; Müller, Henning

    2015-10-01

    Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which make it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search, and proposes potential improvements in the form of suggested query modifications. For example, many queries contain only a few terms and are therefore not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will return, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and can thus be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis, together with data on reformulations made by users in the past, can aid the development of better search systems, particularly to improve results for novice users. This paper therefore offers important ideas towards a better understanding of how people search and how to use this knowledge to improve the performance of specialized medical search engines.
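A toy version of the empty-result predictor might simply flag queries containing unindexed (likely misspelled) terms or too many conjunctive terms. The vocabulary and thresholds below are invented for illustration; the article's actual model is trained on query-term characteristics.

```python
# Tiny stand-in for the search engine's indexed vocabulary.
VOCAB = {"fracture", "femur", "mri", "lung"}

def looks_empty(query):
    """Heuristic: predict whether a query will return zero results."""
    terms = query.lower().split()
    unknown = sum(t not in VOCAB for t in terms)
    # Unknown/misspelled terms or overly long conjunctions are risky.
    return unknown > 0 or len(terms) > 3

print(looks_empty("femur fracture"))   # both terms are indexed
print(looks_empty("femer fracture"))   # likely misspelling
```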

  15. A Faceted Query Engine Applied to Archaeology

    Directory of Open Access Journals (Sweden)

    Kenneth A. Ross

    2007-04-01

    Full Text Available In this article we present the Faceted Query Engine, a system developed at Columbia University under the aegis of the interdisciplinary project Computational Tools for Modeling, Visualizing and Analyzing Historic and Archaeological Sites. Our system is based on novel database systems research that has been published in computer science venues (Ross and Janevski, 2004; Ross et al., 2005). The goal of this article is to introduce our system to the target user audience: the archaeology community. We demonstrate the use of the Faceted Query Engine on a previously unpublished dataset: the Thulamela (South Africa) collection. This dataset comprises Iron Age finds from the Thulamela site in the Kruger National Park. Our project is the first to systematically compile and classify this dataset. We also use a larger dataset, a collection of ancient Egyptian artifacts from the Memphis site (Giddy, 1999), to demonstrate some of the features of our system.

  16. Cache-Based Aggregate Query Shipping: An Efficient Scheme of Distributed OLAP Query Processing

    Institute of Scientific and Technical Information of China (English)

    Hua-Ming Liao; Guo-Shun Pei

    2008-01-01

    Our study introduces a novel distributed query plan refinement phase in an enhanced architecture of a distributed query processing engine (DQPE). Query plan refinement generates potentially efficient distributed query plans via the reusable aggregate query shipping (RAQS) approach. The approach improves response time at the cost of pre-processing time. If the overheads cannot be compensated for by the reuse of query results, RAQS is no longer favorable. Therefore a global cost estimation model is employed to choose the proper operators: RR_Agg, R_Agg, or R_Scan. For the purpose of reusing the results of queries with aggregate functions in distributed query processing, a multi-level hybrid view caching (HVC) scheme is introduced. The scheme retains the advantages of partial-match and aggregate query result caching. Evaluations of our solution with distributed TPC-H queries show significant improvement in average response time.

  17. AQBE — QBE Style Queries for Archetyped Data

    Science.gov (United States)

    Sachdeva, Shelly; Yaginuma, Daigo; Chu, Wanming; Bhalla, Subhash

    Large-scale adoption of electronic healthcare applications requires semantic interoperability. Recent proposals describe an advanced (multi-level) DBMS architecture providing repository services for patients' health records. Such systems also require query interfaces at multiple levels, including interfaces suited to semi-skilled users. In this regard, this study examines a high-level user interface for querying the new form of standardized Electronic Health Records. It proposes a step-by-step graphical query interface that allows semi-skilled users to write queries. Its aim is to decrease user effort and communication ambiguities, and to increase user friendliness.

  18. An Ontology-Based Framework for Modeling User Behavior

    DEFF Research Database (Denmark)

    Razmerita, Liana

    2011-01-01

    This paper focuses on the role of user modeling and semantically enhanced representations for personalization. It presents a generic Ontology-based User Modeling framework (OntobUMf), its components, and its associated user modeling processes. This framework models the behavior of the users and classifies its users according to their behavior. The user ontology is the backbone of OntobUMf and has been designed according to the Information Management System Learning Information Package (IMS LIP); it includes a Behavior concept that extends the IMS LIP specification. The results of this research may contribute to the development of other frameworks for modeling user behavior, other semantically enhanced user modeling frameworks, or other semantically enhanced information systems.

  19. User Modeling Combining Access Logs, Page Content and Semantics

    CERN Document Server

    Fortuna, Blaz; Grobelnik, Marko

    2011-01-01

    The paper proposes an approach to modeling users of large Web sites based on combining different data sources: access logs and content of the accessed pages are combined with semantic information about the Web pages, the users and the accesses of the users to the Web site. The assumption is that we are dealing with a large Web site providing content to a large number of users accessing the site. The proposed approach represents each user by a set of features derived from the different data sources, where some feature values may be missing for some users. It further enables user modeling based on the provided characteristics of the targeted user subset. The approach is evaluated on real-world data where we compare performance of the automatic assignment of a user to a predefined user segment when different data sources are used to represent the users.

  20. Multi-Dimensional Path Queries

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    We present the path-relationship model, which supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary relationship between paths that can be used to create nested path structures. We present an SQL-like query language based on path expressions, and we show how to use it to express multi-dimensional path queries suited for advanced data analysis in decision-support environments such as data warehousing environments.

  1. Optimization and Evaluation of Nested Queries and Procedures

    CERN Document Server

    Guravannavar, Ravindra

    2009-01-01

    Many database applications perform complex data retrieval and update tasks. Nested queries, and queries that invoke user-defined functions written in a mix of procedural and SQL constructs, are often used in such applications. A straightforward evaluation of such queries involves repeated execution of parameterized sub-queries or blocks containing queries and procedural code. An important problem that arises while optimizing nested queries, as well as queries with joins, aggregates and set operations, is that of finding an optimal sort order from a factorial number of possible sort orders. We show that even a special case of this problem is NP-hard, and we present practical heuristics that are effective and easy to incorporate in existing query optimizers. We also consider iterative execution of queries and updates inside complex procedural blocks such as user-defined functions and stored procedures. Parameter batching is an important means of improving performance as it enables set-oriented...

  2. Research on Bayesian Network Based User's Interest Model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weifeng; XU Baowen; CUI Zifeng; XU Lei

    2007-01-01

    Filtering and selectively retrieving from the vast amount of information on the Internet has great practical significance for improving the quality of the information users access. Based on an analysis of existing user interest models and some basic questions of user interest (representation, derivation and identification), a Bayesian-network-based user interest model is given. In this model, a user interest reduction algorithm based on the Markov blanket model is used to reduce interest noise, and then documents the user is and is not interested in are used to train the Bayesian network. Compared to a simple model, this model has advantages such as small space requirements, a simple reasoning method and a high recognition rate. Experimental results show that this model more appropriately reflects the user's interest, and has higher performance and good usability.

  3. Oceanographic ontology-based spatial knowledge query

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The construction of oceanographic ontologies is fundamental to the "digital ocean". Therefore, building on the newly introduced concept of an oceanographic ontology, an oceanographic ontology-based spatial knowledge query (OOBSKQ) method was proposed and developed. Because the method uses natural language to describe query conditions and the query result is highly integrated knowledge, it can provide users with direct answers while hiding the complicated computation and reasoning processes, and it achieves intelligent, automatic oceanographic spatial information query at the level of knowledge and semantics. A case study of resource and environmental applications in a bay has shown the implementation process of the method and its feasibility and usefulness.

  4. Modelling Users' Trust in Online Social Networks

    Directory of Open Access Journals (Sweden)

    Iacob Cătoiu

    2014-02-01

    Full Text Available Previous studies (McKnight, Lankton and Tripp, 2011; Liao, Lui and Chen, 2011) have shown the crucial role of trust when choosing to disclose sensitive information online. This is the case for online social network users, who must disclose a certain amount of personal data in order to gain access to these online services. Taking into account the privacy calculus model and the risk/benefit ratio, we propose a model of users' trust in online social networks with four variables. We have adapted metrics for the purpose of our study, and we have assessed their reliability and validity. We use a Partial Least Squares (PLS) based structural equation modelling analysis, which validated all our initial assumptions, indicating that our three predictors (privacy concerns, perceived benefits and perceived risks) explain 48% of the variation in users' trust in online social networks, the resulting variable of our study. We also discuss the implications and further research opportunities of our study.

  5. Restricted natural language based querying of clinical databases.

    Science.gov (United States)

    Safari, Leila; Patrick, Jon D

    2014-12-01

    To elevate the level of care in the community, it is essential to provide healthcare professionals with usable tools to extract knowledge from clinical data. In this paper a generic translation algorithm is proposed to translate a restricted natural language query (RNLQ) into a standard query language such as SQL (Structured Query Language). A special-purpose Clinical Data Analytics Language (CliniDAL) has been introduced, which provides a scheme of six classes of clinical questioning templates. A translation algorithm is proposed to translate a user's RNLQ into SQL queries based on a similarity-based Top-k algorithm, which is used in CliniDAL's mapping process. A two-layer rule-based method is also used to interpret the temporal expressions of the query, based on the proposed temporal model. The mapping and translation algorithms are generic and thus able to work with clinical databases in three data design models, Entity-Relationship (ER), Entity-Attribute-Value (EAV) and XML, although in the current work they are implemented only for the ER and EAV design models. It is easy to compose an RNLQ via CliniDAL's interface, in which query terms are automatically mapped to the underlying data models of a Clinical Information System (CIS) with an accuracy of more than 84%, and the temporal expressions of the query, comprising absolute times, relative times or relative events, can be automatically mapped to time entities of the underlying CIS and to normalized temporal comparative values. The proposed solution, CliniDAL's generic mapping and translation algorithms enhanced by a temporal analyzer component, provides a simple mechanism for composing RNLQs to extract knowledge for analytics purposes from CISs with different data design models.
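The similarity-based mapping step can be sketched with standard-library string similarity: rank schema fields by similarity to a query term and keep the top k. The schema field names and the use of difflib are assumptions for the sketch, not CliniDAL's actual Top-k implementation.

```python
import difflib

# Hypothetical CIS schema fields for illustration.
SCHEMA = ["patient_name", "admission_date", "heart_rate", "diagnosis"]

def top_k(term, fields, k=2):
    """Return the k schema fields most similar to a query term."""
    scored = [(difflib.SequenceMatcher(None, term, f).ratio(), f)
              for f in fields]
    return [f for _, f in sorted(scored, reverse=True)[:k]]

print(top_k("heart rate", SCHEMA))
```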

  6. Approximate dictionary queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Gasieniec, Leszek

    1996-01-01

    Given a set of n binary strings of length m each, we consider the problem of answering d-queries: given a binary query string of length m, a d-query is to report whether there exists a string in the set within Hamming distance d of the query string. We present a data structure of size O(nm) supporting 1-queries in ti...
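For d = 1 the query can be answered naively by hashing the set and probing every single-bit flip of the query string. This baseline is for intuition only; it is not the paper's data structure.

```python
def one_query(strings, q):
    """Report whether some string is within Hamming distance 1 of q."""
    index = set(strings)
    if q in index:  # distance 0
        return True
    for i in range(len(q)):  # probe all m single-bit flips
        flipped = q[:i] + ("1" if q[i] == "0" else "0") + q[i + 1:]
        if flipped in index:
            return True
    return False

print(one_query(["0110", "1111"], "0100"))
```

Each query costs O(m) probes of O(m)-length strings, i.e. O(m^2) time against O(nm) space, which is exactly the trade-off the cited data structure improves upon.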

  7. SPARQL Query Re-writing Using Partonomy Based Transformation Rules

    Science.gov (United States)

    Jain, Prateek; Yeh, Peter Z.; Verma, Kunal; Henson, Cory A.; Sheth, Amit P.

    Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. For querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach that helps users intuitively write SPARQL queries to query spatial data, rather than relying on knowledge of the ontology structure. Our framework re-writes queries using transformation rules that exploit part-whole relations between geographical entities to address the mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries: evaluations used the Geonames dataset with questions from the National Geographic Bee serialized into SPARQL, and the British Administrative Geography Ontology with questions from a popular trivia website. These experiments demonstrate high precision in the retrieval of results and ease in writing queries.
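The part-whole rewrite can be caricatured as expanding a containment constraint into a union over a region's parts before matching. The partonomy table below is invented for the example; the paper's transformation rules operate on SPARQL basic graph patterns.

```python
# Toy partonomy: region -> its parts (illustrative data only).
PART_OF = {"Yorkshire": ["Leeds", "Sheffield", "York"]}

def rewrite(constraint):
    """Expand a containment constraint into a union over parts."""
    place = constraint["in"]
    return [{"in": p} for p in PART_OF.get(place, [place])]

# A query about Yorkshire becomes a disjunction over its parts.
print(rewrite({"in": "Yorkshire"}))
```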

  8. Modelling the spatial distribution of Fasciola hepatica in bovines using decision tree, logistic regression and GIS query approaches for Brazil.

    Science.gov (United States)

    Bennema, S C; Molento, M B; Scholte, R G; Carvalho, O S; Pritsch, I

    2017-11-01

    Fascioliasis is a condition caused by the trematode Fasciola hepatica. In this paper, the spatial distribution of F. hepatica in bovines in Brazil was modelled using a decision tree approach and a logistic regression, combined with a geographic information system (GIS) query. In both the decision tree and the logistic model, isothermality had the strongest influence on disease prevalence. The 50-year average precipitation in the warmest quarter of the year was also included as a risk factor, having a negative influence on parasite prevalence. The risk maps developed using both techniques showed a predicted higher prevalence mainly in the south of Brazil. Overall prediction performance seemed high, but both techniques failed to predict the medium- and high-prevalence classes accurately across the entire country. The GIS query map, based on the ranges of isothermality, minimum temperature of the coldest month, precipitation of the warmest quarter of the year, altitude and average daily land surface temperature, showed a possible presence of F. hepatica over a very large area. The risk maps produced using these methods can be used to focus the activities of animal and public health programmes, even in non-evaluated F. hepatica areas.
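The GIS-query side of such an approach amounts to testing whether a site's environmental covariates fall inside suitability ranges. The ranges below are placeholders to show the mechanism, not the paper's fitted values.

```python
# Placeholder suitability ranges per covariate: (low, high).
SUITABLE = {
    "isothermality": (45, 65),
    "precip_warm_quarter": (0, 500),
    "altitude": (0, 1200),
}

def at_risk(site):
    """A site is flagged when every covariate lies in its range."""
    return all(lo <= site[k] <= hi for k, (lo, hi) in SUITABLE.items())

print(at_risk({"isothermality": 50,
               "precip_warm_quarter": 300,
               "altitude": 800}))
```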

  9. Learning Hierarchical User Interest Models from Web Pages

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    We propose an algorithm for learning hierarchical user interest models from the Web pages users have browsed. In this algorithm, the interests of a user are represented as a tree, called a user interest tree, whose content and structure can change simultaneously to adapt to changes in the user's interests. This representation expresses a user's specific and general interests as a continuum. In some sense, specific interests correspond to short-term interests, while general interests correspond to long-term interests, so this representation more faithfully reflects the user's interests. The algorithm can automatically model a user's multiple interest domains, dynamically generate the interest models, and prune a user interest tree when the number of nodes in it exceeds a given value. Finally, we show experimental results from a Chinese Web site.
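
    A user-interest tree with a node budget can be sketched as follows. The class names and the prune-weakest-leaf policy are assumptions for illustration, not the paper's algorithm.

```python
# Minimal user-interest tree with a node budget (illustrative sketch).
class InterestNode:
    def __init__(self, topic, weight=1.0):
        self.topic, self.weight, self.children = topic, weight, []

    def add(self, topic, weight=1.0):
        child = InterestNode(topic, weight)
        self.children.append(child)
        return child

    def count(self):
        return 1 + sum(c.count() for c in self.children)

    def leaves(self, parent=None):
        if not self.children:
            return [(parent, self)] if parent else []
        out = []
        for c in self.children:
            out.extend(c.leaves(self))
        return out

    def prune(self, max_nodes):
        # drop the lowest-weight leaf until the tree fits the budget
        while self.count() > max_nodes:
            parent, leaf = min(self.leaves(), key=lambda pl: pl[1].weight)
            parent.children.remove(leaf)

root = InterestNode("interests", 0.0)
sports = root.add("sports", 0.9)        # general, long-term interest
sports.add("football", 0.8)             # specific, short-term interest
sports.add("curling", 0.1)              # weakest specific interest
root.add("music", 0.5)
root.prune(4)
print(root.count())  # → 4
```

    Internal nodes play the role of general interests and leaves the role of specific ones, so pruning leaves first keeps the long-term structure intact.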

  10. Research on the User Interest Modeling of Personalized Search Engine

    Institute of Scientific and Technical Information of China (English)

    LI Zhengwei; XIA Shixiong; NIU Qiang; XIA Zhanguo

    2007-01-01

    At present, how to enable a search engine to construct a user's personal interest model initially, track the user's personalized information in a timely manner, and provide personalized services accurately has become a hotspot in search engine research. Aiming at the problems of user model construction, and combining manual customization modeling with automatic analytical modeling techniques, a User Interest Model (UIM) is proposed in this paper. On this basis, the corresponding establishment and update algorithms for the User Interest Profile (UIP) are presented. Simulation tests showed that the proposed UIM and the corresponding algorithms can enhance retrieval precision effectively and have superior adaptability.

  11. The Query-commit Problem

    CERN Document Server

    Molinaro, Marco

    2011-01-01

    In the query-commit problem we are given a graph whose edges have distinct probabilities of existing. It is possible to query the edges of the graph, and if the queried edge exists then its endpoints are irrevocably matched. The goal is to find a querying strategy that maximizes the expected size of the matching obtained. This stochastic matching setup is motivated by applications in kidney exchanges and online dating. In this paper we address the query-commit problem from both theoretical and experimental perspectives. First, we show that a simple class of edges can be queried without compromising the optimality of the strategy. This property is then used to obtain, in polynomial time, an optimal querying strategy when the input graph is sparse. Next we turn our attention to the kidney exchange application, focusing on instances modeled over real data from existing exchange programs. We prove that, as the number of nodes grows, almost every instance admits a strategy which matches almost all nodes. This resu...
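
    The query-commit setup is easy to simulate. The sketch below uses a naive greedy strategy (query edges in decreasing probability), which is generally suboptimal but shows the irrevocable-commit rule; the graph and probabilities are invented.

```python
import random

# Toy query-commit simulation with a naive greedy strategy, not the
# paper's optimal strategy.
def greedy_query_commit(edges, rng):
    """edges: list of (u, v, p). Returns the set of matched vertices."""
    matched = set()
    for u, v, p in sorted(edges, key=lambda e: -e[2]):
        if u in matched or v in matched:
            continue                    # an endpoint is already committed
        if rng.random() < p:            # the queried edge turns out to exist
            matched |= {u, v}
    return matched

rng = random.Random(0)
edges = [("a", "b", 0.9), ("b", "c", 0.5), ("c", "d", 0.8)]
sizes = [len(greedy_query_commit(edges, rng)) / 2 for _ in range(10000)]
avg = sum(sizes) / len(sizes)           # analytically ≈ 1.71 for this graph
print(round(avg, 2))
```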

  12. GeoIRIS: Geospatial Information Retrieval and Indexing System-Content Mining, Semantics Modeling, and Complex Queries.

    Science.gov (United States)

    Shyu, Chi-Ren; Klaric, Matt; Scott, Grant J; Barb, Adrian S; Davis, Curt H; Palaniappan, Kannappan

    2007-04-01

    Searching for relevant knowledge across heterogeneous geospatial databases requires an extensive knowledge of the semantic meaning of images, a keen eye for visual patterns, and efficient strategies for collecting and analyzing data with minimal human intervention. In this paper, we present our recently developed content-based multimodal Geospatial Information Retrieval and Indexing System (GeoIRIS), which includes automatic feature extraction, visual content mining from large-scale image databases, and high-dimensional database indexing for fast retrieval. Using these underpinnings, we have developed techniques for complex queries that merge information from heterogeneous geospatial databases, retrieval of objects based on shape and visual characteristics, analysis of multiobject relationships for the retrieval of objects in specific spatial configurations, and semantic models to link low-level image features with high-level visual descriptors. GeoIRIS brings this diverse set of technologies together into a coherent system with the aim of allowing image analysts to more rapidly identify relevant imagery. GeoIRIS is able to answer analysts' questions in seconds, such as "given a query image, show me database satellite images that have similar objects and spatial relationships and that are within a certain radius of a landmark."

  13. Metadata: A user's view

    Energy Technology Data Exchange (ETDEWEB)

    Bretherton, F.P. [Univ. of Wisconsin, Madison, WI (United States); Singley, P.T. [Oak Ridge National Lab., TN (United States)

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, and retrieval; (2) ingest, quality control, and processing; (3) application-to-application transfer; (4) storage and archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  14. Nowcasting Mobile Games Ranking Using Web Search Query Data

    Directory of Open Access Journals (Sweden)

    Yoones A. Sekhavat

    2016-01-01

    Full Text Available In recent years, the Internet has become embedded in the purchasing decisions of consumers. The purpose of this paper is to study whether the Internet behavior of users correlates with their actual behavior in the computer games market. Rather than proposing the most accurate model for computer game sales, we aim to investigate to what extent web search query data can be exploited to nowcast (a contraction of "now" and "forecasting", referring to techniques used to make short-term forecasts, i.e. predict the present status of) the ranking of mobile games in the world. Google search query data is used for this purpose, since this data can provide a real-time view on the topics of interest. Various statistical techniques are used to show the effectiveness of using web search query data to nowcast mobile games ranking.
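
    The core of such a nowcasting study is the correlation between search volume and rank. A self-contained sketch with synthetic weekly data (rank 1 is best, so a useful search signal shows up as a strongly negative correlation):

```python
# Pearson correlation between weekly search volume and a game's store rank.
# The numbers are synthetic, purely to illustrate the measurement.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

search_volume = [120, 150, 210, 300, 280, 350]
game_rank = [40, 35, 22, 10, 12, 6]     # rank 1 is best
r = pearson(search_volume, game_rank)
print(round(r, 2))
```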

  15. BP Network Based Users' Interest Model in Mining WWW Cache

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    By analyzing the WWW cache model, we put forward a user-interest description method based on fuzzy theory and user-interest inference relations based on a BP (back-propagation) neural network. With this method, users' interests in the WWW cache can be described, and the neural network of user interest can be constructed by forward propagation of interest and backward propagation of errors. This neural network can then infer users' interests. This model is not a simple extension of the basic interest model, but an all-round improvement of the model and its related algorithms.
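
    A single-neuron sketch of the back-propagation idea, mapping a two-feature page vector to an interest score. The features, targets, and learning rate are invented; the paper's network is more elaborate.

```python
import math

# Minimal backpropagation sketch: one sigmoid neuron learning an interest
# score from page features (illustrative, not the paper's network).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = (y - target) * y * (1 - y)   # backpropagated error term
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# feature 1 marks interesting pages, feature 2 uninteresting ones
data = [([1, 0], 1.0), ([0, 1], 0.0), ([1, 1], 0.5)]
w, b = train(data)
interested = sigmoid(w[0] + b)      # page with feature 1 only
uninterested = sigmoid(w[1] + b)    # page with feature 2 only
print(round(interested, 2), round(uninterested, 2))
```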

  16. The capillary hysteresis model HYSTR: User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Niemi, A.; Bodvarsson, G.S.

    1991-11-01

    The potential disposal of nuclear waste in the unsaturated zone at Yucca Mountain, Nevada, has generated increased interest in the study of fluid flow through unsaturated media. In the near future, large-scale field tests will be conducted at the Yucca Mountain site, and work is now being done to design and analyze these tests. As part of these efforts a capillary hysteresis model has been developed. A computer program to calculate the hysteretic relationship between capillary pressure (φ) and liquid saturation (S₁) has been written that is designed to be easily incorporated into any numerical unsaturated flow simulator that computes capillary pressure as a function of liquid saturation. This report gives a detailed description of the model along with information on how it can be interfaced with a transport code. Although the model was developed specifically for calculations related to nuclear waste disposal, it should be applicable to any capillary hysteresis problem for which the secondary and higher-order scanning curves can be approximated from the first-order scanning curves. HYSTR is a set of subroutines to calculate capillary pressure for a given liquid saturation under hysteretic conditions.

  17. Predicting the Effectiveness of Queries and Retrieval Systems

    NARCIS (Netherlands)

    Hauff, C.

    2010-01-01

    In this thesis we consider users' attempts to express their information needs through queries, or search requests and try to predict whether those requests will be of high or low quality. Intuitively, a query's quality is determined by the outcome of the query, that is, whether the retrieved search

  18. Evaluation methodology for query-based scene understanding systems

    Science.gov (United States)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.

  19. k-Nearest Neighbor Query Processing Algorithms for a Query Region in Road Networks

    Institute of Scientific and Technical Information of China (English)

    Hyeong-Il Kim; Jae-Woo Chang

    2013-01-01

    Recent development of wireless communication technologies and the popularity of smart phones are making location-based services (LBS) popular. However, requesting queries to LBS servers with users' exact locations may threaten the privacy of users. Therefore, there has been much research on generating a cloaked query region for user privacy protection. Consequently, an efficient query processing algorithm for a query region is required. So, in this paper, we propose k-nearest neighbor (k-NN) query processing algorithms for a query region in road networks. To efficiently retrieve k-NN points of interest (POIs), we make use of the Island index. We also propose a method that generates an adaptive Island index to improve the query processing performance and storage usage. Finally, we show by our performance analysis that our k-NN query processing algorithms outperform the existing k-Range Nearest Neighbor (kRNN) algorithm in terms of network expansion cost and query processing time.
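
    Without the Island index, k-NN retrieval over a road network reduces to incremental network expansion (plain Dijkstra) that stops after k POIs are found. A minimal sketch with an invented toy network:

```python
import heapq

# k-NN POI retrieval by plain network expansion (Dijkstra), i.e. without
# the paper's Island index; the road network and POIs are invented.
def knn_pois(graph, pois, source, k):
    """graph: {node: [(neighbor, edge_weight), ...]}. Returns k nearest POIs."""
    dist, heap, found = {source: 0}, [(0, source)], []
    while heap and len(found) < k:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        if u in pois:
            found.append((u, d))
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return found

roads = {"q": [("a", 2), ("b", 5)], "a": [("c", 2)], "b": [("d", 1)],
         "c": [], "d": []}
print(knn_pois(roads, {"c", "d"}, "q", 2))  # → [('c', 4), ('d', 6)]
```

    A cloaked-region variant would run such an expansion from the region's border nodes instead of a single exact location.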

  20. Secure Querying of Recursive XML Views: A Standard XPath-based Technique

    CERN Document Server

    Mahfoud, Houari

    2011-01-01

    Most state-of-the-art approaches for securing XML documents allow users to access data only through authorized views defined by annotating an XML grammar (e.g., a DTD) with a collection of XPath expressions. To prevent improper disclosure of confidential information, user queries posed on these views need to be rewritten into equivalent queries on the underlying documents. This rewriting enables us to avoid the overhead of view materialization and maintenance. A major concern here is that query rewriting for recursive XML views is still an open problem. To overcome this problem, some works have proposed translating XPath queries into non-standard ones, called Regular XPath queries. However, query rewriting under Regular XPath can be of exponential size, as it relies on an automaton model. Most importantly, Regular XPath remains a theoretical achievement: it is not commonly used in practice, as translation and evaluation tools are not available. In this paper, we show that query rewriting is always possi...

  1. A Preliminary Mapping of Web Queries Using Existing Image Query Schemes.

    Science.gov (United States)

    Jansen, Bernard J.

    End user searching on the Web has become the primary method of locating images for many people. This study investigates the nature of Web image queries by attempting to map them to known image classification schemes. In this study, approximately 100,000 image queries from a major Web search engine were collected in 1997, 1999, and 2001. A…

  2. Exploring features for automatic identification of news queries through query logs

    Institute of Scientific and Technical Information of China (English)

    Xiaojuan ZHANG; Jian LI

    2014-01-01

    Purpose: Existing research on predicting queries with news intent has tried to extract classification features from external knowledge bases; this paper presents how to apply features extracted from query logs for automatic identification of news queries without using any external resources. Design/methodology/approach: First, we manually labeled 1,220 news queries from Sogou.com. Based on the analysis of these queries, we then identified three features of news queries in terms of query content, time of query occurrence, and user click behavior. Afterwards, we used 12 effective features proposed in the literature as a baseline and conducted experiments based on the support vector machine (SVM) classifier. Finally, we compared the impacts of the features used in this paper on the identification of news queries. Findings: Compared with the baseline features, the F-score improved from 0.6414 to 0.8368 after the use of the three newly-identified features, among which the burst point (bst) was the most effective for predicting news queries. In addition, query expression (qes) was more useful than query terms, and among the click-behavior-based features, news URL was the most effective one. Research limitations: Analyses based on features extracted from query logs might produce limited results. Also, the segmentation tool used in this study has been more widely applied to long texts than to short queries. Practical implications: The research will be helpful for general-purpose search engines to address search intents for news events. Originality/value: Our approach provides a new and different perspective on recognizing queries with news intent without such large news corpora as blogs or Twitter.

  3. A user experience model for tangible interfaces for children

    NARCIS (Netherlands)

    Reidsma, Dennis; van Dijk, Elisabeth M.A.G.; van der Sluis, Frans; Volpe, G; Camurri, A.; Perloy, L.M.; Nijholt, Antinus

    2015-01-01

    Tangible user interfaces allow children to take advantage of their experience in the real world when interacting with digital information. In this paper we describe a model for tangible user interfaces specifically for children that focuses mainly on the user experience during interaction and on how

  4. CrossQuery: a web tool for easy associative querying of transcriptome data.

    Directory of Open Access Journals (Sweden)

    Toni U Wagner

    Full Text Available Enormous amounts of data are being generated by modern methods such as transcriptome or exome sequencing and microarray profiling. Primary analyses such as quality control, normalization, statistics and mapping are highly complex and need to be performed by specialists. Thereafter, results are handed back to biomedical researchers, who are then confronted with complicated data lists. For rather simple tasks like data filtering, sorting and cross-association there is a need for new tools which can be used by non-specialists. Here, we describe CrossQuery, a web tool that enables straightforward, simple-syntax queries to be executed on transcriptome sequencing and microarray datasets. We provide deep-sequencing data sets of stem cell lines derived from the model fish Medaka and microarray data of human endothelial cells. In the example datasets provided, mRNA expression levels, gene, transcript and sample identification numbers, GO-terms and gene descriptions can be freely correlated, filtered and sorted. Queries can be saved for later reuse and results can be exported to standard formats that allow copy-and-paste to all widespread data visualization tools such as Microsoft Excel. CrossQuery enables researchers to quickly and freely work with transcriptome and microarray data sets requiring only minimal computer skills. Furthermore, CrossQuery allows growing association of multiple datasets as long as at least one common point of correlated information, such as transcript identification numbers or GO-terms, is shared between samples. For advanced users, the object-oriented plug-in and event-driven code design of both server-side and client-side scripts allow easy addition of new features, data sources and data types.

  5. CrossQuery: a web tool for easy associative querying of transcriptome data.

    Science.gov (United States)

    Wagner, Toni U; Fischer, Andreas; Thoma, Eva C; Schartl, Manfred

    2011-01-01

    Enormous amounts of data are being generated by modern methods such as transcriptome or exome sequencing and microarray profiling. Primary analyses such as quality control, normalization, statistics and mapping are highly complex and need to be performed by specialists. Thereafter, results are handed back to biomedical researchers, who are then confronted with complicated data lists. For rather simple tasks like data filtering, sorting and cross-association there is a need for new tools which can be used by non-specialists. Here, we describe CrossQuery, a web tool that enables straightforward, simple-syntax queries to be executed on transcriptome sequencing and microarray datasets. We provide deep-sequencing data sets of stem cell lines derived from the model fish Medaka and microarray data of human endothelial cells. In the example datasets provided, mRNA expression levels, gene, transcript and sample identification numbers, GO-terms and gene descriptions can be freely correlated, filtered and sorted. Queries can be saved for later reuse and results can be exported to standard formats that allow copy-and-paste to all widespread data visualization tools such as Microsoft Excel. CrossQuery enables researchers to quickly and freely work with transcriptome and microarray data sets requiring only minimal computer skills. Furthermore, CrossQuery allows growing association of multiple datasets as long as at least one common point of correlated information, such as transcript identification numbers or GO-terms, is shared between samples. For advanced users, the object-oriented plug-in and event-driven code design of both server-side and client-side scripts allow easy addition of new features, data sources and data types.

  6. The Research on Automatic Construction of Domain Model Based on Deep Web Query Interfaces

    Science.gov (United States)

    JianPing, Gu

    The integration of services is transparent: users no longer face millions of Web services, need not care about where the required data are stored, and need not learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema-matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. The experiments prove the effectiveness and efficiency of our method.
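
    A simple token-overlap measure illustrates the kind of similarity used to match query-interface attributes; the paper's measures are richer, and the labels here are invented.

```python
# Token-overlap (Jaccard) similarity between query-interface attribute
# labels; a crude stand-in for the paper's similarity measures.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

print(round(jaccard("Departure City", "city of departure"), 2))  # → 0.67
```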

  7. Querying quantitative logic models (Q2LM) to study intracellular signaling networks and cell-cytokine interactions

    Science.gov (United States)

    Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A

    2012-01-01

    Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called “querying quantitative logic models” (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. PMID:22125256
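
    The cFL building blocks can be sketched in miniature: a normalized Hill transfer function feeding a fuzzy AND gate that takes the minimum of its inputs. The parameter values and input activities are illustrative assumptions, not the paper's fitted model.

```python
# cFL building blocks in miniature: a normalized Hill transfer function
# feeding a fuzzy AND gate. Parameters (k, n) and inputs are illustrative.
def hill(x, k=0.5, n=2):
    return x ** n / (k ** n + x ** n)

def fuzzy_and(a, b):
    return min(a, b)   # AND gate: the weaker input limits the node

# node activity when both upstream inputs are required
out = fuzzy_and(hill(0.8), hill(0.3))
print(round(out, 2))  # → 0.26
```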

  8. A New Publicly Available Chemical Query Language, CSRML, to support Chemotype Representations for Application to Data-Mining and Modeling

    Science.gov (United States)

    A new XML-based query language, CSRML, has been developed for representing chemical substructures, molecules, reaction rules, and reactions. CSRML queries are capable of integrating additional forms of information beyond the simple substructure (e.g., SMARTS) or reaction transfor...

  9. Building a Vietnamese Language Query Processing Framework for e-Library Searching Systems

    Directory of Open Access Journals (Sweden)

    Dang Tuan Nguyen

    2009-10-01

    Full Text Available With the objective of building intelligent searching systems for e-libraries and online bookstores, we have proposed a searching system model based on a Vietnamese language query processing component. Document searching systems based on this model allow users to use Vietnamese queries that represent content information as input, instead of entering keywords for searching specific fields in a database. To simplify the realization of systems based on this searching system model, we set a target of building a framework to support the rapid development of Vietnamese language query processing components. Such a framework lets Vietnamese language query processing components in similar systems in this domain be implemented more easily.

  10. T:XML: A Tool Supporting User Interface Model Transformation

    Science.gov (United States)

    López-Jaquero, Víctor; Montero, Francisco; González, Pascual

    Model driven development of user interfaces is based on the transformation of an abstract specification into the final user interface the user will interact with. The design of transformation rules to carry out this transformation process is a key issue in any model-driven user interface development approach. In this paper, we introduce T:XML, an integrated development environment for managing, creating and previewing transformation rules. The tool supports the specification of transformation rules by using a graphical notation that works on the basis of the transformation of the input model into a graph-based representation. T:XML allows the design and execution of transformation rules in an integrated development environment. Furthermore, the designer can also preview what the generated user interface looks like after the transformations have been applied. These previewing capabilities can be used to quickly create prototypes to discuss with the users in user-centered design methods.

  11. Learning semantic query suggestions

    NARCIS (Netherlands)

    E. Meij; M. Bron; L. Hollink; B. Huurnink; M. de Rijke

    2009-01-01

    An important application of semantic web technology is recognizing human-defined concepts in text. Query transformation is a strategy often used in search engines to derive queries that are able to return more useful search results than the original query and most popular search engines provide faci

  12. Distributed Top-k Queries in E-commerce Environment

    Institute of Scientific and Technical Information of China (English)

    Jiang Zhan; Yiqing Song; Haixia Zhang

    2004-01-01

    This paper focuses on how to perform distributed top-k queries in an e-commerce environment through Web services. We first describe the query process in such an environment, then present an algorithm for processing such queries based on the query model we define. Experimental results show that the algorithm is efficient.
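
    A classic way to answer distributed top-k queries is Fagin's Threshold Algorithm; whether the paper's algorithm follows it is not stated, so the sketch below is a generic illustration with invented per-site score lists.

```python
# Generic sketch of Fagin's Threshold Algorithm for top-k over several
# "sites", each holding a descending score list; aggregation is a plain sum.
def threshold_topk(lists, k):
    index = [dict(l) for l in lists]       # random access per site
    best, top = {}, []
    for depth in range(max(len(l) for l in lists)):
        for l in lists:
            if depth < len(l) and l[depth][0] not in best:
                item = l[depth][0]
                best[item] = sum(ix.get(item, 0) for ix in index)
        threshold = sum(l[depth][1] for l in lists if depth < len(l))
        top = sorted(best.items(), key=lambda kv: -kv[1])[:k]
        if len(top) == k and top[-1][1] >= threshold:
            break                          # no unseen item can still enter
    return top

site1 = [("x", 9), ("y", 6), ("z", 1)]
site2 = [("y", 8), ("x", 7), ("z", 2)]
print(threshold_topk([site1, site2], 1))   # → [('x', 16)]
```

    The threshold (sum of the scores at the current sorted-access depth) bounds the best total any unseen item can reach, which is what lets the sites stop early.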

  13. QUrdPro: Query processing system for Urdu Language

    Directory of Open Access Journals (Sweden)

    Rukhsana Thaker

    2015-06-01

    Full Text Available The tremendous increase in multilingual data on the Internet has increased the demand for efficient retrieval of information. Urdu is one of the widely spoken and written languages of South Asia. Due to the unstructured format of the Urdu language, information retrieval is a big challenge. Question answering systems aim to retrieve point-to-point answers rather than flooding the user with documents; they are needed when the user wants in-depth knowledge in a particular domain. When a user needs some information, the system must give the relevant answer. The question-answer retrieval of an ontology knowledge base provides a convenient way to obtain knowledge, but natural language needs to be mapped to the query statements of the ontology. This paper describes a query processing system, QUrdPro, based on ontology. The system is a combination of NLP and ontology techniques and makes use of ontology in several phases for efficient query processing. Our focus is on the knowledge derived from the concepts used in the ontology and the relationships between these concepts. In this paper we describe the architecture of QUrdPro, a query processing system for Urdu, and discuss the process model for the system in detail.

  14. Stochastic Models Predict User Behavior in Social Media

    CERN Document Server

    Hogg, Tad; Smith, Laura M

    2013-01-01

    User response to contributed content in online social media depends on many factors. These include how the site lays out new content, how frequently the user visits the site, how many friends the user follows, how active these friends are, as well as how interesting or useful the content is to the user. We present a stochastic modeling framework that relates a user's behavior to details of the site's user interface and user activity and describe a procedure for estimating model parameters from available data. We apply the model to study discussions of controversial topics on Twitter, specifically, to predict how followers of an advocate for a topic respond to the advocate's posts. We show that a model of user behavior that explicitly accounts for a user transitioning through a series of states before responding to an advocate's post better predicts response than models that fail to take these states into account. We demonstrate other benefits of stochastic models, such as their ability to identify users who a...
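
    The gist of such a state-based model: a response only happens if the user visits the site while the advocate's post is still visible in their feed. A toy closed form with assumed probabilities (not the paper's fitted parameters):

```python
# Toy state-based response model: each time step the user visits with
# probability p_visit and, having seen the post, responds with probability
# p_respond; the post stays visible for a fixed number of steps.
def eventual_response_prob(p_visit, p_respond, visibility_steps):
    p_no_response = 1.0
    for _ in range(visibility_steps):
        p_no_response *= 1.0 - p_visit * p_respond
    return 1.0 - p_no_response

p = eventual_response_prob(p_visit=0.3, p_respond=0.1, visibility_steps=5)
print(round(p, 3))  # → 0.141
```

    The model in the paper tracks richer intermediate states, but the same survival-style computation underlies it.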

  15. User-oriented and cognitive models of information retrieval

    DEFF Research Database (Denmark)

    Järvelin, Kalervo; Ingwersen, Peter

    2010-01-01

    The domain of user-oriented and cognitive IR is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain.  The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models...... with different emphases on user-oriented and cognitive IR are presented - ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models....

  16. User-oriented and cognitive models of information retrieval

    DEFF Research Database (Denmark)

    Järvelin, Kalervo; Ingwersen, Peter

    2010-01-01

    The domain of user-oriented and cognitive IR is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain.  The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models...... with different emphases on user-oriented and cognitive IR are presented - ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models....

  17. Hybrid Filtering in Semantic Query Processing

    Science.gov (United States)

    Jeong, Hanjo

    2011-01-01

    This dissertation presents a hybrid filtering method and a case-based reasoning framework for enhancing the effectiveness of Web search. Web search may not reflect user needs, intent, context, and preferences, because today's keyword-based search lacks the semantic information needed to capture the user's context and intent in posing the search query.…

  18. Query strategy for sequential ontology debugging

    CERN Document Server

    Shchekotykhina, Kostyantyn; Fleiss, Philipp; Rodler, Patrick

    2011-01-01

    Debugging of ontologies is an important prerequisite for their widespread application, especially in areas that rely upon everyday users to create and maintain knowledge bases, as in the case of the Semantic Web. Recent approaches use diagnosis methods to identify causes of inconsistent or incoherent ontologies. However, in most debugging scenarios these methods return many alternative diagnoses, thus placing the burden of fault localization on the user. This paper demonstrates how the target diagnosis can be identified by performing a sequence of observations, that is, by querying an oracle about entailments of the target ontology. We exploit a priori probabilities of typical user errors to formulate information-theoretic concepts for query selection. Our evaluation showed that the proposed method significantly reduces the number of required queries compared to myopic strategies. We experimented with different probability distributions of user errors and different qualities of the a priori probabilities. Ou...
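
    Entropy-based query selection can be sketched as follows: among candidate queries, prefer the one that minimizes the expected entropy of the diagnosis distribution after observing the oracle's answer. The diagnoses, priors, and query entailments below are invented for illustration.

```python
import math

# Information-theoretic query selection sketch (illustrative data).
def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def best_query(diagnoses, queries):
    """diagnoses: {name: prior}; queries: {qname: diagnoses entailing 'yes'}."""
    def expected_entropy(yes_set):
        p_yes = sum(diagnoses[d] for d in yes_set)
        h = 0.0
        for p_branch, branch in ((p_yes, yes_set),
                                 (1.0 - p_yes, set(diagnoses) - yes_set)):
            if p_branch > 0:
                posterior = [diagnoses[d] / p_branch for d in branch]
                h += p_branch * entropy(posterior)
        return h
    return min(queries, key=lambda q: expected_entropy(queries[q]))

priors = {"D1": 0.5, "D2": 0.3, "D3": 0.2}
queries = {"q_splits": {"D1"}, "q_useless": {"D1", "D2", "D3"}}
print(best_query(priors, queries))  # → q_splits
```

    A query entailed by every diagnosis teaches nothing, while one that splits the probability mass roughly in half is maximally informative, mirroring the myopic-vs-informed contrast in the abstract.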

  19. Enhancing Web Search with Semantic Identification of User Preferences

    Directory of Open Access Journals (Sweden)

    Naglaa Fathy

    2011-11-01

    Full Text Available Personalized web search is able to satisfy an individual's information needs by modeling long-term and short-term user interests based on user actions, browsed documents, or past queries, and incorporating these in the search process. In this paper, we propose a personalized search approach which models the user's search preferences in an ontological user profile and semantically compares this model against the user's current query context to re-rank search results. Our user profile is based on the predefined ontology of the Open Directory Project (ODP), so that after a user's search, relevant web pages are classified into topics in the ontology using semantic and cosine similarity measures. Moreover, interest scores are assigned to topics based on the user's ongoing behavior. Our experiments show that re-ranking based on the semantic evidence of the updated user profile efficiently satisfies user information needs, with the most relevant results brought to the top of the returned results.
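
    Profile-based re-ranking by cosine similarity can be sketched in a few lines; the ODP-style topics and weights below are invented for the example.

```python
import math

# Profile-based re-ranking sketch: pages are scored by cosine similarity
# between a page topic vector and the user's interest profile.
def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profile = {"Science": 0.9, "Sports": 0.1}       # learned interest scores
results = {
    "astro.html": {"Science": 1.0},
    "scores.html": {"Sports": 1.0},
}
reranked = sorted(results, key=lambda page: -cosine(results[page], profile))
print(reranked)  # → ['astro.html', 'scores.html']
```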

  20. Modeling and clustering users with evolving profiles in usage streams

    KAUST Repository

    Zhang, Chongsheng

    2012-09-01

    Today, there is an increasing need for data stream mining technology to discover important patterns on the fly. Existing data stream models and algorithms commonly assume that users' records or profiles in data streams will not be updated or revised once they arrive. Nevertheless, in various applications such as Web usage, the records/profiles of the users can evolve along time. This kind of streaming data evolves in two forms, the streaming of tuples or transactions as in the case of traditional data streams, and more importantly, the evolving of user records/profiles inside the streams. Such data streams pose difficulties for modeling and clustering when exploring users' behaviors. In this paper, we propose three models to summarize this kind of data streams: the batch model, the Evolving Objects (EO) model and the Dynamic Data Stream (DDS) model. Through creating, updating and deleting user profiles, these models summarize the behaviors of each user as a profile object. Based upon these models, clustering algorithms are employed to discover interesting user groups from the profile objects. We have evaluated all the proposed models on a large real-world data set, showing that the DDS model summarizes the data streams with evolving tuples more efficiently and effectively, and provides a better basis for clustering users than the other two models. © 2012 IEEE.
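    The core idea of summarizing each user as a profile object that is created, updated and possibly deleted as the stream evolves can be sketched as follows. This is a toy reading of the abstract, not the paper's EO or DDS model; the class and field names are hypothetical.

```python
class ProfileStore:
    """Maintain one evolving profile object per user as stream tuples arrive."""

    def __init__(self):
        self.profiles = {}

    def update(self, user_id, page):
        # Revise the user's profile in place instead of treating each
        # arriving tuple as immutable, which is what makes the stream "evolving".
        prof = self.profiles.setdefault(user_id, {})
        prof[page] = prof.get(page, 0) + 1

    def delete(self, user_id):
        self.profiles.pop(user_id, None)

store = ProfileStore()
for user, page in [("u1", "home"), ("u1", "news"), ("u2", "home"), ("u1", "news")]:
    store.update(user, page)
print(store.profiles["u1"])  # {'home': 1, 'news': 2}
```

    Clustering would then run over these profile objects rather than over the raw tuples.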

  1. Collective spatial keyword querying

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian S.;

    2011-01-01

    With the proliferation of geo-positioning and geo-tagging, spatial web objects that possess both a geographical location and a textual description are gaining in prevalence, and spatial keyword queries that exploit both location and textual description are gaining in prominence. However, the queries studied so far generally focus on finding individual objects that each satisfy a query rather than finding groups of objects where the objects in a group collectively satisfy a query. We define the problem of retrieving a group of spatial web objects such that the group's keywords cover the query...

  2. Macro System Model (MSM) User Guide, Version 1.3

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, M.; Diakov, V.; Sa, T.; Goldsby, M.

    2011-09-01

    This user guide describes the macro system model (MSM). The MSM has been designed to allow users to analyze the financial, environmental, transitional, geographical, and R&D issues associated with the transition to a hydrogen economy. Basic end users can use the MSM to answer cross-cutting questions that were previously difficult to answer in a consistent and timely manner due to various assumptions and methodologies among different models.

  3. OntoQuery: easy-to-use web-based OWL querying.

    Science.gov (United States)

    Tudose, Ilinca; Hastings, Janna; Muthukrishnan, Venkatesh; Owen, Gareth; Turner, Steve; Dekker, Adriano; Kale, Namrata; Ennis, Marcus; Steinbeck, Christoph

    2013-11-15

    The Web Ontology Language (OWL) provides a sophisticated language for building complex domain ontologies and is widely used in bio-ontologies such as the Gene Ontology. The Protégé-OWL ontology editing tool provides a query facility that allows composition and execution of queries with the human-readable Manchester OWL syntax, with syntax checking and entity label lookup. No equivalent query facility such as the Protégé Description Logics (DL) query yet exists in web form. However, many users interact with bio-ontologies such as chemical entities of biological interest and the Gene Ontology using their online Web sites, within which DL-based querying functionality is not available. To address this gap, we introduce the OntoQuery web-based query utility. The source code for this implementation together with instructions for installation is available at http://github.com/IlincaTudose/OntoQuery. OntoQuery software is fully compatible with all OWL-based ontologies and is available for download (CC-0 license). The ChEBI installation, ChEBI OntoQuery, is available at http://www.ebi.ac.uk/chebi/tools/ontoquery. hastings@ebi.ac.uk.

  4. VMTL: a language for end-user model transformation

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2016-01-01

    Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners—a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce Visual Model Transformation Language (VMTL) following...

  5. Nearest Neighbor Queries in Road Networks

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Kolar, Jan; Pedersen, Torben Bach

    2003-01-01

    With wireless communications and geo-positioning being widely available, it becomes possible to offer new e-services that provide mobile users with information about other mobile objects. This paper concerns active, ordered k-nearest neighbor queries for query and data objects that are moving. … The algorithm used for the nearest neighbor search in the prototype is presented in detail. In addition, the paper reports on results from experiments with the prototype system.

  6. Querying Schemas With Access Restrictions

    CERN Document Server

    Benedikt, Michael; Ley, Clemens

    2012-01-01

    We study verification of systems whose transitions consist of accesses to a Web-based data-source. An access is a lookup on a relation within a relational database, fixing values for a set of positions in the relation. For example, a transition can represent access to a Web form, where the user is restricted to filling in values for a particular set of fields. We look at verifying properties of a schema describing the possible accesses of such a system. We present a language where one can describe the properties of an access path, and also specify additional restrictions on accesses that are enforced by the schema. Our main property language, AccLTL, is based on a first-order extension of linear-time temporal logic, interpreting access paths as sequences of relational structures. We also present a lower-level automaton model, A-automata, into which AccLTL specifications can be compiled. We show that AccLTL and A-automata can express static analysis problems related to "querying with limited access patterns" that h...

  7. Federated query processing for the semantic web

    CERN Document Server

    Buil-Aranda, C

    2014-01-01

    During the last years, the amount of RDF data has increased exponentially over the Web, exposed via SPARQL endpoints. These SPARQL endpoints allow users to direct SPARQL queries to the RDF data. Federated SPARQL query processing makes it possible to query several of these RDF databases as if they were a single one, integrating the results from all of them. This is a key concept in the Web of Data and it is also a hot topic in the community. Besides that, the W3C SPARQL-WG has standardized it in the new SPARQL 1.1 Recommendation. This book provides a formalisation of the W3C proposed recommendation. Thi
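    The idea of federated query processing (evaluating one query pattern against several endpoints and integrating the results) can be sketched with in-memory stand-ins for SPARQL endpoints. The data and names below are hypothetical.

```python
# Two toy "endpoints", each an in-memory set of (subject, predicate, object)
# triples standing in for a remote SPARQL endpoint.
endpoint_a = {("Berlin", "type", "City"), ("Berlin", "population", "3645000")}
endpoint_b = {("Paris", "type", "City"), ("Paris", "population", "2161000")}

def select_cities(endpoint):
    """Evaluate the pattern { ?s type City } against one endpoint."""
    return {s for s, p, o in endpoint if p == "type" and o == "City"}

def federated(endpoints):
    """Send the same query to every endpoint and union the answer sets."""
    out = set()
    for ep in endpoints:
        out |= select_cities(ep)
    return sorted(out)

print(federated([endpoint_a, endpoint_b]))  # ['Berlin', 'Paris']
```

    A real federation engine additionally plans which endpoint receives which subpattern (as SPARQL 1.1 does via SERVICE clauses) and joins partial results.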

  8. GeoVanet: A Routing Protocol for Query Processing in Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Thierry Delot

    2011-01-01

    Full Text Available In a vehicular ad hoc network (VANET, cars can exchange information by using short-range wireless communications. Along with the opportunities offered by vehicular networks, a number of challenges also arise. In particular, most works so far have focused on a push model, where potentially useful data are pushed towards vehicles. The use of pull models, which would allow users to send queries to a set of cars in order to find the desired information, has not been studied in depth. The main challenge for pull models is the difficulty of routing the different results towards the query originator in a highly dynamic network where the nodes move very quickly. To solve this issue, we propose GeoVanet, an anonymous and non-intrusive geographic routing protocol which ensures that the sender of a query can get a consistent answer. Our goal is to ensure that the user will be able to retrieve the query results within a bounded time. To prove the effectiveness of GeoVanet, an extensive experimental evaluation has been performed, demonstrating the value of the proposal for both rural and urban areas. It shows that up to 80% of the available query results are delivered to the user.

  9. JEDI Marine and Hydrokinetic Model: User Reference Guide

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, M.; Previsic, M.

    2011-04-01

    The Jobs and Economic Development Impact Model (JEDI) for Marine and Hydrokinetics (MHK) is a user-friendly spreadsheet-based tool designed to demonstrate the economic impacts associated with developing and operating MHK power systems in the United States. The JEDI MHK User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the sources and parameters used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted.

  10. Modeling Context, Collaboration, and Civilization in End-User Informatics

    CERN Document Server

    Maney, George A

    2007-01-01

    End-user informatics applications are Internet data web management automation solutions. These are mass modeling and mass management collaborative communal consensus solutions. They are made and maintained by managerial, professional, technical and specialist end-users. In end-user informatics the end-users are always right. So it becomes necessary for information technology professionals to understand information and informatics from the end-user perspective. End-user informatics starts with the observation that practical prose is a mass consensus communal modeling technology. This high technology is the mechanistic modeling medium we all use every day in all of our practical pursuits. Practical information flows are the lifeblood of modern capitalist communities. But what exactly is practical information? It's ultimately physical information, but the physics is highly emergent rather than elementary. So practical reality is just physical reality in deep disguise. Practical prose is the medium that we all us...

  11. ETP Modelling. Intuitive User interaction. Overview 2011-2014

    NARCIS (Netherlands)

    Erp, J.B.F. van; Maanen, P.P. van; Venrooij, W.; Calvert, S.C.; Beurden, M.H.P.H. van

    2014-01-01

    Intuitive User Interaction aims to develop the next generation user interfaces required to grant experts and laymen access to complex models and large amount of data in design and decision processes and therewith to contribute to the ambition to broaden the applicability of models. Important topics

  12. 3D Hilbert Space Filling Curves in 3D City Modeling for Faster Spatial Queries

    DEFF Research Database (Denmark)

    Ujang, Uznir; Antón Castro, Francesc/François; Azri, Suhaibah;

    2014-01-01

    … web standards. However, these 3D city models consume much more storage compared to two-dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban data. The advantages of implementing space-filling curves in 3D city modeling, which improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing, are presented in this paper. The Hilbert mapping, which maps a sub-interval of the [0,1] interval to the corresponding...
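    For intuition, the Hilbert mapping converts grid coordinates to a position along the curve so that nearby points tend to receive nearby indices, which is what makes it useful for clustering spatial data. The paper works in 3D; the standard 2D bit-manipulation version is shown here for brevity.

```python
def xy2d(n, x, y):
    """Map a point (x, y) on an n-by-n grid (n a power of two) to its
    position along the 2D Hilbert space-filling curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

print(xy2d(4, 1, 0), xy2d(4, 1, 1))  # 1 2
```

    Sorting or bucketing objects by their Hilbert index places spatially adjacent objects close together in storage, which is the retrieval-time advantage the abstract refers to.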

  13. Model driven development of user interface prototypes

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    Many approaches to interface development apply only to isolated aspects of the development of user interfaces (UIs), e.g., exploration during the early phases, design of visual appearance, or implementation in some technology. In this paper we explore an _integrated_ approach to incorporate the w...

  14. Acquiring User Models to Test Automated Assistants

    Science.gov (United States)

    2013-01-01

    vectors, but operates in a markedly different style from the user. Once we have extracted our traces and feature vectors from them, we will want to know … prior work focuses on performing well on a specific task, such as flying well (Šuc, Bratko, and Sammut 2004), (Bain and Sammut 1995), instead of

  15. Users matter : multi-agent systems model of high performance computing cluster users.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Hood, C. S.; Decision and Information Sciences; IIT

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  16. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  17. GQL: Extending XQuery to Query GML Documents

    Institute of Scientific and Technical Information of China (English)

    GUAN Jihong; ZHU Fubao; ZHOU Jiaogen; NIU Liping

    2006-01-01

    GML is becoming the de facto standard for electronic data exchange among the applications of Web and distributed geographic information systems. However, the conventional query languages (e.g. SQL and its extended versions) are not suitable for direct querying and updating of GML documents. Even the effective approaches working well with XML could not guarantee good results when applied to GML documents. Although XQuery is a powerful standard query language for XML, it is not proposed for querying spatial features, which constitute the most important components in GML documents. We propose GQL, a query language specification to support spatial queries over GML documents by extending XQuery. The data model, algebra, and formal semantics as well as various spatial functions and operations of GQL are presented in detail.

  18. Dynamic Planar Range Maxima Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Tsakalidis, Konstantinos

    2011-01-01

    We consider the dynamic two-dimensional maxima query problem. Let P be a set of n points in the plane. A point is maximal if it is not dominated by any other point in P. We describe two data structures that support the reporting of the t maximal points that dominate a given query point, and allow … update time, using O(n log n) space, where t is the size of the output. This improves the worst case deletion time of the dynamic rectangular visibility query problem from O(log^3 n) to O(log^2 n). We adapt the data structure to the RAM model with word size w, where the coordinates of the points … in the worst case. The data structure also supports the more general query of reporting the maximal points among the points that lie in a given 3-sided orthogonal range unbounded from above in the same complexity. We can support 4-sided queries in O(log^2 n + t) worst case time, and O(log^2 n) worst case...
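    The static version of the maxima (skyline) problem underlying these queries is easy to state in code: a point is maximal if no other point is at least as large in both coordinates. The simple sweep below is only an illustration of the problem definition, not the paper's dynamic data structures.

```python
def maxima(points):
    """Return the maximal points of a 2D point set.

    p dominates q when p.x >= q.x and p.y >= q.y with at least one strict
    inequality; a maximal point is dominated by no other point.
    """
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))  # x desc, then y desc
    result, best_y = [], float("-inf")
    for x, y in pts:
        if y > best_y:        # no already-seen point (with larger x) dominates it
            result.append((x, y))
            best_y = y
    return result

print(maxima([(1, 4), (2, 2), (3, 3), (0, 5), (3, 1)]))  # [(3, 3), (1, 4), (0, 5)]
```

    The dynamic structures in the paper maintain this staircase of maximal points under insertions and deletions instead of recomputing it.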

  19. Shuttle operations simulation model programmers'/users' manual

    Science.gov (United States)

    Porter, D. G.

    1972-01-01

    The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.

  20. Querying and Manipulating Temporal Databases

    Directory of Open Access Journals (Sweden)

    Mohamed Mkaouar

    2011-03-01

    Full Text Available Many works have focused, for over twenty-five years, on the integration of the time dimension in databases (DBs). However, the standard SQL3 does not yet allow easy definition, manipulation and querying of temporal DBs. In this paper, we study how we can simplify querying and manipulating temporal facts in SQL3, using a model that integrates time in a native manner. To do this, we propose new keywords and syntax to define different temporal versions for many relational operators and functions used in SQL. It then becomes possible to perform various queries and updates appropriate to temporal facts. We illustrate the use of these proposals on many examples from a real application.

  1. A Markov Chain Model for Changes in Users' Assessment of Search Results.

    Directory of Open Access Journals (Sweden)

    Maayan Zhitomirsky-Geffet

    Full Text Available Previous research shows that users tend to change their assessment of search results over time. This is a first study that investigates the factors and reasons for these changes, and describes a stochastic model of user behaviour that may explain these changes. In particular, we hypothesise that most of the changes are local, i.e. between results with similar or close relevance to the query, and thus belong to the same "coarse" relevance category. According to the theory of coarse beliefs and categorical thinking, humans tend to divide the range of values under consideration into coarse categories, and are thus able to distinguish only between cross-category values but not within them. To test this hypothesis we conducted five experiments with about 120 subjects divided into 3 groups. Each student in every group was asked to rank and assign relevance scores to the same set of search results over two or three rounds, with a period of three to nine weeks between each round. The subjects of the last three-round experiment were then exposed to the differences in their judgements and were asked to explain them. We make use of a Markov chain model to measure change in users' judgments between the different rounds. The Markov chain demonstrates that the changes converge, and that a majority of the changes are local to a neighbouring relevance category. We found that most of the subjects were satisfied with their changes, and did not perceive them as mistakes but rather as a legitimate phenomenon, since they believe that time has influenced their relevance assessment. Both our quantitative analysis and user comments support the hypothesis of the existence of coarse relevance categories resulting from categorical thinking in the context of user evaluation of search results.
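    A Markov chain over coarse relevance categories can be illustrated as follows. The transition matrix is hypothetical: its off-diagonal mass is concentrated next to the diagonal to mimic the finding that most changes are local, and iterating it shows the convergence behaviour the abstract mentions.

```python
# Rows/columns are coarse relevance categories: low, medium, high.
# Most probability stays on or next to the diagonal, i.e. changes between
# rounds are mostly "local" to a neighbouring category.
P = [[0.80, 0.18, 0.02],
     [0.15, 0.70, 0.15],
     [0.02, 0.18, 0.80]]

def step(dist, P):
    """Propagate a distribution over categories through one round of judging."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0, 0.0]   # start with every judgment in the "low" category
for _ in range(200):     # iterate rounds; the chain converges to a fixed point
    dist = step(dist, P)
print([round(p, 3) for p in dist])
```

    After enough rounds the distribution stops changing, which is the convergence the Markov chain analysis demonstrates.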

  2. A Markov Chain Model for Changes in Users' Assessment of Search Results.

    Science.gov (United States)

    Zhitomirsky-Geffet, Maayan; Bar-Ilan, Judit; Levene, Mark

    2016-01-01

    Previous research shows that users tend to change their assessment of search results over time. This is a first study that investigates the factors and reasons for these changes, and describes a stochastic model of user behaviour that may explain these changes. In particular, we hypothesise that most of the changes are local, i.e. between results with similar or close relevance to the query, and thus belong to the same "coarse" relevance category. According to the theory of coarse beliefs and categorical thinking, humans tend to divide the range of values under consideration into coarse categories, and are thus able to distinguish only between cross-category values but not within them. To test this hypothesis we conducted five experiments with about 120 subjects divided into 3 groups. Each student in every group was asked to rank and assign relevance scores to the same set of search results over two or three rounds, with a period of three to nine weeks between each round. The subjects of the last three-round experiment were then exposed to the differences in their judgements and were asked to explain them. We make use of a Markov chain model to measure change in users' judgments between the different rounds. The Markov chain demonstrates that the changes converge, and that a majority of the changes are local to a neighbouring relevance category. We found that most of the subjects were satisfied with their changes, and did not perceive them as mistakes but rather as a legitimate phenomenon, since they believe that time has influenced their relevance assessment. Both our quantitative analysis and user comments support the hypothesis of the existence of coarse relevance categories resulting from categorical thinking in the context of user evaluation of search results.

  3. Language Intent Models for Inferring User Browsing Behavior

    NARCIS (Netherlands)

    Tsagkias, M.; Blanco, R.

    2012-01-01

    Modeling user browsing behavior is an active research area with tangible real-world applications, e.g., organizations can adapt their online presence to their visitors' browsing behavior with positive effects on user engagement and revenue. We concentrate on online news agents, and present a semi-su

  4. Query Recommendation by Coupling Personalization with Clustering for Search Engine

    Directory of Open Access Journals (Sweden)

    Dhiliphanrajkumar.Thambidurai

    2016-11-01

    Full Text Available In the present world, the internet and web search engines have become an important part of one's day-to-day life. For a user query, more than a few thousand web pages are retrieved, but most of them are irrelevant. A major problem for search engines is that user queries are usually short and ambiguous, and they are not sufficient to satisfy the precise user needs. Moreover, a long list of results forces users to hunt for the desired items, which takes a large amount of time. To overcome these problems, an effective approach is developed by capturing users' click-through and bookmarking data to provide personalized query recommendation. For retrieving the results, the Google API is used. Experimental results show that the proposed method provides better query recommendation results than the existing query suggestion methods.
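    One simple way to derive recommendations from click-through data, in the spirit of the approach described (though not its actual algorithm), is to score candidate queries by how much their clicked URLs overlap with those of the submitted query. All queries and URLs below are made up.

```python
def recommend(click_log, query, k=2):
    """Recommend related queries: those whose clicked-URL sets overlap most
    with the clicked URLs of the given query (Jaccard similarity)."""
    target = click_log[query]

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    scored = [(jaccard(target, urls), q)
              for q, urls in click_log.items() if q != query]
    return [q for s, q in sorted(scored, reverse=True)[:k] if s > 0]

log = {
    "java tutorial": {"u1", "u2"},
    "learn java": {"u1", "u2", "u3"},
    "python tutorial": {"u4"},
}
print(recommend(log, "java tutorial"))  # ['learn java']
```

    Personalization would further weight this score with the individual user's own click and bookmark history.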

  5. The uses and users of design process models

    OpenAIRE

    2016-01-01

    The use of design process models is of great importance for developing better products. Indeed, it is one of the factors that may differentiate the best companies from the rest. However, their adoption in companies is declining. Usefulness and usability issues may be responsible for process models not meeting the needs of their users. The goal of this research is to provide a deeper understanding of the needs of users of design process models. Three main perspectives are provided: (1) why organizati...

  6. Enhancing Recall in Semantic Querying

    DEFF Research Database (Denmark)

    Rouces, Jacobo

    2013-01-01

    RDF and SPARQL are currently state-of-the-art W3C standards to respectively represent and query structured information, especially when information from different sources must be federated. However, there are various reasons for which the same knowledge can be modeled in RDF graphs that are both ...

  7. Model driven development of user interface prototypes

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    Many approaches to interface development apply only to isolated aspects of the development of user interfaces (UIs), e.g., exploration during the early phases, design of visual appearance, or implementation in some technology. In this paper we explore an _integrated_ approach to incorporate … groups like graphic designers and software developers by integrating traditional pen-and-paper based methods with contemporary MDA-based CASE tools. We have implemented our approach in the Advanced Interaction Design Environment (AIDE), an application to support WEDs.

  8. Queries with Guarded Negation (full version)

    CERN Document Server

    Barany, Vince; Otto, Martin

    2012-01-01

    A well-established and fundamental insight in database theory is that negation (also known as complementation) tends to make queries difficult to process and difficult to reason about. Many basic problems are decidable and admit practical algorithms in the case of unions of conjunctive queries, but become difficult or even undecidable when queries are allowed to contain negation. Inspired by recent results in finite model theory, we consider a restricted form of negation, guarded negation. We introduce a fragment of SQL, called GN-SQL, as well as a fragment of Datalog with stratified negation, called GN-Datalog, that allow only guarded negation, and we show that these query languages are computationally well behaved, in terms of testing query containment, query evaluation, open-world query answering, and boundedness. GN-SQL and GN-Datalog subsume a number of well known query languages and constraint languages, such as unions of conjunctive queries, monadic Datalog, and frontier-guarded tgds. In addition, an a...

  9. Spatial information semantic query based on SPARQL

    Science.gov (United States)

    Xiao, Zhifeng; Huang, Lei; Zhai, Xiaofang

    2009-10-01

    How can the efficiency of spatial information queries be enhanced in today's fast-growing information age? We are rich in geospatial data but poor in up-to-date geospatial information and knowledge that are ready to be accessed by public users. This paper adopts an approach for querying spatial semantics by building a Web Ontology Language (OWL) ontology and introducing the SPARQL Protocol and RDF Query Language (SPARQL) to search spatial semantic relations. It is important to establish spatial semantics that support effective spatial reasoning for performing semantic queries. Compared to earlier keyword-based information retrieval techniques that rely on syntax, we use semantic approaches in our spatial query system. Semantic approaches need to be developed with an ontology, so we use OWL to describe spatial information extracted from the large-scale map of Wuhan. Spatial information expressed by an ontology with formal semantics is available to machines for processing and to people for understanding. The approach is illustrated by a case study of using SPARQL to query geo-spatial ontology instances of Wuhan. The paper shows that using SPARQL to search OWL ontology instances can ensure the results' accuracy and applicability. The results also indicate that constructing a geo-spatial semantic query system has a positive effect on spatial query and retrieval.
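    What a SPARQL basic graph pattern does over such an ontology can be mimicked with an in-memory triple set. The triples, predicates and the embedded query text below are hypothetical illustrations, not drawn from the Wuhan ontology.

```python
# A toy triple store and a basic-graph-pattern match, mimicking what a
# SPARQL query such as
#   SELECT ?f WHERE { ?f a :River . ?f :locatedIn :Wuhan . }
# would do over RDF/OWL instance data. All names are hypothetical.
triples = {
    ("YangtzeRiver", "type", "River"),
    ("YangtzeRiver", "locatedIn", "Wuhan"),
    ("EastLake", "type", "Lake"),
    ("EastLake", "locatedIn", "Wuhan"),
}

def query(triples, type_, place):
    """Return subjects of the given type located in the given place:
    the join of two triple patterns on a shared subject variable."""
    typed = {s for s, p, o in triples if p == "type" and o == type_}
    located = {s for s, p, o in triples if p == "locatedIn" and o == place}
    return sorted(typed & located)

print(query(triples, "River", "Wuhan"))  # ['YangtzeRiver']
```

    A real SPARQL engine additionally applies OWL reasoning, so that, for example, subclasses of the queried type also match.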

  10. Affective Computing Model for the Set Pair Users on Twitter

    Directory of Open Access Journals (Sweden)

    Chunying Zhang

    2013-01-01

    Full Text Available Affective computing is the computation of sentiment, how sentiment is generated, and the aspects that affect sentiment. However, different factors often cause uncertainty in the sentiment users express. Today Twitter, as a real-time and timely information medium, has become a better vehicle for users to express their own sentiment. Therefore, in view of the diverse forms in which Twitter information expresses sentiment, this paper constructs an affective computing model based on set pair theory. Starting from the differences in the constituent forms of Twitter, it analyzes and computes user sentiment, examining the positive, negative and uncertain emotion of users for a single tweet from the text, emoticons, picture information and other angles, consolidating the weights of the various parts of the emotional information, and building a hierarchical set pair affective computing model for Twitter users, to offer more useful data support for the relevant departments and businesses.

  11. Social media fundamentals, models, and ranking of user-generated content

    CERN Document Server

    Wyrwoll, Claudia

    2014-01-01

    The increasing amount of user-generated content available on social media platforms requires new methods to find, evaluate, and compare it. To this day, existing ranking approaches to user-generated content do not allow for evaluation across platforms by exploiting its metadata. User-generated content, such as blog postings, forum discussions, shared videos etc., does however contain information that can be used for its evaluation independent of specific search interests. Claudia Wyrwoll presents a query- and language-independent ranking approach that allows for global evaluation of user-genera

  12. Quantum associative memory with improved distributed queries

    CERN Document Server

    Njafa, J -P Tchapet; Woafo, Paul

    2012-01-01

    The paper proposes an improved quantum associative algorithm with distributed query based on the model proposed by Ezhov et al. We introduce two modifications of the query that optimize the retrieval of correct multi-patterns simultaneously, for any ratio of recognition patterns to the total number of patterns. Simulation results are given.

  13. Query recommendation for children

    NARCIS (Netherlands)

    Duarte Torres, Sergio; Hiemstra, Djoerd; Weber, Ingmar; Serdyukov, Pavel

    2012-01-01

    One of the biggest problems that children experience while searching the web occurs during the query formulation process. Children have been found to struggle formulating queries based on keywords given their limited vocabulary and their difficulty to choose the right keywords. In this work we propo

  14. WATERS Expert Query Tool

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Expert Query Tool is a web-based reporting tool using the EPA's WATERS database. There are just three steps to using Expert Query: 1. View Selection – Choose what...

  15. Mastering jQuery

    CERN Document Server

    Libby, Alex

    2015-01-01

    If you are a developer who is already familiar with using jQuery and wants to push your skill set further, then this book is for you. The book assumes an intermediate knowledge level of jQuery, JavaScript, HTML5, and CSS.

  16. A Minicomputer Implementation of a Data Model Independent, User-Friendly Interface to Databases.

    Science.gov (United States)

    1982-12-15

    tutorial showing simple examples. The command language is a non-procedural relational algebraic language which includes the most useful of the relational... to access and what his query is. AQERS will be a very useful, user-friendly tool.

  17. Restructuring Large Data Hierarchies for Scientific Query Tools

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, M

    2005-02-08

    Today's large-scale scientific simulations produce data sets tens to hundreds of terabytes in size. The DataFoundry project is developing querying and analysis tools for these data sets. The Approximate Ad-Hoc Query Engine for Simulation Data (AQSIM) uses a multi-resolution, tree-shaped data structure that allows users to place runtime limits on queries over scientific simulation data. In this AQSIM data hierarchy, each node in the tree contains an abstract model describing all of the information contained in the subtree below that node. AQSIM is able to create the data hierarchy in a single pass. However, the nodes in the hierarchy frequently have low node fanout, which leads to inefficient I/O behavior during query processing. Low node fanout is a common problem in tree-shaped indices. This paper presents a set of one-pass tree "pruning" algorithms that efficiently restructure the data hierarchy by removing inner nodes, thereby increasing node fanout. As our experimental results show, the best approach is a combination of two algorithms, one that focuses on increasing node fanout and one that attempts to reduce the maximum tree height.
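    The fanout-increasing idea described in this record can be sketched as follows; this is an illustrative reconstruction, not the paper's actual one-pass algorithms, and the `Node` class and `min_fanout` threshold are invented for the example.

    ```python
    # Hypothetical sketch: inner nodes whose fanout falls below a threshold
    # are removed and their children promoted into the parent's child list,
    # increasing fanout in the spirit of the pruning described above.

    class Node:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []

    def prune_low_fanout(node, min_fanout=3):
        """Recursively remove inner nodes with fewer than min_fanout
        children, splicing their children into the parent's child list."""
        new_children = []
        for child in node.children:
            prune_low_fanout(child, min_fanout)
            if child.children and len(child.children) < min_fanout:
                new_children.extend(child.children)  # promote grandchildren
            else:
                new_children.append(child)           # leaf or wide node kept
        node.children = new_children
        return node
    ```

    Applied to a hierarchy where an inner node has only two children, the node is spliced out and its children attach directly to the root, reducing the number of I/O hops per query path.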

  18. Optimization of Dynamically Generated SQL Queries for Tiny-Huge, Huge-Tiny Problem

    Directory of Open Access Journals (Sweden)

    Arjun K Sirohi

    2013-03-01

    Full Text Available In most new commercial business software applications, like Customer Relationship Management, the data is stored in the database layer, which is usually a Relational Database Management System (RDBMS) like Oracle, DB2 UDB or SQL Server. To access data from these databases, Structured Query Language (SQL) queries are used that are generated dynamically at run time based on defined business models and business rules. One such business rule is visibility: the capability of the application to restrict data access based on the role and responsibility of the user logged in to the application. This is generally achieved by appending security predicates in the form of sub-queries to the main query, based on the roles and responsibility of the user. In some cases the outer query may be more restrictive, while in other cases the security predicates may be more restrictive. This often results in a dilemma for the cost-based optimizer (CBO) of the backend database: whether to drive from the outer query or from the security predicate sub-queries. This dilemma is sometimes called the "Tiny-Huge, Huge-Tiny" problem and results in serious performance degradation by way of increased response times on the application User Interface (UI). This paper provides a case study of a new approach that vastly reduces this CBO dilemma by a combination of denormalized columns and re-writing of the security predicates' sub-queries at run time, thereby levelling the outer and security sub-queries. This approach results in more stable execution plans in the database and much better performance of such SQLs, effectively leading to higher performance and scalability of the application.
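    The "levelling" of outer and security sub-queries can be illustrated with a toy example; the table and column names below are invented, and the rewrite shown is a generic flattening of an `IN` sub-query into a join, not the paper's exact run-time rewrite.

    ```python
    # Shape of a dynamically generated query with an appended security
    # predicate sub-query (all identifiers are hypothetical):
    original = """
    SELECT o.id, o.amount
    FROM orders o
    WHERE o.region = 'EMEA'
      AND o.owner_id IN (SELECT user_id FROM visibility WHERE role = 'SALES_REP')
    """

    # Levelled form: the security sub-query becomes a join, so the
    # cost-based optimizer can choose a driving table from either side
    # instead of being forced to pick between the outer query and the
    # sub-query.
    levelled = """
    SELECT o.id, o.amount
    FROM orders o
    JOIN visibility v ON v.user_id = o.owner_id
    WHERE o.region = 'EMEA'
      AND v.role = 'SALES_REP'
    """
    ```

    In the first shape, the optimizer must guess whether the outer filter or the security predicate is more selective; in the second, both predicates sit at the same level of the plan.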

  19. ODQ: A Fluid Office Document Query Language

    Directory of Open Access Journals (Sweden)

    Xuhong Liu

    2015-06-01

    Full Text Available Fluid office documents, as semi-structured data often represented in Extensible Markup Language (XML), are important parts of Big Data. These office documents have different formats, and their matching Application Programming Interfaces (APIs) depend on the development platform and version, which makes custom development and information retrieval from them difficult. To solve this problem, we have been developing an office document query (ODQ) language which provides a uniform method to retrieve content from documents with different formats and versions. ODQ builds a common document model ontology to conceal the format details of documents and provides a uniform operation interface to handle office documents with different formats. The results show that ODQ has advantages in format independence and can help users develop document processing systems with good interoperability.

  20. Enabling Ontology Based Semantic Queries in Biomedical Database Systems.

    Science.gov (United States)

    Zheng, Shuai; Wang, Fusheng; Lu, James

    2014-03-01

    There is a lack of tools to ease the integration of ontology based semantic queries in biomedical databases, which are often annotated with ontology concepts. We aim to provide a middle layer between ontology repositories and semantically annotated databases to support semantic queries directly in the databases with expressive standard database query languages. We have developed a semantic query engine that provides semantic reasoning and query processing, and translates the queries into ontology repository operations on NCBO BioPortal. Semantic operators are implemented in the database as user defined functions extended to the database engine, thus semantic queries can be directly specified in standard database query languages such as SQL and XQuery. The system provides caching management to boost query performance. The system is highly adaptable to support different ontologies through easy customizations. We have implemented the system DBOntoLink as open source software, which supports major ontologies hosted at BioPortal. DBOntoLink supports a set of common ontology based semantic operations and has them fully integrated with the IBM DB2 database management system. The system has been deployed and evaluated with an existing biomedical database for managing and querying image annotations and markups (AIM). Our performance study demonstrates the high expressiveness of semantic queries and the high efficiency of the queries.
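    The kind of semantic operator the record describes can be sketched in miniature; the paper implements such operators as database UDFs backed by NCBO BioPortal, whereas the toy ontology and function name below are invented for illustration.

    ```python
    # Minimal sketch of a "get all subclasses" semantic operator used to
    # expand a query concept before matching annotations.

    ONTOLOGY = {  # child -> parent ("is-a" edges), invented example terms
        "adenocarcinoma": "carcinoma",
        "carcinoma": "neoplasm",
        "melanoma": "neoplasm",
    }

    def subclasses(concept):
        """Return the concept plus all of its transitive subclasses."""
        result = {concept}
        changed = True
        while changed:
            changed = False
            for child, parent in ONTOLOGY.items():
                if parent in result and child not in result:
                    result.add(child)
                    changed = True
        return result

    # A semantic query such as "annotation IN subclasses('neoplasm')"
    # would then match rows annotated with any of these concepts:
    terms = subclasses("neoplasm")
    ```

    In the real system this expansion would run inside the database engine as a user defined function, so the reasoning step is invisible to the SQL or XQuery author.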

  1. Query Adaptive Image Retrieval System

    Directory of Open Access Journals (Sweden)

    Amruta Dubewar

    2014-03-01

    Full Text Available Images play a crucial role in various fields such as art galleries, medicine, journalism and entertainment. The increasing use of image acquisition and data storage technologies has enabled the creation of large databases, so it is necessary to develop appropriate information management systems to efficiently manage these collections and to retrieve the required images from them. This paper proposes a query adaptive image retrieval system (QAIRS) to retrieve images similar to a query image specified by the user from a database. The goal of this system is to support image retrieval based on content properties such as colour and texture, usually encoded into feature vectors. In this system, colour features are extracted by techniques such as colour moments, colour histograms and autocorrelograms, and texture features are extracted using Gabor wavelets. Hashing is used to embed the high-dimensional image features into a Hamming space, where search can be performed by the Hamming distance of compact hash codes. Ranked by minimum Hamming distance, the system returns the images most similar to the query image.
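    The search step described above can be sketched as follows; the image features are assumed to be already hashed to compact binary codes, and the 8-bit codes here are toy values rather than output of the paper's hashing scheme.

    ```python
    # Rank database images by Hamming distance of their hash codes to the
    # query's hash code.

    def hamming(a, b):
        """Number of differing bits between two integer hash codes."""
        return bin(a ^ b).count("1")

    def nearest(query_code, db_codes):
        """Return (name, code) entries ordered by Hamming distance."""
        return sorted(db_codes, key=lambda item: hamming(query_code, item[1]))

    db = [("img1", 0b10110100), ("img2", 0b10110101), ("img3", 0b01001011)]
    ranked = nearest(0b10110111, db)  # img2 differs by 1 bit, img1 by 2
    ```

    Because the codes are compact integers, the XOR-and-popcount distance is cheap enough to scan large collections, which is the point of embedding features into a Hamming space.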

  2. Query-Based Outlier Detection in Heterogeneous Information Networks

    Science.gov (United States)

    Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei

    2015-01-01

    Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user’s search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks. PMID:27064397

  3. Goal Directed Relative Skyline Queries in Time Dependent Road Networks

    CERN Document Server

    Iyer, K B Priya

    2012-01-01

    Wireless GIS technology is progressing rapidly in the area of mobile communications, and location-based spatial queries are becoming an integral part of many new mobile applications. Skyline queries are among the latest applications of location-based services. In this paper we introduce Goal Directed Relative Skyline queries on Time dependent (GD-RST) road networks. The algorithm uses travel time as its metric in finding data objects, considering multiple query points (multi-source skyline) relative to the user's location and direction of travel. We design an efficient algorithm based on filter, heap and refine-skyline phases. Finally, we propose a dynamic skyline caching (DSC) mechanism which helps to reduce the computation cost of future skyline queries. The experimental evaluation shows the performance of the GD-RST algorithm over the traditional branch-and-bound algorithm for skyline queries on real road networks.
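    The dominance test at the heart of any skyline computation can be sketched as follows; each candidate object is represented by its vector of travel times to the multiple query points. The travel-time values are invented, and the filter/heap/refine-skyline pipeline of GD-RST itself is not reproduced here.

    ```python
    # Basic skyline over travel-time vectors (lower is better on every axis).

    def dominates(a, b):
        """a dominates b if it is no worse in every travel time and
        strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def skyline(candidates):
        """Keep candidates not dominated by any other candidate."""
        return [p for p in candidates
                if not any(dominates(q, p) for q in candidates)]

    # Travel times (minutes) from four data objects to two query points.
    objects = [(3, 4), (2, 5), (4, 4), (2, 6)]
    result = skyline(objects)  # (4, 4) and (2, 6) are dominated
    ```

    The quadratic scan shown here is what the filter and heap phases of a real skyline algorithm exist to avoid on large road networks.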

  4. QUERY RESPONSE TIME COMPARISON NOSQLDB MONGODB WITH SQLDB ORACLE

    Directory of Open Access Journals (Sweden)

    Humasak T. A. Simanjuntak

    2015-01-01

    Full Text Available There are currently two kinds of data storage: relational databases and non-relational databases. The two kinds of DBMS (Database Management System) differ in many aspects, such as query execution performance, scalability, reliability, and data storage structure. This study aims to compare the DBMS performance of Oracle, as a relational database, and MongoDB, as a non-relational database, in processing structured data. Experiments were conducted to compare the performance of the two DBMSs for insert, select, update and delete operations using both simple and complex queries on the Northwind database. To achieve the goal of the experiment, 18 queries were executed: 2 insert queries, 10 select queries, 2 update queries and 2 delete queries. The queries were executed through a .Net application built as an intermediary between the user and the databases. The experiments were run on tables with and without relations in Oracle and on embedded and non-embedded documents in MongoDB. The response times of the query executions were compared using statistical methods. The experiments show that query response times for select, insert and update operations are faster in MongoDB than in Oracle: MongoDB is 64.8% faster for select queries, 72.8% faster for insert queries and 33.9% faster for update queries. For delete queries, Oracle is 96.8% faster than MongoDB on tables with relations, but MongoDB is 83.8% faster than Oracle on tables without relations. For complex queries, Map Reduce in MongoDB is 97.6% slower than complex queries with aggregate functions in Oracle.

  5. Improve Performance of Data Warehouse by Query Cache

    Science.gov (United States)

    Gour, Vishal; Sarangdevot, S. S.; Sharma, Anand; Choudhary, Vinod

    2010-11-01

    The primary goal of a data warehouse is to free the information locked up in the operational database so that decision makers and business analysts can run queries, analysis and planning regardless of data changes in the operational database. As the number of queries is large, there is in certain cases a reasonable probability that the same query is submitted by one or multiple users at different times. Each time a query is executed, all the data in the warehouse is analyzed to generate its result. In this paper we study how a query cache improves the performance of a data warehouse and examine the common problems faced by data warehouse administrators, namely minimizing response time and improving overall query efficiency, particularly when the data warehouse is updated at regular intervals.
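    The caching-and-invalidation pattern described above can be sketched minimally; the warehouse is faked as a Python list and all names are illustrative, with the cache keyed by query identity and cleared on each regular warehouse refresh.

    ```python
    # Toy query cache: repeated identical queries are served from the
    # cache; a warehouse refresh invalidates all cached results.

    class CachedWarehouse:
        def __init__(self, data):
            self.data = data
            self.cache = {}
            self.scans = 0  # counts full evaluations against the warehouse

        def query(self, query_id, predicate):
            if query_id in self.cache:
                return self.cache[query_id]        # cache hit, no scan
            self.scans += 1                        # cache miss: full scan
            result = [row for row in self.data if predicate(row)]
            self.cache[query_id] = result
            return result

        def refresh(self, new_data):
            """Regular warehouse update: stale results must be discarded."""
            self.data = new_data
            self.cache.clear()

    wh = CachedWarehouse([1, 5, 9, 12])
    big = wh.query("gt4", lambda x: x > 4)        # scans the warehouse
    big_again = wh.query("gt4", lambda x: x > 4)  # served from cache
    ```

    The trade-off the paper highlights shows up in `refresh`: the more often the warehouse is updated, the more often the cache is emptied and the less benefit repeated queries gain.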

  6. Performance Oriented Query Processing In GEO Based Location Search Engines

    CERN Document Server

    Umamaheswari, M

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.

  7. An Investigation of the Integrated Model of User Technology Acceptance: Internet User Samples in Four Countries

    Science.gov (United States)

    Fusilier, Marcelline; Durlabhji, Subhash; Cucchi, Alain

    2008-01-01

    National background of users may influence the process of technology acceptance. The present study explored this issue with the new, integrated technology use model proposed by Sun and Zhang (2006). Data were collected from samples of college students in India, Mauritius, Reunion Island, and United States. Questionnaire methodology and…

  8. ConQuR-Bio: Consensus Ranking with Query Reformulation for Biological Data

    OpenAIRE

    2014-01-01

    International audience; This paper introduces ConQuR-Bio which aims at assisting scientists when they query public biological databases. Various reformulations of the user query are generated using medical terminologies. Such alternative reformulations are then used to rank the query results using a new consensus ranking strategy. The originality of our approach thus lies in using consensus ranking techniques within the context of query reformulation. The ConQuR-Bio system is able to query t...

  9. Cognitive Support using BDI Agent and Adaptive User Modeling

    DEFF Research Database (Denmark)

    Hossain, Shabbir

    2012-01-01

    a set of goals for attaining the objective of this thesis. The initial goal is to recognize the activities of the users in order to assess the user's need for support during the activity. However, one of the challenges of the recognition process is adaptability to variant user behaviour due to physical... for higher accuracy and reliability. The second goal focuses on the process of selecting the type of support required by the user, based on the user's aptitude for performing the activities. The capability model has been extracted from the International Classification of Functioning, Disability and Health (ICF), a well... Observable Markov Decision Process (POMDP) in the BDI agent is proposed in this thesis to handle the irrational behaviour of the user. The fourth goal represents the implementation of the research approaches and the validation of the system through experiments. The empirical results of the experiments...

  10. Analysis of Users Web Browsing Behavior Using Markov chain Model

    Directory of Open Access Journals (Sweden)

    Diwakar Shukla

    2011-03-01

    Full Text Available In the present days of growing information technology, many browsers are available for surfing and web mining, and a user may use any of them at a given time to reach a desired website. Every browser has a pre-defined level of popularity and reputation in the market. This paper considers a setup of only two browsers in a computer system, where the user prefers one and, if it fails, switches to the other. The behaviour of the user is modelled through a Markov chain procedure and the transition probabilities are calculated. Quitting browsing is treated as a parameter of variation over popularity. A graphical study explains the interrelationship between the user behaviour parameters and the browser market popularity parameters. If a company's rate is lowest in terms of browser failure and lowest in terms of quitting probability, then that company enjoys better popularity and a larger user proportion.
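    The two-browser setup described above can be sketched as a three-state Markov chain (browser A, browser B, and an absorbing Quit state); the transition probabilities below are invented for illustration, whereas the paper derives them from failure and quitting parameters.

    ```python
    # P[i][j] = probability of moving from state i to state j in one step.
    STATES = ["A", "B", "Quit"]
    P = [
        [0.70, 0.20, 0.10],  # from A: stay, switch on failure, quit
        [0.30, 0.60, 0.10],  # from B: switch back, stay, quit
        [0.00, 0.00, 1.00],  # Quit is absorbing
    ]

    def step(dist, P):
        """One Markov transition: new_j = sum_i dist_i * P[i][j]."""
        return [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]

    dist = [1.0, 0.0, 0.0]  # the user starts on browser A
    for _ in range(50):
        dist = step(dist, P)
    # Probability mass drains into the absorbing Quit state over time.
    ```

    With a 10% quitting probability from either browser, the surviving mass shrinks by a factor of 0.9 per step, which is how the quitting parameter shapes the long-run user proportion of each browser.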

  11. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
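    The pulse-matching idea above can be sketched with a Pearson correlation between the user-activity time series and a rectangular reference pulse of fixed width; the activity values and pulse width below are invented for illustration.

    ```python
    # Correlate a Replay-activity signal with a rectangular pulse placed
    # at a candidate local maximum of the activity.

    def pearson(x, y):
        """Pearson correlation coefficient of two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def reference_pulse(centre, width, length):
        """Discrete rectangular pulse of fixed width at `centre`."""
        return [1 if abs(t - centre) <= width // 2 else 0
                for t in range(length)]

    # Replay activity per second; the burst around t = 3..5 is a
    # candidate region of interest.
    activity = [0, 0, 1, 6, 7, 6, 1, 0, 0, 0]
    score_at_burst = pearson(activity, reference_pulse(4, 3, len(activity)))
    score_elsewhere = pearson(activity, reference_pulse(8, 3, len(activity)))
    ```

    A pulse aligned with the burst correlates strongly, while the same pulse placed over a quiet region does not, which is the criterion the method uses to flag semantic regions.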

  12. Animating the Web with jQuery

    Directory of Open Access Journals (Sweden)

    Asokan M

    2013-02-01

    Full Text Available World globalization and present-day technology are rapidly increasing the number of web users, and every website tries to attract them. Website creators and developers add different kinds of animation to their sites, and many software packages are available to create animation. jQuery can be used to create interactive and powerful web pages with animations. jQuery is a JavaScript library intended to make JavaScript programming easier and more fun. A JavaScript library is a complex JavaScript program that both simplifies difficult tasks and solves cross-browser problems. With jQuery, we can accomplish tasks in a single line of code. jQuery is used on millions of websites. This paper discusses the advantages and usage statistics of jQuery on the web. A complete procedure to create slider and banner plug-ins is also included. They are tested with different browsers.

  13. A Driving Behaviour Model of Electrical Wheelchair Users

    Directory of Open Access Journals (Sweden)

    S. O. Onyango

    2016-01-01

    Full Text Available In spite of the availability of powered wheelchairs, some users still experience steering challenges and manoeuvring difficulties that limit their capacity to navigate effectively. For such users, steering support and assistive systems may be very necessary, and for the assistance to be of value, the assistive control must adapt to the user's steering behaviour. This paper contributes to wheelchair steering improvement by modelling the steering behaviour of powered wheelchair users for integration into the control system. More precisely, the modelling is based on the improved Directed Potential Field (DPF) method for trajectory planning. The method has facilitated the formulation of a simple behaviour model that is also linear in parameters. To obtain steering data for parameter identification, seven individuals participated in driving the wheelchair in different virtual worlds on the augmented platform. The obtained data facilitated the estimation of user parameters using the ordinary least squares method, with satisfactory regression analysis results.

  14. Modeling Augmented Reality User Interfaces with SSIML/AR

    Directory of Open Access Journals (Sweden)

    Arnd Vitzthum

    2006-06-01

    Full Text Available Augmented Reality (AR technologies open up new possibilities especially for task-focused domains such as assembly and maintenance. However, it can be noticed that there is still a lack of concepts and tools for a structured AR development process and an application specification above the code level. To address this problem we introduce SSIML/AR, a visual modeling language for the abstract specification of AR applications in general and AR user interfaces in particular. With SSIML/AR, three different aspects of AR user interfaces can be described: The user interface structure, the presentation of relevant information depending on the user’s current task and the integration of the user interface with other system components. Code skeletons can be generated automatically from SSIML/AR models. This enables the seamless transition from the design level to the implementation level. In addition, we sketch how SSIML/AR models can be integrated in an overall AR development process.

  15. Discrete-query quantum algorithm for NAND trees

    CERN Document Server

    Childs, A M; Jordan, S P; Yeung, D; Childs, Andrew M.; Cleve, Richard; Jordan, Stephen P.; Yeung, David

    2007-01-01

    Recently, Farhi, Goldstone, and Gutmann gave a quantum algorithm for evaluating NAND trees that runs in time O(sqrt(N log N)) in the Hamiltonian query model. In this note, we point out that their algorithm can be converted into an algorithm using O(N^{1/2 + epsilon}) queries in the conventional quantum query model, for any fixed epsilon > 0.

  16. An outlook of the user support model to educate the users community at the CMS Experiment

    CERN Document Server

    Malik, Sudhir

    2011-01-01

    The CMS (Compact Muon Solenoid) experiment is one of the two large general-purpose particle physics detectors built at the LHC (Large Hadron Collider) at CERN in Geneva, Switzerland. The diverse collaboration, combined with a highly distributed computing environment and petabytes of data collected per year, makes CMS unlike any other High Energy Physics collaboration before. This presents new challenges in educating users, who come from different cultural, linguistic and social backgrounds, and bringing them up to speed to contribute to the physics analysis. CMS has been able to deal with this new paradigm by deploying a user support structure model that uses collaborative tools to educate users about the software, computing and physics tools specific to CMS. To carry out the user support mission worldwide, an LHC Physics Centre (LPC) was created a few years ago at Fermilab as a hub for US physicists. The LPC serves as a "brick and mortar" location for physics excellence for CMS physicists, where graduate and postgraduate scien...

  17. Precipitation-runoff modeling system; user's manual

    Science.gov (United States)

    Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.

    1983-01-01

    The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)

  18. Range-Clustering Queries

    OpenAIRE

    Abrahamsen, Mikkel; de Berg, Mark; Buchin, Kevin; Mehr, Mehran; Mehrabi, Ali D.

    2017-01-01

    In a geometric $k$-clustering problem the goal is to partition a set of points in $\mathbb{R}^d$ into $k$ subsets such that a certain cost function of the clustering is minimized. We present data structures for orthogonal range-clustering queries on a point set $S$: given a query box $Q$ and an integer $k \geq 2$, compute an optimal $k$-clustering for $S \cap Q$. We obtain the following results. We present a general method to compute a $(1+\epsilon)$-approximation to a range-clustering query, ...

  19. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  20. FastQuery: A Parallel Indexing System for Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Prabhat,

    2011-07-29

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit can significantly improve accesses to these datasets by augmenting the user data with indexes and other secondary information. However, a challenge is that the indexes assume the relational data model but the scientific data generally follows the array data model. To match the two data models, we design a generic mapping mechanism and implement an efficient input and output interface for reading and writing the data and their corresponding indexes. To take advantage of the emerging many-core architectures, we also develop a parallel strategy for indexing using threading technology. This approach complements our on-going MPI-based parallelization efforts. We demonstrate the flexibility of our software by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using data from a particle accelerator model and a global climate model. We also conducted a detailed performance study using these scientific datasets. The results show that FastQuery speeds up the query time by a factor of 2.5x to 50x, and it reduces the indexing time by a factor of 16 on 24 cores.
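    The array-to-relational mapping problem the record describes can be shown in miniature; the variable name, shape, and values below are invented, and real FastQuery of course maps HDF5/NetCDF arrays to bitmap indexes rather than Python lists.

    ```python
    # Expose an array-model variable (e.g. temperature[time][cell]) as
    # relational-style tuples so that an index over the "value" column can
    # answer range queries and return array coordinates.

    temperature = [
        [280.1, 281.5, 279.9],
        [282.0, 283.4, 281.1],
    ]

    # Relational view: one (time, cell, value) tuple per array element.
    rows = [(t, c, v)
            for t, timestep in enumerate(temperature)
            for c, v in enumerate(timestep)]

    # A range query "value > 281" then yields array coordinates:
    hits = [(t, c) for (t, c, v) in rows if v > 281]
    ```

    This is the essential shape of the mapping: the index engine sees a flat column of values, while query answers come back as coordinates into the original array.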

  1. Accomplishing Deterministic XML Query Optimization

    Institute of Scientific and Technical Information of China (English)

    Dun-Ren Che

    2005-01-01

    As the popularity of XML (eXtensible Markup Language) keeps growing rapidly, the management of XML-compliant structured-document databases has become a very interesting and compelling research area. Query optimization for XML structured documents stands out as one of the most challenging research issues in this area because of the much enlarged optimization (search) space, a consequence of the intrinsic complexity of the underlying data model of XML data. We therefore propose to apply deterministic transformations on query expressions to aggressively prune the search space and quickly achieve a sufficiently improved alternative (if not the optimal) for each incoming query expression. This idea is not just exciting but practically attainable. This paper first provides an overview of our optimization strategy, and then focuses on the key implementation issues of our rule-based transformation system for XML query optimization in a database environment. The performance results we obtained from experimentation show that our approach is a valid and effective one.

  2. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.

  3. Query term suggestion in academic search

    NARCIS (Netherlands)

    Verberne, S.; Sappelli, M.; Kraaij, W.

    2014-01-01

    In this paper, we evaluate query term suggestion in the context of academic professional search. Our overall goal is to support scientists in their information seeking tasks. We set up an interactive search system in which terms are extracted from clicked documents and suggested to the user before e

  4. Adapting Query Expansion to Search Proficiency

    NARCIS (Netherlands)

    C. Boscarino (Corrado); V. Hollink (Vera); A.P. de Vries (Arjen); B. Carterette; E. Kanoulas; P. Clough; M. Sanderson

    2012-01-01

    We argue that query expansion (QE) based on the full session improves the overall search experience, provided that we know how to adapt the QE weighting schema to a user's search proficiency. We propose a strategy to predict search ability from session parameters. Using an

  6. Exploiting Conceptual Knowledge for Querying Information Systems

    OpenAIRE

    Selke, Joachim; Balke, Wolf-Tilo

    2011-01-01

    Whereas today's information systems are well-equipped for efficient query handling, their strict mathematical foundations hamper their use for everyday tasks. In daily life, people expect information to be offered in a personalized and focused way. But currently, personalization in digital systems still only takes explicit knowledge into account and does not yet process conceptual information often naturally implied by users. We discuss how to bridge the gap between users and today's systems,...

  7. Exploiting Conceptual Knowledge for Querying Information Systems

    CERN Document Server

    Selke, Joachim

    2011-01-01

    Whereas today's information systems are well-equipped for efficient query handling, their strict mathematical foundations hamper their use for everyday tasks. In daily life, people expect information to be offered in a personalized and focused way. But currently, personalization in digital systems still only takes explicit knowledge into account and does not yet process conceptual information often naturally implied by users. We discuss how to bridge the gap between users and today's systems, building on results from cognitive psychology.

  8. Career Area Rotation Model: User's Manual.

    Science.gov (United States)

    Williams, Richard B.; And Others

    The Career Area Rotation Model (CAROM) was developed as a result of the need for a computer based model describing the rotation of airmen within a specific career area (occupational specialty) through various categories of tour duty, accommodating all policies and interactions which are relevant for evaluation purposes. CAROM is an entity…

  9. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the HTGR Application Economic Model. This model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
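Both calculations the model supports (required price for a target IRR, or IRR at the market price) reduce to root-finding on a net-present-value function. A minimal sketch of the IRR side, with hypothetical cash flows and no relation to the actual Excel/Visual Basic implementation:

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows (cashflows[0] at year 0)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """IRR by bisection on NPV; assumes NPV changes sign on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical plant: 1000 invested at year 0, 180/year revenue for 10 years
flows = [-1000.0] + [180.0] * 10
rate = irr(flows)
print(round(rate, 4))
```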

  10. A Hybrid Tool for User Interface Modeling and Prototyping

    Science.gov (United States)

    Trætteberg, Hallvard

    Although many model-based development methods have been proposed, they have been adopted for UI design only to some extent. In particular, they are not easy to combine with user-centered design methods. In this paper, we present a hybrid UI modeling and GUI prototyping tool, which is designed to fit better with IS development and UI design traditions. The tool includes a diagram editor for domain and UI models and an execution engine that integrates UI behavior, live UI components and sample data. Thus, both model-based user interface design and prototyping-based iterative design are supported.

  11. VIQI: A New Approach for Visual Interpretation of Deep Web Query Interfaces

    CERN Document Server

    Boughamoura, Radhouane; Omri, Mohamed Nazih

    2012-01-01

    Deep Web databases contain more than 90% of the pertinent information on the Web. Despite their importance, users do not profit from this treasure trove. Many deep web services offer competing services in terms of price, quality of service, and facilities. As the number of services grows rapidly, users find it difficult to query many web services at the same time. In this paper, we envision a system where users can formulate one query using one query interface, and the system then translates the query to the remaining query interfaces. However, because interfaces are created by designers to be interpreted visually by users, machines cannot interpret a query from a given interface. We propose a new approach which emulates the users' capacity for interpretation and extracts queries from deep web query interfaces. Our approach has shown good performance on two standard datasets.

  12. Ontology Based Queries - Investigating a Natural Language Interface

    NARCIS (Netherlands)

    van der Sluis, Ielka; Hielkema, F.; Mellish, C.; Doherty, G.

    2010-01-01

    In this paper we look at what may be learned from a comparative study examining non-technical users with a background in social science browsing and querying metadata. Four query tasks were carried out with a natural language interface and with an interface that uses a web paradigm with hyperlinks.

  13. On the Suitability of Skyline Queries for Data Exploration

    DEFF Research Database (Denmark)

    Chester, Sean; Mortensen, Michael Lind; Assent, Ira

    2014-01-01

    The skyline operator has been studied in database research for multi-criteria decision making. Until now the focus has been on the efficiency or accuracy of single queries. In practice, however, users are increasingly confronted with unknown data collections, where precise query formulation prove...

  14. METAPHOR (version 1): Users guide. [performability modeling

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  15. Localized Geometric Query Problems

    CERN Document Server

    Augustine, John; Maheshwari, Anil; Nandy, Subhas C; Roy, Sasanka; Sarvattomananda, Swami

    2011-01-01

    A new class of geometric query problems is studied in this paper. We are required to preprocess a set of geometric objects $P$ in the plane, so that for any arbitrary query point $q$, the largest circle that contains $q$ but does not contain any member of $P$ can be reported efficiently. The geometric sets that we consider are point sets and boundaries of simple polygons.

  16. Querying JSON Streams

    OpenAIRE

    Bo, Yang

    2010-01-01

    A data stream management system (DSMS) is similar to a database management system (DBMS) but can search data directly in on-line streams. Using its mediator-wrapper approach, the extensible database system Amos II allows different kinds of distributed data resources to be queried. It has been extended with a stream datatype to query possibly infinite streams, which provides DSMS functionality. Nowadays, more and more web applications are starting to offer their services in JSON format, which is a te...

  17. Managing Time in Relational Databases How to Design, Update and Query Temporal Data

    CERN Document Server

    Johnston, Tom

    2010-01-01

    Managing Time in Relational Databases shows how to make the rich information content of bi-temporal data available to business users, while simplifying the design, maintenance and retrieval of that data. Metadata declarations eliminate the need to directly model temporal data. Temporal data maintenance is isolated in code that can be invoked to update bi-temporal data in any database and from any application program, across the enterprise. Anyone who can write queries against conventional data will be able to write queries against the bi-temporal data structures described in this book.
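The kind of query the book targets can be illustrated with a single valid-time dimension in SQLite. The table layout and column names below are a hypothetical simplification of bi-temporal design (which adds a second, transaction-time dimension):

```python
import sqlite3

# Rows carry a [valid_from, valid_to) period; an "as of" query picks the
# version of a row current on a given date.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE price (
    item TEXT, amount REAL, valid_from TEXT, valid_to TEXT)""")
con.executemany("INSERT INTO price VALUES (?, ?, ?, ?)", [
    ("widget", 9.99, "2009-01-01", "2009-07-01"),
    ("widget", 12.49, "2009-07-01", "9999-12-31"),
])
row = con.execute(
    """SELECT amount FROM price
       WHERE item = ? AND valid_from <= ? AND ? < valid_to""",
    ("widget", "2009-03-15", "2009-03-15")).fetchone()
print(row[0])  # → 9.99
```

The half-open period plus the `valid_from <= d < valid_to` predicate is what makes "as of" retrieval look like an ordinary query.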

  18. User's instructions for the erythropoiesis regulatory model

    Science.gov (United States)

    1978-01-01

    The model provides a method for analyzing some of the events that could account for the decrease in red cell mass observed in crewmen returning from space missions. The model is based on the premise that erythrocyte production is governed by the balance between oxygen supply and demand at a renal sensing site. Oxygen supply is taken to be a function of arterial oxygen tension, mean corpuscular hemoglobin concentration, oxy-hemoglobin carrying capacity, hematocrit, and blood flow. Erythrocyte destruction is based on the law of mass action. The instantaneous hematocrit value is derived by integrating changes in production and destruction rates and accounting for the degree of plasma dilution.
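The production/destruction balance can be illustrated with a toy mass-action integration. The equation and parameter values below are illustrative assumptions, not the model's actual formulation:

```python
def simulate_rbc(production, destruction_rate, v0, dt=0.1, days=100):
    """Toy red-cell-mass balance:
    dV/dt = production - destruction_rate * V  (mass-action destruction).
    Forward-Euler integration; the steady state is
    production / destruction_rate.
    """
    v = v0
    for _ in range(int(days / dt)):
        v += dt * (production - destruction_rate * v)
    return v

# With these made-up parameters the mass converges toward 2.0 / 0.05 = 40.0
print(round(simulate_rbc(production=2.0, destruction_rate=0.05, v0=30.0), 2))
```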

  19. Microwave scattering and emission models for users

    CERN Document Server

    Fung, Adrian K

    2009-01-01

    Today, microwave remote sensing has evolved into a valuable and economical tool for a variety of applications. It is used in a wide range of areas, from geological sensing, geographical mapping, and weather monitoring, to GPS positioning, aircraft traffic, and mapping of oil pollution over the sea surface. This unique resource provides you with practical scattering and emission data models that represent the interaction between electromagnetic waves and a scene on the Earth surface in the microwave region. The book helps you understand and apply these models to your specific work in the field.

  20. Visual graph query formulation and exploration: a new perspective on information retrieval at the edge

    Science.gov (United States)

    Kase, Sue E.; Vanni, Michelle; Knight, Joanne A.; Su, Yu; Yan, Xifeng

    2016-05-01

    Within operational environments decisions must be made quickly based on the information available. Identifying an appropriate knowledge base and accurately formulating a search query are critical tasks for decision-making effectiveness in dynamic situations. The spread of graph data management tools for accessing large graph databases is a rapidly emerging research area of potential benefit to the intelligence community. A graph representation provides a natural way of modeling data in a wide variety of domains. Graph structures use nodes, edges, and properties to represent and store data. This research investigates the advantages of information search by graph query, initiated by the analyst and interactively refined within the contextual dimensions of the answer space toward a solution. The paper introduces SLQ, a user-friendly graph querying system enabling the visual formulation of schemaless and structureless graph queries. SLQ is demonstrated with an intelligence analyst information search scenario focused on identifying individuals responsible for manufacturing a mosquito-hosted deadly virus. The scenario highlights the interactive construction of graph queries without prior training in complex query languages or graph databases, intuitive navigation through the problem space, and visualization of results in graphical format.

  1. Inverse Queries For Multidimensional Spaces

    CERN Document Server

    Bernecker, Thomas; Kriegel, Hans-Peter; Mamoulis, Nikos; Renz, Matthias; Zhang, Shiming; Züfle, Andreas

    2011-01-01

    Traditional spatial queries return, for a given query object $q$, all database objects that satisfy a given predicate, such as epsilon range and $k$-nearest neighbors. This paper defines and studies inverse spatial queries, which, given a subset of database objects $Q$ and a query predicate, return all objects which, if used as query objects with the predicate, contain $Q$ in their result. We first show a straightforward solution for answering inverse spatial queries for any query predicate. Then, we propose a filter-and-refinement framework that can be used to improve efficiency. We show how to apply this framework on a variety of inverse queries, using appropriate space pruning strategies. In particular, we propose solutions for inverse epsilon range queries, inverse $k$-nearest neighbor queries, and inverse skyline queries. Our experiments show that our framework is significantly more efficient than naive approaches.
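For intuition, the straightforward (pre-pruning) solution for one of these query types, inverse epsilon-range, is just a definition check over the database; `inverse_eps_range` and the data are hypothetical:

```python
import math

def inverse_eps_range(db, Q, eps):
    """Return the objects in db whose epsilon-range result would contain
    every point of Q (naive check; the paper's framework adds filtering
    and pruning on top of this). db and Q are lists of (x, y) points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [o for o in db if all(dist(o, q) <= eps for q in Q)]

db = [(0.0, 0.0), (5.0, 0.0), (1.0, 1.0)]
Q = [(0.5, 0.0), (0.0, 0.5)]
print(inverse_eps_range(db, Q, eps=1.0))  # → [(0.0, 0.0)]
```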

  2. A Graphical User Interface to Generalized Linear Models in MATLAB

    Directory of Open Access Journals (Sweden)

    Peter Dunn

    1999-07-01

    Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB, software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capabilities are also utilized in providing a number of simple residual diagnostic plots.

  3. Designing user models in a virtual cave environment

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. [Argonne National Lab., Idaho Falls, ID (United States); Hudson, R. [Argonne National Lab., IL (United States); Gokhale, N. [Madge Networks, San Jose, CA (United States)

    1995-12-31

    In this paper, the results of a first study into the use of virtual reality for human factor studies and design of simple and complex models of control systems, components, and processes are described. The objective was to design a model in a virtual environment that would reflect more characteristics of the user's mental model of a system and fewer of the designer's. The technology of a CAVE™ virtual environment and the methodology of Neuro Linguistic Programming were employed in this study.

  4. C++ Model Developer (CMD) User Guide

    Science.gov (United States)

    2005-04-01

    C++ Model Developer (CMD) is an open-source, C++ source code based environment for building simulations of systems described by time-based… (Other recoverable fragments of this record: a chapter on scalability, building a six degree-of-freedom (6DOF) simulation; authors Ray Sells and Michael Fennell, DESE Research, Inc.; Aviation and Missile Research, Development, and Engineering Center.)

  5. Modeling mutual feedback between users and recommender systems

    CERN Document Server

    Zeng, An; Medo, Matus; Zhang, Yi-Cheng

    2015-01-01

    Recommender systems daily influence our decisions on the Internet. While considerable attention has been given to issues such as recommendation accuracy and user privacy, the long-term mutual feedback between a recommender system and the decisions of its users has been neglected so far. We propose here a model of network evolution which allows us to study the complex dynamics induced by this feedback, including the hysteresis effect which is typical for systems with non-linear dynamics. Despite the popular belief that recommendation helps users to discover new things, we find that the long-term use of recommendation can contribute to the rise of extremely popular items and thus ultimately narrow the user choice. These results are supported by measurements of the time evolution of item popularity inequality in real systems. We show that this adverse effect of recommendation can be tamed by sacrificing part of short-term recommendation accuracy.

  6. Discrete Feature Model (DFM) User Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2008-06-15

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this…

  7. Earthquake Early Warning Beta Users: Java, Modeling, and Mobile Apps

    Science.gov (United States)

    Strauss, J. A.; Vinci, M.; Steele, W. P.; Allen, R. M.; Hellweg, M.

    2014-12-01

    Earthquake Early Warning (EEW) is a system that can provide a few to tens of seconds of warning prior to ground shaking at a user's location. The goal of such a system is to reduce or minimize the damage, costs, and casualties resulting from an earthquake. A demonstration earthquake early warning system (ShakeAlert) is undergoing testing in the United States by the UC Berkeley Seismological Laboratory, Caltech, ETH Zurich, University of Washington, the USGS, and beta users in California and the Pacific Northwest. The beta users receive earthquake information very rapidly in real-time and are providing feedback on their experiences of performance and potential uses within their organization. Beta user interactions allow the ShakeAlert team to discern: which alert delivery options are most effective, what changes would make the UserDisplay more useful in a pre-disaster situation, and most importantly, what actions users plan to take for various scenarios. Actions could include: personal safety approaches, such as drop, cover, and hold on; automated processes and procedures, such as opening elevator or fire station doors; or situational awareness. Users are beginning to determine which policy and technological changes may need to be enacted, and funding requirements to implement their automated controls. The use of models and mobile apps is beginning to augment the basic Java desktop applet. Modeling allows beta users to test their early warning responses against various scenarios without having to wait for a real event. Mobile apps are also changing the possible response landscape, providing other avenues for people to receive information. All of these combine to improve business continuity and resiliency.

  8. Modeling and evaluating user behavior in exploratory visual analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.; Leigh, Jason

    2016-10-01

    Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
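The Markov-chain formulation can be sketched by estimating transition probabilities from coded state sequences. The state names below are illustrative stand-ins, not the paper's actual coding scheme:

```python
from collections import Counter, defaultdict

def transition_probs(sequences):
    """Estimate first-order Markov transition probabilities from coded
    state sequences (states as deduced from transcripts, videos, logs)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

# Two hypothetical analysis sessions coded into mental/interaction/
# computational states
sessions = [["think", "interact", "compute", "think"],
            ["think", "interact", "think"]]
probs = transition_probs(sessions)
print(probs["interact"])  # → {'compute': 0.5, 'think': 0.5}
```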

  9. Using medical history embedded in biometrics medical card for user identity authentication: privacy preserving authentication model by features matching.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan

    2012-01-01

    Many forms of biometrics have been proposed and studied for biometrics authentication. Recently, researchers have been looking into longitudinal pattern matching that is based on more than just a single biometric; data from a user's activities are used to characterise the identity of the user. In this paper we advocate a novel type of authentication that uses a user's medical history, which can be electronically stored in a biometric security card. This is a sequel to our previous work on defining an abstract format of medical data to be queried and tested upon authentication. The challenge to overcome is preserving the user's privacy by choosing only the useful features from the medical data for use in authentication. The features should contain less sensitive elements, and they are implicitly related to the target illness. Therefore, exchanging questions and answers about a few carefully chosen features in an open channel would not easily or directly expose the illness, yet it can verify by inference whether the user has a record of it stored in his smart card. The design of a privacy preserving model by backward inference is introduced in this paper. Some live medical data are used in experiments for validation and demonstration.

  10. User-driven Cloud Implementation of environmental models and data for all

    Science.gov (United States)

    Gurney, R. J.; Percy, B. J.; Elkhatib, Y.; Blair, G. S.

    2014-12-01

    Environmental data and models come from disparate sources over a variety of geographical and temporal scales with different resolutions and data standards, often including terabytes of data and model simulations. Unfortunately, these data and models tend to remain solely within the custody of the private and public organisations which create the data, and the scientists who build models and generate results. Although many models and datasets are theoretically available to others, the lack of ease of access tends to keep them out of reach of many. We have developed an intuitive web-based tool that utilises environmental models and datasets located in a cloud to produce results that are appropriate to the user. Storyboards showing the interfaces and visualisations have been created for each of several exemplars. A library of virtual machine images has been prepared to serve these exemplars. Each virtual machine image has been tailored to run computer models appropriate to the end user. Two approaches have been used: first, RESTful web services conforming to the Open Geospatial Consortium (OGC) Web Processing Service (WPS) interface standard, using the Python-based PyWPS; second, a MySQL database interrogated using PHP code. In all cases, the web client sends the server an HTTP GET request to execute the process with a number of parameter values and, once execution terminates, an XML or JSON response is sent back and parsed at the client side to extract the results. All web services are stateless, i.e. application state is not maintained by the server, reducing its operational overheads and simplifying infrastructure management tasks such as load balancing and failure recovery. A hybrid cloud solution has been used with models and data sited on both private and public clouds.
The storyboards have been transformed into intuitive web interfaces at the client side using HTML, CSS and JavaScript, utilising plug-ins such as jQuery and Flot (for graphics), and Google Maps
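The stateless GET-then-parse interaction described above can be sketched as follows. The endpoint, process identifier, and input names are hypothetical; only the key-value shape follows the OGC WPS GET encoding:

```python
import json
from urllib.parse import urlencode

def wps_execute_url(base, process, inputs):
    """Build a WPS-style Execute request URL (key-value GET encoding)."""
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process,
        "datainputs": ";".join(f"{k}={v}" for k, v in inputs.items()),
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint and process
url = wps_execute_url("http://example.org/pywps",
                      "runoff_model", {"catchment": "thames", "year": 2012})
print(url)

# The client then parses the JSON (or XML) body returned by the server;
# this response body is made up for illustration:
response_body = '{"status": "succeeded", "peak_flow_m3s": 312.5}'
result = json.loads(response_body)
print(result["peak_flow_m3s"])  # → 312.5
```

Because each request carries everything the server needs, no session state has to be kept between calls, which is what makes load balancing and failure recovery straightforward.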

  11. Modeling Users, Context and Devices for Ambient Assisted Living Environments

    Science.gov (United States)

    Castillejo, Eduardo; Almeida, Aitor; López-de-Ipiña, Diego; Chen, Liming

    2014-01-01

    The participation of users within AAL environments is increasing thanks to the capabilities of current wearable devices. Furthermore, considering users' preferences, context conditions and devices' capabilities helps smart environments personalize services and resources for them. Being aware of the different characteristics of the entities participating in these situations is vital for reaching the main goals of the corresponding systems efficiently. To collect different information from these entities, it is necessary to design several formal models which help designers to organize and give some meaning to the gathered data. In this paper, we analyze several literature solutions for modeling users, context and devices, considering different approaches in the Ambient Assisted Living domain. In addition, we highlight several ongoing standardization efforts in this area. We also discuss the techniques used, the characteristics modeled, and the advantages and drawbacks of each approach, to finally draw several conclusions about the reviewed works. PMID:24643006

  12. Music Information Seeking Behaviour Poses Unique Challenges for the Design of Information Retrieval Systems. A Review of: Lee, J. H. (2010). Analysis of user needs and information features in natural language queries seeking music information. Journal of the American Society for Information Science and Technology, 61, 1025-1045.

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2010-12-01

    Objective – To better understand music information seeking behaviour in a real-life situation and to create a taxonomy relating to this behaviour to facilitate better comparison of music information retrieval studies in the future. Design – Content analysis of natural language queries. Setting – Google Answers, a fee-based online service. Subjects – 1,705 queries and their related answers and comments posted in the music category of the Google Answers website before April 27, 2005. Methods – A total of 2,208 queries were retrieved from the music category on the Google Answers service. Google Answers was a fee-based service in which users posted questions and indicated what they were willing to pay to have them answered. The queries selected for this study were posted prior to April 27, 2005, over a year before the service was discontinued completely. Of the 2,208 queries taken from the site, only 1,705 were classified as relevant to the question of music information seeking by the researcher. The off-topic queries were not included in the study. Each of the 1,705 queries was coded according to the needs expressed by the user and the information provided to assist researchers in answering the question. The initial coding framework used by the researcher was informed by previous studies of music information retrieval to facilitate comparison, but was expanded and revised to reflect the evidence itself. Only the questions themselves were subjected to this iterative coding process. The answers provided by the Google Answers researchers and online comments posted by other users were examined by the author, but not coded for inclusion in the study. User needs in the questions were coded for their form and topic. Each question was assigned at least one form and one topic. Form refers to the type of question being asked and consisted of the following 10 categories: identification, location, verification, recommendation, evaluation, ready reference…

  13. Automated Query Learning with Wikipedia and Genetic Programming

    CERN Document Server

    Malo, Pekka; Sinha, Ankur

    2010-01-01

    Most existing information retrieval systems are based on the bag-of-words model and are not equipped with common world knowledge. Work has been done towards improving the efficiency of such systems by using intelligent algorithms to generate search queries; however, not much research has been done in the direction of incorporating human- and society-level knowledge in the queries. This paper is one of the first attempts where such information is incorporated into search queries using Wikipedia semantics. The paper presents an essential shift from conventional token-based queries to concept-based queries, leading to an enhanced efficiency of information retrieval systems. To efficiently handle the automated query learning problem, we propose the Wikipedia-based Evolutionary Semantics (Wiki-ES) framework, where concept-based queries are learnt using a co-evolving evolutionary procedure. Learning concept-based queries using an intelligent evolutionary procedure yields significant improvement in performance whic...

  14. Empirical Study and Model of User Acceptance for Personalized Recommendation

    Directory of Open Access Journals (Sweden)

    Zheng Hua

    2013-02-01

    Personalized recommendation technology plays an important role in current e-commerce systems, but users' willingness to accept personalized recommendations, and its influencing factors, need to be studied. In this study, the Theory of Reasoned Action (TRA) and the Technology Acceptance Model (TAM) are used to construct a user acceptance model of personalized recommendation, which is tested empirically. The results show that perceived usefulness, perceived ease of use, subjective norms and trust have an impact on the acceptance of personalized recommendation.

  15. Mining the SDSS SkyServer SQL queries log

    Science.gov (United States)

    Hirota, Vitor M.; Santos, Rafael; Raddick, Jordan; Thakar, Ani

    2016-05-01

    SkyServer, the Internet portal for the Sloan Digital Sky Survey (SDSS) astronomical catalog, provides a set of tools that allows data access for astronomers and scientific education. One of SkyServer's data access interfaces allows users to enter ad-hoc SQL statements to query the catalog. SkyServer also presents some template queries that can be used as a basis for more complex queries. This interface has logged over 330 million queries submitted since 2001. It is expected that analysis of this data can be used to investigate usage patterns, identify potential new classes of queries, find similar queries, etc., and shed some light on how users interact with the Sloan Digital Sky Survey data and how scientists have adopted the new paradigm of e-Science, which could in turn lead to enhancements to the user interfaces and experience in general. In this paper we review some approaches to SQL query mining, apply the traditional techniques used in the literature and present lessons learned, namely, that the general text mining approach to feature extraction and clustering does not seem to be adequate for this type of data, and, most importantly, we find that this type of analysis can result in very different queries being clustered together.
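One concrete alternative to generic text mining for such logs is template normalization: strip the literals so that structurally identical queries collapse into one group. A minimal sketch, with illustrative regexes and made-up log entries:

```python
import re
from collections import Counter

def normalize(sql):
    """Reduce a SQL query to a template: lowercase, replace string and
    numeric literals with placeholders, collapse whitespace."""
    s = sql.lower()
    s = re.sub(r"'[^']*'", "'?'", s)        # string literals
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)  # numeric literals
    return re.sub(r"\s+", " ", s).strip()

log = [
    "SELECT ra, dec FROM PhotoObj WHERE r < 17.5",
    "select ra,  dec from photoobj where r < 19",
    "SELECT name FROM SpecObj WHERE z > 0.1",
]
templates = Counter(normalize(q) for q in log)
print(templates.most_common(1))
```

The first two queries differ only in casing, spacing and the magnitude cutoff, so they share a template; counting templates rather than raw strings is one way to surface usage patterns.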

  16. Predicting user click behaviour in search engine advertisements

    Science.gov (United States)

    Daryaie Zanjani, Mohammad; Khadivi, Shahram

    2015-10-01

    According to the specific requirements and interests of users, search engines select and display advertisements that match user needs and have a higher probability of attracting users' attention, based on their previous search history. New objects such as a user, an advertisement or a query cause a deterioration of precision in targeted advertising because they lack history. This article addresses this challenge. For a new object, we first find similar observed objects and then use their history as the history of the new object. Similarity between objects is measured by correlation, the relation between a user and an advertisement when the advertisement is displayed to the user. This method is applied to all objects, which helps us accurately select relevant advertisements for users' queries. In our proposed model, we assume that similar users behave in a similar manner. We find that users with few queries are similar to new users. We show that the correlation between users and advertisements' keywords is high; thus, users who pay attention to advertisements' keywords click similar advertisements. In addition, users who pay attention to specific brand names may behave similarly too.
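    The core idea, borrowing the history of similar observed users when a new user has none, can be sketched as follows. The click data, the cosine similarity measure and the parameter k are illustrative assumptions, not the paper's actual model.

```python
import math

# user -> {advertisement keyword: clicks}; an invented toy history
history = {
    "u1": {"shoes": 5, "laptop": 1},
    "u2": {"shoes": 4, "bag": 2},
    "u3": {"laptop": 6, "phone": 3},
}

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def borrowed_history(new_user_clicks, k=1):
    """A new user has little history, so rank observed users by
    similarity and reuse the top-k users' history as a stand-in."""
    ranked = sorted(history, reverse=True,
                    key=lambda u: cosine(history[u], new_user_clicks))
    merged = {}
    for u in ranked[:k]:
        for kw, c in history[u].items():
            merged[kw] = merged.get(kw, 0) + c
    return merged

proxy = borrowed_history({"shoes": 1})
```

    Here the new user's single "shoes" click makes u1 the closest neighbour, so u1's history stands in for the new user's when ranking advertisements.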

  17. Designing visual displays and system models for safe reactor operations based on the user's perspective of the system

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of the integration of the user interface at the information process level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that the designers understand how each user's processing characteristics affect how the user gathers information, and how the user communicates the information to the designer and other users. A different type of approach to achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP and its use in expanding design choices from the user's "model of the world," in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more.

  18. Query log analysis of an electronic health record search engine.

    Science.gov (United States)

    Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A

    2011-01-01

    We analyzed a longitudinal collection of query logs of a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over a course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers, regarding patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of the information needs manifested through the queries, as well as temporal patterns of the users' information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. We therefore envision a significant challenge, along with significant opportunities, in providing intelligent query recommendations to facilitate information retrieval in EHRs.
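    The kind of descriptive session statistics such a log study reports can be computed along these lines; the toy log records below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# (user, session_id, query) records; an invented toy EHR query log
log = [
    ("dr_a", 1, "diabetes"),
    ("dr_a", 1, "metformin dose"),
    ("dr_b", 2, "chest pain"),
    ("dr_b", 3, "troponin"),
    ("dr_b", 3, "ecg st elevation"),
]

# group queries by (user, session) to get per-session behavior
sessions = defaultdict(list)
for user, sid, query in log:
    sessions[(user, sid)].append(query)

n_queries = len(log)
n_users = len({user for user, _, _ in log})
n_sessions = len(sessions)
avg_queries_per_session = mean(len(qs) for qs in sessions.values())
avg_terms_per_query = mean(len(q.split()) for _, _, q in log)
```

    The same aggregation scales to the paper's 202,905 queries; only the grouping key and the statistics reported would grow.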

  19. A mixing evolution model for bidirectional microblog user networks

    Science.gov (United States)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of a microblog user approximately follows a lognormal distribution. We then build two microblog user networks from real bidirectional relationships, both of which have not only small-world and scale-free properties but also some special ones, such as a double power-law degree distribution, disassortativity, and hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community size distributions follow an exponential distribution. Based on this empirical analysis, we present a novel evolving network model with mixed connection rules: lognormal-fitness preferential and random attachment, nearest-neighbor interconnection within the same community, and global random association across different communities. Simulation results show that our model is consistent with the real networks in many topological features.
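    A minimal sketch of a growth model in this spirit, mixing lognormal-fitness preferential attachment with uniform random attachment, is shown below. The parameters and the exact mixing rule are invented for illustration, and the sketch omits the paper's community-level rules.

```python
import random

def grow_network(n_nodes=200, m=2, p_random=0.2, seed=7):
    """Grow a network: each new node attaches to m existing nodes,
    chosen either uniformly at random (with prob. p_random) or with
    probability proportional to lognormal fitness times degree."""
    rng = random.Random(seed)
    fitness = [rng.lognormvariate(0.0, 1.0) for _ in range(n_nodes)]
    degree = [0] * n_nodes
    edges = set()
    for new in range(m, n_nodes):
        targets = set()
        while len(targets) < m:
            if rng.random() < p_random or new <= m:
                t = rng.randrange(new)           # random attachment
            else:                                # fitness-preferential
                weights = [fitness[i] * (degree[i] + 1) for i in range(new)]
                t = rng.choices(range(new), weights=weights)[0]
            targets.add(t)
        for t in targets:
            edges.add((min(new, t), max(new, t)))
            degree[new] += 1
            degree[t] += 1
    return degree, edges

degree, edges = grow_network()
```

    Each of the 198 arriving nodes contributes exactly m = 2 edges, and the fitness-times-degree weighting concentrates links on a few hubs, giving the heavy-tailed degrees the empirical networks exhibit.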

  20. pGQL: A probabilistic graphical query language for gene expression time courses

    Directory of Open Access Journals (Sweden)

    Schilling Ruben

    2011-04-01

    Full Text Available Abstract Background Timeboxes are graphical user interface widgets that were proposed to specify queries on time course data. As queries can be defined very easily, exploratory analysis of time course data is greatly facilitated. While timeboxes are effective, they have no provisions for dealing with noisy data or data with fluctuations along the time axis, which is very common in many applications. In particular, this is true for the analysis of gene expression time courses, which are mostly derived from noisy microarray measurements at few unevenly sampled time points. From a data mining point of view the robust handling of data through a sound statistical model is of great importance. Results We propose probabilistic timeboxes, which correspond to a specific class of Hidden Markov Models (HMMs), an established method in data mining. Since HMMs are a particular class of probabilistic graphical models, we call our method the Probabilistic Graphical Query Language. It is implemented in the free software package pGQL. We evaluate its effectiveness in exploratory analysis on a yeast sporulation data set. Conclusions We introduce a new approach to defining dynamic, statistical queries on time course data. It supports interactive exploration of reasonably large amounts of data and enables users without expert knowledge to specify fairly complex statistical models with ease. By its statistical nature, our approach is more expressive and more robust to amplitude and frequency fluctuations than the prior, deterministic timeboxes.
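    A probabilistic timebox can be read as a small HMM with Gaussian emissions, scored against a time course with the forward algorithm. The sketch below is a generic forward-algorithm illustration with invented parameters, not pGQL's actual model or API: a two-state left-to-right HMM ("low" then "high" expression) assigns a higher likelihood to a rising profile than to a flat one.

```python
import math

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def forward_loglik(obs, states, start, trans):
    """Log-likelihood of a time course under an HMM whose states emit
    Gaussian values; states is a list of (mu, sigma) pairs."""
    n = len(states)
    alpha = [math.log(start[i]) + gauss_logpdf(obs[0], *states[i])
             if start[i] > 0 else -math.inf
             for i in range(n)]
    for x in obs[1:]:
        alpha = [math.log(sum(math.exp(alpha[i]) * trans[i][j]
                              for i in range(n) if alpha[i] > -math.inf))
                 + gauss_logpdf(x, *states[j])
                 for j in range(n)]
    return math.log(sum(math.exp(a) for a in alpha))

# Two-state left-to-right "timebox": low expression, then high.
states = [(0.0, 1.0), (5.0, 1.0)]      # (mean, std) per state
trans = [[0.6, 0.4], [0.0, 1.0]]       # can only move low -> high
start = [1.0, 0.0]                     # must start in the low state
ll_rising = forward_loglik([0.0, 0.1, 5.0, 5.0], states, start, trans)
ll_flat = forward_loglik([0.0, 0.1, 0.2, 0.1], states, start, trans)
```

    Ranking genes by such a log-likelihood, instead of testing whether a trajectory passes through a hard box, is what makes the query robust to noise.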

  1. H2A Production Model, Version 2 User Guide

    Energy Technology Data Exchange (ETDEWEB)

    Steward, D.; Ramsden, T.; Zuboy, J.

    2008-09-01

    The H2A Production Model analyzes the technical and economic aspects of central and forecourt hydrogen production technologies. Using a standard discounted cash flow rate of return methodology, it determines the minimum hydrogen selling price, including a specified after-tax internal rate of return from the production technology. Users have the option of accepting default technology input values--such as capital costs, operating costs, and capacity factor--from established H2A production technology cases or entering custom values. Users can also modify the model's financial inputs. This new version of the H2A Production Model features enhanced usability and functionality. Input fields are consolidated and simplified. New capabilities include performing sensitivity analyses and scaling analyses to various plant sizes. This User Guide helps users already familiar with the basic tenets of H2A hydrogen production cost analysis get started using the new version of the model. It introduces the basic elements of the model then describes the function and use of each of its worksheets.
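    The discounted-cash-flow idea behind the minimum hydrogen selling price can be illustrated as solving for the price at which the project's net present value is zero. All numbers below are invented, not H2A default inputs, and the sketch ignores the taxes and depreciation the real model handles.

```python
def minimum_selling_price(capex, annual_opex, annual_kg, years, rate):
    """Price per kg of hydrogen at which the project's net present
    value is zero: discounted revenue covers capital expenditure plus
    discounted operating costs."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    pv_opex = sum(annual_opex * d for d in disc)
    pv_kg = sum(annual_kg * d for d in disc)   # discounted production
    return (capex + pv_opex) / pv_kg

# Invented plant: $50M capex, $2M/yr opex, 5M kg/yr, 20 years, 10% IRR.
price = minimum_selling_price(capex=50e6, annual_opex=2e6,
                              annual_kg=5e6, years=20, rate=0.10)
```

    Discounting future production by the target rate of return is what builds the specified after-tax internal rate of return into the resulting price.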

  2. jQuery Mobile

    CERN Document Server

    Reid, Jon

    2011-01-01

    Native apps have distinct advantages, but the future belongs to mobile web apps that function on a broad range of smartphones and tablets. Get started with jQuery Mobile, the touch-optimized framework for creating apps that look and behave consistently across many devices. This concise book provides HTML5, CSS3, and JavaScript code examples, screen shots, and step-by-step guidance to help you build a complete working app with jQuery Mobile. If you're already familiar with the jQuery JavaScript library, you can use your existing skills to build cross-platform mobile web apps right now. This b

  3. XPath Whole Query Optimization

    CERN Document Server

    Maneth, Sebastian

    2010-01-01

    Previous work reported on SXSI, a fast XPath engine which executes tree automata over compressed XML indexes. Here, we investigate the reasons why SXSI is so fast. We show that tree automata can be used as a general framework for fine-grained XML query optimization. We define the "relevant nodes" of a query as those nodes that a minimal automaton must touch in order to answer the query. This notion allows the engine to skip many subtrees during execution and, with the help of particular tree indexes, even to skip internal nodes of the tree. We efficiently approximate runs over relevant nodes by on-the-fly removal of alternation and non-determinism from (alternating) tree automata. We also introduce many implementation techniques which allow us to evaluate tree automata efficiently, even in the absence of special indexes. Through extensive experiments, we demonstrate the impact of the different optimization techniques.

  4. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for the customisation of evolvable software products in general and of enterprise resource planning (ERP) systems in particular. The concept is based on an initial empirical study of practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with the infamous upgrade problem: the conflict between the need for customisation and the need to upgrade ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to automatically detect potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: programmers working in this field are often not computer science specialists but rather domain experts. Hence, they require a simple language to express custom rules.

  5. Revisiting the Examination Hypothesis with Query Specific Position Bias

    CERN Document Server

    Gollapudi, Sreenivas

    2010-01-01

    Click-through rates (CTR) offer useful user feedback that can be used to infer the relevance of search results for queries. However, it is not very meaningful to look at the raw click-through rate of a search result, because the likelihood of a result being clicked depends not only on its relevance but also on the position in which it is displayed. One model of browsing behavior, the Examination Hypothesis [RDR07, Craswell08, DP08], states that each position has a certain probability of being examined and is then clicked based on the relevance of the search snippets. This is based on eye-tracking studies [Claypool01, GJG04] which suggest that users are less likely to view results in lower positions. Such a position-dependent variation in the probability of examining a document is referred to as position bias. Our main observation in this study is that the position bias tends to differ with the kind of information the user is looking for. This makes the position bias query specific. In...
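    Under the Examination Hypothesis, the observed click-through rate factorizes as CTR(pos) = P(examine pos) × relevance, so dividing raw CTR by an examination probability yields a position-debiased relevance estimate. The examination curve and counts below are invented; making the bias query-specific, as the paper proposes, would mean fitting a separate examination curve per class of query.

```python
def position_corrected_relevance(clicks, impressions, exam_prob):
    """Examination Hypothesis: CTR(pos) = exam_prob(pos) * relevance,
    so relevance is estimated as raw CTR divided by the probability
    that the position is examined at all."""
    return [(c / i) / e for c, i, e in zip(clicks, impressions, exam_prob)]

# Invented numbers: position 3 is examined only 30% of the time, so its
# lower raw CTR hides a result as relevant as the one in position 1.
exam_prob = [1.0, 0.6, 0.3]
clicks = [300, 150, 90]
impressions = [1000, 1000, 1000]
relevance = position_corrected_relevance(clicks, impressions, exam_prob)
```

    After correction, positions 1 and 3 turn out equally relevant despite a threefold gap in raw CTR, which is exactly the distortion position bias introduces.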

  6. Estimating Spoken Dialog System Quality with User Models

    CERN Document Server

    Engelbrecht, Klaus-Peter

    2013-01-01

    Spoken dialog systems have the potential to offer highly intuitive user interfaces, as they allow systems to be controlled using natural language. However, the complexity inherent in natural language dialogs means that careful testing of the system must be carried out from the very beginning of the design process.   This book examines how user models can be used to support such early evaluations in two ways:  by running simulations of dialogs, and by estimating the quality judgments of users. First, a design environment supporting the creation of dialog flows, the simulation of dialogs, and the analysis of the simulated data is proposed.  How the quality of user simulations may be quantified with respect to their suitability for both formative and summative evaluation is then discussed. The remainder of the book is dedicated to the problem of predicting quality judgments of users based on interaction data. New modeling approaches are presented, which process the dialogs as sequences, and which allow knowl...

  7. Trust Your Cloud Service Provider: User Based Crypto Model

    Directory of Open Access Journals (Sweden)

    Sitanaboina, Sri Lakshmi Parvathi

    2014-10-01

    Full Text Available In a Data Storage as a Service (STaaS) cloud computing environment, the equipment used for business operations can be leased from a single service provider along with the application, and the related business data can be stored on equipment provided by the same service provider. This type of arrangement can help a company save on hardware and software infrastructure costs, but storing the company’s data on the service provider’s equipment raises the possibility that important business information may be improperly disclosed to others [1]. Some researchers have suggested that user data stored on a service provider’s equipment must be encrypted [2]. Encrypting data prior to storage is a common method of data protection, and service providers may be able to build firewalls to ensure that the decryption keys associated with encrypted user data are not disclosed to outsiders. However, if the decryption key and the encrypted data are held by the same service provider, high-level administrators within the service provider could have access to both the decryption key and the encrypted data, presenting a risk of unauthorized disclosure of the user data. In this paper we provide a unique business model of cryptography in which crypto keys are distributed across the user and a trusted third party (TTP). With the adoption of such a model, the CSP insider attack, a form of misuse of valuable user data, can be guarded against.

  8. User's guide to the META-Net economic modeling system. Version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Lamont, A.

    1994-11-24

    In a market economy demands for commodities are met through various technologies and resources. Markets select the technologies and resources to meet these demands based on their costs. Over time, the competitiveness of different technologies can change due to the exhaustion of the resources they depend on, the introduction of newer, more efficient technologies, or even shifts in user demands. As this happens, the structure of the economy changes. The Market Equilibrium and Technology Assessment Network Modelling System, META-Net, has been developed for building and solving multi-period equilibrium models to analyze the shifts in the energy system that may occur as new technologies are introduced and resources are exhausted. META-Net allows a user to build and solve complex economic models. It models a market economy as a network of nodes representing resources, conversion processes, markets, and end-use demands. Commodities flow through this network from resources, through conversion processes and markets, to the end-users. META-Net then finds the multi-period equilibrium prices and quantities. The solution includes the prices and quantities demanded for each commodity, along with the capacity additions (and retirements) for each conversion process and the trajectories of resource extraction. Although the changes in the economy are largely driven by consumers' behavior and the costs of technologies and resources, they are also affected by various government policies. These can include constraints on prices and quantities, and various taxes and constraints on environmental emissions. META-Net can incorporate many of these mechanisms and evaluate their potential impact on the development of the economic system.
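    The core computation of such an equilibrium network is finding prices that clear each market. A one-market sketch with invented linear supply and demand curves, solved by bisection on excess demand:

```python
def equilibrium_price(supply, demand, lo=0.01, hi=100.0, tol=1e-9):
    """Bisection on excess demand: find the price where the quantity
    demanded equals the quantity supplied. Assumes demand is decreasing
    and supply is increasing in price."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid   # excess demand: the clearing price is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Toy single-commodity market with invented linear curves.
demand = lambda p: 100.0 - 4.0 * p   # quantity demanded at price p
supply = lambda p: 10.0 + 2.0 * p    # quantity supplied at price p
p_star = equilibrium_price(supply, demand)
```

    A network model like META-Net solves many coupled versions of this problem at once, across commodities and time periods, but each market node enforces the same clearing condition.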

  9. Users guide to REGIONAL-1: a regional assessment model

    Energy Technology Data Exchange (ETDEWEB)

    Davis, W.E.; Eadie, W.J.; Powell, D.C.

    1979-09-01

    A guide was prepared to allow a user to run the PNL long-range transport model, REGIONAL 1. REGIONAL 1 is a computer model set up to run atmospheric assessments on a regional basis. The model has the capability of being run in three modes for a single time period. The three modes are: (1) no deposition, (2) dry deposition, (3) wet and dry deposition. The guide provides the physical and mathematical basis used in the model for calculating transport, diffusion, and deposition for all three modes. Also the guide includes a program listing with an explanation of the listings and an example in the form of a short-term assessment for 48 hours. The purpose of the example is to allow a person who has past experience with programming and meteorology to operate the assessment model and compare his results with the guide results. This comparison will assure the user that the program is operating in a proper fashion.

  10. Using Partial Credit and Response History to Model User Knowledge

    Science.gov (United States)

    Van Inwegen, Eric G.; Adjei, Seth A.; Wang, Yan; Heffernan, Neil T.

    2015-01-01

    User modelling algorithms such as Performance Factors Analysis and Knowledge Tracing seek to determine a student's knowledge state by analyzing (among other features) right and wrong answers. Anyone who has ever graded an assignment by hand knows that some answers are "more wrong" than others; i.e. they display less of an understanding…

  11. User Modeling and Personalization in the Microblogging Sphere

    NARCIS (Netherlands)

    Gao, Q.

    2013-01-01

    Microblogging has become a popular mechanism for people to publish, share, and propagate information on the Web. The massive amount of digital traces that people have left in the microblogging sphere, creates new possibilities and poses challenges for user modeling and personalization. How can

  12. The Integrated Decision Modeling System (IDMS) User’s Manual

    Science.gov (United States)

    1991-05-01

    AL-TP-1991-0009; AD-A236 033. The Integrated Decision Modeling System (IDMS) User's Manual. Jonathan C. Fast, John N. Taylor, Metrica, Incorporated ... Looper. Performing organization: Metrica, Incorporated, 8301 Broadway, Suite 215, San

  13. Incentives for Delay-Constrained Data Query and Feedback in Mobile Opportunistic Crowdsensing

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2016-07-01

    Full Text Available In this paper, we propose effective data collection schemes that stimulate cooperation between selfish users in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first time from an intermediate user, the former replies to it and authorizes the latter as the owner of the reply. Different data providers can reply to the same query. When a user that owns a reply meets the query issuer that generates the query, it requests the query issuer to pay credits. The query issuer pays credits and provides feedback to the data provider, which gives the reply. When a user that carries a feedback meets the data provider, the data provider pays credits to the user in order to adjust its claimed expertise. Queries, replies and feedbacks can be traded between mobile users. We propose an effective mechanism to define rewards for queries, replies and feedbacks. We formulate the bargain process as a two-person cooperative game, whose solution is found by using the Nash theorem. To improve the credit circulation, we design an online auction process, in which the wealthy user can buy replies and feedbacks from the starving one using credits. We have carried out extensive simulations based on real-world traces to evaluate the proposed schemes.
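    The two-person cooperative game over a reply's credit price has a standard Nash bargaining solution: choose the price maximizing the product of both players' gains over the disagreement point. The utilities below (buyer's value, seller's cost, and a no-trade disagreement worth zero to each) are illustrative assumptions, not the paper's exact formulation.

```python
def nash_bargain_price(buyer_value, seller_cost, grid=10000):
    """Nash bargaining over the credit price of a reply: pick the price
    maximizing the product of both players' gains over the disagreement
    point (no trade, zero gain for each)."""
    best_p, best_prod = seller_cost, -1.0
    for i in range(grid + 1):
        p = seller_cost + (buyer_value - seller_cost) * i / grid
        prod = (buyer_value - p) * (p - seller_cost)  # buyer gain * seller gain
        if prod > best_prod:
            best_p, best_prod = p, prod
    return best_p

# With these invented utilities the Nash solution splits the surplus evenly.
price = nash_bargain_price(buyer_value=10.0, seller_cost=4.0)
```

    With linear utilities the maximizer of the Nash product is the midpoint between cost and value, i.e. an even split of the surplus; asymmetric utilities or disagreement points would shift the agreed price.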

  14. Incentives for Delay-Constrained Data Query and Feedback in Mobile Opportunistic Crowdsensing

    Science.gov (United States)

    Liu, Yang; Li, Fan; Wang, Yu

    2016-01-01

    In this paper, we propose effective data collection schemes that stimulate cooperation between selfish users in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first time from an intermediate user, the former replies to it and authorizes the latter as the owner of the reply. Different data providers can reply to the same query. When a user that owns a reply meets the query issuer that generates the query, it requests the query issuer to pay credits. The query issuer pays credits and provides feedback to the data provider, which gives the reply. When a user that carries a feedback meets the data provider, the data provider pays credits to the user in order to adjust its claimed expertise. Queries, replies and feedbacks can be traded between mobile users. We propose an effective mechanism to define rewards for queries, replies and feedbacks. We formulate the bargain process as a two-person cooperative game, whose solution is found by using the Nash theorem. To improve the credit circulation, we design an online auction process, in which the wealthy user can buy replies and feedbacks from the starving one using credits. We have carried out extensive simulations based on real-world traces to evaluate the proposed schemes. PMID:27455261

  15. KoralQuery -- A General Corpus Query Protocol

    DEFF Research Database (Denmark)

    Bingel, Joachim; Diewald, Nils

    2015-01-01

    The task-oriented and format-driven development of corpus query systems has led to the creation of numerous corpus query languages (QLs) that vary strongly in expressiveness and syntax. This is a severe impediment for the interoperability of corpus analysis systems, which lack a common protocol. In this paper, we present KoralQuery, a JSON-LD based general corpus query protocol, aiming to be independent of particular QLs, tasks and corpus formats. In addition to describing the system of types and operations that KoralQuery is built on, we exemplify the representation of corpus queries in the serialized...

  16. A NEW TOP-K CONDITIONAL XML PREFERENCE QUERIES

    Directory of Open Access Journals (Sweden)

    Shaikhah Alhazmi

    2014-09-01

    Full Text Available Preference querying technology is a very important issue in a variety of applications ranging from e-commerce to personalized search engines. Much recent research has been dedicated to this topic in the Artificial Intelligence and Database fields. Several formalisms allowing preference reasoning and specification have been proposed in the Artificial Intelligence domain. In the Database field, on the other hand, interest has focused mainly on extending standard Structured Query Language (SQL), and also eXtensible Markup Language (XML), with preference facilities in order to provide personalized query answering. More precisely, the interest in the database context focuses on the notion of the Top-k preference query and on the development of efficient methods for evaluating these queries. A Top-k preference query returns the k data tuples which are the most preferred according to the user’s preferences. Of course, Top-k preference query answering is closely dependent on the particular preference model underlying the semantics of the operators responsible for selecting the best tuples. In this paper, we consider Conditional Preference queries (CP-queries), where preferences are specified by a set of rules expressed in a logical formalism. We introduce Top-k conditional preference queries (Top-k CP-queries), and the operators BestK-Match and Best-Match for evaluating these queries are presented.

  17. EHR query language (EQL)--a query language for archetype-based health records.

    Science.gov (United States)

    Ma, Chunlan; Frankel, Heath; Beale, Thomas; Heard, Sam

    2007-01-01

    OpenEHR specifications have been developed to standardise the representation of an international electronic health record (EHR). The language used for querying EHR data is not as yet part of the specification. To fill this gap, Ocean Informatics has developed a query language currently known as EHR Query Language (EQL), a declarative language supporting queries on EHR data. EQL is neutral with respect to EHR systems, programming languages and system environments and depends only on the openEHR archetype model and semantics. Thus, in principle, EQL can be used in any archetype-based computational context. In the EHR context described here, particular queries mention concepts from the openEHR EHR Reference Model (RM). EQL can be used as a common query language for disparate archetype-based applications. With the use of a common RM, archetypes, and a companion query language such as EQL, semantic interoperability of EHR information is much closer. This paper introduces the EQL syntax and provides example clinical queries to illustrate it. Finally, current implementations and future directions are outlined.

  18. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A.B. [ed.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  19. A User Awareness Model for Averting Computer Threats

    Directory of Open Access Journals (Sweden)

    Fungai Bhunu Shava

    2014-09-01

    Full Text Available One of the major reasons why systems are susceptible to threats, in one way or another, is the lack of know-how and of a follow-up system for the education rendered to users, who in most cases are the weakest point of any perpetrated system attack. Security awareness should be viewed as a recurring exercise, as the attackers of systems never cease to explore better and easier ways to penetrate organisational systems. This paper presents a user awareness model based on a case study done in one of the corporate environments in Namibia. The proposed model is a product of the different key facets that are perceived to be detrimental to system security in any average organisational setup. One of the major contributions of the User Awareness Model (UAM) is a systematic and procedural assessment tool of user practice towards computer systems security, which can be applied to any organisational environment with the flexible option to vary the combination of security-needs variables.

  20. Fuzzy Distance-Based Range Queries over Uncertain Moving Objects

    Institute of Scientific and Technical Information of China (English)

    Yi-Fei Chen; Xiao-Lin Qin; Liang Liu; Bo-Han Li

    2012-01-01

    Data obtained from the real world are imprecise or uncertain due to the accuracy of positioning devices, updating protocols or characteristics of applications. On the other hand, users sometimes prefer to express their requests qualitatively, with vague conditions, and different parts of the search region are unequally important in some applications. We address the problem of efficiently processing fuzzy range queries over uncertain moving objects whose whereabouts in time are not known exactly, for which the basic syntax is: find objects always/sometimes near to the query issuer, with qualifying guarantees no less than a given threshold, during a given temporal interval. We model the location uncertainty of moving objects using probability density functions and describe the indeterminate boundary of the query range with a fuzzy set. We present the qualifying guarantee evaluation of objects, and propose pruning techniques based on the α-cut of the fuzzy set to shrink the search space efficiently. We also design rules to reject non-qualifying objects and validate qualifying objects in order to avoid unnecessary, costly numeric integrations in the refinement step. An extensive empirical study demonstrates the efficiency and effectiveness of the algorithms under various experimental settings.
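    A one-dimensional sketch of the qualifying guarantee: combine the object's location pdf with a fuzzy membership function for "near", and integrate their product. The Gaussian uncertainty, the trapezoidal membership and all parameters are invented for illustration; an α-cut prune would discard an object, like the far one here, whose entire support lies outside the cut.

```python
import math

def near_membership(d, full=5.0, zero=10.0):
    """Trapezoidal fuzzy 'near': 1 within `full`, falling to 0 at `zero`."""
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)

def qualifying_guarantee(mu, sigma, issuer=0.0, step=0.05, span=6.0):
    """P(object is 'near' the issuer) ≈ Σ pdf(x) * membership(|x - issuer|) * step,
    a grid approximation of the integral over mu ± span*sigma."""
    total = 0.0
    x = mu - span * sigma
    while x <= mu + span * sigma:
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += pdf * near_membership(abs(x - issuer)) * step
        x += step
    return total

g_close = qualifying_guarantee(mu=2.0, sigma=1.0)   # well inside 'near'
g_far = qualifying_guarantee(mu=20.0, sigma=1.0)    # outside the support
```

    The close object qualifies at almost any threshold, while the far object's guarantee is exactly zero, so pruning can reject it without any numeric integration at all.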

  1. Spatial Keyword Query Processing

    DEFF Research Database (Denmark)

    Chen, Lisi; Jensen, Christian S.; Wu, Dingming

    2013-01-01

    an all-around survey of 12 state-of-the-art geo-textual indices. We propose a benchmark that enables the comparison of spatial keyword query performance. We also report on the findings obtained when applying the benchmark to the indices, thus uncovering new insights that may guide index...

  2. Conceptual querying through ontologies

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik

    2009-01-01

    We present here an approach to conceptual querying where the aim is, given a collection of textual database objects or documents, to target an abstraction of the entire database content in terms of the concepts appearing in documents, rather than the documents in the collection. The approach is ...

  3. XIRAF: Ultimate Forensic Querying

    NARCIS (Netherlands)

    Alink, W.; Bhoedjang, R.; Vries, A.P. de; Boncz, P.A.

    2006-01-01

    This paper describes a novel, XML-based approach towards managing and querying forensic traces extracted from digital evidence. This approach has been implemented in XIRAF, a prototype system for forensic analysis. XIRAF systematically applies forensic analysis tools to evidence files (e.g., hard di

  4. Query Driven Visualization

    CERN Document Server

    Buddelmeijer, Hugo

    2011-01-01

    The request driven way of deriving data in Astro-WISE is extended to a query driven way of visualization. This allows scientists to focus on the science they want to perform, because all administration of their data is automated. This can be done over an abstraction layer that enhances control and flexibility for the scientist.

  5. Flexible Query Answering Systems

    DEFF Research Database (Denmark)

    -computer interaction. The special track covers some specific and, typically, newer fields, namely: environmental scanning for strategic early warning; generating linguistic descriptions of data; advances in fuzzy querying and fuzzy databases: theory and applications; fusion and ensemble techniques for on-line learning on data streams; and intelligent information extraction from texts....

  6. Flexible Query Answering Systems

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 12th International Conference on Flexible Query Answering Systems, FQAS 2017, held in London, UK, in June 2017. The 21 full papers presented in this book together with 4 short papers were carefully reviewed and selected from 43 submissions...

  7. Algebra-Based Optimization of XML-Extended OLAP Queries

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    is desirable. This report presents a complete foundation for such OLAP-XML federations. This includes a prototypical query engine, a simplified query semantics based on previous work, and a complete physical algebra which enables precise modeling of the execution tasks of an OLAP-XML query. Effective algebra-based and cost-based query optimization and implementation are also proposed, as well as the execution techniques. Finally, experiments with the prototypical query engine w.r.t. federation performance, optimization effectiveness, and feasibility suggest that our approach, unlike the physical integration...

  8. Learning via Query Synthesis

    KAUST Repository

    Alabdulmohsin, Ibrahim Mansour

    2017-05-07

    Active learning is a subfield of machine learning that has been successfully used in many applications. One of the main branches of active learning is query synthesis, where the learning agent constructs artificial queries from scratch in order to reveal sensitive information about the underlying decision boundary. It has found applications in areas such as adversarial reverse engineering, automated science, and computational chemistry. Nevertheless, the existing literature on membership query synthesis has generally focused on finite concept classes or toy problems, with limited extension to real-world applications. In this thesis, I develop two spectral algorithms for learning halfspaces via query synthesis. The first algorithm is a maximum-determinant convex optimization method, while the second is a Markovian method that relies on Khachiyan's classical update formulas for solving linear programs. The general theme of these methods is to construct an ellipsoidal approximation of the version space and to synthesize queries, afterward, via spectral decomposition. Moreover, I also describe how these algorithms can be extended to other settings, such as pool-based active learning. Having demonstrated that halfspaces can be learned quite efficiently via query synthesis, the second part of this thesis proposes strategies for mitigating the risk of reverse engineering in adversarial environments. One approach that can be used to render query synthesis algorithms ineffective is to implement a randomized response. In this thesis, I propose a semidefinite program (SDP) for learning a distribution of classifiers, subject to the constraint that any individual classifier picked at random from this distribution provides reliable predictions with high probability. This algorithm is then justified both theoretically and empirically. A second approach is to use a non-parametric classification method, such as similarity-based classification. In this

  9. QUESEM: Towards building a Meta Search Service utilizing Query Semantics

    Directory of Open Access Journals (Sweden)

    Neelam Duhan

    2011-01-01

    Full Text Available Current Web search engines are built to serve the needs of all users, independent of the special needs of any individual. Documents are returned by matching queries against available documents, with no emphasis on the semantics of the query. As a result, the returned result set is often very large and inaccurate, which increases user-perceived latency. In this paper, a Semantic Search Service is developed to help users gather relevant documents more efficiently than traditional Web search engines. The approach relies on online Web resources such as dictionary-based sites to retrieve the possible semantics of the query keywords, which are stored in a definition repository. The service works as a meta-layer above a keyword-based search engine: it generates sub-queries based on the different meanings of the user query, which in turn are sent to the keyword-based search engine to perform the Web search. This approach assists the user in finding the desired information content and improves search quality for certain types of complex queries. Experiments demonstrate its efficiency, as it reduces the search space.
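A minimal sketch of the meta-layer idea, assuming a toy in-memory sense repository in place of the dictionary-site lookup (the `senses` table and function names are hypothetical, not from the paper):

```python
# Hypothetical sense repository standing in for the dictionary-site lookup.
senses = {
    'jaguar': ['jaguar animal', 'jaguar car'],
    'python': ['python snake', 'python programming language'],
}

def generate_subqueries(query):
    """Expand each ambiguous keyword into one sub-query per sense;
    unambiguous queries pass through unchanged. The sub-queries would
    then be fanned out to a keyword-based search engine."""
    subqueries = [query]
    for word, meanings in senses.items():
        if word in query.split():
            subqueries = [q.replace(word, m)
                          for q in subqueries for m in meanings]
    return subqueries

print(generate_subqueries('fast jaguar'))
# ['fast jaguar animal', 'fast jaguar car']
```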

  10. A novel methodology for querying web images

    Science.gov (United States)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2005-01-01

    Ever since the advent of the Internet, there has been immense growth in the amount of image data available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improves on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that allows the user to enter not only the keyword for the topic-based search but also the scope in which the user wants to find the images. The second phase uses the query-by-example specification to perform a low-level content-based image match, retrieving a smaller set of results closer to the example image. Information related to image features is automatically extracted from the query image by the image processing system. A computationally inexpensive technique based on color features is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  11. Visualizing multidimensional query results using animation

    Science.gov (United States)

    Sawant, Amit P.; Healey, Christopher G.

    2008-01-01

    Effective representation of large, complex collections of information (datasets) presents a difficult challenge. Visualization is a solution that uses a visual interface to support efficient analysis and discovery within the data. Our primary goal in this paper is a technique that allows viewers to compare multiple query results representing user-selected subsets of a multidimensional dataset. We present an algorithm that visualizes multidimensional information along a space-filling spiral. Graphical glyphs that vary their position, color, and texture appearance are used to represent attribute values for the data elements in each query result. Guidelines from human perception allow us to construct glyphs that are specifically designed to support exploration, facilitate the discovery of trends and relationships both within and between data elements, and highlight exceptions. A clustering algorithm applied to a user-chosen ranking attribute bundles together similar data elements. This encapsulation is used to show relationships across different queries via animations that morph between query results. We apply our techniques to the MovieLens recommender system, to demonstrate their applicability in a real-world environment, and then conclude with a simple validation experiment to identify the strengths and limitations of our design, compared to a traditional side-by-side visualization.
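The space-filling spiral layout can be sketched as follows. The Archimedean spiral and the spacing parameter are one plausible reading of the description, not the authors' exact algorithm; glyph appearance (color, texture) is omitted.

```python
import math

def spiral_positions(n, spacing=1.0):
    """Place n glyphs along an Archimedean spiral r = spacing * theta,
    advancing by roughly constant arc length between glyphs so that
    ranked data elements fill the plane outward from the center."""
    positions = []
    theta = 0.0
    for _ in range(n):
        r = spacing * theta
        positions.append((r * math.cos(theta), r * math.sin(theta)))
        # ds ≈ r * dtheta, so step theta by ~1/r for unit arc spacing.
        theta += 1.0 / max(r, 1.0)
    return positions

pts = spiral_positions(50)
print(len(pts))  # 50
```

Because the radius grows monotonically with rank, elements that are adjacent in the ranking stay adjacent along the spiral, which is what makes morphing between query results readable.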

  12. User interface for ground-water modeling: Arcview extension

    Science.gov (United States)

    Tsou, M.-S.; Whittemore, D.O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  13. Google BigQuery analytics

    CERN Document Server

    Tigani, Jordan

    2014-01-01

    How to effectively use BigQuery, avoid common mistakes, and execute sophisticated queries against large datasets Google BigQuery Analytics is the perfect guide for business and data analysts who want the latest tips on running complex queries and writing code to communicate with the BigQuery API. The book uses real-world examples to demonstrate current best practices and techniques, and also explains and demonstrates streaming ingestion, transformation via Hadoop in Google Compute engine, AppEngine datastore integration, and using GViz with Tableau to generate charts of query results. In addit

  14. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...

  15. A USER PROTECTION MODEL FOR THE TRUSTED COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Marwan Ibrahim Alshar’e

    2014-01-01

    Full Text Available Information security presents a huge challenge for both individuals and organizations. The Trusted Computing Group (TCG) has introduced the Trusted Platform Module (TPM) as a solution for end users to ensure their privacy and confidentiality. TPM serves as the root of trust for systems and users by providing protected storage that is accessible only within the TPM, thus protecting computers against unwanted access. TPM is designed to prevent software attacks, with minimal consideration given to physical attacks; it therefore relies on PIN password identification to verify the physical presence of a user. The PIN password method is not an ideal user verification method. Evil Maid is one attack in which a piece of code is loaded and hidden in the boot loader before the TPM is loaded; the code then collects confidential information at the next boot and stores it or sends it to attackers via the network. A number of solutions have been proposed to address this problem, but most do not provide a sufficient level of protection for the TPM. In this study we introduce the TPM User Authentication Model (TPM-UAM), which could assist in protecting the TPM against physical attack and thus increase the security of the computer system. The proposed model was evaluated through a focus group discussion with a panel of experts, who confirmed that the model is sufficient to provide the expected level of protection and to assist in preventing physical attacks against the TPM.

  16. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  17. Analysing Query Model-Based Expert Search Method

    Institute of Scientific and Technical Information of China (English)

    饶建农; 孙宇轩

    2013-01-01

    After analysing the language model-based expert search method, we propose an expert search approach based on query modelling. The method establishes a new expert ranking scheme using query expansion techniques and term generation probabilities. A preliminary experimental evaluation on the enterprise track topics organised by the international Text REtrieval Conference (TREC) shows that this method can effectively improve the retrieval accuracy of expert search tasks.

  18. Virtual Solar Observatory Distributed Query Construction

    Science.gov (United States)

    Gurman, J. B.; Dimitoglou, G.; Bogart, R.; Davey, A.; Hill, F.; Martens, P.

    2003-01-01

    Through a prototype implementation (Tian et al., this meeting) the VSO has already demonstrated the capability of unifying geographically distributed data sources following the Web Services paradigm and utilizing mechanisms such as the Simple Object Access Protocol (SOAP). So far, four participating sites (Stanford, Montana State University, National Solar Observatory and the Solar Data Analysis Center) permit Web-accessible, time-based searches that allow browse access to a number of diverse data sets. Our latest work includes the extension of the simple, time-based queries to include numerous other searchable observation parameters. For VSO users, this extended functionality enables more refined searches. For the VSO, it is a proof of concept that more complex, distributed queries can be effectively constructed and that results from heterogeneous, remote sources can be synthesized and presented to users as a single, virtual data product.

  19. Clustering-based fragmentation and data replication for flexible query answering in distributed databases

    OpenAIRE

    Wiese, Lena

    2014-01-01

    One feature of cloud storage systems is data fragmentation (or sharding) so that data can be distributed over multiple servers and subqueries can be run in parallel on the fragments. On the other hand, flexible query answering can enable a database system to find related information for a user whose original query cannot be answered exactly. Query generalization is a way to implement flexible query answering on the syntax level. In this paper we study a clustering-based fragmentat...

  20. Wild Card Queries for Searching Resources on the Web

    CERN Document Server

    Rafiei, Davood

    2009-01-01

    We propose a domain-independent framework for searching and retrieving facts and relationships within natural language text sources. In this framework, an extraction task over a text collection is expressed as a query that combines text fragments with wild cards, and the query result is a set of facts in the form of unary, binary and general $n$-ary tuples. A significant feature of our querying mechanism is that, despite being both simple and declarative, it can be applied to a wide range of extraction tasks. A problem in querying natural language text, though, is that a user-specified query may not retrieve enough exact matches. Unlike term queries, which can be relaxed by removing some of the terms (as is done in search engines), removing terms from a wild card query without ruining its meaning is more challenging. Also, any query expansion has the potential to introduce false positives. In this paper, we address the problem of query expansion, and also analyze a few ranking alternatives to score the results and to r...
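A minimal sketch of the wild-card querying idea, assuming `*` as the wild-card symbol and a naive regex translation (the paper's actual matching mechanism is richer than this):

```python
import re

def wildcard_query(query, text):
    """Compile a wild-card query into a regex: each '*' becomes a
    capture group bound to a word sequence, and each match yields
    one tuple of bindings (a fact)."""
    pattern = re.escape(query).replace(r'\*', r'(\w[\w ]*?)')
    # Stop each binding at sentence punctuation or end of text.
    return re.findall(pattern + r'(?=[.,;]|$)', text)

facts = wildcard_query(
    '* invented the *',
    'Edison invented the phonograph. Bell invented the telephone.')
print(facts)  # [('Edison', 'phonograph'), ('Bell', 'telephone')]
```

Relaxing such a query is harder than dropping terms from a keyword query: removing the fragment `invented the` would leave nothing to anchor the two capture groups, which is the expansion problem the paper addresses.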

  1. Agile IT: Thinking in User-Centric Models

    Science.gov (United States)

    Margaria, Tiziana; Steffen, Bernhard

    We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole systems' life cycle. Based on the `One-Thing Approach', which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion, is no longer adequate for the bulk of application programming - in particular when it comes to heterogeneous, cross-organizational systems which must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm that puts the user process in the center of the development and the application expert in control of the process evolution.

  2. A User-Friendly Dark Energy Model Generator

    CERN Document Server

    Hinton, Kyle A; Huterer, Dragan

    2015-01-01

    We provide software with a graphical user interface to calculate the phenomenology of a wide class of dark energy models featuring multiple scalar fields. The user chooses a subclass of models and, if desired, initial conditions, or else a range of initial parameters for Monte Carlo. The code calculates the energy density of components in the universe, the equation of state of dark energy, and the linear growth of density perturbations, all as a function of redshift and scale factor. The output also includes an approximate conversion into the average equation of state, as well as the common $(w_0, w_a)$ parametrization. The code is available here: http://github.com/kahinton/Dark-Energy-UI-and-MC

  3. Five-Factor Model personality profiles of drug users

    Directory of Open Access Journals (Sweden)

    Crum Rosa M

    2008-04-01

    Full Text Available Abstract Background Personality traits are considered risk factors for drug use, and, in turn, psychoactive substances impact individuals' traits. Furthermore, there is increasing interest in developing treatment approaches that match an individual's personality profile. To advance our knowledge of the role of individual differences in drug use, the present study compares the personality profiles of tobacco, marijuana, cocaine, and heroin users and non-users using the wide-spectrum Five-Factor Model (FFM) of personality in a diverse community sample. Method Participants (N = 1,102; mean age = 57) were part of the Epidemiologic Catchment Area (ECA) program in Baltimore, MD, USA. The sample was drawn from a community with a wide range of socio-economic conditions. Personality traits were assessed with the Revised NEO Personality Inventory (NEO-PI-R), and psychoactive substance use was assessed with a systematic interview. Results Compared to never smokers, current cigarette smokers score lower on Conscientiousness and higher on Neuroticism. Similar, but more extreme, is the profile of cocaine/heroin users, who score very high on Neuroticism, especially Vulnerability, and very low on Conscientiousness, particularly Competence, Achievement-Striving, and Deliberation. By contrast, marijuana users score high on Openness to Experience, average on Neuroticism, but low on Agreeableness and Conscientiousness. Conclusion In addition to confirming high levels of negative affect and impulsive traits, this study highlights the links between drug use and low Conscientiousness. These links provide insight into the etiology of drug use and have implications for public health interventions.

  4. Simplified analytical model of penetration with lateral loading -- User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Young, C.W.

    1998-05-01

    The SAMPLL (Simplified Analytical Model of Penetration with Lateral Loading) computer code was originally developed in 1984 to realistically yet economically predict penetrator/target interactions. Since the code's inception, its use has spread throughout the conventional and nuclear penetrating weapons community. During the penetrator/target interaction, the resistance of the material being penetrated imparts both lateral and axial loads on the penetrator. These loads cause changes to the penetrator's motion (kinematics). SAMPLL uses empirically based algorithms, formulated from an extensive experimental database, to replicate the loads the penetrator experiences during penetration. The lateral loads resulting from angle of attack and trajectory angle of the penetrator are explicitly treated in SAMPLL. The loads are summed and the kinematics calculated at each time step. SAMPLL has been continually improved, and the current version, Version 6.0, can handle cratering and spall effects, multiple target layers, penetrator damage/failure, and complex penetrator shapes. Version 6 uses the latest empirical penetration equations, and also automatically adjusts the penetrability index for certain target layers to account for layer thickness and confinement. This report describes the SAMPLL code, including assumptions and limitations, and includes a user's guide.

  5. Regional Ionospheric Modelling for Single-Frequency Users

    Science.gov (United States)

    Boisits, Janina; Joldzic, Nina; Weber, Robert

    2016-04-01

    Ionospheric signal delays are a main error source in GNSS-based positioning. Thus, single-frequency receivers, which are frequently used nowadays, require additional ionospheric information to mitigate these effects. Within the Austrian Research Promotion Agency (FFG) project Regiomontan (Regional Ionospheric Modelling for Single-Frequency Users) a new and as realistic as possible model is used to obtain precise GNSS ionospheric signal delays. These delays will be provided to single-frequency users to significantly increase positioning accuracy. The computational basis is the Thin-Shell Model. For regional modelling a thin electron layer of the underlying model is approximated by a Taylor series up to degree two. The network used includes 22 GNSS Reference Stations in Austria and nearby. First results were calculated from smoothed code observations by forming the geometry-free linear combination. Satellite and station DCBs were applied. In a least squares adjustment the model parameters, consisting of the VTEC0 at the origin of the investigated area, as well as the first and the second derivatives of the electron content in longitude and latitude, were obtained with a temporal resolution of 1 hour. The height of the layer was kept fixed. The formal errors of the model parameters suggest an accuracy of the VTEC slightly better than 1TECU for a user location within Austria. In a further step, the model parameters were derived from sole phase observations by using a levelling approach to mitigate common range biases. The formal errors of this model approach suggest an accuracy of about a few tenths of a TECU. For validation, the Regiomontan VTEC was compared to IGS TEC maps depicting a very good agreement. Further, a comparison of pseudoranges has been performed to calculate the 'true' error by forming the ionosphere-free linear combination on the one hand, and by applying the Regiomontan model to L1 pseudoranges on the other hand. 
The resulting differences are mostly
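The degree-2 Taylor-series fit described in this record can be sketched as an ordinary least-squares problem; the coefficient values, offsets, and station count below are invented for illustration, not taken from the Regiomontan project.

```python
import numpy as np

rng = np.random.default_rng(0)
dlon = rng.uniform(-3, 3, 200)  # offsets from the expansion origin (deg)
dlat = rng.uniform(-2, 2, 200)

def design(dlon, dlat):
    """Degree-2 Taylor basis: [1, dlon, dlat, dlon^2, dlon*dlat, dlat^2]."""
    return np.column_stack([np.ones_like(dlon), dlon, dlat,
                            dlon**2, dlon * dlat, dlat**2])

# Invented coefficients: VTEC0 at the origin, then first and second
# derivatives in longitude and latitude (TECU-based units).
true_coeffs = np.array([20.0, 0.5, -1.2, 0.05, 0.02, -0.08])
vtec_obs = design(dlon, dlat) @ true_coeffs

# Least-squares adjustment recovers the model parameters.
est, *_ = np.linalg.lstsq(design(dlon, dlat), vtec_obs, rcond=None)
print(np.allclose(est, true_coeffs))  # True
```

With noise-free synthetic observations the fit is exact; real carrier-phase or smoothed-code observations would add noise and biases, which is why the record reports formal errors at the TECU level.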

  6. Managing and querying whole slide images

    Science.gov (United States)

    Wang, Fusheng; Oh, Tae W.; Vergara-Niedermayr, Cristobal; Kurc, Tahsin; Saltz, Joel

    2012-02-01

    High-resolution pathology images provide rich information about the morphological and functional characteristics of biological systems, and are transforming the field of pathology into a new era. To facilitate the use of digital pathology imaging for biomedical research and clinical diagnosis, it is essential to manage and query both whole slide images (WSI) and analytical results generated from images, such as annotations made by humans and computed features and classifications made by computer algorithms. There are unique requirements on modeling, managing and querying whole slide images, including compatibility with standards, scalability, support of image queries at multiple granularities, and support of integrated queries between images and derived results from the images. In this paper, we present our work on developing the Pathology Image Database System (PIDB), which is a standard oriented image database to support retrieval of images, tiles, regions and analytical results, image visualization and experiment management through a unified interface and architecture. The system is deployed for managing and querying whole slide images for In Silico brain tumor studies at Emory University. PIDB is generic and open source, and can be easily used to support other biomedical research projects. It has the potential to be integrated into a Picture Archiving and Communications System (PACS) with powerful query capabilities to support pathology imaging.

  7. COMPLEX QUERY AND METADATA

    OpenAIRE

    Nakatoh, Tetsuya; Omori, Keisuke; Yamada, Yasuhiro; Hirokawa, Sachio

    2003-01-01

    We are developing a search system, DAISEn, which integrates multiple search engines and generates a metasearch engine automatically. The target search engines of DAISEn are not general search engines but search engines specialized in some area. Integration of such engines yields efficiency and quality. There are new types of search engines which accept complex queries and return structured data. Integration of such search engines is much harder than that of simple search engines which accept ...

  8. Querying genomic databases

    Energy Technology Data Exchange (ETDEWEB)

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  9. Design of personalized search engine based on user-webpage dynamic model

    Science.gov (United States)

    Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo

    2013-12-01

    Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
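The singular value decomposition step mentioned above can be sketched on a toy user-webpage matrix; the interaction values are invented and the paper's full pipeline (dynamic vector tables, extended PageRank) is not shown.

```python
import numpy as np

# Invented user-by-webpage interaction matrix (e.g. visit counts).
interactions = np.array([
    [5.0, 4.0, 0.0, 0.0],    # user 0 reads sports pages
    [10.0, 8.0, 0.0, 0.0],   # user 1: same taste, more active
    [0.0, 0.0, 5.0, 4.0],    # user 2 reads finance pages
])

# Truncated SVD keeps the k strongest latent "topics".
U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# This matrix has rank 2, so two topics reconstruct it exactly.
print(np.allclose(approx, interactions))  # True
```

The rows of `U[:, :k]` give each user's position in topic space, which is the kind of personalized factor the model feeds back into ranking.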

  10. A Semantic Graph Query Language

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
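A minimal sketch of querying a semantic graph, assuming the graph is stored as (subject, predicate, object) triples and query variables are prefixed with `?`; both are common conventions, not necessarily the syntax of the language this report describes.

```python
# Invented example graph of (subject, predicate, object) triples.
graph = [
    ('alice', 'works_at', 'LLNL'),
    ('bob', 'works_at', 'LLNL'),
    ('alice', 'knows', 'bob'),
]

def match(pattern, graph):
    """Return one variable-binding dict per triple matching the pattern.
    A repeated variable must bind to the same value (setdefault check)."""
    results = []
    for triple in graph:
        binding = {}
        if all(p == t or (p.startswith('?') and binding.setdefault(p, t) == t)
               for p, t in zip(pattern, triple)):
            results.append(binding)
    return results

print(match(('?who', 'works_at', 'LLNL'), graph))
# [{'?who': 'alice'}, {'?who': 'bob'}]
```

A full graph query language would additionally join bindings across several patterns; this sketch shows only the single-pattern case.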

  11. Mastering jQuery mobile

    CERN Document Server

    Lambert, Chip

    2015-01-01

    You've started down the path of jQuery Mobile, now begin mastering some of jQuery Mobile's higher level topics. Go beyond jQuery Mobile's documentation and master one of the hottest mobile technologies out there. Previous JavaScript and PHP experience can help you get the most out of this book.

  12. A Model for the Determination of Pollen Count Using Google Search Queries for Patients Suffering from Allergic Rhinitis

    Directory of Open Access Journals (Sweden)

    Volker König

    2014-01-01

    Full Text Available Background. The transregional increase in pollen-associated allergies and their diversity have been scientifically proven. However, patchy pollen count measurement in many regions is a worldwide problem, with few exceptions. Methods. This paper used data gathered from pollen count stations in Germany, Google queries using relevant allergological/biological keywords, and patient data from three German study centres collected in a prospective, double-blind, randomised, placebo-controlled, multicentre immunotherapy study to analyse a possible correlation between these data pools. Results. Overall, correlations between the patient-based, combined symptom medication score and the Google data were stronger than those with the regionally measured pollen count data. The correlation of the Google data was especially strong in the groups of severe allergy sufferers. The results of the three-centre analyses show moderate to strong correlations with the Google keywords (cross-correlation coefficients up to >0.8, P < 0.001) in 10 out of 11 groups (three averaged patient cohorts and eight subgroups of severe allergy sufferers: high IgE class, high combined symptom medication score, and asthma). Conclusion. For countries with a good Internet infrastructure but no dense network of pollen traps, this could represent an alternative for determining pollen levels and forecasting the pollen count for the next day.
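The lagged cross-correlation analysis described above can be sketched on synthetic series; the two-day lag and the sinusoidal shape are invented for illustration, not results from the study.

```python
import numpy as np

def lagged_correlation(x, y, lag):
    """Pearson correlation of x against y shifted by `lag` samples."""
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    elif lag < 0:
        x, y = x[:lag], y[-lag:]
    return float(np.corrcoef(x, y)[0, 1])

days = np.arange(60)
pollen_queries = np.sin(days / 5.0)          # synthetic search volume
symptoms = np.sin((days - 2) / 5.0)          # trails queries by 2 days

# Scan a window of lags and pick the one with the strongest correlation.
best = max(range(-5, 6),
           key=lambda l: lagged_correlation(symptoms, pollen_queries, l))
print(best)  # 2
```

Finding the lag that maximizes the coefficient is what lets query volume serve as a same-day or next-day proxy where no pollen trap exists.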

  13. Mobile Database System: Role of Mobility on the Query Processing

    CERN Document Server

    Sharma, Samidha Dwivedi

    2010-01-01

    The rapidly expanding technology of mobile communication gives mobile users the capability of accessing information from anywhere and at any time. Wireless technology has made it possible to achieve continuous connectivity in a mobile environment. When a query is specified as continuous, the requesting mobile user can obtain a continuously changing result. In order to provide accurate and timely outcomes to the requesting mobile user, the locations of moving objects have to be closely monitored. The objective of this paper is to discuss the problems related to the role of personal and terminal mobility in query processing in the mobile environment.

  14. E-Commerce Mobile Marketing Model Resolving Users Acceptance Criteria

    Directory of Open Access Journals (Sweden)

    Veronica S. Moertini

    2012-12-01

    Full Text Available The growth of e-commerce and mobile services has been creating business opportunities. Among these, mobile marketing is predicted to be a new trend of marketing. As mobile devices are personal tools, the services ought to be unique; in the last few years, researchers have conducted studies related to the user acceptance of mobile services and produced results. This research aims to develop a model of an e-commerce mobile marketing system, in the form of a computer-based information system (CBIS), that addresses the recommendations or criteria formulated by researchers. In this paper, the criteria formulated by researchers are presented, then each of the criteria is resolved and translated into mobile services. A model of a CBIS, which is an integration of a website and a mobile application, is designed to materialize the mobile services. The model is presented in the form of a business model, system procedures, network topology, a software model of the website and mobile application, and database models.

  15. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    Science.gov (United States)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft-dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.

  16. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries...... for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case...... where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest...
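
    The cache-and-reuse idea can be illustrated with a toy sketch. Euclidean distance stands in for the road-network distance the paper assumes, and `POIS` is hypothetical data; a repeated query at a cached location asking for k' ≤ the cached k is answered without recomputation:

```python
from math import dist

# hypothetical points of interest (the paper assumes road-network distances)
POIS = [(0, 0), (2, 1), (5, 5), (1, 4), (6, 2)]

class KNNCache:
    """Cache kNN results in main memory; a repeated query at the same
    location asking for k' <= the cached k is served from the cache."""
    def __init__(self):
        self.cache = {}   # location -> list of (distance, poi), ascending
        self.hits = 0

    def query(self, loc, k):
        entry = self.cache.get(loc)
        if entry is not None and len(entry) >= k:
            self.hits += 1
            return [p for _, p in entry[:k]]
        ranked = sorted((dist(loc, p), p) for p in POIS)[:k]
        self.cache[loc] = ranked
        return [p for _, p in ranked]
```

    A fuller version would also reuse a result cached at a nearby location as a distance bound, which is closer to what the paper's progressive techniques do.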

  17. QCS : a system for querying, clustering, and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.

    2006-08-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of
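
    The query→cluster→summarize flow can be sketched in miniature. Note that this toy substitutes plain term-frequency cosine retrieval, greedy clustering, and first-sentence extraction for the LSI, spherical k-means, and HMM/QR summarizer QCS actually uses, and the documents are invented:

```python
from collections import Counter
from math import sqrt

def tf(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().replace(".", " ").split())

def cos(a, b):
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

DOCS = [
    "solar power plants generate clean energy. costs are falling.",
    "wind turbines also generate clean energy around the world.",
    "the stock market fell sharply today.",
]

def qcs(query, docs, sim=0.2):
    q = tf(query)
    hits = [d for d in docs if cos(q, tf(d)) > 0]      # query: retrieve
    clusters = []
    for d in hits:                                     # cluster: greedy grouping
        for c in clusters:
            if cos(tf(d), tf(c[0])) >= sim:
                c.append(d)
                break
        else:
            clusters.append([d])
    return [c[0].split(".")[0] for c in clusters]      # summarize: one extract each
```

    With the invented documents, the query "clean energy" retrieves the two energy documents, groups them into one cluster, and returns a single extract for it.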

  18. QCS: a system for querying, clustering and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O' Leary, Dianne P. (University of Maryland, College Park, MD); Conroy, John M. (Center for Computing Sciences, Bowie, MD)

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design

  19. A Probability-Based Hybrid User Model for Recommendation System

    Directory of Open Access Journals (Sweden)

    Jia Hao

    2016-01-01

    Full Text Available With the rapid development of information communication technology, the available information or knowledge is exponentially increased, and this causes the well-known information overload phenomenon. This problem is more serious in product design corporations because over half of the valuable design time is consumed in knowledge acquisition, which highly extends the design cycle and weakens the competitiveness. Therefore, recommender systems become very important in the domain of product design. This research presents a probability-based hybrid user model, which is a combination of collaborative filtering and content-based filtering. This hybrid model utilizes user ratings and item topics or classes, which are available in the domain of product design, to predict the knowledge requirement. The comprehensive analysis of the experimental results shows that the proposed method gains better performance in most of the parameter settings. This work contributes a probability-based method to the community for implementing recommender systems when only user ratings and item topics are available.
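
    One plausible reading of such a hybrid, combining a collaborative rating signal with the user's own topic preferences, is sketched below with invented ratings and topic labels; the paper's actual probability model may differ:

```python
from collections import Counter

# invented data: user -> {item: rating on 1..5}; item -> design-topic label
RATINGS = {"u1": {"a": 5, "b": 4, "c": 1},
           "u2": {"a": 4, "b": 5, "d": 4},
           "u3": {"c": 5, "d": 2}}
TOPICS = {"a": "gears", "b": "gears", "c": "motors", "d": "motors"}

def topic_pref(user):
    """P(topic | user), estimated from the user's own ratings (content side)."""
    scores = Counter()
    for item, r in RATINGS[user].items():
        scores[TOPICS[item]] += r
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

def cf_score(user, item):
    """Other users' mean rating of the item, normalised to [0, 1] (CF side)."""
    rs = [r[item] for u, r in RATINGS.items() if u != user and item in r]
    return sum(rs) / len(rs) / 5 if rs else 0.0

def hybrid(user, item, alpha=0.5):
    """Weighted combination of the collaborative and content signals."""
    return alpha * cf_score(user, item) + \
           (1 - alpha) * topic_pref(user).get(TOPICS[item], 0.0)
```

    The mixing weight `alpha` is the obvious tuning knob; in the paper's setting it would be chosen against held-out ratings.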

  20. User Adoption Tendency Modeling for Social Contextual Recommendation

    Directory of Open Access Journals (Sweden)

    Wang Zhisheng

    2015-01-01

    Full Text Available Most studies of existing recommender systems for Netflix-style sites (scenarios with explicit user feedback) focus on rating prediction, but few have systematically analyzed users' motivations to make decisions on which items to rate. In this paper, the authors study the difficult and challenging task of Item Adoption Prediction (IAP): predicting the items users will rate or interact with. It is not only an important supplement to previous works, but also a more realistic requirement of recommendation in this scenario. To recommend the items with high adoption tendency, the authors develop a unified model, UATM, based on findings from Marketing and Consumer Behavior. The novelty of the model is twofold: First, the authors propose a more creative and effective optimization method to tackle the One-Class Problem, where only positive feedback is available; second, the authors systematically and conveniently integrate the user adoption information (both explicit and implicit feedback included) and the social contextual information, quantitatively characterizing different users' personal sensitivity to various social contextual influences.

  1. Error Checking for Chinese Query by Mining Web Log

    Directory of Open Access Journals (Sweden)

    Jianyong Duan

    2015-01-01

    Full Text Available For search engines, erroneous query input is a common phenomenon. This paper uses a web log as the training set for query error checking. Through an n-gram language model trained on the web log, queries are analyzed and checked. Some features, including the query words and their number, are introduced into the model. At the same time, a data smoothing algorithm is used to solve the data sparseness problem, which improves the overall accuracy of the n-gram model. The experimental results show that the method is effective.
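
    A checker of the kind described, a bigram model with add-one smoothing trained on a query log that flags low-probability queries, might look like the sketch below. The log is invented and the threshold is corpus-dependent:

```python
import math
from collections import Counter

# stand-in for a web query log
LOG = ["cheap flight tickets", "cheap flights to paris", "flight status check",
       "weather in paris", "cheap hotel paris"]

unigrams = Counter(w for q in LOG for w in q.split())
bigrams = Counter(b for q in LOG for b in zip(q.split(), q.split()[1:]))
V = len(unigrams)   # vocabulary size, for add-one smoothing

def avg_logprob(query):
    """Mean add-one-smoothed bigram log-probability of the query."""
    words = query.split()
    lp = sum(math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + V))
             for w1, w2 in zip(words, words[1:]))
    return lp / max(len(words) - 1, 1)

def looks_misspelled(query, threshold=-2.2):
    # the threshold would be tuned on held-out correct/erroneous queries
    return avg_logprob(query) < threshold
```

    Unseen bigrams such as ("cheap", "flighx") still get non-zero smoothed probability, but a low enough one to fall under the threshold.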

  2. Modeling users' activity on twitter networks: validation of Dunbar's number.

    Directory of Open Access Journals (Sweden)

    Bruno Gonçalves

    Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
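
    The finite-attention mechanism can be caricatured in a few lines: a user with a fixed message budget spreads attention over potential contacts in proportion to priority, and the number of "stable" relationships saturates rather than growing with the contact pool. All parameters here are illustrative, not fitted to the Twitter data:

```python
import random

def stable_contacts(n_potential, budget=200, min_msgs=3, seed=42):
    """A user sends a fixed budget of messages, choosing each recipient with
    probability proportional to its priority; a relationship counts as
    'stable' once it has received min_msgs messages."""
    rng = random.Random(seed)
    priority = [rng.random() for _ in range(n_potential)]
    total = sum(priority)
    sent = [0] * n_potential
    for _ in range(budget):
        r = rng.uniform(0, total)
        acc = 0.0
        for i, p in enumerate(priority):
            acc += p
            if acc >= r:
                sent[i] += 1
                break
        else:               # guard against floating-point rounding at the end
            sent[-1] += 1
    return sum(1 for s in sent if s >= min_msgs)
```

    With a fixed budget, enlarging the pool of potential contacts does not enlarge the stable set: `stable_contacts(2000)` comes out far below `stable_contacts(100)`, mirroring the saturation at 100-200 relationships reported above.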

  3. Modeling users' activity on Twitter networks: validation of Dunbar's number

    Science.gov (United States)

    Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2012-02-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the ``economy of attention'' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.

  4. Modeling users' activity on twitter networks: validation of Dunbar's number.

    Science.gov (United States)

    Gonçalves, Bruno; Perra, Nicola; Vespignani, Alessandro

    2011-01-01

    Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.

  5. Robust Query Processing for Personalized Information Access on the Semantic Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Stuckenschmidt, Heiner; Wache, Holger

    Research in Cooperative Query answering is triggered by the observation that users are often not able to correctly formulate queries to databases that return the intended result. Due to a lack of knowledge of the contents and the structure of a database, users will often only be able to provide v...

  6. An analysis of free-text queries for a multi-field web form

    NARCIS (Netherlands)

    Tjin-Kam-Jet, K.; Trieschnigg, D.; Hiemstra, D.

    2012-01-01

    We report how users interact with an experimental system that transforms single-field textual input into a multi-field query for an existing travel planner system. The experimental system was made publicly available and we collected over 30,000 queries from almost 12,000 users. From the free-text qu

  7. Capturing the Meaning of Internet Search Queries by Taxonomy Mapping

    Science.gov (United States)

    Tikk, Domonkos; Kardkovács, Zsolt T.; Bánsághi, Zoltán

    Capturing the meaning of internet search queries can significantly improve the effectiveness of search retrieval. Users often have problems finding relevant answers to their queries, particularly when the posted query is ambiguous. The orientation of the user can be greatly facilitated if answers are grouped into topics of a fixed subject taxonomy. In this manner, the original problem can be transformed into the labelling of queries — and consequently, the answers — with the topic names; that is, a classification set-up. This paper introduces our Ferrety algorithm that performs the topic assignment, and which also works when there is no directly available training data describing the semantics of the subject taxonomy. The approach is presented via the example of the ACM KDD Cup 2005 problem, where Ferrety was awarded for precision and creativity.

  8. User-Centered Data Management

    CERN Document Server

    Catarci, Tiziana; Kimani, Stephen

    2010-01-01

    This lecture covers several core issues in user-centered data management, including how to design usable interfaces that suitably support database tasks, and relevant approaches to visual querying, information visualization, and visual data mining. Novel interaction paradigms, e.g., mobile interfaces and interfaces that go beyond the visual dimension, are also discussed. Table of Contents: Why User-Centered / The Early Days: Visual Query Systems / Beyond Querying / More Advanced Applications / Non-Visual Interfaces / Conclusions

  9. System cost model user's manual, version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Shropshire, D.

    1995-06-01

    The System Cost Model (SCM) was developed by Lockheed Martin Idaho Technologies in Idaho Falls, Idaho and MK-Environmental Services in San Francisco, California to support the Baseline Environmental Management Report sensitivity analysis for the U.S. Department of Energy (DOE). The SCM serves the needs of the entire DOE complex for treatment, storage, and disposal (TSD) of mixed low-level, low-level, and transuranic waste. The model can be used to evaluate total complex costs based on various configuration options or to evaluate site-specific options. The site-specific cost estimates are based on generic assumptions such as waste loads and densities, treatment processing schemes, existing facilities capacities and functions, storage and disposal requirements, schedules, and cost factors. The SCM allows customization of the data for detailed site-specific estimates. There are approximately forty TSD module designs that have been further customized to account for design differences for nonalpha, alpha, remote-handled, and transuranic wastes. The SCM generates cost profiles based on the model default parameters or customized user-defined input and also generates costs for transporting waste from generators to TSD sites.

  10. Cognitive Modeling of Video Game Player User Experience

    Science.gov (United States)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real world scenarios. We examine possibilities for applying this model to game-play data.

  11. Mobile Systems Privacy: 'MobiPriv' A Robust System for Snapshot or Continuous Querying Location Based Mobile Systems

    Directory of Open Access Journals (Sweden)

    Philip S. Yu

    2012-04-01

    Full Text Available Many mobile phones have a GPS sensor that can report accurate location. Thus, if these location data are not protected adequately, they may cause privacy breaches. Moreover, several reports are available where persons have been stalked through GPS. The contributions of this paper are twofold. First, we examine privacy issues in snapshot queries, and present our work and results in this area. The proposed method can guarantee that all queries are protected, while previously proposed algorithms only achieve a low success rate in some situations. Next, we discuss continuous queries and illustrate that current snapshot solutions cannot be applied to continuous queries. Then, we present results for our robust models for continuous queries. We introduce a novel suite of algorithms called MobiPriv that addresses the shortcomings of previous work in location and query privacy in mobile systems. We evaluated the efficiency and effectiveness of the MobiPriv scheme against previously proposed anonymization approaches. For our experiments, we utilized real-world traffic volume data, a real-world road network, and mobile users generated realistically by a mobile object generator.
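
    The padding idea behind guaranteeing that every query is protected can be sketched as follows; this is a simplification, and the dummy-generation details in the paper are more sophisticated:

```python
import random

def anonymize(query_loc, nearby_locs, k=5, area=((0, 0), (100, 100)), seed=7):
    """Pad the anonymity set with dummy locations whenever fewer than k - 1
    real users are nearby, so the query is always served instead of dropped."""
    rng = random.Random(seed)
    cloak = [query_loc] + list(nearby_locs[:k - 1])
    (x0, y0), (x1, y1) = area
    while len(cloak) < k:
        # dummy location drawn from the service area
        cloak.append((rng.uniform(x0, x1), rng.uniform(y0, y1)))
    rng.shuffle(cloak)   # the location server cannot tell which entry is real
    return cloak
```

    Because the set is padded to size k regardless of how many real users are nearby, every query can be anonymized, which is the 100%-success property claimed above; classic cloaking schemes instead drop the query when the neighbourhood is too sparse.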

  12. jQuery 2.0 animation techniques beginner's guide

    CERN Document Server

    Culpepper, Adam

    2013-01-01

    This book is a guide to help you create attractive web page animations using jQuery. Written in a friendly and engaging style, this book is designed to be placed alongside your computer as a mentor. If you are a web designer or a frontend developer, or if you want to learn how to animate the user interface of your web applications with jQuery, this book is for you. Experience with jQuery or JavaScript would be helpful, but a solid knowledge of HTML and CSS is assumed.

  13. Joint Top-K Spatial Keyword Query Processing

    DEFF Research Database (Denmark)

    Wu, Dingming; Yiu, Man Lung; Cong, Gao

    2012-01-01

    Web users and content are increasingly being geopositioned, and increased focus is being given to serving local content in response to web queries. This development calls for spatial keyword queries that take into account both the locations and textual descriptions of content. We study...... keyword queries. Empirical studies show that the proposed solution is efficient on real data sets. We also offer analytical studies on synthetic data sets to demonstrate the efficiency of the proposed solution.

  14. Model for Educational Game Using Natural User Interface

    Directory of Open Access Journals (Sweden)

    Azrulhizam Shapi’i

    2016-01-01

    Full Text Available Natural User Interface (NUI) is a new approach that has become increasingly popular in Human-Computer Interaction (HCI). This technology is widely used in almost all sectors, including the field of education. In recent years, many educational games using NUI technology have come to market, such as Kinect games. Kinect is a sensor that can recognize body movements, postures, and voices in three dimensions. It enables users to control and interact with a game without the need for a game controller. However, the contents of most existing Kinect games do not follow the standard classroom curriculum, so they do not fully achieve the learning objectives. Hence, this research proposes a design model as a guideline for designing educational games using NUI. A prototype has been developed as one of the objectives of this study; it is based on the proposed model in order to assess the model's effectiveness. The outcomes of this study conclude that the proposed model contributes a design method for the development of educational games using NUI. Furthermore, evaluation results of the prototype show a good response from participants and alignment with the standard curriculum.

  15. Instant Cassandra query language

    CERN Document Server

    Singh, Amresh

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. It's an Instant Starter guide.Instant Cassandra Query Language is great for those who are working with Cassandra databases and who want to either learn CQL to check data from the console or build serious applications using CQL. If you're looking for something that helps you get started with CQL in record time and you hate the idea of learning a new language syntax, then this book is for you.

  16. Querying and Extracting Timeline Information from Road Traffic Sensor Data.

    Science.gov (United States)

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-08-23

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system, a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.
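
    A time-window lookup of the sort the TQ-index supports can be sketched with a start-time-sorted event list. The real index also exploits spatial features, and the events below are invented:

```python
import bisect

class TimelineIndex:
    """Congestion events sorted by start time, queried by a time window
    (and optionally a road segment). Times are hours since midnight."""
    def __init__(self, events):
        self.events = sorted(events)              # (start, end, segment)
        self.starts = [e[0] for e in self.events]

    def window(self, t0, t1, segment=None):
        # events starting before t1; filter the rest by overlap and segment
        hi = bisect.bisect_left(self.starts, t1)
        return [(s, e, seg) for s, e, seg in self.events[:hi]
                if e > t0 and (segment is None or seg == segment)]
```

    For example, with events `[(7, 9, "A"), (8, 10, "B"), (17, 19, "A")]`, the window query `window(8, 12)` returns the two morning-peak events.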

  17. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    Directory of Open Access Journals (Sweden)

    Ardi Imawan

    2016-08-01

    Full Text Available The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.

  18. Chinese college students’ Web querying behaviors:A case study of Peking University

    Institute of Scientific and Technical Information of China (English)

    QU; Peng; LIU; Chang; LAI; Maosheng

    2010-01-01

    This study examined users' querying behaviors based on a sample of 30 Chinese college students from Peking University. The authors designed 5 search tasks and each participant conducted two randomly selected search tasks during the experiment. The results show that when searching for pre-designed search tasks, users often have relatively clear goals and strategies before searching. When formulating their queries, users often select words from tasks, use concrete concepts directly, or extract "central words" or keywords. When reformulating queries, seven query reformulation types were identified from users' behaviors, i.e. broadening, narrowing, issuing new query, paralleling, changing search tools, reformulating syntax terms, and clicking on suggested queries. The results reveal that the search results and/or the contexts can also influence users' querying behaviors.

  19. Modeling metro users' travel behavior in Tehran: Frequency of Use

    Directory of Open Access Journals (Sweden)

    Amir Reza Mamdoohi

    2016-10-01

    Full Text Available Transit-oriented development (TOD), as a supporting strategy for sustainability, emphasizes the improvement of public transportation coverage and quality, land use density and diversity around public transportation stations, and the priority of walking and cycling in station areas. Traffic, environmental and economic problems arising from the high growth of personal car use, the inappropriate distribution of land use, and the car orientation of the metropolitan area necessitate the adoption of TOD. In recent years, more research on urban development and transportation has focused on this strategy. This research, considering metro stations as a base for development, aims to model metro users' travel behavior and decision-making procedures. In this regard, the research question is: which factors affect the frequency of travel by metro within a half-mile radius of stations? The radius was determined based on TOD definitions and a 5-minute walking time to metro stations. A questionnaire was designed in three sections covering travel features by metro, attitudes toward the metro, and the economic and social characteristics of respondents. Ten stations were selected based on their geographic dispersion in Tehran, and a sample of 450 respondents was determined. The questionnaires were administered face to face in the (half-mile) vicinity of metro stations. Based on a refined sample of 400 questionnaires, ordered discrete choice models were estimated. Descriptive statistics show that 38.5 percent of the sample used the metro more than 4 times per week. The trip purpose for 45.7 percent of metro users is work. The access mode to the metro stations for nearly half of the users (47.6 percent) is bus. The ordered logit models yield a number of significant variables, including: habit of using the metro, waiting time at the station, trip purpose (work, shopping and recreation), personal car as access mode to the metro station, walking as access mode to the metro station, and being a housewife.
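
    An ordered (proportional-odds) logit of the kind estimated here maps a linear predictor to probabilities over ordered frequency categories. The coefficients, predictor values, and thresholds below are hypothetical, not the paper's estimates:

```python
import math

def ordered_logit_probs(xb, cuts):
    """P(y = j) for each ordered category j, given linear predictor xb and
    ascending thresholds cuts (J - 1 thresholds for J categories)."""
    cdf = [1.0 / (1.0 + math.exp(-(c - xb))) for c in cuts]
    return ([cdf[0]]
            + [cdf[j] - cdf[j - 1] for j in range(1, len(cuts))]
            + [1.0 - cdf[-1]])

# hypothetical predictor: habit (0/1), waiting time (min), walking access (0/1)
xb = 1.2 * 1 + (-0.05) * 8 + 0.6 * 1
# four ordered frequency bands, e.g. never / rarely / weekly / >4 per week
probs = ordered_logit_probs(xb, cuts=[-1.0, 0.5, 2.0])
```

    Raising the predictor (stronger habit, shorter wait) shifts probability mass toward the highest-frequency category, which is how the significant variables listed above act in the fitted model.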

  20. User's Relevance of PIR System Based on Cloud Models

    Institute of Scientific and Technical Information of China (English)

    KANG Hai-yan; FAN Xiao-zhong

    2006-01-01

    A new method to fuzzily evaluate user relevance on the basis of cloud models is proposed. All factors of a personalized information retrieval (PIR) system are taken into account in this method. Using this method, a PIR system can efficiently judge multi-valued relevance, such as quite relevant, comparatively relevant, commonly relevant, basically relevant, and completely non-relevant, realizing a transformation between qualitative concepts and quantitative values and improving the accuracy of relevance judgements in the PIR system. Experimental data showed that the method is practical and valid, and that its evaluation results are more accurate and closer to actual relevance.
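
    The standard normal cloud generator behind such models turns a qualitative grade, described by expectation Ex, entropy En, and hyper-entropy He, into "cloud drops" with membership degrees. The parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def cloud_drops(ex, en, he, n=1000, seed=3):
    """Normal cloud generator: expectation Ex, entropy En, and hyper-entropy
    He produce n (value, membership) 'cloud drops'."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        enp = rng.gauss(en, he)                          # perturbed entropy
        x = rng.gauss(ex, abs(enp))                      # the drop's value
        mu = math.exp(-(x - ex) ** 2 / (2 * enp * enp)) if enp else 1.0
        drops.append((x, mu))
    return drops

# e.g. the grade "quite relevant" might be modelled with Ex=0.9, En=0.05, He=0.005
```

    Each relevance grade gets its own (Ex, En, He) triple; a numeric relevance score is then judged against the grades' membership clouds, which is the qualitative-quantitative transformation described above.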

  1. Care 3 model overview and user's guide, first revision

    Science.gov (United States)

    Bavuso, S. J.; Petersen, P. L.

    1985-01-01

    A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.

  2. Dynamic Trust Models between Users over Social Networks

    Science.gov (United States)

    2016-03-30

    model, we consider introducing some trust discount factors. The simplest one is an exponential discount factor defined by ρ(Δt; λ) = exp(−λΔt), where...the Cosmetics review dataset. The other one is composed of review records collected from “anikore”, a ranking and review site for anime, which is...referred to as the Anime review dataset. In both datasets, each record has a 4-tuple (u, i, s, t), which means user u gives a score s to item i at

  3. Templates and Queries in Contextual Hypermedia

    DEFF Research Database (Denmark)

    Anderson, Kenneth Mark; Hansen, Frank Allan; Bouvin, Niels Olof

    2006-01-01

    This paper presents a new definition of context for context-aware computing based on a model that relies on dynamic queries over structured objects. This new model enables developers to flexibly specify the relationship between context and context data for their context-aware applications. We dis...

  4. User-friendly software for modeling collective spin wave excitations

    Science.gov (United States)

    Hahn, Steven; Peterson, Peter; Fishman, Randy; Ehlers, Georg

    There exists a great need for user-friendly, integrated software that assists in the scientific analysis of collective spin wave excitations measured with inelastic neutron scattering. SpinWaveGenie is a C++ software library that simplifies the modeling of collective spin wave excitations, allowing scientists to analyze neutron scattering data with sophisticated models fast and efficiently. Furthermore, one can calculate the four-dimensional scattering function S(Q,E) to directly compare and fit calculations to experimental measurements. Its generality has been both enhanced and verified through successful modeling of a wide array of magnetic materials. Recently, we have spent considerable effort transforming SpinWaveGenie from an early prototype to a high quality free open source software package for the scientific community. S.E.H. acknowledges support by the Laboratory's Director's fund, ORNL. Work was sponsored by the Division of Scientific User Facilities, Office of Basic Energy Sciences, US Department of Energy, under Contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  5. A Study of Library Databases by Translating Those SQL Queries into Relational Algebra and Generating Query Trees

    Directory of Open Access Journals (Sweden)

    Santhi Lasya

    2011-09-01

    Full Text Available Even in this World Wide Web era, where there is unrestricted access to a lot of articles and books at a mouse's click, the role of an organized library is immense. It is vital to have effective software to manage the various functions of a library, and the foundation of effective software is the underlying database access and the queries used. Hence library databases become the use-case for this study. This paper starts by considering a basic ER model of a typical library relational database. We also list the basic use-cases in a library management system. The next part of the paper deals with the SQL queries used for performing certain functions in a library database management system. Along with the queries, we generate reports for some of the use cases. The final section of the paper forms the crux of this library database study, wherein we dwell on the concepts of query processing and query optimization in the relational database domain. We analyze the above-mentioned queries by translating each query into a relational algebra expression and generating a query tree for it. By converting the query into relational algebra, we look at optimizing it, and by generating a query tree, we come up with a cheapest-cost plan.
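As a hedged illustration of the translation step, the sketch below turns one library query into a relational algebra tree and applies a single, crude optimization (pushing a selection below a join). The table and column names are invented, not the paper's actual schema:

```python
class Node:
    """A relational algebra operator in a query tree."""
    def __init__(self, op, args, children=()):
        self.op, self.args, self.children = op, args, list(children)

    def render(self, depth=0):
        line = "  " * depth + f"{self.op}[{self.args}]"
        return "\n".join([line] + [c.render(depth + 1) for c in self.children])

# SELECT m.name FROM member m JOIN loan l ON m.id = l.member_id
# WHERE l.due_date < '2011-09-01'
# translates to: project(select(join(member, loan)))
tree = Node("project", "m.name", [
    Node("select", "l.due_date < '2011-09-01'", [
        Node("join", "m.id = l.member_id", [
            Node("relation", "member m"),
            Node("relation", "loan l"),
        ])
    ])
])

def push_selection(node):
    """If a selection references only one join input, push it below the join."""
    if node.op == "select" and node.children[0].op == "join":
        join = node.children[0]
        # crude alias check: predicate mentions only the loan relation
        if node.args.startswith("l.") and "m." not in node.args:
            join.children[1] = Node("select", node.args, [join.children[1]])
            return join
    return node

tree.children[0] = push_selection(tree.children[0])
optimized = tree.children[0]
```

Pushing the selection below the join shrinks the join input, which is the kind of cost reduction a cheapest-cost plan search exploits.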

  6. Mathematical Formula Search using Natural Language Queries

    Directory of Open Access Journals (Sweden)

    YANG, S.

    2014-11-01

    Full Text Available This paper presents how to search mathematical formulae written in MathML when given plain words as a query. Since the proposed method allows natural language queries, as in traditional Information Retrieval, for mathematical formula search, users do not need to enter complicated math symbols or use a formula input tool. For this, formula data is converted into plain texts, and features are extracted from the converted texts. In our experiments, we achieve an outstanding performance, an MRR of 0.659. In addition, we introduce how to utilize formula classification for formula search. By using class information, we finally achieve an improved performance, an MRR of 0.690.
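The MRR figures quoted above follow the standard mean-reciprocal-rank definition, sketched here with hypothetical ranks (the paper's actual per-query ranks are not given):

```python
def mean_reciprocal_rank(first_relevant_ranks):
    """MRR over queries; each rank is the 1-based position of the first
    relevant formula, or None when nothing relevant was retrieved."""
    rr = [0.0 if r is None else 1.0 / r for r in first_relevant_ranks]
    return sum(rr) / len(rr)

# Hypothetical ranks for five natural-language queries.
mrr = mean_reciprocal_rank([1, 2, None, 1, 4])
```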

  7. A method of designing smartphone interface based on the extended user's mental model

    Science.gov (United States)

    Zhao, Wei; Li, Fengmin; Bian, Jiali; Pan, Juchen; Song, Song

    2017-01-01

    The user's mental model is the core guiding theory of product design, especially for practical products. In essence, a practical product is a tool used by users to meet their needs, and the most important feature of a tool is usability. The design method based on the user's mental model provides a series of practical and feasible theoretical guidelines for improving the usability of a product according to the user's awareness of things. In this paper, we propose a method of designing smartphone interfaces based on an extended user's mental model derived from further research on user groups. This approach achieves personalized customization of the smartphone application interface and enhances the efficiency of application use.

  8. Structured Query Translation in Peer to Peer Database Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mehedi Masud

    2009-10-01

    Full Text Available This paper presents a query translation mechanism between heterogeneous peers in Peer to Peer Database Sharing Systems (PDSSs). A PDSS combines a database management system with P2P functionalities. The local databases on peers are called peer databases. In a PDSS, each peer chooses its own data model and schema and maintains data independently without any global coordinator. One of the problems in such a system is translating queries between peers, taking into account both schema and data heterogeneity. Query translation is the problem of rewriting a query posed in terms of one peer's schema into a query in terms of another peer's schema. This paper proposes a query translation mechanism between peers that are acquainted through data-level mappings for sharing data.
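A minimal sketch of the schema-level part of such rewriting, using a mapping table between two invented peer schemas (the paper's mechanism also handles data-level heterogeneity, which this sketch omits):

```python
# Attribute-level mapping from peer A's schema to peer B's schema.
# All table and attribute names are invented for illustration.
mapping = {
    "patients": "person",
    "patients.pid": "person.id",
    "patients.dob": "person.birth_date",
}

def translate(query_attrs, table, mapping):
    """Rewrite a projection query (table, attributes) into the acquainted
    peer's schema via the data-level mapping."""
    new_table = mapping[table]
    new_attrs = [mapping[f"{table}.{a}"].split(".", 1)[1] for a in query_attrs]
    return new_table, new_attrs

table, attrs = translate(["pid", "dob"], "patients", mapping)
```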

  9. Index and query methods in road networks

    CERN Document Server

    Feng, Jun

    2015-01-01

    This book presents index and query techniques for road networks and for moving objects constrained to road networks. A road network in non-Euclidean space has unique characteristics: two moving objects may be very close in straight-line distance yet far apart along the network. The indexes used in two-dimensional Euclidean space are not always appropriate for moving objects on a road network. Therefore, the index structure needs to be improved in order to obtain suitable indexing methods, explore the shortest path, and acquire nearest neighbor query and aggregation query methods under the new index structures. Chapter 1 of this book introduces the present situation of intelligent traffic and indexing in road networks, and Chapter 2 introduces the relevant existing spatial indexing methods. Chapters 3-5 focus on several issues of road networks and querying, involving traffic road network models (see Chapter 3), index structures (see Chapter 4) and aggregate query methods (see Chapter 5). Finally, in Chapter 6, the book briefly de...
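The network-distance queries the book builds on reduce to shortest-path computation over a weighted graph; a standard Dijkstra sketch (node names and edge lengths are illustrative) follows:

```python
import heapq

def shortest_path_length(graph, src, dst):
    """Dijkstra over a road network given as {node: [(neighbor, meters), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Two nodes close in straight-line distance may still be far along the
# network; distance here is path length, not Euclidean distance.
roads = {"A": [("B", 100), ("C", 400)], "B": [("C", 100)], "C": []}
```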

  10. Natural Language Query System Design for Interactive Information Storage and Retrieval Systems. M.S. Thesis

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Liu, I-Hsiung

    1985-01-01

    The currently developed multi-level language interfaces of information systems are generally designed for experienced users. These interfaces commonly ignore the nature and needs of the largest user group, i.e., casual users. This research identifies the importance of natural language query system research within information storage and retrieval system development; addresses the topics of developing such a query system; and finally, proposes a framework for the development of natural language query systems in order to facilitate the communication between casual users and information storage and retrieval systems.

  11. An Algorithm Reformulation of XQuery Queries Using GLAV Mapping for Mediator-based System

    Directory of Open Access Journals (Sweden)

    Benharzallah SABER

    2014-03-01

    Full Text Available This paper describes an algorithm for the reformulation of XQuery queries. The mediation is based on an essential component called the mediator. The main role of the mediator is to reformulate a user query, written in terms of the global schema, into queries written in terms of the source schemas. Our algorithm is based on the principles of logical equivalence and simple and complex unification to obtain a better reformulation. It takes as parameters the XQuery query, the global schema (written in XML Schema) and the GLAV mappings, and gives as a result a query written in terms of the source schemas. The results of implementation show the proper functioning of the algorithm.

  12. Ranking Queries on Uncertain Data

    CERN Document Server

    Hua, Ming

    2011-01-01

    Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Due to the importance of those applications and the rapidly increasing amounts of uncertain data collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. Ranking Queries on Uncertain Data discusses the motivations/applications, challenging problems, fundamental principles, and evaluation algorithms.
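One common ranking semantics on uncertain data can be sketched as follows: each tuple carries an independent existence probability, and we ask for the probability that a tuple lands in the top-k of the realized world. This is an illustrative sketch of that idea, not the book's specific algorithms:

```python
def topk_probabilities(probs, k):
    """probs: existence probabilities of tuples in descending score order.
    Returns, for each tuple, P(tuple exists and fewer than k higher-scored
    tuples exist), via a Poisson-binomial dynamic program."""
    result = []
    dp = [1.0]  # dp[j] = P(exactly j of the tuples seen so far exist)
    for p in probs:
        result.append(p * sum(dp[:k]))  # exists, and < k predecessors exist
        dp = [(dp[j] if j < len(dp) else 0.0) * (1 - p)
              + (dp[j - 1] * p if j >= 1 else 0.0)
              for j in range(len(dp) + 1)]
    return result

# Three tuples sorted by score with existence probabilities 0.9, 0.5, 0.8.
pk = topk_probabilities([0.9, 0.5, 0.8], k=1)
```

For k = 1 this reduces to "the tuple exists and all higher-scored tuples are absent", which is easy to check by hand.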

  13. Research Issues in Mobile Querying

    DEFF Research Database (Denmark)

    Breunig, M.; Jensen, Christian Søndergaard; Klein, M.

    2004-01-01

    This document reports on key aspects of the discussions conducted within the working group. In particular, the document aims to offer a structured and somewhat digested summary of the group's discussions. The document first offers concepts that enable characterization of "mobile queries" as well as the types of systems that enable such queries. It explores the notion of context in mobile queries. The document ends with a few observations, mainly regarding challenges.

  14. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture is presented in accordance with the system-level architecture of a distributed environment. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps in these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.

  15. Smart Query Answering for Marine Sensor Data

    Directory of Open Access Journals (Sweden)

    Paulo de Souza

    2011-03-01

    Full Text Available We review existing query answering systems for sensor data. We then propose an extended query answering approach, termed smart query, specifically for marine sensor data. The smart query answering system integrates pattern queries and continuous queries, and considers both streaming data and historical data from marine sensor networks. The smart query also uses a query relaxation technique and semantics from domain knowledge as a recommender system. The proposed smart query benefits the building of data and information systems for marine sensor networks.

  16. QVIZ: A FRAMEWORK FOR QUERYING AND VISUALIZING DATA

    Energy Technology Data Exchange (ETDEWEB)

    T. KEAHEY; P. MCCORMICK; ET AL

    2000-12-01

    Qviz is a lightweight, modular, and easy-to-use parallel system for interactive analytical query processing and visual presentation of large datasets. Qviz allows queries of arbitrary complexity to be easily constructed using a specialized scripting language. Visual presentation of the results is also easily achieved via simple scripted and interactive commands to our query-specific visualization tools. This paper describes our initial experiences with the Qviz system for querying and visualizing scientific datasets, showing how Qviz has been used in two different applications: ocean modeling and linear accelerator simulations.

  17. Data Caching for XML Query

    Institute of Scientific and Technical Information of China (English)

    SU Fei; CI Lin-lin; ZHU Li-ping; ZHAO Xin-xin

    2006-01-01

    In order to apply data caching techniques to extensible markup language (XML) database systems, the XML-cache system supporting data caching for XQuery is presented. According to the characteristics of XML, queries with nesting are normalized to facilitate the subsequent operations. Based on the idea of the incomplete tree, using the document type definition (DTD) schema tree and conditions from the normalized XQuery, the results of previous queries are maintained to answer new queries; at the same time, the remaining queries are sent to the back-end XML database. Experimental results show that all applications supported by an XML database can use this technique to cache data for future use.

  18. A rank-based Prediction Algorithm of Learning User's Intention

    Science.gov (United States)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. There are two issues with current search engines: first, users must predetermine the type of information they want and then switch to the appropriate search engine interface. Second, most search engines support multiple kinds of search functions, each with its own separate search interface, so when users need different types of information, they must switch between different interfaces. In practice, most queries correspond to several types of information results and can retrieve relevant results from various search engines; for example, the query "Palace" matches websites introducing the National Palace Museum, blogs, Wikipedia, and some picture and video information. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources (the query words, the search results, and the search history logs) to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective: it meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the user's search experience.
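A toy version of weighted aggregation across result types might look like the sketch below; the per-type weights and result items are invented, whereas the paper learns such signals from query words, results, and history logs:

```python
# Hypothetical learned weights for each result type.
type_weight = {"web": 0.9, "wiki": 0.8, "image": 0.5, "video": 0.6}

def aggregate(results):
    """results: list of (doc_id, type, engine_score in [0, 1]).
    Returns doc_ids sorted by type-weighted score, all types interleaved."""
    scored = [(type_weight[t] * s, doc) for doc, t, s in results]
    return [doc for _, doc in sorted(scored, reverse=True)]

merged = aggregate([("palace_museum_site", "web", 0.95),
                    ("palace_wiki", "wiki", 0.9),
                    ("palace_photo", "image", 0.99),
                    ("palace_tour_clip", "video", 0.7)])
```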

  19. VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern; Steven J. Piet; Benjamin A. Baker; Joseph Grimm

    2009-08-01

    The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste. VISION is comprised of several

  20. Optimal Planar Orthogonal Skyline Counting Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Larsen, Kasper Green

    2014-01-01

    The skyline of a set of points in the plane is the subset of maximal points, where a point (x,y) is maximal if no other point (x',y') satisfies x' ≥ x and y' ≥ y. We consider the problem of preprocessing a set P of n points into a space efficient static data structure supporting orthogonal skyline counting queries, i.e. given a query rectangle R, to report the size of the skyline of P ∩ R. We present a data structure for storing n points with integer coordinates having query time O(lg n / lg lg n) and space usage O(n). The model of computation is a unit cost RAM with logarithmic word size. We prove
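The skyline itself (outside any index) can be computed with a simple sweep; the sketch below illustrates the maximal-point definition only, not the paper's O(lg n / lg lg n) counting structure:

```python
def skyline(points):
    """Maximal points of a planar set: (x, y) is in the skyline if no
    other point (x', y') has x' >= x and y' >= y."""
    result = []
    best_y = float("-inf")
    # Sweep by decreasing x (ties by decreasing y): a point survives
    # iff its y exceeds every y seen among points with larger x.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            result.append((x, y))
            best_y = y
    return result

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]
```

A skyline counting query over a rectangle R would return the length of this list computed on the points falling inside R.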

  1. A comprehensive physiologically based pharmacokinetic knowledgebase and web-based interface for rapid model ranking and querying

    Science.gov (United States)

    Published physiologically based pharmacokinetic (PBPK) models from peer-reviewed articles are often well-parameterized, thoroughly-vetted, and can be utilized as excellent resources for the construction of models pertaining to related chemicals. Specifically, chemical-specific pa...

  2. A structural query system for Han characters

    DEFF Research Database (Denmark)

    Skala, Matthew

    2016-01-01

    The IDSgrep structural query system for Han character dictionaries is presented. This dictionary search system represents the spatial structure of Han characters using Extended Ideographic Description Sequences (EIDSes), a data model and syntax based on the Unicode IDS concept. It includes a query language for EIDS databases, with a freely available implementation and format translation from popular third-party IDS and XML character databases. The system is designed to suit the needs of font developers and foreign language learners. The search algorithm includes a bit vector index inspired by Bloom...

  3. Testing a fall risk model for injection drug users.

    Science.gov (United States)

    Pieper, Barbara; Templin, Thomas N; Goldberg, Allon

    2012-01-01

    Fall risk is a critical component of clinical assessment and has not been examined for persons who have injected illicit drugs and are aging. The aim of this study was to test and develop the Fall Risk Model for Injection Drug Users by examining the relationships among injection drug use, chronic venous insufficiency, lower extremity impairments (i.e., decreased ankle range of motion, reduced calf muscle endurance, and leg pain), age and other covariates, and the Tinetti balance and gait total score as a measure of fall risk. A cross-sectional comparative design was used with four crossed factors. Standardized instruments were used to assess the variables. Moderated multiple regression with linear and quadratic trends in age was used to examine the nature of the relationship between the Tinetti balance and gait total score and age, and the potential moderating role of injection drug use. A prespecified series of models was tested. Participants (n = 713) were men (46.9%) and women, primarily African American (61.7%), with a mean age of 46.26 years, recruited in methadone treatment centers. The fall risk of a 48-year-old leg injector was comparable with the fall risk of a 69-year-old who had not injected drugs. Variables were added to the model sequentially; some lost significance when they were explained by subsequent variables. Final significant variables in the model were employment status, number of comorbidities, ankle range of motion, leg pain, and calf muscle endurance. Fall risk was associated with route of drug use. Lower extremity impairments accounted for the effects of injection drug use and chronic venous insufficiency on risk for falls. Further understanding of fall risk in injection drug users is necessary as they age, attempt to work, and participate in activities.

  4. From Questions to Queries

    Directory of Open Access Journals (Sweden)

    M. Drlík

    2007-12-01

    Full Text Available The extension of (Internet) databases forces everyone to become more familiar with techniques of data storage and retrieval because users' success often depends on their ability to pose the right questions and to be able to interpret the answers. University programs pay more attention to developing database programming skills than to data exploitation skills. To educate our students to become "database users", the authors intensively exploit supportive tools simplifying the production of database elements such as tables, queries, forms, reports, web pages, and macros. Video sequences demonstrating "standard operations" for completing them have been prepared to enhance out-of-classroom learning. The use of SQL and other professional tools is reduced to the cases when the wizards are unable to generate the intended construct.

  5. A user-friendly modified pore-solid fractal model

    Science.gov (United States)

    Ding, Dian-Yuan; Zhao, Ying; Feng, Hao; Si, Bing-Cheng; Hill, Robert Lee

    2016-12-01

    The primary objective of this study was to evaluate a range of calculation points on water retention curves (WRC), instead of the single point at air-entry suction, in the pore-solid fractal (PSF) model, additionally considering the hysteresis effect based on PSF theory. The modified pore-solid fractal (M-PSF) model was tested using 26 soil samples from Yangling on the Loess Plateau in China and 54 soil samples from the Unsaturated Soil Hydraulic Database. The derivation results showed that the M-PSF model is user-friendly and flexible for a wide range of calculation point options. The model theoretically describes the primary differences between the soil moisture desorption and adsorption processes through the fractal dimensions. The M-PSF model demonstrated good performance, particularly at calculation points corresponding to suctions from 100 cm to 1000 cm. Furthermore, the M-PSF model, using the fractal dimension of the particle size distribution, exhibited acceptable performance in WRC predictions for differently textured soils when suction values were ≥100 cm. To fully understand the function of hysteresis in PSF theory, the role of allowable and accessible pores must be examined.

  6. KoralQuery -- A General Corpus Query Protocol

    DEFF Research Database (Denmark)

    Bingel, Joachim; Diewald, Nils

    2015-01-01

    The task-oriented and format-driven development of corpus query systems has led to the creation of numerous corpus query languages (QLs) that vary strongly in expressiveness and syntax. This is a severe impediment for the interoperability of corpus analysis systems, which lack a common protocol... format and illustrate use cases in the KorAP project.

  7. Transfer active learning by querying committee

    Institute of Scientific and Technical Information of China (English)

    Hao SHAO; Feng TAO; Rui XU

    2014-01-01

    In real applications of inductive learning for classification, labeled instances are often deficient, and labeling them by an oracle is often expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, to improve the classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on initial hypotheses sampled from only a few labeled instances. In such a circumstance, the initial hypotheses are not reliable and may deviate from the true distribution underlying the target task. Consequently, the informative measures will possibly select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. However, a significant challenge in transfer learning is how to measure the similarity between the source and the target tasks. One needs to be aware of different distributions or label assignments from unrelated source tasks; otherwise, they will lead to degenerated performance while transferring. Also, how to design an effective strategy to avoid selecting irrelevant samples to query is still an open question. To tackle these issues, we propose a hybrid algorithm for active learning with the help of transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy which can eliminate unnecessary instances in the input space and models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm is able to query fewer instances with a higher accuracy and that it converges faster than the state-of-the-art methods.
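The committee-based query selection at the core of such methods can be sketched with the classic vote-entropy disagreement measure; the labels and candidates below are invented, and the paper's divergence measure and transfer components are not modeled here:

```python
import math
from collections import Counter

def vote_entropy(committee_votes):
    """Disagreement of a committee on one unlabeled instance.
    committee_votes: predicted labels, one per committee member."""
    n = len(committee_votes)
    ent = 0.0
    for count in Counter(committee_votes).values():
        p = count / n
        ent -= p * math.log(p)
    return ent

def select_query(candidates):
    """Pick the unlabeled instance the committee disagrees on most;
    that instance is the most informative one to send to the oracle."""
    return max(candidates, key=lambda item: vote_entropy(item[1]))

# Hypothetical: 3 unlabeled instances, each voted on by 4 models.
candidates = [("x1", ["a", "a", "a", "a"]),
              ("x2", ["a", "b", "a", "b"]),
              ("x3", ["a", "a", "a", "b"])]
chosen = select_query(candidates)
```

The evenly split vote on "x2" maximizes entropy, so it is queried first; an unreliable committee drawn from too few labels is exactly the failure mode the transfer component is meant to correct.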

  8. Concepts and implementations of natural language query systems

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Liu, I-Hsiung

    1984-01-01

    The currently developed user language interfaces of information systems are generally intended for serious users. These interfaces commonly ignore potentially the largest user group, i.e., casual users. This project discusses the concepts and implementations of a natural query language system which satisfies the nature and information needs of casual users by allowing them to communicate with the system in their native (natural) language. In addition, a framework for the development of such an interface is introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.

  9. Developing a User-process Model for Designing Menu-based Interfaces: An Exploratory Study.

    Science.gov (United States)

    Ju, Boryung; Gluck, Myke

    2003-01-01

    The purpose of this study was to organize menu items based on a user-process model and implement a new version of current software for enhancing usability of interfaces. A user-process model was developed, drawn from actual users' understanding of their goals and strategies to solve their information needs by using Dervin's Sense-Making Theory…

  10. Building Internet Applications Using the jQuery Framework (Tvorba internetových aplikací s využitím framework jQuery)

    OpenAIRE

    OKTÁBEC, Michal

    2010-01-01

    This work deals with using the jQuery framework to create web pages. First, it explains what the jQuery framework is and how it differs from plain JavaScript. The following parts of the work focus on the syntax and ways of using jQuery. The goal of this work is to create the first Czech user guide.

  11. A user-centered model for web site design: needs assessment, user interface design, and rapid prototyping.

    Science.gov (United States)

    Kinzie, Mable B; Cohn, Wendy F; Julian, Marti F; Knaus, William A

    2002-01-01

    As the Internet continues to grow as a delivery medium for health information, the design of effective Web sites becomes increasingly important. In this paper, the authors provide an overview of one effective model for Web site design, a user-centered process that includes techniques for needs assessment, goal/task analysis, user interface design, and rapid prototyping. They detail how this approach was employed to design a family health history Web site, Health Heritage. This Web site helps patients record and maintain their family health histories in a secure, confidential manner. It also supports primary care physicians through analysis of health histories, identification of potential risks, and provision of health care recommendations. Visual examples of the design process are provided to show how the use of this model resulted in an easy-to-use Web site that is likely to meet user needs. The model is effective across diverse content arenas and is appropriate for applications in varied media.

  12. Parallel hierarchical evaluation of transitive closure queries

    NARCIS (Netherlands)

    Houtsma, M.A.W.; Cacace, F.; Ceri, S.

    1991-01-01

    Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation. Tuples of a large base relation denote edges in a graph, which models a transportation network. A fragmentation algorithm is proposed which produces a partitioning of the base relation
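A minimal sequential sketch of the underlying computation (Warshall's transitive closure over an edge relation); the paper's contribution, parallel evaluation over semantically fragmented relations, is not shown:

```python
def transitive_closure(nodes, edges):
    """Warshall-style closure: (u, v) is in the result iff v is
    reachable from u along the directed edge relation."""
    reach = {(u, v) for u, v in edges}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

# Tiny transportation network: A -> B -> C.
closure = transitive_closure(["A", "B", "C"], [("A", "B"), ("B", "C")])
```

A reachability query such as "is C reachable from A?" is then a membership test against the closure; fragmenting the edge relation lets each fragment's closure be computed in parallel before combining.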

  13. Parallel hierarchical evaluation of transitive closure queries

    NARCIS (Netherlands)

    Houtsma, M.A.W.; Cacace, F.; Ceri, S.

    1991-01-01

    Presents a new approach to parallel computation of transitive closure queries using a semantic data fragmentation. Tuples of a large base relation denote edges in a graph, which models a transportation network. A fragmentation algorithm is proposed which produces a partitioning of the base relation

  14. Object-Extended OLAP Querying

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach; Gu, Junmin; Shoshani, Arie

    2009-01-01

    On-line analytical processing (OLAP) systems based on a dimensional view of data have found widespread use in business applications and are being used increasingly in non-standard applications. These systems provide good performance and ease-of-use. However, the complex structures and relationships inherent in data in non-standard applications are not accommodated well by OLAP systems. In contrast, object database systems are built to handle such complexity, but do not support OLAP-type querying well. This paper presents the concepts and techniques underlying a flexible, "multi-model" federated... with performance measurements that show that the approach is a viable alternative to a physically integrated data warehouse.

  15. A Prototype Digital Library for 3D Collections: Tools To Capture, Model, Analyze, and Query Complex 3D Data.

    Science.gov (United States)

    Rowe, Jeremy; Razdan, Anshuman

    The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…

  16. RESEARCH ON EXTENSION OF SPARQL ONTOLOGY QUERY LANGUAGE CONSIDERING THE COMPUTATION OF INDOOR SPATIAL RELATIONS

    Directory of Open Access Journals (Sweden)

    C. Li

    2015-05-01

Full Text Available According to the characteristics of indoor space, a method suitable for complex indoor semantic queries, considering the computation of indoor spatial relations, is provided. This paper designs an ontology model describing the space-related information of humans, events and indoor space objects (e.g. storeys and rooms) as well as their relations to support indoor semantic queries. The ontology concepts are used in IndoorSPARQL, a query language which extends SPARQL syntax for representing and querying indoor space. Four specific primitives for indoor queries, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyze the query language. Finally, this paper applies the method to realize indoor semantic queries over the study area by constructing an ontology model for the study building. The experimental results show that the proposed method can effectively support complex indoor semantic queries.

  17. Research on Extension of Sparql Ontology Query Language Considering the Computation of Indoor Spatial Relations

    Science.gov (United States)

    Li, C.; Zhu, X.; Guo, W.; Liu, Y.; Huang, H.

    2015-05-01

According to the characteristics of indoor space, a method suitable for complex indoor semantic queries, considering the computation of indoor spatial relations, is provided. This paper designs an ontology model describing the space-related information of humans, events and indoor space objects (e.g. Storey and Room) as well as their relations to support indoor semantic queries. The ontology concepts are used in IndoorSPARQL, a query language which extends SPARQL syntax for representing and querying indoor space. Four specific primitives for indoor queries, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyze the query language. Finally, this paper applies the method to realize indoor semantic queries over the study area by constructing an ontology model for the study building. The experimental results show that the proposed method can effectively support complex indoor semantic queries.
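The quantitative computations behind such query primitives can be sketched with plain geometry. The Python sketch below models three of the four primitives over axis-aligned room footprints; the room coordinates, tuple layout, and adjacency tolerance are illustrative assumptions, not taken from the paper:

```python
# Sketch of quantitative spatial predicates of the kind an
# IndoorSPARQL-style query function would evaluate. A room is
# (min_x, min_y, max_x, max_y, floor); all values are illustrative.

def adjacent(a, b, tol=0.01):
    """Rooms on the same floor sharing (part of) a wall."""
    if a[4] != b[4]:
        return False
    overlap_x = a[0] <= b[2] + tol and b[0] <= a[2] + tol
    overlap_y = a[1] <= b[3] + tol and b[1] <= a[3] + tol
    touch_x = abs(a[2] - b[0]) <= tol or abs(b[2] - a[0]) <= tol
    touch_y = abs(a[3] - b[1]) <= tol or abs(b[3] - a[1]) <= tol
    return (touch_x and overlap_y) or (touch_y and overlap_x)

def vertical(a, b):
    """Rooms on different floors whose footprints overlap."""
    overlap_x = a[0] < b[2] and b[0] < a[2]
    overlap_y = a[1] < b[3] and b[1] < a[3]
    return a[4] != b[4] and overlap_x and overlap_y

def contain(outer, inner):
    """Outer footprint fully contains inner on the same floor."""
    return (outer[4] == inner[4]
            and outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

r101 = (0, 0, 5, 4, 1)   # floor 1
r102 = (5, 0, 9, 4, 1)   # shares the x=5 wall with r101
r201 = (0, 0, 5, 4, 2)   # directly above r101
```

In the paper's setting these predicates would be registered as extension functions evaluated during SPARQL FILTER processing rather than called directly.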

18. CORCON-MOD3: An integrated computer model for analysis of molten core-concrete interactions. User's manual

    Energy Technology Data Exchange (ETDEWEB)

Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)]

    1993-10-01

The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants. The code represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.

  19. Research in Mobile Database Query Optimization and Processing

    Directory of Open Access Journals (Sweden)

    Agustinus Borgy Waluyo

    2005-01-01

Full Text Available The emergence of mobile computing provides the ability to access information at any time and place. However, as mobile computing environments have inherent factors like power, storage, asymmetric communication cost, and bandwidth limitations, efficient query processing and minimum query response time are definitely of great interest. This survey groups a variety of query optimization and processing mechanisms in mobile databases into two main categories, namely: (i) query processing strategy, and (ii) caching management strategy. Query processing includes both pull and push operations (broadcast mechanisms). We further classify push operation into on-demand broadcast and periodic broadcast. Push operation (on-demand broadcast) relates to designing techniques that enable the server to accommodate multiple requests so that the request can be processed efficiently. Push operation (periodic broadcast) corresponds to data dissemination strategies. In this scheme, several techniques to improve the query performance by broadcasting data to a population of mobile users are described. A caching management strategy defines a number of methods for maintaining cached data items in clients' local storage. This strategy considers critical caching issues such as caching granularity, caching coherence strategy and caching replacement policy. Finally, this survey concludes with several open issues relating to mobile query optimization and processing strategy.
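The caching replacement policies surveyed can be illustrated with the classic least-recently-used (LRU) scheme. A minimal sketch in Python, with whole data items as the caching granularity; the capacity and item keys are illustrative assumptions:

```python
from collections import OrderedDict

# Minimal sketch of a client-side caching management strategy:
# fixed granularity (whole data items) with LRU replacement.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                     # miss -> client pulls from server
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("stock:AAPL", 172.5)
cache.put("stock:MSFT", 331.0)
cache.get("stock:AAPL")                     # AAPL becomes most recent
cache.put("stock:GOOG", 140.2)              # evicts MSFT
```

A coherence strategy would additionally invalidate entries when broadcast reports from the server mark them stale.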

  20. A General Approach for Securely Querying and Updating XML Data

    CERN Document Server

    Mahfoud, Houari

    2012-01-01

Over the past years several works have proposed access control models for XML data where only read-access rights over non-recursive DTDs are considered. Few works have studied access rights for updates. In this paper, we present a general model for specifying access control on XML data in the presence of the update operations of the W3C XQuery Update Facility. Our approach for enforcing such update specifications is based on the notion of query rewriting, where each update operation defined over an arbitrary DTD (recursive or not) is rewritten to a safe one, so that it is evaluated only over the XML data the user is allowed to update. In the second part of this report we investigate the security of XML updating in the presence of read-access rights specified by security views. For an XML document, a security view represents, for each class of users, all and only the parts of the document these users are able to see. We show that an update operation defined over a security view can cause disclosure of sensit...
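The security-view idea, showing each user class all and only the permitted parts of a document, can be sketched as tree pruning. A minimal Python sketch using the standard library; the document shape and the tag-based readability policy are illustrative assumptions, and real enforcement rewrites queries rather than materializing the view:

```python
import xml.etree.ElementTree as ET

# Sketch of materializing a read-access security view: elements whose
# tag is not readable are removed together with their subtrees, so
# queries and updates only ever see permitted data.

def prune(elem, readable):
    for child in list(elem):
        if child.tag in readable:
            prune(child, readable)
        else:
            elem.remove(child)   # hide the subtree from this user class
    return elem

doc = ET.fromstring(
    "<hospital>"
    "<patient><name>Ann</name><diagnosis>flu</diagnosis></patient>"
    "</hospital>"
)
view = prune(doc, readable={"patient", "name"})
print(ET.tostring(view, encoding="unicode"))
# -> <hospital><patient><name>Ann</name></patient></hospital>
```

Query rewriting achieves the same effect lazily by injecting the accessibility test into each query or update expression instead of copying the document.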

  1. GCFM Users Guide Revision for Model Version 5.0

    Energy Technology Data Exchange (ETDEWEB)

    Keimig, Mark A.; Blake, Coleman

    1981-08-10

This paper documents alterations made to the MITRE/DOE Geothermal Cash Flow Model (GCFM) in the period of September 1980 through September 1981. Version 4.0 of GCFM was installed on the computer at the DOE San Francisco Operations Office in August 1980. This version has also been distributed to about a dozen geothermal industry firms for examination and potential use. During late 1980 and 1981, a few errors detected in the Version 4.0 code were corrected, resulting in Version 4.1. If you are currently using GCFM Version 4.0, it is suggested that you make the changes to your code described in Section 2.0. The user's manual changes listed in Section 3.0 and Section 4.0 should then also be made.

  2. Modeling integrated water user decisions in intermittent supply systems

    Science.gov (United States)

    Rosenberg, David E.; Tarawneh, Tarek; Abdel-Khaleq, Rania; Lund, Jay R.

    2007-07-01

We apply systems analysis to estimate household water use in an intermittent supply system considering numerous interdependent water user behaviors. Some 39 household actions include conservation; improving local storage or water quality; and accessing sources having variable costs, availabilities, reliabilities, and qualities. A stochastic optimization program with recourse decisions identifies the infrastructure investments and short-term coping actions a customer can adopt to cost-effectively respond to a probability distribution of piped water availability. Monte Carlo simulations show effects for a population of customers. Model calibration reproduces the distribution of billed residential water use in Amman, Jordan. Parametric analyses suggest economic and demand responses to increased availability and alternative pricing, as well as potential market penetration for conservation actions, associated water savings, and subsidies to entice further adoption. We discuss new insights to size, target, and finance conservation.
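The "investment now, coping actions later" structure of a stochastic program with recourse can be shown in miniature. A toy Python sketch with one first-stage decision (buy a storage tank or not) and scenario-wise recourse; all demand figures, costs, and probabilities are illustrative, not the paper's data:

```python
# Toy two-stage stochastic sketch: choose an infrastructure investment
# first, then the cheapest coping action per piped-supply scenario.

DEMAND = 10                        # household demand, m^3/week
TANK_COST = 4                      # amortized weekly cost of a tank
SCENARIOS = [(0.7, 10), (0.3, 4)]  # (probability, piped supply in m^3)

def coping_cost(shortfall, has_tank):
    # Tank owners store ahead cheaply; others buy pricier vendor water.
    unit_price = 1.0 if has_tank else 3.0
    return shortfall * unit_price

def expected_cost(has_tank):
    fixed = TANK_COST if has_tank else 0.0
    return fixed + sum(p * coping_cost(max(DEMAND - supply, 0), has_tank)
                       for p, supply in SCENARIOS)

best = min((False, True), key=expected_cost)
```

The full model enumerates 39 interdependent actions and a richer availability distribution, and Monte Carlo simulation then aggregates such per-customer solutions over a population.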

  3. Group-by Skyline Query Processing in Relational Engines

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Luk, Ming-Hay; Lo, Eric

    2009-01-01

    the missing cost model for the BBS algorithm. Experimental results show that our techniques are able to devise the best query plans for a variety of group-by skyline queries. Our focus is on algorithms that can be directly implemented in today's commercial database systems without the addition of new access...
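Although the abstract is truncated here, the group-by skyline operation itself is easy to illustrate: within each group, keep the tuples not dominated by any other tuple. A Python sketch with illustrative hotel data (lower price and lower distance are better; the dataset and dimensions are assumptions):

```python
# Group-by skyline sketch: a tuple is dominated if another tuple in the
# same group is at least as good on every dimension and strictly better
# on one (lower is better here). Real engines use index-based algorithms
# such as BBS with a cost model, as the record discusses.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def groupby_skyline(rows, group_key, dims):
    groups = {}
    for row in rows:
        groups.setdefault(group_key(row), []).append(row)
    return {g: [r for r in members
                if not any(dominates(dims(o), dims(r))
                           for o in members if o is not r)]
            for g, members in groups.items()}

hotels = [("Paris", 120, 2.0), ("Paris", 90, 3.5), ("Paris", 150, 1.0),
          ("Paris", 130, 2.5),   # dominated by ("Paris", 120, 2.0)
          ("Oslo", 200, 0.5), ("Oslo", 180, 0.8)]
sky = groupby_skyline(hotels, lambda h: h[0], lambda h: (h[1], h[2]))
```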

  4. Low Redundancy in Static Dictionaries with Constant Query Time

    DEFF Research Database (Denmark)

    Pagh, Rasmus

    2001-01-01

    A static dictionary is a data structure for storing subsets of a finite universe U, so that membership queries can be answered efficiently. We study this problem in a unit cost RAM model with word size Ω(log |U|), and show that for n-element subsets, constant worst case query time can be obtained...
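For contrast with the paper's constant-time, near-minimum-space structure, the problem it solves has a simple baseline: build once, then answer only membership queries. A Python sketch of that baseline (sorted array with binary search, O(log n) queries); the keys are illustrative:

```python
import bisect

# Baseline static dictionary: keys from a finite universe are stored
# once and never updated; only membership queries are answered.
# This gives O(log n) query time in O(n) words. The paper's structure
# instead achieves constant worst-case query time with space close to
# the information-theoretic minimum.

class StaticDictionary:
    def __init__(self, keys):
        self.keys = sorted(set(keys))   # built once, immutable thereafter

    def __contains__(self, x):
        i = bisect.bisect_left(self.keys, x)
        return i < len(self.keys) and self.keys[i] == x

d = StaticDictionary([17, 3, 42, 3])
```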

  5. On the query complexity of finding a local maximum point

    NARCIS (Netherlands)

    Rastsvelaev, A.L.; Beklemishev, L.D.

    2008-01-01

We calculate the minimal number of queries sufficient to find a local maximum point of a function on a discrete interval for a model with M parallel queries, M≥1. Matching upper and lower bounds are obtained. The bounds are formulated in terms of certain Fibonacci-type sequences of numbers.
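The sequential case (M=1) can be sketched with a halving strategy: query two adjacent points and recurse toward the rising side, which must contain a local maximum. The array of values below is an illustrative stand-in for the queried function; the paper's tight bounds for M parallel queries use Fibonacci-type sequences rather than simple halving:

```python
# Find a local maximum on a discrete interval [lo, hi] using function
# queries (a point is a local maximum if no neighbour is strictly larger).
# Binary-halving variant for the sequential model, with a query counter.

def local_max(f, lo, hi):
    """Return (index, queries_used) for some local maximum of f."""
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        a, b = f(mid), f(mid + 1)
        queries += 2
        if a < b:
            lo = mid + 1   # rising edge: a local maximum lies to the right
        else:
            hi = mid       # falling/flat edge: one lies to the left
    return lo, queries

values = [1, 3, 7, 6, 2, 8, 5]
idx, used = local_max(values.__getitem__, 0, len(values) - 1)
```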

  6. Usability of XML Query Languages

    NARCIS (Netherlands)

    Graaumans, J.P.M.

    2005-01-01

    The eXtensible Markup Language (XML) is a markup language which enables re-use of information. Specific query languages for XML are developed to facilitate this. There are large differences between history, design goal, and syntax of the XML query languages. However, in practice these languages are

  7. Querying Sentiment Development over Time

    DEFF Research Database (Denmark)

    Andreasen, Troels; Christiansen, Henning; Have, Christian Theil

    2013-01-01

    that measures how well a hypothesis characterizes a given time interval; the semantics is parameterized so it can be adjusted to different views of the data. EmoEpisodes is extended to a query language with variables standing for unknown topics and emotions, and the query-answering mechanism will return...

  8. Priming the Query Specification Process.

    Science.gov (United States)

    Toms, Elaine G.; Freund, Luanne

    2003-01-01

    Tests the use of questions as a technique in the query specification process. Using a within-subjects design, 48 people interacted with a modified Google interface to solve four information problems in four domains. Half the tasks were entered as typical keyword queries, and half as questions or statements. Results suggest the typical search box…

  9. Improving 3D spatial queries search: newfangled technique of space filling curves in 3D city modeling

    DEFF Research Database (Denmark)

    Uznir, U.; Anton, François; Suhaibah, A.

    2013-01-01

    web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method......, retrieving portions of and especially searching these 3D city models, will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects...... modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert’s curve, preserves the Lebesgue measure and is Lipschitz...
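The clustering idea behind space-filling curves is that objects are stored in the order of their distance along the curve, so spatially close objects stay close on disk. A Python sketch of the standard Hilbert-curve index for d=2 on an n×n grid (the record's setting is 3D city objects, and the object coordinates below are illustrative):

```python
# Map grid cell (x, y) to its distance along the order-n Hilbert curve
# (n a power of two). Sorting objects by this distance clusters spatial
# neighbours together, which is what speeds up range and nearest-
# neighbour retrieval in curve-ordered storage.

def xy2d(n, x, y):
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the curve pattern nests correctly
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

objects = [(0, 0), (1, 0), (7, 7), (0, 1), (6, 7)]
clustered = sorted(objects, key=lambda p: xy2d(8, *p))
```

The 3D variant used for city models follows the same bit-interleaving-with-rotation scheme with a third coordinate.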

  10. An Adaptive Mechanism for Accurate Query Answering under Differential Privacy

    CERN Document Server

    Li, Chao

    2012-01-01

We propose a novel mechanism for answering sets of counting queries under differential privacy. Given a workload of counting queries, the mechanism automatically selects a different set of "strategy" queries to answer privately, using those answers to derive answers to the workload. The main algorithm proposed in this paper approximates the optimal strategy for any workload of linear counting queries. With no cost to the privacy guarantee, the mechanism improves significantly on prior approaches and achieves near-optimal error for many workloads, when applied under (ε, δ)-differential privacy. The result is an adaptive mechanism which can help users achieve good utility without requiring that they reason carefully about the best formulation of their task.
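The baseline such mechanisms improve on is answering each counting query directly with Laplace noise. A minimal Python sketch of that baseline (a counting query has sensitivity 1); the dataset and predicate are illustrative assumptions:

```python
import math
import random

# Laplace mechanism for a single counting query under epsilon-DP.
# The paper's adaptive mechanism instead answers a selected "strategy"
# workload privately and derives the requested answers from it.

def laplace_count(data, predicate, epsilon, rng=None):
    """Noisy count of rows satisfying `predicate` (sensitivity 1)."""
    rng = rng or random.Random()
    true_count = sum(1 for row in data if predicate(row))
    u = rng.random() - 0.5              # uniform on [-0.5, 0.5)
    scale = 1.0 / epsilon               # b = sensitivity / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

ages = [23, 37, 45, 29, 61, 52]
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=1.0,
                      rng=random.Random(0))
```

For a correlated workload (e.g. all range counts), answering a well-chosen strategy set and recombining the answers yields much lower total error than noising each query independently, which is the gap the paper's mechanism targets.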

  11. Languages for model-driven development of user interfaces: Review of the state of the art

    Directory of Open Access Journals (Sweden)

    Jovanović Mlađan

    2013-01-01

Full Text Available In model-driven user interface development, several models are used to describe different aspects of the user interface at varying levels of detail. The relations between the models are established through model transformations. The Model Driven Engineering (MDE) approach has been proposed in the software engineering domain in order to provide techniques and tools to deal with models in an automated way. In this paper, we review existing user interface languages that have gained wider acceptance, and discuss their applicability for model-driven user interface development.

  12. jQuery Pocket Reference

    CERN Document Server

    Flanagan, David

    2010-01-01

    "As someone who uses jQuery on a regular basis, it was surprising to discover how much of the library I'm not using. This book is indispensable for anyone who is serious about using jQuery for non-trivial applications."-- Raffaele Cecco, longtime developer of video games, including Cybernoid, Exolon, and Stormlord jQuery is the "write less, do more" JavaScript library. Its powerful features and ease of use have made it the most popular client-side JavaScript framework for the Web. This book is jQuery's trusty companion: the definitive "read less, learn more" guide to the library. jQuery P

  13. Instant jQuery selectors

    CERN Document Server

    De Rosa, Aurelio

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. Instant jQuery Selectors follows a simple how-to format with recipes aimed at making you well versed with the wide range of selectors that jQuery has to offer through a myriad of examples.Instant jQuery Selectors is for web developers who want to delve into jQuery from its very starting point: selectors. Even if you're already familiar with the framework and its selectors, you could find several tips and tricks that you aren't aware of, especially about performance and how jQuery ac

  14. jQuery UI cookbook

    CERN Document Server

    Boduch, Adam

    2013-01-01

    Filled with a practical collection of recipes, jQuery UI Cookbook is full of clear, step-by-step instructions that will help you harness the powerful UI framework in jQuery. Depending on your needs, you can dip in and out of the Cookbook and its recipes, or follow the book from start to finish.If you are a jQuery UI developer looking to improve your existing applications, extract ideas for your new application, or to better understand the overall widget architecture, then jQuery UI Cookbook is a must-have for you. The reader should at least have a rudimentary understanding of what jQuery UI is

  15. Antecedents and Outcomes of Search Engines Loyalty: Designing a model of Iranian Users Loyalty

    Directory of Open Access Journals (Sweden)

    Alireza Hadadian

    2013-09-01

Full Text Available This paper aims to design a proposed model for search engine designers to promote the loyalty levels of search engine users. This is a descriptive survey study. The statistical population of the research is composed of Ferdowsi University students; the sample size was estimated to be 347. The data gathering instrument was a self-administered questionnaire, and structural equation modeling (SEM) was used for the data analysis. Findings indicate that user communication affects user satisfaction significantly and negatively. Also, in the final research model, search engine image, search engine perceived quality and search engine perceived value affect user satisfaction. Moreover, search engine perceived value affects user loyalty and verbal advertising directly, and user satisfaction affects user loyalty directly. Finally, user loyalty leads to high expected switching costs, search engine revisits and verbal advertising for search engines.

  16. Query Migration from Object Oriented World to Semantic World

    Directory of Open Access Journals (Sweden)

    Nassima Soussi

    2016-06-01

Full Text Available In the last decades, the object-oriented approach was able to take a large share of the databases market, aiming to design and implement structured and reusable software through the composition of independent elements in order to obtain programs with high performance. On the other hand, the mass of information stored on the web is increasing day after day at a vertiginous speed, confronting the current web with the problem of creating a bridge to facilitate access to data between different applications and systems, as well as to look for the relevant and exact information wished by users. In addition, all existing approaches for rewriting object-oriented languages into the SPARQL language rely on a model transformation process to guarantee this mapping. All the previous reasons have prompted us to write this paper in order to bridge an important gap between these two heterogeneous worlds (the object-oriented world and the semantic web world) by proposing the first provably semantics-preserving OQL-to-SPARQL translation algorithm for each element of an OQL query (SELECT clause, FROM clause, FILTER constraint, implicit/explicit join, and union/intersection SELECT queries).

  17. Proliferation Risk Characterization Model Prototype Model - User and Programmer Guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Dukelow, J.S.; Whitford, D.

    1998-12-01

A model for the estimation of the risk of diversion of weapons-capable materials was developed. It represents both the threat of diversion and site vulnerability as a product of a small number of variables (two to eight), each of which can take on a small number (two to four) of qualitatively defined (but quantitatively implemented) values. The values of the overall threat and vulnerability variables are then converted to threat and vulnerability categories. The threat and vulnerability categories are used to define the likelihood of diversion, also defined categorically. The evaluator supplies an estimate of the consequences of a diversion, defined categorically, but with the categories based on the IAEA Attractiveness levels. Likelihood and Consequences categories are used to define the Risk, also defined categorically. The threat, vulnerability, and consequences input provided by the evaluator contains a representation of his/her uncertainty in each variable assignment which is propagated all the way through to the calculation of the Risk categories. [Appendix G available on diskette only.]
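The categorical chaining described, variables to threat/vulnerability categories, then likelihood, then risk, reduces to table lookups. A Python sketch of that structure; the category names and lookup tables are illustrative placeholders, not the model's actual values, and the model's uncertainty propagation is omitted:

```python
# Sketch of a categorical risk model: qualitative inputs combine through
# lookup tables into a likelihood category and finally a risk category.

LIKELIHOOD = {            # (threat, vulnerability) -> likelihood
    ("high", "high"): "high", ("high", "low"): "medium",
    ("low", "high"): "medium", ("low", "low"): "low",
}
RISK = {                  # (likelihood, consequences) -> risk
    ("high", "severe"): "high",   ("high", "minor"): "medium",
    ("medium", "severe"): "medium", ("medium", "minor"): "low",
    ("low", "severe"): "low",     ("low", "minor"): "low",
}

def risk_category(threat, vulnerability, consequences):
    likelihood = LIKELIHOOD[(threat, vulnerability)]
    return RISK[(likelihood, consequences)]
```

In the actual model each input also carries the evaluator's uncertainty, which would turn these point lookups into distributions over categories.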

  18. A study of medical and health queries to web search engines.

    Science.gov (United States)

    Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk

    2004-03-01

This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries, (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999, and (iii) medical or health advice-seeking queries beginning with the word 'should'. Findings suggest: (i) a small percentage of web queries are medical or health related, (ii) the top five categories of medical or health queries were: general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships, and (iii) over time, the medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.

  19. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Science.gov (United States)

    2009-01-01

Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems. PMID:19785754

  20. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model

    Directory of Open Access Journals (Sweden)

    Parker Louise E

    2009-09-01

Full Text Available Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. TIDES social marketing approach: The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Results: Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Discussion and conclusion: Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  1. A social marketing approach to implementing evidence-based practice in VHA QUERI: the TIDES depression collaborative care model.

    Science.gov (United States)

    Luck, Jeff; Hagigi, Fred; Parker, Louise E; Yano, Elizabeth M; Rubenstein, Lisa V; Kirchner, JoAnn E

    2009-09-28

    Collaborative care models for depression in primary care are effective and cost-effective, but difficult to spread to new sites. Translating Initiatives for Depression into Effective Solutions (TIDES) is an initiative to promote evidence-based collaborative care in the U.S. Veterans Health Administration (VHA). Social marketing applies marketing techniques to promote positive behavior change. Described in this paper, TIDES used a social marketing approach to foster national spread of collaborative care models. The approach relied on a sequential model of behavior change and explicit attention to audience segmentation. Segments included VHA national leadership, Veterans Integrated Service Network (VISN) regional leadership, facility managers, frontline providers, and veterans. TIDES communications, materials and messages targeted each segment, guided by an overall marketing plan. Depression collaborative care based on the TIDES model was adopted by VHA as part of the new Primary Care Mental Health Initiative and associated policies. It is currently in use in more than 50 primary care practices across the United States, and continues to spread, suggesting success for its social marketing-based dissemination strategy. Development, execution and evaluation of the TIDES marketing effort shows that social marketing is a promising approach for promoting implementation of evidence-based interventions in integrated healthcare systems.

  2. A Knowledge Based Approach for Query Optimization in Preferential Mapping Relational Databases

    Directory of Open Access Journals (Sweden)

    P.Ranjani

    2014-10-01

Full Text Available Relational query languages provide a high-level declarative interface to access data stored in relational databases. Two key components of the query evaluation layer of a SQL database system are the query optimizer and the query execution engine; the System R optimization framework was a remarkably elegant approach that helped fuel much of the subsequent work in optimization. Relational database systems allow transparent and efficient evaluation of preferential queries. Extensive evaluation on two real-world data sets illustrates the feasibility and advantages of the framework. Combining the prefer operator with the rank and rank-join operators enables early pruning of results based on score or confidence during query processing. During preference evaluation, both the conditional and the scoring part of a preference are used. The conditional part acts as a soft constraint that determines which records are scored without disqualifying any duplicates from the query result. We introduce a preference-mapping relational data model that extends the database with profile preferences for query optimization, and an extended algebra that captures the essence of processing queries with a ranking method. Based on a set of algebraic properties and a cost model that we propose, we provide several query optimization strategies for extended query plans. We describe a query execution algorithm that blends preference evaluation with query execution, while making effective use of the native query engine.
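The soft-constraint behaviour described, scoring matching records without disqualifying the rest, can be sketched outside the engine. A Python sketch of a prefer-style operator; the car records and preference scores are illustrative assumptions, not the paper's algebra:

```python
# Sketch of a "prefer" operator as a soft constraint: a preference adds
# to the score of records satisfying its condition but never removes
# records, and the result is then ranked by accumulated score.

def prefer(records, condition, score):
    return [(r, s + (score if condition(r) else 0.0)) for r, s in records]

cars = [({"make": "vw", "fuel": "diesel"}, 0.0),
        ({"make": "vw", "fuel": "petrol"}, 0.0),
        ({"make": "kia", "fuel": "diesel"}, 0.0)]

ranked = prefer(cars, lambda r: r["make"] == "vw", 2.0)
ranked = prefer(ranked, lambda r: r["fuel"] == "diesel", 1.0)
ranked.sort(key=lambda rs: rs[1], reverse=True)
```

An engine-integrated version would push this scoring into rank and rank-join operators so low-scoring candidates can be pruned early during execution.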

  3. [Comparison of level of satisfaction of users of home care: integrated model vs. dispensaries model].

    Science.gov (United States)

    Gorina, Marta; Limonero, Joaquín T; Peñart, Xavier; Jiménez, Jordi; Gassó, Javier

    2014-01-01

To determine the level of satisfaction of users that receive home health care through two different models of primary health care: the integrated model and the dispensaries model. Cross-sectional, observational study. Two primary care centers in the province of Barcelona. The questionnaire was administered to 158 chronic patients over 65 years old, of whom 67 were receiving health care under the integrated model and 91 under the dispensaries model. The Evaluation of Satisfaction with Home Health Care (SATISFAD12) questionnaire was administered, together with other complementary questions about satisfaction with the home health care service, as well as sociodemographic questions (age, sex, disease, etc.). The patients of the dispensaries model showed more satisfaction than the users receiving care under the integrated model. There was greater healthcare continuity for the patients of the dispensaries model, and a lower percentage of hospitalizations during the last year. The satisfaction of the users of both models was not associated with gender, health perception, or independence of the patients. The user satisfaction rate with home care by primary health care seems to depend on the typical characteristics of each organisational model. The dispensaries model shows a higher rate of satisfaction, or perceived quality of care, in all the aspects analysed. More studies are needed to extrapolate these results to other primary care centers belonging to other institutions. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  4. In-context query reformulation for failing SPARQL queries

    Science.gov (United States)

    Viswanathan, Amar; Michaelis, James R.; Cassidy, Taylor; de Mel, Geeth; Hendler, James

    2017-05-01

Knowledge bases for decision support systems are growing increasingly complex, through continued advances in data ingest and management approaches. However, humans do not possess the cognitive capabilities to retain a bird's-eye view of such knowledge bases, and may end up issuing unsatisfiable queries to such systems. This work focuses on the implementation of a query reformulation approach for graph-based knowledge bases, specifically designed to support the Resource Description Framework (RDF). The reformulation approach presented is instance- and schema-aware. Thus, in contrast to relaxation techniques found in the state of the art, the presented approach produces in-context query reformulations.
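The core loop of relaxing a failing conjunctive query can be sketched in a few lines. A Python sketch over a toy triple store; the tiny knowledge base, the entity-centric pattern form, and the drop-the-last-pattern heuristic are all illustrative simplifications of what an instance- and schema-aware RDF system would do:

```python
# Sketch of reformulating a failing conjunctive query: drop constraints
# until some answer exists, and report what was relaxed.

KB = {("alice", "worksAt", "labX"), ("alice", "rank", "senior"),
      ("bob", "worksAt", "labX"), ("bob", "rank", "junior")}

def solutions(patterns):
    """Entities ?x satisfying every (?x, predicate, object) pattern."""
    entities = {s for s, _, _ in KB}
    return sorted(e for e in entities
                  if all((e, p, o) in KB for _, p, o in patterns))

def reformulate(patterns):
    """Greedily drop trailing patterns until the query succeeds; an
    instance-/schema-aware system would choose what to relax in context."""
    patterns, dropped = list(patterns), []
    while patterns and not solutions(patterns):
        dropped.append(patterns.pop())
    return solutions(patterns), dropped

query = [("?x", "worksAt", "labX"), ("?x", "rank", "principal")]
answers, dropped = reformulate(query)
```

Reporting `dropped` back to the user is what makes the reformulation "in-context": the system explains which constraint made the original query unsatisfiable.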

  5. A user credit assessment model based on clustering ensemble for broadband network new media service supervision

    Science.gov (United States)

    Liu, Fang; Cao, San-xing; Lu, Rui

    2012-04-01

This paper proposes a user credit assessment model based on clustering ensemble, aiming to solve the problem of users illegally spreading pirated and pornographic media content within user self-service oriented broadband network new media platforms. The idea is to assess new media users' credit by establishing an indices system based on user credit behaviors; illegal users can then be found according to the credit assessment results, thus curbing the bad videos and audios transmitted on the network. The proposed model integrates the advantages of swarm intelligence clustering, which is suitable for user credit behavior analysis, and K-means clustering, which can eliminate the scattered users left by swarm intelligence clustering, so as to classify all users' credit automatically. Verification experiments based on a standard credit application dataset from the UCI Machine Learning Repository are reported, and the statistical results of a comparative experiment with a single swarm intelligence clustering model indicate that this clustering ensemble model has a stronger ability to distinguish creditworthiness, especially in predicting the user clusters with the best and worst credit, which will help operators take incentive or punitive measures accurately. Besides, compared with the experimental results of a logistic regression based model under the same conditions, this clustering ensemble model is robust and has better prediction accuracy.
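One common way to combine two base clusterings, as this model combines swarm-intelligence and K-means results, is a co-association consensus: pairs of users placed together by most base clusterings end up in the same final cluster. A Python sketch of that step; the label vectors are illustrative stand-ins for the two base clusterings, not the paper's algorithm:

```python
from itertools import combinations

# Co-association consensus for a clustering ensemble: count how often
# each pair of items is co-clustered across base clusterings, then merge
# pairs above a threshold with a small union-find.

def consensus(labelings, n, threshold=0.5):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        together = sum(1 for lab in labelings if lab[i] == lab[j])
        if together / len(labelings) > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]      # cluster id per item

swarm = [0, 0, 0, 1, 1, 2]   # swarm clustering leaves user 5 scattered
kmeans = [0, 0, 0, 1, 1, 1]  # K-means folds user 5 into a cluster
labels = consensus([swarm, kmeans], 6)
```

With only two base clusterings and a majority threshold, pairs must be co-clustered in both to merge, so the ensemble keeps the agreed structure while isolating contested assignments.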

  6. A practical tool for modeling biospecimen user fees.

    Science.gov (United States)

    Matzke, Lise; Dee, Simon; Bartlett, John; Damaraju, Sambasivarao; Graham, Kathryn; Johnston, Randal; Mes-Masson, Anne-Marie; Murphy, Leigh; Shepherd, Lois; Schacter, Brent; Watson, Peter H

    2014-08-01

    The question of how best to attribute the unit costs of the annotated biospecimen product that is provided to a research user is a common issue for many biobanks. Some of the factors influencing user fees are capital and operating costs, internal and external demand and market competition, and moral standards that dictate that fees must have an ethical basis. It is therefore important to establish a transparent and accurate costing tool that can be utilized by biobanks and aid them in establishing biospecimen user fees. To address this issue, we built a biospecimen user fee calculator tool, accessible online at www.biobanking.org . The tool was built to allow input of: i) annual operating and capital costs; ii) costs categorized by the major core biobanking operations; iii) specimen products requested by a biobank user; and iv) services provided by the biobank beyond core operations (e.g., histology, tissue micro-array); as well as v) several user defined variables to allow the calculator to be adapted to different biobank operational designs. To establish default values for variables within the calculator, we first surveyed the members of the Canadian Tumour Repository Network (CTRNet) management committee. We then enrolled four different participants from CTRNet biobanks to test the hypothesis that the calculator tool could change approaches to user fees. Participants were first asked to estimate user fee pricing for three hypothetical user scenarios based on their biobanking experience (estimated pricing) and then to calculate fees for the same scenarios using the calculator tool (calculated pricing). Results demonstrated significant variation in estimated pricing that was reduced by calculated pricing, and that higher user fees are consistently derived when using the calculator. We conclude that adoption of this online calculator for user fee determination is an important first step towards harmonization and realistic user fees.
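
    The arithmetic behind such a fee calculator reduces to attributing a share of annual cost to each specimen product; the figures and the 40% attribution below are invented for illustration and are not values from the CTRNet tool:

```python
# Toy cost-recovery calculation of the kind such a user-fee calculator
# performs: attribute a fraction of total annual cost to one core
# operation, divide by annual output, then add non-core services.

def unit_fee(annual_costs, fraction_attrib, annual_units):
    """Cost per specimen: the share of annual cost attributed to a core
    operation, divided by the number of specimens released per year."""
    return annual_costs * fraction_attrib / annual_units

annual_operating = 200_000.0   # $/year, operating (invented figure)
annual_capital   = 50_000.0    # $/year, amortized capital (invented figure)
units_per_year   = 1_000       # specimens released per year

# Assume 40% of total cost is attributed to this specimen product.
fee = unit_fee(annual_operating + annual_capital, 0.40, units_per_year)
fee += 35.0                    # add-on service beyond core ops (e.g., histology)
print(fee)  # → 135.0
```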

  7. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    Science.gov (United States)

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
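
    The drag & drop idea reduces to serializing a visual graph of triple patterns into a SPARQL string; the minimal mapping below is a plausible simplification, not SPARQLGraph's actual code, and the UniProt-style prefixes are illustrative:

```python
# Sketch of a visual-graph-to-SPARQL serializer: the canvas graph is a
# list of (subject, predicate, object) edges, and '?'-prefixed terms
# become the projected query variables.

def graph_to_sparql(edges):
    vars_ = sorted({t for e in edges for t in (e[0], e[2]) if t.startswith("?")})
    body = " .\n  ".join(f"{s} {p} {o}" for s, p, o in edges)
    return f"SELECT {' '.join(vars_)}\nWHERE {{\n  {body} .\n}}"

edges = [
    ("?protein", "rdf:type", "up:Protein"),
    ("?protein", "up:organism", "taxon:9606"),
]
query = graph_to_sparql(edges)
print(query)
```

    The resulting string could then be submitted to a public SPARQL endpoint, which is the execution step the platform performs for the user.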

  8. BFIT: A program to analyze and fit the BCJ model parameters to experimental data. Tutorial and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Lathrop, J.F.

    1996-12-01

    While the BCJ plasticity model is powerful in representing material behavior, finding values for the 20 parameters representing a particular material can be a daunting task. Bfit was designed to make this task at least feasible if not easy. The program has been developed over several years and incorporates the suggestions and requests of several users.

  9. Graphical User Interface for Simulink Integrated Performance Analysis Model

    Science.gov (United States)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, saving the maximum amount of time and money while showing that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and will run the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values from each test of the simulation so that they may be graphed and compared to other values.

  10. User Acceptance of Long-Term Evolution (LTE) Services: An Application of Extended Technology Acceptance Model

    Science.gov (United States)

    Park, Eunil; Kim, Ki Joon

    2013-01-01

    Purpose: The aim of this paper is to propose an integrated path model in order to explore user acceptance of long-term evolution (LTE) services by examining potential causal relationships between key psychological factors and user intention to use the services. Design/methodology/approach: Online survey data collected from 1,344 users are analysed…

  11. Making the Invisible Visible: Personas and Mental Models of Distance Education Library Users

    Science.gov (United States)

    Lewis, Cynthia; Contrino, Jacline

    2016-01-01

    Gaps between users' and designers' mental models of digital libraries often result in adverse user experiences. This article details an exploratory user research study at a large, predominantly online university serving non-traditional distance education students with the goal of understanding these gaps. Using qualitative data, librarians created…

  12. On-demand information retrieval in sensor networks with localised query and energy-balanced data collection.

    Science.gov (United States)

    Teng, Rui; Zhang, Bing

    2011-01-01

    On-demand information retrieval enables users to query and collect up-to-date sensing information from sensor nodes. Since high energy efficiency is required in a sensor network, it is desirable to disseminate query messages with small traffic overhead and to collect sensing data with low energy consumption. However, on-demand query messages are generally forwarded to sensor nodes in network-wide broadcasts, which create large traffic overhead. In addition, since on-demand information retrieval may introduce intermittent and spatial data collections, the construction and maintenance of conventional aggregation structures such as clusters and chains will be at high cost. In this paper, we propose an on-demand information retrieval approach that exploits the name resolution of data queries according to the attribute and location of each sensor node. The proposed approach localises each query dissemination and enables localised data collection with maximised aggregation. To illustrate the effectiveness of the proposed approach, an analytical model that describes the criteria of sink proxy selection is provided. The evaluation results reveal that the proposed scheme significantly reduces energy consumption and improves the balance of energy consumption among sensor nodes by alleviating heavy traffic near the sink.
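
    The name-resolution step can be pictured as a filter over node attributes and locations, so that a query reaches only the relevant region instead of the whole network; the data model and field names below are invented for the sketch:

```python
# Illustrative name resolution for localized query dissemination: a query
# names an attribute and a target region, and only sensor nodes whose
# attribute set and coordinates match are asked to report, replacing a
# network-wide broadcast.

def resolve(query, nodes):
    """Return ids of the nodes that should receive the query."""
    xmin, ymin, xmax, ymax = query["region"]
    return [n["id"] for n in nodes
            if query["attribute"] in n["attributes"]
            and xmin <= n["x"] <= xmax and ymin <= n["y"] <= ymax]

nodes = [
    {"id": 1, "attributes": {"temperature"}, "x": 2, "y": 3},
    {"id": 2, "attributes": {"humidity"},    "x": 2, "y": 4},
    {"id": 3, "attributes": {"temperature"}, "x": 9, "y": 9},
]
q = {"attribute": "temperature", "region": (0, 0, 5, 5)}
print(resolve(q, nodes))  # → [1]
```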

  13. Long Fibre Composite Modelling Using Cohesive User's Element

    Science.gov (United States)

    Kozák, Vladislav; Chlup, Zdeněk

    2010-09-01

    The development of glass matrix composites reinforced by unidirectional long ceramic fibres has resulted in a family of very promising structural materials. The only disadvantage of such materials is their relatively high brittleness at room temperature. The main micromechanisms acting as toughening mechanisms are pull-out, crack bridging and matrix cracking. Other mechanisms such as crack deflection exist, but the primary one is pull-out, which is governed by the interface between fibre and matrix. This contribution shows a way to predict and/or optimise the mechanical behaviour of the composite by applying the cohesive zone method and writing a user's cohesive element for the FEM numerical package Abaqus. The presented results from numerical calculations are compared with experimental data. Crack extension is simulated by means of element extinction algorithms. The principal effort is concentrated on the application of the cohesive zone model with a special traction-separation (bridging) law and on cohesive zone modelling. Determination of the micro-mechanical parameters is based on a combination of static tests, microscopic observations and numerical calibration procedures.
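
    A traction-separation law of the general bilinear kind used in such cohesive elements can be written in a few lines; the peak traction and separation limits below are illustrative parameters, not calibrated values from the paper:

```python
# Bilinear traction-separation (cohesive) law: traction rises linearly to
# a peak at separation d0, decays linearly to zero at df, and is zero
# beyond df, where the element is fully failed ("extinct").

def traction(d, t_max=100.0, d0=0.01, df=0.1):
    """Traction (MPa) as a function of opening separation d (mm)."""
    if d <= 0:
        return 0.0
    if d <= d0:
        return t_max * d / d0                # linear hardening branch
    if d < df:
        return t_max * (df - d) / (df - d0)  # linear softening branch
    return 0.0                               # complete decohesion

print(traction(0.005), traction(0.01), traction(0.055), traction(0.2))
```

    The area under this curve is the fracture energy, which is one of the micro-mechanical parameters that the calibration procedures mentioned above would determine.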

  14. Propeller aircraft interior noise model: User's manual for computer program

    Science.gov (United States)

    Wilby, E. G.; Pope, L. D.

    1985-01-01

    A computer program entitled PAIN (Propeller Aircraft Interior Noise) has been developed to permit calculation of the sound levels in the cabin of a propeller-driven airplane. The fuselage is modeled as a cylinder with a structurally integral floor, the cabin sidewall and floor being stiffened by ring frames, stringers and floor beams of arbitrary configurations. The cabin interior is covered with acoustic treatment and trim. The propeller noise consists of a series of tones at harmonics of the blade passage frequency. Input data required by the program include the mechanical and acoustical properties of the fuselage structure and sidewall trim. Also, the precise propeller noise signature must be defined on a grid that lies in the fuselage skin. The propeller data are generated with a propeller noise prediction program such as the NASA Langley ANOPP program. The program PAIN permits the calculation of the space-average interior sound levels for the first ten harmonics of a propeller rotating alongside the fuselage. User instructions for PAIN are given in the report. Development of the analytical model is presented in NASA CR 3813.

  15. BPACK -- A computer model package for boiler reburning/co-firing performance evaluations. User's manual, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Wu, K.T.; Li, B.; Payne, R.

    1992-06-01

    This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits boiler heat transfer, fuels combustion, and pollutant emissions predictions related to a number of practical boiler operations such as fuel-switching, fuels co-firing, and reburning NO{sub x} reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning, and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is conveniently named BPACK (Boiler Package) and consists of six computer codes: three main computational codes and three input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general users' interest, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of worked examples to assist users in applying the models, and to illustrate the versatility of the codes.

  16. Mining and Querying Multimedia Data

    Science.gov (United States)

    2011-09-29

    SAT1.5GB data set as the ground truth. By randomly choosing a small number of these labels as input and leaving out remaining ones for evaluation, we...Record, 30:37–46, May 2001. ISSN 0163-5808. [3] Eugene Agichtein, Eric Brill, and Susan Dumais. Improving web search ranking by incorporating user...06, pages 19–26, 2006. [4] Eugene Agichtein, Eric Brill, Susan Dumais, and Robert Ragno. Learning user interaction models for predicting web search

  17. Using Intent Information to Model User Behavior in Diversified Search

    NARCIS (Netherlands)

    A. Chuklin; P. Serdyukov; M. de Rijke

    2013-01-01

    A result page of a modern commercial search engine often contains documents of different types targeted to satisfy different user intents (news, blogs, multimedia). When evaluating system performance and making design decisions we need to better understand user behavior on such result pages. To addr

  18. User Acceptance of Information Technology: Theories and Models.

    Science.gov (United States)

    Dillon, Andrew; Morris, Michael G.

    1996-01-01

    Reviews literature in user acceptance and resistance to information technology design and implementation. Examines innovation diffusion, technology design and implementation, human-computer interaction, and information systems. Concentrates on the determinants of user acceptance and resistance and emphasizes how researchers and developers can…

  19. Improving accuracy for identifying related PubMed queries by an integrated approach.

    Science.gov (United States)

    Lu, Zhiyong; Wilbur, W John

    2009-10-01

    PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.
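
    The lexical half of such an approach can be approximated with a term-overlap heuristic; the Jaccard measure, whitespace tokenization and 0.2 threshold below are invented for the sketch and are not the authors' exact features:

```python
# Minimal lexical session segmentation: consecutive queries sharing enough
# terms stay in one segment (one information need); low overlap starts a
# new segment.

def jaccard(a, b):
    """Term-overlap similarity between two query strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def segment(queries, threshold=0.2):
    segments, current = [], [queries[0]]
    for prev, q in zip(queries, queries[1:]):
        if jaccard(prev, q) >= threshold:
            current.append(q)
        else:
            segments.append(current)
            current = [q]
    segments.append(current)
    return segments

log = ["breast cancer brca1", "brca1 mutation", "zebrafish fin regeneration"]
print(segment(log))
# → [['breast cancer brca1', 'brca1 mutation'], ['zebrafish fin regeneration']]
```

    The paper's integrated approach additionally applies contextual analysis (the PubMed distance) to catch related queries that share no terms, which pure lexical overlap would wrongly split.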

  20. The role of economics in the QUERI program: QUERI Series

    Directory of Open Access Journals (Sweden)

    Smith Mark W

    2008-04-01

    Full Text Available Abstract Background The United States (U.S.) Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI) has implemented economic analyses in single-site and multi-site clinical trials. To date, no one has reviewed whether the QUERI Centers are taking an optimal approach to doing so. Consistent with the continuous learning culture of the QUERI Program, this paper provides such a reflection. Methods We present a case study of QUERI as an example of how economic considerations can and should be integrated into implementation research within both single and multi-site studies. We review theoretical and applied cost research in implementation studies outside and within VA. We also present a critique of the use of economic research within the QUERI program. Results Economic evaluation is a key element of implementation research. QUERI has contributed many developments in the field of implementation but has only recently begun multi-site implementation trials across multiple regions within the national VA healthcare system. These trials are unusual in their emphasis on developing detailed costs of implementation, as well as in the use of business case analyses (budget impact analyses). Conclusion Economics appears to play an important role in QUERI implementation studies only after implementation has reached the stage of multi-site trials. Economic analysis could better inform the choice of which clinical best practices to implement and the choice of implementation interventions to employ. QUERI economics also would benefit from research on costing methods and development of widely accepted international standards for implementation economics.

  1. Query recommendation in the information domain of children

    NARCIS (Netherlands)

    Duarte Torres, Sergio Raúl; Hiemstra, Djoerd; Weber, Ingmar; Serdyukov, Pavel

    2014-01-01

    Children represent an increasing part of web users. One of the key problems that hamper their search experience is their limited vocabulary, their difficulty to use the right keywords, and the inappropriateness of general- purpose query suggestions. In this work, we propose a method that uses tags f

  2. The Islands Approach to Nearest Neighbor Querying in Spatial Networks

    DEFF Research Database (Denmark)

    Huang, Xuegang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2005-01-01

    Much research has recently been devoted to the data management foundations of location-based mobile services. In one important scenario, the service users are constrained to a transportation network. As a result, query processing in spatial road networks is of interest. We propose a versatile app...

  3. A Simple Blueprint for Automatic Boolean Query Processing.

    Science.gov (United States)

    Salton, G.

    1988-01-01

    Describes a new Boolean retrieval environment in which an extended soft Boolean logic is used to automatically construct queries from original natural language formulations provided by users. Experimental results that compare the retrieval effectiveness of this method to conventional Boolean and vector processing are discussed. (27 references)…

  4. A Dynamic Extension of ATLAS Run Query Service

    CERN Document Server

    Buliga, Alexandru

    2015-01-01

    The ATLAS RunQuery is a primarily web-based service for the ATLAS community to access meta information about the data taking in a concise format. In order to provide a better user experience, the service was moved to use a new technology, involving concepts such as: Web Sockets, on demand data, client-side scripting, memory caching and parallelizing execution.

  5. A Survey of Query Auto Completion in Information Retrieval

    NARCIS (Netherlands)

    Cai, F.; de Rijke, M.

    2016-01-01

    In information retrieval, query auto completion (QAC), also known as type-ahead [Xiao et al., 2013, Cai et al., 2014b] and auto-complete suggestion [Jain and Mishne, 2010], refers to the following functionality: given a prefix consisting of a number of characters entered into a search box, the user i

  6. In-route skyline querying for location-based services

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Kristian S.

    2005-01-01

    With the emergence of an infrastructure for location-aware mobile services, the processing of advanced, location-based queries that are expected to underlie such services is gaining in relevance. While much work has assumed that users move in Euclidean space, this paper assumes that movement is c...

  7. Software Helps Retrieve Information Relevant to the User

    Science.gov (United States)

    Mathe, Natalie; Chen, James

    2003-01-01

    The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.
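
    The feedback-driven adaptation can be caricatured with a simple weight-update rule; the update formula, learning rate and default weight below are invented and do not reproduce ARNIE's actual relevance-network or genetic-algorithm dynamics:

```python
# Toy relevance-feedback weighting: each (query term, document)
# association has a weight nudged toward 1 by positive feedback and
# toward 0 by negative feedback; retrieval ranks documents by summed
# weights over the query terms.

class RelevanceNet:
    def __init__(self, rate=0.5):
        self.w = {}            # (term, doc) -> association weight
        self.rate = rate

    def feedback(self, query, doc, relevant):
        """Move each term-doc weight toward the feedback target."""
        for term in query.split():
            key = (term, doc)
            target = 1.0 if relevant else 0.0
            old = self.w.get(key, 0.5)
            self.w[key] = old + self.rate * (target - old)

    def rank(self, query, docs):
        """Order docs by summed association weight for the query terms."""
        score = lambda d: sum(self.w.get((t, d), 0.0) for t in query.split())
        return sorted(docs, key=score, reverse=True)

net = RelevanceNet()
net.feedback("propeller noise", "report_a", relevant=True)
net.feedback("propeller noise", "report_b", relevant=False)
print(net.rank("propeller noise", ["report_b", "report_a"]))
# → ['report_a', 'report_b']
```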

  8. OpenDolphin: presentation models for compelling user interfaces

    CERN Document Server

    CERN. Geneva

    2014-01-01

    Shared applications run on the server. They still need a display, though, be it on the web or on the desktop. OpenDolphin introduces a shared presentation model to clearly differentiate between "what" to display and "how" to display. The "what" is managed on the server and is independent of the UI technology whereas the "how" can fully exploit the UI capabilities like the ubiquity of the web or the power of the desktop in terms of interactivity, animations, effects, 3D worlds, and local devices. If you run a server-centric architecture and still seek to provide the best possible user experience, then this talk is for you. About the speaker Dierk König (JavaOne Rock Star) works as a fellow for Canoo Engineering AG, Basel, Switzerland. He is a committer to many open-source projects including OpenDolphin, Groovy, Grails, GPars and GroovyFX. He is lead author of the "Groovy in Action" book, which is among ...

  9. Location Privacy Preserving k Nearest Neighbor Query Method on Road Network in Presence of User's Preference

    Institute of Scientific and Technical Information of China (English)

    倪巍伟; 陈萧; 马中希

    2015-01-01

    With growing concern over individual privacy, privacy protection in location-based services has become an emerging research topic in the database field. For privacy-preserving k nearest neighbor (kNN) queries on road networks, protecting location privacy makes it difficult to maintain query quality, and querying users need a way to trade off query efficiency against accuracy. This work introduces the concept of a PoI (Points of Interest) probability distribution, generated on the server side by analyzing PoI adjacency relations. The server-side search for the k nearest PoIs is decomposed into a road network expansion phase and an iterative replacement phase, and a probability prediction mechanism for replaceable PoIs, based on the PoI probability distribution, is constructed for the iterative replacement phase. On this basis, a location privacy preserving kNN query method supporting user preference adjustment, AdPriQuery (Adjustable Privacy-preserving k nearest neighbor Query), is proposed: by tuning a filtering probability threshold, the querying user can adjust the trade-off between query efficiency and accuracy while keeping location privacy protected. The proposed adjustment mechanism is also compatible with existing spatial cloaking based location privacy preserving nearest neighbor query methods for road networks. Theoretical analysis and experimental results show that the proposed method protects location privacy while effectively improving server-side query efficiency, and supports the required preference trade-off between result accuracy and query efficiency.

  10. jQuery For Dummies

    CERN Document Server

    Beighley, Lynn

    2010-01-01

    Learn how jQuery can make your Web page or blog stand out from the crowd!. jQuery is free, open source software that allows you to extend and customize Joomla!, Drupal, AJAX, and WordPress via plug-ins. Assuming no previous programming experience, Lynn Beighley takes you through the basics of jQuery from the very start. You'll discover how the jQuery library separates itself from other JavaScript libraries through its ease of use, compactness, and friendliness if you're a beginner programmer. Written in the easy-to-understand style of the For Dummies brand, this book demonstrates how you can a

  11. Schedule Sales Query Raw Data

    Data.gov (United States)

    General Services Administration — Schedule Sales Query presents sales volume figures as reported to GSA by contractors. The reports are generated as quarterly reports for the current year and the...

  12. A Trusted Host's Authentication Access and Control Model Faced on User Action

    Institute of Scientific and Technical Information of China (English)

    ZHANG Miao; XU Guoai; HU Zhengming; YANG Yixian

    2006-01-01

    The conception of trusted network connection (TNC) is introduced, and the weakness of TNC in controlling user actions is analyzed. The paper then presents a secure access and control model based on access, authorization and control, together with a related authentication protocol. Finally, the security of this model is analyzed. The model can improve TNC's security with respect to user control and authorization.

  13. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    Science.gov (United States)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
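
    The modular assembly idea, swappable stage functions composed into one simulation, can be sketched as follows; the toy PET and one-bucket soil-moisture stages are invented for illustration and are far simpler than the options such a system would offer:

```python
# Minimal modular hydrologic model: each stage of the hydrological cycle
# is a swappable function, and the model is assembled as a pipeline over
# daily forcing data.

def simple_pet(temp_c):
    """Toy potential evapotranspiration: proportional to temperature."""
    return max(0.0, 0.5 * temp_c)

def bucket_soil(storage, precip, pet, capacity=100.0):
    """One-bucket soil moisture accounting; overflow becomes runoff."""
    storage = max(0.0, storage + precip - pet)
    runoff = max(0.0, storage - capacity)
    return min(storage, capacity), runoff

def run_model(forcing, pet_fn, soil_fn, storage=50.0):
    """Assemble the chosen stages into a daily simulation."""
    flows = []
    for precip, temp in forcing:
        storage, runoff = soil_fn(storage, precip, pet_fn(temp))
        flows.append(runoff)
    return flows

forcing = [(60.0, 4.0), (80.0, 2.0)]   # (precip mm, temperature C) per day
flows = run_model(forcing, simple_pet, bucket_soil)
print(flows)  # → [8.0, 79.0]
```

    Swapping in a different snow-melt or soil-accounting function changes only the argument passed to `run_model`, which is the essence of the modular design.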

  14. GPU Adaptive Hybrid OLAP Query Processing Model

    Institute of Scientific and Technical Information of China (English)

    张宇; 张延松; 陈红; 王珊

    2016-01-01

    The general purpose graphic computing units (GPGPUs) have become a new platform for high performance computing due to their massive parallel computing power, and in recent years more and more high performance database research has focused on GPU database development. However, today's GPU database research commonly inherits the ROLAP (relational OLAP) model and mainly addresses how to realize relational operators on the GPU platform and tune their performance, especially GPU-oriented parallel hash join algorithms. GPUs have higher parallel computing power than CPUs but less logical control and management capacity for complex data structures, and are therefore not well suited to directly migrating in-memory database query processing algorithms that rely on complex data structures and memory management. This paper proposes a GPU vectorized processing oriented hybrid OLAP model, semi-MOLAP, which combines the direct array access and array computing of MOLAP with the storage efficiency of ROLAP. The pure array oriented GPU semi-MOLAP model simplifies GPU data management, reduces the complexity of GPU semi-MOLAP algorithms and improves their code efficiency. Meanwhile, the semi-MOLAP operators are divided into co-computing operators on CPU and GPU platforms to improve the utilization of both CPUs and GPUs for higher query processing performance.

  15. A comparison of peer-to-peer query response modes

    CERN Document Server

    Hoschek, W

    2002-01-01

    In a large distributed system spanning many administrative domains such as a Grid (Foster et al., 2001), it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. However, in such a database system, the set of information tuples in the universe is partitioned over one or more distributed nodes, for reasons including autonomy, scalability, availability, performance and security. This suggests the use of peer-to-peer (P2P) query technology. A variety of query response modes can be used to return matching query results from P2P nodes to an originator. Although from the functional perspective all response modes are equivalent, no mode is optimal under all circumstances. Which query response modes allow suitable trade-offs to be expressed for a wide range of P2P applications? We answer this question by systematically describing and characterizing four query response modes for the unified peer-to-peer database framework (UPDF) proposed ...

  16. A Semantic Cache Framework for Secure XML Queries

    Institute of Scientific and Technical Information of China (English)

    Jian-Hua Feng; Guo-Liang Li; Na Ta

    2008-01-01

    Secure XML query answering to protect data privacy and semantic caching to speed up XML query answering are two hot spots in current research on XML database systems. While both issues have been explored in depth separately, they have not been studied together; that is, the problem of semantic caching for secure XML query answering has not been addressed yet. In this paper, we present a joint treatment of these two aspects and propose an efficient semantic cache framework for secure XML query answering, which can improve the performance of XML database systems under secure circumstances. Our framework combines access control, user privilege management over XML data and state-of-the-art semantic XML query cache techniques, to ensure that data are presented only to authorized users in an efficient way. To the best of our knowledge, the approach we propose here is among the first efforts to combine caching and security for XML databases to improve system performance. The efficiency of our framework is verified by comprehensive experiments.

  17. Secure Nearest Neighbor Query on Crowd-Sensing Data.

    Science.gov (United States)

    Cheng, Ke; Wang, Liangmin; Zhong, Hong

    2016-09-22

    Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor from an outsourced cloud server. However, the previous big data system structure has changed because of crowd-sensing data: on the one hand, the sensing data terminals acting as data owners are numerous and mutually distrustful, while, on the other hand, in most cases the terminals find it difficult to carry out many safety operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, which is constructed from protocols for secure two-party computation and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes.

  18. Secure Nearest Neighbor Query on Crowd-Sensing Data

    Directory of Open Access Journals (Sweden)

    Ke Cheng

    2016-09-01

    Full Text Available Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor from an outsourced cloud server. However, the traditional big data system structure has changed because of crowd-sensing data. On the one hand, the sensing terminals acting as data owners are numerous and mutually distrustful; on the other hand, in most cases the terminals find it difficult to perform many security operations due to computation and storage capability constraints. In light of the Multi Owners and Multi Users (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, constructed from secure two-party computation protocols and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes.

  19. User Demand Aware Grid Scheduling Model with Hierarchical Load Balancing

    Directory of Open Access Journals (Sweden)

    P. Suresh

    2013-01-01

    Full Text Available Grid computing is a collection of computational and data resources that supports both computation-intensive and data-intensive applications. To improve overall performance and utilize resources efficiently, an efficient load-balanced scheduling algorithm has to be implemented. The scheduling approach also needs to consider user demand to improve user satisfaction. This paper proposes a dynamic hierarchical load balancing approach that considers the load of each resource and performs load balancing accordingly. It minimizes the response time of jobs and improves resource utilization in a grid environment. By considering the user demand of the jobs, the scheduling algorithm also improves user satisfaction. The experimental results show the improvement achieved by the proposed load balancing method.
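The combination of user demand and load balancing can be sketched with a simple greedy scheduler: jobs are ordered by a user-demand priority, and each job goes to the currently least-loaded resource. This is a minimal illustration of the general approach, with invented job and resource representations; it is not the paper's hierarchical algorithm.

```python
import heapq

def schedule(jobs, resources):
    """Greedy load-balanced scheduling sketch.
    `jobs` is a list of (priority, runtime) pairs; higher priority means
    stronger user demand and is scheduled first. Each job is assigned to
    the resource with the smallest accumulated load."""
    heap = [(0.0, r) for r in resources]       # (accumulated load, resource id)
    heapq.heapify(heap)
    plan = {r: [] for r in resources}
    for prio, runtime in sorted(jobs, key=lambda j: -j[0]):
        load, r = heapq.heappop(heap)          # least-loaded resource so far
        plan[r].append((prio, runtime))
        heapq.heappush(heap, (load + runtime, r))
    return plan
```

In a hierarchical scheme, the same idea would be applied at each level (e.g. among clusters, then among nodes within the chosen cluster), keeping load information local to each level.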

  20. A user-oriented model for global enterprise portal design

    NARCIS (Netherlands)

    Feng, X.; Ehrenhard, M.L.; Hicks, J.N.; Maathuis, S.J.; Hou, Y.

    2010-01-01

    Enterprise portals collect and synthesise information from various systems to deliver personalised and highly relevant information to users. Enterprise portals' design and applications are widely discussed in the literature; however, the implications of portal design in a global networked environment