WorldWideScience

Sample records for modeling information based

  1. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modelling, design and specification of information systems, and multimedia information modelling…

  2. Concept Tree Based Information Retrieval Model

    Directory of Open Access Journals (Sweden)

    Chunyan Yuan

    2014-05-01

Full Text Available This paper proposes a novel concept-based query expansion technique named the Markov concept tree model (MCTM), which discovers term relationships through a concept tree deduced from a term Markov network. We address two important issues in query expansion: the selection and the weighting of expansion terms. In contrast to earlier methods, queries are expanded by adding terms that are most similar to the concept of the whole query, rather than terms similar to a single query term. Using a Markov network constructed from term co-occurrence information in the collection, the method generates a concept tree for each original query term, removes redundant and irrelevant nodes from the tree, and then adjusts the weights of the original query terms and the expansion terms with a pruning algorithm. We use this model for query expansion and evaluate its effectiveness by examining the accuracy and robustness of the expansion methods. Compared with the baseline model, experiments on a standard dataset reveal that this method achieves better query quality.
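As an illustrative sketch of the co-occurrence idea behind this abstract (not the paper's actual MCTM algorithm, and using an invented toy corpus), expansion candidates for a query term can be ranked by how often they share a document with it:

```python
from collections import Counter
from itertools import combinations

# Toy corpus (invented); each document is a bag of terms.
DOCS = [
    ["markov", "network", "query", "expansion"],
    ["query", "expansion", "term", "weighting"],
    ["markov", "network", "term", "cooccurrence"],
    ["concept", "tree", "query", "term"],
]

def cooccurrence_counts(docs):
    """Count how often each unordered term pair shares a document."""
    pairs = Counter()
    for doc in docs:
        for a, b in combinations(sorted(set(doc)), 2):
            pairs[(a, b)] += 1
    return pairs

def expansion_candidates(query_term, docs, k=3):
    """Rank candidate expansion terms by co-occurrence with the query term."""
    scores = Counter()
    for (a, b), n in cooccurrence_counts(docs).items():
        if query_term in (a, b):
            other = b if a == query_term else a
            scores[other] += n
    return [t for t, _ in scores.most_common(k)]

print(expansion_candidates("query", DOCS))
```

The real MCTM additionally prunes the concept tree and reweights terms; this sketch only shows the co-occurrence ranking step.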

  3. An XML-based information model for archaeological pottery

    Institute of Scientific and Technical Information of China (English)

    LIU De-zhi; RAZDAN Anshuman; SIMON Arleyn; BAE Myungsoo

    2005-01-01

An information model is defined to support sharing scientific information on the Web for archaeological pottery. Apart from non-shape information, such as age and material, the model also contains shape information and shape feature information. Shape information is collected by laser scanners and geometric modelling techniques. Feature information is generated from shape information via feature extraction techniques. The model is used in an integrated storage, archival, and sketch-based query and retrieval system for 3D objects, namely Native American ceramic vessels. A novel aspect of the information model is that it is implemented entirely in XML and is designed for Web-based visual query and storage applications.

  4. An information theory-based approach to modeling the information processing of NPP operators

    Energy Technology Data Exchange (ETDEWEB)

Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute, Taejon (Korea, Republic of)]

    2002-08-01

This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model of the information processing of NPP operators and ii) quantifying that model. To resolve the problems of previous information-theoretic approaches, i.e. the limitations of single-channel approaches, we first develop an information processing model with multiple stages containing information flows. The uncertainty of the information is then quantified using Conant's model, a form of information theory.

  5. Building information modeling based on intelligent parametric technology

    Institute of Scientific and Technical Information of China (English)

    ZENG Xudong; TAN Jie

    2007-01-01

In order to advance the organization of information in the building industry, promote sustainable architectural design and enhance the competitiveness of China's building industry, the author studies building information modeling (BIM) based on intelligent parametric modeling technology. Building information modeling is a new technology in the field of computer-aided architectural design, which contains not only geometric data, but also a great amount of engineering data covering the lifecycle of a building. The author also compares BIM technology with two-dimensional CAD technology, and demonstrates the advantages and characteristics of intelligent parametric modeling technology. Building information modeling, which is based on intelligent parametric modeling technology, will certainly replace traditional computer-aided architectural design and become the new driving force pushing forward China's building industry in this information age.

  6. Food Security Information Platform Model Based on Internet of Things

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2015-06-01

Full Text Available To meet the tracking and tracing requirements of food supply chain management and of quality and safety, this study built a food security information platform using Internet of Things technology. With reference to the EPC standard, using RFID technology, adopting an SOA model, and building on the SCOR core processes, the platform establishes traceability information for the whole process from source to consumption, provides food information, strengthens food identity verification, and prevents misidentification of food for consumers and government food safety regulators, providing good practices for food safety traceability.

  7. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic model evaluation and selection.

  8. Ranking streamflow model performance based on Information theory metrics

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
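The quantile symbolization and the mean information gain metric described above can be sketched as follows; the alphabet size and the test signals are illustrative assumptions, not the paper's data:

```python
import math
from collections import Counter

import numpy as np

def symbolize(series, n_symbols=4):
    """Map each value to its quantile-bin index, turning the series into symbols."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return list(np.digitize(series, edges))

def block_entropy(symbols, L):
    """Shannon entropy (bits) of overlapping blocks of length L."""
    blocks = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
    total = sum(blocks.values())
    return -sum(n / total * math.log2(n / total) for n in blocks.values())

def mean_information_gain(symbols):
    """H(pair) - H(single): average new information per step in the signal."""
    return block_entropy(symbols, 2) - block_entropy(symbols, 1)

# A perfectly periodic signal gains almost no new information per step,
# while a random signal gains nearly log2(alphabet size) bits per step.
periodic = symbolize([0, 1, 2, 3] * 50)
rng = np.random.default_rng(0)
noisy = symbolize(rng.random(200))
print(mean_information_gain(periodic), mean_information_gain(noisy))
```

The paper's other two metrics (effective measure complexity and fluctuation complexity) are further functionals of the same block statistics.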

  9. GRAMMAR RULE BASED INFORMATION RETRIEVAL MODEL FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Nadana Ravishankar

    2015-07-01

Full Text Available Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information that is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of the native languages, such as morphology, compound word formation, word spelling variations, ambiguity, word synonymy, and the influence of other languages. To overcome some of these issues, the native language is modeled using a grammar rule-based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base, coupled with a customized ontological system, shows considerable potential and is found to give better precision and recall.

  10. A Modeling Framework for Gossip-based Information Spread

    CERN Document Server

    Bakhshi, Rena; Fokkink, Wan; van Steen, Maarten

    2011-01-01

We present an analytical framework for gossip protocols based on the pairwise information exchange between interacting nodes. This framework allows for studying the impact of protocol parameters on the performance of the protocol. Previously, gossip-based information dissemination protocols have been analyzed under the assumption of perfect, lossless communication channels. We extend our framework for the analysis of networks with lossy channels. We show how the presence of message loss, coupled with specific topology configurations, impacts the expected behavior of the protocol. We validate the obtained models against simulations for two protocols.

  11. A Novel Fuzzy Document Based Information Retrieval Model for Forecasting

    Directory of Open Access Journals (Sweden)

    Partha Roy

    2017-06-01

Full Text Available Information retrieval systems are generally used to find documents that are most appropriate according to some query that comes dynamically from users. In this paper a novel Fuzzy Document based Information Retrieval Model (FDIRM) is proposed for the purpose of stock market index forecasting. The novelty of the proposed approach is a modified tf-idf scoring scheme to predict the future trend of the stock market index. The contribution of this paper has two dimensions: 1) in the proposed system the simple time series is converted to an enriched fuzzy linguistic time series with a unique approach of incorporating market sentiment-related information along with the price, and 2) a unique approach is followed while modeling the information retrieval (IR) system, which converts a simple IR system into a forecasting system. From the performance comparison of FDIRM with standard benchmark models, it can be affirmed that the proposed model has the potential of becoming a good forecasting model. The stock market data provided by Standard & Poor's CRISIL NSE Index 50 (CNX NIFTY-50) of the National Stock Exchange of India (NSE) is used to experiment with and validate the proposed model. The authentic data for validation and experimentation is obtained from http://www.nseindia.com, the official website of NSE. A Java program is under construction to implement the model in real time with a graphical user interface.
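The paper's modified tf-idf scheme is not specified in the abstract; as a baseline sketch only, standard tf-idf scoring over an invented toy corpus of fuzzy-linguistic "documents" looks like this:

```python
import math
from collections import Counter

# Invented fuzzy-linguistic documents; the paper's market-sentiment
# enrichment and modified weighting are not reproduced here.
DOCS = {
    "d1": ["rise", "high", "positive", "rise"],
    "d2": ["fall", "low", "negative"],
    "d3": ["rise", "stable", "neutral"],
}

def tf_idf_score(query, docs):
    """Standard tf-idf relevance of each document to the query terms."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(t for words in docs.values() for t in set(words))
    scores = {}
    for name, words in docs.items():
        tf = Counter(words)
        scores[name] = sum(
            tf[t] / len(words) * math.log(n / df[t])
            for t in query if df[t]
        )
    return scores

print(tf_idf_score(["rise", "positive"], DOCS))
```

In the FDIRM setting, the "query" would be a recent fuzzy-linguistic pattern and the best-matching historical document would suggest the forecast.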

  12. Closed Loop Brain Model of Neocortical Information Based Exchange

    Directory of Open Access Journals (Sweden)

James Kozloski

    2016-01-01

Full Text Available Here we describe an 'information based exchange' model of brain function that ascribes distinct network functions to neocortex, basal ganglia, and thalamus. The model allows us to analyze whole-brain system set point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel Grand Loop through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principal cell synapses to achieve these transitions. By implementing a system for maximum information based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting state activity, and that different modifications to information based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components towards ever greater complexity and biological realism.

  13. Information system based on the mathematical model of the EPS

    Science.gov (United States)

    Kalimoldayev, Maksat N.; Abdildayeva, Assel A.; Mamyrbayev, Orken Zh.; Akhmetzhanov, Maksat

    2016-11-01

This article discusses the structure of an information system and the mathematical and information models of electric power systems. Currently, the major application areas include system relaying, data communication systems and automation, automated dispatching and technological management of electric power facilities, as well as computer-aided calculation of energy resources. Automatic control of excitation (ARV) of synchronous machines is one of the most effective ways to ensure the stability of power systems. However, the variety of possible options and modes even in a single grid poses significant obstacles to the development of the best means of ensuring stability. Thus, the use of ARVs to ensure stability may in some cases be insufficient. Therefore, there is a need to develop an information system based on a mathematical model.

  14. A Spread Willingness Computing-Based Information Dissemination Model

    Directory of Open Access Journals (Sweden)

    Haojing Huang

    2014-01-01

Full Text Available This paper constructs an information dissemination model for social networks based on spread willingness computing. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes dynamical evolution equations. The equations characterize how the different types of nodes evolve over time. The spread willingness computation contains three factors that influence a user's spreading behavior: the strength of the relationship between nodes, agreement of views, and frequency of contact. Simulation results show that nodes of different degrees follow the same trend in the network, and that even if the degree of a node is very small, a large area of information dissemination is still possible. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with the information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. These findings match social networking features and can help to characterize user behavior and to understand and analyze information dissemination in social networks.
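A minimal SIR-style simulation in the spirit of the model above might look like this; the single `willingness` probability is an assumption standing in for the paper's three-factor computation, and the ring topology is purely illustrative:

```python
import random

# S: has not seen the message, I: spreading it, R: immune/stopped spreading.
def simulate(n=200, neighbors=4, willingness=0.3, immune_prob=0.1,
             steps=50, seed=1):
    """Ring-lattice dissemination: each spreader passes the message to
    susceptible neighbors with probability `willingness`, then becomes
    immune with probability `immune_prob`. Returns how many nodes the
    message reached."""
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"                       # a single initial spreader
    for _ in range(steps):
        new_state = state[:]
        for i, s in enumerate(state):
            if s != "I":
                continue
            for d in range(1, neighbors // 2 + 1):
                for j in ((i + d) % n, (i - d) % n):
                    if state[j] == "S" and rng.random() < willingness:
                        new_state[j] = "I"
            if rng.random() < immune_prob:
                new_state[i] = "R"
        state = new_state
    return state.count("I") + state.count("R")

# Higher spread willingness should reach a larger share of the network.
print(simulate(willingness=0.05), simulate(willingness=0.9))
```

The paper works with mean-field evolution equations rather than an agent simulation, but the qualitative dependence on willingness and immunity is the same.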

  15. Patent portfolio analysis model based on legal status information

    Institute of Scientific and Technical Information of China (English)

Xuezhao WANG; Yajuan ZHAO; Jing ZHANG; Ping ZHAO

    2014-01-01

Purpose: This research proposes a patent portfolio analysis model based on legal status information to chart out a competitive landscape in a particular field, enabling organizations to position themselves within the overall technology landscape. Design/methodology/approach: Three indicators were selected for the proposed model: patent grant rate, valid patent rate and patent maintenance period. The model uses legal status information to perform a qualitative evaluation of the relative values of individual patents, of countries' or regions' technological capabilities, and of the competitiveness of patent applicants. The results are visualized in a four-quadrant bubble chart. To test the effectiveness of the model, it is used to present a competitive landscape in the lithium ion battery field. Findings: The model can be used to evaluate the values of individual patents, highlight countries' or regions' positions in the field, and rank the competitiveness of patent applicants in the field. Research limitations: The model currently takes into consideration only three legal status indicators. It is feasible to introduce more indicators, such as the reason for invalid patents and the distribution of patent maintenance time, and associate them with those in the proposed model. Practical implications: Analysis of legal status information in combination with patent application information can help an organization spot gaps in its patent claim coverage, as well as evaluate the patent quality and maintenance situation of its granted patents. The study results can be used to support technology assessment, technology innovation and intellectual property management. Originality/value: Prior studies attempted to assess patent quality or competitiveness by using either a single patent legal status indicator or comparative analysis of the impacts of each indicator. However, they are insufficient in presenting the combined effects of the evaluation indicators. Using our model, it appears possible to get a…

  16. Construction project investment control model based on instant information

    Institute of Scientific and Technical Information of China (English)

    WANG Xue-tong

    2006-01-01

Changes in construction conditions often influence project investment by causing loss of construction work time and extending the duration. To resolve the difficulty of dynamic control in construction planning, this article presents a concept of instant optimization that adjusts the operation time of each working procedure so as to minimize investment change. Based on this concept, a mathematical model is established and a strict mathematical justification is performed. The instant optimization model takes advantage of instant information in the construction process to complete adjustment of construction in due time, thus maximizing the cost efficiency of project investment.

  17. Information fusion via isocortex-based Area 37 modeling

    Science.gov (United States)

    Peterson, James K.

    2004-08-01

A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion then occurs in Area 37 of the temporal cortex. The creation of meta-concepts from the low-order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual) and temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed from five cortical stacks, where each stack consists of three vertically oriented six-layer isocortex models. The input-to-output training of each cortical model uses the OCOS (on-center, off-surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1), which is inherently a recurrent-network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models, as it is difficult to divorce the sensory processing subsystems from the higher-level processing in the associative cortex. The overall software architecture presented is biologically based and is offered as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine-art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system that utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.

  18. An Integrated Model of Information Literacy, Based upon Domain Learning

    Science.gov (United States)

    Thompson, Gary B.; Lathey, Johnathan W.

    2013-01-01

    Introduction. Grounded in Alexander's model of domain learning, this study presents an integrated micro-model of information literacy. It is predicated upon the central importance of domain learning for the development of the requisite research skills by students. Method. The authors reviewed previous models of information literacy and…

  19. Learning-based saliency model with depth information.

    Science.gov (United States)

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  20. Research on Modeling of Genetic Networks Based on Information Measurement

    Institute of Scientific and Technical Information of China (English)

    ZHANG Guo-wei; SHAO Shi-huang; ZHANG Ying; LI Hai-ying

    2006-01-01

As a basis of networks in biological organisms, the genetic network has drawn the attention of many researchers. Current modeling methods for genetic networks, especially the Boolean network modeling method, are analyzed. For modeling the genetic network, information theory is proposed for mining the relations between elements in the network. By calculating the values of information entropy and mutual entropy in a case study, the effectiveness of the method is verified.
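The entropy and mutual-entropy calculation mentioned above can be sketched directly; the binarized expression profiles below are invented toy data:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) over paired observations."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Invented binarized expression profiles of three genes across 8 samples.
gene_a = [0, 0, 1, 1, 0, 1, 1, 0]
gene_b = [0, 0, 1, 1, 0, 1, 1, 0]   # identical to gene_a: maximal dependence
gene_c = [0, 1, 0, 1, 0, 1, 0, 1]   # statistically independent of gene_a

print(mutual_information(gene_a, gene_b), mutual_information(gene_a, gene_c))
```

A high mutual entropy between two genes' profiles suggests a candidate regulatory relation; zero suggests independence.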

  1. Food Information System Construction Based on DEA Model

    Directory of Open Access Journals (Sweden)

    AoTian Peng

    2015-03-01

Full Text Available This study improves the traditional DEA model so that it reflects subjective preference (a preference-sequence DEA model), proposes a method that uses the average cross-comparison rate among effective units to resolve the ranking dilemma, and cites a case for demonstration. Both at home and abroad, the implementation of evaluation systems for food information system construction is at a low level; one reason is that the evaluation system for food information systems lags behind and remains imperfect.

  2. Agent-based model of information spread in social networks

    CERN Document Server

    Lande, D V; Berezin, B O

    2016-01-01

We propose evolution rules for a multiagent network and determine statistical patterns in the life cycle of agents (information messages). The main statistical pattern discussed concerns the number of likes and reposts for a message; according to the modeling results, this distribution corresponds to a Weibull distribution. We examine the proposed model using data from Twitter, an online social networking service.

  3. Multi-Information Model for PCB-Based Electronics Product Manufacturing

    Institute of Scientific and Technical Information of China (English)

    李春泉; 周德俭; 余涛

    2004-01-01

Most electronics products use PCBs to carry electronic circuits. This paper classifies the information contained in PCB-based electronic circuits into several models: a geometry model, a physics model, a performance model and a function model. Based on this classification, a multi-information model of the product is established. A composite model of the product is also created based on object orientation and the characteristics of the product. The model includes, from top to bottom, a 3D geometry model, a physics model with integrated information that can be divided into microscopic and macroscopic information, a generalized performance model and a function model. Finally, a multi-unit analysis is briefly discussed.

  4. Topic Information Collection based on the Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Hai-yan Jiang

    2013-02-01

Full Text Available Specific-subject-oriented information collection is one of the key technologies of vertical search engines, directly affecting the speed and relevance of search results. The topic information collection algorithm is widely used for its accuracy. The Hidden Markov Model (HMM) is used to learn and judge the relevance between a Uniform Resource Locator (URL) and the topic information. The Rocchio method is used to construct prototype vectors relevant to the topic information, and the HMM is used to learn preferred browsing paths. Concept maps encoding the semantics of webpages are constructed, from which the web's link structure can be determined. Finally, experiments demonstrate the validity of the algorithm: compared with the Best-First algorithm, this algorithm retrieves more relevant pages and achieves a higher precision ratio.
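As a toy sketch of using an HMM to judge the topical relevance of a browsing path (the states, tokens, and all probabilities below are illustrative assumptions, not learned from crawl data as in the paper), the forward algorithm yields a posterior topic score:

```python
# Two hidden states emit URL/anchor tokens; the forward algorithm then gives
# the posterior probability that the path currently sits in the "topic" state.
STATES = ("topic", "offtopic")
START = {"topic": 0.5, "offtopic": 0.5}
TRANS = {
    "topic": {"topic": 0.8, "offtopic": 0.2},
    "offtopic": {"topic": 0.3, "offtopic": 0.7},
}
EMIT = {
    "topic": {"health": 0.5, "news": 0.3, "sport": 0.2},
    "offtopic": {"health": 0.1, "news": 0.3, "sport": 0.6},
}

def topic_posterior(tokens):
    """Forward algorithm; returns P(last state = 'topic' | token sequence)."""
    alpha = {s: START[s] * EMIT[s][tokens[0]] for s in STATES}
    for tok in tokens[1:]:
        alpha = {
            s: EMIT[s][tok] * sum(alpha[p] * TRANS[p][s] for p in STATES)
            for s in STATES
        }
    return alpha["topic"] / sum(alpha.values())

print(topic_posterior(["health", "news", "health"]))
print(topic_posterior(["sport", "sport", "sport"]))
```

A crawler could use such a posterior as the priority of a candidate URL in its frontier queue.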

  5. Implementing a business improvement model based on integrated plant information

    Directory of Open Access Journals (Sweden)

    Swanepoel, Hendrika Francina

    2016-11-01

Full Text Available The World Energy Council defines numerous challenges in the global energy arena that put pressure on owners and operators to run existing plant better and more efficiently. As such, there is an increasing focus on the use of business and technical plant information and data to make better, more integrated, and more informed decisions on the plant. This research study developed a business improvement model (BIM) that can be used to establish an integrated plant information management infrastructure as the core foundation for business improvement initiatives. Operational research then demonstrated how this BIM approach could be successfully implemented to improve business operations and provide decision-making insight.

  6. Investigation of network-based information system model

    Energy Technology Data Exchange (ETDEWEB)

    Konrad, A.M.; Perez, M.; Rivera, J.; Rodriguez, Y.; Durst, M.J.; Merrill, D.W.; Holmes, H.H.

    1996-09-01

The objective of the DOE-LBNL summer student research program in computer and information sciences was to investigate database-based, HTTP-based information architectures and to implement a prototype using DOE's Comprehensive Epidemiologic Data Resource (CEDR) metadata and Epidemiology Guide content. We were successful in identifying the components of such an information system and an appropriate configuration given the requirements, and in implementing a prototype. This work comprised investigation of various information system architectures and variants; evaluation and selection of various tools, products, and packages; and preparation of databases, database content, output formats, and graphical (World Wide Web-compatible) interfaces. We successfully prepared and demonstrated network access to content from both the CEDR structured documentation and the DOE Epidemiology Guides (site archive records).

  7. Clinic expert information extraction based on domain model and block importance model.

    Science.gov (United States)

    Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng

    2015-11-01

To extract expert clinic information from the Deep Web, two challenges must be faced. The first is to make a judgment on forms; for this, a novel method is proposed based on a domain model, a tree structure constructed from the attributes of query interfaces. With this model, query interfaces can be classified to a domain and filled in with domain keywords. The second challenge is to extract information from the response Web pages indexed by query interfaces. To filter out noisy information on a Web page, a block importance model is proposed, in which both content and spatial features are taken into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method.

  8. Information Sharing In Shipbuilding based on the Product State Model

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

The paper provides a review of product modelling technologies and the overall architecture of the Product State Model (PSM) environment as a basis for how dynamically updated product data can improve control of production activities. In particular, the paper focuses on the circumstances prevailing...... in a one-of-a-kind manufacturing environment like the shipbuilding industry, where product modelling technologies have already proved their worth in the design and engineering phases of shipbuilding and in the operation phase. However, the handling of product information on the shop floor is not yet…

  9. Schelling model of cell segregation based only on local information

    Science.gov (United States)

    Nielsen, Alexander Valentin; Gade, Annika Lund; Juul, Jeppe; Strandkvist, Charlotte

    2015-11-01

    While biological studies suggest that motility of cells is involved in cell segregation, few computational models have investigated this mechanism. We apply a simple Schelling model, modified to reflect biological conditions, demonstrating how differences in cell motility arising exclusively from differences in the composition of the local environment can be sufficient to drive segregation. The work presented here demonstrates that the segregation behavior observed in the original Schelling model is robust to a relaxation of the requirement for global information and that the Schelling model may yield insight in the context of biological systems. In the model, the time course of cell segregation follows a power law in accord with experimental observations and previous work.
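A minimal version of such a local-information Schelling model can be sketched as follows; the swap-with-a-random-adjacent-cell move is an assumption standing in for the paper's motility rule, and the grid size and threshold are illustrative:

```python
import random

def make_grid(n=10, seed=0):
    """Random two-type cell population on an n-by-n torus."""
    rng = random.Random(seed)
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

def same_neighbor_fraction(grid):
    """Mean fraction of Moore neighbors sharing a cell's type (segregation index)."""
    n = len(grid)
    total = 0.0
    for i in range(n):
        for j in range(n):
            nb = [grid[(i + di) % n][(j + dj) % n]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            total += sum(c == grid[i][j] for c in nb) / 8
    return total / n ** 2

def sweep(grid, rng, threshold=0.5):
    """One pass: each cell below the local same-type threshold swaps with a
    random adjacent cell. Only local information is ever consulted."""
    n = len(grid)
    for i in range(n):
        for j in range(n):
            nb = [grid[(i + di) % n][(j + dj) % n]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            if sum(c == grid[i][j] for c in nb) / 8 < threshold:
                di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                i2, j2 = (i + di) % n, (j + dj) % n
                grid[i][j], grid[i2][j2] = grid[i2][j2], grid[i][j]

grid = make_grid()
before = same_neighbor_fraction(grid)
rng = random.Random(1)
for _ in range(30):
    sweep(grid, rng)
after = same_neighbor_fraction(grid)
print(before, after)
```

Because cells only swap with neighbors, the population counts are conserved and no agent ever needs global information about vacancies, matching the paper's relaxation of the original Schelling model.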

  10. The Concept of Data Model Pattern Based on Fully Communication Oriented Information Modeling (FCO-IM)

    Directory of Open Access Journals (Sweden)

    Fazat Nur Azizah

    2010-04-01

    Full Text Available Just as in many areas of software engineering, patterns have been used in data modeling to create high-quality data models. We provide a concept of data model pattern based on Fully Communication Oriented Information Modeling (FCO-IM), a fact-oriented data modeling method. A data model pattern is defined as the relation between context, problem, and solution; this definition is adopted from Christopher Alexander's concept of pattern. We define the concept of Information Grammar for Pattern (IGP) in the solution part of a pattern, which works as a template to create a data model. The IGP also shows how a pattern can relate to other patterns. The data model pattern concept is then used to describe 15 data model patterns, organized into 4 categories. A case study on geographical location is provided to show the use of the concept in a real case.

  11. Avian Information Systems: Developing Web-Based Bird Avoidance Models

    Directory of Open Access Journals (Sweden)

    Judy Shamoun-Baranes

    2008-12-01

    Full Text Available Collisions between aircraft and birds, so-called "bird strikes," can result in serious damage to aircraft and even in the loss of lives. Information about the distribution of birds in the air and on the ground can be used to reduce the risk of bird strikes and their impact on operations en route and in and around airfields. Although a wealth of bird distribution and density data is collected by numerous organizations, these data are neither readily available nor interpretable by aviation. This paper presents two national efforts, one in the Netherlands and one in the United States, to develop bird avoidance models for aviation. These models integrate data and expert knowledge on bird distributions and migratory behavior to provide hazard maps in the form of GIS-enabled Web services. Both models are in operational use for flight planning and flight alteration and for airfield and airfield vicinity management. These models and their presentation on the Internet are examples of the type of service that would be very useful in other fields interested in species distribution and movement information, such as conservation, disease transmission and prevention, or assessment and mitigation of anthropogenic risks to nature. We expect that developments in cyber-technology, a transition toward an open-source philosophy, and higher demand for accessible biological data will result in an increase in the number of biological information systems available on the Internet.

  12. Research on Assessment Model of Information System Security Based on Various Security Factors

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    With the rapid development of network technology, the meanings of the layers and attributes of information system security must be extended, based on an understanding of the concept of information system security. The layering model (LM) of information system security and the five-attribute model (FAM) based on security factors were put forward to improve the description and modeling of the information system security framework. An effective framework system of risk calculation and assessment, based on FAM, was proposed.

  13. Models of a Distributed Information Retrieval System Based on Thesauri with Weights.

    Science.gov (United States)

    Mazur, Zygmunt

    1994-01-01

    Discusses distributed information retrieval systems that take into account the weights of descriptors from thesauri. Topics addressed include a mathematical model for information retrieval subsystems; organization of inverted files; models for the distributed homogeneous information systems; a distributed information retrieval system based on…

  14. Model based climate information on drought risk in Africa

    Science.gov (United States)

    Calmanti, S.; Syroka, J.; Jones, C.; Carfagna, F.; Dell'Aquila, A.; Hoefsloot, P.; Kaffaf, S.; Nikulin, G.

    2012-04-01

    The United Nations World Food Programme (WFP) has embarked upon the endeavor of creating a sustainable Africa-wide natural disaster risk management system. A fundamental building block of this initiative is the setup of a drought impact modeling platform called Africa Risk-View, which aims to quantify and monitor weather-related food security risk in Africa. The modeling approach is based on the Water Requirement Satisfaction Index (WRSI) as the fundamental indicator of the performance of agriculture, and uses historical records of food assistance operations to project future potential needs for livelihood protection. By using climate change scenarios as an input to Africa Risk-View it is possible, in principle, to evaluate the future impact of climate variability on critical issues such as food security and the overall performance of the envisaged risk management system. A necessary preliminary step to this challenging task is the exploration of the sources of uncertainty affecting assessments based on modeled climate change scenarios. For this purpose, a limited set of climate models was selected in order to verify the relevance of using climate model output data with Africa Risk-View and to explore a minimal range of possible sources of uncertainty. This first evaluation exercise started before the setup of the CORDEX framework and relied on model output available at the time; in particular, only one regional downscaling was available for the entire African continent, from the ENSEMBLES project. The analysis shows that current coarse-resolution global climate models cannot directly feed into the Africa Risk-View risk-analysis tool. However, regional downscaling may help correct the inherent biases observed in the datasets. Further analysis is performed using the first data available under the CORDEX framework; in particular, we consider a set of simulations driven by boundary conditions from the ERA-Interim reanalysis to evaluate the skill in simulating drought.
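The WRSI at the core of Africa Risk-View can be sketched in its simplest bucket-model form: the season index is 100 minus the cumulative water deficit expressed as a percentage of the total crop water requirement. The soil-moisture bookkeeping and the numbers below are illustrative assumptions, not the operational FAO/WFP implementation.

```python
# Hedged sketch of the Water Requirement Satisfaction Index (WRSI) idea.
# Soil moisture is reduced to a single bucket with an assumed capacity.

def wrsi(rainfall, requirement, soil_capacity=100.0):
    """rainfall, requirement: per-dekad lists (mm). Returns WRSI in [0, 100]."""
    soil = 0.0
    deficit = 0.0
    for rain, req in zip(rainfall, requirement):
        available = rain + soil
        if available >= req:
            soil = min(available - req, soil_capacity)  # carry surplus forward
        else:
            deficit += req - available                  # unmet crop water need
            soil = 0.0
    total_req = sum(requirement)
    return max(0.0, 100.0 * (1.0 - deficit / total_req))

assert wrsi([50, 50, 50], [40, 40, 40]) == 100.0   # no water stress
assert wrsi([10, 10, 10], [40, 40, 40]) < 100.0    # deficit season
```

Feeding such an index with rainfall from downscaled climate scenarios rather than observations is exactly where the bias issues discussed above enter: a systematic rainfall bias propagates directly into the deficit term.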

  15. Renewal of the base model for geographic information of the Netherlands

    NARCIS (Netherlands)

    Quak, C.W.; Janssen, P.; Reuvers, M.

    2009-01-01

    In 2005 a base model for geographic information in the Netherlands was published, named NEN 3610. The model consists of a modeling framework (based on the ISO 19100 series) and a collection of extensible base classes. In the following years many organizations have built their models in conformance to the…

  16. A new model of information behaviour based on the Search Situation Transition schema

    Directory of Open Access Journals (Sweden)

    Nils Pharo

    2004-01-01

    Full Text Available This paper presents a conceptual model of information behaviour. The model is part of the Search Situation Transition method schema. The method schema is developed to discover and analyse interplay between phenomena traditionally analysed as factors influencing either information retrieval or information seeking. In this paper the focus is on the model's five main categories: the work task, the searcher, the social/organisational environment, the search task, and the search process. In particular, the search process and its sub-categories search situation and transition and the relationship between these are discussed. To justify the method schema an empirical study was designed according to the schema's specifications. In the paper a subset of the study is presented analysing the effects of work tasks on Web information searching. Findings from this small-scale study indicate a strong relationship between the work task goal and the level of relevance used for judging resources during search processes.

  17. An Abstraction-Based Data Model for Information Retrieval

    Science.gov (United States)

    McAllister, Richard A.; Angryk, Rafal A.

    Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entrée to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.

  18. Model of informational system for freight insurance automation based on digital signature

    Directory of Open Access Journals (Sweden)

    Maxim E. SLOBODYANYUK

    2009-01-01

    Full Text Available The article considers a model of an information system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of the information flow in the model, and its components (modules) and their functions. A method for calculating the costs of interactive cargo insurance via the proposed system is described, and the main characteristics and options of existing transport management systems and conceptual cost models are presented.

  19. Landscape Epidemiology Modeling Using an Agent-Based Model and a Geographic Information System

    Directory of Open Access Journals (Sweden)

    S. M. Niaz Arifin

    2015-05-01

    Full Text Available A landscape epidemiology modeling framework is presented which integrates the simulation outputs from an established spatial agent-based model (ABM) of malaria with a geographic information system (GIS). For a study area in Kenya, five landscape scenarios are constructed with varying coverage levels of two mosquito-control interventions. For each scenario, maps are presented to show the average distributions of three output indices obtained from the results of 750 simulation runs. Hot spot analysis is performed to detect statistically significant hot spots and cold spots. Additional spatial analysis is conducted using ordinary kriging with circular semivariograms for all scenarios. The integration of epidemiological simulation-based results with spatial analyses techniques within a single modeling framework can be a valuable tool for conducting a variety of disease control activities such as exploring new biological insights, monitoring epidemiological landscape changes, and guiding resource allocation for further investigation.
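Hot spot analysis of the kind described is commonly based on the Getis-Ord Gi* statistic. The following sketches that statistic on a toy raster of simulation outputs, assuming binary queen-contiguity weights that include the focal cell; the paper's exact weighting scheme is not specified here.

```python
import numpy as np

# Getis-Ord Gi* z-score surface for a raster of values.
# |z| > 1.96 marks statistically significant hot/cold spots at ~5%.

def gi_star(values):
    v = np.asarray(values, dtype=float)
    n = v.size
    xbar = v.mean()
    s = np.sqrt((v ** 2).mean() - xbar ** 2)     # population std. deviation
    z = np.empty_like(v)
    rows, cols = v.shape
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(i - 1, 0), min(i + 2, rows)
            c0, c1 = max(j - 1, 0), min(j + 2, cols)
            window = v[r0:r1, c0:c1]             # 3x3 neighborhood incl. self
            sw = window.size                     # sum of binary weights
            num = window.sum() - xbar * sw
            den = s * np.sqrt((n * sw - sw ** 2) / (n - 1))
            z[i, j] = num / den
    return z

# A raised 3x3 cluster on a flat surface yields a positive Gi* peak there.
surface = np.zeros((10, 10))
surface[4:7, 4:7] = 5.0
z = gi_star(surface)
assert z[5, 5] == z.max() and z[5, 5] > 1.96
```

Production analyses would typically use a spatial statistics library rather than a hand-rolled loop; PySAL's `esda` package, for instance, provides a local Getis-Ord implementation.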

  20. Quantum-like model of processing of information in the brain based on classical electromagnetic field

    CERN Document Server

    Khrennikov, Andrei

    2010-01-01

    We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" that reduce mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on the representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept, given in our model by a density operator, can generate a var...

  1. Avian information systems: Developing web-based bird avoidance models

    NARCIS (Netherlands)

    Shamoun-Baranes, J.; Bouten, W.; Buurma, L.; DeFusco, R.; Dekker, A.; Sierdsema, H.; Sluiter, F.; van Belle, J.; van Gasteren, H.; van Loon, E.

    2008-01-01

    Collisions between aircraft and birds, so-called "bird strikes," can result in serious damage to aircraft and even in the loss of lives. Information about the distribution of birds in the air and on the ground can be used to reduce the risk of bird strikes and their impact on operations en route and

  2. Temporal Expectation and Information Processing: A Model-Based Analysis

    Science.gov (United States)

    Jepma, Marieke; Wagenmakers, Eric-Jan; Nieuwenhuis, Sander

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information processing are speeded when subjects use such…

  3. Information Sharing In Shipbuilding based on the Product State Model

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    The paper provides a review of product modelling technologies and the overall architecture for the Product State Model (PSM) environment as a basis for how dynamically updated product data can improve control of production activities. Especially, the paper focuses on the circumstances prevailing...

  4. Research on information models for the construction schedule management based on the IFC standard

    Directory of Open Access Journals (Sweden)

    Weirui Xue

    2015-05-01

    Full Text Available Purpose: The purpose of this article is to study the description and extension of the Industry Foundation Classes (IFC) standard in construction schedule management, which achieves information exchange and sharing among different information systems and stakeholders, and facilitates collaborative construction in construction projects. Design/methodology/approach: Schedule information processing and coordination are difficult in complex construction projects. Building Information Modeling (BIM) provides the platform for exchanging and sharing information among information systems and stakeholders based on the IFC standard. Through analyzing schedule planning, implementation, checking and control, the information flow in schedule management is captured using IDEF. According to IFC4, the information model for schedule management is established, which includes not only each aspect of schedule management, but also cost management, resource management, quality management and risk management. Findings: The information requirements for construction schedule management can be summarized into three aspects: schedule plan information, implementation information, and check and control information. The three aspects can be described through the existing and extended entities of IFC4, and the information models are established. Originality/value: The main contribution of the article is to establish the construction schedule management information model, which achieves information exchange and sharing in the construction project and facilitates the development of application software to meet the requirements of construction projects.

  5. Ensuring HL7-based information model requirements within an ontology framework.

    Science.gov (United States)

    Ouagne, David; Nadah, Nadia; Schober, Daniel; Choquet, Rémy; Teodoro, Douglas; Colaert, Dirk; Schulz, Stefan; Jaulent, Marie-Christine; Daniel, Christel

    2010-01-01

    This paper describes the building of an HL7-based Information Model Ontology (IMO) that can be exploited by a domain ontology in order to distribute querying over different clinical data repositories. We employed the Open Medical Development Framework (OMDF), based on a model-driven development methodology. OMDF provides model transformation features to build an HL7-based information model that covers the conceptual scope of a target project. The resulting IMO is used to mediate between ontological queries and information retrieval from semantically less defined Hospital Information Systems (HIS). In the context of the DebugIT project, whose scope is the control of infectious diseases and antimicrobial resistance, the Information Model Ontology is integrated into the DebugIT domain ontology in order to express queries.

  6. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that carry over across numerous system modeling languages and representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  7. Influence analysis of information erupted on social networks based on SIR model

    Science.gov (United States)

    Zhou, Xue; Hu, Yong; Wu, Yue; Xiong, Xi

    2015-07-01

    In this paper, based on the similarity between the chain reaction principle and the characteristics of information propagation on social networks, we propose the term "information bomb". Based on complex networks and the SIR model, dynamical evolution equations are set up. Methods to evaluate the four indexes of bomb power are then given, including influence breadth, influence strength, peak time and relaxation time; the power of the information bomb is ascertained through these indexes. The process of information propagation is simulated to illustrate its spreading characteristics. Finally, the parameters that affect the power of the information bomb are analyzed, and some methods to control the propagation of information are given.
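The mean-field version of such SIR spreading dynamics, together with the four power indexes named above, can be sketched as follows. The equations, parameter values, and the 1%-of-peak relaxation criterion are illustrative assumptions rather than the paper's exact formulation.

```python
# Mean-field SIR sketch of information spreading, integrated with Euler steps.
# s: not yet reached, i: actively spreading, r: no longer spreading.

def sir_indexes(beta=0.5, gamma=0.1, i0=0.001, dt=0.01, t_max=200.0):
    s, i, r = 1.0 - i0, i0, 0.0
    t, peak_i, peak_t, relax_t = 0.0, i0, 0.0, None
    while t < t_max:
        ds = -beta * s * i                 # new people hearing the information
        di = beta * s * i - gamma * i      # net change in active spreaders
        s, i, r = s + ds * dt, i + di * dt, r + gamma * i * dt
        t += dt
        if i > peak_i:
            peak_i, peak_t = i, t
        elif relax_t is None and peak_t > 0.0 and i < 0.01 * peak_i:
            relax_t = t                    # spreading has essentially died out
    return {
        "influence_breadth": r,            # fraction of the network ever reached
        "influence_strength": peak_i,      # peak fraction spreading at once
        "peak_time": peak_t,
        "relaxation_time": relax_t,
    }

idx = sir_indexes()
assert 0.0 < idx["influence_strength"] < idx["influence_breadth"] <= 1.0
assert idx["peak_time"] < idx["relaxation_time"]
```

With beta/gamma = 5, most of the population is eventually reached (large breadth) while the peak fraction spreading at once stays well below that, which is why breadth and strength are reported as separate indexes.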

  8. Applying an expectancy-value model to study motivators for work-task based information seeking

    DEFF Research Database (Denmark)

    Sigaard, Karen Tølbøl; Skov, Mette

    2015-01-01

    Purpose: The purpose of this paper is to operationalise and verify a cognitive motivation model that has been adapted to information seeking. The original model was presented within the field of psychology. Design/methodology/approach: An operationalisation of the model is presented based on the ...

  9. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method, adapted to treatment effects data, is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained which are comparable with the existing estimators.
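As a rough illustration of the inverse-probability-weighting idea, one of the three estimators named above, the following simulates data with a known, constant propensity score. The data-generating process, sample size, and effect size are assumptions made for the sketch; a real analysis would estimate the propensity and plug in the paper's quasi-likelihood fits.

```python
import random

# Toy IPW estimate of an average treatment effect on simulated data.

rng = random.Random(0)
n, true_effect, e = 20_000, 2.0, 0.4        # e = P(T = 1), known here

x = [rng.gauss(0.0, 1.0) for _ in range(n)]                  # covariate
t = [1 if rng.random() < e else 0 for _ in range(n)]         # treatment
y = [true_effect * t[k] + x[k] + rng.gauss(0.0, 1.0) for k in range(n)]

# IPW estimator: sample mean of T*Y/e - (1-T)*Y/(1-e)
ipw = sum(t[k] * y[k] / e - (1 - t[k]) * y[k] / (1 - e) for k in range(n)) / n

# Difference in group means (valid here because treatment is randomized;
# under confounding one would use the regression/imputation estimators)
n1 = sum(t)
diff = (sum(y[k] for k in range(n) if t[k]) / n1
        - sum(y[k] for k in range(n) if not t[k]) / (n - n1))

assert abs(ipw - true_effect) < 0.2
assert abs(diff - true_effect) < 0.2
```

Both estimates land close to the true effect of 2.0 at this sample size; the efficiency comparison studied in the paper concerns how much the variance shrinks when the conditional mean and variance models are exploited.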

  11. Information theory-based approach for modeling the cognitive behavior of NPP operators

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [KAIST, Taejon (Korea, Republic of)

    2001-10-01

    An NPP system consists of three important components: the machine system, the operators, and the man-machine interface (MMI). Through the MMI, operators monitor and control the plant system. The cognitive model of NPP operators has become a target of modeling by cognitive engineers due to their work environment, which is complex, uncertain, and safety-critical. We suggest a contextual model for the cognitive behavior of NPP operators, together with mathematical fundamentals based on information theory which can quantify the model. A drawback of the information-theoretic methodology is that it cannot evaluate the correctness and quality of information; therefore, validation through experiment is needed.
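The information-theoretic quantification mentioned above can be illustrated with Shannon entropy and mutual information: entropy measures the uncertainty of the plant state, and mutual information measures how much of that uncertainty an operator's observation removes. The 2x2 state/observation distribution below is a toy assumption, not the paper's operator model.

```python
import math

# Shannon entropy of a distribution and mutual information of a joint
# state/observation table, the basic quantities of the approach.

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """joint[s][o] = P(state = s, observation = o)."""
    ps = [sum(row) for row in joint]                 # marginal over states
    po = [sum(col) for col in zip(*joint)]           # marginal over observations
    return sum(joint[s][o] * math.log2(joint[s][o] / (ps[s] * po[o]))
               for s in range(len(joint)) for o in range(len(joint[0]))
               if joint[s][o] > 0)

# A noiseless display carries all the state information ...
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - entropy([0.5, 0.5])) < 1e-12
# ... while an uninformative one carries none.
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-12
```

As the abstract notes, such measures quantify the *amount* of information flowing through the MMI but say nothing about whether the information is correct, which is why experimental validation is still required.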

  12. On a Model of Distributed Information Retrieval Systems Based on Thesauri.

    Science.gov (United States)

    Mazur, Zygmunt

    1984-01-01

    Investigates the properties of a global model consisting of "n" local information retrieval systems based on thesauri. Definitions of a distributed information retrieval system (thesaurus, document set, set of queries) and proofs of theorems denoting further properties of the systems are highlighted. Five references are included. (EJS)

  13. Exploring Topic-based Language Models for Effective Web Information Retrieval

    NARCIS (Netherlands)

    Li, R.; Kaptein, Rianne; Hiemstra, Djoerd; Kamps, Jaap; Hoenkamp, E.; De Cock, M.; Hoste, V.

    2008-01-01

    The main obstacle for providing focused search is the relative opaqueness of the search request -- searchers tend to express their complex information needs in only a couple of keywords. Our overall aim is to find out if, and how, topic-based language models can lead to more effective web information retrieval…

  14. Russian and Foreign Experience of Integration of Agent-Based Models and Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Konstantin Anatol’evich Gulin

    2016-11-01

    Full Text Available The article provides an overview of the mechanisms for integrating agent-based models and GIS technology developed by Russian and foreign researchers. The basic framework of the article rests on critical analysis of domestic and foreign literature (monographs, scientific articles). The study applies universal scientific research methods: the system approach, analysis and synthesis, classification, systematization and grouping, generalization and comparison. The article presents the theoretical and methodological bases of integrating agent-based models and geographic information systems. The concept and essence of agent-based models are explained, and their main advantages (compared to other modeling methods) are identified. The paper characterizes the operating environment of agents as a key concept in the theory of agent-based modeling. It is shown that geographic information systems offer a wide range of information resources for calculation, search, and modeling of the real world in various aspects, acting as an effective tool for displaying the agents' operating environment and allowing the model to be brought as close as possible to real conditions. The authors also note the wide range of possibilities for research in different spatial and temporal contexts. A comparative analysis of platforms supporting the integration of agent-based models and geographic information systems has been carried out. The authors give examples of complex socio-economic models: a model of a creative city and a humanitarian assistance model. In the absence of standards for describing research results, the authors focus on such model elements as the characteristics of the agents and their operating environment, the agents' behavior, and the rules of interaction between the agents and the external environment. The paper describes the possibilities and prospects of implementing these models.

  15. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex, and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still scattered and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. The solutions proposed will help develop BIM that is better integrated and better used, taking into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help implementation are identified and can be undertaken according to a prior maturity evaluation of an organization. The result of this research is a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  16. An Ontology-Based Archive Information Model for the Planetary Science Community

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris

    2008-01-01

    The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de-facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members have captured the PDS information model in an ontology modeling tool. Using the resulting knowledge-base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.

  18. User satisfaction-based quality evaluation model and survey analysis of network information service

    Institute of Scientific and Technical Information of China (English)

    LEI; Xue; JIAO; Yuying

    2009-01-01

    On the basis of user satisfaction, the authors formulated research hypotheses by drawing on relevant e-service quality evaluation models. A questionnaire survey was then conducted on some content-based websites in terms of their convenience, information quality, personalization and site aesthetics, which may affect the overall satisfaction of users. Statistical analysis was also performed to build a user satisfaction-based quality evaluation system for network information services.

  19. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  20. Constraint-Based Fuzzy Models for an Environment with Heterogeneous Information-Granules

    Institute of Scientific and Technical Information of China (English)

    K. Robert Lai; Yi-Yuan Chiang

    2006-01-01

    A novel framework for fuzzy modeling and model-based control design is described. Based on the theory of fuzzy constraint processing, the fuzzy model can be viewed as a generalized Takagi-Sugeno (TS) fuzzy model with fuzzy functional consequences. It uses multivariate antecedent membership functions obtained by granular-prototype fuzzy clustering methods and consequent fuzzy equations obtained by fuzzy regression techniques. Constrained optimization is used to estimate the consequent parameters, where the constraints are based on control-relevant a priori knowledge about the modeled process. The fuzzy-constraint-based approach provides the following features. 1) The knowledge base of a constraint-based fuzzy model can incorporate information with various types of fuzzy predicates; consequently, it is easy to provide a fusion of different types of knowledge, whether from data-driven approaches or from control-relevant physical models. 2) A corresponding inference mechanism for the proposed model can deal with heterogeneous information granules. 3) Both numerical and linguistic inputs can be accepted for predicting new outputs. The proposed techniques are demonstrated by means of two examples: a nonlinear function-fitting problem and the well-known Box-Jenkins gas furnace process. The first example shows that the proposed model uses fewer fuzzy predicates while achieving results similar to the traditional rule-based approach, while the second shows that performance can be significantly improved when control-relevant constraints are considered.

  1. Ontological Model-Based Transparent Access To Information In A Medical Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Felicia GÎZĂ-BELCIUG

    2012-01-01

    Full Text Available Getting the full electronic medical record of a patient is an important step in providing a quality medical service. But the degree of heterogeneity of data from health unit informational systems is very high, because each unit can have a different model for storing patients’ medical data. In order to achieve the interoperability and integration of data from various medical units that store partial patient medical information, this paper proposes a multi-agent system and ontology-based approach. Therefore, we present an ontological model for describing the particular structure of the data integration process. The system is to be used for centralizing the information from a patient’s partial medical records. The main advantage of the proposed model is the low ratio between the complexity of the model and the amount of information that can be retrieved in order to generate the complete medical history of a patient.

  2. On the impact of information delay on location-based relaying: a markov modeling approach

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2012-01-01

    For centralized selection of communication relays, the necessary decision information needs to be collected from the mobile nodes by the access point (centralized decision point). In mobile scenarios, the required information collection and forwarding delays will affect the reliability...... of the collected information and hence will influence the performance of the relay selection method. This paper analyzes this influence in the decision process for the example of a mobile location-based relay selection approach using a continuous time Markov chain model. The model is used to obtain optimal relay...
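The flavor of such a continuous-time Markov chain analysis can be sketched as follows (a toy two-state example, not the paper's model; the states and rates are invented): a node alternates between being a good relay candidate (state 0) and a poor one (state 1), and the access point acts on information that is t seconds old.

```python
import math

def p_still_valid(lam, mu, t):
    """Two-state CTMC: probability that a node observed in state 0
    ('good relay position') is still in state 0 after an information
    delay t, with transition rates lam (0->1) and mu (1->0)."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# The longer the collection/forwarding delay, the less reliable the snapshot
for t in (0.0, 0.5, 2.0, 10.0):
    print(t, round(p_still_valid(0.5, 0.5, t), 3))
```

As t grows, the probability decays toward the stationary value mu/(lam+mu), which is why delayed information degrades the centralized relay selection.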

  3. Modeling Multisource-heterogeneous Information Based on Random Set and Fuzzy Set Theory

    Institute of Scientific and Technical Information of China (English)

    WEN Cheng-lin; XU Xiao-bin

    2006-01-01

    This paper presents a new idea, termed modeling of multisensor-heterogeneous information, to incorporate fuzzy logic methodologies into the multisensor-multitarget system under the framework of random set theory. Firstly, based on strong random sets and weak random sets, a unified form to describe both data (unambiguous information) and fuzzy evidence (uncertain information) is introduced. Secondly, according to the signatures of the fuzzy evidence, two Bayesian-Markov nonlinear measurement models are proposed to effectively fuse data and fuzzy evidence. Thirdly, by use of "the model-based signature-matching scheme", operations on the statistics of fuzzy evidence, defined as random sets, can be translated into operations on the membership functions of the relative point state variables. These works are the basis for constructing qualitative measurement models and fusing data and fuzzy evidence.

  4. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    Science.gov (United States)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
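The token-game semantics that Petri nets (and hence XML nets) build on can be sketched in a few lines (a generic illustration, not the chapter's XML-net toolset; the order-handling places are invented):

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from the pre-set places and
    produce tokens in the post-set places, returning the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Business-process fragment: an order moves from 'received' to 'approved'
m0 = {"received": 1, "approved": 0}
m1 = fire(m0, {"received": 1}, {"approved": 1})
print(m1)   # {'received': 0, 'approved': 1}
```

In XML nets the tokens would additionally carry XML documents and the transitions would filter and transform them, but the enabling/firing discipline is the same.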

  5. The Impacts of Information-Sharing Mechanisms on Spatial Market Formation Based on Agent-Based Modeling

    Science.gov (United States)

    Li, Qianqian; Yang, Tao; Zhao, Erbo; Xia, Xing’ang; Han, Zhangang

    2013-01-01

    There has been an increasing interest in the geographic aspects of economic development, exemplified by P. Krugman’s logical analysis. We show in this paper that the geographic aspects of economic development can be modeled using multi-agent systems that incorporate multiple underlying factors. The extent of information sharing is assumed to be a driving force that leads to economic geographic heterogeneity across locations without geographic advantages or disadvantages. We propose an agent-based market model that considers a spectrum of different information-sharing mechanisms: no information sharing, information sharing among friends and pheromone-like information sharing. Finally, we build a unified model that accommodates all three of these information-sharing mechanisms based on the number of friends who can share information. We find that the no information-sharing model does not yield large economic zones, and more information sharing can give rise to a power-law distribution of market size that corresponds to the stylized fact of city size and firm size distributions. The simulations show that this model is robust. This paper provides an alternative approach to studying economic geographic development, and this model could be used as a test bed to validate the detailed assumptions that regulate real economic agglomeration. PMID:23484007
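The effect described, more information sharing leading to concentrated markets, can be caricatured in a few lines (a toy sketch, not the authors' model; the agent and market counts and the superlinear "pheromone" reinforcement are invented for a clear contrast):

```python
import random

def simulate(n_agents, n_markets, share):
    """Without sharing, agents pick a market uniformly at random; with
    pheromone-like sharing they join markets in proportion to the square
    of current popularity, so popular markets attract ever more agents."""
    random.seed(1)                       # deterministic toy run
    sizes = [1] * n_markets              # seed each market with one trader
    for _ in range(n_agents):
        if share:
            choice = random.choices(range(n_markets),
                                    weights=[s * s for s in sizes])[0]
        else:
            choice = random.randrange(n_markets)
        sizes[choice] += 1
    return sizes

uniform = simulate(5000, 50, share=False)
shared = simulate(5000, 50, share=True)
print(max(uniform), max(shared))         # sharing concentrates the market
```

The no-sharing run stays close to an even split, while the reinforcement run produces a few dominant markets, the qualitative pattern behind the heavy-tailed size distributions reported above.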

  7. A theoretical extraction scheme of transport information based on exclusion models

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; Li Wei-Hua; He Liang; Chen Wen-Hao; Sun Peng

    2010-01-01

    In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single-barrier structure based on exclusion models, which include the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed in a single-barrier structure, and their characteristics are obtained. A scheme based on the first three cumulants is devised to determine whether a transport process follows the counter-flows model, the tunnel model, or neither. Time series generated by Monte Carlo techniques are used to validate the extraction procedure, and the results are reasonable.
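The first four cumulants the scheme relies on are simple functions of the central moments; a minimal sketch (illustration only; the sample data are invented):

```python
def cumulants(samples):
    """First four cumulants (k1..k4) of a sample, the quantities the
    scheme compares across exclusion models.  k1 = mean, k2 = variance,
    k3 = third central moment, k4 = fourth central moment - 3*k2**2."""
    n = len(samples)
    m = sum(samples) / n
    c2, c3, c4 = (sum((x - m) ** k for x in samples) / n for k in (2, 3, 4))
    return m, c2, c3, c4 - 3 * c2 ** 2

print(cumulants([1, 2, 3, 4]))
```

For a Poisson process all cumulants equal the mean, which gives a convenient benchmark: deviations of k2..k4 from that benchmark carry the extra transport information the scheme exploits.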

  8. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  9. Characterizing super-spreading in microblog: An epidemic-based information propagation model

    Science.gov (United States)

    Liu, Yu; Wang, Bai; Wu, Bin; Shang, Suiming; Zhang, Yunlei; Shi, Chuan

    2016-12-01

    As microblogging services become more prosperous in the everyday life of users on Online Social Networks (OSNs), it is easier than ever before for hot topics and breaking news to gain attention very quickly, in so-called "super-spreading events". In the information diffusion process of these super-spreading events, messages are passed from one user to another and numerous individuals are influenced by a relatively small portion of users, a.k.a. super-spreaders. Acquiring an awareness of super-spreading phenomena and an understanding of the patterns of wide-ranging information propagation benefits several social media data mining tasks, such as hot topic detection, prediction of information propagation, and harmful information monitoring and intervention. Given that super-spreading in information diffusion and in the spread of a contagious disease are analogous, in this study we build a parameterized model, the SAIR model, based on well-known epidemic models to characterize the super-spreading phenomenon in tweet information propagation accompanied by super-spreaders. To model information diffusion, empirical observations are statistically carried out on a real-world Weibo dataset. Both a steady-state analysis of the equilibrium and a validation of the proposed model on the real-world Weibo dataset are conducted. The case study that validates the proposed model shows that the SAIR model is much more promising than the conventional SIR model in characterizing a super-spreading event of information propagation. In addition, numerical simulations are carried out and discussed to discover how sensitively the parameters affect the information propagation process.
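The compartmental structure can be sketched with a forward-Euler integration (a guess at the structure from the abstract, not the paper's equations; all rates, and the reading of A as the super-spreader class, are assumptions):

```python
def sair_step(s, a, i, r, beta_a, beta_i, p_super, gamma, dt=0.01):
    """One Euler step of a hypothetical SAIR system: susceptibles (S)
    are infected by ordinary spreaders (I) and by super-spreaders (A),
    who transmit at a much higher rate; a fraction p_super of new
    infections become super-spreaders; both spreader classes retire
    to R at rate gamma."""
    new_inf = (beta_i * i + beta_a * a) * s
    ds = -new_inf
    da = p_super * new_inf - gamma * a
    di = (1 - p_super) * new_inf - gamma * i
    dr = gamma * (a + i)
    return s + ds * dt, a + da * dt, i + di * dt, r + dr * dt

s, a, i, r = 0.999, 0.0005, 0.0005, 0.0
for _ in range(5000):                       # integrate to t = 50
    s, a, i, r = sair_step(s, a, i, r,
                           beta_a=5.0, beta_i=0.5, p_super=0.05, gamma=0.2)
print(round(s + a + i + r, 6), round(r, 3))  # total population is conserved
```

Even with only 5% of infections becoming super-spreaders, the high beta_a lets a small A compartment drive most of the outbreak, which is the qualitative point of separating super-spreaders from ordinary spreaders.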

  10. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated

  11. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia.

    Science.gov (United States)

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.
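Decentralized (greedy) search, the navigation mechanism OBDS builds on, can be sketched as follows (a generic illustration, not the authors' code; the toy network and the numeric stand-in for ontology-based similarity are invented):

```python
def decentralized_search(graph, start, target, distance, max_steps=100):
    """Greedy decentralized search: from the current node, always move to
    the neighbor that the background knowledge (here an arbitrary distance
    function, standing in for ontology-based similarity) considers
    closest to the target.  Returns the click path, or None if stuck."""
    path = [start]
    node = start
    for _ in range(max_steps):
        if node == target:
            return path
        nxt = min(graph[node], key=lambda n: distance(n, target))
        if distance(nxt, target) >= distance(node, target):
            return None          # local minimum: knowledge gives no progress
        node = nxt
        path.append(node)
    return None

# Toy network on integer labels; 'background knowledge' = numeric distance
graph = {i: [i - 1, i + 1, i + 5] for i in range(50)}
path = decentralized_search(graph, 0, 23, lambda a, b: abs(a - b))
print(path)
```

Swapping the distance function for one derived from a different ontology changes the click paths, which is how the method represents users with different background knowledge.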

  12. [Development method of healthcare information system integration based on business collaboration model].

    Science.gov (United States)

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into an integration configuration. In addition, an integration case in a radiology scenario was used to verify the method.

  13. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information in its life-cycle requires standardized data exchange protocols. The Standard for Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide the re-organized information models required in data exchange for specific activities in various businesses. DEXs show that it is possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
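Worst-case tolerance analysis of a one-dimensional dimension chain, the kind of check described above, reduces to summing tolerances (a textbook sketch, not the thesis's OntoSTEP-based procedure; the example chain is invented):

```python
def worst_case_stack(dims):
    """Worst-case tolerance stack-up for a 1-D dimension chain.
    dims: list of (nominal, tol, direction) where direction is +1 if the
    dimension adds to the gap and -1 if it subtracts from it."""
    nominal = sum(d * n for n, _, d in dims)
    tol = sum(t for _, t, _ in dims)     # tolerances always add in worst case
    return nominal - tol, nominal + tol

# Gap = housing length minus two parts stacked inside it
chain = [(50.0, 0.1, +1), (20.0, 0.05, -1), (25.0, 0.05, -1)]
lo, hi = worst_case_stack(chain)
print(lo, hi)   # gap range ~ [4.8, 5.2]
```

Verifying a functional requirement then amounts to checking that the [lo, hi] interval stays inside the allowed gap range; the semantic enrichment discussed above is what lets such chains be extracted from GD&T data automatically.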

  14. The Development of Community-Based Health Information Exchanges: A Comparative Assessment of Organizational Models

    Science.gov (United States)

    Champagne, Tiffany

    2013-01-01

    The purpose of this dissertation research was to critically examine the development of community-based health information exchanges (HIEs) and to comparatively analyze the various models of exchanges in operation today nationally. Specifically this research sought to better understand several aspects of HIE: policy influences, organizational…

  15. Analysis of the Effect of Information System Quality to Intention to Reuse of Employee Management Information System (Simpeg Based on Information Systems Success Model

    Directory of Open Access Journals (Sweden)

    Suryanto Tri Lathif Mardi

    2016-01-01

    Full Text Available This study examines the effect of Information Quality, System Quality and Service Quality on the user intention to reuse the Employee Management Information System (SIMPEG) in universities in the city of Surabaya, based on the theoretical foundation of the DeLone and McLean Information Systems Success (ISS) Model. The questionnaire was distributed to 120 employees of different universities by means of stratified random sampling. The results showed that: (1) there is a significant positive effect of System Quality on Information Quality; (2) there is a significant positive effect of Information Quality on the Intention to Reuse, related to the fulfillment of the user’s information needs; (3) there is a significant positive effect of System Quality on the Intention to Reuse, related to the fulfillment of the user’s system needs; (4) there is no effect of Service Quality on the Intention to Reuse. In the end, the results of this study provide analysis and advice to university officials that can be used as a consideration for Information Technology/Information System investment and development in accordance with the Information System Success and Intention to Reuse model.

  16. New Challenges for the Management of the Development of Information Systems Based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    has been initiated with the scope of investigating the questions that mathematical modelling technology poses to traditional information systems development projects. Based on the past body of research, this study proposes a framework to guide decision making for managing projects of information......The advancements in complexity and sophistication of mathematical models for manufacturing scheduling and control and the increase of the ratio power/cost of computers are beginning to provide the manufacturing industry with new software tools to improve production. A Danish action research project...

  17. Integrating 3D geological information with a national physically-based hydrological modelling system

    Science.gov (United States)

    Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark

    2016-04-01

    Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes which will add physical realism when coupled to the JULES land surface model. 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. Many of the remaining 28% of catchments performed relatively poorly (NSE < 0.5); to address this, a method has been developed for integrating 3D geological information into SHETRAN for any model setup. The addition of more realistic subsurface representation following this approach is shown to greatly improve model performance in areas dominated by groundwater processes. 
The resulting modelling system has great potential to be used as a resource at national, regional and local scales in an array of different
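The NSE scores used above to classify model performance follow the standard Nash-Sutcliffe definition (not specific to this study; the sample series are invented):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model's error
    variance to the variance of the observations.  NSE = 1 is a perfect
    fit; NSE <= 0 means the model is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nse(obs, obs))              # 1.0 (perfect fit)
print(nse(obs, [3.0] * 5))        # 0.0 (no better than the mean)
```

The thresholds quoted above (NSE > 0.5 "satisfactory", NSE > 0.7 "good") are common rule-of-thumb cut-offs for daily streamflow simulation.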

  18. The education of medical librarians in evidence-based information services: a model of active learning

    Directory of Open Access Journals (Sweden)

    Huriye Çolaklar

    2013-01-01

    Full Text Available Evidence-based practice stems from clinical approaches used in medical practice in the late 18th and early 19th centuries. The area is also new in Turkey, and education on evidence-based practice is needed in Departments of Information and Records Management. This paper, examining examples from various other countries, presents a model for including evidence-based information services, which are based on research done in the fields of health and medicine and especially of dentistry, within the contents of existing courses in librarianship education in Turkey. The paper describes the aims and fields of use of evidence-based information services and their contribution to active learning; examines education on this subject in various other countries and its place in Turkey; and presents a model for the improvement of this education. It is argued that educating librarians who will provide evidence-based information services, whether through dedicated practices within existing courses or through optional courses designed especially for this purpose, will contribute considerably to active learning in dentistry.

  19. New Challenges for the Management of the Development of Information Systems Based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    has been initiated with the scope of investigating the questions that mathematical modelling technology poses to traditional information systems development projects. Based on the past body of research, this study proposes a framework to guide decision making for managing projects of information......’ skills in the development process. Further observations also indicate that flexibility and adaptability, based on grounded theory, are valuable tools when information systems development involves a new technology.......The advancements in complexity and sophistication of mathematical models for manufacturing scheduling and control and the increase of the ratio power/cost of computers are beginning to provide the manufacturing industry with new software tools to improve production. A Danish action research project...

  20. Propagation Modeling of Food Safety Crisis Information Update Based on the Multi-agent System

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2015-08-01

    Full Text Available This study proposes a new multi-agent system framework based on epistemic default complex adaptive theory and uses agent-based simulation and modeling of the information-updating process to study food safety crisis information dissemination. We then explore the interaction effects between agents in food safety crisis information dissemination in the current environment, revealing how the government agent, food company agent and network media agent influence users’ confidence in food safety. The information-updating process describes how to guide the normal spread of food safety crisis information in public opinion in the current environment and how to enhance average users’ confidence in food quality and safety.

  1. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  2. In-House Communication Support System Based on the Information Propagation Model Utilizes Social Network

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies now utilize computer networks to support speedier and more effective in-house information sharing and communication. However, existing systems are designed to support communications only within the same department. Therefore, in our research, we propose an in-house communication support system based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network and to support information sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and that such exchanges of information could help to build new relationships between users who are far apart on the social network.

  3. Information, Meaning and Eigenforms: In the Light of Sociology, Agent-Based Modeling and AI

    Directory of Open Access Journals (Sweden)

    Manfred Füllsack

    2012-08-01

    Full Text Available The paper considers the relation of Shannon-type information to those semantic and hermeneutic aspects of communication, which are often referred to as meaning. It builds on considerations of Talcott Parsons, Niklas Luhmann and Robert K. Logan and relates them to an agent-based model that reproduces key aspects of the Talking Head experiment by Luc Steels. The resulting insights seem to give reason to regard information and meaning not as qualitatively different entities, but as interrelated forms of order that emerge in the interaction of autonomous (self-referentially closed agents. Although on first sight, this way of putting information and meaning into a constructivist framework seems to open possibilities to conceive meaning in terms of Shannon-information, it also suggests a re-conceptualization of information in terms of what cybernetics calls Eigenform in order to do justice to its dynamic interrelation with meaning.
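The Talking Heads dynamics referenced above can be caricatured by the minimal naming game (a sketch in that spirit, not Steels' actual experiment or the paper's model; population size and interaction count are arbitrary):

```python
import random

def naming_game(n_agents, n_rounds, seed=0):
    """Minimal naming game: a speaker names an object; if the hearer
    already knows the word, both collapse their vocabularies to it
    (success), otherwise the hearer adopts it.  A shared convention
    emerges purely from local interactions of closed agents."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)      # speaker, hearer
        if not vocab[s]:
            vocab[s].add(f"w{rng.randrange(10**6)}")   # invent a word
        word = rng.choice(sorted(vocab[s]))
        if word in vocab[h]:
            vocab[s] = {word}
            vocab[h] = {word}
        else:
            vocab[h].add(word)
    return vocab

vocab = naming_game(20, 20000)
print(len(set().union(*vocab)))   # number of distinct words surviving
```

With enough interactions the population typically collapses to a single shared word, a stable, self-reproducing convention in the spirit of the Eigenforms discussed above.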

  4. Evaluation Model for Capability of Enterprise Agent Coalition Based on Information Fusion and Attribute Reduction

    Institute of Scientific and Technical Information of China (English)

    Dongjun Liu; Li Li; Jiayang Wang

    2016-01-01

    For the issue of evaluating the capability of enterprise agent coalitions, an evaluation model based on information fusion and the entropy weighting method is presented. An attribute reduction method based on rough set theory is utilized to reduce the capability indicators, so that a new indicator system can be determined. Attribute reduction also reduces the workload and removes redundant information when there are too many indicators or the indicators are strongly correlated; research complexity is thereby reduced and efficiency improved. The entropy weighting method is used to determine the weights of the remaining indicators, and the importance of the indicators is analyzed. An information fusion model based on the nearest-neighbor method is developed and utilized to evaluate the capability of multiple agent coalitions, and is compared with a cloud evaluation model and the D-S evidence method. Simulation results are reasonable and clearly discriminating, verifying the effectiveness and feasibility of the model. The information fusion model can provide more scientific, rational decision support for choosing the best agent coalition, and offers a novel procedure for evaluating the capability of agent coalitions.
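The entropy weighting step can be sketched directly (a standard formulation, not the authors' full fusion model; the score matrix is invented):

```python
import math

def entropy_weights(matrix):
    """Entropy weighting: indicators whose values vary more across the
    evaluated alternatives carry more information and get larger weights.
    matrix[i][j] = value of indicator j for alternative i (all > 0)."""
    n, m = len(matrix), len(matrix[0])
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1 - e)            # divergence = 1 - normalized entropy
    s = sum(weights)
    return [w / s for w in weights]

# Indicator 2 is identical for every coalition, so it gets ~zero weight
scores = [[0.9, 0.5], [0.4, 0.5], [0.1, 0.5]]
print(entropy_weights(scores))
```

Constant indicators, which cannot discriminate between coalitions, are automatically suppressed, which is the rationale for combining entropy weighting with attribute reduction.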

  5. A Model-Based Method to Design an Application Common Platform for Enterprise Information Systems

    Science.gov (United States)

    Ishihara, Akira; Furuta, Hirohisa; Yamaoka, Takayuki; Seo, Kazuo; Nishida, Shogo

    This paper presents a model-based method to design a software platform, called an application common platform, for the development of enterprise information systems. The application common platform (ACP) wraps existing reusable software assets to hide their details from application developers and provides domain-level application programming interfaces, so that the reusability of software assets and the productivity of application development improve. In this paper, we present a software architecture which organizes applications, the ACP, and software assets, and we illustrate a development process for the ACP. In particular, we show design rules to derive ACP design models from application design models and software asset design models. We also define metrics of reusability and productivity and evaluate the proposed method through real developments of enterprise information systems. As a result, the proposed method reduced development cost by 20% compared to the estimated cost.

  6. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but can also achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  7. A Physics-Informed Machine Learning Framework for RANS-based Predictive Turbulence Modeling

    Science.gov (United States)

    Xiao, Heng; Wu, Jinlong; Wang, Jianxun; Ling, Julia

    2016-11-01

    Numerical models based on the Reynolds-averaged Navier-Stokes (RANS) equations are widely used in turbulent flow simulations in support of engineering design and optimization. In these models, turbulence modeling introduces significant uncertainties in the predictions. In light of the decades-long stagnation encountered by the traditional approach of turbulence model development, data-driven methods have been proposed as a promising alternative. We will present a data-driven, physics-informed machine-learning framework for predictive turbulence modeling based on RANS models. The framework consists of three components: (1) prediction of discrepancies in RANS modeled Reynolds stresses based on machine learning algorithms, (2) propagation of improved Reynolds stresses to quantities of interest with a modified RANS solver, and (3) quantitative, a priori assessment of predictive confidence based on distance metrics in the mean flow feature space. Merits of the proposed framework are demonstrated in a class of flows featuring massive separations. Significant improvements over the baseline RANS predictions are observed. The favorable results suggest that the proposed framework is a promising path toward RANS-based predictive turbulence modeling in the era of big data. (SAND2016-7435 A).

  8. Integration of remote sensing based surface information into a three-dimensional microclimate model

    Science.gov (United States)

    Heldens, Wieke; Heiden, Uta; Esch, Thomas; Mueller, Andreas; Dech, Stefan

    2017-03-01

    Climate change urges cities to consider the urban climate in sustainable planning. Urban microclimate models can provide knowledge of the climate at building-block level, but they require very detailed information on the area of interest; most microclimate studies therefore rely on assumptions and generalizations to describe the model area. Remote sensing data with area-wide coverage provide a means to derive many parameters at the detailed spatial and thematic scale required by urban climate models. This study shows how microclimate simulations for a series of real-world urban areas can be supported by remote sensing data. In an automated process, surface materials, albedo, LAI/LAD, and object height were derived and integrated into the urban microclimate model ENVI-met. Multiple microclimate simulations were carried out, both with the dynamic remote sensing based input data and with manual, static input data, to analyze the impact of the remote-sensing-based surface information and the suitability of the applied data and techniques. A key benefit of integrating the remote sensing based input data into ENVI-met is the automated processing chain, which avoids tedious manual editing and allows fast, area-wide generation of simulation areas. The analysis of the different modes shows the importance of high-quality height data, detailed surface material information, and albedo.

  9. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Science.gov (United States)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
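
    The "maximum information content, minimum redundancy" selection can be illustrated with a small greedy sketch (not the authors' algorithm; the discretized water-level series are invented). Adding the candidate that raises joint entropy the most automatically penalizes sections redundant with those already chosen.

```python
import math
from collections import Counter

def joint_entropy(*series):
    """Joint Shannon entropy (bits) of one or more discretized series."""
    pairs = list(zip(*series))
    n = len(pairs)
    return -sum(c / n * math.log2(c / n) for c in Counter(pairs).values())

def select_sections(candidates, k):
    """Greedy variant of the max-information / min-redundancy criterion:
    repeatedly add the cross section that raises joint entropy the most,
    so a section redundant with those already chosen is never preferred."""
    chosen = []
    remaining = set(candidates)
    while remaining and len(chosen) < k:
        best = max(sorted(remaining),
                   key=lambda s: joint_entropy(*[candidates[n] for n in chosen + [s]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# hypothetical water-level classes at three candidate cross sections;
# xs2 duplicates xs1, so it carries no extra information
levels = {"xs1": [0, 0, 1, 1, 2, 2],
          "xs2": [0, 0, 1, 1, 2, 2],
          "xs3": [0, 1, 0, 1, 0, 1]}
```

    With `k = 2`, the greedy pass picks xs1 first (highest entropy) and then xs3 rather than the redundant copy xs2.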

  10. Distinguishing ability analysis of compressed sensing radar imaging based on information theory model

    Science.gov (United States)

    Jiang, Hai; Zhang, Bingchen; Lin, Yueguan; Hong, Wen; Wu, Yirong

    2011-11-01

    Recent theory of compressed sensing (CS) has been widely used in many application areas. In this paper, we concentrate on CS in radar and analyze the distinguishing ability of CS radar images based on an information theory model. The information content of the CS radar echoes is analyzed by simplifying the information transmission channel as a parallel Gaussian channel, which yields the relationship among the signal-to-noise ratio (SNR) of the echo signal, the number of required samples, the length of the sparse targets, and the distinguishing level of the radar image. Based on this result, we introduce the distinguishing ability of the CS radar image and derive some of its properties. An experiment with real IECAS advanced scanning two-dimensional railway observation (ASTRO) data confirms our conclusions.
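
    The kind of relationship the abstract describes can be sketched with a back-of-envelope calculation (an illustration under stated assumptions, not the paper's formulas): treat each echo sample as one use of a Gaussian channel carrying 0.5·log2(1+SNR) bits, and compare that with the bits needed to describe a k-sparse scene.

```python
import math

def per_sample_info(snr):
    """Information per echo sample when the channel is simplified to a
    parallel Gaussian channel: 0.5*log2(1+SNR) bits (snr is linear, not dB)."""
    return 0.5 * math.log2(1 + snr)

def scene_bits(n, k, levels):
    """Bits needed to describe a k-sparse scene of n resolution cells whose
    nonzero reflectivities are distinguished at `levels` amplitude levels."""
    return math.log2(math.comb(n, k)) + k * math.log2(levels)

def required_samples(n, k, levels, snr):
    """Back-of-envelope lower bound on the number of CS measurements:
    total scene information divided by information carried per sample."""
    return math.ceil(scene_bits(n, k, levels) / per_sample_info(snr))
```

    The sketch reproduces the qualitative trade-offs in the abstract: lower SNR or more targets means more required samples.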

  11. Development, implementation and evaluation of an information model for archetype based user responsive medical data visualization.

    Science.gov (United States)

    Kopanitsa, Georgy; Veseli, Hasan; Yampolsky, Vladimir

    2015-06-01

    Once medical data have been successfully recorded or exchanged between systems, they need to be presented consistently to ensure that they are clearly understood and interpreted. A standards-based user interface can provide interoperability on the visual level. The goal of this research was to develop, implement and evaluate an information model for building user interfaces for archetype based medical data. The following types of knowledge were identified as important elements and were included in the information model: medical content related attributes, data type related attributes, user-related attributes, and device-related attributes. To support flexible and efficient user interfaces, an approach was chosen that represents different types of knowledge with different models, separating the medical concept from the visual concept and the interface realization. We evaluated the developed approach using the Guideline for Good Evaluation Practice in Health Informatics (GEP-HI). We developed a higher-level information model to complement the ISO 13606 archetype model, which enables the specification of presentation properties at the moment of an archetype's definition. The model allows realizing different users' perspectives on the data. The approach was implemented and evaluated within a functioning EHR system. The evaluation involved 30 patients of different ages and IT experience and 5 doctors. One month of testing showed that the time required to read electronic health records decreased for both doctors (from an average of 310 to 220 s) and patients (from an average of 95 to 39 s). Users reported a high level of satisfaction and motivation to use the presented data visualization approach, especially in comparison with their previous experience. The introduced information model separates medical knowledge from presentation knowledge; the additional presentation layer enriches the flexibility of the graphical user interface and allows an optimal presentation of the data.

  12. Role of propagation thresholds in sentiment-based model of opinion evolution with information diffusion

    Science.gov (United States)

    Si, Xia-Meng; Wang, Wen-Dong; Ma, Yan

    2016-06-01

    The degree of sentiment is the key factor by which internet users determine their propagating behaviors, i.e. whether to participate in a discussion and whether to withdraw from it. To this end, we introduce two sentiment-based propagation thresholds (an infected threshold and a refractory threshold) and propose an interacting model based on Bayesian updating rules. Our model describes the phenomena that few internet users change their decisions and that some users have already dropped out of a discussion when others are only just becoming aware of it. Numerical simulations show that a large infected threshold restrains information diffusion but favors the lessening of extremism, while a large refractory threshold facilitates decision interaction but promotes extremism. Making netizens calm down and propagate information sanely can restrain the prevailing of extremism about rumors.
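
    The two-threshold rule can be sketched as a simple state update (a minimal illustration, not the paper's Bayesian model; all sentiment values are hypothetical).

```python
def step(agents, infected_threshold, refractory_threshold):
    """One synchronous update of the two-threshold rule: an aware
    ('susceptible') user starts spreading only once their sentiment reaches
    the infected threshold; a spreader turns 'refractory' (withdraws)
    once their sentiment falls below the refractory threshold."""
    for a in agents:
        if a["state"] == "susceptible" and a["sentiment"] >= infected_threshold:
            a["state"] = "infected"
        elif a["state"] == "infected" and a["sentiment"] < refractory_threshold:
            a["state"] = "refractory"
    return agents

# three hypothetical users with different sentiment degrees
users = [{"state": "susceptible", "sentiment": 0.9},   # engaged enough to join
         {"state": "susceptible", "sentiment": 0.2},   # aware, but stays silent
         {"state": "infected", "sentiment": 0.1}]      # loses interest, withdraws
users = step(users, infected_threshold=0.6, refractory_threshold=0.3)
```

    Raising the infected threshold keeps more users in the silent middle case, which is the mechanism behind the restrained diffusion reported above.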

  13. Model for Electromagnetic Information Leakage

    OpenAIRE

    Mao Jian; Li Yongmei; Zhang Jiemin; Liu Jinming

    2013-01-01

    Electromagnetic leakage happens in working information equipment and can lead to information leakage. To discover the nature of the information in electromagnetic leakage, this paper combines electromagnetic theory with information theory as an innovative research method. It outlines a systematic model of electromagnetic information leakage, which theoretically describes the process of information leakage, interception and reproduction based on electromagnetic radiation, and analyzes the amount of leaked information with formulas.

  14. Model-free stochastic processes studied with q-wavelet-based informational tools

    Energy Technology Data Exchange (ETDEWEB)

    Perez, D.G. [Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso (PUCV), 23-40025 Valparaiso (Chile)]. E-mail: dario.perez@ucv.cl; Zunino, L. [Centro de Investigaciones Opticas, C.C. 124 Correo Central, 1900 La Plata (Argentina) and Departamento de Ciencias Basicas, Facultad de Ingenieria, Universidad Nacional de La Plata (UNLP), 1900 La Plata (Argentina) and Departamento de Fisica, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1900 La Plata (Argentina)]. E-mail: lucianoz@ciop.unlp.edu.ar; Martin, M.T. [Instituto de Fisica (IFLP), Facultad de Ciencias Exactas, Universidad Nacional de La Plata and Argentina's National Council (CONICET), C.C. 727, 1900 La Plata (Argentina)]. E-mail: mtmartin@venus.unlp.edu.ar; Garavaglia, M. [Centro de Investigaciones Opticas, C.C. 124 Correo Central, 1900 La Plata (Argentina) and Departamento de Fisica, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1900 La Plata (Argentina)]. E-mail: garavagliam@ciop.unlp.edu.ar; Plastino, A. [Instituto de Fisica (IFLP), Facultad de Ciencias Exactas, Universidad Nacional de La Plata and Argentina's National Council (CONICET), C.C. 727, 1900 La Plata (Argentina)]. E-mail: plastino@venus.unlp.edu.ar; Rosso, O.A. [Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellon II, Ciudad Universitaria, 1428 Ciudad de Buenos Aires (Argentina)]. E-mail: oarosso@fibertel.com.ar

    2007-04-30

    We undertake a model-free investigation of stochastic processes employing q-wavelet based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.

  15. DDDAS-based Information-Aggregation for Crowd Dynamics Modeling with UAVs and UGVs

    Directory of Open Access Journals (Sweden)

    Yifei eYuan

    2015-04-01

    Full Text Available Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) collaboratively play important roles in crowd tracking for applications such as border patrol and crowd surveillance. The dynamic data-driven application systems (DDDAS) paradigm has been developed for these applications to take advantage of real-time monitoring data. In the DDDAS paradigm, one crucial step in crowd surveillance is crowd dynamics modeling, which is based on multi-resolution crowd observation data collected from both UAVs and UGVs. Data collected from UAVs capture global crowd motion at low resolution, while those from UGVs provide high-resolution information on local crowd motion. This paper proposes an information-aggregation approach for crowd dynamics modeling that incorporates multi-resolution data: a grid-based method models crowd motion from the UAVs' low-resolution global perception, and an autoregressive model captures individuals' motion from the UGVs' detailed perception. A simulation experiment illustrates and demonstrates the effectiveness of the proposed approach.
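
    The autoregressive component for individuals' motion can be illustrated with the simplest case, a least-squares AR(1) fit (a sketch only; the track below is synthetic, and the paper's model may use higher orders and two dimensions).

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a*x[t-1] + b, the simplest autoregressive
    model of the kind used for individuals' motion from UGV observations."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def forecast(series, a, b, steps):
    """Roll the fitted model forward `steps` time steps."""
    out = [series[-1]]
    for _ in range(steps):
        out.append(a * out[-1] + b)
    return out[1:]

# hypothetical 1-D positions of one pedestrian, sampled by a UGV
track = [1.0, 2.5, 3.25, 3.625, 3.8125]
a, b = fit_ar1(track)
```

    The fitted coefficients can then be aggregated with the UAV grid model to predict crowd motion at both scales.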

  16. Geographic information system-coupling sediment delivery distributed modeling based on observed data.

    Science.gov (United States)

    Lee, S E; Kang, S H

    2014-01-01

    Spatially distributed sediment delivery (SEDD) models are of great interest in estimating the expected effect of changes on soil erosion and sediment yield. However, they can only be applied if the model can be calibrated using observed data. This paper presents a geographic information system (GIS)-based method to calculate the sediment discharge from basins to coastal areas. For this, an SEDD model with a sediment rating curve method based on observed data is proposed and validated. The model has been developed through the combined application of the revised universal soil loss equation (RUSLE) and a spatially distributed sediment delivery ratio, within the Model Builder of ArcGIS. The model focuses on spatial variability and is useful for estimating the spatial patterns of soil loss and sediment discharge. The model consists of two modules, a soil erosion prediction component and a sediment delivery model. The integrated approach allows for relatively practical and cost-effective estimation of spatially distributed soil erosion and sediment delivery, for gauged or ungauged basins. This paper provides a first attempt at estimating the sediment delivery ratio based on observed data in the monsoon region of Korea.
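
    The RUSLE-plus-delivery-ratio structure can be written out in a few lines (a sketch with invented factor values; real applications derive each factor per grid cell from GIS layers).

```python
def rusle_soil_loss(r, k, ls, c, p):
    """RUSLE per-cell annual soil loss A = R*K*LS*C*P (rainfall erosivity,
    soil erodibility, slope length-steepness, cover management, practice)."""
    return r * k * ls * c * p

def basin_sediment_discharge(cells):
    """Basin discharge = sum over grid cells of soil loss times the cell's
    spatially distributed sediment delivery ratio (SDR)."""
    return sum(rusle_soil_loss(*cell["rusle"]) * cell["sdr"] for cell in cells)

# two hypothetical grid cells: (R, K, LS, C, P) factors plus an SDR each
grid = [{"rusle": (100, 0.3, 1.2, 0.2, 1.0), "sdr": 0.4},
        {"rusle": (80, 0.25, 0.9, 0.1, 1.0), "sdr": 0.5}]
```

    Calibration against the observed sediment rating curve then adjusts the SDR field until the summed discharge matches the gauged record.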

  17. Information retrieval for OCR documents: a content-based probabilistic correction model

    Science.gov (United States)

    Jin, Rong; Zhai, ChangXiang; Hauptmann, Alexander

    2003-01-01

    The difficulty with information retrieval for OCR documents lies in the fact that OCR documents contain a significant number of erroneous words, and unfortunately most information retrieval techniques rely heavily on word matching between documents and queries. In this paper, we propose a general content-based correction model that can work on top of an existing OCR correction tool to "boost" retrieval performance. The basic idea of this correction model is to exploit the whole content of a document to supplement any other useful information provided by an existing OCR correction tool for word corrections. Instead of making an explicit correction decision for each erroneous word as typically done in a traditional approach, we consider the uncertainties in such correction decisions and compute an estimate of the original "uncorrupted" document language model accordingly. The document language model can then be used for retrieval with a language modeling retrieval approach. Evaluation using the TREC standard testing collections indicates that our method significantly improves the performance compared with simple word correction approaches such as using only the top ranked correction.
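
    The core idea, keeping correction uncertainty instead of hard decisions, can be sketched as an expected-count language model (a minimal illustration, not the paper's estimator; the candidate distribution is invented).

```python
from collections import defaultdict

def corrected_language_model(tokens, candidates):
    """Estimate the 'uncorrupted' document language model: every OCR token
    contributes each of its correction candidates in proportion to the
    candidate's probability, instead of one hard correction per token."""
    counts = defaultdict(float)
    for tok in tokens:
        # tokens without candidates are kept as-is with probability 1
        for word, prob in candidates.get(tok, {tok: 1.0}).items():
            counts[word] += prob
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# hypothetical OCR output and a toy candidate distribution for one error
ocr_tokens = ["thc", "cat", "thc"]
cands = {"thc": {"the": 0.8, "thc": 0.2}}
model = corrected_language_model(ocr_tokens, cands)
```

    The resulting distribution feeds directly into a language-modeling retrieval function in place of raw term counts.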

  18. Orchard spatial information extraction from SPOT-5 image based on CART model

    Science.gov (United States)

    Li, Deyi; Zhang, Shuwen

    2009-07-01

    Orchard is an important agricultural industry and a typical land use type in the Shandong peninsula of China. This article focuses on automatic extraction of orchard information from SPOT-5 imagery. After analyzing each object's spectrum, we propose a CART model based on sub-region and hierarchy theory, exploring spectral, textural and topographic attributes. The whole area was divided into a coastal plain region and a hill region based on SRTM data, and each was extracted separately. The accuracy reached 86.40%, much higher than that of the supervised classification method.
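
    The heart of a CART model is the search for the split that minimizes node impurity. A minimal sketch follows (the "nir" feature and reflectance values are invented for illustration; a real model would use the spectral, textural and topographic attributes above).

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(samples, labels, feature):
    """CART-style exhaustive search for the threshold on one feature that
    minimizes the weighted Gini impurity of the two child nodes."""
    values = sorted({s[feature] for s in samples})
    best_t, best_score = None, float("inf")
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2          # midpoint between consecutive values
        left = [l for s, l in zip(samples, labels) if s[feature] <= t]
        right = [l for s, l in zip(samples, labels) if s[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# hypothetical near-infrared reflectance: orchard pixels are brighter in NIR
pixels = [{"nir": 0.8}, {"nir": 0.7}, {"nir": 0.3}, {"nir": 0.2}]
classes = ["orchard", "orchard", "other", "other"]
threshold, impurity = best_split(pixels, classes, "nir")
```

    Recursing on each child node with the remaining attributes yields the full classification tree.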

  19. Study on a Threat-Countermeasure Model Based on International Standard Information

    Directory of Open Access Journals (Sweden)

    Guillermo Horacio Ramirez Caceres

    2008-12-01

    Full Text Available Many international standards exist in the field of IT security. This research is based on the ISO/IEC 15408, 15446, 19791, 13335 and 17799 standards. In this paper, we propose a knowledge base comprising a threat countermeasure model based on international standards for identifying and specifying threats which affect IT environments. In addition, the proposed knowledge base system aims at fusing similar security control policies and objectives in order to create effective security guidelines for specific IT environments. As a result, a knowledge base of security objectives was developed on the basis of the relationships inside the standards as well as the relationships between different standards. In addition, a web application was developed which displays details about the most common threats to information systems, and for each threat presents a set of related security control policies from different international standards, including ISO/IEC 27002.
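
    The knowledge base described above is essentially a mapping from threats to related control policies across standards. A toy fragment (the threat names and control entries are illustrative placeholders, not the actual knowledge base content):

```python
# hypothetical fragment of the knowledge base: each threat maps to related
# security control policies drawn from different international standards
KB = {
    "unauthorized_access": {
        "ISO/IEC 27002": ["Access control policy"],
        "ISO/IEC 15408": ["FIA_UAU user authentication"],
    },
    "data_tampering": {
        "ISO/IEC 27002": ["Protection of records"],
    },
}

def controls_for(threat):
    """Fuse the control policies related to one threat across standards,
    mirroring the 'fusing similar security controls' step of the paper."""
    return sorted(c for ctrls in KB.get(threat, {}).values() for c in ctrls)
```

    A web front end would render `controls_for(threat)` for each of the most common threats.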

  1. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    Science.gov (United States)

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and are widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty, so it is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized to measure the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the turbine. In addition, we extend the entropy to compute the information content of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
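
    The entropy-as-uniformity idea can be sketched directly: treat the normalized exhaust temperatures as a probability distribution, whose entropy is maximal when all thermocouples agree. The readings and the 0.995 alarm cut-off below are illustrative assumptions, not the paper's values.

```python
import math

def uniformity_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature distribution:
    maximal when all thermocouples agree (healthy gas path), lower when a
    hot or cold spot skews the distribution."""
    total = sum(temps)
    p = [t / total for t in temps]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def gas_path_alarm(temps, ratio=0.995):
    """Flag a fault when the entropy falls below `ratio` of the uniform
    maximum log(n); the cut-off here is an illustrative choice."""
    return uniformity_entropy(temps) < ratio * math.log(len(temps))

healthy = [610, 612, 608, 611, 609, 610]   # hypothetical exhaust readings (deg C)
faulty = [610, 612, 300, 611, 609, 610]    # one thermocouple sees a cold spot
```

    The kernelized extension in the paper applies the same measure to feature distributions in kernel space rather than to raw temperatures.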

  3. Formal modeling and quantitative evaluation for information system survivability based on PEPA

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Hui-qiang; ZHAO Guo-sheng

    2008-01-01

    Survivability should be considered beyond security for information systems. To assess system survivability accurately, and thereby improve it, a formal modeling and analysis method based on stochastic process algebra is proposed in this article. By abstracting the interactive behaviors between intruders and the information system, a survivability-oriented state transition graph of the system is constructed. On that basis, parameters are defined and system behaviors are characterized precisely with performance evaluation process algebra (PEPA), while considering the influence of different attack modes. Finally, the formal model for survivability is established and quantitative analysis results are obtained with the PEPA Workbench tool. Simulation experiments show the effectiveness and feasibility of the developed method, which can help guide the design of survivable systems.

  4. INFORMATION AND COMMUNICATION TECHNOLOGY BASED MODEL FOR DEMATERIALIZATION OF ACADEMIC CERTIFICATES FOR INDIAN EDUCATIONAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Raman

    2014-01-01

    Full Text Available Information and communication technology (ICT) can integrate all facets of academic business. Though some schools and universities in India have attempted to digitize many parts of their business processes, awarding the final academic credentials/certificates is still paper based. Printed paper certificates are a serious cause of concern for security and reliability: they give fraudsters an opportunity to exploit the system and create fabricated certificates, as there is no easy mechanism available to verify and check the authenticity of a certificate. Moreover, the present system is not green or environment friendly. This study proposes an ICT based model for the dematerialization of academic certificates for the Indian educational system.

  5. IMPLEMENTATION OF AN INNOVATIVE DEVELOPMENT STRATEGY FOR THE CONSTRUCTION INDUSTRY OF THE RUSSIAN FEDERATION BASED ON INFORMATION MODELING OF INDUSTRIAL AND CIVIL OBJECTS

    Directory of Open Access Journals (Sweden)

    Валерий Владимирович Трофимов

    2017-02-01

    Full Text Available This article discusses the implementation of innovative development strategies for the Russian construction industry based on information modeling of industrial and civil objects with the use of information technology. The use of technologies such as Building Information Modeling (BIM), 4D modeling, Multi-D modeling, and Product Lifecycle Management (PLM) makes it possible to activate innovative processes and supports the basic conditions for realizing the innovative development strategy of the construction industry of the Russian Federation.

  6. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of effort in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source limiting the predictive accuracy of RANS models, and identifying these discrepancies is significant for improving RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. Using a modern machine learning technique based on random forests, the discrepancy functions are trained on existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated on two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulty are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performance was observed in both scenarios, demonstrating the merits of the proposed method.
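
    The train-on-one-flow, predict-on-another pattern can be sketched with a 1-nearest-neighbour regressor standing in for the random forest (all feature vectors and discrepancy values below are synthetic; the real framework uses many mean-flow features and full Reynolds-stress tensors).

```python
def fit_discrepancy(features, discrepancies):
    """Store training pairs (mean-flow features -> Reynolds-stress
    discrepancy). A 1-nearest-neighbour lookup stands in for the random
    forest regressor of the paper."""
    return list(zip(features, discrepancies))

def predict_discrepancy(model, query):
    """Predict the discrepancy for a new flow's feature vector as the value
    at the closest training feature vector."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(model, key=lambda pair: dist(pair[0], query))[1]

# synthetic training flow: two mean-flow feature vectors with known
# (DNS minus RANS) discrepancies in one Reynolds-stress component
train_x = [(0.1, 0.9), (0.8, 0.2)]
train_y = [-0.02, 0.05]
model = fit_discrepancy(train_x, train_y)
```

    The predicted discrepancy would then correct the RANS Reynolds stresses before they are propagated to the quantities of interest.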

  7. Exploring nursing e-learning systems success based on information system success model.

    Science.gov (United States)

    Chang, Hui-Chuan; Liu, Chung-Feng; Hwang, Hsin-Ginn

    2011-12-01

    E-learning is thought of as an innovative approach to enhancing nurses' care service knowledge. Extensive research has provided rich information on system development, course design, and nurses' satisfaction with e-learning systems. However, a comprehensive view of what makes a nursing e-learning system successful is an important but less studied topic. The purpose of this research was to explore the net benefits of nursing e-learning systems based on the updated DeLone and McLean Information System Success Model. The study used a self-administered questionnaire to collect 208 valid responses from nurses at 21 of Taiwan's medium- and large-scale hospitals that have implemented nursing e-learning systems. The results confirm that the model is sufficient to explore nurses' use of e-learning systems in terms of intention to use, user satisfaction, and net benefits. However, while the three exogenous quality factors (system quality, information quality, and service quality) were all found to be critical factors affecting user satisfaction, only information quality showed a direct effect on intention to use. This study provides useful insights for evaluating nursing e-learning system quality, as well as an understanding of nurses' intentions and satisfaction related to performance benefits.

  8. INTEGRATIVE METHOD OF TEACHING INFORMATION MODELING IN PRACTICAL HEALTH SERVICE BASED ON MICROSOFT ACCESS QUERIES

    Directory of Open Access Journals (Sweden)

    Svetlana A. Firsova

    2016-06-01

    Full Text Available Introduction: this article explores the pedagogical technology employed to teach medical students the foundations of working with MICROSOFT ACCESS databases. The technology is based on an integrative approach to information modeling in public health practice, drawing upon basic didactic concepts that pertain to the objects and tools of databases created in MICROSOFT ACCESS. The article examines successive steps in teaching the topic “Queries in MICROSOFT ACCESS”, from simple queries to complex ones. The main attention is paid to such components of the methodological system as principles and teaching methods, classified according to the degree of learners' active cognitive activity. Of particular interest is the diagram of the relationships among learning principles, teaching methods and specific types of queries. Materials and Methods: the authors used comparative analysis of literature, syllabi, and curricula in medical informatics taught at leading medical universities in Russia. Results: an original technique for teaching query building with MICROSOFT ACCESS databases is presented for the analysis of information models in practical health care. Discussion and Conclusions: it is argued that the proposed pedagogical technology will significantly improve the effectiveness of teaching the course “Medical Informatics”, which includes the development and application of models to simulate the operation of certain facilities and services of the health system and, in turn, increases the level of information culture of practitioners.

  9. Model for Electromagnetic Information Leakage

    Directory of Open Access Journals (Sweden)

    Mao Jian

    2013-09-01

    Full Text Available Electromagnetic leakage happens in working information equipment and can lead to information leakage. To discover the nature of the information in electromagnetic leakage, this paper combines electromagnetic theory with information theory as an innovative research method. It outlines a systematic model of electromagnetic information leakage, which theoretically describes the process of information leakage, interception and reproduction based on electromagnetic radiation, and analyzes the amount of leaked information with formulas.

  10. A priori parameter estimates for global hydrological modeling using geographically based information: Application of the CREST hydrologic model

    Science.gov (United States)

    Gao, Z.; Zhang, K.; Xue, X.; Huang, J.; Hong, Y.

    2016-12-01

    Floods are among the most common natural disasters, with worldwide impacts that cause significant humanitarian and economic damage. The increasing availability of satellite-based precipitation estimates and geospatial datasets with global coverage and improved temporal resolution has enhanced our capability to forecast floods and monitor water resources across the world. This study presents an approach combining physically based and empirical methods for a priori parameter estimation, and a parameter dataset, for the Coupled Routing and Excess Storage (CREST) hydrological model at the global scale. The approach takes advantage of geographic information such as topography, land cover, and soil properties to derive distributed parameter values across the world. The main objective is to evaluate the utility of a priori parameter estimates in improving the performance of the CREST distributed hydrologic model and enabling prediction in poorly gauged or ungauged catchments. Several typical river basins on different continents were selected as test areas. The results show that daily streamflows simulated with parameters derived from geographically based information outperform those obtained with lumped parameters. Overall, this early study highlights that a priori parameter estimation gives hydrologic models improved predictive capability in ungauged basins at regional to global scales.

  11. An Information Perception-Based Emotion Contagion Model for Fire Evacuation

    Science.gov (United States)

    Liu, Ting Ting; Liu, Zhen; Ma, Minhua; Xuan, Rongrong; Chen, Tian; Lu, Tao; Yu, Lipeng

    2017-03-01

    In fires, people easily panic, and panic leads to irrational behavior and irreparable tragedy, so making contingency plans for crowd evacuation in fires has great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that may cause panic. Based on settings for an information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype system is developed for crowd simulation. Experiments are carried out to verify the proposed model. The results showed that the spread of panic is related not only to the crowd density and the individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way for safety education and evacuation management, making it possible to avoid and reduce unsafe factors in the crowd at the lowest cost.
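
    A minimal agent-based sketch of the emotion-contagion idea described above, in which panic spreads between nearby agents and is damped by prior evacuation knowledge. The update rule, radius and rate are assumptions for illustration, not the paper's equations.

```python
import math

class Agent:
    def __init__(self, x, y, panic, knowledge):
        self.x, self.y = x, y
        self.panic = panic          # emotion intensity in [0, 1]
        self.knowledge = knowledge  # prior evacuation knowledge in [0, 1]

def step(agents, radius=2.0, rate=0.5):
    """One synchronous contagion step: each agent absorbs panic from
    neighbours within `radius`, damped by its own prior knowledge."""
    new_panic = []
    for a in agents:
        neighbours = [b for b in agents if b is not a
                      and math.hypot(a.x - b.x, a.y - b.y) <= radius]
        if neighbours:
            mean_p = sum(b.panic for b in neighbours) / len(neighbours)
            delta = rate * (1.0 - a.knowledge) * max(0.0, mean_p - a.panic)
        else:
            delta = 0.0
        new_panic.append(min(1.0, a.panic + delta))
    for a, p in zip(agents, new_panic):
        a.panic = p

agents = [Agent(0, 0, 1.0, 0.0),   # panicked agent, no prior knowledge
          Agent(1, 0, 0.0, 0.8)]   # calm agent with evacuation training
step(agents)
```

    After one step the trained agent's panic rises only slightly, reflecting the finding that prior knowledge limits contagion.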

  12. CACM: A New Coordination Model in Mobile Agent-Based Information Retrieval Applications

    Institute of Scientific and Technical Information of China (English)

    TANGXinhuai; ZHANGYaying; YAOYinxiong; YOUJinyuan

    2005-01-01

    In mobile agent systems, an application may be composed of several mobile agents that cooperatively perform a task. Multiple mobile agents need to communicate and interact with each other to accomplish their cooperative goal. A coordination model aims to provide solutions to interactions between concurrent activities, hiding the computing details and focusing on interaction between activities. A context-aware coordination model (CACM), which combines mobility and coordination, is proposed for mobile agent applications such as mobile agent-based information retrieval. The context-aware coordination model transfers interactions between agents from globally coupled interactions to locally uncoupled tuple space interactions. In addition, a programmable tuple space is adopted to solve the problems of context-aware coordination introduced by mobility and data heterogeneity in mobile agent systems. Furthermore, environment-specific and application-specific coordination policies can be integrated into the programmable tuple space for customized requirements. Finally, a sample application, information retrieval with mobile agents, is implemented to test the performance of the proposed model.
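
    The locally uncoupled tuple-space interaction described above can be illustrated with a minimal Linda-style tuple space; the reaction hook stands in for the "programmable" aspect. All names here are illustrative, not the CACM API.

```python
# Minimal Linda-style tuple space: agents coordinate by depositing (out),
# reading (rd) and withdrawing (in_) tuples matched by pattern, instead
# of directly coupled messaging. The reaction list is a crude stand-in
# for a programmable tuple space.

class TupleSpace:
    def __init__(self):
        self.tuples = []
        self.reactions = []  # programmable hooks: (pattern, callback)

    def out(self, tup):
        """Deposit a tuple; fire any reaction whose pattern matches."""
        self.tuples.append(tup)
        for pattern, callback in self.reactions:
            if self._match(pattern, tup):
                callback(tup)

    def rd(self, pattern):
        """Non-destructive read of the first matching tuple (or None)."""
        for tup in self.tuples:
            if self._match(pattern, tup):
                return tup
        return None

    def in_(self, pattern):
        """Destructive read: remove and return the first match (or None)."""
        tup = self.rd(pattern)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

    @staticmethod
    def _match(pattern, tup):
        # None in a pattern is a wildcard field.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

hits = []
space = TupleSpace()
space.reactions.append((("result", None, None), hits.append))
space.out(("result", "query1", 0.9))   # a retrieval agent deposits a result
```

    A consumer agent would later withdraw the result with `space.in_(("result", None, None))`, never naming the producer.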

  13. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    Science.gov (United States)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective for understanding the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. By using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be solved to assess the properties of meridians and clinically diagnose the health characteristics of patients. Finally, through cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is shown that this model not only has significant implications for revealing the essence of the meridian in TCM, but may also play a guiding role in the clinical assessment of patients based on the holographic field of meridians.

  14. A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Huiqiang; ZHAO Guosheng

    2007-01-01

    This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct view of the survivable situation. Based on the three-dimensional state space of the conflict, each player's efficiency matrix on its credible motion set can be obtained; the player with the strongest desire initiates the move, and the overall state transition matrix of the information system can then be derived. In addition, the process of modeling and stability analysis of the conflict can be converted into a Markov analysis process, so the obtained occurrence probabilities of each feasible situation help the players quantitatively judge the probability of reaching their pursued situations in the conflict. Compared with existing methods, which are limited to post-hoc explanation of the system's survivable situation, the proposed model is suitable for quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model can be effectively applied to quantitative survivability analysis, and it has a good application prospect in practice.
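
    The Markov step of this analysis, deriving occurrence probabilities of feasible situations from the overall state transition matrix, can be sketched as a power iteration toward the stationary distribution. The 3-state matrix below is illustrative, not data from the paper.

```python
# Iterate the state transition matrix until the situation distribution
# stabilises; the stationary entries are the occurrence probabilities
# of each feasible situation.

def occurrence_probabilities(P, iters=200):
    n = len(P)
    dist = [1.0 / n] * n                      # start from a uniform guess
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

P = [[0.5, 0.4, 0.1],   # rows: current situation, columns: next situation
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
probs = occurrence_probabilities(P)
```

    Here situation 1 turns out to be the most probable long-run outcome, which is the kind of quantitative judgement the model offers the players.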

  15. HISTORIC BUILDING INFORMATION MODELLING – ADDING INTELLIGENCE TO LASER AND IMAGE BASED SURVEYS

    Directory of Open Access Journals (Sweden)

    M. Murphy

    2012-09-01

    Full Text Available Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects based on historic data and a system of cross platform programmes for mapping parametric objects onto a point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured).

  16. Historic Building Information Modelling - Adding Intelligence to Laser and Image Based Surveys

    Science.gov (United States)

    Murphy, M.; McGovern, E.; Pavia, S.

    2011-09-01

    Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects based on historic data and a system of cross platform programmes for mapping parametric objects onto a point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured).

  17. Historic Building Information Modelling - Adding intelligence to laser and image based surveys of European classical architecture

    Science.gov (United States)

    Murphy, Maurice; McGovern, Eugene; Pavia, Sara

    2013-02-01

    Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects, based on historic architectural data and a system of cross platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured) for both the analysis and conservation of historic objects, structures and environments.

  18. Construction Process Simulation and Safety Analysis Based on Building Information Model and 4D Technology

    Institute of Scientific and Technical Information of China (English)

    HU Zhenzhong; ZHANG Jianping; DENG Ziyin

    2008-01-01

    Time-dependent structure analysis theory has been proved to be more accurate and reliable compared to commonly used methods during construction. However, so far applications are limited to a partial period and part of the structure because of immeasurable artificial intervention. Based on the building information model (BIM) and four-dimensional (4D) technology, this paper proposes an improved structure analysis method, which can generate the structural geometry, resistance model, and loading conditions automatically through a close interlink of the schedule information, architectural model, and material properties. The method was applied to a safety analysis during a continuous and dynamic simulation of the entire construction process. The results show that the organic combination of BIM, 4D technology, construction simulation, and safety analysis of time-dependent structures is feasible and practical. This research also lays a foundation for further research on building lifecycle management by combining architectural design, structure analysis, and construction management.

  19. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor.

    Science.gov (United States)

    Zhang, Xuming; Ren, Jinxia; Huang, Zhiwen; Zhu, Fei

    2016-09-15

    Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.

  20. Spiking Cortical Model Based Multimodal Medical Image Fusion by Combining Entropy Information with Weber Local Descriptor

    Directory of Open Access Journals (Sweden)

    Xuming Zhang

    2016-09-01

    Full Text Available Multimodal medical image fusion (MIF) plays an important role in clinical diagnosis and therapy. Existing MIF methods tend to introduce artifacts, lead to loss of image details or produce low-contrast fused images. To address these problems, a novel spiking cortical model (SCM) based MIF method has been proposed in this paper. The proposed method can generate high-quality fused images using the weighting fusion strategy based on the firing times of the SCM. In the weighting fusion scheme, the weight is determined by combining the entropy information of pulse outputs of the SCM with the Weber local descriptor operating on the firing mapping images produced from the pulse outputs. The extensive experiments on multimodal medical images show that compared with the numerous state-of-the-art MIF methods, the proposed method can preserve image details very well and avoid the introduction of artifacts effectively, and thus it significantly improves the quality of fused images in terms of human vision and objective evaluation criteria such as mutual information, edge preservation index, structural similarity based metric, fusion quality index, fusion similarity metric and standard deviation.
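
    A much-simplified sketch of entropy-weighted fusion in the spirit of the method above. Row entropy stands in for the SCM firing-time maps and Weber local descriptor that the paper actually uses; this is an assumed toy, not the published algorithm.

```python
import math

def entropy(values):
    """Shannon entropy (bits) of the grey-level histogram of a window."""
    total = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def fuse(img_a, img_b):
    """Blend two registered images row by row, weighting each source by
    the entropy of its row, so the more informative source dominates."""
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        ea, eb = entropy(row_a), entropy(row_b)
        w = ea / (ea + eb) if ea + eb > 0 else 0.5
        fused.append([w * a + (1 - w) * b for a, b in zip(row_a, row_b)])
    return fused

flat   = [[5, 5, 5, 5]]   # structureless source (zero entropy)
detail = [[1, 2, 3, 4]]   # detailed source
result = fuse(flat, detail)
```

    The structureless row contributes nothing, so the detailed source is preserved in the fused output, which is the qualitative behaviour the weighting scheme aims for.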

  1. Adjoint based data assimilation for phase field model using second order information of a posterior distribution

    Science.gov (United States)

    Ito, Shin-Ichi; Nagao, Hiromichi; Yamanaka, Akinori; Tsukada, Yuhki; Koyama, Toshiyuki; Inoue, Junya

    The phase field (PF) method, which phenomenologically describes the dynamics of microstructure evolution during solidification and phase transformation, has progressed in the fields of hydromechanics and materials engineering. How to determine, based on observation data, the initial state and model parameters involved in a PF model is an important issue, since previous estimation methods require too much computational cost. We propose data assimilation (DA), which enables us to estimate the parameters and states by integrating the PF model and observation data on the basis of Bayesian statistics. The adjoint method implemented in DA not only finds an optimum solution by maximizing the posterior distribution but also evaluates the uncertainty of the estimates by utilizing second-order information of the posterior distribution. We carried out an estimation test using synthetic data generated by the two-dimensional Kobayashi PF model. The proposed method is confirmed to reproduce the true initial state and model parameters assumed in advance, and it simultaneously estimates their uncertainties due to the quality and quantity of the data. This result indicates that the proposed method is capable of suggesting the experimental design needed to achieve the required accuracy.
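
    The MAP-plus-curvature idea, maximizing a posterior and reading uncertainty off its second-order information, can be illustrated in one dimension with a Laplace approximation. The quadratic log-posterior below is an assumed toy, not the PF-model posterior, and in the paper the gradient comes from the adjoint method rather than finite differences.

```python
import math

def log_post(theta):
    return -0.5 * (theta - 2.0) ** 2 / 0.25   # toy Gaussian: mean 2.0, var 0.25

def map_and_std(theta0=0.0, lr=0.05, steps=500, h=1e-4):
    theta = theta0
    for _ in range(steps):  # gradient ascent on the log-posterior
        grad = (log_post(theta + h) - log_post(theta - h)) / (2 * h)
        theta += lr * grad
    # curvature at the optimum -> posterior std dev (Laplace approximation)
    curv = (log_post(theta + h) - 2 * log_post(theta) + log_post(theta - h)) / h ** 2
    return theta, math.sqrt(-1.0 / curv)

theta_map, theta_std = map_and_std()
```

    The optimizer recovers the posterior mode, and the negative inverse curvature recovers its spread, the same two quantities the adjoint-based DA extracts from the true posterior.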

  2. Least Information Modeling for Information Retrieval

    CERN Document Server

    Ke, Weimao

    2012-01-01

    We proposed a Least Information theory (LIT) to quantify the meaning of information in probability distribution changes, from which a new information retrieval model was developed. We observed several important characteristics of the proposed theory and derived two quantities in the IR context for document representation. Given probability distributions in a collection as prior knowledge, LI Binary (LIB) quantifies the least information due to the binary occurrence of a term in a document, whereas LI Frequency (LIF) measures least information based on the probability of drawing a term from a bag of words. Three fusion methods were also developed to combine the LIB and LIF quantities for term weighting and document ranking. Experiments on four benchmark TREC collections for ad hoc retrieval showed that LIT-based methods demonstrated very strong performance compared to classic TF*IDF and BM25, especially for verbose queries and hard search topics. The least information theory offers a new approach to measuring semantic qua...
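
    The abstract does not give the LIB/LIF formulas, but the underlying notion of quantifying information in a probability-distribution change can be sketched with a KL-divergence stand-in. This is an assumed reading for illustration, not the paper's actual definition.

```python
import math

# Assumed stand-in for the Least Information idea: measure the information
# in moving from the collection's prior term distribution to a document's
# posterior term distribution via KL divergence (in bits).

def distribution_change_bits(prior, posterior):
    return sum(q * math.log2(q / prior[t])
               for t, q in posterior.items() if q > 0)

prior = {"model": 0.2, "retrieval": 0.1, "the": 0.7}   # collection statistics
doc_terms = ["model", "retrieval", "retrieval", "the"]
posterior = {t: doc_terms.count(t) / len(doc_terms) for t in prior}
bits = distribution_change_bits(prior, posterior)
```

    Terms that are rare in the collection but frequent in the document (here "retrieval") dominate the score, which is the intuition behind information-based term weighting.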

  3. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    Full Text Available The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition can severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value as an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value is also discussed. A simulation and a case illustration are provided to show a possible application of the proposed model for a simple business process in an enterprise.

  4. Hierarchical hybrid testability modeling and evaluation method based on information fusion

    Institute of Scientific and Technical Information of China (English)

    Xishan Zhang; Kaoli Huang; Pengcheng Yan; Guangyao Lian

    2015-01-01

    In order to meet the demand of testability analysis and evaluation for complex equipment under small-sample tests in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built by using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined through expert experience. Then the Bayesian method is used to update the conditional probability distribution according to historical test information, virtual simulation information and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of the equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and its evaluation result is the most accurate.
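
    The updating step, starting from an expert prior and folding in history tests, virtual simulation and similar-product data, can be sketched with a weighted Beta-Binomial update. The pooling rule and source weights below are assumptions for illustration, not the HHTME procedure.

```python
# Hedged sketch: a node's detection probability starts from an expert
# prior (Beta distribution) and is updated with success/trial counts
# from several information sources, each down-weighted by reliability.

def update_probability(prior_a, prior_b, sources):
    """sources: list of (successes, trials, weight) per information source."""
    a, b = prior_a, prior_b
    for successes, trials, weight in sources:
        a += weight * successes
        b += weight * (trials - successes)
    return a / (a + b)   # posterior mean detection probability

p = update_probability(
    8.0, 2.0,                      # expert prior, mean 0.8
    [(18, 20, 1.0),                # history test data, full weight
     (45, 50, 0.5),                # virtual simulation, down-weighted
     (27, 30, 0.3)])               # similar-product data, down-weighted
```

    Down-weighting the indirect sources lets small real test samples be reinforced without being swamped, which is the point of fusing the three kinds of information.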

  5. Information Model for Product Modeling

    Institute of Scientific and Technical Information of China (English)

    焦国方; 刘慎权

    1992-01-01

    The key problems in product modeling for integrated CAD/CAM systems are the information structures and representations of products, which are taking on more and more important roles in engineering applications. Based on an investigation of engineering product information and from the viewpoint of the industrial process, this paper proposes information models and gives definitions of the framework of product information. The integration and consistency of product information are then discussed by introducing the entity and its instance. In summary, the information structures described in this paper have many advantages and properties helpful in engineering design.

  6. In the transmission of information, the great potential of model-based coding with the SP theory of intelligence

    OpenAIRE

    Wolff, J Gerard

    2016-01-01

    Model-based coding, described by John Pierce in 1961, has great potential to reduce the volume of information that needs to be transmitted in moving big data, without loss of information, from one place to another, or in lossless communications via the internet. Compared with ordinary compression methods, this potential advantage of model-based coding in the transmission of data arises from the fact that both the transmitter ("Alice") and the receiver ("Bob") are equipped with a grammar for t...

  7. Genome-wide prediction, display and refinement of binding sites with information theory-based models

    Directory of Open Access Journals (Sweden)

    Leeder J Steven

    2003-09-01

    Full Text Available Abstract Background We present Delila-Genome, a software system for identification, visualization and analysis of protein binding sites in complete genome sequences. Binding sites are predicted by scanning genomic sequences with information theory-based (or user-defined) weight matrices. Matrices are refined by adding experimentally defined binding sites to published binding sites. Delila-Genome was used to examine the accuracy of individual information contents of binding sites detected with refined matrices as a measure of the strengths of the corresponding protein-nucleic acid interactions. The software can then be used to predict novel sites by rescanning the genome with the refined matrices. Results Parameters for genome scans are entered using a Java-based GUI and backend scripts in Perl. Multi-processor CPU load-sharing minimized the average response time for scans of different chromosomes. Scans of human genome assemblies required 4–6 hours for transcription factor binding sites and 10–19 hours for splice sites, respectively, on 24-node and 3-node Mosix and Beowulf clusters. Individual binding sites are displayed either as high-resolution sequence walkers or in low-resolution custom tracks in the UCSC genome browser. For large datasets, we applied a data reduction strategy that limited displays of binding sites exceeding a threshold information content to specific chromosomal regions within or adjacent to genes. An HTML document is produced listing binding sites ranked by binding site strength or chromosomal location, hyperlinked to the UCSC custom track, other annotation databases and binding site sequences. Post-genome scan tools parse binding site annotations of selected chromosome intervals and compare the results of genome scans using different weight matrices. Comparisons of multiple genome scans can display binding sites that are unique to each scan and identify sites with significantly altered binding strengths
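
    The core scanning step, scoring genome windows against an information-theory weight matrix built from known binding sites, can be sketched as follows. The pseudocount and threshold are illustrative choices, not Delila-Genome's actual parameters.

```python
import math

def log_odds_matrix(sites, background=0.25):
    """Information-theory weight matrix (log-odds, bits) from aligned,
    experimentally defined binding sites; 0.01 is a small pseudocount."""
    n = len(sites)
    matrix = []
    for i in range(len(sites[0])):
        column = [s[i] for s in sites]
        matrix.append({base: math.log2(((column.count(base) + 0.01) / (n + 0.04))
                                       / background)
                       for base in "ACGT"})
    return matrix

def scan(genome, matrix, threshold):
    """Score every window of the genome; report (position, score) hits."""
    width = len(matrix)
    hits = []
    for pos in range(len(genome) - width + 1):
        score = sum(matrix[i][base]
                    for i, base in enumerate(genome[pos:pos + width]))
        if score >= threshold:
            hits.append((pos, score))
    return hits

matrix = log_odds_matrix(["TATA", "TATA", "TACA"])
hits = scan("GGTATAGG", matrix, threshold=4.0)
```

    Refinement then amounts to appending newly confirmed sites to the training alignment and rebuilding the matrix before rescanning.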

  8. Dynamic relationships between microbial biomass, respiration, inorganic nutrients and enzyme activities: informing enzyme based decomposition models

    Directory of Open Access Journals (Sweden)

    Daryl L Moorhead

    2013-08-01

    Full Text Available We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C (length), and N and P (angle), acquiring enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45°). Reductions in vector angles to < 45° for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
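
    The vector length and angle described above follow directly from the two enzyme ratios. A small sketch, with axis definitions as commonly used for ecoenzymatic vector analysis (assumed here, since the abstract does not spell them out):

```python
import math

def enzyme_vector(bg, nag, ap):
    """Length and angle of the ecoenzymatic stoichiometry vector."""
    x = bg / (bg + ap)    # C- vs. P-acquiring activity
    y = bg / (bg + nag)   # C- vs. N-acquiring activity
    length = math.hypot(x, y)               # relative C investment
    angle = math.degrees(math.atan2(x, y))  # > 45 deg suggests P limitation
    return length, angle

length, angle = enzyme_vector(bg=1.0, nag=1.0, ap=1.0)  # equal activities
```

    With equal activities the angle sits exactly at the 45° boundary between N and P limitation, matching the interpretation used in the abstract.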

  9. A GIS-based modeling system for petroleum waste management. Geographical information system.

    Science.gov (United States)

    Chen, Z; Huang, G H; Li, J B

    2003-01-01

    With an urgent need for effective management of petroleum-contaminated sites, a GIS-aided simulation (GISSIM) system is presented in this study. The GISSIM contains two components: an advanced 3D numerical model and a geographical information system (GIS), which are integrated within a general framework. The modeling component undertakes simulation for the fate of contaminants in subsurface unsaturated and saturated zones. The GIS component is used in three areas throughout the system development and implementation process: (i) managing spatial and non-spatial databases; (ii) linking inputs, model, and outputs; and (iii) providing an interface between the GISSIM and its users. The developed system is applied to a North American case study. Concentrations of benzene, toluene, and xylenes in groundwater under a petroleum-contaminated site are dynamically simulated. Reasonable outputs have been obtained and presented graphically. They provide quantitative and scientific bases for further assessment of site-contamination impacts and risks, as well as decisions on practical remediation actions.

  10. A Model for Water Quality Assessment Based on the Information Entropy and Its Application in the Case of Huiji River

    Institute of Scientific and Technical Information of China (English)

    BingdongZhao; QingliangZhao; JianhuaMa; HuaGuan

    2004-01-01

    Based on the information entropy, a model for water quality assessment is developed. Using this model, the paper gives a case study on the water quality assessment of the Huiji River. The space-time variation law of the water quality is also analyzed. The result indicates that the model possesses clear mathematical and physical meaning, and that it is simple, practical and accurate.

  11. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    Science.gov (United States)

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-08-04

    . This spatial model constitutes an elegant alternative to voxel-based approaches in neuroimaging studies; not only are their atoms biologically informed, they are also adaptive to high resolutions, represent high dimensions efficiently, and capture long-range spatial dependencies, which are important and challenging objectives for neuroimaging data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. A Modified Brain MR Image Segmentation and Bias Field Estimation Model Based on Local and Global Information

    Directory of Open Access Journals (Sweden)

    Wang Cong

    2016-01-01

    Full Text Available Because of poor radio frequency coil uniformity and gradient-driven eddy currents, there is much noise and intensity inhomogeneity (bias) in brain magnetic resonance (MR) images, and it severely affects segmentation accuracy. Better segmentation results are difficult to achieve by traditional methods; therefore, in this paper, a modified brain MR image segmentation and bias field estimation model based on local and global information is proposed. We first construct local constraints including image neighborhood information in a Gaussian kernel mapping space, and then the complete regularization is established by introducing nonlocal spatial information of the MR image. The weighting between local and global information is automatically adjusted according to the image's local information. At the same time, bias field information is coupled with the model, which reduces noise interference and also effectively estimates the bias field information. Experimental results demonstrate that the proposed algorithm has strong robustness to noise and that the bias field is well corrected.

  13. A New Model of Information Systems Efficiency based on Key Performance Indicator (KPI)

    National Research Council Canada - National Science Library

    Ahmad AbdulQadir AlRababah

    2017-01-01

    ... about the performance of employees as well as operating units of the company. One technique that the company could use to evaluate the present performance of both employees and operating units is the utilization of KPI -based management information system...

  14. Investigating Information-Seeking Behavior of Faculty Members Based on Wilson's Model: Case Study of PNU University, Mazandaran, Iran.

    Science.gov (United States)

    Azadeh, Fereydoon; Ghasemi, Shahrzad

    2016-09-01

    The present research aims to study the information-seeking behavior of faculty members of Payame Noor University (PNU) in the Mazandaran province of Iran by using Wilson's model of information-seeking behavior. This is a survey study. Participants were 97 PNU faculty members in Mazandaran province. An information-seeking behavior inventory with 24 items based on a 5-point Likert scale was employed to gather research data, which were analyzed in SPSS software. Results showed that the most important goal of faculty members was publishing a scientific paper, and their least important goal was updating technical information. We also found that they mostly use internet-based resources to meet their information needs: 57.7% of them find information resources via online search engines (e.g. Google, Yahoo). We also concluded that there was a significant relationship between their English language proficiency, academic rank, and work experience and their information-seeking behavior.

  15. Internet Network Resource Information Model

    Institute of Scientific and Technical Information of China (English)

    陈传峰; 李增智; 唐亚哲; 刘康平

    2002-01-01

    The foundation of any network management system is a database that contains information about the network resources relevant to the management tasks. A network information model is an abstraction of network resources, including both managed resources and managing resources. In the SNMP-based management framework, management information is defined almost exclusively from a "device" viewpoint; namely, managing a network is equivalent to managing a collection of individual nodes. Aiming at making use of recent advances in distributed computing and in object-oriented analysis and design, the Internet management architecture can also be based on the Open Distributed Processing Reference Model (RM-ODP). The purpose of this article is to provide an Internet network resource information model. First, a layered management information architecture is discussed. Then the Internet network resource information model is presented. The information model is specified using Object-Z.

  16. Disulfide Connectivity Prediction Based on Modelled Protein 3D Structural Information and Random Forest Regression.

    Science.gov (United States)

    Yu, Dong-Jun; Li, Yang; Hu, Jun; Yang, Xibei; Yang, Jing-Yu; Shen, Hong-Bin

    2015-01-01

    Disulfide connectivity is an important protein structural characteristic. Accurately predicting disulfide connectivity solely from protein sequence helps to improve the intrinsic understanding of protein structure and function, especially in the post-genome era, where a large volume of sequenced but functionally unannotated proteins is quickly accumulating. In this study, a new feature extracted from the predicted protein 3D structural information is proposed and integrated with traditional features to form discriminative features. Based on the extracted features, a random forest regression model is applied to predict protein disulfide connectivity. We compare the proposed method with popular existing predictors by performing both cross-validation and independent validation tests on benchmark datasets. The experimental results demonstrate the superiority of the proposed method over existing predictors. We believe the superiority of the proposed method benefits from both the good discriminative capability of the newly developed features and the powerful modelling capability of the random forest. The web server implementation, called TargetDisulfide, and the benchmark datasets are freely available at: http://csbio.njust.edu.cn/bioinf/TargetDisulfide for academic use.

  17. A model-based information sharing protocol for profile Hidden Markov Models used for HIV-1 recombination detection.

    Science.gov (United States)

    Bulla, Ingo; Schultz, Anne-Kathrin; Chesneau, Christophe; Mark, Tanya; Serea, Florin

    2014-06-19

    In many applications, a family of nucleotide or protein sequences classified into several subfamilies has to be modeled. Profile Hidden Markov Models (pHMMs) are widely used for this task, modeling each subfamily separately by one pHMM. However, a major drawback of this approach is the difficulty of dealing with subfamilies composed of very few sequences. One of the most crucial bioinformatical tasks affected by the problem of small-size subfamilies is the subtyping of human immunodeficiency virus type 1 (HIV-1) sequences, i.e., HIV-1 subtypes for which only a small number of sequences is known. To deal with small samples for particular subfamilies of HIV-1, we introduce a novel model-based information sharing protocol. It estimates the emission probabilities of the pHMM modeling a particular subfamily not only based on the nucleotide frequencies of the respective subfamily but also incorporating the nucleotide frequencies of all available subfamilies. To this end, the underlying probabilistic model mimics the pattern of commonality and variation between the subtypes with regards to the biological characteristics of HI viruses. In order to implement the proposed protocol, we make use of an existing HMM architecture and its associated inference engine. We apply the modified algorithm to classify HIV-1 sequence data in the form of partial HIV-1 sequences and semi-artificial recombinants. Thereby, we demonstrate that the performance of pHMMs can be significantly improved by the proposed technique. Moreover, we show that our algorithm performs significantly better than Simplot and Bootscanning.
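The sharing idea above can be sketched in a few lines: the emission frequencies of a small subfamily are shrunk toward the pooled frequencies of all subfamilies, with the amount of borrowing controlled by the subfamily's sample size. This is a hypothetical simplification of the paper's estimator; the mixing rule, `strength` parameter, and function names are illustrative assumptions.

```python
def pooled_frequencies(subfamily_freqs):
    """Average nucleotide frequencies across all subfamilies."""
    n = len(subfamily_freqs)
    keys = subfamily_freqs[0].keys()
    return {k: sum(f[k] for f in subfamily_freqs) / n for k in keys}

def shared_emissions(own_freq, subfamily_freqs, n_seqs, strength=10.0):
    """Blend a subfamily's own frequencies with the pooled ones.

    The weight on the subfamily's own data grows with its sample size,
    so large subfamilies are barely changed while tiny ones borrow
    heavily from the pool (a simple shrinkage scheme).
    """
    pooled = pooled_frequencies(subfamily_freqs)
    lam = n_seqs / (n_seqs + strength)
    return {k: lam * own_freq[k] + (1 - lam) * pooled[k] for k in own_freq}
```

A subfamily with only two known sequences would thus emit something close to the pooled distribution, while a subfamily with thousands keeps its own statistics almost unchanged.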

  18. Role-based typology of information technology : Model development and assessment.

    NARCIS (Netherlands)

    Zand, F.; Solaimani, H. (Sam); Beers, van C.

    2015-01-01

    Managers aim to explain how and why IT creates business value, recognize their IT-based capabilities, and select the appropriate IT to enhance and leverage those capabilities. This article synthesizes the Organizational Information Processing Theory and the Resource-Based View into a descriptive typology.

  19. Factors associated with adoption of health information technology: a conceptual model based on a systematic review.

    Science.gov (United States)

    Kruse, Clemens Scott; DeShazo, Jonathan; Kim, Forest; Fulton, Lawrence

    2014-05-23

    The Health Information Technology for Economic and Clinical Health Act (HITECH) allocated $19.2 billion to incentivize adoption of the electronic health record (EHR). Since 2009, Meaningful Use Criteria have dominated information technology (IT) strategy. Health care organizations have struggled to meet expectations and avoid penalties to reimbursements from the Center for Medicare and Medicaid Services (CMS). Organizational theories attempt to explain factors that influence organizational change, and many theories address changes in organizational strategy. However, due to the complexities of the health care industry, existing organizational theories fall short of demonstrating association with significant health care IT implementations. There is no organizational theory for health care that identifies, groups, and analyzes both internal and external factors of influence for large health care IT implementations like adoption of the EHR. The purpose of this systematic review is to identify a full spectrum of both internal organizational and external environmental factors associated with the adoption of health information technology (HIT), specifically the EHR. The result is a conceptual model that is commensurate with the complexity of the health care sector. We performed a systematic literature search in PubMed (restricted to English), EBSCO Host, and Google Scholar for both empirical studies and theory-based writing from 1993-2013 that demonstrated association between influential factors and three modes of HIT: EHR, electronic medical record (EMR), and computerized provider order entry (CPOE). We also looked at published books on organizational theories. We took notes and identified trends in adoption factors. These factors were grouped by their association with various versions of EHR adoption. The resulting conceptual model summarizes the diversity of independent variables (IVs) and dependent variables (DVs) used in articles, editorials, books, as

  20. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    Science.gov (United States)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a new and highly demanding framework has been set for environmental sciences and applied mathematics as a result of issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources, and natural hazards can be listed among them. The research community follows two main directions today to address these problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least-squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is clarified by an application to wind speed forecasts on the island of Kefalonia, Greece.

  1. An information-flow-based model with dissipation, saturation and direction for active pathway inference

    Directory of Open Access Journals (Sweden)

    Wu Ling-Yun

    2010-05-01

    Full Text Available Abstract Background Biological systems process the genetic information and environmental signals through pathways. How to map the pathways systematically and efficiently from high-throughput genomic and proteomic data is a challenging open problem. Previous methods design different heuristics but do not describe explicitly the behaviours of the information flow. Results In this study, we propose new concepts of dissipation, saturation and direction to decipher the information flow behaviours in the pathways and thereby infer the biological pathways from a given source to its target. This model takes into account explicitly the common features of the information transmission and provides a general framework to model the biological pathways. It can incorporate different types of bio-molecular interactions to infer the signal transduction pathways and interpret the expression quantitative trait loci (eQTL associations. The model is formulated as a linear programming problem and thus is solved efficiently. Experiments on the real data of yeast indicate that the reproduced pathways are highly consistent with the current knowledge. Conclusions Our model explicitly treats the biological pathways as information flows with dissipation, saturation and direction. The effective applications suggest that the three new concepts may be valid to describe the organization rules of biological pathways. The deduced linear programming should be a promising tool to infer the various biological pathways from the high-throughput data.
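The three concepts can be illustrated with a toy propagation rule along a single directed path, where each interaction loses a fraction of the signal (dissipation) and caps what it can carry (saturation). This is a hypothetical sketch for intuition only; the actual model formulates the whole network as a linear program, and the edge parameters here are made up.

```python
def propagate(path, source_flow):
    """Propagate an information flow along a directed path of edges.

    Each edge is a (dissipation, saturation) pair: a fraction of the
    incoming signal is lost, and the carried flow is capped at the
    saturation level. Direction is fixed by the path order.
    """
    flow = source_flow
    for dissipation, saturation in path:
        flow = min(flow * (1.0 - dissipation), saturation)
    return flow
```

In the full model, maximizing the flow delivered from source to target subject to such per-edge constraints is exactly what makes the problem a linear program.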

  2. Neurally and ocularly informed graph-based models for searching 3D environments

    Science.gov (United States)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
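The semi-supervised step above can be sketched as simple label spreading over a similarity graph: nodes with inferred interest labels stay anchored, while unlabeled nodes converge to a weighted blend of their neighbors' scores. This is a minimal stand-in for the paper's computer-vision graph model; the graph, seed values, and `alpha` are illustrative assumptions.

```python
def label_spread(adj, seeds, alpha=0.85, iters=100):
    """Spread interest scores from implicitly labeled objects over a
    weighted similarity graph.

    adj: dict node -> {neighbor: weight}; seeds: dict node -> label score
    (e.g. interest inferred from EEG/ocular classifiers). Each iteration
    mixes a node's neighborhood average with its clamped seed value.
    """
    scores = {n: seeds.get(n, 0.0) for n in adj}
    for _ in range(iters):
        nxt = {}
        for n, nbrs in adj.items():
            total = sum(nbrs.values())
            avg = (sum(w * scores[m] for m, w in nbrs.items()) / total
                   if total else 0.0)
            nxt[n] = alpha * avg + (1.0 - alpha) * seeds.get(n, 0.0)
        scores = nxt
    return scores
```

On a small chain a-b-c with only `a` labeled, the score decays with graph distance from the seed, which is the behavior the hBCI exploits to rank unseen objects.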

  3. Model-based system-of-systems engineering for space-based command, control, communication, and information architecture design

    Science.gov (United States)

    Sindiy, Oleg V.

    This dissertation presents a model-based system-of-systems engineering (SoSE) approach as a design philosophy for architecting in system-of-systems (SoS) problems. SoS refers to a special class of systems in which numerous systems with operational and managerial independence interact to generate new capabilities that satisfy societal needs. Design decisions are more complicated in a SoS setting. A revised Process Model for SoSE is presented to support three phases in SoS architecting: defining the scope of the design problem, abstracting key descriptors and their interrelations in a conceptual model, and implementing computer-based simulations for architectural analyses. The Process Model enables improved decision support considering multiple SoS features and develops computational models capable of highlighting configurations of organizational, policy, financial, operational, and/or technical features. Further, processes for verification and validation of SoS models and simulations are also important due to potential impact on critical decision-making and, thus, are addressed. Two research questions frame the research efforts described in this dissertation. The first concerns how the four key sources of SoS complexity---heterogeneity of systems, connectivity structure, multi-layer interactions, and the evolutionary nature---influence the formulation of SoS models and simulations, trade space, and solution performance and structure evaluation metrics. The second question pertains to the implementation of SoSE architecting processes to inform decision-making for a subset of SoS problems concerning the design of information exchange services in space-based operations domain. These questions motivate and guide the dissertation's contributions. A formal methodology for drawing relationships within a multi-dimensional trade space, forming simulation case studies from applications of candidate architecture solutions to a campaign of notional mission use cases, and

  4. Climate information based streamflow and rainfall forecasts for Huai River Basin using Hierarchical Bayesian Modeling

    Directory of Open Access Journals (Sweden)

    X. Chen

    2013-09-01

    Full Text Available A Hierarchical Bayesian model for season-ahead forecasting of regional summer rainfall and streamflow using exogenous climate variables for East Central China is presented. The model provides estimates of the posterior forecast probability distribution for 12 rainfall and 2 streamflow stations, accounting for parameter uncertainty and cross-site correlation. The model has a multilevel structure in which regression coefficients are drawn from a common multivariate normal distribution, which results in partial pooling of information across multiple stations and better representation of parameter and posterior distribution uncertainty. The covariance structure of the residuals across stations is explicitly modeled. Model performance is tested under leave-10-out cross-validation. Frequentist and Bayesian performance metrics used include the Receiver Operating Characteristic, Reduction of Error, Coefficient of Efficiency, Rank Probability Skill Scores, and coverage by posterior credible intervals. The ability of the model to reliably forecast regional summer rainfall and streamflow a season ahead offers potential for developing adaptive water risk management strategies.
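The effect of partial pooling can be sketched with a simple precision-weighted shrinkage rule: each station's estimate is pulled toward the regional mean, and stations with little data are pulled hardest. This is a stylized stand-in for the full hierarchical Bayesian posterior; the variance parameters and function names are illustrative assumptions.

```python
def partial_pool(station_est, station_n, tau2=1.0, sigma2=1.0):
    """Shrink per-station estimates toward the regional mean.

    Mimics the pooling induced by a common normal prior on coefficients:
    tau2 is the between-station variance, sigma2 the observation variance.
    The weight on a station's own estimate grows with its sample size n.
    """
    grand = sum(station_est) / len(station_est)
    out = []
    for est, n in zip(station_est, station_n):
        w = tau2 / (tau2 + sigma2 / n)  # data weight grows with n
        out.append(w * est + (1.0 - w) * grand)
    return out
```

A station with 100 observations keeps nearly its raw estimate, while a station with a single observation is pulled roughly halfway to the regional mean, which is why pooling stabilizes forecasts at poorly observed sites.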

  5. An Analytical Model of Information Dissemination for a Gossip-based Protocol

    OpenAIRE

    Bakhshi, R.R.; Gavidia Simonetti, D.P.; Fokkink, W.J.; van, der Steen, JT

    2009-01-01

    We develop an analytical model of information dissemination for a gossiping protocol that combines both pull and push approaches. With this model we analyse how fast an item is replicated through a network, how fast it spreads, and how fast it covers the network. We also determine the optimal size of the exchange buffer to obtain fast replication. Our results are confirmed by large-scale simulation experiments.
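The push-pull dynamic the model analyses can be mimicked with a tiny round-based simulation on a complete graph: every node contacts one random peer per round, and after the exchange both sides hold the item if either one did. This is an illustrative toy, not the paper's analytical model or its buffer mechanism.

```python
import random

def rounds_until_covered(n, seed=0):
    """Simulate combined push-pull gossip on a complete graph of n nodes.

    Each round, every node contacts one uniformly chosen peer; the item
    is pushed or pulled so that both hold it afterwards if either did.
    Returns the number of rounds until all n nodes hold the item.
    """
    rng = random.Random(seed)
    has = [False] * n
    has[0] = True
    rounds = 0
    while not all(has):
        for i in range(n):
            j = rng.randrange(n)
            if has[i] or has[j]:
                has[i] = has[j] = True
        rounds += 1
    return rounds
```

Running this for growing n shows the characteristic logarithmic-style growth in rounds to full coverage that makes push-pull gossip attractive.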

  6. An Analytical Model of Information Dissemination for a Gossip-based Protocol

    CERN Document Server

    Bakhshi, Rena; Fokkink, Wan; van Steen, Maarten

    2008-01-01

    We develop an analytical model of information dissemination for a gossiping protocol that combines both pull and push approaches. With this model we analyse how fast an item is replicated through a network, how fast it spreads, and how fast it covers the network. We also determine the optimal size of the exchange buffer to obtain fast replication. Our results are confirmed by large-scale simulation experiments.

  7. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    … modeling (AMM) in scheduling and control systems. Advanced mathematical techniques are relatively new in scheduling and control systems, at least in real production situations, and therefore the project included research on methods and tools for the development of these systems. Because of the novelty … are grounded in an understanding of reality as a socially constructed phenomenon, where the multiple perspectives of the actors involved ("weltanschauung" in the dissertation) are used as filters to understand the process of creation of the information system. Soft systems theory was used as the theoretical lens. … Keywords: information systems development, information systems development methodology, advanced mathematical models, loosely coupled systems, distributed systems, knowledge exchange, boundary objects, systems theory, multiple perspectives, weltanschauung.

  8. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units.

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  9. Multi-objectives fuzzy optimization model for ship form demonstration based on information entropy

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Selecting an optimal ship form scheme is an important part of the concept design of a ship. A multi-objective fuzzy decision-making model for ship form demonstration is set up according to fuzzy pattern-recognition theory. Weight coefficients of each target of a ship form scheme are determined by information entropy and individual subjective preference. The model is used to select the optimal ship form scheme; the example shows that the model is exact and the result is credible. It can provide a reference for choosing the optimal ship form scheme.
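The entropy side of the weighting can be sketched with the standard entropy weight method: criteria whose values vary more across candidate schemes carry more information and receive larger objective weights (the subjective-preference component of the paper's model is omitted here, and the decision matrix is made up).

```python
import math

def entropy_weights(matrix):
    """Entropy weighting for m schemes x n criteria.

    For each criterion, normalize its column to a probability vector,
    compute the normalized Shannon entropy, and weight the criterion by
    1 - entropy (low entropy = high variation = high information).
    """
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [c / s for c in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        raw.append(1.0 - e)
    total = sum(raw)
    return [w / total for w in raw]
```

A criterion on which all schemes score identically gets weight zero: it cannot discriminate between schemes, so it contributes nothing to the ranking.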

  10. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between ...

  11. An E-Cash Based Implementation Model for Facilitating Anonymous Purchasing of Information Products

    Science.gov (United States)

    Zhang, Zhen; Kim, K. H. (Kane); Kang, Myeong-Ho; Zhou, Tianran; Chung, Byung-Ho; Kim, Shin-Hyo; Lee, Seok-Joon

    The rapid growth of online purchasing of information products poses the challenge of how to preserve the customer's privacy during online transactions. The currently widespread way of online shopping does not consider the customer's privacy protection and exposes the customer's sensitive information unnecessarily. We propose a new five-party implementation model, called 5PAPS, that provides much-enhanced protection of the customer's privacy. The model combines the advantages of e-cash techniques, the mix technique, the anonymity-honoring merchant model, and the anonymity-protecting payment gateway model. It is aimed at protecting the customer's anonymity in all applicable aspects. Security and anonymity issues of the model have been analyzed. The results show that the model is robust against a variety of common attacks and that the customer's anonymity can be protected even in the presence of some collusion among the parties involved in the transactions. Experimental prototyping of the essential parts yields partial validation of the practical nature of the 5PAPS model, and it has also produced reliable estimates of the storage and messaging volume requirements present in sizable purchasing systems.

  12. A General Stochastic Information Diffusion Model in Social Networks Based on Epidemic Diseases

    Directory of Open Access Journals (Sweden)

    Hamidreza Sotoodeh

    2013-09-01

    Full Text Available Social networks are an important infrastructure for the propagation of information, viruses and innovations. Since users' behavior is influenced by other users' activity, groups of people form according to the similarity of users' interests. On the other hand, many real-world events can be mirrored in social networks; the spreading of disease is one instance. People's behavior and infection severity are the most important parameters in the dissemination of diseases; together they determine whether the diffusion leads to an epidemic or not. SIRS is a hybrid of the SIR and SIS disease models for the spread of contamination: a person in this model can return to the susceptible state after being removed. According to the communities established on the social network, we use the compartmental type of the SIRS model. In this paper, a general compartmental information diffusion model is proposed and some useful parameters are extracted to analyze the model. To adapt the model to realistic behaviors, we use a Markovian model, which helps create a stochastic version of the proposed model. In the stochastic case, we can calculate the probabilities of transition between states and predict the value of each state. The comparison between the two modes of the model shows that the predicted population is verified in each state.
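The deterministic skeleton of the SIRS dynamics can be sketched as a discrete-time compartmental update: susceptibles are infected at rate beta, infected are removed at rate gamma, and removed individuals lose immunity and return to the susceptible pool at rate xi. The rates and step scheme below are illustrative, not the paper's calibrated model.

```python
def sirs_step(s, i, r, beta, gamma, xi):
    """One discrete step of a compartmental SIRS model on population
    fractions: infection S->I, removal I->R, loss of immunity R->S."""
    new_inf = beta * s * i
    new_rem = gamma * i
    new_sus = xi * r
    return (s - new_inf + new_sus,
            i + new_inf - new_rem,
            r + new_rem - new_sus)

def simulate(beta=0.3, gamma=0.1, xi=0.05, steps=500):
    """Iterate the SIRS update from a small initial infection."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        s, i, r = sirs_step(s, i, r, beta, gamma, xi)
    return s, i, r
```

Because removed individuals flow back to the susceptible state, the infection does not die out when beta/gamma > 1 but settles at an endemic level, which is the qualitative behavior the stochastic Markovian version refines.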

  13. The Extraction Model of Paddy Rice Information Based on GF-1 Satellite WFV Images.

    Science.gov (United States)

    Yang, Yan-jun; Huang, Yan; Tian, Qing-jiu; Wang, Lei; Geng, Jun; Yang, Ran-ran

    2015-11-01

    At present, using the characteristics of paddy rice at different phenophases to identify it in remote sensing images is an efficient way to extract rice information. Paddy rice differs remarkably from other vegetation in that paddy fields are covered by large amounts of water in the early growth stages, so NDWI (normalized difference water index), which is used to extract water information, can reasonably be applied to extract paddy rice at the early stage of growth. Using the ratio of NDWI between two phenophases can enlarge the difference between paddy rice and other surface features, which is important for extracting paddy rice with high accuracy. The variation of NDVI (normalized difference vegetation index) across phenophases can further enhance the accuracy of paddy rice information extraction. This study finds that taking full advantage of the particularities of paddy rice at different phenophases and combining the two indices (NDWI and NDVI) associated with paddy rice can establish a reasonable, accurate and effective extraction model for paddy rice; this is also the main way to improve the accuracy of paddy rice extraction. The present paper takes Lai'an in Anhui Province as the research area and rice as the research object. It constructs an extraction model of paddy rice information using NDVI and NDWI between the tillering stage and the heading stage. The model was then applied to GF-1 WFV remote sensing images of July 12, 2013 and August 30, 2013, and it effectively extracted the paddy rice distribution in Lai'an and mapped it. Finally, the extraction result was verified and evaluated against field investigation data from the study area. The result shows that the extraction model can quickly and accurately obtain the distribution of rice information and has very good universality.
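The two-index logic can be sketched per pixel: a paddy field is flooded at tillering (high NDWI) and a closed green canopy at heading (low NDWI, high NDVI), so a sharp NDWI drop combined with a strong NDVI gain flags rice. The band reflectances and thresholds below are illustrative assumptions, not GF-1 WFV calibration or the paper's fitted values.

```python
def ndwi(green, nir):
    """Normalized Difference Water Index: high over open water."""
    return (green - nir) / (green + nir)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: high over dense vegetation."""
    return (nir - red) / (nir + red)

def is_paddy(tillering, heading, ndwi_drop=0.5, ndvi_gain=0.3):
    """Flag a pixel (dict of band reflectances) as paddy rice when NDWI
    falls sharply between tillering and heading while NDVI rises strongly."""
    drop = (ndwi(tillering['green'], tillering['nir'])
            - ndwi(heading['green'], heading['nir']))
    gain = (ndvi(heading['red'], heading['nir'])
            - ndvi(tillering['red'], tillering['nir']))
    return drop > ndwi_drop and gain > ndvi_gain
```

A forest pixel, by contrast, shows high NDVI and low NDWI on both dates, so neither condition fires and it is rejected.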

  14. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    Science.gov (United States)

    Maerker, Michael; Bolus, Michael

    2014-05-01

    We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between the site locations and environmental indices. In order to assess the relations between the Neanderthal sites and the environmental variables described above, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify a model that best fits the relationship between the attribute set (the environmental predictor variables) and the classified response variable, which is in this case the type of Neanderthal site. A quantitative evaluation of model performance was done by determining the suitability of the model for the geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristic (ROC) curves for each Neanderthal class, for both training and test data. In a ROC curve the sensitivity is plotted over the false positive rate (1-specificity) for all possible cut-off points, and the quality of a ROC curve is quantified by the area under the curve. The dependent or target variable in this study is the location of Neanderthal sites, described by latitude and longitude. The information on site locations was collected from the literature and our own research, and all sites were checked for accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show a wide correspondence in the selection of the most important predictor variables, generally showing less
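The area under the ROC curve described above has a convenient rank-based form: it equals the probability that a randomly drawn positive case receives a higher score than a randomly drawn negative case, counting ties as half. A minimal sketch (any real evaluation would use an optimized library routine):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.

    Counts, over all positive/negative pairs, how often the positive
    outscores the negative (ties count 0.5). This equals integrating
    sensitivity over the false positive rate across all cut-off points.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking yields 1.0 and a random one about 0.5, which is the scale on which the TREENET, MAXENT and SVM models are compared.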

  15. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Full Text Available Breast cancer is the most common malignancy in women worldwide. With the increasing awareness of heterogeneity in breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS)-based Pathifier algorithm, Cox regression and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated its performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups and compared the areas under the curve based on the binary classification. The resulting prognosis genomic model is composed of fifteen pathways (e.g., the P53 pathway) with previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and the three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also find strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.

  16. Organization of pattern information in the pattern based software development: A POMSDP model

    Institute of Scientific and Technical Information of China (English)

    TANG Yong; LIU Ri-guang; WANG Yan

    2008-01-01

    Focused on the lack of proper organization of patterns in the development of pattern-based software, a POMSDP model with a layered tree structure for organizing patterns during the development process is put forward. The model and its interrelated concepts are strictly defined and introduced by applying set theory, symbolic logic and pattern theory, which ensures the correctness, maturity and expansibility of the model; the expansibility of the model is discussed in particular. The basic realization and the application in an automatic query system are presented. Based on existing software development methods, the POMSDP model resolves the problem of chaos in the application of patterns, strengthens the controllability of the system, and facilitates the improvement, maintenance, expansion, and especially the reengineering of the software system.

  17. Conceptual model of health information ethics as a basis for computer-based instructions for electronic patient record systems.

    Science.gov (United States)

    Okada, Mihoko; Yamamoto, Kazuko; Watanabe, Kayo

    2007-01-01

    A computer-based learning system called Electronic Patient Record (EPR) Laboratory has been developed for students to acquire knowledge and practical skills of EPR systems. The Laboratory is basically for self-learning. Among the subjects dealt with in the system is health information ethics. We consider this to be of the utmost importance for personnel involved in patient information handling. The variety of material on the subject has led to a problem in dealing with it in a methodical manner. In this paper, we present a conceptual model of health information ethics developed using UML to represent the semantics and the knowledge of the domain. Based on the model, we could represent the scope of health information ethics, give structure to the learning materials, and build a control mechanism for a test, fail and review cycle. We consider that the approach is applicable to other domains.

  18. A NOVEL INTELLIGENT MODEL FOR ENTERPRISE INFORMATION SYSTEM BASED ON WIRELESS SENSOR NETWORK

    Directory of Open Access Journals (Sweden)

    OUAIL ABROUN

    2014-05-01

    Full Text Available Wireless Sensor Network (WSN) technologies have become a leading solution in many significant fields by offering the desired high accuracy in a large variety of control applications at reasonable cost. However, in order to generate optimal decisions and choose adequate reactions, the current information systems used as enterprise services require more accurate and real-time data. In this work, we propose a novel model of an Enterprise Information System (EIS) that integrates the benefits of WSN technologies to build an intelligent hardware and software architecture able to generate business management decisions for several enterprise services with high accuracy. This paper explains the different elements involved in integrating WSN into the EIS, presenting our suggested integration architecture, then the integration middleware layer, and finally the decisional model analysis and results.

  19. Model of Information Security Risk Assessment based on Improved Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2013-09-01

    Full Text Available This paper concentrates on an information security risk assessment model utilizing an improved wavelet neural network. The structure of a wavelet neural network is similar to that of a multi-layer neural network: it is a feed-forward network with one or more inputs. We then point out that the training process of a wavelet neural network consists of four steps, repeated until the value of the error function satisfies a pre-defined error criterion. To enhance the quality of information security risk assessment, we propose a modified wavelet neural network that effectively combines all factors influencing information security risk by linearly integrating several weights. Furthermore, the proposed wavelet neural network is trained by the BP algorithm in batch mode, and the weight coefficients of the wavelet are modified in an adaptive mode. Finally, a series of experiments is conducted for performance evaluation. The experimental results show that the proposed model can assess information security risk accurately and rapidly.
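As a sketch of the underlying idea (not the authors' exact network), a single-layer wavelet network can use the Mexican-hat wavelet as its activation and be trained in batch mode by gradient descent on squared error; the gradient is taken numerically here to keep the example short, and all sizes, rates, and the toy target function are assumptions.

```python
import numpy as np

# Toy wavelet neural network: y(x) = sum_j w_j * psi((x - b_j) / a_j),
# where psi is the Mexican-hat wavelet. Trained in batch mode by
# (numerical) gradient descent. Network size and learning rate are
# illustrative assumptions.

def psi(t):
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

def forward(params, x):
    w, a, b = params
    return (w[None, :] * psi((x[:, None] - b[None, :]) / a[None, :])).sum(axis=1)

def loss(params, x, y):
    return float(np.mean((forward(params, x) - y) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = np.sin(x)                         # toy target for the regression problem
params = [rng.normal(size=5), np.ones(5), np.linspace(-3, 3, 5)]
loss0 = loss(params, x, y)            # error before training

for _ in range(200):                  # batch-mode training loop
    grads = []
    for p in params:                  # central-difference numerical gradient
        g = np.zeros_like(p)
        for i in range(p.size):
            p[i] += 1e-5; up = loss(params, x, y)
            p[i] -= 2e-5; dn = loss(params, x, y)
            p[i] += 1e-5
            g[i] = (up - dn) / 2e-5
        grads.append(g)
    for p, g in zip(params, grads):
        p -= 0.02 * g
print(loss0, loss(params, x, y))      # error should shrink with training
```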

  20. A computational model for knowledge-driven monitoring of nuclear power plant operators based on information theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Man Cheol [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)]. E-mail: charleskim@kaist.ac.kr; Seong, Poong Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2006-03-15

    To develop operator behavior models such as IDAC, quantitative models for the cognitive activities of nuclear power plant (NPP) operators in abnormal situations are essential. Among them, only a few quantitative models for monitoring and detection have been developed. In this paper, we propose a computational model for the knowledge-driven monitoring, also known as model-driven monitoring, of NPP operators in abnormal situations, based on information theory. The basic assumption of the proposed model is that the probability that an operator shifts his or her attention to an information source is proportional to the expected information from that source. A small experiment performed to evaluate the feasibility of the proposed model shows that its predictions correlate highly with the experimental results. Even though it has been argued that heuristics may play an important role in human reasoning, we believe the proposed model can provide part of the mathematical basis for developing quantitative models of knowledge-driven monitoring of NPP operators when operators are assumed to behave very logically.
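The model's core assumption — attention-shift probability proportional to the expected information of each source — can be sketched directly, using Shannon entropy as the expected-information measure. The indicator names and distributions below are invented for illustration.

```python
import math

# Sketch of the paper's core assumption: P(attend to source i) is
# proportional to the expected information (here: Shannon entropy, in
# bits) of that source's readings. The indicator names and their
# probability distributions are hypothetical.

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

sources = {
    "reactor_pressure": [0.5, 0.5],   # maximally uncertain -> 1 bit
    "coolant_flow":     [0.9, 0.1],   # mostly predictable  -> ~0.47 bits
    "status_lamp":      [1.0],        # deterministic       -> 0 bits
}

expected_info = {name: entropy(p) for name, p in sources.items()}
total = sum(expected_info.values())
attention = {name: h / total for name, h in expected_info.items()}

for name, p in attention.items():
    print(f"{name}: {p:.3f}")
```

Under this assumption the operator never attends to a source whose reading is already certain, and attends most often to the most informative one.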

  2. The Research of Petroleum Enterprise Information System Architecture Based on the G/S Model

    Science.gov (United States)

    Rui, Liu; Xirong, Guo; Fang, Miao

    This paper explains how the G/S model supports the petroleum engineering technologies of a petroleum enterprise. Combining the enterprise's exploration, development, and transportation processes with spatial information technology supported by the Digital Earth Platform improves the scientific rigor, accuracy, and rationality of the petroleum engineering technologies while reducing costs and increasing benefits.

  3. Building information modelling (BIM)

    CSIR Research Space (South Africa)

    Conradie, Dirk CU

    2009-02-01

    Full Text Available The concept of a Building Information Model (BIM), also known as a Building Product Model (BPM), is nothing new. A short article on BIM can never cover the entire field, because it is a particularly complex field that has only recently begun to receive...

  4. Junior high school students' cognitive process in solving the developed algebraic problems based on information processing taxonomy model

    Science.gov (United States)

    Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd

    2017-05-01

    This study aims to: i) develop problem solving questions of the Linear Equations System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' skill in information processing when solving LESTV problems; iii) explain students' skill in information processing when solving LESTV problems; and iv) explain students' cognitive process in solving LESTV problems. This study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' skill level of information processing; and iii) a qualitative case study method for analyzing students' cognitive process. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five Junior High Schools in Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen of these students were drawn as a sample for the interview session, continued until saturated information was obtained. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings indicated that students' cognitive process reached only the step of identifying external sources and fluently executing algorithms in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication was that the developed LESTV problems had validated the IPT Model in modelling students' assessment at different levels of the hierarchy.

  5. Spatio-temporal rectification of tower-based eddy-covariance flux measurements for consistently informing process-based models

    Science.gov (United States)

    Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Schneider, D.; Kampe, T. U.; Fox, A. M.

    2013-12-01

    Process-based models, such as land surface models (LSMs), allow insight into the spatio-temporal distribution of stocks and the exchange of nutrients, trace gases etc. among environmental compartments. More recently, LSMs have also become capable of assimilating time series of in-situ reference observations. This enables calibrating the underlying functional relationships to site-specific characteristics, or constraining the model results after each time step in an attempt to minimize drift. The spatial resolution of LSMs is typically on the order of 10^2-10^4 km2, which is suitable for linking regional to continental scales and beyond. However, continuous in-situ observations of relevant stock and exchange variables, such as tower-based eddy-covariance (EC) fluxes, represent spatial scales that are orders of magnitude smaller (10^-6-10^1 km2). During data assimilation, this significant gap in spatial representativeness is typically either neglected or side-stepped using simple tiling approaches. Moreover, at 'coarse' resolutions, a single LSM evaluation per time step implies linearity among the underlying functional relationships as well as among the sub-grid land cover fractions. This, however, is not warranted for land-atmosphere exchange processes over more complex terrain. Hence, it is desirable to explicitly consider spatial variability at LSM sub-grid scales. Here we present a procedure that determines, from a single EC tower, the spatially integrated probability density function (PDF) of the surface-atmosphere exchange for individual land covers. These PDFs allow quantifying the expected value as well as the spatial variability over a target domain, can be assimilated in tiling-capable LSMs, and mitigate linearity assumptions at 'coarse' resolutions. The procedure is based on the extraction and extrapolation of environmental response functions (ERFs), for which a technically oriented companion poster is submitted. In short, the subsequent steps are: (i) Time
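The per-land-cover PDF idea can be sketched as simple grouping: flux samples attributed (e.g. via footprint analysis) to a given land cover are pooled, and the pool's mean and spread summarize the expected value and spatial variability. The flux values and cover names below are invented.

```python
import statistics

# Sketch: pool EC flux samples by the land cover their footprint falls on,
# then summarize each pool's expected value and spatial variability.
# Flux values and land-cover labels are invented for illustration.

samples = [
    ("forest", 8.1), ("forest", 7.6), ("forest", 8.9),
    ("cropland", 4.2), ("cropland", 5.0), ("cropland", 3.8),
]

pools = {}
for cover, flux in samples:
    pools.setdefault(cover, []).append(flux)

summary = {
    cover: (statistics.mean(vals), statistics.pstdev(vals))
    for cover, vals in pools.items()
}
for cover, (mean, spread) in summary.items():
    print(f"{cover}: mean={mean:.2f}, spread={spread:.2f}")
```

A tiling-capable LSM could then assimilate one (mean, spread) pair per sub-grid land cover instead of a single tower value for the whole cell.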

  6. Integration of Life Cycle Assessment Into Agent-Based Modeling: Toward Informed Decisions on Evolving Infrastructure Systems

    NARCIS (Netherlands)

    Davis, C.B.; Nikolić, I.; Dijkema, G.P.J.

    2009-01-01

    A method is presented that allows for a life cycle assessment (LCA) to provide environmental information on an energy infrastructure system while it evolves. Energy conversion facilities are represented in an agent-based model (ABM) as distinct instances of technologies with owners capable of making

  7. Applying the Practical Inquiry Model to Investigate the Quality of Students' Online Discourse in an Information Ethics Course Based on Bloom's Teaching Goal and Bird's 3C Model

    Science.gov (United States)

    Liu, Chien-Jen; Yang, Shu Ching

    2012-01-01

    The goal of this study is to better understand how the study participants' cognitive discourse is displayed in their learning transaction in an asynchronous, text-based conferencing environment based on Garrison's Practical Inquiry Model (2001). The authors designed an online information ethics course based on Bloom's taxonomy of educational…

  8. An agent-based information management model of the Chinese pig sector

    NARCIS (Netherlands)

    Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Roozmand, O.; Beulens, A.J.M.

    2010-01-01

    This paper investigates the effect of a selected top-down measure (what-if scenario) on actual agent behaviour and total system behaviour by means of an agent-based simulation model, when agents’ behaviour cannot fully be managed because the agents are autonomous. The Chinese pork sector serves as c

  9. Impact of Navigational Models on Task Completion in Web-Based Information Systems.

    Science.gov (United States)

    Frick, Theodore; Kisling, Eric; Cai, Weijia; Yu, Byeong Min; Giles, Frank; Brown, J. P.

    This study investigated performance differences between three different World Wide Web-based navigation models: linear, persistent, and semi-persistent menu structures. Subjects, 44 graduate and undergraduate students at Indiana University and public school teachers, were placed into one of the three navigation conditions and completed…

  10. Information exchange in global logistics chains : an application for model-based auditing,

    NARCIS (Netherlands)

    Veenstra, A.W.; Hulstijn, J.; Christiaanse, R.M.J.; Tan, Y.

    2013-01-01

    An integrated data pipeline has been proposed to meet requirements for visibility, supervision and control in global supply chains. How can data integration be used for risk assessment, monitoring and control in global supply chains? We argue that concepts from model-based auditing can be used to mo

  11. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    Science.gov (United States)

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    The encoding of molecular information into molecular descriptors is the first step in in silico Chemoinformatics methods in Drug Design. Machine Learning methods are a powerful way to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or physical-chemical properties of an atom or group of atoms, to molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, predicting their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.

  12. Integrated Methodology for Information System Change Control Based on Enterprise Architecture Models

    Directory of Open Access Journals (Sweden)

    Pirta Ruta

    2015-12-01

    Full Text Available Information system (IS) change management and governance are, according to best practices, defined and described in several international methodologies, standards, and frameworks (ITIL, COBIT, ValIT, etc.). These methodologies describe IS change management aspects from the viewpoint of their particular enterprise resource management area. The areas are mainly viewed in a partly isolated environment, and the integration of the existing methodologies is insufficient for providing unified and controlled methodological support for holistic IS change management. In this paper, an integrated change management methodology is introduced. The methodology consists of guidelines for IS change control that integrate the following significant resource management areas: information technology (IT) governance, change management, and enterprise architecture (EA) change management. In addition, the methodology includes lists of controls applicable at different phases. The approach is based on the re-use and fusion of principles from related methodologies as well as on empirical observations about typical IS change management mistakes in enterprises.

  13. Cross-efficiency DEA model-based evaluation of allocative efficiency of rural information resources in Jiangsu Province, China

    Institute of Scientific and Technical Information of China (English)

    Jiannong ZHOU; Aidong PENG; Jing CUI; Shuiqing HUANG

    2012-01-01

    Purpose: This paper aims to compare and rank the allocative efficiency of information resources in rural areas by taking 13 rural areas in Jiangsu Province, China as the research sample. Design/methodology/approach: We designed input and output indicators for the allocation of rural information resources and conducted a quantitative evaluation of the allocative efficiency of rural information resources based on the cross-efficiency model in combination with the classical CCR model in data envelopment analysis (DEA). Findings: The cross-efficiency DEA model can be used for our research objective: to evaluate quantitatively and objectively whether the allocation of information resources in various rural areas is reasonable and whether the output is commensurate with the input. Research limitations: We had to give up using some indicators because of limited data availability. There is a need to further improve the cross-efficiency DEA model because it cannot identify the specific factors influencing the efficiency of decision-making units (DMUs). Practical implications: The evaluation results will help us understand the present allocative efficiency levels of information resources in various rural areas so as to provide a decision-making basis for the formulation of policies aimed at promoting the circulation of information resources in rural areas. Originality/value: Little or no research has been published about the allocative efficiency of rural information resources. The value of this research lies in its focus on studying rural informatization from the perspective of the allocative efficiency of rural information resources and in the application of the cross-efficiency DEA model to evaluate that efficiency.
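The two-stage computation behind cross-efficiency DEA can be sketched compactly: a CCR multiplier linear program is solved per decision-making unit (DMU), and every peer is then scored with each DMU's optimal weights. The input/output data below are invented, and the sketch assumes numpy and scipy are available.

```python
import numpy as np
from scipy.optimize import linprog

# Cross-efficiency DEA sketch (CCR multiplier form, input-oriented).
# X: inputs (n_dmu x n_in), Y: outputs (n_dmu x n_out). Data are invented.
X = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 1.5], [5.0, 3.0]])
Y = np.array([[3.0], [5.0], [4.0], [6.0]])
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

weights = []
for k in range(n):
    # max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0
    c = np.concatenate([-Y[k], np.zeros(m)])             # linprog minimizes
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    weights.append((res.x[:s], res.x[s:]))               # (u_d, v_d)

# cross[d, k]: DMU k scored with DMU d's optimal weights
cross = np.array([[u @ Y[k] / (v @ X[k]) for k in range(n)]
                  for (u, v) in weights])
print(np.round(cross.mean(axis=0), 3))   # average cross-efficiency per DMU
```

The diagonal of `cross` holds the classical CCR self-efficiencies; averaging each column gives the cross-efficiency score used for ranking.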

  14. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS

    Directory of Open Access Journals (Sweden)

    Мария Васильевна Комова

    2013-09-01

    Full Text Available The social transformation is considered as a process of qualitative changes in society, creating a new level of organization in all areas of life, in different social formations and societies at different stages of development. The purpose of the study is to create a universal model for studying social transformations based on their understanding as the consequence of information exchange processes in society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis, and comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of society's transformation as well. The information model of social transformations is based on the definition of society's transformation as the change in the information that functions in the society's information space. The study of social transformations is the study of information flows circulating in the society and characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by factors representing the whole complex of material objects. The integrated information model of social transformations foresees the interaction of the following components: social memory, information space, and the social ideal. To determine the dynamics and intensity of social transformations, the author uses the notions of "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold allows to simulate the course of social development, to predict the

  15. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.
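The sequestration prediction can be illustrated with a deliberately tiny agent-based sketch: inhibitor "agents" random-walk on a lattice and bind when they land on an actin-filament site, lowering the free-inhibitor pool. All counts, the lattice geometry, and the irreversible-binding rule are invented, and the sketch is far simpler than the authors' molecule/receptor/gene model.

```python
import random

# Toy agent-based sketch: inhibitor molecules random-walk on a 1-D lattice;
# cells in FILAMENT act as actin binding sites that sequester them.
# Geometry, counts and the irreversible-binding rule are all assumptions.

random.seed(42)
SIZE = 50
FILAMENT = set(range(20, 25))        # lattice sites occupied by actin
free = [random.randrange(SIZE) for _ in range(100)]  # inhibitor positions
bound = 0

for _ in range(200):                 # simulation steps
    moved = []
    for pos in free:
        pos = (pos + random.choice((-1, 1))) % SIZE
        if pos in FILAMENT:
            bound += 1               # sequestered: removed from free pool
        else:
            moved.append(pos)
    free = moved

print(len(free), bound)              # conservation: free + bound == 100
```

Even this crude version shows the qualitative effect the abstract predicts: the free-inhibitor pool available for NF-kappaB regulation shrinks as filament sites soak up agents.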

  16. An Information Management System Model for the Industrial Incidents in Saudi Arabia: A Conceptual Framework Based on SDLC Methodology

    Directory of Open Access Journals (Sweden)

    Saleh Al-Zahrani

    2006-01-01

    Full Text Available The main focus of this study has been the development of a conceptual framework for improving the current status of industrial accident control. The framework aims to use ICT to improve information exchange between the Civil Defence and the Industrial Sector and to provide an information management system model for the Industrial Incidents Administration System (IIAS). The proposed system is designed to highlight the method by which data should be transferred between the Civil Defence and the Industrial Sector, as well as other emergency services involved in assessing and controlling industrial accidents. The study used a survey in the form of a questionnaire and face-to-face interviews, supplemented by a document analysis of activities relating to those two sectors and direct observation. The conceptual model is based on the traditional system development life cycle (SDLC) methodology. The study found that designing an information system network to link the Civil Defence and the Industrial Sector in Saudi Arabia, facilitating the exchange of information to control industrial accidents, is considered important for improving the current situation. As a result of this study, an information management system model was proposed. Such a model can be expected to contribute to improving and developing the information exchange system between the two sectors.

  17. Exploring User Engagement in Information Networks: Behavioural – based Navigation Modelling, Ideas and Directions

    Directory of Open Access Journals (Sweden)

    Vesna Kumbaroska

    2017-04-01

    Full Text Available Revealing the endless array of user behaviors in an online environment is a very good indicator of the user's interests, whether in browsing or purchasing. One such behavior is navigation behavior: detected user navigation patterns can be used for practical purposes such as improving user engagement, turning browsers into buyers, personalizing content or interfaces, etc. In this regard, our research connects navigation modelling with user engagement. We propose using the Generalized Stochastic Petri Nets concept for stochastic behavioural-based modelling of the navigation process in order to measure user engagement components. Different types of users are automatically identified and clustered according to their navigation behaviors, so the developed model gives great insight into the navigation process. As part of this study, Peterson's model for measuring user engagement is explored and a direct calculation of its components is illustrated. At the same time, assuming that several user sessions/visits are initialized in a certain time frame, following the Petri Net dynamics indicates that the proposed behavioural-based model could be used for calculating user engagement metrics; some basic ideas are discussed and initial directions are given.

  18. River Water Quality Model Based on Remote Sensing Information Methods--A Case Study of Lijing River in Guilin City

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    River water quality models based on remote sensing information models are superior to pure water quality models because they combine the inevitability and risk of geographical phenomena and can take complex geographical characteristics into account. A water quality model for forecasting COD has been established with remote sensing information modeling methods by monitoring and analyzing water quantity and water quality of the Lijing River reach which flows through a complicated Karst mountain area. This model provides a good tool to predict water quality of complex rivers. It is validated by simulating contaminant concentrations of the study area. The results show that remote sensing information models are suitable for complex geography. It is not only a combined model of inevitability and risk of the geographical phenomena, but also a semi-theoretical and semi-empirical formula, providing a good tool to study organic contaminants in complicated rivers. The coefficients and indices obtained have limited value and the model is not suitable for all situations. Some improvements are required.
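As an illustration of the semi-theoretical, semi-empirical flavour of such models, a COD predictor can be fitted to remote-sensing band values by least squares. The band ratios and COD concentrations below are synthetic, not measurements from the Lijing River.

```python
# Toy semi-empirical water-quality model: fit COD concentration as a
# linear function of a remote-sensing band ratio, then predict.
# All numbers are synthetic, not measurements from the Lijing River.

band_ratio = [0.20, 0.35, 0.50, 0.65, 0.80]   # e.g. NIR/red reflectance
cod = [12.0, 18.0, 24.0, 30.0, 36.0]          # COD in mg/L (toy data)

n = len(band_ratio)
mean_x = sum(band_ratio) / n
mean_y = sum(cod) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(band_ratio, cod)) \
        / sum((x - mean_x) ** 2 for x in band_ratio)
intercept = mean_y - slope * mean_x

def predict_cod(x):
    return intercept + slope * x

print(round(slope, 2), round(intercept, 2), round(predict_cod(0.4), 2))
```

As the abstract cautions, the fitted coefficients only hold for the catchment that produced the calibration data; a different river needs a new fit.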

  19. Model of information diffusion

    CERN Document Server

    Lande, D V

    2008-01-01

    A system of cellular automata, which expresses the process of dissemination and publication of news among separate information resources, is described. A bell-shaped dependence of news diffusion on internet sources (web sites) coheres well with the real behavior of thematic data flows and, over local time spans, with known models, e.g., exponential and logistic ones.
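The bell-shaped diffusion the abstract mentions falls out of a logistic model: cumulative adoption follows an S-curve, so new publications per step rise and then fall. A minimal deterministic sketch, with invented parameters:

```python
# Logistic sketch of news diffusion across information resources:
# cumulative publications follow an S-curve, so publications per step
# form a bell shape. K, r and the horizon are illustrative.

K, r = 1000.0, 0.5          # carrying capacity (total sources), growth rate
n, history = 1.0, []
for _ in range(40):
    new = r * n * (1.0 - n / K)   # logistic increment = new publications
    history.append(new)
    n += new

peak = history.index(max(history))
print(peak, round(max(history), 1))   # peak lies in the interior: bell shape
```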

  20. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  1. A Partition-Based Active Contour Model Incorporating Local Information for Image Segmentation

    Directory of Open Access Journals (Sweden)

    Jiao Shi

    2014-01-01

    Full Text Available Active contour models are usually designed on the assumption that images are approximated by regions with piecewise-constant intensities. This assumption, however, cannot be satisfied for intensity-inhomogeneous images, which frequently occur in the real world and induce considerable difficulties in image segmentation. A milder assumption, that the image is statistically homogeneous within different local regions, may better suit real-world images. By taking local image information into consideration, an enhanced active contour model is proposed to overcome the difficulties caused by intensity inhomogeneity. In addition, according to curve evolution theory, only the region near the contour boundaries is supposed to be evolved in each iteration. We detect the regions near contour boundaries adaptively to satisfy this requirement of curve evolution theory. In the proposed method, pixels within a selected region near the contour boundaries have the opportunity to be updated in each iteration, which enables the contour to evolve gradually. Experimental results on synthetic and real-world images demonstrate the advantages of the proposed model when dealing with intensity-inhomogeneous images.
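The effect of the milder, locally-homogeneous assumption can be shown numerically: under a strong intensity bias, a global threshold misclassifies pixels near the edge, while comparison against a local window mean recovers both regions. The one-dimensional synthetic "image" below is an invented illustration, not the authors' model.

```python
import numpy as np

# Synthetic intensity-inhomogeneous "signal": a step edge at column 15
# on top of a linear bias. A global threshold misclassifies pixels near
# the step, while comparison with a local window mean (the local
# information) recovers both regions. All values are invented.

cols = np.arange(20, dtype=float)
row = (cols >= 15) * 2.0 + 0.3 * cols       # step (+2.0) plus linear bias
truth = cols >= 15

glob = row > row.mean()                     # global threshold

half = 3                                    # local window half-width
local = np.array([
    row[i] > row[max(0, i - half):i + half + 1].mean() + 1e-9
    for i in range(cols.size)
])

print((glob == truth).sum(), (local == truth).sum())
```

The global threshold mislabels the brightest biased background pixels; the local comparison, which only asks whether a pixel stands out from its neighbourhood, classifies every column correctly.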

  3. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    This dissertation presents the results of a three-year long case study of an information systems development project where a scheduling and control system was developed for a manufacturing company. The project goal was to test the feasibility of a new technology called advanced mathematical... organizations that are both distributed and loosely coupled. Given the current trends towards telecommuting and international mergers, the development project presented a setting for research that addressed both a theoretical gap and pressing practical needs. In order to achieve this goal I had... with a relativist approach. Arriving at the design of an ISD methodology required combining previous theoretical results with observations from the case study. The case study showed some of the key elements to be integrated in the methodology. Firstly, plans and models are subject to a high degree...

  4. Progress in bionic information processing techniques for an electronic nose based on olfactory models

    Institute of Scientific and Technical Information of China (English)

    LI Guang; FU Jun; ZHANG Jia; ZHENG JunBao

    2009-01-01

    As a novel bionic analytical technique, an electronic nose, inspired by the mechanism of the biological olfactory system and integrated with modern sensing technology, electronic technology and pattern recognition technology, has been widely used in many areas. Moreover, recent basic research findings in biological olfaction, combined with computational neuroscience, promote its development both in methodology and application. In this review, the basic information processing principles of biological olfaction and artificial olfaction are summarized and compared, and four olfactory models and their applications to electronic noses are presented. Finally, a chaotic olfactory neural network is detailed, and the utilization of several biologically oriented learning rules and its spatiotemporal dynamic properties for electronic noses is discussed. Integrating various phenomena of biological olfaction and their mechanisms into an electronic nose context for information processing will not only make electronic noses more bionic, but also let them perform better than conventional methods. However, many problems still remain, which should be solved by further cooperation between theorists and engineers.

  5. A Metadata Model Based on Coupling Testing Information to Increase Testability of Component

    Institute of Scientific and Technical Information of China (English)

    MA Liang-li; GUO Fu-liang; WU Zhao-hui

    2008-01-01

    A software component must be tested every time it is reused, in order to assure the quality of both the component itself and the system into which it is integrated, so increasing the testability of components has become a key technology in the software engineering community. Here a method to increase component testability is introduced. The meaning of component testability and effective ways to increase testability are summarized. Then definitions of the component coupling testing criterion, DU-I (Definition-Use Information) and OP-Vs (Observation-Point Values) are given. Based on these, a definition-use table (DU-table), which includes DU-I and OP-Vs items, is introduced to help component testers better understand and observe the interior details of the component under test. A framework of testable components based on the DU-table is then given. These facilities provide ways to detect errors and to observe state variables through an observation-point-based monitoring mechanism. The methods are applied to an application developed earlier by the authors, and test cases are generated. The method is then compared with the Orso method and the Kan method on the same example, and the comparison results are presented. The results illustrate the validity of the method, which effectively generates test cases and kills more mutants.

  6. Testing of money multiplier model for Pakistan: does monetary base carry any information?

    Directory of Open Access Journals (Sweden)

    Muhammad Arshad Khan

    2010-02-01

    Full Text Available This paper tests the constancy and stationarity of the mechanical version of the money multiplier model for Pakistan using monthly data over the period 1972M1-2009M2. We split the data into pre-liberalization (1972M1-1990M12) and post-liberalization (1991M1-2009M2) periods to examine the impact of financial sector reforms. We first examine the constancy and stationarity of the money multiplier; the results suggest that the money multiplier remains non-stationary for the entire sample period and both sub-periods. We then test for cointegration between the money supply and the monetary base and find evidence of cointegration between the two variables for the entire period and the two sub-periods. The coefficient restrictions are satisfied only for the post-liberalization period. Two-way long-run causality between the money supply and the monetary base is found for the entire period and the post-liberalization period. For the post-liberalization period, evidence of short-run causality running from the monetary base to the money supply is also identified. On the whole, the results suggest that the money multiplier model can serve as a framework for conducting short-run monetary policy in Pakistan. However, the monetary authority may consider the co-movements between the money supply and reserve money when conducting monetary policy.

  7. Power distribution system diagnosis with uncertainty information based on rough sets and clouds model

    Science.gov (United States)

    Sun, Qiuye; Zhang, Huaguang

    2006-11-01

    During a distribution system fault, the explosive growth of signals, involving both fuzziness and randomness, is usually too redundant for the dispatcher to make the right decisions. The volume of data, with its uncertainties, overwhelms classic information systems in the distribution control center and exacerbates the existing knowledge acquisition process of expert systems. Intelligent methods must therefore be developed to aid users in maintaining and using this abundance of information effectively. An important issue in a distribution fault diagnosis system (DFDS) is to allow the discovered knowledge to be as close as possible to natural language, in order to satisfy user needs with tractability and to give the DFDS robustness. At this juncture, the paper describes a systematic approach for detecting superfluous data. The approach therefore offers users both the opportunity to learn about the data and to validate the extracted knowledge. It is considered a "white box", rather than a "black box" as in the case of a neural network. Cloud theory is introduced, and the mathematical description of a cloud effectively integrates the fuzziness and randomness of linguistic terms in a unified way. Based on it, a method of knowledge representation in DFDS is developed which bridges the gap between quantitative and qualitative knowledge. In relation to classical rough sets, the cloud-rough method can deal with the uncertainty of attributes and perform a soft discretization of continuous ones (such as current and voltage). A novel approach, including discretization, attribute reduction, rule reliability computation and equipment reliability computation, is presented. The data redundancy is greatly reduced through an integrated use of cloud theory and rough set theory. An illustration with a power distribution DFDS shows the effectiveness and practicality of the proposed approach.
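
    The cloud model underlying this record represents a linguistic term by three numbers: expectation Ex, entropy En (fuzziness) and hyper-entropy He (randomness of the entropy), from which "cloud drops" are generated. A minimal sketch of the standard forward normal cloud generator follows; the function name and defaults are our illustrative choices, not the paper's code.

    ```python
    import math
    import random

    def normal_cloud_drops(ex, en, he, n, seed=0):
        """Forward normal cloud generator: produce n cloud drops (x, mu) for a
        linguistic term described by expectation Ex, entropy En and
        hyper-entropy He."""
        rng = random.Random(seed)
        drops = []
        for _ in range(n):
            en_i = rng.gauss(en, he)                      # per-drop entropy sample
            x = rng.gauss(ex, abs(en_i))                  # drop position
            mu = 1.0 if en_i == 0 else math.exp(-(x - ex) ** 2 / (2 * en_i ** 2))
            drops.append((x, mu))                         # mu = certainty degree
        return drops
    ```

    With He = 0 this degenerates to an ordinary Gaussian membership function; increasing He thickens the cloud, which is how the model captures randomness on top of fuzziness in a unified way.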

  8. Evaluating Learning Effectiveness of an Information Law Course in a Blended Learning Environment Based on the Kirkpatrick Model

    Directory of Open Access Journals (Sweden)

    Naicheng Chang

    2015-10-01

    Full Text Available The purpose of this study was to apply the Kirkpatrick four-level model (reaction, learning, behavior, result) to evaluate the learning effectiveness of students who studied the general education digital materials of “Information Law” in a blended learning environment. The study used a mixed-method approach: a mainly quantitative online questionnaire, followed by semi-structured interviews. The results demonstrated that the students had high overall satisfaction with the course (reaction level), positive learning outcomes (learning level), and a positive behavior transfer after learning, occurring either immediately or after a period of time (behavior level). After learning information law, the students worked efficiently and contributed positively to their organizations and the information society (result level). Based on the Kirkpatrick model, the results provide favorable evidence for the students’ learning effectiveness and the course’s value.

  9. Perspectives of IT Artefacts: Information Systems based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    A solution for production scheduling that is lately attracting the interests of the manufacturing industry involves the use of complex mathematical modeling techniques in scheduling software. However this technology is fairly unknown among manufacturing practitioners, as are the social problems...... of its development and use. The aim of this article is to show how an approach based on multiple perspectives can help understand the emergence of complex software and help understand why and how the reasons and motives of the different stakeholders are, at times, incompatible....

  10. Information model of economy

    Directory of Open Access Journals (Sweden)

    N.S.Gonchar

    2006-01-01

    Full Text Available A new stochastic model of economy is developed, in which the choices of consumers are dependent random fields. Axioms of such a model are formulated. The existence of random fields of consumer choice and of decision making by firms is proved. New notions of conditionally independent random fields and of random fields of evaluation of information by consumers are introduced. Using the above-mentioned random fields, the random fields of consumer choice and decision making by firms are constructed. The theory of economic equilibrium is developed.

  11. Classification model based on mutual information

    Institute of Scientific and Technical Information of China (English)

    张震; 胡学钢

    2011-01-01

    Concerning the relevance between attributes and the differing contributions of attribute values to attribute weights in classification datasets, an improved classification model based on mutual information was proposed, together with formulas for calculating the impact factor and the sample forecast information. The classification model predicts the classes of unlabelled objects with the sample forecast information. Finally, the experimental results show that the classification model based on mutual information can effectively improve the prediction precision and accuracy of the classification algorithm.
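
    The central quantity of this record, the mutual information between a discrete attribute and the class label, can be estimated directly from co-occurrence counts. A minimal sketch (function name ours, not from the paper):

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """Estimate I(X;Y) in bits from paired samples of a discrete
        attribute X and a class label Y."""
        n = len(xs)
        px, py = Counter(xs), Counter(ys)
        pxy = Counter(zip(xs, ys))
        # sum over observed (x, y) pairs of p(x,y) * log2(p(x,y) / (p(x)p(y)))
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())
    ```

    An attribute that fully determines the class reaches I = H(Y), while an independent attribute scores near 0, so normalizing these values across attributes yields the kind of attribute weights the model describes.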

  12. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence tools. But in order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the business model of the organization. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.

  13. Entropy-based Cartographic Communication Models: Evolution from Special to General Cartographic Information Theory

    Directory of Open Access Journals (Sweden)

    LI Zhilin

    2016-07-01

    Full Text Available Maps are a powerful means of helping people understand the objective world, and the key function of a map is to transmit spatial information. The measurement of the spatial information of maps dates back to the 1960s, when the information theory of communication was introduced to the field of cartography. The introduction led to a new branch of cartography, i.e. cartographic information theory. This paper provides a review of the development of cartographic information theory over the past 50 years. Emphasis is on the evolution from the special to the general cartographic information theory.

  14. Test of the technology acceptance model for a Web-based information system in a Hong Kong Chinese sample.

    Science.gov (United States)

    Cheung, Emily Yee Man; Sachs, John

    2006-12-01

    The modified technology acceptance model was used to predict actual Blackboard usage (a web-based information system) in a sample of 57 Hong Kong student teachers whose mean age was 27.8 yr. (SD = 6.9). While the general form of the model was supported, Application-specific Self-efficacy was a more powerful predictor of system use than Behavioural Intention as predicted by the theory of reasoned action. Thus in this cultural and educational context, it has been shown that the model does not fully mediate the effect of Self-efficacy on System Use. Also, users' Enjoyment exerted considerable influence on the component variables of Usefulness and Ease of Use and on Application-specific Self-efficacy, thus indirectly influencing system usage. Consequently, efforts to gain students' acceptance and, therefore, use of information systems such as Blackboard must pay adequate attention to users' Self-efficacy and motivational variables such as Enjoyment.

  15. Bayesian methods for quantitative trait loci mapping based on model selection: approximate analysis using the Bayesian information criterion.

    Science.gov (United States)

    Ball, R D

    2001-11-01

    We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
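
    The model-selection machinery this record describes is easy to sketch: BIC is computed for an OLS regression on every marker subset, and exp(-BIC/2), normalized over subsets, approximates each model's posterior probability. The code below is a hedged illustration, not the paper's implementation; the BIC-delta variant would simply scale the k·log(n) penalty term by delta.

    ```python
    import itertools
    import numpy as np

    def bic(y, X):
        """BIC for an OLS regression of y on design matrix X (intercept included)."""
        n = len(y)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        return n * np.log(rss / n) + X.shape[1] * np.log(n)

    def model_posteriors(y, markers):
        """Approximate posterior probability of every marker subset from its BIC."""
        n, m = markers.shape
        bics = {}
        for size in range(m + 1):
            for subset in itertools.combinations(range(m), size):
                X = np.column_stack([np.ones(n)] + [markers[:, j] for j in subset])
                bics[subset] = bic(y, X)
        best = min(bics.values())
        w = {s: np.exp(-(b - best) / 2) for s, b in bics.items()}  # relative evidence
        total = sum(w.values())
        return {s: v / total for s, v in w.items()}
    ```

    Summing the posterior over all subsets of nonzero size then gives the posterior probability that a QTL is linked to at least one marker, and averaging over models in this way is what avoids the selection bias noted at the end of the abstract.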

  16. Geo-Information Logistical Modeling

    Directory of Open Access Journals (Sweden)

    Nikolaj I. Kovalenko

    2014-11-01

    Full Text Available This paper examines geo-information logistical modeling. The author illustrates the similarities between geo-informatics and logistics in the area of spatial objectives; illustrates that applying geo-data expands the potential of logistics; brings to light geo-information modeling as the basis of logistical modeling; describes the types of geo-information logistical modeling; describes situational geo-information modeling as a variety of geo-information logistical modeling.

  17. A graph model for preventing railway accidents based on the maximal information coefficient

    Science.gov (United States)

    Shao, Fubo; Li, Keping

    2017-01-01

    A number of factors influence railway safety. It is important to identify the key influencing factors and to build the relationship between railway accidents and their influencing factors. The maximal information coefficient (MIC) is a good measure of dependence for two-variable relationships, able to capture a wide range of associations. Employing MIC, a graph model is proposed for preventing railway accidents that avoids complex mathematical computation. In the graph, nodes denote influencing factors of railway accidents, and edges represent the dependence of the two linked factors. As the dependence level increases, the graph changes from a globally coupled graph to isolated points. Moreover, the important influencing factors, which are the key items to monitor, are identified from among the many factors. Then the relationship between railway accidents and the important influencing factors is obtained by employing artificial neural networks. With this relationship, a warning mechanism is built by defining a dangerous zone: if the related factors fall into the dangerous zone during railway operations, the warning level should be raised. The warning mechanism can prevent railway accidents and promote railway safety.
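
    The graph construction step can be sketched independently of how dependence is scored: connect two factors when their pairwise score reaches a threshold, so that raising the threshold sparsifies the graph toward isolated points, as described above. In the sketch below, absolute Pearson correlation stands in for MIC (the paper's measure, which additionally captures non-linear association); the function names and threshold are our illustrative choices.

    ```python
    import numpy as np

    def dependence_graph(data, names, score, threshold):
        """Edges between factor pairs whose pairwise dependence score
        reaches the threshold; data has one column per factor."""
        edges = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if score(data[:, i], data[:, j]) >= threshold:
                    edges.append((names[i], names[j]))
        return edges

    def abs_pearson(x, y):
        """Stand-in dependence measure for this sketch (MIC in the paper)."""
        return abs(float(np.corrcoef(x, y)[0, 1]))
    ```

    A strongly coupled pair of factors keeps its edge even at a high threshold, while unrelated factors drop out, leaving only the important, monitorable dependencies.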

  18. Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model

    Directory of Open Access Journals (Sweden)

    Mojtaba Salehi

    2013-03-01

    Full Text Available In recent years, the explosion of learning materials in web-based educational systems has made it difficult for learners to locate appropriate materials. Personalized recommendation is an enabling mechanism to overcome the information overload occurring in the new learning environments and to deliver suitable materials to learners. Since users express their opinions based on some specific attributes of items, this paper proposes a hybrid recommender system for learning materials based on their attributes, to improve the accuracy and quality of recommendation. The presented system has two main modules: an explicit attribute-based recommender and an implicit attribute-based recommender. In the first module, the weights of implicit or latent attributes of materials for a learner are treated as chromosomes in a genetic algorithm, which optimizes the weights according to historical ratings. Recommendations are then generated by a Nearest Neighborhood Algorithm (NNA) using the optimized weight vectors of implicit attributes, which represent the opinions of learners. In the second module, a preference matrix (PM) is introduced that models the interests of a learner based on the explicit attributes of learning materials in a multidimensional information model. A new similarity measure between PMs is introduced, and recommendations are again generated by the NNA. The experimental results show that the proposed method outperforms current algorithms on accuracy measures and can alleviate problems such as cold-start and sparsity.
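
    The genetic-algorithm step, treating a learner's attribute-weight vector as a chromosome and evolving it against historical ratings, can be sketched compactly. Everything below (operators, population sizes, the weighted-sum rating model) is our illustrative choice under those assumptions, not the paper's implementation.

    ```python
    import random

    def evolve_weights(attrs, ratings, pop=30, gens=60, seed=1):
        """GA sketch: each chromosome is a vector of attribute weights; fitness is
        how well the weighted attribute sum reproduces historical ratings.
        Assumes at least two attributes (for one-point crossover)."""
        rng = random.Random(seed)
        dim = len(attrs[0])

        def fitness(w):
            return -sum((sum(wi * ai for wi, ai in zip(w, a)) - r) ** 2
                        for a, r in zip(attrs, ratings))

        population = [[rng.random() for _ in range(dim)] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop // 2]            # elitist selection
            children = []
            while len(children) < pop - len(parents):
                p1, p2 = rng.sample(parents, 2)
                cut = rng.randrange(1, dim)             # one-point crossover
                child = p1[:cut] + p2[cut:]
                if rng.random() < 0.2:                  # gaussian mutation
                    child[rng.randrange(dim)] += rng.gauss(0, 0.1)
                children.append(child)
            population = parents + children
        return max(population, key=fitness)
    ```

    The evolved weight vector would then feed the nearest-neighbour stage, where similarity between materials is computed on the weighted attributes.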

  19. A Neural Network Based Hybrid Mixture Model to Extract Information from Non-linear Mixed Pixels

    Directory of Open Access Journals (Sweden)

    Uttam Kumar

    2012-09-01

    Full Text Available Signals acquired by sensors in the real world are non-linear combinations, requiring non-linear mixture models to describe the resultant mixture spectra for the endmembers' (pure pixels') distribution. This communication discusses inferring class fractions through a novel hybrid mixture model (HMM). HMM is a three-step process, where the endmembers are first derived from the images themselves using the N-FINDR algorithm. These endmembers are used by the linear mixture model (LMM) in the second step, which provides an abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual ground proportions, are fed into a neural-network-based multi-layer perceptron (MLP) architecture to train the neurons. The neural output further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. HMM is first implemented and validated on simulated hyperspectral data of 200 bands and subsequently on real MODIS data with a spatial resolution of 250 m. The results on computer-simulated data show that the method gives acceptable results for unmixing pixels, with an overall RMSE of 0.0089 ± 0.0022 with the LMM and 0.0030 ± 0.0001 with the HMM when compared to actual class proportions. The unmixed MODIS images showed an overall RMSE of 0.0191 ± 0.022 with the HMM, compared to 0.2005 ± 0.41 for the LMM output alone, indicating that individual class abundances obtained from the HMM are very close to the real observations.
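
    The linear step of this pipeline, estimating per-pixel abundances from endmember spectra, is ordinary least squares with a sum-to-one constraint on the abundances. A hedged sketch follows; the soft-constraint weighting is our simplification (the paper derives the endmembers with N-FINDR first, and refines these estimates with an MLP afterwards).

    ```python
    import numpy as np

    def lmm_abundances(pixel, endmembers, s2o=100.0):
        """Linear mixture model: solve pixel ~= E @ a in least squares, with a
        heavily weighted extra row softly enforcing sum(a) == 1.
        `endmembers` is a list of endmember spectra (one band vector each)."""
        E = np.asarray(endmembers, float).T             # bands x endmembers
        A = np.vstack([E, s2o * np.ones((1, E.shape[1]))])
        b = np.append(np.asarray(pixel, float), s2o)
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        return a
    ```

    These linear abundance estimates, together with ground-truth fractions for training pixels, would then form the input to the MLP refinement stage that handles the non-linear part of the mixing.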

  20. Conceptual Model for Automatic Early Warning Information System of Infectious Diseases Based on Internet Reporting Surveillance System

    Institute of Scientific and Technical Information of China (English)

    JIA-QI MA; LI-PING WANG; XUAO-PENG QI; XIAO-MING SHI; GONG-HUAN YANG

    2007-01-01

    Objective To establish a conceptual model of automatic early warning of infectious diseases based on an internet reporting surveillance system, with a view to realizing an automated warning system on a daily basis and timely identifying potential outbreaks of infectious diseases. Methods The statistical conceptual model was established using historical surveillance data with a moving percentile method. Results Based on the infectious disease surveillance information platform, the conceptual model for early warning was established. The parameter, threshold, and revised sensitivity and specificity of the early warning value were changed to realize dynamic alerts of infectious diseases on a daily basis. Conclusion The instructive conceptual model of dynamic alerts can be used as a validating tool in institutions of infectious disease surveillance in different districts.
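
    The moving percentile idea behind such a daily alert can be sketched simply: each day, the current case count is compared against a chosen percentile of counts from comparable historical periods. The nearest-rank percentile and the parameter values below are our illustrative choices, not the paper's.

    ```python
    def percentile_threshold(history, p=0.95):
        """Nearest-rank p-th percentile of historical daily case counts."""
        s = sorted(history)
        k = max(0, min(len(s) - 1, int(round(p * len(s))) - 1))
        return s[k]

    def raise_warning(current, history, p=0.95):
        """Flag a potential outbreak when today's count exceeds the threshold."""
        return current > percentile_threshold(history, p)
    ```

    In practice the history window moves with the calendar, and p is tuned to trade sensitivity against specificity, mirroring the revised sensitivity and specificity adjustment the record describes.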

  1. Research on information models for the construction schedule management based on the IFC standard

    OpenAIRE

    Weirui Xue; Yaowu Wang; Qingpeng Man

    2015-01-01

    Purpose: The purpose of this article is to study the description and extension of the Industry Foundation Classes (IFC) standard in construction schedule management, which achieves information exchange and sharing among the different information systems and stakeholders, and facilitates collaborative construction in construction projects. Design/methodology/approach: Processing and coordinating schedule information is difficult in complex construction projects. Building In...

  2. The Model of ICT-Based Career Information Services and Decision-Making Ability of Learners

    Science.gov (United States)

    Syakir, Muhammad; Mahmud, Alimuddin; Achmad, Arifin

    2016-01-01

    One of the impacts of information technology in guidance counseling is in the implementation of the support system. Entering the world of globalization and rapid technological breadth of information requires counseling to adjust to the environment in order to meet the needs of learners. Therefore, cyber-counseling is now developing. It is one of…

  3. A Comprehensive Decision-Making Approach Based on Hierarchical Attribute Model for Information Fusion Algorithms’ Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Lianhui Li

    2014-01-01

    Full Text Available Aiming at the problem of evaluating fusion algorithm performance in a multiradar information fusion system, a hierarchical attribute model for track relevance performance evaluation is first established, based on the structural model and the functional model, and quantization methods for the evaluation indicators are given. Secondly, a combination weighting method is proposed to determine the weights of the evaluation indicators: the objective and subjective weights are determined separately by criteria importance through intercriteria correlation (CRITIC) and a trapezoidal fuzzy scale analytic hierarchy process (AHP), and an experience factor is then introduced to obtain the combination weight. Finally, an improved technique for order preference by similarity to ideal solution (TOPSIS), replacing Euclidean distance with the Kullback-Leibler divergence (KLD), is used to rank the weighted indicator values of the evaluation objects. An example is given to illustrate the correctness and feasibility of the proposed method.
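
    The evaluation pipeline of this record (combination weighting, then KLD-based TOPSIS) can be sketched as follows. Normalizing each alternative's weighted row to a distribution before taking the KL divergence is our simplification, and all criteria are assumed benefit-type; the names and the example weights are illustrative.

    ```python
    import math

    def topsis_kld(matrix, w_subj, w_obj, alpha=0.5):
        """Rank alternatives: combine subjective/objective weights via an
        experience factor alpha, then run TOPSIS with KL divergence in place
        of Euclidean distance. Higher score = closer to the ideal solution."""
        w = [alpha * ws + (1 - alpha) * wo for ws, wo in zip(w_subj, w_obj)]
        cols = list(zip(*matrix))
        norms = [math.sqrt(sum(x * x for x in c)) for c in cols]
        V = [[w[j] * row[j] / norms[j] for j in range(len(row))] for row in matrix]
        ideal = [max(c) for c in zip(*V)]               # positive ideal solution
        anti = [min(c) for c in zip(*V)]                # negative ideal solution

        def as_dist(v):
            s = sum(v)
            return [x / s for x in v]

        def kld(p, q, eps=1e-12):
            return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

        p_pos, p_neg = as_dist(ideal), as_dist(anti)
        scores = []
        for v in V:
            pv = as_dist(v)
            d_pos, d_neg = kld(p_pos, pv), kld(p_neg, pv)
            scores.append(d_neg / (d_pos + d_neg + 1e-12))
        return scores
    ```

    An alternative coinciding with the positive ideal scores near 1 and one coinciding with the negative ideal scores 0, so sorting by score gives the final ranking of fusion algorithms.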

  4. "Information-Based Economy" and Educational System

    OpenAIRE

    Mahir Terzi

    2006-01-01

    "Information-Based Economy", which is today's economy that is a proof and indicator of development level for the countries now on, comes on the scene with its new organizing model on its infrastructure, which is called "Information Society". The phenomenon of administration introduces to "e-Government" for reinforcing the roots of "Information-Based Economy" now. Having a systematic knowledge of the relation between "Information-Based Economy", "Information Society" and "e-Government" as a wh...

  5. New frontiers in information and production systems modelling and analysis incentive mechanisms, competence management, knowledge-based production

    CERN Document Server

    Novikov, Dmitry; Bakhtadze, Natalia; Zaikin, Oleg

    2016-01-01

    This book demonstrates how to apply modern approaches to complex system control in practical applications involving knowledge-based systems. The dimensions of knowledge-based systems are extended by incorporating new perspectives from control theory, multimodal systems and simulation methods.  The book is divided into three parts: theory, production system and information system applications. One of its main focuses is on an agent-based approach to complex system analysis. Moreover, specialised forms of knowledge-based systems (like e-learning, social network, and production systems) are introduced with a new formal approach to knowledge system modelling.   The book, which offers a valuable resource for researchers engaged in complex system analysis, is the result of a unique cooperation between scientists from applied computer science (mainly from Poland) and leading system control theory researchers from the Russian Academy of Sciences’ Trapeznikov Institute of Control Sciences.

  6. Detecting Appropriate Trajectories of Growth in Latent Growth Models: The Performance of Information-Based Criteria

    Science.gov (United States)

    Whittaker, Tiffany A.; Khojasteh, Jam

    2017-01-01

    Latent growth modeling (LGM) is a popular and flexible technique that may be used when data are collected across several different measurement occasions. Modeling the appropriate growth trajectory has important implications with respect to the accurate interpretation of parameter estimates of interest in a latent growth model that may impact…

  7. Information behaviour: models and concepts

    Directory of Open Access Journals (Sweden)

    Polona Vilar

    2005-01-01

    Full Text Available The article presents an overview of the research area of information behaviour. Information behaviour is defined as the behaviour of individuals in relation to information sources and channels, which results from their information needs and encompasses passive and active searching for information, and its use. Theoretical foundations are presented, as well as some fundamental conceptual models of information behaviour and related concepts: information seeking behaviour, which occurs in active, purposeful searching for information, regardless of the information source used; and information searching behaviour, which represents the micro-level of information seeking behaviour and is expressed by those individuals who interact with information retrieval systems.

  8. Dynamic Model of an Ammonia Synthesis Reactor Based on Open Information

    OpenAIRE

    Jinasena, Asanthi; Lie, Bernt; Glemmestad, Bjørn

    2016-01-01

    Ammonia is a widely used chemical, hence the ammonia manufacturing process has become a standard case study in the scientific community. In the field of mathematical modeling of the dynamics of ammonia synthesis reactors, there is a lack of complete and well documented models. Therefore, the main aim of this work is to develop a complete and well documented mathematical model for observing the dynamic behavior of an industrial ammonia synthesis reactor system. The mode...

  9. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    Science.gov (United States)

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications.

  10. A Costing Model for Project-Based Information and Communication Technology Systems

    Science.gov (United States)

    Stewart, Brian; Hrenewich, Dave

    2009-01-01

    A major difficulty facing IT departments is ensuring that the projects and activities to which information and communications technologies (ICT) resources are committed represent an effective, economic, and efficient use of those resources. This complex problem has no single answer. To determine effective use requires, at the least, a…

  11. Information Technology Security Professionals' Knowledge and Use Intention Based on UTAUT Model

    Science.gov (United States)

    Kassa, Woldeloul

    2016-01-01

    Information technology (IT) security threats and vulnerabilities have become a major concern for organizations in the United States. However, there has been little research on assessing the effect of IT security professionals' knowledge on the use of IT security controls. This study examined the unified theory of acceptance and use of technology…

  13. Embedding Web-Based Statistical Translation Models in Cross-Language Information Retrieval

    NARCIS (Netherlands)

    Kraaij, W.; Nie, J.Y.; Simard, M.

    2003-01-01

    Although more and more language pairs are covered by machine translation (MT) services, there are still many pairs that lack translation resources. Cross-language information retrieval (CUR) is an application that needs translation functionality of a relatively low level of sophistication, since

  16. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach.

    Directory of Open Access Journals (Sweden)

    Johannes P M Heinonen

    Full Text Available Individual-based models have gained popularity in ecology, and enable the simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for the respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges in replicating real-world population trends, but this model provides a useful tool for increasing understanding of the drivers of hen harrier dynamics and for focusing research efforts in order to inform conflict management decisions.

  17. Semantic-Based Knowledge Management in E-Government: Modeling Attention for Proactive Information Delivery

    Science.gov (United States)

    Samiotis, Konstantinos; Stojanovic, Nenad

    E-government has become almost synonymous with a consumer-led revolution of government services inspired and made possible by the Internet. With technology being the least of the worries for government organizations nowadays, attention is shifting towards managing complexity as one of the basic antecedents of operational and decision-making inefficiency. Complexity has traditionally preoccupied public administrations and owes its origins to several sources, primarily the cross-functional nature and the degree of legal structuring of administrative work. Both rely strongly on the underlying process and information infrastructure of public organizations; managing public-administration work thus implies managing its processes and information. Knowledge management (KM) and business process reengineering (BPR) have already been deployed with success by private organizations for the same purposes, and certainly constitute improvement practices worth investigating. Our contribution in this paper concerns the utilization of KM for e-government.

  18. To the problem of modelling tasks in the Russian language on the base of informational texts

    Directory of Open Access Journals (Sweden)

    Vasilyevikh Irina P.

    2016-01-01

    Full Text Available The article describes the modern educational situation from the standpoint of the Standard's requirements for the formation of universal (meta-subject) skills. The authors pose the problem of providing teachers with modern materials for developing and verifying the formation of communicative skills in the analysis of informational texts, and indicate the requirements that apply to the content of such materials.

  19. Perspectives of IT Artefacts: Information Systems based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    A solution for production scheduling that has lately been attracting the interest of the manufacturing industry involves the use of complex mathematical modeling techniques in scheduling software. However, this technology is fairly unknown among manufacturing practitioners, as are the social problems...

  20. Objective information about energy models

    Energy Technology Data Exchange (ETDEWEB)

    Hale, D.R. (Energy Information Administration, Washington, DC (United States))

    1993-01-01

    This article describes the Energy Information Administration's program to develop objective information about its modeling systems without hindering model development and applications, and within budget and human resource constraints. 16 refs., 1 fig., 2 tabs.

  1. Information Extraction from Research Papers based on Conditional Random Field Model

    Directory of Open Access Journals (Sweden)

    Zhu Shuxin

    2013-01-01

    Full Text Available With the increasing use of CiteSeer-style academic search engines, the accuracy of such systems has become more and more important. This paper adopts an improved particle swarm optimization (PSO) algorithm to train a conditional random field (CRF) model and applies it to the extraction of research papers' titles and citations. The improved PSO algorithm introduces a particle-swarm aggregation measure to prevent the swarm from premature local convergence, uses a linearly varying inertia factor and learning factors to update particle velocities, and controls the otherwise unbounded iteration using the rate of change of the particles' relative positions. Evaluating the trained CRF model on standard research-paper headers and references shows that, compared with a conventionally trained CRF model and a Hidden Markov Model, the CRF model optimized by improved PSO achieves better F1 scores and word error rates.
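
    The improved PSO described above can be sketched in outline. The following is a minimal, hypothetical illustration of PSO with a linearly decreasing inertia factor applied to a toy objective; the paper's swarm-aggregation safeguard and the CRF likelihood it actually optimizes are omitted, and all parameter values are illustrative assumptions.

```python
import random

def pso(objective, dim, n_particles=20, iters=200,
        w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Minimize `objective` with PSO using a linear inertia schedule."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best
    for t in range(iters):
        # inertia factor decreases linearly from w_start to w_end
        w = w_start - (w_start - w_end) * t / iters
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, fbest = pso(lambda x: sum(v * v for v in x), dim=3)
```

    In the paper's setting, the objective would be the negated CRF training likelihood over the labeled header/reference data rather than the sphere function used here.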

  2. Tuple-based morphisms for interoperability establishment of financial information models

    OpenAIRE

    Beça, Miguel Alexandre Sousa Ferro de

    2010-01-01

    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Electrical and Computer Engineering. The current financial crisis has demonstrated that there is a need for financial accounting data in a format which can be rapidly analyzed and exchanged. The appearance of XBRL in 2000 has helped create a 'de facto' standard data format for the exchange of financial information. However, XBRL by itself is not capable of ens...

  3. Promoting Coordinated Development of Community-Based Information Standards for Modeling in Biology: The COMBINE Initiative.

    Science.gov (United States)

    Hucka, Michael; Nickerson, David P; Bader, Gary D; Bergmann, Frank T; Cooper, Jonathan; Demir, Emek; Garny, Alan; Golebiewski, Martin; Myers, Chris J; Schreiber, Falk; Waltemath, Dagmar; Le Novère, Nicolas

    2015-01-01

    The Computational Modeling in Biology Network (COMBINE) is a consortium of groups involved in the development of open community standards and formats used in computational modeling in biology. COMBINE's aim is to act as a coordinator, facilitator, and resource for different standardization efforts whose domains of use cover related areas of the computational biology space. In this perspective article, we summarize COMBINE, its general organization, and the community standards and other efforts involved in it. Our goals are to help guide readers toward standards that may be suitable for their research activities, as well as to direct interested readers to relevant communities where they can best expect to receive assistance in how to develop interoperable computational models.

  4. Simulating Fire Disturbance and Plant Mortality Using Antecedent Eco-hydrological Conditions to Inform a Physically Based Combustion Model

    Science.gov (United States)

    Atchley, A. L.; Linn, R.; Middleton, R. S.; Runde, I.; Coon, E.; Michaletz, S. T.

    2016-12-01

    Wildfire is a complex agent of change that both affects and depends on eco-hydrological systems, thereby constituting a tightly linked system of disturbances and eco-hydrological conditions. For example, the structure, build-up, and moisture content of fuel depend on eco-hydrological regimes, which impacts fire spread and intensity. Fire behavior, on the other hand, determines the severity and extent of eco-hydrological disturbance, often resulting in a mosaic of untouched, stressed, damaged, or completely destroyed vegetation within the fire perimeter. This in turn drives new eco-hydrological system behavior. The cycles of disturbance and recovery present a complex evolving system with many unknowns, especially in the face of climate change, with implications for fire risk, water supply, and forest composition. Physically based numerical experiments that attempt to capture the complex linkages between eco-hydrological regimes that affect fire behavior and the eco-hydrological response to those fire disturbances help build the understanding required to project how fire disturbance and eco-hydrological conditions coevolve over time. Here we explore the use of FIRETEC—a physically based 3D combustion model that solves conservation of mass, momentum, energy, and chemical species—to resolve fire spread over complex terrain and fuel structures. Uniquely, we couple a physically based plant mortality model with FIRETEC and examine the resultant hydrologic impact. In this proof-of-concept demonstration we spatially distribute fuel structure and moisture content based on the eco-hydrological condition to use as input for FIRETEC. The fire behavior simulation then produces localized burn severity and heat injuries, which are used as input to a spatially informed plant mortality model. Ultimately we demonstrate the applicability of physically based models to explore integrated disturbance and eco-hydrologic response to wildfire behavior and specifically map how fire

  5. Perceptive visual attention model based on depth information for free viewpoint video rendering

    Science.gov (United States)

    Park, Min-Chul; Son, Jung-Young

    2009-05-01

    How to detect meaningful video representations has become an interesting problem in various research communities. A visual attention system detects regions of interest in an input video sequence; generally, the attended regions correspond to visually prominent objects. In this paper, we improve previous approaches that use spatiotemporal attention modules by making use of 3D depth-map information in addition to spatiotemporal features, so that the proposed method can compensate for the inaccuracy of typical spatiotemporal saliency approaches. Motion is an important cue for deriving temporal saliency, but noise that deteriorates its accuracy is also introduced during the computation and should be removed to obtain a more accurate saliency map. To address this, we draw on psychophysical studies of double-opponent receptive fields and noise filtration in the middle temporal (MT) area, and apply a "FlagMap" to each frame to prevent flickering of global-area noise. As a result, our system detects salient regions with higher accuracy while removing noise effectively. Applied to several image sequences, the proposed method describes salient regions more accurately than typical approaches do. The result can be used to generate a spontaneous viewpoint offered by the system itself for a 3D imaging projector or 3D TV.
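
    As a rough sketch of how such cues can be combined: each cue map is min-max normalized and linearly weighted, with larger depth values (nearer objects) raising saliency. The paper's double-opponent filtering and FlagMap steps are not reproduced; the weights and normalization below are illustrative assumptions.

```python
import numpy as np

def fuse_saliency(spatial, temporal, depth, w=(0.4, 0.3, 0.3)):
    """Combine spatial, temporal and depth cue maps into one saliency map.
    `w` is a hypothetical weight triple, not the paper's tuning."""
    def norm(m):
        # min-max normalize a map to [0, 1]; constant maps become zero
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w[0] * norm(spatial) + w[1] * norm(temporal) + w[2] * norm(depth)
```

    A pixel scoring high on all three cues receives the maximum fused saliency of 1.0.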

  6. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  8. Spatial Interpolation of Annual Runoff in Ungauged Basins Based on the Improved Information Diffusion Model Using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mei Hong

    2017-01-01

    Full Text Available Prediction in Ungauged Basins (PUB) is an important task for water resources planning and management and remains a fundamental challenge for the hydrological community. In recent years, geostatistical methods have proven valuable for estimating hydrological variables in ungauged catchments; however, four major problems restrict their development. We established a new information diffusion model based on a genetic algorithm (GIDM) for spatial interpolation of runoff in ungauged basins. Genetic algorithms (GA) generate high-quality solutions to optimization and search problems, so the optimal window-width parameter can be obtained with a GA. To test the new method, seven experiments interpolating annual runoff at 17 stations on the mainstream and tributaries of the Yellow River were carried out with the GIDM and compared with the inverse distance weighting (IDW) method, the Cokriging (COK) method, and conventional IDMs using the same sparse observed data. All seven experiments show that the GIDM can solve, to some extent, the four problems of previous geostatistical methods, and it obtains the best accuracy among the four models. The key problems of PUB research are the lack of observation data and the difficulty of information extraction, so the GIDM is a new and useful tool for solving the PUB problem and improving water management.
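
    The core of an information diffusion estimate can be sketched as follows. This is a simplified illustration assuming a Gaussian diffusion kernel over station coordinates, with the window width h standing in for the parameter the GA would tune (for example, by minimizing leave-one-out cross-validation error); it is not the paper's calibrated GIDM.

```python
import math

def diffusion_estimate(stations, target, h):
    """Estimate runoff at an ungauged point `target` as a weighted mean
    of gauged values, each observation diffused with a Gaussian kernel
    of window width `h`. `stations` is a list of ((x, y), value) pairs."""
    weights = []
    total = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        w = math.exp(-d2 / (2.0 * h * h))   # diffusion weight
        weights.append((w, value))
        total += w
    return sum(w * v for w, v in weights) / total
```

    With a small window width the nearest station dominates (IDW-like behaviour); a larger width smooths over the whole network, which is why tuning h matters.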

  9. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing identification ability. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
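
    The binomial backbone of such a scoring scheme can be illustrated as follows. This is a generic sketch, not ProVerB's actual scoring function: it computes the probability that at least k of n theoretical fragment peaks would match experimental peaks by chance, given a per-peak random-match probability p. The intensity weighting that ProVerB adds is omitted.

```python
from math import comb, log10

def binomial_tail(n_theoretical, n_matched, p_random):
    """P(X >= n_matched) for X ~ Binomial(n_theoretical, p_random):
    the chance of matching at least that many peaks at random."""
    return sum(comb(n_theoretical, k)
               * p_random ** k * (1.0 - p_random) ** (n_theoretical - k)
               for k in range(n_matched, n_theoretical + 1))

def match_score(n_theoretical, n_matched, p_random):
    """Negative log tail probability: higher = less likely random."""
    return -log10(binomial_tail(n_theoretical, n_matched, p_random))
```

    A peptide-spectrum match with many matched peaks thus receives a sharply higher score than one explicable by chance.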

  10. Agent Based Modelling of Communication Costs: Why Information Can Be Free

    Science.gov (United States)

    Čače, Ivana; Bryson, Joanna J.

    What purposes, other than facilitating the sharing of information, can language have served? First, it may not have evolved to serve any purpose at all. It is possible that language is just a side effect of the large human brain — a spandrel or exaptation — that only became useful later. If language is adaptive, this does not necessarily mean that it is adaptive for the purpose of communication. For example Dennett (1996) and Chomsky (1980) have stressed the utility of language in thinking. Also, there are different ways to view communication. The purpose of language according to Dunbar (1993), is to replace grooming as a social bonding process and in this way to ensure the stability of large social groups.

  11. Auto-Mapping and Configuration Method of IEC 61850 Information Model Based on OPC UA

    Directory of Open Access Journals (Sweden)

    In-Jae Shin

    2016-11-01

    Full Text Available The Open Platform Communications (OPC) Unified Architecture (UA) (IEC 62541) is introduced as a key technology for realizing a variety of smart grid (SG) use cases, enabling the relevant automation and control tasks. OPC UA can expand interoperability between power systems. The top-level SG management platform needs independent middleware to transparently manage the power information technology (IT) systems, including IEC 61850. To expand interoperability among power systems for a large number of stakeholders and various standards, this paper focuses on IEC 61850 for the digital substation. We propose an interconnection method that integrates communication with OPC UA and converts to the OPC UA AddressSpace using the System Configuration description Language (SCL) of IEC 61850, and we implemented the mapping process to verify the method. The interconnection method can expand interoperability between power systems by integrating the smart grid's various data structures into OPC UA.

  12. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both...... in gravity due to unmonitored non-hydrological effects, and the requirement of a gravitationally stable reference station. Application of TLRG in hydrology should be combined with other geophysical and/or traditional monitoring methods....

  13. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    Science.gov (United States)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network-certified by the US Army as the Dragon Pulse Information Management System. It is a network-available environment for modeling models, in which models are configured using domain-relevant semantics, use network-available systems, sensors, databases and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and on subject-matter input. Three recent Army use cases are discussed: (a) an ISR system of systems; (b) modeling and simulation behavior validation; (c) a networked digital library with behaviors.

  14. From Information to Experience: Place-Based Augmented Reality Games as a Model for Learning in a Globally Networked Society

    Science.gov (United States)

    Squire, Kurt D.

    2010-01-01

    Background/Context: New information technologies make information available just-in-time and on demand and are reshaping how we interact with information, but schools remain in a print-based culture, and a growing number of students are disaffiliating from traditional school. New methods of instruction are needed that are suited to the digital…

  15. The Path of New Information Technology Affecting Educational Equality in the New Digital Divide--Based on Information System Success Model

    Science.gov (United States)

    Zheng, Qian; Liang, Chang-Yong

    2017-01-01

    New information technology (new IT) plays an increasingly important role in the field of education, which greatly enriches the teaching means and promotes the sharing of education resources. However, because of the New Digital Divide existing, the impact of new IT on educational equality has yet to be discussed. Based on Information System Success…

  16. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Full Text Available Research on the processing of recognition information has focused on testing the recognition heuristic (RH). On the aggregate, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH or (b) nobody used it but its choice predictions were accidentally in line with the predictions of the strategy actually used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies based on a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question the process assumptions for apparent users of RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.

  17. The Approach Towards Equilibrium in a Reversible Ising Dynamics Model: An Information-Theoretic Analysis Based on an Exact Solution

    Science.gov (United States)

    Lindgren, Kristian; Olbrich, Eckehard

    2017-08-01

    We study the approach towards equilibrium in a dynamic Ising model, the Q2R cellular automaton, with microscopic reversibility and conserved energy for an infinite one-dimensional system. Starting from a low-entropy state with positive magnetisation, we investigate how the system approaches the equilibrium characteristics given by statistical mechanics. We show that the magnetisation converges to zero exponentially. The reversibility of the dynamics implies that the entropy density of the microstates is conserved in the time evolution. Still, it appears as if an equilibrium with a higher entropy density is approached. In order to understand this process, we solve the dynamics by formally proving how the information-theoretic characteristics of the microstates develop over time. With this approach we can show that an estimate of the entropy density based on finite-length statistics within microstates converges to the equilibrium entropy density. The process behind this apparent entropy increase is a dissipation of correlation information over increasing distances. It is shown that the average information-theoretic correlation length increases linearly in time, equivalent to a corresponding increase in excess entropy.
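
    The Q2R rule itself is compact enough to sketch. In the common reversible implementation assumed here (the paper works with the infinite one-dimensional system; this toy uses a small periodic ring with alternating sublattice updates), a spin flips exactly when its two neighbours are antiparallel, which leaves the nearest-neighbour Ising energy unchanged.

```python
def energy(spins):
    """Nearest-neighbour Ising energy on a periodic ring."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def q2r_step(spins, parity):
    """One sublattice update of a 1D Q2R automaton: a spin on the given
    parity flips iff its two neighbours are antiparallel. The flip
    changes the energy by 2*s_i*(s_{i-1}+s_{i+1}) = 0, so energy is
    conserved exactly."""
    n = len(spins)
    out = spins[:]
    for i in range(parity, n, 2):
        if spins[(i - 1) % n] + spins[(i + 1) % n] == 0:
            out[i] = -spins[i]
    return out
```

    Because the updated spins' neighbours lie on the other sublattice and are untouched, repeating a step with the same parity undoes it: each half-step is an involution, making the dynamics microscopically reversible.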

  18. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and for localizing the thermal images within the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. Using these models, auditors can conduct virtual walk-throughs of buildings and explore the as-is condition of the building geometry and the associated thermal conditions in 3D. Second, to address the challenges of qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, Energy Performance Augmented Reality (EPAR) models are formed, which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertices in 3D. Then, based on historical weather data reflecting the energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost of this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy-efficient retrofit decision-making. This is a major departure from offhand calculations that are based on historical cost data of industry best practices.
Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  19. Information risk and security modeling

    Science.gov (United States)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO 15408, the Center for Internet Security guidelines, NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO 13335 and ITIL/ITMS, and architectural guidelines such as ISO 7498-2 will be explained. Business-process-level standards such as ISO 17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO 21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics, the presentation will explore the appropriate usage of these standards and discuss their approaches to conducting risk and security metrics. The research findings demonstrate the need for a common baseline for both risk and security metrics. The paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics, and that such an approach spans all the mentioned standards. The proposed approach, a 3D visual presentation and development of the Information Security Model, will be analyzed and postulated, clearly demonstrating the benefits of the attribute-based approach and of a defined risk and security space for modeling and measuring.

  20. Modeling land-based nitrogen loads from groundwater-dominated agricultural watersheds to estuaries to inform nutrient reduction planning

    Science.gov (United States)

    Jiang, Yefang; Nishimura, Peter; van den Heuvel, Michael R.; MacQuarrie, Kerry T. B.; Crane, Cindy S.; Xing, Zisheng; Raymond, Bruce G.; Thompson, Barry L.

    2015-10-01

    Excessive nitrate loads from intensive potato production have been linked to recurring anoxic events in many estuaries in Prince Edward Island (PEI), Canada. Community-led, watershed-based nutrient reduction planning has been promoted as a strategy for water quality restoration, and initial nitrate load criteria have been proposed for the impacted estuaries. An integrated modeling approach was developed to predict base-flow nitrate loads to inform the planning activities in these groundwater-dominated agricultural watersheds. The nitrate load is calculated as base flow multiplied by the average nitrate concentration at the receiving watershed outlet. The average nitrate concentration is estimated as the integration of nitrate leaching concentration over the watershed area minus a nitrate loss coefficient that accounts for long-term nitrate storage in the aquifer and losses from the recharge to the discharge zones. Nitrate leaching concentrations from potato rotation systems were estimated with a LEACHN model, and the land use areas were determined from satellite image data (2006-2009) using GIS. The simulated average nitrate concentrations are compared with the arithmetic average of nitrate concentration measurements in each of 27 watersheds for model calibration and in 138 watersheds for model verification during 2006-2009. Sensitivity of the model to variations in land use mapping errors, nitrate leaching concentrations from key sources, and the nitrate loss coefficient was tested. The calibration and verification statistics and the sensitivity analysis show that the model can provide accurate nitrate concentration predictions for watersheds with drainage areas of more than 5 km2 and nitrate concentrations over 2 mg N L-1, while the model resolution for watersheds with drainage areas below 5 km2 and/or nitrate concentrations below 2 mg N L-1 may not be sufficient for nitrate load management purposes. Comparisons of normalized daily stream discharges among the
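
    The load calculation described above is a simple mass balance. The sketch below illustrates it with hypothetical numbers and unit conventions (mg N/L concentrations, m3/day base flow, land-use areas in any consistent unit), not the paper's calibrated values.

```python
def baseflow_nitrate_load(base_flow_m3_per_day, land_uses, loss_mgN_per_L):
    """Base-flow nitrate load: the area-weighted mean of leaching
    concentrations over the watershed, minus the loss coefficient,
    multiplied by base flow. `land_uses` is a list of (area, C) pairs.
    mg/L x m3/day x 1000 L/m3 / 1e6 mg/kg = 1e-3 gives kg N/day."""
    total_area = sum(a for a, _ in land_uses)
    mean_c = sum(a * c for a, c in land_uses) / total_area - loss_mgN_per_L
    return base_flow_m3_per_day * mean_c * 1e-3  # kg N per day
```

    For example, two land-use classes of areas 2 and 3 leaching 5 and 10 mg N/L give an area-weighted mean of 8 mg N/L; a loss coefficient of 1 mg N/L and a base flow of 1000 m3/day then yield 7 kg N/day.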

  1. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison regarding the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected; the results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings, and debt to equity were the most important. Using expert information, all parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks, as well as rank them completely within the construction and materials industry. The analysis employed Alirezaee and Afsharian's model, in which the originality of the Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds; this method of ranking the relative efficiency of decision-making units (DMUs) is value-added by the Balance Index. The data covered the year 2015, and the population of the research comprises the companies listed in the construction and materials sector of the stock market (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
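
    For reference, the underlying CCR model under constant returns to scale can be written as a small linear program. This is a generic multiplier-form sketch (assuming SciPy's linprog is available), not the Alirezaee-Afsharian variant with the Balance Index used in the paper: for DMU o, choose nonnegative output weights u and input weights v maximizing u·y_o, with v·x_o normalized to 1 and no DMU allowed a weighted output/input ratio above 1.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """CCR multiplier-form DEA efficiency of DMU `o`.
    `inputs` is n x m (DMUs x inputs), `outputs` is n x s."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])       # maximize u . y_o
    A_ub = np.hstack([Y, -X])                      # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun
```

    With two DMUs producing the same output from inputs 1 and 2, the first is efficient (score 1.0) and the second scores 0.5.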

  2. Residuals of autoregressive model providing additional information for feature extraction of pattern recognition-based myoelectric control.

    Science.gov (United States)

    Pan, Lizhi; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang

    2015-01-01

    Myoelectric control based on pattern recognition has been studied for several decades. Autoregressive (AR) features are among the most widely used feature extraction methods in myoelectric control studies, yet almost all previous studies used only the AR coefficients, without the residuals of the AR model, for classification. However, the residuals of the AR model contain important amplitude information of the electromyography (EMG) signals. In this study, we added the residuals to the AR features (AR+re) and compared the performance with the classical sixth-order AR coefficients. We tested six unilateral transradial amputees and eight able-bodied subjects on eleven hand and wrist motions. The classification accuracy (CA) of the intact side for amputee subjects and the right hand for able-bodied subjects showed that the CA of AR+re features was slightly but significantly higher than that of classical AR features (p = 0.009), which means that the residuals provide additional information to classical AR features for classification. Interestingly, the CA of the affected side for amputee subjects showed no significant difference between AR+re features and classical AR features (p > 0.05). We attribute this to the fact that the amputee subjects could not use their affected side to produce EMG patterns as consistent as those of their intact side or of the dominant hand of the able-bodied subjects. Since the residuals are already available when the AR coefficients are computed, the results of this study suggest adding the residuals to classical AR features to potentially improve the performance of pattern recognition-based myoelectric control.
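
    A minimal sketch of the feature vector being compared (assuming a least-squares AR fit; the authors' exact estimator and windowing are not specified here): the AR coefficients are augmented with the residual power of the fit, which retains the amplitude information the coefficients alone discard.

```python
import numpy as np

def ar_plus_residual_features(x, order=6):
    """AR+re features for one EMG window: least-squares AR coefficients
    plus the mean squared residual of the fit as an extra feature."""
    x = np.asarray(x, dtype=float)
    # lagged design matrix: x[n] ~ sum_k a_k * x[n - k - 1]
    X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    y = x[order:]
    coeffs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return np.concatenate([coeffs, [np.mean(resid ** 2)]])
```

    For a noiseless AR(2) signal the fit recovers the generating coefficients and a near-zero residual term; on real EMG, the residual term scales with signal amplitude, which is exactly the extra information the study exploits.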

  3. Research on a GIS-Based Geological Survey Information Model

    Institute of Scientific and Technical Information of China (English)

    周中成

    2015-01-01

    This paper studies the construction of a GIS-based geological survey information model: it introduces the GIS information model, analyzes the design and application of geological survey information systems, and discusses the content of three-dimensional geological information models, with the aim of strengthening the construction of GIS-based geological survey information models and improving the quality of geological survey work.

  4. Modeling spatiotemporal information generation

    NARCIS (Netherlands)

    Scheider, Simon; Gräler, Benedikt; Stasch, Christoph; Pebesma, Edzer

    2016-01-01

    Maintaining knowledge about the provenance of datasets, that is, about how they were obtained, is crucial for their further use. Contrary to what the overused metaphors of ‘data mining’ and ‘big data’ are implying, it is hardly possible to use data in a meaningful way if information about sources an

  5. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model.

    Science.gov (United States)

    Reiser, Leonore; Berardini, Tanya Z; Li, Donghui; Muller, Robert; Strait, Emily M; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

    Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for the maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long-term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR) founded the nonprofit organization Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding, but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining, and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org.

  6. Combining livestock production information in a process-based vegetation model to reconstruct the history of grassland management

    Science.gov (United States)

    Chang, Jinfeng; Ciais, Philippe; Herrero, Mario; Havlik, Petr; Campioli, Matteo; Zhang, Xianzhou; Bai, Yongfei; Viovy, Nicolas; Joiner, Joanna; Wang, Xuhui; Peng, Shushi; Yue, Chao; Piao, Shilong; Wang, Tao; Hauglustaine, Didier A.; Soussana, Jean-Francois; Peregon, Anna; Kosykh, Natalya; Mironycheva-Tokareva, Nina

    2016-06-01

    Grassland management type (grazed or mown) and intensity (intensive or extensive) play a crucial role in the greenhouse gas balance and surface energy budget of this biome, both at field scale and at large spatial scale. However, global gridded historical information on grassland management intensity is not available. Combining modelled grass-biomass productivity with statistics of the grass-biomass demand by livestock, we reconstruct gridded maps of grassland management intensity from 1901 to 2012. These maps include the minimum area of managed vs. maximum area of unmanaged grasslands and the fraction of mown vs. grazed area at a resolution of 0.5° by 0.5°. The grass-biomass demand is derived from a livestock dataset for 2000, extended to cover the period 1901-2012. The grass-biomass supply (i.e. forage grass from mown grassland and biomass grazed) is simulated by the process-based model ORCHIDEE-GM driven by historical climate change, rising CO2 concentration, and changes in nitrogen fertilization. The global area of managed grassland obtained in this study increases from 6.1 × 106 km2 in 1901 to 12.3 × 106 km2 in 2000, although the expansion pathway varies between different regions. ORCHIDEE-GM also simulated augmentation in global mean productivity and herbage-use efficiency over managed grassland during the 20th century, indicating a general intensification of grassland management at global scale but with regional differences. The gridded grassland management intensity maps are model dependent because they depend on modelled productivity. Thus specific attention was given to the evaluation of modelled productivity against a series of observations from site-level net primary productivity (NPP) measurements to two global satellite products of gross primary productivity (GPP) (MODIS-GPP and SIF data). Generally, ORCHIDEE-GM captures the spatial pattern, seasonal cycle, and interannual variability of grassland productivity at global scale well and thus is

  7. Modeling gross primary production of agro-forestry ecosystems by assimilation of satellite-derived information in a process-based model.

    Science.gov (United States)

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

    In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We firstly present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and about standing biomass were optimized against MODIS NDVI. Results obtained showed that the PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and that we described better the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale.

  8. Geographic Information Systems (GIS) Based Village Roads Management Model for Monitoring, Maintenance and Repair Purposes: Example of Denizli

    Directory of Open Access Journals (Sweden)

    Yetiş Şazi Murat

    2013-06-01

    Full Text Available Geographic information systems (GIS) have become a tool that is often used in many fields, especially in developed countries. In Turkey, although GIS has been recognized and acknowledged by central and local authorities, its use in public services is still in its infancy. Within the borders of the Denizli Provincial Administration, the Roads and Transportation Services Directorship has undertaken a GIS-supported study to update information on all village roads in its area of responsibility and service, and to support the Directorship's rural infrastructure services with modern, technical methods built on the various analysis results obtained. In this study, carried out with the Strategy Development Directorship using layers, topographic maps, satellite photos and other materials originally devised for different purposes, the aim was to ensure that services concerning village roads are based on correct and up-to-date data. In line with the fast and precise analysis results obtained, and as an attempt to ensure that public resources are used efficiently, the study presents a pilot project intended to serve as a model for re-planning village road constructions, determining missing and incomplete cases, and thereby establishing a complete and integrated management plan.

  9. Development of an Antarctic digital elevation model by integrating cartographic and remotely sensed data: A geographic information system based approach

    Science.gov (United States)

    Liu, Hongxing; Jezek, Kenneth C.; Li, Biyan

    1999-10-01

    We present a high-resolution digital elevation model (DEM) of the Antarctic. It was created in a geographic information system (GIS) environment by integrating the best available topographic data from a variety of sources. Extensive GIS-based error detection and correction operations ensured that our DEM is free of gross errors. The carefully designed interpolation algorithms for different types of source data and incorporation of surface morphologic information preserved and enhanced the fine surface structures present in the source data. The effective control of adverse edge effects and the use of the Hermite blending weight function in data merging minimized the discontinuities between different types of data, leading to a seamless and topographically consistent DEM throughout the Antarctic. This new DEM provides exceptional topographical details and represents a substantial improvement in horizontal resolution and vertical accuracy over the earlier, continental-scale renditions, particularly in mountainous and coastal regions. It has a horizontal resolution of 200 m over the rugged mountains, 400 m in the coastal regions, and approximately 5 km in the interior. The vertical accuracy of the DEM is estimated at about 100-130 m over the rugged mountainous area, better than 2 m for the ice shelves, better than 15 m for the interior ice sheet, and about 35 m for the steeper ice sheet perimeter. The Antarctic DEM can be obtained from the authors.
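The Hermite blending step used to merge overlapping data sources can be illustrated as follows; the cubic weight and the toy elevation values are illustrative assumptions, not the authors' exact scheme:

```python
# Sketch of seamless data merging: blend two overlapping elevation sources
# with the cubic Hermite weight h(t) = 3t^2 - 2t^3, which equals 0 and 1 at
# the ends of the overlap zone with zero slope, avoiding visible seams.
def hermite_weight(t):
    t = max(0.0, min(1.0, t))
    return 3 * t * t - 2 * t ** 3

def blend(z_a, z_b, t):
    """t = normalised distance across the overlap (0 -> source A, 1 -> B)."""
    w = hermite_weight(t)
    return (1 - w) * z_a + w * z_b

print(blend(100.0, 120.0, 0.0))  # 100.0 (pure source A)
print(blend(100.0, 120.0, 0.5))  # 110.0 (midway, smooth transition)
print(blend(100.0, 120.0, 1.0))  # 120.0 (pure source B)
```

Because both h and its derivative match at the endpoints, the merged surface and its slope stay continuous across the data-set boundary.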

  10. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These inputs are disturbance inputs, reference inputs and auxiliary inputs. The diagnosis of the system is derived by an evaluation of the signatures from the inputs in the residual outputs. The changes of the signatures from the external inputs are used for detection and isolation of the parametric faults.

  11. Investigating Information-Seeking Behavior of Faculty Members Based on Wilson’s Model: Case Study of PNU University, Mazandaran, Iran

    Science.gov (United States)

    Azadeh, Fereydoon; Ghasemi, Shahrzad

    2016-01-01

    The present research aims to study the information-seeking behavior of faculty members of Payame Noor University (PNU) in the Mazandaran province of Iran by using Wilson's model of information-seeking behavior. This is a survey study. Participants were 97 PNU faculty members in Mazandaran province. An information-seeking behavior inventory with 24 items based on a 5-point Likert scale was employed to gather the research data. Collected data were analyzed in SPSS software. Results showed that the most important goal of faculty members was publishing a scientific paper, and their least important goal was updating technical information. We also found that they mostly use internet-based resources to meet their information needs; accordingly, 57.7% of them find information resources via online search engines (e.g. Google, Yahoo). We also concluded that there was a significant relationship between their English language proficiency, academic rank, and work experience and their information-seeking behavior. PMID:27157151

  12. Crisis Information Diffusion Model Based on the BASS Model

    Institute of Scientific and Technical Information of China (English)

    魏玖长; 周磊; 赵定涛

    2011-01-01

    Effective crisis information dissemination is critical in crisis management. In order to study the diffusion characteristics of crisis information under different modes of transmission, this paper, based on the BASS model, constructs diffusion models of true information and rumor information after a public crisis, and simulates in Matlab the diffusion rules and diffusion-state differences of the two types of information under different parameter conditions. The results show that the growth of both diffusion curves is affected by the channel coefficients, and that the final diffusion scale is positively related to the accuracy of information in informal channels. To improve the effectiveness of crisis information dissemination, crisis management departments could expand the coverage of formal channels, improve the accuracy and authority of information releases, strengthen public emergency education, and change their attitude toward rumor management.
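A Bass-type diffusion curve of the kind simulated in the paper can be sketched with a simple Euler iteration; the coefficients below are hypothetical, with p standing for the formal (external) channel and q for the informal (word-of-mouth) channel:

```python
# Minimal Bass-diffusion sketch (hypothetical parameters): the rate at which
# the remaining population (m - n) becomes informed mixes an external-channel
# coefficient p and an internal word-of-mouth coefficient q.
def bass_curve(p, q, m, steps, dt=1.0):
    n = 0.0          # cumulative number of informed individuals
    path = [n]
    for _ in range(steps):
        n += dt * (p + q * n / m) * (m - n)
        path.append(n)
    return path

curve = bass_curve(p=0.03, q=0.4, m=10_000, steps=30)
# the curve rises monotonically and saturates near the population size m
```

Raising q relative to p steepens the S-shape, which is how channel coefficients change the growth of the diffusion curve in the abstract's simulations.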

  13. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach

    National Research Council Canada - National Science Library

    Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J

    2014-01-01

    Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations...

  14. Green Template for Life Cycle Assessment of Buildings Based on Building Information Modeling: Focus on Embodied Environmental Impact

    Directory of Open Access Journals (Sweden)

    Sungwoo Lee

    2015-12-01

    Full Text Available The increased popularity of building information modeling (BIM) for application in the construction of eco-friendly green buildings has given rise to techniques for evaluating green buildings constructed using BIM features. Existing BIM-based green building evaluation techniques mostly rely on externally provided evaluation tools, which pose problems associated with interoperability, including a lack of data compatibility and the amount of time required for format conversion. To overcome these problems, this study sets out to develop a template (the “green template”) for evaluating embodied environmental impact with a BIM design tool, as part of BIM-based building life-cycle assessment (LCA) technology development. Firstly, the BIM level of detail (LOD) was determined for evaluating embodied environmental impact, and a database of embodied environmental impact factors for the major building materials was constructed, thereby adopting an LCA-based approach. Libraries of major building elements were then developed using the established database and a compiled evaluation table of the embodied environmental impact of the building materials. Finally, the green template was developed as an embodied environmental impact evaluation tool, and a case study was performed to test its applicability. The results of the green template-based embodied environmental impact evaluation of a test building were validated against those of its actual quantity takeoff (2D takeoff), and its reliability was confirmed by an effective error rate of ≤5%. This study also aims to develop a system for assessing the impact of substances discharged from the concrete production process on six environmental impact categories, i.e., global warming (GWP), acidification (AP), eutrophication (EP), abiotic depletion (ADP), ozone depletion (ODP), and photochemical oxidant creation (POCP), using the life cycle assessment (LCA) method. To achieve this, we proposed an LCA method

  15. Information-time based futures pricing

    Science.gov (United States)

    Yen, Simon; Wang, Jai Jen

    2009-09-01

    This study follows Clark [P.K. Clark, A subordinated stochastic process model with finite variance for speculative prices, Econometrica 41 (1973) 135-155] and Chang, Chang and Lim [C.W. Chang, S.K. Chang, K.G. Lim, Information-time option pricing: Theory and empirical evidence, Journal of Financial Economics 48 (1998) 211-242] to subordinate an information-time based directing process into calendar-time based parent processes. A closed-form futures pricing formula is derived after taking into account the information-time setting and the stochasticity of the spot price, interest rate, and convenience yield. According to the empirical results on the TAIEX and TFETX data from 1998/7/21 to 2003/12/31, the information-time based model performs better than its calendar-time based counterpart and the cost of carry model, especially when the information arrival intensity estimates become larger.
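The cost-of-carry benchmark against which the information-time model is compared has a simple closed form, F = S·exp((r − y)T) for spot price S, interest rate r, convenience yield y and maturity T in years; the numbers below are hypothetical, not the TAIEX/TFETX data:

```python
# Illustrative cost-of-carry futures price (hypothetical inputs): the
# deterministic benchmark model against which the information-time and
# calendar-time models are compared in the abstract.
import math

def cost_of_carry_futures(spot, r, y, T):
    return spot * math.exp((r - y) * T)

print(round(cost_of_carry_futures(100.0, 0.05, 0.02, 0.5), 4))  # about 101.51
```

The information-time models replace the fixed calendar clock T with a stochastic directing process for information arrival, which is why they can outperform this benchmark when information arrives in bursts.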

  16. Multi-Information Model for PCB-Based Electronics Product Manufacturing

    Institute of Scientific and Technical Information of China (English)

    李春泉; 周德俭; 俞涛

    2004-01-01

    Most electronics products use printed circuit boards (PCBs) to carry electronic circuits. This paper classifies the information contained in PCB-based electronic circuits into several models: a geometry model, a physics model, a performance model and a function model. Based on this classification, a multi-information model of the product is established. A composite model of the product is also created based on object orientation and the characteristics of the product. The model includes, from top to bottom, a 3D geometry model, a physics model with integrated information that can be divided into microscopic and macroscopic information, a generalized performance model and a function model. Finally, a multi-unit analysis is briefly discussed.

  17. Micro-level dynamics of the online information propagation: A user behavior model based on noisy spiking neurons.

    Science.gov (United States)

    Lymperopoulos, Ilias N; Ioannou, George D

    2016-10-01

    We develop and validate a model of the micro-level dynamics underlying the formation of macro-level information propagation patterns in online social networks. In particular, we address the dynamics at the level of the mechanism regulating a user's participation in an online information propagation process. We demonstrate that this mechanism can be realistically described by the dynamics of noisy spiking neurons driven by endogenous and exogenous, deterministic and stochastic stimuli representing the influence modulating one's intention to be an information spreader. Depending on the dynamically changing influence characteristics, time-varying propagation patterns emerge reflecting the temporal structure, strength, and signal-to-noise ratio characteristics of the stimulation driving the online users' information sharing activity. The proposed model constitutes an overarching, novel, and flexible approach to the modeling of the micro-level mechanisms whereby information propagates in online social networks. As such, it can be used for a comprehensive understanding of the online transmission of information, a process integral to the sociocultural evolution of modern societies. The proposed model is highly adaptable and suitable for the study of the propagation patterns of behavior, opinions, and innovations among others.
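The spiking-neuron mechanism can be sketched as a noisy leaky integrate-and-fire unit in which a threshold crossing stands for a user's decision to share; all parameters below are hypothetical, and the per-step noise term is a simplification:

```python
# Sketch of a noisy leaky integrate-and-fire unit (hypothetical parameters):
# the "membrane potential" integrates a deterministic drive (influence) plus
# noise, and a threshold crossing represents a sharing/participation event.
import random

def lif_spike_times(drive, noise_sd, threshold=1.0, leak=0.1,
                    steps=1000, dt=0.01, seed=42):
    rng = random.Random(seed)
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-leak * v + drive) + noise_sd * rng.gauss(0.0, 1.0)
        if v >= threshold:        # influence exceeds the participation threshold
            spikes.append(t * dt)
            v = 0.0               # reset after the event
    return spikes

events = lif_spike_times(drive=0.5, noise_sd=0.02)
# a stronger drive (more influence) produces more frequent sharing events
```

Time-varying `drive` and `noise_sd` would reproduce the abstract's point that propagation patterns track the temporal structure and signal-to-noise ratio of the stimulation.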

  18. A Policy Model for Secure Information Flow

    Science.gov (United States)

    Adetoye, Adedayo O.; Badii, Atta

    When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker’s observational power, to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker’s observational power, which can be used to enforce what declassification policies.
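The lattice-based security condition can be illustrated with the smallest useful lattice, Low ⊑ High: a flow is permitted only when the source's label is at or below the sink's label. The two-point lattice and the label names are illustrative assumptions, not the paper's general construction:

```python
# Minimal sketch of lattice-based flow checking on a two-point lattice
# Low <= High: information may only flow upward in the lattice.
LEVELS = {"Low": 0, "High": 1}

def flow_permitted(source_label, sink_label):
    return LEVELS[source_label] <= LEVELS[sink_label]

print(flow_permitted("Low", "High"))   # True: public data may reach secret sinks
print(flow_permitted("High", "Low"))   # False: confidential data must not leak
```

Declassification policies of the kind the paper parametrises would relax this check for specific, explicitly permitted flows.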

  19. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.

  20. Supply Chain Information Model Research: Based on the Information Factory

    Institute of Scientific and Technical Information of China (English)

    王文韬; 谢阳群

    2011-01-01

    With the rapid development of the logistics supply chain, the transmission speed, accuracy and timeliness of information in the supply chain are attracting increasing attention. From the perspective of the information factory, this paper treats the enterprises along the supply chain as departments of a single whole, relates the shared-database concept of the information factory to the logistics supply chain, and proposes an information-factory-based supply chain information model together with performance assessment indicators and the factors influencing information sharing. The result is a new model for fast and accurate information exchange among supply chain departments during logistics operations.

  1. Research of Home Information Technology Adoption Model

    Institute of Scientific and Technical Information of China (English)

    Ao Shan; Ren Weiyin; Lin Peishan; Tang Shoulian

    2008-01-01

    Information technology at home has caught the attention of various industries such as IT, home appliances, communication, and real estate. Based on information technology acceptance theories and family consumption behavior theories, this study summarized and analyzed four key belief variables, i.e. Perceived Value, Perceived Risk, Perceived Cost and Perceived Ease of Use, which influence the acceptance of home information technology. The study also summarizes three groups of external variables: social, industrial, and family influence factors. The social influence factors include Subjective Norm; the industry factors include the Unification of Home Information Technology Standards, the Perfection of the Home Information Industry Value Chain, and the Competitiveness of the Home Information Industry; and the family factors include Family Income, Family Life Cycle and Family Educational Level. The study discusses the relationships among these external variables and the cognitive variables, and provides a Home Information Technology Acceptance Model based on the Technology Acceptance Model and the characteristics of home information technology consumption.

  2. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    Science.gov (United States)

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on sentence/passage splitting with Stanford CoreNLP, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on the large standard datasets provided by the BioASQ challenge show that the proposed method achieves good performance compared with the current state-of-the-art methods, which it significantly outperforms by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
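The BM25 scoring step at the heart of the method can be sketched on toy data; the passages and the exact idf variant below are illustrative assumptions, not the authors' system:

```python
# Toy BM25 sketch: rank tokenised candidate passages against question terms.
import math

def bm25_scores(query, passages, k1=1.5, b=0.75):
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N
    # document frequency of each distinct query term
    df = {t: sum(1 for p in passages if t in p) for t in set(query)}
    scores = []
    for p in passages:
        s = 0.0
        for t in query:
            tf = p.count(t)
            if tf == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(p) / avgdl))
        scores.append(s)
    return scores

passages = [["insulin", "regulates", "glucose"],
            ["glucose", "metabolism", "overview"],
            ["unrelated", "topic"]]
print(bm25_scores(["insulin", "glucose"], passages))
# the first passage, containing both query terms, scores highest
```

In the paper the "terms" are stemmed words plus UMLS concept identifiers, so a concept shared between question and passage also contributes to the score.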

  3. Information Theory: a Multifaceted Model of Information

    Directory of Open Access Journals (Sweden)

    Mark Burgin

    2003-06-01

    Full Text Available A contradictory and paradoxical situation that currently exists in information studies can be improved by the introduction of a new approach to information, which is called the general theory of information. The main achievement of the general theory of information is the explication of a relevant and adequate definition of information. This theory is built as a system of two classes of principles (ontological and axiological) and their consequences. Axiological principles, which explain how to measure and evaluate information and information processes, are presented in the second section of this paper. These principles systematize and unify different approaches, existing as well as possible, to the construction and utilization of information measures. Examples of such measures are given by Shannon's quantity of information, the algorithmic quantity of information, and the volume of information. It is demonstrated that all other known directions of information theory may be treated within the general theory of information as particular cases.
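The first of the example measures, Shannon's quantity of information, is easy to make concrete for a discrete distribution, H = -Σ pᵢ log₂ pᵢ:

```python
# Shannon entropy of a discrete distribution, in bits.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
print(shannon_entropy([1.0]))       # 0.0 bits: a certain outcome
```

The algorithmic quantity of information (Kolmogorov complexity), by contrast, is defined per object rather than per distribution and is not computable in general.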

  4. Product Information Platform Based on STEP

    Institute of Scientific and Technical Information of China (English)

    WANG Taiyong; ZHANG Zhiwei

    2009-01-01

    The international standard ISO 10303, called STEP, has been used to deal with problems in the exchange of product models and the associated data between different computer-aided systems. A platform based on STEP for managing product information is presented. This platform includes three components: a product geometry information model, a product feature model and a product visualization model. An information-extracting pattern, in which information is extracted from low-level elements to high-level ones, is adopted in establishing the product geometry information model. Lists of related elements are created based on the extracted product information. By traversing these lists, feature extraction methods are proposed which take advantage of boundary information in the product model and avoid having to determine the concavity and convexity of curves. Information correlated with features is stored in a structure named a feature block, and the product visualization model is built from it. The feature block is used in the platform for information communication and synchronous updates among the three components.

  6. An enhancement of the role-based access control model to facilitate information access management in context of team collaboration and workflow.

    Science.gov (United States)

    Le, Xuan Hung; Doll, Terry; Barbosu, Monica; Luque, Amneris; Wang, Dongwen

    2012-12-01

    Although information access control models have been developed and applied to various applications, few previous works have addressed the issue of managing information access in the combined context of team collaboration and workflow. To meet this requirement, we have enhanced the Role-Based Access Control (RBAC) model by formulating universal constraints, defining bridging entities and contributing attributes, extending access permissions to include workflow contexts, synthesizing a role-based access delegation model targeting specific objects, and developing domain ontologies as instantiations of the general model for particular applications. We have successfully applied this model to the New York State HIV Clinical Education Initiative (CEI) project to address the specific needs of information management in collaborative processes. An initial evaluation showed that this model achieved a high level of agreement with an existing system when applied to 4576 cases (kappa = 0.801). Compared with a reference standard, the sensitivity and specificity of the enhanced RBAC model were at the level of 97-100%. These results indicate that the enhanced RBAC model can be effectively used for information access management in the context of team collaboration and workflow to coordinate clinical education programs. Future research is required to incrementally develop additional types of universal constraints, to further investigate how the workflow context and access delegation can be enriched to support the various needs of information access management in collaborative processes, and to examine the generalizability of the enhanced RBAC model for other applications in clinical education, biomedical research, and patient care.
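The core idea, a role's permission that is only valid in a matching workflow context, can be sketched as follows; all role, object and context names here are hypothetical, not the CEI project's actual ontology:

```python
# Toy sketch of RBAC extended with workflow context: access is granted only
# when the role carries the permission AND the request's context satisfies
# the workflow conditions attached to that permission.
PERMISSIONS = {
    # role -> {(action, object_type): required workflow context}
    "coordinator": {("read", "clinic_report"): {"phase": "review"},
                    ("edit", "clinic_report"): {"phase": "authoring"}},
    "trainer":     {("read", "clinic_report"): {"phase": "review"}},
}

def check_access(role, action, obj_type, context):
    required = PERMISSIONS.get(role, {}).get((action, obj_type))
    if required is None:
        return False  # role lacks the permission entirely
    return all(context.get(k) == v for k, v in required.items())

print(check_access("trainer", "read", "clinic_report", {"phase": "review"}))     # True
print(check_access("trainer", "edit", "clinic_report", {"phase": "authoring"}))  # False
```

The paper's universal constraints and delegation rules would sit on top of a check like this, restricting which role-permission-context triples may exist at all.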

  7. Using Models and Data to Learn: The Need for a Perspective based in Characterization of Information (John Dalton Medal Lecture)

    Science.gov (United States)

    Gupta, Hoshin

    2014-05-01

    The hydrological community has recently engaged in a discussion regarding future directions of Hydrology as an Earth Science. In this context, I will comment on the role of "dynamical systems modeling" (and more generally the systems-theoretic perspective) as a vehicle for informing the Discovery and Learning Process. I propose that significant advances can occur through a better understanding of what is meant by "Information", and by focusing on ways to characterize and quantify the nature, quality and quantity of information in models and data, thereby establishing a more robust and insightful (less ad-hoc) basis for learning through the model-data juxtaposition. While the mathematics of Information Theory has much to offer, it will need to be augmented and extended by bringing to bear contextual perspectives from both dynamical systems modeling and the Hydrological Sciences. A natural consequence will be to re-emphasize the a priori role of Process Modeling (particularly specification of System Architecture) over that of the selection of System Parameterizations, thereby shifting the emphasis to the more creative inductive aspects of scientific investigation.
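
    One concrete way to quantify information shared between model output and observations, in the spirit argued for above, is a plug-in mutual-information estimate (a minimal sketch; the binning and estimator choice are illustrative, not the lecture's):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of mutual information I(X;Y) in bits from
    paired samples, via a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

    Higher values indicate that one series carries more information about the other; note the plug-in estimator is biased upward for small samples.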

  8. Performance of Information Criteria for Spatial Models.

    Science.gov (United States)

    Lee, Hyeyoung; Ghosh, Sujit K

    2009-01-01

    Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are just an approximation to the true data-generating process, but among such approximations it is our goal to select the "best" one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence model comparison is required to identify the "best" model among several such candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature by the use of traditional information criteria based methods, even though such criteria have been developed based on the assumption of independent observations. We evaluate the performance of some of the popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria, such as the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Corrected AIC (AICc), in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.
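
    The three criteria compared in the study have simple closed forms given a model's maximised log-likelihood, number of parameters k and sample size n; a minimal sketch (the numeric values in the usage note are invented):

```python
import numpy as np

def aic(loglik, k):
    """Akaike Information Criterion."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian Information Criterion (penalty grows with sample size n)."""
    return k * np.log(n) - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample corrected AIC."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)
```

    For example, with n = 50 and k = 3 the small-sample correction adds 24/46 ≈ 0.52 to the AIC; in all cases the candidate with the smallest criterion value is preferred.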

  9. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information

    NARCIS (Netherlands)

    L.M. Lamers (Leida)

    1999-01-01

    textabstractOBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness

  10. The Knowledge Based Information Economy

    OpenAIRE

    1990-01-01

    Working Paper No. 256 is published as "The Knowledge Based Information Economy" (authors: Gunnar Eliasson, Stefan Fölster, Thomas Lindberg, Tomas Pousette and Erol Taymaz). Stockholm: Industrial Institute for Economic and Social Research and Telecon, 1990.

  11. Impacts of Irrigation and Climate Change on Water Security: Using Stakeholder Engagement to Inform a Process-based Crop Model

    Science.gov (United States)

    Leonard, A.; Flores, A. N.; Han, B.; Som Castellano, R.; Steimke, A.

    2016-12-01

    Irrigation is an essential component for agricultural production in arid and semi-arid regions, accounting for a majority of global freshwater withdrawals used for human consumption. Since climate change affects both the spatiotemporal demand and availability of water in irrigated areas, agricultural productivity and water efficiency depend critically on how producers adapt and respond to climate change. It is necessary, therefore, to understand the coevolution and feedbacks between humans and agricultural systems. Integration of social and hydrologic processes can be achieved by active engagement with local stakeholders and applying their expertise to models of coupled human-environment systems. Here, we use a process based crop simulation model (EPIC) informed by stakeholder engagement to determine how both farm management and climate change influence regional agricultural water use and production in the Lower Boise River Basin (LBRB) of southwest Idaho. Specifically, we investigate how a shift from flood to sprinkler fed irrigation would impact a watershed's overall agricultural water use under RCP 4.5 and RCP 8.5 climate scenarios. The LBRB comprises about 3500 km2, of which 20% is dedicated to irrigated crops and another 40% to grass/pasture grazing land. Via interviews of stakeholders in the LBRB, we have determined that approximately 70% of irrigated lands in the region are flood irrigated. We model four common crops produced in the LBRB (alfalfa, corn, winter wheat, and sugarbeets) to investigate both hydrologic and agricultural impacts of irrigation and climatic drivers. Factors influencing farmers' decision to switch from flood to sprinkler irrigation include potential economic benefits, external financial incentives, and providing a buffer against future water shortages. These two irrigation practices are associated with significantly different surface water and energy budgets, and large-scale shifts in practice could substantially impact regional

  12. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  13. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator.
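
    The estimation idea, minimising the discrepancy between the observed and the model-implied covariance matrix, can be illustrated on a deliberately simple one-parameter covariance model (an equicorrelation structure chosen purely for illustration, not the Q-/D-diffusion parameterisation):

```python
import numpy as np

def implied_cov(rho, p):
    """Model-implied covariance: unit variances, common correlation rho."""
    return (1.0 - rho) * np.eye(p) + rho * np.ones((p, p))

def discrepancy(rho, S):
    """Least-squares discrepancy between observed covariance S and the
    implied covariance (identity weight matrix, for simplicity)."""
    d = S - implied_cov(rho, S.shape[0])
    return float((d * d).sum())

def fit_rho(S):
    """Minimise the discrepancy over a parameter grid."""
    grid = np.linspace(-0.2, 0.95, 1000)
    return float(min(grid, key=lambda r: discrepancy(r, S)))
```

    A weighted variant replaces the identity weight matrix with the inverse covariance of the sample moments, which is what makes the estimator asymptotically efficient among least-squares estimators.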

  14. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
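
    The DIC computation from Monte Carlo posterior samples mentioned above is straightforward; a minimal sketch (the sample values in the test are invented):

```python
import numpy as np

def dic(loglik_samples, loglik_at_posterior_mean):
    """DIC from MCMC output: with deviance D = -2*loglik,
    p_D = mean(D) - D(theta_bar) and DIC = D(theta_bar) + 2*p_D."""
    d_bar = float(np.mean(-2.0 * np.asarray(loglik_samples)))
    d_hat = -2.0 * loglik_at_posterior_mean
    p_d = d_bar - d_hat          # effective number of parameters
    return d_hat + 2.0 * p_d, p_d
```

    Unlike the AIC and BIC, the penalty p_D is estimated from the posterior itself rather than from a raw parameter count, which is what lets the DIC allow for parameter degeneracy.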

  15. Modeling the Dynamics of an Information System

    Directory of Open Access Journals (Sweden)

    Jacek Unold

    2003-11-01

    Full Text Available The article concentrates on the nature of the social subsystem of an information system. It analyzes the nature of the information processes of collectivity within an IS and introduces a model of IS dynamics. The model is based on the assumption that the social subsystem of an information system works as a nonlinear dynamic system. The model of IS dynamics is verified against stock market indexes, drawing on the basic assumption of the technical analysis of markets: that the index chart reflects the play of demand and supply, which in turn represents the crowd sentiment on the market.

  16. Integrated modelling of module behavior and energy aspects in mechatronics. Energy optimization of production facilities based on model information; Modellintegration von Verhaltens- und energetischen Aspekten fuer mechatronische Module. Energieoptimierung von Produktionsanlagen auf Grundlage von Modellinformationen

    Energy Technology Data Exchange (ETDEWEB)

    Schuetz, Daniel; Vogel-Heuser, Birgit [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Informationstechnik im Maschinenwesen

    2011-01-15

    In this paper a modelling approach is presented that merges the operational characteristics and the energy aspects of automation modules into one model. A characteristic of this approach is the state-based behavior model. An example is used to demonstrate how the information in the model can be used for energy-optimized operation controlled by software agents. (orig.)
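
    A state-based behaviour model annotated with energy information might be exploited by an agent roughly as follows (a hypothetical sketch: the states, power draws and wake-up times are all invented for illustration):

```python
# Hypothetical sketch: a behaviour model whose states are annotated with
# power draw and wake-up time, so a software agent can choose an
# energy-optimal idle state. All names and numbers are invented.

POWER_W = {"working": 400.0, "ready": 120.0, "standby": 15.0, "off": 0.0}
WAKEUP_S = {"working": 0.0, "ready": 2.0, "standby": 20.0, "off": 90.0}

def best_idle_state(idle_seconds, max_wakeup_s):
    """Pick the state minimising energy over an idle window, subject to
    a wake-up-time constraint from the production schedule."""
    energy = {s: POWER_W[s] * idle_seconds
              for s in POWER_W if WAKEUP_S[s] <= max_wakeup_s}
    return min(energy, key=energy.get)
```

    With a 10-minute idle window the agent would keep the module "ready" if it must wake within 5 s, but drop to "standby" or "off" as the allowed wake-up time grows.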

  17. Ontology-based Information Retrieval

    DEFF Research Database (Denmark)

    Styltsvig, Henrik Bulskov

    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use. The use of concept similarity in query evaluation is discussed. A semantic expansion approach that incorporates concept similarity is introduced and a generalized fuzzy set retrieval model that applies expansion during query evaluation is presented. While not commonly used in present information retrieval systems...

  18. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR MODELS

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form....... The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms...

  19. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially for human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less studied application for computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do this. The underlying philosophy of the research effort is “Stop – Start – Continue”, i.e. which features from the use of paper-based procedures should not be incorporated (Stop), which should be kept (Continue), and which new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue elements was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use to help identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  20. Integrated Navigation Based on Robust Estimation Outputs of Multi-sensor Measurements and Adaptive Weights of Dynamic Model Information

    Institute of Scientific and Technical Information of China (English)

    YANG Yuanxi; GAO Weiguang

    2005-01-01

    An integrated navigation based on the kinematic or dynamic state model and the raw measurements has the advantages of high redundancy, high reliability, a high degree of fault tolerance and simplicity of calculation. In order to control the influences of measurement outliers and kinematic model errors on the integrated navigation results, a robust estimation method and an adaptive data fusion method are applied. An integrated navigation example using simulated data is performed and analyzed.
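
    The robust-estimation step described above is typically realised by down-weighting measurements with large standardised residuals; a minimal Huber/IGG-style sketch for a constant scalar state (all parameter values are illustrative):

```python
import numpy as np

def huber_weights(residuals, sigma, k=1.345):
    """IGG/Huber-style weights: observations whose standardised residual
    exceeds k are down-weighted by k/|v|."""
    v = np.abs(residuals) / sigma
    w = np.ones_like(v)
    w[v > k] = k / v[v > k]
    return w

def robust_estimate(obs, sigma, iters=5):
    """Iteratively reweighted estimate of a constant scalar state from
    measurements that may contain outliers."""
    x = float(np.median(obs))
    for _ in range(iters):
        w = huber_weights(obs - x, sigma)
        x = float(np.sum(w * obs) / np.sum(w))
    return x
```

    In a full integrated navigation filter the same weight function acts on the measurement covariance, while an adaptive factor plays the analogous role for the dynamic-model information.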

  1. A comparison and user-based evaluation of models of textual information structure in the context of cancer risk assessment

    Directory of Open Access Journals (Sweden)

    Hogberg Johan

    2011-03-01

    Full Text Available Abstract Background Many practical tasks in biomedicine require accessing specific types of information in scientific literature; e.g. information about the results or conclusions of the study in question. Several schemes have been developed to characterize such information in scientific journal articles. For example, a simple section-based scheme assigns individual sentences in abstracts under sections such as Objective, Methods, Results and Conclusions. Some schemes of textual information structure have proved useful for biomedical text mining (BIO-TM) tasks (e.g. automatic summarization). However, user-centered evaluation in the context of real-life tasks has been lacking. Methods We take three schemes of different type and granularity - those based on section names, Argumentative Zones (AZ) and Core Scientific Concepts (CoreSC) - and evaluate their usefulness for a real-life task which focuses on biomedical abstracts: Cancer Risk Assessment (CRA). We annotate a corpus of CRA abstracts according to each scheme, develop classifiers for automatic identification of the schemes in abstracts, and evaluate both the manual and automatic classifications directly as well as in the context of CRA. Results Our results show that for each scheme, the majority of categories appear in abstracts, although two of the schemes (AZ and CoreSC) were developed originally for full journal articles. All the schemes can be identified in abstracts relatively reliably using machine learning. Moreover, when cancer risk assessors are presented with scheme-annotated abstracts, they find relevant information significantly faster than when presented with unannotated abstracts, even when the annotations are produced using an automatic classifier. Interestingly, in this user-based evaluation the coarse-grained scheme based on section names proved nearly as useful for CRA as the finest-grained CoreSC scheme. Conclusions We have shown that existing schemes aimed at capturing

  2. Information analysis for modeling and representation of meaning

    OpenAIRE

    Uda, Norihiko

    1994-01-01

    In this dissertation, information analysis and an information model called the Semantic Structure Model based on information analysis are explained for semantic processing. Methods for self organization of information are also described. In addition, Information-Base Systems for thinking support of research and development in non linear optical materials are explained. As a result of information analysis, general properties of information and structural properties of concepts become clear. Ge...

  3. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  4. Landslides and Slope Aspect in the Three Gorges Reservoir Area Based on GIS and Information Value Model

    Institute of Scientific and Technical Information of China (English)

    WU Caiyan; QIAO Jianping; WANG Meng

    2006-01-01

    Slope aspect is one of the indispensable internal factors besides lithology, relative elevation and slope degree. In this paper, the authors use an information value model with Geographical Information System (GIS) technology to study how slope aspect contributes to landslide development in the Yunyang to Wushan segment of the Three Gorges Reservoir area, and the relationship between aspect and landslide development is quantified. Through research on 205 landslide examples, it is found that south-facing slopes contribute most, southeast- and southwest-facing slopes contribute moderately, and the other five aspects contribute little. The results agree well with field observations and can provide a valid basis for future construction in the Three Gorges Reservoir area.

  5. Relationship between landslides and lithology in the Three Gorges Reservoir area based on GIS and information value model

    Institute of Scientific and Technical Information of China (English)

    Caiyan WU; Jianping QIAO

    2009-01-01

    Development of landslides in the Three Gorges Reservoir area is related to many factors. Lithology is one of the indispensable internal factors, besides relative height differences, slope gradients and slope profiles. We used an information value model with geographical information system (GIS) technology to study how lithology contributes to the development of landslides from the Yunyang to Wushan segment in the Three Gorges Reservoir area and we quantify the relationship between lithology and development of landslides. Via an investigation of 205 examples of past landslides, we found that the lithology of J3s, J3p and T2b contributes most. Our research results can provide a valid basis for future construction in the Three Gorges Reservoir area.
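
    Both of the studies above rest on the information value statistic, which compares landslide density within a factor class (an aspect sector, a lithology unit) to the regional average; a minimal sketch with invented cell counts:

```python
import numpy as np

def information_value(n_landslide, n_total, ni_class, si_class):
    """Information value of one factor class (e.g. a lithology unit):
    I = ln( (ni/N) / (si/S) ), where N is the number of landslide cells
    in the region, S the total cells, ni/si the same counts within the
    class. Positive values mean the class hosts a higher landslide
    density than the region as a whole."""
    return float(np.log((ni_class / n_landslide) / (si_class / n_total)))
```

    Summing the information values of all factor classes present at a grid cell gives that cell's susceptibility score, which is how such maps are assembled in a GIS.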

  6. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes.

    Science.gov (United States)

    Bertrand, Olivier J N; Lindemann, Jens P; Egelhaaf, Martin

    2015-11-01

    Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth-structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. 
The simulated agent then showed goal-directed behavior reminiscent of components of the navigation

  7. Why Don’t More Farmers Go Organic? Using A Stakeholder-Informed Exploratory Agent-Based Model to Represent the Dynamics of Farming Practices in the Philippines

    Directory of Open Access Journals (Sweden)

    Laura Schmitt Olabisi

    2015-10-01

    Full Text Available In spite of a growing interest in organic agriculture, there has been relatively little research on why farmers might choose to adopt organic methods, particularly in the developing world. To address this shortcoming, we developed an exploratory agent-based model depicting Philippine smallholder farmer decisions to implement organic techniques in rice paddy systems. Our modeling exercise was novel in its combination of three characteristics: first, agent rules were based on focus group data collected in the system of study. Second, a social network structure was built into the model. Third, we utilized variance-based sensitivity analysis to quantify model outcome variability, identify influential drivers, and suggest ways in which further modeling efforts could be focused and simplified. The model results indicated an upper limit on the number of farmers adopting organic methods. The speed of information spread through the social network, crop yields, and the size of a farmer’s plot were highly influential in determining agents’ adoption rates. The results of this stylized model indicate that rates of organic farming adoption are highly sensitive to the yield drop after switchover to organic techniques, and to the speed of information spread through existing social networks. Further research and model development should focus on these system characteristics.
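
    A threshold-adoption model on a social network, of the general kind described above, can be sketched in a few lines (a ring network with invented parameters and rules, far simpler than the stakeholder-informed model):

```python
import random

def simulate_adoption(n=200, neighbors=4, threshold=2, p_seed=0.05,
                      steps=20):
    """Toy threshold model on a ring network: a farmer adopts organic
    methods once at least `threshold` of `neighbors` network neighbours
    have adopted. Returns the final adopter fraction."""
    rng = random.Random(0)  # fixed seed for reproducibility
    adopted = [rng.random() < p_seed for _ in range(n)]
    offsets = [d for d in range(-neighbors // 2, neighbors // 2 + 1) if d]
    for _ in range(steps):
        nxt = adopted[:]
        for i in range(n):
            if not adopted[i] and \
                    sum(adopted[(i + d) % n] for d in offsets) >= threshold:
                nxt[i] = True
        adopted = nxt
    return sum(adopted) / n
```

    Even this toy exhibits the qualitative finding above: with a higher adoption threshold, diffusion stalls and the adopter fraction saturates well below full coverage.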

  8. Isotope-based quantum information

    CERN Document Server

    G Plekhanov, Vladimir

    2012-01-01

    The present book provides an introduction to the main ideas and techniques of the rapidly progressing field of quantum information and quantum computation using isotope-mixed materials. It starts with an introduction to isotope physics and then describes isotope-based quantum information and quantum computation. The ability to manipulate and control electron and/or nuclear spin in semiconductor devices provides a new route to expand the capabilities of inorganic semiconductor-based electronics and to design innovative devices with potential application in quantum computing. One of the major challenges towards these objectives is to develop semiconductor-based systems and architectures in which the spatial distribution of spins and their properties can be controlled. For instance, to eliminate electron spin decoherence resulting from hyperfine interaction due to nuclear spin background, isotopically controlled devices are needed (i.e., nuclear spin-depleted). In other emerging concepts, the control of the spatial...

  9. Context based multimedia information retrieval

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti

    with the help of contextual knowledge. Our approach to model the context of multimedia is based on unsupervised methods to automatically extract meaning. We investigate two paths of context modelling. The first part extracts context from the primary media, in this case broadcast news speech, by extracting...... through an approximation based on non-negative matrix factorisation (NMF). The second part of the work tries to infer the contextual meaning of music based on extra-musical knowledge, in our case gathered from Wikipedia. The semantic relations between artists are inferred using linking structure......

  10. A geographical information system-based web model of arbovirus transmission risk in the continental United States of America.

    Science.gov (United States)

    Konrad, Sarah K; Zou, Li; Miller, Scott N

    2012-11-01

    A degree-day (DD) model of West Nile virus capable of forecasting real-time transmission risk in the continental United States of America up to one week in advance using a 50-km grid is available online at https://sites.google.com/site/arbovirusmap/. Daily averages of historical risk based on temperatures for 1994-2003 are available at 10-km resolution. Transmission risk maps can be downloaded from 2010 to the present. The model can be adapted to work with any arbovirus for which the temperature-related parameters are known, e.g. Rift Valley fever virus. To more effectively assess virus establishment and transmission, the model incorporates "compound risk" maps and forecasts, which include livestock density as a parameter.
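
    The degree-day logic underlying such a model can be sketched simply. The 14.3 °C base temperature and 109 DD incubation requirement below are values reported in the West Nile virus literature for Culex mosquitoes; treat them here as illustrative defaults rather than the web model's exact parameters:

```python
def degree_days(daily_mean_temps, base_temp=14.3):
    """Accumulate degree-days above a base temperature."""
    return sum(max(0.0, t - base_temp) for t in daily_mean_temps)

def transmission_possible(daily_mean_temps, required_dd=109.0):
    """True once enough warmth has accumulated for the virus to complete
    its extrinsic incubation in the mosquito vector."""
    return degree_days(daily_mean_temps) >= required_dd
```

    Adapting the model to another arbovirus, as the abstract notes, amounts to substituting that virus's base temperature and degree-day requirement.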

  11. A geographical information system-based web model of arbovirus transmission risk in the continental United States of America

    Directory of Open Access Journals (Sweden)

    Sarah K. Konrad

    2012-11-01

    Full Text Available A degree-day (DD) model of West Nile virus capable of forecasting real-time transmission risk in the continental United States of America up to one week in advance using a 50-km grid is available online at https://sites.google.com/site/arbovirusmap/. Daily averages of historical risk based on temperatures for 1994-2003 are available at 10-km resolution. Transmission risk maps can be downloaded from 2010 to the present. The model can be adapted to work with any arbovirus for which the temperature-related parameters are known, e.g. Rift Valley fever virus. To more effectively assess virus establishment and transmission, the model incorporates “compound risk” maps and forecasts, which include livestock density as a parameter.

  12. Matchmaking Semantic Based for Information System Interoperability

    CERN Document Server

    Wicaksana, I Wayan Simri

    2011-01-01

    Unlike the traditional model of information pull, matchmaking is based on a cooperative partnership between information providers and consumers, assisted by an intelligent facilitator (the matchmaker). In some experiments, matchmaking has proved most useful in two different ways: locating information sources or services that appear dynamically, and notification of information changes. Effective information and service sharing in distributed, e.g. P2P-based, environments raises many challenges, including discovery and localization of resources, exchange over heterogeneous sources, and query processing. One traditional approach for dealing with some of the above challenges is to create unified integrated schemas or services to combine the heterogeneous sources. This approach does not scale well when applied in dynamic distributed environments and has many drawbacks related to the large numbers of sources. The main issues in matchmaking are how to represent advertising and request, and how to calculate poss...

  13. Study on Information Search Model Based on Semantic Web Services

    Institute of Scientific and Technical Information of China (English)

    李志强

    2011-01-01

    In order to resolve the lack of semantic information in traditional keyword-based information search, this paper proposes an information search model based on semantic Web services for distributed network environments, building on a description of the key technologies of semantic Web services. Through an analysis of the functions of each layer of the model, it proposes an information search mechanism based on semantic similarity and provides a solution for integrating and sharing the information resources of heterogeneous systems. Finally, it describes the implementation of an information search prototype system based on semantic Web services and analyses its performance in simulated experiments. The results show that the approach provides a good solution for automatic and intelligent information search.

  14. Building Information Modeling Comprehensive Overview

    Directory of Open Access Journals (Sweden)

    Sergey Kalinichuk

    2015-07-01

    Full Text Available The article provides a comprehensive review of the recently accelerated development of information technology within the project market, including industrial, engineering, procurement and construction projects. The author's aim is to cover the last decades of growth of information and communication technology in the construction industry, in particular Building Information Modeling, and to show that the problem of choosing an effective project realization method has not only retained its urgency but has become one of the major conditions for intensive technology development. All of this has created a strong impulse toward shortening project durations and has led to the development of various schedule compression techniques, which have become a focus of modern construction.

  15. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional IR model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest from the users, so that much irrelevant information is returned, burdening the user with picking useful answers out of these irrelevant results. To tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is that it enhances search mechanisms through the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  16. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2014-01-01

    Full Text Available For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process.

  17. Improved GSO optimized ESN soft-sensor model of flotation process based on multisource heterogeneous information fusion.

    Science.gov (United States)

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process.
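
The KPCA dimensionality-reduction step described in the abstract can be sketched with a generic RBF-kernel implementation in NumPy (a textbook sketch, not the authors' code; the `gamma` value and the toy data are assumptions):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: project X onto its leading
    nonlinear principal components, analogous to shrinking the
    high-dimensional froth-image/process input vector before the ESN."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix in feature space.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; take the largest.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Project the training points onto the selected components.
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # 50 samples of an 8-dimensional input vector
Z = kernel_pca(X, n_components=3)
```

In the paper's pipeline the reduced vector `Z` would then feed the ESN soft-sensor model.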

  18. Beyond Information Seeking: Towards a General Model of Information Behaviour

    Science.gov (United States)

    Godbold, Natalya

    2006-01-01

    Introduction: The aim of the paper is to propose new models of information behaviour that extend the concept beyond simply information seeking to consider other modes of behaviour. The models chiefly explored are those of Wilson and Dervin. Argument: A shortcoming of some models of information behaviour is that they present a sequence of stages…

  19. Towards socio-hydroinformatics: optimal design and integration of citizen-based information in water-system models

    Science.gov (United States)

    Solomatine, Dimitri; Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon Hurtado, Juan Carlos

    2017-04-01

    Socio-hydroinformatics, as a potential application, demonstrates that citizens not only play an active role in information capturing, evaluation and communication, but also help to improve models and thus increase flood resilience.

  20. A Security Information Flow Model Based on a Grid Environment

    Institute of Scientific and Technical Information of China (English)

    刘益和

    2011-01-01

    Grid security is an important component of the grid, directly affecting the development of the grid and the practical application of grid system software. To fully describe information flow in a grid environment, the subject and object of the general network environment are extended, and the concepts of the decomposition of the subject and object and of organization security classification are defined. Security classes are partitioned using the organization security classification, security classification, and integrity grade of objects; an information flow policy is defined; and a new security information flow model based on the grid environment is given. Strict mathematical proof shows that the new model satisfies the finite-lattice and least-upper-bound-operator properties of Denning's information flow model, and is thus reasonable and safe. It is an extension of the information flow models corresponding to the BLP and Biba models, as well as of the security information flow model for the general network environment, which has positive significance for grid security research.
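
The lattice-based flow policy described above can be sketched in a few lines (the tuple layout of a security class and the component-wise dominance rule are simplifying assumptions for illustration, not the paper's exact policy):

```python
# A security class is modeled here as a tuple
# (organization classification, classification, integrity grade).
# In a Denning-style lattice, information may flow from class sc_from
# to class sc_to only if sc_to dominates sc_from.

def can_flow(sc_from, sc_to):
    """Flow allowed iff the target class dominates the source class
    component-wise (a simplified dominance rule)."""
    return all(a <= b for a, b in zip(sc_from, sc_to))

def join(sc1, sc2):
    """Least upper bound of two security classes in the lattice --
    the operator whose existence Denning's model requires."""
    return tuple(max(a, b) for a, b in zip(sc1, sc2))

low = (1, 1, 0)
high = (2, 3, 1)
# Flow is permitted upward (low -> high) but not downward.
```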

  1. Effective information spreading based on local information in correlated networks

    Science.gov (United States)

    Gao, Lei; Wang, Wei; Pan, Liming; Tang, Ming; Zhang, Hai-Feng

    2016-12-01

    Using network-based information to facilitate information spreading is an essential task in spreading dynamics on complex networks. Focusing on degree-correlated networks, we propose a preferential contact strategy based on the local network structure and the local informed density to promote information spreading. During the spreading process, an informed node preferentially selects a contact target among its neighbors, based on their degrees or local informed densities. By extensively implementing numerical simulations on synthetic and empirical networks, we find that when only the local structure information is considered, the convergence time of information spreading is remarkably reduced if low-degree neighbors are favored as contact targets. Meanwhile, the minimum convergence time depends non-monotonically on the degree-degree correlation, and a moderate correlation coefficient results in the most efficient information spreading. Incorporating the local informed density into the contact strategy, the convergence time of information spreading can be further reduced, and is minimized by a moderately preferential selection.
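
The degree-biased contact rule can be sketched as follows (a minimal illustration; the function `choose_contact`, the exponent `alpha` and the toy degrees are assumptions, not the authors' code):

```python
import random

def choose_contact(neighbors, degree, alpha=-1.0, rng=random):
    """Pick one neighbor with probability proportional to degree**alpha.

    alpha < 0 favors low-degree neighbors, the choice the abstract
    reports as shortening the convergence time of spreading."""
    weights = [degree[j] ** alpha for j in neighbors]
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for j, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return j
    return neighbors[-1]   # guard against floating-point round-off

# Tiny example: node 0 has two neighbors with degrees 1 and 10.
degree = {1: 1, 2: 10}
random.seed(42)
picks = [choose_contact([1, 2], degree, alpha=-1.0) for _ in range(1000)]
# With alpha = -1 the low-degree neighbor is chosen far more often.
```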

  2. A focused information criterion for graphical models

    NARCIS (Netherlands)

    Pircalabelu, E.; Claeskens, G.; Waldorp, L.

    2015-01-01

    A new method for model selection for Gaussian Bayesian networks and Markov networks, with extensions towards ancestral graphs, is constructed to have good mean squared error properties. The method is based on the focused information criterion, and offers the possibility of fitting individually tailored models.

  3. Fisher Information Framework for Time Series Modeling

    CERN Document Server

    Venkatesan, R C

    2016-01-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of i) the probability density function of the coefficients of the working hypothesis and ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defi...

  4. Information Filtering Based on Users' Negative Opinions

    Science.gov (United States)

    Guo, Qiang; Li, Yang; Liu, Jian-Guo

    2013-05-01

    The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], achieving high diversity but low accuracy. The classical HC model predicts users' potential objects of interest based on their interesting objects, regardless of negative opinions. In terms of users' rating scores, we present an improved user-based HC (UHC) information model that takes into account users' positive and negative opinions. Firstly, the objects rated by users are divided into positive and negative categories; then the predicted interesting and disliked object lists are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. By implementing the new model with nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that the new model, considering negative opinions, greatly enhances the accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, the negative opinions contain much more information than the positive ones; the negative opinions are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
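
The final filtering step of the UHC model can be sketched in a few lines (function and variable names are illustrative, not from the paper):

```python
def build_recommendations(interesting, disliked, top_n=10):
    """Final step of the UHC model as described above: remove every
    object appearing in the predicted dislike list from the predicted
    interesting list, then truncate to the recommendation length."""
    disliked = set(disliked)
    return [obj for obj in interesting if obj not in disliked][:top_n]

# Toy example: the model predicts four interesting objects, two of
# which also appear on the predicted dislike list.
recs = build_recommendations(["a", "b", "c", "d"], ["b", "d"])
# recs == ["a", "c"]
```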

  5. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Such activity may be carried out by interacting human actors and IT-actors. We use events to create meaningful relations between information structures and the related activit...

  6. Biologically Informed Individual-Based Network Model for Rift Valley Fever in the US and Evaluation of Mitigation Strategies

    Science.gov (United States)

    Scoglio, Caterina M.

    2016-01-01

    Rift Valley fever (RVF) is a zoonotic disease endemic in sub-Saharan Africa with periodic outbreaks in human and animal populations. Mosquitoes are the primary disease vectors; however, Rift Valley fever virus (RVFV) can also spread by direct contact with infected tissues. The transmission cycle is complex, involving humans, livestock, and multiple species of mosquitoes. The epidemiology of RVFV in endemic areas is strongly affected by climatic conditions and environmental variables. In this research, we adapt and use a network-based modeling framework to simulate the transmission of RVFV among hypothetical cattle operations in Kansas, US. Our model considers geo-located livestock populations at the individual level while incorporating the role of mosquito populations and the environment at a coarse resolution. Extensive simulations show the flexibility of our modeling framework when applied to specific scenarios to quantitatively evaluate the efficacy of mosquito control and livestock movement regulations in reducing the extent and intensity of RVF outbreaks in the United States. PMID:27662585

  7. Determining Potential Sites for Runoff Water Harvesting using Remote Sensing and Geographic Information Systems-Based Modeling in Sinai

    Directory of Open Access Journals (Sweden)

    Hossam H. Elewa

    2012-01-01

    Full Text Available Problem statement: Sinai is increasingly suffering from an overwhelming water crisis. Runoff Water Harvesting (RWH) could be a solution to this problem. The identified promising drainage basins for RWH could be used by decision makers to propose appropriate controlling systems to overcome the problem of water scarcity and to implement runoff farming and rain-fed agriculture. Approach: Remote sensing, geographic information systems and a watershed modeling system were integrated to build a multi-criteria decision support system of nine thematic layers, namely: volume of annual flood, lineament frequency density, drainage frequency density, maximum flow distance, basin area, basin slope, basin length, average overland flow distance and soil infiltration. These criteria were used to conduct Weighted Spatial Probability Modeling (WSPM) to determine the potential areas for RWH. The potential runoff available for harvesting was estimated by applying Finkel and SCS rainfall-runoff methods. Results: The WSPM classified Sinai into four classes for RWH, graded from high (3,201-6,695 km2), through moderate (35,923-35,896 km2) and low (13,185-16,652 km2), to very low (1.38-5.57 km2). Promising watersheds such as Abu Taryfya, Hamma El Hassana, Gerafi, Watir, Geraia, Heridien, Sidri, Feiran and Alaawag are categorized as high-to-moderate RWH potential basins. Conclusion: These basins could be investigated in detail at a larger scale to determine appropriate locations for implementing RWH structures and techniques. Implementing RWH systems and techniques in the potential watersheds could open new opportunities for sustainable development in the area.
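
The weighted overlay at the heart of WSPM can be sketched as follows (a generic illustration with made-up weights and random raster layers, not the authors' calibrated nine-layer model):

```python
import numpy as np

def wspm(layers, weights):
    """Weighted Spatial Probability Modeling sketch: min-max normalize
    each thematic layer to [0, 1], combine them as a weighted sum, and
    bin the result into four potential classes (0 = very low ... 3 = high)."""
    score = np.zeros_like(layers[0], dtype=float)
    wsum = float(sum(weights))
    for layer, w in zip(layers, weights):
        lo, hi = layer.min(), layer.max()
        norm = (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer, dtype=float)
        score += (w / wsum) * norm
    # Equal-interval class breaks are an assumption for this sketch.
    return np.digitize(score, [0.25, 0.5, 0.75])

rng = np.random.default_rng(1)
layers = [rng.random((4, 4)) for _ in range(3)]   # three toy thematic rasters
classes = wspm(layers, weights=[3, 2, 1])          # illustrative weights
```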

  8. Automatic Building Information Model Query Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges in data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, as it can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts use BIM to drive building design with less labour and lower overhead cost.

  9. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    Science.gov (United States)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

    Accurate nitrate load predictions can improve water quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were used for MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of the artificial intelligence models (a Feed Forward Neural Network, FFNN, and a least squares support vector machine). In the second scenario, to capture the seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI so that proper cluster agents could be chosen and imposed on the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only predicts the outlet nitrate but also covers predictions of the interior sub-basins' nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions over the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider the seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
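
The MI input-selection criterion used in the first scenario can be illustrated with a simple histogram estimator (a generic sketch; the estimator, bin count and toy data are assumptions, not the paper's implementation):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI(x; y) in nats -- the kind of criterion
    used to rank candidate inputs before feeding the FFNN."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
# An informative candidate input (a noisy copy of the target) should
# score much higher than an independent noise series.
mi_signal = mutual_information(x, x + 0.1 * rng.normal(size=2000))
mi_noise = mutual_information(x, rng.normal(size=2000))
```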

  10. An Agent-Based Model of Private Woodland Owner Management Behavior Using Social Interactions, Information Flow, and Peer-To-Peer Networks.

    Science.gov (United States)

    Huff, Emily Silver; Leahy, Jessica E; Hiebeler, David; Weiskittel, Aaron R; Noblet, Caroline L

    2015-01-01

    Privately owned woodlands are an important source of timber and ecosystem services in North America and worldwide. Impacts of management on these ecosystems and timber supply from these woodlands are difficult to estimate because complex behavioral theory informs the owner's management decisions. The decision-making environment consists of exogenous market factors, internal cognitive processes, and social interactions with fellow landowners, foresters, and other rural community members. This study seeks to understand how social interactions, information flow, and peer-to-peer networks influence timber harvesting behavior using an agent-based model. This theoretical model includes forested polygons in various states of 'harvest readiness' and three types of agents: forest landowners, foresters, and peer leaders (individuals trained in conservation who use peer-to-peer networking). Agent rules, interactions, and characteristics were parameterized with values from existing literature and an empirical survey of forest landowner attitudes, intentions, and demographics. The model demonstrates that as trust in foresters and peer leaders increases, the percentage of the forest that is harvested sustainably increases. Furthermore, peer leaders can serve to increase landowner trust in foresters. Model output and equations will inform forest policy and extension/outreach efforts. The model also serves as an important testing ground for new theories of landowner decision making and behavior.
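
The reported trust effect can be caricatured in a few lines (a toy model whose rates, rules and names are illustrative assumptions, not the paper's calibrated agent rules):

```python
import random

def simulate(n_owners=100, trust=0.5, steps=20, seed=7):
    """Toy version of the trust mechanism: each step, an owner who
    decides to harvest does so sustainably if they consult a trusted
    forester (probability = trust), otherwise unsustainably.
    Returns the fraction of harvests that were sustainable."""
    rng = random.Random(seed)
    sustainable = 0
    harvests = 0
    for _ in range(steps):
        for _ in range(n_owners):
            if rng.random() < 0.3:        # owner decides to harvest
                harvests += 1
                if rng.random() < trust:  # consults a trusted forester
                    sustainable += 1
    return sustainable / max(harvests, 1)

low = simulate(trust=0.2)
high = simulate(trust=0.9)
# Higher trust yields a higher sustainable-harvest fraction, mirroring
# the qualitative result reported in the abstract.
```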

  11. An Agent-Based Model of Private Woodland Owner Management Behavior Using Social Interactions, Information Flow, and Peer-To-Peer Networks.

    Directory of Open Access Journals (Sweden)

    Emily Silver Huff

    Full Text Available Privately owned woodlands are an important source of timber and ecosystem services in North America and worldwide. Impacts of management on these ecosystems and timber supply from these woodlands are difficult to estimate because complex behavioral theory informs the owner's management decisions. The decision-making environment consists of exogenous market factors, internal cognitive processes, and social interactions with fellow landowners, foresters, and other rural community members. This study seeks to understand how social interactions, information flow, and peer-to-peer networks influence timber harvesting behavior using an agent-based model. This theoretical model includes forested polygons in various states of 'harvest readiness' and three types of agents: forest landowners, foresters, and peer leaders (individuals trained in conservation who use peer-to-peer networking). Agent rules, interactions, and characteristics were parameterized with values from existing literature and an empirical survey of forest landowner attitudes, intentions, and demographics. The model demonstrates that as trust in foresters and peer leaders increases, the percentage of the forest that is harvested sustainably increases. Furthermore, peer leaders can serve to increase landowner trust in foresters. Model output and equations will inform forest policy and extension/outreach efforts. The model also serves as an important testing ground for new theories of landowner decision making and behavior.

  12. Adaptable Information Models in the Global Change Information System

    Science.gov (United States)

    Duggan, B.; Buddenberg, A.; Aulenbach, S.; Wolfe, R.; Goldstein, J.

    2014-12-01

    The US Global Change Research Program has sponsored the creation of the Global Change Information System to provide a web-based source of accessible, usable, and timely information about climate and global change for use by scientists, decision makers, and the public. The GCIS played multiple roles during the assembly and release of the Third National Climate Assessment. It provided human and programmable interfaces, relational and semantic representations of information, and discrete identifiers for various types of resources, which could then be manipulated by a distributed team with a wide range of specialties. The GCIS also served as a scalable backend for the web-based version of the report. In this talk, we discuss the infrastructure decisions made during the design and deployment of the GCIS, as well as ongoing work to adapt to new types of information. Both a constrained relational database and an open-ended triple store are used to ensure data integrity while maintaining fluidity. Using natural primary keys allows identifiers to propagate through both models. Changing identifiers are accommodated through fine-grained auditing and explicit mappings to external lexicons. A practical RESTful API is used whose endpoints are also URIs in an ontology. Both the relational schema and the ontology are malleable, and stability is ensured through test-driven development and continuous integration testing using modern open source techniques. Content is also validated through continuous testing techniques. A high degree of scalability is achieved through caching.

  13. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

    Full Text Available A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined in a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter given by the mean wave period derived from radar altimetry has proven to improve the model's performance in explaining some of the SSB variability, especially in swell ocean regions with medium-to-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with the well-settled SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model with a small training dataset, a useful feature for forthcoming missions.

  14. Solving Information-Based Problems: Evaluating Sources and Information

    Science.gov (United States)

    Brand-Gruwel, Saskia; Stadtler, Marc

    2011-01-01

    The focus of this special section is on the processes involved in solving information-based problems. Solving these problems requires that people are able to define the information problem, search for and select usable and reliable sources and information, and synthesise the information into a coherent body of knowledge. An important aspect…

  16. Calculation Model for Efficiency Evaluation of Information System Based on ADC Method

    Institute of Scientific and Technical Information of China (English)

    王永智; 白利军; 张文元; 郜鹏

    2012-01-01

    A mathematical model based on the ADC method is established to evaluate the efficiency of the information processing system in a multi-equipment weapon system. The validity of the model is proved by a practical evaluation of a real information system. The parameters of the model are easy to quantify, and the model is well suited to practical application. An efficiency evaluation of a weapon system based on the model can be used in the weapon system's design, application and maintenance, and also in equipment proving for the weapon system.
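
The ADC method evaluates system effectiveness as the product E = A·D·C of an availability vector, a dependability matrix and a capability vector. A minimal numeric sketch (the two-state space and all numbers are illustrative assumptions, not values from the cited evaluation):

```python
import numpy as np

# ADC effectiveness sketch: E = A . D . C, where
#   A -- availability vector: probability of starting in each state,
#   D -- dependability matrix: P(state i -> state j) during the mission,
#   C -- capability vector: effectiveness achieved in each end state.
A = np.array([0.9, 0.1])            # e.g. state 1 = operational, 2 = degraded
D = np.array([[0.95, 0.05],
              [0.30, 0.70]])
C = np.array([0.8, 0.2])
E = float(A @ D @ C)
# E = [0.885, 0.115] . [0.8, 0.2] = 0.731
```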

  17. A descriptive model of information problem solving while using internet

    NARCIS (Netherlands)

    Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber

    2009-01-01

    This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search for information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems.

  18. Comparison on information-seeking behavior of postgraduated students in Isfahan University of Medical Sciences and University of Isfahan in writing dissertation based on Kuhlthau model of information search process.

    Science.gov (United States)

    Abedi, Mahnaz; Ashrafi-Rizi, Hasan; Zare-Farashbandi, Firoozeh; Nouri, Rasoul; Hassanzadeh, Akbar

    2014-01-01

    Information-seeking behaviors have been one of the main focuses of researchers seeking to identify and solve the problems users face in information retrieval. The aim of this research is to compare the information-seeking behavior of postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan when writing dissertations, based on the Kuhlthau model of the information search process, in 2012. The research method is a survey, and the data collection tool is the Narmenji questionnaire. The statistical population was all postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan. The sample size was 196 people and sampling was stratified random. The statistical analyses were descriptive (mean and frequency) and inferential (independent t test and Pearson's correlation), and the software used was SPSS 20. The findings showed that students at Isfahan University of Medical Sciences followed 20% of the ordered steps of this model and students at the University of Isfahan did not follow the model. In the first (Initiation) and sixth (Presentation) stages of the feelings aspect, and in actions (across all stages), significant differences were found between students of the two universities. A significant relationship was found between gender and both the fourth stage (Formulation) and the total score of feelings in the Kuhlthau model. There was also a significant inverse relationship between the third stage (Exploration) of feelings and the age of the students. The results showed that in writing dissertations there were major differences between students of the two universities in following the Kuhlthau model, with significant differences in some of the stages of feelings and actions of the students' information-seeking behavior.

  19. Ontology-Based Information Model for Integration of Medical Imaging Data

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    A computer-readable unified information model is the data foundation of semantic retrieval for medical imaging. In this paper, several challenges are discussed, including the lack of a unified information model for medical imaging information and the varying terminology and syntax used to describe the semantic content of medical images, and an ontology-based information scheme for integrating medical imaging information is developed. Based on an analysis of medical imaging data sources and their relationships, and drawing on domain expert knowledge, a medical imaging information ontology model was developed using the "seven-step" method proposed by Stanford University, and the persistence of the ontology model, extraction of original data, and data integration were realized. The information model has been used in a medical imaging semantic retrieval system.

  20. Information modeling system for blast furnace control

    Science.gov (United States)

    Spirin, N. A.; Gileva, L. Y.; Lavrov, V. V.

    2016-09-01

    Modern iron and steel works are, as a rule, equipped with powerful distributed control systems (DCS) and databases. Implementation of a DCS solves the problems of storage, control, protection, entry, editing and retrieval of information, as well as generation of required reporting data. The most advanced and promising approach is to use decision-support information technologies based on a complex of mathematical models. A model decision support system for control of blast furnace smelting has been designed and is in operation. The basis of the model system is a complex of mathematical models created using the principle of natural mathematical modeling. This principle provides for the construction of mathematical models on two levels. The first-level model is a basic state model which makes it possible to assess the vector of system parameters using field data and blast furnace operation results. It is also used to calculate the adjustment (adaptation) coefficients of the predictive block of the system. The second-level model is a predictive model designed to assess the design parameters of the blast furnace process when melting conditions change relative to the current state. The tasks for which software has been developed are described. Characteristics of the main subsystems of the blast furnace process as an object of modeling and control are presented: the thermal state of the furnace, and the blast, gas dynamic and slag conditions of blast furnace smelting.

  1. Advancing an Information Model for Environmental Observations

    Science.gov (United States)

    Horsburgh, J. S.; Aufdenkampe, A. K.; Hooper, R. P.; Lehnert, K. A.; Schreuders, K.; Tarboton, D. G.; Valentine, D. W.; Zaslavsky, I.

    2011-12-01

    Observational data are fundamental to hydrology and water resources, and the way they are organized, described, and shared either enables or inhibits the analyses that can be performed using the data. The CUAHSI Hydrologic Information System (HIS) project is developing cyberinfrastructure to support hydrologic science by enabling better access to hydrologic data. HIS is composed of three major components. HydroServer is a software stack for publishing time series of hydrologic observations on the Internet as well as geospatial data using standards-based web feature, map, and coverage services. HydroCatalog is a centralized facility that catalogs the data contents of individual HydroServers and enables search across them. HydroDesktop is a client application that interacts with both HydroServer and HydroCatalog to discover, download, visualize, and analyze hydrologic observations published on one or more HydroServers. All three components of HIS are founded upon an information model for hydrologic observations at stationary points that specifies the entities, relationships, constraints, rules, and semantics of the observational data and that supports its data services. Within this information model, observations are described with ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to useable information. Physical implementations of this information model include the Observations Data Model (ODM) for storing hydrologic observations, Water Markup Language (WaterML) for encoding observations for transmittal over the Internet, the HydroCatalog metadata catalog database, and the HydroDesktop data cache database. The CUAHSI HIS and this information model have now been in use for several years, and have been deployed across many different academic institutions as well as across several national agency data repositories. Additionally, components of the HIS

  2. Data Model Management for Space Information Systems

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool

  4. Research of Cognitive Model Based on Comprehensive Information Theory

    Institute of Scientific and Technical Information of China (English)

    浦江

    2012-01-01

    Based on comprehensive information theory, and drawing on research results from cognitive science and brain science, this paper proposes a hierarchical model of cognition consisting of two cognitive models: an information cognition model and a knowledge cognition model. The information cognition model serves basic human needs, while the knowledge cognition model applies to higher-level needs. Together they open up a new path for research on human cognitive processes and ways of thinking.

  5. Product information modeling of hydraulic support based on ontology

    Institute of Scientific and Technical Information of China (English)

    邵园园; 曾庆良; 玄冠涛; 刘贤喜; 王成龙

    2012-01-01

    Current product information models lack consistent, uniform expression; in particular, semantic information is not fully described, so the models cannot meet the differing needs of different designers, nor can they describe functional information. To address these shortcomings, this paper proposes using ontology to describe the product information model on three levels: the function layer, the principle layer and the structure layer. Sub-ontologies of the hydraulic support, such as the control ontology, structure ontology and type ontology, were constructed with the ontology-building software Protege and finally integrated into an ontology model of the hydraulic support. The model provides a basis for subsequent retrieval and reuse of hydraulic support designs, better assisting designers in product development and thus shortening the product development cycle. The study of ontology-based product information modeling for the hydraulic support verifies the feasibility of the multi-view product information modeling method and provides methods and experience for ontology-based product information description.

  6. Hybrid Information Retrieval Model For Web Images

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    The Big Bang of the Internet in the early 90's dramatically increased the number of images being distributed and shared over the web. As a result, image information retrieval systems were developed to index and retrieve image files spread over the Internet. Most of these systems are keyword-based and search for images based on their textual metadata; they are therefore imprecise, as describing an image in a human language is inherently vague. There also exist content-based image retrieval systems, which search for images based on their visual information; however, content-based systems are still immature and not that effective, as they suffer from low retrieval recall/precision rates. This paper proposes a new hybrid image information retrieval model for indexing and retrieving web images published in HTML documents. The distinguishing mark of the proposed model is that it is based on both graphical content and textual metadata. The graphical content is denoted by color features and color histogram of ...
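    As a rough illustration of the hybrid idea described in this abstract, the sketch below combines a normalized color-histogram similarity with a textual-metadata match score. The histogram quantization, the intersection measure and the weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter

def color_histogram(pixels, bins=8):
    """Quantize RGB pixels (0-255 per channel) into a normalized histogram."""
    hist = Counter()
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] += 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: overlap of two normalized histograms."""
    return sum(min(h1.get(k, 0.0), v) for k, v in h2.items())

def keyword_score(query_terms, metadata_terms):
    """Fraction of query terms found in the image's textual metadata."""
    if not query_terms:
        return 0.0
    meta = {t.lower() for t in metadata_terms}
    return sum(1 for t in query_terms if t.lower() in meta) / len(query_terms)

def hybrid_score(query_hist, query_terms, image_hist, image_terms, alpha=0.5):
    """Weighted sum of visual and textual similarity (alpha is illustrative)."""
    visual = histogram_intersection(query_hist, image_hist)
    textual = keyword_score(query_terms, image_terms)
    return alpha * visual + (1 - alpha) * textual
```

    Ranking then amounts to sorting candidate images by `hybrid_score` against the query's histogram and terms.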

  7. Information Service Model with Mobile Agent Supported

    Institute of Scientific and Technical Information of China (English)

    邹涛; 王继成; 张福炎

    2000-01-01

    Mobile Agent is a novel agent technology characterized by mobility, intelligence, and parallel, asynchronous computing. In this paper, a new information service model that adopts mobile agent technology is introduced first, and then an experimental system, the DOLTRIA system, implemented based on the model is described. The DOLTRIA system, implemented within a WWW framework in Java, can search for relevant HTML documents on a set of Web servers. Experimental results show that performance improvement can be achieved by this model, and that both elapsed time and network traffic are reduced significantly.

  8. Device Design and Modeling for Beyond-CMOS Information Technology Based on Integrated Electronic-Magnetic Systems

    Science.gov (United States)

    Duan, Xiaopeng

    This thesis focuses on exploiting the correlation between insulating ferromagnets and 2- dimensional Dirac electronic systems in graphene and topological insulators (TI) to develop beyond-CMOS devices for information processing. (Abstract shortened by ProQuest.).

  9. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...

  10. Effective information spreading based on local information in correlated networks

    CERN Document Server

    Gao, Lei; Pan, Liming; Tang, Ming; Zhang, Hai-Feng

    2016-01-01

    Using network-based information to facilitate information spreading is an essential task for spreading dynamics in complex networks, and will benefit the promotion of technical innovations, healthy behaviors, new products, etc. Focusing on degree-correlated networks, we propose a preferential contact strategy based on the local network structure and the local informed density to promote information spreading. During the spreading process, an informed node preferentially selects a contact target among its neighbors based on their degrees or local informed densities. By extensively implementing numerical simulations in synthetic and empirical networks, we find that when only local structural information is considered, the convergence time of information spreading is remarkably reduced if low-degree neighbors are favored as contact targets. Meanwhile, the minimum convergence time depends non-monotonically on the degree-degree correlation, and moderate correlation coefficients result in the most efficient info...
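    The degree-based preferential contact rule described above can be sketched as weighting each neighbor by a power of its degree; the exact functional form and the exponent `beta` below are assumptions for illustration, not the paper's precise strategy (a negative `beta` favors low-degree neighbors, which the study found reduces convergence time).

```python
import random

def choose_contact(neighbors, degrees, beta=-1.0):
    """Pick a neighbor with probability proportional to degree**beta.

    neighbors: list of neighbor ids; degrees: id -> degree.
    beta < 0 biases selection toward low-degree neighbors."""
    weights = [degrees[n] ** beta for n in neighbors]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]  # guard against floating-point shortfall
```

    With `degrees = {1: 1, 2: 100}` and `beta = -1.0`, neighbor 1 is chosen roughly 99% of the time.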

  11. Research on Model of Material Information Based on Design for Quality

    Institute of Scientific and Technical Information of China (English)

    刘永振; 宋德强

    2011-01-01

    Design for Quality (DFQ) is a new design method that integrates quality assurance and quality management into product design. The needs for material quality information at the different design stages are analyzed in depth, and a model of material information supporting design for quality is established. To ease integration of material quality information with CAD/CAPP/ERP/PDM application systems, a STEP-based material information model is presented.

  12. Building Information Modelling in Denmark and Iceland

    DEFF Research Database (Denmark)

    Jensen, Per Anker; Jóhannesson, Elvar Ingi

    2013-01-01

    Purpose – The purpose of this paper is to explore the implementation of building information modelling (BIM) in the Nordic countries of Europe, with particular focus on the Danish building industry, with the aim of making use of its experience for the Icelandic building industry. Design/methodology/approach – The research is based on two separate analyses. In the first part, the deployment of information and communication technology (ICT) in the Icelandic building industry is investigated and compared with the other Nordic countries. In the second part, the experience in Denmark from implementing and working ... for making standards and guidelines related to BIM. Public building clients are also encouraged to consider initiating projects based on making simple building models of existing buildings in order to introduce BIM technology to the industry. Icelandic companies are recommended to start implementing BIM...

  13. CREATION OF WEB BASED MANDAL LEVEL INFORMATION SYSTEM USING REMOTE SENSING & GIS AND VISUAL BASIC PROGRAME - A MODEL STUDY

    Directory of Open Access Journals (Sweden)

    SS.Asadi

    2011-12-01

    Full Text Available The present study aims to prepare micro-level planning for the sustainable development of Singarayakonda mandal of Prakasam district. Digital thematic maps were prepared, namely land use/land cover, hydrogeomorphology, slope, physiography, soil, geology, drainage, etc., using satellite imagery on the ARC/INFO GIS platform. These constitute the spatial database used to create an information system for mandal development. The present study resulted in an information system for mandal-level planning, with scope to develop the mandal further by providing the necessary information about it. The system is user friendly, and many decisions can be made by the user according to his choice. The decision support system developed here can further serve as a replica for other mandals.

  14. Establishing Viable and Effective Information-Warfare Capability in Developing Nations Based on the U.S. Model

    Science.gov (United States)

    2012-12-01

    … information advantage to deliver warfighting options and effects. As noted in May 2010 by Vice Admiral David J. Dorsett, deputy chief of naval …

  15. Refreshing Information Literacy: Learning from Recent British Information Literacy Models

    Science.gov (United States)

    Martin, Justine

    2013-01-01

    Models play an important role in helping practitioners implement and promote information literacy. Over time models can lose relevance with the advances in technology, society, and learning theory. Practitioners and scholars often call for adaptations or transformations of these frameworks to articulate the learning needs in information literacy…

  16. The Infopriv model for information privacy

    OpenAIRE

    2012-01-01

    D.Phil. (Computer Science) The privacy of personal information is crucial in today's information systems. Traditional security models are mainly concerned with the protection of information inside a computer system. These models assume that the users of a computer system are trustworthy and will not disclose information to unauthorised parties. However, this assumption does not always apply to information privacy since people are the major cause of privacy violations. Alternative models ar...

  17. A Meteorological Information Mining-Based Wind Speed Model for Adequacy Assessment of Power Systems With Wind Power

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    Accurate wind speed simulation is an essential prerequisite for analyzing power systems with wind power. A wind speed model considering meteorological conditions and seasonal variations is proposed in this paper. Firstly, using the path analysis method, the influence weights of meteorological factors are calculated. Secondly, the meteorological data are classified into several states using an improved Fuzzy C-means (FCM) algorithm. Then the Markov chain is used to model the chronological characteristics of meteorological states and wind speed. The proposed model proved more accurate in capturing the characteristics of probability distribution, auto-correlation and seasonal variations of wind speed than the traditional Markov chain Monte Carlo (MCMC) and autoregressive moving average (ARMA) models. Furthermore, the proposed model was applied to adequacy assessment of generation...
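    The chronological (Markov chain) part of such a model can be illustrated with a toy state chain; the three states, transition probabilities and representative speeds below are made-up values for illustration, not parameters from the paper.

```python
import random

# Illustrative 3-state Markov chain over wind-speed states.
STATES = ["low", "medium", "high"]
TRANSITION = {
    "low":    {"low": 0.70, "medium": 0.25, "high": 0.05},
    "medium": {"low": 0.20, "medium": 0.60, "high": 0.20},
    "high":   {"low": 0.05, "medium": 0.35, "high": 0.60},
}
SPEED = {"low": 3.0, "medium": 8.0, "high": 14.0}  # m/s, illustrative

def simulate(hours, start="medium"):
    """Generate a chronological wind-speed series by walking the chain."""
    state, series = start, []
    for _ in range(hours):
        series.append(SPEED[state])
        r, acc = random.random(), 0.0
        for nxt, p in TRANSITION[state].items():
            acc += p
            if r <= acc:
                state = nxt
                break
    return series
```

    In the paper's scheme, the states would come from the FCM clustering of meteorological data rather than being fixed by hand.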

  18. In silico exploratory study using structure-activity relationship models and metabolic information for prediction of mutagenicity based on the Ames test and rodent micronucleus assay.

    Science.gov (United States)

    Kamath, P; Raitano, G; Fernández, A; Rallo, R; Benfenati, E

    2015-12-01

    The mutagenic potential of chemicals is a cause of growing concern due to its possible impact on human health. In this paper we have developed a knowledge-based approach combining information from structure-activity relationships (SAR) with metabolic triggers generated from the metabolic fate of chemicals in biological systems, for the prediction of mutagenicity in vitro (based on the Ames test) and in vivo (based on the rodent micronucleus assay). In the first part of the work, a model was developed comprising newly generated SAR rules and a set of metabolic triggers. These SAR rules and metabolic triggers were then externally validated for predicting mutagenicity in vitro, with the metabolic triggers used only for chemicals whose activity was predicted as unknown by SARpy. This combined model therefore has a higher accuracy than the SAR model alone, with an accuracy of 89% for the training set and 75% for the external validation set. The second part of the work presents a set of metabolic triggers for predicting mutagenicity in vivo, based on the rodent micronucleus assay. Finally, the third part presents a list of metabolic triggers used to identify similarities and differences in the mutagenic response of chemicals in vitro and in vivo.

  19. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    2004-01-01

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,

  20. Modeling uncertainty in geographic information and analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Uncertainty modeling and data quality for spatial data and spatial analyses are important topics in geographic information science, together with space and time in geography as well as spatial analysis. In the past two decades, considerable effort has been devoted to research on uncertainty modeling for spatial data and analyses. This paper presents our work in this area. In particular, four advances in the research are described: (a) from determinedness-based to uncertainty-based representation of geographic objects in GIS; (b) from uncertainty modeling for static data to dynamic spatial analyses; (c) from modeling uncertainty for spatial data to models; and (d) from error descriptions to quality control for spatial data.

  1. Memory-Based Cognitive Modeling for Visual Information Processing

    Institute of Scientific and Technical Information of China (English)

    王延江; 齐玉娟

    2013-01-01

    Inspired by the way humans perceive their environment, a memory-based cognitive model for visual information processing is proposed to imitate certain cognitive functions of the human brain. The proposed model comprises five components: information granules, memory spaces, cognitive behaviors, rules for transferring information among memory spaces, and decision-making processes. Following the three-stage memory model of the human brain, three memory spaces are defined to store current, temporary and permanent visual information respectively: the ultra-short-term memory space (USTMS), the short-term memory space (STMS) and the long-term memory space (LTMS). Past scenes can be remembered or forgotten by the model, allowing it to adapt quickly to scene variation. The model is applied to two hot issues in computer vision: background modeling and object tracking. Experimental results show that the model copes well with sudden background changes, changes in object appearance and heavy object occlusion in complex scenes.
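    As a loose illustration of the remember/forget behavior in background modeling (a generic stand-in, not the paper's actual three-space memory model), a background estimate with exponential forgetting can be sketched as:

```python
def update_background(background, frame, alpha=0.05):
    """Exponential forgetting: the background 'remembers' old scenes but
    gradually forgets them as new frames arrive (alpha sets the rate)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=25.0):
    """Mark pixels that deviate strongly from the remembered background."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]
```

    Here `alpha` and `threshold` are illustrative parameters; a larger `alpha` forgets faster and so adapts more quickly to sudden background changes.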

  2. Informing energy and climate policies using energy systems models insights from scenario analysis increasing the evidence base

    CERN Document Server

    Giannakidis, George; Ó Gallachóir, Brian; Tosato, GianCarlo

    2015-01-01

    This book highlights how energy-system models are used to underpin and support energy and climate mitigation policy decisions at national, multi-country and global levels. It brings together, for the first time in one volume, a range of methodological approaches and case studies of good modeling practice on a national and international scale from the IEA-ETSAP energy technology initiative. It provides insights for the reader into the rich and varied applications of energy-system models and the underlying methodologies and policy questions they can address. The book demonstrates how these mode

  3. Hospital Information System Project Development Based on Scrum Model

    Institute of Scientific and Technical Information of China (English)

    康亚冰; 艾育华; 陈芳炯

    2012-01-01

    A Hospital Information System (HIS) provides necessary technical support and infrastructure for a modern hospital, but HIS projects still face many difficulties in development. This paper analyzes the difficulties of HIS projects and puts forward a solution for developing an HIS based on the Scrum model using the TFS platform.

  4. Model-based approaches for interoperability of next generation enterprise information systems: state of the art and future challenges

    OpenAIRE

    Zacharewicz, Gregory; Diallo, Saikou; Ducq, Yves; Agostinho, Carlos; Jardim-Goncalves, Ricardo; Bazoun, Hassan; Wang, Zhongjie; Doumeingts, Guy

    2016-01-01

    International audience; Enterprise businesses are more than ever challenged by competitors that frequently refine and tailor their offers to clients. In this context, enterprise information systems (EIS) are especially important because: (1) they remain one of the last levers to increase the performance and competitiveness of the enterprise, (2) we operate in a business world where the product itself has reached a limit of performance and quality due to uniform capacity of industrial tools in...

  5. Information Modeling for Direct Control of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob

    2013-01-01

    … a desired accumulated response. In this paper, we design such an information model based on the markets that the aggregator participates in and on the flexibility characteristics of the remote-controlled DERs. The information model is constructed in a modular manner, making the interface suitable … for a whole range of different DERs. The devised information model can serve as input to the international standardization efforts on distributed energy resources.

  6. Distributed Expert-Based Information Systems: An Interdisciplinary Approach.

    Science.gov (United States)

    Belkin, Nicholas J.; And Others

    1987-01-01

    Based on an international workshop held at Rutgers University, this article discusses problems and issues in the design, research, and implementation of distributed expert-based information systems (DEBIS). Information needs of end users are stressed, architectures for expert information retrieval systems are explored, and prototype models are…

  7. Heterogeneous information-based artificial stock market

    Science.gov (United States)

    Pastore, S.; Ponta, L.; Cincotti, S.

    2010-05-01

    In this paper, an information-based artificial stock market is considered. The market is populated by heterogeneous agents that are seen as nodes of a sparsely connected graph. Agents trade a risky asset in exchange for cash. Besides the amount of cash and assets owned, each agent is characterized by a sentiment. Moreover, agents share their sentiments by means of interactions that are identified by the graph. Interactions are unidirectional and are supplied with heterogeneous weights. The agent's trading decision is based on sentiment and, consequently, the stock price process depends on the propagation of information among the interacting agents, on budget constraints and on market feedback. A central market maker (clearing house mechanism) determines the price process at the intersection of the demand and supply curves. Both closed- and open-market conditions are considered. The results point out the validity of the proposed model of information exchange among agents and are helpful for understanding the role of information in real markets. Under closed market conditions, the interaction among agents' sentiments yields a price process that reproduces the main stylized facts of real markets, e.g. the fat tails of the returns distributions and the clustering of volatility. Within open-market conditions, i.e. with an external cash inflow that results in asset price inflation, also the unitary root stylized fact is reproduced by the artificial stock market. Finally, the effects of model parameters on the properties of the artificial stock market are also addressed.
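    The sentiment-sharing mechanism over the sparse interaction graph can be sketched as a weighted averaging update; the specific update rule, mixing ratio and noise term below are illustrative assumptions rather than the paper's exact dynamics.

```python
import random

def step_sentiments(sentiment, edges, noise=0.05):
    """One propagation step over a directed, weighted interaction graph.

    sentiment: agent id -> value in [-1, 1].
    edges[i]: maps source agent j -> weight of the unidirectional link j -> i.
    Each agent moves halfway toward the weighted average of the agents it
    listens to, then receives a small random perturbation."""
    new = {}
    for i, s in sentiment.items():
        inputs = edges.get(i, {})
        if inputs:
            total = sum(inputs.values())
            avg = sum(w * sentiment[j] for j, w in inputs.items()) / total
            s = 0.5 * s + 0.5 * avg
        s += random.uniform(-noise, noise)
        new[i] = max(-1.0, min(1.0, s))  # keep sentiment bounded
    return new
```

    In the full model, aggregate sentiment would then drive demand and supply, with a market maker clearing the price; here only the information-exchange step is sketched.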

  8. Study on Product Lifecycle Dynamic Information Modeling and Its Application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Liang-cai; WANG Li-hui; ZHANG Yong

    2003-01-01

    The PLDIM (Product Lifecycle Dynamic Information Model) is the most important part of the PLDM (Product Lifecycle Dynamic Model) and is the basis for creating the information system and implementing PLM (Product Lifecycle Management). The information classification, the relationships among information items, the mathematical expression of the PLDIM, information coding and the 3D synthetic description of the PLDIM are presented. An information flow and an information system structure based on two information centers and the Internet/Intranet are proposed, and the implementation of this system for ship diesel engines is introduced according to the PLDIM and the PLM solutions.

  9. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large-scale institutions. Rather than identifying which functionalities are currently available in existing tools, the study focuses on functionalities that should be covered in order to provide guidance on how the existing tools should evolve. After identifying a set of 56 requirements for clinical information modelling tools, based on a literature review and interviews with experts, a classical Delphi methodology was applied in a two-round survey to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling; if a certified tools list one day exists, any tool that does not meet the essential criteria would be excluded. Recommended requirements are more advanced requirements that may be met by tools offering a superior product, or that are only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, there was a sufficiently high level of agreement to identify 20 essential and 21 recommended requirements for these tools. This list of requirements is expected to guide developers in including new basic and advanced functionalities that have strong support from end users, and could also guide regulators in identifying requirements to demand of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Knowledge-based information systems in practice

    CERN Document Server

    Jain, Lakhmi; Watada, Junzo; Howlett, Robert

    2015-01-01

    This book contains innovative research from leading researchers who presented their work at the 17th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2013, held in Kitakyushu, Japan, in September 2013. The conference drew a competitive field of 236 contributors, from which 38 authors expanded their contributions and only 21 were published. A plethora of techniques and innovative applications are represented within this volume. The chapters are organized around four themes: data mining, knowledge management, advanced information processing and system modelling applications. Each theme contains multiple contributions, and many offer case studies or innovative examples. Anyone who wants to work with information repositories or process knowledge should consider reading one or more chapters focused on their technique of choice. They may also benefit from reading other chapters to assess whether an alternative technique represents a more suitable approach...

  11. Information Retrieval Model Based on Semantic Processing Technology

    Institute of Scientific and Technical Information of China (English)

    王瑞琴

    2012-01-01

    We live in an information age characterized by information explosion, and finding precise results in this ocean of information has become a key issue; semantic retrieval is a promising approach to this problem. This paper studies several key problems in information retrieval (IR) and proposes a Semantic Processing Technology based Information Retrieval (SPTIR) model, which comprises the following key techniques: semantic query expansion based on word sense disambiguation (WSD), query optimization based on word semantic relatedness, and re-ranking of search results based on document semantic relevance. Finally, a large test collection and several performance indicators are used to evaluate the retrieval performance of the proposed model; the experimental results validate the competitive advantage of SPTIR and the positive role of each of its semantic processing techniques in improving retrieval performance.

  12. Space-based Observation System Simulation Experiments for the Global Water Cycle: Information Tradeoffs, Model Diagnostics, and Exascale Computing

    Science.gov (United States)

    Reed, P. M.

    2011-12-01

    Global scale issues such as population growth, changing land-use, and climate change place our natural resources at the center of focus for a broad range of interdependent science, engineering, and policy problems. Our ability to mitigate and adapt to the accelerating rate of environmental change is critically dependent on our ability to observe and predict the natural, built, and social systems that define sustainability at the global scale. Despite consensus agreement on the critical importance of space-based Earth science, we face serious risks to our ability to maintain and improve long-term space-based observations of these changes, and the fundamental challenge remains: how should we manage the severe tradeoffs and design challenges posed by maximizing the value of existing and proposed space-based Earth observation systems? Addressing this question requires transformative innovations in the design and management of space-based Earth observation systems that effectively exploit massively parallel computing architectures, enabling the discovery of critical mission tradeoffs using high-resolution observation system simulation experiments (OSSEs) that simulate the global water cycle data that would result from sensing innovations and evaluate their merit against carefully constructed prediction and management benchmarks.

  13. Development of an interactive exploratory web-based modelling platform for informed decision-making and knowledgeable responses to global change

    Science.gov (United States)

    Holman, I.; Harrison, P.; Cojocaru, G.

    2013-12-01

    Informed decision-making and knowledgeable responses to global climate change impacts on natural resources and ecosystem services require access to information resources that are credible, accurate, easy to understand, and appropriate. Too often stakeholders are limited to restricted scientific outputs produced by inaccessible models, generated from a limited number of scenario simulations chosen arbitrarily by researchers. This paper describes the outcomes of the CLIMSAVE project (www.climsave.eu), which has attempted to democratise climate change impacts, adaptation and vulnerability modelling by developing the public-domain, interactive, exploratory web-based CLIMSAVE Integrated Assessment (IA) Platform. The CLIMSAVE IA Platform aims to enable a wide range of stakeholders to improve their understanding of the impacts, adaptation responses and vulnerability of natural resources and ecosystem services under uncertain futures across Europe. It contains linked simulation models (of the urban, water, agriculture, forestry, biodiversity and other sectors), IPCC AR4 climate scenarios and CLIMSAVE socio-economic scenarios, enabling users to select their climate and socio-economic inputs, rapidly run the models across Europe using those settings, and view their selected impact (before or after adaptation) and vulnerability (Figure 1) indicators. The Platform has been designed to promote both cognitive accessibility (the ease of understanding) and practical accessibility (the ease of application). Based upon the experience of partners and CLIMSAVE international experts, examination of other participatory model interfaces and potential user requirements, we describe the design concepts and functionality that were identified, incorporated into the prototype CLIMSAVE IA Platform and further refined based on stakeholder feedback. The CLIMSAVE IA Platform is designed to facilitate a two-way iterative process

  14. Optimal Disturbance Accommodation with Limited Model Information

    CERN Document Server

    Farokhi, F; Johansson, K H

    2011-01-01

    The design of optimal dynamic disturbance-accommodation controllers with limited model information is considered. We adapt the family of limited model information control design strategies, defined earlier by the authors, to handle dynamic controllers. This family of design strategies constructs subcontrollers distributedly, each accessing only local plant model information. The closed-loop performance of the dynamic controllers they can produce is studied using a performance metric called the competitive ratio: the worst-case ratio of the cost of a control design strategy to the cost of the optimal control design with full model information.

  15. Vector space model for document representation in information retrieval

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    This paper presents the basics of information retrieval: the vector space model for document representation, Boolean and term-weighting models, ranking methods based on the cosine factor, and the evaluation measures recall, precision and a combined measure.
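
    The cosine ranking mentioned above is easy to sketch. The following toy example (not the paper's implementation; raw term frequencies stand in for the weighting schemes it discusses) scores documents against a query by the cosine of the angle between their bag-of-words vectors:

```python
import math
from collections import Counter

def cosine_similarity(query_terms, doc_terms):
    """Cosine of the angle between two bag-of-words term-frequency vectors."""
    q, d = Counter(query_terms), Counter(doc_terms)
    dot = sum(q[t] * d[t] for t in set(q) & set(d))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    norm_d = math.sqrt(sum(v * v for v in d.values()))
    if norm_q == 0 or norm_d == 0:
        return 0.0
    return dot / (norm_q * norm_d)

def rank(query, docs):
    """Return documents sorted by descending cosine score against the query."""
    q_terms = query.lower().split()
    scored = [(cosine_similarity(q_terms, doc.lower().split()), doc) for doc in docs]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

docs = ["information retrieval systems use inverted index",
        "cooking recipes and kitchen tips",
        "vector space retrieval ranks documents"]
for score, doc in rank("vector retrieval", docs):
    print(f"{score:.3f}  {doc}")
```

    In practice the raw counts would be replaced by tf-idf weights, which is what makes term-weighted models outperform the plain Boolean model.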

  16. Modeling of groundwater potential of the sub-basin of Siriri river, Sergipe state, Brazil, based on Geographic Information System and Remote Sensing

    Directory of Open Access Journals (Sweden)

    Washington Franca Rocha

    2011-08-01

    The use of a Geographic Information System (GIS) and Remote Sensing for modeling groundwater potential supports analysis and decision-making in water resource management for watersheds. The objective of this work was to model the groundwater potential of the Siriri river sub-basin, Sergipe state, based on its natural environment (soil, land use, slope, drainage density, lineament density, rainfall and geology), using Remote Sensing and a GIS as the integration environment. The groundwater potential map was produced using the digital image processing procedures of the ENVI 4.4 software and the map algebra of ArcGIS 9.3®. The Analytic Hierarchy Process (AHP) was used to define the weights of the different criteria (maps). Scores and weights were assigned to the classes of each map according to their influence on the overall objective of the work. The integration of these maps in a GIS environment and the application of the AHP technique produced a groundwater potential map with five classes: very low, low, moderate, high and very high. The average flow rates of wells confirm the potential of the Sapucari, Barriers and Maruim aquifers, since they are the most exploited in this sub-basin, with average flows of 78,113 L/h, 19,332 L/h and 12,085 L/h, respectively.
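
    The AHP-style map algebra described above amounts to a weighted linear combination of reclassified criterion rasters. A minimal NumPy sketch follows; the weights and class breaks are purely illustrative placeholders (the study derived its weights from expert pairwise-comparison matrices):

```python
import numpy as np

# Hypothetical AHP weights for the criterion layers (must sum to 1).
weights = {"geology": 0.30, "lineament_density": 0.25, "drainage_density": 0.15,
           "rainfall": 0.15, "slope": 0.10, "land_use": 0.05}

def potential_map(layers, weights):
    """Weighted linear combination of reclassified (1-5 score) raster layers."""
    acc = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, grid in layers.items():
        acc += weights[name] * grid
    return acc

def classify(score):
    """Bin a continuous suitability score into the paper's five classes."""
    bins = [1.8, 2.6, 3.4, 4.2]   # hypothetical class breaks on a 1-5 scale
    labels = ["very low", "low", "moderate", "high", "very high"]
    return labels[int(np.digitize(score, bins))]

# Toy rasters: each cell already reclassified to a 1-5 suitability score.
rng = np.random.default_rng(0)
layers = {name: rng.integers(1, 6, size=(4, 4)) for name in weights}
pmap = potential_map(layers, weights)
print(classify(pmap.mean()))
```

    In the actual workflow each layer would come from a reclassified GIS raster rather than random scores, but the overlay step is exactly this weighted sum.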

  17. A Frequency-Based Assignment Model under Day-to-Day Information Evolution of Oversaturated Conditions on a Feeder Bus Service

    Directory of Open Access Journals (Sweden)

    Silin Zhang

    2017-02-01

    Day-to-day information is increasingly being implemented in transit networks worldwide. Feeder bus service (FBS) plays a vital role in a public transit network by providing feeder access to hubs and rail. As a feeder service, the space-time path of frequent passengers is decided by a dynamic strategy procedure, in which a day-to-day information self-learning mechanism is identified and analyzed from our survey data. We formulate a frequency-based assignment model considering day-to-day evolution under oversaturated conditions, which takes into account the residual capacity of buses and the comfort of sitting or standing. The core of the proposed model is to allocate passengers to each segment of their own paths according to multi-utilities transformed from time values and parametric demands such as frequency, bus capacity, seat comfort, and stop layout. The assignment method, albeit general, allows us to formulate an equivalent optimization problem in terms of the interaction between the FBS's operation and frequent passengers' rational behaviors. Finally, a real application case is used to test the ability of the modeling framework to capture the theoretical consequences and serve passengers' dynamic externalities.

  18. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that, rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation: it measures the information deficit, or "ignorance", of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance" as determined from the logarithmic scoring rule, demonstrating a one-to-one correspondence between data compression and gambling returns. Information theory thus provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley 1990. J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
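
    The logarithmic scoring rule is simple to state: the "ignorance" of a forecast is the negative log probability it assigned to the outcome that actually occurred. A minimal sketch with illustrative numbers:

```python
import math

def ignorance(forecast_probs, outcome):
    """Logarithmic score in bits: the 'ignorance' of a forecaster who assigned
    probability forecast_probs[outcome] to the event that occurred."""
    return -math.log2(forecast_probs[outcome])

# Two competing probabilistic forecasts for a binary event ("rain"/"dry").
sharp = {"rain": 0.9, "dry": 0.1}
vague = {"rain": 0.5, "dry": 0.5}

# If it rains, the sharper forecast carries less ignorance (is more valuable);
# if it stays dry, the sharp forecast is penalized for its overconfidence.
print(ignorance(sharp, "rain"))   # ≈ 0.152 bits
print(ignorance(vague, "rain"))   # 1.0 bit
print(ignorance(sharp, "dry"))    # ≈ 3.32 bits
```

    Averaged over many forecasts, this score is exactly the extra code length needed to compress the observed outcomes, and (by Kelly's result) it bounds the log-growth rate of a gambler betting on those forecasts.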

  19. Web-Based Information Extraction Technology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Information extraction techniques on the Web are a current research hotspot. Many information extraction techniques based on different principles have appeared, with differing capabilities. We classify the existing information extraction techniques by their underlying principle and analyze the methods used in these approaches for adding semantic information, defining schemas, expressing rules, and locating semantic items and objects. Based on this survey and analysis, several open problems are discussed.

  20. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    Science.gov (United States)

    Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary while keeping the meaning of the clinical context and providing elements for analysis. One solution for filling this gap is to use the process model itself as a hub: a centralized means of facilitating communication between team members. Starting with IDEF0/SADT, a robust and descriptive technique for process modeling, we refined the basic data model by incorporating concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to serve as a navigation tool through hospital-wide documentation.

  1. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining the assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary of the ISO/IEC 27001 standard. After embedding in this model the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  2. Regionalization of Habitat Suitability of Masson’s Pine based on geographic information system and Fuzzy Matter-Element Model

    Science.gov (United States)

    Zhou, Xiuteng; Zhao, Manxi; Zhou, Liangyun; Yang, Guang; Huang, Luqi; Yan, Cuiqi; Huang, Quanshu; Ye, Liang; Zhang, Xiaobo; Guo, Lanpin; Ke, Xiao; Guo, Jiao

    2016-01-01

    Pine needles have been widely used in the development of anti-hypertensive and anti-hyperlipidemic agents and health food. However, the widespread distribution of this tree poses great obstacles to the quality control and efficacy evaluation. To facilitate the effective and rational exploitation of Masson’s pine (Pinus massoniana Lamb), as well as ensure effective development of Masson’s pine needles as a medicinal agent, we investigated the spatial distribution of habitat suitability and evaluated the optimal ranges of ecological factors of P. massoniana with 280 samples collected from 12 provinces in China through the evaluation of four constituents known to be effective medicinally. The results of habitat suitability evaluation were also verified by Root Mean Square Error (RMSE). Finally, five ecological factors were chosen in the establishment of a habitat suitability evaluation system. The most suitable areas for P. massoniana growth were mainly concentrated in the middle and lower reaches of the Yangtze River basin, such as Sichuan, Guizhou, and Jiangxi provinces, while the best quality needles were from Guizhou, Sichuan, and the junction area of Chongqing, Hunan, and Hubei provinces. This information revealed that suitable areas for effective constituent accumulation of Masson’s pine needles accounted for only 7.41% of its distribution area. PMID:27694967

  4. Directory of Energy Information Administration Models 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), the Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains a description of each model, including title, acronym and purpose, followed by more detailed information on characteristics, uses and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994.

  5. Analysis of the quality of hospital information systems in Isfahan teaching hospitals based on the DeLone and McLean model.

    Science.gov (United States)

    Saghaeiannejad-Isfahani, Sakineh; Saeedbakhsh, Saeed; Jahanbakhsh, Maryam; Habibi, Mahboobeh

    2015-01-01

    Quality is one of the most important criteria for the success of an information system; it refers to desirable features of the processing system itself. The aim of this study was to analyze the system quality of hospital information systems (HIS) in teaching hospitals of Isfahan based on the DeLone and McLean model. This research was an applied, analytical-descriptive study performed in the teaching hospitals of Isfahan in 2010. The research population consisted of HIS users, selected by random sampling (n = 228), and system designers and hospital information technology (IT) authorities, selected by census method (n = 52). The data collection tool was two researcher-designed questionnaires, whose reliability was estimated using Cronbach's alpha: 97.1% for the system designers and IT authorities' questionnaire and 92.3% for the system users' questionnaire. Findings showed that the mean system quality score differed significantly across the various HIS and across the hospitals (P value ≥ 0.05). In general, the Kosar (new version) system and the Rahavard Rayaneh system received the highest and the lowest mean scores, respectively. The overall mean of the system quality criterion was 59.6% across the various HIS and 57.5% across the hospitals. According to the results, it can be stated that, based on the applied model, the investigated systems were relatively desirable in terms of quality. Thus, in order to reach an optimal condition, particular attention must be paid to the factors improving system quality: type of activity, type of specialty and hospital ownership type.
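
    Cronbach's alpha, used above to estimate questionnaire reliability, is straightforward to compute from a respondents-by-items score matrix; a sketch with made-up Likert-style data (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-question variance
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Toy responses: 5 respondents x 4 items on a 1-5 Likert scale.
responses = [[4, 5, 4, 5],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 2, 3, 2],
             [4, 4, 4, 5]]
print(round(cronbach_alpha(responses), 3))
```

    Values above roughly 0.9, like the 97.1% and 92.3% reported here, indicate highly consistent questionnaire items.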

  6. Spatial Information Based Medical Image Registration using Mutual Information

    Directory of Open Access Journals (Sweden)

    Benzheng Wei

    2011-06-01

    Image registration is a valuable technique for medical diagnosis and treatment. Because registration by maximizing mutual information alone can perform poorly, a new hybrid method for multimodality medical image registration based on mutual information combined with spatial information is proposed. The new measure combines mutual information, spatial information and feature characteristics. Edge points, obtained from a morphology gradient detector, are used as features. Feature characteristics such as location, edge strength and orientation are taken into account to compute a joint probability distribution of corresponding edge points in the two images. Mutual information based on this distribution is optimized to find the best alignment parameters. Finally, the translation parameters are calculated by a modified Particle Swarm Optimization (MPSO) algorithm. The experimental results demonstrate the effectiveness of the proposed registration scheme.
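
    The mutual-information core of such registration methods can be estimated from a joint intensity histogram. A minimal NumPy sketch (the paper's measure additionally weights edge-feature correspondences, which is omitted here):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Shannon mutual information (bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y), shape (1, bins)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
fixed = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)

# An image is maximally informative about itself; unrelated noise is not.
print(mutual_information(fixed, fixed) > mutual_information(fixed, noise))  # True
```

    In a registration loop, the moving image would be transformed by candidate parameters (e.g. from a particle swarm) and this score used as the objective to optimize.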

  7. A STUDY ON IMPROVING INFORMATION PROCESSING ABILITIES BASED ON PBL

    Directory of Open Access Journals (Sweden)

    Du Gyu KIM

    2014-04-01

    This study examined an instruction method for improving the information processing abilities of elementary school students. Today's elementary students need information processing abilities to create new knowledge in the digital age, yet there is a shortage of instruction strategies for developing these abilities. This research proposes a method for teaching information processing abilities based on a problem-based learning model and tests it with elementary students. Through the process of problem solving, the students developed an improved ability to create new knowledge and to present relationships between pieces of information. The study performed experimental research by comparing pre- and post-tests of twenty-three fifth-grade elementary students over the course of eight months, and found remarkable improvement in information selection, information reliability assessment, information classification, information analysis, information comparison, and information internalization. This study thus presents an improved methodology for teaching information processing abilities.

  8. Complementarity of Historic Building Information Modelling and Geographic Information Systems

    Science.gov (United States)

    Yang, X.; Koehl, M.; Grussenmeyer, P.; Macher, H.

    2016-06-01

    In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. 3D geometry (XYZ), time and the non-architectural information necessary for the construction and management of buildings. GIS is strong in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create an enriched model of the complex architectural elements obtained from point clouds. Therefore, several open issues limiting 3D modelling of historic buildings are discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why to construct different levels of detail. By addressing these problems, the conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.

  9. Understanding requirements via natural language information modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sharp, J.K.; Becker, S.D.

    1993-07-01

    Information system requirements that are expressed as simple English sentences provide a clear understanding of what is needed between system specifiers, administrators, users, and developers of information systems. The approach used to develop the requirements is the Natural-language Information Analysis Methodology (NIAM). NIAM allows the processes, events, and business rules to be modeled using natural language. The natural language presentation enables the people who deal with the business issues that are to be supported by the information system to describe exactly the system requirements that designers and developers will implement. Computer prattle is completely eliminated from the requirements discussion. An example is presented that is based upon a section of a DOE Order involving nuclear materials management. Where possible, the section is analyzed to specify the process(es) to be done, the event(s) that start the process, and the business rules that are to be followed during the process. Examples, including constraints, are developed. The presentation steps through the modeling process and shows where the section of the DOE Order needs clarification, extensions or interpretations that could provide a more complete and accurate specification.

  10. Active Contour Model for Image Segmentation Based on Clustering Information

    Institute of Scientific and Technical Information of China (English)

    李敏; 梁久祯; 廖翠萃

    2015-01-01

    Based on the traditional Chan-Vese (CV) model and combining image clustering information, an effective active contour model for image segmentation is proposed in this paper. Firstly, the energy functional of the CV model is improved by taking the gradient information of the image into account, which improves the accuracy of segmentation. Then, a coefficient K based on image clustering information is added to the energy functional, and the clustering information is used to initialize the level set curves automatically. For color images, the RGB channels are weighted to improve segmentation efficiency. Finally, a regularization term is added to the energy functional to avoid re-initialization of the level set, so that gray and color images are segmented quickly and accurately. Experimental results show the effectiveness of the proposed method.
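
    The clustering-based automatic initialization of the level set described above can be sketched as follows. This is a simplified stand-in using 1-D 2-means on pixel intensities; the paper's coefficient K and the CV energy evolution itself are not reproduced:

```python
import numpy as np

def two_means(intensities, iters=20):
    """Simple 1-D 2-means clustering of pixel intensities."""
    centers = np.array([intensities.min(), intensities.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(intensities[:, None] - centers).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = intensities[labels == k].mean()
    return centers

def init_level_set(image):
    """Initialize phi from clustering: +1 inside the bright cluster, -1 outside,
    instead of a manually placed circle as in the plain CV model."""
    centers = two_means(image.ravel())
    threshold = centers.mean()
    return np.where(image > threshold, 1.0, -1.0)

# Synthetic image: a bright square object on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
phi0 = init_level_set(img)
print(int((phi0 > 0).sum()))  # 256 pixels start inside the contour
```

    Starting the curve evolution from such a data-driven phi, rather than an arbitrary circle, is what lets the method converge faster and avoid poor local minima.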

  11. Theory of Dynamic Diagnosis Based on Integrated Maintenance Information

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on the concept of an integrated maintenance information system and the related information environment, this paper discusses in detail the troubleshooting process in modern maintenance and gives a model of dynamic fault diagnosis, in which a reasoning program is designed by taking advantage of information fusion and time analysis. Finally, the authors present the logic of dynamic diagnosis through a typical example and propose a dynamic diagnostic system based on information fusion.

  12. The Informed Guide to Climate Data Sets, a web-based community resource to facilitate the discussion and selection of appropriate datasets for Earth System Model Evaluation

    Science.gov (United States)

    Schneider, D. P.; Deser, C.; Shea, D.

    2011-12-01

    When comparing CMIP5 model output to observations, researchers are faced with a bewildering array of choices. Considering just a few of the different products available for commonly analyzed climate variables: for reanalysis there are at least half a dozen different products, for sea ice concentrations there are NASA Team or Bootstrap versions, for sea surface temperatures there are HadISST or NOAA ERSST data, and for precipitation there are CMAP and GPCP data sets. While many data centers exist to host data, there is little centralized guidance on discovering and choosing appropriate climate data sets for the task at hand. Common strategies like googling "sea ice data" yield results that at best are substantially incomplete. Anecdotal evidence suggests that individual researchers often base their selections on non-scientific criteria: either the data are in a convenient format that the user is comfortable with, a co-worker has the data handy on her local server, or a mentor discourages or recommends the use of particular products for legacy or other non-objective reasons. Sometimes these casual recommendations are sound, but they are not accessible to the broader community or adequately captured in the peer-reviewed literature. These issues are addressed by the establishment of a web-based Informed Guide with the specific goals to (1) evaluate and assess selected climate datasets and (2) provide expert user guidance on the strengths and limitations of selected climate datasets. The Informed Guide is based at NCAR's Climate and Global Dynamics Division, Climate Analysis Section, and is funded by NSF. It is an interactive website that welcomes participation from the broad scientific community and is scalable to grow as participation increases. In this presentation, we will present the website, discuss how you can participate, and address the broader issues about its role in the evaluation of CMIP5 and other climate model simulations.

  13. Workflow management based on information management

    NARCIS (Netherlands)

    Lutters, Diederick; Mentink, R.J.; van Houten, Frederikus J.A.M.; Kals, H.J.J.

    2001-01-01

    In manufacturing processes, the role of the underlying information is of the utmost importance. Based on three different types of integration (function, information and control), as well as the theory of information management and the accompanying information structures, the entire product creation

  14. Predictive modeling of Time-Temperature-Transformation diagram of metallic glasses based on atomistically-informed classical nucleation theory.

    Science.gov (United States)

    Sato, Yuji; Nakai, Chiaki; Wakeda, Masato; Ogata, Shigenobu

    2017-08-03

    Theoretical prediction of the glass forming ability (GFA) of metallic alloys is a key process in exploring metallic alloy compositions with excellent GFA and thus with the ability to form a large-sized bulk metallic glass. Molecular dynamics (MD) simulation is a promising tool for achieving a theoretical prediction. However, direct MD prediction continues to be challenging due to the time-scale limitation of MD. With respect to practical bulk metallic glass alloys, the time necessary for quenching at a typical cooling rate is five or more orders of magnitude longer than the MD time-scale. To overcome the time-scale issue, this study proposes a method combining classical nucleation theory and MD simulations. The method makes it possible to depict the time-temperature-transformation (TTT) diagram of bulk metallic glass alloys, and the TTT diagram directly provides a prediction of the critical cooling rate and GFA. Although the method assumes conventional classical nucleation theory, all the material parameters appearing in the theory are determined by MD simulations using realistic interatomic potentials. The method is used to compute the TTT diagrams and critical cooling rates of two Cu-Zr alloy compositions (Cu50Zr50 and Cu20Zr80). The results indicate that the proposed method reasonably predicts the critical cooling rate based on the computed TTT diagram.
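As an illustration of the combined route described above, the sketch below couples a classical-nucleation-theory nucleation rate with JMAK transformation kinetics to trace a TTT curve and read off a rough critical cooling rate. All material parameters here are hypothetical placeholders, not the MD-derived Cu-Zr values used in the paper.

```python
import math

# Illustrative CNT parameters. In the paper's method these are extracted
# from MD simulations with realistic interatomic potentials; the numbers
# below are invented for demonstration only.
T_M   = 1200.0        # liquidus temperature [K]
DH_V  = 1.2e9         # heat of fusion per unit volume [J/m^3]
GAMMA = 0.08          # crystal-liquid interfacial energy [J/m^2]
K_B   = 1.380649e-23  # Boltzmann constant [J/K]
I0    = 1e39          # nucleation-rate prefactor [m^-3 s^-1]
U0, Q = 1.0, 5e-20    # growth prefactor [m/s], activation energy [J]
X_ONSET = 1e-6        # crystalline fraction defining "transformation"

def nucleation_rate(T):
    """Steady-state CNT nucleation rate I(T) = I0 * exp(-dG*/kT)."""
    dg_v = DH_V * (T_M - T) / T_M                          # driving force [J/m^3]
    dg_star = 16.0 * math.pi * GAMMA**3 / (3.0 * dg_v**2)  # nucleation barrier [J]
    return I0 * math.exp(-dg_star / (K_B * T))

def ttt_time(T):
    """Time to reach X_ONSET at temperature T via JMAK kinetics:
    X(t) = 1 - exp(-(pi/3) * I * u^3 * t^4)."""
    I = nucleation_rate(T)
    u = U0 * math.exp(-Q / (K_B * T))   # thermally activated growth velocity [m/s]
    rate = math.pi / 3.0 * I * u**3
    if rate == 0.0:                     # nucleation suppressed near T_M
        return float("inf")
    return (-math.log(1.0 - X_ONSET) / rate) ** 0.25

temps = range(500, 1160, 10)
times = {T: ttt_time(T) for T in temps}
T_nose = min(times, key=times.get)       # "nose" of the TTT curve
R_c = (T_M - T_nose) / times[T_nose]     # rough critical cooling rate [K/s]
print(T_nose, times[T_nose], R_c)
```

The nose of the curve gives the fastest transformation; a melt quenched faster than roughly (T_M - T_nose)/t_nose bypasses it and freezes into a glass.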

  15. Geo-information Based Spatio-temporal Modeling of Urban Land Use and Land Cover Change in Butwal Municipality, Nepal

    Science.gov (United States)

    Mandal, U. K.

    2014-11-01

    Unscientific utilization of land due to the rapid growth of the urban population deteriorates urban conditions. Urban growth, land use change, and future urban land demand are key concerns of urban planners. This paper aims to model urban land use change, which is essential for sustainable urban development. GI science technology was employed to study urban change dynamics using Markov Chain and CA-Markov models and to predict their magnitude and spatial pattern. The prediction was performed using the probability transition matrix from the Markov chain process, the suitability map of each land use/cover type, and a contiguity filter. Suitability maps were generated from an MCE process, where weights were derived from pairwise comparison in the AHP process, considering slope, land capability, and distance to road, settlement and water bodies as criterion factor maps. Thematic land use/land cover maps of 1999, 2006, and 2013 from Landsat sensors were classified using the MLC algorithm. From 1999 to 2013, the spatial extent of built-up, bush, and forest areas increased by 48.30 percent, 79.48 percent, and 7.79 percent, respectively, while agriculture and water bodies decreased by 30.26 percent and 28.22 percent. The predicted urban LULC for 2020 and 2027 would provide useful inputs to decision makers. Built-up and bush expansion are identified as the main driving forces behind the loss of agriculture and river areas, a trend with the potential to continue in the future. Abandoned river-bed areas have been converted to built-up areas.
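The Markov-chain projection step described above can be sketched as follows: a row-stochastic transition matrix, cross-tabulated from two classified maps, is applied to the current class-area vector to project future land cover. The matrix and areas below are hypothetical, not the Butwal values.

```python
CLASSES = ["built-up", "agriculture", "bush", "water"]
P = [  # P[i][j] = probability that class i converts to class j per period
    [0.95, 0.02, 0.02, 0.01],   # built-up is nearly irreversible
    [0.20, 0.70, 0.08, 0.02],
    [0.10, 0.05, 0.83, 0.02],
    [0.15, 0.05, 0.05, 0.75],
]
area_2013 = [1200.0, 2500.0, 800.0, 300.0]  # hectares per class

def step(areas, P):
    """One Markov transition: new_j = sum_i areas_i * P[i][j]."""
    n = len(P)
    return [sum(areas[i] * P[i][j] for i in range(n)) for j in range(n)]

area_2020 = step(area_2013, P)   # one 7-year interval (2013 -> 2020)
area_2027 = step(area_2020, P)   # two intervals (2013 -> 2027)
print([round(a, 1) for a in area_2027])
```

The CA-Markov refinement then allocates these projected class totals spatially using the suitability maps and the contiguity filter.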

  16. Bayesian information criterion for censored survival models.

    Science.gov (United States)

    Volinsky, C T; Raftery, A E

    2000-03-01

    We investigate the Bayesian Information Criterion (BIC) for variable selection in models for censored survival data. Kass and Wasserman (1995, Journal of the American Statistical Association 90, 928-934) showed that BIC provides a close approximation to the Bayes factor when a unit-information prior on the parameter space is used. We propose a revision of the penalty term in BIC so that it is defined in terms of the number of uncensored events instead of the number of observations. For a simple censored data model, this revision results in a better approximation to the exact Bayes factor based on a conjugate unit-information prior. In the Cox proportional hazards regression model, we propose defining BIC in terms of the maximized partial likelihood. Using the number of deaths rather than the number of individuals in the BIC penalty term corresponds to a more realistic prior on the parameter space and is shown to improve predictive performance for assessing stroke risk in the Cardiovascular Health Study.
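The proposed revision can be written down directly: the penalty multiplies the parameter count by the log of the number of uncensored events d rather than the number of observations n. A minimal sketch, with hypothetical log partial likelihoods and counts:

```python
import math

def bic_survival(log_partial_lik, n_params, n_events):
    """BIC variant for censored survival models: penalize by the number of
    uncensored events d rather than the sample size n.
    BIC = -2 * log PL + k * log(d)."""
    return -2.0 * log_partial_lik + n_params * math.log(n_events)

# Hypothetical Cox-model comparison: 1000 subjects but only 120 deaths.
# Using log(120) instead of log(1000) penalizes each extra covariate
# less harshly, reflecting the information actually carried by the data.
full    = bic_survival(log_partial_lik=-480.2, n_params=5, n_events=120)
reduced = bic_survival(log_partial_lik=-484.9, n_params=4, n_events=120)
print(full, reduced)  # smaller BIC is preferred
```

With these illustrative numbers the fuller model wins; under the conventional n-based penalty the gap would narrow.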

  17. PRISM: a planned risk information seeking model.

    Science.gov (United States)

    Kahlor, LeeAnn

    2010-06-01

    Recent attention on health-related information seeking has focused primarily on information seeking within specific health and health risk contexts. This study attempts to shift some of that focus to individual-level variables that may impact health risk information seeking across contexts. To locate these variables, the researcher posits an integrated model, the Planned Risk Information Seeking Model (PRISM). The model, which treats risk information seeking as a deliberate (planned) behavior, maps variables found in the Theory of Planned Behavior (TPB; Ajzen, 1991) and the Risk Information Seeking and Processing Model (RISP; Griffin, Dunwoody, & Neuwirth, 1999), and posits linkages among those variables. This effort is further informed by Kahlor's (2007) Augmented RISP, the Theory of Motivated Information Management (Afifi & Weiner, 2004), the Comprehensive Model of Information Seeking (Johnson & Meischke, 1993), the Health Information Acquisition Model (Freimuth, Stein, & Kean, 1989), and the Extended Parallel Processing Model (Witte, 1998). The resulting integrated model accounted for 59% of the variance in health risk information-seeking intent and performed better than the TPB or the RISP alone.

  18. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996) global model of information behaviour and proposes major modifications on the basis of research into the information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content and in its graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of external or computerized information services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered a general model applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental), activating mechanisms, the cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it); 3. stress on the fact that the activating mechanisms can also occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  19. XML-based product information processing method for product design

    Science.gov (United States)

    Zhang, Zhen Yu

    2012-01-01

    The design knowledge of modern mechatronic products is based on information processing as the center of knowledge-intensive engineering; thus product design innovation is essentially innovation in knowledge and information processing. Based on an analysis of the role of mechatronic product design knowledge and of information management features, a unified XML-based model of product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based representations of product function elements, product structure elements, and the mapping relationships between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.
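A minimal sketch of such a unified XML product-information document, with function elements, structure elements, and a function-structure mapping, parsed with the standard library. The element and attribute names are illustrative assumptions, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML product-information document: function elements,
# structure elements, and function-structure mappings.
DOC = """
<product name="parallel-friction-roller">
  <functions>
    <function id="F1" name="transmit-torque"/>
    <function id="F2" name="reduce-friction"/>
  </functions>
  <structures>
    <structure id="S1" name="roller-shaft"/>
    <structure id="S2" name="friction-ring"/>
  </structures>
  <mappings>
    <map function="F1" structure="S1"/>
    <map function="F2" structure="S2"/>
  </mappings>
</product>
"""

root = ET.fromstring(DOC)
# Resolve the function-to-structure mapping into readable pairs.
names = {e.get("id"): e.get("name")
         for e in root.iter() if e.tag in ("function", "structure")}
pairs = [(names[m.get("function")], names[m.get("structure")])
         for m in root.find("mappings")]
print(pairs)  # [('transmit-torque', 'roller-shaft'), ('reduce-friction', 'friction-ring')]
```

Keeping function and structure as separate element sets, joined only by the mapping table, is what lets the two kinds of design knowledge evolve independently.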

  20. SEMANTIC TERM BASED INFORMATION RETRIEVAL USING ONTOLOGY

    OpenAIRE

    2014-01-01

    Information searching and retrieval is a challenging task in traditional keyword-based textual information retrieval systems. In the growing information age, with huge amounts of data added every day, the searching problem is also augmented. Keyword-based retrieval systems return bulks of junk documents irrelevant to the query. To address these limitations, this paper proposes using query terms along with semantic terms for information retrieval using multiple ontology references. User query sometimes reflects multiple ...

  1. A qualitative model for temporal reasoning with incomplete information

    Energy Technology Data Exchange (ETDEWEB)

    Geffner, H. [Universidad Simon Bolivar, Caracas (Venezuela)

    1996-12-31

    We develop a qualitative framework for temporal reasoning with incomplete information that features a modeling language based on rules and a semantics based on infinitesimal probabilities. The framework relates logical and probabilistic models, and accommodates in a natural way features that have been found problematic in other models, such as non-determinism, action qualifications, parallel actions, and abduction to actions and fluents.

  2. Information aggregation leakage proof model based on assignment partition

    Institute of Scientific and Technical Information of China (English)

    解文冲; 杨英杰; 汪永伟; 代向东

    2013-01-01

    To solve the problems existing in the BLP (Bell-LaPadula) model, such as information aggregation leakage, excessive privileges of the trusted subject, and the deficiency of integrity, and with reference to the application requirements of hierarchical file protection, an information aggregation leakage proof model named IALP (Information Aggregation Leakage Proof) is proposed based on assignment partition. First of all, the cause of information aggregation leakage and the current research situation are discussed. Secondly, on the basis of assignment partition, the knowable degree of the subject and the information weight of the object are quantified, the concept of the relatively trusted subject is proposed, and security axioms and state transition rules are given. Finally, theoretical proof, application examples and analysis indicate that IALP can control the knowable degree of the subject towards object sets with aggregation leakage relations, and limits the privilege of the trusted subject and enhances integrity to some extent.

  3. Information Audit Based on Image Content Filtering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    At present, network information audit systems are mostly based on text information filtering, but providers of harmful information embed it directly into images or image files in order to avoid being monitored. This paper realizes an information audit system based on image content filtering. Taking pornographic program identification as an example, the system can monitor video containing abnormal human body information by matching texture characteristics with those defined in advance, which consist of contrast, energy, correlation and entropy measures, among others.
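The texture measures named above are the classic gray-level co-occurrence (GLCM) statistics. A minimal sketch, assuming a single pixel offset and a tiny quantized image (correlation is omitted for brevity):

```python
import math

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = len(img), len(img[0])
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in P]

def texture_features(P):
    """Contrast, energy (ASM) and entropy of a normalized GLCM -- the kind
    of texture signature an audit system could match against predefined ones."""
    n = len(P)
    contrast = sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy   = sum(p * p for row in P for p in row)
    entropy  = -sum(p * math.log(p) for row in P for p in row if p > 0)
    return contrast, energy, entropy

# Tiny 4-level example image (gray values 0..3).
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
print(texture_features(glcm(img, levels=4)))
```

In a real system these features would be computed per frame region and compared against reference signatures of the targeted content.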

  4. Study on Medical Service Supply Chain Management Model Based on Information Platform

    Institute of Scientific and Technical Information of China (English)

    王振锋; 郭鹏; 颜功兴

    2012-01-01

    Based on an analysis of the characteristics of medical services, a medical service supply chain management model based on an information platform is established. In this model, suppliers, hospitals and patients communicate effectively through the information platform, realizing demand management, human resources management, medical service capability management, supplier relationship management, medical service quality management and doctor-patient relationship management, as well as links to parties external to the medical service supply chain, such as the health bureau, the food and drug administration, banks and government agencies. An empirical analysis of CR hospital demonstrates the validity and practicability of the model.

  5. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on image statistical information. One aspect neglected by all these methods is that there is much fuzzy information contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory is able to describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good performance. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicability.
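The abstract does not specify the similarity measure, so the sketch below uses one common fuzzy-set choice as an assumption: pixels are read as membership grades in [0, 1], and similarity is the ratio of the fuzzy intersection to the fuzzy union (min over max).

```python
def fuzzy_similarity(a, b):
    """Fuzzy-set similarity of two equal-length image patches whose pixels
    are membership grades in [0, 1]: |A n B| / |A u B| with min/max as the
    fuzzy intersection/union. Returns 1.0 for identical patches."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union

ref   = [0.9, 0.8, 0.1, 0.2]   # reference-map patch (membership grades)
live  = [0.8, 0.7, 0.2, 0.2]   # sensed-image patch, similar to ref
other = [0.1, 0.2, 0.9, 0.8]   # dissimilar patch
print(fuzzy_similarity(ref, live), fuzzy_similarity(ref, other))
```

In a navigation setting the reference patch would be slid over the sensed image and the location maximizing this score taken as the match.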

  6. Topic modelling in the information warfare domain

    CSIR Research Space (South Africa)

    De Waal, A

    2013-11-01

    Full Text Available In this paper the authors provide context for topic modelling as an Information Warfare technique. Topic modelling is a technique that discovers latent topics in unstructured and unlabelled collections of documents. The topic structure can be searched...

  7. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  8. Information Retrieval Interaction: an Analysis of Models

    Directory of Open Access Journals (Sweden)

    Farahnaz Sadoughi

    2012-03-01

    Full Text Available The information searching process is an interactive process; thus users have control over the searching process and can manage the results of the search. In this process, the user's question becomes more mature according to the retrieved results. In addition, on the side of the information retrieval system, there are some processes that cannot be realized except by the user. Practically, this issue is evident in 'interaction', i.e. the process of the user's connection to other system elements, and in 'relevance judgment'. This paper first glances at the existence of 'interaction' in information retrieval. Then the traditional model of information retrieval and its strong and weak points are reviewed. Finally, the current models of interactive information retrieval, including Belkin's episodic model, Ingwersen's cognitive model, Saracevic's stratified model, and Spink's interactive feedback model, are elucidated.

  9. Using factor analysis scales of generalized amino acid information for prediction and characteristic analysis of β-turns in proteins based on a support vector machine model

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper offers a new combined approach to predict and characterize β-turns in proteins. The approach includes two key steps, i.e., how to represent the features of β-turns and how to develop a predictor. The first step is to use factor analysis scales of generalized amino acid information (FASGAI), involving hydrophobicity, alpha and turn propensities, bulky properties, compositional characteristics, local flexibility and electronic properties, to represent the features of β-turns in proteins. The second step is to construct a support vector machine (SVM) predictor of β-turns based on 426 training proteins by a sevenfold cross-validation test. The SVM predictor then predicted β-turns on 547 and 823 proteins, separately, in an external validation test. Our results are compared with the previously best known β-turn prediction methods and are shown to give comparable performance. Most significantly, the SVM model provides some information related to β-turn residues in proteins. The results demonstrate that the present combined approach may be used in the prediction of protein structures.
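The first step, representing residues by descriptor scales, can be sketched as a sliding-window encoding: each residue in a window around the candidate position is replaced by its descriptor vector, and the concatenation is what a classifier such as an SVM would consume. The two-property table below is a hypothetical stand-in; the real FASGAI scales have six factors per amino acid.

```python
# Hypothetical two-factor descriptor table (illustrative values only).
FASGAI = {
    "A": (0.17, -0.23), "G": (-0.34, 0.41), "P": (-0.12, 0.88),
    "S": (-0.05, 0.30), "L": (0.75, -0.51),
}

def encode_window(seq, center, half=2, pad=(0.0, 0.0)):
    """Concatenate descriptors of residues center-half .. center+half,
    padding positions that fall beyond the sequence ends."""
    feats = []
    for i in range(center - half, center + half + 1):
        feats.extend(FASGAI[seq[i]] if 0 <= i < len(seq) else pad)
    return feats

x = encode_window("GPSAL", center=2)
print(len(x), x[:4])  # 5 residues x 2 descriptors = 10 features
```

The resulting fixed-length vectors, one per residue position, form the training matrix for the SVM.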

  10. Use of molecular markers to improve relationship information in the genetic evaluation of beef cattle tick resistance under pedigree-based models.

    Science.gov (United States)

    Junqueira, V S; Cardoso, F F; Oliveira, M M; Sollero, B P; Silva, F F; Lopes, P S

    2017-02-01

    The selection of genetically superior individuals is conditional upon accurate breeding value predictions, which, in turn, are highly dependent on how precisely relationships are represented by the pedigree. For that purpose, the numerator relationship matrix is essential as a priori information in the mixed model equations. The presence of pedigree errors and/or the lack of relationship information affects the genetic gain because it reduces the correlation between the true and estimated breeding values. Thus, this study aimed to evaluate the effects of correcting the pedigree relationships using single-nucleotide polymorphism (SNP) markers on genetic evaluation accuracies for resistance of beef cattle to ticks. Tick count data from Hereford and Braford cattle breeds were used as the phenotype. Genotyping was carried out using a high-density panel (BovineHD - Illumina(®) bead chip with 777 962 SNPs) for sires and the Illumina BovineSNP50 panel (54 609 SNPs) for their progenies. The relationship between the parents and progenies of genotyped animals was evaluated, and mismatches were based on Mendelian conflict counts. Variance components and genetic parameter estimates were obtained using a Bayesian approach via Gibbs sampling, and the breeding values were predicted assuming a repeatability model. A total of 460 corrections in relationship definitions were made (Table 1), corresponding to 1018 (9.5%) tick count records. Among these changes, 97.17% (447) were related to the sire's information, and 2.8% (13) were related to the dam's information. We observed 27.2% (236/868) Mendelian conflicts for sire-progeny genotyped pairs and 14.3% (13/91) for dam-progeny genotyped pairs. We performed 2174 new definitions of half-siblings according to the correlation coefficient between the coancestry and molecular coancestry matrices. It was observed that higher-quality genetic relationships did not result in significant differences in variance component estimates; however, they

  11. Enterprise Human Resources Information Mining Based on Improved Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    Lei He

    2013-05-01

    Full Text Available With the unceasing development of information and technology in today's modern society, enterprises' demand for human resources information mining is getting bigger and bigger. Based on the enterprise human resources information mining situation, this paper puts forward an improved-Apriori-algorithm-based model for enterprise human resources information mining. This model introduces data mining technology and the traditional Apriori algorithm and improves on it, dividing the association rule mining task of the original algorithm into the two subtasks of producing frequent item sets and producing rules, using SQL technology to directly generate frequent item sets, and using the method of establishing charts to extract the information in which customers are interested. The experimental results show that the improved-Apriori-algorithm-based model is more efficient than the original algorithm, and practical application test results show that the improved algorithm is practical and effective.
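The first of the two subtasks, producing frequent item sets, can be sketched as a plain level-wise Apriori search (the paper replaces this in-memory pass with direct SQL generation; the transactions below are hypothetical HR records):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise Apriori search for frequent itemsets: count candidates,
    keep those meeting min_support, then join survivors into the next level."""
    n = len(transactions)
    candidates = {frozenset([i]) for t in transactions for i in t}
    result = {}
    while candidates:
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        frequent = {c for c, k in counts.items() if k / n >= min_support}
        result.update({c: counts[c] / n for c in frequent})
        # Join step: merge frequent k-itemsets into (k+1)-candidates.
        candidates = {a | b for a, b in combinations(frequent, 2)
                      if len(a | b) == len(a) + 1}
    return result

tx = [frozenset(t) for t in (["cert", "sales"], ["cert", "sales", "mba"],
                             ["sales"], ["cert", "mba"])]
fi = frequent_itemsets(tx, min_support=0.5)
print(sorted((sorted(k), v) for k, v in fi.items()))
```

The second subtask, rule generation, then splits each frequent itemset into antecedent and consequent and keeps splits whose confidence exceeds a threshold.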

  12. Multi-source information classification optimization based spare parts demand prediction model

    Institute of Scientific and Technical Information of China (English)

    索海龙; 高建民; 高智勇; 刘元浩

    2015-01-01

    In order to solve the difficult demand prediction problem for the main key spare parts of large power equipment manufacturing and supply enterprises, the multi-source heterogeneous information from multiple departments was trimmed, classified and analyzed, and a spare parts demand prediction model based on multi-source information classification optimization was proposed. This model mainly includes the establishment of the basic spare parts inventory and model optimization based on the customer satisfaction rate, the spare parts reserve strategy and the product service status. The spare parts quantities obtained from the hierarchical optimization prediction model, a time series forecasting method and the enterprise's actual forecasting method were compared and analyzed in an actual example. The model's actual satisfaction rate improved from 90.32% and 98.81%, respectively, to 98.87%. Meanwhile, practical feasibility and economic efficiency were verified for demand prediction of the main key spare parts of large equipment.

  13. A process Approach to Information Services: Information Search Process (ISP Model

    Directory of Open Access Journals (Sweden)

    Hamid Keshavarz

    2010-12-01

    Full Text Available Information seeking is a behaviour emerging out of the interaction between the information seeker and the information system, and should be regarded as an episodic process so as to meet the information needs of users and to take different roles in its different stages. The present article introduces a process approach to information services in libraries using Carol Collier Kuhlthau's model. In this model, information seeking is regarded as a process consisting of six stages, in each of which users have different thoughts, feelings and actions, and librarians correspondingly take different roles at each stage. These six stages are derived from constructivist learning theory based on the uncertainty principle. Despite some acceptable shortcomings, this model may be regarded as a new solution for rendering modern information services in libraries, especially in relation to new information environments and media.

  14. Applying XML for designing and interchanging information for multidimensional model

    Institute of Scientific and Technical Information of China (English)

    Lu Changhui; Deng Su; Zhang Weiming

    2005-01-01

    In order to exchange and share information among the conceptual models of data warehouses, and to build a solid base for the integration and sharing of metadata, a new multidimensional conceptual model is presented based on XML and its DTD is defined, which can describe the various semantic characteristics of the multidimensional conceptual model. According to the UML-based multidimensional conceptual modeling technique, the mapping algorithm between the XML-based multidimensional conceptual model and the UML class diagram is described, and an application base for the wide use of this technique is given.

  15. Probabilistic Modeling in Dynamic Information Retrieval

    OpenAIRE

    Sloan, M. C.

    2016-01-01

    Dynamic modeling is used to design systems that are adaptive to their changing environment and is currently poorly understood in information retrieval systems. Common elements in the information retrieval methodology, such as documents, relevance, users and tasks, are dynamic entities that may evolve over the course of several interactions, which is increasingly captured in search log datasets. Conventional frameworks and models in information retrieval treat these elements as static, or only...

  16. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the issue of application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. Regarding this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer, who strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. Therefore, for such a policy to be effective, it has to be entirely embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of Metropolitan São Paulo City. This sampling was based on the representativeness of these industries, particularly with regard to each one's implementation of information technology in the region. The current study concludes by presenting evidence of the discriminant validity of four key dimensions of the security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items attests not only that the capacity of those industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  17. The Nature of Information Science: Changing Models

    Science.gov (United States)

    Robinson, Lyn; Karamuftuoglu, Murat

    2010-01-01

    Introduction: This paper considers the nature of information science as a discipline and profession. Method: It is based on conceptual analysis of the information science literature, and consideration of philosophical perspectives, particularly those of Kuhn and Peirce. Results: It is argued that information science may be understood as a field of…

  18. FINANCIAL MARKET MODEL WITH INFLUENTIAL INFORMED INVESTORS

    OpenAIRE

    AXEL GRORUD; MONIQUE PONTIER

    2005-01-01

    We develop a financial model with an "influential informed" investor who has additional information and influences asset prices by means of his strategy. The price dynamics are supposed to be driven by a Brownian motion; the informed investor's strategies affect the risky asset trends and the interest rate. Our paper could be seen as an extension of Cuoco and Cvitanic's work [4] since, like these authors, we solve the influential informed investor's optimization problem. But our main result...

  19. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event......-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...... of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  20. The Information Flow Analyzing Based on CPC

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhang; LI Hui

    2005-01-01

    An information flow chart within the product life cycle is given based on collaborative production commerce (CPC) ideas. In this chart, the separate information systems are integrated by means of enterprise knowledge assets, which CPC promotes from production knowledge. The information flow in the R&D process is analyzed in the environment of a virtual R&D group and distributed PDM. In addition, the information flow throughout the manufacturing and marketing process is analyzed in a CPC environment.

  1. GEOGRAPHIC INFORMATION SYSTEM-BASED MODELING AND ANALYSIS FOR SITE SELECTION OF GREEN MUSSEL, Perna viridis, MARICULTURE IN LADA BAY, PANDEGLANG, BANTEN PROVINCE

    Directory of Open Access Journals (Sweden)

    I Nyoman Radiarta

    2011-06-01

    Full Text Available Green mussel is one of the important species cultured in Lada Bay, Pandeglang. To provide necessary guidance for green mussel mariculture development, finding a suitable site is an important step. This study was conducted to identify suitable sites for green mussel mariculture development using geographic information system (GIS) based models. Seven important parameters were grouped into two submodels, namely environmental (water temperature, salinity, suspended solids, dissolved oxygen, and bathymetry) and infrastructural (distance to settlements and pond aquaculture). Constraint data were used to exclude from the suitability maps areas where green mussel mariculture cannot be allowed, including areas of floating net fishing activity and the area near an electricity station. Analyses of factors and constraints indicated that about 31% of the potential area with bottom depth less than 25 m was most suitable, and this area was shown to have ideal conditions for green mussel mariculture in the study region. This study shows that a GIS model is a powerful tool for site selection decision making, and it can be valuable for solving problems at local, regional, and/or continental scales.
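The criterion weighting used in such multi-criteria GIS models is typically derived from an AHP pairwise-comparison matrix. The sketch below approximates the principal eigenvector by row geometric means; the matrix values and the four criteria are hypothetical, not those of this study.

```python
import math

# Hypothetical AHP pairwise-comparison matrix for four site-selection
# criteria (e.g. temperature, salinity, depth, distance to settlement);
# A[i][j] states how much more important criterion i is than criterion j.
A = [
    [1.0, 2.0, 4.0, 4.0],
    [1 / 2, 1.0, 3.0, 3.0],
    [1 / 4, 1 / 3, 1.0, 2.0],
    [1 / 4, 1 / 3, 1 / 2, 1.0],
]

def ahp_weights(A):
    """Approximate AHP priority weights via row geometric means,
    normalized to sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in A]
    s = sum(gm)
    return [g / s for g in gm]

w = ahp_weights(A)
print([round(x, 3) for x in w])  # weights sum to 1, ordered by importance
```

These weights then multiply the standardized factor maps in the MCE overlay, and constraint masks zero out excluded areas.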

  2. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for developing parallel architectures. While the model is data driven, it differs significantly from the data-flow model. The proposed model has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them. The model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify this analysis. The positive results are given as motivation for the research to be taken to the next stage - that is, implementing the model using FPGAs.

  3. Enterprise Information System Architecture Based on Web 2.0

    Institute of Scientific and Technical Information of China (English)

    YI Xiushuang; WANG Yu; LIU Jinghong; WEN Zhankao

    2006-01-01

    Enterprise information systems that make extensive use of Web 2.0 technologies will be more open, free, and efficient. Contrasting classic Web technologies with Web 2.0 technologies, we present a sample enterprise information system based on Web 2.0, and show how the use of Web 2.0 technologies changes the data exchange model of enterprise information systems and improves their efficiency and effectiveness.

  4. Directory of Energy Information Administration models 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This directory revises and updates the Directory of Energy Information Administration Models 1995, DOE/EIA-0293(95), Energy Information Administration (EIA), U.S. Department of Energy, July 1995. Four models have been deleted in this directory as they are no longer being used: (1) Market Penetration Model for Ground-Water Heat Pump Systems (MPGWHP); (2) Market Penetration Model for Residential Rooftop PV Systems (MPRESPV-PC); (3) Market Penetration Model for Active and Passive Solar Technologies (MPSOLARPC); and (4) Revenue Requirements Modeling System (RRMS).

  5. Unified framework for information integration based on information geometry.

    Science.gov (United States)

    Oizumi, Masafumi; Tsuchiya, Naotsugu; Amari, Shun-Ichi

    2016-12-20

    Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner.
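The geometric picture in this abstract can be illustrated in its simplest special case: mutual information is the KL divergence between the actual joint distribution and the "disconnected" model in which the variables are statistically independent. A toy sketch with two binary variables (the full integrated-information measure disconnects causal influences over time, which is not shown here):

```python
import numpy as np

# Actual joint distribution of two binary variables (illustrative numbers).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

# Disconnected approximation: influences between X and Y statistically removed.
q_xy = np.outer(p_x, p_y)

# KL divergence D(p || q) = mutual information I(X;Y), in nats.
mi = np.sum(p_xy * np.log(p_xy / q_xy))
print(mi)
```

Each measure in the paper's hierarchy (transfer entropy, stochastic interaction, integrated information) corresponds to a different choice of which influences are removed in the approximating distribution q.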

  6. Intelligent Agent-Based System for Digital Library Information Retrieval

    Institute of Scientific and Technical Information of China (English)

    师雪霖; 牛振东; 宋瀚涛; 宋丽哲

    2003-01-01

    A new information search model is reported and the design and implementation of a system based on intelligent agent is presented. The system is an assistant information retrieval system which helps users to search what they need. The system consists of four main components: interface agent, information retrieval agent, broker agent and learning agent. They collaborate to implement system functions. The agents apply learning mechanisms based on an improved ID3 algorithm.

  7. Spatial-temporal characteristics of phosphorus in non-point source pollution with grid-based export coefficient model and geographical information system.

    Science.gov (United States)

    Liu, Ruimin; Dong, Guangxia; Xu, Fei; Wang, Xiujuan; He, Mengchang

    2015-01-01

    In this paper, the spatial changes and trends in non-point source (NPS) total phosphorus (TP) pollution were analyzed by land and non-land uses in the Songliao River Basin from 1986 to 2000 (14 years). A grid-based export coefficient model was used in the analysis, based on a geographic information system. The Songliao Basin is divided into four regions: Liaoning Province, Jilin Province (JL), Heilongjiang Province, and the eastern part of the Inner Mongolia (IM) Autonomous Region. Results indicated that the NPS phosphorus load caused by land use and non-land use increased steadily from 3.11×10⁴ tons in 1986 to 3.49×10⁴ tons in 2000. The southeastern region of the Songliao Plain was the most important NPS pollution contributor of all the districts. Although the TP load caused by land use decreased during the studied period in the Songliao River Basin, the contribution of land use to the TP load remained dominant compared to non-land uses. The NPS pollution caused by non-land use steadily increased over the studied period. The IM Autonomous Region and JL Province had the largest mean annual rate of change among all districts (more than 30%). In these areas, livestock and poultry breeding had become one of the most important NPS pollution sources. These areas will need close attention in the future.
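An export coefficient model estimates the load as L = Σᵢ Eᵢ·Aᵢ, summing, for each pollution source i, an export coefficient Eᵢ times the source extent Aᵢ; the grid-based variant evaluates this sum per grid cell. A minimal sketch with made-up coefficients and source sizes (not the study's values):

```python
# Hypothetical TP export coefficients (load per source unit per year)
# and source extents; all numbers are illustrative.
export_coeff = {"cropland": 0.9, "forest": 0.1, "livestock": 0.05, "rural_population": 0.02}
source_size  = {"cropland": 1.2e6, "forest": 3.0e6, "livestock": 5.0e6, "rural_population": 8.0e6}

# Total NPS load: L = sum_i E_i * A_i
load = sum(export_coeff[s] * source_size[s] for s in export_coeff)
print(f"{load:.2e}")
```

Running the same sum on each grid cell, with coefficients assigned from the cell's land use and livestock/population data, yields the spatial load maps analyzed in the study.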

  8. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage, and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS), based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD uses partially dimensioned drawings as auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release, allowing the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments, and initial application to the Upper Stage design completion. Some of the high-value examples are reviewed.

  9. Live sequence charts to model medical information

    Directory of Open Access Journals (Sweden)

    Aslakson Eric

    2012-06-01

    Full Text Available Abstract Background Medical records accumulate data concerning patient health and the natural history of disease progression. However, methods to mine information systematically in a form other than an electronic health record are not yet available. The purpose of this study was to develop an object modeling technique as a first step towards a formal database of medical records. Method Live Sequence Charts (LSCs) were used to formalize the narrative text obtained during a patient interview. LSCs utilize a visual scenario-based programming language to build object models. LSC extends the classical language of UML message sequence charts (MSCs), predominantly through the addition of modalities and by providing executable semantics. Inter-object scenarios were defined to specify natural history event interactions and different scenarios in the narrative text. Result A simulated medical record was specified into LSC formalism by translating the text into an object model that comprised a set of entities and events. The entities described the participating components (i.e., doctor, patient, and record) and the events described the interactions between elements. A conceptual model is presented to illustrate the approach. An object model was generated from data extracted from an actual new patient interview, where the individual was eventually diagnosed as suffering from Chronic Fatigue Syndrome (CFS). This yielded a preliminary formal designated vocabulary for CFS development that provided a basis for future formalism of these records. Conclusions Translation of medical records into object models created the basis for a formal database of the patient narrative that temporally depicts the events preceding disease, the diagnosis, and the treatment approach. The LSC object model of the medical narrative provided an intuitive, visual representation of the natural history of the patient’s disease.

  10. Information model construction of MES oriented to mechanical blanking workshop

    Science.gov (United States)

    Wang, Jin-bo; Wang, Jin-ye; Yue, Yan-fang; Yao, Xue-min

    2016-11-01

    Manufacturing Execution System (MES) is one of the crucial technologies for implementing informatization management in manufacturing enterprises, and the construction of its information model is the basis of MES database development. Based on the analysis of the manufacturing process information in a mechanical blanking workshop and the information requirements of each MES function module, the IDEF1X method was adopted to construct the information model of an MES oriented to the mechanical blanking workshop, and a detailed description of the data structure features of each MES function module and their logical relationships was given from the point of view of information relationships, laying the foundation for the design of the MES database.

  11. Norms, standards, models and recommendations for information security management

    Directory of Open Access Journals (Sweden)

    Karol Kreft

    2010-12-01

    Full Text Available Information is the factor which can decide about the potential and market value of a company. An increase in the value of intellectual capital of an information-driven company requires development of an effective security management system. More and more often companies develop information security management systems (ISMS) based on already verified models. In the article, the main problems with management of information security were discussed. Security models were described, as well as the risk analysis in information security management.

  12. Parsimonious modeling with information filtering networks

    Science.gov (United States)

    Barfuss, Wolfram; Massara, Guido Previde; Di Matteo, T.; Aste, Tomaso

    2016-12-01

    We introduce a methodology to construct parsimonious probabilistic models. This method makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small subparts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust, even for the estimation of inverse covariances of high-dimensional, noisy, and short time series. Applied to financial data, our method is computationally more efficient than state-of-the-art methodologies such as Glasso, producing, in a fraction of the computation time, models that can have equivalent or better performance but with a sparser inference structure. We also discuss performance with sparse factor models, where we notice that relative performance decreases with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamical adaptation by partial updating when the properties of some variables change, without the need to recompute the whole model. This makes the approach particularly suitable for handling big data sets with large numbers of variables. Examples of practical application for forecasting, stress testing, and risk allocation in financial systems are also provided.
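The "simple sum of local inverse covariances" can be sketched as below, assuming a chordal clique/separator decomposition of the filtering network is already given (hard-coded here, whereas the paper derives the network from the data); variable pairs that share no clique get exactly zero precision entries, which is the sparse inference structure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] += X[:, 0]            # induce some dependence
X[:, 3] += X[:, 2]
S = np.cov(X, rowvar=False)   # sample covariance

# Assumed clique/separator structure of the network (illustrative).
cliques, separators = [[0, 1, 2], [1, 2, 3]], [[1, 2]]

J = np.zeros_like(S)          # global sparse inverse covariance estimate
for c in cliques:             # add local low-dimensional inversions
    J[np.ix_(c, c)] += np.linalg.inv(S[np.ix_(c, c)])
for s in separators:          # subtract separators to avoid double counting
    J[np.ix_(s, s)] -= np.linalg.inv(S[np.ix_(s, s)])

print(J[0, 3])  # exactly zero: variables 0 and 3 share no clique
```

Because each inversion touches only a small submatrix, cliques can be processed in parallel and a single clique can be re-inverted when its variables change, which is the partial-updating property mentioned in the abstract.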

  13. Managing Event Information Modeling, Retrieval, and Applications

    CERN Document Server

    Gupta, Amarnath

    2011-01-01

    With the proliferation of citizen reporting, smart mobile devices, and social media, an increasing number of people are beginning to generate information about events they observe and participate in. A significant fraction of this information contains multimedia data to share the experience with their audience. A systematic information modeling and management framework is necessary to capture this widely heterogeneous, schemaless, potentially humongous information produced by many different people. This book is an attempt to examine the modeling, storage, querying, and applications of such an

  14. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using the LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  15. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  16. Agricultural information dissemination using ICTs: A review and analysis of information dissemination models in China

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2016-03-01

    Full Text Available Over the last three decades, China’s agriculture sector has been transformed from traditional to modern practice through the effective deployment of Information and Communication Technologies (ICTs). Information processing and dissemination have played a critical role in this transformation process. Many studies in relation to agriculture information services have been conducted in China, but few of them have attempted to provide a comprehensive review and analysis of different information dissemination models and their applications. This paper aims to review and identify the ICT-based information dissemination models in China and to share the knowledge and experience in applying emerging ICTs in disseminating agriculture information to farmers and farm communities to improve productivity and economic, social, and environmental sustainability. The paper reviews and analyzes the development stages of China’s agricultural information dissemination systems and different mechanisms for agricultural information service development and operations. Seven ICT-based information dissemination models are identified and discussed. Success cases are presented. The findings provide a useful direction for researchers and practitioners in developing future ICT-based information dissemination systems. It is hoped that this paper will also help other developing countries to learn from China’s experience and best practice in their endeavor of applying emerging ICTs in agriculture information dissemination and knowledge transfer.

  17. The asymmetric reactions of mean and volatility of stock returns to domestic and international information based on a four-regime double-threshold GARCH model

    Science.gov (United States)

    Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.

    2006-07-01

    In this paper, we investigate the asymmetric reactions of the mean and volatility of stock returns in five major markets to their own local news and US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between domestic and US stock market information leads to asymmetric reactions of stock returns and their variability. In addition, this research finds that the positive autocorrelation reported in previous studies of financial markets may in fact be mis-specified and actually due to the local market's positive response to the US stock market.
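A four-regime double-threshold GARCH can be sketched as a data-generating process in which the regime at each time step is selected by the signs of two lagged threshold variables (local and US returns), each regime having its own mean and variance parameters. All numbers below are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
# Regimes 0-3 indexed by (domestic news sign, US news sign); illustrative parameters.
phi0   = [0.02, -0.01, 0.01, -0.02]   # regime-specific mean intercepts
alpha0 = [0.01, 0.02, 0.015, 0.03]
alpha1 = [0.05, 0.10, 0.08, 0.15]     # stronger volatility reaction to bad news
beta   = [0.90, 0.85, 0.88, 0.80]

us = rng.normal(size=T)               # stand-in for US market returns
y = np.zeros(T)
a = np.zeros(T)                       # shocks
h = np.full(T, 0.02)                  # conditional variances
for t in range(1, T):
    r = 2 * int(y[t - 1] < 0) + int(us[t - 1] < 0)   # pick regime from lagged signs
    h[t] = alpha0[r] + alpha1[r] * a[t - 1] ** 2 + beta[r] * h[t - 1]
    a[t] = np.sqrt(h[t]) * rng.normal()
    y[t] = phi0[r] + a[t]
print(round(float(y.std()), 3))
```

Estimation in the paper works in the reverse direction, fitting the regime-specific parameters to observed return series; the asymmetry shows up as different alpha/beta values across good-news and bad-news regimes.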

  18. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal-hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude. The model is used to predict the intelligibility of speech signals contaminated by additive noise and potentially non-linearly processed using time-frequency weighting.
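The mse-mutual-information link can be illustrated under a jointly Gaussian assumption, where the mutual information follows from the normalized mse as I = -½ ln(mse/σ²). A toy sketch with scalar Gaussians standing in for critical-band amplitudes (an assumption for illustration, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(0.0, 1.0, n)            # "clean" amplitude
y = x + rng.normal(0.0, 0.5, n)        # noisy observation

# Linear MMSE estimate of x from y: gain = cov(x, y) / var(y)
gain = np.cov(x, y)[0, 1] / np.var(y)
mse = np.mean((x - gain * y) ** 2)

# Gaussian case: mutual information (nats) from the normalized mse
mi = -0.5 * np.log(mse / np.var(x))
print(mi)
```

Smaller normalized mse means higher mutual information, which under the paper's hypothesis maps monotonically to higher predicted intelligibility.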

  19. Vision Based Geo Navigation Information Retreival

    Directory of Open Access Journals (Sweden)

    Asif Khan

    2016-01-01

    Full Text Available In order to derive the three-dimensional camera position from monocular camera vision, a geo-reference database is needed. A floor plan is a ubiquitous geo-reference database that every building refers to during construction and facility maintenance. Compared with other popular geo-reference databases such as geo-tagged photos, the generation, update, and maintenance of a floor plan database does not require costly and time-consuming survey tasks. In vision-based methods, the camera needs special attention. In contrast to other sensors, vision sensors typically yield vast amounts of information that require complex strategies to permit use in real time on computationally constrained platforms. This research work shows that a map-based visual odometry strategy derived from a state-of-the-art structure-from-motion framework is particularly suitable for locally stable, pose-controlled flight. Issues concerning drift and robustness are analyzed and discussed with respect to the original framework. Additionally, various uses of vision-based localization algorithms are proposed here. However, a noteworthy downside of vision-based algorithms is their lack of robustness. Most of the methodologies are sensitive to scene variations (such as season or environment changes) because they utilize the Sum of Squared Differences (SSD). To avoid this, we utilize mutual information, which is highly robust to global and local scene variations. On the other hand, dense methodologies are often subject to drift. Here, we attempt to address this issue by utilizing geo-referenced images. The localization algorithm has been implemented and experimental results are available. Vision sensors possess the potential to extract information about the surrounding environment and determine the locations of features or points of interest.
Having mapped out landmarks in an unknown environment, subsequent observations
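The robustness claim about SSD versus mutual information can be illustrated with a histogram-based MI estimate under a global intensity change (synthetic image data and bin count are illustrative):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two images."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def mutual_info(a, b, bins=16):
    """Histogram-based mutual information estimate (nats)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(float)
inverted = 255.0 - img   # global scene/illumination change

# SSD explodes although the structure is identical; MI stays high.
print(ssd(img, inverted), mutual_info(img, inverted))
```

Because MI depends only on the statistical dependence between intensity values, any one-to-one intensity remapping (inversion, season-like illumination shifts) leaves it essentially unchanged, while SSD treats the remapped image as a mismatch.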

  20. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar

  1. Directory of Energy Information Administration Models 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-06

    This directory contains descriptions of each model, including the title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 35 EIA models active as of May 1, 1993. Models that run on personal computers are identified by "PC" as part of the acronym. EIA is developing new models, a National Energy Modeling System (NEMS), and is making changes to existing models to include new technologies, environmental issues, conservation, and renewables, as well as to extend the forecast horizon. Other parts of the Department are involved in this modeling effort. A fully operational model is planned which will integrate completed segments of NEMS for its first official application - preparation of EIA's Annual Energy Outlook 1994. Abstracts for the new models will be included in next year's version of this directory.

  2. Directory of energy information administration models 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-13

    This updated directory has been published annually; after this issue, it will be published only biennially. The Disruption Impact Simulator Model in use by EIA is included. Model descriptions have been updated according to revised documentation approved during the past year. This directory contains descriptions of each model, including title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included are 37 EIA models active as of February 1, 1995. The first group is the National Energy Modeling System (NEMS) models. The second group is all other EIA models that are not part of NEMS. Appendix A identifies major EIA modeling systems and the models within these systems. Appendix B is a summary of the "Annual Energy Outlook" Forecasting System.

  3. Information Clustering Based on Fuzzy Multisets.

    Science.gov (United States)

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…

  4. Thermodynamic Model of Noise Information Transfer

    Science.gov (United States)

    Hejna, Bohdan

    2008-10-01

    In this paper we apply a certain unifying physical description of the results of Information Theory. Assuming that heat entropy is a thermodynamic realization of information entropy [2], we construct a cyclical, thermodynamic, average-value model of an information transfer chain [3] as a general heat engine, in particular a Carnot engine, reversible or irreversible. A working medium of the cycle (a thermodynamic system transforming input heat energy) can be considered as a thermodynamic, average-value model or, as such, as a realization of an information transfer channel. We show that in a model realized in this way the extended Second Principle of Thermodynamics is valid [2], and we formulate its information form.

  5. A Model for an Electronic Information Marketplace

    Directory of Open Access Journals (Sweden)

    Wei Ge

    2005-11-01

    Full Text Available As the information content on the Internet increases, the task of locating desired information and assessing its quality becomes increasingly difficult. This development causes users to be more willing to pay for information that is focused on specific issues, verifiable, and available upon request. Thus, the nature of the Internet opens up the opportunity for information trading. In this context, the Internet can not only be used to close the transaction, but also to deliver the product - desired information - to the user. Early attempts to implement such business models have fallen short of expectations. In this paper, we discuss the limitations of such practices and present a modified business model for information trading, which uses a reverse auction approach together with a multiple-buyer price discovery process.

  6. Analyzing Traditional Medical Practitioners' Information-Seeking Behaviour Using Taylor's Information-Use Environment Model

    Science.gov (United States)

    Olatokun, Wole Michael; Ajagbe, Enitan

    2010-01-01

    This survey-based study examined the information-seeking behaviour of traditional medical practitioners using Taylor's information use model. Respondents comprised all 160 traditional medical practitioners that treat sickle cell anaemia. Data were collected using an interviewer-administered, structured questionnaire. Frequency and percentage…

  8. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
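The AIC mentioned here takes the simple form AIC = 2k - 2 ln L, with the small-sample correction AICc = AIC + 2k(k+1)/(n-k-1). A brief sketch, with made-up log-likelihoods and parameter counts for illustration:

```python
def aic(log_likelihood, k):
    """Akaike's Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC."""
    return aic(log_likelihood, k) + 2 * k * (k + 1) / (n - k - 1)

# Two hypothetical candidate models fitted to n = 30 observations.
n = 30
candidates = {"simple (k=2)": (-45.2, 2), "complex (k=5)": (-43.9, 5)}
for name, (ll, k) in candidates.items():
    print(name, round(aicc(ll, k, n), 2))
# The lowest AICc identifies the best fit/complexity trade-off.
```

Here the extra likelihood gained by the complex model does not compensate for its added parameters, so the simpler model is selected.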

  9. Complementarity of information sent via different bases

    DEFF Research Database (Denmark)

    Wu, Shengjun; Yu, Sixia; Mølmer, Klaus

    2009-01-01

    We discuss quantitatively the complementarity of information transmitted by a quantum system prepared in a basis state in one out of several different mutually unbiased bases (MUBs). We obtain upper bounds on the information available to a receiver who has no knowledge of which MUB was chosen by the sender. These upper bounds imply a complementarity of information encoded via different MUBs and ultimately ensure the security of quantum key distribution protocols.

  10. A Model for Teaching Information Design

    Science.gov (United States)

    Pettersson, Rune

    2011-01-01

    The author presents his views on the teaching of information design. The starting point includes some general aspects of teaching and learning. The multidisciplinary structure and content of information design as well as the combined practical and theoretical components influence studies of the discipline. Experiences from working with a model for…

  11. Measurements and Information in Spin Foam Models

    CERN Document Server

    Garcia-Islas, J Manuel

    2012-01-01

    We present a problem relating measurements and information theory in spin foam models. In the three dimensional case of quantum gravity we can compute probabilities of spin network graphs and study the behaviour of the Shannon entropy associated to the corresponding information. We present a general definition, compute the Shannon entropy of some examples, and find some interesting inequalities.
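    The entropy computation the abstract refers to reduces, for any discrete probability distribution (such as probabilities assigned to spin network graphs), to the standard Shannon formula. A generic sketch, with illustrative distributions not taken from the paper:

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum_i p_i log2 p_i, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The uniform distribution over n outcomes maximizes H at log2(n);
# a peaked distribution carries less Shannon information.
h_uniform = shannon_entropy([0.25] * 4)          # log2(4) = 2 bits
h_peaked = shannon_entropy([0.7, 0.1, 0.1, 0.1])
```

    Inequalities of the kind studied in the paper compare such entropies across different probability assignments.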

  12. A Model for Teaching Information Design

    Science.gov (United States)

    Pettersson, Rune

    2011-01-01

    The author presents his views on the teaching of information design. The starting point includes some general aspects of teaching and learning. The multidisciplinary structure and content of information design as well as the combined practical and theoretical components influence studies of the discipline. Experiences from working with a model for…

  13. The Information Service Evaluation (ISE) Model

    Directory of Open Access Journals (Sweden)

    Laura Schumann

    2014-06-01

    Full Text Available Information services are an inherent part of our everyday life. Especially since ubiquitous cities are being developed all over the world, their number is increasing even faster. They aim at facilitating the production of information and the access to needed information, and are supposed to make life easier. Until today many different evaluation models (among others, TAM, TAM 2, TAM 3, UTAUT and MATH) have been developed to measure the quality and acceptance of these services. Still, they only consider subareas of the whole concept that an information service represents. As a holistic and comprehensive approach, the ISE Model studies five dimensions that influence adoption, use, impact and diffusion of the information service: information service quality, information user, information acceptance, information environment and time. All these aspects have a great impact on the final grading and on the success (or failure) of the service. Our model combines approaches which study subjective impressions of users (e.g., the perceived service quality) with user-independent, more objective approaches (e.g., the degree of gamification of a system). Furthermore, we adopt results of network economics, especially the "success breeds success" principle.

  14. Information Literacy for Health Professionals: Teaching Essential Information Skills with the Big6 Information Literacy Model

    Science.gov (United States)

    Santana Arroyo, Sonia

    2013-01-01

    Health professionals frequently do not possess the necessary information-seeking abilities to conduct an effective search in databases and Internet sources. Reference librarians may teach health professionals these information and technology skills through the Big6 information literacy model (Big6). This article aims to address this issue. It also…

  15. Education as an Information Based Organization.

    Science.gov (United States)

    Jones, Pamela

    Because information is quickly becoming the key resource in many industries globally, it seems natural to utilize and incorporate it as the primary resource in education, an institution that affects our entire society. A definition of an information-based organization (IBO) is proposed, steps to making an organization an IBO are identified, and a…

  16. Grid Resource Allocation Model Based on Incomplete Information Game

    Institute of Scientific and Technical Information of China (English)

    李明楚; 许雷; 孙伟峰; 陆坤; 郭成

    2012-01-01

    Considering the dynamic, heterogeneous and distributed nature of the grid computing environment, and the problems of low resource utilization and unbalanced benefit in grid resource allocation, this paper proposes a multi-winner grid resource auction model (MWAM) based on microeconomic theory. The contributions of this paper are as follows: first, the study predicts the state of a consumer's bidding price using a hidden Markov model; second, the paper presents the multi-winner auction model using the Nash equilibrium of the incomplete information game, which can enhance resource utilization; third, the condition of dominant-strategy incentive compatibility is analyzed; finally, the paper proves that the profits of both buyers and the seller are maximal. Moreover, the improvement in resource utilization is demonstrated through comparative experiments with other representative algorithms.
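    The multi-winner allocation step can be illustrated with a uniform-price auction: the k highest bidders each win one resource unit and pay the (k+1)-th highest bid, a standard dominant-strategy incentive compatible rule. This is only a hedged sketch of the allocation mechanism, not the paper's full MWAM with hidden Markov bid prediction; the bids below are invented.

```python
def multi_winner_auction(bids: dict, k: int):
    """Allocate k identical resource units: the k highest bidders win,
    each paying the (k+1)-th highest bid (uniform clearing price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:k]]
    # Losing bids set the price, so truthful bidding is a dominant strategy
    price = ranked[k][1] if len(ranked) > k else 0.0
    return winners, price

bids = {"u1": 9.0, "u2": 7.5, "u3": 6.0, "u4": 4.0}
winners, price = multi_winner_auction(bids, k=2)
```

    Because a winner's own bid never sets the price, overstating or understating one's valuation cannot improve the outcome, which is the incentive compatibility property the paper analyzes.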

  17. Decision Making Models Using Weather Forecast Information

    OpenAIRE

    Hiramatsu, Akio; Huynh, Van-Nam; Nakamori, Yoshiteru

    2007-01-01

    The quality of weather forecasts has gradually improved, but weather information such as precipitation forecasts is still uncertain. Meteorologists have studied the use and economic value of weather information, and users have to translate weather information into their most desirable action. To maximize the economic value to users, the decision maker should select the optimum course of action for his company or project, based on an appropriate decision strategy under uncertain situations. In...

  18. Personalized Multimedia Information Retrieval based on User Profile Mining

    Directory of Open Access Journals (Sweden)

    Pengyi Zhang

    2013-10-01

    Full Text Available This paper focuses on how to retrieve personalized multimedia information based on user interest, which can be mined from user profiles. After analyzing the related works, a general structure of the personalized multimedia information retrieval system is given, which combines an online module and an offline module. Firstly, we collect a large-scale set of photos from multimedia information sharing websites. Then, we record the information of the users who upload the multimedia information. For a given user, we save his history data, which describe the multimedia data. Secondly, the relationship between the contents of multimedia data and semantic information is analyzed, and the user interest model is constructed by a modified LDA model which can integrate all the influencing factors in the task of multimedia information retrieval. Thirdly, the query distributions of all the topics are estimated by the proposed modified LDA model. Fourthly, based on the above offline computing process, the online personalized multimedia information ranking algorithm is given, which utilizes the user interest model and the query word. Fifthly, multimedia information retrieval results are obtained using the proposed personalized ranking algorithm. Finally, performance evaluation is conducted by a series of experiments to test the performance of the proposed algorithm compared with other methods on different datasets.
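    The online ranking step described above, which combines a user interest model with query relevance, can be sketched as a weighted blend of topic similarity and query score. The topic vectors, the blending weight alpha, and the scores below are illustrative assumptions, not the paper's modified LDA model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(items, user_topics, query_scores, alpha=0.5):
    """Rank item ids by blending user-interest similarity with query
    relevance. items: {id: topic vector}; query_scores: {id: relevance}."""
    scored = {i: alpha * cosine(t, user_topics) + (1 - alpha) * query_scores[i]
              for i, t in items.items()}
    return sorted(scored, key=scored.get, reverse=True)

# With equal query relevance, the item matching the user's topics wins
ranked = rank({"a": [0.9, 0.1], "b": [0.1, 0.9]},
              user_topics=[0.8, 0.2], query_scores={"a": 0.5, "b": 0.5})
```

    In the paper's setting the topic vectors would come from the offline LDA estimation rather than being supplied by hand.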

  19. Model-based geostatistics

    CERN Document Server

    Diggle, Peter J

    2007-01-01

    Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics and emphasizes on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  20. Full feature data model for spatial information network integration

    Institute of Scientific and Technical Information of China (English)

    DENG Ji-qiu; BAO Guang-shu

    2006-01-01

    To address the difficulty of integrating data with different models in spatial information integration, the characteristics of raster structure, vector structure and mixed models were analyzed, and a hierarchical vector-raster integrative full feature model was put forward by combining the advantages of the vector and raster models and using the object-oriented method. The data structures of the four basic features, i.e. point, line, surface and solid, are described. An application is analyzed and described, along with the characteristics of this model. In this model, all objects in the real world are divided into and described as features with hierarchy, and all the data are organized in vector form. This model can describe data based on feature, field, network and other models, and avoids the inability to integrate data based on different models and to perform spatial analysis on them in spatial information integration.

  1. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others]

    1997-08-01

    This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key to the modelling process is to build a prototype model and use it as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) determining the model's direction based on paleocurrent statistics; (4) estimating sandbody communication by photomosaics and profiles; (6) estimating the distribution of reservoir properties within sandbodies by lithofacies analysis; and (7) building the reservoir model at sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of the Youshashan oil field has been built using this method.

  2. Modelling Hen Harrier Dynamics to Inform Human-Wildlife Conflict Resolution: A Spatially-Realistic, Individual-Based Approach: e112492

    National Research Council Canada - National Science Library

    Johannes P M Heinonen; Stephen C F Palmer; Steve M Redpath; Justin M J Travis

    2014-01-01

      Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations...

  3. Scaling Informal Learning at the Workplace: A Model and Four Designs from a Large-Scale Design-Based Research Effort

    Science.gov (United States)

    Ley, Tobias; Cook, John; Dennerlein, Sebastian; Kravcik, Milos; Kunzmann, Christine; Pata, Kai; Purma, Jukka; Sandars, John; Santos, Patricia; Schmidt, Andreas; Al-Smadi, Mohammad; Trattner, Christoph

    2014-01-01

    Workplace learning happens in the process and context of work, is multi-episodic, often informal, problem based and takes place on a just-in-time basis. While this is a very effective means of delivery, it also does not scale very well beyond the immediate context. We review three types of technologies that have been suggested to scale learning…

  4. Perceived Threat and Corroboration: Key Factors That Improve a Predictive Model of Trust in Internet-based Health Information and Advice

    Science.gov (United States)

    Harris, Peter R; Briggs, Pam

    2011-01-01

    Background How do people decide which sites to use when seeking health advice online? We can assume, from related work in e-commerce, that general design factors known to affect trust in the site are important, but in this paper we also address the impact of factors specific to the health domain. Objective The current study aimed to (1) assess the factorial structure of a general measure of Web trust, (2) model how the resultant factors predicted trust in, and readiness to act on, the advice found on health-related websites, and (3) test whether adding variables from social cognition models to capture elements of the response to threatening, online health-risk information enhanced the prediction of these outcomes. Methods Participants were asked to recall a site they had used to search for health-related information and to think of that site when answering an online questionnaire. The questionnaire consisted of a general Web trust questionnaire plus items assessing appraisals of the site, including threat appraisals, information checking, and corroboration. It was promoted on the hungersite.com website. The URL was distributed via Yahoo and local print media. We assessed the factorial structure of the measures using principal components analysis and modeled how well they predicted the outcome measures using structural equation modeling (SEM) with EQS software. Results We report an analysis of the responses of participants who searched for health advice for themselves (N = 561). Analysis of the general Web trust questionnaire revealed 4 factors: information quality, personalization, impartiality, and credible design. In the final SEM model, information quality and impartiality were direct predictors of trust. However, variables specific to eHealth (perceived threat, coping, and corroboration) added substantially to the ability of the model to predict variance in trust and readiness to act on advice on the site. 
The final model achieved a satisfactory fit: χ²(5) = 10

  5. BIM. Building Information Model. Special issue; BIM. Building Information Model. Themanummer

    Energy Technology Data Exchange (ETDEWEB)

    Van Gelder, A.L.A. [Arta and Consultancy, Lage Zwaluwe (Netherlands); Van den Eijnden, P.A.A. [Stichting Marktwerking Installatietechniek, Zoetermeer (Netherlands); Veerman, J.; Mackaij, J.; Borst, E. [Royal Haskoning DHV, Nijmegen (Netherlands); Kruijsse, P.M.D. [Wolter en Dros, Amersfoort (Netherlands); Buma, W. [Merlijn Media, Waddinxveen (Netherlands); Bomhof, F.; Willems, P.H.; Boehms, M. [TNO, Delft (Netherlands); Hofman, M.; Verkerk, M. [ISSO, Rotterdam (Netherlands); Bodeving, M. [VIAC Installatie Adviseurs, Houten (Netherlands); Van Ravenswaaij, J.; Van Hoven, H. [BAM Techniek, Bunnik (Netherlands); Boeije, I.; Schalk, E. [Stabiplan, Bodegraven (Netherlands)

    2012-11-15

    A series of 14 articles illustrates the various aspects of the Building Information Model (BIM). The essence of BIM is to capture information about the building process and the building product.

  6. Study on Building Lifecycle Information Management Platform Based on BIM

    Directory of Open Access Journals (Sweden)

    Wang-Jian Ping

    2014-01-01

    Full Text Available Building Information Modeling (BIM) and building lifecycle management (BLM), proposed for the realization of building lifecycle information exchange and sharing, play a crucial role in research and development in construction information integration and interoperability. This study, from an information technology point of view and based on BLM, BIM technology and the Industry Foundation Classes (IFC) standard, proposes the concept, framework and realization method of a Building Lifecycle Management Platform (BLMP). This BLMP presents a practical and effective way to realize information creation, exchange, sharing and integrated management for all participants of a construction project.

  7. Models of Financial Market Information Ecology

    Science.gov (United States)

    Challet, Damien

    I discuss a new simple framework that allows a more realistic modelling of speculation. The resulting model features explicit position holding and contagion between predictability patterns, allows for an explicit measure of market inefficiency, and substantiates the use of the minority game to study information ecology in financial markets.

  8. Multi-dimensional indoor location information model

    NARCIS (Netherlands)

    Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.

    2013-01-01

    Aiming at the increasing requirements of seamless indoor and outdoor navigation and location services, a Chinese standard Multidimensional Indoor Location Information Model is being developed, which defines an ontology of indoor location. The model is complementary to 3D concepts like CityGML and

  9. Millennial Students' Mental Models of Information Retrieval

    Science.gov (United States)

    Holman, Lucy

    2009-01-01

    This qualitative study examines first-year college students' online search habits in order to identify patterns in millennials' mental models of information retrieval. The study employed a combination of modified contextual inquiry and concept mapping methodologies to elicit students' mental models. The researcher confirmed previously observed…

  10. Visual Information Processing Based on Qualitative Mapping

    Institute of Scientific and Technical Information of China (English)

    LI Hua; LIU Yongchang; LI Chao

    2007-01-01

    Visual information processing is not only an important research direction in fields such as psychology, neuroscience and artificial intelligence, but also the research basis for biological recognition theory and its technical realization. Existing approaches to visual information processing, e.g. visual information processing oriented to neural computation, visual information processing using shape extraction and wavelets under high noise, and ANN-based visual information processing, are comparatively complex. Using qualitative mapping, this paper describes the specific attributes involved in the course of visual information processing, and the results are briefer and more straightforward, so a software program for visual recognition is probably easier to realize.

  11. An information criterion for marginal structural models.

    Science.gov (United States)

    Platt, Robert W; Brookhart, M Alan; Cole, Stephen R; Westreich, Daniel; Schisterman, Enrique F

    2013-04-15

    Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications.

  12. Molecular model with quantum mechanical bonding information.

    Science.gov (United States)

    Bohórquez, Hugo J; Boyd, Russell J; Matta, Chérif F

    2011-11-17

    The molecular structure can be defined quantum mechanically thanks to the theory of atoms in molecules. Here, we report a new molecular model that reflects quantum mechanical properties of the chemical bonds. This graphical representation of molecules is based on the topology of the electron density at the critical points. The eigenvalues of the Hessian are used for depicting the critical points three-dimensionally. The bond path linking two atoms has a thickness that is proportional to the electron density at the bond critical point. The nuclei are represented according to the experimentally determined atomic radii. The resulting molecular structures are similar to the traditional ball and stick ones, with the difference that in this model each object included in the plot provides topological information about the atoms and bonding interactions. As a result, the character and intensity of any given interatomic interaction can be identified by visual inspection, including the noncovalent ones. Because similar bonding interactions have similar plots, this tool permits the visualization of chemical bond transferability, revealing the presence of functional groups in large molecules.

  13. Optimal Control Design with Limited Model Information

    CERN Document Server

    Farokhi, F; Johansson, K H

    2011-01-01

    We introduce the family of limited model information control design methods, which construct controllers by accessing the plant's model in a constrained way, according to a given design graph. We investigate the achievable closed-loop performance of discrete-time linear time-invariant plants under a separable quadratic cost performance measure with structured static state-feedback controllers. We find the optimal control design strategy (in terms of the competitive ratio and domination metrics) when the control designer has access to the local model information and the global interconnection structure of the plant-to-be-controlled. At last, we study the trade-off between the amount of model information exploited by a control design method and the best closed-loop performance (in terms of the competitive ratio) of controllers it can produce.

  14. Information theory based approaches to cellular signaling.

    Science.gov (United States)

    Waltermann, Christian; Klipp, Edda

    2011-10-01

    Cells interact with their environment and have to react adequately to internal and external changes, such as changes in nutrient composition, physical properties like temperature or osmolarity, and other stresses. More specifically, they must be able to evaluate whether the external change is significant or just in the range of noise. Based on multiple external parameters they have to compute an optimal response. Cellular signaling pathways are considered the major means of information perception and transmission in cells. Here, we review different attempts to quantify information processing at the level of individual cells. We refer to Shannon entropy, mutual information, and informal measures of signaling pathway cross-talk and specificity. Information theory in systems biology has been successfully applied to the identification of optimal pathway structures, to mutual information and entropy as system response in sensitivity analysis, and to the quantification of input and output information. While the study of information transmission within the framework of information theory in technical systems is an advanced field with high impact in engineering and telecommunication, its application to biological objects and processes is still restricted to specific fields such as neuroscience and structural and molecular biology. However, in systems biology, which deals with a holistic understanding of biochemical systems and cellular signaling, a number of examples of the application of information theory have emerged only recently. This article is part of a Special Issue entitled Systems Biology of Microorganisms.
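    A core quantity in such analyses is the mutual information between a pathway's input and output. A minimal sketch from a discrete joint distribution follows; the channel tables are illustrative, not drawn from any specific pathway:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table joint[x][y]."""
    px = [sum(row) for row in joint]        # marginal p(x)
    py = [sum(col) for col in zip(*joint)]  # marginal p(y)
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

noiseless = [[0.5, 0.0], [0.0, 0.5]]        # output copies the input
independent = [[0.25, 0.25], [0.25, 0.25]]  # output ignores the input
```

    A pathway that reliably distinguishes two input states transmits up to 1 bit; noise pushes the transmitted information toward 0.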

  15. Moving Target Information Extraction Based on Single Satellite Image

    Directory of Open Access Journals (Sweden)

    ZHAO Shihu

    2015-03-01

    Full Text Available The spatial and time variant effects in high resolution satellite push-broom imaging are analyzed, and a spatial and time variant imaging model is established. A moving target information extraction method based on a single satellite remote sensing image is proposed. The experiment computes the flying speeds of two airplanes using a ZY-3 multispectral image, and proves the validity of the spatial and time variant model and the moving-target information extraction method.

  16. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-01

    This paper proposes a computational model of memory from the view of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neuron or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; then the bi-modular network is delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  17. Implementation of Web-based Information Systems in Distributed Organizations

    DEFF Research Database (Denmark)

    Bødker, Keld; Pors, Jens Kaaber; Simonsen, Jesper

    2004-01-01

    This article presents results elicited from studies conducted in relation to implementing a web-based information system throughout a large distributed organization. We demonstrate the kind of expectations and conditions for change that management face in relation to open-ended, configurable, and context specific web-based information systems like Lotus QuickPlace. Our synthesis from the empirical findings is related to two recent models, the improvisational change management model suggested by Orlikowski and Hofman (1997), and Gallivan's (2001) model for organizational adoption and assimilation...

  18. Information retrieval models foundations and relationships

    CERN Document Server

    Roelleke, Thomas

    2013-01-01

    Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term-Frequency times Inverse-Document-Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best-Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). Also, the early 2000s saw the arrival of divergence from randomness (DFR).Regarding in

  19. TUNS/TCIS information model/process model

    Science.gov (United States)

    Wilson, James

    1992-01-01

    An Information Model comprises graphical and textual notation suitable for describing and defining the problem domain - in our case, TUNS or TCIS. The model focuses on the real world under study. It identifies what is in the problem and organizes the data into a formal structure for documentation and communication purposes. The Information Model is composed of an Entity Relationship Diagram (ERD) and a Data Dictionary component. The combination of these components provides an easy to understand methodology for expressing the entities in the problem space, the relationships between entities, and the characteristics (attributes) of the entities. This approach is the first step in information system development. The Information Model identifies the complete set of data elements processed by TUNS. This representation provides a conceptual view of TUNS from the perspective of entities, data, and relationships. The Information Model reflects the business practices and real-world entities that users must deal with.

  20. Research on BIM-based Construction Domain Text Information Management

    Directory of Open Access Journals (Sweden)

    Shaohua Jiang

    2013-06-01

    Full Text Available Construction projects produce a large amount of unstructured information throughout the whole lifecycle, most of it text information. Building information modeling (BIM) can support lifecycle information management of construction projects, so BIM-based integrated management of construction domain text information can improve the efficiency and quality of project management to a large extent. The concept of BIM and its implementation platform, as well as the data exchange standard, i.e. the Industry Foundation Classes (IFC), are introduced first. Then this paper puts forward a systematic framework for managing unstructured construction domain text information, and an implementation methodology for BIM-based text information integration: the unstructured text information is transformed into structured information by means of text mining to facilitate information retrieval and ranking; then the text information is classified according to the IFC standard and associated with entities in BIM to realize the integration of text information and BIM. Finally, this paper takes contract documents as an example for verification. The proposed method can improve the text information management ability and efficiency of the construction domain.

  1. Bayesian Case-deletion Model Complexity and Information Criterion.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Chen, Qingxia

    2014-10-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example.

  2. A linguistic model of informed consent.

    Science.gov (United States)

    Marta, J

    1996-02-01

    The current disclosure model of informed consent ignores the linguistic complexity of any act of communication, and the increased risk of difficulties in the special circumstances of informed consent. This article explores, through linguistic analysis, the specificity of informed consent as a speech act, a communication act, and a form of dialogue, following on the theories of J.L. Austin, Roman Jakobson, and Mikhail Bakhtin, respectively. In the proposed model, informed consent is a performative speech act resulting from a series of communication acts which together constitute a dialogic, polyphonic, heteroglossial discourse. It is an act of speech that results in action being taken after a conversation has happened where distinct individuals, multiple voices, and multiple perspectives have been respected, and convention observed and recognized. It is more meaningful and more ethical for both patient and physician, in all their human facets including their interconnectedness.

  3. Optimal information diffusion in stochastic block models

    CERN Document Server

    Curato, Gianbiagio

    2016-01-01

    We use the linear threshold model to study the diffusion of information on a network generated by the stochastic block model. We focus our analysis on a two community structure where the initial set of informed nodes lies only in one of the two communities and we look for optimal network structures, i.e. those maximizing the asymptotic extent of the diffusion. We find that, constraining the mean degree and the fraction of initially informed nodes, the optimal structure can be assortative (modular), core-periphery, or even disassortative. We then look for minimal cost structures, i.e. those such that a minimal fraction of initially informed nodes is needed to trigger a global cascade. We find that the optimal networks are assortative but with a structure very close to a core-periphery graph, i.e. a very dense community linked to a much more sparsely connected periphery.
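
    The setup described above can be sketched in a few lines: a two-block random graph standing in for the stochastic block model, and a deterministic linear threshold cascade seeded in one community. Block sizes, edge probabilities, and the uniform threshold are illustrative assumptions, not the paper's optimized parameters:

```python
import random

def sbm(n1, n2, p_in, p_out, rng):
    """Two-block stochastic block model as an adjacency dict."""
    nodes = list(range(n1 + n2))
    adj = {u: set() for u in nodes}
    for i in nodes:
        for j in nodes:
            if j <= i:
                continue
            same_block = (i < n1) == (j < n1)
            if rng.random() < (p_in if same_block else p_out):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def linear_threshold(adj, seeds, theta=0.5):
    """A node activates when the fraction of its active neighbors reaches theta."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for u in adj:
            if u in active or not adj[u]:
                continue
            if len(adj[u] & active) / len(adj[u]) >= theta:
                active.add(u)
                changed = True
    return active

rng = random.Random(42)
adj = sbm(30, 30, 0.4, 0.05, rng)      # assortative: dense blocks, sparse bridge
seeds = set(range(12))                  # initially informed nodes, all in block 1
final = linear_threshold(adj, seeds, theta=0.3)
```

Varying `p_in` versus `p_out` under a fixed mean degree reproduces the assortative/core-periphery/disassortative comparison the abstract describes.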

  4. SEMANTIC TERM BASED INFORMATION RETRIEVAL USING ONTOLOGY

    Directory of Open Access Journals (Sweden)

    J. Mannar Mannan

    2014-01-01

    Full Text Available Information searching and retrieval is a challenging task in traditional keyword-based textual information retrieval systems. In the growing information age, with huge amounts of data added every day, the search problem is compounded: keyword-based retrieval systems return large numbers of junk documents irrelevant to the query. To address these limitations, this paper proposes using query terms along with semantic terms for information retrieval, drawing on multiple reference ontologies. A user query sometimes reflects multiple domains of interest, which requires collecting semantically related ontologies; if no related ontology exists, the WordNet ontology is used to retrieve semantic terms related to the query term. In this approach, classes in the ontology are derived as semantically related text keywords, and these keywords are used to rank the documents.
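
    The expansion step above can be illustrated with a toy concept map; the hand-made `ONTOLOGY` dictionary is an assumption standing in for the paper's domain ontologies and WordNet, which is enough to show how a query grows with semantically related terms before retrieval:

```python
# Hypothetical concept -> related-concepts map (stand-in for a real ontology).
ONTOLOGY = {
    "car": ["vehicle", "automobile"],
    "vehicle": ["transport"],
    "engine": ["motor", "machine"],
}

def expand(query_terms, ontology, depth=1):
    """Add concepts reachable within `depth` hops of any query term."""
    expanded = set(query_terms)
    frontier = set(query_terms)
    for _ in range(depth):
        frontier = {r for t in frontier for r in ontology.get(t, [])} - expanded
        expanded |= frontier
    return expanded

terms = expand(["car", "engine"], ONTOLOGY)  # one-hop neighbors only
```

A larger `depth` trades precision for recall, which is exactly the tension between relevant expansion terms and junk documents that the abstract raises.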

  5. Information Search Process Model: How Freshmen Begin Research.

    Science.gov (United States)

    Swain, Deborah E.

    1996-01-01

    Investigates Kuhlthau's Search Process Model for information seeking using two Freshmen English classes. Data showed that students followed the six stages Kuhlthau proposed and suggest extensions to the model, including changing the order of the tasks, iterating and combining steps, and revising search goals based on social and interpersonal…

  6. The Sanctuary Model of Trauma-Informed Organizational Change

    Science.gov (United States)

    Bloom, Sandra L.; Sreedhar, Sarah Yanosy

    2008-01-01

    This article features the Sanctuary Model[R], a trauma-informed method for creating or changing an organizational culture. Although the model is based on trauma theory, its tenets have application in working with children and adults across a wide diagnostic spectrum. Originally developed in a short-term, acute inpatient psychiatric setting for…

  7. Image Filtering Based on Improved Information Entropy

    Institute of Scientific and Technical Information of China (English)

    JINGXiaojun; LIUYulin; XIONGYuqing

    2004-01-01

    An image filtering method based on improved information entropy is proposed in this paper, which can overcome the shortcomings of hybrid linear and non-linear filtering algorithms. To address the shortcomings of information entropy in the field of data fusion, we introduce a consistency constraint factor for sub-source reports and a sub-source performance difference parameter, and propose the concept of fusion entropy. Its amendment and regularization of the sub-source decision-making matrix bring into play the competence, redundancy and complementarity of information fusion; suppress and delete faulty and invalid information; strengthen and preserve correct and useful information; overcome the risk of erroneous reporting at single-source critical points and the shortcomings in reliability and error tolerance; and add decision-making criteria for multiple sub-source fusion, finally improving filtering quality. Subsequent experiments show its validity and improved filtering performance, thus providing a new image filtering technique.
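
    The paper's "fusion entropy" is not specified in the abstract; the sketch below shows only the baseline Shannon entropy of a grayscale image histogram that such fusion measures extend:

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a grayscale intensity histogram."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

flat = [0] * 64            # constant image: zero entropy
noisy = list(range(256))   # uniform histogram: maximal entropy, 8 bits
```

Low-entropy regions carry little information to preserve, while high-entropy regions may be detail or noise; a fusion-based scheme adds cross-source consistency terms to tell those apart.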

  8. Information encryption systems based on Boolean functions

    Directory of Open Access Journals (Sweden)

    Aureliu Zgureanu

    2011-02-01

    Full Text Available An information encryption system based on Boolean functions is proposed. Information processing is done using multidimensional matrices and logical operations on these matrices. The high security level of the system rests on the complexity of constructing systems of Boolean functions that depend on many variables (tens or hundreds). Such a system of functions represents the private key; it varies both during the encryption and decryption of information and during the transition from one message to another.
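
    The paper's multidimensional-matrix scheme is not public, so the following is illustrative only: it shows the underlying idea of a secret Boolean function acting as a private key that drives an XOR keystream. The 8-variable function, the counter schedule, and the byte assembly are all assumptions for the sketch:

```python
def bool_f(bits):
    """Hypothetical key function of 8 variables: a mix of AND and XOR terms."""
    x = bits
    return (x[0] & x[1]) ^ (x[2] & x[3]) ^ (x[4] & x[5]) ^ x[6] ^ x[7]

def keystream_byte(f, counter):
    """Build one keystream byte from f evaluated on 8 successive counter values."""
    byte = 0
    for k in range(8):
        bits = [(counter + k) >> i & 1 for i in range(8)]
        byte = (byte << 1) | f(bits)
    return byte

def crypt(data: bytes, f) -> bytes:
    """XOR is an involution, so the same call encrypts and decrypts."""
    return bytes(b ^ keystream_byte(f, 8 * i) for i, b in enumerate(data))

msg = b"information encryption"
ct = crypt(msg, bool_f)
pt = crypt(ct, bool_f)   # recovers msg
```

In the real system the function family is far larger (tens to hundreds of variables), which is what makes reconstructing the key computationally hard.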

  9. Improving information for community-based adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Huq, Saleemul

    2011-10-15

    Community-based adaptation aims to empower local people to cope with and plan for the impacts of climate change. In a world where knowledge equals power, you could be forgiven for thinking that enabling this type of adaptation boils down to providing local people with information. Conventional approaches to planning adaptation rely on 'expert' advice and credible 'science' from authoritative information providers such as the Intergovernmental Panel on Climate Change. But to truly support the needs of local communities, this information needs to be more site-specific, more user-friendly and more inclusive of traditional knowledge and existing coping practices.


  11. Aligning building information model tools and construction management methods

    NARCIS (Netherlands)

    Hartmann, Timo; van Meerveld, H.J.; Vossebeld, N.; Adriaanse, Adriaan Maria

    2012-01-01

    Few empirical studies exist that can explain how different Building Information Model (BIM) based tool implementation strategies work in practical contexts. To help overcoming this gap, this paper describes the implementation of two BIM based tools, the first, to support the activities at an estimat

  12. Synthetic information prediction system for crisis mine based on GIS

    Institute of Scientific and Technical Information of China (English)

    Yuxin Ye; Ping Yu; Shi Wang; Shuisheng Ye

    2006-01-01

    Reserves at some crisis mines are running short now or will do so soon, because of a serious shortage of mineral resource reserves and a crisis in the supporting exploration bases. It is therefore urgent to predict, appraise, develop and utilize replaceable resources for the crisis mines. The synthetic-information mineral resources prediction software system is an intelligent GIS used for quantitative prediction of large-scale synthetic-information mineral targets. It takes the geological body and the mineral resource body as a unit, analyzes ore deposit genesis and metallotects, identifies the spatial distribution laws of ore deposits and ore bodies, and establishes a prospecting model based on the concept of building the three-dimensional space of a mine. This paper primarily discusses the following problems: the secondary development of various kinds of data (including geological, geophysical, geochemical and remote sensing data, etc.); synthetic processing and establishment of the synthetic-information interpretative map base; matching the prospecting model with the synthetic information of the ore deposit; division into statistical units of metallogenic-information synthetic anomalies based on the ore-controlling synthetic-information anomalies, followed by synthetic research on the metallogenic-information variables of each unit and quantitative prediction with a mathematical model suited to the demands of large-scale precision; and, finally, optimization of the target areas for the ore deposit (body).

  13. Mongolian Disaster Information Retrieval Model Based on Ontology

    Institute of Scientific and Technical Information of China (English)

    苏依拉; 窦保媛; 吉亚图

    2016-01-01

    The development of Mongolian-language information processing has been slow, owing to the scarcity of electronic text data and the difficulty of language analysis. To address this problem, a cross-language retrieval model oriented to Mongolian natural disaster information was constructed using Semantic Web ontology technology, with natural disasters as the ontology domain, enabling cross-language retrieval between Mongolian and English natural disaster information. Model tests show that the model achieves a good cross-language retrieval effect. The proposed cross-language retrieval model also has a certain generality and can serve as a reference for similar applications.

  14. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

    Tracing between informal requirements and formal models is challenging. A method for such tracing should permit dealing efficiently with changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we present a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally, and we support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements.

  15. A Cross-sectional Study Assessing Predictors of Essential Medicines Prescribing Behavior Based on Information-motivation-behavioral Skills Model among County Hospitals in Anhui, China

    Institute of Scientific and Technical Information of China (English)

    Yun-Wu Zhao; Jing-Ya Wu; Heng Wang; Nian-Nian Li; Cheng Bian; Shu-Man Xu; Peng Li

    2015-01-01

    Background: The self-consciousness and practicality of preferentially prescribing essential medicines (EMs) are not high enough in county hospitals. The purposes of this study were to use the information-motivation-behavioral skills (IMB) model to identify the predictors of essential medicines prescribing behavior (EMPB) among doctors, and to examine the association between demographic variables, IMB, and EMPB. Methods: A cross-sectional study was carried out to assess predictive relationships among demographic variables and IMB model variables using an anonymous questionnaire administered in nine county hospitals of Anhui province. A structural equation model was constructed for the IMB model to test the instruments using Analysis of Moment Structures 17.0. Results: A total of 732 participants completed the survey. The average age of the participants was 37.7 ± 8.9 years (range: 22-67 years). The correct rate of information was 90.64%. The average scores of motivation and behavioral skills were 45.46 ± 7.34 (hundred-mark system: 75.77) and 19.92 ± 3.44 (hundred-mark system: 79.68), respectively. Approximately half (50.8%) of respondents reported that the proportion of EM prescriptions was below 60%. The final revised model indicated a good fit to the data (χ2/df = 4.146, goodness of fit index = 0.948, comparative fit index = 0.938, root mean square error of approximation = 0.066). More work experience (β = 0.153, P < 0.001) and behavioral skills (β = 0.449, P < 0.001) predicted more EMPB. Higher income predicted less information (β = -0.197, P < 0.001) and motivation (β = -0.204, P < 0.001). Behavioral skills were positively predicted by information (β = 0.135, P < 0.001) and motivation (β = 0.742, P < 0.001). Conclusion: The present study identified some predictors of EMPB and specified the relationships among the model variables. The utilization rate of EMs was not high enough. Motivation and behavioral skills were crucial factors affecting EMPB. The influence of demographic

  16. CORBA Based Information Integration Platform for CIMS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new information integration platform for computer integrated manufacturing systems (CIMS) is presented, which is based on agents and CORBA. CORBA enhances system integration because it is an industry standard for interoperable, distributed objects across heterogeneous hardware and software platforms. Agent technology is used to improve the intelligence of the integration system. To implement the information integration platform, we use a network integration server to integrate the network, design a generic database agent to integrate databases, adopt a multi-agent based architecture to integrate applications, and wrap legacy code as CORBA objects to integrate it.

  17. Informed Systems: Enabling Collaborative Evidence Based Organizational Learning

    Directory of Open Access Journals (Sweden)

    Mary M. Somerville

    2015-12-01

    Full Text Available Objective – In response to unrelenting disruptions in academic publishing and higher education ecosystems, the Informed Systems approach supports evidence based professional activities to make decisions and take actions. This conceptual paper presents two core models, Informed Systems Leadership Model and Collaborative Evidence-Based Information Process Model, whereby co-workers learn to make informed decisions by identifying the decisions to be made and the information required for those decisions. This is accomplished through collaborative design and iterative evaluation of workplace systems, relationships, and practices. Over time, increasingly effective and efficient structures and processes for using information to learn further organizational renewal and advance nimble responsiveness amidst dynamically changing circumstances. Methods – The integrated Informed Systems approach to fostering persistent workplace inquiry has its genesis in three theories that together activate and enable robust information usage and organizational learning. The information- and learning-intensive theories of Peter Checkland in England, which advance systems design, stimulate participants’ appreciation during the design process of the potential for using information to learn. Within a co-designed environment, intentional social practices continue workplace learning, described by Christine Bruce in Australia as informed learning enacted through information experiences. In addition, in Japan, Ikujiro Nonaka’s theories foster information exchange processes and knowledge creation activities within and across organizational units. In combination, these theories promote the kind of learning made possible through evolving and transferable capacity to use information to learn through design and usage of collaborative communication systems with associated professional practices. Informed Systems therein draws from three antecedent theories to create an original

  18. Scope of Building Information Modeling (BIM in India

    Directory of Open Access Journals (Sweden)

    Mahua Mukherjee

    2009-01-01

    Full Text Available Design communication is gradually changing from a 2D basis to an integrated 3D digital interface. Building Information Modeling (BIM) is a model-based design concept in which buildings are built virtually before they get built out in the field, with data models organized for complete integration of all relevant factors in the building lifecycle. BIM also manages the information exchange between the AEC (Architects, Engineers, Contractors) professionals, strengthening the interaction within the design team. BIM is shared knowledge about information for decision making during a building's lifecycle. There is still much to be learned about the opportunities and implications of this tool. This paper deals with a status check of BIM application in India: a survey has been designed to gauge the acceptance of BIM to date, while this application is widely accepted throughout the industry in many countries for managing project information, with capabilities for cost control and facilities management.

  19. The Metric Model of Information Dissemination Based on Social Network

    Institute of Scientific and Technical Information of China (English)

    王玉姣

    2014-01-01

    In the information age, social networks have become the platform on which people create and share information, and the effect and influence of information dissemination in such networks is growing fast. Revealing how information spreads between users is therefore important for grasping and controlling the range of influence of disseminated information in good time. To reveal this law of information dissemination, we built a metric model of social network information dissemination based on the direct social relationship strength P, the potential social relationship strength Q, and the information dissemination capability W, and we describe the core idea of the model's algorithm. By measuring the relationship strength among class members, the potential relationship strength between each pair is calculated, and the resulting data are used to analyze information dissemination in the class network. Simulation results show that the greater the strength of social relations, the greater the probability of information transmission in the network; that is, the network's capacity to transmit information is stronger.
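
    The abstract names the quantities P (direct tie strength), Q (potential tie strength) and W (dissemination capability) but not their formulas; the combinations below are illustrative assumptions, not the paper's definitions, chosen only to show how a pairwise-strength table could yield a transmission measure:

```python
# Hypothetical direct relationship strengths between class members, in [0, 1].
P = {
    ("a", "b"): 0.8, ("b", "c"): 0.6, ("a", "d"): 0.3, ("c", "d"): 0.5,
}

def strength(u, v):
    """Direct tie strength, symmetric, 0 when no tie is recorded."""
    return P.get((u, v)) or P.get((v, u)) or 0.0

def potential_strength(u, v, nodes):
    """Q (assumed form): best two-hop path through a common acquaintance."""
    via = [strength(u, w) * strength(w, v) for w in nodes if w not in (u, v)]
    return max(via, default=0.0)

def dissemination(u, v, nodes):
    """W (assumed form): direct ties dominate; potential ties fill the gaps."""
    return max(strength(u, v), potential_strength(u, v, nodes))

nodes = ["a", "b", "c", "d"]
w_ac = dissemination("a", "c", nodes)  # no direct tie: via b, 0.8 * 0.6 = 0.48
```

Under any such monotone combination, stronger social ties give a larger transmission measure, which matches the qualitative finding reported above.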

  20. Informing mechanistic toxicology with computational molecular models.

    Science.gov (United States)

    Goldsmith, Michael R; Peterson, Shane D; Chang, Daniel T; Transue, Thomas R; Tornero-Velez, Rogelio; Tan, Yu-Mei; Dary, Curtis C

    2012-01-01

    Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo efforts. From a molecular biophysical ansatz, we describe how 3D molecular modeling methods used to numerically evaluate the classical pair-wise potential at the chemical/biological interface can inform mechanism of action and the dose-response paradigm of modern toxicology. With an emphasis on molecular docking, 3D-QSAR and pharmacophore/toxicophore approaches, we demonstrate how these methods can be integrated with chemoinformatic and toxicogenomic efforts into a tiered computational toxicology workflow. We describe generalized protocols in which 3D computational molecular modeling is used to enhance our ability to predict and model the most relevant toxicokinetic, metabolic, and molecular toxicological endpoints, thereby accelerating the computational toxicology-driven basis of modern risk assessment while providing a starting point for rational sustainable molecular design.

  1. Geomagnetic Information Model for the Year 2013

    Directory of Open Access Journals (Sweden)

    Mario Brkić

    2012-12-01

    Full Text Available The finalization of the survey of the Basic Geomagnetic Network of the Republic of Croatia (BGNRC) and the completion of geomagnetic information models for the Institute for Research and Development of Defence Systems of the Ministry of Defence and the State Geodetic Administration (e.g. Brkić M., E. Jungwirth, D. Matika and Ž. Bačić, 2012, Geomagnetic Information and Safety, 3rd Conference of Croatian National Platform for Disaster Risk Reduction, National Protection and Rescue Directorate, Zagreb) were followed in 2012 by confirmation of the validity of the GI2012 predictive model through geomagnetic observations in quiet conditions. The differences between the measured and modelled declination were found to be within the expected errors of the model. It should be pointed out that this was the first successful implementation of night surveying (especially suitable for geomagnetic surveys of airports) in the Republic of Croatia.

  2. Road landslide information management and forecasting system base on GIS.

    Science.gov (United States)

    Wang, Wei Dong; Du, Xiang Gang; Xie, Cui Ming

    2009-09-01

    Given the characteristics of road geological hazards and their supervision, it is very important to develop a Road Landslide Information Management and Forecasting System based on a Geographic Information System (GIS). The paper presents the system objectives, functions, component modules and key techniques in the system development procedure. The system, based on the spatial and attribute information of road geological hazards, was developed and applied in Guizhou, a province of China where landslides are numerous and typical. Using the system, communication managers can visually query all road landslide information based on the regional road network or on the monitoring network of an individual landslide. Furthermore, by integrating mathematical prediction models with GIS's strengths in spatial analysis, the system can assess and predict the landslide development process from field monitoring data. Thus, it can efficiently assist road construction and management units in making decisions to control landslides and reduce human vulnerability.

  3. Five-factor model personality disorder prototypes in a community sample: self- and informant-reports predicting interview-based DSM diagnoses.

    Science.gov (United States)

    Lawton, Erin M; Shields, Andrew J; Oltmanns, Thomas F

    2011-10-01

    The need for an empirically validated, dimensional system of personality disorders is becoming increasingly apparent. While a number of systems have been investigated in this regard, the five-factor model of personality has demonstrated the ability to adequately capture personality pathology. In particular, the personality disorder prototypes developed by Lynam and Widiger (2001) have been tested in a number of samples. The goal of the present study is to extend this literature by validating the prototypes in a large, representative community sample of later middle-aged adults using both self and informant reports. We found that the prototypes largely work well in this age group. Schizoid, Borderline, Histrionic, Narcissistic, and Avoidant personality disorders demonstrate good convergent validity, with a particularly strong pattern of discriminant validity for the latter four. Informant-reported prototypes show similar patterns to self reports for all analyses. This demonstrates that informants are not succumbing to halo representations of the participants, but are rather describing participants in nuanced ways. It is important that informant reports add significant predictive validity for Schizoid, Antisocial, Borderline, Histrionic, and Narcissistic personality disorders. Implications of our results and directions for future research are discussed.

  4. Research on urban land information system based on GIS

    Institute of Scientific and Technical Information of China (English)

    LI Liang-bao; ZOU Zhi-chong

    2006-01-01

    Urban land utilization plays an important role in city development. We establish an Urban Land Information System based on GIS in order to inspect urban land structure and apply utilization models automatically. A series of codes abstracted from the meaning of sustainable urban land utilization are used as measures in land inspection. GIS tools combined with the Urban Land Information System make visible code calculations and statistical results possible. Useful mathematical methods are cited to analyze the degree of urban land sustainability and to optimize land structure. Through scientific system analysis, the relationships among the modules and the system structure are illustrated clearly. As a result, this study sets out the Urban Land Information System model.

  5. A Personalized Information Dissemination System Based on How-Net

    Institute of Scientific and Technical Information of China (English)

    张磊; 杜小勇; 王珊

    2002-01-01

    The information dissemination model is becoming increasingly important in wide-area information systems. In this model, a user subscribes to an information dissemination service by submitting profiles that describe his interests. There have been several simple kinds of information dissemination services on the Internet, such as mailing lists, but the problem is that they provide a crude granularity of interest matching: a user whose information need does not exactly match certain lists will receive either too many irrelevant or too few relevant messages. This paper presents a personalized information dissemination model based on How-Net, which uses a Concept Network-Views (CN-V) model to support information filtering, modeling of users' interests, and information recommendation. A Concept Network is constructed from the user's profiles and the content of documents; it describes concepts and their relations in the content and assigns different weights to these concepts. Usually the Concept Network is not well arranged and useful relations are hard to find in it, so several views are extracted from it to represent the important relations explicitly.

  6. An agent oriented information system: an MDA based development

    Directory of Open Access Journals (Sweden)

    Mohamed Sadgal

    2012-09-01

    Full Text Available Information systems (IS) development should produce not only functional models but also conceptual models that represent the organizational environment in which the system will evolve, and it must be aligned with strategic objectives. A significant innovation in the enterprise is to organize its IS around its business processes. Business models must also be enriched with the agent paradigm, which reduces the complexity of problem solving by structuring knowledge over a set of intelligent agents, associating agents with activities, and supporting collaboration among agents. To do this, we propose an agent-oriented approach based on the model-driven architecture (MDA) for information system development. In its different phases, this approach uses the BPMN language for business process modeling, the AML language for agent modeling, and the JADEX platform for implementation. The IS development is realized by different automated mappings from source models to target models.

  7. Language-based multimedia information retrieval

    OpenAIRE

    De Jong; Gauvain, J.L.; Hiemstra, D; Netter, K.

    2000-01-01

    This paper describes various methods and approaches for language-based multimedia information retrieval, which have been developed in the projects POP-EYE and OLIVE and which will be developed further in the MUMIS project. All of these projects aim at supporting automated indexing of video material by use of human language technologies. Thus, in contrast to image- or sound-based retrieval methods, where both the query language and the indexing methods build on non-linguistic data, these methods...

  8. Semantic-Sensitive Web Information Retrieval Model for HTML Documents

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    With the advent of the Internet, a new era of digital information exchange has begun. Currently, the Internet encompasses more than five billion online sites and this number is increasing exponentially every day. Fundamentally, Information Retrieval (IR) is the science and practice of storing documents and retrieving information from within these documents. Mathematically, IR systems are at the core based on a feature vector model coupled with a term weighting scheme that weights terms in a document according to their significance with respect to the context in which they appear. Practically, the Vector Space Model (VSM), Term Frequency (TF), and Inverse Document Frequency (IDF) are among other long-established techniques employed in mainstream IR systems. However, present IR models only target generic-type text documents, in that they do not consider specific file formats such as HTML web documents. This paper proposes a new semantic-sensitive web information retrieval model for HTML documents. It consists of a...
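
    The long-established baseline named above (VSM with TF-IDF weighting and cosine ranking) can be sketched directly; the toy corpus is illustrative, and the paper's HTML-specific weighting is not reproduced here:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors: term frequency scaled by log inverse document frequency."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["web information retrieval",
        "vector space model",
        "retrieval of web documents"]
vecs, idf = tfidf_vectors(docs)
query = {t: idf.get(t, 0.0) for t in "web retrieval".split()}
ranked = sorted(range(len(docs)), key=lambda i: cosine(query, vecs[i]),
                reverse=True)  # most similar document first
```

A semantic-sensitive model for HTML would additionally weight terms by the markup context they appear in (titles, headings, anchors), replacing the flat `tf[t]` count above.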

  9. Information filtering via collaborative user clustering modeling

    Science.gov (United States)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Yu, Lu; Liu, Chuang; Liu, Hao; Yan, Xiao-Yong

    2014-02-01

    The past few years have witnessed the great success of recommender systems, which can significantly help users find personalized items in the information era. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most research on this topic has focused on mining the direct relationships between users and items. In this paper, we optimize standard MF by integrating a user clustering regularization term, so that our model considers not only the user-item rating information but also the user information. In addition, we compared the proposed model with three other typical methods: User-Mean (UM), Item-Mean (IM), and standard MF. Experimental results on two real-world datasets, MovieLens 1M and MovieLens 100k, show that our method outperforms the other three methods in recommendation accuracy.
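
    The idea can be sketched under assumed notation: standard MF trained by SGD, plus a regularizer `beta * ||p_u - c(u)||^2` pulling each user's factor vector toward the centroid of that user's cluster. The clusters, hyperparameters, and toy ratings are illustrative assumptions (in the paper the clustering is derived from the data, not given):

```python
import random

def train_mf(ratings, clusters, n_users, n_items, k=4, lr=0.02,
             lam=0.05, beta=0.05, epochs=300, seed=0):
    """SGD matrix factorization with a user-clustering regularization term."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        # Centroid of the user factors in each cluster, refreshed every epoch.
        cent = {c: [sum(P[u][f] for u in m) / len(m) for f in range(k)]
                for c, m in clusters.items()}
        for u, i, r in ratings:
            e = r - sum(P[u][f] * Q[i][f] for f in range(k))
            cu = cent[next(c for c, m in clusters.items() if u in m)]
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Extra -beta*(pu - cu[f]) term pulls users toward their cluster.
                P[u][f] += lr * (e * qi - lam * pu - beta * (pu - cu[f]))
                Q[i][f] += lr * (e * pu - lam * qi)
    return P, Q

# Toy data: users 0 and 1 like item 0 and dislike item 1; user 2 the opposite.
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 0, 1), (2, 1, 5)]
clusters = {"fans": [0, 1], "critics": [2]}
P, Q = train_mf(ratings, clusters, n_users=3, n_items=2)
pred = sum(P[0][f] * Q[0][f] for f in range(4))  # should approach rating 5
```

With `beta = 0`, this reduces to the standard MF baseline the paper compares against.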

  10. The Consumer Health Information System Adoption Model.

    Science.gov (United States)

    Monkman, Helen; Kushniruk, Andre W

    2015-01-01

    Derived from overlapping concepts in consumer health, a consumer health information system refers to any of the broad range of applications, tools, and educational resources developed to empower consumers with knowledge, techniques, and strategies, to manage their own health. As consumer health information systems become increasingly popular, it is important to explore the factors that impact their adoption and success. Accumulating evidence indicates a relationship between usability and consumers' eHealth Literacy skills and the demands consumer HISs place on their skills. Here, we present a new model called the Consumer Health Information System Adoption Model, which depicts both consumer eHealth literacy skills and system demands on eHealth literacy as moderators with the potential to affect the strength of relationship between usefulness and usability (predictors of usage) and adoption, value, and successful use (actual usage outcomes). Strategies for aligning these two moderating factors are described.

  11. Asset Condition, Information Systems and Decision Models

    CERN Document Server

    Willett, Roger; Brown, Kerry; Mathew, Joseph

    2012-01-01

    Asset Condition, Information Systems and Decision Models, is the second volume of the Engineering Asset Management Review Series. The manuscripts provide examples of implementations of asset information systems as well as some practical applications of condition data for diagnostics and prognostics. The increasing trend is towards prognostics rather than diagnostics, hence the need for assessment and decision models that promote the conversion of condition data into prognostic information to improve life-cycle planning for engineered assets. The research papers included here serve to support the on-going development of Condition Monitoring standards. This volume comprises selected papers from the 1st, 2nd, and 3rd World Congresses on Engineering Asset Management, which were convened under the auspices of ISEAM in collaboration with a number of organisations, including CIEAM Australia, Asset Management Council Australia, BINDT UK, and Chinese Academy of Sciences, Beijing University of Chemical Technology, Chin...

  12. Engaging Theories and Models to Inform Practice

    Science.gov (United States)

    Kraus, Amanda

    2012-01-01

    Helping students prepare for the complex transition to life after graduation is an important responsibility shared by those in student affairs and others in higher education. This chapter explores theories and models that can inform student affairs practitioners and faculty in preparing students for life after college. The focus is on roles,…

  13. Higher-dimensional modelling of geographic information

    NARCIS (Netherlands)

    Arroyo Ohori, G.A.K.

    2016-01-01

    Our world is three-dimensional and complex, continuously changing over time and appearing different at different scales. Yet, when we model it in a computer using Geographic Information Systems (GIS), we mostly use 2D representations, which essentially consist of linked points, lines and polygons. T

  14. Using Interaction Scenarios to Model Information Systems

    DEFF Research Database (Denmark)

    Bækgaard, Lars; Bøgh Andersen, Peter

    The purpose of this paper is to define and discuss a set of interaction primitives that can be used to model the dynamics of socio-technical activity systems, including information systems, in a way that emphasizes structural aspects of the interaction that occurs in such systems. The primitives...

  15. Hiding Information into Palette-Based Image

    Institute of Scientific and Technical Information of China (English)

    WU Hong-tao; ZHU Bo-cheng; YANG Yi-xian

    2005-01-01

    After pointing out the weakness of known palette-based image information hiding via the palette matrix, a new effective and robust spatial information-hiding algorithm is proposed. It can survive 'select all', 'copy', and 'paste' operations from the cover to the original, resist gentle modification of the palette matrix, and resist format conversion between true-colour and palette-based images. The hiding capacity can reach 25% of the size of the pixel index matrix. For the sake of information-hiding security, an updated algorithm is proposed at the end of the paper, which reduces the capacity but improves the visual effect.
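
    For readers unfamiliar with palette-based hiding, the classic (and much simpler) baseline works on the pixel index matrix: embed one bit per pixel by re-mapping the index to the nearest palette colour whose index parity matches the bit. This sketch is that baseline, not the robust algorithm of the paper, which additionally survives palette modification and format conversion.

```python
import numpy as np

def embed(indices, palette, bits):
    """Hide `bits` in the first len(bits) pixels of a palette image.
    Each pixel's new index has parity == bit, chosen to minimize the
    colour distortion against the pixel's original palette colour."""
    out = indices.copy()
    pal = palette.astype(float)
    for pos, bit in enumerate(bits):
        colour = pal[indices[pos]]
        # candidate palette entries whose index parity encodes the bit
        cand = [i for i in range(len(pal)) if i % 2 == bit]
        out[pos] = min(cand, key=lambda i: float(np.sum((pal[i] - colour) ** 2)))
    return out

def extract(indices, n_bits):
    """Recover the message from index parities."""
    return [int(i % 2) for i in indices[:n_bits]]

# toy 8-colour palette arranged so each even/odd pair is visually close
palette = np.array([[0, 0, 0], [10, 10, 10], [200, 0, 0], [210, 10, 10],
                    [0, 200, 0], [10, 210, 10], [0, 0, 200], [10, 10, 210]])
indices = np.array([0, 2, 4, 6, 1, 3])   # pixel index matrix (flattened)
stego = embed(indices, palette, [1, 0, 1, 0])
```

Parity-based schemes like this break as soon as the palette is reordered, which is precisely the weakness the paper's palette-matrix approach is designed to overcome.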

  16. Process and building information modelling in the construction industry by using information delivery manuals and model view definitions

    DEFF Research Database (Denmark)

    Karlshøj, Jan

    2012-01-01

    The construction industry is gradually increasing its use of structured information and building information modelling. To date, the industry has suffered from the disadvantages of a project-based organizational structure and ad hoc solutions. Furthermore, it is not used to formalizing the flow... of information and specifying exactly which objects and properties are needed for each process and which information is produced by the processes. The present study is based on reviewing the existing methodology of Information Delivery Manuals (IDM) from buildingSMART, which is also an ISO standard (29481... Part 1), and the Model View Definition (MVD) methodology developed by buildingSMART and BLIS. The research also includes a review of concrete IDM development projects that have been developed over the last five years. Although the study has identified interest in the IDM methodology in a number...

  17. Language-based multimedia information retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, J.L.; Hiemstra, Djoerd; Netter, K.

    2000-01-01

    This paper describes various methods and approaches for language-based multimedia information retrieval, which have been developed in the projects POP-EYE and OLIVE and which will be developed further in the MUMIS project. All of these projects aim at supporting automated indexing of video material

  20. Change of Geographic Information Service Model in Mobile Context

    Institute of Scientific and Technical Information of China (English)

    REN Fu; DU Qingyun

    2005-01-01

    Research on how the topic of mobility, which is distinct from but tightly related to space, provides new approaches and methods to promote the further development of geographic information services will accumulate basic experience for related information systems across the wide field of location-based services. This paper analyzes the meaning of mobility and the resulting change in the geographic information service model, and it describes the differences and correlations between mobile GIS (M-GIS) and traditional GIS. It presents a technical framework for geographic information services in a mobile context and provides a case study.