WorldWideScience

Sample records for modeling information based

  1. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelling …

  2. Model transformation based information system modernization

    Directory of Open Access Journals (Sweden)

    Olegas Vasilecas

    2013-03-01

    Full Text Available Information systems become dated increasingly quickly because of the rapidly changing business environment. Usually, small changes are not sufficient to adapt complex legacy information systems to changing business needs. New functionality should be installed with the requirement of putting business data at the smallest possible risk. This paper analyses information system modernization problems and proposes a method for information system modernization. It involves transforming program code into an abstract syntax tree metamodel (ASTM) and model-based transformation from ASTM into a knowledge discovery model (KDM). The method is validated on an example for the SQL language.

  3. Click Model-Based Information Retrieval Metrics

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    In recent years many models have been proposed that are aimed at predicting clicks of web search users. In addition, some information retrieval evaluation metrics have been built on top of a user model. In this paper we bring these two directions together and propose a common approach to converting
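
The abstract above is cut off, but the conversion it describes, building an offline evaluation metric on top of a user click model, has a well-known instance: Expected Reciprocal Rank (ERR), derived from a cascade click model. A minimal sketch, with illustrative relevance grades that are an assumption, not data from the paper:

```python
# Sketch: Expected Reciprocal Rank (ERR), an evaluation metric derived
# from a cascade click model. Grades and max_grade are illustrative.

def err(grades, max_grade=4):
    """ERR for a ranked list of graded relevance judgments."""
    p_reach = 1.0  # probability the simulated user reaches this rank
    score = 0.0
    for rank, g in enumerate(grades, start=1):
        # Probability of satisfaction at this rank (cascade model).
        p_sat = (2 ** g - 1) / (2 ** max_grade)
        score += p_reach * p_sat / rank
        p_reach *= 1.0 - p_sat
    return score
```

A ranking whose first document is maximally relevant, e.g. `err([4, 0, 0])`, scores close to 1, since the modelled user is almost always satisfied at rank 1.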

  4. A Process Model for Goal-Based Information Retrieval

    Directory of Open Access Journals (Sweden)

    Harvey Hyman

    2014-12-01

    Full Text Available In this paper we examine the domain of information search and propose a "goal-based" approach to study search strategy. We describe "goal-based information search" using a framework of Knowledge Discovery. We identify two Information Retrieval (IR) goals using the constructs of Knowledge Acquisition (KA) and Knowledge Explanation (KE). We classify these constructs into two specific information problems: an exploration-exploitation problem and an implicit-explicit problem. Our proposed framework is an extension of prior work in this domain, applying an IR Process Model originally developed for Legal-IR and adapted to Medical-IR. The approach in this paper is guided by the recent ACM-SIG Medical Information Retrieval (MedIR) Workshop definition: "methodologies and technologies that seek to improve access to medical information archives via a process of information retrieval."

  5. GRAMMAR RULE BASED INFORMATION RETRIEVAL MODEL FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Nadana Ravishankar

    2015-07-01

    Full Text Available Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information that is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of the native languages, such as morphology, compound word formation, word spelling variations, ambiguity, synonymy, and influence from other languages. To overcome some of these issues, the native language is modelled using a grammar rule based approach in this work. The advantage of this approach is that the native language is modelled and its unique features are encoded using a set of inference rules. This rule base, coupled with the customized ontological system, shows considerable potential and is found to give better precision and recall.
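
The abstract reports improved precision and recall. As a reminder of what those measures compute, here is a minimal set-based sketch for a single query; the example data are hypothetical:

```python
# Set-based precision and recall for one query's result list.

def precision_recall(retrieved, relevant):
    """Return (precision, recall) given retrieved and relevant doc ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

For a run retrieving documents {1, 2, 3, 4} when {2, 4, 5} are relevant, this gives precision 0.5 and recall 2/3.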

  6. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully used these methods to develop semantic reasoning methods for information models in the HIV and radiological image domains.

  7. Thermodynamics of information processing based on enzyme kinetics: An exactly solvable model of an information pump

    Science.gov (United States)

    Cao, Yuansheng; Gong, Zongping; Quan, H. T.

    2015-06-01

    Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012), 10.1073/pnas.1204263109] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013), 10.1103/PhysRevLett.111.030602], we propose a minimal model of the information pump and the information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information encoded in the bit stream or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving the chemical potential in small systems.

  8. Information-based models for finance and insurance

    Science.gov (United States)

    Hoyle, Edward

    2010-10-01

    In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The `information-based framework' of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure, and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow X_T at a fixed date T. The flow of information about X_T is modelled by an LRB with random terminal value X_T. An explicit expression for the price process is found by working out the discounted conditional expectation of X_T with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.
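
The pricing rule described above can be summarized in one formula. In the abstract's notation, with P_{tT} an assumed discount factor and F_t the natural filtration of the LRB (the indicator term and the choice of measure are standard in the BHM framework; details may differ in the thesis):

```latex
S_t = \mathbf{1}_{\{t < T\}}\, P_{tT}\, \mathbb{E}\!\left[ X_T \mid \mathcal{F}_t \right]
```

That is, the price is the discounted conditional expectation of the single cash flow X_T given the information carried by the LRB up to time t.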

  9. A Novel Fuzzy Document Based Information Retrieval Model for Forecasting

    Directory of Open Access Journals (Sweden)

    Partha Roy

    2017-06-01

    Full Text Available Information retrieval systems are generally used to find documents that are most appropriate according to some query that comes dynamically from users. In this paper a novel Fuzzy Document based Information Retrieval Model (FDIRM) is proposed for the purpose of stock market index forecasting. The novelty of the proposed approach is a modified tf-idf scoring scheme to predict the future trend of the stock market index. The contribution of this paper has two dimensions: (1) in the proposed system the simple time series is converted to an enriched fuzzy linguistic time series with a unique approach of incorporating market-sentiment-related information along with the price, and (2) a unique approach is followed while modeling the information retrieval (IR) system, which converts a simple IR system into a forecasting system. From the performance comparison of FDIRM with standard benchmark models it can be affirmed that the proposed model has the potential to become a good forecasting model. The stock market data provided by Standard & Poor’s CRISIL NSE Index 50 (CNX NIFTY-50) of the National Stock Exchange of India (NSE) is used to experiment with and validate the proposed model. The authentic data for validation and experimentation is obtained from http://www.nseindia.com, the official website of NSE. A Java program is under construction to implement the model in real time with a graphical user interface.
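
FDIRM is built on a modified tf-idf scheme. For orientation, here is the textbook baseline that it modifies (documents as token lists; the paper's modification itself is not reproduced here):

```python
import math

def tf_idf(term, doc, corpus):
    """Textbook tf-idf: term frequency in the document times the log
    inverse document frequency over the corpus. FDIRM modifies this
    scheme; this is only the standard baseline for reference."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf
```

A term that appears in every document gets idf 0, so only terms that discriminate between documents contribute to the score.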

  10. Information system based on the mathematical model of the EPS

    Science.gov (United States)

    Kalimoldayev, Maksat N.; Abdildayeva, Assel A.; Mamyrbayev, Orken Zh.; Akhmetzhanov, Maksat

    2016-11-01

    This article discusses the structure of an information system and the mathematical and information models of electric power systems. Currently, the major application areas include system relaying, data communication systems and automation, automated dispatching and technological management of electric power facilities, as well as computer-aided calculation of energy resources. Automatic control of excitation (ARV) of synchronous machines is one of the most effective ways to ensure the stability of power systems. However, the variety of possible options and modes even in a single grid poses significant obstacles to the development of the best means of ensuring sustainability. Thus, the use of ARVs to ensure stability in some cases may not be sufficient. Therefore, there is a need to develop an information system based on a mathematical model.

  11. A rumor spreading model based on information entropy.

    Science.gov (United States)

    Wang, Chao; Tan, Zong Xuan; Ye, Ye; Wang, Lu; Cheong, Kang Hao; Xie, Neng-Gang

    2017-08-29

    Rumor spreading can have a significant impact on people's lives, distorting scientific facts and influencing political opinions. With technologies that have democratized the production and reproduction of information, the rate at which misinformation can spread has increased significantly, leading many to describe contemporary times as a 'post-truth era'. Research into rumor spreading has primarily been based either on models of social and biological contagion or on models of opinion dynamics. Here we present a comprehensive model that is based on information entropy, which allows for the incorporation of considerations like the role of memory, conformity effects, differences in the subjective propensity to produce distortions, and variations in the degree of trust that people place in each other. Variations in the degree of trust are controlled by a confidence factor β, while the propensity to produce distortions is controlled by a conservation factor K. Simulations were performed using a Barabási-Albert (BA) scale-free network seeded with a single piece of information. The influence of β and K upon the temporal evolution of the system was subsequently analyzed with regard to average information entropy, opinion fragmentation, and the range of rumor spread. These results can aid in decision-making to limit the spread of rumors.
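
The model's central quantity is an information entropy averaged over the network. A minimal sketch of the Shannon entropy of a node's observed opinion distribution (the paper's exact formulation, including memory and conformity terms, may differ):

```python
import math
from collections import Counter

def opinion_entropy(opinions):
    """Shannon entropy (in bits) of the empirical distribution of a
    list of opinion labels, e.g. those held by a node's neighbours."""
    counts = Counter(opinions)
    n = len(opinions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniformly split neighbourhood gives maximal entropy (1 bit for two opinions), while unanimous agreement gives 0, so rising average entropy signals growing fragmentation of opinion.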

  12. Information fusion via isocortex-based Area 37 modeling

    Science.gov (United States)

    Peterson, James K.

    2004-08-01

    A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta concepts from the low order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed out of five cortical stacks in which each stack consists of three vertically oriented six layer isocortex models. The input to output training of each cortical model uses the OCOS (on center - off surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1) which is inherently a recurrent network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models as it is difficult to divorce the sensory processing subsystems from the higher level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architectures are presented for an artificial information processing system which utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.

  13. An information spreading model based on online social networks

    Science.gov (United States)

    Wang, Tao; He, Juanjuan; Wang, Xiaoxia

    2018-01-01

    Online social platforms have become very popular in recent years. In addition to spreading information, users can review or collect information on online social platforms. According to the information spreading rules of online social networks, a new information spreading model, namely the IRCSS model, is proposed in this paper. It includes a sharing mechanism, reviewing mechanism, collecting mechanism and stifling mechanism. Mean-field equations are derived to describe the dynamics of the IRCSS model. Moreover, the steady states of reviewers, collectors and stiflers and the effects of parameters on the peak values of reviewers, collectors and sharers are analyzed. Finally, numerical simulations are performed on different networks. Results show that the collecting and reviewing mechanisms, as well as the connectivity of the network, make information travel wider and faster, and that, compared to the WS and ER networks, the speed of reviewing, sharing and collecting information is highest on the BA network.

  14. Information Sharing In Shipbuilding based on the Product State Model

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    The paper provides a review of product modelling technologies and the overall architecture for the Product State Model (PSM) environment as a basis for how dynamically updated product data can improve control of production activities. Especially, the paper focuses on the circumstances prevailing...... in a one-of-a-kind manufacturing environment like the shipbuilding industry, where product modelling technologies already have proved their worth in the design and engineering phases of shipbuilding and in the operation phase. However, the handling of product information on the shop floor is not yet...

  15. Avian Information Systems: Developing Web-Based Bird Avoidance Models

    Directory of Open Access Journals (Sweden)

    Judy Shamoun-Baranes

    2008-12-01

    Full Text Available Collisions between aircraft and birds, so-called "bird strikes," can result in serious damage to aircraft and even in the loss of lives. Information about the distribution of birds in the air and on the ground can be used to reduce the risk of bird strikes and their impact on operations en route and in and around air fields. Although a wealth of bird distribution and density data is collected by numerous organizations, these data are neither readily available nor interpretable by aviation. This paper presents two national efforts, one in the Netherlands and one in the United States, to develop bird avoidance models for aviation. These models integrate data and expert knowledge on bird distributions and migratory behavior to provide hazard maps in the form of GIS-enabled Web services. Both models are in operational use for flight planning and flight alteration and for airfield and airfield vicinity management. These models and their presentation on the Internet are examples of the type of service that would be very useful in other fields interested in species distribution and movement information, such as conservation, disease transmission and prevention, or assessment and mitigation of anthropogenic risks to nature. We expect that developments in cyber-technology, a transition toward an open source philosophy, and higher demand for accessible biological data will result in an increase in the number of biological information systems available on the Internet.

  16. An Object-Oriented Information Model for Policy-based Management of Distributed Applications

    NARCIS (Netherlands)

    Diaz, G.; Gay, V.C.J.; Horlait, E.; Hamza, M.H.

    2002-01-01

    This paper presents an object-oriented information model to support policy-based management for distributed multimedia applications. The information base contains application-level information about the users, the applications, and their profiles. Our information model is described in detail and

  17. A Model Based on Cocitation for Web Information Retrieval

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2014-01-01

    Full Text Available According to the relationship between authority and cocitation in HITS, we propose a new hyperlink weighting scheme to describe the strength of the relevancy between any two webpages. Then we combine hyperlink weight normalization and random surfing schemes as used in PageRank to justify the new model. In the new model based on cocitation (MBCC), the pages with stronger relevancy are assigned higher values, not just depending on the outlinks. This model combines features of both HITS and PageRank. Finally, we present the results of some numerical experiments, showing that the MBCC ranking agrees with the HITS ranking, especially in the top 10. Meanwhile, MBCC keeps the superiority of PageRank, that is, the existence and uniqueness of ranking vectors.
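
MBCC keeps PageRank's random-surfing structure but swaps in cocitation-based hyperlink weights. A generic power-iteration sketch of the PageRank side (with plain link weights; MBCC's cocitation weighting is not reproduced here):

```python
def rank(weights, damping=0.85, iters=100):
    """PageRank-style power iteration. weights[i][j] is the probability
    of surfing from page j to page i, so each column sums to 1. MBCC
    would replace these weights with its cocitation-based scheme."""
    n = len(weights)
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [(1 - damping) / n +
             damping * sum(weights[i][j] * r[j] for j in range(n))
             for i in range(n)]
    return r
```

The damping term is the "random surfing" component the abstract mentions; it is also what guarantees the existence and uniqueness of the ranking vector that MBCC inherits from PageRank.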

  18. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  19. Model based climate information on drought risk in Africa

    Science.gov (United States)

    Calmanti, S.; Syroka, J.; Jones, C.; Carfagna, F.; Dell'Aquila, A.; Hoefsloot, P.; Kaffaf, S.; Nikulin, G.

    2012-04-01

    The United Nations World Food Programme (WFP) has embarked upon the endeavour of creating a sustainable Africa-wide natural disaster risk management system. A fundamental building block of this initiative is the setup of a drought impact modeling platform called Africa RiskView, which aims to quantify and monitor weather-related food security risk in Africa. The modeling approach is based on the Water Requirement Satisfaction Index (WRSI) as the fundamental indicator of the performance of agriculture and uses historical records of food assistance operations to project future potential needs for livelihood protection. By using climate change scenarios as an input to Africa RiskView it is possible, in principle, to evaluate the future impact of climate variability on critical issues such as food security and the overall performance of the envisaged risk management system. A necessary preliminary step in this challenging task is the exploration of the sources of uncertainty affecting assessments based on modeled climate change scenarios. For this purpose, a limited set of climate models has been selected in order to verify the relevance of using climate model output data with Africa RiskView and to explore a minimal range of possible sources of uncertainty. This first evaluation exercise started before the setup of the CORDEX framework and relied on model output available at the time. In particular, only one regional downscaling was available for the entire African continent, from the ENSEMBLES project. The analysis shows that current coarse-resolution global climate models cannot directly feed into the Africa RiskView risk-analysis tool. However, regional downscaling may help correct the inherent biases observed in the datasets. Further analysis is performed using the first data available under the CORDEX framework.
    In particular, we consider a set of simulations driven with boundary conditions from the ERA-Interim reanalysis to evaluate the skill drought
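
The platform's core indicator, the WRSI, is in essence the fraction of a crop's seasonal water requirement that is actually met. A heavily simplified sketch (operational WRSI is computed step-wise through the growing season with soil-water accounting and crop coefficients, all of which this omits):

```python
def wrsi(actual_et, water_requirement):
    """Simplified seasonal Water Requirement Satisfaction Index: the
    percentage of the crop's seasonal water requirement met by actual
    evapotranspiration, capped at 100. The per-period lists stand in
    for dekadal (10-day) values used operationally."""
    need = sum(water_requirement)
    met = min(sum(actual_et), need)
    return 100.0 * met / need
```

A value near 100 indicates a satisfied crop; progressively lower values indicate increasing drought stress, which is what makes the index usable as a drought-impact trigger.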

  20. The Concept of Data Model Pattern Based on Fully Communication Oriented Information Modeling (FCO-IM)

    Directory of Open Access Journals (Sweden)

    Fazat Nur Azizah

    2013-09-01

    Full Text Available Just as in many areas of software engineering, patterns have been used in data modeling to create high quality data models. We provide a concept of data model pattern based on Fully Communication Oriented Information Modeling (FCO-IM), a fact oriented data modeling method. A data model pattern is defined as the relation between context, problem, and solution. This definition is adopted from the concept of pattern by Christopher Alexander. We define the concept of Information Grammar for Pattern (IGP) in the solution part of a pattern, which works as a template to create a data model. The IGP also shows how a pattern can relate to other patterns. The data model pattern concept is then used to describe 15 data model patterns, organized into 4 categories. A case study on geographical location is provided to show the use of the concept in a real case.

  1. INFORMATION SYSTEM QUALITY INFLUENCE ON ORGANIZATION PERFORMANCE: A MODIFICATION OF TECHNOLOGY-BASED INFORMATION SYSTEM ACCEPTANCE AND SUCCESS MODEL

    Directory of Open Access Journals (Sweden)

    Trisnawati N.

    2017-12-01

    Full Text Available This study aims to examine the effect of information system quality on technology-based accounting information systems usage and its impact on organizational performance in local government. It is based on the Technology Acceptance Model (TAM), the IS Success Model, and work on the success of technology-based information systems, combining previous studies by Seddon and Kiew (1997), Saeed and Helm (2008), and DeLone and McLean (1992). The study used a survey method and took 101 respondents from accounting staff working in the Malang and Mojokerto regencies, and it uses Partial Least Squares to examine the research data. The results show that information system quality affects benefit perception and user satisfaction, and that technology-based accounting information systems usage in local government is influenced by benefit perception and user satisfaction. The study concludes that technology-based accounting information systems usage affects the performance of local government organizations.

  2. Evaluating user interactions with clinical information systems: a model based on human-computer interaction models.

    Science.gov (United States)

    Despont-Gros, Christelle; Mueller, Henning; Lovis, Christian

    2005-06-01

    This article proposes a model for the dimensions involved in user evaluation of clinical information systems (CIS). The model links the dimensions in traditional CIS evaluation and the dimensions from the human-computer interaction (HCI) perspective. In this article, variables are defined as the properties measured in an evaluation, and dimensions are defined as the factors contributing to the values of the measured variables. The proposed model is based on a two-step methodology: (1) a general review of information systems (IS) evaluations to highlight studied variables, existing models and frameworks, and (2) a review of HCI literature to provide the theoretical basis for key dimensions of user evaluation. The review of literature led to the identification of eight key variables, among which satisfaction, acceptance, and success were found to be the most referenced. Among those variables, IS acceptance is a relevant candidate to reflect user evaluation of CIS. While their goals are similar, the fields of traditional CIS evaluation and HCI are not closely connected. Combining those two fields allows for the development of an integrated model that supports summative and comprehensive user evaluation of CIS. All dimensions identified in existing studies can be linked to this model, and such an integrated model could provide a new perspective from which to compare investigations of different CIS.

  3. An Abstraction-Based Data Model for Information Retrieval

    Science.gov (United States)

    McAllister, Richard A.; Angryk, Rafal A.

    Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entry point to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.

  4. Landscape Epidemiology Modeling Using an Agent-Based Model and a Geographic Information System

    Directory of Open Access Journals (Sweden)

    S. M. Niaz Arifin

    2015-05-01

    Full Text Available A landscape epidemiology modeling framework is presented which integrates the simulation outputs from an established spatial agent-based model (ABM) of malaria with a geographic information system (GIS). For a study area in Kenya, five landscape scenarios are constructed with varying coverage levels of two mosquito-control interventions. For each scenario, maps are presented to show the average distributions of three output indices obtained from the results of 750 simulation runs. Hot spot analysis is performed to detect statistically significant hot spots and cold spots. Additional spatial analysis is conducted using ordinary kriging with circular semivariograms for all scenarios. The integration of epidemiological simulation-based results with spatial analysis techniques within a single modeling framework can be a valuable tool for conducting a variety of disease control activities such as exploring new biological insights, monitoring epidemiological landscape changes, and guiding resource allocation for further investigation.

  5. Model of informational system for freight insurance automation based on digital signature

    Directory of Open Access Journals (Sweden)

    Maxim E. SLOBODYANYUK

    2009-01-01

    Full Text Available This article considers a model of an information system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of information flow in the model, and its components (modules) and their functions. It describes a method for calculating the costs of interactive cargo insurance via the proposed system and presents the main characteristics and options of existing transport management systems and conceptual cost models.

  6. Temporal Expectation and Information Processing: A Model-Based Analysis

    Science.gov (United States)

    Jepma, Marieke; Wagenmakers, Eric-Jan; Nieuwenhuis, Sander

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information processing are speeded when subjects use such…

  7. Modeling the Information Age Combat Model: An Agent-Based Simulation of Network Centric Operations

    Science.gov (United States)

    Deller, Sean; Rabadi, Ghaith A.; Bell, Michael I.; Bowling, Shannon R.; Tolk, Andreas

    2010-01-01

    The Information Age Combat Model (IACM) was introduced by Cares in 2005 to contribute to the development of an understanding of the influence of connectivity on force effectiveness that can eventually lead to quantitative prediction and guidelines for design and employment. The structure of the IACM makes it clear that the Perron-Frobenius eigenvalue is a quantifiable metric with which to measure the organization of a networked force. The results of recent experiments presented in Deller et al. (2009) indicate that the value of the Perron-Frobenius eigenvalue is a significant measurement of the performance of an Information Age combat force. This was accomplished through the innovative use of an agent-based simulation to model the IACM and represents an initial contribution towards a new generation of combat models that are net-centric instead of using the current platform-centric approach. This paper describes the intent, challenges, design, and initial results of this agent-based simulation model.

  8. A new model of information behaviour based on the Search Situation Transition schema

    Directory of Open Access Journals (Sweden)

    Nils Pharo

    2004-01-01

    This paper presents a conceptual model of information behaviour. The model is part of the Search Situation Transition method schema. The method schema is developed to discover and analyse the interplay between phenomena traditionally analysed as factors influencing either information retrieval or information seeking. In this paper the focus is on the model's five main categories: the work task, the searcher, the social/organisational environment, the search task, and the search process. In particular, the search process and its sub-categories, search situation and transition, and the relationship between these are discussed. To justify the method schema an empirical study was designed according to the schema's specifications. In the paper a subset of the study is presented analysing the effects of work tasks on Web information searching. Findings from this small-scale study indicate a strong relationship between the work task goal and the level of relevance used for judging resources during search processes.

  9. Theories of learning: models of good practice for evidence-based information skills teaching.

    Science.gov (United States)

    Spring, Hannah

    2010-12-01

    This feature considers models of teaching and learning and how these can be used to support evidence based practice.

  10. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  11. A rule-based backchannel prediction model using pitch and pause information

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a
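    The rule described above is easy to state as code. A minimal sketch, with invented threshold values; the paper's actual pause lengths and pitch-slope criteria are not given in this excerpt:

```python
# Rule-based backchannel (BC) prediction: predict a BC when a
# sufficiently long pause is preceded by a clearly rising or falling
# pitch. MIN_PAUSE and MIN_PITCH_SLOPE are illustrative assumptions.
MIN_PAUSE = 0.5          # seconds of silence required
MIN_PITCH_SLOPE = 10.0   # |Hz per second| counted as rising/falling

def predict_backchannel(pause_duration, pitch_slope):
    """Return True if the rule predicts a backchannel at this point."""
    long_enough = pause_duration >= MIN_PAUSE
    pitch_moves = abs(pitch_slope) >= MIN_PITCH_SLOPE
    return long_enough and pitch_moves

print(predict_backchannel(0.7, -25.0))  # long pause after falling pitch
print(predict_backchannel(0.2, -25.0))  # pause too short
```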

  12. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still scattered and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to the previous maturity evaluation of an organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  13. Russian and Foreign Experience of Integration of Agent-Based Models and Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Konstantin Anatol’evich Gulin

    2016-11-01

    The article provides an overview of the mechanisms of integration of agent-based models and GIS technology developed by Russian and foreign researchers. The basic framework of the article is based on critical analysis of domestic and foreign literature (monographs, scientific articles). The study is based on the application of universal scientific research methods: system approach, analysis and synthesis, classification, systematization and grouping, generalization and comparison. The article presents theoretical and methodological bases of integration of agent-based models and geographic information systems. The concept and essence of agent-based models are explained; their main advantages (compared to other modeling methods) are identified. The paper characterizes the operating environment of agents as a key concept in the theory of agent-based modeling. It is shown that geographic information systems have a wide range of information resources for calculations, searching, and modeling of the real world in various aspects, acting as an effective tool for displaying the agents' operating environment and allowing the model to be brought as close as possible to real conditions. The authors also focus on a wide range of possibilities for various studies in different spatial and temporal contexts. Comparative analysis of platforms supporting the integration of agent-based models and geographic information systems has been carried out. The authors give examples of complex socio-economic models: the model of a creative city, a humanitarian assistance model. In the absence of standards for describing research results, the authors focus on the models' elements such as the characteristics of the agents and their operating environment, agents' behavior, and rules of interaction between the agents and the external environment. The paper describes the possibilities and prospects of implementing these models.

  14. Information driving force and its application in agent-based modeling

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2018-04-01

    Exploring the scientific impact of online big-data has attracted much attention of researchers from different fields in recent years. Complex financial systems are typical open systems profoundly influenced by the external information. Based on the large-scale data in the public media and stock markets, we first define an information driving force, and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric in the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Especially, all the key parameters are determined from the empirical analysis rather than from statistical fitting of the simulation results. With our model, both the stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.

  15. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  16. Stochastic Modeling of Usage Patterns in a Web-Based Information System.

    Science.gov (United States)

    Chen, Hui-Min; Cooper, Michael D.

    2002-01-01

    Uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns, both in rates and in probabilities, in a Web-based information system. Describes search sessions from transaction logs of the University of California's MELVYL library catalog system and discusses sequential dependency. (Author/LRW)
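    The transition-probability part of such a usage model can be estimated directly from session logs. A hedged sketch: the state names and sessions below are invented, and a full semi-Markov model would additionally fit the distribution of time spent in each state:

```python
from collections import defaultdict

# Illustrative session logs: each session is the sequence of states a
# user visited in a web-based catalog (state names are made up here).
sessions = [
    ["search", "results", "record", "results", "quit"],
    ["search", "results", "quit"],
    ["search", "record", "quit"],
]

# Count observed state-to-state transitions ...
counts = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for src, dst in zip(session, session[1:]):   # consecutive state pairs
        counts[src][dst] += 1

# ... and normalize per source state, giving the rows of the embedded
# Markov chain of a (semi-)Markov usage model.
probs = {
    src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
    for src, dsts in counts.items()
}
print(probs["search"])
```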

  17. The impacts of information-sharing mechanisms on spatial market formation based on agent-based modeling.

    Science.gov (United States)

    Li, Qianqian; Yang, Tao; Zhao, Erbo; Xia, Xing'ang; Han, Zhangang

    2013-01-01

    There has been an increasing interest in the geographic aspects of economic development, exemplified by P. Krugman's logical analysis. We show in this paper that the geographic aspects of economic development can be modeled using multi-agent systems that incorporate multiple underlying factors. The extent of information sharing is assumed to be a driving force that leads to economic geographic heterogeneity across locations without geographic advantages or disadvantages. We propose an agent-based market model that considers a spectrum of different information-sharing mechanisms: no information sharing, information sharing among friends and pheromone-like information sharing. Finally, we build a unified model that accommodates all three of these information-sharing mechanisms based on the number of friends who can share information. We find that the no information-sharing model does not yield large economic zones, and more information sharing can give rise to a power-law distribution of market size that corresponds to the stylized fact of city size and firm size distributions. The simulations show that this model is robust. This paper provides an alternative approach to studying economic geographic development, and this model could be used as a test bed to validate the detailed assumptions that regulate real economic agglomeration.

  19. Identification of potential landslide and model optimization based on the earth multi-sensor network information

    Science.gov (United States)

    Jiming, K.; Peifeng, H.; Yun, C.

    2012-12-01

    Judging potential landslides is one of the key issues in landslide forecasting. First, this work takes advantage of multi-temporal, multi-spatial and multi-sensor earth observation networks to obtain critical slope information such as lithology, slope structure, topography and signs of activity. Second, different controlling judgment indicators are selected, and a multi-factor judgment model of potential landslides is established based on the multi-source information. Third, following the patterns of landslide disasters, the changes in topography, disaster conditions and triggering conditions that occur during the gestation of a potential landslide are analyzed. On this basis, a landslide judgment model is built from the controlling factors drawn from the different sources of information. Finally, a typical case, the landslides of the Wenchuan earthquake in China, is studied and used to optimize the potential landslide judgment model. These results can provide a useful reference for landslide prediction and prevention.
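    The multi-factor judgment step can be sketched as a weighted scoring rule. Everything below (indicator names, weights, threshold) is invented for illustration; the paper derives its factors from the multi-sensor observations:

```python
# Each controlling indicator gets a normalized score in [0, 1] and an
# expert weight; the weighted sum is compared against a hazard threshold.
WEIGHTS = {
    "lithology": 0.25,
    "slope_structure": 0.30,
    "topography": 0.20,
    "activity_signs": 0.25,
}

def landslide_index(scores, weights=WEIGHTS):
    """Weighted sum of indicator scores; higher means more hazardous."""
    return sum(weights[k] * scores[k] for k in weights)

def judge(scores, threshold=0.6):
    """Classify a slope as a potential landslide if the index passes the threshold."""
    return landslide_index(scores) >= threshold

site = {"lithology": 0.8, "slope_structure": 0.9,
        "topography": 0.5, "activity_signs": 0.7}
print(round(landslide_index(site), 3))
```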

  20. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    Science.gov (United States)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
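    The Petri net execution semantics underlying such process models is compact: a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs. A minimal sketch with invented place and transition names (XML nets additionally attach structured XML data to the tokens):

```python
# Current marking: number of tokens in each place.
marking = {"order_received": 1, "stock_checked": 0, "order_shipped": 0}

transitions = {
    "check_stock": {"inputs": ["order_received"], "outputs": ["stock_checked"]},
    "ship": {"inputs": ["stock_checked"], "outputs": ["order_shipped"]},
}

def enabled(name, m=marking):
    """A transition is enabled if all its input places hold tokens."""
    return all(m[p] > 0 for p in transitions[name]["inputs"])

def fire(name, m=marking):
    """Consume one token per input place, produce one per output place."""
    if not enabled(name, m):
        raise ValueError(f"transition {name!r} is not enabled")
    for p in transitions[name]["inputs"]:
        m[p] -= 1
    for p in transitions[name]["outputs"]:
        m[p] += 1

fire("check_stock")
fire("ship")
print(marking)
```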

  1. Using ontologies to model human navigation behavior in information networks: A study based on Wikipedia.

    Science.gov (United States)

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F; Musen, Mark A

    The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks.
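    The core of decentralized search is a greedy step: move to whichever neighbor the navigator's background knowledge rates closest to the target. A toy sketch; the network, the concepts and the distance table are all invented here, whereas in OBDS the distances would come from an ontology:

```python
# Small undirected information network (adjacency lists).
graph = {
    "Aspirin": ["Drug", "Headache"],
    "Drug": ["Aspirin", "Chemistry"],
    "Headache": ["Aspirin", "Migraine"],
    "Migraine": ["Headache", "Neurology"],
    "Chemistry": ["Drug"],
    "Neurology": ["Migraine"],
}

# Hypothetical background knowledge: a per-concept distance to the
# target; smaller means "closer" in the navigator's ontology.
distance_to_target = {
    "Aspirin": 3, "Drug": 4, "Headache": 2,
    "Migraine": 1, "Chemistry": 5, "Neurology": 0,
}

def decentralized_search(start, target, max_hops=10):
    """Greedy navigation: always hop to the neighbor rated closest to target."""
    path = [start]
    current = start
    for _ in range(max_hops):
        if current == target:
            return path
        current = min(graph[current], key=distance_to_target.get)
        path.append(current)
    return path  # gave up after max_hops

print(decentralized_search("Aspirin", "Neurology"))
```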

  2. An Elaboration of a Strategic Alignment Model of University Information Systems based on SAM Model

    Directory of Open Access Journals (Sweden)

    S. Ahriz

    2018-02-01

    The information system is a guarantee of a university's ability to anticipate the functions essential to its development and durability. The alignment of the information system, one of the pillars of IT governance, has become a necessity. In this paper, we consider the problem of implementing a strategic alignment model in Moroccan universities. The literature revealed that few studies have examined strategic alignment in the public sector, particularly in higher education institutions. Hence we opted for an exploratory approach that aims at a better understanding of strategic alignment and at evaluating the degree of its use within Moroccan universities. The data, gained primarily through interviews with top managers and IT managers, reveal that alignment is not formalized and that it would be appropriate to implement an alignment model. It is found that implementing our proposed model can help managers maximize returns on IT investment and increase their efficiency.

  3. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  4. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information in its life-cycle requires standardized data exchange protocols. Standard for Exchange of Product Model Data (STEP) is such a standard that has been used widely by the industries. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in data exchange of specific activities for various businesses. DEXs show us it would be possible to organize STEP-based product models in order to support different engineering activities at various stages of product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases for various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret the GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
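    For the tolerance analysis activity, the worst-case computation itself is a short piece of arithmetic. A hedged sketch for a linear one-dimensional stack; the dimensions and tolerances are invented, and real assemblies also require 2-D/3-D stack-ups and GD&T interpretation:

```python
# Worst-case (min/max) tolerance analysis for a linear 1-D stack:
# each dimension contributes nominal +/- tolerance, and the assembly
# gap is the sum of signed contributions.
stack = [
    # (nominal, tolerance, direction): +1 opens the gap, -1 closes it
    (50.0, 0.10, +1),
    (20.0, 0.05, -1),
    (25.0, 0.05, -1),
]

nominal_gap = sum(d * n for n, _, d in stack)
worst_case = sum(t for _, t, _ in stack)   # tolerances add regardless of sign

gap_max = nominal_gap + worst_case
gap_min = nominal_gap - worst_case
print(nominal_gap, gap_min, gap_max)
```

The functional requirement is verified if both `gap_min` and `gap_max` stay within the allowed range.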

  5. New Challenges for the Management of the Development of Information Systems Based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    has been initiated with the scope of investigating the questions that mathematical modelling technology poses to traditional information systems development projects. Based on the past body of research, this study proposes a framework to guide decision making for managing projects of information...... systems development. In a presented case the indications of the model are compared with the decisions taken during the development. The results highlight discrepancies between the structure and predictions of the model and the case observations, especially with regard to the importance given to the users......’ skills in the development process. Further observations also indicate that flexibility and adaptability, based on grounded theory, are valuable tools when information systems development involves a new technology....

  6. Does the Model of Evaluation Based on Fair Value Answer the Requests of Financial Information Users?

    OpenAIRE

    Mitea Neluta; Sarac Aldea Laura

    2010-01-01

    Does the model of evaluation based on fair value answer the requests of financial information users? The financial statements have as their purpose the presentation of information concerning the enterprise's financial position, its performance and the modifications of this position which, according to IASB and FASB, must be credible and useful. Both referentials maintain the existence of several conventions regarding assessment, like historical cost, actual cost, the realizable value or act...

  7. Modeling and Security Threat Assessments of Data Processed in Cloud Based Information Systems

    Directory of Open Access Journals (Sweden)

    Darya Sergeevna Simonenkova

    2016-03-01

    The subject of the research is the modeling and assessment of security threats to data processed in cloud based information systems (CBIS). The method allows one to determine the current security threats of a CBIS, the states of the system in which vulnerabilities exist, the level of possible violators and the security properties, and to generate recommendations for neutralizing the security threats of a CBIS.

  8. Sustainable Manufacturing via Multi-Scale, Physics-Based Process Modeling and Manufacturing-Informed Design

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-04-01

    This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.

  9. A Model for Web-based Information Systems in E-Retailing.

    Science.gov (United States)

    Wang, Fang; Head, Milena M.

    2001-01-01

    Discusses the use of Web-based information systems (WIS) by electronic retailers to attract and retain consumers and deliver business functions and strategy. Presents an abstract model for WIS design in electronic retailing; discusses customers, business determinants, and business interface; and suggests future research. (Author/LRW)

  10. Applying an expectancy-value model to study motivators for work-task based information seeking

    DEFF Research Database (Denmark)

    Sigaard, Karen Tølbøl; Skov, Mette

    2015-01-01

    Purpose: The purpose of this paper is to operationalise and verify a cognitive motivation model that has been adapted to information seeking. The original model was presented within the field of psychology. Design/methodology/approach: An operationalisation of the model is presented based on the theory of expectancy-value and on the operationalisation used when the model was first developed. Data for the analysis were collected from a sample of seven informants working as consultants in Danish municipalities. Each participant filled out a questionnaire, kept a log book for a week...... for interpersonal and internal sources increased when the task had high-value motivation or low-expectancy motivation or both. Research limitations/implications: The study is based on a relatively small sample and considers only one motivation theory. This should be addressed in future research along......

  11. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    Science.gov (United States)

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available and simple assumptions are often used in atmospheric models. As a contribution to improve knowledge on emission heights this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of the probably most comprehensive database of real-world stack information existing in Europe based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
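    The bottom-up calculation mentioned above combines stack height with plume rise. A sketch using the widely cited Briggs formula for buoyant plume rise under neutral conditions; the input values are invented, not drawn from the German stack database:

```python
# Effective emission height = physical stack height + buoyant plume
# rise (Briggs formula, neutral conditions). All inputs illustrative.
g = 9.81  # m/s^2

def buoyancy_flux(radius, exit_velocity, t_stack, t_ambient):
    """Briggs buoyancy flux parameter F in m^4/s^3 (temperatures in K)."""
    return g * exit_velocity * radius**2 * (t_stack - t_ambient) / t_stack

def effective_height(stack_height, radius, exit_velocity,
                     t_stack, t_ambient, wind_speed, downwind_x):
    """Stack height plus Briggs rise 1.6 * F^(1/3) * x^(2/3) / u."""
    f = buoyancy_flux(radius, exit_velocity, t_stack, t_ambient)
    rise = 1.6 * f**(1.0 / 3.0) * downwind_x**(2.0 / 3.0) / wind_speed
    return stack_height + rise

h = effective_height(stack_height=100.0, radius=2.0, exit_velocity=15.0,
                     t_stack=420.0, t_ambient=288.0, wind_speed=5.0,
                     downwind_x=1000.0)
print(round(h, 1))
```

This makes concrete why effective heights differ so strongly between source types: flue gas temperature and flow rate enter through the buoyancy flux.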

  12. The education of medical librarians in evidence-based information services: a model of active learning

    Directory of Open Access Journals (Sweden)

    Huriye Çolaklar

    2013-01-01

    Evidence-based practice stems from clinical approaches that were used in late 18th- and early 19th-century medicine. This area is also new in Turkey, and lessons on evidence-based practice are needed in its Departments of Information and Records Management. Examining examples from various other countries, this paper presents a model for including evidence-based information services, which are based on research done in the fields of health and medicine and especially dentistry, within the contents of existing courses in librarianship education in Turkey. The paper depicts the aims and fields of use of evidence-based information services and their contribution to active learning; examines how this subject is taught in various other countries and its place in Turkey; and presents a model for the improvement of this education. It is argued that educating the librarians who will provide evidence-based information services, whether through special practices within existing courses or through optional courses designed especially for this aim, will contribute considerably to active learning in dentistry.

  13. Information, Meaning and Eigenforms: In the Light of Sociology, Agent-Based Modeling and AI

    Directory of Open Access Journals (Sweden)

    Manfred Füllsack

    2012-08-01

    The paper considers the relation of Shannon-type information to those semantic and hermeneutic aspects of communication which are often referred to as meaning. It builds on considerations of Talcott Parsons, Niklas Luhmann and Robert K. Logan and relates them to an agent-based model that reproduces key aspects of the Talking Head experiment by Luc Steels. The resulting insights seem to give reason to regard information and meaning not as qualitatively different entities, but as interrelated forms of order that emerge in the interaction of autonomous (self-referentially closed) agents. Although at first sight this way of putting information and meaning into a constructivist framework seems to open possibilities to conceive meaning in terms of Shannon-information, it also suggests a re-conceptualization of information in terms of what cybernetics calls Eigenform in order to do justice to its dynamic interrelation with meaning.

  14. Minority persistence in agent based model using information and emotional arousal as control variables

    Science.gov (United States)

    Sobkowicz, Pawel

    2013-07-01

    We present a detailed analysis of the behavior of an agent-based model of opinion formation, using a discrete variant of cusp catastrophe behavior for single agents. The agent's opinion about a particular issue is determined by its information about the issue and its emotional arousal. It is possible that for agitated agents the same information would lead to different opinions. This results in nontrivial individual opinion dynamics. The agents communicate via messages, which allows direct application of the model to ICT-based communities. We study the dependence of the composition of an agent society on the range of interactions and the rate of emotional arousal. Despite the minimal number of adjustable parameters, the model reproduces several phenomena observed in real societies, for example the nearly perfectly balanced results of some highly contested elections or the fact that minorities seldom perceive themselves to be a minority.
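    The discrete cusp idea, one opinion rule for calm agents and a hysteretic one for agitated agents, can be sketched in a few lines. The thresholds here are invented, not the paper's parameter values:

```python
# Toy discrete-cusp opinion update: a calm agent's opinion follows the
# sign of its information, while an agitated agent keeps its previous
# opinion unless the information is strong enough to flip it.
AROUSAL_THRESHOLD = 0.5
FLIP_THRESHOLD = 0.8

def update_opinion(prev_opinion, information, arousal):
    """Return an opinion in {-1, 0, +1}."""
    if arousal < AROUSAL_THRESHOLD:
        # Calm regime: opinion tracks information directly.
        return (information > 0) - (information < 0)
    # Agitated regime: hysteresis; weak information cannot flip the agent.
    if abs(information) >= FLIP_THRESHOLD:
        return 1 if information > 0 else -1
    return prev_opinion

print(update_opinion(-1, 0.3, 0.2))  # calm agent follows weak positive info
print(update_opinion(-1, 0.3, 0.9))  # agitated agent keeps its opinion
```

Hysteresis in the agitated regime is what lets the same information produce different opinions, depending on the agent's history.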

  15. Analysis of the Effect of Information System Quality on Intention to Reuse of Employee Management Information System (SIMPEG) Based on the Information Systems Success Model

    Directory of Open Access Journals (Sweden)

    Suryanto Tri Lathif Mardi

    2016-01-01

    This study examines the effect of Information Quality, System Quality and Service Quality on users' intention to reuse an Employee Management Information System (SIMPEG) at universities in the city of Surabaya, based on the theoretical foundation of the DeLone and McLean Information Systems Success (ISS) model. A questionnaire was distributed to 120 employees of different universities by means of stratified random sampling. The results showed that: (1) there is a significant positive effect of System Quality on Information Quality; (2) there is a significant positive effect of Information Quality on Intention to Reuse, related to the fulfillment of the users' needs; (3) there is a significant positive effect of System Quality on Intention to Reuse, related to the fulfillment of the users' needs; (4) there is no effect of Service Quality on Intention to Reuse. In the end, the results of this study provide analysis and advice to university officials that can be used as a consideration for Information Technology/Information System investment and development in accordance with the Information Systems Success and Intention to Reuse model.

  16. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure.

  17. Model of Wikipedia growth based on information exchange via reciprocal arcs

    Science.gov (United States)

    Zlatić, V.; Štefančić, H.

    2011-03-01

    We show how reciprocal arcs significantly influence the structural organization of Wikipedias, online encyclopedias. It is shown that random addition of reciprocal arcs in the static network cannot explain the observed reciprocity of Wikipedias. A model of Wikipedia growth based on preferential attachment and on information exchange via reciprocal arcs is presented. An excellent agreement between in-degree distributions of our model and real Wikipedia networks is achieved without fitting the distributions, but by merely extracting a small number of model parameters from the measurement of real networks.
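    The growth mechanism, preferential attachment plus probabilistic reciprocation, can be sketched as a short simulation. The reciprocity parameter below is invented rather than extracted from a real Wikipedia network:

```python
import random

# Toy growth model in the spirit of the abstract: each new article links
# to an existing one chosen preferentially by in-degree, and with
# probability RECIPROCITY the linked article links back.
random.seed(42)
RECIPROCITY = 0.3
N = 2000

arcs = {(0, 1), (1, 0)}   # seed: two mutually linked articles
in_degree = [1, 1]

for new in range(2, N):
    # Preferential attachment: pick a target weighted by in-degree + 1.
    target = random.choices(range(new), weights=[d + 1 for d in in_degree])[0]
    in_degree.append(0)
    arcs.add((new, target))
    in_degree[target] += 1
    if random.random() < RECIPROCITY:   # information flows back
        arcs.add((target, new))
        in_degree[new] += 1

# Fraction of arcs whose reverse arc also exists (reciprocity).
reciprocal = sum((b, a) in arcs for (a, b) in arcs)
print(reciprocal / len(arcs))
```

With reciprocation probability p, roughly a fraction 2p/(1+p) of all arcs end up in reciprocal pairs, which random arc addition in a static network cannot reproduce.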

  18. Development, implementation and evaluation of an information model for archetype based user responsive medical data visualization.

    Science.gov (United States)

    Kopanitsa, Georgy; Veseli, Hasan; Yampolsky, Vladimir

    2015-06-01

    When medical data have been successfully recorded or exchanged between systems, there appears a need to present the data consistently to ensure that they are clearly understood and interpreted. A standards-based user interface can provide interoperability on the visual level. The goal of this research was to develop, implement and evaluate an information model for building user interfaces for archetype-based medical data. The following types of knowledge were identified as important elements and were included in the information model: medical content-related attributes, data type-related attributes, user-related attributes, and device-related attributes. In order to support flexible and efficient user interfaces, an approach was chosen that represents different types of knowledge with different models, separating the medical concept from the visual concept and the interface realization. We evaluated the developed approach using the Guideline for Good Evaluation Practice in Health Informatics (GEP-HI). We developed a higher-level information model to complement the ISO 13606 archetype model. This enabled the specification of presentation properties at the moment of the archetypes' definition. The model allows realizing different users' perspectives on the data. The approach was implemented and evaluated within a functioning EHR system. The evaluation involved 30 patients of different ages and IT experience and 5 doctors. One month of testing showed that the time required to read electronic health records decreased for both doctors (from an average of 310 to 220 s) and patients (from an average of 95 to 39 s). Users reported a high level of satisfaction and motivation to use the presented data visualization approach, especially in comparison with their previous experience. The introduced information model allows separating medical knowledge from presentation knowledge. The additional presentation layer enriches the graphical user interface's flexibility and allows an optimal presentation of the data.

  19. A model-based traction control strategy non-reliant on wheel slip information

    Science.gov (United States)

    Deur, Joško; Pavković, Danijel; Burgio, Gilberto; Hrovat, Davor

    2011-08-01

    A traction control system (TCS) for two-wheel-drive vehicles can conveniently be realised by means of slip control. Such a TCS is modified in this paper in order to be applicable to four-wheel-drive vehicles and anti-lock braking systems, where slip information is not readily available. A reference vehicle model is used to estimate the vehicle velocity. The reference model is excited by a saw-tooth signal in order to adapt the slip for maximum tyre traction performance. The model-based TCS is made robust to vehicle modelling errors by extending it with (i) a superimposed loop of tyre static curve gradient control or (ii) a robust switching controller based on a bi-directional saw-tooth excitation signal. The proposed traction control strategies are verified by experiments and computer simulations.

  20. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  1. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
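The "maximum information content, minimum redundancy" criterion can be sketched as a greedy entropy/mutual-information selection over discretised water-level series at candidate cross sections (function and variable names are ours, a toy illustration rather than the paper's algorithm):

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy (bits) of a discretised series."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def joint_entropy(xs, ys):
    return entropy(list(zip(xs, ys)))

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - joint_entropy(xs, ys)

def greedy_select(series, k):
    """Greedily pick k gauging locations: start from the most informative
    one, then add the site with the best entropy-minus-redundancy score."""
    selected = [max(range(len(series)), key=lambda i: entropy(series[i]))]
    while len(selected) < k:
        remaining = [i for i in range(len(series)) if i not in selected]
        score = lambda i: entropy(series[i]) - max(
            mutual_info(series[i], series[j]) for j in selected)
        selected.append(max(remaining, key=score))
    return selected
```

With synthetic series, a duplicate of an already-selected section scores zero (fully redundant), while an independent section keeps its full entropy.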

  2. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    Science.gov (United States)

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  3. Role of propagation thresholds in sentiment-based model of opinion evolution with information diffusion

    Science.gov (United States)

    Si, Xia-Meng; Wang, Wen-Dong; Ma, Yan

    2016-06-01

    The degree of sentiment is the key factor for internet users in determining their propagating behaviors, i.e. whether to participate in a discussion and whether to withdraw from it. To this end, we introduce two sentiment-based propagation thresholds (i.e. an infected threshold and a refractory threshold) and propose an interacting model based on Bayesian updating rules. Our model describes the phenomena that few internet users change their decisions, and that some have already dropped out of the discussion of a topic while others are just becoming aware of it. Numerical simulations show that a large infected threshold restrains information diffusion but favors the lessening of extremism, while a large refractory threshold facilitates decision interaction but promotes extremism. Making netizens calm down and propagate information sanely can restrain the prevalence of extremism around rumors.
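A toy, well-mixed version of the two-threshold mechanism (the state names and the update rule are our simplification, not the paper's Bayesian formulation):

```python
import random

def simulate(n=2000, theta_inf=0.3, theta_ref=0.8, steps=60, seed=7):
    """S: unaware, I: spreading, R: withdrawn.  Each agent has a fixed
    sentiment in [0, 1]; the two thresholds gate the state transitions."""
    rng = random.Random(seed)
    sentiment = [rng.random() for _ in range(n)]
    state = ["S"] * n
    state[0] = "I"
    for _ in range(steps):
        for i in [k for k in range(n) if state[k] == "I"]:
            j = rng.randrange(n)
            # a contacted agent joins the discussion only if its
            # sentiment reaches the infected threshold
            if state[j] == "S" and sentiment[j] >= theta_inf:
                state[j] = "I"
            # a spreader withdraws once its sentiment is below the
            # refractory threshold
            if sentiment[i] < theta_ref:
                state[i] = "R"
    return {s: sum(x == s for x in state) for s in ("S", "I", "R")}
```

Sweeping `theta_inf` upward shrinks the pool of agents who ever spread, reproducing the qualitative "large infected threshold restrains diffusion" finding.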

  4. Geographic information system-coupling sediment delivery distributed modeling based on observed data.

    Science.gov (United States)

    Lee, S E; Kang, S H

    2014-01-01

    Spatially distributed sediment delivery (SEDD) models are of great interest in estimating the expected effect of changes on soil erosion and sediment yield. However, they can only be applied if the model can be calibrated using observed data. This paper presents a geographic information system (GIS)-based method to calculate the sediment discharge from basins to coastal areas. For this, an SEDD model, with a sediment rating curve method based on observed data, is proposed and validated. The model proposed here has been developed using the combined application of the revised universal soil loss equation (RUSLE) and a spatially distributed sediment delivery ratio, within the Model Builder of the ArcGIS software. The model focuses on spatial variability and is useful for estimating the spatial patterns of soil loss and sediment discharge. The model consists of two modules, a soil erosion prediction component and a sediment delivery model. The integrated approach allows for relatively practical and cost-effective estimation of spatially distributed soil erosion and sediment delivery, for gauged or ungauged basins. This paper provides the first attempt at estimating sediment delivery ratio based on observed data in the monsoon region of Korea.
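The arithmetic the model chains together per grid cell can be sketched as follows (units and sample values are illustrative; the spatially distributed SDR in the paper is itself computed from flow paths, which we take as given here):

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE annual soil loss for one cell, A = R * K * LS * C * P
    (t/ha/yr under the usual unit conventions)."""
    return R * K * LS * C * P

def basin_sediment_yield(cells):
    """Sediment reaching the outlet: sum of A_i * SDR_i * area_i.
    Each cell is (R, K, LS, C, P, sdr, area_ha)."""
    return sum(rusle_soil_loss(R, K, LS, C, P) * sdr * area
               for R, K, LS, C, P, sdr, area in cells)
```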

  5. On Modeling Information Spreading in Bacterial Nano-Networks Based on Plasmid Conjugation.

    Science.gov (United States)

    Castorina, Gaetano; Galluccio, Laura; Palazzo, Sergio

    2016-09-01

    In recent years, nano-communications have attracted much attention as a promising new research field. In particular, molecular communications, which exploit molecular nodes, are a powerful tool to implement communication functionalities in environments where the use of electromagnetic waves becomes critical, e.g., in the human body. In molecular communications, molecules such as proteins, DNA and RNA sequences are used to carry information. To this aim, a novel approach relies on the use of genetically modified bacteria to transport enhanced DNA strands, called plasmids, where information can be encoded. Information transfer is thus based on bacteria motility, i.e., self-propelled motion, which under appropriate circumstances is exhibited by certain bacteria. It has been observed that bacteria motility presents many similarities with opportunistic forwarding. Currently, the few studies on opportunistic communications among bacteria are based on simulations only. In this paper we propose an analytical model to characterize information spreading in bacterial nano-networks. To this purpose, an epidemic approach, similar to those used to model Delay Tolerant Networks (DTNs), is employed. We also derive two slightly differing mathematical models. The first describes bacterial nano-networks where a single plasmid is disseminated according to an epidemic approach; the second takes into account more complex mechanisms where multiple plasmids are disseminated, as in realistic bacterial nano-networks. The numerical results obtained are finally shown and discussed.
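The single-plasmid, well-mixed case resembles the classic DTN epidemic-routing ODE; a minimal Euler integration (our notation, not the paper's exact model):

```python
def epidemic_si(N=100, I0=1, beta=0.5, dt=0.01, t_max=40.0):
    """Euler integration of dI/dt = beta * I * (N - I) / N: I carriers of
    the plasmid among N bacteria, with pairwise 'infection' rate beta."""
    I, t, traj = float(I0), 0.0, []
    while t < t_max:
        I += dt * beta * I * (N - I) / N
        t += dt
        traj.append(I)
    return traj
```

The trajectory follows the familiar logistic curve: slow start, exponential spread, saturation as nearly all bacteria carry the plasmid.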

  6. Model-free stochastic processes studied with q-wavelet-based informational tools

    International Nuclear Information System (INIS)

    Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2007-01-01

    We undertake a model-free investigation of stochastic processes employing q-wavelet based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in such a way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.
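The q-quantifiers generalize Shannon entropy in the Tsallis sense; the underlying q-entropy (applied in the paper to wavelet energy distributions) can be written as a small function, with the q → 1 limit recovering the Shannon case:

```python
from math import log

def shannon_entropy(p):
    """Shannon entropy (nats) of a probability distribution."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); tends to Shannon as q -> 1."""
    if q == 1:
        return shannon_entropy(p)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
```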

  7. Modeling web-based information seeking by users who are blind.

    Science.gov (United States)

    Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti

    2011-01-01

    This article describes website information-seeking strategies used by users who are blind and compares them with those of sighted users. It outlines how assistive technologies and website design can aid users who are blind during information seeking. Blind and sighted participants were tested using an assessment tool and performed several tasks on websites. The times and keystrokes were recorded for all tasks, as well as the commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information through a visual scan of the home page of a website. A model for information seeking, based on the present study, is described. Keywords are important in the strategies used by both groups of participants, and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.

  8. Comparison of co-expression measures: mutual information, correlation, and model based indices

    Directory of Open Access Journals (Sweden)

    Song Lin

    2012-12-01

    Full Text Available Abstract. Background: Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression-model-based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). Results: We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets, which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations where the two measures disagree. We also compare correlation- and MI-based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation) transformed via the topological overlap transformation leads to modules that are superior to MI-based modules and maximal information coefficient (MIC)-based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information, which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. Conclusion: The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in the case of non-linear relationships.
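The biweight midcorrelation favoured in this comparison can be computed in a few lines (a sketch for odd-length samples with positive MAD, following the standard textbook definition rather than any WGCNA-specific edge-case handling):

```python
from math import sqrt

def _robust_terms(x):
    """Median/MAD-based weighted deviations (odd-length x, MAD > 0)."""
    med = sorted(x)[len(x) // 2]
    mad = sorted(abs(v - med) for v in x)[len(x) // 2]
    terms = []
    for v in x:
        u = (v - med) / (9.0 * mad)
        w = (1.0 - u * u) ** 2 if abs(u) < 1.0 else 0.0  # outliers get weight 0
        terms.append((v - med) * w)
    return terms

def bicor(x, y):
    """Biweight midcorrelation of two equal-length samples, in [-1, 1]."""
    a, b = _robust_terms(x), _robust_terms(y)
    return (sum(ai * bi for ai, bi in zip(a, b))
            / sqrt(sum(ai * ai for ai in a) * sum(bi * bi for bi in b)))
```

The downweighting of points far from the median is what makes it robust where Pearson correlation is dragged around by single outliers.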

  9. A geographic information system-based 3D city estate modeling and simulation system

    Science.gov (United States)

    Chong, Xiaoli; Li, Sha

    2015-12-01

    This paper introduces a 3D city simulation system based on a geographic information system (GIS), covering all commercial housing in the city. A regional-scale, GIS-based approach is used to capture, describe, and track the geographical attributes of each house in the city. A sorting algorithm of "Benchmark + Parity Rate" is developed to cluster houses with similar spatial and construction attributes. This system is applicable to digital city modeling, city planning, housing evaluation, housing monitoring, and visualizing housing transactions. Finally, taking the Jingtian area of Shenzhen as an example, each unit of the 35,997 houses in the area can be displayed, tagged, and easily tracked by the GIS-based city modeling and simulation system. The model matches real market conditions well and can be provided to house buyers as a reference.

  10. Subject-based discriminative sparse representation model for detection of concealed information.

    Science.gov (United States)

    Akhavan, Amir; Moradi, Mohammad Hassan; Vand, Safa Rafiei

    2017-05-01

    The use of machine learning approaches in the concealed information test (CIT) plays a key role in the progress of this neurophysiological field. In this paper, we present a new machine learning method for CIT in which each subject is considered independent of the others. The main goal of this study is to adapt discriminative sparse models to be applicable to the subject-based concealed information test. In order to provide sufficient discriminability between guilty and innocent subjects, we introduce a novel discriminative sparse representation model and appropriate learning methods for it. For evaluation of the method, forty-four subjects participated in a mock crime scenario and their EEG data were recorded. As the model input, recurrence plot features were extracted from single-trial data for the different stimuli. The extracted feature vectors were then reduced using a statistical dependency method. The reduced feature vector went through the proposed subject-based sparse model, in which the discrimination power of the sparse code and the reconstruction error are applied simultaneously. Experimental results showed that the proposed approach achieved better performance than other competing discriminative sparse models. The classification accuracy, sensitivity and specificity of the presented sparsity-based method were about 93%, 91% and 95%, respectively. Using the EEG data of a single subject in response to different stimulus types, and with the aid of the proposed discriminative sparse representation model, one can distinguish guilty subjects from innocent ones. Indeed, this property eliminates the necessity of EEG data from several subjects in model learning and decision making for a specific subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. An information diffusion model based on retweeting mechanism for online social media

    International Nuclear Information System (INIS)

    Xiong, Fei; Liu, Yun; Zhang, Zhen-jiang; Zhu, Jiang; Zhang, Ying

    2012-01-01

    To characterize information propagation on online microblogs, we propose a diffusion model (SCIR) which contains four possible states: susceptible, contacted, infected and refractory. Agents that have read the information but have not decided to spread it stay in the contacted state. They may become infected or refractory, and both the infected and refractory states are stable. Results show that during the evolution process more contacted agents appear in scale-free networks than in regular lattices. The degree-based density of infected agents increases monotonically with the degree, but a larger average network degree doesn't always mean a shorter relaxation time. -- Highlights: ► We study information diffusion on microblogs based on the retweeting mechanism. ► We present a propagation model that contains four states, two of which are absorbing. ► The threshold value of the spreading rate almost approaches zero. ► The degree-based density of infected agents increases monotonically with the degree. ► Influences between topics occur only when topics originate in the same neighborhood.
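A mean-field sketch of the four-state SCIR dynamics (the rate names are ours, and the paper works on networks rather than this well-mixed approximation):

```python
def scir(s0=0.999, c0=0.0, i0=0.001, lam=0.8, alpha=0.3, beta=0.1,
         dt=0.01, steps=5000):
    """Mean-field SCIR: S + I -> C + I (rate lam), C -> I (alpha),
    C -> R (beta); infected and refractory are absorbing, as in the paper."""
    s, c, i, r = s0, c0, i0, 0.0
    for _ in range(steps):
        new_c = lam * s * i * dt
        to_i, to_r = alpha * c * dt, beta * c * dt
        s -= new_c
        c += new_c - to_i - to_r
        i += to_i
        r += to_r
    return s, c, i, r
```

The contacted compartment acts as a buffer: it fills while spreading is active and then drains into the two absorbing states.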

  12. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    Directory of Open Access Journals (Sweden)

    Weiying Wang

    2014-01-01

    Full Text Available Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored with no operator on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil-well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on some real-world data show the effectiveness of the proposed algorithms.

  13. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    Science.gov (United States)

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are among the most important devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored with no operator on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil-well drilling platforms, based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the gas turbine. In addition, we also extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on some real-world data show the effectiveness of the proposed algorithms.
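The uniformity measure at the heart of the model is the Shannon entropy of the normalized exhaust-temperature profile; a minimal version (our function name, and a sketch of the idea rather than the kernelized extension):

```python
from math import log

def uniformity_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature profile;
    it is maximal (log n) when all thermocouples read the same value,
    and drops when one burner section runs hot or cold."""
    total = sum(temps)
    return -sum(t / total * log(t / total) for t in temps if t > 0)
```

A drop in this entropy relative to the healthy baseline flags a non-uniform gas path, which is the cue the diagnosis system acts on.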

  14. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models. Identifying these discrepancies is of significance to possibly improve the RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained by existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated by two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulties are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performances were observed in both scenarios, demonstrating the merits of the proposed method.
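The core learning step maps mean-flow features to Reynolds-stress discrepancies; a tiny stand-in using k-nearest-neighbour regression instead of the paper's random forest, purely to illustrate the features-to-discrepancy mapping:

```python
def knn_predict(train_X, train_y, x, k=3):
    """Predict a discrepancy value for feature vector x as the mean of the
    k nearest training points (squared Euclidean distance).  A toy proxy
    for the random-forest regressor used in the paper."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                   for row, y in zip(train_X, train_y))
    nearest = [y for _, y in dists[:k]]
    return sum(nearest) / len(nearest)
```

In the paper's setting, `train_X` would hold mean-flow features from the DNS training flows and `train_y` the corresponding Reynolds-stress discrepancies, queried at new flow conditions.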

  15. Effective pollutant emission heights for atmospheric transport modelling based on real-world information

    International Nuclear Information System (INIS)

    Pregger, Thomas; Friedrich, Rainer

    2009-01-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge on emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights, applying equations used in Gaussian dispersion models, shows significant differences depending on source and air pollutant and compared to approaches currently used in atmospheric transport modelling. - The comprehensive analysis of real-world stack data provides detailed default parameter values for improving the vertical emission distribution in atmospheric modelling
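The bottom-up calculation can be illustrated with the Briggs buoyancy-driven plume-rise formula commonly used with Gaussian models (a sketch under our assumptions; the paper's exact equations and parameter defaults may differ):

```python
def effective_height(stack_h, d, v_exit, T_stack, T_amb, u=5.0, x=1000.0):
    """Effective source height = stack height + Briggs plume rise at
    downwind distance x (m).  d: stack diameter (m), v_exit: exit
    velocity (m/s), temperatures in K, u: wind speed (m/s)."""
    g = 9.81
    # buoyancy flux F = g * v * d^2/4 * (Ts - Ta) / Ts
    F = g * v_exit * d ** 2 / 4.0 * (T_stack - T_amb) / T_stack
    rise = 1.6 * F ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u
    return stack_h + rise
```

This is why the stack parameters tabulated in the paper (height, flue-gas temperature, velocity, flow rate) are exactly the inputs needed to place emissions at the right model level.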

  16. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between the weakness face and the free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data from the construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy with no errors, and matching the prediction results of the uncertainty measure method. The results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
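The discriminant step assigns a site to the risk class at the smallest Mahalanobis distance; a two-index sketch (class names, numbers, and the 2-D restriction are illustrative, not from the paper):

```python
def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance in 2-D, with the covariance matrix
    ((a, b), (c, d)) inverted by hand."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    return (dx * (d * dx - b * dy) + dy * (-c * dx + a * dy)) / det

def classify(x, groups):
    """Assign x to the group (name -> (mean, cov)) at minimum distance."""
    return min(groups, key=lambda g: mahalanobis2(x, *groups[g]))
```

In the full model the feature vector holds the nine entropy-selected indexes, and each risk class contributes its own mean vector and covariance matrix estimated from the training samples.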

  17. Model for Electromagnetic Information Leakage

    OpenAIRE

    Mao Jian; Li Yongmei; Zhang Jiemin; Liu Jinming

    2013-01-01

    Electromagnetic leakage will happen in working information equipment and can lead to information leakage. In order to discover the nature of information in electromagnetic leakage, this paper combined electromagnetic theory with information theory as an innovative research method. It outlines a systematic model of electromagnetic information leakage, which theoretically describes the process of information leakage, interception and reproduction based on electromagnetic radiation, and ana...

  18. INTEGRATIVE METHOD OF TEACHING INFORMATION MODELING IN PRACTICAL HEALTH SERVICE BASED ON MICROSOFT ACCESS QUERIES

    Directory of Open Access Journals (Sweden)

    Svetlana A. Firsova

    2016-06-01

    Full Text Available Introduction: this article explores the pedagogical technology employed to teach medical students the foundations of work with MICROSOFT ACCESS databases. The above technology is based on an integrative approach to information modeling in public health practice, drawing upon basic didactic concepts that pertain to the objects and tools of databases created in MICROSOFT ACCESS. The article examines successive steps in teaching the topic "Queries in MICROSOFT ACCESS" - from simple queries to complex ones. The main attention is paid to such components of the methodological system as the principles and teaching methods, classified according to the degree of learners' active cognitive activity. Of particular interest is the diagram of the relationship between learning principles, teaching methods and specific types of queries. Materials and Methods: the authors used a comparative analysis of the literature, syllabi and curricula in medical informatics taught at leading medical universities in Russia. Results: an original technique for teaching the building of queries against MICROSOFT ACCESS databases is presented, for the analysis of information models in practical health care. Discussion and Conclusions: it is argued that the proposed pedagogical technology will significantly improve the effectiveness of teaching the course "Medical Informatics", which includes the development and application of models to simulate the operation of certain facilities and services of the health system, which, in turn, increases the level of information culture of practitioners.

  19. Value-based choice: An integrative, neuroscience-informed model of health goals.

    Science.gov (United States)

    Berkman, Elliot T

    2018-01-01

    Traditional models of health behaviour focus on the roles of cognitive, personality and social-cognitive constructs (e.g. executive function, grit, self-efficacy), and give less attention to the process by which these constructs interact in the moment that a health-relevant choice is made. Health psychology needs a process-focused account of how various factors are integrated to produce the decisions that determine health behaviour. I present an integrative value-based choice model of health behaviour, which characterises the mechanism by which a variety of factors come together to determine behaviour. This model imports knowledge from research on behavioural economics and neuroscience about how choices are made to the study of health behaviour, and uses that knowledge to generate novel predictions about how to change health behaviour. I describe anomalies in value-based choice that can be exploited for health promotion, and review neuroimaging evidence about the involvement of midline dopamine structures in tracking and integrating value-related information during choice. I highlight how this knowledge can bring insights to health psychology using the illustrative case of healthy eating. Value-based choice is a viable model for health behaviour and opens new avenues for mechanism-focused intervention.

  20. A Model for Information

    Directory of Open Access Journals (Sweden)

    Paul Walton

    2014-09-01

    Full Text Available This paper uses an approach drawn from the ideas of computer systems modelling to produce a model for information itself. The model integrates evolutionary, static and dynamic views of information and highlights the relationship between symbolic content and the physical world. The model includes what information technology practitioners call "non-functional" attributes, which, for information, include information quality and information friction. The concepts developed in the model enable a richer understanding of Floridi's questions "what is information?" and "the informational circle: how can information be assessed?" (which he numbers P1 and P12).

  1. An Information Perception-Based Emotion Contagion Model for Fire Evacuation

    Science.gov (United States)

    Liu, Ting Ting; Liu, Zhen; Ma, Minhua; Xuan, Rongrong; Chen, Tian; Lu, Tao; Yu, Lipeng

    2017-03-01

    In fires, people are prone to losing their presence of mind, and panic leads to irrational behavior and irreparable tragedy, so contingency planning for crowd evacuation in fires has great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that can cause panic. Based on settings for information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype system is developed for crowd simulation. Experiments are carried out to verify the proposed model. The results showed that the spread of panic is related not only to crowd density and individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way for safety education and evacuation management, making it possible to avoid and reduce unsafe factors in the crowd at the lowest cost.
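    The contagion mechanism the record describes can be illustrated with a toy agent-based update rule (all names and parameters here are hypothetical, not the authors' implementation): each agent's panic drifts toward the mean panic of neighbours inside its information-sharing radius and otherwise decays toward calm.

```python
def step_panic(panic, positions, radius=2.0, susceptibility=0.3, decay=0.1):
    """One update of a toy emotion-contagion model.

    panic:     list of panic levels in [0, 1], one per agent
    positions: list of (x, y) agent coordinates
    Each agent moves toward the mean panic of neighbours within its
    information-sharing radius, then decays slightly toward calm.
    All parameter values are illustrative, not from the paper.
    """
    new = []
    for i, (x, y) in enumerate(positions):
        neighbours = [panic[j] for j, (u, v) in enumerate(positions)
                      if j != i and (u - x) ** 2 + (v - y) ** 2 <= radius ** 2]
        if neighbours:
            target = sum(neighbours) / len(neighbours)
            p = panic[i] + susceptibility * (target - panic[i])
        else:
            p = panic[i]
        new.append(max(0.0, min(1.0, p - decay * panic[i])))
    return new
```

    Agents outside every information space stay calm, while panic diffuses between nearby agents, which is the qualitative behaviour the experiments probe.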

  2. HISTORIC BUILDING INFORMATION MODELLING – ADDING INTELLIGENCE TO LASER AND IMAGE BASED SURVEYS

    Directory of Open Access Journals (Sweden)

    M. Murphy

    2012-09-01

    Full Text Available Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects based on historic data, together with a system of cross-platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models, including detail behind the object's surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured).

  3. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    Science.gov (United States)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective for understanding the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. By using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be solved to assess the properties of meridians and clinically diagnose the health characteristics of patients. Finally, through cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is shown that this model not only has significant implications for revealing the essence of the meridian in TCM, but may also play a guiding role in the clinical assessment of patients based on the holographic field of meridians.

  4. An estimation framework for building information modeling (BIM)-based demolition waste by type.

    Science.gov (United States)

    Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook

    2017-12-01

    Most existing studies on demolition waste (DW) quantification do not have an official standard to estimate the amount and type of DW, so the existing literature is limited in its ability to estimate DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin at the building design stage; however, the lack of tools hinders such early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the input of construction materials in the Korean construction classification system was matched with that in the BIM library. Based on this matching integration, estimates of DW by type were calculated by applying weight/unit-volume factors and rates of DW volume change. To verify the framework, its operation was demonstrated by means of an actual BIM model and by comparing its results with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
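    The estimation step the abstract outlines — BIM material volumes multiplied by weight/unit-volume factors and volume-change rates — reduces to a per-material product. A minimal sketch, with hypothetical material names and factor values (not the paper's Korean classification tables):

```python
def estimate_demolition_waste(bim_quantities, unit_weights, volume_change):
    """Estimate demolition waste mass by material type.

    bim_quantities: material -> volume taken off a BIM model (m^3)
    unit_weights:   material -> weight per unit volume (t/m^3)
    volume_change:  material -> volume-change (bulking) rate on demolition
    All factor values must come from an official classification; the
    numbers used in the example below are purely illustrative.
    """
    return {m: v * unit_weights[m] * volume_change.get(m, 1.0)
            for m, v in bim_quantities.items()}
```

    For example, 100 m³ of concrete at a hypothetical 2.4 t/m³ and a 1.3 volume-change rate yields 312 t of concrete waste.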

  5. Experimental Robot Model Adjustments Based on Force-Torque Sensor Information.

    Science.gov (United States)

    Martinez, Santiago; Garcia-Haro, Juan Miguel; Victores, Juan G; Jardon, Alberto; Balaguer, Carlos

    2018-03-11

    The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications lead to the introduction of errors that add to other inherent electro-mechanic inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force-torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot with respect to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics.
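    The underlying LIPM relations are standard: for a constant CoM height z_c, the ZMP satisfies p = x − (z_c/g)·ẍ, and conversely ẍ = (g/z_c)·(x − p). A minimal sketch of these baseline equations (not the DLIPM itself, whose sensor-derived corrections are specific to the paper):

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_from_com(x, x_ddot, z_c):
    """ZMP of the Linear Inverted Pendulum Model along one axis:
    p = x - (z_c / g) * x_ddot, with CoM position x, acceleration
    x_ddot, and constant CoM height z_c."""
    return x - (z_c / G) * x_ddot

def simulate_lipm(x0, v0, p_ref, z_c, dt=0.01, steps=100):
    """Integrate the LIPM dynamics x'' = (g / z_c) * (x - p) for a
    constant (step-shaped) ZMP reference, forward Euler scheme.
    Returns final CoM position and velocity."""
    x, v = x0, v0
    for _ in range(steps):
        a = (G / z_c) * (x - p_ref)
        v += a * dt
        x += v * dt
    return x, v
```

    A CoM initially at rest above the ZMP reference stays put, while any offset grows along the pendulum's unstable mode — the behaviour the balance controller must counteract.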

  6. Using simple agent-based modeling to inform and enhance neighborhood walkability

    Science.gov (United States)

    2013-01-01

    Background Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory ‘what-if’ scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. Methods This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. Results The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and
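    The network-catchment computation this study ports to an open-source tool can be illustrated with a shortest-path search over a street graph: the nodes reachable within a walking-distance budget form the catchment, which can then be compared against a circular buffer. A toy sketch with a hypothetical adjacency-list graph encoding (not the AURIN tool itself):

```python
import heapq

def walkable_catchment(graph, origin, budget):
    """Nodes of a street network reachable from origin within a
    walking-distance budget, via Dijkstra over edge lengths.

    graph: node -> list of (neighbour, edge length in metres)
    Returns {node: network distance} for all nodes within budget.
    """
    dist = {origin: 0.0}
    pq = [(0.0, origin)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd <= budget and nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist
```

    Comparing the area spanned by these nodes with a circular buffer of the same radius gives the catchment-to-buffer ratio mentioned among the tool's metrics.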

  7. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    Full Text Available The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition can severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value, an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value is also discussed. A simulation and a case illustration are provided to show a possible application of the proposed model for a simple business process in an enterprise.

  8. Fetal Physiologically-Based Pharmacokinetic Models: Systems Information on Fetal Biometry and Gross Composition.

    Science.gov (United States)

    Abduljalil, Khaled; Johnson, Trevor N; Rostami-Hodjegan, Amin

    2017-12-20

    Estimates of fetal exposure to xenobiotics have traditionally been based on animal studies; however, inter-species differences can make this problematic. Physiologically-based pharmacokinetic models may capture the rapid changes in anatomical, biochemical, and physiological parameters during fetal growth over the duration of pregnancy and help with interpreting laboratory animal data. However, these models require robust information on the longitudinal variations of system parameter values and their covariates. The objective of this study was to present an extensive analysis and integration of the available biometric data required for creating a virtual human fetal population, by means of equations that define the changes of each parameter with gestational age. A comprehensive literature search was carried out on the parameters defining the growth of a fetus during in-utero life, including weight, height, and body surface area, in addition to other indices of fetal size, body fat, and water. Collated data were assessed and integrated through a meta-analysis to develop mathematical algorithms describing growth with fetal age. Data for the meta-analysis were obtained from 97 publications; of these, 15 related to fetal height or length, 32 to fetal weight, 4 to fetal body surface area, 8 to crown length, 5 to abdominal circumference, 12 to head circumference, 14 to body fat, and 12 to body water. Various mathematical algorithms were needed to describe parameter values from the time of conception to birth. The collated data presented in this article enabled the development of mathematical functions to describe fetal biometry and provide a potentially useful resource for building anthropometric features of fetal physiologically-based pharmacokinetic models.

  9. Agent-Based Model of Information Security System: Architecture and Formal Framework for Coordinated Intelligent Agents Behavior Specification

    National Research Council Canada - National Science Library

    Gorodetski, Vladimir

    2001-01-01

    The contractor will research and further develop the technology supporting an agent-based architecture for an information security system and a formal framework to specify a model of distributed knowledge...

  10. Dynamic relationships between microbial biomass, respiration, inorganic nutrients and enzyme activities: informing enzyme based decomposition models

    Directory of Open Access Journals (Sweden)

    Daryl L Moorhead

    2013-08-01

    Full Text Available We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C (length) and in N and P (angle) acquiring enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45˚). Reductions in vector angles to < 45˚ for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
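    The vector length and angle described here follow a common ecoenzymatic-stoichiometry formulation in which x = BG/(BG+AP) and y = BG/(BG+NAG); length tracks relative C investment and angles above 45° indicate P limitation, below 45° N limitation. A minimal sketch, assuming that formulation (the study's exact normalisation may differ):

```python
import math

def enzyme_vector(bg, nag, ap):
    """Vector length and angle from enzyme activity ratios.

    x = BG / (BG + AP), y = BG / (BG + NAG).
    Length reflects relative investment in C-acquiring enzymes;
    angle (degrees from the y-axis) > 45 suggests P limitation,
    < 45 suggests N limitation.
    """
    x = bg / (bg + ap)
    y = bg / (bg + nag)
    length = math.hypot(x, y)
    angle = math.degrees(math.atan2(x, y))
    return length, angle
```

    With equal BG, NAG and AP activities the angle is exactly 45°; raising AP relative to BG pulls the angle below 45°, the shift toward N limitation the sand treatment showed by day 14.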

  11. Genome-wide prediction, display and refinement of binding sites with information theory-based models

    Directory of Open Access Journals (Sweden)

    Leeder J Steven

    2003-09-01

    Full Text Available Abstract Background We present Delila-genome, a software system for identification, visualization and analysis of protein binding sites in complete genome sequences. Binding sites are predicted by scanning genomic sequences with information theory-based (or user-defined) weight matrices. Matrices are refined by adding experimentally-defined binding sites to published binding sites. Delila-Genome was used to examine the accuracy of individual information contents of binding sites detected with refined matrices as a measure of the strengths of the corresponding protein-nucleic acid interactions. The software can then be used to predict novel sites by rescanning the genome with the refined matrices. Results Parameters for genome scans are entered using a Java-based GUI and backend scripts in Perl. Multi-processor CPU load-sharing minimized the average response time for scans of different chromosomes. Scans of human genome assemblies required 4–6 hours for transcription factor binding sites and 10–19 hours for splice sites, respectively, on 24- and 3-node Mosix and Beowulf clusters. Individual binding sites are displayed either as high-resolution sequence walkers or in low-resolution custom tracks in the UCSC genome browser. For large datasets, we applied a data reduction strategy that limited displays of binding sites exceeding a threshold information content to specific chromosomal regions within or adjacent to genes. An HTML document is produced listing binding sites ranked by binding site strength or chromosomal location, hyperlinked to the UCSC custom track, other annotation databases and binding site sequences. Post-genome scan tools parse binding site annotations of selected chromosome intervals and compare the results of genome scans using different weight matrices. Comparisons of multiple genome scans can display binding sites that are unique to each scan and identify sites with significantly altered binding strengths.
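    The individual-information scoring behind such scans follows Schneider's Ri measure for DNA sites, Ri(b,l) = 2 + log2 f(b,l) bits, summed over the positions of a candidate window. A minimal sketch (toy pseudocount handling, not the Delila code):

```python
import math

def ri_weights(site_counts, pseudocount=0.5):
    """Individual-information weights Ri(b, l) = 2 + log2 f(b, l)
    (Schneider's measure; 2 bits is the per-position maximum for DNA).

    site_counts: one {base: count} dict per aligned site position.
    The pseudocount scheme here is a simplification.
    """
    weights = []
    for counts in site_counts:
        total = sum(counts.values()) + 4 * pseudocount
        weights.append({b: 2 + math.log2((counts.get(b, 0) + pseudocount) / total)
                        for b in 'ACGT'})
    return weights

def scan(seq, weights):
    """Score every window of seq against the weight matrix;
    higher total bits indicate a stronger predicted site."""
    w = len(weights)
    return [(i, sum(weights[l][seq[i + l]] for l in range(w)))
            for i in range(len(seq) - w + 1)]
```

    Refinement, as in the abstract, amounts to adding experimentally confirmed sites to `site_counts` and rescanning with the updated weights.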

  12. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    Science.gov (United States)

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-11-01

    This spatial model constitutes an elegant alternative to voxel-based approaches in neuroimaging studies: not only are its basis functions biologically informed, they are also adaptive to high resolutions, represent high dimensions efficiently, and capture long-range spatial dependencies, which are important and challenging objectives for neuroimaging data.

  13. Normative models and healthcare planning: network-based simulations within a geographic information system environment.

    Science.gov (United States)

    Walsh, S J; Page, P H; Gesler, W M

    1997-06-01

    We used network analysis to integrate patient, transportation, and hospital characteristics for healthcare planning and to assess the role of geographic information systems (GIS). A normative model of base-level responses of patient flows to hospitals, based on estimated travel times, was developed for this purpose. A GIS database was developed that includes patient discharge data, locations of hospitals, US TIGER/Line files of the transportation network, enhanced address-range data, and U.S. Census variables. The study area included a 16-county region centered on the city of Charlotte and Mecklenburg County, North Carolina, and contained 25 hospitals serving nearly 2 million people over a geographic area of nearly 9,000 square miles. Normative models as a tool for healthcare planning were derived through a spatial network analysis and a distance optimization model implemented within a GIS. Scenarios were developed and tested that involved patient discharge data geocoded to the five-digit zip code, hospital locations geocoded to their individual addresses, and a transportation network of varying road types and corresponding estimated travel speeds, in order to examine both patient discharge levels and a doubling of discharge levels associated with total discharges and DRG 391 (Normal Newborns). The network analysis used location/allocation modeling to optimize for travel time and integrated measures of supply, demand, and impedance. Patient discharge data from the North Carolina Medical Database Commission, address ranges from the North Carolina Institute for Transportation Research and Education, and U.S. Census TIGER/Line files were entered into the ARC/INFO GIS software system for analysis. A relational database structure was used to organize the information and to link spatial features to their attributes.
Advances in healthcare planning can be achieved by examining baseline responses of patient flows to distance optimization simulations and healthcare scenarios conducted

  14. Model-based estimators of density and connectivity to inform conservation of spatially structured populations

    Science.gov (United States)

    Morin, Dana J.; Fuller, Angela K.; Royle, J. Andrew; Sutherland, Chris

    2017-01-01

    Conservation and management of spatially structured populations is challenging because solutions must consider where individuals are located, but also differential individual space use as a result of landscape heterogeneity. A recent extension of spatial capture–recapture (SCR) models, the ecological distance model, uses spatial encounter histories of individuals (e.g., a record of where individuals are detected across space, often sequenced over multiple sampling occasions), to estimate the relationship between space use and characteristics of a landscape, allowing simultaneous estimation of both local densities of individuals across space and connectivity at the scale of individual movement. We developed two model-based estimators derived from the SCR ecological distance model to quantify connectivity over a continuous surface: (1) potential connectivity—a metric of the connectivity of areas based on resistance to individual movement; and (2) density-weighted connectivity (DWC)—potential connectivity weighted by estimated density. Estimates of potential connectivity and DWC can provide spatial representations of areas that are most important for the conservation of threatened species, or management of abundant populations (i.e., areas with high density and landscape connectivity), and thus generate predictions that have great potential to inform conservation and management actions. We used a simulation study with a stationary trap design across a range of landscape resistance scenarios to evaluate how well our model estimates resistance, potential connectivity, and DWC. Correlation between true and estimated potential connectivity was high, and there was positive correlation and high spatial accuracy between estimated DWC and true DWC. We applied our approach to data collected from a population of black bears in New York, and found that forested areas represented low levels of resistance for black bears. We demonstrate that formal inference about measures

  15. The experiential health information processing model: supporting collaborative web-based patient education

    Directory of Open Access Journals (Sweden)

    Wathen C Nadine

    2008-12-01

    Full Text Available Abstract Background First generation Internet technologies such as mailing lists or newsgroups afforded unprecedented levels of information exchange within a variety of interest groups, including those who seek health information. With the emergence of the World Wide Web, many communication applications were ported to web browsers. One of the driving factors in this phenomenon has been the exchange of experiential or anecdotal knowledge that patients share online, and there is emerging evidence that participation in these forums may be having an impact on people's health decision making. Theoretical frameworks supporting this form of information seeking and learning have yet to be proposed. Results In this article, we propose an adaptation of Kolb's experiential learning theory to begin to formulate an experiential health information processing model that may contribute to our understanding of online health information seeking behaviour in this context. Conclusion An experiential health information processing model is proposed that can be used as a research framework. Future research directions include investigating the utility of this model in the online health information seeking context, and studying the impact of collaborating in these online environments on patient decision making and on health outcomes.

  16. Investigating Information-Seeking Behavior of Faculty Members Based on Wilson's Model: Case Study of PNU University, Mazandaran, Iran.

    Science.gov (United States)

    Azadeh, Fereydoon; Ghasemi, Shahrzad

    2016-09-01

    The present research aims to study the information-seeking behavior of faculty members of Payame Noor University (PNU) in Mazandaran province of Iran by using Wilson's model of information-seeking behavior. This is a survey study. Participants were 97 PNU faculty members in Mazandaran province. An information-seeking behavior inventory was employed to gather information and research data; it had 24 items based on a 5-point Likert scale. Collected data were analyzed in SPSS software. Results showed that the most important goal of faculty members was publishing a scientific paper, and their least important goal was updating technical information. We also found that they mostly use internet-based resources to meet their information needs; accordingly, 57.7% of them find information resources via online search engines (e.g. Google, Yahoo). We also concluded that there was a significant relationship between their English language proficiency, academic rank, and work experience and their information-seeking behavior.

  17. Role-based typology of information technology : Model development and assessment.

    NARCIS (Netherlands)

    Zand, F.; Solaimani, H. (Sam); Beers, van C.

    2015-01-01

    Managers aim to explain how and why IT creates business value, recognize their IT-based capabilities, and select the appropriate IT to enhance and leverage those capabilities. This article synthesizes the Organizational Information Processing Theory and Resource-Based View into a descriptive

  18. Model-based system-of-systems engineering for space-based command, control, communication, and information architecture design

    Science.gov (United States)

    Sindiy, Oleg V.

    This dissertation presents a model-based system-of-systems engineering (SoSE) approach as a design philosophy for architecting in system-of-systems (SoS) problems. SoS refers to a special class of systems in which numerous systems with operational and managerial independence interact to generate new capabilities that satisfy societal needs. Design decisions are more complicated in a SoS setting. A revised Process Model for SoSE is presented to support three phases in SoS architecting: defining the scope of the design problem, abstracting key descriptors and their interrelations in a conceptual model, and implementing computer-based simulations for architectural analyses. The Process Model enables improved decision support considering multiple SoS features and develops computational models capable of highlighting configurations of organizational, policy, financial, operational, and/or technical features. Further, processes for verification and validation of SoS models and simulations are also important due to their potential impact on critical decision-making and, thus, are addressed. Two research questions frame the research efforts described in this dissertation. The first concerns how the four key sources of SoS complexity---heterogeneity of systems, connectivity structure, multi-layer interactions, and the evolutionary nature---influence the formulation of SoS models and simulations, trade space, and solution performance and structure evaluation metrics. The second question pertains to the implementation of SoSE architecting processes to inform decision-making for a subset of SoS problems concerning the design of information exchange services in the space-based operations domain. These questions motivate and guide the dissertation's contributions. A formal methodology for drawing relationships within a multi-dimensional trade space, forming simulation case studies from applications of candidate architecture solutions to a campaign of notional mission use cases, and

  19. Factors associated with adoption of health information technology: a conceptual model based on a systematic review.

    Science.gov (United States)

    Kruse, Clemens Scott; DeShazo, Jonathan; Kim, Forest; Fulton, Lawrence

    2014-05-23

    The Health Information Technology for Economic and Clinical Health Act (HITECH) allocated $19.2 billion to incentivize adoption of the electronic health record (EHR). Since 2009, Meaningful Use Criteria have dominated information technology (IT) strategy. Health care organizations have struggled to meet expectations and avoid penalties to reimbursements from the Center for Medicare and Medicaid Services (CMS). Organizational theories attempt to explain factors that influence organizational change, and many theories address changes in organizational strategy. However, due to the complexities of the health care industry, existing organizational theories fall short of demonstrating association with significant health care IT implementations. There is no organizational theory for health care that identifies, groups, and analyzes both internal and external factors of influence for large health care IT implementations like adoption of the EHR. The purpose of this systematic review is to identify a full spectrum of both internal organizational and external environmental factors associated with the adoption of health information technology (HIT), specifically the EHR. The result is a conceptual model that is commensurate with the complexity of the health care sector. We performed a systematic literature search in PubMed (restricted to English), EBSCO Host, and Google Scholar for both empirical studies and theory-based writing from 1993-2013 that demonstrated association between influential factors and three modes of HIT: EHR, electronic medical record (EMR), and computerized provider order entry (CPOE). We also looked at published books on organizational theories. We took notes and noted trends in adoption factors. These factors were grouped as adoption factors associated with various versions of EHR adoption. The resulting conceptual model summarizes the diversity of independent variables (IVs) and dependent variables (DVs) used in articles, editorials, books, as

  20. Use of stratigraphic, petrographic, hydrogeologic and geochemical information for hydrogeologic modelling based on geostatistical simulation

    International Nuclear Information System (INIS)

    Rohlig, K.J.; Fischer, H.; Poltl, B.

    2004-01-01

    This paper describes the stepwise utilization of geologic information from various sources for the construction of hydrogeological models of a sedimentary site by means of geostatistical simulation. It presents a practical application of aquifer characterisation by firstly simulating hydrogeological units and then the hydrogeological parameters. Due to the availability of a large amount of hydrogeological, geophysical and other data and information, the Gorleben site (Northern Germany) has been used for a case study in order to demonstrate the approach. The study, which has not yet been completed, tries to incorporate as much as possible of the available information and to characterise the remaining uncertainties. (author)

  1. Neurally and ocularly informed graph-based models for searching 3D environments

    Science.gov (United States)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
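    The semi-supervised step described here — spreading classifier-inferred interest labels over a visual-similarity graph to score unseen objects — can be sketched with a simple clamped label-propagation iteration (hypothetical graph format and damping parameter, not the authors' computer vision pipeline):

```python
def propagate_labels(adj, labels, alpha=0.85, iters=50):
    """Semi-supervised label propagation over a similarity graph.

    adj:    node -> list of (neighbour, similarity weight)
    labels: node -> fixed score for the few labelled nodes
    Unlabelled nodes take a damped, weight-normalised average of
    their neighbours' scores; labelled nodes are clamped each sweep.
    alpha and the iteration count are illustrative choices.
    """
    score = {n: labels.get(n, 0.0) for n in adj}
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            total = sum(w for _, w in nbrs) or 1.0
            spread = sum(score[m] * w for m, w in nbrs) / total
            new[n] = labels[n] if n in labels else alpha * spread
        score = new
    return score
```

    Objects visually closer (in graph terms) to the implicitly labelled ones end up with higher scores, which is what lets the system rank unseen objects for the route planner.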

  2. IRaPPA: information retrieval based integration of biophysical models for protein assembly selection.

    Science.gov (United States)

    Moal, Iain H; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A; Fernández-Recio, Juan

    2017-06-15

In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the design of drugs that prevent or mimic the interaction. Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By treating this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native solution appearing in the top 10 for up to 50% of complexes benchmarked, and in the top 100 for up to 70%. IRaPPA has been implemented in the SwarmDock server ( http://bmm.crick.ac.uk/∼SwarmDock/ ), pyDock server ( http://life.bsc.es/pid/pydockrescoring/ ) and ZDOCK server ( http://zdock.umassmed.edu/ ), with code available on request. moal@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
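The electoral-voting idea behind such pipelines can be illustrated with a Borda count that aggregates per-term rankings of docked poses. The scoring terms and values below are invented, and Borda is just one of several voting schemes a pipeline like this might adapt:

```python
def borda_aggregate(score_lists):
    """score_lists: list of dicts pose -> score (higher is better).
    Returns poses sorted best-first by summed Borda points."""
    poses = list(score_lists[0])
    points = {p: 0 for p in poses}
    for scores in score_lists:
        ranked = sorted(poses, key=lambda p: scores[p], reverse=True)
        for rank, p in enumerate(ranked):
            points[p] += len(poses) - 1 - rank  # best pose earns most points
    return sorted(poses, key=lambda p: points[p], reverse=True)

# Two hypothetical biophysical terms scoring three docked poses.
electrostatics = {"pose1": 0.9, "pose2": 0.4, "pose3": 0.7}
desolvation = {"pose1": 0.8, "pose2": 0.6, "pose3": 0.5}
ranking = borda_aggregate([electrostatics, desolvation])
```

A pose that ranks well under every term accumulates the most points, which is the intuition behind combining heterogeneous biophysical scores by voting rather than by summing raw values on incompatible scales.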

  3. The informational system model of Ukrainian national transport workflow improvement based on electronic signature introduction management

    Directory of Open Access Journals (Sweden)

    Grigoriy NECHAEY

    2007-01-01

Full Text Available The proposed information system model is based on a new conceptual method for working with e-signatures in transport informational systems. The problems and aims that may be solved with the help of this system are outlined, together with the most important economic and technical advantages of the proposed system in comparison with traditional methods of e-signature use.

  4. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  5. Architecture Model of Bussines, Information System and Technology in BAKOSURTANAL Based on TOGAF

    Directory of Open Access Journals (Sweden)

    Iyan Supriyana

    2010-04-01

Full Text Available Information technology (IT) is necessary in BAKOSURTANAL to support its business in relation to data and spatial information. Users benefit through easy and fast access to data and spatial information. The importance of enterprise architecture (EA) in supporting the organization is proven, because it provides the technology and process structure that are fundamental aspects of an IT strategy. An enterprise architecture framework (EAF) accelerates and simplifies the development of an EA by ensuring comprehensive coverage of solutions and keeping the resulting EA in line with the growth of the enterprise. This paper explains The Open Group Architecture Framework (TOGAF), one of several EAFs. The result shows that the most suitable approach for BAKOSURTANAL blueprint development is the proposed EA model covering business, information system, and technology architecture, relying on recommended technical foundations that can be implemented.

  6. Information Exchange in Global Logistics Chains : An application for Model-based Auditing (abstract)

    NARCIS (Netherlands)

    Veenstra, A.W.; Hulstijn, J.; Christiaanse, R.; Tan, Y.

    2013-01-01

    An integrated data pipeline has been proposed to meet requirements for supply chain visibility and control. How can data integration be used for risk assessment, monitoring and control in global supply chains? We argue that concepts from model-based auditing can be used to model the ‘ideal’ flow of

  7. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    Directory of Open Access Journals (Sweden)

    Saleh Alwahaishi

    2013-03-01

Full Text Available The world has changed a great deal in recent years. Rapid advances in technology and changing communication channels have changed the way people work and, for many, where they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT), are converging into one ubiquitous mobile Internet service, which will change the way we both do business and deal with our daily routine activities. As the use of ICT expands globally, there is a need for further research into the cultural aspects and implications of ICT. The acceptance of Information Technology (IT) has become a fundamental part of the research agenda of most organizations (Igbaria, 1993). In IT research, numerous theories are used to understand users' adoption of new technologies, including the Technology Acceptance Model, the Theory of Reasoned Action, the Theory of Planned Behavior and, recently, the Unified Theory of Acceptance and Use of Technology (UTAUT). Each of these models seeks to identify the factors that influence a citizen's intention to use, or actual use of, information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of the mobile Internet, as an ICT application, in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral Intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT practitioners and researchers, in particular those concerned with mobile Internet services.

  8. Spatial characterization and prediction of Neanderthal sites based on environmental information and stochastic modelling

    Science.gov (United States)

    Maerker, Michael; Bolus, Michael

    2014-05-01

We present a unique spatial dataset of Neanderthal sites in Europe that was used to train a set of stochastic models to reveal the correlations between site locations and environmental indices. To assess the relations between the Neanderthal sites and the environmental variables, we applied a boosted regression tree approach (TREENET), a statistical mechanics approach (MAXENT), and support vector machines. The stochastic models employ a learning algorithm to identify the model that best fits the relationship between the attribute set (the predictor variables, i.e. the environmental variables) and the classified response variable, which in this case is the type of Neanderthal site. A quantitative evaluation of model performance was carried out by determining the suitability of the models for geo-archaeological applications and by helping to identify those aspects of the methodology that need improvement. The models' predictive performances were assessed by constructing Receiver Operating Characteristic (ROC) curves for each Neanderthal class, for both training and test data. In a ROC curve, the sensitivity is plotted against the false positive rate (1-specificity) for all possible cut-off points, and the quality of a ROC curve is quantified by the area under the curve. The dependent or target variable in this study is the location of Neanderthal sites, described by latitude and longitude. Information on site locations was collected from the literature and our own research, and all sites were checked for positional accuracy using high-resolution maps and Google Earth. The study illustrates that the models show a distinct ranking in performance, with TREENET outperforming the other approaches. Moreover, Pre-Neanderthals, Early Neanderthals and Classic Neanderthals show specific spatial distributions. However, all models show wide correspondence in the selection of the most important predictor variables, generally showing less
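Model quality in this record is summarized by the area under the ROC curve (AUC). A minimal pure-Python AUC via the rank-sum (Mann-Whitney) formulation; the labels and scores below are illustrative, not the Neanderthal-site data:

```python
def auc(labels, scores):
    """labels: 1 = positive (site present), 0 = negative; scores: model output.
    AUC = probability a random positive outscores a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2]
print(round(auc(labels, scores), 3))  # → 0.889
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why it serves as a single-number comparison across TREENET, MAXENT and the support vector machines.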

  9. Carbon emission analysis and evaluation of industrial departments in China: An improved environmental DEA cross model based on information entropy.

    Science.gov (United States)

    Han, Yongming; Long, Chang; Geng, Zhiqiang; Zhang, Keyu

    2018-01-01

Environmental protection and carbon emission reduction play a crucial role in sustainable development. However, environmental efficiency analysis and evaluation based on the traditional data envelopment analysis (DEA) cross model is subjective and inaccurate, because all elements in a column or a row of the cross-evaluation matrix (CEM) in the traditional DEA cross model are given the same weight. Therefore, this paper proposes an improved environmental DEA cross model based on information entropy to analyze and evaluate the carbon emissions of industrial departments in China. Information entropy is applied to build an entropy distance based on the turbulence of the whole system, and to calculate the weights in the CEM of the environmental DEA cross model in a dynamic way. The theoretical results, obtained using Monte Carlo simulation, show that the new weights constructed from information entropy are unique and globally optimal. Finally, compared with the traditional environmental DEA and DEA cross models, the improved environmental DEA cross model shows better efficiency discrimination ability on the data of industrial departments in China. Moreover, the proposed model can estimate the carbon emission reduction potential of industrial departments to improve energy efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
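The core idea of replacing equal weights with entropy-based ones can be sketched with the standard Shannon entropy-weight method; this is not the paper's exact entropy-distance construction, and the 3x3 cross-evaluation matrix is invented:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy weights per column of a positive matrix:
    near-uniform columns carry little information and get small weight."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        h = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        raw.append(1.0 - h)  # divergence degree of column j
    s = sum(raw)
    return [w / s for w in raw]

# Toy CEM: rows are DMUs, columns are the efficiency scores assigned by
# each evaluating DMU.
cem = [[0.9, 0.5, 0.60],
       [0.8, 0.5, 0.70],
       [0.1, 0.5, 0.65]]
w = entropy_weights(cem)
```

The second column scores every DMU identically, so it cannot discriminate and receives essentially zero weight, while the most dispersed column dominates; this is the mechanism by which an entropy-weighted CEM improves discrimination over equal weighting.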

  10. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

Full Text Available Breast cancer is the most common malignancy in women worldwide. With increasing awareness of the heterogeneity of breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS)-based Pathifier algorithm, Cox regression and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated its performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups and compared the areas under the curve based on the binary classification. The resulting genomic prognosis model is composed of fifteen pathways (e.g., the P53 pathway) with previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and in the three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also find strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude compared to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.

  11. The Research of Petroleum Enterprise Information System Architecture Based on the G/S Model

    Science.gov (United States)

    Rui, Liu; Xirong, Guo; Fang, Miao

This paper explains the petroleum engineering technologies of a petroleum enterprise supported by the G/S model, which combines the exploration, development, and transportation processes of the petroleum enterprise with spatial information technology supported by the Digital Earth Platform. This improves the scientific soundness, accuracy, and rationality of the petroleum engineering technologies, reduces costs, and increases benefits.

  12. Development of a target-site based regional frequency model using historical information

    Science.gov (United States)

    Hamdi, Yasser; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent

    2016-04-01

Nuclear power facilities in France were designed to withstand extreme environmental conditions with a very low probability of failure. Nevertheless, some exceptional surges considered as outliers are not properly addressed by classical frequency analysis models. If the available data at the site of interest (target site) are sufficiently complete over a long period and not characterized by the presence of an outlier, at-site frequency analysis can be used to estimate quantiles with acceptable uncertainties. Otherwise, regional and historical information (HI) may be used to mitigate the lack of data and the influence of the outlier by increasing its representativeness in the sample. Several models have been proposed in recent years for regional frequency analysis of extreme surges in France to take such outliers into account. However, these models neither give a specific weight to the target site nor take HI into account. The objective of the present work is to develop a regional frequency model (RFM) centered on a target site and using HI. The neighborhood between sites is measured by a degree of physical and statistical dependence between observations (with a prior confidence level). Unlike existing models, the region obtained around the target site (constituting its neighboring sites) slides from one target site to another; in other words, the developed model assigns a region to each target site. The idea of constructing a frequency model favoring target sites, with regions moving around them, is the original key point of the developed model. A related issue is the estimation of missing and/or ungauged surges at target sites from those of gauged potential neighboring sites; a multiple linear regression (MLR) is used, and the approach can be extended to other reconstruction models. MLR analysis can be considered conclusive only if the available observations at neighboring sites are informative enough.
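The MLR reconstruction step can be sketched as an ordinary least-squares fit of target-site maxima on the maxima observed at gauged neighbors, then used to fill a missing year. The data are invented, and the pure-Python normal-equations solver is only a minimal stand-in for a proper regression package:

```python
def ols_fit(X, y):
    """Least squares via normal equations (X^T X) b = (X^T y), solved with
    Gaussian elimination and partial pivoting. X rows start with 1.0 for
    the intercept."""
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * t for row, t in zip(X, y)) for i in range(n)]
    for i in range(n):                       # forward elimination
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Invented yearly surge maxima (metres) at two gauged neighboring sites,
# with the corresponding observed maxima at the target site.
neigh = [(1.0, 1.2), (1.5, 1.4), (2.0, 2.1), (2.5, 2.3), (3.0, 3.1)]
target = [1.1, 1.5, 2.0, 2.4, 3.0]
X = [[1.0, a, b] for a, b in neigh]
coef = ols_fit(X, target)
# Reconstruct a missing year at the target site from the neighbors' values.
estimate = coef[0] + coef[1] * 1.8 + coef[2] * 1.9
```

As the record notes, such a reconstruction is only as good as the information content of the neighboring records: with weakly correlated neighbors, the fitted coefficients and the filled-in values become unreliable.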

  13. Conceptual model of health information ethics as a basis for computer-based instructions for electronic patient record systems.

    Science.gov (United States)

    Okada, Mihoko; Yamamoto, Kazuko; Watanabe, Kayo

    2007-01-01

    A computer-based learning system called Electronic Patient Record (EPR) Laboratory has been developed for students to acquire knowledge and practical skills of EPR systems. The Laboratory is basically for self-learning. Among the subjects dealt with in the system is health information ethics. We consider this to be of the utmost importance for personnel involved in patient information handling. The variety of material on the subject has led to a problem in dealing with it in a methodical manner. In this paper, we present a conceptual model of health information ethics developed using UML to represent the semantics and the knowledge of the domain. Based on the model, we could represent the scope of health information ethics, give structure to the learning materials, and build a control mechanism for a test, fail and review cycle. We consider that the approach is applicable to other domains.

  14. Hybrid approach for fault diagnosis based on multilevel flow model and information fusion of nuclear power plant

    International Nuclear Information System (INIS)

    Ma Jie; Guo Lifeng; Zhang Yusheng; Peng Qiao; Ruan Minzhi

    2011-01-01

In order to improve the capabilities of the condition monitoring and fault diagnosis system, a hybrid intelligent diagnostic system based on the multilevel flow model (MFM) and information fusion is proposed. The method uses information fusion techniques to improve the speed and accuracy of fault diagnosis, and uses the MFM to explain the alarm propagation path, which enhances the comprehensibility of the diagnostic result. A simulation test shows that the hybrid intelligent diagnostic system can identify faults and produce alarm analyses quickly. (authors)
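One standard information-fusion operator for combining diagnostic evidence from several sources is Dempster's rule of combination; whether this particular diagnostic system uses exactly this rule is not stated in the record, and the sensor masses below are toy values:

```python
def dempster(m1, m2):
    """m1, m2: {frozenset(hypotheses): mass}. Returns the fused assignment,
    renormalized to discard the conflicting mass."""
    fused, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                fused[inter] = fused.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in fused.items()}

# Two sensors weighing in on candidate faults {pump, valve}; a mass on the
# full set expresses "don't know".
s1 = {frozenset({"pump"}): 0.7, frozenset({"pump", "valve"}): 0.3}
s2 = {frozenset({"pump"}): 0.6, frozenset({"valve"}): 0.2,
      frozenset({"pump", "valve"}): 0.2}
fused = dempster(s1, s2)
```

Fusing the two sources sharpens the belief in the pump fault well beyond what either sensor reports alone, which is the sense in which fusion improves the speed and accuracy of a diagnosis.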

  15. An agent-based information management model of the Chinese pig sector

    NARCIS (Netherlands)

    Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Roozmand, O.; Beulens, A.J.M.

    2010-01-01

    This paper investigates the effect of a selected top-down measure (what-if scenario) on actual agent behaviour and total system behaviour by means of an agent-based simulation model, when agents’ behaviour cannot fully be managed because the agents are autonomous. The Chinese pork sector serves as

  16. Information exchange in global logistics chains : An application for model-based auditing,

    NARCIS (Netherlands)

    Veenstra, A.W.; Hulstijn, J.; Christiaanse, R.M.J.; Tan, Y.

    2013-01-01

    An integrated data pipeline has been proposed to meet requirements for visibility, supervision and control in global supply chains. How can data integration be used for risk assessment, monitoring and control in global supply chains? We argue that concepts from model-based auditing can be used to

  17. Junior high school students' cognitive process in solving the developed algebraic problems based on information processing taxonomy model

    Science.gov (United States)

    Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd

    2017-05-01

This study aims to: i) develop problem-solving questions on Linear Equation Systems of Two Variables (LESTV) based on the levels of the IPT Model; ii) describe the level of students' information processing skill in solving LESTV problems; iii) describe students' information processing skill in solving LESTV problems; and iv) describe students' cognitive processes in solving LESTV problems. The study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey analyzing students' level of information processing skill; and iii) a qualitative case study analyzing students' cognitive processes. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five junior high schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen of these students were drawn as a sample for interview sessions, continued until information saturation was reached. The data were collected using the LESTV problem-solving test and an interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings indicated that students' cognitive processes reached only the steps of identifying external sources and fluently executing algorithms in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model for assessing students at different levels of the hierarchy.

  18. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    Science.gov (United States)

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

The encoding of molecular information into molecular descriptors is the first step of in silico chemoinformatics methods in drug design. Machine Learning methods are a complex solution for finding prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or the physical-chemical properties of an atom or group of atoms, to molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, predicting their activity is a complicated task and interpreting the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions, and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools, and drug SMILES formulas for two servers. The molecular graph descriptor-based Machine Learning models could be useful tools for the in silico screening of new peptides/proteins as future drug targets for specific treatments.
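To make the "molecular graph descriptor" idea concrete: one classic graph descriptor is the Wiener index, the sum of shortest-path distances between all atom pairs, computable by breadth-first search. This is a generic illustration, not one of the Bio-AIMS descriptors; the toy graph is n-butane's carbon skeleton (a path of four atoms):

```python
from collections import deque

def wiener_index(adj):
    """adj: {atom: [bonded atoms]}. Sum of shortest-path distances over
    all unordered atom pairs."""
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted from both endpoints

butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(wiener_index(butane))  # → 10 for the path on four atoms
```

Descriptors of this kind turn an arbitrary-size graph into a fixed set of numbers, which is what lets standard Machine Learning models consume molecular structure in a QSAR setting.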

  19. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  20. HL7 document patient record architecture: an XML document architecture based on a shared information model.

    Science.gov (United States)

    Dolin, R H; Alschuler, L; Behlen, F; Biron, P V; Boyer, S; Essin, D; Harding, L; Lincoln, T; Mattison, J E; Rishel, W; Sokolowski, R; Spinosa, J; Williams, J P

    1999-01-01

    The HL7 SGML/XML Special Interest Group is developing the HL7 Document Patient Record Architecture. This draft proposal strives to create a common data architecture for the interoperability of healthcare documents. Key components are that it is under the umbrella of HL7 standards, it is specified in Extensible Markup Language, the semantics are drawn from the HL7 Reference Information Model, and the document specifications form an architecture that, in aggregate, define the semantics and structural constraints necessary for the exchange of clinical documents. The proposal is a work in progress and has not yet been submitted to HL7's formal balloting process.

  1. Exploring User Engagement in Information Networks: Behavioural – based Navigation Modelling, Ideas and Directions

    Directory of Open Access Journals (Sweden)

    Vesna Kumbaroska

    2017-04-01

Full Text Available Revealing the endless array of user behaviors in an online environment is a very good indicator of the user's interests, whether in browsing or in purchasing. One such behavior is navigation behavior: detected user navigation patterns can be used for practical purposes such as improving user engagement, turning browsers into buyers, personalizing content or the interface, etc. In this regard, our research connects navigation modelling and user engagement. The Generalized Stochastic Petri Nets (GSPN) concept is proposed for stochastic, behavioral-based modelling of the navigation process, in order to measure user engagement components. Different types of users are automatically identified and clustered according to their navigation behaviors, so the developed model gives great insight into the navigation process. As part of this study, Peterson's model for measuring user engagement is explored and a direct calculation of its components is illustrated. At the same time, assuming that several user sessions/visits are initialized in a certain time frame, following the Petri net dynamics indicates that the proposed behavioral-based model could be used for calculating user engagement metrics; some basic ideas are discussed and initial directions are given.
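A stochastic Petri-net-style navigation model can be sketched as a race among enabled exponential transitions (a GSPN restricted to timed transitions behaves like a continuous-time Markov chain). The three-page net, rates and marking below are toy assumptions, not the paper's model:

```python
import random

def simulate(transitions, marking, t_end, seed=0):
    """transitions: list of (inputs, outputs, rate) with token dicts;
    fires enabled exponential transitions Gillespie-style until t_end."""
    rng = random.Random(seed)
    t, fired = 0.0, 0
    while t < t_end:
        enabled = [tr for tr in transitions
                   if all(marking.get(p, 0) >= k for p, k in tr[0].items())]
        if not enabled:
            break  # e.g. the session token reached an absorbing place
        total = sum(r for _, _, r in enabled)
        t += rng.expovariate(total)
        pick = rng.uniform(0, total)
        for i, (ins, outs, r) in enumerate(enabled):
            pick -= r
            if pick <= 0 or i == len(enabled) - 1:
                for p, k in ins.items():
                    marking[p] -= k
                for p, k in outs.items():
                    marking[p] = marking.get(p, 0) + k
                fired += 1
                break
    return marking, fired

# Toy navigation net: one session token moving between pages.
net = [
    ({"home": 1},    {"catalog": 1},  2.0),  # browse to the catalog
    ({"catalog": 1}, {"home": 1},     0.5),  # navigate back
    ({"catalog": 1}, {"checkout": 1}, 1.0),  # convert: browser becomes buyer
]
marking, fired = simulate(net, {"home": 1}, t_end=100.0)
```

Running many such simulated sessions and counting firings per transition is one way quantities like conversion rate or time-on-task, components of engagement metrics, could be read off the net's dynamics.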

  2. Integrated Methodology for Information System Change Control Based on Enterprise Architecture Models

    Directory of Open Access Journals (Sweden)

    Pirta Ruta

    2015-12-01

Full Text Available Information system (IS) change management and governance are, according to best practice, defined and described in several international methodologies, standards and frameworks (ITIL, COBIT, ValIT, etc.). These methodologies describe IS change management aspects from the viewpoint of their particular enterprise resource management area. The areas are mainly viewed in a partly isolated environment, and the integration of the existing methodologies is insufficient to provide unified and controlled methodological support for holistic IS change management. In this paper, an integrated change management methodology is introduced. The methodology consists of guidelines for IS change control that integrate the following significant resource management areas: information technology (IT) governance, change management and enterprise architecture (EA) change management. In addition, the methodology includes lists of controls applicable at different phases. The approach is based on the re-use and fusion of principles from related methodologies, as well as on empirical observations about typical IS change management mistakes in enterprises.

  3. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

… to observe and analyze the workings of a development project. I have been working as part of the team assembled for the development of the information system based on AMM for a period of three years. My active participation in the development project granted me access to all the actors involved. In my role I … through negotiation and democratic decision making will it be possible for the team members to have their current weltanschauung represented in decision making. Thirdly, geographical distribution and loose coupling foster individualist rather than group behavior. The more the social tissue is disconnected … of the technology, the development team was formed by individuals from both universities and the private sector. The organization of the development team was geographically distributed and loosely coupled. The development of information systems has always been a difficult activity and the records show a remarkable …

  4. Building information modelling (BIM)

    CSIR Research Space (South Africa)

    Conradie, Dirk CU

    2009-02-01

Full Text Available The concept of a Building Information Model (BIM), also known as a Building Product Model (BPM), is nothing new. A short article on BIM can never cover the entire field, because it is a particularly complex field that has only recently begun to receive...

  5. 3D building reconstruction based on given ground plan information and surface models extracted from spaceborne imagery

    Science.gov (United States)

    Tack, Frederik; Buyuksalih, Gurcan; Goossens, Rudi

    2012-01-01

3D surface models have gained ground as an important tool for urban planning and mapping. However, urban environments are complex to model, and they provide a challenge for investigating the current limits of automatic digital surface modelling from high-resolution satellite imagery. An approach is introduced to improve a 3D surface model, extracted photogrammetrically from satellite imagery, based on the geometric building information embodied in existing 2D ground plans. First, buildings are clipped from the extracted DSM based on the 2D polygonal building ground plans. To generate prismatic structures with vertical walls and flat roofs, the building shape is retrieved from the cadastre database while elevation information is extracted from the DSM. Within each 2D building boundary, a constant roof height is extracted based on statistical calculations over the height values. After the buildings are extracted from the initial surface model, the remaining DSM is further processed and simplified to a smooth DTM that reflects bare ground, without artifacts, local relief, vegetation, cars and street furniture. In the next phase, both models are merged to yield an integrated city model, or generalized DSM. The accuracy of the generalized surface model is assessed by a quantitative statistical analysis against two different types of reference data.
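The per-building step, deriving one constant roof height from the DSM cells inside a footprint, can be sketched as below. The record says only "statistical calculations of the height values"; the median is used here as a robust assumption (it resists antennas and noisy returns), and the grid and footprint are toy data:

```python
def roof_height(dsm, footprint_cells):
    """dsm: 2D list of elevations; footprint_cells: (row, col) pairs inside
    the 2D building polygon. Returns the median height as the flat-roof level."""
    heights = sorted(dsm[r][c] for r, c in footprint_cells)
    mid = len(heights) // 2
    if len(heights) % 2:
        return heights[mid]
    return 0.5 * (heights[mid - 1] + heights[mid])

dsm = [
    [2.0,  2.1,  2.0, 1.9],
    [2.0,  9.8, 10.1, 2.0],   # a building with noisy roof returns
    [2.1, 10.0, 14.5, 1.9],   # 14.5 could be an antenna or a DSM blunder
    [2.0,  2.0,  2.1, 2.0],
]
footprint = [(1, 1), (1, 2), (2, 1), (2, 2)]
print(roof_height(dsm, footprint))  # → 10.05
```

Assigning this single value to every cell in the footprint yields the prismatic flat-roofed block that is then merged with the smoothed DTM.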

  6. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  7. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

Nuclear and radioactive facilities can release effluents containing radionuclides into the environment, where they disperse and/or accumulate in the atmosphere, on the land surface and in surface waters. Radiological impact assessments require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available and cannot be measured; to carry out the evaluation, an extensive search of the published literature is needed for the possible values of each parameter under conditions similar to those of the case under study, which can be laborious. This work describes the characteristics of the VALORA database system, developed to organize and automate significant information appearing in different sources (scientific or technical literature) on the parameters used in modelling the behavior of pollutants in the environment, and on the values assigned to these parameters in the evaluation of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is one component of a set of computer tools intended to help solve dispersion and pollutant transfer models. (Author)
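A parameter store of this kind can be sketched with a small relational table keyed by parameter, radionuclide and medium. The schema, sample values and source labels below are assumptions for illustration, not VALORA's actual design or data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE param_values (
    parameter TEXT, radionuclide TEXT, medium TEXT,
    value REAL, units TEXT, reference TEXT)""")
rows_in = [
    ("Kd", "Cs-137", "clay soil",  1200.0, "L/kg", "literature source A"),
    ("Kd", "Cs-137", "sandy soil",  270.0, "L/kg", "literature source B"),
]
conn.executemany("INSERT INTO param_values VALUES (?, ?, ?, ?, ?, ?)", rows_in)

# Consultation step: list the published Kd values for Cs-137 so the analyst
# can select the one matching the conditions being modelled.
rows = conn.execute(
    "SELECT medium, value, units FROM param_values "
    "WHERE parameter = ? AND radionuclide = ? ORDER BY value DESC",
    ("Kd", "Cs-137")).fetchall()
```

Storing the bibliographic reference alongside each value is what turns the laborious literature search the record describes into a repeatable query.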

  8. Perspectives of IT Artefacts: Information Systems based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    A solution for production scheduling that is lately attracting the interest of the manufacturing industry involves the use of complex mathematical modeling techniques in scheduling software. However, this technology is fairly unknown among manufacturing practitioners, as are the social problems of its development and use. The aim of this article is to show how an approach based on multiple perspectives can help understand the emergence of complex software, and why and how the reasons and motives of the different stakeholders are, at times, incompatible.

  9. Power distribution system diagnosis with uncertainty information based on rough sets and clouds model

    Science.gov (United States)

    Sun, Qiuye; Zhang, Huaguang

    2006-11-01

    During a distribution system fault, the explosive growth of signals, involving both fuzziness and randomness, is usually too redundant for the dispatcher to make the right decision. The volume of data, with its inherent uncertainties, overwhelms classic information systems in the distribution control center and exacerbates the knowledge acquisition process of existing expert systems. Intelligent methods must therefore be developed to help users maintain and use this abundance of information effectively. An important issue in a distribution fault diagnosis system (DFDS) is to allow the discovered knowledge to be as close as possible to natural language, so as to satisfy user needs with tractability and to give the DFDS robustness. To this end, the paper describes a systematic approach for detecting superfluous data. The approach offers the user the opportunity both to learn about the data and to validate the extracted knowledge; it can be considered a "white box", rather than a "black box" as in the case of a neural network. Cloud theory is introduced, whose mathematical description of a cloud effectively integrates the fuzziness and randomness of linguistic terms in a unified way. Based on it, a method of knowledge representation in DFDS is developed that bridges the gap between quantitative and qualitative knowledge. Compared with classical rough sets, the cloud-rough method can deal with attribute uncertainty and perform a soft discretization of continuous attributes (such as current and voltage). A novel approach, including discretization, attribute reduction, rule reliability computation and equipment reliability computation, is presented. Data redundancy is greatly reduced through the integrated use of cloud theory and rough set theory. An illustration with a power distribution DFDS shows the effectiveness and practicality of the proposed approach.
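The cloud model this record relies on is commonly realized with a forward normal cloud generator, which produces "cloud drops" (a value and its membership degree) from an expectation Ex, entropy En and hyper-entropy He. The sketch below is a minimal, hypothetical Python illustration of that generator (the parameter values and the "normal feeder voltage" concept are invented, not taken from the record):

```python
import math
import random
import statistics

def normal_cloud_drops(ex, en, he, n=1000, seed=42):
    """Forward normal cloud generator: produce n cloud drops (x, mu) for a
    linguistic concept with expectation Ex, entropy En and hyper-entropy He."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        # Randomize the dispersion itself: En' ~ N(En, He^2); He > 0 is what
        # blends randomness into the fuzziness of the linguistic term.
        en_prime = abs(rng.gauss(en, he)) or 1e-12
        x = rng.gauss(ex, en_prime)                           # a drop on the universe
        mu = math.exp(-((x - ex) ** 2) / (2 * en_prime ** 2))  # membership degree
        drops.append((x, mu))
    return drops

# Hypothetical linguistic term "normal feeder voltage", centred on 220 V:
drops = normal_cloud_drops(ex=220.0, en=5.0, he=0.5)
mean_x = statistics.fmean(x for x, _ in drops)
```

With He set to 0 the generator degenerates to an ordinary Gaussian membership function; increasing He thickens the cloud, which is how the model expresses the combined fuzziness and randomness of linguistic attribute values.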

  10. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS

    Directory of Open Access Journals (Sweden)

    Мария Васильевна Комова

    2013-09-01

    Full Text Available The social transformation is considered as a process of qualitative changes in society, creating a new level of organization in all areas of life, in different social formations and in societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on their understanding as the consequence of information exchange processes in society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis, and comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of societal transformation as well. The information model of social transformations is based on defining the transformation of society as the change in the information that functions in the society's information space. The study of social transformations is the study of the information flows circulating in society, characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by factors representing the whole complex of material objects. The integrated information model of social transformations foresees the interaction of the following components: social memory, the information space, and the social ideal. To determine the dynamics and intensity of social transformations, the author uses the notions of "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold makes it possible to simulate the course of social development and to predict the …

  11. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of information sources. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimisation of the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites; this in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.
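The Random Regret Minimisation idea named in this record is usually formalized (in Chorus's standard formulation, not reproduced in the abstract itself) as the regret of alternative i being a sum over competing alternatives j and attributes m of ln(1 + exp(beta_m · (x_jm − x_im))). A minimal Python sketch, with an invented toy choice set of information sources:

```python
import math

def rrm_regret(alternatives, betas):
    """Random Regret Minimisation: regret of alternative i is the sum, over
    competing alternatives j and attributes m, of
    ln(1 + exp(beta_m * (x_jm - x_im)))   (Chorus's RRM formulation)."""
    regrets = []
    for i, x_i in enumerate(alternatives):
        r = 0.0
        for j, x_j in enumerate(alternatives):
            if i == j:
                continue
            r += sum(math.log(1.0 + math.exp(b * (xj - xi)))
                     for b, xi, xj in zip(betas, x_i, x_j))
        regrets.append(r)
    return regrets

# Hypothetical information-source alternatives described by two attributes,
# (credibility, real-time information content), higher is better:
sources = [(0.9, 0.2), (0.6, 0.8), (0.4, 0.3)]
regrets = rrm_regret(sources, betas=[1.0, 1.0])
best = min(range(len(sources)), key=regrets.__getitem__)
```

Note how the regret-minimising choice (the balanced second source) differs in spirit from pure utility maximisation: an alternative that is strongly beaten on even one attribute accumulates regret against every competitor.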

  12. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between the signal and the change in water mass stored in the subsurface. Thus, no petrophysical relationship is required for coupled hydrogeophysical inversion. Two hydrological events were monitored with TLRG. One was a natural flooding event in the periphery of the Okavango Delta, Botswana, and one was a forced […] changes in gravity due to unmonitored non-hydrological effects, and the requirement of a gravitationally stable reference station. Application of TLRG in hydrology should be combined with other geophysical and/or traditional monitoring methods.
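The direct relationship between the gravity signal and stored water mass that this record exploits is commonly illustrated with the infinite Bouguer slab approximation, Δg = 2πGρΔh (this approximation is a standard textbook result, not taken from the record; the porosity parameter below is an assumption for illustration):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_WATER = 1000.0   # density of water, kg/m^3

def bouguer_slab_microgal(delta_h_m, porosity=1.0):
    """Gravity change (in microGal) caused by an infinite horizontal slab of
    water of thickness delta_h_m; porosity scales the water-filled fraction
    when the slab represents a change in groundwater table height."""
    delta_g = 2.0 * math.pi * G * RHO_WATER * porosity * delta_h_m  # m/s^2
    return delta_g * 1e8  # 1 m/s^2 = 1e8 microGal

dg = bouguer_slab_microgal(1.0)  # roughly 42 microGal per metre of water
```

The ~42 µGal/m sensitivity explains both the appeal (no petrophysical transfer function needed) and the difficulty (non-hydrological effects of a few µGal matter) noted in the abstract.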

  13. Towards the Building Information Modeling-Based Capital Project Lifecycle Management in the Luxury Yacht Industry

    Directory of Open Access Journals (Sweden)

    Liu Fuyong

    2017-11-01

    Full Text Available Applying BIM-based capital project lifecycle management (CPLM) to the yacht industry is a new approach. This paper explored the feasibility of applying the principles and rationales of BIM to capital project lifecycle management in luxury yacht design, engineering, fabrication, construction and operation. The paper examined the premises and backbone technology of BIM. It then evaluated leading naval engineering and shipbuilding software applications and their development trends through the functional lens of BIM. To systematically investigate a BIM-based approach to CPLM in the luxury yacht industry, the paper proposed and outlined an implementation framework. A case study and a student competition use case were discussed to delineate the core constituents and processes of the proposed framework. Through a domestic custom luxury yacht design and prototyping student competition, the application of this framework in educational research is demonstrated and an initial quantitative assessment of the framework is carried out. Conclusions: a BIM-based CPLM implementation framework can help the luxury yacht industry capitalize on the global transformation to an information-centric and data-driven new business paradigm in shipbuilding with integrated design, manufacturing and production.

  14. Lotus Base: An integrated information portal for the model legume Lotus japonicus.

    Science.gov (United States)

    Mun, Terry; Bachmann, Asger; Gupta, Vikas; Stougaard, Jens; Andersen, Stig U

    2016-12-23

    Lotus japonicus is a well-characterized model legume widely used in the study of plant-microbe interactions. However, datasets from various Lotus studies are poorly integrated and lack interoperability. We recognize the need for a comprehensive repository that allows dynamic exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population containing in excess of 120,000 lines, and serves the end-user tightly integrated data from Lotus, such as the reference genome, annotated proteins, and expression profiling data. We report the integration of expression data from the L. japonicus gene expression atlas project, and the development of tools to cluster and export such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk.

  15. Green Template for Life Cycle Assessment of Buildings Based on Building Information Modeling: Focus on Embodied Environmental Impact

    OpenAIRE

    Sungwoo Lee; Sungho Tae; Seungjun Roh; Taehyung Kim

    2015-01-01

    The increased popularity of building information modeling (BIM) for application in the construction of eco-friendly green buildings has given rise to techniques for evaluating green buildings constructed using BIM features. Existing BIM-based green building evaluation techniques mostly rely on externally provided evaluation tools, which pose problems associated with interoperability, including a lack of data compatibility and the amount of time required for format conversion. To overcome these …

  16. Test of the technology acceptance model for a Web-based information system in a Hong Kong Chinese sample.

    Science.gov (United States)

    Cheung, Emily Yee Man; Sachs, John

    2006-12-01

    The modified technology acceptance model was used to predict actual Blackboard usage (a web-based information system) in a sample of 57 Hong Kong student teachers whose mean age was 27.8 yr. (SD = 6.9). While the general form of the model was supported, Application-specific Self-efficacy was a more powerful predictor of system use than Behavioural Intention as predicted by the theory of reasoned action. Thus in this cultural and educational context, it has been shown that the model does not fully mediate the effect of Self-efficacy on System Use. Also, users' Enjoyment exerted considerable influence on the component variables of Usefulness and Ease of Use and on Application-specific Self-efficacy, thus indirectly influencing system usage. Consequently, efforts to gain students' acceptance and, therefore, use of information systems such as Blackboard must pay adequate attention to users' Self-efficacy and motivational variables such as Enjoyment.

  17. Using the model statement to elicit information and cues to deceit in interpreter-based interviews.

    Science.gov (United States)

    Vrij, Aldert; Leal, Sharon; Mann, Samantha; Dalton, Gary; Jo, Eunkyung; Shaboltas, Alla; Khaleeva, Maria; Granskaya, Juliana; Houston, Kate

    2017-06-01

    We examined how the presence of an interpreter during an interview affects eliciting information and cues to deceit, while using a method that encourages interviewees to provide more detail (model statement, MS). A total of 199 Hispanic, Korean and Russian participants were interviewed either in their own native language without an interpreter, or through an interpreter. Interviewees either lied or told the truth about a trip they made during the last twelve months. Half of the participants listened to a MS at the beginning of the interview. The dependent variables were 'detail', 'complications', 'common knowledge details', 'self-handicapping strategies' and 'ratio of complications'. In the MS-absent condition, the interviews resulted in less detail when an interpreter was present than when an interpreter was absent. In the MS-present condition, the interviews resulted in a similar amount of detail in the interpreter present and absent conditions. Truthful statements included more complications and fewer common knowledge details and self-handicapping strategies than deceptive statements, and the ratio of complications was higher for truth tellers than liars. The MS strengthened these results, whereas an interpreter had no effect on these results. Copyright © 2017. Published by Elsevier B.V.

  18. Fast Screening Technology for Drug Emergency Management: Predicting Suspicious SNPs for ADR with Information Theory-based Models.

    Science.gov (United States)

    Liang, Zhaohui; Liu, Jun; Huang, Jimmy Xiangji; Zeng, Xing

    2018-01-14

    The genetic polymorphism of cytochrome P450 (CYP450) is considered one of the main causes of adverse drug reactions (ADRs). In order to explore latent correlations between ADRs and potentially corresponding single-nucleotide polymorphisms (SNPs) in CYP450, three algorithms based on information theory are used as the main method to predict possible relations. The study uses a retrospective case-control design to explore the potential relation of ADRs to specific genomic locations and SNPs. Genomic data collected from 53 healthy volunteers are used for the analysis; genomic data from another 30 healthy volunteers excluded from the study serve as the control group. The SNPs at five loci of CYP2D6*2, *10, *14 and CYP1A2*1C, *1F are detected with the Applied Biosystems 3130xl. The raw data are processed with ChromasPro to detect the specific alleles at the above loci in each sample. The secondary data are reorganized and processed in R together with the ADR reports from clinical records. Three information theory based algorithms are implemented for the screening task: JMI, CMIM, and mRMR. If a SNP is selected by more than two algorithms, we conclude with confidence that it is related to the corresponding ADR. The selection results are compared with a control decision tree + LASSO regression model. In the study group where ADRs occur, 10 SNPs are considered relevant to the occurrence of a specific ADR by the combined information theory model. In comparison, only 5 SNPs are considered relevant to a specific ADR by the decision tree + LASSO regression model. In addition, the new method detects more relevant SNP-ADR pairs that are affected by both SNP and dosage. This implies that the new information theory based model is effective in discovering correlations between ADRs and CYP450 SNPs and is helpful in predicting the potentially vulnerable genotypes for some ADRs. The newly proposed …
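Of the three information-theoretic screeners the record names (JMI, CMIM, mRMR), mRMR is the easiest to sketch: greedily pick the feature whose mutual information with the label is highest after subtracting its mean redundancy with the features already chosen. The toy genotypes and ADR labels below are invented for illustration, not the study's data:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information between two discrete sequences, in bits."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, target, k):
    """Greedy mRMR: maximize relevance to the target minus mean redundancy
    with the already-selected features."""
    selected, candidates = [], list(features)
    while candidates and len(selected) < k:
        def score(name):
            rel = mutual_information(features[name], target)
            red = (sum(mutual_information(features[name], features[s])
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy SNP genotypes (0/1/2 minor-allele counts) vs. a binary ADR label:
snps = {
    "rs_a": [0, 1, 2, 2, 1, 0, 2, 1],
    "rs_b": [1, 1, 1, 1, 0, 0, 0, 0],
    "rs_c": [0, 1, 2, 2, 1, 0, 2, 1],  # duplicate of rs_a: fully redundant
}
adr = [0, 1, 1, 1, 1, 0, 1, 0]
picked = mrmr(snps, adr, k=2)
```

Note how the redundancy penalty keeps the duplicate SNP `rs_c` out of the selection even though its relevance to the label equals that of `rs_a`; this is the property that distinguishes mRMR-style screening from ranking SNPs by relevance alone.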

  19. Modeling and Development of Medical Information System Based on Support Vector Machine in Web Network

    Directory of Open Access Journals (Sweden)

    Chuanfu Hu

    2017-12-01

    Full Text Available This paper aims to improve and utilize, in real time, the ontology information in the FOAF and vCard ontology designs, applying open relational data technology, SPARQL queries over the information results, and the sending of data in RDF/JSON format. In addition, it aims to improve the effectiveness and efficiency of extracting patient information from medical information websites. The work includes two web search engines that are used to inform patients about medical care information. The experiment uses Drupal as the main software tool, and the Drupal RDF extension module provides some meaningful mappings. In the evaluation part, the structure of the experimental test platform is established and a system function test is carried out. The evaluation results cover consumers or patients retrieving the latest doctor information, and a comparison of search capabilities and techniques between our system and existing systems.

  20. Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model

    Directory of Open Access Journals (Sweden)

    Mojtaba Salehi

    2013-03-01

    Full Text Available In recent years, the explosion of learning materials in web-based educational systems has made it difficult for learners to locate appropriate materials. Personalized recommendation is an enabling mechanism to overcome the information overload occurring in these new learning environments and to deliver suitable materials to learners. Since users express their opinions based on specific attributes of items, this paper proposes a hybrid recommender system for learning materials based on their attributes to improve the accuracy and quality of recommendation. The presented system has two main modules: an explicit attribute-based recommender and an implicit attribute-based recommender. In the first module, the weights of implicit or latent attributes of materials for a learner are treated as chromosomes in a genetic algorithm, which optimizes the weights according to historical ratings. Recommendations are then generated by a Nearest Neighbourhood Algorithm (NNA) using the optimized weight vectors of implicit attributes, which represent the opinions of learners. In the second module, a preference matrix (PM) is introduced that can model the interests of a learner based on explicit attributes of learning materials in a multidimensional information model. A new similarity measure between PMs is then introduced and recommendations are generated by the NNA. The experimental results show that the proposed method outperforms current algorithms on accuracy measures and can alleviate problems such as cold-start and sparsity.
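The genetic-algorithm step this record describes (attribute-weight vectors as chromosomes, fitness from historical ratings) can be sketched as follows. This is a hypothetical toy, not the authors' implementation: the attribute scores, rating history, linear rating model and GA hyper-parameters are all assumptions:

```python
import random

def predicted_rating(weights, item_attrs):
    """Predicted rating as a weighted sum of the item's attribute scores."""
    return sum(w * a for w, a in zip(weights, item_attrs))

def fitness(weights, history):
    """Negative squared error between predicted and historical ratings."""
    return -sum((predicted_rating(weights, attrs) - r) ** 2
                for attrs, r in history)

def evolve_weights(history, n_attrs, pop=30, gens=60, seed=1):
    """Toy elitist GA: each chromosome is an attribute-weight vector."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_attrs)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, history), reverse=True)
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_attrs)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_attrs)            # point mutation, clipped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, history))

# Hypothetical history: (attribute scores of a material, learner's rating)
history = [([1.0, 0.0], 0.8), ([0.0, 1.0], 0.2), ([1.0, 1.0], 1.0)]
best_w = evolve_weights(history, n_attrs=2)
```

In the full system described by the record, the evolved weight vector would then feed a weighted nearest-neighbour similarity rather than being used directly for prediction.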

  1. Information Systems Efficiency Model

    Directory of Open Access Journals (Sweden)

    Milos Koch

    2017-07-01

    Full Text Available This contribution discusses the basic concept of creating a new model for assessing the efficiency and effectiveness of company information systems. Present trends in this field are taken into account, and the attributes of measuring optimal solutions for a company's ICT are retained (implementation, functionality, service, innovations, safety, relationships, costs, etc.). The proposed new assessment model draws on our experience with formerly implemented and employed methods, which we have modified over time and adapted to companies' needs as well as to the requirements of our research conducted through the ZEFIS portal. The most noteworthy of them is the HOS method, which we have discussed in a number of forums. Its main feature is that it respects the complexity of an information system in correlation with the balanced state of its individual parts.

  2. Modeling information technology effectiveness

    OpenAIRE

    Aleksander Lotko

    2005-01-01

    Numerous cases of systems not bringing the expected results mean that investments in information technology are treated more and more carefully and are no longer privileged over others. This gives rise to the need for cost-effect calculations. Modeling IT effectiveness is a procedure which helps to bring system complexity under control. By using proper measures it is possible to perform an objective investment appraisal for the projects under consideration. In the paper, a framework of method…

  3. On the impact of information delay on location-based relaying: a markov modeling approach

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2012-01-01

    For centralized selection of communication relays, the necessary decision information needs to be collected from the mobile nodes by the access point (centralized decision point). In mobile scenarios, the required information collection and forwarding delays will affect the reliability of the collected information. […] policies via a heuristically reduced brute-force search. Numerical results show how forwarding delays affect these optimal policies.

  4. Towards second-generation smart card-based authentication in health information systems: the secure server model.

    Science.gov (United States)

    Hallberg, J; Hallberg, N; Timpka, T

    2001-01-01

    Conventional smart card-based authentication systems used in health care alleviate some of the security issues in user and system authentication. Existing models still do not cover all security aspects. To enable new protective measures to be developed, an extended model of the authentication process is presented. This model includes a new entity referred to as secure server. Assuming a secure server, a method where the smart card is aware of the status of the terminal integrity verification becomes feasible. The card can then act upon this knowledge and restrict the exposure of sensitive information to the terminal as required in order to minimize the risks. The secure server model can be used to illuminate the weaknesses of current approaches and the need for extensions which alleviate the resulting risks.

  5. A Comprehensive Decision-Making Approach Based on Hierarchical Attribute Model for Information Fusion Algorithms’ Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Lianhui Li

    2014-01-01

    Full Text Available This paper addresses the problem of evaluating fusion algorithm performance in a multiradar information fusion system. First, a hierarchical attribute model for track-relevance performance evaluation is established based on the structural and functional models, and quantization methods for the evaluation indicators are given. Second, a combination weighting method is proposed to determine the weights of the evaluation indicators: the objective and subjective weights are determined separately by criteria importance through intercriteria correlation (CRITIC) and a trapezoidal fuzzy scale analytic hierarchy process (AHP), and an experience factor is then introduced to obtain the combination weight. Finally, an improved technique for order preference by similarity to ideal solution (TOPSIS), replacing Euclidean distance with the Kullback-Leibler divergence (KLD), is used to rank the weighted indicator values of the evaluation objects. An example is given to illustrate the correctness and feasibility of the proposed method.
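The TOPSIS variant the record describes (Euclidean distance swapped for KLD) can be sketched as below. This is an illustrative reading, not the authors' exact formulation: the symmetrised KLD, the column normalisation and the example scores are assumptions, and the weighted rows are treated as unnormalised "distributions" purely for illustration:

```python
import math

def topsis_kld(matrix, weights):
    """TOPSIS with a symmetrised Kullback-Leibler divergence in place of
    Euclidean distance. matrix[i][j] is the positive score of fusion
    algorithm i on benefit indicator j."""
    m = len(weights)
    col_sum = [sum(row[j] for row in matrix) for j in range(m)]
    # Weighted, column-normalised decision matrix.
    norm = [[w * row[j] / col_sum[j] for j, w in enumerate(weights)]
            for row in matrix]
    ideal = [max(row[j] for row in norm) for j in range(m)]   # best per indicator
    anti = [min(row[j] for row in norm) for j in range(m)]    # worst per indicator

    def kld(p, q):
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def sym_kld(p, q):
        return kld(p, q) + kld(q, p)

    scores = []
    for row in norm:
        d_plus, d_minus = sym_kld(row, ideal), sym_kld(row, anti)
        scores.append(d_minus / (d_plus + d_minus))  # closeness coefficient
    return scores

# Three hypothetical fusion algorithms scored on two benefit indicators:
scores = topsis_kld([[0.9, 0.8], [0.5, 0.6], [0.2, 0.3]], weights=[0.6, 0.4])
ranking = sorted(range(3), key=scores.__getitem__, reverse=True)
```

As in classical TOPSIS, the closeness coefficient is 1 for an alternative coinciding with the ideal solution and 0 for one coinciding with the anti-ideal; only the distance measure changes.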

  6. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence tools. In order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the organization's business model. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.

  7. Dynamic Model of an Ammonia Synthesis Reactor Based on Open Information

    OpenAIRE

    Jinasena, Asanthi; Lie, Bernt; Glemmestad, Bjørn

    2016-01-01

    Ammonia is a widely used chemical, hence the ammonia manufacturing process has become a standard case study in the scientific community. In the field of mathematical modeling of the dynamics of ammonia synthesis reactors, there is a lack of complete and well documented models. Therefore, the main aim of this work is to develop a complete and well documented mathematical model for observing the dynamic behavior of an industrial ammonia synthesis reactor system. The model is complete enough to ...

  8. Modelling the Effects of Information Campaigns Using Agent-Based Simulation

    National Research Council Canada - National Science Library

    Wragg, Tony

    2006-01-01

    .... The study highlighted the requirement for accurate data concerning a population's social hierarchy, social networks, behavior patterns, human geography and their subsequent impact on the success of both word-of-mouth and mass media driven information campaigns.

  9. Analysis of a SCADA System Anomaly Detection Model Based on Information Entropy

    Science.gov (United States)

    2014-03-27

    National Transportation Safety Board (NTSB) accident report, a ruptured pipe in Marshall, Michigan led to the release of "843,444 gallons of crude oil" … This is true for some attempted applications of information theory, such as psychology. R. Duncan Luce authored an article in the journal … "applied" (Luce, 2003:183). For a time following Shannon's paper, psychologists attempted to apply information theory in their experiments with little …

  10. New frontiers in information and production systems modelling and analysis incentive mechanisms, competence management, knowledge-based production

    CERN Document Server

    Novikov, Dmitry; Bakhtadze, Natalia; Zaikin, Oleg

    2016-01-01

    This book demonstrates how to apply modern approaches to complex system control in practical applications involving knowledge-based systems. The dimensions of knowledge-based systems are extended by incorporating new perspectives from control theory, multimodal systems and simulation methods.  The book is divided into three parts: theory, production system and information system applications. One of its main focuses is on an agent-based approach to complex system analysis. Moreover, specialised forms of knowledge-based systems (like e-learning, social network, and production systems) are introduced with a new formal approach to knowledge system modelling.   The book, which offers a valuable resource for researchers engaged in complex system analysis, is the result of a unique cooperation between scientists from applied computer science (mainly from Poland) and leading system control theory researchers from the Russian Academy of Sciences’ Trapeznikov Institute of Control Sciences.

  11. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    Science.gov (United States)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
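The core computation behind the deviation analysis this record describes (compare sampled BIM-surface points against the scan cloud, flag large deviations as occluded or non-modelled, summarise the rest) can be sketched with a brute-force nearest-neighbour comparison. The point sets, the 5 cm occlusion cutoff and the RMS summary below are invented for illustration; a real pipeline would use spatial indexing and mesh distances:

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def deviation_report(model_points, scan_points, occlusion_cutoff=0.05):
    """Compare sampled BIM-surface points against a scan point cloud.
    Deviations above occlusion_cutoff (metres) are treated as occluded or
    non-modelled zones; the rest are summarised as an RMS deviation."""
    deviations = [nearest_distance(p, scan_points) for p in model_points]
    kept = [d for d in deviations if d <= occlusion_cutoff]
    occluded = len(deviations) - len(kept)
    rms = math.sqrt(sum(d * d for d in kept) / len(kept)) if kept else None
    return {"rms_m": rms, "occluded_points": occluded}

# Toy example: a planar BIM surface vs. a slightly offset, half-occluded scan.
model = [(x / 10, y / 10, 0.0) for x in range(10) for y in range(10)]
scan = [(x / 10, y / 10, 0.002) for x in range(10) for y in range(5)]
report = deviation_report(model, scan)
```

Swapping the roles of `model_points` and `scan_points` gives the reverse comparison the record mentions, in which non-modelled objects (scan points far from any model surface) show up instead of occlusions.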

  12. A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle.

    Science.gov (United States)

    Madsen, Jens Koed; Pilditch, Toby D

    2018-01-01

    In political campaigns, perceived candidate credibility influences the persuasiveness of messages. In campaigns aiming to influence people's beliefs, micro-targeted campaigns (MTCs) that target specific voters using their psychological profile have become increasingly prevalent. It remains open how effective MTCs are, notably in comparison to population-targeted campaign strategies. Using an agent-based model, the paper applies recent insights from cognitive models of persuasion, extending them to the societal level in a novel framework for exploring political campaigning. The paper provides an initial treatment of the complex dynamics of population-level political campaigning in a psychologically informed manner. Model simulations show that MTCs can take advantage of the psychology of the electorate by targeting voters favourably disposed towards the candidate. Relative to broad campaigning, MTCs allow for efficient and adaptive management of complex campaigns. Findings show that disliked MTC candidates can beat liked population-targeting candidates, pointing to societal questions concerning campaign regulations.
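The mechanism the record describes (micro-targeting concentrates a fixed contact budget on voters already favourably disposed, whose receptivity amplifies each message) can be caricatured in a few lines of agent-based simulation. Everything here is a hypothetical toy, not the authors' model: the disposition scale, the receptivity rule, the budget and the 0.5 support threshold are all invented:

```python
import random

def run_campaign(n_voters, micro_targeted, budget, seed=7):
    """Toy ABM: voters hold a disposition in [-1, 1] towards the candidate.
    A message shifts belief more when the voter is already favourably
    disposed (a credibility/receptivity cue). A micro-targeted campaign
    spends the whole budget on voters with disposition > 0; a broad
    campaign spreads the same budget over everyone."""
    rng = random.Random(seed)
    voters = [rng.uniform(-1.0, 1.0) for _ in range(n_voters)]
    targets = ([i for i, d in enumerate(voters) if d > 0]
               if micro_targeted else list(range(n_voters)))
    contacts_per_voter = budget / max(len(targets), 1)
    for i in targets:
        receptivity = max(0.0, (voters[i] + 1.0) / 2.0)  # grows with disposition
        voters[i] = min(1.0, voters[i] + 0.1 * contacts_per_voter * receptivity)
    return sum(1 for d in voters if d > 0.5) / n_voters  # share of supporters

broad = run_campaign(1000, micro_targeted=False, budget=500.0)
mtc = run_campaign(1000, micro_targeted=True, budget=500.0)
```

Even in this caricature the micro-targeted strategy converts more near-supporters, because it spends nothing on voters whose low receptivity would waste contacts, which is the qualitative effect the record reports.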

  13. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is transformed by an affine transformation to fit those of the input image. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, yielding rough contour lines for the input image. Experimental results for 68 samples showed this method to be quite promising. (author)
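The model-fitting step above amounts to estimating an affine transform from a few extracted landmarks and reusing its coefficients to transfer the contour lines. A least-squares sketch of that step is shown below; the synthetic landmarks standing in for the body axis, apical point and convex hull are assumptions for illustration.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3-D affine transform (A, t) mapping src landmarks onto dst.
    With four non-coplanar landmarks the system is exactly determined."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])             # n x 4 design matrix
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 4 x 3 solution
    return coeffs[:3].T, coeffs[3]                    # A (3x3), t (3,)

def apply_affine(A, t, pts):
    """Transfer model contour points with the fitted coefficients."""
    return pts @ A.T + t

# Synthetic check: recover a known anisotropic scale + shift from 4 landmarks.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
A_true = np.diag([2.0, 1.5, 0.5])
t_true = np.array([1.0, -2.0, 3.0])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
contour = apply_affine(A, t, np.array([[1.0, 1.0, 1.0]]))  # transferred contour point
```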

  14. Information Technology Security Professionals' Knowledge and Use Intention Based on UTAUT Model

    Science.gov (United States)

    Kassa, Woldeloul

    2016-01-01

    Information technology (IT) security threats and vulnerabilities have become a major concern for organizations in the United States. However, there has been little research on assessing the effect of IT security professionals' knowledge on the use of IT security controls. This study examined the unified theory of acceptance and use of technology…

  15. Semantic-Based Knowledge Management in E-Government: Modeling Attention for Proactive Information Delivery

    Science.gov (United States)

    Samiotis, Konstantinos; Stojanovic, Nenad

    E-government has become almost synonymous with a consumer-led revolution of government services inspired and made possible by the Internet. With technology being the least of the worries for government organizations nowadays, attention is shifting towards managing complexity as one of the basic antecedents of operational and decision-making inefficiency. Complexity has traditionally preoccupied public administrations and owes its origins to several sources, primarily the cross-functional nature of administrative work and its degree of legal structuring. Both rely strongly on the underlying process and information infrastructure of public organizations. Managing public administration work thus implies managing its processes and information. Knowledge management (KM) and business process reengineering (BPR) have already been deployed with success by private organizations for the same purposes, and certainly constitute improvement practices worth investigating. Our contribution in this paper concerns the utilization of KM for e-government.

  16. Spherical harmonics based intrasubject 3-D kidney modeling/registration technique applied on partial information

    Science.gov (United States)

    Dillenseger, Jean-Louis; Guillaume, Hélène; Patard, Jean-Jacques

    2006-01-01

    This paper presents a 3D shape reconstruction/intra-patient rigid registration technique used for Nephron-Sparing Surgery preoperative planning. The usual preoperative imaging system is Spiral CT Urography, which provides successive 3D acquisitions of complementary information on kidney anatomy. Because the kidney is difficult to demarcate from the liver or from the spleen, only limited information on its volume or surface is available. In our paper we propose a methodology allowing a global kidney spatial representation on a spherical harmonics basis. The spherical harmonics are exploited to recover the kidney 3D shape and also to perform intra-patient 3D rigid registration. An evaluation performed on synthetic data showed that this technique presented lower performance than expected for 3D shape recovery, but exhibited registration results slightly more accurate than the ICP technique, with faster computation time. PMID:17073323

  17. Analyzing the performance of PROSPECT model inversion based on different spectral information for leaf biochemical properties retrieval

    Science.gov (United States)

    Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei

    2018-01-01

    Leaf biochemical constituents provide useful information about major ecological processes. As fast and nondestructive methods, remote sensing techniques are critical for inferring leaf biochemistry via models. The PROSPECT model has been widely applied to retrieving leaf traits from hemispherical reflectance and transmittance. However, measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, as adopted by many researchers, this study proposes using the transmission spectrum alone, given the latter's increasing availability through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only the reflectance spectrum and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or reflectance than with both spectra.
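Model inversion of the kind described above means searching for the constituent content whose simulated spectrum best matches the measurement. The sketch below illustrates that logic with a Beer-Lambert stand-in for the forward model; PROSPECT itself is a plate model, so the forward function, the pseudo absorption coefficients and the grid search are all illustrative assumptions.

```python
import numpy as np

def toy_transmittance(c, k):
    """Beer-Lambert stand-in for a forward leaf model: transmittance as a
    function of constituent content c and a specific absorption spectrum k.
    (Not PROSPECT; only the inversion logic is being illustrated.)"""
    return np.exp(-c * k)

def invert(measured, k, grid):
    """Merit-function inversion: pick the content whose simulated spectrum is
    closest, in the least-squares sense, to the measured one."""
    costs = [np.sum((toy_transmittance(c, k) - measured) ** 2) for c in grid]
    return grid[int(np.argmin(costs))]

k = np.linspace(0.1, 2.0, 50)            # pseudo absorption coefficients
measured = toy_transmittance(0.8, k)     # synthetic noiseless "measurement"
grid = np.linspace(0.0, 2.0, 201)        # candidate contents, step 0.01
c_hat = invert(measured, k, grid)        # recovers c = 0.8 in this noiseless case
```

In practice a gradient-based optimiser replaces the grid search, and the study's comparison amounts to changing which spectra (R, T, or both) enter the merit function.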

  18. MOLES Information Model

    Science.gov (United States)

    Ventouras, Spiros; Lawrence, Bryan; Woolf, Andrew; Cox, Simon

    2010-05-01

    The Metadata Objects for Linking Environmental Sciences (MOLES) model has been developed within the Natural Environment Research Council (NERC) DataGrid project [NERC DataGrid] to fill a missing part of the 'metadata spectrum'. It is a framework within which to encode the relationships between the tools used to obtain data, the activities which organised their use, and the datasets produced. MOLES is primarily of use to consumers of data, especially in an interdisciplinary context, to allow them to establish details of provenance, and to compare and contrast such information without recourse to discipline-specific metadata or private communications with the original investigators [Lawrence et al 2009]. MOLES is also of use to the custodians of data, providing an organising paradigm for the data and metadata. The work described in this paper is a high-level view of the structure and content of a recent major revision of MOLES (v3.3) carried out as part of a NERC DataGrid extension project. The concepts of MOLES v3.3 are rooted in the harmonised ISO model [Harmonised ISO model] - particularly in metadata standards (ISO 19115, ISO 19115-2) and the 'Observations and Measurements' conceptual model (ISO 19156). MOLES exploits existing concepts and relationships, and specialises information in these standards. A typical sequence of data capturing involves one or more projects under which a number of activities are undertaken, using appropriate tools and methods to produce the datasets. Following this typical sequence, the relevant metadata can be partitioned into the following main sections - helpful in mapping onto the most suitable standards from the ISO 19100 series. • Project section • Activity section (including both observation acquisition and numerical computation) • Observation section (metadata regarding the methods used to obtain the data, the spatial and temporal sampling regime, quality etc.) • Observation collection section The key concepts in

  19. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    Two hydrological events were monitored with TLRG. One was a natural flooding event in the periphery of the Okavango Delta, Botswana, and one was a forced infiltration experiment in Denmark. The natural flooding event caused a spatio-temporally distributed increase in bank storage in an alluvial aquifer. The storage change was measured using both TLRG and traditional piezometers. A groundwater model was conditioned on both the TLRG and piezometer data. Model parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both the signal and the change in water mass stored in the subsurface. Thus, no petrophysical relationship is required for coupled hydrogeophysical inversion.

  20. Theory of Compliance: Indicator Checklist Statistical Model and Instrument Based Program Monitoring Information System.

    Science.gov (United States)

    Fiene, Richard J.; Woods, Lawrence

    Two unanswered questions about child care are: (1) Does compliance with state child care regulations have a positive impact on children? and (2) Have predictors of program quality been identified? This paper explores a research study and related model that have had some success in answering these questions. Section I, a general introduction,…

  1. Simulating Fire Disturbance and Plant Mortality Using Antecedent Eco-hydrological Conditions to Inform a Physically Based Combustion Model

    Science.gov (United States)

    Atchley, A. L.; Linn, R.; Middleton, R. S.; Runde, I.; Coon, E.; Michaletz, S. T.

    2016-12-01

    Wildfire is a complex agent of change that both affects and depends on eco-hydrological systems, thereby constituting a tightly linked system of disturbances and eco-hydrological conditions. For example, structure, build-up, and moisture content of fuel are dependent on eco-hydrological regimes, which impacts fire spread and intensity. Fire behavior, on the other hand, determines the severity and extent of eco-hydrological disturbance, often resulting in a mosaic of untouched, stressed, damaged, or completely destroyed vegetation within the fire perimeter. This in turn drives new eco-hydrological system behavior. The cycles of disturbance and recovery present a complex evolving system with many unknowns, especially in the face of climate change, that has implications for fire risk, water supply, and forest composition. Physically-based numerical experiments that attempt to capture the complex linkages between eco-hydrological regimes that affect fire behavior and the eco-hydrological response to those fire disturbances help build the understanding required to project how fire disturbance and eco-hydrological conditions coevolve over time. Here we explore the use of FIRETEC—a physically-based 3D combustion model that solves conservation of mass, momentum, energy, and chemical species—to resolve fire spread over complex terrain and fuel structures. Uniquely, we couple a physically-based plant mortality model with FIRETEC and examine the resultant hydrologic impact. In this proof of concept demonstration we spatially distribute fuel structure and moisture content based on the eco-hydrological condition to use as input for FIRETEC. The fire behavior simulation then produces localized burn severity and heat injuries which are used as input to a spatially-informed plant mortality model. Ultimately we demonstrate the applicability of physically-based models to explore integrated disturbance and eco-hydrologic response to wildfire behavior and specifically map how fire

  2. Testing the Information Technology Continuance Model on a Mandatory SMS-Based Student Response System

    Science.gov (United States)

    Lin, Julian; Rivera-Sanchez, Milagros

    2012-01-01

    This paper reports a 2-month longitudinal study on a mandatory Short Message Service-based student response system that enabled students to take classroom quizzes using their mobile phones to register responses. Students' perceptions of usefulness and their attitude toward the system were measured twice: once before they used the system and again…

  3. Auto-Mapping and Configuration Method of IEC 61850 Information Model Based on OPC UA

    Directory of Open Access Journals (Sweden)

    In-Jae Shin

    2016-11-01

    Full Text Available The open-platform communication (OPC) unified architecture (UA) (IEC62541) is introduced as a key technology for realizing a variety of smart grid (SG) use cases, enabling relevant automation and control tasks. The OPC UA can expand interoperability between power systems. The top-level SG management platform needs independent middleware to transparently manage the power information technology (IT) systems, including the IEC 61850. To expand interoperability between power systems for a large number of stakeholders and various standards, this paper focuses on the IEC 61850 for the digital substation. We propose an interconnection method to integrate communication with OPC UA and to convert the OPC UA AddressSpace using the system configuration description language (SCL) of IEC 61850. We implemented the mapping process to verify the interconnection method. The proposed method can expand interoperability between power systems by integrating OPC UA across the various data structures of the smart grid.

  4. Agent Based Modelling of Communication Costs: Why Information Can Be Free

    Science.gov (United States)

    Čače, Ivana; Bryson, Joanna J.

    What purposes, other than facilitating the sharing of information, can language have served? First, it may not have evolved to serve any purpose at all. It is possible that language is just a side effect of the large human brain — a spandrel or exaptation — that only became useful later. If language is adaptive, this does not necessarily mean that it is adaptive for the purpose of communication. For example Dennett (1996) and Chomsky (1980) have stressed the utility of language in thinking. Also, there are different ways to view communication. The purpose of language according to Dunbar (1993), is to replace grooming as a social bonding process and in this way to ensure the stability of large social groups.

  5. THE MODEL OF INFORMATION AND ANALYTICAL SUPPORT OF EDUCATIONAL RESEARCH BASED ON ELECTRONIC SYSTEMS OF OPEN ACCESS

    Directory of Open Access Journals (Sweden)

    Oleg M. Spirin

    2017-06-01

    Full Text Available The article presents the experience of using electronic open access systems for information and analytical support of pedagogical research, which positively influences the quality of scientific research. A well-founded system of information and analytical support of pedagogical research based on electronic open access systems meets the scientific and pedagogical needs for the publication, dissemination and use of information resources. The use of this system will improve the quality of scientific and pedagogical research conducted at the institutions of the National Academy of Educational Sciences of Ukraine, and will effectively implement their results in the scientific and educational sphere of Ukraine. The model of information and analytical support of scientific research is substantiated and developed. Specific features of the functioning of the prototype of an electronic scientific journal on the platform of Open Journal Systems are determined. The stages of implementation of the prototype on the platform of Open Journal Systems that can be used by scientific institutions and higher educational institutions for the publication of scientific professional journals and collections are described.

  6. Spatial Interpolation of Annual Runoff in Ungauged Basins Based on the Improved Information Diffusion Model Using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mei Hong

    2017-01-01

    Full Text Available Prediction in Ungauged Basins (PUB) is an important task for water resources planning and management and remains a fundamental challenge for the hydrological community. In recent years, geostatistical methods have proven valuable for estimating hydrological variables in ungauged catchments. However, four major problems restrict the development of geostatistical methods. We established a new information diffusion model based on a genetic algorithm (GIDM) for spatial interpolation of runoff in ungauged basins. Genetic algorithms (GA) are used to generate high-quality solutions to optimization and search problems; using a GA, the optimal window width parameter can be obtained. To test our new method, seven experiments for annual runoff interpolation based on the GIDM at 17 stations on the mainstream and tributaries of the Yellow River were carried out and compared with the inverse distance weighting (IDW) method, the Cokriging (COK) method, and conventional IDMs using the same sparse observed data. All seven experiments show that the GIDM method can solve, to some extent, the four problems of previous geostatistical methods and obtains the best accuracy among the four models. The key problems of PUB research are the lack of observation data and the difficulty of information extraction, so the GIDM is a new and useful tool for solving the Prediction in Ungauged Basins (PUB) problem and improving water management.
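For context, the inverse distance weighting baseline the GIDM is compared against can be written in a few lines: runoff at an ungauged site is estimated as a distance-weighted mean of the gauged values. The station coordinates and runoff values below are made up for illustration.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighting: estimate the variable at ungauged sites as a
    distance-weighted mean of gauged observations (weights ~ 1/d**power)."""
    est = []
    for p in np.atleast_2d(xy_new):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if np.any(d == 0):                 # site coincides with a gauge
            est.append(z_obs[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        est.append(np.sum(w * z_obs) / np.sum(w))
    return np.array(est)

# Hypothetical gauged stations (x, y) and annual runoff values.
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
runoff = np.array([100.0, 200.0, 300.0])
estimate = idw(stations, runoff, [[0.5, 0.0]])  # interpolated ungauged site
```

The GIDM differs in that each observation is "diffused" over a window whose width is tuned by the genetic algorithm, rather than weighted purely by distance.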

  7. Remote information service access system based on a client-server-service model

    Science.gov (United States)

    Konrad, A.M.

    1996-08-06

    The system comprises a local host computing system and a remote host computing system connected by a network, together with three service functionalities: a human interface service, a starter service, and a desired utility service; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network). Two of the logical components are integrated into one Remote Object Client component, and that component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides the illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  8. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    Science.gov (United States)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network-certified by USARMY as the Dragon Pulse Information Management System. This network-available modeling environment is used for modeling models: models are configured using domain-relevant semantics, use network-available systems, sensors, databases and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures, together with subject-matter input. Three recent ARMY use cases are discussed: (a) an ISR SoS; (b) modeling and simulation behavior validation; (c) a networked digital library with behaviors.

  9. Modelling and Analysis of Automobile Vibration System Based on Fuzzy Theory under Different Road Excitation Information

    Directory of Open Access Journals (Sweden)

    Xue-wen Chen

    2018-01-01

    Full Text Available A fuzzy increment controller is designed for the vibration system of an automobile active suspension with seven degrees of freedom (DOF). To decrease vibration, an active control force is produced by a Proportional-Integral-Derivative (PID) controller whose parameters are adjusted by a fuzzy increment controller with self-modifying parameter functions. The fuzzy controller adopts as input variables the deviation, and its rate of change, between the body's vertical vibration velocity and the desired value at the positions of the front and rear suspension, based on 49 fuzzy control rules. Using Simulink, the fuzzy increment controller is validated under different road excitations, such as white-noise input with four-wheel correlation in the time domain, sinusoidal input, and pulse input for a C-grade road surface. The simulation results show that the proposed controller obviously reduces vehicle vibration compared to other independent control types in performance indexes such as the root mean square values of the body's vertical vibration acceleration, pitching, and rolling angular acceleration.
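The core idea (a PID controller whose gains are nudged each step by rules on the error and its rate) can be sketched on a 1-DOF mass-spring-damper instead of the 7-DOF suspension. The two-rule increment table, the plant parameters, and the initial gains below are invented stand-ins for the paper's 49-rule fuzzy base.

```python
def gain_increments(e, de):
    """Coarse stand-in for a fuzzy rule table: nudge the PID gains as a
    function of error e and error rate de. Signs follow common fuzzy-PID
    practice, not the paper's actual rule base."""
    dkp = 0.01 if abs(e) > 0.1 else -0.005
    dkd = 0.01 if abs(de) > 0.1 else -0.005
    dki = 0.002 if abs(e) < 0.1 else 0.0
    return dkp, dki, dkd

def simulate(steps=400, dt=0.01):
    """1-DOF mass-spring-damper (m=10 kg, k=1000 N/m, c=20 Ns/m) regulated to
    zero displacement by a self-tuning PID force; semi-implicit Euler."""
    kp, ki, kd = 50.0, 0.0, 5.0
    x, v, integ, prev_e = 0.1, 0.0, 0.0, -0.1   # 0.1 m initial displacement
    for _ in range(steps):
        e = 0.0 - x
        de = (e - prev_e) / dt
        integ += e * dt
        dkp, dki, dkd = gain_increments(e, de)   # fuzzy-style gain update
        kp, ki, kd = kp + dkp, ki + dki, kd + dkd
        u = kp * e + ki * integ + kd * de        # active control force
        a = (-1000.0 * x - 20.0 * v + u) / 10.0  # plant dynamics
        v += a * dt
        x += v * dt
        prev_e = e
    return x

final = simulate()  # displacement after 4 s of self-tuned control
```

The structure mirrors the abstract: a conventional PID produces the control force, while a separate increment law keeps re-tuning its three gains online.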

  10. Global Earth Observation System of Systems: Characterizing Uncertainties of Space- based Measurements and Earth System Models Informing Decision Tools

    Science.gov (United States)

    Birk, R. J.; Frederick, M.

    2006-05-01

    The Global Earth Observation System of Systems (GEOSS) framework identifies the benefits of systematically and scientifically networking the capacity of organizations and systems into solutions that benefit nine societal benefit areas. The U.S. Integrated Earth Observation System (IEOS), the U.S. contribution to the GEOSS, focuses on near-term, mid-term, and long-term opportunities to establish integrated system solutions based on capacities and capabilities of member agencies and affiliations. Scientists at NASA, NOAA, DOE, NSF and other U.S. agencies are evolving the predictive capacity of models of Earth processes based on space-based, airborne and surface-based instruments and their measurements. NASA research activities include advancing the power and accessibility of computational resources (i.e. Project Columbia) to enable robust science data analysis, modeling, and assimilation techniques to rapidly advance. The integration of the resulting observations and predictions into decision support tools requires characterization of the accuracies of a range of input measurements, including temperature and humidity profiles, wind speed, ocean height, sea surface temperature, and atmospheric constituents that are measured globally by U.S.-deployed spacecraft. These measurements are stored in many data formats on many different information systems with widely varying accessibility, and have processes whose documentation ranges from extremely detailed to very minimal. Integrated and interdisciplinary modeling (enabled by the Earth System Model Framework) enables the types of ensemble analysis that are useful for decision processes associated with energy management, public health risk assessments, and optimizing transportation safety and efficiency. Interdisciplinary approaches challenge systems integrators (both scientists and engineers) to expand beyond the traditional boundaries of particular disciplines to develop, verify and validate, and ultimately benchmark the

  11. From Information to Experience: Place-Based Augmented Reality Games as a Model for Learning in a Globally Networked Society

    Science.gov (United States)

    Squire, Kurt D.

    2010-01-01

    Background/Context: New information technologies make information available just-in-time and on demand and are reshaping how we interact with information, but schools remain in a print-based culture, and a growing number of students are disaffiliating from traditional school. New methods of instruction are needed that are suited to the digital…

  12. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. 
Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  13. Modeling Information Assurance

    National Research Council Canada - National Science Library

    Beauregard, Joseph

    2001-01-01

    ... The U.S. military controls much of the world's most sensitive information, and since it cannot sacrifice the speed at which this information is currently processed and disseminated, it must find a way...

  14. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  15. Isotope-based quantum information

    International Nuclear Information System (INIS)

    Plekhanov, Vladimir G.

    2012-01-01

    The present book provides an introduction to the main ideas and techniques of the rapidly progressing field of quantum information and quantum computation using isotope-mixed materials. It starts with an introduction to isotope physics and then describes isotope-based quantum information and quantum computation. The ability to manipulate and control electron and/or nuclear spin in semiconductor devices provides a new route to expand the capabilities of inorganic semiconductor-based electronics and to design innovative devices with potential application in quantum computing. One of the major challenges towards these objectives is to develop semiconductor-based systems and architectures in which the spatial distribution of spins and their properties can be controlled. For instance, to eliminate electron spin decoherence resulting from hyperfine interaction with the nuclear spin background, isotopically controlled (i.e., nuclear spin-depleted) devices are needed. In other emerging concepts, the control of the spatial distribution of isotopes with nuclear spins is a prerequisite to implement quantum bits (or qubits). Therefore, stable semiconductor isotopes are important elements in the development of solid-state quantum information. Not only are different algorithms of quantum computation discussed, but different models of quantum computers are also presented. With numerous illustrations, this small book is of great interest for undergraduate students taking courses in mesoscopic physics or nanoelectronics as well as quantum information, and for academic and industrial researchers working in this field.

  16. Object Modeling and Building Information Modeling

    OpenAIRE

    Auråen, Hege; Gjemdal, Hanne

    2016-01-01

    The main part of this thesis is an online course (Small Private Online Course) entitled "Introduction to Object Modeling and Building Information Modeling". This supplementary report clarifies the choices made in the process of developing the course. The course examines the basic concepts of object modeling, modeling techniques and a modeling language (UML). Further, building information modeling (BIM) is presented as a modeling process, and the object modeling concepts in the BIM softw...

  17. The Path of New Information Technology Affecting Educational Equality in the New Digital Divide--Based on Information System Success Model

    Science.gov (United States)

    Zheng, Qian; Liang, Chang-Yong

    2017-01-01

    New information technology (new IT) plays an increasingly important role in the field of education, which greatly enriches the teaching means and promotes the sharing of education resources. However, because the New Digital Divide exists, the impact of new IT on educational equality has yet to be discussed. Based on Information System Success…

  18. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison regarding the efficiency of stocks of listed companies in Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios, to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks, as well as rank them completely within the construction and materials industry. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) is augmented by the Balance Index. The data cover the year 2015, and the population of the research includes the companies accepted in the stock market in the construction and materials industry (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.

  19. Modeling Gross Primary Production of Agro-Forestry Ecosystems by Assimilation of Satellite-Derived Information in a Process-Based Model

    Directory of Open Access Journals (Sweden)

    Guenther Seufert

    2009-02-01

    Full Text Available In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We firstly present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of (i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and (ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. The results showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and provided a better description of the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at the regional scale.

  20. Methodology base and problems of information technologies

    Science.gov (United States)

    Sovetov, Boris Y.

    1993-04-01

    The qualitative forming and effective use of an information product is the aim of any information technology. Information technology as a system provides both computer-aided problem solving for the user and automation of information processes, which in turn support the problem solving process. That is why the methods of information technology are the methods for data transmission, processing, and storage. The tools of methodology, mathematics, algorithms, hardware, software, and information are the tools of information technology. We propose to distinguish between global, basic, and applied information technologies, depending on the significance of the information product and on the characteristics of the models, methods, and tools used. The global technology is aimed at using information resources in the social sphere as a whole. The basic technology is oriented towards the application sphere (industry, scientific research, design, training). Transition towards new information technology should rest on a business area model merged with a formal model of problem solving: computing organization based on the data concept, and development of the user's intellectual interface.

  1. Information acquisition during online decision-making : A model-based exploration using eye-tracking data

    NARCIS (Netherlands)

    Shi, W.; Wedel, M.; Pieters, R.

    2013-01-01

    We propose a model of eye-tracking data to understand information acquisition patterns on attribute-by-product matrices, which are common in online choice environments such as comparison websites. The objective is to investigate how consumers gather product and attribute information from moment to

  2. Information risk and security modeling

    Science.gov (United States)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and of architectural guidelines such as ISO7498-2, will be explained. Business process level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics the research presentation will explore the appropriate usage of these standards. The paper will discuss standards approaches to conducting the risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all the mentioned standards. The proposed approach's 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and of a defined risk and security space for modeling and measuring.

  3. Gender-Based Violence and Armed Conflict: A Community-Informed Socioecological Conceptual Model From Northeastern Uganda

    Science.gov (United States)

    Mootz, Jennifer J.; Stabb, Sally D.; Mollen, Debra

    2017-01-01

    The high prevalence of gender-based violence (GBV) in armed conflict has been documented in various national contexts, but less is known about the complex pathways that constitute the relation between the two. Employing a community-based collaborative approach, we constructed a community-informed socioecological conceptual model from a feminist perspective, detailing how armed conflict relates to GBV in a conflict-affected rural community in Northeastern Uganda. The research questions were as follows: (1) How does the community conceptualize GBV? and (2) How does armed conflict relate to GBV? Nine focus group discussions divided by gender, age, and profession and six key informant interviews were conducted. Participants’ ages ranged from 9 to 80 years (n = 34 girls/women, n = 43 boys/men). Grounded theory was used in analysis. Participants conceptualized eight forms of GBV and 22 interactive variables that contributed to it. Armed conflict affected physical violence/quarreling, sexual violence, early marriage, and land grabbing via a direct pathway and four indirect pathways initiated through looting of resources, militarization of the community, death of a parent(s) or husband, and sexual violence. The findings suggest that community, organizational, and policy-level interventions, which include attention to intersecting vulnerabilities for exposure to GBV in conflict-affected settings, should be prioritized. While tertiary psychological interventions with women and girls affected by GBV in these areas should not be eliminated, we suggest that policy makers and members of community and organizational efforts make systemic and structural changes. Online slides for instructors who want to use this article for teaching are available on PWQ’s website at http://journals.sagepub.com/page/pwq/suppl/index PMID:29563663

  4. Green Template for Life Cycle Assessment of Buildings Based on Building Information Modeling: Focus on Embodied Environmental Impact

    Directory of Open Access Journals (Sweden)

    Sungwoo Lee

    2015-12-01

    Full Text Available The increased popularity of building information modeling (BIM) for application in the construction of eco-friendly green buildings has given rise to techniques for evaluating green buildings constructed using BIM features. Existing BIM-based green building evaluation techniques mostly rely on externally provided evaluation tools, which pose problems associated with interoperability, including a lack of data compatibility and the amount of time required for format conversion. To overcome these problems, this study sets out to develop a template (the “green template”) for evaluating the embodied environmental impact of using a BIM design tool, as part of BIM-based building life-cycle assessment (LCA) technology development. Firstly, the BIM level of detail (LOD) was determined for evaluating the embodied environmental impact, and a database of the impact factors of the embodied environmental impact of the major building materials was constructed, thereby adopting an LCA-based approach. Libraries of major building elements were then developed using the established databases and a compiled evaluation table of the embodied environmental impact of the building materials. Finally, the green template was developed as an embodied environmental impact evaluation tool and a case study was performed to test its applicability. The results of the green template-based embodied environmental impact evaluation of a test building were validated against those of its actual quantity takeoff (2D takeoff), and its reliability was confirmed by an effective error rate of ≤5%. This study aims to develop a system for assessing the impact of the substances discharged from the concrete production process on six environmental impact categories, i.e., global warming (GWP), acidification (AP), eutrophication (EP), abiotic depletion (ADP), ozone depletion (ODP), and photochemical oxidant creation (POCP), using the life cycle assessment (LCA) method. To achieve this, we proposed an LCA method

  5. Modeling spatiotemporal information generation

    NARCIS (Netherlands)

    Scheider, Simon; Gräler, Benedikt; Pebesma, Edzer; Stasch, Christoph

    2016-01-01

    Maintaining knowledge about the provenance of datasets, that is, about how they were obtained, is crucial for their further use. Contrary to what the overused metaphors of ‘data mining’ and ‘big data’ are implying, it is hardly possible to use data in a meaningful way if information about sources

  6. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems... These inputs are disturbance inputs, reference inputs and auxiliary inputs. The diagnosis of the system is derived by an evaluation of the signature from the inputs in the residual outputs. The changes of the signatures from the external inputs are used for detection and isolation of the parametric faults...

  7. Investigating Information-Seeking Behavior of Faculty Members Based on Wilson’s Model: Case Study of PNU University, Mazandaran, Iran

    Science.gov (United States)

    Azadeh, Fereydoon; Ghasemi, Shahrzad

    2016-01-01

    The present research aims to study the information-seeking behavior of faculty members of Payame Noor University (PNU) in the Mazandaran province of Iran by using Wilson’s model of information-seeking behavior. This is a survey study. Participants were 97 PNU faculty members in Mazandaran province. An information-seeking behavior inventory was employed to gather research data; it had 24 items based on a 5-point Likert scale. Collected data were analyzed in SPSS software. Results showed that the most important goal of faculty members was publishing a scientific paper, and their least important goal was updating technical information. We also found that they mostly use internet-based resources to meet their information needs. Accordingly, 57.7% of them find information resources via online search engines (e.g. Google, Yahoo). We also concluded that there was a significant relationship between their English language proficiency, academic rank, and work experience and their information-seeking behavior. PMID:27157151

  8. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    Science.gov (United States)

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on the large standard datasets provided by the BioASQ challenge show that the proposed method achieves good performance, significantly outperforming the current state-of-the-art methods by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
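The BM25 scoring step described above can be sketched in a few lines. The passages here are plain token lists standing in for the stemmed words and UMLS concept identifiers the paper uses as features; the toy data and the parameter defaults (k1 = 1.2, b = 0.75) are illustrative assumptions, not the authors' tuned settings.

```python
import math
from collections import Counter

def bm25_scores(query_terms, passages, k1=1.2, b=0.75):
    """Score candidate passages (token lists) against a question with Okapi BM25."""
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N
    # document frequency of each query term across the candidate passages
    df = {t: sum(1 for p in passages if t in p) for t in query_terms}
    scores = []
    for p in passages:
        tf = Counter(p)
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # term unseen in the collection contributes nothing
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(p) / avgdl))
        scores.append(s)
    return scores

# Hypothetical toy passages and a one-term question
passages = [["heart", "attack", "risk"],
            ["diabetes", "treatment", "insulin"],
            ["heart", "disease", "heart"]]
scores = bm25_scores(["heart"], passages)
```

The passage with the higher term frequency ranks first; keeping the N top-ranked scores mirrors the final step of the method.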

  9. Combining Livestock Production Information in a Process-Based Vegetation Model to Reconstruct the History of Grassland Management

    Science.gov (United States)

    Chang, Jinfeng; Ciais, Philippe; Herrero, Mario; Havlik, Petr; Campioli, Matteo; Zhang, Xianzhou; Bai, Yongfei; Viovy, Nicolas; Joiner, Joanna; Wang, Xuhui; hide

    2016-01-01

    Grassland management type (grazed or mown) and intensity (intensive or extensive) play a crucial role in the greenhouse gas balance and surface energy budget of this biome, both at field scale and at large spatial scale. However, global gridded historical information on grassland management intensity is not available. Combining modelled grass-biomass productivity with statistics of the grass-biomass demand by livestock, we reconstruct gridded maps of grassland management intensity from 1901 to 2012. These maps include the minimum area of managed vs. maximum area of unmanaged grasslands and the fraction of mown vs. grazed area at a resolution of 0.5deg by 0.5deg. The grass-biomass demand is derived from a livestock dataset for 2000, extended to cover the period 1901-2012. The grass-biomass supply (i.e. forage grass from mown grassland and biomass grazed) is simulated by the process-based model ORCHIDEE-GM driven by historical climate change, rising CO2 concentration, and changes in nitrogen fertilization. The global area of managed grassland obtained in this study increases from 6.1 x 10(exp 6) km(exp 2) in 1901 to 12.3 x 10(exp 6) km(exp 2) in 2000, although the expansion pathway varies between different regions. ORCHIDEE-GM also simulated an increase in global mean productivity and herbage-use efficiency over managed grassland during the 20th century, indicating a general intensification of grassland management at global scale but with regional differences. The gridded grassland management intensity maps are model dependent because they depend on modelled productivity. Thus specific attention was given to the evaluation of modelled productivity against a series of observations, from site-level net primary productivity (NPP) measurements to two global satellite products of gross primary productivity (GPP) (MODIS-GPP and SIF data). Generally, ORCHIDEE-GM captures the spatial pattern, seasonal cycle, and inter-annual variability of grassland productivity at global

  10. Comparison of representational spaces based on structural information in the development of QSAR models for benzylamino enaminone derivatives.

    Science.gov (United States)

    García, G Cerruela; Palacios-Bejarano, B; Ruiz, I Luque; Gómez-Nieto, M Á

    2012-10-01

    In this paper we study different representational spaces of molecule data sets based on 2D representation models for the building of QSAR models for the prediction of the activity of 37 benzylamino enaminone derivatives. Approximations based on classical similarity calculated from fingerprint representation of molecules and isomorphism obtained using sub-graph matching algorithms are compared to fragmentation-based approximations using partial least squares and genetic algorithms. The influence of the anchored position of a non-common moiety and the kind of substituents in the common core structure of the data set are analysed, demonstrating the anomalous behaviour of some molecules and therefore the difficulty in building prediction models. These problems are solved by considering approximate similarity models. These models tune the prediction equations based on the size of the substituent and the anchored position, by adjusting the contribution of each substituent in similarity measurements calculated between the molecule data sets.
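The "classical similarity calculated from fingerprint representation of molecules" mentioned above is commonly the Tanimoto coefficient over fingerprint bit sets. A minimal sketch follows; the exact fingerprint scheme used in the paper is not reproduced here, so the bit identifiers are hypothetical.

```python
def tanimoto(fp_a, fp_b):
    """Classical 2D similarity between two molecules represented as
    fingerprint bit sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(fp_a), set(fp_b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints are identical
    return len(a & b) / len(a | b)

# Hypothetical fingerprints: bits 2 and 3 shared out of 4 set overall
similarity = tanimoto({1, 2, 3}, {2, 3, 4})
```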

  11. Using Models and Data to Learn: The Need for a Perspective based in Characterization of Information (John Dalton Medal Lecture)

    Science.gov (United States)

    Gupta, Hoshin

    2014-05-01

    The hydrological community has recently engaged in a discussion regarding future directions of Hydrology as an Earth Science. In this context, I will comment on the role of "dynamical systems modeling" (and more generally the systems-theoretic perspective) as a vehicle for informing the Discovery and Learning Process. I propose that significant advances can occur through a better understanding of what is meant by "Information", and by focusing on ways to characterize and quantify the nature, quality and quantity of information in models and data, thereby establishing a more robust and insightful (less ad-hoc) basis for learning through the model-data juxtaposition. While the mathematics of Information Theory has much to offer, it will need to be augmented and extended by bringing to bear contextual perspectives from both dynamical systems modeling and the Hydrological Sciences. A natural consequence will be to re-emphasize the a priori role of Process Modeling (particularly specification of System Architecture) over that of the selection of System Parameterizations, thereby shifting the emphasis to the more creative inductive aspects of scientific investigation.

  12. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  13. A Framework for Effective Assessment of Model-based Projections of Biodiversity to Inform the Next Generation of Global Conservation Targets

    Science.gov (United States)

    Myers, B.; Beard, T. D.; Weiskopf, S. R.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.; Casey, K.; Lenton, T. M.; Leidner, A. K.; Ruane, A. C.; Ferrier, S.; Serbin, S.; Matsuda, H.; Shiklomanov, A. N.; Rosa, I.

    2017-12-01

    Biodiversity and ecosystems services underpin political targets for the conservation of biodiversity; however, previous incarnations of these biodiversity-related targets have not relied on integrated model based projections of possible outcomes based on climate and land use change. Although a few global biodiversity models are available, most biodiversity models lie along a continuum of geography and components of biodiversity. Model-based projections of the future of global biodiversity are critical to support policymakers in the development of informed global conservation targets, but the scientific community lacks a clear strategy for integrating diverse data streams in developing, and evaluating the performance of, such biodiversity models. Therefore, in this paper, we propose a framework for ongoing testing and refinement of model-based projections of biodiversity trends and change, by linking a broad variety of biodiversity models with data streams generated by advances in remote sensing, coupled with new and emerging in-situ observation technologies to inform development of essential biodiversity variables, future global biodiversity targets, and indicators. Our two main objectives are to (1) develop a framework for model testing and refining projections of a broad range of biodiversity models, focusing on global models, through the integration of diverse data streams and (2) identify the realistic outputs that can be developed and determine coupled approaches using remote sensing and new and emerging in-situ observations (e.g., metagenomics) to better inform the next generation of global biodiversity targets.

  14. Spiral model pilot project information model

    Science.gov (United States)

    1991-01-01

    The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.

  15. The Arabidopsis Information Resource (TAIR): a comprehensive database and web-based information retrieval, analysis, and visualization system for a model plant

    Science.gov (United States)

    Huala, Eva; Dickerman, Allan W.; Garcia-Hernandez, Margarita; Weems, Danforth; Reiser, Leonore; LaFond, Frank; Hanley, David; Kiphart, Donald; Zhuang, Mingzhe; Huang, Wen; Mueller, Lukas A.; Bhattacharyya, Debika; Bhaya, Devaki; Sobral, Bruno W.; Beavis, William; Meinke, David W.; Town, Christopher D.; Somerville, Chris; Rhee, Seung Yon

    2001-01-01

    Arabidopsis thaliana, a small annual plant belonging to the mustard family, is the subject of study by an estimated 7000 researchers around the world. In addition to the large body of genetic, physiological and biochemical data gathered for this plant, it will be the first higher plant genome to be completely sequenced, with completion expected at the end of the year 2000. The sequencing effort has been coordinated by an international collaboration, the Arabidopsis Genome Initiative (AGI). The rationale for intensive investigation of Arabidopsis is that it is an excellent model for higher plants. In order to maximize use of the knowledge gained about this plant, there is a need for a comprehensive database and information retrieval and analysis system that will provide user-friendly access to Arabidopsis information. This paper describes the initial steps we have taken toward realizing these goals in a project called The Arabidopsis Information Resource (TAIR) (www.arabidopsis.org). PMID:11125061

  16. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications

  17. Closed-loop EMG-informed model-based analysis of human musculoskeletal mechanics on rough terrains

    NARCIS (Netherlands)

    Varotto, C.; Sawacha, Z.; Gizzi, L; Farina, D.; Sartori, M.

    2017-01-01

    This work aims at estimating the musculoskeletal forces acting in the human lower extremity during locomotion on rough terrains. We employ computational models of the human neuro-musculoskeletal system that are informed by multi-modal movement data including foot-ground reaction forces, 3D marker

  18. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information

    NARCIS (Netherlands)

    L.M. Lamers (Leida)

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness

  19. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms
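To make the idea concrete, here is a hedged sketch of mutual-information-based descriptor ranking with a simple collinearity filter. It is not the authors' MIMRCV algorithm, only an illustration of the two ingredients it combines: scoring candidate descriptors by their MI with the property, and rejecting candidates that share too much information with an already selected descriptor. The descriptor names, discretised data, and the 0.5 nat redundancy cap are all invented for the example.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI (in nats) between two discretised variables, from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_features(features, target, k, redundancy_cap=0.5):
    """Greedy MI ranking; skip candidates sharing too much information
    with an already selected descriptor (a stand-in for collinearity)."""
    ranked = sorted(features, reverse=True,
                    key=lambda f: mutual_information(features[f], target))
    chosen = []
    for f in ranked:
        if all(mutual_information(features[f], features[g]) <= redundancy_cap
               for g in chosen):
            chosen.append(f)
        if len(chosen) == k:
            break
    return chosen

# Toy discretised data: "roe_copy" is collinear with "roe" and gets replaced
target = [0, 1, 0, 1, 0, 1, 0, 1]
features = {"roe": list(target),
            "roe_copy": list(target),
            "noise": [0, 0, 1, 1, 0, 0, 1, 1]}
chosen = select_features(features, target, k=2)
```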

  20. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Korea, Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  1. Context based multimedia information retrieval

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti

    The large amounts of digital media becoming available require that new approaches are developed for retrieving, navigating and recommending the data to users in a way that reflects how we semantically perceive the content. The thesis investigates ways to retrieve and present content for users... topics from a large collection of the transcribed speech to improve retrieval of spoken documents. The context modelling is done using a variant of probabilistic latent semantic analysis (PLSA), to extract properties of the textual sources that reflect how humans perceive context. We perform PLSA... of Wikipedia, as well as text-based semantic similarity. The final aspect investigated is how to include some of the structured data available in Wikipedia to include temporal information. We show that a multiway extension of PLSA makes it possible to extract temporally meaningful topics, better than using...
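As a rough illustration of the PLSA machinery the thesis builds on (not its multiway extension), the EM iteration for factorising P(w|d) ≈ Σ_z P(w|z) P(z|d) can be sketched as follows; the term-document counts below are toy data.

```python
import random

def plsa(term_doc, n_topics, n_iter=50, seed=0):
    """EM for PLSA. term_doc[d][w] holds the count of word w in document d.
    Returns per-topic word distributions P(w|z) and per-document topic
    distributions P(z|d)."""
    rng = random.Random(seed)
    D, W = len(term_doc), len(term_doc[0])
    normalise = lambda v: [x / sum(v) for x in v]
    p_w_z = [normalise([rng.random() for _ in range(W)]) for _ in range(n_topics)]
    p_z_d = [normalise([rng.random() for _ in range(n_topics)]) for _ in range(D)]
    for _ in range(n_iter):
        new_wz = [[1e-12] * W for _ in range(n_topics)]
        new_zd = [[1e-12] * n_topics for _ in range(D)]
        for d in range(D):
            for w in range(W):
                if term_doc[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w)
                post = normalise([p_w_z[z][w] * p_z_d[d][z] for z in range(n_topics)])
                # M-step accumulation, weighted by the observed count
                for z in range(n_topics):
                    new_wz[z][w] += term_doc[d][w] * post[z]
                    new_zd[d][z] += term_doc[d][w] * post[z]
        p_w_z = [normalise(row) for row in new_wz]
        p_z_d = [normalise(row) for row in new_zd]
    return p_w_z, p_z_d

# Toy corpus: two word blocks that should separate into two topics
term_doc = [[3, 2, 0, 0], [2, 3, 0, 0], [0, 0, 3, 2], [0, 0, 2, 3]]
p_w_z, p_z_d = plsa(term_doc, 2)
```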

  2. Impacts of Irrigation and Climate Change on Water Security: Using Stakeholder Engagement to Inform a Process-based Crop Model

    Science.gov (United States)

    Leonard, A.; Flores, A. N.; Han, B.; Som Castellano, R.; Steimke, A.

    2016-12-01

    Irrigation is an essential component for agricultural production in arid and semi-arid regions, accounting for a majority of global freshwater withdrawals used for human consumption. Since climate change affects both the spatiotemporal demand and availability of water in irrigated areas, agricultural productivity and water efficiency depend critically on how producers adapt and respond to climate change. It is necessary, therefore, to understand the coevolution and feedbacks between humans and agricultural systems. Integration of social and hydrologic processes can be achieved by active engagement with local stakeholders and applying their expertise to models of coupled human-environment systems. Here, we use a process based crop simulation model (EPIC) informed by stakeholder engagement to determine how both farm management and climate change influence regional agricultural water use and production in the Lower Boise River Basin (LBRB) of southwest Idaho. Specifically, we investigate how a shift from flood to sprinkler fed irrigation would impact a watershed's overall agricultural water use under RCP 4.5 and RCP 8.5 climate scenarios. The LBRB comprises about 3500 km2, of which 20% is dedicated to irrigated crops and another 40% to grass/pasture grazing land. Via interviews of stakeholders in the LBRB, we have determined that approximately 70% of irrigated lands in the region are flood irrigated. We model four common crops produced in the LBRB (alfalfa, corn, winter wheat, and sugarbeets) to investigate both hydrologic and agricultural impacts of irrigation and climatic drivers. Factors influencing farmers' decision to switch from flood to sprinkler irrigation include potential economic benefits, external financial incentives, and providing a buffer against future water shortages. These two irrigation practices are associated with significantly different surface water and energy budgets, and large-scale shifts in practice could substantially impact regional

  3. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
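The covariance-fitting idea behind the proposed estimator can be sketched generically. The sketch below fits a hypothetical one-factor model by minimizing the squared discrepancy between an observed and a model-implied covariance matrix (unweighted least squares, a special case of the weighted estimator); it is not the diffIRT implementation, and the model and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-factor model: Sigma(theta) = l l' + diag(psi),
# with loadings l and unique variances psi (6 free parameters).
def implied_cov(theta):
    l, psi = theta[:3], theta[3:]
    return np.outer(l, l) + np.diag(psi)

# "Observed" covariance built from known population values, so the
# estimator's recovery of them can be checked.
true_theta = np.array([0.8, 0.7, 0.6, 0.4, 0.5, 0.6])
S = implied_cov(true_theta)

# Least-squares discrepancy between observed and implied covariance.
def discrepancy(theta):
    residual = S - implied_cov(theta)
    return np.sum(residual ** 2)

res = minimize(discrepancy, x0=np.full(6, 0.5),
               method="L-BFGS-B", bounds=[(0.01, 2.0)] * 6)
```

With three indicators the model is just-identified, so the discrepancy is driven to essentially zero and the population values are recovered (up to the sign of the loadings, fixed here by the positivity bound).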

  4. Information Theory: a Multifaceted Model of Information

    Directory of Open Access Journals (Sweden)

    Mark Burgin

    2003-06-01

    A contradictory and paradoxical situation that currently exists in information studies can be improved by the introduction of a new information approach, which is called the general theory of information. The main achievement of the general theory of information is the explication of a relevant and adequate definition of information. This theory is built as a system of two classes of principles (ontological and sociological) and their consequences. Axiological principles, which explain how to measure and evaluate information and information processes, are presented in the second section of this paper. These principles systematize and unify different approaches, existing as well as possible, to the construction and utilization of information measures. Examples of such measures are given by Shannon's quantity of information, the algorithmic quantity of information, and the volume of information. It is demonstrated that all other known directions of information theory may be treated inside the general theory of information as its particular cases.

  5. An evaluation of the coping patterns of rape victims: integration with a schema-based information-processing model.

    Science.gov (United States)

    Littleton, Heather

    2007-08-01

    The current study sought to provide an expansion of Resick and Schnicke's information-processing model of interpersonal violence response. Their model posits that interpersonal violence threatens victims' schematic beliefs and that victims can resolve this threat through assimilation, accommodation, or overaccommodation. In addition, it is hypothesized that how victims resolve schematic threat affects their coping strategies. To test this hypothesis, a cluster analysis of rape victims' coping patterns was conducted. Victims' coping patterns were related to distress, self-worth, and rape label in ways consistent with predictions. Thus, future research should focus on the implications of how victims integrate trauma with schemas.

  6. Integrated modelling of module behavior and energy aspects in mechatronics. Energy optimization of production facilities based on model information; Modellintegration von Verhaltens- und energetischen Aspekten fuer mechatronische Module. Energieoptimierung von Produktionsanlagen auf Grundlage von Modellinformationen

    Energy Technology Data Exchange (ETDEWEB)

    Schuetz, Daniel; Vogel-Heuser, Birgit [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Informationstechnik im Maschinenwesen

    2011-01-15

    In this paper, a modelling approach is presented that merges the operation characteristics and the energy aspects of automation modules into one model. A characteristic of this approach is the state-based behavior model. An example is used to demonstrate how the information in the model can be used for energy-optimized operation controlled by software agents. (orig.)

  7. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue items was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  8. Benefit of Modeling the Observation Error in a Data Assimilation Framework Using Vegetation Information Obtained From Passive Based Microwave Data

    Science.gov (United States)

    Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard

    2016-01-01

    A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of this crop condition assessment is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, where the observation error is modeled as a function of vegetation density. This allows for offsetting errors in the soil moisture retrievals. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
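The way a vegetation-dependent observation error downweights satellite retrievals in a 1-D Ensemble Kalman Filter can be sketched as follows. This is a generic scalar EnKF update, not the USDA system; the linear NDVI-to-error mapping and all numeric values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def obs_error_std(ndvi, base=0.02, slope=0.08):
    # Assumed linear mapping: retrieval error grows with vegetation density.
    return base + slope * ndvi

def enkf_update(ensemble, obs, err_std, rng):
    """One scalar EnKF analysis step with perturbed observations."""
    perturbed = obs + rng.normal(0.0, err_std, size=ensemble.shape)
    forecast_var = np.var(ensemble, ddof=1)
    gain = forecast_var / (forecast_var + err_std ** 2)  # Kalman gain
    return ensemble + gain * (perturbed - ensemble)

# Soil-moisture forecast ensemble (volumetric units, made up).
forecast = rng.normal(0.30, 0.05, size=100)
obs = 0.20  # satellite retrieval

analysis_sparse = enkf_update(forecast, obs, obs_error_std(0.1), rng)  # low NDVI
analysis_dense = enkf_update(forecast, obs, obs_error_std(0.8), rng)   # high NDVI
```

Under dense vegetation the inflated observation error shrinks the Kalman gain, so the analysis stays closer to the model forecast, which is the offsetting behavior the abstract describes.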

  9. Information footprint of different ecohydrological data sources: using multi-objective calibration of a physically-based model as hypothesis testing

    Science.gov (United States)

    Kuppel, S.; Soulsby, C.; Maneta, M. P.; Tetzlaff, D.

    2017-12-01

    The utility of field measurements to help constrain the model solution space and identify feasible model configurations has been an increasingly central issue in hydrological model calibration. Sufficiently informative observations are necessary to ensure that the goodness of model-data fit attained effectively translates into more physically-sound information for the internal model parameters, as a basis for model structure evaluation. Here we assess to what extent the diversity of information content can inform on the suitability of a complex, process-based ecohydrological model to simulate key water flux and storage dynamics at a long-term research catchment in the Scottish Highlands. We use the fully-distributed ecohydrological model EcH2O, calibrated against long-term datasets that encompass hydrologic and energy exchanges and ecological measurements: stream discharge, soil moisture, net radiation above canopy, and pine stand transpiration. Diverse combinations of these constraints were applied using a multi-objective cost function specifically designed to avoid compensatory effects between model-data metrics. Results revealed that calibration against virtually all datasets enabled the model to reproduce streamflow reasonably well. However, parameterizing the model to adequately capture local flux and storage dynamics, such as soil moisture or transpiration, required calibration with specific observations. This indicates that the footprint of the information contained in observations varies for each type of dataset, and that a diverse database, informing about the different compartments of the domain, is critical to test hypotheses of catchment function and identify a consistent model parameterization. The results foster confidence in using EcH2O to help understand current and future ecohydrological couplings in Northern catchments.

  10. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatiall...

  11. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning.

    Directory of Open Access Journals (Sweden)

    Abe D Hofman

    We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and an information-integration perspective. These models are applied to two different datasets: a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online dataset, in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion between sequential rule-based and information-integration perspectives of cognitive development.

  12. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes.

    Directory of Open Access Journals (Sweden)

    Olivier J N Bertrand

    2015-11-01

    Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth-structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors in its input. Even then, the algorithm led successfully to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of

  13. An enhancement of the role-based access control model to facilitate information access management in context of team collaboration and workflow.

    Science.gov (United States)

    Le, Xuan Hung; Doll, Terry; Barbosu, Monica; Luque, Amneris; Wang, Dongwen

    2012-12-01

    Although information access control models have been developed and applied to various applications, few of the previous works have addressed the issue of managing information access in the combined context of team collaboration and workflow. To facilitate this requirement, we have enhanced the Role-Based Access Control (RBAC) model through formulating universal constraints, defining bridging entities and contributing attributes, extending access permissions to include workflow contexts, synthesizing a role-based access delegation model to target specific objects, and developing domain ontologies as instantiations of the general model for particular applications. We have successfully applied this model to the New York State HIV Clinical Education Initiative (CEI) project to address the specific needs of information management in collaborative processes. An initial evaluation has shown this model achieved a high level of agreement with an existing system when applied to 4576 cases (kappa=0.801). Compared to a reference standard, the sensitivity and specificity of the enhanced RBAC model were at the level of 97-100%. These results indicate that the enhanced RBAC model can be effectively used for information access management in the context of team collaboration and workflow to coordinate clinical education programs. Future research is required to incrementally develop additional types of universal constraints, to further investigate how the workflow context and access delegation can be enriched to support the various needs of information access management in collaborative processes, and to examine the generalizability of the enhanced RBAC model for other applications in clinical education, biomedical research, and patient care. Copyright © 2012 Elsevier Inc. All rights reserved.
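A minimal role-permission check with a workflow-context constraint might look like the following sketch. The roles, permissions, and workflow states are hypothetical, not those of the CEI system, and the enhanced model's bridging entities and delegation are omitted.

```python
# Minimal RBAC sketch with a workflow-context constraint (hypothetical
# roles/permissions, not the CEI implementation).
ROLE_PERMISSIONS = {
    "trainer": {"view_case", "edit_training_plan"},
    "coordinator": {"view_case", "assign_trainer"},
}

def can_access(user_roles, permission, workflow_state, allowed_states):
    """Grant access only if some role carries the permission AND the
    workflow is in a state where that permission applies."""
    has_perm = any(permission in ROLE_PERMISSIONS.get(r, set())
                   for r in user_roles)
    return has_perm and workflow_state in allowed_states

ok = can_access(["trainer"], "edit_training_plan", "in_progress",
                {"in_progress", "review"})
denied = can_access(["trainer"], "assign_trainer", "in_progress",
                    {"in_progress"})
```

Extending permissions with a workflow-state predicate, as in the second argument pair, is one way to express the "access permissions include workflow contexts" idea.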

  14. Why Don’t More Farmers Go Organic? Using A Stakeholder-Informed Exploratory Agent-Based Model to Represent the Dynamics of Farming Practices in the Philippines

    Directory of Open Access Journals (Sweden)

    Laura Schmitt Olabisi

    2015-10-01

    In spite of a growing interest in organic agriculture, there has been relatively little research on why farmers might choose to adopt organic methods, particularly in the developing world. To address this shortcoming, we developed an exploratory agent-based model depicting Philippine smallholder farmer decisions to implement organic techniques in rice paddy systems. Our modeling exercise was novel in its combination of three characteristics: first, agent rules were based on focus group data collected in the system of study. Second, a social network structure was built into the model. Third, we utilized variance-based sensitivity analysis to quantify model outcome variability, identify influential drivers, and suggest ways in which further modeling efforts could be focused and simplified. The model results indicated an upper limit on the number of farmers adopting organic methods. The speed of information spread through the social network, crop yields, and the size of a farmer's plot were highly influential in determining agents' adoption rates. The results of this stylized model indicate that rates of organic farming adoption are highly sensitive to the yield drop after switchover to organic techniques, and to the speed of information spread through existing social networks. Further research and model development should focus on these system characteristics.
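An exploratory adoption model of this kind can be sketched in a few lines. The toy model below is not the authors' model: it uses a ring-lattice social network, a uniform tolerance for yield loss, and made-up parameters, but it reproduces the qualitative finding that adoption is sensitive to the post-switchover yield drop and to information spread.

```python
import random

random.seed(1)

def simulate_adoption(n_farmers=100, k=4, p_spread=0.3,
                      yield_drop=0.2, max_tolerance=0.35, steps=20):
    """Toy threshold/contagion model: a non-adopter converts when an
    adopting neighbour spreads the idea AND the expected yield drop is
    below the farmer's personal tolerance."""
    adopted = [False] * n_farmers
    adopted[0] = True  # a single seed adopter
    tolerance = [random.uniform(0.0, max_tolerance) for _ in range(n_farmers)]
    for _ in range(steps):
        for i in range(n_farmers):
            if adopted[i]:
                continue
            # ring-lattice social network: k nearest neighbours
            nbrs = [(i + d) % n_farmers
                    for d in range(-k // 2, k // 2 + 1) if d != 0]
            if (any(adopted[j] for j in nbrs)
                    and random.random() < p_spread
                    and yield_drop < tolerance[i]):
                adopted[i] = True
    return sum(adopted)

low_drop_adopters = simulate_adoption(yield_drop=0.05)
high_drop_adopters = simulate_adoption(yield_drop=0.30)
```

A larger yield drop leaves fewer farmers whose tolerance it falls below, capping the number of adopters even when the idea spreads freely.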

  15. The brain MRI image sparse representation based on the gradient information and the non-symmetry and anti-packing model.

    Science.gov (United States)

    Liang, Hu; Zhao, Shengrong; Dong, Xiangjun

    2017-12-01

    Sparse representation is now widely used in Magnetic Resonance Imaging (MRI). Commonly used sparse representation methods are based on symmetrical partitioning, which does not consider the complex structure of MRI images. In this paper, we propose a sparse representation method for brain MRI images, called the GNAMlet transform, which is based on gradient information and the non-symmetry and anti-packing model. The proposed method reduces the loss of detail information, improving reconstruction accuracy. Experimental results show the superiority of the proposed transform for brain MRI image representation in comparison with several state-of-the-art sparse representation methods.

  16. A geographical information system-based web model of arbovirus transmission risk in the continental United States of America

    Directory of Open Access Journals (Sweden)

    Sarah K. Konrad

    2012-11-01

    A degree-day (DD) model of West Nile virus capable of forecasting real-time transmission risk in the continental United States of America up to one week in advance on a 50-km grid is available online at https://sites.google.com/site/arbovirusmap/. Daily averages of historical risk based on temperatures for 1994-2003 are available at 10-km resolution. Transmission risk maps can be downloaded from 2010 to the present. The model can be adapted to work with any arbovirus for which the temperature-related parameters are known, e.g. Rift Valley fever virus. To more effectively assess virus establishment and transmission, the model incorporates “compound risk” maps and forecasts, which include livestock density as a parameter.
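A degree-day transmission model of this kind accumulates heat units above a development threshold. The sketch below uses threshold and degree-day values often cited for West Nile virus development in Culex mosquitoes (about 14.3 °C and 109 DD), but treat them as illustrative rather than as this model's calibrated parameters; the temperature series are invented.

```python
# Degree-day accumulation above a development threshold.
def accumulated_degree_days(daily_mean_temps_c, threshold_c=14.3):
    return sum(max(0.0, t - threshold_c) for t in daily_mean_temps_c)

def transmission_possible(degree_days, dd_required=109.0):
    """Risk flag: the extrinsic incubation period is complete once
    enough degree-days have accrued."""
    return degree_days >= dd_required

warm_spell = [22, 24, 25, 23, 26, 27, 25] * 3  # three warm weeks (deg C)
cool_spell = [12, 13, 15, 14, 13, 12, 14] * 3

dd_warm = accumulated_degree_days(warm_spell)
dd_cool = accumulated_degree_days(cool_spell)
```

Here the warm spell accrues roughly 216 DD and crosses the risk threshold, while the cool spell accrues almost none, which is the basic contrast the gridded forecasts map in space and time.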

  17. Using social network analysis and agent-based modelling to explore information flow using common operational pictures for maritime search and rescue operations.

    Science.gov (United States)

    Baber, C; Stanton, N A; Atkinson, J; McMaster, R; Houghton, R J

    2013-01-01

    The concept of common operational pictures (COPs) is explored through the application of social network analysis (SNA) and agent-based modelling to a generic search and rescue (SAR) scenario. Comparing the command structure that might arise from standard operating procedures with the sort of structure that might arise from examining information-in-common, using SNA, shows how one structure could be more amenable to 'command' with the other being more amenable to 'control' - which is potentially more suited to complex multi-agency operations. An agent-based model is developed to examine the impact of information sharing with different forms of COPs. It is shown that networks using common relevant operational pictures (which provide subsets of relevant information to groups of agents based on shared function) could result in better sharing of information and a more resilient structure than networks that use a COP. SNA and agent-based modelling are used to compare different forms of COPs for maritime SAR operations. Different forms of COP change the communications structures in the socio-technical systems in which they operate, which has implications for future design and development of a COP.

  18. A New Prediction Model for Transformer Winding Hotspot Temperature Fluctuation Based on Fuzzy Information Granulation and an Optimized Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Li Zhang

    2017-12-01

    Winding hotspot temperature is the key factor affecting the load capacity and service life of transformers. For the early detection of transformer winding hotspot temperature anomalies, a new prediction model for the hotspot temperature fluctuation range, based on fuzzy information granulation (FIG) and a chaotic particle swarm optimized wavelet neural network (CPSO-WNN), is proposed in this paper. The raw data are first processed by FIG to extract useful information from each time window. The extracted information is then used to construct a wavelet neural network (WNN) prediction model. Furthermore, the structural parameters of the WNN are optimized by chaotic particle swarm optimization (CPSO) before it is used to predict the fluctuation range of the hotspot temperature. By analyzing the experimental data with four different prediction models, we find that the proposed method is more effective and offers guidance for the operation and maintenance of transformers.
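The granulation step can be illustrated independently of the neural network. The sketch below summarises each time window by a (minimum, mean, maximum) triple, a simple stand-in for triangular fuzzy granules; the WNN and CPSO stages are omitted and the temperature values are invented.

```python
def granulate(series, window):
    """Summarise each non-overlapping window by a (low, mid, high)
    granule: here simply (min, mean, max)."""
    grains = []
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        grains.append((min(w), sum(w) / len(w), max(w)))
    return grains

# Invented hotspot-temperature samples (deg C), two windows of four.
hotspot_temps = [61.2, 63.5, 60.8, 64.1, 66.0, 65.2, 64.8, 67.3]
grains = granulate(hotspot_temps, window=4)
```

A forecaster trained on such granules predicts the next window's (low, mid, high) triple, i.e. a fluctuation range rather than a point value.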

  19. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...

  20. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information theoretic approach. However, the resulting models can be expected to have
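The recommended strategy, random pseudo-absences with information-theoretic selection, can be sketched end to end: simulate a virtual species along an environmental gradient, pair presences with background pseudo-absences, fit a logistic regression by maximum likelihood, and compute AIC. All data here are synthetic, and the single-predictor model is a simplification of the three-predictor design described in the record.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Virtual species: presence probability follows one environmental gradient.
n = 500
env = rng.uniform(-2.0, 2.0, size=n)
p_presence = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * env)))
presences = env[rng.random(n) < p_presence]

# Pseudo-absences drawn at random from the background environment.
pseudo_absences = rng.uniform(-2.0, 2.0, size=len(presences))

x = np.concatenate([presences, pseudo_absences])
y = np.concatenate([np.ones(len(presences)), np.zeros(len(pseudo_absences))])

def neg_loglik(beta):
    eta = beta[0] + beta[1] * x
    # log(1 + exp(eta)) computed stably via logaddexp
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
aic = 2 * 2 + 2 * fit.fun  # AIC = 2k - 2 log-likelihood
```

Competing predictor sets would each be fitted this way and ranked by AIC; against background pseudo-absences the estimated slope is attenuated relative to its true value, but the correct predictor still carries a clearly positive signal.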

  1. Towards socio-hydroinformatics: optimal design and integration of citizen-based information in water-system models

    Science.gov (United States)

    Solomatine, Dimitri; Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon Hurtado, Juan Carlos

    2017-04-01

    -hydroinformatics can be a potential application demonstrates that citizens not only play an active role in information capturing, evaluation and communication, but also help to improve models and thus increase flood resilience.

  2. The Information Technology Model Curriculum

    Science.gov (United States)

    Ekstrom, Joseph J.; Gorka, Sandra; Kamali, Reza; Lawson, Eydie; Lunt, Barry; Miller, Jacob; Reichgelt, Han

    2006-01-01

    The last twenty years has seen the development of demand for a new type of computing professional, which has resulted in the emergence of the academic discipline of Information Technology (IT). Numerous colleges and universities across the country and abroad have responded by developing programs without the advantage of an existing model for…

  3. Closed-loop EMG-informed model-based analysis of human musculoskeletal mechanics on rough terrains.

    Science.gov (United States)

    Varotto, C; Sawacha, Z; Gizzi, L; Farina, D; Sartori, M

    2017-07-01

    This work aims at estimating the musculoskeletal forces acting in the human lower extremity during locomotion on rough terrains. We employ computational models of the human neuro-musculoskeletal system that are informed by multi-modal movement data including foot-ground reaction forces, 3D marker trajectories and lower extremity electromyograms (EMG). Data were recorded from one healthy subject locomoting on rough grounds realized using foam rubber blocks of different heights. Block arrangement was randomized across all locomotion trials to prevent adaptation to specific ground morphology. Data were used to generate subject-specific models that matched an individual's anthropometry and force-generating capacity. EMGs enabled capturing subject- and ground-specific muscle activation patterns employed for walking on the rough grounds. This allowed integrating realistic activation patterns in the forward dynamic simulations of the musculoskeletal system. The ability to accurately predict the joint mechanical forces necessary to walk on different terrains has implications for our understanding of human movement but also for developing intuitive human-machine interfaces for wearable exoskeletons or prosthetic limbs that can seamlessly adapt to different mechanical demands, matching biological limb performance.

  4. Bayesian Modeling of Cerebral Information Processing

    OpenAIRE

    Labatut, Vincent; Pastor, Josette

    2001-01-01

    Modeling explicitly the links between cognitive functions and networks of cerebral areas is necessitated both by the understanding of the clinical outcomes of brain lesions and by the interpretation of activation data provided by functional neuroimaging techniques. At this global level of representation, the human brain can be best modeled by a probabilistic functional causal network. Our modeling approach is based on the anatomical connection pattern, the information ...

  5. Isotope-based quantum information

    CERN Document Server

    G Plekhanov, Vladimir

    2012-01-01

    The present book introduces the main ideas and techniques of the rapidly progressing field of quantum information and quantum computation using isotope-mixed materials. It starts with an introduction to isotope physics and then describes isotope-based quantum information and quantum computation. The ability to manipulate and control electron and/or nuclear spin in semiconductor devices provides a new route to expand the capabilities of inorganic semiconductor-based electronics and to design innovative devices with potential application in quantum computing. One of the major challenges towards these objectives is to develop semiconductor-based systems and architectures in which the spatial distribution of spins and their properties can be controlled. For instance, to eliminate electron spin decoherence resulting from hyperfine interaction due to nuclear spin background, isotopically controlled devices are needed (i.e., nuclear spin-depleted). In other emerging concepts, the control of the spatial...

  6. Elaboration of a velocity model of the Bogota basin (Colombia) based on microtremors arrays measurements, gravity data, and geological information

    Science.gov (United States)

    Pulido Hernandez, N. E.; Senna, S.; Garcia, H. Mr; Montejo, S.; Reyes, J. C.

    2017-12-01

    Bogotá, a megacity with almost 8 million inhabitants, is prone to significant earthquake hazard due to nearby active faults as well as subduction megathrust earthquakes. The city has been severely affected by many historical earthquakes in the last 500 years, reaching MM intensities of 8 or more in Bogotá. The city is also located on a large lacustrine basin composed of extremely soft soils which may strongly amplify the ground shaking from earthquakes. The basin extends approximately 40 km from North to South, is bounded by the Andes range to the East and South, and sharply deepens towards the West of Bogotá. The city has been the subject of multiple microzonation studies which have contributed to a good knowledge of the geotechnical zonation of the city and the tectonic setting of the region. To improve our knowledge of the seismic risk of the city, we started a 5-year project sponsored by SATREPS (a joint program of JICA and JST), entitled "Application of state of the art technologies to strengthen research and response to seismic, volcanic and tsunami events and enhance risk management in Colombia (2015-2019)". In this paper we show our results for the elaboration of a velocity model of the city. To construct a velocity model of the basin we conducted multi-sized microtremor array measurements (radii from 60 cm up to 1000 m) at 41 sites within the city. We calculated dispersion curves and inferred velocity profiles at all the sites. We combine these results with gravity measurements as well as geological information to obtain the initial velocity model of the basin. Acknowledgments: This research is funded by SATREPS (a joint program of JICA and JST).

  7. Geographical information system based model of land suitability for good yield of rice in prachuap khiri khan province, thailand

    International Nuclear Information System (INIS)

    Hussain, W.; Sohaib, O.

    2012-01-01

    Correct assessment of land is a major issue in the agricultural sector: it allows the full capability of any land to be used to raise the cultivation and production of rice. Geographical Information Systems (GIS) provide broad techniques for land suitability classification. This study is a GIS-based land suitability analysis for rice farming in Prachuap Khiri Khan Province, Thailand, where the main livelihood of people is rice farming. The analysis was conducted considering the relationship of rice production with various data layers of elevation, slope, soil pH, rainfall, fertilizer use and land use. ArcView GIS 3.2 software was used to process each layer according to the related data; ranking techniques were used to weight every coefficient, based on determining the correlation of rice production with these variables. The analysis showed a positive correlation with these variables in varying degrees, depending on the magnitude and quality of these factors. By combining the GIS data layers through weighted linear combination, various suitability classes were developed for the cultivation of rice. The integrated suitability assessment map and current land use were compared to find suitable land in Prachuap Khiri Khan Province of Thailand. As a result of this comparison, we identify land suitable for optimum utilization for rice production in Prachuap Khiri Khan Province. (author)
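A weighted linear combination over raster layers, as used here, reduces to a weighted sum of rescaled grids. The sketch below uses tiny made-up 2×2 layers and invented weights, not the layers or weights derived in the study.

```python
import numpy as np

# Criterion layers rescaled to 0-1 suitability scores (2x2 toy rasters).
rainfall = np.array([[0.9, 0.7],
                     [0.4, 0.8]])
soil_ph = np.array([[0.8, 0.6],
                    [0.5, 0.9]])
slope = np.array([[0.7, 0.9],
                  [0.3, 0.6]])

# Invented weights; in practice they come from ranking/correlation analysis.
weights = {"rainfall": 0.5, "soil_ph": 0.3, "slope": 0.2}

# Weighted linear combination: per-cell weighted sum of layer scores.
suitability = (weights["rainfall"] * rainfall
               + weights["soil_ph"] * soil_ph
               + weights["slope"] * slope)

best_cell = np.unravel_index(np.argmax(suitability), suitability.shape)
```

Each grid cell's score is the weighted sum of its layer scores; here the top-left cell scores 0.83 and ranks most suitable.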

  8. Building Information Modeling Comprehensive Overview

    Directory of Open Access Journals (Sweden)

    Sergey Kalinichuk

    2015-07-01

    Full Text Available The article provides a comprehensive review of the recently accelerated development of information technology within the project market, including industrial, engineering, procurement and construction projects. The author's aim is to cover the last decades of growth of information and communication technology in the construction industry, in particular Building Information Modeling, and to show that the problem of choosing an effective project realization method has not only retained its urgency but has become one of the major conditions for intensive technology development. All of this has created a great impulse towards shortening project durations and has led to the development of various schedule compression techniques, which have become a focus of modern construction.

  9. NASA's Carbon Cycle OSSE Initiative - Informing future space-based observing strategies through advanced modeling and data assimilation

    Science.gov (United States)

    Ott, L.; Sellers, P. J.; Schimel, D.; Moore, B., III; O'Dell, C.; Crowell, S.; Kawa, S. R.; Pawson, S.; Chatterjee, A.; Baker, D. F.; Schuh, A. E.

    2017-12-01

    Satellite observations of carbon dioxide (CO2) and methane (CH4) are critically needed to improve understanding of the contemporary carbon budget and carbon-climate feedbacks. Though current carbon observing satellites have provided valuable data in regions not covered by surface in situ measurements, limited sampling of key regions and small but spatially coherent biases have limited the ability to estimate fluxes at the time and space scales needed for improved process-level understanding and informed decision-making. Next generation satellites will improve coverage in data-sparse regions, either through use of active remote sensing, a geostationary vantage point, or increased swath width, but all techniques have limitations. The relative strengths and weaknesses of these approaches and their synergism have not previously been examined. To address these needs, a significant subset of the US carbon modeling community has come together with support from NASA to conduct a series of coordinated observing system simulation experiments (OSSEs), with close collaboration in framing the experiments and in analyzing the results. Here, we report on the initial phase of this initiative, which focused on creating realistic, physically consistent synthetic CO2 and CH4 observational datasets for use in inversion and signal detection experiments. These datasets have been created using NASA's Goddard Earth Observing System Model (GEOS) to represent the current state of atmospheric carbon as well as best available estimates of expected flux changes. Scenarios represented include changes in urban emissions, release of permafrost soil carbon, changes in carbon uptake in tropical and mid-latitude forests, changes in the Southern Ocean sink, and changes in both anthropogenic and natural methane emissions. This GEOS carbon 'nature run' was sampled by instrument simulators representing the most prominent observing strategies with a focus on consistently representing the impacts of

  10. Geographic information systems-based expert system modelling for shoreline sensitivity to oil spill disaster in Rivers State, Nigeria

    Directory of Open Access Journals (Sweden)

    Olanrewaju Lawal

    2017-07-01

    Full Text Available In the absence of adequate and appropriate action, hazards often result in disaster. Oil spills in any environment are very hazardous; thus, oil spill contingency planning, supported by Environmental Sensitivity Index (ESI) mapping, is pertinent. However, a significant data gap exists across many low- and middle-income countries in the area of environmental monitoring. This study developed a geographic information system (GIS)-based expert system (ES) for shoreline sensitivity to oiling, focused on the biophysical attributes of the shoreline, with Rivers State as a case study. Data on elevation, soil, relative wave exposure and satellite imagery were collated and used for the development of ES decision rules within GIS. Results show that about 70% of the shoreline is lined with swamp forest/mangroves/nipa palm, and 97% has silt and clay as the dominant sediment type. From the ES, six ranks were identified; 61% of the shoreline has a sensitivity rank of 9 and 19% a rank of 3. A total of 568 km of the 728 km shoreline is highly sensitive (ranks 7-10). There is a clear indication that the study area is a complex mixture of environments sensitive to oil spills. A GIS-based ES with classification rules for shoreline sensitivity represents a rapid and flexible framework for automatic ranking of shoreline sensitivity to oiling. It is expected that this approach will kick-start sensitivity index mapping that is comprehensive and openly available to support disaster risk management around the oil-producing regions of the country.
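
An expert-system rule base of this kind can be sketched as a small decision function; the cover types, thresholds and ESI-style ranks below are hypothetical illustrations, not the study's calibrated rules:

```python
# Illustrative decision rules for ESI-style shoreline sensitivity ranking
# (1 = least sensitive, 10 = most sensitive). The attribute values and
# rank assignments are hypothetical, not the study's rule base.
def shoreline_rank(cover, sediment, wave_exposure):
    if cover in ("mangrove", "swamp forest", "nipa palm"):
        return 10 if wave_exposure == "sheltered" else 9
    if sediment == "silt_clay":
        return 9 if wave_exposure == "sheltered" else 7
    if sediment == "sand":
        return 3 if wave_exposure == "exposed" else 5
    return 1  # e.g. an exposed rocky shore

# Each segment carries the biophysical attributes derived from the GIS layers.
segments = [
    ("mangrove", "silt_clay", "sheltered"),
    ("grass", "sand", "exposed"),
]
ranks = [shoreline_rank(*s) for s in segments]
print(ranks)  # [10, 3]
```

In a GIS the same rules would be applied per shoreline segment, producing the ranked sensitivity map automatically.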

  11. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    Science.gov (United States)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

    Accurate nitrate load predictions can improve water-quality decision management for watersheds, which affects the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were used for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection for the artificial intelligence models (a Feed Forward Neural Network, FFNN, and a least squares support vector machine). In the second scenario, to consider the seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI for the proper choice of cluster agents to be imposed into the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only considers the prediction of the outlet nitrate but also covers predictions of the interior sub-basins' nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions over the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider the seasonality-based characteristics of the streamflow-nitrate process can enhance the efficiency of nitrate load predictions.
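
The MI-based input selection step can be illustrated with a simple histogram estimator; the synthetic 'flow' and 'noise' candidates below are stand-ins for real sub-basin series, not the study's data:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information (nats) of two series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic candidates: 'flow' drives the nitrate target, 'noise' does not.
rng = np.random.default_rng(0)
flow = rng.normal(size=2000)
noise = rng.normal(size=2000)
nitrate = 0.8 * flow + 0.2 * rng.normal(size=2000)

# Rank candidate inputs by MI with the target and keep the strongest.
scores = {"flow": mutual_information(flow, nitrate),
          "noise": mutual_information(noise, nitrate)}
best = max(scores, key=scores.get)
print(best)  # 'flow'
```

The same ranking would be applied to wavelet sub-series or SOM cluster agents to pick dominant model inputs.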

  12. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...

  13. An Agent-Based Model of Private Woodland Owner Management Behavior Using Social Interactions, Information Flow, and Peer-To-Peer Networks.

    Science.gov (United States)

    Huff, Emily Silver; Leahy, Jessica E; Hiebeler, David; Weiskittel, Aaron R; Noblet, Caroline L

    2015-01-01

    Privately owned woodlands are an important source of timber and ecosystem services in North America and worldwide. Impacts of management on these ecosystems and timber supply from these woodlands are difficult to estimate because complex behavioral theory informs the owner's management decisions. The decision-making environment consists of exogenous market factors, internal cognitive processes, and social interactions with fellow landowners, foresters, and other rural community members. This study seeks to understand how social interactions, information flow, and peer-to-peer networks influence timber harvesting behavior using an agent-based model. This theoretical model includes forested polygons in various states of 'harvest readiness' and three types of agents: forest landowners, foresters, and peer leaders (individuals trained in conservation who use peer-to-peer networking). Agent rules, interactions, and characteristics were parameterized with values from existing literature and an empirical survey of forest landowner attitudes, intentions, and demographics. The model demonstrates that as trust in foresters and peer leaders increases, the percentage of the forest that is harvested sustainably increases. Furthermore, peer leaders can serve to increase landowner trust in foresters. Model output and equations will inform forest policy and extension/outreach efforts. The model also serves as an important testing ground for new theories of landowner decision making and behavior.

  14. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data of corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and a comparison of SIMPLISMA-PDS-KPLS and KS-PDS-KPLS is given by discussing the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentration (y). It can also be used for simultaneous outlier elimination through validation of the calibration. The running time statistics show that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.

  15. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-09-01

    Full Text Available To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) into the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy for the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model.
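
Decision-level fusion of basic probability assignments is commonly done with Dempster's rule of combination; the sketch below, with made-up masses from two hypothetical feature sources, shows only the mechanics of the rule, not the paper's dynamic BPA weighting:

```python
from itertools import product

# Frame of discernment: the driver is 'fatigued' or 'alert'. Each BPA maps
# hypothesis sets to masses; the values below are illustrative only.
m_eye   = {frozenset({"fatigued"}): 0.6, frozenset({"alert"}): 0.3,
           frozenset({"fatigued", "alert"}): 0.1}
m_wheel = {frozenset({"fatigued"}): 0.5, frozenset({"alert"}): 0.4,
           frozenset({"fatigued", "alert"}): 0.1}

def dempster(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalization constant
    return {h: v / k for h, v in combined.items()}

fused = dempster(m_eye, m_wheel)
print(round(fused[frozenset({"fatigued"})], 3))  # 0.672
```

The paper's correction strategy addresses exactly the case where the conflict term grows large under external disturbances.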

  16. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

    Full Text Available A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined by a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter, the mean wave period derived from radar altimetry, has proven to improve the model performance in explaining some of the SSB variability, especially in swell ocean regions with medium-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and the established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with the well-settled SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model with a small training dataset, a useful feature for forthcoming missions.

  17. Information Filtering Based on Users' Negative Opinions

    Science.gov (United States)

    Guo, Qiang; Li, Yang; Liu, Jian-Guo

    2013-05-01

    The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], where it achieves high diversity but low accuracy. The classical HC model predicts users' potentially interesting objects based on the objects they liked, regardless of their negative opinions. Using users' rating scores, we present an improved user-based HC (UHC) information model that takes into account users' positive and negative opinions. First, the objects rated by users are divided into positive and negative categories; then the predicted interesting and disliked object lists are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. By implementing the new model with nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that the new model, by considering negative opinions, greatly enhances accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, the negative opinions contain much more information than the positive ones; negative opinions are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
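
A two-step heat conduction pass over a user-object matrix, with a separate dislike channel used as a filter, can be sketched as follows; the toy matrices and the filtering rule are illustrative, not the paper's exact formulation:

```python
import numpy as np

# Toy 0/1 collection matrices (users x objects), split by rating threshold:
# 'liked' holds positively rated objects, 'disliked' negatively rated ones.
liked = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float)
disliked = np.array([[0, 0, 0, 1],
                     [0, 0, 0, 0],
                     [0, 0, 1, 0]], dtype=float)

def heat_conduction(A, u):
    """Two-step heat conduction: averaged spreading of user u's heat."""
    f = A[u].copy()                          # unit heat on u's objects
    ku, ko = A.sum(axis=1), A.sum(axis=0)    # user and object degrees
    with np.errstate(divide="ignore", invalid="ignore"):
        heat_u = np.where(ku > 0, A @ f / ku, 0.0)         # objects -> users
        heat_o = np.where(ko > 0, A.T @ heat_u / ko, 0.0)  # users -> objects
    return heat_o

u = 0
scores = heat_conduction(liked, u)
dislike_scores = heat_conduction(disliked, u)
# Recommend unseen objects, filtering out those the dislike channel flags.
recs = [int(o) for o in np.argsort(-scores)
        if liked[u, o] == 0 and dislike_scores[o] <= scores[o]]
print(recs)  # [2]
```

The averaging (rather than mass-conserving) propagation is what gives HC its diversity; the dislike filter is the negative-opinion step the abstract describes.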

  18. Evaluation of an Enhanced Role-Based Access Control model to manage information access in collaborative processes for a statewide clinical education program.

    Science.gov (United States)

    Le, Xuan Hung; Doll, Terry; Barbosu, Monica; Luque, Amneris; Wang, Dongwen

    2014-08-01

    Managing information access in collaborative processes is a critical requirement for team-based biomedical research, clinical education, and patient care. We have previously developed a computational model, Enhanced Role-Based Access Control (EnhancedRBAC), and applied it to coordinate information access in the combined context of team collaboration and workflow for the New York State HIV Clinical Education Initiative (CEI) program. We report in this paper an evaluation study to assess the effectiveness of the EnhancedRBAC model for information access management in collaborative processes when applied to CEI. We designed a cross-sectional study and performed two sets of measurements: (1) degree of agreement between EnhancedRBAC and a control system, CEIAdmin, based on 9152 study cases, and (2) effectiveness of EnhancedRBAC in terms of sensitivity, specificity, and accuracy based on a gold standard with 512 sample cases developed by a human expert panel. We applied stratified random sampling, partial factorial design, and blocked randomization to ensure a representative case sample and a high-quality gold standard. With the kappa statistics of four comparisons in the range of 0.80-0.89, EnhancedRBAC has demonstrated a high level of agreement with CEIAdmin. When evaluated against the gold standard, EnhancedRBAC has achieved sensitivities in the range of 97-100%, specificities at the level of 100%, and accuracies in the range of 98-100%. The initial results have shown that the EnhancedRBAC model can be effectively used to manage information access in the combined context of team collaboration and workflow for coordination of clinical education programs. Future research is required to perform longitudinal evaluation studies and to assess the effectiveness of EnhancedRBAC in other applications. Copyright © 2013 Elsevier Inc. All rights reserved.
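
The reported evaluation metrics can all be computed from a 2x2 contingency table; the counts below are illustrative (chosen to land in the reported ranges), not the study's actual data:

```python
# Sensitivity, specificity, accuracy, and Cohen's kappa from a 2x2 table:
# model decisions cross-tabulated against a gold standard (here n = 512,
# with hypothetical cell counts).
def diagnostics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                  # true positive rate
    spec = tn / (tn + fp)                  # true negative rate
    acc = (tp + tn) / n
    # Chance agreement expected from the marginal totals.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - pe) / (1 - pe)
    return sens, spec, acc, kappa

sens, spec, acc, kappa = diagnostics(tp=98, fp=0, fn=2, tn=412)
print(round(sens, 2), round(spec, 2), round(acc, 3), round(kappa, 3))
# 0.98 1.0 0.996 0.987
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw accuracy in the system-vs-CEIAdmin comparison.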

  19. How people learn about causal influence when there are many possible causes: A model based on informative transitions.

    Science.gov (United States)

    Derringer, Cory; Rottman, Benjamin Margolin

    2018-05-01

    Four experiments tested how people learn cause-effect relations when there are many possible causes of an effect. When there are many cues, even if all the cues together strongly predict the effect, the bivariate relation between each individual cue and the effect can be weak, which can make it difficult to detect the influence of each cue. We hypothesized that when detecting the influence of a cue, in addition to learning from the states of the cues and effect (e.g., a cue is present and the effect is present), which is hypothesized by multiple existing theories of learning, participants would also learn from transitions - how the cues and effect change over time (e.g., a cue turns on and the effect turns on). We found that participants were better able to identify positive and negative cues in an environment in which only one cue changed from one trial to the next, compared to multiple cues changing (Experiments 1A, 1B). Within a single learning sequence, participants were also more likely to update their beliefs about causal strength when one cue changed at a time ('one-change transitions') than when multiple cues changed simultaneously (Experiment 2). Furthermore, learning was impaired when the trials were grouped by the state of the effect (Experiment 3) or when the trials were grouped by the state of a cue (Experiment 4), both of which reduce the number of one-change transitions. We developed a modification of the Rescorla-Wagner algorithm to model this 'Informative Transitions' learning process. Copyright © 2018 Elsevier Inc. All rights reserved.
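
The baseline state-based rule the authors modify is the Rescorla-Wagner update; a minimal sketch (without the transition-sensitive modification, and with illustrative parameters) is:

```python
# Classic Rescorla-Wagner rule: every cue present on a trial shares a common
# prediction error. The paper's 'Informative Transitions' model modifies this
# family of rules; this sketch shows only the base state-based update.
def rescorla_wagner(trials, n_cues, alpha=0.3):
    w = [0.0] * n_cues                         # associative strengths
    for cues, outcome in trials:               # cues: 0/1 list; outcome: 0/1
        prediction = sum(w[i] for i in range(n_cues) if cues[i])
        error = outcome - prediction
        for i in range(n_cues):
            if cues[i]:
                w[i] += alpha * error          # only present cues update
    return w

# Cue 0 perfectly predicts the effect; cue 1 is uninformative.
trials = [([1, 1], 1), ([0, 1], 0), ([1, 0], 1), ([0, 0], 0)] * 25
w = rescorla_wagner(trials, n_cues=2)
print([round(x, 2) for x in w])  # cue 0's strength near 1, cue 1's near 0
```

A transition-based variant would additionally weight updates by how many cues changed since the previous trial, amplifying learning on one-change transitions.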

  20. Model-based scenario planning to inform climate change adaptation in the Northern Great Plains—Final report

    Science.gov (United States)

    Symstad, Amy J.; Miller, Brian W.; Friedman, Jonathan M.; Fisichelli, Nicholas A.; Ray, Andrea J.; Rowland, Erika; Schuurman, Gregor W.

    2017-12-18

    Public Summary: We worked with managers in two focal areas to plan for the uncertain future by integrating quantitative climate change scenarios and simulation modeling into scenario planning exercises. In our central North Dakota focal area, centered on Knife River Indian Villages National Historic Site, managers are concerned about how changes in flood severity and growing conditions for native and invasive plants may affect archaeological resources and cultural landscapes associated with the Knife and Missouri Rivers. Climate projections and hydrological modeling based on those projections indicate plausible changes in spring and summer soil moisture ranging from a 7 percent decrease to a 13 percent increase, and maximum winter snowpack (important for spring flooding) changes ranging from a 13 percent decrease to a 47 percent increase. Facilitated discussions among managers and scientists exploring the implications of these different climate scenarios for resource management revealed potential conflicts between protecting archaeological sites and fostering riparian cottonwood forests. The discussions also indicated the need to prioritize archaeological sites for excavation or protection and culturally important plant species for intensive management attention. In our southwestern South Dakota focal area, centered on Badlands National Park, managers are concerned about how the changing climate will affect vegetation production, wildlife populations, and erosion of fossils, archaeological artifacts, and roads. Climate scenarios explored by managers and scientists in this focal area ranged from a 13 percent decrease to a 33 percent increase in spring precipitation, which is critical to plant growth in the northern Great Plains region, and from a slight decrease to a near doubling of intense rain events. Facilitated discussions in this focal area concluded that greater effort should be put into preparing for emergency protection, excavation, and preservation of exposed fossils or

  1. Comparison of information-seeking behavior of postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan in writing dissertations, based on the Kuhlthau model of the information search process.

    Science.gov (United States)

    Abedi, Mahnaz; Ashrafi-Rizi, Hasan; Zare-Farashbandi, Firoozeh; Nouri, Rasoul; Hassanzadeh, Akbar

    2014-01-01

    Information-seeking behavior has been one of the main focuses of researchers seeking to identify and solve the problems users face in information retrieval. The aim of this research is to compare the information-seeking behavior of postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan when writing dissertations, based on the Kuhlthau model of the information search process, in 2012. The research method is a survey and the data collection tool is the Narmenji questionnaire. The statistical population was all postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan. The sample size was 196 people and sampling was stratified random. The statistical analyses were descriptive (mean and frequency) and inferential (independent t test and Pearson's correlation), and the software used was SPSS 20. The findings showed that students at Isfahan University of Medical Sciences followed 20% of the ordered steps of this model and students at the University of Isfahan did not follow the model. In the first (Initiation) and sixth (Presentation) stages of the feelings aspect, and in actions (across all stages), significant differences were found between students from the two universities. A significant relationship was found between gender and both the fourth stage (Formulation) and the total feelings score of the Kuhlthau model. There was also a significant inverse relationship between the third stage (Exploration) of feelings and the age of the students. The results showed that in writing dissertations there were major differences between students of the two universities in following the Kuhlthau model, with significant differences in some of the stages of the feelings and actions of the students' information-seeking behavior, a significant relationship between the fourth stage (Formulation) of feelings and gender, and a significant relationship between the third stage of feelings (Exploration) and age.

  2. Multimodal Data Fusion Based on Mutual Information.

    Science.gov (United States)

    Bramon, Roger; Boada, Imma; Bardera, Anton; Rodríguez, Joaquim; Feixas, Miquel; Puig, Josep; Sbert, Mateu

    2012-09-01

    Multimodal visualization aims at fusing different data sets so that the resulting combination provides more information and understanding to the user. To achieve this aim, we propose a new information-theoretic approach that automatically selects the most informative voxels from two volume data sets. Our fusion criteria are based on the information channel created between the two input data sets, which permits us to quantify the information associated with each intensity value. This specific information is obtained from three different ways of decomposing the mutual information of the channel. In addition, an assessment criterion based on the information content of the fused data set can be used to analyze and modify the initial selection of the voxels by weighting the contribution of each data set to the final result. The proposed approach has been integrated into a general framework that allows for the exploration of volumetric data models and the interactive change of some parameters of the fused data set. The proposed approach has been evaluated on different medical data sets with very promising results.
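
A standard way to quantify the information channel between two co-registered volumes is mutual information estimated from their joint intensity histogram; the synthetic volumes below are stand-ins for real modalities:

```python
import numpy as np

def volume_mutual_information(a, b, bins=16):
    """Mutual information (nats) of two co-registered volumes, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginal distributions
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# Synthetic stand-ins for two modalities: one volume closely related to the
# first, one independent of it.
rng = np.random.default_rng(1)
ct = rng.random((16, 16, 16))
mri_related = 0.9 * ct + 0.1 * rng.random((16, 16, 16))
mri_unrelated = rng.random((16, 16, 16))

# The related pair shares far more information than the unrelated pair.
print(volume_mutual_information(ct, mri_related) >
      volume_mutual_information(ct, mri_unrelated))  # True
```

The paper's voxel-selection criteria decompose this channel further, assigning a specific information value to each intensity level rather than a single global score.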

  3. ANN multiscale model of anti-HIV drugs activity vs AIDS prevalence in the US at county level based on information indices of molecular graphs and social networks.

    Science.gov (United States)

    González-Díaz, Humberto; Herrera-Ibatá, Diana María; Duardo-Sánchez, Aliuska; Munteanu, Cristian R; Orbegozo-Medina, Ricardo Alfredo; Pazos, Alejandro

    2014-03-24

    This work is aimed at describing the workflow for a methodology that combines chemoinformatics and pharmacoepidemiology methods and at reporting the first predictive model developed with this methodology. The new model is able to predict complex networks of AIDS prevalence in the US counties, taking into consideration the social determinants and activity/structure of anti-HIV drugs in preclinical assays. We trained different Artificial Neural Networks (ANNs) using as input information indices of social networks and molecular graphs. We used a Shannon information index based on the Gini coefficient to quantify the effect of income inequality in the social network. We obtained the data on AIDS prevalence and the Gini coefficient from the AIDSVu database of Emory University. We also used the Balaban information indices to quantify changes in the chemical structure of anti-HIV drugs. We obtained the data on anti-HIV drug activity and structure (SMILE codes) from the ChEMBL database. Last, we used Box-Jenkins moving average operators to quantify information about the deviations of drugs with respect to data subsets of reference (targets, organisms, experimental parameters, protocols). The best model found was a Linear Neural Network (LNN) with values of Accuracy, Specificity, and Sensitivity above 0.76 and AUROC > 0.80 in training and external validation series. This model generates a complex network of AIDS prevalence in the US at county level with respect to the preclinical activity of anti-HIV drugs in preclinical assays. To train/validate the model and predict the complex network we needed to analyze 43,249 data points including values of AIDS prevalence in 2,310 counties in the US vs ChEMBL results for 21,582 unique drugs, 9 viral or human protein targets, 4,856 protocols, and 10 possible experimental measures.
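
The Box-Jenkins moving average operators mentioned above subtract, from each case's descriptor, the mean over a reference subset (e.g. cases sharing the same target); a minimal sketch with hypothetical assay records:

```python
from collections import defaultdict

# Moving-average-style deviation operator: each assay's descriptor is
# re-expressed as its deviation from the mean over a reference subset,
# here the assays sharing the same protein target. All field names and
# values are hypothetical, not data from ChEMBL or AIDSVu.
assays = [
    {"drug": "d1", "target": "protease", "descriptor": 2.0},
    {"drug": "d2", "target": "protease", "descriptor": 4.0},
    {"drug": "d3", "target": "reverse_transcriptase", "descriptor": 1.0},
    {"drug": "d4", "target": "reverse_transcriptase", "descriptor": 3.0},
]

groups = defaultdict(list)
for a in assays:
    groups[a["target"]].append(a["descriptor"])
means = {t: sum(v) / len(v) for t, v in groups.items()}

for a in assays:
    a["delta"] = a["descriptor"] - means[a["target"]]  # deviation input

print([a["delta"] for a in assays])  # [-1.0, 1.0, -1.0, 1.0]
```

Deviations of this kind, computed per target, organism, and protocol, are what the ANN receives as inputs instead of the raw descriptor values.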

  4. A systematic review and qualitative analysis to inform the development of a new emergency department-based geriatric case management model.

    Science.gov (United States)

    Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce

    2011-06-01

    We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one

  5. Data Model Management for Space Information Systems

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool
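The facet-based search over an exported knowledge base that this abstract describes can be sketched with a toy in-memory triple store. All URIs, class names and property names below are invented for illustration and are not actual PDS model elements:

```python
# Toy triple store illustrating facet-based search over a knowledge base
# exported as subject-predicate-object triples (as in an RDF/XML export).
# Every identifier here is hypothetical, not taken from the PDS data model.

triples = [
    ("urn:target:mars", "rdf:type", "pds:Target"),
    ("urn:target:mars", "pds:targetType", "Planet"),
    ("urn:inst:hirise", "rdf:type", "pds:Instrument"),
    ("urn:inst:hirise", "pds:hostedBy", "urn:sc:mro"),
    ("urn:sc:mro", "rdf:type", "pds:Spacecraft"),
]

def facet_search(triples, predicate, value):
    """Return subjects whose given predicate (one facet) matches value."""
    return [s for (s, p, o) in triples if p == predicate and o == value]

print(facet_search(triples, "pds:targetType", "Planet"))  # ['urn:target:mars']
```

A metadata browser layers a user interface over exactly this kind of query: each predicate becomes a facet, each distinct object a selectable filter value.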

  6. Strategies for control of sudden oak death in Humboldt County-informed guidance based on a parameterized epidemiological model

    Science.gov (United States)

    João A. N. Filipe; Richard C. Cobb; David M. Rizzo; Ross K. Meentemeyer; Christopher A. Gilligan

    2010-01-01

    Landscape- to regional-scale models of plant epidemics are urgently needed to predict large-scale impacts of disease and assess practicable options for control. While landscape heterogeneity is recognized as a major driver of disease dynamics, epidemiological models are rarely applied to realistic landscape conditions due to computational and data limitations. Here we...

  7. A Meteorological Information Mining-Based Wind Speed Model for Adequacy Assessment of Power Systems With Wind Power

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    factors are calculated. Secondly, the meteorological data are classified into several states using an improved Fuzzy C-means (FCM) algorithm. Then the Markov chain is used to model the chronological characteristics of meteorological states and wind speed. The proposed model was proved to be more accurate...
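The Markov-chain step of such a model can be sketched as follows. The FCM classification is assumed to have already mapped each day's meteorological data to a discrete state label; the label sequence below is invented for illustration:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a first-order Markov transition matrix from a sequence of
    discrete state labels (e.g. meteorological states produced by FCM)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave rows of unvisited states at zero
    return counts / row_sums

# Hypothetical sequence of daily meteorological-state labels.
seq = [0, 0, 1, 2, 1, 0, 1, 1, 2, 0]
P = transition_matrix(seq, 3)  # each row is a probability distribution
```

Sampling state sequences from `P` then drives the chronological wind speed model; the actual paper additionally conditions wind speed on the sampled state.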

  8. Early Engagement of Stakeholders with Individual-Based Modeling Can Inform Research for Improving Invasive Species Management: The Round Goby as a Case Study

    Directory of Open Access Journals (Sweden)

    Emma Samson

    2017-11-01

    Full Text Available Individual-based models (IBMs incorporating realistic representations of key range-front processes such as dispersal can be used as tools to investigate the dynamics of invasive species. Managers can apply insights from these models to take effective action to prevent further spread and prioritize measures preventing establishment of invasive species. We highlight here how early-stage IBMs (constructed under constraints of time and data availability can also play an important role in defining key research priorities for providing key information on the biology of an invasive species in order that subsequent models can provide robust insight into potential management interventions. The round goby, Neogobius melanostomus, is currently spreading through the Baltic Sea, with major negative effects being reported in the wake of its invasion. Together with stakeholders, we parameterize an IBM to investigate the goby's potential spread pattern throughout the Gulf of Gdansk and the Baltic Sea. Model parameters were assigned by integrating information obtained through stakeholder interaction, from scientific literature, or estimated using an inverse modeling approach when not available. IBMs can provide valuable direction to research on invasive species even when there is limited data and/or time available to parameterize/fit them to the degree to which we might aspire in an ideal world. Co-development of models with stakeholders can be used to recognize important invasion patterns, in addition to identifying and estimating unknown environmental parameters, thereby guiding the direction of future research. Well-parameterized and validated models are not required in the earlier stages of the modeling cycle where their main utility is as a tool for thought.
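A minimal sketch of the kind of individual-based dispersal model described above, assuming a one-dimensional coastline and entirely invented parameter values (not fitted to round goby data):

```python
import random

def simulate_spread(n_steps=20, dispersal_sd=1.0, offspring=2, seed=42):
    """Minimal 1-D individual-based invasion model: each individual produces
    `offspring` juveniles whose positions are displaced by a Gaussian
    dispersal kernel; the population is capped to keep the toy model cheap.
    Returns the range front (max position in the population) per step.
    All parameter values are illustrative only."""
    random.seed(seed)
    positions = [0.0]  # founding population at the point of introduction
    front = []
    for _ in range(n_steps):
        positions = [p + random.gauss(0.0, dispersal_sd)
                     for p in positions for _ in range(offspring)]
        # crude carrying-capacity cap: keep at most 200 random survivors
        positions = random.sample(positions, min(len(positions), 200))
        front.append(max(positions))
    return front

front = simulate_spread()
```

Inverse modeling, as mentioned in the abstract, would tune parameters such as `dispersal_sd` until the simulated front matches observed spread data.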

  9. Research on BIM-based building information value chain reengineering

    Science.gov (United States)

    Hui, Zhao; Weishuang, Xie

    2017-04-01

    Value and value-added factors accrue to building engineering information through a chain of flows, that is, the building information value chain. Based on a deconstruction of the information chain for construction information in the traditional information mode, this paper clarifies the value characteristics and requirements of each stage of a construction project. In order to achieve building information value-added, the paper deconstructs the traditional building information value chain, reengineers the information value chain model on the basis of BIM theory and techniques, builds a value-added management model, and analyses the value of the model.

  10. Models of memory: information processing.

    Science.gov (United States)

    Eysenck, M W

    1988-01-01

    A complete understanding of human memory will necessarily involve consideration of the active processes involved at the time of learning and of the organization and nature of representation of information in long-term memory. In addition to process and structure, it is important for theory to indicate the ways in which stimulus-driven and conceptually driven processes interact with each other in the learning situation. Not surprisingly, no existent theory provides a detailed specification of all of these factors. However, there are a number of more specific theories which are successful in illuminating some of the component structures and processes. The working memory model proposed by Baddeley and Hitch (1974) and modified subsequently has shown how the earlier theoretical construct of the short-term store should be replaced with the notion of working memory. In essence, working memory is a system which is used both to process information and to permit the transient storage of information. It comprises a number of conceptually distinct, but functionally interdependent components. So far as long-term memory is concerned, there is evidence of a number of different kinds of representation. Of particular importance is the distinction between declarative knowledge and procedural knowledge, a distinction which has received support from the study of amnesic patients. Kosslyn has argued for a distinction between literal representation and propositional representation, whereas Tulving has distinguished between episodic and semantic memories. While Tulving's distinction is perhaps the best known, there is increasing evidence that episodic and semantic memory differ primarily in content rather than in process, and so the distinction may be of less theoretical value than was originally believed.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...
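One of the aggregation operators this book covers, the Ordered Weighted Averaging (OWA) operator, can be sketched in a few lines; the input values and weights below are arbitrary:

```python
def owa(values, weights):
    """Ordered Weighted Averaging operator: the weights apply to the values
    sorted in descending order, not to particular sources, so the same
    weight vector can express optimistic, pessimistic or mean behaviour."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# With weights (0.5, 0.3, 0.2) the largest input always receives weight 0.5,
# regardless of which source supplied it: 9*0.5 + 6*0.3 + 3*0.2 = 6.9
print(owa([3.0, 9.0, 6.0], [0.5, 0.3, 0.2]))  # 6.9
```

With equal weights the OWA operator reduces to the ordinary arithmetic mean, which is the simplest member of this operator family.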

  12. Building Information Modelling in Denmark and Iceland

    DEFF Research Database (Denmark)

    Jensen, Per Anker; Jóhannesson, Elvar Ingi

    2013-01-01

    Purpose – The purpose of this paper is to explore the implementation of building information modelling (BIM) in the Nordic countries of Europe with particular focus on the Danish building industry with the aim of making use of its experience for the Icelandic building industry. Design/methodology/approach – The research is based on two separate analyses. In the first part, the deployment of information and communication technology (ICT) in the Icelandic building industry is investigated and compared with the other Nordic countries. In the second part the experience in Denmark from implementing and working...... for making standards and guidelines related to BIM. Public building clients are also encouraged to consider initiating projects based on making simple building models of existing buildings in order to introduce the BIM technology to the industry. Icelandic companies are recommended to start implementing BIM...

  13. Reservoir Model Information System: REMIS

    Science.gov (United States)

    Lee, Sang Yun; Lee, Kwang-Wu; Rhee, Taehyun; Neumann, Ulrich

    2009-01-01

    We describe a novel data visualization framework named Reservoir Model Information System (REMIS) for the display of complex and multi-dimensional data sets in oil reservoirs. It is aimed at facilitating visual exploration and analysis of data sets as well as user collaboration in an easier way. Our framework consists of two main modules: the data access point module and the data visualization module. For the data access point module, the Phrase-Driven Grammar System (PDGS) is adopted to help users facilitate the visualization of data. It integrates data source applications and external visualization tools and allows users to formulate data queries and visualization descriptions by selecting graphical icons in a menu or on a map with step-by-step visual guidance. For the data visualization module, we implemented our first prototype of an interactive volume viewer named REMVR to classify and to visualize geo-spatial specific data sets. By combining PDGS and REMVR, REMIS better assists users in describing visualizations and exploring data so that they can easily find desired data and explore interesting or meaningful relationships including trends and exceptions in oil reservoir model data.

  14. Development of an interactive exploratory web-based modelling platform for informed decision-making and knowledgeable responses to global change

    Science.gov (United States)

    Holman, I.; Harrison, P.; Cojocaru, G.

    2013-12-01

    Informed decision-making and knowledgeable responses to global climate change impacts on natural resources and ecosystem services require access to information resources that are credible, accurate, easy to understand, and appropriate. Too often stakeholders are limited to restricted scientific outputs produced by inaccessible models, generated from a limited number of scenario simulations chosen arbitrarily by researchers. This paper describes the outcomes of the CLIMSAVE project (www.climsave.eu), which has attempted to democratise climate change impacts, adaptation and vulnerability modelling through developing the public domain interactive exploratory web-based CLIMSAVE Integrated Assessment (IA) Platform. The CLIMSAVE IA Platform aims to enable a wide range of stakeholders to improve their understanding of the impacts, adaptation responses and vulnerability of natural resources and ecosystem services under uncertain futures across Europe. It contains linked simulation models (of the urban, water, agriculture, forestry, biodiversity and other sectors), IPCC AR4 climate scenarios and CLIMSAVE socio-economic scenarios, enabling users to select their inputs (climate and socioeconomic), rapidly run the models across Europe using their input settings and view their selected Impact (before, or after, adaptation) and Vulnerability (Figure 1) indicators. The CLIMSAVE IA Platform has been designed to promote both cognitive accessibility - the ease of understanding - and practical accessibility - the ease of application. Based upon partner and CLIMSAVE international experts' experience, examination of other participatory model interfaces and potential user requirements, we describe the design concepts and functionality that were identified, incorporated into the prototype CLIMSAVE IA Platform and further refined based on stakeholder feedback.
The CLIMSAVE IA Platform is designed to facilitate a two-way iterative process

  15. A Frequency-Based Assignment Model under Day-to-Day Information Evolution of Oversaturated Conditions on a Feeder Bus Service

    Directory of Open Access Journals (Sweden)

    Silin Zhang

    2017-02-01

    Full Text Available Day-to-day information is increasingly being implemented in transit networks worldwide. Feeder bus service (FBS) plays a vital role in a public transit network by providing feeder access to hubs and rails. As a feeder service, a space-time path for frequent passengers is decided by its dynamic strategy procedure, in which a day-to-day information self-learning mechanism is identified and analyzed from our survey data. We formulate a frequency-based assignment model considering day-to-day evolution under oversaturated conditions, which takes into account the residual capacity of buses and the comfort of sitting or standing. The core of our proposed model is to allocate the passengers on each segment belonging to their own paths according to multi-utilities transformed from the time values and parametric demands, such as frequency, bus capacity, seat comfort, and stop layout. The assignment method, albeit general, allows us to formulate an equivalent optimization problem in terms of the interaction between the FBS's operation and frequent passengers' rational behaviors. Finally, a real application case is presented to test the ability of the modeling framework to capture the theoretical results, serving passengers subject to dynamic externalities.
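The classic frequency-share boarding rule that underlies frequency-based assignment can be sketched as follows. Line names and frequencies are invented; the paper's full model additionally weighs residual capacity and seat comfort, which this sketch omits:

```python
def frequency_shares(frequencies):
    """Classic frequency-based line-choice rule: a waiting passenger boards
    the first attractive bus to arrive, so under random arrivals line i
    carries f_i / sum(f) of the demand on a shared segment."""
    total = sum(frequencies.values())
    return {line: f / total for line, f in frequencies.items()}

# Hypothetical feeder lines with headway frequencies in buses per hour.
shares = frequency_shares({"L1": 6, "L2": 3, "L3": 3})
# L1 runs half of all departures, so it carries half the passengers.
```

Oversaturated assignment replaces this simple proportional split with the multi-utility allocation described in the abstract, but the frequency share remains the baseline behaviour.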

  16. Modeling of groundwater potential of the sub-basin of Siriri river, Sergipe state, Brazil, based on Geographic Information System and Remote Sensing

    Directory of Open Access Journals (Sweden)

    Washington Franca Rocha

    2011-08-01

    Full Text Available The use of Geographic Information Systems (GIS) and Remote Sensing for modeling groundwater potential supports the analysis and decision-making processes of water resource management in watersheds. The objective of this work was to model the groundwater potential of the Siriri river sub-basin, Sergipe state, based on its natural environment (soil, land use, slope, drainage density, lineament density, rainfall and geology), using Remote Sensing and a Geographic Information System as the integration environment. The groundwater potential map was produced using digital image processing procedures of the ENVI 4.4 software and the map algebra of ArcGIS 9.3®. The Analytic Hierarchy Process (AHP) was used to define the weights of the different criteria (maps). Weights and ratings were assigned to the classes of each map according to their influence on the overall objective of the work. The integration of these maps in a GIS environment and the application of the AHP technique allowed the development of a groundwater potential map with five classes: very low, low, moderate, high and very high. The average flow rates of wells confirm the potential of the Sapucari, Barriers and Maruim aquifers, since they are the most exploited in this sub-basin, with average flows of 78,113 L/h, 19,332 L/h and 12,085 L/h, respectively.
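The AHP weighting step can be sketched with the row geometric-mean approximation to the principal eigenvector of the pairwise-comparison matrix. The comparison values and the cell scores below are hypothetical, not those of the study:

```python
import numpy as np

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the row geometric mean of the
    pairwise-comparison matrix (a standard eigenvector approximation)."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical 3-criteria comparison: geology vs. slope vs. drainage density,
# on Saaty's 1-9 scale (geology judged 3x as important as slope, etc.).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_weights(A)

# Map algebra then reduces to a weighted sum of normalized class ratings
# for each raster cell; the ratings here are made-up example values.
potential = float(w @ np.array([0.8, 0.4, 0.6]))
```

Repeating the weighted sum over every cell and slicing the result into quantiles yields the five-class potential map described in the abstract.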

  17. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
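The EM estimation behind parsimonious language models can be sketched as follows: the E-step discounts term counts by how well the background collection model explains them, and the M-step renormalises, so common words lose document-model probability mass. The toy term probabilities and the mixing parameter below are invented:

```python
from collections import Counter

def parsimonious_lm(doc_terms, collection_probs, lam=0.1, iters=20):
    """EM estimation of a parsimonious document language model: terms that
    are well explained by the background collection model have their
    document probability driven down (after Hiemstra, Robertson & Zaragoza).
    `lam` is the weight on the document model in the mixture."""
    tf = Counter(doc_terms)
    p_d = {w: c / len(doc_terms) for w, c in tf.items()}  # MLE start point
    for _ in range(iters):
        # E-step: expected count of w attributable to the document model.
        e = {w: tf[w] * (lam * p_d[w]) /
                (lam * p_d[w] + (1 - lam) * collection_probs[w])
             for w in tf}
        # M-step: renormalise expected counts into a distribution.
        total = sum(e.values())
        p_d = {w: v / total for w, v in e.items()}
    return p_d

# 'the' is frequent in the (hypothetical) collection, so its parsimonious
# probability shrinks relative to the content-bearing terms.
model = parsimonious_lm(
    ["the", "the", "ontology", "retrieval"],
    {"the": 0.1, "ontology": 0.001, "retrieval": 0.002})
```

The effect is that the document model keeps only the terms that distinguish the document from the collection, which is exactly the "parsimony" the abstract refers to.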

  18. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  19. Refreshing Information Literacy: Learning from Recent British Information Literacy Models

    Science.gov (United States)

    Martin, Justine

    2013-01-01

    Models play an important role in helping practitioners implement and promote information literacy. Over time models can lose relevance with the advances in technology, society, and learning theory. Practitioners and scholars often call for adaptations or transformations of these frameworks to articulate the learning needs in information literacy…

  20. Requirements for clinical information modelling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak

    2015-07-01

    This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling, and if we one day have a certified tools list, any tool that does not meet essential criteria would be excluded. Recommended requirements are those more advanced requirements that may be met by tools offering a superior product or only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement to enable the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  1. Ontological modeling of electronic health information exchange.

    Science.gov (United States)

    McMurray, J; Zhu, L; McKillop, I; Chen, H

    2015-08-01

    Investments of resources to purposively improve the movement of information between health system providers are currently made with imperfect information. No inventories of system-level electronic health information flows currently exist, nor do measures of inter-organizational electronic information exchange. Using Protégé 4, an open-source OWL Web ontology language editor and knowledge-based framework, we formalized a model that decomposes inter-organizational electronic health information flow into derivative concepts such as diversity, breadth, volume, structure, standardization and connectivity. The ontology was populated with data from a regional health system and the flows were measured. Individual instance's properties were inferred from their class associations as determined by their data and object property rules. It was also possible to visualize interoperability activity for regional analysis and planning purposes. A property called Impact was created from the total number of patients or clients that a health entity in the region served in a year, and the total number of health service providers or organizations with whom it exchanged information in support of clinical decision-making, diagnosis or treatment. Identifying providers with a high Impact but low Interoperability score could assist planners and policy-makers to optimize technology investments intended to electronically share patient information across the continuum of care. Finally, we demonstrated how linked ontologies were used to identify logical inconsistencies in self-reported data for the study. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Knowledge-based information systems in practice

    CERN Document Server

    Jain, Lakhmi; Watada, Junzo; Howlett, Robert

    2015-01-01

    This book contains innovative research from leading researchers who presented their work at the 17th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2013, held in Kitakyusha, Japan, in September 2013. The conference provided a competitive field of 236 contributors, from which 38 authors expanded their contributions and only 21 published. A plethora of techniques and innovative applications are represented within this volume. The chapters are organized using four themes. These topics include: data mining, knowledge management, advanced information processes and system modelling applications. Each topic contains multiple contributions and many offer case studies or innovative examples. Anyone that wants to work with information repositories or process knowledge should consider reading one or more chapters focused on their technique of choice. They may also benefit from reading other chapters to assess if an alternative technique represents a more suitable app...

  3. Analysis of the quality of hospital information systems in Isfahan teaching hospitals based on the DeLone and McLean model.

    Science.gov (United States)

    Saghaeiannejad-Isfahani, Sakineh; Saeedbakhsh, Saeed; Jahanbakhsh, Maryam; Habibi, Mahboobeh

    2015-01-01

    Quality is one of the most important criteria for the success of an information system, referring to the desirable features of the processing system itself. The aim of this study was to analyse the system quality of the hospital information systems (HIS) in teaching hospitals of Isfahan based on the DeLone and McLean model. This research was an applied, analytical-descriptive study performed in the teaching hospitals of Isfahan in 2010. The research population consisted of the HIS users, system designers and hospital information technology (IT) authorities; users were selected by random sampling (n = 228), while system designers and IT authorities (n = 52) were selected using the census method. The data collection tools were two researcher-designed questionnaires, whose reliability was estimated using Cronbach's alpha: 97.1% for the system designers and IT authorities' questionnaire and 92.3% for the system users' questionnaire. Findings showed that the mean system quality score differed significantly across the various HIS and across the hospitals (P value ≥ 0.05). In general, the Kosar (new version) system and the Rahavard Rayaneh system obtained the highest and the lowest mean scores, respectively. The overall mean of the system quality criterion was 59.6% across the different HIS and 57.5% across the hospitals. According to the results of the research, it can be stated that, based on the applied model, the investigated systems were relatively desirable in terms of quality. Thus, in order to achieve an optimal condition, it is necessary to pay particular attention to the factors improving system quality, type of activity, type of specialty and hospital ownership type.
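Cronbach's alpha, the reliability statistic reported for both questionnaires above, can be computed from a respondents-by-items score matrix; the scores below are made up for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    using population variances."""
    k = len(items[0])  # number of items (questionnaire questions)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four hypothetical respondents answering three Likert-scale items.
scores = [[3, 4, 3],
          [2, 2, 3],
          [4, 5, 4],
          [1, 2, 2]]
alpha = cronbach_alpha(scores)  # ≈ 0.94: high internal consistency
```

Values above roughly 0.9, such as the 97.1% and 92.3% reported in the abstract, indicate very high internal consistency of the questionnaire items.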

  4. Item Information in the Rasch Model

    NARCIS (Netherlands)

    Engelen, Ron J.H.; van der Linden, Willem J.; Oosterloo, Sebe J.

    1988-01-01

    Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling
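For a dichotomous Rasch item, Fisher information at ability theta has the closed form P(1 - P), where P is the model's probability of a correct response; a sketch:

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta:
    I(theta) = P * (1 - P), with P = 1 / (1 + exp(-(theta - b)))."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

# Information peaks at 0.25 when ability equals item difficulty,
# and falls off as the examinee's ability moves away from b.
print(rasch_information(0.0, 0.0))  # 0.25
print(rasch_information(2.0, 0.0))  # ≈ 0.105
```

This is the information about ability; the item record above concerns information about the difficulty parameter itself, which has the same functional form summed over the examinee sample.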

  5. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases are presented employing time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
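The Takens delay-coordinate embedding on which the working hypothesis rests can be sketched as follows; the embedding dimension and delay values are arbitrary example choices:

```python
def takens_embed(series, dim, delay):
    """Delay-coordinate (Takens) embedding: map a scalar time series into
    dim-dimensional vectors (x_t, x_{t-delay}, ..., x_{t-(dim-1)*delay}),
    which reconstruct the underlying attractor for suitable dim and delay."""
    start = (dim - 1) * delay
    return [[series[t - k * delay] for k in range(dim)]
            for t in range(start, len(series))]

# A trivial ramp signal makes the delay structure easy to read off.
vecs = takens_embed(list(range(10)), dim=3, delay=2)
# first embedding vector: [4, 2, 0]; six vectors in total (t = 4..9)
```

A prediction model such as the one in the abstract fits a map from each embedding vector to a future value of the series; the Fisher-information machinery concerns how that map's coefficients are inferred.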

  6. Ontology-based Information Retrieval

    DEFF Research Database (Denmark)

    Styltsvig, Henrik Bulskov

    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information...... retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use......, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun...

  7. Model Information Exchange System (MIXS).

    Science.gov (United States)

    2013-08-01

    Many travel demand forecast models operate at state, regional, and local levels. While they share the same physical network in overlapping geographic areas, they use different and uncoordinated modeling networks. This creates difficulties for models ...

  8. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  9. The Informed Guide to Climate Data Sets, a web-based community resource to facilitate the discussion and selection of appropriate datasets for Earth System Model Evaluation

    Science.gov (United States)

    Schneider, D. P.; Deser, C.; Shea, D.

    2011-12-01

    When comparing CMIP5 model output to observations, researchers will be faced with a bewildering array of choices. Considering just a few of the different products available for commonly analyzed climate variables, for reanalysis there are at least half a dozen different products, for sea ice concentrations there are NASA Team or Bootstrap versions, for sea surface temperatures there are HadISST or NOAA ERSST data, and for precipitation there are CMAP and GPCP data sets. While many data centers exist to host data, there is little centralized guidance on discovering and choosing appropriate climate data sets for the task at hand. Common strategies like googling "sea ice data" yield results that at best are substantially incomplete. Anecdotal evidence suggests that individual researchers often base their selections on non-scientific criteria: either the data are in a convenient format that the user is comfortable with, a co-worker has the data handy on her local server, or a mentor discourages or recommends the use of particular products for legacy or other non-objective reasons. Sometimes these casual recommendations are sound, but they are not accessible to the broader community or adequately captured in the peer-reviewed literature. These issues are addressed by the establishment of a web-based Informed Guide with the specific goals to (1) Evaluate and assess selected climate datasets and (2) Provide expert user guidance on the strengths and limitations of selected climate datasets. The Informed Guide is based at NCAR's Climate and Global Dynamics Division, Climate Analysis Section and is funded by NSF. The Informed Guide is an interactive website that welcomes participation from the broad scientific community and is scalable to grow as participation increases. In this presentation, we will demonstrate the website, discuss how you can participate, and address the broader issues about its role in the evaluation of CMIP5 and other climate model simulations.
A link to the

  10. Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological review of health technology assessments.

    Science.gov (United States)

    Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R

    2017-04-14

    Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. The uptake of appropriate meta-analysis methods for

  11. Information model for learning nursing terminology.

    Science.gov (United States)

    Nytun, Jan Pettersen; Fossum, Mariann

    2014-01-01

    Standardized terminologies are introduced in healthcare with the intention of improving information quality, which is important for enhancing the quality of healthcare itself. The International Classification for Nursing Practice (ICNP®) is a unified language system that presents an ontology for nursing terminology; it is meant for documentation of nursing diagnoses, nursing interventions and patient outcomes. This paper presents an information model and an application for teaching nursing students how to use ICNP to assist in the planning of nursing care. The model is an integration of ICNP and our catalog ontology, patient journal ontology, and ontology defining task sets. The application for learning nursing terminology offers descriptions of patient situations and then prompts the student to supply nursing statements for diagnoses, goals and interventions. The nursing statements may be selected from catalogues containing premade solutions based on ICNP, or they may be constructed directly by selecting terms from ICNP.

  12. Biological information systems: Evolution as cognition-based information management.

    Science.gov (United States)

    Miller, William B

    2018-05-01

    An alternative biological synthesis is presented that conceptualizes evolutionary biology as an epiphenomenon of integrated self-referential information management. Since all biological information has inherent ambiguity, the systematic assessment of information is required by living organisms to maintain self-identity and homeostatic equipoise in confrontation with environmental challenges. Through their self-referential attachment to information space, cells are the cornerstone of biological action. That individualized assessment of information space permits self-referential, self-organizing niche construction. That deployment of information and its subsequent selection enacted the dominant stable unicellular informational architectures whose biological expressions are the prokaryotic, archaeal, and eukaryotic unicellular forms. Multicellularity represents the collective appraisal of equivocal environmental information through a shared information space. This concerted action can be viewed as systematized information management to improve information quality for the maintenance of preferred homeostatic boundaries among the varied participants. When reiterated in successive scales, this same collaborative exchange of information yields macroscopic organisms as obligatory multicellular holobionts. Cognition-Based Evolution (CBE) upholds that assessment of information precedes biological action, and the deployment of information through integrative self-referential niche construction and natural cellular engineering antecedes selection. Therefore, evolutionary biology can be framed as a complex reciprocating interactome that consists of the assessment, communication, deployment and management of information by self-referential organisms at multiple scales in continuous confrontation with environmental stresses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well (ρ > 0.95) in predicting...
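The mse–mutual-information link underlying such a predictor can be sketched numerically under a joint-Gaussian assumption; the toy signal model and noise level below are illustrative assumptions, not the paper's data. For jointly Gaussian variables, mutual information is a one-to-one function of the correlation, I = -½ log(1 − ρ²) nats, and 1 − ρ² is the normalized mmse of the optimal estimator:

```python
import math, random

random.seed(1)

# Toy Gaussian model: "noisy envelope" y = x + noise (assumed, for illustration).
n = 20000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 0.5) for xi in x]

# Sample correlation between clean and noisy sequences
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
rho = sxy / math.sqrt(sxx * syy)

# Gaussian mutual information and the normalized mmse it is a function of
mi = -0.5 * math.log(1 - rho ** 2)   # nats
nmse = 1 - rho ** 2                  # normalized mmse of the optimal linear estimator
print(f"rho={rho:.3f}  MI={mi:.3f} nats  normalized mmse={nmse:.3f}")
```

Because MI decreases monotonically as the normalized mmse grows, an intelligibility score built on one is a monotone function of the other, which is the core of the predictor described above.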

  14. Directory of Energy Information Administration Models 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains a description of each model, including title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994.

  15. Directory of Energy Information Administration Models 1994

    International Nuclear Information System (INIS)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains a description of each model, including title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994.

  16. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary of the ISO/IEC 27001 standard. After the mandatory rules for attaining ISO/IEC 27001 conformance are embedded in this model, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  17. Investigating the Challenges for Adopting and Implementing Information and Communication Technologies (ICT) by Isfahan High School Teachers: Based on the Model of Barriers to ICT Usage

    Directory of Open Access Journals (Sweden)

    Bibi Eshrat Zaman

    2012-02-01

    Full Text Available The relevance and usefulness of information and communication technologies (ICT) have been investigated in many studies. There are many challenges for ICT users, especially teachers, that act as inhibiting factors for using ICT in their work. The main purpose of this paper was to investigate these challenges from the viewpoint of high school teachers in Isfahan city, based on the ICT use barriers model. In the model, barriers are divided into four groups: organizational, managerial, educational, and financial-instrumental. The research was based on a qualitative method, and a descriptive-analysis method was used for analyzing the data. For gathering data, a researcher-made questionnaire including 5 open-ended questions was used. The survey population included the teachers of all high schools in Isfahan city in the 1387-88 academic year. 110 teachers were selected using the cluster random sampling method. For data analysis, the content analysis method was used to calculate means and frequencies. Findings indicated that most teachers identified the lack of proper in-service training programs as the most important obstacle to using ICT in teaching. Lack of suitable managerial strategies for implementing ICT in the curriculum, lack of organizational support, and lack of financial resources and equipment in schools, respectively, were the other barriers to using ICT in Iranian high schools.

  18. Investigating accident causation through information network modelling.

    Science.gov (United States)

    Griffin, T G C; Young, M S; Stanton, N A

    2010-02-01

    Management of risk in complex domains such as aviation relies heavily on post-event investigations, requiring complex approaches to fully understand the integration of multi-causal, multi-agent and multi-linear accident sequences. The Event Analysis of Systemic Teamwork methodology (EAST; Stanton et al. 2008) offers such an approach based on network models. In this paper, we apply EAST to a well-known aviation accident case study, highlighting communication between agents as a central theme and investigating the potential for finding agents who were key to the accident. Ultimately, this work aims to develop a new model based on distributed situation awareness (DSA) to demonstrate that the risk inherent in a complex system is dependent on the information flowing within it. By identifying key agents and information elements, we can propose proactive design strategies to optimize the flow of information and help work towards avoiding aviation accidents. Statement of Relevance: This paper introduces a novel application of a holistic methodology for understanding aviation accidents. Furthermore, it introduces an ongoing project developing a nonlinear and prospective method that centralises distributed situation awareness and communication as themes. The relevance of the findings is discussed in the context of current ergonomic and aviation issues of design, training and human-system interaction.

  19. Toward risk assessment 2.0: Safety supervisory control and model-based hazard monitoring for risk-informed safety interventions

    International Nuclear Information System (INIS)

    Favarò, Francesca M.; Saleh, Joseph H.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the pervading probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools here developed are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation
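The idea of a danger index monitored over a system's state, with an extrapolated time-to-threshold as prognostic information, can be sketched on a toy rejected-takeoff model. All dynamics, parameters, and threshold values below are invented for illustration and are not those of the paper:

```python
# Toy state-space model: a vehicle accelerating down a runway.
# The "danger index" measures proximity of the state (speed) to a critical
# threshold; extrapolating the index yields an advance-notice estimate of
# when that threshold will be (b)reached.
dt, accel = 0.1, 2.0    # time step (s) and acceleration (m/s^2), assumed
v_crit = 70.0           # speed beyond which a safe stop is impossible (assumed)
v, t = 0.0, 0.0
while True:
    v += accel * dt                       # state update
    t += dt
    danger = v / v_crit                   # hazard level, normalized to [0, 1]
    rate = accel / v_crit                 # d(danger)/dt for this simple model
    time_to_threshold = (1.0 - danger) / rate   # prognostic "time-to-accident" proxy
    if danger >= 0.8:                     # advisory threshold for a no-go call
        break
print(f"t={t:.1f}s  danger={danger:.2f}  est. time to critical = {time_to_threshold:.1f}s")
```

Here the diagnostic output (the current danger index) supports the go/no-go decision, while the extrapolated time-to-threshold plays the role of the advance-notice metric described in the abstract.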

  20. Rule-based Information Integration

    NARCIS (Netherlands)

    de Keijzer, Ander; van Keulen, Maurice

    2005-01-01

    In this report, we show the process of information integration. We specifically discuss the language used for integration. We show that integration consists of two phases, the schema mapping phase and the data integration phase. We formally define transformation rules, conversion, evolution and

  1. Complementarity of Historic Building Information Modelling and Geographic Information Systems

    Science.gov (United States)

    Yang, X.; Koehl, M.; Grussenmeyer, P.; Macher, H.

    2016-06-01

    In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and the non-architectural information that is necessary for construction and management of buildings. GIS has potential in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create an enriched model of the complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling are discussed in this paper: how to deal with the complex elements composing historic buildings in the BIM and GIS environment, how to build the enriched historic model, and why to construct different levels of detail. By solving these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.

  2. Development of tailored nutrition information messages based on the transtheoretical model for smartphone application of an obesity prevention and management program for elementary-school students.

    Science.gov (United States)

    Lee, Ji Eun; Lee, Da Eun; Kim, Kirang; Shim, Jae Eun; Sung, Eunju; Kang, Jae-Heon; Hwang, Ji-Yun

    2017-06-01

    Easy access to intervention and support for certain behaviors is important for obesity prevention and management. The available technology such as smartphone applications can be used for intervention regarding healthy food choices for obesity prevention and management in elementary-school students. The transtheoretical model (TTM) is comprised of stages and processes of change and can be adopted to tailored education for behavioral change. This study aims to develop TTM-based nutrition contents for mobile applications intended to change eating behaviors related to weight gain in young children. A synthesized algorithm for tailored nutrition messages was developed according to the intake status of six food groups (vegetables, fruits, sugar-sweetened beverages, fast food and instant food, snacks, and late-night snacks), decision to make dietary behavioral changes, and self-confidence in dietary behavioral changes. The messages in this study were developed from December 2014 to April 2015. After the validity evaluation of the contents through expert consultation, tailored nutrition information messages and educational contents were developed based on the TTM. Based on the TTM, stages of subjects are determined by their current intake status, decision to make dietary behavioral changes, and self-confidence in dietary behavioral changes. Three versions of tailored nutrition messages at each TTM stage were developed so as to not send the same messages for three weeks at most, and visual materials such as figures and tables were developed to provide additional nutritional information. Finally, 3,276 tailored nutrition messages and 60 nutrition contents for applications were developed. Smartphone applications may be an innovative medium to deliver interventions for eating behavior changes directly to individuals with favorable cost-effectiveness. In addition, using the TTM for tailored nutrition education for healthy eating is an effective approach.
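A minimal sketch of such stage-based tailoring follows, assuming a simple mapping from the three tailoring inputs (intake status, decision to change, self-confidence) to standard TTM stage names; the decision rules and message format are illustrative assumptions, not the algorithm from the paper:

```python
# Hedged sketch of TTM stage assignment from three tailoring inputs.
def ttm_stage(intake_ok: bool, wants_change: bool, confident: bool) -> str:
    """Assign an (assumed) transtheoretical-model stage."""
    if intake_ok:
        return "action/maintenance"     # behaviour already meets the guideline
    if not wants_change:
        return "precontemplation"
    if not confident:
        return "contemplation"
    return "preparation"

def message_for(food_group: str, stage: str, week: int) -> str:
    # Three message versions per stage, rotated weekly, so the same text
    # is not sent for three weeks at most (as described in the abstract).
    version = week % 3
    return f"{food_group}:{stage}:v{version}"

print(ttm_stage(intake_ok=False, wants_change=True, confident=False))  # contemplation
print(message_for("vegetables", "contemplation", week=4))  # vegetables:contemplation:v1
```

Multiplying the six food groups by the stages and the three rotating versions is what produces a message bank on the scale of the 3,276 tailored messages the study reports.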

  3. Development of tailored nutrition information messages based on the transtheoretical model for smartphone application of an obesity prevention and management program for elementary-school students

    Science.gov (United States)

    Lee, Ji Eun; Lee, Da Eun; Kim, Kirang; Shim, Jae Eun; Sung, Eunju; Kang, Jae-Heon

    2017-01-01

    BACKGROUND/OBJECTIVES Easy access to intervention and support for certain behaviors is important for obesity prevention and management. The available technology such as smartphone applications can be used for intervention regarding healthy food choices for obesity prevention and management in elementary-school students. The transtheoretical model (TTM) is comprised of stages and processes of change and can be adopted to tailored education for behavioral change. This study aims to develop TTM-based nutrition contents for mobile applications intended to change eating behaviors related to weight gain in young children. SUBJECTS/METHODS A synthesized algorithm for tailored nutrition messages was developed according to the intake status of six food groups (vegetables, fruits, sugar-sweetened beverages, fast food and instant food, snacks, and late-night snacks), decision to make dietary behavioral changes, and self-confidence in dietary behavioral changes. The messages in this study were developed from December 2014 to April 2015. After the validity evaluation of the contents through expert consultation, tailored nutrition information messages and educational contents were developed based on the TTM. RESULTS Based on the TTM, stages of subjects are determined by their current intake status, decision to make dietary behavioral changes, and self-confidence in dietary behavioral changes. Three versions of tailored nutrition messages at each TTM stage were developed so as to not send the same messages for three weeks at most, and visual materials such as figures and tables were developed to provide additional nutritional information. Finally, 3,276 tailored nutrition messages and 60 nutrition contents for applications were developed. CONCLUSIONS Smartphone applications may be an innovative medium to deliver interventions for eating behavior changes directly to individuals with favorable cost-effectiveness. In addition, using the TTM for tailored nutrition education for

  4. Workflow management based on information management

    NARCIS (Netherlands)

    Lutters, Diederick; Mentink, R.J.; van Houten, Frederikus J.A.M.; Kals, H.J.J.

    2001-01-01

    In manufacturing processes, the role of the underlying information is of the utmost importance. Based on three different types of integration (function, information and control), as well as the theory of information management and the accompanying information structures, the entire product creation

  5. Building an environment model using depth information

    Science.gov (United States)

    Roth-Tabak, Y.; Jain, Ramesh

    1989-01-01

    Modeling the environment is one of the most crucial issues for the development and research of autonomous robots and tele-perception. Though the physical robot operates (navigates and performs various tasks) in the real world, any type of reasoning, such as situation assessment, planning or reasoning about action, is performed based on information in its internal world. Hence, the robot's intentional actions are inherently constrained by the models it has. These models may serve as interfaces between sensing modules and reasoning modules, or, in the case of telerobots, as the interface between the human operator and the distant robot. A robot operating in a known, restricted environment may have a priori knowledge of its whole possible work domain, which will be assimilated in its World Model. As the information in the World Model is relatively fixed, an Environment Model must be introduced to cope with changes in the environment and to allow the exploration of entirely new domains. Introduced here is an algorithm that uses dense range data collected at various positions in the environment to refine, update or generate a 3-D volumetric model of an environment. The model, which is intended for autonomous robot navigation and tele-perception, consists of cubic voxels with the possible attributes Void, Full, and Unknown. Experimental results from simulations of range data in synthetic environments are given. The quality of the results shows great promise for dealing with noisy input data. The performance measures for the algorithm are defined, and quantitative results for noisy data and positional uncertainty are presented.
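The voxel update described can be sketched in one dimension along a single sensed ray; the update rule below is an assumed simplification of the algorithm (no noise handling, one ray, a 1-D grid):

```python
# Assumed update rule: along a sensed ray, voxels closer than the measured
# range become Void (free space), the voxel at the range becomes Full, and
# voxels beyond the hit remain Unknown (occluded from this viewpoint).
UNKNOWN, VOID, FULL = "U", "V", "F"

def update_ray(grid, hit_index):
    """Update a 1-D line of voxels from one range measurement."""
    for i in range(len(grid)):
        if i < hit_index:
            grid[i] = VOID
        elif i == hit_index:
            grid[i] = FULL
        # beyond the hit: occluded, leave the attribute unchanged

grid = [UNKNOWN] * 8
update_ray(grid, hit_index=5)
print("".join(grid))   # VVVVVFUU
```

Repeating this update for dense range data from many sensor positions is what incrementally carves Void and Full regions out of an initially Unknown volume.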

  6. A STUDY ON IMPROVING INFORMATION PROCESSING ABILITIES BASED ON PBL

    Directory of Open Access Journals (Sweden)

    Du Gyu KIM,

    2014-04-01

    Full Text Available This study examined an instruction method for the improvement of information processing abilities in elementary school students. Current elementary students are required to develop information processing abilities to create new knowledge for this digital age. There is, however, a shortage of instruction strategies for these information processing abilities. This research proposes a method for teaching information processing abilities based on a problem-based learning model, which was tested with elementary students. The students developed an improved ability to create new knowledge and to present relationships with information through the process of problem solving. This study performed experimental research by comparing pre- and post-tests with twenty-three fifth-grade elementary students over the course of eight months. The study produced a remarkable improvement in information selection, information reliability, information classification, information analysis, information comparison, and information internalization, and it presents an improved methodology for the teaching of information processing abilities.

  7. The Information Warfare Life Cycle Model

    Directory of Open Access Journals (Sweden)

    Brett van Niekerk

    2011-03-01

    Full Text Available Information warfare (IW) is a dynamic and developing concept which comprises a number of disciplines. This paper aims to develop a life cycle model for information warfare that is applicable to all of the constituent disciplines. The model aims to be scalable and applicable to civilian and military incidents where information warfare tactics are employed. Existing information warfare models are discussed, and a new model is developed from the common aspects of these existing models. The proposed model is then applied to a variety of incidents to test its applicability and scalability. The proposed model is shown to be applicable to multiple disciplines of information warfare and is scalable, thus meeting the objectives of the model.

  8. Modelling the Replication Management in Information Systems

    Directory of Open Access Journals (Sweden)

    Cezar TOADER

    2017-01-01

    Full Text Available In the modern economy, the benefits of Web services are significant because they facilitate the automation of activities in Internet-distributed businesses, as well as cooperation between organizations through the interconnection of processes running in their computer systems. This paper presents the development stages of a model for a reliable information system. It describes the communication between processes within a distributed system based on message exchange, and it presents the problem of distributed agreement among processes. A list of objectives for fault-tolerant systems is defined and a framework model for distributed systems is proposed. This framework distinguishes between management operations and execution operations. The proposed model promotes the use of a central process especially designed for the coordination and control of the other application processes. The execution phases and the protocols for the management and execution components are presented. This model of a reliable system could be a foundation for an entire class of distributed system models based on the management of the replication process.
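The central coordination process described can be sketched as a coordinator that commits an update only after every execution replica acknowledges it; this is a minimal message-exchange illustration under assumed semantics, not the paper's protocol:

```python
# Assumed design: a central management process replicates each update to all
# execution replicas and reports commit only when every replica acknowledges.
class Replica:
    def __init__(self, name):
        self.name, self.data = name, {}

    def apply(self, key, value):
        self.data[key] = value
        return ("ack", self.name)      # acknowledgement message

class Coordinator:
    def __init__(self, replicas):
        self.replicas = replicas

    def replicate(self, key, value):
        # Send the update to every replica and collect their replies
        acks = [r.apply(key, value) for r in self.replicas]
        return all(msg == "ack" for msg, _ in acks)   # agreement reached?

reps = [Replica("r1"), Replica("r2"), Replica("r3")]
coord = Coordinator(reps)
print(coord.replicate("stock", 42))   # True: all replicas acknowledged
print(reps[0].data["stock"])          # 42
```

Separating the coordinator (a management operation) from the replicas' `apply` (an execution operation) mirrors the management/execution distinction the framework makes.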

  9. Building Program Models Incrementally from Informal Descriptions.

    Science.gov (United States)

    1979-10-01

    Technical report SCI.ICS.U.79.2, "Building Program Models Incrementally from Informal Descriptions," by Brian P. McCune, Department of Computer Science, Stanford University, October 1979. Research sponsored by the Defense Advanced Research Projects Agency.

  10. GEOGRAPHIC INFORMATION SYSTEM-BASED MODELING AND ANALYSIS FOR SITE SELECTION OF GREEN MUSSEL, Perna viridis, MARICULTURE IN LADA BAY, PANDEGLANG, BANTEN PROVINCE

    Directory of Open Access Journals (Sweden)

    I Nyoman Radiarta

    2011-06-01

    Full Text Available Green mussel is one of the important species cultured in Lada Bay, Pandeglang. To provide the necessary guidance for green mussel mariculture development, finding a suitable site is an important step. This study was conducted to identify suitable sites for green mussel mariculture development using geographic information system (GIS)-based models. Seven important parameters were grouped into two submodels, namely environmental (water temperature, salinity, suspended solids, dissolved oxygen, and bathymetry) and infrastructural (distance to settlements and to pond aquaculture). Constraint data were used to exclude from the suitability maps the areas where green mussel mariculture cannot be developed, including the area of floating-net fishing activity and the area near the electricity station. Analysis of factors and constraints indicated that about 31% of the potential area with a bottom depth of less than 25 m was highly suitable; this area was shown to have ideal conditions for green mussel mariculture in the study region. This study shows that GIS modelling is a powerful tool for site-selection decision making and can be valuable in solving problems at local, regional, and/or continental scales.
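A GIS-style suitability analysis of this kind can be sketched as a weighted overlay of factor rasters masked by a constraint raster; the grid values, weights, and constraint mask below are invented for illustration, whereas real layers would come from interpolated survey data:

```python
# Toy 2x2 rasters of factor suitability scores (1 = unsuitable .. 4 = most suitable)
temperature = [[4, 3], [2, 4]]
salinity    = [[3, 3], [1, 4]]
bathymetry  = [[4, 2], [1, 3]]
constraint  = [[1, 1], [0, 1]]   # 0 = excluded (e.g. near the electricity station)

# Assumed relative weights for the overlay (must sum to 1)
weights = {"temperature": 0.4, "salinity": 0.3, "bathymetry": 0.3}
layers = {"temperature": temperature, "salinity": salinity, "bathymetry": bathymetry}

# Weighted overlay: per-cell weighted sum of factors, zeroed where constrained
suitability = [
    [
        constraint[i][j] * sum(weights[k] * layers[k][i][j] for k in layers)
        for j in range(2)
    ]
    for i in range(2)
]
print(suitability)
```

Cells scoring near the top of the 1–4 scale after masking correspond to the "most suitable" class reported in the study; the excluded cell stays at zero regardless of its factor scores.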

  11. Estimation of potential loss of two pesticides in runoff in Fillmore County, Minnesota using a field-scale process-based model and a geographic information system

    Science.gov (United States)

    Capel, P.D.; Zhang, H.

    2000-01-01

    In assessing the occurrence, behavior, and effects of agricultural chemicals in surface water, the scales of study (i.e., watershed, county, state, and regional areas) are usually much larger than the scale of agricultural fields, where much of the understanding of processes has been developed. Field-scale areas are characterized by relatively homogeneous conditions. The combination of process-based simulation models and geographic information system technology can be used to help extend our understanding of field processes to water-quality concerns at larger scales. To demonstrate this, the model "Groundwater Loading Effects of Agricultural Management Systems" was used to estimate the potential loss of two pesticides (atrazine and permethrin) in runoff to surface water in Fillmore County in southeastern Minnesota. The county was divided into field-scale areas on the basis of a 100 m by 100 m grid, and the influences of soil type and surface topography on the potential losses of the two pesticides in runoff was evaluated for each individual grid cell. The results could be used for guidance for agricultural management and regulatory decisions, for planning environmental monitoring programs, and as an educational tool for the public.

  12. Summarization of clinical information: a conceptual model.

    Science.gov (United States)

    Feblowitz, Joshua C; Wright, Adam; Singh, Hardeep; Samal, Lipika; Sittig, Dean F

    2011-08-01

    To provide high-quality and safe care, clinicians must be able to optimally collect, distill, and interpret patient information. Despite advances in text summarization, only limited research exists on clinical summarization, the complex and heterogeneous process of gathering, organizing and presenting patient data in various forms. To develop a conceptual model for describing and understanding clinical summarization in both computer-independent and computer-supported clinical tasks. Based on extensive literature review and clinical input, we developed a conceptual model of clinical summarization to lay the foundation for future research on clinician workflow and automated summarization using electronic health records (EHRs). Our model identifies five distinct stages of clinical summarization: (1) Aggregation, (2) Organization, (3) Reduction and/or Transformation, (4) Interpretation and (5) Synthesis (AORTIS). The AORTIS model describes the creation of complex, task-specific clinical summaries and provides a framework for clinical workflow analysis and directed research on test results review, clinical documentation and medical decision-making. We describe a hypothetical case study to illustrate the application of this model in the primary care setting. Both practicing physicians and clinical informaticians need a structured method of developing, studying and evaluating clinical summaries in support of a wide range of clinical tasks. Our proposed model of clinical summarization provides a potential pathway to advance knowledge in this area and highlights directions for further research. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Evaluation of location and number of aid post for sustainable humanitarian relief using agent based modeling (ABM) and geographic information system (GIS)

    Science.gov (United States)

    Khair, Fauzi; Sopha, Bertha Maya

    2017-12-01

    One of the crucial phases in disaster management is the response, or emergency response, phase. It requires a sustainable, well-integrated management system. Any error in the system during this phase will result in a significant increase in the number of victims as well as in the material damage caused. Policies related to the location of aid posts are therefore important decisions. Experience shows many failures in the process of providing assistance to refugees due to a lack of preparation and poor determination of facilities and aid post locations. Therefore, this study aims to evaluate the number and locations of aid posts for the Merapi eruption in 2010. The study uses an integration of Agent Based Modeling (ABM) and Geographic Information System (GIS) to evaluate the number and locations of aid posts under several scenarios. The ABM approach describes the behaviour of the agents (refugees and volunteers), each with their own characteristics, in the event of a disaster, while GIS, with its spatial data, is used to describe the real condition of the Sleman regency road network. The simulation results show that alternative scenarios combining the DERU UGM post, Maguwoharjo Stadium, the Tagana Post and the Pakem Main Post handle and distribute aid to the evacuation barracks better than the initial scenario, leaving fewer unmet demands.

  14. Topic modelling in the information warfare domain

    CSIR Research Space (South Africa)

    De Waal, A

    2013-11-01

    Full Text Available In this paper the authors provide context to Topic Modelling as an Information Warfare technique. Topic modelling is a technique that discovers latent topics in unstructured and unlabelled collection of documents. The topic structure can be searched...

  15. Melvin Defleur's Information Communication Model: Its Application ...

    African Journals Online (AJOL)

    The paper discusses Melvin Defleur's information communication model and its application to archives administration. It provides relevant examples in which archives administration functions involve the communication process. Specific model elements and their application in archives administration are highlighted.

  16. Compilation of information on melter modeling

    International Nuclear Information System (INIS)

    Eyler, L.L.

    1996-03-01

    The objective of the task described in this report is to compile information on modeling capabilities for the High-Temperature Melter and the Cold Crucible Melter and to issue a letter report summarizing existing modeling capabilities. The report includes strategy recommendations for future modeling efforts to support High-Level Waste (HLW) melter development.

  17. Spinal Cord Injury Model System Information Network

    Science.gov (United States)

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network.

  18. Energy Information Data Base: corporate author entries

    International Nuclear Information System (INIS)

    1980-06-01

    One of the controls for information entered into the data bases created and maintained by the DOE Technical Information Center is the standardized name for the corporate entity or the corporate author. The purpose of Energy Information Data Base: Corporate Author Entries is to provide a means for the consistent citing of the names of organizations in bibliographic records. These entries serve as guides for users of the DOE/RECON computerized data bases who want to locate information originating in particular organizations. The entries in this revision include the corporate entries used in report bibliographic citations since 1973 and list approximately 28,000 corporate sources

  19. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Science.gov (United States)

    Kim, Jong Bum; Clayton, Mark J.; Haberl, Jeff S.

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process. PMID:25309954

  20. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  1. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  2. Directory of Energy Information Administration models 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This directory revises and updates the Directory of Energy Information Administration Models 1995, DOE/EIA-0293(95), Energy Information Administration (EIA), U.S. Department of Energy, July 1995. Four models have been deleted in this directory as they are no longer being used: (1) Market Penetration Model for Ground-Water Heat Pump Systems (MPGWHP); (2) Market Penetration Model for Residential Rooftop PV Systems (MPRESPV-PC); (3) Market Penetration Model for Active and Passive Solar Technologies (MPSOLARPC); and (4) Revenue Requirements Modeling System (RRMS).

  3. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian Søndergaard

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world...... are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  4. Energy information data base: subject thesaurus

    International Nuclear Information System (INIS)

    1979-10-01

    The technical staff of the DOE Technical Information Center, during its subject indexing activities, develops and structures a vocabulary that allows consistent machine storage and retrieval of information necessary to the accomplishment of the DOE mission. This thesaurus incorporates that structured vocabulary. The terminology of this thesaurus is used for the subject control of information announced in DOE Energy Research Abstracts, Energy Abstracts for Policy Analysis, Solar Energy Update, Geothermal Energy Update, Fossil Energy Update, Fusion Energy Update, and Energy Conservation Update. This terminology also facilitates subject searching of the DOE energy information data base, a research in progress data base, a general and practical energy information data base, power reactor docket information data base, nuclear science abstracts data base, and the federal energy information data base on the DOE on-line retrieval system, RECON. The rapid expansion of the DOE's activities will result in a concomitant thesaurus expansion as information relating to new activities is indexed. Only the terms used in the indexing of documents at the Technical Information Center to date are included

  5. Marketing information systems in units of business information: a proposed model

    OpenAIRE

    Ana Maria Pereira; Carla Campos Pereira*

    2016-01-01

    Introduction: It proposes a theoretical model of a marketing information system that provides decision-makers in business organizations with information possessing quality attributes such as accuracy, economy, flexibility, reliability, relevance, simplicity and verifiability, based on the systemic vision and marketing theories. Objective: Present a model of marketing information system for business units, identifying the requirements, skills and abilities that the market demands of the libraria...

  6. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
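The fixation rule in the abstract ("fixate next at the location that minimizes uncertainty about the stimulus") is a greedy expected-information-gain criterion. The sketch below is a one-dimensional toy, not the authors' model: a target hides in one of n cells, a fixation yields a binary observation whose reliability falls off with eccentricity, and the next fixation is the one with the largest expected entropy reduction.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def detect_prob(f, i):
    # Chance the detector fires given the target at cell i while the eye
    # fixates cell f; acuity falls off with eccentricity (toy fovea model).
    return 0.1 + 0.8 * math.exp(-abs(f - i))

def expected_gain(prior, f):
    """Expected reduction in uncertainty about the target location
    after one binary observation taken while fixating cell f."""
    n = len(prior)
    p1 = sum(prior[i] * detect_prob(f, i) for i in range(n))
    post1 = [prior[i] * detect_prob(f, i) / p1 for i in range(n)]
    post0 = [prior[i] * (1 - detect_prob(f, i)) / (1 - p1) for i in range(n)]
    return entropy(prior) - (p1 * entropy(post1) + (1 - p1) * entropy(post0))

n = 12
prior = [1 / n] * n  # uniform uncertainty about the target location

# The greedy rule: fixate next where uncertainty drops the most.
best = max(range(n), key=lambda f: expected_gain(prior, f))
print(best, round(expected_gain(prior, best), 3))
```

After each simulated observation the posterior would replace `prior` and the rule would be applied again, producing a fixation sequence.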

  7. The Nature of Information Science: Changing Models

    Science.gov (United States)

    Robinson, Lyn; Karamuftuoglu, Murat

    2010-01-01

    Introduction: This paper considers the nature of information science as a discipline and profession. Method: It is based on conceptual analysis of the information science literature, and consideration of philosophical perspectives, particularly those of Kuhn and Peirce. Results: It is argued that information science may be understood as a field of…

  8. Predictors of consistent condom use based on the Information-Motivation-Behavior Skill (IMB) model among senior high school students in three coastal cities in China.

    Science.gov (United States)

    Cai, Yong; Ye, Xiuxia; Shi, Rong; Xu, Gang; Shen, Lixiao; Ren, Jia; Huang, Hong

    2013-06-04

    High prevalence of risky sexual behaviors and lack of information, skills and preventive support mean that adolescents face high risks of HIV/AIDS. This study applied the information-motivation-behavioral skills (IMB) model to examine the predictors of consistent condom use among senior high school students from three coastal cities in China and to clarify the relationships between the model constructs. A cross-sectional study was conducted to assess HIV/AIDS-related information, motivation, behavioral skills and preventive behaviors among senior high school students in three coastal cities in China. Structural equation modelling (SEM) was used to assess the IMB model. Of the 12,313 participants, 4.5% (95% CI: 4.2-5.0) reported having had premarital sex, and among them 25.0% (95% CI: 21.2-29.1) reported having used a condom in their sexual debut. Only about one-ninth of participants reported consistent condom use. The final IMB model provided acceptable fit to the data (CFI = 0.981, RMSEA = 0.014). Consistent condom use was significantly predicted by motivation (β = 0.175, P < …) among senior high school students in China. The IMB model could predict consistent condom use and suggests that future interventions should focus on improving motivation and behavioral skills.

  9. Parsimonious modeling with information filtering networks

    Science.gov (United States)

    Barfuss, Wolfram; Massara, Guido Previde; Di Matteo, T.; Aste, Tomaso

    2016-12-01

    We introduce a methodology to construct parsimonious probabilistic models. This method makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small subparts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust, even for the estimation of inverse covariance of high-dimensional, noisy, and short time series. Applied to financial data our method results are computationally more efficient than state-of-the-art methodologies such as Glasso producing, in a fraction of the computation time, models that can have equivalent or better performances but with a sparser inference structure. We also discuss performances with sparse factor models where we notice that relative performances decrease with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamical adaptation by partial updating when the properties of some variables change without the need of recomputing the whole model. This makes this approach particularly suitable to handle big data sets with large numbers of variables. Examples of practical application for forecasting, stress testing, and risk allocation in financial systems are also provided.
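The core construction the abstract describes, a global sparse inverse covariance assembled from local inversions on small subparts of the network, can be sketched numerically. In the sketch below the clique/separator structure is chosen by hand for a three-variable chain rather than learned from a filtering network such as the TMFG, so it illustrates the assembly formula, not the full method.

```python
import numpy as np

def local_inverse_precision(S, cliques, separators):
    """Sparse precision (inverse covariance) estimate built as a sum of
    local inverse covariances on cliques minus those on separators."""
    p = S.shape[0]
    J = np.zeros((p, p))
    for c in cliques:
        idx = np.ix_(c, c)
        J[idx] += np.linalg.inv(S[idx])   # local, low-dimensional inversion
    for s in separators:
        idx = np.ix_(s, s)
        J[idx] -= np.linalg.inv(S[idx])   # correct for double-counted overlap
    return J

# Toy chain 0 -> 1 -> 2: variables 0 and 2 interact only through 1,
# so the filtering network has cliques {0,1}, {1,2} with separator {1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
X[:, 1] += X[:, 0]
X[:, 2] += X[:, 1]
S = np.cov(X, rowvar=False)

J = local_inverse_precision(S, cliques=[[0, 1], [1, 2]], separators=[[1]])
print(np.round(J, 2))  # J[0, 2] is exactly zero by construction
```

Because no clique covers the pair {0, 2}, that entry of the precision matrix is never touched: sparsity is imposed by the network structure rather than by a penalty, which is what makes the inversions local, parallelizable, and partially updatable.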

  10. A Process Approach to Information Services: The Information Search Process (ISP) Model

    Directory of Open Access Journals (Sweden)

    Hamid Keshavarz

    2010-12-01

    Full Text Available Information seeking is a behavior that emerges from the interaction between the information seeker and the information system, and it should be regarded as an episodic process in which users' information needs are met and librarians take different roles at different stages. The present article introduces a process approach to information services in libraries using the Carol Collier Kuhlthau model. In this model, information seeking is regarded as a process consisting of six stages, in each of which users have different thoughts, feelings and actions, and librarians correspondingly take different roles at each stage. These six stages are derived from constructivist learning theory and the uncertainty principle. Despite some shortcomings, this model may be regarded as a new approach to rendering modern information services in libraries, especially in relation to new information environments and media.

  11. Levy Random Bridges and the Modelling of Financial Information

    OpenAIRE

    Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea

    2009-01-01

    The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...

  12. Mobile-Based Dictionary of Information and Communication Technology

    Science.gov (United States)

    Liando, O. E. S.; Mewengkang, A.; Kaseger, D.; Sangkop, F. I.; Rantung, V. P.; Rorimpandey, G. C.

    2018-02-01

    This study aims to design and build a mobile-based dictionary of information and communication technology: an application providing access to a glossary of terms from the field of information and communication technologies. The application was built on the Android platform with an SQLite database model. The research follows the prototyping development method, covering the stages of communication, quick planning, quick design modeling, prototype construction, deployment with delivery and feedback, and full system transformation. The application is designed to help users learn and understand new terms and vocabulary encountered in the world of information and communication technology. The resulting mobile-based dictionary can serve as an alternative learning resource; in its present form it already meets the need for a comprehensive and accurate dictionary of information and communication technology.
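A minimal sketch of the storage layer such a glossary app might use (the table and column names here are assumptions for illustration, not taken from the application described):

```python
import sqlite3

# In-memory stand-in for the app's SQLite glossary database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE glossary (term TEXT PRIMARY KEY, definition TEXT)")
conn.executemany(
    "INSERT INTO glossary VALUES (?, ?)",
    [
        ("LAN", "Local Area Network: a network within a limited area."),
        ("DNS", "Domain Name System: maps host names to IP addresses."),
    ],
)

def lookup(prefix):
    """Prefix search, as a dictionary app's search box would issue it."""
    rows = conn.execute(
        "SELECT term, definition FROM glossary WHERE term LIKE ? ORDER BY term",
        (prefix + "%",),
    )
    return rows.fetchall()

print(lookup("L"))  # matches the LAN entry
```

On Android the same queries would run through the platform's SQLite bindings; the parameterized-query pattern shown here carries over directly.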

  13. Information object definition-based unified modeling language representation of DICOM structured reporting: a case study of transcoding DICOM to XML.

    Science.gov (United States)

    Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K P

    2002-01-01

    Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification.

  14. Directory of energy information administration models 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-13

    This updated directory has been published annually; after this issue, it will be published only biennially. The Disruption Impact Simulator Model in use by EIA is included. Model descriptions have been updated according to revised documentation approved during the past year. This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included are 37 EIA models active as of February 1, 1995. The first group is the National Energy Modeling System (NEMS) models. The second group is all other EIA models that are not part of NEMS. Appendix A identifies major EIA modeling systems and the models within these systems. Appendix B is a summary of the "Annual Energy Outlook" Forecasting System.

  15. Perceived threat and corroboration: key factors that improve a predictive model of trust in internet-based health information and advice.

    Science.gov (United States)

    Harris, Peter R; Sillence, Elizabeth; Briggs, Pam

    2011-07-27

    How do people decide which sites to use when seeking health advice online? We can assume, from related work in e-commerce, that general design factors known to affect trust in the site are important, but in this paper we also address the impact of factors specific to the health domain. The current study aimed to (1) assess the factorial structure of a general measure of Web trust, (2) model how the resultant factors predicted trust in, and readiness to act on, the advice found on health-related websites, and (3) test whether adding variables from social cognition models to capture elements of the response to threatening, online health-risk information enhanced the prediction of these outcomes. Participants were asked to recall a site they had used to search for health-related information and to think of that site when answering an online questionnaire. The questionnaire consisted of a general Web trust questionnaire plus items assessing appraisals of the site, including threat appraisals, information checking, and corroboration. It was promoted on the hungersite.com website. The URL was distributed via Yahoo and local print media. We assessed the factorial structure of the measures using principal components analysis and modeled how well they predicted the outcome measures using structural equation modeling (SEM) with EQS software. We report an analysis of the responses of participants who searched for health advice for themselves (N = 561). Analysis of the general Web trust questionnaire revealed 4 factors: information quality, personalization, impartiality, and credible design. In the final SEM model, information quality and impartiality were direct predictors of trust. However, variables specific to eHealth (perceived threat, coping, and corroboration) added substantially to the ability of the model to predict variance in trust and readiness to act on advice on the site. The final model achieved a satisfactory fit: χ2(5) = 10.8 (P = .21), comparative fit …

  16. Perceived Threat and Corroboration: Key Factors That Improve a Predictive Model of Trust in Internet-based Health Information and Advice

    Science.gov (United States)

    Harris, Peter R; Briggs, Pam

    2011-01-01

    Background How do people decide which sites to use when seeking health advice online? We can assume, from related work in e-commerce, that general design factors known to affect trust in the site are important, but in this paper we also address the impact of factors specific to the health domain. Objective The current study aimed to (1) assess the factorial structure of a general measure of Web trust, (2) model how the resultant factors predicted trust in, and readiness to act on, the advice found on health-related websites, and (3) test whether adding variables from social cognition models to capture elements of the response to threatening, online health-risk information enhanced the prediction of these outcomes. Methods Participants were asked to recall a site they had used to search for health-related information and to think of that site when answering an online questionnaire. The questionnaire consisted of a general Web trust questionnaire plus items assessing appraisals of the site, including threat appraisals, information checking, and corroboration. It was promoted on the hungersite.com website. The URL was distributed via Yahoo and local print media. We assessed the factorial structure of the measures using principal components analysis and modeled how well they predicted the outcome measures using structural equation modeling (SEM) with EQS software. Results We report an analysis of the responses of participants who searched for health advice for themselves (N = 561). Analysis of the general Web trust questionnaire revealed 4 factors: information quality, personalization, impartiality, and credible design. In the final SEM model, information quality and impartiality were direct predictors of trust. However, variables specific to eHealth (perceived threat, coping, and corroboration) added substantially to the ability of the model to predict variance in trust and readiness to act on advice on the site. 
The final model achieved a satisfactory fit: χ2(5) = 10.8 (P = .21).

  17. THE MODEL FOR RISK ASSESSMENT ERP-SYSTEMS INFORMATION SECURITY

    Directory of Open Access Journals (Sweden)

    V. S. Oladko

    2016-12-01

    Full Text Available The article deals with the problem of assessing information security risks in ERP systems. ERP-system functions and architecture are studied, and a model of malicious impacts on the levels of the ERP-system architecture is composed. A model-based risk assessment is developed that combines quantitative and qualitative approaches, built on a partial unification of three methods for studying information security risks: the security model with full overlapping, the CRAMM technique and the FRAP technique.

  18. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox' (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar

  19. Information Clustering Based on Fuzzy Multisets.

    Science.gov (United States)

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…
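The fuzzy multiset operations underlying such a clustering model can be illustrated concretely. In a fuzzy multiset an element carries a multiset of membership grades; in the standard form those grades are sorted in decreasing order, and union and intersection are the elementwise max and min of the sorted sequences. The sketch below shows these standard definitions, not the clustering algorithms themselves.

```python
def normal_form(grades, length=None):
    """Sort membership grades in decreasing order, zero-padded to length."""
    vals = sorted(grades, reverse=True)
    if length is not None:
        vals += [0.0] * (length - len(vals))
    return vals

def union(a, b):
    n = max(len(a), len(b))
    return [max(x, y) for x, y in zip(normal_form(a, n), normal_form(b, n))]

def intersection(a, b):
    n = max(len(a), len(b))
    return [min(x, y) for x, y in zip(normal_form(a, n), normal_form(b, n))]

# Hypothetical grades of one term's membership in two document clusters:
# the term occurs twice in cluster A's documents, once in cluster B's.
a = [0.9, 0.4]
b = [0.7]
print(union(a, b))         # [0.9, 0.4]
print(intersection(a, b))  # [0.7, 0.0]
```

Allowing repeated membership grades per element is what lets the model count multiple occurrences of a term or document, which plain fuzzy sets cannot express.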

  20. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  1. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  2. Information retrieval system based on INIS tapes

    International Nuclear Information System (INIS)

    Pultorak, G.

    1976-01-01

    An information retrieval system based on the INIS computer tapes is described. It includes the three main elements of a computerized information system: a data base on a machine-readable medium, a collection of queries representing the information needs addressed to the data base, and a set of programs by which the actual retrieval is carried out according to the user's queries. The system is built for the center's computer, a CDC 3600, and its special features characterize, to a certain degree, the structure of the programs. (author)

  3. A Model for an Electronic Information Marketplace

    Directory of Open Access Journals (Sweden)

    Wei Ge

    2005-11-01

    Full Text Available As the information content on the Internet increases, the task of locating desired information and assessing its quality becomes increasingly difficult. This development makes users more willing to pay for information that is focused on specific issues, verifiable, and available upon request. The nature of the Internet thus opens up the opportunity for information trading: the Internet can be used not only to close the transaction, but also to deliver the product, the desired information, to the user. Early attempts to implement such business models have fallen short of expectations. In this paper, we discuss the limitations of such practices and present a modified business model for information trading, which uses a reverse auction approach together with a multiple-buyer price discovery process.
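The reverse-auction element of the proposed model can be sketched in its simplest form. The field names and the single-buyer simplification below are assumptions for illustration; the paper's full model also involves multiple-buyer price discovery.

```python
# Toy reverse auction: sellers compete to fulfil one buyer's
# information request, and the lowest asking price wins.
def run_reverse_auction(request, bids):
    eligible = [
        b for b in bids
        if b["topic"] == request["topic"] and b["price"] <= request["max_price"]
    ]
    if not eligible:
        return None  # no seller can fulfil the request within budget
    return min(eligible, key=lambda b: b["price"])

request = {"topic": "market-report", "max_price": 50}
bids = [
    {"seller": "A", "topic": "market-report", "price": 40},
    {"seller": "B", "topic": "market-report", "price": 35},
    {"seller": "C", "topic": "patent-search", "price": 10},
]

winner = run_reverse_auction(request, bids)
print(winner["seller"], winner["price"])  # B 35
```

Inverting the usual auction direction this way lets the price be discovered per request, which suits information goods that are produced on demand rather than stocked.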

  4. [Lack of access to information on oral health problems among adults: an approach based on the theoretical model for literacy in health].

    Science.gov (United States)

    Roberto, Luana Leal; Noronha, Daniele Durães; Souza, Taiane Oliveira; Miranda, Ellen Janayne Primo; Martins, Andréa Maria Eleutério de Barros Lima; Paula, Alfredo Maurício Batista De; Ferreira, Efigênia Ferreira E; Haikal, Desirée Sant'ana

    2018-03-01

    This study sought to investigate factors associated with the lack of access to information on oral health among adults. It is a cross-sectional study, carried out among 831 adults (35-44 years of age). The dependent variable was access to information on how to avoid oral problems, and the independent variables were gathered into subgroups according to the theoretical model for literacy in health. Binary logistic regression was carried out, and results were corrected by the design effect. It was observed that 37.5% had no access to information about dental problems. The lack of access was higher among adults who had lower per capita income, were dissatisfied with the dental services provided, did not use dental floss, had unsatisfactory physical control of the quality of life, and self-perceived their oral health as fair/poor/very poor. The likelihood of not having access to information about dental problems among those dissatisfied with the dental services used was 3.28 times higher than for those satisfied with the dental services used. Thus, decreased access to information was related to unfavorable conditions among adults. Health services should ensure appropriate information to their users in order to increase health literacy levels and improve satisfaction and equity.

  5. Agricultural information dissemination using ICTs: A review and analysis of information dissemination models in China

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2016-03-01

Full Text Available Over the last three decades, China’s agriculture sector has been transformed from the traditional to modern practice through the effective deployment of Information and Communication Technologies (ICTs. Information processing and dissemination have played a critical role in this transformation process. Many studies in relation to agriculture information services have been conducted in China, but few of them have attempted to provide a comprehensive review and analysis of different information dissemination models and their applications. This paper aims to review and identify the ICT-based information dissemination models in China and to share the knowledge and experience in applying emerging ICTs in disseminating agriculture information to farmers and farm communities to improve productivity and economic, social and environmental sustainability. The paper reviews and analyzes the development stages of China’s agricultural information dissemination systems and different mechanisms for agricultural information service development and operations. Seven ICT-based information dissemination models are identified and discussed. Success cases are presented. The findings provide a useful direction for researchers and practitioners in developing future ICT-based information dissemination systems. It is hoped that this paper will also help other developing countries to learn from China’s experience and best practice in their endeavor of applying emerging ICTs in agriculture information dissemination and knowledge transfer.

  6. Complementarity of information sent via different bases

    DEFF Research Database (Denmark)

    Wu, Shengjun; Yu, Sixia; Mølmer, Klaus

    2009-01-01

    We discuss quantitatively the complementarity of information transmitted by a quantum system prepared in a basis state in one out of several different mutually unbiased bases (MUBs). We obtain upper bounds on the information available to a receiver who has no knowledge of which MUB was chosen...... by the sender. These upper bounds imply a complementarity of information encoded via different MUBs and ultimately ensure the security in quantum key distribution protocols....

  7. Protection and security of data base information

    Directory of Open Access Journals (Sweden)

    Mariuţa ŞERBAN

    2011-06-01

Full Text Available Data bases are one of the most important components in every large informatics system which stores and processes data and information. Because data bases contain all of the valuable information about a company, its clients, its financial activity, they represent one of the key elements in the structure of an organization, which determines imperatives such as confidentiality, integrity and ease of data access. The current paper discusses the integrity of data bases, which refers to the validity and the coherence of stored data. Usually, integrity is defined in terms of constraints, that is, rules regarding coherence which the data base cannot infringe. Data base integrity thus refers to information correctness and requires detecting, correcting and preventing errors that might affect the data comprised by the data bases.

  8. Information modeling for interoperable dimensional metrology

    CERN Document Server

    Zhao, Y; Brown, Robert; Xu, Xun

    2014-01-01

    This book analyzes interoperability issues in dimensional metrology systems and describes information modeling techniques. Coverage includes theory, techniques and key technologies, and explores new approaches for solving real-world interoperability problems.

  9. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  10. Information Dynamics in Networks: Models and Algorithms

    Science.gov (United States)

    2016-09-13

In this project, we investigated how network structure interplays with higher-level processes in online social networks. Received paper: "A Note on Modeling Retweet Cascades on Twitter," Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.

  11. Information and Communication Technology and School Based ...

    African Journals Online (AJOL)

Information and Communication technology and school based assessment (SBA) is a practice that broadens the form, mode, means and scope of assessment in the school using modern technologies in order to facilitate and enhance learning. This study sought to ascertain the efficacy of Information and Communication ...

  12. THE INFORMATION MODEL «SOCIAL EXPLOSION»

    Directory of Open Access Journals (Sweden)

    Alexander Chernyavskiy

    2012-01-01

Full Text Available The article is dedicated to the examination and analysis of the construction of the information model «social explosion», which corresponds to the newest «colored» revolutions. The analysis of the model makes it possible to identify effective approaches to the initiation of this explosion through the use of contemporary information communications such as cellular networks and the mobile Internet.

  13. Early engagement of stakeholders with individual-based modelling can inform research for improving invasive species management: the round goby as a case study

    DEFF Research Database (Denmark)

    Samson, Emma; Hirsch, Philipp E.; Palmer, Stephen C.

    2017-01-01

    that subsequent models can provide robust insight into potential management interventions. The round goby, Neogobius melanostomus, is currently spreading through the Baltic Sea, with major negative effects being reported in the wake of its invasion. Together with stakeholders, we parameterize an IBM...... to investigate the goby's potential spread pattern throughout the Gulf of Gdansk and the Baltic Sea. Model parameters were assigned by integrating information obtained through stakeholder interaction, from scientific literature, or estimated using an inverse modeling approach when not available. IBMs can provide...... valuable direction to research on invasive species even when there is limited data and/or time available to parameterize/fit them to the degree to which we might aspire in an ideal world. Co-development of models with stakeholders can be used to recognize important invasion patterns, in addition...

  14. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others]

    1997-08-01

This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key of the modelling process is to build a prototype model and to use that model as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) Determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) Horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) Determining the model's direction based on the paleocurrent statistics; (4) Estimating the sandbody communication by photomosaic and profiles; (6) Estimating reservoir properties distribution within sandbody by lithofacies analysis; and (7) Building the reservoir model in sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of Youshashan oil field has been built by using this method.

  15. Energy Information Data Base: serial titles

    International Nuclear Information System (INIS)

    1980-06-01

    The Department of Energy Technical Information Center (TIC) is responsible for creating bibliographic data bases that are used in the announcement and retrieval of publications dealing with all phases of energy. The TIC interactive information processing system makes use of a number of computerized authorities so that consistency can be maintained and indexes can be produced. One such authority is the Energy Information Data Base: Serial Titles. This authority contains the full and abbreviated journal title, country of publication, CODEN, and certain codes. This revision replaces previous revisions of this document

  16. Enterprise Modelling for an Educational Information Infrastructure

    NARCIS (Netherlands)

    Widya, I.A.; Michiels, E.F.; Volman, C.J.A.M.; Pokraev, S.; de Diana, I.P.F.; Filipe, J.; Sharp, B.; Miranda, P.

    2001-01-01

    This paper reports the modelling exercise of an educational information infrastructure that aims to support the organisation of teaching and learning activities suitable for a wide range of didactic policies. The modelling trajectory focuses on capturing invariant structures of relations between

  17. Millennial Students' Mental Models of Information Retrieval

    Science.gov (United States)

    Holman, Lucy

    2009-01-01

    This qualitative study examines first-year college students' online search habits in order to identify patterns in millennials' mental models of information retrieval. The study employed a combination of modified contextual inquiry and concept mapping methodologies to elicit students' mental models. The researcher confirmed previously observed…

  18. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
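
The 2^n argument above can be illustrated with a short sketch (not from the original report; the line model and tolerance values are purely illustrative): enumerating every subset of n data points grows exponentially, while assigning each point to one of a handful of models keeps the work linear and flags points that no model explains.

```python
def count_subsets(n):
    """Number of non-empty subsets of n data points: 2^n - 1."""
    return 2 ** n - 1

def group_by_model(points, models, tolerance=0.5):
    """Assign each (x, y) point to the first line model y = a*x + b that
    predicts it within the tolerance; unexplained points are flagged."""
    assignments = {}
    for x, y in points:
        for name, (a, b) in models.items():
            if abs(y - (a * x + b)) <= tolerance:
                assignments[(x, y)] = name
                break
        else:
            assignments[(x, y)] = "unexplained"  # candidate unusual activity
    return assignments

points = [(0, 0.1), (1, 1.0), (2, 2.2), (3, 9.0)]
models = {"steady-growth": (1.0, 0.0)}
print(count_subsets(20))               # 1048575 subsets for just 20 points
print(group_by_model(points, models))  # (3, 9.0) falls outside every model
```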

  19. Moving Target Information Extraction Based on Single Satellite Image

    Directory of Open Access Journals (Sweden)

    ZHAO Shihu

    2015-03-01

Full Text Available The spatial and time variant effects in high resolution satellite push broom imaging are analyzed. A spatial and time variant imaging model is established. A moving target information extraction method is proposed based on a single satellite remote sensing image. The experiment computes two airplanes' flying speeds using a ZY-3 multispectral image and proves the validity of the spatial and time variant model and the moving target information extraction method.

  20. ICT, INFORMATIONAL INNOVATION AND KNOWLEDGE-BASED ECONOMY

    OpenAIRE

    Mohamed Neffati

    2012-01-01

This paper seeks to explain the important role of new economic information in providing competitive advantages which help economic development. It highlights a new «intelligent model» which produces and chooses the right competitive information as a result of informational innovation, which ensures the maintaining of the stability of economic growth in a new knowledge-based economy. The proposed model describes the major role of information in different kinds of innovations which provide an ...

  1. Implementation of Web-based Information Systems in Distributed Organizations

    DEFF Research Database (Denmark)

    Bødker, Keld; Pors, Jens Kaaber; Simonsen, Jesper

    2004-01-01

    This article presents results elicited from studies conducted in relation to implementing a web-based information system throughout a large distributed organization. We demonstrate the kind of expectations and conditions for change that management face in relation to open-ended, configurable......, and context specific web-based information systems like Lotus QuickPlace. Our synthesis from the empirical findings is related to two recent models, the improvisational change management model suggested by Orlikowski and Hofman (1997), and Gallivan's (2001) model for organizational adoption and assimilation...

  2. Information theory based approaches to cellular signaling.

    Science.gov (United States)

    Waltermann, Christian; Klipp, Edda

    2011-10-01

Cells interact with their environment and they have to react adequately to internal and external changes such as changes in nutrient composition, physical properties like temperature or osmolarity, and other stresses. More specifically, they must be able to evaluate whether the external change is significant or just in the range of noise. Based on multiple external parameters they have to compute an optimal response. Cellular signaling pathways are considered as the major means of information perception and transmission in cells. Here, we review different attempts to quantify information processing on the level of individual cells. We refer to Shannon entropy, mutual information, and informal measures of signaling pathway cross-talk and specificity. Information theory in systems biology has been successfully applied to identification of optimal pathway structures, mutual information and entropy as system response in sensitivity analysis, and quantification of input and output information. While the study of information transmission within the framework of information theory in technical systems is an advanced field with high impact in engineering and telecommunication, its application to biological objects and processes is still restricted to specific fields such as neuroscience, structural and molecular biology. However, in systems biology dealing with a holistic understanding of biochemical systems and cellular signaling only recently a number of examples for the application of information theory have emerged. This article is part of a Special Issue entitled Systems Biology of Microorganisms. Copyright © 2011 Elsevier B.V. All rights reserved.
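
As a concrete illustration of the mutual-information measure the review refers to (a generic sketch, not code from any of the reviewed studies), the following computes I(X;Y) for a discrete joint distribution between a signal input and a pathway output:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))
    for a joint distribution given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Noiseless binary channel: the output copies the input -> I(X;Y) = 1 bit.
noiseless = {(0, 0): 0.5, (1, 1): 0.5}
# Fully noisy channel: output independent of input -> I(X;Y) = 0 bits.
noisy = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(noiseless))  # 1.0
print(mutual_information(noisy))      # 0.0
```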

  3. Information retrieval models foundations and relationships

    CERN Document Server

    Roelleke, Thomas

    2013-01-01

    Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term-Frequency times Inverse-Document-Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best-Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). Also, the early 2000s saw the arrival of divergence from randomness (DFR).Regarding in
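
The TF-IDF weighting in the vector-space model described above can be sketched in a few lines (a simplified variant for illustration; production IR systems typically add smoothing and document-length normalization):

```python
from math import log
from collections import Counter

def tf_idf_vectors(docs):
    """Weight each term by tf(t, d) * log(N / df(t)), one common variant."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: tf * log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda vec: sum(w * w for w in vec.values()) ** 0.5
    nu, nv = norm(u), norm(v)
    return dot / (nu * nv) if nu and nv else 0.0

docs = [["information", "retrieval", "models"],
        ["language", "models"],
        ["information", "theory"]]
vecs = tf_idf_vectors(docs)
print(cosine(vecs[0], vecs[2]))  # shares "information", so similarity > 0
print(cosine(vecs[1], vecs[2]))  # no shared terms, so 0.0
```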

  4. Five-factor model personality disorder prototypes in a community sample: self- and informant-reports predicting interview-based DSM diagnoses.

    Science.gov (United States)

    Lawton, Erin M; Shields, Andrew J; Oltmanns, Thomas F

    2011-10-01

The need for an empirically validated, dimensional system of personality disorders is becoming increasingly apparent. While a number of systems have been investigated in this regard, the five-factor model of personality has demonstrated the ability to adequately capture personality pathology. In particular, the personality disorder prototypes developed by Lynam and Widiger (2001) have been tested in a number of samples. The goal of the present study is to extend this literature by validating the prototypes in a large, representative community sample of later middle-aged adults using both self and informant reports. We found that the prototypes largely work well in this age group. Schizoid, Borderline, Histrionic, Narcissistic, and Avoidant personality disorders demonstrate good convergent validity, with a particularly strong pattern of discriminant validity for the latter four. Informant-reported prototypes show similar patterns to self reports for all analyses. This demonstrates that informants are not succumbing to halo representations of the participants, but are rather describing participants in nuanced ways. Importantly, informant reports add significant predictive validity for Schizoid, Antisocial, Borderline, Histrionic, and Narcissistic personality disorders. Implications of our results and directions for future research are discussed.

  5. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network by abstracting memory function and simulating memory information processing. At first meta-memory is defined to express the neuron or brain cortices based on the biology and graph theories, and we develop an intra-modular network with the modeling algorithm by mapping the node and edge, and then the bi-modular network is delineated with intra-modular and inter-modular. At last a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from information processing view.

  6. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network by abstracting memory function and simulating memory information processing. At first meta-memory is defined to express the neuron or brain cortices based on the biology and graph theories, and we develop an intra-modular network with the modeling algorithm by mapping the node and edge, and then the bi-modular network is delineated with intra-modular and inter-modular. At last a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from information processing view. PMID:27876847

  7. The Culture Based Model: Constructing a Model of Culture

    Science.gov (United States)

    Young, Patricia A.

    2008-01-01

    Recent trends reveal that models of culture aid in mapping the design and analysis of information and communication technologies. Therefore, models of culture are powerful tools to guide the building of instructional products and services. This research examines the construction of the culture based model (CBM), a model of culture that evolved…

  8. Dietary information improves cardiovascular disease risk prediction models.

    Science.gov (United States)

    Baik, I; Cho, N H; Kim, S H; Shin, C

    2013-01-01

Data are limited on cardiovascular disease (CVD) risk prediction models that include dietary predictors. Using known risk factors and dietary information, we constructed and evaluated CVD risk prediction models. Data for modeling were from population-based prospective cohort studies comprised of 9026 men and women aged 40-69 years. At baseline, all were free of known CVD and cancer, and were followed up for CVD incidence during an 8-year period. We used Cox proportional hazard regression analysis to construct a traditional risk factor model, an office-based model, and two diet-containing models and evaluated these models by calculating Akaike information criterion (AIC), C-statistics, integrated discrimination improvement (IDI), net reclassification improvement (NRI) and calibration statistic. We constructed diet-containing models with significant dietary predictors such as poultry, legumes, carbonated soft drinks or green tea consumption. Adding dietary predictors to the traditional model yielded a decrease in AIC (delta AIC=15), a 53% increase in relative IDI, and an improvement in NRI (category-free NRI=0.14); the diet-containing models also showed improvement over the office-based model (category-free NRI=0.08, P<0.01). The calibration plots for risk prediction demonstrated that the inclusion of dietary predictors contributes to better agreement in persons at high risk for CVD. C-statistics for the four models were acceptable and comparable. We suggest that dietary information may be useful in constructing CVD risk prediction models.
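
The AIC comparison reported above follows from AIC = 2k - 2 ln L; the sketch below uses hypothetical log-likelihoods (not the study's data), chosen only to reproduce a delta AIC of 15 when four extra predictors are added:

```python
def aic(k, log_likelihood):
    """Akaike information criterion: AIC = 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical values: a richer model pays a penalty of 2 per extra
# parameter, so its fit must improve enough to lower the AIC overall.
traditional = aic(k=8, log_likelihood=-2500.0)
with_diet = aic(k=12, log_likelihood=-2488.5)  # four extra dietary predictors
delta_aic = traditional - with_diet
print(delta_aic)  # 15.0: a positive delta favours the diet-containing model
```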

  9. Optimal information diffusion in stochastic block models.

    Science.gov (United States)

    Curato, Gianbiagio; Lillo, Fabrizio

    2016-09-01

    We use the linear threshold model to study the diffusion of information on a network generated by the stochastic block model. We focus our analysis on a two-community structure where the initial set of informed nodes lies only in one of the two communities and we look for optimal network structures, i.e., those maximizing the asymptotic extent of the diffusion. We find that, constraining the mean degree and the fraction of initially informed nodes, the optimal structure can be assortative (modular), core-periphery, or even disassortative. We then look for minimal cost structures, i.e., those for which a minimal fraction of initially informed nodes is needed to trigger a global cascade. We find that the optimal networks are assortative but with a structure very close to a core-periphery graph, i.e., a very dense community linked to a much more sparsely connected periphery.
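
The diffusion process described above can be illustrated with a toy implementation (not the authors' code; community sizes, edge probabilities, and the uniform threshold are arbitrary choices): a linear threshold cascade seeded in one block of a two-community stochastic block model.

```python
import random

def linear_threshold(neighbors, seeds, theta=0.5):
    """Each inactive node activates once the fraction of its active
    neighbours reaches its threshold theta; iterate to a fixed point."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in active and nbrs:
                if sum(n in active for n in nbrs) / len(nbrs) >= theta:
                    active.add(node)
                    changed = True
    return active

def sbm(sizes, p_in, p_out, rng):
    """Two-community stochastic block model as an adjacency dict."""
    nodes = list(range(sum(sizes)))
    block = {v: (0 if v < sizes[0] else 1) for v in nodes}
    nbrs = {v: set() for v in nodes}
    for i in nodes:
        for j in nodes:
            if i < j:
                p = p_in if block[i] == block[j] else p_out
                if rng.random() < p:
                    nbrs[i].add(j)
                    nbrs[j].add(i)
    return nbrs

rng = random.Random(42)
g = sbm([20, 20], p_in=0.4, p_out=0.05, rng=rng)
spread = linear_threshold(g, seeds=range(10), theta=0.3)  # seeds in block 0
print(len(spread))  # asymptotic extent of the diffusion
```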

  10. A linguistic model of informed consent.

    Science.gov (United States)

    Marta, J

    1996-02-01

    The current disclosure model of informed consent ignores the linguistic complexity of any act of communication, and the increased risk of difficulties in the special circumstances of informed consent. This article explores, through linguistic analysis, the specificity of informed consent as a speech act, a communication act, and a form of dialogue, following on the theories of J.L. Austin, Roman Jakobson, and Mikhail Bakhtin, respectively. In the proposed model, informed consent is a performative speech act resulting from a series of communication acts which together constitute a dialogic, polyphonic, heteroglossial discourse. It is an act of speech that results in action being taken after a conversation has happened where distinct individuals, multiple voices, and multiple perspectives have been respected, and convention observed and recognized. It is more meaningful and more ethical for both patient and physician, in all their human facets including their interconnectedness.

  11. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  12. Energy Information Data Base: corporate author entries

    International Nuclear Information System (INIS)

    1980-03-01

    One of the controls for information entered into the data bases created and maintained by the DOE Technical Information Center is the standardized name for the corporate entity or the corporate author. The purpose of Energy Information Data Base: Corporate Author Entries (TID-4585-R1) and this supplemental list of authorized or standardized corporate entries is to provide a means for the consistent citing of the names of organizations in bibliographic records. In general, an entry in Corporate Author Entries consists of the seven-digit code number assigned to the particular corporate entity, the two-letter country code, the largest element of the corporate name, the location of the corporate entity, and the smallest element of the corporate name (if provided). This supplement [DOE/TIC-4585-R1(Suppl.5)] contains additions to the base document (TID-4585-R1) and is intended to be used with that publication

  13. Improving information for community-based adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Huq, Saleemul

    2011-10-15

    Community-based adaptation aims to empower local people to cope with and plan for the impacts of climate change. In a world where knowledge equals power, you could be forgiven for thinking that enabling this type of adaptation boils down to providing local people with information. Conventional approaches to planning adaptation rely on 'expert' advice and credible 'science' from authoritative information providers such as the Intergovernmental Panel on Climate Change. But to truly support the needs of local communities, this information needs to be more site-specific, more user-friendly and more inclusive of traditional knowledge and existing coping practices.

  14. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Rasmussen, Martin; Ulrich, Thomas; Groth, Katrina; Smith, Curtis

    2016-01-01

This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
• Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
• Consideration of a PRA context
• Incorporation of a solid psychological basis for operator performance
• Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  15. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rasmussen, Martin [Norwegian Univ. of Science and Technology, Trondheim (Norway). Social Research; Herberger, Sarah [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ulrich, Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
• Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
• Consideration of a PRA context
• Incorporation of a solid psychological basis for operator performance
• Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  16. National Space Science Data Center Information Model

    Science.gov (United States)

    Bell, E. V.; McCaslin, P.; Grayzeck, E.; McLaughlin, S. A.; Kodis, J. M.; Morgan, T. H.; Williams, D. R.; Russell, J. L.

    2013-12-01

    The National Space Science Data Center (NSSDC) was established by NASA in 1964 to provide for the preservation and dissemination of scientific data from NASA missions. It has evolved to support distributed, active archives that were established in the Planetary, Astrophysics, and Heliophysics disciplines through a series of Memoranda of Understanding. The disciplines took over responsibility for working with new projects to acquire and distribute data for community researchers while the NSSDC remained vital as a deep archive. Since 2000, NSSDC has been using the Archive Information Package to preserve data over the long term. As part of its effort to streamline the ingest of data into the deep archive, the NSSDC developed and implemented a data model of desired and required metadata in XML. This process, in use for roughly five years now, has been successfully used to support the identification and ingest of data into the NSSDC archive, most notably those data from the Planetary Data System (PDS) submitted under PDS3. A series of software packages (X-ware) were developed to handle the submission of data from the PDS nodes utilizing a volume structure. An XML submission manifest is generated at the PDS provider site prior to delivery to NSSDC. The manifest ensures the fidelity of PDS data delivered to NSSDC. Preservation metadata is captured in an XML object when NSSDC archives the data. With the recent adoption by the PDS of the XML-based PDS4 data model, there is an opportunity for the NSSDC to provide additional services to the PDS such as the preservation, tracking, and restoration of individual products (e.g., a specific data file or document), which was unfeasible in the previous PDS3 system. 
The NSSDC is modifying and further streamlining its data ingest process to take advantage of the PDS4 model, an important consideration given the ever-increasing amount of data being generated and archived by orbiting missions at the Moon and Mars, other active projects

  17. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. 
The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  18. Scope of Building Information Modeling (BIM) in India

    Directory of Open Access Journals (Sweden)

    Mahua Mukherjee

    2009-01-01

    Full Text Available Design communication is gradually changing from 2D-based drawings to an integrated 3D digital interface. Building Information Modeling (BIM) is a model-based design concept in which buildings are built virtually before they are built out in the field; data models are organized for complete integration of all relevant factors in the building lifecycle, and the information exchange between the AEC (Architects, Engineers, Contractors) professionals is managed so as to strengthen the interaction within the design team. BIM is shared knowledge about the information needed for decision making during a building's lifecycle. There is still much to be learned about the opportunities and implications of this tool. This paper deals with a status check of BIM application in India: a survey has been designed to check the acceptance of BIM to date, while this application is already widely accepted throughout the industry in many countries for managing project information, with capabilities for cost control and facilities management.

  19. International Planetary Data Alliance (IPDA) Information Model

    Science.gov (United States)

    Hughes, John Steven; Beebe, R.; Guinness, E.; Heather, D.; Huang, M.; Kasaba, Y.; Osuna, P.; Rye, E.; Savorskiy, V.

    2007-01-01

    This document is the third deliverable of the International Planetary Data Alliance (IPDA) Archive Data Standards Requirements Identification project. The goal of the project is to identify a subset of the standards currently in use by NASA's Planetary Data System (PDS) that are appropriate for internationalization. As shown in the highlighted sections of Figure 1, the focus of this project is the Information Model component of the Data Architecture Standards, namely the object models, a data dictionary, and a set of data formats.

  20. Road landslide information management and forecasting system based on GIS.

    Science.gov (United States)

    Wang, Wei Dong; Du, Xiang Gang; Xie, Cui Ming

    2009-09-01

    Given the characteristics of road geological hazards and their supervision, it is important to develop a Road Landslide Information Management and Forecasting System based on a Geographic Information System (GIS). The paper presents the system objectives, functions, component modules, and key techniques used in the system development procedure. The system, based on the spatial and attribute information of road geological hazards, was developed and applied in Guizhou, a province of China where there are numerous and typical landslides. Using the system, communication managers can visually query all road landslide information based on the regional road network or on the monitoring network of an individual landslide. Furthermore, the system, integrating mathematical prediction models with GIS's strengths in spatial analysis, can assess and predict the development of a landslide from field monitoring data. Thus, it can efficiently assist road construction and management units in making decisions to control landslides and to reduce human vulnerability.

  1. Automatic transfer functions based on informational divergence.

    Science.gov (United States)

    Ruiz, Marc; Bardera, Anton; Boada, Imma; Viola, Ivan; Feixas, Miquel; Sbert, Mateu

    2011-12-01

    In this paper we present a framework to define transfer functions from a target distribution provided by the user. A target distribution can reflect the importance of the data, a highly relevant data-value interval, or a spatial segmentation. Our approach is based on a communication channel between a set of viewpoints and a set of bins of a volume data set, and it supports 1D as well as 2D transfer functions including gradient information. The transfer functions are obtained by minimizing the informational divergence, or Kullback-Leibler distance, between the visibility distribution captured by the viewpoints and a target distribution selected by the user. The use of the derivative of the informational divergence allows for a fast optimization process. Different target distributions for 1D and 2D transfer functions are analyzed together with importance-driven and view-based techniques. © 2010 IEEE
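The core quantity in the abstract above is the Kullback-Leibler divergence between a visibility distribution and a user-selected target distribution. The following sketch is illustrative only (the distributions and names are invented, not taken from the paper) and shows how that divergence would rank two candidate transfer functions:

```python
# Hypothetical sketch: compare the visibility distributions induced by two
# candidate transfer functions against a user-defined target distribution
# using the Kullback-Leibler divergence. All numbers are illustrative.

import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i), smoothed to tolerate zeros."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Target: the user wants bin 2 (e.g. a relevant data-value interval) to dominate.
target = [0.1, 0.1, 0.7, 0.1]

# Visibility distributions captured from the viewpoints under two transfer functions.
visibility_a = [0.25, 0.25, 0.25, 0.25]  # uniform: ignores the target
visibility_b = [0.10, 0.15, 0.65, 0.10]  # close to the target

# The optimization described in the abstract would prefer the transfer
# function whose visibility distribution diverges less from the target.
assert kl_divergence(visibility_b, target) < kl_divergence(visibility_a, target)
```

The paper's method additionally uses the derivative of this divergence to drive a fast optimization; the sketch only shows the objective being minimized.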

  2. A contextual information based scholarly paper recommender ...

    African Journals Online (AJOL)

    A contextual information based scholarly paper recommender system using a big data platform. ... Journal of Fundamental and Applied Sciences ... To implement the system, Hadoop and parallel programming were used, because the volume of data qualified as big data and the time was also an important ...

  3. Language-based multimedia information retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, J.L.; Hiemstra, Djoerd; Netter, K.

    2000-01-01

    This paper describes various methods and approaches for language-based multimedia information retrieval, which have been developed in the projects POP-EYE and OLIVE and which will be developed further in the MUMIS project. All of these projects aim at supporting automated indexing of video material

  4. Study on geo-information modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana

    2006-01-01

    Roč. 5, č. 5 (2006), s. 1108-1113 ISSN 1109-2777 Institutional research plan: CEZ:AV0Z10750506 Keywords: control GIS * geo-information modelling * uncertainty * spatial temporal approach * Web Services Subject RIV: BC - Control Systems Theory

  5. Using Interaction Scenarios to Model Information Systems

    DEFF Research Database (Denmark)

    Bækgaard, Lars; Bøgh Andersen, Peter

    The purpose of this paper is to define and discuss a set of interaction primitives that can be used to model the dynamics of socio-technical activity systems, including information systems, in a way that emphasizes structural aspects of the interaction that occurs in such systems. The primitives...

  6. Asset Condition, Information Systems and Decision Models

    CERN Document Server

    Willett, Roger; Brown, Kerry; Mathew, Joseph

    2012-01-01

    Asset Condition, Information Systems and Decision Models, is the second volume of the Engineering Asset Management Review Series. The manuscripts provide examples of implementations of asset information systems as well as some practical applications of condition data for diagnostics and prognostics. The increasing trend is towards prognostics rather than diagnostics, hence the need for assessment and decision models that promote the conversion of condition data into prognostic information to improve life-cycle planning for engineered assets. The research papers included here serve to support the on-going development of Condition Monitoring standards. This volume comprises selected papers from the 1st, 2nd, and 3rd World Congresses on Engineering Asset Management, which were convened under the auspices of ISEAM in collaboration with a number of organisations, including CIEAM Australia, Asset Management Council Australia, BINDT UK, and Chinese Academy of Sciences, Beijing University of Chemical Technology, Chin...

  7. Modeling behavioral considerations related to information security.

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Moyano, I. J.; Conrad, S. H.; Andersen, D. F. (Decision and Information Sciences); (SNL); (Univ. at Albany)

    2011-01-01

    The authors present experimental and simulation results of an outcome-based learning model for the identification of threats to security systems. This model integrates judgment, decision-making, and learning theories to provide a unified framework for the behavioral study of upcoming threats.

  8. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified that are developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to the management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria, related to displaying semantic relationships between concepts and communication with terminology servers, had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.

  9. Trust-based information system architecture for personal wellness.

    Science.gov (United States)

    Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd

    2014-01-01

    Modern eHealth, ubiquitous health, and personal wellness systems operate in an insecure and ubiquitous information space where no predefined trust exists. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider, and to use it to design personal privacy policies for the trustworthy use of health and wellness services. For trust calculation, a novel set of measurable context-aware and health information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service provider specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method based on sequential steps, which makes it possible to combine the results of the analysis of privacy and trust concerns with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g. trust calculation, decision support, policy management, and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.

  10. Architectural approaches for HL7-based health information systems implementation.

    Science.gov (United States)

    López, D M; Blobel, B

    2010-01-01

    Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology that approaches different aspects of systems architecture, such as the business, information, computational, engineering, and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server, and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue in any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model, offered by HIS-DF and supported in HL7 v3 artifacts, is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented, and standard-based health information systems.

  11. METHODOLOGICAL APPROACH TO ANALYSIS AND EVALUATION OF INFORMATION PROTECTION IN INFORMATION SYSTEMS BASED ON VULNERABILITY DANGER

    Directory of Open Access Journals (Sweden)

    Y. M. Krotiuk

    2008-01-01

    Full Text Available The paper considers a methodological approach to the analysis and estimation of information security in information systems, based on the analysis of vulnerabilities and the extent of their danger. By vulnerability danger is meant the complexity of exploiting the vulnerability as part of an information system. The necessary and sufficient conditions for exploiting a vulnerability are determined in the paper. The paper proposes a generalized model of attack realization, which is used as a basis for constructing an attack realization model for the exploitation of a particular vulnerability. A criterion for estimating information protection in information systems, based on the estimation of vulnerability danger, is formulated in the paper. The proposed approach makes it possible to obtain a quantitative estimate of information system security on the basis of the proposed schemes for realizing typical attacks against the distinguished classes of vulnerabilities. The methodological approach is used for choosing among variants for implementing protection mechanisms in information systems, as well as for estimating information safety in operating information systems.

  12. A dictionary based informational genome analysis

    Directory of Open Access Journals (Sweden)

    Castellini Alberto

    2012-09-01

    Full Text Available Abstract Background In the post-genomic era, several methods of computational genomics are emerging to understand how the whole information is structured within genomes. The literature of the last five years accounts for several alignment-free methods, which have arisen as alternative metrics for the dissimilarity of biological sequences. Among others, recent approaches are based on the empirical frequencies of DNA k-mers in whole genomes. Results Any set of words (factors) occurring in a genome provides a genomic dictionary. About sixty genomes were analyzed by means of informational indexes based on genomic dictionaries, where a systemic view replaces local sequence analysis. A software prototype applying the methodology outlined here carried out computations on genomic data. We computed informational indexes and built genomic dictionaries of different sizes, along with frequency distributions. The software performed three main tasks: computation of informational indexes, storage of these in a database, and index analysis and visualization. Validation was done by investigating the genomes of various organisms. A systematic analysis of genomic repeats of several lengths, which is of considerable interest in biology (for example, to detect over-represented functional sequences such as promoters), was discussed, and suggested a method to define synthetic genetic networks. Conclusions We introduced a methodology based on dictionaries, and an efficient motif-finding software application for comparative genomics. This approach could be extended along many lines of investigation, namely exported to other contexts of computational genomics, as a basis for the discrimination of genomic pathologies.
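The basic object in the abstract above is a "genomic dictionary": the set of k-mers occurring in a genome together with their empirical frequencies, from which informational indexes can be computed. The sketch below is illustrative (it is not the authors' software) and uses the entropy of the k-mer distribution as one simple example of such an index:

```python
# Illustrative sketch: build a k-mer dictionary from a DNA sequence and
# compute the Shannon entropy of its empirical frequency distribution,
# one example of an alignment-free informational index.

import math
from collections import Counter

def kmer_dictionary(sequence, k):
    """Return a Counter mapping each k-mer (factor of length k) to its count."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def kmer_entropy(sequence, k):
    """Shannon entropy (bits) of the empirical k-mer frequency distribution."""
    counts = kmer_dictionary(sequence, k)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

genome = "ACGTACGTACGA"          # toy sequence, 12 bases
d = kmer_dictionary(genome, 3)
assert d["ACG"] == 3             # "ACG" occurs at positions 0, 4 and 8
assert sum(d.values()) == 10     # 12 - 3 + 1 sliding windows
```

Comparing such indexes and dictionary frequency distributions across whole genomes, rather than aligning sequences locally, is the "systemic view" the abstract refers to.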

  13. Information literacy for community health agent training in Brazil: a proposal for mediation based on the extensive and collaborative models - DOI: 10.3395/reciis.v3i3.283en

    Directory of Open Access Journals (Sweden)

    Elmira Luzia Melo Soares Simeão

    2009-09-01

    Full Text Available This article presents a methodology based on the extensive communication model (SIMEÃO, 2006) and Alfin, an acronym published by Unesco for the process of information literacy acquisition, in research applied to the communication context of health information in Brazil. This paper assesses the mediation of Community Health Agents (ACS) in their work in the Brazilian Unified Health System (SUS) through the development of Alfin training workshops. In testing whether the proposal is applicable in a professional workspace that is more open, with more informal communication relationships, the research group observes the Community Health Agent (ACS), a supporting professional and the main mediator in the Family Health Care Program, and his/her performance as a communicator. The hypothesis is based on the following proposition: once trained by specialists in the fields of technology, information, and communication, the ACS will be able to act as mediators with a broader view of communication. The study also aims to identify the sources of information used by the ACS and the prospects of expanding such sources after training in the Alfin workshops. The content produced in the workshops will also be a subject of study and of theoretical and methodological discussion; this contributes to broadening the proposition of the extensive communication model.

  14. Web-based Construction Information Management System

    Directory of Open Access Journals (Sweden)

    David Scott

    2012-11-01

    Full Text Available Centralised information systems that are accessible to all parties in a construction project are powerful tools in the quest to improve efficiency and to enhance the flow of information within the construction industry. This report points out the maturity of the necessary IT technology, and the availability and suitability of existing commercial products. Some of these products have been studied and analysed. An evaluation and selection process based on the functions offered by the products and their utility is presented. A survey of local construction personnel was used to collect the typical weighting data and performance criteria used in the evaluation process.

  15. Recommender system based on scarce information mining.

    Science.gov (United States)

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

    Guessing what a user may like is now a typical interface for video recommendation. Nowadays, highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomenon of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation called the Dirichlet mixture probit model for information scarcity (DPIS) is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user part for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity, and diversity in coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Mixing Formal and Informal Model Elements for Tracing Requirements

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Ladenberger, Lukas

    2011-01-01

    Tracing between informal requirements and formal models is challenging. A method for such tracing should permit efficient handling of changes to both the requirements and the model. A particular challenge is posed by the persisting interplay of formal and informal elements. In this paper, we describe an incremental approach to requirements validation and systems modelling. Formal modelling facilitates a high degree of automation: it serves for validation and traceability. The foundation for our approach is requirements that are structured according to the WRSPM reference model. We provide a system for traceability with a state-based formal method that supports refinement. We do not require all specification elements to be modelled formally, and we support incremental incorporation of new specification elements into the formal model. Refinement is used to deal with larger amounts of requirements...

  17. Conceptual Modeling of Events as Information Objects and Change Agents

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    Traditionally, semantic data models have not supported the modeling of behavior. We present an event modeling approach that can be used to extend semantic data models like the entity-relationship model and the functional data model. We model an event as a two-sided phenomenon that is seen as a totality of an information object and a change agent. When an event is modeled as an information object it is comparable to an entity that exists only at a specific point in time. It has attributes and can be used for querying and specification of constraints. When an event is modeled as a change agent it is comparable to an executable transaction schema. Finally, we briefly compare our approach to object-oriented approaches based on encapsulated objects.

  18. Modeling Information-Seeking Dialogues: The Conversational Roles (COR) Model.

    Science.gov (United States)

    Sitter, Stefan; Stein, Adelheit

    1996-01-01

    Introduces a generic, application-independent model of human-computer information-seeking dialog, the Conversational Roles (COR) Model, and reviews the theoretical background. COR is represented as a recursive state-transition-network that determines legitimate types and possible sequences of dialog acts, and categorizes dialog acts on the basis…

  19. Ontology-based information standards development

    OpenAIRE

    Heravi, Bahareh Rahmanzadeh

    2012-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Standards may be argued to be important enablers for achieving interoperability as they aim to provide unambiguous specifications for error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for th...

  20. Acceptance model of a Hospital Information System.

    Science.gov (United States)

    Handayani, P W; Hidayanto, A N; Pinem, A A; Hapsari, I C; Sandhyaduhita, P I; Budi, I

    2017-03-01

    The purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance. This study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0. The study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy) and organizational characteristics (i.e. management support, facilitating conditions, and user involvement), which have a level of significance of p < 0.05, should be considered alongside technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Brain parcellation based on information theory.

    Science.gov (United States)

    Bonmati, Ester; Bardera, Anton; Boada, Imma

    2017-11-01

    In computational neuroimaging, brain parcellation methods subdivide the brain into individual regions that can be used to build a network to study its structure and function. Using anatomical or functional connectivity, hierarchical clustering methods aim to offer a meaningful parcellation of the brain at each level of granularity. However, some of these methods have only been applied to small regions and strongly depend on the similarity measure used to merge regions. The aim of this work is to present a robust whole-brain hierarchical parcellation that preserves the global structure of the network. Brain regions are modeled as a random walk on the connectome. From this model, a Markov process is derived, in which the different nodes represent brain regions and in which the structure can be quantified. Functional or anatomical brain regions are clustered by using an agglomerative information bottleneck method that minimizes the overall loss of information about the structure by using mutual information as a similarity measure. The method is tested with synthetic models and with structural and functional human connectomes, and is compared with classic k-means. Results show that the parcellated networks preserve the main properties and are consistent across subjects. This work provides a new framework to study the human connectome using functional or anatomical connectivity at different levels. Copyright © 2017 Elsevier B.V. All rights reserved.
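The starting point of the method in the abstract above is modeling brain regions as a random walk on the connectome. A minimal, illustrative sketch of that step (the toy weights below are invented, not from the paper) turns a weighted adjacency matrix into a Markov transition matrix by row-normalization:

```python
# Illustrative sketch: derive a Markov transition matrix from a weighted
# connectome. Entry P[i][j] is the probability that a random walker at
# region i steps to region j. The 3-region connectome is a toy example.

def transition_matrix(adjacency):
    """Row-normalize a weighted adjacency matrix into transition probabilities."""
    result = []
    for row in adjacency:
        total = sum(row)
        result.append([w / total for w in row])
    return result

# Toy connectome: 3 regions with symmetric connection weights.
connectome = [
    [0.0, 2.0, 1.0],
    [2.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
]

P = transition_matrix(connectome)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # each row is a distribution
assert P[0][1] == 2.0 / 3.0  # region 0 walks to region 1 with probability 2/3
```

The agglomerative information bottleneck step described in the abstract then merges regions of this Markov chain so as to minimize the loss of mutual information about the walk's structure; that optimization is beyond this sketch.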

  2. BIM-Based Construction Information Management Framework for Site Information Management

    Directory of Open Access Journals (Sweden)

    Dong-Gun Lee

    2018-01-01

Full Text Available Projects in the construction industry are becoming increasingly large and complex, with construction technologies, methods, and the like developing rapidly. Construction projects generate many different types of information. In particular, the construction phase requires the input of many resources and generates a diverse set of information. While a variety of IT techniques are being deployed for information management during the construction phase, measures to create databases of such information and to link the different types of information together are still insufficient. As such, this study aims to suggest a construction information database system based on BIM technology to enable the comprehensive management of site information generated during the construction phase. This study analyzed the information generated from construction sites and proposed a categorization system for structuring the generated information, along with a database model for storing such structured information. Through such efforts, it was confirmed that such a database system can be used for accumulating and using construction information; it is believed that, in the future, the continual accumulation and management of construction information will allow for corporate-level accumulation of knowledge as opposed to the individual accumulation of know-how.

  3. Organization model and formalized description of nuclear enterprise information system

    International Nuclear Information System (INIS)

    Yuan Feng; Song Yafeng; Li Xudong

    2012-01-01

The organization model is one of the most important models of a Nuclear Enterprise Information System (NEIS). A scientific and reasonable organization model is a prerequisite for the robustness and extendibility of NEIS, and is also the foundation for the integration of heterogeneous systems. Firstly, the paper describes the conceptual model of the NEIS using an ontology chart, which provides a consistent semantic framework of the organization. Then it discusses the relations between the concepts in detail. Finally, it gives a formalized description of the organization model of NEIS based on a six-tuple array. (authors)

  4. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
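
The Bayesian recalculation of component availability mentioned above can be illustrated with a minimal sketch. The component states, test error rates, and function name below are hypothetical, not taken from the paper.

```python
def bayes_update(prior, likelihood):
    """Recompute state probabilities when new evidence arrives.

    prior: {state: P(state)}, likelihood: {state: P(evidence | state)}.
    """
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# A component believed 95% available; a diagnostic test reports a
# failure.  Suppose the test misses real faults 10% of the time and
# raises false alarms 5% of the time (all numbers hypothetical):
prior = {"available": 0.95, "failed": 0.05}
says_failed = {"available": 0.05, "failed": 0.90}   # P(alarm | state)
posterior = bayes_update(prior, says_failed)
```

Even a single alarm from a fairly reliable test shifts the availability figure dramatically, which is exactly the kind of recalculation a decision element would feed into its strategy planning.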

  5. Modeling Interoperable Information Systems with 3LGM² and IHE.

    Science.gov (United States)

    Stäubert, S; Schaaf, M; Jahn, F; Brandner, R; Winter, A

    2015-01-01

Strategic planning of information systems (IS) in healthcare requires descriptions of the current and the future IS state. Enterprise architecture planning (EAP) tools like the 3LGM² tool help to build up and to analyze IS models. A model of the planned architecture can be derived from an analysis of current state IS models. Building an interoperable IS, i. e. an IS consisting of interoperable components, can be considered a relevant strategic information management goal for many IS in healthcare. Integrating the healthcare enterprise (IHE) is an initiative which targets interoperability by using established standards. To link IHE concepts to 3LGM² concepts within the 3LGM² tool. To describe how an information manager can be supported in handling the complex IHE world and planning interoperable IS using 3LGM² models. To describe how developers or maintainers of IHE profiles can be supported by the representation of IHE concepts in 3LGM². Conceptualization and concept mapping methods are used to assign IHE concepts such as domains, integration profiles, actors and transactions to the concepts of the three-layer graph-based meta-model (3LGM²). IHE concepts were successfully linked to 3LGM² concepts. An IHE-master-model, i. e. an abstract model for IHE concepts, was modeled with the help of the 3LGM² tool. Two IHE domains were modeled in detail (ITI, QRPH). We describe two use cases for the representation of IHE concepts and IHE domains as 3LGM² models. Information managers can use the IHE-master-model as a reference model for modeling interoperable IS based on IHE profiles during EAP activities. IHE developers are supported in analyzing the consistency of IHE concepts with the help of the IHE-master-model and the functions of the 3LGM² tool. The complex relations between IHE concepts can be modeled by using the EAP method 3LGM². The 3LGM² tool offers visualization and analysis features which are now available for the IHE-master-model. Thus information managers and IHE

  6. Location-based health information services: a new paradigm in personalised information delivery

    Directory of Open Access Journals (Sweden)

    Boulos Maged

    2003-01-01

Full Text Available Abstract Brute health information delivery to various devices can be easily achieved these days, making health information instantly available whenever it is needed and nearly anywhere. However, brute health information delivery risks overloading users with unnecessary information that does not answer their actual needs, and might even act as noise, masking any other useful and relevant information delivered with it. Users' profiles and needs are definitely affected by where they are, and this should be taken into consideration when personalising and delivering information to users in different locations. The main goal of location-based health information services is to allow better presentation of the distribution of health and healthcare needs and Internet resources answering them across a geographical area, with the aim to provide users with better support for informed decision-making. Personalised information delivery requires the acquisition of high quality metadata about not only information resources, but also information service users, their geographical location and their devices. Throughout this review, experience from a related online health information service, HealthCyberMap http://healthcybermap.semanticweb.org/, is referred to as a model that can be easily adapted to other similar services. HealthCyberMap is a Web-based directory service of medical/health Internet resources exploring new means to organise and present these resources based on consumer and provider locations, as well as the geographical coverage or scope of indexed resources. The paper also provides a concise review of location-based services, technologies for detecting user location (including IP geolocation), and their potential applications in health and healthcare.

  7. Fast mutual-information-based contrast enhancement

    Science.gov (United States)

    Cao, Gang; Yu, Lifang; Tian, Huawei; Huang, Xianglin; Wang, Yongbin

    2017-07-01

Recently, T. Celik proposed an effective image contrast enhancement (CE) method based on spatial mutual information and PageRank (SMIRANK). According to state-of-the-art evaluation criteria, it achieves the best visual enhancement quality among existing global CE methods. However, SMIRANK runs much slower than other counterparts, such as histogram equalization (HE) and adaptive gamma correction. Low computational complexity is also required for good CE algorithms. In this paper, we propose a fast SMIRANK algorithm, called FastSMIRANK. It integrates both spatial and gray-level downsampling into the generation of the pixel value mapping function. Moreover, the computation of rank vectors is sped up by replacing PageRank with a simple yet efficient row-based operation on the mutual information matrix. Extensive experimental results show that the proposed FastSMIRANK accelerates SMIRANK by about 20 times and is even faster than HE, while preserving comparable enhancement quality.
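
The row-based idea can be illustrated with a much-simplified, hypothetical sketch. This is not the published SMIRANK/FastSMIRANK algorithm: it is only a toy that builds a mutual-information matrix over quantized neighbouring gray levels and turns row weights into a monotone intensity mapping (the row-sum weighting is a crude stand-in for the PageRank step).

```python
from math import log2
from collections import Counter

def mi_matrix(img, levels=8):
    # Joint statistics of horizontally adjacent, quantized gray levels.
    joint = Counter()
    for row in img:
        q = [min(v * levels // 256, levels - 1) for v in row]
        for a, b in zip(q, q[1:]):
            joint[(a, b)] += 1
    n = sum(joint.values())
    pa, pb = Counter(), Counter()
    for (a, b), c in joint.items():
        pa[a] += c
        pb[b] += c
    m = [[0.0] * levels for _ in range(levels)]
    for (a, b), c in joint.items():
        # Pointwise mutual-information contribution of the pair (a, b).
        m[a][b] = (c / n) * log2(c * n / (pa[a] * pb[b]))
    return m

def mapping_from_rows(m, out_max=255):
    # Row sums act as per-gray-level weights; the output intensity
    # range is allocated to each level in proportion to its weight.
    weights = [max(sum(row), 0.0) + 1e-9 for row in m]
    total = sum(weights)
    lut, acc = [], 0.0
    for w in weights:
        acc += w
        lut.append(round(out_max * acc / total))
    return lut  # lut[quantized gray level] -> output intensity
```

The resulting lookup table is monotone by construction, so the mapping enhances contrast without inverting gray-level order.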

  8. Multiagent Based Information Dissemination in Vehicular Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    S.S. Manvi

    2009-01-01

Full Text Available Vehicular Ad hoc Networks (VANETs) are a compelling application of ad hoc networks, because of the potential to access specific context information (e.g. traffic conditions, service updates, route planning) and deliver multimedia services (Voice over IP, in-car entertainment, instant messaging, etc.). This paper proposes an agent-based information dissemination model for VANETs. A two-tier agent architecture is employed, comprising: 1) 'lightweight', network-facing, mobile agents; 2) 'heavyweight', application-facing, norm-aware agents. The limitations of VANETs lead us to consider a hybrid wireless network architecture that includes Wireless LAN/Cellular and ad hoc networking for analyzing the proposed model. The proposed model provides flexibility, adaptability and maintainability for traffic information dissemination in VANETs as well as supports robust and agile network management. The proposed model has been simulated in various network scenarios to evaluate the effectiveness of the approach.

  9. Quality of Web-based information systems

    OpenAIRE

    Kazimierz Worwa; Jerzy Stanik

    2010-01-01

The scope and complexity of current World Wide Web applications vary widely: from small-scale, short-lived services to large-scale enterprise applications distributed across the Internet and corporate intranets and extranets. As Web applications have evolved, the demands placed on Web-based systems and the complexity of designing, developing, maintaining, and managing these systems have also increased significantly. They provide vast, dynamic information in multiple media ...

  10. Mutual information in the Tangled Nature Model

    DEFF Research Database (Denmark)

    Jones, Dominic; Sibani, Paolo

    2010-01-01

We consider the concept of mutual information in ecological networks, and use this idea to analyse the Tangled Nature model of co-evolution. We show that this measure of correlation has two distinct behaviours depending on how we define the network in question: if we consider only the network of viable species this measure increases, whereas for the whole system it decreases. It is suggested that these are complementary behaviours that show how ecosystems can become both more stable and better adapted.
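
Mutual information between two occupancy records can be computed directly from their joint histogram. The following is a minimal, generic sketch (not the Tangled Nature code); the example series are invented.

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from two equal-length symbol sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Two perfectly coupled presence/absence records share one full bit
# of information; a constant partner shares none:
a = [0, 1, 0, 1, 0, 1, 0, 1]
coupled = mutual_information(a, a)
independent = mutual_information(a, [0] * len(a))
```

Summing such pairwise terms over the species of a network is one simple way to quantify how correlated the ecosystem's components are.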

  11. Geographical information modelling for land resource survey

    OpenAIRE

    Bruin, de, S.

    2000-01-01

    The increasing popularity of geographical information systems (GIS) has at least three major implications for land resources survey. Firstly, GIS allows alternative and richer representation of spatial phenomena than is possible with the traditional paper map. Secondly, digital technology has improved the accessibility of ancillary data, such as digital elevation models and remotely sensed imagery, and the possibilities of incorporating these into target database production. Thirdly, owing to...

  12. Formal Information Model for Representing Production Resources

    OpenAIRE

    Siltala, Niko; Järvenpää, Eeva; Lanz, Minna

    2017-01-01

    Part 2: Intelligent Manufacturing Systems; International audience; This paper introduces a concept and associated descriptions to formally describe physical production resources for modular and reconfigurable production systems. These descriptions are source of formal information for (automatic) production system design and (re-)configuration. They can be further utilized during the system deployment and execution. The proposed concept and the underlying formal resource description model is c...

  13. The examination of an information-based approach to trust

    NARCIS (Netherlands)

    Verbrugge, Rineke; Sierra, Carles; Debenham, John; Harbers, Maaike; Sichman, JS; Padget, J; Ossowski, S; Noriega, P

    2008-01-01

This article presents the results of experiments performed with agents based on an operationalization of an information-theoretic model for trust. Experiments have been performed with the ART test-bed, a test domain for trust and reputation aiming to provide transparent and recognizable standards. An

  14. Autocorrelation and Regularization of Query-Based Information Retrieval Scores

    Science.gov (United States)

    2008-02-01

  15. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    Science.gov (United States)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of `HAMIE the robot' who offers varying levels of information understanding dependent on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  16. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are under deployment, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application to optimize electricity demand by curtailing/shifting power load when peak load occurs. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources with the growth of the Smart Grid information space. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor buildings' heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of the diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.
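
The integration idea can be sketched with a minimal, hypothetical triple store: plain (subject, predicate, object) tuples stand in for RDF triples, and all names and numbers below are invented for illustration, not part of the paper's model.

```python
class TripleStore:
    """Toy stand-in for an RDF store holding (subject, predicate, object) facts."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a variable in a SPARQL pattern.
        return [t for t in self.triples
                if s in (None, t[0]) and p in (None, t[1])
                and o in (None, t[2])]

store = TripleStore()
store.add("meter:42", "reportsLoadKW", 18)        # smart meter reading
store.add("meter:42", "servesBuilding", "bldg:7") # asset metadata
store.add("forecast:tue", "predictsPeakHour", 17) # weather/peak forecast

# Join the two sources: which building does the high-load meter serve?
high = [s for s, _, kw in store.query(p="reportsLoadKW") if kw > 10]
buildings = [o for _, _, o in store.query(s=high[0], p="servesBuilding")]
```

Because every source contributes plain triples, a DR application can join meter readings, asset metadata and forecasts with one uniform query mechanism, which is the essence of the semantic-integration argument.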

  17. Design and analysis of information model hotel complex

    Directory of Open Access Journals (Sweden)

    Garyaev Nikolai

    2016-01-01

Full Text Available The article analyzes innovations in 3D modeling and the development of process design approaches based on visualization of information technology and computer-aided design systems. The problems arising in modern design, and approaches to address them, are also discussed.

  18. Infographic Modeling Based on 3d Laser Surveying for Informed Universal Design in Archaeological Areas: the Case of Oppidum of the Ancient City of Tusculum

    Science.gov (United States)

    Cemoli, L.; D'Auria, S.; De Silla, F.; Pucci, S.; Strollo, R. M.

    2017-08-01

    The valorisation of archaeological sites represents a fundamental action for the social and economic development of a country. An archaeological park is often a territory characterized by significant testimonies of antiquity of great landscape value. For this reason, it should be configured as an authentic outdoor museum, enriched by natural, environmental, architectural and urban components. In order to fulfill these requirements, it is fundamental the elaboration of a coherent scientific project of preservation, fruition and valorisation of the area, which merge the different components necessary for the establishment of an archaeological museum-park. One of the most critical aspects related to the fruition of archaeological sites is the accessibility to areas and routes, not always - if ever - designed for people with reduced mobility, also temporary (for example elderly, obese, visually impaired, etc.). In general, an established principle used in the new design is to pay attention to the so-called wide users, in accordance with the international guidelines summarized in the concept of Universal Design. In particular, this paper presents the use of three-dimensional models obtained from laser scanning surveys for the design of walking trails for people with reduced mobility in the Tusculum Archaeological-Cultural Park. The work was based on the fundamental implementation of the three-dimensional survey with terrestrial laser scanning for the construction and the control of the complex morphology of the site, and on the subsequent integration of models of the intervention in the three-dimensional reality "as-built" of the site. The obtained infographic model allowed to study and simulate the impact of the routes for people with reduced mobility, and to verify its efficiency in the historical and landscape context. Moreover, it was possible to verify the construction of other facilities in the real conditions of the site.

  19. INFOGRAPHIC MODELING BASED ON 3D LASER SURVEYING FOR INFORMED UNIVERSAL DESIGN IN ARCHAEOLOGICAL AREAS: THE CASE OF OPPIDUM OF THE ANCIENT CITY OF TUSCULUM

    Directory of Open Access Journals (Sweden)

    L. Cemoli

    2017-08-01

Full Text Available The valorisation of archaeological sites represents a fundamental action for the social and economic development of a country. An archaeological park is often a territory characterized by significant testimonies of antiquity of great landscape value. For this reason, it should be configured as an authentic outdoor museum, enriched by natural, environmental, architectural and urban components. In order to fulfill these requirements, it is fundamental the elaboration of a coherent scientific project of preservation, fruition and valorisation of the area, which merge the different components necessary for the establishment of an archaeological museum-park. One of the most critical aspects related to the fruition of archaeological sites is the accessibility to areas and routes, not always – if ever – designed for people with reduced mobility, also temporary (for example elderly, obese, visually impaired, etc.). In general, an established principle used in the new design is to pay attention to the so-called wide users, in accordance with the international guidelines summarized in the concept of Universal Design. In particular, this paper presents the use of three-dimensional models obtained from laser scanning surveys for the design of walking trails for people with reduced mobility in the Tusculum Archaeological-Cultural Park. The work was based on the fundamental implementation of the three-dimensional survey with terrestrial laser scanning for the construction and the control of the complex morphology of the site, and on the subsequent integration of models of the intervention in the three-dimensional reality "as-built" of the site. The obtained infographic model allowed to study and simulate the impact of the routes for people with reduced mobility, and to verify its efficiency in the historical and landscape context. Moreover, it was possible to verify the construction of other facilities in the real conditions of the site.

  20. GIS-based hydrological model upstream

    African Journals Online (AJOL)

    eobe

Owing to their effectiveness in terms of data representation and the quality of modeling results, hydrological models are usually embedded in a Geographical Information System (GIS) environment to simulate various parameters attributed to a selected catchment. GIS is a complex technology highly suitable for spatial-temporal data analyses and information extraction.

  1. INFORMATIONAL MODEL OF MENTAL ROTATION OF FIGURES

    Directory of Open Access Journals (Sweden)

    V. A. Lyakhovetskiy

    2016-01-01

Full Text Available Subject of Study. The subject of research is the information structure of objects' internal representations and the operations over them used by humans to solve the problem of mental rotation of figures. To analyze this informational structure we considered not only the classical dependencies of correct answers on the angle of rotation, but also other dependencies obtained recently in cognitive psychology. Method. The technical computing language Matlab R2010b was used for developing an information model of the mental rotation of figures. Model parameters such as the number of bits in the internal representation, the error probability in a single bit, the discrete rotation angle, the comparison threshold, and the degree of difference during rotation can be changed. Main Results. The model qualitatively reproduces such psychological dependencies as the linear increase of the time of correct answers and of the number of errors with the angle of rotation for identical figures, and the "flat" dependence of the time of correct answers and the number of errors on the angle of rotation for mirrored figures. The simulation results suggest that mental rotation is an iterative process of finding a match between the two figures, each step of which can lead to a significant distortion of the internal representation of the stored objects. Matching is carried out within internal representations that have no high invariance to rotation angle. Practical Significance. The results may be useful for understanding the role of learning (including supervised learning) in the development of effective information representations and operations on them in artificial intelligence systems.
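
The iterative-matching idea can be sketched in a few lines. This is a hedged toy, not the Matlab model: figures are bit lists, one rotation step is a cyclic shift, and the per-bit corruption, threshold, and parameter names are illustrative assumptions.

```python
import random

def mental_rotation(target, probe, flip_p=0.02, threshold=2, seed=1):
    """Toy sketch of mental rotation as iterative matching.

    Figures are bit lists; one rotation step is a cyclic shift, and
    each step may corrupt every bit of the internal representation
    with probability flip_p.  Returns (matched, steps), so response
    time (steps) grows linearly with the rotation angle.
    """
    rng = random.Random(seed)
    probe = list(probe)
    for steps in range(len(target) + 1):
        mismatches = sum(a != b for a, b in zip(target, probe))
        if mismatches <= threshold:
            return True, steps
        probe = probe[1:] + probe[:1]                         # rotate one step
        probe = [b ^ (rng.random() < flip_p) for b in probe]  # noisy copy
    return False, steps

# A probe rotated by 3 steps is recognised after ~3 iterations; with
# flip_p > 0, larger angles accumulate more distortion, hence errors.
figure = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
matched, steps = mental_rotation(figure, figure[-3:] + figure[:-3],
                                 flip_p=0.0, threshold=0)
```

Because the number of steps equals the rotation offset, the sketch reproduces the linear time-versus-angle dependence the abstract describes, and raising flip_p makes errors grow with angle as well.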

  2. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

Full Text Available This paper deals with information behavior as support for models of communication design in the areas of Information Science, Library Science and Music. The proposed communication models are based on the models of Tubbs and Moss (2003) and of Garvey and Griffith (1972), as adapted by Hurd (1996) and Wilson (1999). Therefore, the questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment?; (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area?; and (iii) how do the contexts involved in scientific collaboration processes influence the scientific production of the information science field in Brazil? The article includes a literature review on information behavior and its insertion in scientific communication, considering the influence of the context and/or situation of the objects involved in the motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept or set of concepts can be used in different perspectives, thus reaching different results.

  3. How informative are slip models for aftershock forecasting?

    Science.gov (United States)

    Bach, Christoph; Hainzl, Sebastian

    2013-04-01

Coulomb stress changes (ΔCFS) have been recognized as a major trigger mechanism for earthquakes; in particular, aftershock distributions and the spatial patterns of ΔCFS are often found to be correlated. However, the Coulomb stress calculations are based on slip inversions and the receiver fault mechanisms, which both contain large uncertainties. In particular, slip inversions are usually non-unique and often differ strongly for the same earthquakes. Here we want to address the information content of those inversions with respect to aftershock forecasting. Therefore we compare the slip models to randomized fractal slip models which are only constrained by fault information and moment magnitude. The uncertainty of the aftershock mechanisms is considered by using many receiver fault orientations, and by calculating ΔCFS at several depth layers. The stress change is then converted into an aftershock probability map utilizing a clock advance model. To estimate the information content of the slip models, we use an Epidemic Type Aftershock Sequence (ETAS) model approach introduced by Bach and Hainzl (2012), where the spatial probability density of direct aftershocks is related to the ΔCFS calculations. Besides the directly triggered aftershocks, this approach also takes secondary aftershock triggering into account. We quantify our results by calculating the information gain of the randomized slip models relative to the corresponding published slip model. As case studies, we investigate the aftershock sequences of several well-known main shocks such as 1992 Landers, 1999 Hector Mine, 2004 Parkfield, and 2002 Denali. First results show a huge difference in the information content of slip models. For some of the cases up to 90% of the random slip models are found to perform better than the originally published model, while for other cases only a few random models perform better than the published slip model.
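
One common way to score competing forecasts is the mean log-likelihood gain per observed event; the sketch below illustrates that general idea and is not necessarily identical to the metric of Bach and Hainzl (2012). All numbers are hypothetical.

```python
from math import log

def information_gain(model_probs, ref_probs):
    """Mean log-likelihood gain per aftershock of one spatial forecast
    over a reference, given the probability each forecast assigned to
    the cell in which each observed aftershock actually occurred."""
    n = len(model_probs)
    return sum(log(m / r) for m, r in zip(model_probs, ref_probs)) / n

# A slip-model-based forecast that concentrates probability on the
# cells where aftershocks really happened beats a diffuse reference:
model_at_observed = [0.10, 0.08, 0.12]   # hypothetical cell probabilities
ref_at_observed = [0.05, 0.05, 0.05]
gain = information_gain(model_at_observed, ref_at_observed)
```

A positive gain means the tested slip model places more probability where aftershocks actually occur than the reference; ranking many randomized slip models by this score is exactly the kind of comparison the abstract describes.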

  4. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and the underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods mostly used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis on a voxel level. It has good scalability properties and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens up rapid analysis and visualization of the data on different spatial levels, as well as automatically finding a suitable number of decomposition components.
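
The distance-matrix construction can be sketched in miniature. This is a hedged toy, not the paper's pipeline: it builds a mutual-information-based distance between "voxel" time courses, and replaces the multidimensional-scaling-plus-clustering step with a simple single-link threshold clustering for brevity.

```python
from math import log2
from collections import Counter

def mi(xs, ys):
    # Mutual information (bits) from the joint histogram of two series.
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mi_distance(xs, ys):
    # Normalised distance: 0 for identical series, up to 1 when the
    # series share no information (constant series get distance 1).
    h = mi(xs, xs) + mi(ys, ys)          # H(X) + H(Y)
    return 1.0 - (2.0 * mi(xs, ys) / h) if h else 1.0

def single_link_clusters(series, cutoff):
    # Toy stand-in for the MDS + clustering step: repeatedly join any
    # two clusters containing "voxels" closer than the cutoff.
    clusters = [{i} for i in range(len(series))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(mi_distance(series[a], series[b]) < cutoff
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two pairs of identical "voxel" time courses form two clusters:
v0 = [0, 1, 0, 1, 0, 1, 0, 1]
v2 = [0, 0, 1, 1, 0, 0, 1, 1]
clusters = single_link_clusters([v0, list(v0), v2, list(v2)], cutoff=0.5)
```

The statistically dependent time courses end up in the same cluster while independent ones stay apart, mirroring how strong regularities in the voxel dependencies surface as resting-state networks.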

  5. Implications of Information Theory for Computational Modeling of Schizophrenia.

    Science.gov (United States)

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory (such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio) can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.

  6. Building Information Modelling for Smart Built Environments

    Directory of Open Access Journals (Sweden)

    Jianchao Zhang

    2015-01-01

Full Text Available Building information modelling (BIM) provides architectural 3D visualization and a standardized way to share and exchange building information. Recently, there has been an increasing interest in using BIM, not only for design and construction, but also the post-construction management of the built facility. With the emergence of smart built environment (SBE) technology, which embeds most spaces with smart objects to enhance the building’s efficiency, security and the comfort of its occupants, there is a need to understand and address the challenges BIM faces in the design, construction and management of future smart buildings. In this paper, we investigate how BIM can contribute to the development of SBE. Since BIM is designed to host information of the building throughout its life cycle, our investigation has covered phases from architectural design to facility management. Firstly, we extend BIM for the design phase to provide material/device profiling and the information exchange interface for various smart objects. Next, we propose a three-layer verification framework to assist BIM users in identifying possible defects in their SBE design. For the post-construction phase, we have designed a facility management tool to provide advanced energy management of smart grid-connected SBEs, where smart objects, as well as distributed energy resources (DERs), are deployed.

  7. A cloud-based information repository for bridge monitoring applications

    Science.gov (United States)

    Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2016-04-01

    This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, with instrumentation of sensors in particular, collects significant amount of data. In addition to sensor data, a wide variety of information such as bridge geometry, analysis model and sensor description need to be stored. Data management plays an important role to facilitate data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed and they provide a means to enable integration and facilitate interoperability, current BrIM standards support mostly the information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for data repository. Cloud service infrastructure is deployed to enhance scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.

  8. INIS information retrieval based on IBM's IRMS

    International Nuclear Information System (INIS)

    Gadjokov, V.; Schmid, H.; Del Bigio, G.

    1975-01-01

    An information retrieval system for the INIS data base is described. It allows for batch processing on an IBM/360 or /370 computer operated under OS or VS. The program package consists basically of IBM's IRMS system which was converted from DOS to OS and adapted for INIS requirements. Sections 1-9 present the system from the user's point of view, deliberately omitting all the programming details. Program descriptions with data set definitions and file formats are given in sections 10-12. (author)

  9. Multi-UAV Doppler Information Fusion for Target Tracking Based on Distributed High Degrees Information Filters

    Directory of Open Access Journals (Sweden)

    Hamza Benzerrouk

    2018-03-01

    Full Text Available Multi-Unmanned Aerial Vehicle (UAV) Doppler-based target tracking has not been widely investigated, specifically when using modern nonlinear information filters. A high-degree Gauss–Hermite information filter, as well as a seventh-degree cubature information filter (CIF), is developed to improve the fifth-degree and third-degree CIFs proposed in the most recent related literature. These algorithms are applied to maneuvering target tracking based on Radar Doppler range/range rate signals. To achieve this purpose, different measurement models such as range-only, range rate, and bearing-only tracking are used in the simulations. In this paper, the mobile sensor target tracking problem is addressed and solved by a higher-degree class of quadrature information filters (HQIFs). A centralized fusion architecture based on distributed information filtering is proposed, and yielded excellent results. Three high dynamic UAVs are simulated with synchronized Doppler measurements broadcast in parallel channels to the control center for global information fusion. Interesting results are obtained, with the superiority of certain classes of higher-degree quadrature information filters.
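    The centralized fusion architecture described above rests on a convenient additive property of information filters: independent sensors contribute information pairs that the fusion center simply sums. A scalar sketch with made-up measurement values (not the paper's UAV data):

```python
# Scalar information-filter fusion: each sensor's measurement z with noise
# sigma contributes an information pair (Y_i = 1/sigma^2, y_i = z/sigma^2);
# the fusion center sums the contributions from all sensors.
measurements = [(10.2, 1.0), (9.8, 0.5), (10.1, 2.0)]  # (z, sigma), hypothetical

Y = sum(1 / s**2 for _, s in measurements)  # fused information (inverse variance)
y = sum(z / s**2 for z, s in measurements)  # fused information state

estimate = y / Y  # fused state estimate, weighted toward precise sensors
variance = 1 / Y  # fused uncertainty, smaller than any single sensor's
```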

  10. Relevance of information warfare models to critical infrastructure ...

    African Journals Online (AJOL)

    This article illustrates the relevance of information warfare models to critical infrastructure protection. Analogies of information warfare models to those of information security and information systems were used to deconstruct the models into their fundamental components and this will be discussed. The models were applied ...

  11. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  12. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.

  13. Sustainability Product Properties in Building Information Models

    Science.gov (United States)

    2012-09-01

    [Abstract garbled in extraction; only fragments are recoverable.] The fragments reference sustainability product properties carried in Building Information Models, including a Pset_Material_Sustainability_US property set with a ThermalResistance property (thermal resistance of the element), sustainable information properties associated with fixtures being viewable in a model checker, and an energy-performance requirement of at least 30% better than ASHRAE 90.1-2004, with energy efficiency among the key strategies for conserving energy.

  14. A focus on building information modelling.

    Science.gov (United States)

    Ryan, Alison

    2014-03-01

    The Government Construction Strategy requires a strengthening of the public sector's capability to implement Building Information Modelling (BIM) protocols, the goal being that all central government departments will adopt, as a minimum, collaborative Level 2 BIM by 2016. Alison Ryan, of consulting engineers DSSR, explains the principles behind BIM, its history and evolution, and some of the considerable benefits it can offer. These include lowering capital project costs through enhanced co-ordination, cutting carbon emissions, and the ability to manage facilities more efficiently.

  15. Building information modelling (BIM): now and beyond

    Directory of Open Access Journals (Sweden)

    Salman Azhar

    2012-12-01

    Full Text Available Building Information Modeling (BIM), also called n-D Modeling or Virtual Prototyping Technology, is a revolutionary development that is quickly reshaping the Architecture-Engineering-Construction (AEC) industry. BIM is both a technology and a process. The technology component of BIM helps project stakeholders to visualize what is to be built in a simulated environment to identify any potential design, construction or operational issues. The process component enables close collaboration and encourages integration of the roles of all stakeholders on a project. The paper presents an overview of BIM with focus on its core concepts, applications in the project life cycle and benefits for project stakeholders with the help of case studies. The paper also elaborates risks and barriers to BIM implementation and future trends.

  16. Akaike information criterion to select well-fit resist models

    Science.gov (United States)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
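    The AIC comparison described above is a one-line computation once each candidate's maximized log-likelihood is in hand. A sketch with hypothetical numbers (the log-likelihoods and parameter counts below are invented, not the paper's resist-model results):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2*ln(L); lower is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical calibration results for two candidate modelforms: the
# complex form fits slightly better but pays a large parameter penalty.
candidates = {
    "simple (10 terms)": aic(log_likelihood=-120.0, k=10),
    "complex (40 terms)": aic(log_likelihood=-115.0, k=40),
}
best = min(candidates, key=candidates.get)
```

Unlike k-fold cross validation, this ranking needs only one calibration per candidate, which is the efficiency argument the abstract makes.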

  17. A Compositional Relevance Model for Adaptive Information Retrieval

    Science.gov (United States)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  18. Multiscale information modelling for heart morphogenesis

    International Nuclear Information System (INIS)

    Abdulla, T; Imms, R; Summers, R; Schleich, J M

    2010-01-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  19. Multiscale information modelling for heart morphogenesis

    Science.gov (United States)

    Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  20. Uncertainty Quantification and Learning in Geophysical Modeling: How Information is Coded into Dynamical Models

    Science.gov (United States)

    Gupta, H. V.

    2014-12-01

    There is a clear need for comprehensive quantification of simulation uncertainty when using geophysical models to support and inform decision-making. Further, it is clear that the nature of such uncertainty depends on the quality of information in (a) the forcing data (driver information), (b) the model code (prior information), and (c) the specific values of inferred model components that localize the model to the system of interest (inferred information). Of course, the relative quality of each varies with geophysical discipline and specific application. In this talk I will discuss a structured approach to characterizing how 'Information', and hence 'Uncertainty', is coded into the structures of physics-based geophysical models. I propose that a better understanding of what is meant by "Information", and how it is embodied in models and data, can offer a structured (less ad-hoc), robust and insightful basis for diagnostic learning through the model-data juxtaposition. In some fields, a natural consequence may be to emphasize the a priori role of System Architecture (Process Modeling) over that of the selection of System Parameterization, thereby emphasizing the more creative aspect of scientific investigation - the use of models for Discovery and Learning.

  1. Informed Design of Educational Technology for Teaching and Learning? Towards an Evidence-Informed Model of Good Practice

    Science.gov (United States)

    Price, Linda; Kirkwood, Adrian

    2014-01-01

    The aim of this paper is to model evidence-informed design based on a selective critical analysis of research articles. The authors draw upon findings from an investigation into practitioners' use of educational technologies to synthesise and model what informs their designs. They found that practitioners' designs were often driven by implicit…

  2. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
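    A standard instance of such a statistical language model for retrieval is query-likelihood scoring with linear-interpolation (Jelinek-Mercer) smoothing; whether this matches the article's exact formulation is not shown here, and the toy corpus below is invented:

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, lam=0.5):
    """Smoothed query likelihood: log P(q|d) = sum over query terms t of
    log(lam * P(t|doc) + (1 - lam) * P(t|collection))."""
    d, c = Counter(doc), Counter(collection)
    return sum(math.log(lam * d[t] / len(doc) + (1 - lam) * c[t] / len(collection))
               for t in query)

docs = [["information", "retrieval", "model", "retrieval"],
        ["natural", "language", "processing"]]
collection = [t for d in docs for t in d]  # background collection model
scores = [query_likelihood(["retrieval", "model"], d, collection) for d in docs]
best_doc = scores.index(max(scores))
```

The collection model keeps every term probability nonzero, so documents missing a query term still receive a finite score.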

  3. Physically-based Canopy Reflectance Model Inversion of Vegetation Biophysical-Structural Information from Terra-MODIS Imagery in Boreal and Mountainous Terrain for Ecosystem, Climate and Carbon Models using the BIOPHYS-MFM Algorithm

    Science.gov (United States)

    Peddle, D. R.; Hall, F.

    2009-12-01

    The BIOPHYS algorithm provides innovative and flexible methods for the inversion of canopy reflectance models (CRM) to derive essential biophysical structural information (BSI) for quantifying vegetation state and disturbance, and for input to ecosystem, climate and carbon models. Based on spectral, angular, temporal and scene geometry inputs that can be provided or automatically derived, the BIOPHYS Multiple-Forward Mode (MFM) approach generates look-up tables (LUTs) that comprise reflectance data, structural inputs over specified or computed ranges, and the associated CRM output from forward mode runs. Image pixel and model LUT spectral values are then matched. The corresponding BSI retrieved from the LUT matches is output as the BSI results. BIOPHYS-MFM has been extensively used with agencies in Canada and the USA over the past decade (Peddle et al 2000-09; Soenen et al 2005-09; Gamon et al 2004; Cihlar et al 2003), such as CCRS, CFS, AICWR, NASA LEDAPS, BOREAS and MODIS Science Teams, and for the North American Carbon Program. The algorithm generates BSI products such as land cover, biomass, stand volume, stem density, height, crown closure, leaf area index (LAI) and branch area, crown dimension, productivity, topographic correction, structural change from harvest, forest fires and mountain pine beetle damage, and water / hydrology applications. BIOPHYS-MFM has been applied in different locations in Canada (six provinces from Newfoundland to British Columbia) and USA (NASA COVER, MODIS and LEDAPS sites) using 7 different CRM models and a variety of imagery (e.g. MODIS, Landsat, SPOT, IKONOS, airborne MSV, MMR, casi, Probe-1, AISA). In this paper we summarise the BIOPHYS-MFM algorithm and results from Terra-MODIS imagery from MODIS validation sites at Kananaskis Alberta in the Canadian Rocky Mountains, and from the Boreal Ecosystem Atmosphere Study (BOREAS) in Saskatchewan Canada. At the montane Rocky Mountain site, BIOPHYS-MFM density estimates were within

  4. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguard's standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  5. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  6. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael

    2015-01-01

    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...
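    A sketch of the idea under assumptions of my own (a first-order chain, with surprisal in bits as the error measure; the action trace is invented): a fully routinized action loop is perfectly predictable, so the trained model assigns it zero surprisal.

```python
import math
from collections import defaultdict

def train_markov_chain(actions):
    """Estimate first-order transition probabilities P(next | current)."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(actions, actions[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(nxts.values()) for nxt, n in nxts.items()}
            for cur, nxts in counts.items()}

# A highly routinized player repeats the same action loop verbatim.
trace = ["aim", "shoot", "reload"] * 10
model = train_markov_chain(trace)

# Total surprisal (bits) of the trace under the trained model:
# zero for a fully deterministic routine, positive for varied play.
surprisal = -sum(math.log2(model[cur][nxt])
                 for cur, nxt in zip(trace, trace[1:]))
```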

  7. Information Governance: A Model for Security in Medical Practice

    Directory of Open Access Journals (Sweden)

    Patricia A.H. Williams

    2007-03-01

    Full Text Available Information governance is becoming an important aspect of organisational accountability. In consideration that information is an integral asset of most organisations, the protection of this asset will increasingly rely on organisational capabilities in security. In the medical arena this information is primarily sensitive patient-based information. Previous research has shown that application of security measures is a low priority for primary care medical practice and that the risks are seriously underestimated. Consequently, information security governance will be a key issue for medical practice in the future. Information security governance is a relatively new term and there is little existing research into how to meet governance requirements. The limited research that exists describes information security governance frameworks at a strategic level. However, since medical practice is already lagging in the implementation of appropriate security, such definition may not be practical although it is obviously desirable. This paper describes an on-going action research project undertaken in the area of medical information security, and presents a tactical approach model aimed at addressing information security governance and the protection of medical data.

  8. Information Models of Acupuncture Analgesia and Meridian Channels

    Directory of Open Access Journals (Sweden)

    Chang Hua Zou

    2010-12-01

    Full Text Available Acupuncture and meridian channels have been major components of Chinese and Eastern Asian medicine—especially for analgesia—for over 2000 years. In recent decades, electroacupuncture (EA) analgesia has been applied clinically and experimentally. However, there were controversial results between different treatment frequencies, or between the active and the placebo treatments; and the mechanisms of the treatments and the related meridian channels are still unknown. In this study, we propose a new term of infophysics therapy and develop information models of acupuncture (or EA) analgesia and meridian channels, to understand the mechanisms and to explain the controversial results, based on Western theories of information, trigonometry and Fourier series, and physics, as well as published biomedical data. We are trying to build a bridge between Chinese medicine and Western medicine by investigating the Eastern acupuncture analgesia and meridian channels with Western sciences; we model the meridians as a physiological system that is mostly constructed with interstices in or between other physiological systems; we consider frequencies, amplitudes and wave numbers of electric field intensity (EFI) as information data. Our modeling results demonstrate that information regulated with acupuncture (or EA) is different from pain information, we provide answers to explain the controversial published results, and suggest that mechanisms of acupuncture (or EA) analgesia could be mostly involved in information regulation of frequencies and amplitudes of EFI as well as neuronal transmitters such as endorphins.

  9. SYNTHESIS OF INFORMATION MODEL FOR ALTERNATIVE FUNCTIONAL DIAGNOSTICS PROCEDURE

    OpenAIRE

    P. F. Shchapov; R. P. Miguschenko

    2014-01-01

    Probabilistic approaches from information theory and the information theory of measurement are considered, allowing the expected amount of information to be calculated and analyzed for models of measuring conversions and for tasks of encoding random measurement signals. A probabilistic model of the diagnostic information transformation and of the diagnostic procedure was developed. Conditions for obtaining the maximum amount of diagnostic information were found.

  10. The Knowledge Base Interface for Parametric Grid Information

    International Nuclear Information System (INIS)

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-01-01

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed in detail the first step. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.
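    To make the access step concrete: given correction values stored at grid nodes, a query point's value is interpolated from nearby nodes. The sketch below uses simple inverse-distance weighting as a stand-in (the KBase itself uses Natural Neighbor Interpolation with kriging error estimates, which requires a Voronoi tessellation and is not reproduced here); all node positions and values are hypothetical.

```python
def idw_interpolate(nodes, query, power=2):
    """Inverse-distance-weighted estimate at `query` from ((x, y), value) nodes.
    A simplified stand-in for natural neighbor interpolation."""
    num = den = 0.0
    for (x, y), value in nodes:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return value  # query coincides with a stored node
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den

# Hypothetical travel-time corrections (seconds) at four grid nodes.
nodes = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0),
         ((0.0, 1.0), 2.0), ((1.0, 1.0), 4.0)]
center = idw_interpolate(nodes, (0.5, 0.5))  # symmetric point: mean of values
```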

  11. Activity-Based Information Integrating the operations strategy

    Directory of Open Access Journals (Sweden)

    José Augusto da Rocha de Araujo

    2005-12-01

    Full Text Available In the globalized world, companies seek new operations strategies to ensure world corporate success. This article analyzes how the cost management models – both traditional and activity-based – aid the planning and management of corporate globalized operations. The efficacy of the models' application depends on their alignment with the competitive strategy. Companies must evaluate the nature of the competition and its competitive priorities; they should then define the necessary and sufficient dependence level on costs information. In this article, three dependence levels are presented: operational, decision support and strategic control. The result of the research shows the importance of alignment between the cost management model and the competitive strategy for corporate success, and confirms the adequacy of the activity-based costing model as a supporting tool for decision taking in a global strategy. Case studies in world class companies in Brazil are presented.

  12. Dynamic Information Management and Exchange for Command and Control Applications, Modelling and Enforcing Category-Based Access Control via Term Rewriting

    Science.gov (United States)

    2015-03-01

    [Abstract unavailable; the record excerpt consists of reference fragments, e.g.: Workshop on Logical and Semantic Frameworks, with Applications, Brasília, Brazil, September 2014, Electronic Notes in Theoretical Computer Science; S. Barker, "The next 700 access control models or a unifying meta-model?", SACMAT 2009, 14th ACM Symposium on Access Control Models and Technologies.]

  13. Managing for resilience: an information theory-based ...

    Science.gov (United States)

    Ecosystems are complex and multivariate; hence, methods to assess the dynamics of ecosystems should have the capacity to evaluate multiple indicators simultaneously. Most research on identifying leading indicators of regime shifts has focused on univariate methods and simple models, which have limited utility when evaluating real ecosystems, particularly because drivers are often unknown. We discuss some common univariate and multivariate approaches for detecting critical transitions in ecosystems and demonstrate their capabilities via case studies. Synthesis and applications: we illustrate the utility of an information theory-based index for assessing ecosystem dynamics. Trends in this index also provide a sentinel of both abrupt and gradual transitions in ecosystems. In response to the need to identify leading indicators of regime shifts in ecosystems, our research compares traditional indicators and Fisher information, an information theory-based method, by examining four case study systems. The results demonstrate the utility of these methods and offer great promise for quantifying and managing for resilience.
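    One common discrete approximation of a Fisher information index in this line of work bins the system's observed states within a moving window and measures how sharply the occupancy probabilities change across bins: an orderly regime concentrates probability and yields higher values. A sketch under that assumption (the binning scheme and data below are my own, not the study's):

```python
import math

def fisher_information(window, n_bins=5):
    """Discrete approximation FI = 4 * sum_i (sqrt(p_i) - sqrt(p_{i+1}))^2,
    where p_i are occupancy probabilities of binned system states."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant window
    counts = [0] * n_bins
    for x in window:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    p = [c / len(window) for c in counts]
    return 4 * sum((math.sqrt(p[i]) - math.sqrt(p[i + 1])) ** 2
                   for i in range(n_bins - 1))

# A tightly clustered (stable) regime scores higher than a noisy transition.
stable = [10.0, 10.1, 9.9, 10.0, 10.05, 9.95]
shifting = [10.0, 12.5, 8.0, 14.0, 6.5, 11.0]
```

Tracking this index over successive windows is the kind of trend the abstract proposes as a sentinel of regime shifts.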

  14. An information based approach to improving overhead imagery collection

    Science.gov (United States)

    Sourwine, Matthew J.; Hintz, Kenneth J.

    2011-06-01

    Recent growth in commercial imaging satellite development has resulted in a complex and diverse set of systems. To simplify this environment for both customer and vendor, an information based sensor management model was built to integrate tasking and scheduling systems. By establishing a relationship between image quality and information, tasking by NIIRS can be utilized to measure the customer's required information content. Focused on a reduction in uncertainty about a target of interest, the sensor manager finds the best sensors to complete the task given the active suite of imaging sensors' functions. This is done through determination of which satellite will meet customer information and timeliness requirements with low likelihood of interference at the highest rate of return.

  15. Implementation of Medical Information Exchange System Based on EHR Standard.

    Science.gov (United States)

    Han, Soon Hwa; Lee, Min Ho; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong

    2010-12-01

    To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. To assure that patient's medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information.

  16. Development of Personal Wellness Information Model for Pervasive Healthcare

    Directory of Open Access Journals (Sweden)

    Antto Seppälä

    2012-01-01

    Full Text Available Pervasive healthcare and citizen-centered care paradigm are moving the healthcare outside the hospital environment. Healthcare delivery is becoming more personalized and decentralized, focusing on prevention and proactive services with a complete view of health and wellbeing. The concept of wellness has been used to describe this holistic view of health, which focuses on physical, social, and mental well-being. Pervasive computing makes it possible to collect information and offer services anytime and anywhere. To support pervasive healthcare with wellness approaches, semantic interoperability is needed between all actors and information sources in the ecosystem. This study focuses on the domain of personal wellness and analyzes related concepts, relationships, and environments. As a result of this study, we have created an information model that focuses on the citizens’ perspectives and conceptualizations of personal wellness. The model has been created based on empirical research conducted with focus groups.

  17. Entropic information of dynamical AdS/QCD holographic models

    Energy Technology Data Exchange (ETDEWEB)

    Bernardini, Alex E., E-mail: alexeb@ufscar.br [Departamento de Física, Universidade Federal de São Carlos, PO Box 676, 13565-905, São Carlos, SP (Brazil); Rocha, Roldão da, E-mail: roldao.rocha@ufabc.edu.br [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, UFABC, 09210-580, Santo André (Brazil)

    2016-11-10

    The Shannon based conditional entropy that underlies five-dimensional Einstein–Hilbert gravity coupled to a dilaton field is investigated in the context of dynamical holographic AdS/QCD models. Considering the UV and IR dominance limits of such AdS/QCD models, the conditional entropy is shown to shed some light onto the meson classification schemes, which corroborate with the existence of light-flavor mesons of lower spins in Nature. Our analysis is supported by a correspondence between statistical mechanics and information entropy which establishes the physical grounds to the Shannon information entropy, also in the context of statistical mechanics, and provides some specificities for accurately extending the entropic discussion to continuous modes of physical systems. From entropic informational grounds, the conditional entropy allows one to identify the lower experimental/phenomenological occurrence of higher spin mesons in Nature. Moreover, it introduces a quantitative theoretical apparatus for studying the instability of high spin light-flavor mesons.

  18. Classification models for the prediction of clinicians' information needs.

    Science.gov (United States)

    Del Fiol, Guilherme; Haug, Peter J

    2009-02-01

    Clinicians face numerous information needs during patient care activities and most of these needs are not met. Infobuttons are information retrieval tools that help clinicians to fulfill their information needs by providing links to on-line health information resources from within an electronic medical record (EMR) system. The aim of this study was to produce classification models based on medication infobutton usage data to predict the medication-related content topics (e.g., dose, adverse effects, drug interactions, patient education) that a clinician is most likely to choose while entering medication orders in a particular clinical context. We prepared a dataset with 3078 infobutton sessions and 26 attributes describing characteristics of the user, the medication, and the patient. In these sessions, users selected one out of eight content topics. Automatic attribute selection methods were then applied to the dataset to eliminate redundant and useless attributes. The reduced dataset was used to produce nine classification models from a set of state-of-the-art machine learning algorithms. Finally, the performance of the models was measured and compared. Area under the ROC curve (AUC) and agreement (kappa) between the content topics predicted by the models and those chosen by clinicians in each infobutton session. The performance of the models ranged from 0.49 to 0.56 (kappa). The AUC of the best model ranged from 0.73 to 0.99. The best performance was achieved when predicting choice of the adult dose, pediatric dose, patient education, and pregnancy category content topics. The results suggest that classification models based on infobutton usage data are a promising method for the prediction of content topics that a clinician would choose to answer patient care questions while using an EMR system.
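The kappa agreement statistic used to score these models can be computed directly; the sketch below is a minimal stdlib-only implementation with made-up topic labels, not the study's evaluation code:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement rate and p_e the agreement expected by chance
    given each rater's marginal label frequencies."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts[c] for c in true_counts) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical content-topic choices vs. model predictions
truth = ["dose", "dose", "interactions", "education", "dose", "education"]
pred  = ["dose", "dose", "interactions", "dose",      "dose", "education"]
print(round(cohens_kappa(truth, pred), 3))  # 0.714
```

Kappa corrects raw accuracy for chance agreement, which matters here because the eight content topics are far from uniformly popular.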

  19. Using a High-Resolution Ensemble Modeling Method to Inform Risk-Based Decision-Making at Taylor Park Dam, Colorado

    Science.gov (United States)

    Mueller, M.; Mahoney, K. M.; Holman, K. D.

    2015-12-01

    The Bureau of Reclamation (Reclamation) is responsible for the safety of Taylor Park Dam, located in central Colorado at an elevation of 9300 feet. A key aspect of dam safety is anticipating extreme precipitation, runoff and the associated inflow of water to the reservoir within a probabilistic framework for risk analyses. The Cooperative Institute for Research in Environmental Sciences (CIRES) has partnered with Reclamation to improve understanding and estimation of precipitation in the western United States, including the Taylor Park watershed. A significant challenge is that Taylor Park Dam is located in a relatively data-sparse region, surrounded by mountains exceeding 12,000 feet. To better estimate heavy precipitation events in this basin, a high-resolution modeling approach is used. The Weather Research and Forecasting (WRF) model is employed to simulate events that have produced observed peaks in streamflow at the location of interest. Importantly, an ensemble of model simulations is run for each event so that uncertainty bounds (i.e., forecast error) may be provided and the model outputs may be used more effectively in Reclamation's risk assessment framework. Model estimates of precipitation (and the uncertainty thereof) are then used in rainfall-runoff models to determine the probability of inflows to the reservoir for use in Reclamation's dam safety risk analyses.

  20. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    Full Text Available The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for incomplete information systems because of the missing values they contain. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough sets, we replace the poor hyperedges. After that, we obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with some existing methods, such as C4.5, SVM, Naive Bayes, and KNN. The experimental results show that the proposed algorithm performs better in terms of precision, recall, AUC, and F-measure.
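The core difficulty named above — comparing objects when attribute values are missing — is commonly handled by measuring similarity only over attributes both objects actually have. A minimal nearest-neighbour sketch of that neighbourhood principle (an illustration with invented data, not the paper's hypergraph algorithm):

```python
def similarity(a, b):
    """Fraction of attributes that agree, counting only positions
    where both objects have a known value (None marks a missing
    value in the incomplete information system)."""
    known = [(x, y) for x, y in zip(a, b)
             if x is not None and y is not None]
    if not known:
        return 0.0
    return sum(x == y for x, y in known) / len(known)

def classify(sample, training):
    """Label of the most similar training object."""
    return max(training, key=lambda row: similarity(sample, row[0]))[1]

# Hypothetical incomplete decision table: (attributes, class label)
train = [
    (("red",    "round", "small"), "apple"),
    (("yellow", "long",  "small"), "banana"),
    (("red",    None,    "small"), "apple"),
]
print(classify(("red", "round", None), train))  # apple
```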

  1. AN INFORMATION SERVICE MODEL FOR REMOTE SENSING EMERGENCY SERVICES

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2017-09-01

    Full Text Available This paper presents a method for a semantic access environment, which addresses the problem of identifying the correct natural disaster emergency knowledge and returning it to the demanders. The study data is a set of natural disaster knowledge texts. Firstly, based on the remote sensing emergency knowledge database, we utilize the semantic network to extract the key words in the input document set. Then, using semantic analysis based on word segmentation and PLSA, we establish the semantic access environment to identify the requirements of users and match the emergency knowledge in the database. Finally, a user preference model is established, which helps the system return the corresponding information to different users. The results indicate that semantic analysis can process natural disaster knowledge effectively, which will realize a diversified information service, enhance the precision of information retrieval and satisfy the requirements of users.

  2. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

    Full Text Available This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs will depict indicators of the company’s business success, indebtedness and company’s efficiency coefficients based on financial balance reports, and profit and loss report.

  3. Modeling the reemergence of information diffusion in social network

    Science.gov (United States)

    Yang, Dingda; Liao, Xiangwen; Shen, Huawei; Cheng, Xueqi; Chen, Guolong

    2018-01-01

    Information diffusion in networks is an important research topic in various fields. Existing studies either focus on modeling the process of information diffusion, e.g., independent cascade model and linear threshold model, or investigate information diffusion in networks with certain structural characteristics such as scale-free networks and small world networks. However, there are still several phenomena that have not been captured by existing information diffusion models. One of the prominent phenomena is the reemergence of information diffusion, i.e., a piece of information reemerges after the completion of its initial diffusion process. In this paper, we propose an optimized information diffusion model by introducing a new informed state into traditional susceptible-infected-removed model. We verify the proposed model via simulations in real-world social networks, and the results indicate that the model can reproduce the reemergence of information during the diffusion process.
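The proposed extension — an extra informed state grafted onto the susceptible-infected-removed model so that diffusion can reignite — can be sketched as a discrete-time compartment simulation; the rates, the even split into removed/informed, and the reactivation mechanism below are illustrative assumptions, not the paper's exact equations:

```python
def step(s, i, r, m, beta=0.3, gamma=0.1, xi=0.01):
    """One step of an SIR model extended with an informed state M.
    beta: spreading rate (S -> I), gamma: removal rate out of I,
    xi: rate at which informed nodes re-enter the spreading state,
    modelling the reemergence of diffusion. All values are
    population fractions, so s + i + r + m stays 1."""
    new_inf = beta * s * i        # susceptible nodes become spreaders
    removed = gamma * i           # spreaders stop spreading
    reignite = xi * m             # informed nodes resume spreading
    s -= new_inf
    i += new_inf - removed + reignite
    r += removed * 0.5            # half go permanently quiet...
    m += removed * 0.5 - reignite # ...half stay informed, may reignite
    return s, i, r, m

state = (0.99, 0.01, 0.0, 0.0)
for _ in range(200):
    state = step(*state)
print([round(x, 3) for x in state])
```

With xi = 0 this collapses to plain SIR and diffusion dies out for good; any positive xi lets the informed pool feed spreaders back in, reproducing the reemergence phenomenon qualitatively.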

  4. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a mathematical model for the calculation of information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risks mean the danger of losses or damage arising from the company's application of information technologies. Method. The solution is based on the ideology of the transport problem in stochastic statement, drawing on methods from mathematical modeling theory, graph theory, probability theory and Markov chains. The mathematical model is created in several stages. At the initial stage, the capacity of the different sites is calculated as a function of time on the basis of information received from the information and logistics system; the weight matrix is formed and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities of the resources being located at a concrete vertex as functions of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case information risk is measured in units of damage, corresponding to the monetary unit that the information and logistics system operates with. Main results. The operability of the presented mathematical model is shown on a concrete example of transportation of material resources in which places of shipment and delivery, routes and their capacity, the greatest possible damage and admissible risk are specified. The calculations presented on a diagram showed
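The route-finding and risk stages described above can be sketched as follows; the graph, edge probabilities and damage figure are invented illustrative values, the route search is a plain shortest-path variant, and the Kolmogorov-equation stage is omitted:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by edge weight; graph[u] = {v: weight}.
    Returns (path, total_weight)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Per-edge passing probabilities multiply along the route
# (multiplication theorem); risk = max damage * route probability.
graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4},
         "C": {"D": 1}, "D": {}}
p_edge = {("A", "B"): 0.95, ("B", "C"): 0.9, ("C", "D"): 0.98}
path, cost = dijkstra(graph, "A", "D")
p_route = 1.0
for edge in zip(path, path[1:]):
    p_route *= p_edge[edge]
max_damage = 100_000  # monetary units, illustrative
print(path, cost, round(max_damage * p_route))
```

The risk figure comes out in the same monetary units as the damage estimate, as the abstract requires.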

  5. Cryptography, Information Operations and the Industrial Base: A Policy Dilemma

    National Research Council Canada - National Science Library

    Horner, Stephen

    1997-01-01

    The explosive force of information technology places the Global Information Infrastructure, the worldwide industrial base and the various world governments in both mutually supporting and somewhat adversarial positions...

  6. CRISP. Information Security Models and Their Economics

    International Nuclear Information System (INIS)

    Gustavsson, R.; Mellstrand, P.; Tornqvist, B.

    2005-03-01

    The deliverable D1.6 includes background material and specifications of a CRISP Framework on protection of information assets related to power net management and management of business operations related to energy services. During the project the CRISP consortium discovered that the original description of WP 1.6 was not adequate for the project as such. The main insight was that the original emphasis on cost-benefit analysis of security protection measures came too early to be addressed in the project. This issue is of course crucial in itself, but it requires new models of consequence analysis that still remain to be developed, especially for the new business models we are investigating in the CRISP project. The updated and approved version of the WP1.6 description, together with the likewise updated WP2.4 focus on Dependable ICT support of Power Grid Operations, constitutes an integrated approach towards dependable and secure future utilities and their business processes. This document (D1.6) is a background to deliverable D2.4. Together they provide a dependability and security framework for the three CRISP experiments in WP3.

  7. Business Process Modelling based on Petri nets

    Directory of Open Access Journals (Sweden)

    Qin Jianglong

    2017-01-01

    Full Text Available Business process modelling is the way business processes are expressed. It is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information system integration and reuse, but also help them collaborate with external partners. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net as a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company shows that the modelling algorithm is correct and effective.
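A place/transition Petri net of the kind extended in the paper can be sketched in a few lines — markings, enabledness, and firing; the two-step order process is an invented example and the time/cost extensions are omitted:

```python
class PetriNet:
    """Places hold token counts; each transition consumes tokens
    from its input places and produces tokens in its output places."""

    def __init__(self, marking, transitions):
        self.marking = dict(marking)
        self.transitions = transitions  # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] += n

# Toy order process: an order token flows receive -> approve -> ship
net = PetriNet(
    {"order_in": 1, "approved": 0, "shipped": 0},
    {"approve": ({"order_in": 1}, {"approved": 1}),
     "ship": ({"approved": 1}, {"shipped": 1})},
)
net.fire("approve")
net.fire("ship")
print(net.marking)  # {'order_in': 0, 'approved': 0, 'shipped': 1}
```

The stochastic extension in the paper attaches firing delays and costs to transitions; the control-flow core stays exactly this token game.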

  8. Listener: a probe into information based material specification

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette; Karmon, Ayelet

    2011-01-01

    This paper presents the thinking and making of the architectural research probe Listener. Developed as an interdisciplinary collaboration between textile design and architecture, Listener explores how information-based fabrication technologies are challenging the material practices of architecture. The paper investigates how textile design can be understood as a model for architectural production, providing new strategies for material specification and allowing the thinking of material as inherently variegated and performative. The paper traces the twofold information-based strategies present in the Listener project. Firstly, the paper presents the design strategy leading to the development of bespoke interfaces between parametric design and CNC-based textile fabrication. Secondly, by integrating structural and actuated materials, the paper presents the making of a new class of materials...

  9. Managing geometric information with a data base management system

    Science.gov (United States)

    Dube, R. P.

    1984-01-01

    The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of database management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.

  10. Family-based HIV prevention and intervention services for youth living in poverty-affected contexts: the CHAMP model of collaborative, evidence-informed programme development.

    Science.gov (United States)

    Bhana, Arvin; McKay, Mary M; Mellins, Claude; Petersen, Inge; Bell, Carl

    2010-06-23

    Family-based interventions with children who are affected by HIV and AIDS are not well established. The Collaborative HIV Prevention and Adolescent Mental Health Program (CHAMP) represents one of the few evidence-based interventions tested in low-income contexts in the US, Caribbean and South Africa. This paper provides a description of the theoretical and empirical bases of the development and implementation of CHAMP in two of these countries, the US and South Africa. In addition, with the advent of increasing numbers of children infected with HIV surviving into adolescence and young adulthood, a CHAMP+ family-based intervention, using the founding principles of CHAMP, has been developed to mitigate the risk influences associated with being HIV positive.

  11. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  12. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted, with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  13. Modeling and evaluation of information systems using coloured petri network

    Directory of Open Access Journals (Sweden)

    Ehsan Zamirpour

    2014-07-01

    Full Text Available Nowadays, with the growth of organizations and their affiliates, the importance of information systems has increased. Both functional and non-functional requirements of information systems in an organization must be supported. There are literally several techniques to support the functional requirements in terms of software methodologies, but support for the second set of requirements has received little attention. The Software Performance Engineering (SPE) forum tries to address this issue by presenting software methodologies to support both types of requirements. In this paper, we present a formal model for the evaluation of system performance based on a pragmatic model. Because they support concurrency concepts, Petri nets take priority over queueing systems. To map UML diagrams to a coloured Petri net, we use an intermediate graph. The preliminary results indicate that the proposed model may save a significant amount of computation.

  14. Agricultural Library Information Retrieval Based on Improved Semantic Algorithm

    OpenAIRE

    Meiling , Xie

    2014-01-01

    To support users in quickly accessing the information they need from the agricultural library's vast information resources, and to improve the low-intelligence query service, a model for intelligent library information retrieval was constructed. The semantic web mode was introduced and the information retrieval framework was designed. The model structure consisted of three parts: information data integration, user interface and information retrieval match. The key method supporting retr...

  15. Regional Analysis of Remote Sensing Based Evapotranspiration Information

    Science.gov (United States)

    Geli, H. M. E.; Hain, C.; Anderson, M. C.; Senay, G. B.

    2017-12-01

    Recent research findings on modeling actual evapotranspiration (ET) using remote sensing data and methods have proven the ability of these methods to address a wide range of hydrological and water resources issues, including river basin water balance for improved water resources management, drought monitoring, drought impact and socioeconomic responses, agricultural water management, optimization of land use for water conservation, and water allocation agreements, among others. However, there is still a critical need to identify the appropriate type of ET information to address each of these issues. The current trend of increasing demand for water due to population growth, coupled with variable and limited water supply due to drought, especially in arid and semiarid regions, has highlighted the need for such information. To properly address these issues, different spatial and temporal resolutions of ET information will need to be used. For example, agricultural water management applications require ET information at field (30-m) and daily time scales, while for river basin hydrologic analysis relatively coarser spatial and temporal scales can be adequate for such regional applications. The objective of this analysis is to evaluate the potential of using integrated ET information to address some of these issues collectively. This analysis will highlight efforts to address some of the issues applicable to New Mexico, including assessment of the statewide water budget as well as drought impact and socioeconomic responses, which all require ET information but at different spatial and temporal scales. This analysis will provide an evaluation of four remote sensing based ET models: ALEXI, DisALEXI, SSEBop, and SEBAL3.0. The models will be compared with ground-based observations from eddy covariance towers and water balance calculations. Remote sensing data from Landsat, MODIS, and VIIRS sensors will be used to provide ET

  16. An Integrative Behavioral Model of Information Security Policy Compliance

    Directory of Open Access Journals (Sweden)

    Sang Hoon Kim

    2014-01-01

    Full Text Available The authors found the behavioral factors that influence the organization members’ compliance with the information security policy in organizations on the basis of neutralization theory, the theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members’ attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members’ compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training

  17. An integrative behavioral model of information security policy compliance.

    Science.gov (United States)

    Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung

    2014-01-01

    The authors found the behavioral factors that influence the organization members' compliance with the information security policy in organizations on the basis of neutralization theory, Theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training programs suppressing

  18. CSNS control cable information management system based on web

    International Nuclear Information System (INIS)

    Lu Huihui; Wang Chunhong; Li Luofeng; Liu Zhengtong; Lei Bifeng

    2014-01-01

    This paper presents an approach to modeling the large number of control devices and cables, with their complicated interrelations, at CSNS (China Spallation Neutron Source). The CSNS accelerator control cable database was created using MySQL, and a web-based control cable information management system was built on top of it. During the development of the database, the design ideas of the IRMIS database were studied and the actual situation of the CSNS accelerator control cables was investigated. A control cable database model fitting these requirements was designed. This system will greatly facilitate managing and maintaining the CSNS control devices and cables in the future. (authors)
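    The abstract does not publish the CSNS schema, but the device/cable data model it describes can be sketched roughly as follows. This uses SQLite instead of MySQL so the example is self-contained, and all table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical minimal schema for a control-cable inventory; the actual
# CSNS MySQL schema is not published in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE device (
        device_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        location  TEXT
    )""")
conn.execute("""
    CREATE TABLE cable (
        cable_id    INTEGER PRIMARY KEY,
        label       TEXT NOT NULL,
        from_device INTEGER REFERENCES device(device_id),
        to_device   INTEGER REFERENCES device(device_id)
    )""")
conn.execute("INSERT INTO device VALUES (1, 'power-supply-01', 'rack A1')")
conn.execute("INSERT INTO device VALUES (2, 'magnet-PS-ctrl', 'rack B3')")
conn.execute("INSERT INTO cable VALUES (10, 'CBL-0010', 1, 2)")

# Query: list cables with the device at each end resolved by name.
rows = conn.execute("""
    SELECT c.label, d1.name, d2.name
    FROM cable c
    JOIN device d1 ON c.from_device = d1.device_id
    JOIN device d2 ON c.to_device   = d2.device_id
""").fetchall()
print(rows)  # [('CBL-0010', 'power-supply-01', 'magnet-PS-ctrl')]
```

The point of the relational model is exactly this kind of join: each cable references its two endpoint devices, so end-to-end connectivity questions become single queries.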

  19. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, the single transmit antenna with the highest shadowing coefficient is selected. The proposed technique reduces usage of the feedback channel and channel estimation complexity at the receiver. We consider an independent but not identically distributed Generalized-K composite fading model, which is a general composite fading and shadowing channel model for wireless environments. Exact closed-form expressions for the outage probability, moment generating function and symbol error probability are derived. In addition, the theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.
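    The selection rule can be illustrated with a small Monte Carlo sketch. The Generalized-K composite gain is represented here as the product of two unit-mean Gamma variates (one slow shadowing term, one fast multipath term); the shape parameters, antenna count, and outage threshold are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 200_000   # L transmit antennas, N Monte Carlo trials (illustrative)

# Generalized-K composite gain = shadowing Gamma variate * multipath Gamma
# variate; the shape parameters m_s, m_f below are assumptions.
m_s, m_f = 2.0, 1.5
shadow = rng.gamma(m_s, 1.0 / m_s, size=(N, L))   # unit-mean shadowing
fading = rng.gamma(m_f, 1.0 / m_f, size=(N, L))   # unit-mean multipath

# Proposed scheme: pick the antenna with the largest *shadowing* coefficient,
# so only slowly varying side information needs to be fed back.
sel = np.argmax(shadow, axis=1)
gain_sel = shadow[np.arange(N), sel] * fading[np.arange(N), sel]

# Baseline: a fixed antenna (no selection).
gain_fix = shadow[:, 0] * fading[:, 0]

thr = 0.2  # outage threshold on the normalized gain (arbitrary)
out_fix = np.mean(gain_fix < thr)
out_sel = np.mean(gain_sel < thr)
print("outage, fixed antenna:", out_fix)
print("outage, selection    :", out_sel)
```

Selecting on the shadowing term alone already lowers the empirical outage probability relative to a fixed antenna, which is the qualitative benefit the scheme trades against full instantaneous-CSI selection.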

  20. Information Model and Its Element for Displaying Information on Technical Condition of Objects of Integrated Information System

    OpenAIRE

    Kovalenko, Anna; Smirnov, Alexey; Kovalenko, Alexander; Dorensky, Alexander; Коваленко, А. С.; Смірнов, О. А.; Коваленко, О. В.; Доренський, О. П.

    2016-01-01

    The suggested information elements for the system displaying information on the technical condition of the integrated information system meet the essential requirements of information presentation; they correspond to the real object simply and accurately. The suggested model of displaying information on the technical condition of the objects of the integrated information system improves the efficiency of the technical-diagnostics operator in evaluating the information about the...

  1. Canonical analysis based on mutual information

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2015-01-01

    combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. While CCA is ideal for Gaussian data, CIA facilitates...
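    The idea of replacing correlation with mutual information can be sketched as a crude random search over 1-D projections of two variable sets, scored by a plug-in histogram MI estimate. The data, bin count, and search strategy below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(u, v, bins=16):
    """Plug-in MI estimate (nats) from a 2-D histogram."""
    p_uv, _, _ = np.histogram2d(u, v, bins=bins)
    p_uv = p_uv / p_uv.sum()
    p_u = p_uv.sum(axis=1, keepdims=True)   # marginal of u (column vector)
    p_v = p_uv.sum(axis=0, keepdims=True)   # marginal of v (row vector)
    nz = p_uv > 0
    return np.sum(p_uv[nz] * np.log(p_uv[nz] / (p_u @ p_v)[nz]))

# Two 2-D variable sets with a purely nonlinear coupling in one coordinate.
n = 5000
s = rng.normal(size=n)
X = np.c_[s, rng.normal(size=n)]
Y = np.c_[np.abs(s) + 0.1 * rng.normal(size=n), rng.normal(size=n)]

# Canonical information analysis, crudely: search for projection angles that
# maximize the MI of the two 1-D linear combinations (random-search sketch).
best = (-np.inf, None, None)
for _ in range(500):
    ta, tb = rng.uniform(0, np.pi, size=2)
    a = np.array([np.cos(ta), np.sin(ta)])
    b = np.array([np.cos(tb), np.sin(tb)])
    mi = mutual_info(X @ a, Y @ b)
    if mi > best[0]:
        best = (mi, a, b)
print("max MI found (nats):", best[0])
```

Here the coupling runs through |s|, whose linear correlation with s is near zero, so a second-order method like CCA would largely miss it while the MI score does not; this is the "actual joint distribution, not just second order statistics" point in the abstract.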

  2. Gamified Android Based Academic Information System

    Science.gov (United States)

    Setiana, Henry; Hansun, Seng

    2017-01-01

    Students are often lazy when it comes to studying, and how to motivate them is one of the problems in the educational world. To overcome this, we implement the gamification method in an Academic Information System. An Academic Information System is software used for providing information and arranging administration which connected…

  3. Distributed calibrating snow models using remotely sensed snow cover information

    Science.gov (United States)

    Li, H.

    2015-12-01

    Distributed calibration of snow models using remotely sensed snow cover information. Hongyi Li1, Tao Che1, Xin Li1, Jian Wang1. 1. Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou 730000, China. To improve the simulation accuracy of a snow model, remotely sensed snow cover data are used to calibrate the spatial parameters of the model. A physically based snow model is developed, and snow parameters, including snow surface roughness, new snow density and the critical threshold temperature distinguishing snowfall from rainfall, are spatially calibrated in this study. The study region, the Babaohe basin in northwestern China, has seasonal snow cover and complex terrain. The results indicate that spatial calibration of the snow model parameters makes the simulation results more reasonable, and the simulated snow accumulation days and plot-scale snow depths are better than those obtained with lumped calibration.
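    The idea of calibrating a spatial parameter cell by cell against remotely sensed snow cover can be sketched on a toy grid. The trivial snow scheme, the parameter values, and the use of snow-cover days as the matching criterion are all illustrative assumptions, not the paper's physically based model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy grid (3x3 cells) and one winter of daily forcing.
days, ny, nx = 120, 3, 3
temp = rng.normal(0.0, 4.0, size=(days, ny, nx))     # air temperature, deg C
precip = rng.gamma(0.5, 2.0, size=(days, ny, nx))    # precipitation, mm/day

def snow_cover_days(t_crit):
    """Snow-cover days per cell under a trivial accumulation/melt rule:
    precipitation falls as snow when temp < t_crit (the parameter being
    calibrated; scalar or per-cell field); snow vanishes on days above +2 C."""
    swe = np.zeros((ny, nx))
    covered = np.zeros((ny, nx))
    for d in range(days):
        swe = np.where(temp[d] < t_crit, swe + precip[d], swe)
        swe = np.where(temp[d] > 2.0, 0.0, swe)
        covered += swe > 0
    return covered

# "Observed" snow-cover days from a synthetic truth with a spatially varying
# critical temperature, standing in for the remote-sensing product.
t_true = rng.uniform(0.0, 2.0, size=(ny, nx))
obs = snow_cover_days(t_true)

# Distributed calibration: per cell, pick the candidate minimizing mismatch.
cands = np.linspace(-1.0, 3.0, 41)
err = np.stack([np.abs(snow_cover_days(tc) - obs) for tc in cands])
t_dist = cands[np.argmin(err, axis=0)]                 # per-cell optimum
t_lump = cands[int(np.argmin(err.sum(axis=(1, 2))))]   # one basin-wide value

print("distributed total error:", float(err.min(axis=0).sum()))
print("lumped total error     :", float(err.sum(axis=(1, 2)).min()))
```

By construction the per-cell (distributed) mismatch can never exceed the single-value (lumped) mismatch, which mirrors the improvement over lumped calibration reported in the abstract.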

  4. Contexts for concepts: Information modeling for semantic interoperability

    NARCIS (Netherlands)

    Oude Luttighuis, P.H.W.M.; Stap, R.E.; Quartel, D.

    2011-01-01

    Conceptual information modeling is a well-established practice, aimed at preparing the implementation of information systems, the specification of electronic message formats, and the design of information processes. Today's ever more connected world however poses new challenges for conceptual

  5. Some Basic Information on Information-Based Complexity Theory

    Science.gov (United States)

    1989-07-01

    field. Another way to model some aspects of scientific computing was introduced in 1988 by L. Blum, M. Shub and S. Smale [Blum, Shub & Smale, 1988]: an algebraic theory of computation over the real numbers ("On a Theory of Computation over the Real Numbers: NP-Completeness, Recursive Functions, and Turing..."). Beresford Parlett, Department of Mathematics, University of California, Berkeley, California 94720, July 1989.

  6. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  7. The influence of climatic changes on distribution pattern of six typical Kobresia species in Tibetan Plateau based on MaxEnt model and geographic information system

    Science.gov (United States)

    Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua

    2018-01-01

    The influence of climatic change on species distribution is currently of great interest in biogeography. Six typical Kobresia species, which are high-quality forage for local husbandry, are selected from the alpine grassland of the Tibetan Plateau (TP) as research objects, and their distribution changes are modeled for four periods using the MaxEnt model and GIS technology. The modeling results show that the distribution of these six typical Kobresia species in the TP was strongly affected by two factors: the annual precipitation and the precipitation in the wettest and driest quarters of the year. The most suitable habitats of K. pygmeae were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of the TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of the TP during the Last Interglacial period, and gradually merged into a larger area. K. robusta and K. tibetica were likewise located in the area around Qinghai Lake and the hinterland of the TP, but their two areas never merged. K. capillifolia were located in the area around Qinghai Lake and extended to the southwest of the original distribution area, whereas K. macrantha, which had the smallest distribution area among them, were mainly distributed along the Himalayan mountain chain. According to the changes in suitable habitat area over the four periods, the six Kobresia species can be divided into four types of "retreat/expansion" styles. All these change styles are the result of long-term adaptation of the different species to local climate changes in the TP and show the complexity of the relationships between species and climate. The results have positive reference value for the protection of species diversity and the sustainable development of local husbandry in the TP.

  8. Radiology information system: a workflow-based approach

    International Nuclear Information System (INIS)

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; Aalst, W.M.P. van der

    2009-01-01

    Introducing workflow management technology into healthcare is promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system for a radiology department as a use case. First, a workflow model of the typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. Legacy systems could be treated as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces to the standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes and that it enhanced process management in the department. It also provides a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, more process management functionality and more workflow-aware integration. The work of this paper is an initial endeavor toward introducing workflow management technology in healthcare. (orig.)

  9. Sensor-based interior modeling

    International Nuclear Information System (INIS)

    Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.

    1995-01-01

    Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation
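    As an illustration of the kind of surface extraction involved, the following sketch fits a planar primitive to noisy synthetic rangefinder points by least squares. The plane coefficients and noise level are invented; the actual system also fits quadrics and segments multiple objects:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic laser-rangefinder returns from a flat surface: points on the
# plane z = 0.5*x - 0.2*y + 1.0 (coefficients arbitrary) plus sensor noise.
n = 500
x, y = rng.uniform(-2, 2, size=(2, n))
z = 0.5 * x - 0.2 * y + 1.0 + rng.normal(0, 0.01, size=n)

# Least-squares fit of z = a*x + b*y + c, the planar primitive a modeling
# system might extract before assembling the task-space model.
A = np.c_[x, y, np.ones(n)]
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"fitted plane: z = {a:.3f}*x + {b:.3f}*y + {c:.3f}")

# Residual check: points far off the plane would be handed to other
# primitives (e.g., quadrics) in a full segmentation pipeline.
residual = np.abs(A @ np.array([a, b, c]) - z)
print("max residual:", residual.max())
```

The least-squares formulation is the standard workhorse here; robust variants (e.g., RANSAC-style sampling) would be needed when several surfaces are mixed in one scan.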

  10. Jeddah Historical Building Information Modelling "JHBIM" - Object Library

    Science.gov (United States)

    Baik, A.; Alitany, A.; Boehm, J.; Robson, S.

    2014-05-01

    Building Information Modelling (BIM) has been used at several heritage sites worldwide for conserving, documenting and managing historic buildings and for creating full engineering drawings and information. However, one of the most serious issues facing experts who wish to use Historical Building Information Modelling (HBIM) is modelling the complicated architectural elements of these historical buildings. Many of these outstanding architectural elements were designed and created on site to fit their exact location. Experts in Old Jeddah face the same issue in applying the BIM method to the city's historical buildings. The Saudi Arabian city has a long history and contains a large number of historic houses and buildings built since the 16th century. A BIM model of a historical building in Old Jeddah always takes a long time to produce, because the Hijazi architectural elements are unique and no library of such elements exists. This paper focuses on building a Hijazi architectural element library based on laser scanner and image survey data. This solution will reduce the time needed to complete an HBIM model and offer an in-depth, rich digital architectural element library for use in any heritage project in the Al-Balad district of Jeddah City.

  11. PROTOTYPE OF WEB BASED INFORMATION LITERACY TO ENHANCE STUDENT INFORMATION LITERACY SKILL IN STATE ISLAMIC HIGH SCHOOL INSAN CENDEKIA

    Directory of Open Access Journals (Sweden)

    Indah Kurnianingsih

    2017-07-01

    Full Text Available Abstract. An Information Literacy (IL) program is a library program that aims to improve the ability of library users to recognize when information is needed and to locate, evaluate, and use the needed information effectively. Information literacy learning is essential to teach and apply in education from the beginning of school, so that students are able to find and organize information effectively and efficiently, particularly with regard to school assignments and the learning process. At present, various educational institutions have begun to implement online learning models to improve the quality of teaching and research. Because of advances in information technology, information literacy programs should be adjusted to the needs of library users. The purpose of this study was to design a web-based information literacy model for school libraries. This research was conducted in several stages: identifying the needs for web-based IL, designing the web-based IL, determining the model and contents of a web-based IL tutorial, and creating a prototype of the web-based IL. The results showed that 90.74% of respondents stated a need for web-based IL learning. The prototype of web-based IL learning consists of six main units, combining the Big6 Skills model and the seven concepts of information literacy by Shapiro and Hughes. Five of the main units are Library Skill, Resource Skill, Research Skill, Reading Skill, and Presenting Literacy. This web-based information literacy prototype is expected to support information literacy learning in a holistic approach.

  12. Predicting Grain Growth in Nanocrystalline Materials: A Thermodynamic and Kinetic-Based Model Informed by High Temperature X-ray Diffraction Experiments

    Science.gov (United States)

    2014-10-01

    number of atomic layers forming the GB. The GB and lattice diffusion terms are D_D and D_L, respectively, and these can be given as D_D = D_D^0 exp(−Q_D... The model parameters are: the GB energy of pure Fe, γ0; the heat of segregation, ΔHseg; the activation energy in the lattice, QL; and the GB saturated solute excess, Γb0.

  13. Models, Data, Information and Hypotheses: Towards a More Effective Use of Simulation Models in Hydrology

    Science.gov (United States)

    Hooper, R. P.; Nearing, G. S.; Couch, A.; Condon, L. E.

    2017-12-01

    Water balance provides an organizing principle for rainfall-runoff models, both as a constraint (conservation of mass) and as a perspective that structures models as fluxes among various compartments and states. Given that fewer tunable model parameters provide a more powerful quantitative test of our understanding of hydrologic processes, a dilemma arises between the use of lumped conceptual models and fully distributed models: lumped models generally have fewer parameters than distributed models, but translating data collected at a physical location in the field into information that is meaningful to a lumped model is not straightforward. By contrast, physically distributed models using spatially explicit computational grids or meshes can more directly relate internal model states to field data, yet data are quite sparse relative to the information requirements of the model. Thus, a central challenge emerges: extracting information from data, whether in situ point measurements, gridded remotely sensed data, or eddy covariance measurements of uncertain footprint, to inform model parameter values or the dynamics of model state variables. The new National Water Model (NWM), developed by NOAA's Office of Water Prediction, offers an interesting test case for addressing this challenge. As currently constructed, the NWM translates between a gridded landscape structure and a geofabric of reach catchments defined by the NHDPlus. We are examining the application of the NWM to three different Critical Zone Observatory catchments (Shavers Creek (PA), Upper Sangamon River (IL), and Clear Creek (IA)) to explore the representation of groundwater-surface water exchange in these well-characterized basins. We contrast the effectiveness of different modeling approaches in extracting information from the available data sets.

  14. Utility-based early modulation of processing distracting stimulus information.

    Science.gov (United States)

    Wendt, Mike; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-12-10

    Humans are selective information processors who efficiently prevent goal-inappropriate stimulus information from gaining control over their actions. Nonetheless, stimuli which are both unnecessary for solving a current task and liable to cue an incorrect response (i.e., "distractors") frequently modulate task performance, even when consistently paired with a physical feature that makes them easily discernible from target stimuli. Current models of cognitive control assume adjustment of the processing of distractor information based on the overall distractor utility (e.g., predictive value regarding the appropriate response, likelihood of eliciting conflict with target processing). Although studies on distractor interference have supported the notion of utility-based processing adjustment, previous evidence is inconclusive regarding the specificity of this adjustment for distractor information and the stage(s) of processing affected. To assess the processing of distractors during sensory-perceptual phases, we applied EEG recording in a stimulus identification task involving successive distractor-target presentation and manipulated the overall distractor utility. Behavioral measures replicated previously found utility modulations of distractor interference. Crucially, distractor-evoked visual potentials (i.e., posterior N1) were more pronounced in high-utility than low-utility conditions. This effect generalized to distractors unrelated to the utility manipulation, providing evidence for item-unspecific adjustment of early distractor processing to the experienced utility of distractor information. Copyright © 2014 the authors.

  15. EPO or not-EPO? An evidence based informed consent.

    Science.gov (United States)

    Mezza, E; Piccoli, G B; Pacitti, A; Soragna, G; Bermond, F; Burdese, M; Gai, M; Motta, D; Jeantet, A; Merletti, F; Vineis, P; Segoloni, G P

    2004-04-01

    Informed consent is crucial in therapeutic choices; however, the forms presented to patients are often locally developed, and the information they contain may not be homogeneous. The aim was to prepare an evidence-based model for informed consent, applied to the case of erythropoietin (EPO) therapy, as a teaching tool for medical students. Methodological tools of Evidence-Based Medicine (EBM) were developed within the EBM course of the Medical School of Torino, Italy, as problem-solving and patient-information tools (5th-year students work in small groups under the supervision of statisticians, epidemiologists and experts in internal medicine, nephrology in this case). Methodological and ethical problems were identified: in the pre-dialysis field, evidence from randomized clinical trials (RCTs) is scant; how should evidence gathered in dialysis be used? How should implementation be dealt with? And the mass media? Does the drug choice need to be discussed with the patients? How should rare and severe side effects be handled? The evidence was searched for on Medline/Embase using keywords and free terms. About 680 papers were retrieved and screened. Consent forms available on the Internet were retrieved, and a general scheme was drawn up covering five areas: title, aim and targets (patients and family physicians); search strategies and updating; pros and cons of therapy; alternative options; and open questions. EBM may offer valuable tools for systematically approaching patient information; including this kind of exercise in medical school EBM courses may help enhance future physicians' awareness of correct communication with patients.

  16. OSIS: A PC-based oil spill information system

    International Nuclear Information System (INIS)

    Leech, M.V.; Tyler, A.; Wiltshire, M.

    1993-01-01

    Warren Spring Laboratory and BMT Ceemaid Ltd. are cooperating to produce an Oil Spill Information System (OSIS) that will have worldwide application. OSIS is based on EUROSPILL, a spill simulation model originally developed under programs sponsored by the European Commission and the Marine Pollution Control Unit of the United Kingdom government's Department of Transport. OSIS is implemented in the Microsoft Windows 3.x graphical environment on a personal computer. A variety of options enables the user to input information on continuous or instantaneous spills of different types of oil under variable environmental conditions, to simulate the fate of oil and the trajectory of a spill. Model results are presented in the forms of maps, charts, graphs, and tables, displayed in multiple windows on a color monitor. Color hard copy can be produced, and OSIS can be linked to other Windows software packages, providing the opportunity to create a suite of spill incident management tools

  17. Relay-based information broadcast in complex networks

    Science.gov (United States)

    Fan, Zhongyan; Han, Zeyu; Tang, Wallace K. S.; Lin, Dong

    2018-04-01

    Information broadcast (IB) is a critical process in complex networks, usually accomplished by a flooding mechanism. Although flooding is simple and requires no prior topological information, it consumes a lot of transmission overhead. The other extreme is tree-based broadcast (TB), in which information is disseminated via a spanning tree. It achieves minimal transmission overhead, but maintaining a spanning tree for every node is an obvious obstacle to implementation. Motivated by the success of scale-free network models for real-world networks, in this paper we investigate the issues in IB by considering an alternative solution between these two extremes. A novel relay-based broadcast (RB) mechanism is proposed that employs a subset of nodes as relays. Information is first forwarded to one of these relays and then re-disseminated to the others through the spanning tree rooted at that relay. This mechanism provides a trade-off between flooding and TB. On one hand, it saves a lot of transmission overhead compared to flooding; on the other hand, it costs much less maintenance than TB, as only a few spanning trees are needed. Based on two major criteria, namely transmission overhead and convergence time, the effectiveness of RB is confirmed. The impacts of relay assignment and network structure on performance are also studied in this work.
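    The trade-off can be made concrete on a toy graph: flooding costs one message per directed edge, while the relay scheme pays a few hops to reach a relay plus one spanning tree's worth of tree edges. The topology, relay set, and cost accounting below are illustrative assumptions, not the paper's model:

```python
from collections import deque

# Toy undirected network as an adjacency dict (invented topology).
G = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4, 5],
    4: [2, 3, 5], 5: [3, 4, 6], 6: [5],
}
n_nodes = len(G)
n_edges = sum(len(v) for v in G.values()) // 2

# Flooding: every node retransmits once -> one message per directed edge.
flood_msgs = 2 * n_edges

def rb_cost(source, relays):
    """Relay-based broadcast cost: hop distance from the source to its
    nearest relay, plus the (n_nodes - 1) edges of that relay's pre-built
    spanning tree used for re-dissemination."""
    dist, q = {source: 0}, deque([source])
    while q:                      # BFS hop distances from the source
        u = q.popleft()
        for v in G[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return min(dist[r] for r in relays) + (n_nodes - 1)

relays = {1, 4}   # small relay subset; relay selection is a design choice
print("flooding messages :", flood_msgs)
print("relay-based cost  :", rb_cost(6, relays))
```

On this graph flooding sends 18 messages while the relay scheme from node 6 costs 8 (2 hops to relay 4, then 6 tree edges), and only the two relays need maintained spanning trees rather than all seven nodes.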

  18. A proven knowledge-based approach to prioritizing process information

    Science.gov (United States)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
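    The filtering heuristic described, suppressing alarms whose known cause is already active and ranking what remains, can be sketched as follows. The alarm names, cause rules, and severities are invented examples, not AFS's actual knowledge base:

```python
# Minimal rule-based alarm prioritization in the spirit described:
# a consequence alarm is filtered out when one of its known causes is
# already active; the remaining primary alarms are ranked by severity.
CAUSES = {
    "LOW_FLOW": {"PUMP_TRIP"},              # LOW_FLOW is expected after PUMP_TRIP
    "HIGH_TEMP": {"LOW_FLOW", "PUMP_TRIP"}, # HIGH_TEMP follows either cause
}
SEVERITY = {"PUMP_TRIP": 3, "HIGH_TEMP": 2, "LOW_FLOW": 1, "DOOR_OPEN": 0}

def prioritize(active):
    active = set(active)
    primary = [a for a in active
               if not (CAUSES.get(a, set()) & active)]  # no active known cause
    return sorted(primary, key=lambda a: -SEVERITY.get(a, 0))

print(prioritize(["LOW_FLOW", "PUMP_TRIP", "HIGH_TEMP", "DOOR_OPEN"]))
# -> ['PUMP_TRIP', 'DOOR_OPEN']  (LOW_FLOW and HIGH_TEMP filtered as consequences)
```

Even this tiny rule set shows the effect the paper reports: during a transient the operator sees the primary events, not the cascade of expected consequences.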

  19. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases......, the classifier is trained on each cluster, which has reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....
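    The two-step scheme, cluster first and then train a simple classifier inside each cluster, can be sketched with plain NumPy. Synthetic 2-D data stands in for text vectors; the value of k, the tiny Lloyd's loop, and the per-cluster nearest-class-centroid classifier are illustrative choices, not the paper's exact models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 2-class data with two modes per class (stand-in for text vectors).
def make(mean, label, n=100):
    return rng.normal(mean, 0.3, size=(n, 2)), np.full(n, label)

parts = [make((0, 0), 0), make((3, 3), 0), make((0, 3), 1), make((3, 0), 1)]
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])

# Step 1: k-means clustering (tiny Lloyd's loop).
k = 4
C = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    assign = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.array([X[assign == j].mean(0) if np.any(assign == j) else C[j]
                  for j in range(k)])
assign = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)

# Step 2: one simple classifier per cluster -- per-class centroids fitted
# only on that cluster's (fewer, lower-variance) examples.
valid = [j for j in range(k) if np.any(assign == j)]
models = {j: {c: X[(assign == j) & (y == c)].mean(0)
              for c in np.unique(y[assign == j])} for j in valid}

def predict(x):
    d2 = ((C - x) ** 2).sum(-1)
    j = min(valid, key=lambda j: d2[j])   # route to nearest non-empty cluster
    cents = models[j]
    return min(cents, key=lambda c: float(((cents[c] - x) ** 2).sum()))

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print("training accuracy:", acc)
```

The design point matches the abstract: each per-cluster classifier sees fewer, more homogeneous examples, so a very simple model suffices where a single global classifier would need to be complex.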

  20. Using EPA Tools and Data Services to Inform Changes to Design Storm Definitions for Wastewater Utilities based on Climate Model Projections

    Science.gov (United States)

    Tryby, M.; Fries, J. S.; Baranowski, C.

    2014-12-01

    Extreme precipitation events can cause significant impacts to drinking water and wastewater utilities, including facility damage, water quality impacts, service interruptions and potential risks to human health and the environment due to localized flooding and combined sewer overflows (CSOs). These impacts will become more pronounced with the projected increases in frequency and intensity of extreme precipitation events due to climate change. To model the impacts of extreme precipitation events, wastewater utilities often develop Intensity, Duration, and Frequency (IDF) rainfall curves and "design storms" for use in the U.S. Environmental Protection Agency's (EPA) Storm Water Management Model (SWMM). Wastewater utilities use SWMM for planning, analysis, and facility design related to stormwater runoff, combined and sanitary sewers, and other drainage systems in urban and non-urban areas. SWMM tracks (1) the quantity and quality of runoff made within each sub-catchment; and (2) the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period made up of multiple time steps. In its current format, EPA SWMM does not consider climate change projection data. Climate change may affect the relationship between intensity, duration, and frequency described by past rainfall events. Therefore, EPA is integrating climate projection data available in the Climate Resilience Evaluation and Awareness Tool (CREAT) into SWMM. CREAT is a climate risk assessment tool for utilities that provides downscaled climate change projection data for changes in the amount of rainfall in a 24-hour period for various extreme precipitation events (e.g., from 5-year to 100-year storm events). Incorporating climate change projections into SWMM will provide wastewater utilities with more comprehensive data they can use in planning for future storm events, thereby reducing the impacts to the utility and customers served from flooding and stormwater issues.
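    The basic adjustment, scaling a historical 24-hour design storm by a projected change factor before running SWMM, can be sketched as plain arithmetic. The storm depth, change factor, and cumulative distribution below are invented for illustration and are not CREAT or SWMM outputs:

```python
# Sketch: adjust a 24-hour design storm hyetograph by a projected
# change factor of the kind a CREAT-style assessment might supply.
historical_depth_in = 4.8     # historical 24-hour design depth (illustrative)
projected_change = 1.15       # +15% projected depth (an assumed factor)

# Dimensionless cumulative distribution of a 24-h storm, given at 3-hour
# steps; the fractions are invented for illustration.
cum_frac = [0.0, 0.04, 0.10, 0.19, 0.66, 0.82, 0.90, 0.96, 1.0]

def hyetograph(total_depth):
    """Incremental depths (inches) per 3-h interval."""
    return [round(total_depth * (b - a), 3)
            for a, b in zip(cum_frac, cum_frac[1:])]

hist = hyetograph(historical_depth_in)
proj = hyetograph(historical_depth_in * projected_change)
print("historical:", hist, "total:", round(sum(hist), 2))
print("projected :", proj, "total:", round(sum(proj), 2))
```

The projected hyetograph keeps the storm's temporal shape but carries the larger total depth; in practice this time series would become the rain-gage input to a SWMM run.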

  1. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...
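    The layered idea, a base graph of indoor topology plus a reader-deployment mapping used to construct and refine trajectories from raw readings, can be sketched as follows. The floor plan, reader placement, and refinement rules are invented for illustration:

```python
# Base graph of indoor connectivity and an RFID reader-deployment mapping.
base_graph = {            # rooms/hallways and their direct connections
    "lobby": ["hall"], "hall": ["lobby", "room1", "room2"],
    "room1": ["hall"], "room2": ["hall"],
}
reader_at = {"R1": "lobby", "R2": "hall", "R3": "room2"}  # reader -> location

def refine_trajectory(readings):
    """Map raw RFID readings to locations, dropping immediate duplicates
    and flagging transitions that are impossible in the base graph
    (likely missed detections to be repaired by a fuller algorithm)."""
    traj, flags = [], []
    for reader in readings:
        loc = reader_at[reader]
        if traj and traj[-1] == loc:
            continue                       # duplicate reading, same cell
        if traj and loc not in base_graph[traj[-1]]:
            flags.append((traj[-1], loc))  # non-adjacent jump in the graph
        traj.append(loc)
    return traj, flags

traj, flags = refine_trajectory(["R1", "R1", "R2", "R3", "R1"])
print(traj)   # ['lobby', 'hall', 'room2', 'lobby']
print(flags)  # [('room2', 'lobby')] -- jump not adjacent in the base graph
```

Because the topology lives in the base graph and only `reader_at` is technology-specific, the same refinement logic could serve a Bluetooth deployment by swapping the mapping, which is the uniformity argument the abstract makes.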

  2. VALORA: database system for storing significant information used in behavior modelling in the biosphere

    Energy Technology Data Exchange (ETDEWEB)

    Valdes R, M. [CPHR, Calle 20, No. 4113 e/41 y 47, Playa, La Habana 11300 (Cuba); Aguero P, A.; Perez S, D.; Cancio P, D. [CIEMAT, Av. Complutense No. 22, 28040 Madrid (Spain)]. e-mail: zury@cphr.edu.cu

    2006-07-01

    Nuclear and radioactive facilities can release effluents containing radionuclides into the environment, where they disperse and/or accumulate in the atmosphere, on the land surface and in surface waters. Radiological impact assessments require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available and cannot be measured, so the assessment requires an extensive search of the literature for possible values of each parameter under conditions similar to those of the case under study, which can be laborious. This work describes the characteristics of the VALORA database system, developed to organize and automate significant information appearing in different sources (scientific or technical literature) on the parameters used in modelling the behaviour of pollutants in the environment, and on the values assigned to those parameters in assessments of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is one component of a set of computer tools intended to support the solution of pollutant dispersion and transfer models. (Author)

  3. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...

  4. Cognitive and social information based PSO

    African Journals Online (AJOL)

    memory information: a particle adjusts its position so that the swarm converges quickly on its food source, since it is apparent that to reach their destination particles instinctively make their velocity slow, moderate or fast. So availing only the information about the next position and direction is ...
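
    The cognitive and social information the abstract refers to enter the standard PSO velocity update as the personal-best and global-best terms. The sketch below uses conventional textbook coefficient values (not values from the article) and minimizes the sphere function as a stand-in objective.

```python
# Compact particle swarm optimization sketch: the velocity update combines
# inertia, a cognitive (personal-best) term, and a social (global-best) term.
# Coefficients are common textbook choices, not taken from the article.

import random

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(best)
```

    With these settings the swarm settles near the optimum at the origin; varying w, c1 and c2 changes how fast particles slow down as they approach it.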

  5. MODELLING PARTICIPATORY GEOGRAPHIC INFORMATION SYSTEM FOR CUSTOMARY LAND CONFLICT RESOLUTION

    Directory of Open Access Journals (Sweden)

    E. A. Gyamera

    2017-11-01

    Full Text Available Since land contributes to about 73 % of most countries' Gross Domestic Product (GDP), attention to land rights has increased tremendously worldwide. Conflicts over land have therefore become one of the major problems associated with land administration. However, the conventional mechanisms for land conflict resolution do not provide satisfactory results to disputants, due to various factors. This study sought to develop a framework for using a Participatory Geographic Information System (PGIS) for customary land conflict resolution. The framework was modelled using the Unified Modelling Language (UML). The PGIS framework, called the butterfly model, consists of three units, namely a Social Unit (SU), a Technical Unit (TU) and a Decision Making Unit (DMU). The name butterfly model was adopted for the framework based on its features and properties. The framework is therefore recommended for land conflict resolution in customary areas.

  6. Semantic concept-enriched dependence model for medical information retrieval.

    Science.gov (United States)

    Choi, Sungbin; Choi, Jinwook; Yoo, Sooyoung; Kim, Heechun; Lee, Youngho

    2014-02-01

    In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept's explicit importance in the query. By capturing implicit knowledge with regard to the query term relationships and incorporating them into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. Copyright © 2013 Elsevier Inc. All rights reserved.
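
    The general idea of rewarding documents that preserve a concept's implicit term dependency can be sketched as follows. This is a hedged illustration in the spirit of window-based dependence models, not the paper's actual SCDM formulation; the scoring functions, window size, and example documents are all invented for the demonstration.

```python
# Hedged sketch of concept-based dependence scoring: documents that keep the
# terms of one medical concept close together get extra credit on top of a
# plain unigram match. Not the paper's exact SCDM model.

import math

def unigram_score(doc, terms):
    """Log term-frequency match (placeholder for a real LM/BM25 score)."""
    return sum(math.log(1 + doc.count(t)) for t in terms)

def concept_window_score(doc, concept, window=8):
    """Count sliding windows in which all terms of one concept co-occur,
    rewarding documents that preserve the concept's implicit dependency."""
    hits = 0
    for i in range(max(len(doc) - window + 1, 1)):
        span = doc[i:i + window]
        if all(t in span for t in concept):
            hits += 1
    return math.log(1 + hits)

def scdm_like_score(doc, terms, concepts, lam=0.3):
    base = unigram_score(doc, terms)
    dep = sum(concept_window_score(doc, c) for c in concepts)
    return (1 - lam) * base + lam * dep

query = ["chronic", "kidney", "disease", "treatment"]
concepts = [["chronic", "kidney", "disease"]]  # one MetaMap-style concept (hypothetical)
doc_a = "treatment options for chronic kidney disease patients".split()
doc_b = "kidney donors rarely have chronic problems although some disease treatment was needed later".split()
print(scdm_like_score(doc_a, query, concepts) > scdm_like_score(doc_b, query, concepts))
```

    Both documents match all four query terms, but only doc_a keeps the concept's terms within one window, so it scores higher under the dependence-aware ranking.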

  7. Online Cancer Information Seeking: Applying and Extending the Comprehensive Model of Information Seeking.

    Science.gov (United States)

    Van Stee, Stephanie K; Yang, Qinghua

    2017-10-30

    This study applied the comprehensive model of information seeking (CMIS) to online cancer information and extended the model by incorporating an exogenous variable: interest in online health information exchange with health providers. A nationally representative sample from the Health Information National Trends Survey 4 Cycle 4 was analyzed to examine the extended CMIS in predicting online cancer information seeking. Findings from a structural equation model supported most of the hypotheses derived from the CMIS, as well as the extension of the model related to interest in online health information exchange. In particular, socioeconomic status, beliefs, and interest in online health information exchange predicted utility. Utility, in turn, predicted online cancer information seeking, as did information-carrier characteristics. An unexpected but important finding from the study was the significant, direct relationship between cancer worry and online cancer information seeking. Theoretical and practical implications are discussed.

  8. Constructing topic models of Internet of Things for information processing.

    Science.gov (United States)

    Xin, Jie; Cui, Zhiming; Zhang, Shukui; He, Tianxu; Li, Chunhua; Huang, Haojing

    2014-01-01

    The Internet of Things (IoT) is regarded as a remarkable development of modern information technology. There is abundant digital product data on the IoT, linked with multiple types of objects/entities. These associated entities carry rich information, usually in the form of query records. Therefore, constructing high-quality topic hierarchies that can capture the term distribution of each product record enables us to better understand users' search intent and benefits tasks such as taxonomy construction, recommendation systems, and other communications solutions for the future IoT. In this paper, we propose a novel record entity topic model (RETM) for the IoT environment that is associated with a set of entities and records, and a Gibbs sampling-based algorithm is proposed to learn the model. We conduct extensive experiments on real-world datasets and compare our approach with existing methods to demonstrate the advantage of our approach.
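
    To make the kind of inference involved concrete, here is a toy collapsed Gibbs sampler for a plain LDA-style topic model. It only illustrates the sampling machinery RETM builds on; RETM itself additionally conditions on entities and records, which this sketch omits, and the tiny corpus is invented.

```python
# Toy collapsed Gibbs sampler for a plain LDA-style topic model (K topics).
# Each iteration resamples every token's topic from its full conditional,
# maintaining doc-topic and topic-word count tables.

import random

def gibbs_lda(docs, K=2, alpha=0.1, beta=0.1, iters=200, seed=0):
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    z = [[rng.randrange(K) for _ in d] for d in docs]  # topic per token
    ndk = [[0] * K for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(K)]    # topic-word counts
    nk = [0] * K                         # topic totals
    for di, d in enumerate(docs):
        for ti, w in enumerate(d):
            k = z[di][ti]
            ndk[di][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for ti, w in enumerate(d):
                k = z[di][ti]
                ndk[di][k] -= 1; nkw[k][wid[w]] -= 1; nk[k] -= 1
                # full conditional p(z = k | all other assignments)
                weights = [(ndk[di][j] + alpha) * (nkw[j][wid[w]] + beta)
                           / (nk[j] + V * beta) for j in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[di][ti] = k
                ndk[di][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    return ndk

docs = [["phone", "case", "phone"], ["router", "sensor", "router"],
        ["phone", "charger"], ["sensor", "gateway", "router"]]
theta = gibbs_lda(docs)
print(theta)
```

    The returned doc-topic counts (smoothed by alpha) estimate each record's topic mixture; a record-entity model would add entity-specific count tables to the same sampling loop.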

  9. Constructing Topic Models of Internet of Things for Information Processing

    Directory of Open Access Journals (Sweden)

    Jie Xin

    2014-01-01

    Full Text Available The Internet of Things (IoT) is regarded as a remarkable development of modern information technology. There is abundant digital product data on the IoT, linked with multiple types of objects/entities. These associated entities carry rich information, usually in the form of query records. Therefore, constructing high quality topic hierarchies that can capture the term distribution of each product record enables us to better understand users’ search intent and benefits tasks such as taxonomy construction, recommendation systems, and other communications solutions for the future IoT. In this paper, we propose a novel record entity topic model (RETM) for the IoT environment that is associated with a set of entities and records, and a Gibbs sampling-based algorithm is proposed to learn the model. We conduct extensive experiments on real-world datasets and compare our approach with existing methods to demonstrate the advantage of our approach.

  10. Model of Employees Motivation Through Gamification of Information System

    Directory of Open Access Journals (Sweden)

    Jolanta Kostecka

    2015-05-01

    Full Text Available This article analyses the problem of motivating employees who work with an information system and whose work environment is full of monotonous, boring and repetitive tasks. On the basis of the literature, theoretical aspects of work motivation are analysed, and gamification is suggested as a way to solve this problem. Theoretical and practical aspects of gamer motivation and gamification are then analysed, and a model is proposed that joins the main aspects of employee needs and gamification. The proposed model is applied in practice to the example of accounting specialists. Based on the results of the research, the opportunities for motivating accounting specialists through gamification of an information system are evaluated.

  11. Predictors of reducing sexual and reproductive risk behaviors based on the information-motivation-behavioral skills (IMB) model among unmarried rural-to-urban female migrants in Shanghai, China.

    Directory of Open Access Journals (Sweden)

    Yong Cai

    Full Text Available BACKGROUND: Due to the increase of premarital sex and the lack of reproductive health services, unmarried rural-to-urban female migrants experience more sexual and reproductive health (SRH) risks. This study was designed to describe SRH-related knowledge, attitudes and risk behaviors among unmarried rural-to-urban female migrants, to examine the predictors of reducing sexual and reproductive risk behaviors based on the information-motivation-behavioral skills (IMB) model, and to describe the relationships between the constructs. METHODS: We conducted a cross-sectional study to assess SRH-related information, motivation, behavioral skills and preventive behaviors among unmarried rural-to-urban female migrants in Shanghai, one of the largest importers of migrant laborers in China. Structural equation modeling (SEM) was used to assess the IMB model. RESULTS: A total of 944 subjects completed the questionnaire. The mean age was 21.2 years (SD = 2.3; range 16 to 28). Over one-fourth of participants reported having had premarital sex (N = 261, 27.6%), and among these, 15.3% reported an unintended pregnancy and 14.6% an abortion. The final IMB model provided acceptable fit to the data (CFI = 0.99, RMSEA = 0.034). Reducing sexual and reproductive risk behaviors was significantly predicted by SRH-related information (β = 0.681, P<0.001) and behavioral skills (β = 0.239, P<0.001). Motivation (β = 0.479, P<0.001) was a significant indirect predictor of reducing sexual and reproductive risk behaviors, mediated through behavioral skills. CONCLUSIONS: The results highlight the importance and necessity of conducting reproductive health promotion among unmarried rural-to-urban female migrants in China. The IMB model can be used to predict reducing sexual and reproductive risk behaviors, and future interventions should focus on improving SRH-related information and behavioral skills.

  12. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    Full Text Available The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and the improved delivery of existing ones. Significantly, it allows citizens simple, quick and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user's experience and/or geographic location. This changes the concept of "service information". Both the infomediary sector and citizens now have new models for the production and dissemination of this type of information. Starting from the theoretical contextualization of aspects such as the datification of reality, the mobile registration of everyday experience, and the reinterpretation of service information, we analyze the role of open data in the public sector in Spain and its concrete application in building apps based on these data sets. The findings indicate that this is a phenomenon that will continue to grow, because these applications provide useful and efficient information for decision-making in everyday life.

  13. Alpine Windharvest: development of information base regarding potentials and the necessary technical, legal and socio-economic conditions for expanding wind energy in the Alpine Space - CFD modelling evaluation - Summary of WindSim CFD modelling procedure and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schaffner, B.; Cattin, R. [Meteotest, Berne (Switzerland)

    2005-07-01

    This report presents the development work carried out by the Swiss meteorology specialists of the company METEOTEST as part of a project carried out together with the Swiss wind-energy organisation 'Suisse Eole'. The framework for the project is the EU Interreg IIIB Alpine Space Programme, a European Community Initiative Programme funded by the European Regional Development Fund. The project investigated the use of digital relief analysis. The report describes the development of a basic information system to aid the investigation of the technical, legal and socio-economic conditions for the use of wind energy in the alpine area. The report deals with the use of computational fluid dynamics and wind simulation modelling techniques and their validation. Recommendations on the use of the results are made.

  14. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. A new approach is thereby proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it can realize the combination of statistics and dynamics to a certain extent.
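
    The experimental setup described above, a Lorenz-63 prediction model versus a periodically forced "reality", can be sketched numerically. The parameter values are the classic Lorenz (1963) ones; the periodic forcing term is a hypothetical stand-in for the paper's evolutionary function, and the forward-Euler integration is chosen only for brevity.

```python
# Sketch of the twin-experiment setup: the unforced Lorenz-63 system acts as
# the prediction model, and a periodically forced version generates the
# "observational data"; the growing gap between them is the model error.

import math

def lorenz_step(state, t, dt, forcing=0.0):
    """One forward-Euler step of Lorenz-63, with an optional periodic
    forcing term representing unmodeled 'reality'."""
    x, y, z = state
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dx = sigma * (y - x)
    dy = x * (rho - z) - y + forcing * math.sin(t)
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

dt, steps = 0.01, 500
model = truth = (1.0, 1.0, 1.0)
for n in range(steps):
    t = n * dt
    model = lorenz_step(model, t, dt)               # prediction model
    truth = lorenz_step(truth, t, dt, forcing=2.0)  # forced "reality"
error = sum((m - o) ** 2 for m, o in zip(model, truth)) ** 0.5
print(error)
```

    An EM-based correction scheme would search for a functional form of the forcing that shrinks this error over the historical record; the sketch only sets up the error it would minimize.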

  15. Model-Based Assurance Case+ (MBAC+): Tutorial on Modeling Radiation Hardness Assurance Activities

    Science.gov (United States)

    Austin, Rebekah; Label, Ken A.; Sampson, Mike J.; Evans, John; Witulski, Art; Sierawski, Brian; Karsai, Gabor; Mahadevan, Nag; Schrimpf, Ron; Reed, Robert A.

    2017-01-01

    This presentation will cover why modeling is useful for radiation hardness assurance cases, and also provide information on Model-Based Assurance Case+ (MBAC+), NASA's Reliability and Maintainability Template, and Fault Propagation Modeling.

  16. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
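
    The anticipative representation mentioned above can be illustrated numerically with its simplest instance: an information process built as a signal term plus a Brownian bridge pinned to zero at time T (a basic Gaussian (T, 0)-bridge). All parameter choices below are illustrative, not the paper's.

```python
# Numerical sketch: info_t = sigma * t * X + bridge_t, where X is the signal
# (a random variable) and bridge_t is a Brownian bridge vanishing at t = 0
# and t = T, so the noise disappears exactly when the signal is revealed.

import random, math

def brownian_bridge(T, n, rng):
    """Sample a Brownian bridge on [0, T] pinned to 0 at both ends."""
    dt = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))
    # bridge_t = W_t - (t / T) * W_T
    return [w_i - (i * dt / T) * w[-1] for i, w_i in enumerate(w)]

rng = random.Random(42)
T, n, sigma = 1.0, 1000, 1.0
X = rng.gauss(0.0, 1.0)          # the random variable carrying the signal
bridge = brownian_bridge(T, n, rng)
info = [sigma * (i * T / n) * X + b for i, b in enumerate(bridge)]
print(info[0], info[-1] - sigma * T * X)
```

    At t = 0 the process carries no information, and at t = T the bridge noise vanishes and the process reveals sigma * T * X exactly, which is the mechanism that makes such processes useful models of gradually revealed information.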

  17. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  18. Integrating Building Information Modeling and Green Building Certification: The BIM-LEED Application Model Development

    Science.gov (United States)

    Wu, Wei

    2010-01-01

    Building information modeling (BIM) and green building are currently two major trends in the architecture, engineering and construction (AEC) industry. This research recognizes the market demand for better solutions to achieve green building certification such as LEED in the United States. It proposes a new strategy based on the integration of BIM…

  19. A condensed review of the intelligent user modeling of information retrieval system

    International Nuclear Information System (INIS)

    Choi, Kwang

    2001-10-01

    This study discusses theoretical aspects of user modeling, modeling cases in commercial systems, and elements that need consideration when constructing user models. The results of this study are: 1) comprehensive prior analysis of system users is required to build a user model; 2) user information is collected from users directly and by inference; 3) a frame structure is suitable for building a user model; 4) a prototype user model, based on the prior user analysis, is essential to building a user model; 5) the user model builder has interactive information collection, inference, flexibility and model updating functions; 6) the user model builder has to reflect users' feedback

  20. Creation and usage of component model in projecting information systems

    OpenAIRE

    Urbonas, Paulius

    2004-01-01

    The purpose of this project was to create an information system using a component model. When building new information systems, the same models are often rebuilt from scratch. By realizing a system with a component model, old components can be reused when creating a new system. To demonstrate the advantages of the component model, an information system was created for the company "Vilseda". So that the created components can be reused in the future, they were designed according to their types (graphical user interface, data and function reques...