WorldWideScience

Sample records for common information model

  1. SatisFactory Common Information Data Exchange Model

    OpenAIRE

    CERTH

    2016-01-01

This deliverable defines the Common Information Data Exchange Model (CIDEM). The aim of CIDEM is to provide a model of the information elements (e.g. concepts, events, relations, interfaces) used for information exchange between components, as well as for the modelling work performed by other tasks (e.g. knowledge models to support human resources optimization). The CIDEM definition serves as a shared vocabulary that makes it possible to address the information needs of the SatisFactory framework components.

  2. A Transparent Translation from Legacy System Model into Common Information Model: Preprint

    Energy Technology Data Exchange (ETDEWEB)

Ding, Fei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Simpson, Jeffrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]

    2018-04-27

Advances in the smart grid are pushing utilities toward better monitoring, control, and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is key to the success of smart grid technologies, and it requires an efficient information exchange and data sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems were not developed using CIM, but using their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. Thus, this paper develops an approach and an open-source tool to translate legacy system models into the CIM format. The developed tool is tested on a commercial distribution management system, and simulation results prove its effectiveness.
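As a hedged illustration of what such a translation involves (the legacy record format below is invented, and only two CIM properties are shown), a toy converter might map a legacy line record onto CIM-style RDF/XML:

```python
# Toy translator from a hypothetical legacy line-record format to CIM-style
# RDF/XML. The legacy format is invented for illustration; real CIM profiles
# (IEC 61970) define the exact class and property names and full RDF framing.

def legacy_to_cim(record: str) -> str:
    """Parse 'LINE,<name>,<r_ohm>,<x_ohm>' and emit a CIM-style RDF fragment."""
    kind, name, r, x = record.split(",")
    assert kind == "LINE", "only line records are supported in this sketch"
    return (
        f'<cim:ACLineSegment rdf:ID="{name}">\n'
        f'  <cim:IdentifiedObject.name>{name}</cim:IdentifiedObject.name>\n'
        f'  <cim:ACLineSegment.r>{float(r)}</cim:ACLineSegment.r>\n'
        f'  <cim:ACLineSegment.x>{float(x)}</cim:ACLineSegment.x>\n'
        f'</cim:ACLineSegment>'
    )

print(legacy_to_cim("LINE,feeder1,0.12,0.35"))
```

A real translator would also resolve topology (terminals, connectivity nodes) and units, which is where most of the difficulty in legacy-model integration lies.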

  3. Standardized reporting of functioning information on ICF-based common metrics.

    Science.gov (United States)

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold

    2018-02-01

In clinical practice and research, a variety of clinical data collection tools are used to collect information on people's functioning for clinical practice, research, and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores from two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected with the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) was entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians
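The transformation-table step can be made concrete with a toy lookup. The logit values below are invented for illustration, not taken from the study; a real table is produced by fitting the Rasch model to the linked item responses:

```python
# Hypothetical transformation table: raw scores on two instruments mapped to a
# common interval-level metric (logits). All values are invented; a real table
# comes from a Rasch analysis of linked "super items" across instruments.
TABLE = {
    "SF-36":  {0: -2.1, 1: -0.9, 2: 0.0, 3: 1.1, 4: 2.3},
    "WHODAS": {0: -1.8, 1: -0.7, 2: 0.2, 3: 1.0, 4: 2.0},
}

def to_common_metric(instrument: str, raw: int) -> float:
    """Look up a raw score's location on the common metric."""
    return TABLE[instrument][raw]

def crosswalk(src: str, raw: int, dst: str) -> int:
    """Map a raw score on one instrument to the closest raw score on another."""
    loc = to_common_metric(src, raw)
    return min(TABLE[dst], key=lambda s: abs(TABLE[dst][s] - loc))

print(crosswalk("SF-36", 3, "WHODAS"))
```

This is the sense in which the common metric makes scores from different tools directly comparable: both are expressed on one interval scale before any comparison is made.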

  4. Modeling Common-Sense Decisions

    Science.gov (United States)

    Zak, Michail

This paper presents a methodology for the efficient synthesis of a dynamical model simulating a common-sense decision-making process. The approach is based on extending physics' First Principles to include the behavior of living systems. The new architecture consists of motor dynamics, which simulate the actual behavior of the object, and mental dynamics, which represent the evolution of the corresponding knowledge base and incorporate it, in the form of information flows, into the motor dynamics. The autonomy of the decision-making process is achieved by a feedback from mental to motor dynamics. This feedback replaces unavailable external information with an internal knowledge base stored in the mental model in the form of probability distributions.

  5. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    Science.gov (United States)

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

An increasing need for collaboration and resource sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.
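A minimal sketch of the standoff idea (the element names are simplified stand-ins, not the actual CDA or GrAF schemata): the text is stored once, and annotations point into it by character offsets rather than being embedded inline.

```python
# Minimal standoff annotation: the clinical text is stored once, and each
# annotation references it by character offsets instead of wrapping it inline.
# Element and attribute names here are invented; CDA and GrAF define their own.
import xml.etree.ElementTree as ET

text = "Patient denies chest pain."
root = ET.Element("annotatedDocument")
ET.SubElement(root, "text").text = text

ann = ET.SubElement(root, "annotation",
                    {"type": "problem", "assertion": "absent",
                     "start": "15", "end": "25"})
ann.text = text[15:25]  # "chest pain" -- a redundant copy, for readability only

xml_out = ET.tostring(root, encoding="unicode")
print(xml_out)
```

Because the offsets, not the markup, carry the anchoring, several annotation layers (sections, problems, relations) can coexist over the same text without conflicting, which is what makes wrapping separate CDA and GrAF parts in one document workable.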

  6. Transforming library service through information commons case studies for the digital age

    CERN Document Server

    Bailey, D Russell

    2008-01-01

    The Information Commons (IC) strives to unite all the facts and figures of the world into a resource available to everyone. Many academic libraries are considering implementing an information commons model that reflects the contemporary way patrons use resources. Others plan on revitalizing their libraries through configurations that easily integrate research, teaching, and learning with a digital focus. This invaluable guide provides the "how-to" information necessary for institutions considering the development of an information commons. Offering plain-speaking advice on what works, expert

  7. A Concept of Constructing a Common Information Space for High Tech Programs Using Information Analytical Systems

    Science.gov (United States)

    Zakharova, Alexandra A.; Kolegova, Olga A.; Nekrasova, Maria E.

    2016-04-01

The paper deals with the issues in program management used for engineering innovative products. The existing project management tools were analyzed. The aim is to develop a decision support system that takes into account the features of program management for high-tech products: research intensity, a high level of technical risk, unpredictable results due to the impact of various external factors, and the involvement of several implementing agencies. The need for involving experts and using intelligent techniques for information processing is demonstrated. A conceptual model of a common information space to support communication between members of the collaboration on high-tech programs has been developed. The structure and objectives of the information analysis system “Geokhod” were formulated in order to implement the conceptual model of a common information space in the program “Development and production of new class mining equipment – Geokhod”.

  8. CERIF: The Common European Research Information Format Model

    Directory of Open Access Journals (Sweden)

    Brigitte Jörg

    2010-09-01

Full Text Available With increased computing power more data than ever are being and will be produced, stored and (re-)used. Data are collected in databases, computed and annotated, or transformed by specific tools. The knowledge from data is documented in research publications, reports, presentations, or other types of files. The management of data and knowledge is difficult, and even more complicated is their re-use, exchange, or integration. To allow for quality analysis or integration across data sets and to ensure access to scientific knowledge, additional information - Research Information - has to be assigned to data and knowledge entities. We present the metadata model CERIF to add information to entities such as Publication, Project, Organisation, Person, Product, Patent, Service, Equipment, and Facility and to manage the semantically enhanced relationships between these entities in a formalized way. CERIF has been released as an EC Recommendation to European Member States in 2000. Here, we refer to the latest version CERIF 2008-1.0.

  9. Using social network analysis and agent-based modelling to explore information flow using common operational pictures for maritime search and rescue operations.

    Science.gov (United States)

    Baber, C; Stanton, N A; Atkinson, J; McMaster, R; Houghton, R J

    2013-01-01

    The concept of common operational pictures (COPs) is explored through the application of social network analysis (SNA) and agent-based modelling to a generic search and rescue (SAR) scenario. Comparing the command structure that might arise from standard operating procedures with the sort of structure that might arise from examining information-in-common, using SNA, shows how one structure could be more amenable to 'command' with the other being more amenable to 'control' - which is potentially more suited to complex multi-agency operations. An agent-based model is developed to examine the impact of information sharing with different forms of COPs. It is shown that networks using common relevant operational pictures (which provide subsets of relevant information to groups of agents based on shared function) could result in better sharing of information and a more resilient structure than networks that use a COP. SNA and agent-based modelling are used to compare different forms of COPs for maritime SAR operations. Different forms of COP change the communications structures in the socio-technical systems in which they operate, which has implications for future design and development of a COP.
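The command/control contrast can be sketched with a toy network comparison; the agents and links below are invented, but the structural point carries over: a star built around a single coordinator loses every link if the hub fails, while an information-in-common structure degrades gracefully.

```python
# Toy comparison of two communication structures for the same SAR agents: a
# hierarchical "command" star, and a flatter "control" network built around
# information-in-common. Agents and links are invented for illustration.
def degree(edges):
    """Count links per agent."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def without(node, edges):
    """Edges that survive if the given node drops out."""
    return [e for e in edges if node not in e]

# Star: the coordinator relays everything.
command = [("HQ", "boat1"), ("HQ", "boat2"), ("HQ", "helo"), ("HQ", "shore")]
# Agents with information in common talk directly.
control = [("boat1", "boat2"), ("boat1", "helo"), ("boat2", "shore"),
           ("helo", "shore"), ("boat2", "helo")]

for name, net in [("command", command), ("control", control)]:
    deg = degree(net)
    hub = max(deg, key=deg.get)
    print(name, "hub:", hub, "links left if hub fails:", len(without(hub, net)))
```

This degree-and-removal comparison is a crude proxy for the resilience argument; the paper's agent-based model additionally simulates how information actually propagates under each structure.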

  10. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
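A simplified sketch of the common/innovative split (CIV itself jointly estimates both via compressed sensing; the per-pixel median below is only a stand-in to make the decomposition concrete):

```python
# Toy decomposition of a video segment into a common frame plus sparse
# innovations. Taking the per-pixel median over the segment is a
# simplification standing in for CIV's compressed-sensing estimation.
import numpy as np

rng = np.random.default_rng(0)
background = rng.integers(0, 256, size=(4, 4)).astype(float)

frames = []
for t in range(5):
    f = background.copy()
    f[t % 4, 0] = 255.0  # a small moving "object": the innovative part
    frames.append(f)
frames = np.stack(frames)

common = np.median(frames, axis=0)   # visual information shared by the segment
innovations = frames - common        # per-frame differences, mostly zeros

print("nonzero fraction:", np.count_nonzero(innovations) / innovations.size)
```

The innovations array is sparse because only the moving object differs from the common frame, which is exactly the structure a sparsest-solution formulation is designed to exploit.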

  11. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini......-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  12. Information-Theoretic Inference of Common Ancestors

    Directory of Open Access Journals (Sweden)

    Bastian Steudel

    2015-04-01

Full Text Available A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.
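The intuition can be illustrated numerically: two noisy copies of a shared ancestor exhibit substantial mutual information, while independent observations give an estimate near zero. The generative parameters below are arbitrary choices for the illustration.

```python
# Estimating mutual information between paired binary observations. High mutual
# information among observed variables signals a common ancestor distributing
# that dependence; independent variables give a near-zero estimate.
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
    return mi

random.seed(1)
ancestor = [random.randint(0, 1) for _ in range(10_000)]
# Two noisy copies of a common ancestor (10% flip noise each) ...
a = [z if random.random() < 0.9 else 1 - z for z in ancestor]
b = [z if random.random() < 0.9 else 1 - z for z in ancestor]
# ... versus an independent sequence of coin flips.
c = [random.randint(0, 1) for _ in range(10_000)]

print("copies of ancestor:", mutual_information(a, b))
print("independent:       ", mutual_information(a, c))
```

The paper's inequality runs in the converse direction: from an observed lower bound on the dependence it infers that some common ancestor must exist in any DAG consistent with the data.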

  13. Joint Service Common Operating Environment (COE) Common Geographic Information System functional requirements

    Energy Technology Data Exchange (ETDEWEB)

    Meitzler, W.D.

    1992-06-01

In the context of this document and COE, the Geographic Information Systems (GIS) are decision support systems involving the integration of spatially referenced data in a problem solving environment. They are digital computer systems for capturing, processing, managing, displaying, modeling, and analyzing geographically referenced spatial data which are described by attribute data and location. The ability to perform spatial analysis and the ability to combine two or more data sets to create new spatial information differentiates a GIS from other computer mapping systems. While the CCGIS allows for data editing and input, its primary purpose is not to prepare data, but rather to manipulate, analyze, and clarify it. The CCGIS defined herein provides GIS services and resources including the spatial and map related functionality common to all subsystems contained within the COE suite of C4I systems. The CCGIS, which is an integral component of the COE concept, relies on the other COE standard components to provide the definition for other support computing services required.

  14. Reuse-oriented common structure discovery in assembly models

    Energy Technology Data Exchange (ETDEWEB)

Wang, Pan; Zhang Jie; Li, Yuan; Yu, Jian Feng [The Ministry of Education Key Lab of Contemporary Design and Integrated Manufacturing Technology, Northwestern Polytechnical University, Xi'an (China)]

    2017-01-15

    Discovering the common structures in assembly models provides designers with the commonalities that carry significant design knowledge across multiple products, which helps to improve design efficiency and accelerate the design process. In this paper, a discovery method has been developed to obtain the common structure in assembly models. First, this work proposes a graph descriptor that captures both the geometrical and topological information of the assembly model, in which shape vectors and link vectors quantitatively describe the part models and mating relationships, respectively. Then, a clustering step is introduced into the discovery, which clusters the similar parts by comparing the similarities between them. In addition, some rules are also provided to filter the frequent subgraphs in order to obtain the expected results. Compared with the existing method, the proposed approach could overcome the disadvantages by providing an independent description of the part model and taking into consideration the similar parts in assemblies, which leads to a more reasonable result. Finally, some experiments have been carried out and the experimental results demonstrate the effectiveness of the proposed approach.
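The clustering step can be sketched with invented shape vectors: parts whose descriptors are within a threshold distance of an existing group's representative are grouped as similar. The vectors and threshold below are illustrative only; the paper's descriptors encode real geometry and mating relationships.

```python
# Toy version of the part-clustering step: parts are described by shape
# vectors and grouped when their descriptors are close (Euclidean distance
# to the first member of each group). All values are invented.
def cluster(parts, threshold=0.5):
    groups = []
    for name, vec in parts.items():
        for g in groups:
            ref = parts[g[0]]
            if sum((a - b) ** 2 for a, b in zip(vec, ref)) ** 0.5 < threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

parts = {
    "bolt_a":  (1.0, 0.2, 0.1),
    "bolt_b":  (1.1, 0.2, 0.1),   # near-duplicate of bolt_a
    "bracket": (4.0, 2.0, 0.5),
    "bolt_c":  (0.9, 0.3, 0.1),
}
print(cluster(parts))
```

Grouping similar parts before frequent-subgraph mining is what lets near-identical fasteners be treated as one node type, so that recurring assembly structures are not missed due to trivial part variations.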

  15. Reuse-oriented common structure discovery in assembly models

    International Nuclear Information System (INIS)

    Wang, Pan; Zhang Jie; Li, Yuan; Yu, Jian Feng

    2017-01-01

Discovering the common structures in assembly models provides designers with the commonalities that carry significant design knowledge across multiple products, which helps to improve design efficiency and accelerate the design process. In this paper, a discovery method has been developed to obtain the common structure in assembly models. First, this work proposes a graph descriptor that captures both the geometrical and topological information of the assembly model, in which shape vectors and link vectors quantitatively describe the part models and mating relationships, respectively. Then, a clustering step is introduced into the discovery, which clusters the similar parts by comparing the similarities between them. In addition, some rules are also provided to filter the frequent subgraphs in order to obtain the expected results. Compared with the existing method, the proposed approach could overcome the disadvantages by providing an independent description of the part model and taking into consideration the similar parts in assemblies, which leads to a more reasonable result. Finally, some experiments have been carried out and the experimental results demonstrate the effectiveness of the proposed approach.

  16. Modeling Common-Sense Decisions in Artificial Intelligence

    Science.gov (United States)

    Zak, Michail

    2010-01-01

A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. This methodology is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other extreme. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the three most recent and relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72; and Complexity for Survival of Living Systems (NPO-43302), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 62. The methodology involves the concepts reported previously, albeit viewed from a different perspective. One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Models of motor dynamics are used to simulate the observable behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. Autonomy is imparted to the decision-making process by feedback from mental to motor dynamics. This feedback replaces unavailable external information by information stored in the internal knowledge base. 
Representation

  17. The Information Warfare Life Cycle Model

    Directory of Open Access Journals (Sweden)

    Brett van Niekerk

    2011-11-01

Full Text Available Information warfare (IW) is a dynamic and developing concept, which constitutes a number of disciplines. This paper aims to develop a life cycle model for information warfare that is applicable to all of the constituent disciplines. The model aims to be scalable and applicable to civilian and military incidents where information warfare tactics are employed. Existing information warfare models are discussed, and a new model is developed from the common aspects of these existing models. The proposed model is then applied to a variety of incidents to test its applicability and scalability. The proposed model is shown to be applicable to multiple disciplines of information warfare and is scalable, thus meeting the objectives of the model.

  19. Cross-cultural perspectives on physician and lay models of the common cold.

    Science.gov (United States)

    Baer, Roberta D; Weller, Susan C; de Alba García, Javier García; Rocha, Ana L Salcedo

    2008-06-01

We compare physicians and laypeople within and across cultures, focusing on similarities and differences across samples, to determine whether cultural differences or lay-professional differences have a greater effect on explanatory models of the common cold. Data on explanatory models for the common cold were collected from physicians and laypeople in South Texas and Guadalajara, Mexico. Structured interview materials were developed on the basis of open-ended interviews with samples of lay informants at each locale. A structured questionnaire was used to collect information from each sample on causes, symptoms, and treatments for the common cold. Consensus analysis was used to estimate the cultural beliefs for each sample. Instead of systematic differences between samples based on nationality or level of professional training, all four samples largely shared a single explanatory model of the common cold, with some differences on subthemes, such as the role of hot and cold forces in the etiology of the common cold. An evaluation of our findings indicates that, although there has been conjecture about whether cultural or lay-professional differences are of greater importance in understanding variation in explanatory models of disease and illness, systematic data collected on community and professional beliefs indicate that such differences may be a function of the specific illness. Further generalizations about lay-professional differences need to be based on detailed data for a variety of illnesses, to discern patterns that may be present. Finally, a systematic approach indicates that agreement across individual explanatory models is sufficient to allow for a community-level explanatory model of the common cold.

  20. Multitask TSK fuzzy system modeling by mining intertask common hidden structure.

    Science.gov (United States)

    Jiang, Yizhang; Chung, Fu-Lai; Ishibuchi, Hisao; Deng, Zhaohong; Wang, Shitong

    2015-03-01

The classical fuzzy system modeling methods implicitly assume data generated from a single task, which is often not in accordance with practical scenarios where data are acquired from the perspective of multiple tasks. Although one can build an individual fuzzy system model for each task, this individual modeling approach yields poor generalization because it ignores the intertask hidden correlation. In order to circumvent this shortcoming, we consider a general framework for preserving the independent information among different tasks and mining the hidden correlation information among all tasks in multitask fuzzy modeling. In this framework, a low-dimensional subspace (structure) is assumed to be shared among all tasks and hence to constitute the hidden correlation information among all tasks. Under this framework, a multitask Takagi-Sugeno-Kang (TSK) fuzzy system model called MTCS-TSK-FS (TSK-FS for multiple tasks with common hidden structure), based on the classical L2-norm TSK fuzzy system, is proposed in this paper. The proposed model can not only take advantage of independent sample information from the original space for each task, but also effectively use the intertask common hidden structure among multiple tasks to enhance the generalization performance of the built fuzzy systems. Experiments on synthetic and real-world datasets demonstrate the applicability and distinctive performance of the proposed multitask fuzzy system model in multitask regression learning scenarios.

  1. Freeing data through The Polar Information Commons

    Science.gov (United States)

    de Bruin, T.; Chen, R. S.; Parsons, M. A.; Carlson, D. J.; Cass, K.; Finney, K.; Wilbanks, J.; Jochum, K.

    2010-12-01

    The polar regions are changing rapidly with dramatic global effect. Wise management of resources, improved decision support, and effective international cooperation on resource and geopolitical issues require deeper understanding and better prediction of these changes. Unfortunately, polar data and information remain scattered, scarce, and sporadic. Inspired by the Antarctic Treaty of 1959 that established the Antarctic as a global commons to be used only for peaceful purposes and scientific research, we assert that data and information about the polar regions are themselves “public goods” that should be shared ethically and with minimal constraint. ICSU’s Committee on Data (CODATA) therefore started the Polar Information Commons (PIC) as an open, virtual repository for vital scientific data and information. The PIC provides a shared, community-based cyber-infrastructure fostering innovation, improving scientific efficiency, and encouraging participation in polar research, education, planning, and management. The PIC builds on the legacy of the International Polar Year (IPY), providing a long-term framework for access to and preservation of both existing and future data and information about the polar regions. Rapid change demands rapid data access. The PIC system enables scientists to quickly expose their data to the world and share them through open protocols on the Internet. A PIC digital label will alert users and data centers to new polar data and ensure that usage rights are clear. The PIC utilizes the Science Commons Protocol for Implementing Open Access Data, which promotes open data access through the public domain coupled with community norms of practice to ensure use of data in a fair and equitable manner. A set of PIC norms has been developed in consultation with key polar data organizations and other stakeholders. We welcome inputs from the broad science community as we further develop and refine the PIC approach and move ahead with

  2. Constructing Common Information Space across Distributed Emergency Medical Teams

    DEFF Research Database (Denmark)

    Zhang, Zhan; Sarcevic, Aleksandra; Bossen, Claus

    2017-01-01

This paper examines coordination and real-time information sharing across four emergency medical teams in a high-risk and distributed setting as they provide care to critically injured patients within the first hour after injury. Through multiple field studies we explored how common understanding of critical patient data is established across these heterogeneous teams and what coordination mechanisms are being used to support information sharing and interpretation. To analyze the data, we drew on the concept of Common Information Spaces (CIS). Our results showed that teams faced many challenges in achieving efficient information sharing and coordination, including difficulties in locating and assembling team members, communicating and interpreting information from the field, and accommodating differences in team goals and information needs, all while having minimal technology support. We reflect...

  3. The Common information space of the Training and Consulting Center design

    Directory of Open Access Journals (Sweden)

    Dorofeeva N.S.

    2017-04-01

Full Text Available The article describes the relevance of the research: an assessment of the market for educational and consulting services and of the competitive environment, based on an analysis of the regional innovation infrastructure. The results of designing the center's activity are presented; the concept underlying the center's functioning is TRIZ (the Theory of Inventive Problem Solving). The basic functional capabilities of the common information space (CIS) are formulated and justified in this research, the CIS structure is formed, the interfaces of the information resources in the CIS for interaction with potential users have been developed, and data modeling has been carried out.

  4. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
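The interval update at the core of the imprecise Dirichlet model can be sketched in a few lines. This is a minimal illustration of the standard IDM posterior-expectation formulas under a chosen learning parameter, not the paper's full analysis; the function name and all numbers are illustrative.

```python
# Sketch of the imprecise Dirichlet model (IDM) update for one alpha-factor.
# Assumes the standard IDM formulas: the posterior expectation is
# (n_k + s * t) / (n_total + s) as the prior mean t ranges over [t_low, t_high].

def idm_interval(n_k, n_total, s, t_low, t_high):
    """Lower and upper posterior expectations of an alpha-factor.

    n_k           -- number of observed events of multiplicity k
    n_total       -- total number of observed events
    s             -- learning parameter (the paper finds 1 to 10 reasonable)
    t_low, t_high -- prior lower/upper expectation of the alpha-factor
    """
    lower = (n_k + s * t_low) / (n_total + s)
    upper = (n_k + s * t_high) / (n_total + s)
    return lower, upper

# Example: 2 double failures in 50 events, with a vacuous prior [0, 1].
lo, hi = idm_interval(2, 50, s=2.0, t_low=0.0, t_high=1.0)
```

A larger s widens the interval and slows learning from data, which is exactly the trade-off the elicitation discussion above concerns.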

  5. The common information model CIM IEC 61968/61970 and 62325 : a practical introduction to the CIM

    CERN Document Server

    Uslar, Mathias; Rohjans, Sebastian; Trefke, Jörn; Vasquez Gonzalez, Jose Manuel

    2012-01-01

    Within the Smart Grid, the combination of automation equipment, communication technology and IT is crucial. Interoperability of devices and systems can be seen as the key enabler of smart grids. Therefore, international initiatives have been started in order to identify interoperability core standards for Smart Grids.   IEC 62357, the so called Seamless Integration Architecture, is one of these very core standards, which has been identified by recent Smart Grid initiatives and roadmaps to be essential for building and managing intelligent power systems. The Seamless Integration Architecture provides an overview of the interoperability and relations between further standards from IEC TC 57 like the IEC 61970/61968: Common Information Model - CIM.   CIM has proven to be a mature standard for interoperability and engineering; consequently, it is a cornerstone of the IEC Smart Grid Standardization Roadmap. This book provides an overview on how the CIM developed, in which international projects and roadmaps is h...

  6. Information risk and security modeling

    Science.gov (United States)

    Zivic, Predrag

    2005-03-01

This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze the technical-level risk and security metrics of Common Criteria/ISO15408, the Centre for Internet Security guidelines, and NSA configuration guidelines, and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and of architectural guidelines such as ISO7498-2 will be explained. Business-process-level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics the presentation will explore the appropriate usage of these standards. The paper will discuss the standards' approaches to conducting risk and security metrics. The research findings demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all the mentioned standards. The proposed 3D visual presentation and development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and the defined risk and security space for modeling and measuring.

  7. LibQUAL+® and the Information Commons Initiative at Buffalo State College: 2003 to 2009

    Directory of Open Access Journals (Sweden)

    Eugene J. Harvey

    2013-06-01

Full Text Available Objective – To examine the effect of a transition to an information commons model of service organization on perceptions of library service quality. In 2003, the E. H. Butler Library at Buffalo State College began development of an Information Commons, which included moving the computing help desk to the library, reorganizing the physical units in the library around functional service areas, and moving the reference desk to the lobby. Methods – In 2003, 2006, and 2009, the library administered the LibQUAL+ survey, which measures the relationship between perceived library service delivery and library user satisfaction. The 2003 survey was conducted before the implementation of the Information Commons Initiative. Analyses of variance were conducted to compare the effect of the service changes on users' perceptions of library service quality between the three data collection points, as well as to explore differences between undergraduate and graduate students. Results – The analyses revealed significant differences between the three data points, with significantly more positive perceptions of library service quality in 2006 and 2009 than in 2003. Comparisons between 2006 and 2009 were not statistically significant. In 2003, no significant differences were found between undergraduate and graduate students' perceptions. However, in 2006, undergraduate students perceived higher levels of service quality after the development of the Information Commons than graduate students did. This difference was maintained in 2009. Conclusion – The Information Commons has become a popular place for new programming, exhibits, workshops, and cultural events on campus. The library staff and administration have regained the respect of the campus community, as well as an appreciation for user-driven input and feedback and for ongoing assessment and evaluation.
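The analyses of variance reported in this record reduce to a one-way F statistic; a minimal sketch follows. The scores are invented for illustration and are not LibQUAL+ data.

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical service-quality ratings for three survey years (2003, 2006, 2009)
f_stat = one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

The F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom to judge significance.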

  8. State-Mandated (Mis)Information and Women's Endorsement of Common Abortion Myths.

    Science.gov (United States)

    Berglas, Nancy F; Gould, Heather; Turok, David K; Sanders, Jessica N; Perrucci, Alissa C; Roberts, Sarah C M

The extent to which state-mandated informed consent scripts affect women's knowledge about abortion is unknown. We examine women's endorsement of common abortion myths before and after receiving state-mandated information that included accurate and inaccurate statements about abortion. In Utah, women presenting for an abortion information visit completed baseline surveys (n = 494) and follow-up interviews 3 weeks later (n = 309). Women answered five items about abortion risks, indicating which of two statements was closer to the truth (as established by prior research) or responding "don't know." We developed a continuous myth endorsement scale (range, 0-1) and, using multivariable regression models, examined predictors of myth endorsement at baseline and change in myth endorsement from baseline to follow-up. At baseline, many women reported not knowing about abortion risks (range, 36%-70% across myths). Women who were younger, non-White, and had previously given birth but not had a prior abortion reported higher myth endorsement at baseline. Overall, myth endorsement decreased after the information visit (from 0.37 to 0.31; p < .001). However, endorsement of the myth that was included in the state script, describing inaccurate risks of depression and anxiety, increased at follow-up (from 0.47 to 0.52; p < .05). Lack of knowledge about the effects of abortion is common. Knowledge of information that was accurately presented or not referenced in state-mandated scripts increased. In contrast, inaccurate information was associated with decreases in women's knowledge about abortion, violating accepted principles of informed consent. State policies that require or result in the provision of inaccurate information should be reconsidered. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  9. Testing for a Common Volatility Process and Information Spillovers in Bivariate Financial Time Series Models

    NARCIS (Netherlands)

    J. Chen (Jinghui); M. Kobayashi (Masahito); M.J. McAleer (Michael)

    2016-01-01

    textabstractThe paper considers the problem as to whether financial returns have a common volatility process in the framework of stochastic volatility models that were suggested by Harvey et al. (1994). We propose a stochastic volatility version of the ARCH test proposed by Engle and Susmel (1993),
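For context, the classical residual-based ARCH LM test of Engle (1982), on which the Engle and Susmel line of work builds, can be sketched as a T·R² statistic from an auxiliary regression. This is the textbook test, not the authors' stochastic-volatility version; the simulated series and its coefficients are invented.

```python
import numpy as np

def arch_lm_stat(resid, lags=1):
    """Engle's ARCH LM statistic: regress squared residuals on their own
    lags and return T * R^2 (asymptotically chi-square with `lags` dof)."""
    e2 = np.asarray(resid, dtype=float) ** 2
    y = e2[lags:]
    X = np.column_stack(
        [np.ones(len(y))] + [e2[lags - j:len(e2) - j] for j in range(1, lags + 1)]
    )
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_aux = y - X @ beta
    r2 = 1.0 - (resid_aux @ resid_aux) / ((y - y.mean()) @ (y - y.mean()))
    return len(y) * r2

# Demo: iid noise versus a simulated ARCH(1) series
rng = np.random.default_rng(0)
z = rng.standard_normal(3000)
e = np.zeros(3000)
for t in range(1, 3000):
    e[t] = z[t] * np.sqrt(0.2 + 0.5 * e[t - 1] ** 2)

stat_noise = arch_lm_stat(z, lags=2)
stat_arch = arch_lm_stat(e, lags=2)
```

Under the null of no ARCH effects the statistic stays near its chi-square mean; conditional heteroskedasticity inflates it sharply.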

  10. Libre: Freeing Polar Data in an Information Commons

    Science.gov (United States)

    Duerr, R. E.; Parsons, M. A.

    2010-12-01

    As noted in the session description “The polar regions are at the forefront of modern environmental change, currently experiencing the largest and fastest changes in climate and environment”. Wise use of resources, astute management of our environment, improved decision support, and effective international cooperation on natural resource and geopolitical issues require a deeper understanding of, and an ability to predict change and its impact. Understanding and knowledge are built on data and information, yet polar information is scattered, scarce, and sporadic. Rapid change demands rapid data access. We envision a system where investigators quickly expose their data to the world and share them, without restriction, through open protocols on the Internet. A single giant, central archive is not practical for all polar data held around the world. Instead, we seek a collaborative, virtual space, where scientific data and information could be shared ethically and with minimal constraints. Inspired by the Antarctic Treaty of 1959 that established the Antarctic as a global commons to generate greater scientific understanding, the International Council of Science leads the Polar Information Commons (PIC). The PIC, engendered by the International Polar Year (IPY) and work on the IPY data policy, serves as an open, virtual repository for vital scientific data and information. An international network of scientific and data management organizations concerned with the scientific quality, integrity, and stewardship of data is developing the PIC. The PIC utilizes the Science Commons Protocol for Implementing Open Access Data, including establishment of community norms to encourage appropriate contributions to and use of PIC content. Data descriptions (metadata) are not necessarily registered in formal repositories or catalogues. They may simply be exposed to search engines or broadcast through syndication services such as RSS or Atom. The data are labeled or branded as part

  11. Common modelling approaches for training simulators for nuclear power plants

    International Nuclear Information System (INIS)

    1990-02-01

Training simulators for nuclear power plant operating staff have gained increasing importance over the last twenty years. One of the recommendations of the 1983 IAEA Specialists' Meeting on Nuclear Power Plant Training Simulators in Helsinki was to organize a Co-ordinated Research Programme (CRP) on some aspects of training simulators. The goal statement was: "To establish and maintain a common approach to modelling for nuclear training simulators based on defined training requirements". Before adopting this goal statement, the participants considered many alternatives for defining the common aspects of training simulator models, such as the programming language used, the nature of the simulator computer system, the size of the simulation computers, and the scope of simulation. The participants agreed that it was the training requirements that defined the need for a simulator, the scope of models and hence the type of computer complex that was required, and the criteria for fidelity and verification, and that the training requirements were therefore the most appropriate basis for the commonality of modelling approaches. It should be noted that the Co-ordinated Research Programme was restricted, for a variety of reasons, to considering only a few aspects of training simulators. This report reflects these limitations, and covers only the topics considered within the scope of the programme. The information in this document is intended as an aid for operating organizations to identify possible modelling approaches for training simulators for nuclear power plants. 33 refs

  12. Information Commons Features Cutting-Edge Conservation and Technology

    Science.gov (United States)

    Gilroy, Marilyn

    2011-01-01

    This article features Richard J. Klarchek Information Commons (IC) at Loyola University Chicago, an all-glass library building on the shore of Chicago's Lake Michigan that is not only a state-of-the-art digital research library and study space--it also runs on cutting-edge energy technology. The building has attracted attention and visitors from…

  13. Effects of Common Factors on Dynamics of Stocks Traded by Investors with Limited Information Capacity

    Directory of Open Access Journals (Sweden)

    Songtao Wu

    2017-01-01

Full Text Available An artificial stock market with an agent-based model is built to investigate the effects of different information characteristics of common factors on the dynamics of stock returns. Investors with limited information capacity update their beliefs based on the information they have obtained and processed, and optimize portfolios based on those beliefs. We find that as the relevant information characteristics change, the uncertainty of stock price returns rises and is higher than the uncertainty of intrinsic value returns. However, this increase is constrained by the limited information capacity of investors. At the same time, we also find that the dependence between stock price returns increases with the changing information environment. The uncertainty and dependency pertaining to prices show a positive relationship. However, the positive relationship is weakened when taking into account the features of intrinsic values, based on which prices are generated.

  14. A Research Agenda for the Common Core State Standards: What Information Do Policymakers Need?

    Science.gov (United States)

    Rentner, Diane Stark; Ferguson, Maria

    2014-01-01

    This report looks specifically at the information and data needs of policymakers related to the Common Core State Standards (CCSS) and the types of research that could provide this information. The ideas in this report were informed by a series of meetings and discussions about a possible research agenda for the Common Core, sponsored by the…

  15. Enhanced Publications: Data Models and Information Systems

    Directory of Open Access Journals (Sweden)

    Alessia Bardi

    2014-04-01

Full Text Available “Enhanced publications” are commonly understood as digital publications that consist of a mandatory narrative part (the description of the research conducted) plus related “parts”, such as datasets, other publications, images, tables, workflows, and devices. The state of the art in information systems for enhanced publications has today reached the point where some kind of common understanding is required, in order to provide the methodology and language for scientists to compare, analyse, or simply discuss the multitude of solutions in the field. In this paper, we thoroughly examine the literature with a two-fold aim: firstly, introducing the terminology required to describe and compare structural and semantic features of existing enhanced publication data models; secondly, proposing a classification of enhanced publication information systems based on their main functional goals.

  16. Toward common working tools: Arab League Documentation and Information Centre experience

    Energy Technology Data Exchange (ETDEWEB)

Redissi, M. [ALDOC (Tunisia)]

    1990-05-01

    The adoption of Arab common working tools in information handling has been one of the priorities of Arab League Documentation and Information Centre (ALDOC). Problems arising from the processing of Arabic language have been progressively settled. The Tunisian experience in the elimination of transliteration is worth mentioning. (author). 17 refs.

  17. Toward common working tools: Arab League Documentation and Information Centre experience

    International Nuclear Information System (INIS)

    Redissi, M.

    1990-05-01

    The adoption of Arab common working tools in information handling has been one of the priorities of Arab League Documentation and Information Centre (ALDOC). Problems arising from the processing of Arabic language have been progressively settled. The Tunisian experience in the elimination of transliteration is worth mentioning. (author). 17 refs

  18. High pressure common rail injection system modeling and control.

    Science.gov (United States)

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

In this paper, modeling and common-rail pressure control of the high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the high pressure common rail injection system, which contains three sub-models (a high pressure pump sub-model, a common rail sub-model and an injector sub-model), is a relatively complicated nonlinear system. The mathematical model is validated using Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate its performance, the proposed ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
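A toy version of the ESO-based iPI idea can be simulated in a few lines: a linear extended state observer estimates the plant's lumped unknown dynamics, and the control law cancels the estimate while a proportional term drives the tracking error down. The first-order plant, gains, and discretization below are illustrative assumptions, not the paper's HPCRIS model.

```python
# Toy ESO-based intelligent-PI loop. Assumed plant: y' = -2y + u + d with an
# unknown constant disturbance d; the controller only knows y' = F + alpha*u
# and lets the observer estimate the lumped term F = -2y + d.
dt, alpha, kp = 0.001, 1.0, 10.0
beta1, beta2 = 100.0, 2500.0     # linear ESO gains (observer poles at -50)
setpoint, d = 1.0, 0.5           # normalized pressure reference and disturbance
y, z1, z2 = 0.0, 0.0, 0.0        # plant output and ESO state estimates

for _ in range(5000):            # 5 s of simulated time, forward-Euler steps
    u = (kp * (setpoint - y) - z2) / alpha  # cancel z2 ~= F, add proportional action
    e = y - z1                              # observer innovation
    z1 += dt * (z2 + alpha * u + beta1 * e)
    z2 += dt * (beta2 * e)
    y += dt * (-2.0 * y + u + d)            # true plant, unknown to the controller
```

After the transient, y settles near the setpoint even though the controller never uses the plant's true coefficients, which is the practical appeal of such model-free schemes.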

  19. Language-Independent and Language-Specific Aspects of Early Literacy: An Evaluation of the Common Underlying Proficiency Model

    Science.gov (United States)

    Goodrich, J. Marc; Lonigan, Christopher J.

    2017-01-01

    According to the common underlying proficiency model (Cummins, 1981), as children acquire academic knowledge and skills in their first language, they also acquire language-independent information about those skills that can be applied when learning a second language. The purpose of this study was to evaluate the relevance of the common underlying…

  20. Potential Ramifications of Common Core State Standards Adoption on Information Literacy

    Directory of Open Access Journals (Sweden)

    Jacob Paul Eubanks

    2014-07-01

Full Text Available In the United States, the decline in jobs for high-school-educated workers and the proliferation of jobs for post-secondary-educated workers is driving the development of the Common Core State Standards. The Common Core State Standards theoretically shift K-12 pedagogy toward developing critical and extended thinking skills, preparing high school graduates for college and career readiness. This literature review explores the reasoning behind the shift to the Common Core State Standards and asks questions regarding the potential ramifications their adoption might have on post-secondary information literacy instruction.

  1. Detailed requirements document for common software of shuttle program information management system

    Science.gov (United States)

    Everette, J. M.; Bradfield, L. D.; Horton, C. L.

    1975-01-01

    Common software was investigated as a method for minimizing development and maintenance cost of the shuttle program information management system (SPIMS) applications while reducing the time-frame of their development. Those requirements satisfying these criteria are presented along with the stand-alone modules which may be used directly by applications. The SPIMS applications operating on the CYBER 74 computer, are specialized information management systems which use System 2000 as a data base manager. Common software provides the features to support user interactions on a CRT terminal using form input and command response capabilities. These features are available as subroutines to the applications.

  2. Common Mathematical Model of Fatigue Characteristics

    Directory of Open Access Journals (Sweden)

    Z. Maléř

    2004-01-01

Full Text Available This paper presents a new common mathematical model which is able to describe fatigue characteristics over the whole necessary range by one equation only:

log N = A(R) + B(R) · log Sa,

where A(R) = A·R² + B·R + C and B(R) = D·R² + E·R + F.

This model was verified by five sets of fatigue data taken from the literature and by three additional original fatigue sets of our own. The fatigue data usually covered the region of N = 10⁴ to 3 × 10⁶ cycles and stress ratios of R = -2 to 0.5. In all these cases the proposed model described the fatigue results with small scatter. Studying this model, the following knowledge was obtained:
– the parameter "stress ratio R" is a good physical characteristic
– the proposed model provides a good description of the eight collections of fatigue test results by one equation only
– the scatter of the results over the whole scope is only a little greater than that around the individual S/N curve
– using this model while testing may reduce the number of test samples and shorten the test time
– as the proposed model represents a common form of the S/N curve, it may be used for processing uniform objective fatigue life results, which may enable mutual comparison of fatigue characteristics.
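Because the fatigue model in this record is linear in its six coefficients, it can be fitted by ordinary least squares in one step. A sketch with synthetic data follows; the coefficient letters for B(R) are an assumption (the abstract's rendering of the second polynomial is garbled), and all numbers are invented.

```python
import numpy as np

def fit_fatigue(R, log_sa, log_n):
    """Fit log N = A(R) + B(R) * log Sa with quadratic A(R) and B(R) by
    linear least squares; returns [a2, a1, a0, b2, b1, b0]."""
    X = np.column_stack([R**2, R, np.ones_like(R),
                         R**2 * log_sa, R * log_sa, log_sa])
    coef, *_ = np.linalg.lstsq(X, log_n, rcond=None)
    return coef

# Synthetic check: generate exact data from known coefficients, then recover them.
true = np.array([0.1, -0.3, 7.0, 0.05, 0.2, -2.5])
Rg, Lg = np.meshgrid([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5],
                     [2.0, 2.2, 2.5, 2.8, 3.0])
R, log_sa = Rg.ravel(), Lg.ravel()
log_n = (true[0] * R**2 + true[1] * R + true[2]
         + (true[3] * R**2 + true[4] * R + true[5]) * log_sa)
coef = fit_fatigue(R, log_sa, log_n)
```

On real test data the same call returns the single set of coefficients that describes all S/N curves across the stress-ratio range at once, which is the record's main selling point.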

  3. Multiscale information modelling for heart morphogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Abdulla, T; Imms, R; Summers, R [Department of Electronic and Electrical Engineering, Loughborough University, Loughborough (United Kingdom); Schleich, J M, E-mail: T.Abdulla@lboro.ac.u [LTSI Signal and Image Processing Laboratory, University of Rennes 1, Rennes (France)

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  4. Multiscale information modelling for heart morphogenesis

    International Nuclear Information System (INIS)

    Abdulla, T; Imms, R; Summers, R; Schleich, J M

    2010-01-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  5. A Model for Information

    Directory of Open Access Journals (Sweden)

    Paul Walton

    2014-09-01

Full Text Available This paper uses an approach drawn from the ideas of computer systems modelling to produce a model for information itself. The model integrates evolutionary, static and dynamic views of information and highlights the relationship between symbolic content and the physical world. The model includes what information technology practitioners call “non-functional” attributes, which, for information, include information quality and information friction. The concepts developed in the model enable a richer understanding of Floridi’s questions “what is information?” and “the informational circle: how can information be assessed?” (which he numbers P1 and P12).

  6. Information density converges in dialogue: Towards an information-theoretic model.

    Science.gov (United States)

    Xu, Yang; Reitter, David

    2018-01-01

The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings rest primarily on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that differ from those of monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contributions in information content display a new converging pattern. We draw explanations for this pattern from multiple perspectives: Casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified. Copyright © 2017 Elsevier B.V. All rights reserved.
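Information density in this literature is usually operationalized as per-word surprisal under a language model. A toy version with a unigram model (rather than the n-gram or neural models such studies actually use) can be sketched as follows; the sentences are invented.

```python
import math
from collections import Counter

def information_density(sentences):
    """Mean per-word surprisal (in bits) of each sentence under a unigram
    model estimated from the corpus itself -- a toy stand-in for the
    language models used in entropy-rate studies."""
    words = [w for s in sentences for w in s.split()]
    counts, total = Counter(words), len(words)

    def surprisal(w):
        return -math.log2(counts[w] / total)

    return [sum(surprisal(w) for w in s.split()) / len(s.split())
            for s in sentences]

# A sentence of frequent words carries less information per word
# than a sentence of rare words.
densities = information_density(["a a a a", "b c d e"])
```

Tracking this quantity per speaker and per topic episode is what lets the study compare how the two interlocutors' contributions converge.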

  7. Detailed clinical models: representing knowledge, data and semantics in healthcare information technology.

    Science.gov (United States)

    Goossen, William T F

    2014-07-01

This paper will present an overview of the developmental effort in harmonizing clinical knowledge modeling using Detailed Clinical Models (DCMs), and will explain how it can contribute to the preservation of Electronic Health Record (EHR) data. Clinical knowledge modeling is vital for the management and preservation of EHRs and their data. Such modeling provides common data elements and terminology binding with the intention of capturing and managing clinical information over time and location, independent of technology. Any EHR data exchange without agreed clinical knowledge modeling will potentially result in loss of information. Many past attempts exist to model clinical knowledge for the benefit of semantic interoperability using standardized data representation and common terminologies. The objective of each project is similar with respect to consistent representation of clinical data, use of standardized terminologies, and an overall logical approach. However, the conceptual, logical, and technical expressions differ considerably from one clinical knowledge modeling approach to another. There are currently synergies under the Clinical Information Modeling Initiative (CIMI) to create a harmonized reference model for clinical knowledge models. The goal of the CIMI is to create a reference model and formalisms based on, for instance, the DCM (ISO/TS 13972), among other work. A global repository of DCMs may potentially be established in the future.

  8. Analysing commons to improve the design of volunteered geographic information repositories

    CSIR Research Space (South Africa)

    Van den Berg, H

    2011-06-01

Full Text Available ... that everyone with Internet access can join the user community. Volunteered geographic information (VGI) is a special case of user-generated content. Web 2.0 technologies have enabled user-generated commons, such as open source projects and Wikipedia; the former...

  9. Development of a common data model for scientific simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J. [Los Alamos National Lab., NM (United States); Butler, D.M. [Limit Point Systems, Inc. (United States); Matarazzo, C.; Miller, M. [Lawrence Livermore National Lab., CA (United States); Schoof, L. [Sandia National Lab., Albuquerque, NM (United States)

    1999-06-01

The problem of sharing data among scientific simulation models is a difficult and persistent one. Computational scientists employ an enormous variety of discrete approximations in modeling physical processes on computers. Problems occur when models based on different representations are required to exchange data with one another, or with some other software package. Within the DOE's Accelerated Strategic Computing Initiative (ASCI), a cross-disciplinary group called the Data Models and Formats (DMF) group has been working to develop a common data model. The current model comprises several layers of increasing semantic complexity. One of these layers is an abstract model based on set theory and topology called the fiber bundle kernel (FBK). This layer provides the flexibility needed to describe a wide range of mesh-approximated functions as well as other entities. This paper briefly describes the ASCI common data model, its mathematical basis, and ASCI prototype development. These prototypes include an object-oriented data management library developed at Los Alamos called the Common Data Model Library, or CDMlib, the Vector Bundle API from Lawrence Livermore National Laboratory, and the DMF API from Sandia National Laboratory.

  10. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and rests on some unrealistic assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, capturing the change in the failure probability distribution after each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters, such as the common cause shock rate and the component failure probability given a shock, p, through data analysis
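
    As an illustration only (not the paper's actual formulation), the idea of sequential shocks can be sketched as one discrete Markov step per shock: each shock maps the current failure-probability distribution over component states to a new one, with surviving components failing independently at some shock severity p.

    ```python
    from math import comb

    def apply_shock(dist, p):
        """One common-cause shock: each surviving component fails
        independently with probability p (binomial redistribution).
        dist[i] = probability that i of the n components have failed."""
        n = len(dist) - 1
        new = [0.0] * (n + 1)
        for i, mass in enumerate(dist):     # i components already failed
            for j in range(n - i + 1):      # j additional failures this shock
                new[i + j] += mass * comb(n - i, j) * p**j * (1 - p)**(n - i - j)
        return new

    # Start with 3 healthy components, then apply two shocks of
    # different (hypothetical) severities in sequence.
    dist = [1.0, 0.0, 0.0, 0.0]
    for p_shock in (0.1, 0.3):
        dist = apply_shock(dist, p_shock)

    print([round(x, 4) for x in dist])
    ```

    Each shock is applied to the distribution left by the previous one, which is what lets the shocks be treated separately rather than pooled into a single rate.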

  11. Data Model Management for Space Information Systems

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time-consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example, the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool.

  12. Modeling in the Common Core State Standards

    Science.gov (United States)

    Tam, Kai Chung

    2011-01-01

    The inclusion of modeling and applications into the mathematics curriculum has proven to be a challenging task over the last fifty years. The Common Core State Standards (CCSS) has made mathematical modeling both one of its Standards for Mathematical Practice and one of its Conceptual Categories. This article discusses the need for mathematical…

  13. PACC information management code for common cause failures analysis

    International Nuclear Information System (INIS)

    Ortega Prieto, P.; Garcia Gay, J.; Mira McWilliams, J.

    1987-01-01

    The purpose of this paper is to present the PACC code, which, through adequate data management, makes the task of computerized common-mode failure analysis easier. PACC processes and generates information in order to carry out the corresponding qualitative analysis, by means of the Boolean technique of transformation of variables, and the quantitative analysis either using one of several parametric methods or a direct database. As far as the qualitative analysis is concerned, the code creates several functional forms for the transformation equations according to the user's choice. These equations are subsequently processed by Boolean manipulation codes, such as SETS. The quantitative calculations of the code can be carried out in two different ways: either starting from a common cause database, or through parametric methods, such as the Binomial Failure Rate Method, the Basic Parameters Method or the Multiple Greek Letter Method, among others. (orig.)

  14. Hospital information system: reusability, designing, modelling, recommendations for implementing.

    Science.gov (United States)

    Huet, B

    1998-01-01

    The aims of this paper are to specify some essential conditions for building reuse models for hospital information systems (HIS) and to present an application for hospital clinical laboratories. Reusability is a general trend in software; however, reuse can involve a greater or lesser part of the design, classes, and programs, so a project involving reusability must be precisely defined. The introduction surveys trends in software, the stakes of reuse models for HIS, and the special use case constituted by an HIS. The three main parts of this paper are: 1) Designing a reuse model (which objects are common to several information systems?) 2) A reuse model for hospital clinical laboratories (a genspec object model is presented for all laboratories: biochemistry, bacteriology, parasitology, pharmacology, ...) 3) Recommendations for generating plug-compatible software components (a reuse model can be implemented as a framework; concrete factors that increase reusability are presented). In conclusion, reusability is a subtle exercise whose project scope must be carefully defined in advance.

  15. MRI information for commonly used otologic implants: review and update.

    Science.gov (United States)

    Azadarmaki, Roya; Tubbs, Rhonda; Chen, Douglas A; Shellock, Frank G

    2014-04-01

    To review information on magnetic resonance imaging (MRI) issues for commonly used otologic implants. Manufacturing companies, National Library of Medicine's online database, and an additional online database (www.MRIsafety.com). A literature review of the National Library of Medicine's online database with focus on MRI issues for otologic implants was performed. The MRI information on implants provided by manufacturers was reviewed. Baha and Ponto Pro osseointegrated implants' abutment and fixture and the implanted magnet of the Sophono Alpha 1 and 2 abutment-free systems are approved for 3-Tesla magnetic resonance (MR) systems. The external processors of these devices are MR Unsafe. Of the implants tested, middle ear ossicular prostheses, including stapes prostheses, except for the 1987 McGee prosthesis, are MR Conditional for 1.5-Tesla (and many are approved for 3-Tesla) MR systems. Cochlear implants with removable magnets are approved for patients undergoing MRI at 1.5 Tesla after magnet removal. The MED-EL PULSAR, SONATA, CONCERT, and CONCERT PIN cochlear implants can be used in patients undergoing MRI at 1.5 Tesla with application of a protective bandage. The MED-EL COMBI 40+ can be used in 0.2-Tesla MR systems. Implants made from nonmagnetic and nonconducting materials are MR Safe. Knowledge of MRI guidelines for commonly used otologic implants is important. Guidelines on MRI issues approved by the US Food and Drug Administration are not always the same compared with other parts of the world. This monograph provides a current reference for physicians on MRI issues for commonly used otologic implants.

  16. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
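
    For illustration only (the coefficients below are hypothetical, not from the cited study), the coefficient interpretations the article reviews can be sketched as:

    ```python
    import math

    # Multiple linear regression: y = b0 + b1*x1 + ...
    # b1 is the expected change in y per unit change in x1, other covariates held fixed.
    b1_linear = 2.5
    print(f"Linear: +1 unit in x1 -> {b1_linear:+.1f} units in y")

    # Logistic regression: log-odds(y=1) = b0 + b1*x1 + ...
    # exp(b1) is the odds ratio associated with a one-unit change in x1.
    b1_logit = 0.7
    odds_ratio = math.exp(b1_logit)
    print(f"Logistic: +1 unit in x1 multiplies the odds by {odds_ratio:.2f}")

    # Converting the linear predictor back to a probability
    # for given covariate values (hypothetical b0 = -1, x1 = 2):
    log_odds = -1.0 + b1_logit * 2
    prob = 1 / (1 + math.exp(-log_odds))
    print(f"P(y=1 | x1=2) = {prob:.3f}")
    ```

    The key distinction is that linear-model coefficients act additively on the outcome, while logistic-model coefficients act additively on the log-odds, hence multiplicatively on the odds.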

  17. Comparing the use of an online expert health network against common information sources to answer health questions.

    Science.gov (United States)

    Rhebergen, Martijn D F; Lenderink, Annet F; van Dijk, Frank J H; Hulshof, Carel T J

    2012-02-02

    Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%-54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference
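
    The reported interval can be checked with a standard Wald confidence interval for a difference of two proportions; note that the abstract's 43% comes from rounding the two rates (62% and 19%) before subtracting, while the unrounded difference prints as 42%.

    ```python
    import math

    correct_net, n_net = 58, 94     # intervention: online expert network
    correct_ctl, n_ctl = 24, 124    # control: common information sources

    p1, p2 = correct_net / n_net, correct_ctl / n_ctl
    diff = p1 - p2

    # Wald standard error for the difference of two independent proportions.
    se = math.sqrt(p1 * (1 - p1) / n_net + p2 * (1 - p2) / n_ctl)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se

    print(f"rates: {p1:.0%} vs {p2:.0%}")
    print(f"rate difference = {diff:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
    ```

    The computed 95% CI of 30%-54% matches the interval reported in the abstract.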

  18. Animal Models for Evaluation of Bone Implants and Devices: Comparative Bone Structure and Common Model Uses.

    Science.gov (United States)

    Wancket, L M

    2015-09-01

    Bone implants and devices are a rapidly growing field within biomedical research, and implants have the potential to significantly improve human and animal health. Animal models play a key role in initial product development and are important components of nonclinical data included in applications for regulatory approval. Pathologists are increasingly being asked to evaluate these models at the initial developmental and nonclinical biocompatibility testing stages, and it is important to understand the relative merits and deficiencies of various species when evaluating a new material or device. This article summarizes characteristics of the most commonly used species in studies of bone implant materials, including detailed information about the relevance of a particular model to human bone physiology and pathology. Species reviewed include mice, rats, rabbits, guinea pigs, dogs, sheep, goats, and nonhuman primates. Ultimately, a comprehensive understanding of the benefits and limitations of different model species will aid in rigorously evaluating a novel bone implant material or device. © The Author(s) 2015.

  19. Informal Marketing: A Commercialization Model Guided by Brazilian Jeitinho, Informality and Entrepreneurship

    Directory of Open Access Journals (Sweden)

    Gustavo Henrique Silva de Souza

    2014-08-01

    Full Text Available In Brazil, street vendors and hawkers, currently recognized as micro-entrepreneurs, commonly develop unconventional marketing strategies in informal markets, that is, strategies characterized by intuition, improvisation, and lawlessness. Remarkably, these marketing strategies have shown good sales results, raising the following question: what kind of marketing is this, which does not appear in marketing handbooks and is overlooked by leading authors in the field? Considering this problem, this study aims to propose an explanatory model for this “marketing” phenomenon, theoretically grounded and on an empirical basis, in the light of theories that address the psychological makeup of the Brazilian Jeitinho, the culture of informality, and entrepreneurship. Thus, we propose a concept that fills a gap in existing traditional marketing theory.

  20. Towards GLUE2 evolution of the computing element information model

    CERN Document Server

    Andreozzi, S; Field, L; Kónya, B

    2008-01-01

    A key advantage of Grid systems is the ability to share heterogeneous resources and services between traditional administrative and organizational domains. This ability enables virtual pools of resources to be created and assigned to groups of users. Resource awareness, the capability of users or user agents to have knowledge about the existence and state of resources, is required in order to utilize the resource. This awareness requires a description of the services and resources typically defined via a community-agreed information model. One of the most popular information models, used by a number of Grid infrastructures, is the GLUE Schema, which provides a common language for describing Grid resources. Other approaches exist; however, they follow different modeling strategies. The presence of different flavors of information models for Grid resources is a barrier for enabling inter-Grid interoperability. In order to solve this problem, the GLUE Working Group in the context of the Open Grid Forum was started. ...

  1. SWARMs Ontology: A Common Information Model for the Cooperation of Underwater Robots

    Science.gov (United States)

    Li, Xin; Bilbao, Sonia; Martín-Wanton, Tamara; Bastos, Joaquim; Rodriguez, Jonathan

    2017-01-01

    In order to facilitate cooperation between underwater robots, the robots must be able to exchange information with unambiguous meaning. However, the heterogeneity of information pertaining to different robots is a major obstruction. Therefore, this paper presents a networked ontology, named the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) ontology, to address information heterogeneity and enable robots to have the same understanding of exchanged information. The SWARMs ontology uses a core ontology to interrelate a set of domain-specific ontologies, including the mission and planning, the robotic vehicle, the communication and networking, and the environment recognition and sensing ontologies. In addition, the SWARMs ontology utilizes ontology constructs defined in the PR-OWL ontology to annotate context uncertainty based on the Multi-Entity Bayesian Network (MEBN) theory. Thus, the SWARMs ontology can provide both a formal specification for information that is necessarily exchanged between robots and a command and control entity, and also support for uncertainty reasoning. A scenario on chemical pollution monitoring is described and used to showcase how the SWARMs ontology can be instantiated, be extended, represent context uncertainty, and support uncertainty reasoning. PMID:28287468

  2. BIM. Building Information Model. Special issue; BIM. Building Information Model. Themanummer

    Energy Technology Data Exchange (ETDEWEB)

    Van Gelder, A.L.A. [Arta and Consultancy, Lage Zwaluwe (Netherlands); Van den Eijnden, P.A.A. [Stichting Marktwerking Installatietechniek, Zoetermeer (Netherlands); Veerman, J.; Mackaij, J.; Borst, E. [Royal Haskoning DHV, Nijmegen (Netherlands); Kruijsse, P.M.D. [Wolter en Dros, Amersfoort (Netherlands); Buma, W. [Merlijn Media, Waddinxveen (Netherlands); Bomhof, F.; Willems, P.H.; Boehms, M. [TNO, Delft (Netherlands); Hofman, M.; Verkerk, M. [ISSO, Rotterdam (Netherlands); Bodeving, M. [VIAC Installatie Adviseurs, Houten (Netherlands); Van Ravenswaaij, J.; Van Hoven, H. [BAM Techniek, Bunnik (Netherlands); Boeije, I.; Schalk, E. [Stabiplan, Bodegraven (Netherlands)

    2012-11-15

    A series of 14 articles illustrates the various aspects of the Building Information Model (BIM). The essence of BIM is to capture information about the building process and the building product. [Dutch] In 14 artikelen worden diverse aspecten m.b.t. het Building Information Model (BIM) belicht. De essentie van BIM is het vastleggen van informatie over het bouwproces en het bouwproduct.

  3. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  4. Detailed modeling of common rail fuel injection process

    NARCIS (Netherlands)

    Seykens, X.L.J.; Somers, L.M.T.; Baert, R.S.G.

    2005-01-01

    Modeling of fuel injection equipment is a tool that is used increasingly for explaining or predicting the effect of advanced diesel injection strategies on combustion and emissions. This paper reports on the modeling of the high-pressure part of a research type Heavy Duty Common Rail (CR) fuel

  5. Information acquisition during online decision-making : A model-based exploration using eye-tracking data

    NARCIS (Netherlands)

    Shi, W.; Wedel, M.; Pieters, R.

    2013-01-01

    We propose a model of eye-tracking data to understand information acquisition patterns on attribute-by-product matrices, which are common in online choice environments such as comparison websites. The objective is to investigate how consumers gather product and attribute information from moment to

  6. Enabling interoperability in planetary sciences and heliophysics: The case for an information model

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.

    2018-01-01

    The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) level standards for trusted digital archives, information model development, and metadata registries. Whereas controlled vocabularies provide a basic level of interoperability by supplying a common set of terms for communication between both machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The information model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example, agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type of interoperability most directly enabled by an information model.
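
    A toy sketch of the distinction drawn above (the terms and relations are illustrative placeholders, not the actual PDS4 schema): a controlled vocabulary fixes the set of terms, while an ontology adds typed relations between them, giving machines context beyond string matching.

    ```python
    # Controlled vocabulary: an agreed, flat set of terms.
    vocabulary = {"ObservationalProduct", "Instrument", "Target"}

    # Minimal ontology: (subject, relation, object) triples over those terms,
    # supplying the semantic context the vocabulary alone lacks.
    ontology = [
        ("ObservationalProduct", "observed_by", "Instrument"),
        ("ObservationalProduct", "has_target", "Target"),
    ]

    def related(term, triples):
        """All (relation, object) pairs reachable from `term` in one hop."""
        return {(rel, obj) for subj, rel, obj in triples if subj == term}

    print(sorted(related("ObservationalProduct", ontology)))
    ```

    A vocabulary lets two systems agree that a term exists; the ontology additionally lets them agree on how that term relates to others, which is the semantic-level interoperability discussed in the abstract.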

  7. Use of the International Classification of Functioning, Disability and Health (ICF) as a conceptual framework and common language for disability statistics and health information systems

    Directory of Open Access Journals (Sweden)

    Kostanjsek Nenad

    2011-05-01

    Full Text Available A common framework for describing functional status information is needed in order to make this information comparable and of value. The World Health Organization’s International Classification of Functioning, Disability and Health (ICF), which has been approved by all its member states, provides this common language and framework. The article provides an overview of the ICF taxonomy, introduces the conceptual model which underpins the ICF, and elaborates on how the ICF is used at the population and clinical levels. Furthermore, the article presents key features of the ICF tooling environment and outlines current and future developments of the classification.

  8. Creation of a common information system on the Republic of Kazakhstan radiation hazardous objects

    International Nuclear Information System (INIS)

    Kadyrzhanov, K.K.; Kuterbekov, K.A.; Lukashenko, S.N.; Morenko, V.S.; Glushchenko, V.N.

    2005-01-01

    Work on the creation of a common information system on radiation-hazardous objects in the territory of the Republic of Kazakhstan is considered; the system supports monitoring of the radiation situation and decision making in the conduct of environmental-protection measures. The information system is being formed on the basis of an up-to-date GIS platform, ArcGIS, and incorporates two databases: geographical and attributive

  9. Why common carrier and network neutrality principles apply to the Nationwide Health Information Network (NWHIN).

    Science.gov (United States)

    Gaynor, Mark; Lenert, Leslie; Wilson, Kristin D; Bradner, Scott

    2014-01-01

    The Office of the National Coordinator will be defining the architecture of the Nationwide Health Information Network (NWHIN) together with the proposed HealtheWay public/private partnership as a development and funding strategy. There are a number of open questions--for example, what is the best way to realize the benefits of health information exchange? How valuable are regional health information organizations in comparison with a more direct approach? What is the role of the carriers in delivering this service? The NWHIN is to exist for the public good, and thus shares many traits of the common law notion of 'common carriage' or 'public calling,' the modern term for which is network neutrality. Recent policy debates in Congress and resulting potential regulation have implications for key stakeholders within healthcare that use or provide services, and for those who exchange information. To date, there has been little policy debate or discussion about the implications of a neutral NWHIN. This paper frames the discussion for future policy debate in healthcare by providing a brief education and summary of the modern version of common carriage, of the key stakeholder positions in healthcare, and of the potential implications of the network neutrality debate within healthcare.

  10. One step forward, two steps back? The GMC, the common law and 'informed' consent.

    Science.gov (United States)

    Fovargue, Sara; Miola, José

    2010-08-01

    Until 2008, if doctors followed the General Medical Council's (GMC's) guidance on providing information prior to obtaining a patient's consent to treatment, they would be going beyond what was technically required by the law. It was hoped that the common law would catch up with this guidance and encourage respect for patients' autonomy by facilitating informed decision-making. Regrettably, this has not occurred. For once, the law's inability to keep up with changing medical practice and standards is not the problem. The authors argue that while the common law has moved forward and started to recognise the importance of patient autonomy and informed decision-making, the GMC has taken a step back in their 2008 guidance on consent. Indeed, doctors are now required to tell their patients less than they were in 1998 when the last guidance was produced. This is an unfortunate development and the authors urge the GMC to revisit their guidance.

  11. Towards GLUE 2: evolution of the computing element information model

    International Nuclear Information System (INIS)

    Andreozzi, S; Burke, S; Field, L; Konya, B

    2008-01-01

    A key advantage of Grid systems is the ability to share heterogeneous resources and services between traditional administrative and organizational domains. This ability enables virtual pools of resources to be created and assigned to groups of users. Resource awareness, the capability of users or user agents to have knowledge about the existence and state of resources, is required in order to utilize the resource. This awareness requires a description of the services and resources typically defined via a community-agreed information model. One of the most popular information models, used by a number of Grid infrastructures, is the GLUE Schema, which provides a common language for describing Grid resources. Other approaches exist; however, they follow different modeling strategies. The presence of different flavors of information models for Grid resources is a barrier for enabling inter-Grid interoperability. In order to solve this problem, the GLUE Working Group in the context of the Open Grid Forum was started. The purpose of the group is to oversee a major redesign of the GLUE Schema which should consider the successful modeling choices and flaws that have emerged from practical experience, as well as modeling choices from other initiatives. In this paper, we present the status of the new model for describing computing resources as the first output from the working group, with the aim of dissemination and soliciting feedback from the community.

  12. Sharing Data to Build a Medical Information Commons: From Bermuda to the Global Alliance.

    Science.gov (United States)

    Cook-Deegan, Robert; Ankeny, Rachel A; Maxson Jones, Kathryn

    2017-08-31

    The Human Genome Project modeled its open science ethos on nematode biology, most famously through daily release of DNA sequence data based on the 1996 Bermuda Principles. That open science philosophy persists, but daily, unfettered release of data has had to adapt to constraints occasioned by the use of data from individual people, broader use of data not only by scientists but also by clinicians and individuals, the global reach of genomic applications and diverse national privacy and research ethics laws, and the rising prominence of a diverse commercial genomics sector. The Global Alliance for Genomics and Health was established to enable the data sharing that is essential for making meaning of genomic variation. Data-sharing policies and practices will continue to evolve as researchers, health professionals, and individuals strive to construct a global medical and scientific information commons.

  13. The use of network theory to model disparate ship design information

    Science.gov (United States)

    Rigterink, Douglas; Piks, Rebecca; Singer, David J.

    2014-06-01

    This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a firefighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.
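
    A minimal sketch of the approach (the topology and node names are hypothetical, and plain adjacency counts are used instead of a graph library): each shipboard system is represented as an undirected graph of edges, and a common network metric such as mean degree can then be compared across systems.

    ```python
    from collections import defaultdict

    def mean_degree(edges):
        """Average node degree of an undirected graph given as edge pairs."""
        deg = defaultdict(int)
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        return sum(deg.values()) / len(deg)

    # Hypothetical intra-system edges for two of the three systems.
    passageways = [("bow", "mid"), ("mid", "stern"), ("mid", "deck2")]
    electrical = [("gen1", "bus"), ("gen2", "bus"), ("bus", "pump"), ("bus", "radar")]

    print(f"passageway mean degree: {mean_degree(passageways):.2f}")
    print(f"electrical mean degree: {mean_degree(electrical):.2f}")
    ```

    The same edge-list representation extends naturally to a multiplex network by tagging each edge with its system layer, which is what enables the cross-system community detection the abstract describes.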

  14. Summary of human social, cultural, behavioral (HSCB) modeling for information fusion panel discussion

    Science.gov (United States)

    Blasch, Erik; Salerno, John; Kadar, Ivan; Yang, Shanchieh J.; Fenstermacher, Laurie; Endsley, Mica; Grewe, Lynne

    2013-05-01

    During the SPIE 2012 conference, panelists convened to discuss "Real world issues and challenges in Human Social/Cultural/Behavioral modeling with Applications to Information Fusion." Each panelist presented their current trends and issues. The panel agreed on advanced situation modeling, working with users for situation awareness and sense-making, and HSCB context modeling in focusing research activities. Each panelist added different perspectives based on the domain of interest, such as physical, cyber, and social attacks, from which estimates and projections can be forecast. Additional techniques were also addressed, such as interest graphs, network modeling, and variable-length Markov models. This paper summarizes the panelists' discussions to highlight the common themes and the related contrasting approaches to the domains in which HSCB applies to information fusion applications.

  15. Developing common information elements for renewable energy systems: summary and proceedings of the SERI/AID workshop

    Energy Technology Data Exchange (ETDEWEB)

    Ashworth, J.H.; Neuendorffer, J.W.

    1980-06-01

    This report describes the activities, conclusions, and recommendations of the Workshop on Evaluation Systems for Renewable Energy Systems sponsored by the Agency for International Development and SERI, held 20-22 February 1980 in Golden, Colorado. The primary objective of the workshop was to explore whether it was possible to establish common information elements that would describe the operation and impact of renewable energy projects in developing countries. The workshop provided a forum for development program managers to discuss the information they would like to receive about renewable energy projects and to determine whether common data could be agreed on to facilitate information exchange among development organizations. Such information could be shared among institutions and used to make informed judgments on the economic, technical, and social feasibility of the technologies. Because developing countries and foreign assistance agencies will be financing an increasing number of renewable energy projects, these organizations need information on the field experience of renewable energy technologies. The report describes the substance of the workshop discussions, includes the papers presented on information systems and technology evaluation, and provides lists of important information elements generated by both the plenary sessions and the small working groups.

  16. Multidimensional Models of Information Need

    OpenAIRE

    Yun-jie (Calvin) Xu; Kai Huang (Joseph) Tan

    2009-01-01

    User studies in information science have recognised relevance as a multidimensional construct. An implication of multidimensional relevance is that a user's information need should be modeled by multiple data structures to represent different relevance dimensions. While the extant literature has attempted to model multiple dimensions of a user's information need, the fundamental assumption that a multidimensional model is better than a uni-dimensional model has not been addressed. This study ...

  17. The quality of information on three common ENT procedures on the Internet.

    LENUS (Irish Health Repository)

    2012-02-01

    BACKGROUND: The Internet hosts a large number of high-quality medical resources and poses seemingly endless opportunities to inform, teach, and connect professionals and patients alike. However, it is difficult for the lay person to distinguish accurate from inaccurate information. AIM: This study was undertaken in an attempt to assess the quality of information on otolaryngology available on the Internet. METHODS: Sixty appropriate websites, found using the search engines Yahoo and Google, were evaluated for completeness and accuracy using three commonly performed ENT operations: tonsillectomy (T), septoplasty (S), and myringoplasty (M). RESULTS: A total of 60 websites were evaluated (NT = 20, NM = 20, NS = 20). A total of 86.7% targeted the lay population and 13.3% targeted medical professionals. Of the sites evaluated, 35% included all critical information that patients should know prior to undergoing surgery, and over 94% of these were found to contain no inaccuracies. Negative bias towards the medical profession was detected in 3% of websites. CONCLUSIONS: In the current climate, with informed consent being of profound importance, the Internet represents a useful tool for both patients and surgeons.

  18. Conceptual models of information processing

    Science.gov (United States)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  19. Beware the tail that wags the dog: informal and formal models in biology.

    Science.gov (United States)

    Gunawardena, Jeremy

    2014-11-05

    Informal models have always been used in biology to guide thinking and devise experiments. In recent years, formal mathematical models have also been widely introduced. It is sometimes suggested that formal models are inherently superior to informal ones and that biology should develop along the lines of physics or economics by replacing the latter with the former. Here I suggest to the contrary that progress in biology requires a better integration of the formal with the informal. © 2014 Gunawardena. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  20. Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.

    Science.gov (United States)

    Greenberg, Ido; Shkolnisky, Yoel

    2017-11-01

    We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow one to assess the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20Å or better, even from class averages consisting of as few as three raw images per class. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. SWOT analysis on National Common Geospatial Information Service Platform of China

    Science.gov (United States)

    Zheng, Xinyan; He, Biao

    2010-11-01

    Currently, the trend of International Surveying and Mapping is shifting from map production to integrated service of geospatial information, such as GOS of the U.S. etc. Under this circumstance, the Surveying and Mapping of China is inevitably shifting from 4D product service to NCGISPC (National Common Geospatial Information Service Platform of China)-centered service. Although the State Bureau of Surveying and Mapping of China has already provided a great quantity of geospatial information services to various lines of business, such as emergency and disaster management, transportation, water resources, and agriculture, the shortcomings of the traditional service mode are increasingly obvious, due to the emerging requirements of e-government construction, the remarkable development of IT technology, and the emerging online geospatial service demands of various lines of business. NCGISPC, which aims to provide multiple authoritative online one-stop geospatial information services and APIs for further development to government, business, and the public, is now the strategic core of the SBSM (State Bureau of Surveying and Mapping of China). This paper focuses on the paradigm shift that NCGISPC brings about, using SWOT (Strength, Weakness, Opportunity and Threat) analysis, compared to the service mode based on 4D products. Though NCGISPC is still at an early stage, it represents the future service mode of geospatial information in China, and surely will have great impact not only on the construction of digital China, but also on the way that everyone uses geospatial information services.

  2. The use of network theory to model disparate ship design information

    Directory of Open Access Journals (Sweden)

    Douglas Rigterink

    2014-06-01

    Full Text Available This paper introduces the use of network theory to model and analyze disparate ship design information. This work will focus on a ship's distributed systems and their intra- and intersystem structures and interactions. The three systems to be analyzed are: a passageway system, an electrical system, and a fire fighting system. These systems will be analyzed individually using common network metrics to glean information regarding their structures and attributes. The systems will also be subjected to community detection algorithms both separately and as a multiplex network to compare their similarities, differences, and interactions. Network theory will be shown to be useful in the early design stage due to its simplicity and ability to model any shipboard system.
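
A minimal sketch of the kind of network metric the paper applies to shipboard systems. The node names and edges below are invented for illustration; the paper itself analyzes passageway, electrical, and fire-fighting systems with richer metrics and community detection.

```python
# Sketch: degree centrality on a toy shipboard system graph.
# All names and edges are hypothetical, not taken from the paper.

def degree_centrality(adjacency):
    """Degree centrality: degree / (n - 1) for each node."""
    n = len(adjacency)
    return {node: len(neigh) / (n - 1) for node, neigh in adjacency.items()}

# A hypothetical fragment of an electrical distribution network.
electrical = {
    "generator": {"switchboard"},
    "switchboard": {"generator", "panel_fwd", "panel_aft"},
    "panel_fwd": {"switchboard"},
    "panel_aft": {"switchboard"},
}

centrality = degree_centrality(electrical)
# The switchboard is the structural hub of this fragment.
print(max(centrality, key=centrality.get))  # → switchboard
```

In early-stage design, simple metrics like this flag hub components whose failure would disconnect large parts of a system.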

  4. Ultrascalable Techniques Applied to the Global Intelligence Community Information Awareness Common Operating Picture (IA COP)

    National Research Council Canada - National Science Library

    Valdes, Alfonso; Kadte, Jim

    2005-01-01

    The focus of this research is to develop detection, correlation, and representation approaches to address the needs of the Intelligence Community Information Awareness Common Operating Picture (IA COP...

  5. Mechanisms for integration of information models across related domains

    Science.gov (United States)

    Atkinson, Rob

    2010-05-01

    It is well recognised that there are opportunities and challenges in cross-disciplinary data integration. A significant barrier, however, is creating a conceptual model of the combined domains and the area of integration. For example, a groundwater domain application may require information from several related domains: geology, hydrology, water policy, etc. Each domain may have its own data holdings and conceptual models, but these will share various common concepts (eg. the concept of an aquifer). These areas of semantic overlap present significant challenges, first to choose a single representation (model) of a concept that appears in multiple disparate models, and then to harmonise these other models with the single representation. In addition, models may exist at different levels of abstraction depending on how closely aligned they are with a particular implementation. This makes it hard for modellers in one domain to introduce elements from another domain without either introducing a specific style of implementation, or conversely dealing with a set of abstract patterns that are hard to integrate with existing implementations. Models are easier to integrate if they are broken down into small units, with common concepts implemented using common models from well-known, and predictably managed shared libraries. This vision however requires development of a set of mechanisms (tools and procedures) for implementing and exploiting libraries of model components. These mechanisms need to handle publication, discovery, subscription, versioning and implementation of models in different forms. In this presentation a coherent suite of such mechanisms is proposed, using a scenario based on re-use of geosciences models. This approach forms the basis of a comprehensive strategy to empower domain modellers to create more interoperable systems. The strategy addresses a range of concerns and practices, and includes methodologies, an accessible toolkit, improvements to available

  6. An Improved Information Value Model Based on Gray Clustering for Landslide Susceptibility Mapping

    Directory of Open Access Journals (Sweden)

    Qianqian Ba

    2017-01-01

    Full Text Available Landslides, as geological hazards, cause significant casualties and economic losses. Therefore, it is necessary to identify areas prone to landslides for prevention work. This paper proposes an improved information value model based on gray clustering (IVM-GC) for landslide susceptibility mapping. This method uses the information value derived from an information value model to achieve susceptibility classification and weight determination of landslide predisposing factors and, hence, obtain the landslide susceptibility of each study unit based on the clustering analysis. Using a landslide inventory of Chongqing, China, which contains 8435 landslides, three landslide susceptibility maps were generated based on the common information value model (IVM), an information value model improved by an analytic hierarchy process (IVM-AHP), and our new improved model. Approximately 70% (5905) of the inventory landslides were used to generate the susceptibility maps, while the remaining 30% (2530) were used to validate the results. The training accuracies of the IVM, IVM-AHP and IVM-GC were 81.8%, 78.7% and 85.2%, respectively, and the prediction accuracies were 82.0%, 78.7% and 85.4%, respectively. The results demonstrate that all three methods perform well in evaluating landslide susceptibility. Among them, IVM-GC has the best performance.
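
A sketch of the basic information value (IV) computation that models such as IVM, IVM-AHP and IVM-GC build on: for each class of a predisposing factor, the log ratio of landslide density in the class to the overall landslide density. The slope-angle classes and cell counts below are invented for illustration.

```python
import math

# Sketch: information value per factor class. Counts are hypothetical.

def information_value(landslides_in_class, total_landslides,
                      cells_in_class, total_cells):
    """IV = ln(class landslide density / overall landslide density).
    Positive values mean the class is more landslide-prone than average."""
    class_density = landslides_in_class / cells_in_class
    overall_density = total_landslides / total_cells
    return math.log(class_density / overall_density)

# Hypothetical slope-angle classes: (landslide cells, total cells).
slope_classes = {"0-15": (50, 40000), "15-30": (500, 30000), ">30": (450, 10000)}
total_ls = sum(ls for ls, _ in slope_classes.values())    # 1000 landslide cells
total_cells = sum(c for _, c in slope_classes.values())   # 80000 cells

iv = {cls: information_value(ls, total_ls, c, total_cells)
      for cls, (ls, c) in slope_classes.items()}
# Steep slopes carry the most information about landslide occurrence here.
```

Summing such IVs across factors (weighted, in the improved variants) gives the susceptibility score of each study unit.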

  7. Item Information in the Rasch Model

    NARCIS (Netherlands)

    Engelen, Ron J.H.; van der Linden, Willem J.; Oosterloo, Sebe J.

    1988-01-01

    Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling
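
A generic illustration of Fisher information for a Rasch item, not the marginal/conditional comparison made in the paper: for response probability P = 1 / (1 + exp(-(theta - b))), one response contributes information P(1 - P), which peaks when ability theta equals the difficulty b.

```python
import math

# Sketch: Fisher information contributed by a single Rasch-model response.
# The difficulty value below is an arbitrary illustration.

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Information from one response: P * (1 - P)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

b = 0.0  # item difficulty
# Information is maximal where ability matches difficulty.
print(round(item_information(0.0, b), 2))  # → 0.25
```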

  8. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever-increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modeling

  9. Using and Disclosing Confidential Patient Information and The English Common Law: What are the Information Requirements of a Valid Consent?

    Science.gov (United States)

    Chico, Victoria; Taylor, Mark J

    2018-02-01

    The National Health Service in England and Wales is dependent upon the flow of confidential patient data. In the context of consent to the use of patient health data, insistence on the requirements of an 'informed' consent that are difficult to achieve will drive reliance on alternatives to consent. Here we argue that one can obtain a valid consent to the disclosure of confidential patient data, such that this disclosure would not amount to a breach of the common law duty of confidentiality, having provided less information than would typically be associated with an 'informed consent'. This position protects consent as a practicable legal basis for disclosure from debilitating uncertainty or impracticability and, perhaps counter-intuitively, promotes patient autonomy.

  10. The Relationship between Teacher Attitudes toward the Common Core State Standards and Informational Text

    Science.gov (United States)

    Estruch, Marcie Jane

    2018-01-01

    This study sought to determine the relationship between teachers' attitudes toward the Common Core State Standards and three predetermined factors. These factors were (1) teachers' attitudes toward the practicality of pedagogical shift three, balancing informational and literary texts, (2) teachers' attitudes toward school support with the…

  11. Multi-state Markov models for disease progression in the presence of informative examination times: an application to hepatitis C.

    Science.gov (United States)

    Sweeting, M J; Farewell, V T; De Angelis, D

    2010-05-20

    In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to assume an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.
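
A minimal sketch of the progressive multi-state process underlying such analyses (e.g. fibrosis stages leading to cirrhosis). The transition intensities below are invented; a real analysis would estimate them from interval-censored biopsy data, e.g. with the hidden-Markov-style formulation the paper develops.

```python
import random

# Sketch: simulate state-entry times of a unidirectional multi-state
# Markov process. Intensities are hypothetical, not from the paper.

def simulate_progression(rates, horizon, rng):
    """`rates[i]` is the transition intensity from state i to state i + 1.
    Returns the (time, state) path observed up to `horizon`."""
    t, path = 0.0, [(0.0, 0)]
    for state, rate in enumerate(rates):
        t += rng.expovariate(rate)  # exponential sojourn in current state
        if t > horizon:
            break
        path.append((t, state + 1))
    return path

rng = random.Random(42)
# Hypothetical intensities for mild -> moderate -> severe -> cirrhosis.
path = simulate_progression([0.20, 0.15, 0.10], horizon=30.0, rng=rng)
# Observing this path only at sparse, possibly informative examination
# times yields the interval-censored data such models are fitted to.
```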

  12. COMPLEMENTARITY OF HISTORIC BUILDING INFORMATION MODELLING AND GEOGRAPHIC INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    X. Yang

    2016-06-01

    Full Text Available In this paper, we discuss the potential of integrating both semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build a detailed 3D historic model. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and non-architectural information that are necessary for construction and management of buildings. GIS has potential in handling and managing spatial data, especially exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create the enriched model according to its complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling will be discussed in this paper: how to deal with the complex elements composing historic buildings in BIM and GIS environments, how to build the enriched historic model, and why to construct different levels of detail? By solving these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.

  13. Information System Model as a Mobbing Prevention: A Case Study

    Directory of Open Access Journals (Sweden)

    Ersin Karaman

    2014-06-01

    Full Text Available In this study, it is aimed to detect mobbing issues in Atatürk University, Economics and Administrative Science Faculty and provide an information system model to prevent mobbing and reduce the risk. The study consists of two parts: (i) detecting the mobbing situation via a questionnaire and (ii) designing an information system based on the findings of the first part. The questionnaire was applied to research assistants in the faculty. Five factors were analyzed and it is concluded that research assistants have not been exposed to mobbing, except that they have a mobbing perception about the task assignment process. Results show that task operational difficulty, task time and task period are the common mobbing issues. In order to develop an information system to cope with these issues, the assignment of exam proctors is addressed. Exam time, instructor location, classroom location and exam duration are considered as decision variables of the developed linear programming (LP) model. Coefficients of these variables and constraints of the LP model are specified in accordance with the findings. It is recommended that the proctor assignment process be conducted using this method to prevent and reduce the risk of mobbing perception in the organization.

  14. Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random

    Science.gov (United States)

    Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David

    2013-01-01

    Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…

  15. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS

    Directory of Open Access Journals (Sweden)

    Мария Васильевна Комова

    2013-09-01

    Full Text Available The social transformation is considered as a process of qualitative changes of the society, creating a new level of organization in all areas of life, in different social formations, societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on their understanding as the consequence of the information exchange processes in the society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis, comparison.Information, objectively existing in all elements and systems of the material world, is an integral attribute of the society transformation as well. The information model of social transformations is based on the definition of the society transformation as the change in the information that functions in the society’s information space. The study of social transformations is the study of information flows circulating in the society and being characterized by different spatial, temporal, and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by the factors representing the whole complex of material objects. The integrated information model of social transformations foresees the interaction of the following components: social memory, information space, and the social ideal. To determine the dynamics and intensity of social transformations the author uses the notions of "information threshold of social transformations" and "information pressure".Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold allows to simulate the course of social development, to predict the

  16. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox's (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar
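
One concrete way to "measure the amount and quality of information provided by models", as the abstract argues, is a proper information score such as the ignorance (log loss) score. The forecasts and outcome below are invented for illustration.

```python
import math

# Sketch: the ignorance score rewards a probabilistic model for placing
# probability on what actually occurs. Forecasts here are hypothetical.

def ignorance(forecast, outcome):
    """Negative log2 of the probability assigned to the outcome (bits)."""
    return -math.log2(forecast[outcome])

# Two hypothetical models forecasting a binary event.
sharp_model = {"flood": 0.9, "no_flood": 0.1}
vague_model = {"flood": 0.5, "no_flood": 0.5}

observed = "flood"
# The sharper, correct forecast carries more information (lower ignorance).
assert ignorance(sharp_model, observed) < ignorance(vague_model, observed)
```

Averaged over many observations, such scores quantify the epistemological component of a model: how much its predictive distributions tell us beyond a baseline.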

  17. Towards a common thermodynamic database for speciation models

    International Nuclear Information System (INIS)

    Lee, J. van der; Lomenech, C.

    2004-01-01

    Bio-geochemical speciation models and reactive transport models are reaching an operational stage, allowing simulation of complex dynamic experiments and description of field observations. For decades, the main focus has been on model performance but at present, the availability and reliability of thermodynamic data is the limiting factor of the models. Thermodynamic models applied to real and complex geochemical systems require much more extended thermodynamic databases with many minerals, colloidal phases, humic and fulvic acids, cementitious phases and (dissolved) organic complexing agents. Here we propose a methodological approach to achieve, ultimately, a common, operational database including the reactions and constants of these phases. Provided they are coherent with the general thermodynamic laws, sorption reactions are included as well. We therefore focus on sorption reactions and parameter values associated with specific sorption models. The case of sorption on goethite has been used to illustrate the way the methodology handles the problem of inconsistency and data quality. (orig.)

  18. Development of a QTL-environment-based predictive model for node addition rate in common bean.

    Science.gov (United States)

    Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J

    2017-05-01

    This work reports the effects of the genetic makeup, the environment and the genotype by environment interactions for node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments it is critical to understand the genotype by environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize node addition rate (NAR, node day⁻¹) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data of 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, 2 sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which had significant QTL by environment interactions (QEI), that is, Nar2 with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross-validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.

  19. Modeling study of solute transport in the unsaturated zone. Information and data sets. Volume 1

    International Nuclear Information System (INIS)

    Polzer, W.L.; Fuentes, H.R.; Springer, E.P.; Nyhan, J.W.

    1986-05-01

    The Environmental Science Group (HSE-12) is conducting a study to compare various approaches to modeling water and solute transport in porous media. Various groups representing different approaches will model a common set of transport data so that the state of the art in modeling and field experimentation can be discussed in a positive framework, with an assessment of current capabilities and future needs in this area of research. This paper provides information and sets of data that will be useful to the modelers in meeting the objectives of the modeling study. The information and data sets include: (1) a description of the experimental design and methods used in obtaining solute transport data, (2) supporting data that may be useful in modeling the data set of interest, and (3) the data set to be modeled.

  20. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain.

    Science.gov (United States)

    Moen, Hans; Ginter, Filip; Marsi, Erwin; Peltonen, Laura-Maria; Salakoski, Tapio; Salanterä, Sanna

    2015-01-01

    Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task.
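
A minimal sketch of retrieval by distributional similarity: each document is represented as the average of its word vectors and ranked by cosine similarity to a query episode. The tiny 3-dimensional vectors and token lists below are invented stand-ins for vectors that random indexing or word2vec would learn from clinical text.

```python
import math

# Sketch: rank care episodes by cosine similarity of averaged word vectors.
# All vectors, words, and episode names are hypothetical.

word_vectors = {
    "fever":    [0.9, 0.1, 0.0],
    "cough":    [0.8, 0.2, 0.1],
    "fracture": [0.0, 0.9, 0.2],
    "cast":     [0.1, 0.8, 0.3],
}

def doc_vector(words):
    """Average the word vectors of a document, dimension by dimension."""
    dims = zip(*(word_vectors[w] for w in words))
    return [sum(d) / len(words) for d in dims]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

query = doc_vector(["fever", "cough"])
episodes = {"ep1": ["cough", "fever"], "ep2": ["fracture", "cast"]}
ranked = sorted(episodes, key=lambda e: cosine(query, doc_vector(episodes[e])),
                reverse=True)
print(ranked[0])  # → ep1
```

The paper's ICD-10-informed variants change how the word vectors themselves are induced; the retrieval step stays essentially this simple.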

  1. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus will be on i) developing a model for information processing of NPP operators and ii) quantifying the model. To resolve the problems of the previous approaches based on information theory, i.e. the problems of single-channel approaches, we primarily develop an information processing model having multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory
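
The basic quantity behind such information-theoretic operator models is Shannon entropy, the expected number of bits per observation. A sketch, with invented event probabilities for a hypothetical annunciator display:

```python
import math

# Sketch: Shannon entropy as a measure of information to be processed.
# The probability distributions below are illustrative only.

def entropy(probs):
    """Shannon entropy in bits: H = -sum p * log2 p."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely alarm states carry 2 bits per observation.
print(entropy([0.25] * 4))  # → 2.0

# A skewed distribution carries less information per observation.
print(entropy([0.7, 0.1, 0.1, 0.1]) < 2.0)  # → True
```

Summing such quantities over the stages of a multi-stage processing model gives one estimate of the operator's total processing load for a task.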

  2. Ambient commons: attention in the age of embodied information

    CERN Document Server

    McCullough, Malcolm

    2013-01-01

    The world is filling with ever more kinds of media, in ever more contexts and formats. Glowing rectangles have become part of the scene; screens, large and small, appear everywhere. Physical locations are increasingly tagged and digitally augmented. Sensors, processors, and memory are not found only in chic smart phones but also built into everyday objects. Amid this flood, your attention practices matter more than ever. You might not be able to tune this world out. So it is worth remembering that underneath all these augmentations and data flows, fixed forms persist, and that to notice them can improve other sensibilities. In Ambient Commons, Malcolm McCullough explores the workings of attention through a rediscovery of surroundings. Not all that informs has been written and sent; not all attention involves deliberate thought. The intrinsic structure of space -- the layout of a studio, for example, or a plaza -- becomes part of any mental engagement with it. McCullough describes what he calls the Ambient: an...

  3. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in organizations through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence Tools. But in order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the business model of the organization. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.

  4. Otwarty model licencjonowania Creative Commons

    OpenAIRE

    Tarkowski, Alek

    2007-01-01

    The paper presents a family of Creative Commons licenses (which form nowadays one of the basic legal tools used in the Open Access movement), as well as a genesis of the licenses – inspired by Open Software Licenses and the concept of commons. Then legal tools such as individual Creative Commons licenses are discussed as well as how to use them, with a special emphasis on practical applications in science and education. The author discusses also his research results on scientific publishers a...

  5. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.

  6. Engineering modelling. A contribution to the CommonKADS library

    Energy Technology Data Exchange (ETDEWEB)

    Top, J.L.; Akkermans, J.M.

    1993-12-01

    Generic knowledge components and models for the task of engineering modelling in particular are presented. It is intended as a contribution to the CommonKADS library. In the first chapter an executive summary is provided. Next, the Conceptual Modelling Language (CML) definitions of the various generic library components are given. In the following two chapters the underlying theory is developed. First, a task-oriented analysis is made, based upon the similarities between modelling and design tasks. Second, an ontological analysis is given, which shows that ontology differentiation constitutes an important problem-solving method (PSM) for engineering modelling, on a par with task-decomposition PSMs. Finally, three different modelling applications, based on existing knowledge-based systems, are analyzed; this analysis illustrates and provides data points for the discussed generic components and models for modelling. 50 figs., 77 refs.

  7. Building a Values-Informed Mental Model for New Orleans Climate Risk Management.

    Science.gov (United States)

    Bessette, Douglas L; Mayer, Lauren A; Cwik, Bryan; Vezér, Martin; Keller, Klaus; Lempert, Robert J; Tuana, Nancy

    2017-10-01

    Individuals use values to frame their beliefs and simplify their understanding when confronted with complex and uncertain situations. The high complexity and deep uncertainty involved in climate risk management (CRM) lead to individuals' values likely being coupled to and contributing to their understanding of specific climate risk factors and management strategies. Most mental model approaches, however, which are commonly used to inform our understanding of people's beliefs, ignore values. In response, we developed a "Values-informed Mental Model" research approach, or ViMM, to elicit individuals' values alongside their beliefs and determine which values people use to understand and assess specific climate risk factors and CRM strategies. Our results show that participants consistently used one of three values to frame their understanding of risk factors and CRM strategies in New Orleans: (1) fostering a healthy economy, wealth, and job creation, (2) protecting and promoting healthy ecosystems and biodiversity, and (3) preserving New Orleans' unique culture, traditions, and historically significant neighborhoods. While the first value frame is common in analyses of CRM strategies, the latter two are often ignored, despite their mirroring commonly accepted pillars of sustainability. Other values like distributive justice and fairness were prioritized differently depending on the risk factor or strategy being discussed. These results suggest that the ViMM method could be a critical first step in CRM decision-support processes and may encourage adoption of CRM strategies more in line with stakeholders' values. © 2017 Society for Risk Analysis.

  8. Predisposing, precipitating and perpetuating factors and the common sense model of illness

    DEFF Research Database (Denmark)

    Carstensen, Tina; Kasch, Helge; Frostholm, Lisbeth

    2017-01-01

    Background: Various predisposing, precipitating and perpetuating factors are found to be associated with development of persistent symptoms and disability after whiplash trauma. According to the common-sense model of illness, people use common-sense knowledge to develop individual illness models when facing a health threat. Question: Can we use the common-sense model as a unifying model to encompass the impact of predisposing, precipitating, and perpetuating factors in the development of chronic whiplash? Looking into specific factors and their interaction: Do illness perceptions mediate the effect of precollision sick leave on chronic whiplash? Methods: This presentation will integrate findings from research on predisposing, precipitating, and perpetuating factors that are associated with poor outcome after whiplash trauma and propose the common-sense model as a unifying model. Data from a study including 740...

  9. A Framework for Modeling Emerging Diseases to Inform Management.

    Science.gov (United States)

    Russell, Robin E; Katz, Rachel A; Richgels, Katherine L D; Walsh, Daniel P; Grant, Evan H C

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  10. Function Model for Community Health Service Information

    Science.gov (United States)

    Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong

    In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify its information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information was then established, which includes 4 super-classes, 15 classes and 28 sub-classes of business function, 43 business processes and 168 business activities. This model can facilitate information management system development and workflow refinement.

  11. IEA Common Exercise 4: ARX, ARMAX and grey-box models for thermal performance characterization of the test box

    DEFF Research Database (Denmark)

    Bacher, Peder; Andersen, Philip Hvidthøft Delff

    The focus in this report is on model selection and validation, enabling a stable and reliable performance assessment. Basically, the challenge is to find a procedure for each type of model which can give unbiased and accurate estimates of the essential performance parameters, including reliable uncertainties of the estimates. Important is also the development of methodologies for analyzing the quality of data, for example correlated inputs and lack of information in data (e.g. if no clear-sky days with direct solar...). The models are applied to the Common Exercise 3b (CE3) data measured in Belgium and the results are compared.

  12. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria like the AIC.
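
    The AIC-based selection between a "no deformation" null model and a deformation alternative can be sketched on a toy levelling scenario. For least-squares fits with Gaussian errors, the AIC reduces (up to a constant) to n ln(RSS/n) + 2k, where k is the number of fitted parameters; all measurement values below are invented.

    ```python
    import math

    def aic(rss, n, k):
        """AIC for a least-squares fit with Gaussian errors (up to a constant)."""
        return n * math.log(rss / n) + 2 * k

    # Hypothetical heights of one benchmark observed in two epochs.
    epoch1 = [10.00, 10.01, 9.99, 10.00]
    epoch2 = [10.05, 10.06, 10.04, 10.05]
    data = epoch1 + epoch2
    n = len(data)

    # Null model: one common mean, i.e. no deformation (k = 1 parameter).
    m = sum(data) / n
    rss0 = sum((x - m) ** 2 for x in data)

    # Alternative model: a separate mean per epoch, i.e. a step-type
    # deformation between the epochs (k = 2 parameters).
    m1 = sum(epoch1) / len(epoch1)
    m2 = sum(epoch2) / len(epoch2)
    rss1 = sum((x - m1) ** 2 for x in epoch1) + sum((x - m2) ** 2 for x in epoch2)

    # The model with the lower AIC is selected; the extra parameter must
    # "pay for itself" through a sufficiently reduced residual sum of squares.
    best = "deformation" if aic(rss1, n, 2) < aic(rss0, n, 1) else "no deformation"
    print(best)  # prints "deformation" for these numbers
    ```

    Unlike a hypothesis test, no decision error rate has to be chosen; the penalty term 2k plays that role implicitly.
    
    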

  13. Establishing experimental model of human internal carotid artery siphon segment in canine common carotid artery

    International Nuclear Information System (INIS)

    Cui Xuee; Li Minghua; Wang Yongli; Cheng Yingsheng; Li Wenbin

    2005-01-01

    Objective: To study the feasibility of establishing an experimental model of the human internal carotid artery siphon segment in the canine common carotid artery (CCA) by end-to-end anastomosis of a segment of one common carotid artery with the other common carotid artery. Methods: Surgical techniques were used to make the siphon model in 8 canines. One CCA was taken as the parent artery and anastomosed with the cut-off contralateral CCA segment, which had been passed through the S-shaped glass tube. Two weeks after the creation of the models, angiography showed that the model siphons were patent. Results: Experimental models of the human internal carotid artery siphon segment were successfully made in all 8 dogs. Conclusions: It is feasible to establish an experimental canine common carotid artery model of the siphon segment simulating the human internal carotid artery. (authors)

  14. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996) global model of information behaviour and proposes major modifications on the basis of research into the information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content and in its graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of external (from outside the organization) or computerized information services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered a general model, applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental), activating mechanisms, the cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it); 3. stress put on the fact that the activating mechanisms can also occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  15. Model of students’ sport-oriented physical education with application of information technologies

    Directory of Open Access Journals (Sweden)

    O.M. Olkhovy

    2015-06-01

    Full Text Available Purpose: to develop and practically apply approaches to improving the functioning of the physical education system. Material: students aged 18-20 years (boys: n=92; girls: n=45) took part in the research. Results: a structural model of students' sport-oriented physical education with the application of information technologies has been formed. The main purpose of creating such a model was to cultivate students' need for physical activity and to foster a healthy lifestyle in the student environment. The model of the process includes orienting, executive and control components. In this model, both conventional physical education groups and sport-oriented groups function. Conclusions: the main structural components of the created model have been determined: conceptual, motivation-active, resulting.

  16. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites. This in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive.

  17. Modeling web-based information seeking by users who are blind.

    Science.gov (United States)

    Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti

    2011-01-01

    This article describes website information-seeking strategies used by users who are blind and compares them with those of sighted users. It outlines how assistive technologies and website design can aid users who are blind during information seeking. Blind and sighted participants were tested using an assessment tool and by performing several tasks on websites. The times and keystrokes were recorded for all tasks, as well as the commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A model of information seeking based on the present study is proposed. Keywords are important in the strategies used by both groups of participants, and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.

  18. Model for Electromagnetic Information Leakage

    OpenAIRE

    Mao Jian; Li Yongmei; Zhang Jiemin; Liu Jinming

    2013-01-01

    Electromagnetic leakage occurs in operating information equipment and can lead to information leakage. In order to discover the nature of the information in electromagnetic leakage, this paper combined electromagnetic theory with information theory as an innovative research method. It outlines a systematic model of electromagnetic information leakage, which theoretically describes the process of information leakage, interception and reproduction based on electromagnetic radiation, and ana...

  19. Examining the functionality of the DeLone and McLean information system success model as a framework for synthesis in nursing information and communication technology research.

    Science.gov (United States)

    Booth, Richard G

    2012-06-01

    This review examined studies of information and communication technology used by nurses in clinical practice. Overall, a total of 39 studies were assessed, spanning the period from 1995 to 2008. The impacts of the various health information and communication technologies evaluated by individual studies were synthesized using DeLone and McLean's six-dimensional framework for evaluating information system success (i.e., System Quality, Information Quality, Service Quality, Use, User Satisfaction, and Net Benefits). Overall, the majority of researchers reported results related to the overall Net Benefits (positive, negative, and indifferent) of the health information and communication technology used by nurses. Attitudes and user satisfaction with technology were also commonly measured attributes. The current iteration of the DeLone and McLean model is effective at synthesizing basic elements of health information and communication technology use by nurses. Regardless, the current model lacks the sociotechnical sensitivity to capture deeper nurse-technology relationalities. Limitations and recommendations are provided for researchers considering using the DeLone and McLean model for evaluating health information and communication technology used by nurses.

  20. Development of a single-layer Nb3Sn common coil dipole model

    Energy Technology Data Exchange (ETDEWEB)

    Igor Novitski et al.

    2002-12-13

    A high-field dipole magnet based on the common coil design was developed at Fermilab for a future Very Large Hadron Collider. A short model of this magnet with a design field of 11 T in two 40-mm apertures is being fabricated using the react-and-wind technique. In order to study and optimize the magnet design, two 165-mm long mechanical models were assembled and tested. A technological model consisting of the magnet straight section and ends was also fabricated in order to check the tooling and the winding and assembly procedures. This paper describes the design and technology of the common coil dipole magnet and summarizes the status of short model fabrication. The results of the mechanical model tests and a comparison with FE mechanical analysis are also presented.

  1. Patient information leaflets: informing or frightening? A focus group study exploring patients' emotional reactions and subsequent behavior towards package leaflets of commonly prescribed medications in family practices.

    Science.gov (United States)

    Herber, Oliver Rudolf; Gies, Verena; Schwappach, David; Thürmann, Petra; Wilm, Stefan

    2014-10-02

    The purpose of patient information leaflets (PILs) is to inform patients about the administration, precautions and potential side effects of their prescribed medication. Despite European Commission guidelines aiming at increasing the readability and comprehension of PILs, little is known about the impact that risk information has on patients. This article explores patients' reactions and subsequent behavior towards risk information conveyed in PILs of drugs commonly prescribed by general practitioners (GPs) for the treatment of Type 2 diabetes, hypertension or hypercholesterolemia; the most frequent causes for consultations in family practices in Germany. We conducted six focus groups comprising 35 patients who were recruited in GP practices. Transcripts were read and coded for themes; categories were created by abstracting data and further refined into a coding framework. Three interrelated categories are presented: (i) the vast number of side effects and drug interactions commonly described in PILs provokes various emotional reactions in patients, which (ii) lead to specific patient behaviors, of which (iii) consulting the GP for assistance is among the most common. Findings show that the current description of potential risk information caused feelings of fear and anxiety in readers, resulting in undesirable behavioral reactions. Future PILs need to convey potential risk information in a language that is less frightening while retaining the information content required to make informed decisions about the prescribed medication. Thus, during the production process, greater emphasis needs to be placed on testing the degree of emotional arousal provoked in patients when reading risk information, to allow them to undertake a benefit-risk assessment of their medication that is based on rational rather than emotional (fearful) reactions.

  2. Common Privacy Myths

    Science.gov (United States)

    ... the common myths: Health information cannot be faxed – FALSE. Your information may be shared between healthcare providers by faxing ... E-mail cannot be used to transmit health information – FALSE. E-mail can be used to transmit information, ...

  3. An Affinity-to-Commons Model of Public Support For Environmental Energy Policy

    International Nuclear Information System (INIS)

    Merrill, Ryan; Sintov, Nicole

    2016-01-01

    As atmospheric CO2 continues to rise above 450 ppm, policymakers struggle with uncertainty concerning predictors of citizen support for environmental energy policies (EEPs) and preferences for their design, topics which have received limited attention in empirical literature. We present an original model of policy support based on citizens’ affinity-to-commons: pathways by which individuals enjoy natural public goods that in turn shape preferences between alternative policy mechanisms. We evaluate this model using a survey of southern California electricity customers, with results indicating the model's utility in predicting public support of EEP. Stronger community ties are associated with preferences for “pull”-type subsidies, whereas stronger connections to natural commons are linked to support for both “pull” and “push”-type sanctions. Findings have implications for coalition building as advocates may engender support for green energy policy by framing sanctions as protecting natural commons, and framing subsidies either in this same way and/or as producing benefits for communities. - Highlights: • A commons-oriented model of citizen support for environmental energy policy is proposed (Thaler (2012)). • A factor analysis identifies local tax shifts, green subsidies, and energy taxes (Schultz et al. (1995)). • Community connections predict support for policies employing subsidies (Sabatier (2006)). • Connection to nature predicts support for policies using both sanctions and subsidies. (Stern et al. (1999)).

  4. Refreshing Information Literacy: Learning from Recent British Information Literacy Models

    Science.gov (United States)

    Martin, Justine

    2013-01-01

    Models play an important role in helping practitioners implement and promote information literacy. Over time models can lose relevance with the advances in technology, society, and learning theory. Practitioners and scholars often call for adaptations or transformations of these frameworks to articulate the learning needs in information literacy…

  5. Irregular Shaped Building Design Optimization with Building Information Modelling

    Directory of Open Access Journals (Sweden)

    Lee Xia Sheng

    2016-01-01

    Full Text Available This research aims to recognise the function of Building Information Modelling (BIM) in design optimization for irregular shaped buildings. The study focuses on a conceptual irregular shaped "twisted" building design similar to some existing sculpture-like architectures. Form and function are the two most important aspects of new buildings, which are becoming more sophisticated as parts of the equally sophisticated "systems" that we live in. Nowadays, it is common to have irregular shaped or sculpture-like buildings which are very different from regular buildings. Construction industry stakeholders face stiff challenges in many aspects such as buildability, cost effectiveness, delivery time and facility management when dealing with irregular shaped building projects. Building Information Modelling (BIM) is being utilized to enable architects, engineers and constructors to gain improved visualization of irregular shaped buildings, with the purpose of identifying critical issues before initiating physical construction work. In this study, three variations of design options differing in rotation angle (30 degrees, 60 degrees and 90 degrees) are created to conduct quantifiable comparisons. Discussions focus on three major aspects: structural planning, usable building space, and structural constructability. This research concludes that Building Information Modelling is instrumental in facilitating design optimization for irregular shaped buildings. In the process of comparing different design variations, instead of just giving a "yes or no" type of response, stakeholders can now easily visualize, evaluate and decide to achieve the right balance based on their own criteria. Therefore, construction project stakeholders are empowered with superior evaluation and decision-making capability.

  6. Data analysis using the Binomial Failure Rate common cause model

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1983-09-01

    This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers.
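
    The core of the BFR conceptual model can be sketched in a few lines: common-cause "shocks" arrive at some rate, and each shock causes each of the m components in the group to fail independently with probability p, so the number of components failing per shock is Binomial(m, p). The sketch below uses simple point estimates rather than the report's Bayesian treatment, and all counts and hours are invented.

    ```python
    from math import comb

    # Hypothetical data: failures per observed common-cause shock in a
    # group of m components, over a stated observation period.
    m = 4                                  # components in the common-cause group
    shock_failures = [1, 2, 0, 1, 3, 1]    # failures caused by each shock
    hours = 10000.0                        # observation period (hours)

    mu_hat = len(shock_failures) / hours                      # shock rate (per hour)
    p_hat = sum(shock_failures) / (m * len(shock_failures))   # per-component failure prob.

    def rate_k(k):
        """Estimated rate of shocks that fail exactly k of the m components."""
        return mu_hat * comb(m, k) * p_hat**k * (1 - p_hat)**(m - k)

    print(round(p_hat, 3))  # 8 failures over 24 component-trials -> 0.333
    ```

    The rates for k = 0..m sum back to the overall shock rate, which is a convenient sanity check on the binomial decomposition.
    
    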

  7. Directory of Energy Information Administration Models 1994

    International Nuclear Information System (INIS)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994

  8. Directory of Energy Information Administration Models 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994.

  9. Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

    Science.gov (United States)

    Hydro2k Consortium, Pages

    2017-12-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform

  10. Examining the Reading Level of Internet Medical Information for Common Internal Medicine Diagnoses.

    Science.gov (United States)

    Hutchinson, Nora; Baird, Grayson L; Garg, Megha

    2016-06-01

    The National Institutes of Health (NIH) recommend that health materials be written at a grade 6-7 reading level, which has generally not been achieved in online reading materials. To date, no assessments have focused on the reading level of online educational materials across the most popular consumer Web sites for common internal medicine diagnoses. In this study, we examined the readability of open-access online health information for 9 common internal medicine diagnoses. Nine of the most frequently encountered inpatient and ambulatory internal medicine diagnoses were selected for analysis. In November and December 2014, these diagnoses were used as search terms in Google, and the top 5 Web sites across all diagnoses and a diagnosis-specific site were analyzed across 5 validated reading indices. On average, the lowest reading grade-level content was provided by the NIH (10.7), followed by WebMD (10.9), Mayo Clinic (11.3), and diagnosis-specific Web sites (11.5). Conversely, Wikipedia provided content that required the highest grade-level readability (14.6). The diagnoses with the lowest reading grade levels were chronic obstructive pulmonary disease (10.8), followed by diabetes (10.9), congestive heart failure (11.7), osteoporosis (11.7) and hypertension (11.7). Depression had the highest grade-level readability (13.8). Despite recommendations for patient health information to be written at a grade 6-7 reading level, our examination of online educational materials pertaining to 9 common internal medicine diagnoses revealed reading levels significantly above the NIH recommendation. This was seen across both diagnosis-specific and general Web sites. There is a need to improve the readability of online educational materials made available to patients. These improvements have the potential to greatly enhance patient awareness, engagement, and physician-patient communication. Published by Elsevier Inc.
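One of the commonly used validated reading indices is the Flesch-Kincaid grade level. The sketch below shows how such a grade score is computed; the syllable counter is a naive heuristic (an assumption for illustration, not the validated counters used in the study), and the two sample sentences are hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups, at least one syllable per word."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a typical silent final 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

simple = "The heart pumps blood. It moves blood to the body."
complex_ = ("Congestive heart failure is characterized by insufficient "
            "myocardial contractility and progressive ventricular remodeling.")
print(flesch_kincaid_grade(simple) < flesch_kincaid_grade(complex_))  # → True
```

Longer sentences and more syllables per word both push the grade level up, which is why dense clinical vocabulary scores far above the recommended grade 6-7 band.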

  11. Modeling stimulus variation in three common implicit attitude tasks.

    Science.gov (United States)

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures-the Implicit Association Test, affect misattribution procedure, and evaluative priming task-and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60 %) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
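The inflation mechanism can be illustrated with a small simulation: when every participant responds to the same sampled stimuli, the stimulus noise shifts all participants identically, and a by-subject t-test that ignores this shared variation rejects far too often. All parameter values below are hypothetical and are not taken from the article's datasets.

```python
import math
import random
import statistics

random.seed(42)
n_sims, n_sub, n_stim = 1000, 20, 10   # hypothetical design: 10 stimuli per category
stim_sd, subj_sd = 0.5, 1.0            # hypothetical variance components
t_crit = 2.093                         # two-sided .05 critical value, df = 19

false_positives = 0
for _ in range(n_sims):
    # True category effect is zero; stimuli carry random idiosyncratic effects.
    stim_shift = (statistics.mean(random.gauss(0, stim_sd) for _ in range(n_stim))
                  - statistics.mean(random.gauss(0, stim_sd) for _ in range(n_stim)))
    # Every subject sees the SAME stimulus sample, so this shift is shared:
    diffs = [stim_shift + random.gauss(0, subj_sd) for _ in range(n_sub)]
    m, sd = statistics.mean(diffs), statistics.stdev(diffs)
    t = m / (sd / math.sqrt(n_sub))    # "by-subject" t-test ignoring stimulus variation
    false_positives += abs(t) > t_crit

print(false_positives / n_sims)  # well above the nominal .05 rate
```

A crossed random-effects model removes this inflation by treating the stimulus sample, like the participant sample, as a source of random variation.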

  12. Online Cancer Information Seeking: Applying and Extending the Comprehensive Model of Information Seeking.

    Science.gov (United States)

    Van Stee, Stephanie K; Yang, Qinghua

    2017-10-30

    This study applied the comprehensive model of information seeking (CMIS) to online cancer information and extended the model by incorporating an exogenous variable: interest in online health information exchange with health providers. A nationally representative sample from the Health Information National Trends Survey 4 Cycle 4 was analyzed to examine the extended CMIS in predicting online cancer information seeking. Findings from a structural equation model supported most of the hypotheses derived from the CMIS, as well as the extension of the model related to interest in online health information exchange. In particular, socioeconomic status, beliefs, and interest in online health information exchange predicted utility. Utility, in turn, predicted online cancer information seeking, as did information-carrier characteristics. An unexpected but important finding from the study was the significant, direct relationship between cancer worry and online cancer information seeking. Theoretical and practical implications are discussed.

  13. Upgrade of Common Cause Failure Modelling of NPP Krsko PSA

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2006-01-01

    Over the last thirty years, probabilistic safety assessments (PSA) have been increasingly applied in engineering practice. The various failure modes of the system of concern are mathematically and explicitly modelled by means of a fault tree structure. Statistical independence of the basic events from which the fault tree is built is not an acceptable assumption for the event category referred to as common cause failures (CCF). Based on an overview of the current international status of common cause failure modelling in PSA, several steps were taken to establish the primary technical basis for the methodology and data used in the CCF model upgrade project for the NPP Krsko (NEK) PSA. As the primary technical basis for the methodological aspects of CCF modelling in the Krsko PSA, the following documents were considered: NUREG/CR-5485, NUREG/CR-4780, and the Westinghouse Owners Group (WOG) documents WCAP-15674 and WCAP-15167. Use of these documents is supported by the most relevant guidelines and standards in the field, such as the ASME PRA Standard and NRC Regulatory Guide 1.200. The WCAP documents are in compliance with NUREG/CR-5485 and NUREG/CR-4780. Additionally, they provide the WOG perspective on CCF modelling, which is important to consider since NEK follows WOG practice in resolving many generic and regulatory issues. It is therefore desirable that NEK CCF methodology and modelling be in general accordance with recommended WOG approaches. As the primary basis for the CCF data needed to estimate CCF model parameters and their uncertainty, the main documents used were NUREG/CR-5497, NUREG/CR-6268, WCAP-15167, and WCAP-16187. Use of NUREG/CR-5497 and NUREG/CR-6268 as a source of data for CCF parameter estimation is supported by the most relevant industry and regulatory PSA guides and standards currently existing in the field, including WOG. However, the WCAP document WCAP-16187 has provided a basis for CCF parameter values specific to Westinghouse PWR plants. Many of the events from the NRC / INEEL database were re-classified in WCAP
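A common parametric CCF model covered by NUREG/CR-5485 is the alpha factor model. The sketch below computes CCF basic-event probabilities for a redundant group using the staggered-testing form of the alpha-factor formula; the alpha factors and total failure probability are hypothetical illustration values, not NEK-specific data.

```python
from math import comb

def alpha_factor_probabilities(alphas, q_total):
    """Staggered-testing alpha factor model (per NUREG/CR-5485):
    for a common cause group of size m,
        Q_k = k / C(m-1, k-1) * (alpha_k / alpha_t) * Q_t,
    where alpha_t = sum(k * alpha_k)."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * a / alpha_t * q_total
            for k, a in enumerate(alphas, start=1)]

# Hypothetical alpha factors for a group of 3 redundant components:
alphas = [0.95, 0.03, 0.02]   # alpha_1..alpha_3, summing to 1
q_t = 1.0e-3                  # total failure probability per component
for k, q in enumerate(alpha_factor_probabilities(alphas, q_t), start=1):
    print(f"Q_{k} = {q:.3e}")
```

Q_1 is the independent-failure contribution, while Q_2 and Q_3 are the probabilities of common cause events failing two and all three components, respectively; these become the CCF basic events added to the fault tree.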

  14. Crowdsourcing cyber security: a property rights view of exclusion and theft on the information commons

    Directory of Open Access Journals (Sweden)

    Gary Shiffman

    2013-02-01

    Full Text Available Individuals increasingly rely upon the internet for basic economic interaction. Current cyber security mechanisms are unable to stop adversaries and hackers from gaining access to sensitive information stored on government, business, and public computers. Experts propose implementing attribution and audit frameworks in cyberspace to deter, prevent, and prosecute cyber criminals and attackers. However, this method faces significant policy and resource constraints. Social science research, specifically in law and economics, concerning common-pool resources suggests an organic approach to cyber security may yield an appropriate solution. This cyber commons method involves treating the internet as a commons and encouraging individuals and institutions to voluntarily implement innovative and adaptive monitoring mechanisms. Such mechanisms are already in use and in many cases have proven more effective than attribution mechanisms in resisting and tracing the source of cyber attacks.

  15. Firm Size and the Information Content of Over-the-Counter Common Stock Offerings

    OpenAIRE

    Robert M. Hull; George E. Pinches

    1995-01-01

    We examine announcement-period stock returns for 179 over-the-counter (OTC) firms that issue common stock to reduce nonconvertible debt. We find that small OTC firms experience returns that are significantly more negative than those of large OTC firms. Regression tests reveal that firm size is a significant factor in accounting for stock returns. Other tests establish firm size as a dominant effect. Our support for a firm size effect is consistent with a differential information effect given th...

  16. On valuing information in adaptive-management models.

    Science.gov (United States)

    Moore, Alana L; McCarthy, Michael A

    2010-08-01

    Active adaptive management looks at the benefit of using strategies that may be suboptimal in the near term but may provide additional information that will facilitate better management in the future. In many adaptive-management problems that have been studied, the optimal active and passive policies (accounting for learning when designing policies and designing policy on the basis of current best information, respectively) are very similar. This seems paradoxical; when faced with uncertainty about the best course of action, managers should spend very little effort on actively designing programs to learn about the system they are managing. We considered two possible reasons why active and passive adaptive solutions are often similar. First, the benefits of learning are often confined to the particular case study in the modeled scenario, whereas in reality information gained from local studies is often applied more broadly. Second, management objectives that incorporate the variance of an estimate may place greater emphasis on learning than more commonly used objectives that aim to maximize an expected value. We explored these issues in a case study of Merri Creek, Melbourne, Australia, in which the aim was to choose between two options for revegetation. We explicitly incorporated monitoring costs in the model. The value of the terminal rewards and the choice of objective both influenced the difference between active and passive adaptive solutions. Explicitly considering the cost of monitoring provided a different perspective on how the terminal reward and management objective affected learning. The states for which it was optimal to monitor did not always coincide with the states in which active and passive adaptive management differed. Our results emphasize that spending resources on monitoring is only optimal when the expected benefits of the options being considered are similar and when the pay-off for learning about their benefits is large.
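The pay-off for learning described above is often quantified as the expected value of perfect information (EVPI): the gap between the expected reward if the true system state were known before choosing and the expected reward of the best choice under current beliefs. The numbers below are hypothetical, not the Merri Creek case-study values.

```python
# Expected value of perfect information (EVPI) for two management options
# under two equally likely system states. All numbers are hypothetical.
p_states = [0.5, 0.5]                 # belief over system states
rewards = {                           # reward of each option in each state
    "option_A": [10.0, 4.0],
    "option_B": [6.0, 8.0],
}

# Passive choice: pick the option with the best expected reward now.
expected = {o: sum(p * r for p, r in zip(p_states, rs)) for o, rs in rewards.items()}
best_without_info = max(expected.values())

# With perfect information we could pick the best option in each state first.
best_with_info = sum(p * max(rs[i] for rs in rewards.values())
                     for i, p in enumerate(p_states))

evpi = best_with_info - best_without_info
print(best_without_info, best_with_info, evpi)  # → 7.0 9.0 2.0
```

Note that the two options have identical expected rewards (7.0) yet EVPI is large (2.0), matching the paper's conclusion that monitoring pays off precisely when the options' expected benefits are similar but the stakes of distinguishing them are high.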

  17. Building information modelling (BIM)

    CSIR Research Space (South Africa)

    Conradie, Dirk CU

    2009-02-01

    The concept of a Building Information Model (BIM), also known as a Building Product Model (BPM), is nothing new. A short article on BIM will never cover the entire field, because it is a particularly complex field that is only recently beginning to receive...

  18. Space in Numerical and Ordinal Information: A Common Construct?

    Directory of Open Access Journals (Sweden)

    Philipp Alexander Schroeder

    2017-12-01

    Space is markedly involved in numerical processing, both explicitly in instrumental learning and implicitly in mental operations on numbers. Besides action decisions, action generations, and attention, the response-related effect of numerical magnitude or ordinality on space is well documented in the Spatial-Numerical Associations of Response Codes (SNARC) effect. Here, right- over left-hand responses become relatively faster with increasing magnitude positions. However, SNARC-like behavioral signatures in non-numerical tasks with ordinal information were also observed and inspired new models integrating seemingly spatial effects of ordinal and numerical metrics. To examine this issue further, we report a comparison between numerical SNARC and ordinal SNARC-like effects to investigate group-level characteristics and individual-level deductions from generalized views, i.e., convergent validity. Participants solved order-relevant (before/after classification) and order-irrelevant (font color classification) tasks with numerical stimuli 1-5, comprising both magnitude and order information, and with weekday stimuli, comprising only ordinal information. A small correlation between magnitude- and order-related SNARCs was observed, but the effects are not pronounced in order-irrelevant color judgments. On the group level, order-relevant spatial-numerical associations were best accounted for by a linear magnitude predictor, whereas the SNARC effect for weekdays was categorical. Limited by the representativeness of these tasks and analyses, the results are inconsistent with a single amodal cognitive mechanism that activates space in the mental processing of cardinal and ordinal information alike. A possible resolution that maintains a generalized view is proposed by discriminating different spatial activations, possibly mediated by visuospatial and verbal working memory, and by relating the results to findings from embodied numerical cognition.

  19. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
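The greedy rule described above, fixate next wherever the expected uncertainty reduction is largest given foveal fall-off, can be sketched in one dimension. The uncertainty map, Gaussian acuity profile, and update rule below are simplified assumptions for illustration, not the authors' actual reconstruction model.

```python
import math

# Minimal sketch of greedy information-maximization fixation selection.
# A 1-D "stimulus" has per-location uncertainty; each fixation resolves
# uncertainty with an acuity profile that falls off with eccentricity,
# mimicking the fovea-to-periphery resolution loss.
uncertainty = [0.2, 1.0, 3.0, 1.0, 0.2, 0.1, 2.5, 0.3]  # hypothetical map
acuity_sigma = 1.0

def acuity(ecc):
    return math.exp(-(ecc ** 2) / (2 * acuity_sigma ** 2))

def expected_gain(fix, unc):
    # Information gained = uncertainty resolved, weighted by local acuity.
    return sum(u * acuity(i - fix) for i, u in enumerate(unc))

def next_fixation(unc):
    return max(range(len(unc)), key=lambda f: expected_gain(f, unc))

scanpath = []
for _ in range(3):
    f = next_fixation(uncertainty)
    scanpath.append(f)
    # Reduce uncertainty around the fixated location.
    uncertainty = [u * (1 - acuity(i - f)) for i, u in enumerate(uncertainty)]

print(scanpath)  # greedy scanpath over the uncertainty map
```

The first fixation lands on the most uncertain region, and later fixations visit locations that were previously resolved only by low-acuity peripheral vision, which is the qualitative behavior the model uses to predict human fixation sequences.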

  20. Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model

    Directory of Open Access Journals (Sweden)

    Kionna Oliveira Bernardes Santos

    2016-01-01

    Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the performance of SRQ-20 measurements in occupational groups. This study aimed to describe manifestation patterns of common mental disorder symptoms among worker populations by using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using the SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I best met the criteria for suspicion of common mental disorders. Class II discriminated workers with an intermediate probability of endorsing the items belonging to anxiety, sadness, and energy decrease that configure common mental disorders. Class III was composed of subgroups of workers with a low probability of responding positively to questions screening for common mental disorders. Conclusions. Three patterns of common mental disorder symptoms were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms.
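A latent class model for binary symptom items like the SRQ-20 can be estimated with a simple EM algorithm for a mixture of independent Bernoulli items. The sketch below uses a two-class model on simulated data with hypothetical endorsement probabilities (a simplification of the paper's three-class solution, not its actual estimates).

```python
import random

random.seed(3)

# Simulate SRQ-20-style binary symptom data from two latent classes
# (hypothetical item-endorsement probabilities, 4 items for brevity).
true_p = {0: [0.8, 0.7, 0.75, 0.7], 1: [0.1, 0.15, 0.1, 0.2]}
true_weight = 0.3                      # share of the high-symptom class
data = []
for _ in range(2000):
    c = 0 if random.random() < true_weight else 1
    data.append([1 if random.random() < p else 0 for p in true_p[c]])

# EM for a two-class latent class model with independent binary items.
w = 0.5
p = [[0.6, 0.6, 0.6, 0.6], [0.4, 0.4, 0.4, 0.4]]
for _ in range(200):
    # E-step: posterior responsibility of class 0 for each respondent.
    resp = []
    for x in data:
        l0, l1 = w, 1 - w
        for j, xj in enumerate(x):
            l0 *= p[0][j] if xj else 1 - p[0][j]
            l1 *= p[1][j] if xj else 1 - p[1][j]
        resp.append(l0 / (l0 + l1))
    # M-step: update the class weight and item-endorsement probabilities.
    w = sum(resp) / len(resp)
    for k, rk in ((0, resp), (1, [1 - r for r in resp])):
        tot = sum(rk)
        p[k] = [sum(r * x[j] for r, x in zip(rk, data)) / tot for j in range(4)]

print(round(w, 2), [round(v, 2) for v in p[0]])
```

With enough respondents, the recovered class weight and endorsement profiles approach the generating values, which is the logic behind interpreting the fitted classes as high-, intermediate-, and low-symptom groups.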

  1. A preliminary geodetic data model for geographic information systems

    Science.gov (United States)

    Kelly, K. M.

    2009-12-01

    Our ability to gather and assimilate integrated data collections from multiple disciplines is important for earth system studies. Moreover, geosciences data collection has increased dramatically, with pervasive networks of observational stations on the ground, in the oceans, in the atmosphere and in space. Contemporary geodetic observations from several space and terrestrial technologies contribute to our knowledge of earth system processes and thus are a valuable source of high accuracy information for many global change studies. Assimilation of these geodetic observations and numerical models into models of weather, climate, oceans, hydrology, ice, and solid Earth processes is an important contribution geodesists can make to the earth science community. Clearly, the geodetic observations and models are fundamental to these contributions. ESRI wishes to provide leadership in the geodetic community to collaboratively build an open, freely available content specification that can be used by anyone to structure and manage geodetic data. This Geodetic Data Model will provide important context for all geographic information. The production of a task-specific geodetic data model involves several steps. The goal of the data model is to provide useful data structures and best practices for each step, making it easier for geodesists to organize their data and metadata in a way that will be useful in their data analyses and to their customers. Built on concepts from the successful Arc Marine data model, we introduce common geodetic data types and summarize the main thematic layers of the Geodetic Data Model. These provide a general framework for envisioning the core feature classes required to represent geodetic data in a geographic information system. Like Arc Marine, the framework is generic to allow users to build workflow or product specific geodetic data models tailored to the specific task(s) at hand. This approach allows integration of the data with other existing

  2. Information-Processing Models and Curriculum Design

    Science.gov (United States)

    Calfee, Robert C.

    1970-01-01

    "This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)

  3. Mathematical models utilized in the retrieval of displacement information encoded in fringe patterns

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano

    2016-02-01

    All the techniques that measure displacements, whether in the range of visible optics or any other form of field methods, require the presence of a carrier signal. A carrier signal is a wave form modulated (modified) by an input, deformation of the medium. A carrier is tagged to the medium under analysis and deforms with the medium. The wave form must be known both in the unmodulated and the modulated conditions. There are two basic mathematical models that can be utilized to decode the information contained in the carrier, phase modulation or frequency modulation, both are closely connected. Basic problems connected to the detection and recovery of displacement information that are common to all optical techniques will be analyzed in this paper, focusing on the general theory common to all the methods independently of the type of signal utilized. The aspects discussed are those that have practical impact in the process of data gathering and data processing.
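The phase-modulation decoding described above can be sketched in one dimension with the Fourier-transform method: isolate the carrier's positive-frequency sideband, take the analytic signal's phase, and subtract the unmodulated carrier term. The carrier frequency and displacement-induced phase below are hypothetical synthetic values.

```python
import numpy as np

n = 1024
x = np.arange(n)
f0 = 64 / n                               # carrier: 64 fringes across the field
phi = 1.5 * np.sin(2 * np.pi * x / n)     # phase encoding the "displacement"
fringes = np.cos(2 * np.pi * f0 * x + phi)

# Fourier-transform method: keep the positive-frequency sideband only,
# giving the analytic signal ~ exp(i*(2*pi*f0*x + phi)).
spectrum = np.fft.fft(fringes)
spectrum[n // 2:] = 0                     # zero negative frequencies
spectrum[0] = 0                           # and the DC term
analytic = np.fft.ifft(2 * spectrum)

# Modulated phase minus the unmodulated carrier = displacement information.
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
recovered -= round((recovered - phi).mean() / (2 * np.pi)) * 2 * np.pi  # 2*pi ambiguity

print(float(np.abs(recovered - phi).max()))  # small demodulation residual
```

This mirrors the paper's point: the carrier waveform must be known both unmodulated (the linear term subtracted at the end) and modulated (the recorded fringes), and the decoding itself is a phase-demodulation problem.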

  4. Characteristics of evolving models of care for arthritis: A key informant study

    Directory of Open Access Journals (Sweden)

    Veinot Paula

    2008-07-01

    Abstract Background The burden of arthritis is increasing in the face of diminishing health human resources to deliver care. In response, innovative models of care delivery are developing to facilitate access to quality care. Most models have developed in response to local needs with limited evaluation. The primary objective of this study is to (a) examine the range of models of care that deliver specialist services using a medical/surgical specialist and at least one other health care provider and (b) document the strengths and challenges of the identified models. A secondary objective is to identify key elements of best practice models of care for arthritis. Methods Semi-structured interviews were conducted with a sample of key informants with expertise in arthritis from jurisdictions with primarily publicly-funded health care systems. Qualitative data were analyzed using a constant comparative approach to identify common types of models of care, strengths and challenges of models, and key components of arthritis care. Results Seventy-four key informants were interviewed from six countries. Five main types of models of care emerged. (1) Specialized arthritis programs deliver comprehensive, multidisciplinary team care for arthritis. Two models were identified using health care providers (e.g., nurses or physiotherapists) in expanded clinical roles: (2) triage of patients with musculoskeletal conditions to the appropriate services, including specialists; and (3) ongoing management in collaboration with a specialist. Two models promoting rural access were (4) rural consultation support and (5) telemedicine. Key informants described important components of models of care including knowledgeable health professionals and patients. Conclusion A range of models of care for arthritis have been developed. This classification can be used as a framework for discussing care delivery. 
Areas for development include integration of care across the continuum, including primary

  5. Classical Logic and Quantum Logic with Multiple and Common Lattice Models

    Directory of Open Access Journals (Sweden)

    Mladen Pavičić

    2016-01-01

    We consider a proper propositional quantum logic and show that it has multiple disjoint lattice models, only one of which is an orthomodular lattice (the algebra underlying Hilbert (quantum) space). We give an equivalent proof for classical logic, which turns out to have disjoint distributive and nondistributive ortholattices. In particular, we prove that both classical logic and quantum logic are sound and complete with respect to each of these lattices. We also show that there is one common nonorthomodular lattice that is a model of both quantum and classical logic. In technical terms, that enables us to run the same classical logic on both a digital (standard, two-subset, 0-1-bit) computer and a nondigital (say, a six-subset) computer (with appropriate chips and circuits). With quantum logic, the same six-element common lattice can serve as a benchmark for an efficient evaluation of equations of bigger lattice models or theorems of the logic.
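A standard example of a six-element nonorthomodular ortholattice is the hexagon ("benzene ring") O6; whether this is exactly the paper's six-element lattice is an assumption made here for illustration. The sketch below builds O6 from its order relation and verifies mechanically that the orthomodular law fails, which is the kind of equation-checking over small lattice models the abstract describes.

```python
from itertools import product

# The six-element hexagon ortholattice O6: 0 <= a <= b' <= 1 and
# 0 <= b <= a' <= 1, with orthocomplement '. (Treating O6 as the paper's
# six-element lattice is an illustrative assumption.)
elems = ["0", "a", "b", "a'", "b'", "1"]
leq = {(x, x) for x in elems}
leq |= {("0", y) for y in elems} | {(x, "1") for x in elems}
leq |= {("a", "b'"), ("b", "a'")}
comp = {"0": "1", "1": "0", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}

def below_count(z):
    return sum((w, z) in leq for w in elems)

def meet(x, y):  # greatest lower bound
    lower = [z for z in elems if (z, x) in leq and (z, y) in leq]
    return max(lower, key=below_count)

def join(x, y):  # least upper bound
    upper = [z for z in elems if (x, z) in leq and (y, z) in leq]
    return min(upper, key=below_count)

# Orthomodular law: x <= y  implies  y = join(x, meet(comp(x), y)).
failures = [(x, y) for x, y in product(elems, elems)
            if (x, y) in leq and y != join(x, meet(comp[x], y))]
print(failures)  # the orthomodular law fails in O6
```

The two failing pairs are exactly the noncomparable "sides" of the hexagon, confirming that O6 models a logic weaker than orthomodular quantum structures while still supporting ortholattice operations.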

  6. Common elements of adolescent prevention programs: minimizing burden while maximizing reach.

    Science.gov (United States)

    Boustani, Maya M; Frazier, Stacy L; Becker, Kimberly D; Bechor, Michele; Dinizulu, Sonya M; Hedemann, Erin R; Ogle, Robert R; Pasalich, Dave S

    2015-03-01

    A growing number of evidence-based youth prevention programs are available, but challenges related to dissemination and implementation limit their reach and impact. The current review identifies common elements across evidence-based prevention programs focused on the promotion of health-related outcomes in adolescents. We reviewed and coded descriptions of the programs for common practice and instructional elements. Problem-solving emerged as the most common practice element, followed by communication skills, and insight building. Psychoeducation, modeling, and role play emerged as the most common instructional elements. In light of significant comorbidity in poor outcomes for youth, and corresponding overlap in their underlying skills deficits, we propose that synthesizing the prevention literature using a common elements approach has the potential to yield novel information and inform prevention programming to minimize burden and maximize reach and impact for youth.

  7. Analysis of NASA Common Research Model Dynamic Data

    Science.gov (United States)

    Balakrishna, S.; Acheson, Michael J.

    2011-01-01

    Recent NASA Common Research Model (CRM) tests at the Langley National Transonic Facility (NTF) and Ames 11-foot Transonic Wind Tunnel (11-foot TWT) have generated an experimental database for CFD code validation. The database consists of force and moment, surface pressures and wideband wing-root dynamic strain/wing Kulite data from continuous sweep pitch polars. The dynamic data sets, acquired at 12,800 Hz sampling rate, are analyzed in this study to evaluate CRM wing buffet onset and potential CRM wing flow separation.

  8. System Dynamics Modeling for Supply Chain Information Sharing

    Science.gov (United States)

    Feng, Yang

    In this paper, we use system dynamics to model supply chain information sharing. First, we determine the model boundaries, build a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter changes, and suggest improvements. We then build a system dynamics model of the supply chain with information sharing and compare the two models' simulation results, to show the importance of information sharing in supply chain management. We hope these simulations provide scientific support for enterprise decision-making.
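The before/after comparison can be sketched with a toy two-echelon simulation: an upstream echelon that reacts to retailer orders amplifies demand variance (the bullwhip effect), while one that sees end-customer demand directly does not amplify it further. The ordering rule and all parameters below are hypothetical simplifications, not the paper's system dynamics model.

```python
import random
import statistics

random.seed(7)
theta = 0.8          # hypothetical overreaction to the latest demand change
demand = [max(0.0, random.gauss(100, 10)) for _ in range(2000)]

def orders_from(signal, theta):
    """Simple ordering rule: match the observed level plus a reaction
    to its latest change (a classic source of variance amplification)."""
    return [signal[t] + theta * (signal[t] - signal[t - 1])
            for t in range(1, len(signal))]

retailer = orders_from(demand, theta)

# Without sharing, the wholesaler reacts to retailer orders; with sharing,
# it reacts to end-customer demand directly.
wholesaler_no_share = orders_from(retailer, theta)
wholesaler_share = orders_from(demand, theta)

v = statistics.pvariance
print(v(demand), v(wholesaler_share), v(wholesaler_no_share))
# Sharing holds the wholesaler's variance at the retailer's level instead of
# compounding the amplification a second time.
```

Each echelon that forecasts from the echelon below multiplies the order variance again, so sharing the demand signal removes one full stage of amplification.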

  9. Common Practices from Two Decades of Water Resources Modelling Published in Environmental Modelling & Software: 1997 to 2016

    Science.gov (United States)

    Ames, D. P.; Peterson, M.; Larsen, J.

    2016-12-01

    A steady flow of manuscripts describing integrated water resources management (IWRM) modelling has been published in Environmental Modelling & Software since the journal's inaugural issue in 1997. These papers represent two decades of peer-reviewed scientific knowledge regarding methods, practices, and protocols for conducting IWRM. We have undertaken to explore this specific assemblage of literature with the intention of identifying commonly reported procedures in terms of data integration methods, modelling techniques, approaches to stakeholder participation, means of communication of model results, and other elements of the model development and application life cycle. Initial results from this effort will be presented including a summary of commonly used practices, and their evolution over the past two decades. We anticipate that results will show a pattern of movement toward greater use of both stakeholder/participatory modelling methods as well as increased use of automated methods for data integration and model preparation. Interestingly, such results could be interpreted to show that the availability of better, faster, and more integrated software tools and technologies free the modeler to take a less technocratic and more human approach to water resources modelling.

  10. Communicate and collaborate by using building information modeling

    DEFF Research Database (Denmark)

    Mondrup, Thomas Fænø; Karlshøj, Jan; Vestergaard, Flemming

    Building Information Modeling (BIM) represents a new approach within the Architecture, Engineering, and Construction (AEC) industry, one that encourages collaboration and engagement of all stakeholders on a project. This study discusses the potential of adopting BIM as a communication and collaboration platform. The discussion is based on: (1) a review of the latest BIM literature, (2) a qualitative survey of professionals within the industry, and (3) mapping of available BIM standards. This study presents the potential benefits, risks, and the overarching challenges of adopting BIM, and makes recommendations for its use, particularly as a tool for collaboration. Specifically, this study focuses on the issue of implementing standardized BIM guidelines across national borders (in this study Denmark and Sweden), and discusses the challenge of developing a common standard applicable and acceptable at both...

  11. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt from a larger investigation whose unit of analysis is Online Creation Communities with the Catalan territory as their central node of activity. From 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of such projects depends largely on relationships of trust and interdependence between the various voluntary agents, on non-monetary contributions and rewards, and on resources and infrastructure that are free to use. Altogether, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern them.

  12. Neuroticism and common mental disorders : Meaning and utility of a complex relationship

    NARCIS (Netherlands)

    Ormel, Johan; Jeronimus, Bertus F; Kotov, Roman; Riese, Harriëtte; Bos, Elisabeth H; Hankin, Benjamin; Rosmalen, Judith G M; Oldehinkel, Albertine J

    Neuroticism's prospective association with common mental disorders (CMDs) has fueled the assumption that neuroticism is an independent etiologically informative risk factor. This vulnerability model postulates that neuroticism sets in motion processes that lead to CMDs. However, four other models

  13. The Common Body of Knowledge: A Framework to Promote Relevant Information Security Research

    Directory of Open Access Journals (Sweden)

    Kenneth J. Knapp

    2007-03-01

    This study proposes using an established common body of knowledge (CBK) as one means of organizing the information security literature. Consistent with calls for more relevant information systems (IS) research, this industry-developed framework can motivate future research towards topics that are important to the security practitioner. In this review, forty-eight articles from ten IS journals from 1995 to 2004 are selected and cross-referenced to the ten domains of the information security CBK. Further, we distinguish articles as empirical research, frameworks, or tutorials. Generally, this study identified a need for additional empirical research in every CBK domain, including topics related to legal aspects of information security. Specifically, this study identified a need for additional IS security research relating to applications development, physical security, operations security, and business continuity. The CBK framework is inherently practitioner oriented, and using it will promote relevancy by steering IS research towards topics important to practitioners. This is important considering the frequent calls by prominent information systems scholars for more relevant research. Few research frameworks have emerged from the literature that specifically classify the diversity of security threats and the range of problems that businesses today face. With the recent surge of interest in security, a comprehensive framework that also promotes relevant research can be of great value.

  14. Modeling the reemergence of information diffusion in social network

    Science.gov (United States)

    Yang, Dingda; Liao, Xiangwen; Shen, Huawei; Cheng, Xueqi; Chen, Guolong

    2018-01-01

    Information diffusion in networks is an important research topic in various fields. Existing studies either focus on modeling the process of information diffusion, e.g., the independent cascade model and the linear threshold model, or investigate information diffusion in networks with certain structural characteristics such as scale-free networks and small-world networks. However, there are still several phenomena that have not been captured by existing information diffusion models. One of the prominent phenomena is the reemergence of information diffusion, i.e., a piece of information reemerges after the completion of its initial diffusion process. In this paper, we propose an optimized information diffusion model by introducing a new informed state into the traditional susceptible-infected-removed model. We verify the proposed model via simulations in real-world social networks, and the results indicate that the model can reproduce the reemergence of information during the diffusion process.
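
    The mechanism described above — an informed-but-silent state that can reactivate and restart diffusion — can be sketched in a toy discrete-time simulation. All state names, probabilities and the random-graph construction here are illustrative assumptions, not the paper's actual formulation:

```python
import random

def make_random_graph(n, m, rng):
    """Erdos-Renyi-style graph: add random edges until m distinct edges exist."""
    adj = {v: set() for v in range(n)}
    edges = 0
    while edges < m:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b and b not in adj[a]:
            adj[a].add(b)
            adj[b].add(a)
            edges += 1
    return adj

def simulate_reemergence(n=300, m=900, p_spread=0.2, p_silence=0.3,
                         p_reactivate=0.02, steps=60, seed=7):
    """States: 'S' unaware, 'I' actively spreading, 'Q' informed but silent.
    The Q -> I reactivation is what lets diffusion reemerge after dying out."""
    rng = random.Random(seed)
    adj = make_random_graph(n, m, rng)
    state = {v: "S" for v in adj}
    state[0] = "I"                         # seed node
    active_counts = []
    for _ in range(steps):
        nxt = dict(state)
        for v in adj:
            if state[v] == "I":
                for u in adj[v]:           # try to inform unaware neighbours
                    if state[u] == "S" and rng.random() < p_spread:
                        nxt[u] = "I"
                if rng.random() < p_silence:
                    nxt[v] = "Q"           # stays informed, stops spreading
            elif state[v] == "Q" and rng.random() < p_reactivate:
                nxt[v] = "I"               # reemergence: silent node reactivates
        state = nxt
        active_counts.append(sum(1 for s in state.values() if s == "I"))
    return active_counts
```

    Plotting the returned counts of actively spreading nodes typically shows an initial burst, a decay, and smaller later bursts driven by reactivation — the qualitative signature the abstract describes.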

  15. The use of cloud enabled building information models – an expert analysis

    Directory of Open Access Journals (Sweden)

    Alan Redmond

    2015-10-01

    Full Text Available The dependency of today’s construction professionals on singular commercial applications for design creates the risk of their being dictated to by the language-tools they use. This unwitting conformance to the constraints of a particular computer application’s style reduces one’s association with cutting-edge design, as no single computer application can support all of the tasks associated with building design and production. Interoperability depicts the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. Cloud computing is a centralized heterogeneous platform that enables different applications to be connected to each other through using remote data servers. However, the possibility of providing an interoperable process based on binding several construction applications through a single repository platform, ‘cloud computing’, required further analysis. The following Delphi questionnaires analysed the information exchange opportunities of Building Information Modelling (BIM) as the possible solution for the integration of applications on a cloud platform. The survey structure is modelled to: (i) identify the most appropriate applications for advancing interoperability at the early design stage, (ii) detect the most severe barriers to BIM implementation from a business and legal viewpoint, (iii) examine the need for standards to address information exchange between design team members, and (iv) explore the use of the most common interfaces for exchanging information. The anticipated findings will assist in identifying a model that will enhance the standardized passing of information between systems at the feasibility design stage of a construction project.

  17. Promoting Model-based Definition to Establish a Complete Product Definition.

    Science.gov (United States)

    Ruemler, Shawn P; Zimmerman, Kyle E; Hartman, Nathan W; Hedberg, Thomas; Feeny, Allison Barnard

    2017-05-01

    The manufacturing industry is evolving and starting to use 3D models as the central knowledge artifact for product data and product definition, or what is known as Model-based Definition (MBD). The Model-based Enterprise (MBE) uses MBD as a way to transition away from using traditional paper-based drawings and documentation. As MBD grows in popularity, it is imperative to understand what information is needed in the transition from drawings to models so that models represent all the relevant information needed for processes to continue efficiently. Finding this information can help define what data is common amongst different models in different stages of the lifecycle, which could help establish a Common Information Model. The Common Information Model is a source that contains common information from domain specific elements amongst different aspects of the lifecycle. To help establish this Common Information Model, information about how models are used in industry within different workflows needs to be understood. To retrieve this information, a survey mechanism was administered to industry professionals from various sectors. Based on the results of the survey a Common Information Model could not be established. However, the results gave great insight that will help in further investigation of the Common Information Model.

  18. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
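
    The kind of extension described — splitting each component's failure probability into an independent part and a common-cause part — is commonly known as the beta-factor approach. A minimal sketch of the arithmetic for an n-redundant system (variable names and the illustrative numbers are mine, not the report's data):

```python
def system_failure_prob(q, beta, n):
    """Beta-factor sketch: a fraction `beta` of a component's failure
    probability q is attributed to common causes that fail all n redundant
    trains at once; the remaining (1 - beta) * q fails trains independently."""
    q_independent = (1.0 - beta) * q
    q_common = beta * q
    # system fails if all n trains fail independently, or one common cause hits
    return q_independent ** n + q_common

# Illustrative: q = 0.01 per demand, two redundant diesel generators.
independent_only = system_failure_prob(0.01, 0.0, 2)   # 1e-4
with_ccf = system_failure_prob(0.01, 0.1, 2)           # ~1.08e-3
```

    Even a modest common-cause fraction raises the redundant-system failure probability by roughly an order of magnitude here, which is the qualitative effect the abstract reports.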

  19. Identifying appropriate reference data models for comparative effectiveness research (CER) studies based on data from clinical information systems.

    Science.gov (United States)

    Ogunyemi, Omolola I; Meeker, Daniella; Kim, Hyeon-Eui; Ashish, Naveen; Farzaneh, Seena; Boxwala, Aziz

    2013-08-01

    The need for a common format for electronic exchange of clinical data prompted federal endorsement of applicable standards. However, despite obvious similarities, a consensus standard has not yet been selected in the comparative effectiveness research (CER) community. Using qualitative metrics for data retrieval and information loss across a variety of CER topic areas, we compare several existing models from a representative sample of organizations associated with clinical research: the Observational Medical Outcomes Partnership (OMOP), Biomedical Research Integrated Domain Group, the Clinical Data Interchange Standards Consortium, and the US Food and Drug Administration. While the models examined captured a majority of the data elements that are useful for CER studies, data elements related to insurance benefit design and plans were most detailed in OMOP's CDM version 4.0. Standardized vocabularies that facilitate semantic interoperability were included in the OMOP and US Food and Drug Administration Mini-Sentinel data models, but are left to the discretion of the end-user in Biomedical Research Integrated Domain Group and Analysis Data Model, limiting reuse opportunities. Among the challenges we encountered was the need to model data specific to a local setting. This was handled by extending the standard data models. We found that the Common Data Model from the OMOP met the broadest complement of CER objectives. Minimal information loss occurred in mapping data from institution-specific data warehouses onto the data models from the standards we assessed. However, to support certain scenarios, we found a need to enhance existing data dictionaries with local, institution-specific information.

  20. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  1. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using the LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  2. Is the Internet a Suitable Patient Resource for Information on Common Radiological Investigations?: Radiology-Related Information on the Internet.

    Science.gov (United States)

    Bowden, Dermot J; Yap, Lee-Chien; Sheppard, Declan G

    2017-07-01

    This study aimed to assess the quality of Internet information about common radiological investigations. Four search engines (Google, Bing, Yahoo, and Duckduckgo) were searched using the terms "X-ray," "cat scan," "MRI," "ultrasound," and "pet scan." The first 10 webpage results returned for each search term were recorded, and their quality and readability were analyzed by two independent reviewers (DJB and LCY), with discrepancies resolved by consensus. Information quality was assessed using validated instruments: the DISCERN score, a multi-domain tool for the assessment of health-care information quality by health-care professionals and laypeople (maximum 80 points), and the Flesch-Kincaid and SMOG (Simple Measure of Gobbledygook) readability scores. The webpages were further classified as commercial, academic (educational/institutional), or news/magazine. Several organizations offer website accreditation for health-care information, recognized by the presence of a hallmark or logo on the website; the presence of any valid accreditation mark on each website was recorded. Mean scores between groups were compared for significance using the Student t test. A total of 200 webpages were returned (108 unique website addresses). Average DISCERN scores were compared across search engines. No significant difference was seen in readability between modalities or between search engines. Websites carrying validated accreditation marks were associated with higher average DISCERN scores: X-ray (39.36 vs 25.35), computed tomography (45.45 vs 31.33), and ultrasound (40.91 vs 27.62). Overall, the quality of radiology-related information on the Internet is poor. High-quality online resources should be identified so that patients may avoid the use of poor-quality information derived from general search engine queries. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  3. An evaluation of four crop:weed competition models using a common data set

    NARCIS (Netherlands)

    Deen, W.; Cousens, R.; Warringa, J.; Bastiaans, L.; Carberry, P.; Rebel, K.; Riha, S.; Murphy, C.; Benjamin, L.R.; Cloughley, C.; Cussans, J.; Forcella, F.

    2003-01-01

    To date, several crop:weed competition models have been developed. Developers of the various models were invited to compare model performance using a common data set. The data set consisted of wheat and Lolium rigidum grown in monoculture and mixtures under dryland and irrigated conditions.

  4. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in it the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  5. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

    Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes such as the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretisation of the data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are some practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows one to estimate unknown densities and consequently, using some numerical method for integration, to estimate the Information value. The main contribution of this paper is a proposal and description of the empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k, where k is a positive integer, observations of scores of both good and bad clients in each considered interval. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
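
    The decile-based empirical estimate described above, with its nonzero-bin constraint, can be sketched as follows (the labelling convention 0 = good, 1 = bad is an assumption; the paper's supervised interval selection is not reproduced here):

```python
import math

def information_value(scores, labels, n_bins=10):
    """Empirical Information value with equal-frequency (decile) bins.

    `labels`: 0 = good client, 1 = bad client. Raises if any bin lacks good
    or bad cases, which is the constraint the abstract mentions for the
    decile-based estimate.
    """
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    goods = sum(1 for _, y in pairs if y == 0)
    bads = n - goods
    iv = 0.0
    for i in range(n_bins):
        chunk = pairs[i * n // n_bins:(i + 1) * n // n_bins]
        g = sum(1 for _, y in chunk if y == 0)
        b = len(chunk) - g
        if g == 0 or b == 0:
            raise ValueError("empty-class bin: merge bins or use the "
                             "kernel / supervised-interval estimate")
        p_good, p_bad = g / goods, b / bads
        iv += (p_good - p_bad) * math.log(p_good / p_bad)
    return iv
```

    A score that carries no information about the outcome yields an Information value near zero; the more the good and bad score distributions separate, the larger the value.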

  6. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    Science.gov (United States)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.

  7. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We primarily develop the information processing model having multiple stages, which contains information flow. Then the uncertainty of the information is quantified using Conant's model, an information-theoretic approach. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload.
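
    The flavour of such an information-theoretic quantification can be illustrated with a plain Shannon-entropy estimate over an observed symbol stream — a generic sketch, not Conant's actual partition-law decomposition:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits per symbol) of a symbol stream,
    e.g. a log of annunciator/alarm states presented to an operator."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A stream alternating between two equally likely alarm states carries
# 1 bit/symbol; a constant stream carries none.
```

    Multiplying such a per-symbol entropy by the symbol rate gives a bits-per-second load, the kind of quantity one would compare against an operator's processing capacity under overload.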

  8. Investigations of inter-system common cause failures

    International Nuclear Information System (INIS)

    Nonclerca, P.; Gallois, M.; Vasseur, D.

    2012-01-01

    Intra-system common-cause failures (CCF) are widely studied and addressed in existing PSA models, but the information and studies that incorporate the potential for inter-system CCF are limited. However, the French Safety Authority has requested that EDF investigate the possibility of common-cause failure across system boundaries for Flamanville 3 (an EPR design). Also, the modeling of inter-system CCF, or the proof that their impact is negligible, would satisfy Capability Category III for one of the requirements in the ASME/ANS PRA standard in the U.S. Between 2008 and 2010, EDF and EPRI worked on a method to assess when it is necessary to take inter-system CCF into account in a PSA model. This method is based both on the likelihood of inter-system CCF and on its demonstrated potential impact on CDF (core damage frequency). This method was first applied to pumps in different systems of the 900 MWe series plants. The second application concerned the motor-operated valves across different systems, using the same PSA model. This second application helped us refine the method, which was not optimal when the number of components concerned is very large. Since then, the method has been successfully applied to the pumps and 10 kV breakers of the EPR power plant in Flamanville. This paper describes the method and the results obtained in some of these studies. All studies have shown either that components in different systems, when they were not already part of a common-cause failure group in the model, are not susceptible to common causes of failure, or that the potential for inter-system common-cause failure is negligible with regard to the overall risk. (authors)

  9. Mathematical Modeling, Sense Making, and the Common Core State Standards

    Science.gov (United States)

    Schoenfeld, Alan H.

    2013-01-01

    On October 14, 2013 the Mathematics Education Department at Teachers College hosted a full-day conference focused on the Common Core Standards Mathematical Modeling requirements to be implemented in September 2014 and in honor of Professor Henry Pollak's 25 years of service to the school. This article is adapted from my talk at this conference…

  10. Directory of Energy Information Administration models 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This directory revises and updates the Directory of Energy Information Administration Models 1995, DOE/EIA-0293(95), Energy Information Administration (EIA), U.S. Department of Energy, July 1995. Four models have been deleted in this directory as they are no longer being used: (1) Market Penetration Model for Ground-Water Heat Pump Systems (MPGWHP); (2) Market Penetration Model for Residential Rooftop PV Systems (MPRESPV-PC); (3) Market Penetration Model for Active and Passive Solar Technologies (MPSOLARPC); and (4) Revenue Requirements Modeling System (RRMS).

  11. Bayesian inference in a discrete shock model using confounded common cause data

    International Nuclear Information System (INIS)

    Kvam, Paul H.; Martz, Harry F.

    1995-01-01

    We consider redundant systems of identical components for which reliability is assessed statistically using only demand-based failures and successes. Direct assessment of system reliability can lead to gross errors in estimation if there exist external events in the working environment, not included in the reliability model, that cause two or more components in the system to fail in the same demand period. We develop a simple Bayesian model for estimating component reliability and the corresponding probability of common cause failure in operating systems for which the data are confounded; that is, the common cause failures cannot be distinguished from multiple independent component failures in the narrative event descriptions
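
    A toy grid-approximation of such a posterior, under simplifying assumptions of my own (a 2-train system, uniform priors, and a likelihood in which an observed double failure is either one common-cause shock or two coincident independent failures — not Kvam and Martz's actual model):

```python
import math

def posterior_mean_qc(n_demands, n_single, n_double, grid=40):
    """Posterior mean of the common-cause probability q_c given demand data
    in which double failures are confounded (CCF vs. two independents).
    Per-demand outcome probabilities for a 2-train system:
        P(both fail)         = q_c + (1 - q_c) * q_i**2
        P(exactly one fails) = (1 - q_c) * 2 * q_i * (1 - q_i)
    Uniform prior over a (q_i, q_c) grid."""
    n_none = n_demands - n_single - n_double
    points = []
    for i in range(1, grid):
        q_i = i / grid
        for j in range(1, grid):
            q_c = j / grid
            p2 = q_c + (1 - q_c) * q_i * q_i
            p1 = (1 - q_c) * 2 * q_i * (1 - q_i)
            p0 = 1.0 - p1 - p2
            if p0 <= 0:
                continue
            ll = (n_none * math.log(p0) + n_single * math.log(p1)
                  + n_double * math.log(p2))
            points.append((ll, q_c))
    top = max(ll for ll, _ in points)       # stabilise the exponentials
    total = sum(math.exp(ll - top) for ll, _ in points)
    mean = sum(math.exp(ll - top) * qc for ll, qc in points)
    return mean / total
```

    The qualitative behaviour matches the abstract's point: observing more confounded double failures pulls the inferred common-cause probability upward, even though no individual event can be attributed to a common cause.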

  12. THE INFORMATION MODEL «SOCIAL EXPLOSION»

    Directory of Open Access Journals (Sweden)

    Alexander Chernyavskiy

    2012-01-01

    Full Text Available The article is dedicated to the examination and analysis of the construction of the information model «social explosion», which corresponds to the newest «colored» revolutions. The analysis of the model makes it possible to identify effective approaches to the initiation of such an explosion through the use of contemporary information communications such as cellular networks and the mobile Internet.

  13. Incorporating prior information into differential network analysis using non-paranormal graphical models.

    Science.gov (United States)

    Zhang, Xiao-Fei; Ou-Yang, Le; Yan, Hong

    2017-08-15

    Understanding how gene regulatory networks change under different cellular states is important for revealing insights into network dynamics. Gaussian graphical models, which assume that the data follow a joint normal distribution, have been used recently to infer differential networks. However, the distributions of the omics data are non-normal in general. Furthermore, although much biological knowledge (or prior information) has been accumulated, most existing methods ignore the valuable prior information. Therefore, new statistical methods are needed to relax the normality assumption and make full use of prior information. We propose a new differential network analysis method to address the above challenges. Instead of using Gaussian graphical models, we employ a non-paranormal graphical model that can relax the normality assumption. We develop a principled model to take into account the following prior information: (i) a differential edge less likely exists between two genes that do not participate together in the same pathway; (ii) changes in the networks are driven by certain regulator genes that are perturbed across different cellular states and (iii) the differential networks estimated from multi-view gene expression data likely share common structures. Simulation studies demonstrate that our method outperforms other graphical model-based algorithms. We apply our method to identify the differential networks between platinum-sensitive and platinum-resistant ovarian tumors, and the differential networks between the proneural and mesenchymal subtypes of glioblastoma. Hub nodes in the estimated differential networks rediscover known cancer-related regulator genes and contain interesting predictions. The source code is at https://github.com/Zhangxf-ccnu/pDNA. szuouyl@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. 

  14. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,

  15. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

    Full Text Available Three-dimensional (3D) modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown) or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we described the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  16. Age and growth of the common blacktip shark Carcharhinus ...

    African Journals Online (AJOL)

    Age and growth estimates from length-at-age data were produced for the common blacktip shark Carcharhinus limbatus from Indonesia. Back-calculation techniques were used due to a low sample size (n = 30), which was dominated by large, mature sharks. A multi-model approach incorporating Akaike's information ...

  17. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    Science.gov (United States)

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the process of growth and elucidation of the concepts of an information domain that are common to a group of users. Incorporating ontology into information retrieval is a standard method of improving the retrieval of the relevant information users require. Matching keywords against a historical or information domain is significant in recent approaches to finding the best match for specific input queries. This research presents a better querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is converted into first-order predicate logic, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to study the semantics of the model and the query for the conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries against the information field. The queries and information domain are focused on semantic matching, to discover the best match and to improve the execution process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the documents when compared to standard ontology.

  18. The Application of the Typology Method in Historical Building Information Modelling (HBIM): Taking the Information Surveying and Mapping of Jiayuguan Fortress Town as an Example

    Science.gov (United States)

    Li, D. Y.; Li, K.; Wu, C.

    2017-08-01

    With the increasing level of detail in heritage building surveying and mapping, building information modelling (BIM) technology has begun to be used in the surveying and mapping, renovation, recording and research of heritage buildings, called historical building information modelling (HBIM). The hierarchical framework of the parametric component library of BIM, in which components of the same type share the same parameters, has the same internal logic as archaeological typology, which is increasingly popular in the age identification of ancient buildings. Compared with the common materials, 2D drawings and photos, typology with HBIM has two advantages: (1) comprehensive building information in both collection and representation, and (2) uniform and reasonable classification criteria. This paper takes the information surveying and mapping of Jiayuguan Fortress Town as an example to introduce the field work method of information surveying and mapping based on HBIM technology and the construction of a Revit family library. Then, in order to prove the feasibility and advantage of HBIM technology used with the typology method, this paper identifies the age of the Guanghua gate tower, the Rouyuan gate tower, the Wenchang pavilion and the theater building of Jiayuguan Fortress Town using HBIM technology and the typology method.

  19. THE APPLICATION OF TYPOLOGY METHOD IN HISTORICAL BUILDING INFORMATION MODELLING (HBIM): TAKING THE INFORMATION SURVEYING AND MAPPING OF JIAYUGUAN FORTRESS TOWN AS AN EXAMPLE

    Directory of Open Access Journals (Sweden)

    D. Y. Li

    2017-08-01

    With the increasing level of detail in heritage building surveying and mapping, building information modelling (BIM) technology has begun to be used in the surveying and mapping, renovation, recording and research of heritage buildings, called historical building information modelling (HBIM). The hierarchical framework of the parametric component library of BIM, in which components of the same type share the same parameters, has the same internal logic as archaeological typology, which is increasingly popular in the age identification of ancient buildings. Compared with the common materials, 2D drawings and photos, typology with HBIM has two advantages: (1) comprehensive building information in both collection and representation, and (2) uniform and reasonable classification criteria. This paper takes the information surveying and mapping of Jiayuguan Fortress Town as an example to introduce the field work method of information surveying and mapping based on HBIM technology and the construction of a Revit family library. Then, in order to prove the feasibility and advantage of HBIM technology used with the typology method, this paper identifies the age of the Guanghua gate tower, the Rouyuan gate tower, the Wenchang pavilion and the theater building of Jiayuguan Fortress Town using HBIM technology and the typology method.

  20. Effective international information exchange as a key element of modern tax systems: promises and pitfalls of the OECD’s common reporting standard

    Directory of Open Access Journals (Sweden)

    Stjepan Gadzo

    2017-06-01

    Today’s global economic environment is characterized by the high mobility of capital and labour across national borders. Against the backdrop of a legal framework governing taxation of cross-border income, this may lead to double taxation on the one hand, as well as provide opportunities for tax evasion and tax avoidance on the other. It is well-established that a prerequisite for effective taxation of foreign-sourced income earned by “domestic taxpayers” (i.e. tax residents) is a system of administrative co-operation across national boundaries, mainly in the form of exchange of tax-relevant information between tax authorities. Since the lack of information-exchange mechanisms is linked with tax havens and the proliferation of “harmful tax practices”, the OECD put the issue high on the global political agenda as early as 1998. Further developments strengthened the importance of the exchange of information, leading to the so-called “big bang” of 2009, i.e. to a significant increase in the number of concluded tax information exchange agreements, caused by the growing concern about international tax evasion and avoidance in the post-crisis period. Nowadays the so-called automatic exchange of information (AEOI) between tax authorities has emerged as a new global standard. This is mostly due to the development of specific national and international models, aimed at enhancing intergovernmental cooperation in fighting offshore tax evasion. In this regard special attention should be drawn to the 2014 release of the OECD’s Common Reporting Standard (CRS), which is based on the idea that banks and other financial institutions should play a crucial role in providing information on taxpayers’ income and assets to tax authorities around the globe. The aim of this paper is to explore some of the most important implications of the adoption of the CRS as a global AEOI model. While there are marked advantages of the new standard, mainly related ...

  1. Common-image gathers in the offset domain from reverse-time migration

    KAUST Repository

    Zhan, Ge; Zhang, Minyu

    2014-01-01

    Kirchhoff migration is flexible to output common-image gathers (CIGs) in the offset domain by imaging data with different offsets separately. These CIGs supply important information for velocity model updates and amplitude-variation-with-offset (AVO) ...

  2. Evaluating common de-identification heuristics for personal health information.

    Science.gov (United States)

    El Emam, Khaled; Jabbouri, Sam; Sams, Scott; Drouet, Youenn; Power, Michael

    2006-11-21

    With the growing adoption of electronic medical records, there are increasing demands for the use of this electronic clinical data in observational research. A frequent ethics board requirement for such secondary use of personal health information in observational research is that the data be de-identified. De-identification heuristics are provided in the Health Insurance Portability and Accountability Act Privacy Rule, funding agency and professional association privacy guidelines, and common practice. The aim of the study was to evaluate whether the re-identification risks due to record linkage are sufficiently low when following common de-identification heuristics and whether the risk is stable across sample sizes and data sets. Two methods were followed to construct identification data sets. Re-identification attacks were simulated on these. For each data set we varied the sample size down to 30 individuals, and for each sample size evaluated the risk of re-identification for all combinations of quasi-identifiers. The combinations of quasi-identifiers that were low risk more than 50% of the time were considered stable. The identification data sets we were able to construct were the list of all physicians and the list of all lawyers registered in Ontario, using 1% sampling fractions. The quasi-identifiers of region, gender, and year of birth were found to be low risk more than 50% of the time across both data sets. The combination of gender and region was also found to be low risk more than 50% of the time. We were not able to create an identification data set for the whole population. Existing Canadian federal and provincial privacy laws help explain why it is difficult to create an identification data set for the whole population. That such examples of high re-identification risk exist for mainstream professions makes a strong case for not disclosing the high-risk variables and their combinations identified here. For professional subpopulations with published ...
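
    The kind of evaluation the study describes, checking every combination of quasi-identifiers against an identification database, can be sketched with a k-anonymity-style risk proxy: a record is risky if its quasi-identifier combination is shared by fewer than k individuals. The data, the threshold and the exact metric here are illustrative assumptions, not the authors' measurements.

```python
from collections import Counter
from itertools import combinations

# Toy identification database: (region, gender, year_of_birth) per individual.
# Values are invented for illustration.
population = [
    ("east", "F", 1970), ("east", "F", 1970), ("east", "M", 1971),
    ("west", "F", 1970), ("west", "M", 1969), ("west", "M", 1969),
]

def risk(records, quasi_ids, k=2):
    """Fraction of records whose quasi-identifier combination is shared by
    fewer than k individuals (a k-anonymity-style re-identification proxy)."""
    keys = [tuple(r[i] for i in quasi_ids) for r in records]
    counts = Counter(keys)
    risky = sum(1 for key in keys if counts[key] < k)
    return risky / len(records)

# Evaluate every combination of quasi-identifiers, as in the study's design.
fields = {0: "region", 1: "gender", 2: "yob"}
for r in range(1, len(fields) + 1):
    for combo in combinations(fields, r):
        names = "+".join(fields[i] for i in combo)
        print(f"{names}: {risk(population, combo):.2f}")
```

    As in the paper's findings, coarse single attributes come out low risk while richer combinations expose more unique records.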

  3. Common problematic aspects of coupling hydrological models with groundwater flow models on the river catchment scale

    Directory of Open Access Journals (Sweden)

    R. Barthel

    2006-01-01

    Model coupling requires a thorough conceptualisation of the coupling strategy, including an exact definition of the individual model domains, the "transboundary" processes and the exchange parameters. It is shown here that in the case of coupling groundwater flow and hydrological models – in particular on the regional scale – it is very important to find a common definition and scale-appropriate process description of groundwater recharge and baseflow (or "groundwater runoff/discharge") in order to achieve a meaningful representation of the processes that link the unsaturated and saturated zones and the river network. As such, integration by means of coupling established disciplinary models is problematic given that in such models, processes are defined from a purpose-oriented, disciplinary perspective and are therefore not necessarily consistent with definitions of the same process in the model concepts of other disciplines. This article contains a general introduction to the requirements and challenges of model coupling in Integrated Water Resources Management, including a definition of the most relevant technical terms, a short description of the commonly used approach of model coupling, and finally a detailed consideration of the role of groundwater recharge and baseflow in coupling groundwater models with hydrological models. The conclusions summarize the most relevant problems rather than giving practical solutions. This paper aims to point out that working on a large scale in an integrated context requires rethinking traditional disciplinary workflows and encouraging communication between the different disciplines involved. It is worth noting that the aspects discussed here are mainly viewed from a groundwater perspective, which reflects the author's background.

  4. Modeling the compliance of polyurethane nanofiber tubes for artificial common bile duct

    Science.gov (United States)

    Moazeni, Najmeh; Vadood, Morteza; Semnani, Dariush; Hasani, Hossein

    2018-02-01

    The common bile duct is one of the body’s most sensitive organs, and a polyurethane nanofiber tube can be used as a prosthesis for the common bile duct. Compliance is one of the most important properties of such a prosthesis, which should remain adequately compliant for as long as possible to preserve its behavioral integrity. In the present paper, the prosthesis compliance was measured and modeled using a regression method and an artificial neural network (ANN) based on electrospinning process parameters such as polymer concentration, voltage, tip-to-collector distance and flow rate. Because the ANN model contains several parameters that directly affect prediction accuracy, a genetic algorithm (GA) was used to optimize the ANN parameters. Finally, it was observed that the GA-optimized ANN model can predict the compliance with high accuracy (mean absolute percentage error = 8.57%). Moreover, the contribution of the variables to the compliance was investigated through relative importance analysis, and the optimum parameter values for ideal compliance were determined.
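
    The core loop of the approach, a genetic algorithm searching model parameters to minimize mean absolute percentage error (MAPE), can be sketched as follows. For brevity the "model" here is a single coefficient fitted to synthetic data rather than a full ANN; the data, rates and population sizes are assumptions, not the paper's setup.

```python
import random

random.seed(0)

# Toy data: compliance assumed proportional to one process parameter
# (true coefficient 2.5). Purely synthetic, not the paper's measurements.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.5 * x for x in xs]

def mape(a):
    """Mean absolute percentage error of the one-parameter model y = a*x."""
    return 100.0 / len(xs) * sum(abs((y - a * x) / y) for x, y in zip(xs, ys))

def genetic_search(pop_size=20, generations=40):
    """Minimal GA: tournament selection, blend crossover, Gaussian mutation, elitism."""
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=mape)  # the best candidate always survives
        parents = [min(random.sample(pop, 2), key=mape) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            child = 0.5 * (parents[i] + parents[i + 1])  # blend crossover
            child += random.gauss(0.0, 0.1)              # mutation
            children.append(child)
        pop = sorted(parents + children + [elite], key=mape)[:pop_size]
    return min(pop, key=mape)

best = genetic_search()
print(round(best, 3), round(mape(best), 2))
```

    In the paper the chromosome would instead encode ANN settings, with the network's prediction error as the fitness, but the selection/crossover/mutation cycle is the same.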

  5. Interactive Sonification Exploring Emergent Behavior Applying Models for Biological Information and Listening

    Science.gov (United States)

    Choi, Insook

    2018-01-01

    Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns to classify sound control. Case study implementations are assessed against these methods comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework performance as a reference model for representing experimental implementations, also for identifying common sonification structures having different experimental implementations, identifying common functions implemented in different subsystems, and comparing impact of affordances across multiple implementations of listening scenarios. PMID:29755311

  6. Interactive Sonification Exploring Emergent Behavior Applying Models for Biological Information and Listening

    Directory of Open Access Journals (Sweden)

    Insook Choi

    2018-04-01

    Listening is introduced as an informal model of a listener's clinical attention to data sonification through multisensory interaction in a context of structured inquiry. Three methods are introduced to assess the proposed sonification framework: Listening Scenario classification, data flow Attunement, and Sonification Design Patterns to classify sound control. Case study implementations are assessed against these methods comparing levels of abstraction between experimental data and sound generation. Outcomes demonstrate the framework performance as a reference model for representing experimental implementations, also for identifying common sonification structures having different experimental implementations, identifying common functions implemented in different subsystems, and comparing impact of affordances across multiple implementations of listening scenarios.

  7. Digital gene atlas of neonate common marmoset brain.

    Science.gov (United States)

    Shimogori, Tomomi; Abe, Ayumi; Go, Yasuhiro; Hashikawa, Tsutomu; Kishi, Noriyuki; Kikuchi, Satomi S; Kita, Yoshiaki; Niimi, Kimie; Nishibe, Hirozumi; Okuno, Misako; Saga, Kanako; Sakurai, Miyano; Sato, Masae; Serizawa, Tsuna; Suzuki, Sachie; Takahashi, Eiki; Tanaka, Mami; Tatsumoto, Shoji; Toki, Mitsuhiro; U, Mami; Wang, Yan; Windak, Karl J; Yamagishi, Haruhiko; Yamashita, Keiko; Yoda, Tomoko; Yoshida, Aya C; Yoshida, Chihiro; Yoshimoto, Takuro; Okano, Hideyuki

    2018-03-01

    Interest in the common marmoset (Callithrix jacchus) as a primate model animal has grown recently, in part due to the successful demonstration of transgenic marmosets. However, there is some debate as to the suitability of marmosets compared to more widely used animal models, such as the macaque monkey and mouse. In particular, the use of the marmoset as an animal model of human cognition and mental disorders has yet to be fully explored. To examine the prospects of the marmoset model for neuroscience research, the Marmoset Gene Atlas (https://gene-atlas.bminds.brain.riken.jp/) provides a whole brain gene expression atlas in the common marmoset. We employ in situ hybridization (ISH) to systematically analyze gene expression in neonate marmoset brains, which allows us to compare expression with other model animals such as the mouse. We anticipate that these data will provide sufficient information to develop tools that enable us to reveal marmoset brain structure, function, and cellular and molecular organization for primate brain research. Copyright © 2017 Elsevier Ireland Ltd and Japan Neuroscience Society. All rights reserved.

  8. Toward combining thematic information with hierarchical multiscale segmentations using tree Markov random field model

    Science.gov (United States)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi

    2017-09-01

    It has been a common idea to produce multiscale segmentations to represent the various geographic objects in high-spatial resolution remote sensing (HR) images. However, it remains a great challenge to automatically select the proper segmentation scale(s) based on the image information alone. In this study, we propose a novel way of information fusion at object level by combining hierarchical multiscale segmentations with existing thematic information produced by classification or recognition. The tree Markov random field (T-MRF) model is designed for the multiscale combination framework, through which the object type is determined as close as the existing thematic information. At the same time, the object boundary is jointly determined by the thematic labels and the multiscale segments through the minimization of the energy function. The benefits of the proposed T-MRF combination model include: (1) reducing the dependence on segmentation scale selection when utilizing multiscale segmentations; and (2) exploiting the hierarchical context naturally embedded in the multiscale segmentations. HR images of both urban and rural areas are used in the experiments to show the effectiveness of the proposed combination framework on these two aspects.
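
    The key property that makes the approach tractable is that a tree-structured MRF admits exact energy minimization by bottom-up dynamic programming over the segmentation hierarchy. A minimal sketch under assumed energies follows; the nodes, labels and costs are invented for illustration and are not the T-MRF model's actual terms.

```python
# Exact MAP labeling on a tree-structured MRF by bottom-up dynamic programming.
# The tree mirrors a segmentation hierarchy: children are fine segments, the
# parent is their coarse merge. Unary costs stand in for disagreement with the
# existing thematic map; the pairwise cost penalizes child/parent label changes.

LABELS = ("building", "road")

unary = {
    "root":  {"building": 1.0, "road": 2.0},
    "left":  {"building": 0.5, "road": 1.5},
    "right": {"building": 2.0, "road": 0.2},
}
children = {"root": ["left", "right"], "left": [], "right": []}
PAIRWISE = 0.8  # cost when a child's label differs from its parent's

def solve(node):
    """Return {label: (min subtree energy if node takes label, assignment)}."""
    table = {}
    for lab in LABELS:
        energy = unary[node][lab]
        assign = {node: lab}
        for ch in children[node]:
            sub = solve(ch)
            # Best child label given this parent label.
            best = min(LABELS,
                       key=lambda cl: sub[cl][0] + (PAIRWISE if cl != lab else 0.0))
            energy += sub[best][0] + (PAIRWISE if best != lab else 0.0)
            assign.update(sub[best][1])
        table[lab] = (energy, assign)
    return table

table = solve("root")
best_label = min(LABELS, key=lambda l: table[l][0])
energy, labeling = table[best_label]
print(labeling, energy)
```

    The minimizer keeps the "right" segment labeled "road" despite the parent being "building", showing how thematic evidence and hierarchical smoothness trade off in the energy.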

  9. Data retrieval systems and models of information situations

    International Nuclear Information System (INIS)

    Jankowski, L.

    1984-01-01

    Demands placed on data retrieval systems and their basic parameters are given. According to the stage of development of data collection and processing, data retrieval systems may be divided into systems for the simple recording and provision of data, systems for recording and providing data with integrated statistical functions, and logical information systems. The structure of these information systems is characterized, as are methods of processing and representing facts. The notion of "artificial intelligence" in the development of logical information systems is defined. In logical information systems related to nuclear research, the structure used to represent knowledge in the various forms of the model is decisive. The main model elements are the characteristics of the data, the forms of representation, and the program. Depending on the structure of the data, the structure of the preparatory and transformation algorithms, and the aim of the system, data retrieval systems related to nuclear research and technology can be classified into five logical information models: linear, identification, advisory, theory-experiment, and problem-solving models. The characteristics of these models are given, together with examples of data retrieval systems for the individual models. (E.S.)

  10. Information-based models for finance and insurance

    Science.gov (United States)

    Hoyle, Edward

    2010-10-01

    In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The 'information-based framework' of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure, and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow X_T at a fixed date T. The flow of information about X_T is modelled by an LRB with random terminal value X_T. An explicit expression for the price process is found by working out the discounted conditional expectation of X_T with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.
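
    In the Brownian special case of the BHM framework that this thesis generalises, the price of a single cash flow X_T is its discounted conditional expectation given the information process xi_t = sigma*t*X_T + beta_tT, where beta is a Brownian bridge on [0, T]. For a discrete cash flow this expectation has a closed form, sketched below; the parameter values are illustrative assumptions.

```python
import math

# Sketch of the BHM price for a single discrete cash flow X_T with prior
# probabilities, observed through the Brownian information process
# xi_t = sigma*t*X_T + beta_tT. Parameter values are illustrative.

def bhm_price(xi_t, t, T, sigma, outcomes, priors, discount=1.0):
    """Discounted conditional expectation of X_T given xi_t (standard BHM formula)."""
    weights = [
        p * math.exp(T / (T - t) * (sigma * x * xi_t - 0.5 * sigma**2 * x**2 * t))
        for x, p in zip(outcomes, priors)
    ]
    z = sum(weights)
    posterior = [w / z for w in weights]
    return discount * sum(x * q for x, q in zip(outcomes, posterior))

outcomes, priors = [0.0, 1.0], [0.5, 0.5]   # e.g. default vs. full repayment
T, sigma = 1.0, 1.0
print(bhm_price(0.0, 0.0, T, sigma, outcomes, priors))  # at t=0: the prior mean 0.5
# A large positive signal pushes the price toward the high outcome:
print(bhm_price(0.9, 0.5, T, sigma, outcomes, priors))
```

    The thesis replaces the Brownian bridge noise with Lévy random bridges, which changes the conditional law but keeps this same pricing structure.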

  11. Homeland Security and Information Control: A Model of Asymmetric Information Flows.

    Science.gov (United States)

    Maxwell, Terrence A.

    2003-01-01

    Summarizes some of the activities the United States government has undertaken to control the dissemination of information since 2001. It also explores, through a conceptual model of information flows, potential impacts and discontinuities between policy purposes and outcomes. (AEF)

  12. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neuron or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  13. A Model for an Electronic Information Marketplace

    Directory of Open Access Journals (Sweden)

    Wei Ge

    2005-11-01

    As the information content on the Internet increases, the task of locating desired information and assessing its quality becomes increasingly difficult. This development causes users to be more willing to pay for information that is focused on specific issues, verifiable, and available upon request. Thus, the nature of the Internet opens up the opportunity for information trading. In this context, the Internet can not only be used to close the transaction, but also to deliver the product, the desired information, to the user. Early attempts to implement such business models have fallen short of expectations. In this paper, we discuss the limitations of such practices and present a modified business model for information trading, which uses a reverse auction approach together with a multiple-buyer price discovery process.

  14. Thermodynamics of information processing based on enzyme kinetics: An exactly solvable model of an information pump.

    Science.gov (United States)

    Cao, Yuansheng; Gong, Zongping; Quan, H T

    2015-06-01

    Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012)] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013)], we propose a minimal model of the information pump and the information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information encoded in the bit stream or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving the chemical potential in small systems.

  15. Construction Process Simulation and Safety Analysis Based on Building Information Model and 4D Technology

    Institute of Scientific and Technical Information of China (English)

    HU Zhenzhong; ZHANG Jianping; DENG Ziyin

    2008-01-01

    Time-dependent structure analysis theory has been proved to be more accurate and reliable compared to commonly used methods during construction. However, so far applications are limited to partial periods and parts of the structure because of immeasurable artificial intervention. Based on the building information model (BIM) and four-dimensional (4D) technology, this paper proposes an improved structure analysis method, which can generate structural geometry, the resistance model, and loading conditions automatically through a close interlink of the schedule information, architectural model, and material properties. The method was applied to a safety analysis during a continuous and dynamic simulation of the entire construction process. The results show that the organic combination of BIM, 4D technology, construction simulation, and safety analysis of time-dependent structures is feasible and practical. This research also lays a foundation for further research on building lifecycle management by combining architectural design, structure analysis, and construction management.

  16. Directory of energy information administration models 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-13

    This updated directory has been published annually; after this issue, it will be published only biennially. The Disruption Impact Simulator Model in use by EIA is included. Model descriptions have been updated according to revised documentation approved during the past year. This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included are 37 EIA models active as of February 1, 1995. The first group is the National Energy Modeling System (NEMS) models. The second group is all other EIA models that are not part of NEMS. Appendix A identifies major EIA modeling systems and the models within these systems. Appendix B is a summary of the "Annual Energy Outlook" Forecasting System.

  17. Compilation of information on melter modeling

    International Nuclear Information System (INIS)

    Eyler, L.L.

    1996-03-01

    The objective of the task described in this report is to compile information on modeling capabilities for the High-Temperature Melter and the Cold Crucible Melter and issue a modeling capabilities letter report summarizing existing modeling capabilities. The report is to include strategy recommendations for future modeling efforts to support High Level Waste (HLW) melter development.

  18. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neuron or brain cortices based on biology and graph theory, and we develop an intra-modular network with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view. PMID:27876847

  19. Thermodynamics of information processing based on enzyme kinetics: An exactly solvable model of an information pump

    Science.gov (United States)

    Cao, Yuansheng; Gong, Zongping; Quan, H. T.

    2015-06-01

    Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012), 10.1073/pnas.1204263109] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013), 10.1103/PhysRevLett.111.030602], we propose a minimal model of the information pump and the information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information encoded in the bit stream or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving the chemical potential in small systems.

  20. TRUST MODEL FOR INFORMATION SECURITY OF MULTI-AGENT ROBOTIC SYSTEMS WITH A DECENTRALIZED MANAGEMENT

    Directory of Open Access Journals (Sweden)

    I. A. Zikratov

    2014-03-01

    The paper deals with the protection of multi-agent robotic systems against attacks by saboteur robots. An operational analysis of such systems with decentralized control is carried out. The concept of a harmful information impact (attack) from a saboteur robot on a multi-agent robotic system is given. The class of attacks considered uses interception of messages, the formation and transfer of misinformation to a group of robots, and other actions exploiting vulnerabilities of multi-agent algorithms without clearly identifiable signs of the intrusion of saboteur robots. A model of information security is developed in which robot agents work out trust levels in one another by analyzing the events occurring in the system. The idea of the trust model is that each robot analyzes the information transferred by, and the actions executed by, the other members of the group, comparing the decision chosen at iteration step k with the objective function of the group. The distinctive feature of this trust model, compared with its closest analogue, the Buddy Security Model, in which agents exchange security tokens, is the involvement of a time factor during which agents have to "prove" by their actions their usefulness in achieving the common goal to the other members of the group. Variants of realizing this model and ways of assessing trust levels for agents in view of the security policy accepted in the group are proposed.
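
    The trust mechanism the abstract describes, raising trust in agents whose actions agree with the group objective and lowering it otherwise, can be sketched as follows. The update rule, rates and threshold are assumptions chosen for illustration, not the paper's equations.

```python
# Toy sketch of the trust idea: each agent adjusts its trust in a peer by
# comparing the peer's proposed action with the group objective at each
# iteration. Update rule and parameters are illustrative assumptions.

def update_trust(trust, agrees, reward=0.1, penalty=0.2):
    """Raise trust on goal-consistent actions, lower it otherwise."""
    t = trust + reward if agrees else trust - penalty
    return min(1.0, max(0.0, t))  # clamp to [0, 1]

# The honest agent proposes the goal-consistent action; the saboteur
# intermittently proposes misinformation.
goal_action = 1
proposals = {"honest": [1, 1, 1, 1, 1, 1], "saboteur": [1, 0, 0, 1, 0, 0]}

trust = {name: 0.5 for name in proposals}
for step in range(6):
    for name, acts in proposals.items():
        trust[name] = update_trust(trust[name], acts[step] == goal_action)

print(trust)
threshold = 0.3
excluded = [n for n, v in trust.items() if v < threshold]
print(excluded)  # the saboteur falls below the trust threshold
```

    The asymmetric reward/penalty means an agent must "prove" its usefulness over time, which mirrors the time factor the authors contrast with the token exchange of the Buddy Security Model.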

  1. Process and building information modelling in the construction industry by using information delivery manuals and model view definitions

    DEFF Research Database (Denmark)

    Karlshøj, Jan

    2012-01-01

    The construction industry is gradually increasing its use of structured information and building information modelling. To date, the industry has suffered from the disadvantages of a project-based organizational structure and ad hoc solutions. Furthermore, it is not used to formalizing the flow of information and specifying exactly which objects and properties are needed for each process and which information is produced by the processes. The present study is based on reviewing the existing methodology of Information Delivery Manuals (IDM) from Buildingsmart, which is also an ISO standard (ISO 29481 Part 1), and the Model View Definition (MVD) methodology developed by Buildingsmart and BLIS. The research also includes a review of concrete IDM development projects that have been developed over the last five years. Although the study has identified interest in the IDM methodology in a number ...

  2. Information dissemination model for social media with constant updates

    Science.gov (United States)

    Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui

    2018-07-01

    With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and then propose the priority of the related information. To evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
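
    The competition between false information and a prioritized update can be sketched as a discrete-time spreading model in which the update's priority discounts the false message's effective transmission. This is a toy illustration in the spirit of the abstract, not the authors' DMCU equations; all rates, the priority value, and the initial shares are invented.

    ```python
    def simulate(beta_false, beta_upd, priority, steps):
        """One-population sketch: susceptible users adopt whichever message
        reaches them; 'priority' discounts the false message's transmission."""
        s, false_, upd = 0.98, 0.01, 0.01  # susceptible / false / updated shares
        for _ in range(steps):
            new_false = beta_false * (1.0 - priority) * false_ * s
            new_upd = beta_upd * upd * s
            total = new_false + new_upd
            if total > s:                  # cannot convert more users than remain
                new_false *= s / total
                new_upd *= s / total
            s -= new_false + new_upd
            false_ += new_false
            upd += new_upd
        return s, false_, upd

    s, false_share, upd_share = simulate(0.8, 0.8, 0.6, 200)
    print(round(false_share, 3), round(upd_share, 3))
    ```

    With equal contact rates, any positive priority lets the update overtake the false message, which matches the qualitative claim that prioritized updates curb false information.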

  3. Study on the standard architecture for geoinformation common services

    Science.gov (United States)

    Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.

    2014-04-01

    The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities of China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. Standards on geoinformation common services act as bridges among the users, systems and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards complement each other to promote the development, sharing and serving of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering areas such as the data or information model, interoperability interfaces or services, and information management. Research was carried out on the status of geoinformation common service standards in international organizations such as ISO/TC 211 and OGC, and in countries or unions such as the USA, the EU and Japan. Principles are set up to evaluate the standards, such as availability, suitability and extensibility. The development requirements and practical situation are then analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary and prospects of the geoinformation standards are given.

  4. Comparison of the precision of three commonly used GPS models

    Directory of Open Access Journals (Sweden)

    E Chavoshi

    2016-04-01

    Full Text Available Introduction: The development of science in various fields has changed the methods used to determine geographical location. Precision farming involves new technology that provides the opportunity for farmers to monitor and evaluate factors such as nutrients, soil moisture available to plants, and soil physical and chemical characteristics, with a spatial resolution of less than a centimeter to several meters. GPS receivers are used in precision farming operations with the following specified accuracies: (1) monitoring of crops and soil sampling (accuracy below one meter); (2) application of fertilizer, pesticide and seed (accuracy below half a meter); (3) transplantation and row cultivation (precision below 4 cm) (Perez et al., 2011). In one application of GPS in agriculture, route guidance for precision farming tractors in the fields was designed to reduce the transmission error, informing the driver of deviations from the specified path in the range of 50 to 300 mm and improving the display (Perez et al., 2011). In another study, an automatic guidance system based on RTK-GPS technology was used for precision tillage operations between and within rows, very close to drip irrigation pipes, at a distance of 50 mm and without damage to the crops (Abidine et al., 2004). In a further study comparing the accuracy and precision of receivers, 5 different models of Trimble GPS devices were used to map 15 stations; the results indicated that the minimum error, 91 cm, was obtained with the Geo XT model and the maximum error, 5.62 m, with the Pharos model (Kindra et al., 2006). Due to the increasing use of GPS receivers in agriculture, as well as the lack of trust in the real accuracy and precision of receivers, this study aimed to compare the positioning accuracy and precision of three commonly used GPS receiver models to identify the receiver with the lowest error for precision

  5. Information Literacy for Health Professionals: Teaching Essential Information Skills with the Big6 Information Literacy Model

    Science.gov (United States)

    Santana Arroyo, Sonia

    2013-01-01

    Health professionals frequently do not possess the necessary information-seeking abilities to conduct an effective search in databases and Internet sources. Reference librarians may teach health professionals these information and technology skills through the Big6 information literacy model (Big6). This article aims to address this issue. It also…

  6. INFORMATION MODEL OF A GENERAL PRACTITIONER

    Directory of Open Access Journals (Sweden)

    S. M. Zlepko

    2016-06-01

    Full Text Available In this paper the authors develop an information model of a family doctor and demonstrate its innovation and functionality. The proposed model meets the requirements of the current job description and the criteria of the World Organization of Family Doctors.

  7. Information in general medical practices: the information processing model.

    Science.gov (United States)

    Crowe, Sarah; Tully, Mary P; Cantrill, Judith A

    2010-04-01

    The need for effective communication and handling of secondary care information in general practices is paramount. The aim was to explore practice processes on receiving secondary care correspondence in a way that integrates the information needs and perceptions of both clinical and administrative practice staff. This was a qualitative study using semi-structured interviews with a wide range of practice staff (n = 36) in nine practices in the Northwest of England. Analysis was based on the framework approach using N-Vivo software and involved transcription, familiarization, coding, charting, mapping and interpretation. The 'information processing model' was developed to describe the six stages involved in practice processing of secondary care information. These included the amendment or updating of practice records whilst simultaneously or separately actioning secondary care recommendations, using either a 'one-step' or 'two-step' approach, respectively. Many factors were found to influence each stage and to impact on the continuum of patient care. The primary purpose of processing secondary care information is to support patient care; this study raises the profile of information flow and usage within practices as an issue requiring further consideration.

  8. Directory of Energy Information Administration Models 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-06

    This directory contains descriptions of each model, including the title, acronym, and purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 35 EIA models active as of May 1, 1993. Models that run on personal computers are identified by "PC" as part of the acronym. EIA is developing new models, a National Energy Modeling System (NEMS), and is making changes to existing models to include new technologies, environmental issues, conservation, and renewables, as well as to extend the forecast horizon. Other parts of the Department are involved in this modeling effort. A fully operational model is planned which will integrate completed segments of NEMS for its first official application--preparation of EIA's Annual Energy Outlook 1994. Abstracts for the new models will be included in next year's version of this directory.

  9. Information Systems Outsourcing Relationship Model

    Directory of Open Access Journals (Sweden)

    Richard Flemming

    2007-09-01

    Full Text Available Increasing attention is being paid to what determines the success of an information systems outsourcing arrangement. The current research aims to provide an improved understanding of the factors influencing the outcome of an information systems outsourcing relationship, and to provide a preliminary validation of an extended outsourcing relationship model through interviews with information systems outsourcing professionals in both the client and the vendor of a major Australian outsourcing relationship. It also investigates whether the client and the vendor perceive the relationship differently and, if so, how the two perspectives differ and whether they are interrelated.

  10. BRIDG: a domain information model for translational and clinical protocol-driven research.

    Science.gov (United States)

    Becnel, Lauren B; Hastak, Smita; Ver Hoef, Wendy; Milius, Robert P; Slack, MaryAnn; Wold, Diane; Glickman, Michael L; Brodsky, Boris; Jaffe, Charles; Kush, Rebecca; Helton, Edward

    2017-09-01

    It is critical to integrate and analyze data from biological, translational, and clinical studies with data from health systems; however, electronic artifacts are stored in thousands of disparate systems that are often unable to readily exchange data. To facilitate meaningful data exchange, a model that presents a common understanding of biomedical research concepts and their relationships with health care semantics is required. The Biomedical Research Integrated Domain Group (BRIDG) domain information model fulfills this need. Software systems created from BRIDG have shared meaning "baked in," enabling interoperability among disparate systems. For nearly 10 years, the Clinical Data Standards Interchange Consortium, the National Cancer Institute, the US Food and Drug Administration, and Health Level 7 International have been key stakeholders in developing BRIDG. BRIDG is an open-source Unified Modeling Language-class model developed through use cases and harmonization with other models. With its 4+ releases, BRIDG includes clinical and now translational research concepts in its Common, Protocol Representation, Study Conduct, Adverse Events, Regulatory, Statistical Analysis, Experiment, Biospecimen, and Molecular Biology subdomains. The model is a Clinical Data Standards Interchange Consortium, Health Level 7 International, and International Standards Organization standard that has been utilized in national and international standards-based software development projects. It will continue to mature and evolve in the areas of clinical imaging, pathology, ontology, and vocabulary support. BRIDG 4.1.1 and prior releases are freely available at https://bridgmodel.nci.nih.gov . © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. Information model of economy

    Directory of Open Access Journals (Sweden)

    N.S.Gonchar

    2006-01-01

    Full Text Available A new stochastic model of economy is developed that takes into account that the choices of consumers are dependent random fields. Axioms of such a model are formulated. The existence of random fields of consumer choice and of decision making by firms is proved. New notions of conditionally independent random fields and of random fields of evaluation of information by consumers are introduced. Using the above-mentioned random fields, the random fields of consumer choice and decision making by firms are constructed. The theory of economic equilibrium is developed.

  12. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a mathematical model for calculating the information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here information risks denote the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the ideology of the transport problem in a stochastic statement, drawing on methods of mathematical modeling theory, graph theory, probability theory and Markov chains. The mathematical model is built in several stages. At the initial stage, the capacity of different sites is calculated as a function of time; on the basis of information received from the information and logistics system, the weight matrix is formed and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities that resources are located at a given vertex as a function of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case information risk is measured in units of damage corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown on a concrete example of transportation of material resources in which the places of shipment and delivery, the routes and their capacity, the greatest possible damage and the admissible risk are specified. The calculations presented on a diagram showed
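
    The routing and risk steps of the pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's model: the transport network, the per-leg success probability, and the damage figure are invented, and the per-leg probabilities stand in for the Kolmogorov-equation solutions.

    ```python
    import heapq

    def dijkstra(graph, source, target):
        """Minimum-weight route by Dijkstra's algorithm over a weighted digraph."""
        dist = {source: 0.0}
        prev = {}
        pq = [(0.0, source)]
        visited = set()
        while pq:
            d, u = heapq.heappop(pq)
            if u in visited:
                continue
            visited.add(u)
            if u == target:
                break
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        path, node = [target], target
        while node != source:          # walk predecessors back to the source
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[target]

    # Hypothetical transport network: edge weights are transit times.
    graph = {
        "A": [("B", 2.0), ("C", 5.0)],
        "B": [("C", 1.0), ("D", 4.0)],
        "C": [("D", 1.0)],
    }
    route, time_total = dijkstra(graph, "A", "D")

    # Risk = greatest possible damage x probability of passing the whole route
    # (multiplication theorem, assuming independent legs with probability p_leg).
    p_leg, damage_max = 0.95, 100000.0
    p_route = p_leg ** (len(route) - 1)
    risk = damage_max * p_route
    print(route, time_total, round(risk, 2))
    ```

    The risk comes out in the same monetary units as the damage figure, mirroring the abstract's remark that the risk is measured in the units the logistics system operates with.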

  13. A new method for explicit modelling of single failure event within different common cause failure groups

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of the safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause, i.e. a common cause failure. This paper presents a new method for explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for developing this method lies in the fact that one of the most widespread software tools for fault tree and event tree modelling as part of probabilistic safety assessment does not offer the option of simultaneously assigning a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an advantage of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for application and study of the proposed methodology. The results and insights indicate improved, more transparent and more comprehensive models within probabilistic safety assessment.
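
    The standard Beta Factor model that the method modifies can be sketched as follows: a fraction beta of each component's failure rate is attributed to a shared cause that fails all redundant trains at once. This is a textbook-style sketch under a rare-event approximation, not the paper's modified method; the rate, beta value, and mission time are illustrative assumptions.

    ```python
    def beta_factor_split(lam_total, beta):
        """Split a component's total failure rate into an independent part and
        a common-cause part, per the standard Beta Factor parametric model."""
        lam_ccf = beta * lam_total          # rate of the shared-cause failure
        lam_ind = (1.0 - beta) * lam_total  # rate of independent failures
        return lam_ind, lam_ccf

    def two_train_unavailability(lam_total, beta, t):
        """Approximate failure probability of a 2-redundant standby system over
        mission time t (rare-event approximation, constant failure rates)."""
        lam_ind, lam_ccf = beta_factor_split(lam_total, beta)
        q_ind = lam_ind * t   # one train fails independently
        q_ccf = lam_ccf * t   # both trains lost to the common cause
        return q_ind ** 2 + q_ccf

    # Hypothetical numbers: lambda = 1e-4 /h, beta = 0.1, 100 h mission.
    lam, beta, t = 1e-4, 0.1, 100.0
    q_sys = two_train_unavailability(lam, beta, t)
    print(q_sys)
    ```

    Even with a modest beta, the common-cause term dominates the redundant pair's failure probability, which is why assigning a component to the correct CCF group (or groups) matters so much in the fault tree.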

  14. Informing groundwater models with near-surface geophysical data

    DEFF Research Database (Denmark)

    Herckenrath, Daan

    Over the past decade geophysical methods have gained an increased popularity due to their ability to map hydrologic properties. Such data sets can provide valuable information to improve hydrologic models. Instead of using the measured geophysical and hydrologic data simultaneously in one inversion...... approach, many of the previous studies apply a Sequential Hydrogeophysical Inversion (SHI) in which inverted geophysical models provide information for hydrologic models. In order to fully exploit the information contained in geophysical datasets for hydrological purposes, a coupled hydrogeophysical...... inversion was introduced (CHI), in which a hydrologic model is part of the geophysical inversion. Current CHI-research has been focussing on the translation of simulated state variables of hydrologic models to geophysical model parameters. We refer to this methodology as CHI-S (State). In this thesis a new...

  15. Bayesian informative dropout model for longitudinal binary data with random effects using conditional and joint modeling approaches.

    Science.gov (United States)

    Chan, Jennifer S K

    2016-05-01

    Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes, as well as the dropout indicator, at each occasion are logit linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
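
    The logit-linear dropout component of a selection model can be sketched as follows: the dropout probability depends on the previous outcome and on the possibly-missing current one, which is what makes the dropout informative. The coefficients and covariate choice below are hypothetical, for illustration only.

    ```python
    import math

    def logit_inv(x):
        """Inverse logit (logistic) function."""
        return 1.0 / (1.0 + math.exp(-x))

    def dropout_probability(beta0, beta_prev, beta_curr, y_prev, y_curr):
        """Informative dropout: the logit of dropping out at this occasion is
        linear in the previous outcome AND the possibly-missing current one."""
        return logit_inv(beta0 + beta_prev * y_prev + beta_curr * y_curr)

    # Hypothetical coefficients: a positive current outcome raises dropout odds,
    # so ignoring the mechanism would bias the outcome model's estimates.
    p_if_negative = dropout_probability(-2.0, 0.5, 1.5, y_prev=1, y_curr=0)
    p_if_positive = dropout_probability(-2.0, 0.5, 1.5, y_prev=1, y_curr=1)
    print(round(p_if_negative, 3), round(p_if_positive, 3))
    ```

    Because the dropout probability differs depending on the unobserved current outcome, the missingness is nonignorable and the dropout model must be fitted jointly with the outcome model, as the abstract describes.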

  16. Extended object-oriented Petri net model for mission reliability simulation of repairable PMS with common cause failures

    International Nuclear Information System (INIS)

    Wu, Xin-yang; Wu, Xiao-Yue

    2015-01-01

    Phased Mission Systems (PMS) have several phases with different success criteria. Generally, traditional analytical methods need to make some assumptions when applied to the reliability evaluation and analysis of complex PMS, for example that components are non-repairable or are not subject to common cause failures (CCF). However, the evaluation and analysis results may be inapplicable when the assumptions do not agree with the practical situation. In this article, we propose an extended object-oriented Petri net (EOOPN) model for mission reliability simulation of repairable PMS with CCFs. Based on object-oriented Petri nets (OOPN), EOOPN defines four reusable sub-models to depict a PMS at the system, phase, or component level, logic transitions to depict complex component reliability logic in a more readable form, and broadcast places to transmit shared information among components synchronously. After extension, EOOPN can conveniently deal with repairable PMS with both external and internal CCFs. Mission reliability modelling, simulation and analysis using EOOPN are illustrated by a PMS example. The results demonstrate that the proposed EOOPN model is effective. - Highlights: • EOOPN model was effective in reliability simulation for repairable PMS with CCFs. • EOOPN had modular and hierarchical structure. • New elements of EOOPN made the modelling process more convenient and friendlier. • EOOPN had better model reusability and readability than other PNs

  17. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes, and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
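
    Discharge activity modeled as an inhomogeneous Poisson process, as in the abstract, can be simulated by Lewis-Shedler thinning: propose events at a constant upper-bound rate and accept each with probability rate(t)/rate_max. The ramping drive below is an assumed stand-in for the common input, not the paper's latent trajectory.

    ```python
    import random

    def thinning_spikes(rate_fn, rate_max, t_end, rng):
        """Sample spike times from an inhomogeneous Poisson process by
        Lewis-Shedler thinning. Requires rate_fn(t) <= rate_max on [0, t_end]."""
        spikes, t = [], 0.0
        while True:
            t += rng.expovariate(rate_max)      # candidate at the bounding rate
            if t > t_end:
                return spikes
            if rng.random() < rate_fn(t) / rate_max:
                spikes.append(t)                # accept with probability rate/max

    # Hypothetical common input: a slow ramp shared by every neuron in the pool.
    def common_drive(t):
        return 5.0 + 10.0 * t  # spikes/s, rising over a 1 s contraction

    rng = random.Random(42)
    pool = [thinning_spikes(common_drive, 15.0, 1.0, rng) for _ in range(5)]
    print([len(s) for s in pool])
    ```

    A state-space inference method would run in the opposite direction: given only the pool's spike times, estimate the shared trajectory `common_drive` and its noise, which is the problem the abstract addresses.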

  18. Comparative digital cartilage histology for human and common osteoarthritis models

    Directory of Open Access Journals (Sweden)

    Pedersen DR

    2013-02-01

    Full Text Available Douglas R Pedersen, Jessica E Goetz, Gail L Kurriger, James A Martin; Department of Orthopaedics and Rehabilitation, University of Iowa, Iowa City, IA, USA. Purpose: This study addresses the species-specific and site-specific details of weight-bearing articular cartilage zone depths and chondrocyte distributions among humans and common osteoarthritis (OA) animal models using contemporary digital imaging tools. Histological analysis is the gold-standard research tool for evaluating cartilage health, OA severity, and treatment efficacy. Historically, evaluations were made by expert analysts. However, state-of-the-art tools have been developed that allow for digitization of entire histological sections for computer-aided analysis. Large volumes of common digital cartilage metrics directly complement elucidation of trends in OA inducement and concomitant potential treatments. Materials and methods: Sixteen fresh human knees, 26 adult New Zealand rabbit stifles, and 104 bovine lateral plateaus were measured for four cartilage zones and the cell densities within each zone. Each knee was divided into four weight-bearing sites: the medial and lateral plateaus and femoral condyles. Results: One-way analysis of variance followed by pairwise multiple comparisons (Holm-Sidak method at a significance level of 0.05) clearly confirmed the variability between cartilage depths at each site, between sites in the same species, and between weight-bearing articular cartilage definitions in different species. Conclusion: The present study clearly demonstrates multisite, multispecies differences in normal weight-bearing articular cartilage, which can be objectively quantified by a common digital histology imaging technique. The clear site-specific differences in normal cartilage must be taken into consideration when characterizing the pathoetiology of OA models. Together, these provide a path to consistently analyze the volume and variety of histologic slides necessarily generated

  19. How informative are slip models for aftershock forecasting?

    Science.gov (United States)

    Bach, Christoph; Hainzl, Sebastian

    2013-04-01

    Coulomb stress changes (ΔCFS) have been recognized as a major trigger mechanism for earthquakes, in particular aftershock distributions and the spatial patterns of ΔCFS are often found to be correlated. However, the Coulomb stress calculations are based on slip inversions and the receiver fault mechanisms which both contain large uncertainties. In particular, slip inversions are usually non-unique and often differ strongly for the same earthquakes. Here we want to address the information content of those inversions with respect to aftershock forecasting. Therefore we compare the slip models to randomized fractal slip models which are only constrained by fault information and moment magnitude. The uncertainty of the aftershock mechanisms is considered by using many receiver fault orientations, and by calculating ΔCFS at several depth layers. The stress change is then converted into an aftershock probability map utilizing a clock advance model. To estimate the information content of the slip models, we use an Epidemic Type Aftershock Sequence (ETAS) model approach introduced by Bach and Hainzl (2012), where the spatial probability density of direct aftershocks is related to the ΔCFS calculations. Besides the directly triggered aftershocks, this approach also takes secondary aftershock triggering into account. We quantify our results by calculating the information gain of the randomized slip models relative to the corresponding published slip model. As case studies, we investigate the aftershock sequences of several well-known main shocks such as 1992 Landers, 1999 Hector Mine, 2004 Parkfield, 2002 Denali. First results show a huge difference in the information content of slip models. For some of the cases up to 90% of the random slip models are found to perform better than the originally published model, for some other cases only few random models are found performing better than the published slip model.
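
    The information-gain scoring described above can be sketched with a Poisson cell-count likelihood, the standard way one aftershock forecast is scored against a reference on a spatial grid. The grid, rates, and observed counts below are invented for illustration; the actual study scores randomized slip models against the published one.

    ```python
    import math

    def poisson_log_likelihood(rates, counts):
        """Log-likelihood of observed per-cell aftershock counts under a
        Poisson forecast with the given expected rates."""
        return sum(n * math.log(r) - r - math.lgamma(n + 1)
                   for r, n in zip(rates, counts))

    def information_gain_per_event(model_rates, ref_rates, counts):
        """Mean log-likelihood difference per observed event: positive values
        mean the model forecasts the observed aftershocks better than the
        reference."""
        n_events = sum(counts)
        return (poisson_log_likelihood(model_rates, counts)
                - poisson_log_likelihood(ref_rates, counts)) / n_events

    # Hypothetical 4-cell grid: the slip-based forecast concentrates rate where
    # aftershocks actually occurred; the reference spreads it uniformly.
    counts = [6, 2, 1, 1]
    slip_based = [5.0, 2.0, 2.0, 1.0]
    uniform = [2.5, 2.5, 2.5, 2.5]
    ig = information_gain_per_event(slip_based, uniform, counts)
    print(round(ig, 3))
    ```

    Scoring each randomized slip model this way against the published one yields the fraction of random models that "perform better", the statistic the abstract reports.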

  20. Building Information Model: advantages, tools and adoption efficiency

    Science.gov (United States)

    Abakumov, R. G.; Naumov, A. E.

    2018-03-01

    The paper expands the definition and essence of Building Information Modeling. It describes the content of, and the effects from, applying Information Modeling at different stages of a real property item. An analysis of long-term and short-term advantages is given. The authors include an analytical review of the Revit software package in comparison with Autodesk with respect to features, advantages and disadvantages, cost and pay cutoff. A prognostic calculation of the efficiency of adopting Building Information Modeling technology is given, with examples of its successful adoption in Russia and worldwide.

  1. Common factors and the exchange rate: results from the Brazilian case

    Directory of Open Access Journals (Sweden)

    Wilson Rafael de Oliveira Felício

    2014-03-01

    Full Text Available This paper studies the usefulness of factor models in explaining the dynamics of the Real/Dollar exchange rate from January 1999 to August 2011. The paper verifies that including factors embedded in the common movements of the exchange rates of a set of countries significantly improves the in-sample and out-of-sample predictive power of models comprising only the macroeconomic fundamentals commonly used in the literature to forecast the exchange rate. The paper also links the information contained in the factors to global shocks such as the demand for dollars (a "dollar effect") and the volatility and liquidity of global financial markets.

  2. On consensus through communication without a commonly known protocol

    OpenAIRE

    Tsakas Elias; Voorneveld Mark

    2010-01-01

    The present paper extends the standard model of pairwise communication among Bayesian agents to cases where the structure of the communication protocol is not commonly known. We show that, even under strict conditions on the structure of the protocols and the nature of the transmitted signals, a consensus may never be reached if very little asymmetric information about the protocol is introduced.

  3. The Semantic Environment: Heuristics for a Cross-Context Human-Information Interaction Model

    Science.gov (United States)

    Resmini, Andrea; Rosati, Luca

    This chapter introduces a multidisciplinary holistic approach for the general design of successful bridge experiences as a cross-context human-information interaction model. Nowadays it is common to interact through a number of different domains in order to communicate successfully, complete a task, or elicit a desired response: Users visit a reseller’s web site to find a specific item, book it, then drive to the closest store to complete their purchase. As such, one of the crucial challenges user experience design will face in the near future is how to structure and provide bridge experiences seamlessly spanning multiple communication channels or media formats for a specific purpose.

  4. Statistical intercomparison of global climate models: A common principal component approach with application to GCM data

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Boyle, J.S.

    1993-05-01

    Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the intercomparison of the spatio-temporal data structures arising in model/model and model/data comparison tasks. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components, obtained as time series, are then compared on the basis of similarities in their temporal evolution.
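
    The first step, comparing the leading spatial structures of two fields, can be sketched in a toy 2x2 case where each field is summarized by a symmetric spatial covariance matrix and the leading eigenvectors are compared by the angle between them. This is an illustrative reduction of the common-principal-component idea, not the paper's full procedure; the two covariance summaries are invented.

    ```python
    import math

    def sym2x2_principal_axis(a, b, c):
        """Leading unit eigenvector and eigenvalue of the symmetric
        matrix [[a, b], [b, c]], via the closed-form 2x2 solution."""
        tr, det = a + c, a * c - b * b
        lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)  # larger eigenvalue
        if abs(b) > 1e-12:
            v = (lam1 - c, b)
        else:
            v = (1.0, 0.0) if a >= c else (0.0, 1.0)
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n), lam1

    # Two hypothetical spatial covariance summaries (e.g., from two GCMs).
    axis_m1, var_m1 = sym2x2_principal_axis(4.0, 1.0, 2.0)
    axis_m2, var_m2 = sym2x2_principal_axis(3.8, 1.1, 2.1)

    # Commonality of spatial structure: angle between the leading axes
    # (small angle = the two models share their dominant spatial pattern).
    cosang = abs(axis_m1[0] * axis_m2[0] + axis_m1[1] * axis_m2[1])
    angle_deg = math.degrees(math.acos(min(1.0, cosang)))
    print(round(angle_deg, 2))
    ```

    With a near-common leading axis established, the second step of the strategy would project each field onto it and compare the resulting time series, as the abstract outlines.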

  5. Multi-dimensional indoor location information model

    NARCIS (Netherlands)

    Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.

    2013-01-01

    Aiming at the increasing requirements for seamless indoor and outdoor navigation and location services, a Chinese standard, the Multidimensional Indoor Location Information Model, is being developed, which defines an ontology of indoor location. The model is complementary to 3D concepts like CityGML and

  6. Thermal mathematical modeling of a multicell common pressure vessel nickel-hydrogen battery

    Science.gov (United States)

    Kim, Junbom; Nguyen, T. V.; White, R. E.

    1992-01-01

    A two-dimensional and time-dependent thermal model of a multicell common pressure vessel (CPV) nickel-hydrogen battery was developed. A finite element solver called PDE/Protran was used to solve this model. The model was used to investigate the effects of various design parameters on the temperature profile within the cell. The results were used to help find a design that will yield an acceptable temperature gradient inside a multicell CPV nickel-hydrogen battery. Steady-state and unsteady-state cases with a constant heat generation rate and a time-dependent heat generation rate were solved.

  7. An animal model that reflects human disease: the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Carrion, Ricardo; Patterson, Jean L

    2012-06-01

    The common marmoset is a New World primate belonging to the family Callitrichidae, weighing between 350 and 400 g. The marmoset has been shown to be an outstanding model for studying aging, reproduction, neuroscience, toxicology, and infectious disease. With regard to their susceptibility to infectious agents, they are exquisite NHP models for viral, protozoan and bacterial agents, as well as prions. The marmoset provides the advantages of a small animal model in high containment coupled with the immunological repertoire of a nonhuman primate and susceptibility to wild type, non-adapted viruses. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Evaluation Model of Tea Industry Information Service Quality

    OpenAIRE

    Shi , Xiaohui; Chen , Tian’en

    2015-01-01

    According to the characteristics of tea industry information services, this paper builds a service quality evaluation index system for tea industry information service quality; R-cluster analysis and multiple regression are used together to construct an evaluation model with high practicality and credibility. As proved by experiment, the evaluation model of information service quality has good precision, which has guidance significance to a certain extent to e...

  9. A non-linear model of information seeking behaviour

    Directory of Open Access Journals (Sweden)

    Allen E. Foster

    2005-01-01

    The results of a qualitative, naturalistic study of information seeking behaviour are reported in this paper. The study applied the methods recommended by Lincoln and Guba for maximising credibility, transferability, dependability, and confirmability in data collection and analysis. Sampling combined purposive and snowball methods, and led to a final sample of 45 inter-disciplinary researchers from the University of Sheffield. In-depth semi-structured interviews were used to elicit detailed examples of information seeking. Coding of interview transcripts took place in multiple iterations over time and used Atlas-ti software to support the process. The results of the study are represented in a non-linear Model of Information Seeking Behaviour. The model describes three core processes (Opening, Orientation, and Consolidation) and three levels of contextual interaction (Internal Context, External Context, and Cognitive Approach), each composed of several individual activities and attributes. The interactivity and shifts described by the model show information seeking to be non-linear, dynamic, holistic, and flowing. The paper concludes by describing the whole model of behaviours as analogous to an artist's palette, in which activities remain available throughout information seeking. A summary of key implications of the model and directions for further research are included.

  10. Bioremediation in fractured rock: 1. Modeling to inform design, monitoring, and expectations

    Science.gov (United States)

    Tiedeman, Claire; Shapiro, Allen M.; Hsieh, Paul A.; Imbrigiotta, Thomas; Goode, Daniel J.; Lacombe, Pierre; DeFlaun, Mary F.; Drew, Scott R.; Johnson, Carole D.; Williams, John H.; Curtis, Gary P.

    2018-01-01

    Field characterization of a trichloroethene (TCE) source area in fractured mudstones produced a detailed understanding of the geology, contaminant distribution in fractures and the rock matrix, and hydraulic and transport properties. Groundwater flow and chemical transport modeling that synthesized the field characterization information proved critical for designing bioremediation of the source area. The planned bioremediation involved injecting emulsified vegetable oil and bacteria to enhance the naturally occurring biodegradation of TCE. The flow and transport modeling showed that injection will spread amendments widely over a zone of lower‐permeability fractures, with long residence times expected because of small velocities after injection and sorption of emulsified vegetable oil onto solids. Amendments transported out of this zone will be diluted by groundwater flux from other areas, limiting bioremediation effectiveness downgradient. At nearby pumping wells, further dilution is expected to make bioremediation effects undetectable in the pumped water. The results emphasize that in fracture‐dominated flow regimes, the extent of injected amendments cannot be conceptualized using simple homogeneous models of groundwater flow commonly adopted to design injections in unconsolidated porous media (e.g., radial diverging or dipole flow regimes). Instead, it is important to synthesize site characterization information using a groundwater flow model that includes discrete features representing high‐ and low‐permeability fractures. This type of model accounts for the highly heterogeneous hydraulic conductivity and groundwater fluxes in fractured‐rock aquifers, and facilitates designing injection strategies that target specific volumes of the aquifer and maximize the distribution of amendments over these volumes.

  11. Food Insecurity and Common Mental Disorders among Ethiopian Youth: Structural Equation Modeling

    Science.gov (United States)

    Lindstrom, David; Belachew, Tefera; Hadley, Craig; Lachat, Carl; Verstraeten, Roos; De Cock, Nathalie; Kolsteren, Patrick

    2016-01-01

    Background: Although the consequences of food insecurity for the physical health and nutritional status of youth have been reported, its effect on their mental health remains less investigated in developing countries. The aim of this study was to examine the pathways through which food insecurity is associated with poor mental health status among youth living in Ethiopia. Methods: We used data from the Jimma Longitudinal Family Survey of Youth (JLFSY) collected in 2009/10. A total of 1,521 youth were included in the analysis. We measured food insecurity using a 5-item scale and common mental disorders using the 20-item Self-Reporting Questionnaire (SRQ-20). Structural and generalized equation modeling using the maximum likelihood estimation method was used to analyze the data. Results: The prevalence of common mental disorders was 30.8% (95% CI: 28.6, 33.2). Food insecurity was independently associated with common mental disorders (β = 0.323). The effect of food insecurity on common mental disorders was direct, and only 8.2% of their relationship was partially mediated by physical health. In addition, poor self-rated health (β = 0.285) was associated with common mental disorders. Food insecurity is directly associated with common mental disorders among youth in Ethiopia. Interventions that aim to improve the mental health status of youth should consider strategies to improve access to sufficient, safe and nutritious food. PMID:27846283

  12. Informed Principal Model and Contract in Supply Chain with Demand Disruption Asymmetric Information

    Directory of Open Access Journals (Sweden)

    Huan Zhang

    2016-01-01

    Because of their frequency and disastrous influence, supply chain disruptions have caused extensive concern both in industry and in academia. In a supply chain with one manufacturer and one retailer, the demand of the retailer is uncertain and meanwhile may suffer disruption with some probability. Taking the demand disruption probability as the retailer’s asymmetric information, an informed principal model with the retailer as the principal is explored to make the contract. The retailer can reveal its information to the manufacturer through the contract. It is found that the high-risk retailer intends to pretend to be the low-risk one. So the separating contract is given through the low-information-intensity allocation, in which the order quantity and the transfer payment for the low-risk retailer distort upwards, but those of the high-risk retailer do not distort. In order to reduce the signaling cost which the low-risk retailer pays, the interim efficient model is introduced, which ends up with the order quantity and transfer payment distorting upwards again, but less than before. In the numerical examples, with two different mutation probabilities, the informed principal contracts show the application of the informed principal model in a supply chain with demand disruption.

  13. Characterization of plasma thiol redox potential in a common marmoset model of aging

    Directory of Open Access Journals (Sweden)

    James R. Roede

    2013-01-01

    Due to its short lifespan, ease of use and age-related pathologies that mirror those observed in humans, the common marmoset (Callithrix jacchus) is poised to become a standard nonhuman primate model of aging. Blood and extracellular fluid possess two major thiol-dependent redox nodes involving cysteine (Cys), cystine (CySS), glutathione (GSH) and glutathione disulfide (GSSG). Alteration in these plasma redox nodes significantly affects cellular physiology, and oxidation of the plasma Cys/CySS redox potential (EhCySS) is associated with aging and disease risk in humans. The purpose of this study was to determine age-related changes in plasma redox metabolites and corresponding redox potentials (Eh) to further validate the marmoset as a nonhuman primate model of aging. We measured plasma thiol redox states in marmosets and used existing human data with multivariate adaptive regression splines (MARS) to model the relationships between age and redox metabolites. A classification accuracy of 70.2% and an AUC of 0.703 were achieved using the MARS model built from the marmoset redox data to classify the human samples as young or old. These results show that common marmosets provide a useful model for the thiol redox biology of aging.

  14. Study on geo-information modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana

    2006-01-01

    Roč. 5, č. 5 (2006), s. 1108-1113 ISSN 1109-2777 Institutional research plan: CEZ:AV0Z10750506 Keywords : control GIS * geo-information modelling * uncertainty * spatial temporal approach Web Services Subject RIV: BC - Control Systems Theory

  15. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even enormous expansions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy of using hybrid degree information for objects of different popularity to achieve promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves precision of recommendation.
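
    The random-walk dynamics this model generalizes can be made concrete with the classic two-step mass-diffusion ("ProbS") recommender on a user-object bipartite network, one member of the family of approaches discussed above; the ratings data and user/item names below are invented for illustration.

    ```python
    from collections import defaultdict

    def probs_scores(ratings, target_user):
        """Two-step random walk (mass diffusion, 'ProbS') on a user-object
        bipartite network: resource spreads object -> users -> objects,
        divided by the node degree at each step."""
        user_items = defaultdict(set)
        item_users = defaultdict(set)
        for user, item in ratings:
            user_items[user].add(item)
            item_users[item].add(user)

        # Step 1: each object the target collected sends 1 unit, split among its users.
        user_resource = defaultdict(float)
        for item in user_items[target_user]:
            share = 1.0 / len(item_users[item])
            for user in item_users[item]:
                user_resource[user] += share

        # Step 2: each user redistributes its resource equally among its objects.
        item_score = defaultdict(float)
        for user, res in user_resource.items():
            share = res / len(user_items[user])
            for item in user_items[user]:
                item_score[item] += share

        # Recommend only objects the target user has not collected yet.
        return {i: s for i, s in item_score.items() if i not in user_items[target_user]}

    ratings = [("u1", "a"), ("u1", "b"), ("u2", "a"),
               ("u2", "c"), ("u3", "b"), ("u3", "c")]
    scores = probs_scores(ratings, "u1")   # only item "c" is new for u1
    ```

    Degree-dependent variants (e.g., heat-spreading, or hybrids weighted by object popularity) change only how the shares in the two steps are normalized, which is exactly the knob the generalized model turns.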

  17. 41 CFR 301-72.101 - What information should we provide an employee before authorizing the use of common carrier...

    Science.gov (United States)

    2010-07-01

    ... Section 301-72.101 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY... documents; (b) Your procedures for the control and accounting of common carrier transportation documents... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false What information should...

  18. Spinal Cord Injury Model System Information Network

    Science.gov (United States)

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network as a resource to promote knowledge in the ...

  19. A conceptual model of the automated credibility assessment of the volunteered geographic information

    International Nuclear Information System (INIS)

    Idris, N H; Jackson, M J; Ishak, M H I

    2014-01-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support realizing the development of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assess these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced based applications. There are two main components proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting websites and the sources of data/information. The data component comprises indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess the components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using current emerging technologies from Linked Data infrastructures and third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.

  20. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
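
    The criterion the book builds toward is simple to compute: AIC = 2k − 2 ln L for a model with k parameters and maximized log-likelihood ln L, and for least-squares fits with Gaussian errors it reduces (up to an additive constant) to n·ln(RSS/n) + 2k. A small sketch with made-up residuals:

    ```python
    import math

    def aic(log_likelihood, k):
        """Akaike's Information Criterion: AIC = 2k - 2 ln L. Smaller is better."""
        return 2 * k - 2 * log_likelihood

    def gaussian_aic(residuals, k):
        """AIC for a least-squares fit with Gaussian errors,
        up to an additive constant: n * ln(RSS / n) + 2k."""
        n = len(residuals)
        rss = sum(r * r for r in residuals)
        return n * math.log(rss / n) + 2 * k

    # Hypothetical residuals from two competing fits to the same data:
    # the complex model fits slightly better but pays a parameter penalty.
    simple_model  = [0.9, -1.1, 1.0, -0.8, 1.2, -1.0]   # 2 parameters
    complex_model = [0.8, -1.0, 0.9, -0.9, 1.1, -0.9]   # 5 parameters

    best = min([("simple", gaussian_aic(simple_model, 2)),
                ("complex", gaussian_aic(complex_model, 5))],
               key=lambda t: t[1])
    ```

    With these numbers the 2k penalty outweighs the small reduction in residual sum of squares, so the simpler model is selected, which is the trade-off AIC formalizes.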

  1. Comparing Two Different Approaches to the Modeling of the Common Cause Failures in Fault Trees

    International Nuclear Information System (INIS)

    Vukovic, I.; Mikulicic, V.; Vrbanic, I.

    2002-01-01

    The potential for common cause failures in systems that perform critical functions has been recognized as a very important contributor to the risk associated with operation of nuclear power plants. Consequently, modeling of common cause failures (CCF) in fault trees has become one of the essential elements in any probabilistic safety assessment (PSA). Detailed and realistic representation of CCF potential in a fault tree structure is sometimes a very challenging task. This is especially so in cases where a common cause group involves more than two components. During the last ten years the difficulties associated with this kind of modeling have been overcome to some degree by the development of integral PSA tools with high capabilities. Some of them allow for the definition of CCF groups and their automated expansion in the process of Boolean resolution and generation of minimal cutsets. On the other hand, in PSA models developed and run by more traditional tools, CCF potential had to be modeled in the fault trees explicitly. With explicit CCF modeling, fault trees can grow very large, especially when they involve CCF groups with 3 or more members, which can become an issue for the management of fault trees and basic events with traditional non-integral PSA models. For these reasons various simplifications had to be made. Speaking in terms of an overall PSA model, there are also some other issues that need to be considered, such as maintainability and accessibility of the model. In this paper a comparison is made between the two approaches to CCF modeling. The analysis is based on a full-scope Level 1 PSA model for internal initiating events that had originally been developed with a traditional PSA tool and was later transferred to a new-generation PSA tool with automated CCF modeling capabilities. Related aspects and issues mentioned above are discussed in the paper. (author)

  2. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    Science.gov (United States)

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, the authors reviewed papers published between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good practice methodology to be used by any clinical information modeler. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

  3. Information structure design for databases a practical guide to data modelling

    CERN Document Server

    Mortimer, Andrew J

    2014-01-01

    Computer Weekly Professional Series: Information Structure Design for Databases: A Practical Guide to Data Modelling focuses on practical data modelling covering business and information systems. The publication first offers information on data and information, business analysis, and entity relationship model basics. Discussions cover degree of relationship symbols, relationship rules, membership markers, types of information systems, data driven systems, cost and value of information, importance of data modelling, and quality of information. The book then takes a look at the entity relationship model

  4. Agricultural information dissemination using ICTs: A review and analysis of information dissemination models in China

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2016-03-01

    Over the last three decades, China’s agriculture sector has been transformed from traditional to modern practice through the effective deployment of Information and Communication Technologies (ICTs). Information processing and dissemination have played a critical role in this transformation process. Many studies in relation to agriculture information services have been conducted in China, but few of them have attempted to provide a comprehensive review and analysis of different information dissemination models and their applications. This paper aims to review and identify the ICT-based information dissemination models in China and to share the knowledge and experience in applying emerging ICTs in disseminating agriculture information to farmers and farm communities to improve productivity and economic, social and environmental sustainability. The paper reviews and analyzes the development stages of China’s agricultural information dissemination systems and different mechanisms for agricultural information service development and operations. Seven ICT-based information dissemination models are identified and discussed. Success cases are presented. The findings provide a useful direction for researchers and practitioners in developing future ICT-based information dissemination systems. It is hoped that this paper will also help other developing countries to learn from China’s experience and best practice in their endeavor of applying emerging ICTs in agriculture information dissemination and knowledge transfer.

  5. Research on network information security model and system construction

    OpenAIRE

    Wang Haijun

    2016-01-01

    This paper briefly describes the impact of the big data era on China’s network policy, which also brings more opportunities and challenges to network information security. It reviews the internationally accepted basic models and characteristics of network information security, and analyses the characteristics of network information security and their relationships. On the basis of the NIST security model, this paper describes three security control schemes in the safety management model and the...

  6. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented to human readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the large gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily, and at the same time computers can process them. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for the generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. Copyright © 2015 Elsevier Inc. All rights reserved.
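
    As a rough illustration of deriving both a human-readable rule and a machine-executable check from one constraint definition, the sketch below compiles a few archetype-style constraints; the field names, operators and data are all hypothetical, and real systems emit NRL and Schematron rather than Python.

    ```python
    def compile_constraint(field, op, value):
        """Compile one archetype-style constraint into (rule_text, check).
        rule_text mimics a human-readable implementation-guide rule; check is
        the executable counterpart. Operators here are illustrative only."""
        if op == "max_length":
            return (f"{field} SHALL be at most {value} characters",
                    lambda v: v is not None and len(v) <= value)
        if op == "in_set":
            return (f"{field} SHALL be one of {sorted(value)}",
                    lambda v: v in value)
        if op == "required":
            return (f"{field} SHALL be present",
                    lambda v: v not in (None, ""))
        raise ValueError(f"unknown constraint: {op}")

    def validate(record, compiled):
        """Apply compiled checks to a data instance; return the failed rule texts."""
        return [text for (field, text, check) in compiled
                if not check(record.get(field))]

    # Hypothetical constraints for a clinical record fragment.
    constraints = [("blood_type", "in_set", {"A", "B", "AB", "O"}),
                   ("patient_id", "required", None)]
    compiled = [(f, *compile_constraint(f, op, v)) for f, op, v in constraints]

    errors = validate({"blood_type": "Z", "patient_id": "p1"}, compiled)
    ```

    The point is the single source of truth: the same constraint tuple yields both the prose for the guide and the validation logic, so the two cannot drift apart.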

  7. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Breast cancer is the most common malignancy in women worldwide. With the increasing awareness of heterogeneity in breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS) based Pathifier algorithm, Cox regression and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated the performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups, and compared the areas under the curve based on the binary classification. The resulting prognosis genomic model is composed of fifteen pathways (e.g., the P53 pathway) that had previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also find strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses the pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.
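
    A toy stand-in for the pathway-level scoring step may clarify the idea: score a sample's deregulation of a pathway as the mean absolute z-score of its member genes against a reference cohort. The actual Pathifier PDS uses principal curves and is considerably more sophisticated; the gene names and expression values below are invented.

    ```python
    import math

    def pathway_score(sample, reference, pathway_genes):
        """Toy pathway-level deregulation score: mean |z| over the pathway's
        genes, z-scored against a reference cohort. (A simplification of the
        Pathifier PDS, which fits principal curves instead.)"""
        zs = []
        for gene in pathway_genes:
            ref = reference[gene]
            n = len(ref)
            mean = sum(ref) / n
            sd = math.sqrt(sum((r - mean) ** 2 for r in ref) / (n - 1))
            zs.append(abs((sample[gene] - mean) / sd))
        return sum(zs) / len(zs)

    # Hypothetical expression values for a two-gene "pathway".
    reference = {"TP53": [5.0, 5.2, 4.8, 5.1], "MDM2": [3.0, 3.1, 2.9, 3.0]}
    tumour = {"TP53": 2.0, "MDM2": 6.0}

    score = pathway_score(tumour, reference, ["TP53", "MDM2"])
    ```

    Such per-pathway scores, rather than raw per-gene values, would then be the covariates entering a penalized Cox regression, which is the combination the abstract describes.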

  8. Semantic Building Information Modeling and high definition surveys for Cultural Heritage sites

    Directory of Open Access Journals (Sweden)

    Simone Garagnani

    2012-11-01

    In recent years, digital technology devoted to building design has experienced significant advancements, allowing designers to reach, by means of Building Information Modeling, goals only imagined since the mid-Seventies of the last century. The BIM process, which brings several advantages to the actors and designers who implement it in their workflow, may be employed even in various case studies related to interventions on the existing architectural Cultural Heritage. The semantics typical of classical architecture, so pervasive in the European urban landscape, as well as the features of Modern or Contemporary architecture, coincide with the self-conscious structure made of “smart objects” proper to BIM, which proves to be an effective system to document component relationships. However, the translation of existing buildings’ geometric information, acquired using the common techniques of laser scanning and digital photogrammetry, into BIM objects is still a critical process, which this paper aims to investigate, describing possible methods and approaches.

  9. Building Information Modeling Comprehensive Overview

    Directory of Open Access Journals (Sweden)

    Sergey Kalinichuk

    2015-07-01

    This article provides a comprehensive review of the recently accelerated development of Information Technology within the project market, such as industrial, engineering, procurement and construction projects. The author’s aim is to cover the last decades of growth of Information and Communication Technology in the construction industry, in particular Building Information Modeling, and to show that the problem of choosing an effective project realization method not only has not lost its urgency, but has also become one of the major conditions for intensive technology development. All of this has given a great impulse to shortening project durations and has led to the development of various schedule compression techniques, which have become a focus of modern construction.

  10. Integrating water data, models and forecasts - the Australian Water Resources Information System (Invited)

    Science.gov (United States)

    Argent, R.; Sheahan, P.; Plummer, N.

    2010-12-01

    working with the OGC’s Hydrology Domain Working Group on the development of WaterML 2, which will provide an international standard applicable to a sub-set of the information handled by WDTF. Making water data accessible for multiple uses, such as for predictive models and external products, has required the development of consistent data models for describing the relationships between the various data elements. Early development of the AWRIS data model has utilised a model-driven architecture approach, the benefits of which are likely to accrue in the long term, as more products and services are developed from the common core. Moving on from our initial focus on data organisation and management, the Bureau is in the early stages of developing an integrated modelling suite (the Bureau Hydrological Modelling System - BHMS) which will encompass the variety of hydrological modelling needs of the Bureau, ranging from water balances, assessments and accounts, to streamflow and hydrological forecasting over scales from hours and days to years and decades. It is envisaged that this modelling suite will also be developed, as far as possible, using standardised, discoverable services to enhance data-model and model-model integration.

  11. Modelling financial markets with agents competing on different time scales and with different amount of information

    Science.gov (United States)

    Wohlmuth, Johannes; Andersen, Jørgen Vitting

    2006-05-01

    We use agent-based models to study the competition among investors who use trading strategies with different amounts of information and on different time scales. We find that mixing agents who trade on the same time scale but with different amounts of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in their decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on large and extreme price movements, increasing the volatility of the market. Closeness of the time scales used in decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information, the more the traders in a mixed system with different time scales profit from the presence of traders acting on another time scale than their own.

  12. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

    Full Text Available This paper deals with information behavior as support for models of communication design in the areas of Information Science, Librarianship and Music. The proposed communication models are based on the models of Tubbs and Moss (2003), Garvey and Griffith (1972), as adapted by Hurd (1996), and Wilson (1999). Therefore, the following questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is the informational behavior of users in the educational environment?; (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area?; and (iii) how do the contexts involved in scientific collaboration processes influence the scientific production of the information science field in Brazil? The article includes a literature review on information behavior and its place in scientific communication, considering the influence of the context and/or situation of the subjects involved in motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept, or a set of concepts, can be used from different perspectives, thus reaching different results.

  13. A descriptive model of information problem solving while using internet

    NARCIS (Netherlands)

    Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber

    2009-01-01

    This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search for information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems.

  14. IT Business Value Model for Information Intensive Organizations

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Gastaud Maçada

    2012-01-01

    Full Text Available Many studies have highlighted the capacity Information Technology (IT) has for generating value for organizations. Investments in IT made by organizations have increased each year. Therefore, the purpose of the present study is to analyze IT Business Value for Information Intensive Organizations (IIO), e.g. banks, insurance companies and securities brokers. The research method consisted of a survey that used and combined the models from Weill and Broadbent (1998) and Gregor, Martin, Fernandez, Stern and Vitale (2006). Data was gathered using an adapted instrument containing 5 dimensions (Strategic, Informational, Transactional, Transformational and Infrastructure) with 27 items. The instrument was refined by employing statistical techniques such as Exploratory and Confirmatory Factor Analysis through Structural Equations (first and second order measurement models). The final model is composed of four factors related to IT Business Value: Strategic, Informational, Transactional and Transformational, arranged in 15 items. The Infrastructure dimension was excluded during the model refinement process because it was discovered during interviews that managers were unable to perceive it as a distinct dimension of IT Business Value.

  15. Designing the Microbial Research Commons

    Energy Technology Data Exchange (ETDEWEB)

    Uhlir, Paul F. [Board on Research Data and Information Policy and Global Affairs, Washington, DC (United States)

    2011-10-01

    Recent decades have witnessed an ever-increasing range and volume of digital data. All elements of the pillars of science--whether observation, experiment, or theory and modeling--are being transformed by the continuous cycle of generation, dissemination, and use of factual information. This is even more so for the re-use and re-purposing of digital scientific data beyond the original intent of the data collectors, often with dramatic results. We all know about the potential benefits and impacts of digital data, but we are also aware of the barriers and challenges in maximizing the access to and use of such data. There is thus a need to think about how a data infrastructure can enhance capabilities for finding, using, and integrating information to accelerate discovery and innovation. How can we best implement an accessible, interoperable digital environment so that the data can be repeatedly used by a wide variety of users in different settings and with different applications? With this objective, using microbial communities and microbial data, literature, and the research materials themselves as a test case, the Board on Research Data and Information held an International Symposium on Designing the Microbial Research Commons at the National Academy of Sciences in Washington, DC on 8-9 October 2009. The symposium addressed topics such as models to lower the transaction costs and support access to and use of microbiological materials and digital resources from the perspectives of publicly funded research, public-private interactions, and developing country concerns. The overall goal of the symposium was to stimulate more research on and implementation of improved legal and institutional models for publicly funded research in microbiology.

  16. REMOTE SYNTHESIS AND CONTROL INFORMATION TECHNOLOGY OF SYSTEM-DYNAMIC MODELS

    Directory of Open Access Journals (Sweden)

    A. V. Masloboev

    2015-07-01

    Full Text Available The general line of research concerns the development of information technologies and computer simulation tools for management information and analytical support of complex semistructured systems. Regional socio-economic systems are considered as a representative of this system type. The investigation is carried out within the implementation of the development strategy of the Arctic zone of the Russian Federation and national security until 2020 in the Murmansk region, specifically the engineering of a high-end information infrastructure for problem-solving in innovation and security control of regional development. The research methodology consists of the system dynamics modeling method, distributed information system engineering technologies, and pattern-based modeling and design techniques. The work deals with the development of a toolkit for decision-making information support in the field of innovation security management of regional economics. For that purpose, a suite of system-dynamic models of standard innovation process components and an information technology for the remote formation and control of innovation business simulation models have been developed. The designed toolkit provides forecasting of innovation security index dynamics and of the innovation business effectiveness of regional economics. The information technology is implemented within a thin-client architecture and is intended for automating the simulation model design process for complex systems. The technology's software tools provide pattern-based distributed formation of system-dynamic models and simulation control of innovation processes. The technology enhances the availability and reusability of information support facilities for innovation process simulation through distributed access to simulation modeling tools and through model synthesis from reusable components simulating standard elements of innovation processes.

  17. The Process-Oriented Simulation (POS) model for common cause failures: recent progress

    International Nuclear Information System (INIS)

    Berg, H.P.; Goertz, R.; Schimetschka, E.; Kesten, J.

    2006-01-01

    A common-cause failure (CCF) model based on stochastic simulation has been developed to complement the established approaches and to overcome some of their shortcomings. Reflecting the model's proximity to the CCF process, it was called the Process Oriented Simulation (POS) model. In recent years, some progress has been made to render the POS model fit for practical applications, comprising the development of parameter estimates and a number of test applications in areas where results were already available - especially from CCF benchmarks - so that comparison can provide insights into the strong and weak points of the different approaches. In this paper, a detailed description of the POS model is provided together with the approach to parameter estimation and representative test applications. It is concluded that the POS model has a number of strengths - especially its ability to provide reasonable extrapolation to CCF groups with high degrees of redundancy - and thus a considerable potential to complement the insights obtained from existing modeling. (orig.)

  18. Information model of the 'Ukryttya' object

    International Nuclear Information System (INIS)

    Batij, E.V.; Ermolenko, A.A.; Kotlyarov, V.T.

    2008-01-01

    This paper describes the building principles and content of the 'Ukryttya' object information model that has been developed at the Institute for Safety Problems of NPP. Using a client/server architecture in this system (simultaneous access by many users) together with Autodesk Map Guide and ASP.NET technologies allowed avoiding the typical defects of 'stand-alone desktop' information systems aimed at a single user.

  19. Contexts for concepts: Information modeling for semantic interoperability

    NARCIS (Netherlands)

    Oude Luttighuis, P.H.W.M.; Stap, R.E.; Quartel, D.

    2011-01-01

    Conceptual information modeling is a well-established practice, aimed at preparing the implementation of information systems, the specification of electronic message formats, and the design of information processes. Today's ever more connected world, however, poses new challenges for conceptual information modeling.

  20. Formal approach to modeling of modern Information Systems

    Directory of Open Access Journals (Sweden)

    Bálint Molnár

    2016-01-01

    Full Text Available Most recently, the concept of business documents has started to play a double role. On one hand, a business document (a word-processing text or calculation sheet) can be used as a specification tool; on the other hand, the business document is an immanent constituent of business processes, and thereby an essential component of business Information Systems. The recent tendency is that the majority of documents and their contents within business Information Systems remain in semi-structured format, while a lesser part of documents is transformed into schemas of structured databases. In order to keep the emerging situation in hand, we suggest the creation of (1) a theoretical framework for modeling business Information Systems; and (2) a design method for practical application based on the theoretical model that provides the structuring principles. The modeling approach, which focuses on documents and their interrelationships with business processes, assists in perceiving the activities of modern Information Systems.

  1. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
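The relationship between the observed-information and sandwich-type estimators discussed above can be sketched in the simplest possible setting, a single Bernoulli rate parameter (the function below is our illustration, not code from the paper): the sandwich variance is A⁻¹BA⁻¹, where A is the observed information and B the empirical cross-product of per-observation scores.

```python
import numpy as np

def bernoulli_variance_estimates(y):
    """Two asymptotic variance estimates for the MLE of a Bernoulli rate.

    Returns (observed-information variance, sandwich variance)."""
    y = np.asarray(y, dtype=float)
    p = y.mean()                                  # MLE of the success probability
    score = y / p - (1 - y) / (1 - p)             # per-observation score at the MLE
    info_obs = np.sum(y / p**2 + (1 - y) / (1 - p)**2)   # A: observed information
    info_xp = np.sum(score**2)                    # B: empirical cross-product
    var_obs = 1.0 / info_obs
    var_sandwich = info_xp / info_obs**2          # A^-1 B A^-1 (scalars here)
    return var_obs, var_sandwich
```

For this correctly specified one-parameter model the two estimates coincide exactly at the MLE (both equal p̂(1 − p̂)/n), which mirrors the simulation finding that the estimators only diverge meaningfully under model misspecification.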

  2. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others

    1997-08-01

    This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key to the modelling process is to build a prototype model and use that model as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) horizontal correlation based on the lower and higher stand durations of the paleo-lake level; (3) determining the model's direction based on the paleocurrent statistics; (4) estimating the sandbody communication by photomosaic and profiles; (6) estimating the distribution of reservoir properties within the sandbody by lithofacies analysis; and (7) building the reservoir model at sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of the Youshashan oil field has been built using this method.

  3. Information retrieval models foundations and relationships

    CERN Document Server

    Roelleke, Thomas

    2013-01-01

    Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term-Frequency times Inverse-Document-Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best-Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). Also, the early 2000s saw the arrival of divergence from randomness (DFR).Regarding in

  4. ξ common cause failure model and method for defense effectiveness estimation

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1991-08-01

    Two issues are dealt with. One is the development of an event-based parametric model called the ξ-CCF model. Its parameters are expressed as fractions of the progressive multiplicities of failure events. Through these expressions, the contribution of each multiple failure can be presented more clearly, which helps in selecting defense tactics against common cause failures. The other is a method, based on operational experience and engineering judgement, for estimating the effectiveness of defense tactics. It is expressed in terms of a reduction matrix for a given tactic on a specific plant, in event-by-event form. The application to a practical example shows that the model, in cooperation with the method, can simply estimate the effectiveness of defense tactics. It can be easily used by operators and its application may be extended.

  5. An information spreading model based on online social networks

    Science.gov (United States)

    Wang, Tao; He, Juanjuan; Wang, Xiaoxia

    2018-01-01

    Online social platforms have become very popular in recent years. In addition to spreading information, users can review or collect information on these platforms. According to the information spreading rules of online social networks, a new information spreading model, the IRCSS model, is proposed in this paper. It includes a sharing mechanism, a reviewing mechanism, a collecting mechanism and a stifling mechanism. Mean-field equations are derived to describe the dynamics of the IRCSS model. Moreover, the steady states of reviewers, collectors and stiflers, and the effects of parameters on the peak values of reviewers, collectors and sharers, are analyzed. Finally, numerical simulations are performed on different networks. Results show that the collecting and reviewing mechanisms, as well as the connectivity of the network, make information travel wider and faster; compared to the WS and ER networks, the speed of reviewing, sharing and collecting information is fastest on the BA network.
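The abstract does not give the mean-field equations themselves, so the following is a hypothetical homogeneous-mixing sketch with invented rate constants (beta, rho, gamma, delta), intended only to show how sharer/reviewer/collector/stifler compartments of this kind can be integrated with a simple Euler scheme:

```python
import numpy as np

def simulate_ircss(beta=0.3, rho=0.1, gamma=0.1, delta=0.05, steps=2000, dt=0.05):
    """Toy mean-field dynamics in the spirit of an IRCSS-style model.

    Ignorants (i) who contact sharers (s) start sharing; sharers then move to
    the reviewer (r), collector (c) or stifler (f) compartments. All rate
    constants are hypothetical, not taken from the paper."""
    i, s, r, c, f = 0.99, 0.01, 0.0, 0.0, 0.0
    for _ in range(steps):
        new_share = beta * i * s      # ignorant-sharer contacts
        to_review = rho * s           # sharers who review the information
        to_collect = gamma * s        # sharers who collect it
        to_stifle = delta * s         # sharers who lose interest
        i -= dt * new_share
        s += dt * (new_share - to_review - to_collect - to_stifle)
        r += dt * to_review
        c += dt * to_collect
        f += dt * to_stifle
    return i, s, r, c, f
```

Because every Euler update moves mass between compartments, the population fractions always sum to one, which is a useful sanity check on any such mean-field integration.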

  6. Creative Commons licenses and the non-commercial condition: Implications for the re-use of biodiversity information.

    Science.gov (United States)

    Hagedorn, Gregor; Mietchen, Daniel; Morris, Robert A; Agosti, Donat; Penev, Lyubomir; Berendsohn, Walter G; Hobern, Donald

    2011-01-01

    The Creative Commons (CC) licenses are a suite of copyright-based licenses defining terms for the distribution and re-use of creative works. CC provides licenses for different use cases and includes open content licenses such as the Attribution license (CC BY, used by many Open Access scientific publishers) and the Attribution Share Alike license (CC BY-SA, used by Wikipedia, for example). However, the license suite also contains non-free and non-open licenses like those containing a "non-commercial" (NC) condition. Although many people identify "non-commercial" with "non-profit", detailed analysis reveals that significant differences exist and that the license may impose some unexpected re-use limitations on works thus licensed. After providing background information on the concepts of Creative Commons licenses in general, this contribution focuses on the NC condition, its advantages, disadvantages and appropriate scope. Specifically, it contributes material towards a risk analysis for potential re-users of NC-licensed works.

  7. Modeling Interoperable Information Systems with 3LGM² and IHE.

    Science.gov (United States)

    Stäubert, S; Schaaf, M; Jahn, F; Brandner, R; Winter, A

    2015-01-01

    Strategic planning of information systems (IS) in healthcare requires descriptions of the current and the future IS state. Enterprise architecture planning (EAP) tools like the 3LGM² tool help to build up and to analyze IS models. A model of the planned architecture can be derived from an analysis of current state IS models. Building an interoperable IS, i. e. an IS consisting of interoperable components, can be considered a relevant strategic information management goal for many IS in healthcare. Integrating the Healthcare Enterprise (IHE) is an initiative which targets interoperability by using established standards. Our objectives were to link IHE concepts to 3LGM² concepts within the 3LGM² tool; to describe how an information manager can be supported in handling the complex IHE world and planning interoperable IS using 3LGM² models; and to describe how developers or maintainers of IHE profiles can be supported by the representation of IHE concepts in 3LGM². Conceptualization and concept mapping methods were used to assign IHE concepts such as domains, integration profiles, actors and transactions to the concepts of the three-layer graph-based meta-model (3LGM²). IHE concepts were successfully linked to 3LGM² concepts. An IHE master model, i. e. an abstract model for IHE concepts, was modeled with the help of the 3LGM² tool. Two IHE domains (ITI, QRPH) were modeled in detail. We describe two use cases for the representation of IHE concepts and IHE domains as 3LGM² models. Information managers can use the IHE master model as a reference model for modeling interoperable IS based on IHE profiles during EAP activities. IHE developers are supported in analyzing the consistency of IHE concepts with the help of the IHE master model and the functions of the 3LGM² tool. The complex relations between IHE concepts can be modeled by using the EAP method 3LGM². The 3LGM² tool offers visualization and analysis features which are now available for the IHE master model. Thus information managers and IHE

  8. Mutual information and the fidelity of response of gene regulatory models

    International Nuclear Information System (INIS)

    Tabbaa, Omar P; Jayaprakash, C

    2014-01-01

    We investigate cellular response to extracellular signals by using information theory techniques motivated by recent experiments. We present results for the steady state of the following gene regulatory models found in both prokaryotic and eukaryotic cells: a linear transcription-translation model and a positive or negative auto-regulatory model. We calculate both the information capacity and the mutual information exactly for simple models and approximately for the full model. We find that (1) small changes in mutual information can lead to potentially important changes in cellular response and (2) there are diminishing returns in the fidelity of response as the mutual information increases. We calculate the information capacity using Gillespie simulations of a model for the TNF-α-NF-κB network and find good agreement with the measured value for an experimental realization of this network. Our results provide a quantitative understanding of the differences in cellular response when comparing experimentally measured mutual information values of different gene regulatory models. Our calculations demonstrate that Gillespie simulations can be used to compute the mutual information of more complex gene regulatory models, providing a potentially useful tool in synthetic biology. (paper)
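The mutual information values discussed above are computed between an input signal and a response, given their joint distribution. A minimal generic helper for the discrete case (our illustration, not the authors' code) is:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a discrete joint distribution p(x, y).

    `joint` is a 2-D array of probabilities (normalized internally)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask])))
```

As a sanity check, a noiseless one-bit channel carries exactly 1 bit, while an independent joint distribution carries 0; in the gene regulatory setting, the joint distribution would come from stochastic (e.g. Gillespie) simulation of signal and protein copy number.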

  9. An urban runoff model designed to inform stormwater management decisions.

    Science.gov (United States)

    Beck, Nicole G; Conley, Gary; Kanner, Lisa; Mathias, Margaret

    2017-05-15

    We present an urban runoff model designed for stormwater managers to quantify runoff reduction benefits of mitigation actions that has lower input data and user expertise requirements than most commonly used models. The stormwater tool to estimate load reductions (TELR) employs a semi-distributed approach, where landscape characteristics and process representation are spatially-lumped within urban catchments on the order of 100 acres (40 ha). Hydrologic computations use a set of metrics that describe a 30-year rainfall distribution, combined with well-tested algorithms for rainfall-runoff transformation and routing to generate average annual runoff estimates for each catchment. User inputs include the locations and specifications for a range of structural best management practice (BMP) types. The model was tested in a set of urban catchments within the Lake Tahoe Basin of California, USA, where modeled annual flows matched that of the observed flows within 18% relative error for 5 of the 6 catchments and had good regional performance for a suite of performance metrics. Comparisons with continuous simulation models showed an average of 3% difference from TELR predicted runoff for a range of hypothetical urban catchments. The model usually identified the dominant BMP outflow components within 5% relative error of event-based measured flow data and simulated the correct proportionality between outflow components. TELR has been implemented as a web-based platform for use by municipal stormwater managers to inform prioritization, report program benefits and meet regulatory reporting requirements (www.swtelr.com). Copyright © 2017. Published by Elsevier Ltd.
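The abstract does not spell out TELR's algorithms, so as a stand-in for the "well-tested algorithms for rainfall-runoff transformation" it mentions, here is the standard SCS curve-number method, a widely used event-scale transformation (the function name and parameter choices are ours):

```python
def scs_runoff(p_in, cn):
    """SCS curve-number rainfall-runoff transformation (depths in inches).

    p_in: event rainfall depth; cn: curve number (higher = more impervious)."""
    s = 1000.0 / cn - 10.0   # potential maximum retention after runoff begins
    ia = 0.2 * s             # initial abstraction (conventional 20% of S)
    if p_in <= ia:
        return 0.0           # all rainfall absorbed before runoff starts
    return (p_in - ia) ** 2 / (p_in - ia + s)
```

A fully impervious surface (CN = 100) converts all rainfall to runoff, and runoff increases with the curve number, matching the intuition that BMPs which lower the effective imperviousness of a catchment reduce its annual runoff.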

  10. A Realism-Based View on Counts in OMOP's Common Data Model.

    Science.gov (United States)

    Ceusters, Werner; Blaisure, Jonathan

    2017-01-01

    Correctly counting entities is a requirement for analytics tools to function appropriately. The Observational Medical Outcomes Partnership's (OMOP) Common Data Model (CDM) specifications were examined to assess the extent to which counting in OMOP CDM compatible data repositories would work as expected. To that end, constructs (tables, fields and attributes) defined in the OMOP CDM as well as cardinality constraints and other business rules found in its documentation and related literature were compared to the types of entities and axioms proposed in realism-based ontologies. It was found that not only the model itself, but also a proposed standard algorithm for computing condition eras may lead to erroneous counting of several sorts of entities.

  11. Organizational information assets classification model and security architecture methodology

    Directory of Open Access Journals (Sweden)

    Mostafa Tamtaji

    2015-12-01

    Full Text Available Today, organizations are exposed to a huge volume and diversity of information and information assets, produced in different systems such as KMS, financial and accounting systems, office and industrial automation systems and so on, and protection of this information is necessary. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released. The several benefits of this model have led organizations to a strong trend toward implementing cloud computing. Maintaining and managing information security are the main challenges in developing and accepting this model. In this paper, first, following the design science research methodology and compatible with the design process in information systems research, a complete categorization of organizational assets is presented, including 355 different types of information assets in 7 groups and 3 levels, so that managers are able to plan the corresponding security controls according to the importance of each group. Then, to direct an organization in architecting its information security in a cloud computing environment, an appropriate methodology is presented. The presented cloud computing security architecture resulting from the proposed methodology, together with the presented classification model, was discussed and verified according to the Delphi method and expert comments.

  12. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully used these methods with information models in the HIV and radiological image domains.
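The authors' actual transformation rules are not given here; as a toy illustration of the first step of such a workflow, flattening an XML instance document into subject-predicate-object statements that an OWL ontology could later formalize, consider the following (the predicates hasPart/hasValue are invented for the example):

```python
import xml.etree.ElementTree as ET

def xml_to_triples(xml_text):
    """Flatten an XML instance document into (subject, predicate, object)
    triples, a naive first step toward an ontology-style representation."""
    root = ET.fromstring(xml_text)
    triples = []

    def walk(elem, subject):
        for child in elem:
            node = f"{subject}/{child.tag}"
            triples.append((subject, "hasPart", child.tag))
            if child.text and child.text.strip():
                triples.append((node, "hasValue", child.text.strip()))
            walk(child, node)

    walk(root, root.tag)
    return triples
```

A real XML-to-OWL pipeline would map element names to ontology classes and validate against the source schema, but the flattening step shows why the mapping is mechanical enough to automate.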

  13. Common factors method to predict the carcass composition tissue in kid goats

    Directory of Open Access Journals (Sweden)

    Helen Fernanda Barros Gomes

    2013-03-01

    Full Text Available The objective of this work was to analyze the interrelations among carcass weights and measures, the thickness and area of the longissimus lumborum muscle, and sternum tissue thickness, measured directly on the carcass and by ultrasound scan. Measures were taken on live animals and after slaughter to develop multiple linear regression models to estimate the composition of the shoulder blade from selected variables in 89 kids of both genders and five breed groups, raised in a feedlot system. The variables considered relevant and not redundant in the information they carry, according to common factor analysis, were used to develop the carcass composition estimation models. The assumptions of the linear regression models regarding residuals were evaluated; the estimated residuals were subjected to analysis of variance and the means were compared by Student's t test. Based on these results, the initial group of 32 variables could be reduced to four: hot carcass weight, rump perimeter, leg length and tissue height at the fourth sternum bone. Common factor analysis proved an effective technique for studying the interrelations among the independent variables. The measures of carcass dimension alone did not add any information to hot carcass weight. Carcass muscle weight can be estimated with high precision from simple models, without the need for information related to gender and breed; such models can be built from carcass weight alone, which makes them easy to apply. The fat and bone estimation models were not as accurate.

  14. Mathematical models of information and stochastic systems

    CERN Document Server

    Kornreich, Philipp

    2008-01-01

    From ancient soothsayers and astrologists to today's pollsters and economists, probability theory has long been used to predict the future on the basis of past and present knowledge. Mathematical Models of Information and Stochastic Systems shows that the amount of knowledge about a system plays an important role in the mathematical models used to foretell the future of the system. It explains how this known quantity of information is used to derive a system's probabilistic properties. After an introduction, the book presents several basic principles that are employed in the remainder of the t

  15. The Esri 3D city information model

    International Nuclear Information System (INIS)

    Reitz, T; Schubiger-Banz, S

    2014-01-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution, which provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  16. Competitive provision of tune-ins under common private information

    Czech Academy of Sciences Publication Activity Database

    Celik, Levent

    2016-01-01

    Vol. 44, January (2016), pp. 113-122 ISSN 0167-7187 Institutional support: PRVOUK-P23 Keywords: informative advertising * information disclosure * tune-ins Subject RIV: AH - Economics Impact factor: 0.795, year: 2016

  17. Food Insecurity and Common Mental Disorders among Ethiopian Youth: Structural Equation Modeling.

    Directory of Open Access Journals (Sweden)

    Mulusew G Jebena

    Full Text Available Although the consequences of food insecurity for the physical health and nutritional status of youth have been reported, its effect on their mental health remains less investigated in developing countries. The aim of this study was to examine the pathways through which food insecurity is associated with poor mental health status among youth living in Ethiopia. We used data from the Jimma Longitudinal Family Survey of Youth (JLFSY) collected in 2009/10. A total of 1,521 youth were included in the analysis. We measured food insecurity using a 5-item scale and common mental disorders using the 20-item Self-Reporting Questionnaire (SRQ-20). Structural and generalized equation modeling using the maximum likelihood estimation method was used to analyze the data. The prevalence of common mental disorders was 30.8% (95% CI: 28.6, 33.2). Food insecurity was independently associated with common mental disorders (β = 0.323, P<0.05). Most (91.8%) of the effect of food insecurity on common mental disorders was direct; only 8.2% of their relationship was partially mediated by physical health. In addition, poor self-rated health (β = 0.285, P<0.05), high socioeconomic status (β = -0.076, P<0.05), parental education (β = 0.183, P<0.05), living in an urban area (β = 0.139, P<0.05), and female-headed household (β = 0.192, P<0.05) were associated with common mental disorders. Food insecurity is directly associated with common mental disorders among youth in Ethiopia. Interventions that aim to improve the mental health status of youth should consider strategies to improve access to sufficient, safe and nutritious food.

  18. Modeling Human Information Acquisition Strategies

    NARCIS (Netherlands)

    Heuvelink, Annerieke; Klein, Michel C. A.; van Lambalgen, Rianne; Taatgen, Niels A.; Rijn, Hedderik van

    2009-01-01

    The focus of this paper is the development of a computational model for intelligent agents that decides on whether to acquire required information by retrieving it from memory or by interacting with the world. First, we present a task for which such decisions have to be made. Next, we discuss an

  19. Common pathways toward informing policy and environmental strategies to promote health: a study of CDC's Prevention Research Centers.

    Science.gov (United States)

    Neri, Elizabeth M; Stringer, Kate J; Spadaro, Antonia J; Ballman, Marie R; Grunbaum, Jo Anne

    2015-03-01

    This study examined the roles academic researchers can play to inform policy and environmental strategies that promote health and prevent disease. Prevention Research Centers (PRCs) engage in academic-community partnerships to conduct applied public health research. Interviews were used to collect data on the roles played by 32 PRCs to inform policy and environmental strategies that were implemented between September 2009 and September 2010. Descriptive statistics were calculated in SAS 9.2. A difference in roles played was observed depending on whether strategies were policy or environmental. Of the policy initiatives, the most common roles were education, research, and partnership. In contrast, the most prevalent roles the PRCs played in environmental approaches were research and providing health promotion resources. Academic research centers play various roles to help inform policy and environmental strategies. © 2014 Society for Public Health Education.

  20. Predictive modeling in e-mental health: A common language framework

    Directory of Open Access Journals (Sweden)

    Dennis Becker

    2018-06-01

    Full Text Available Recent developments in mobile technology, sensor devices, and artificial intelligence have created new opportunities for mental health care research. Enabled by large datasets collected in e-mental health research and practice, clinical researchers and members of the data mining community increasingly join forces to build predictive models for health monitoring, treatment selection, and treatment personalization. This paper aims to bridge the historical and conceptual gaps between the distant research domains involved in this new collaborative research by providing a conceptual model of common research goals. We first provide a brief overview of the data mining field and methods used for predictive modeling. Next, we propose to characterize predictive modeling research in mental health care on three dimensions: (1) time relative to treatment (i.e., from screening to post-treatment relapse monitoring), (2) type of available data (e.g., questionnaire data, ecological momentary assessments, smartphone sensor data), and (3) type of clinical decision (i.e., whether data are used for screening purposes, treatment selection or treatment personalization). Building on these three dimensions, we introduce a framework that identifies four model types that can be used to classify existing and future research and applications. To illustrate this, we use the framework to classify and discuss published predictive modeling mental health research. Finally, in the discussion, we reflect on the next steps that are required to drive forward this promising new interdisciplinary field.

  1. Spiral model pilot project information model

    Science.gov (United States)

    1991-01-01

    The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.

  2. Exploring the common molecular basis for the universal DNA mutation bias: Revival of the Löwdin mutation model

    International Nuclear Information System (INIS)

    Fu, Liang-Yu; Wang, Guang-Zhong; Ma, Bin-Guang; Zhang, Hong-Yu

    2011-01-01

    Highlights: → A universal G:C → A:T mutation bias exists in the three domains of life. → This universal mutation bias has not been sufficiently explained. → A DNA mutation model proposed by Löwdin 40 years ago offers a common explanation. -- Abstract: Recently, numerous genome analyses revealed the existence of a universal G:C → A:T mutation bias in bacteria, fungi, plants and animals. To explore the molecular basis for this mutation bias, we examined three well-known DNA mutation models, i.e., the oxidative damage model, the UV-radiation damage model and the CpG hypermutation model. None of these models provides a sufficient explanation for the universal mutation bias. We therefore turned to a DNA mutation model proposed by Löwdin 40 years ago, which is based on inter-base double proton transfer (DPT). Since DPT is a fundamental and spontaneous chemical process that occurs much more frequently within G:C pairs than A:T pairs, the Löwdin model offers a common explanation for the observed universal mutation bias and thus has broad biological implications.

  3. Conceptualising 'knowledge management' in the context of library and information science using the core/periphery model

    Directory of Open Access Journals (Sweden)

    O.B. Onyancha

    2009-04-01

    Full Text Available This study took cognisance of the fact that the term 'knowledge management' lacks a universally accepted definition, and consequently sought to describe the term using the most common co-occurring terms in knowledge management (KM) literature as indexed in the Library, Information Science and Technology Abstracts (LISTA) database. Using a variety of approaches and analytic techniques (e.g. core/periphery analysis and co-occurrence of words as subject terms), data were analysed using the core/periphery model and social networks through UCINET for Windows, TI, textSTAT and Bibexcel computer-aided software. The study identified the following as the compound terms with which KM co-occurs most frequently: information resources management, information science, information technology, information services, information retrieval, library science, management information systems and libraries. The core single subject terms with which KM can be defined include resources, technology, libraries, systems, services, retrieval, storage, data and computers. The article concludes by offering the library and information science (LIS) professionals' general perception of KM based on their use of terms, through which KM can be defined within the context of LIS.

  4. A Structural Contingency Theory Model of Library and Technology Partnerships within an Academic Library Information Commons

    Science.gov (United States)

    Tuai, Cameron K.

    2011-01-01

    The integration of librarians and technologists to deliver information services represents a new and potentially costly organizational challenge for many library administrators. To understand better how to control the costs of integration, the research presented here will use structural contingency theory to study the coordination of librarians…

  5. Enterprise Modelling for an Educational Information Infrastructure

    NARCIS (Netherlands)

    Widya, I.A.; Michiels, E.F.; Volman, C.J.A.M.; Pokraev, S.; de Diana, I.P.F.; Filipe, J.; Sharp, B.; Miranda, P.

    2001-01-01

    This paper reports the modelling exercise of an educational information infrastructure that aims to support the organisation of teaching and learning activities suitable for a wide range of didactic policies. The modelling trajectory focuses on capturing invariant structures of relations between

  6. Propensity to Search: Common, Leisure, and Labor Models of Consumer Behavior

    Directory of Open Access Journals (Sweden)

    Sergey MALAKHOV

    2015-05-01

    Full Text Available The analysis of the propensity to search specifies the "common" or ordinary model of consumer behavior, based on a synthesis of the neoclassical approach with the satisficing concept, and the "leisure" and "labor" models of behavior, which represent different combinations of conspicuous consumption, leisure, and labor. While the "common" model of behavior demonstrates a moderate propensity to search, the "leisure" and "labor" models of consumer behavior exhibit vigorous propensities to search that result in the purchase of unnecessary items and therefore in overconsumption. The same trend appears in home production, where a vigorous propensity to search takes the form of a vigorous propensity to produce at home. The analysis of trends in the allocation of time provides grounds for the assumption that men have a more accentuated propensity to search and to produce at home than women, which results in the overconsumption of unnecessary items.

  7. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems such as unreliable system design specifications. To overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, model-driven development can respond flexibly to changes in business logic or implementation technologies; in model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies model-driven development to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.

  8. Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.

    Science.gov (United States)

    Ferrari, Alberto

    2017-01-01

    Shannon entropy is increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet the distributional properties of information entropy as a random variable have seldom been studied, leading researchers to rely mainly on linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly under different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results from the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set from a real experiment on animal communication.
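For background on the record above: the plug-in (maximum-likelihood) Shannon entropy estimator, whose behaviour under repeated measurement motivates regression models of this kind, can be sketched as follows. This is an illustrative sketch, not code from the paper; the function name and example sequences are made up.

```python
from collections import Counter
from math import log2

def plugin_entropy(sequence):
    """Plug-in (maximum-likelihood) Shannon entropy of a symbol sequence, in bits."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform 4-symbol sequence has entropy log2(4) = 2 bits;
# a constant sequence has entropy 0.
print(plugin_entropy("ACGTACGTACGT"))  # → 2.0
print(plugin_entropy("AAAA"))          # → 0.0
```

The plug-in estimator is biased downward for small samples, which is one reason the record's Bayesian treatment of entropy as a random variable is of interest.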

  9. Deriving user-informed climate information from climate model ensemble results

    Science.gov (United States)

    Huebener, Heike; Hoffmann, Peter; Keuler, Klaus; Pfeifer, Susanne; Ramthun, Hans; Spekat, Arne; Steger, Christian; Warrach-Sagi, Kirsten

    2017-07-01

    Communication between providers and users of climate model simulation results still needs to be improved. In the German regional climate modeling project ReKliEs-De a midterm user workshop was conducted to allow the intended users of the project results to assess the preliminary results and to streamline the final project results to their needs. The user feedback highlighted, in particular, the still considerable gap between climate research output and user-tailored input for climate impact research. Two major requests from the user community addressed the selection of sub-ensembles and some condensed, easy to understand information on the strengths and weaknesses of the climate models involved in the project.

  10. Combined Common Person and Common Item Equating of Medical Science Examinations.

    Science.gov (United States)

    Kelley, Paul R.

    This equating study of the National Board of Medical Examiners Examinations was a combined common persons and common items equating, using the Rasch model. The 1,000-item test was administered to about 3,000 second-year medical students in seven equal-length subtests: anatomy, physiology, biochemistry, pathology, microbiology, pharmacology, and…

  11. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    Science.gov (United States)

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-01-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications. PMID:27929077

  12. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  13. Noise Simulations of the High-Lift Common Research Model

    Science.gov (United States)

    Lockard, David P.; Choudhari, Meelan M.; Vatsa, Veer N.; O'Connell, Matthew D.; Duda, Benjamin; Fares, Ehab

    2017-01-01

    The PowerFLOW(TradeMark) code has been used to perform numerical simulations of the high-lift version of the Common Research Model (HL-CRM) that will be used for experimental testing of airframe noise. Time-averaged surface pressure results from PowerFLOW(TradeMark) are found to be in reasonable agreement with those from steady-state computations using FUN3D. Surface pressure fluctuations are highest around the slat break and nacelle/pylon region, and synthetic array beamforming results also indicate that this region is the dominant noise source on the model. The gap between the slat and pylon on the HL-CRM is not realistic for modern aircraft, and most nacelles include a chine that is absent in the baseline model. To account for those effects, additional simulations were completed with a chine and with the slat extended into the pylon. The case with the chine was nearly identical to the baseline, and the slat extension resulted in higher surface pressure fluctuations but slightly reduced radiated noise. The full-span slat geometry without the nacelle/pylon was also simulated and found to be around 10 dB quieter than the baseline over almost the entire frequency range. The current simulations are still considered preliminary as changes in the radiated acoustics are still being observed with grid refinement, and additional simulations with finer grids are planned.

  14. 5D Building Information Modelling – A Practicability Review

    Directory of Open Access Journals (Sweden)

    Lee Xia Sheng

    2016-01-01

    Full Text Available Quality, time and cost are the three most important elements in any construction project. Building information that arrives timely and accurately in multiple dimensions facilitates a refined decision-making process that can improve construction quality, time and cost. 5-dimensional Building Information Modelling, or 5D BIM, is an emerging trend in the construction industry that integrates all the major information from the initial design to the final construction stage; the integrated information is then arranged and communicated through Virtual Design and Construction (VDC). This research gauges the practicability of 5D BIM with an action-research pilot study based on hands-on modelling of a conceptual bungalow design in one of the most popular BIM tools. A bungalow was selected as the study subject to simulate the major stages of the 5D BIM digital workflow. The process starts with developing drawings (2D) into a digital model (3D), followed by the incorporation of time (4D) and cost (5D). Observations focused on the major factors that affect the practicability of 5D BIM, including modelling effort, interoperability, information output and limitations. This research concludes that 5D BIM has a high level of practicability, which further differentiates BIM from Computer Aided Design (CAD). The integration of information not only enhanced the efficiency and accuracy of processes at all stages, but also enabled decision makers to form a sophisticated interpretation of information that is almost impossible with the conventional 2D CAD workflow. Although it is possible to incorporate more than 5 dimensions of information, it is foreseeable that excessive information may escalate complexity unfavourably for BIM implementation. 5D BIM has achieved a significant level of practicability; further research should be conducted to streamline implementation. Once 5D BIM is matured and widely

  15. Informed consent in direct-to-consumer personal genome testing: the outline of a model between specific and generic consent.

    Science.gov (United States)

    Bunnik, Eline M; Janssens, A Cecile J W; Schermer, Maartje H N

    2014-09-01

    Broad genome-wide testing is increasingly finding its way to the public through the online direct-to-consumer marketing of so-called personal genome tests. Personal genome tests estimate genetic susceptibilities to multiple diseases and other phenotypic traits simultaneously. Providers commonly make use of Terms of Service agreements rather than informed consent procedures. However, to protect consumers from the potential physical, psychological and social harms associated with personal genome testing and to promote autonomous decision-making with regard to the testing offer, we argue that current practices of information provision are insufficient and that there is a place--and a need--for informed consent in personal genome testing, also when it is offered commercially. The increasing quantity, complexity and diversity of most testing offers, however, pose challenges for information provision and informed consent. Both specific and generic models for informed consent fail to meet its moral aims when applied to personal genome testing. Consumers should be enabled to know the limitations, risks and implications of personal genome testing and should be given control over the genetic information they do or do not wish to obtain. We present the outline of a new model for informed consent which can meet both the norm of providing sufficient information and the norm of providing understandable information. The model can be used for personal genome testing, but will also be applicable to other, future forms of broad genetic testing or screening in commercial and clinical settings. © 2012 John Wiley & Sons Ltd.

  16. Creating a data resource: what will it take to build a medical information commons?

    Science.gov (United States)

    Deverka, Patricia A; Majumder, Mary A; Villanueva, Angela G; Anderson, Margaret; Bakker, Annette C; Bardill, Jessica; Boerwinkle, Eric; Bubela, Tania; Evans, Barbara J; Garrison, Nanibaa' A; Gibbs, Richard A; Gentleman, Robert; Glazer, David; Goldstein, Melissa M; Greely, Hank; Harris, Crane; Knoppers, Bartha M; Koenig, Barbara A; Kohane, Isaac S; La Rosa, Salvatore; Mattison, John; O'Donnell, Christopher J; Rai, Arti K; Rehm, Heidi L; Rodriguez, Laura L; Shelton, Robert; Simoncelli, Tania; Terry, Sharon F; Watson, Michael S; Wilbanks, John; Cook-Deegan, Robert; McGuire, Amy L

    2017-09-22

    National and international public-private partnerships, consortia, and government initiatives are underway to collect and share genomic, personal, and healthcare data on a massive scale. Ideally, these efforts will contribute to the creation of a medical information commons (MIC), a comprehensive data resource that is widely available for both research and clinical uses. Stakeholder participation is essential in clarifying goals, deepening understanding of areas of complexity, and addressing long-standing policy concerns such as privacy and security and data ownership. This article describes eight core principles proposed by a diverse group of expert stakeholders to guide the formation of a successful, sustainable MIC. These principles promote formation of an ethically sound, inclusive, participant-centric MIC and provide a framework for advancing the policy response to data-sharing opportunities and challenges.

  17. Data needs for common cause failure analysis

    International Nuclear Information System (INIS)

    Parry, G.W.; Paula, H.M.; Rasmuson, D.; Whitehead, D.

    1990-01-01

    The procedures guide for common cause failure analysis published jointly by USNRC and EPRI requires a detailed historical event analysis. Recent work on the further development of the cause-defense picture of common cause failures introduced in that guide identified the information that is necessary to perform the detailed analysis in an objective manner. This paper summarizes these information needs

  18. Cognition to Collaboration: User-Centric Approach and Information Behaviour Theories/Models

    Directory of Open Access Journals (Sweden)

    Alperen M Aydin

    2016-12-01

    Full Text Available Aim/Purpose: The objective of this paper is to review the vast literature of user-centric information science and report on emerging themes in information behaviour science. Background: The paradigmatic shift from a system-centric to a user-centric approach facilitates research on cognitive and individual information processing, and various information behaviour theories/models have emerged. Methodology: Recent information behaviour theories and models are presented. Features, strengths and weaknesses of the models are discussed through an analysis of the information behaviour literature. Contribution: This paper sheds light on the weaknesses of earlier information behaviour models and stresses (and advocates) the need for research on social information behaviour. Findings: Prominent information behaviour models deal with individual information behaviour. People live in a social world and sort out most of their daily or work problems in groups; however, only seven papers discuss social information behaviour (Scopus search). Recommendations for Practitioners: ICT tools used for inter-organisational sharing should be redesigned for effective information-sharing during disaster/emergency times. Recommendations for Researchers: Sources on the social side of information behaviour are scarce, yet most work tasks are carried out in groups/teams. Impact on Society: In dynamic work contexts like disaster management and health care settings, collaborative information-sharing may reduce losses. Future Research: Fieldwork will be conducted in a disaster management context investigating inter-organisational information-sharing.

  19. The common operational picture as collective sensemaking

    NARCIS (Netherlands)

    Wolbers, J.J.; Boersma, F.K.

    2013-01-01

    The common operational picture is used to overcome coordination and information management problems during emergency response. Increasingly, this approach is incorporated in more advanced information systems. This is rooted in an 'information warehouse' perspective, which implies information can be

  20. Models of organisation and information system design | Mohamed ...

    African Journals Online (AJOL)

    We devote this paper to models of organisation, and consider which is best suited to provide a basis for information processing and transmission. In this respect we shall be dealing with four models of organisation, namely: the classical model, the behavioural model, the systems model and the cybernetic model of ...

  1. A Participatory Model for Multi-Document Health Information Summarisation

    Directory of Open Access Journals (Sweden)

    Dinithi Nallaperuma

    2017-03-01

    Full Text Available Increasing availability of and access to health information has produced a paradigm shift in healthcare provision, as it empowers patients and practitioners alike. Beyond awareness, significant time savings and process efficiencies can be achieved through effective summarisation of healthcare information. Relevance and accuracy are key concerns when generating summaries for such documents. Despite advances in automated summarisation approaches, the role of participation has not been explored. In this paper, we propose a new model for multi-document health information summarisation that takes into account the role of participation. The updated IS user participation theory was extended to explicate these roles. The proposed model integrates both extractive and abstractive summarisation processes with continuous participatory inputs to each phase. The model was implemented as a client-server application and evaluated by both domain experts and health information consumers. Results from the evaluation phase indicate that the model succeeds in generating relevant and accurate summaries for diverse audiences.

  2. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model focuses on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, the scarcity of available failure data introduces unknown uncertainties into CCF parameter estimation. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation and to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. First, a Hybrid Bayesian Network is adopted to reveal the relationship between potential causes and failures. Second, because potential causes differ in occurrence frequency and in their ability to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed by explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example illustrates the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels, parameterize the CCF risk significance of possible causes, and update the probability distributions of global α-factors. It also provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. It is recommended to build databases that include CCF parameters and the occurrence frequency of the corresponding causes for each targeted system.
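As background to the record above: the conventional point estimate of a global α-factor is simply the fraction of observed CCF events that involve a given number of components. A minimal illustrative sketch (the event counts are hypothetical, not data from the paper):

```python
def alpha_factors(event_counts):
    """Point estimates of global alpha-factors from CCF event data.

    event_counts[k] is the number of observed events in which exactly
    k+1 components of the group failed together.
    """
    total = sum(event_counts)
    return [n_k / total for n_k in event_counts]

# Hypothetical data for a group of 3 components:
# 40 single failures, 8 double CCF events, 2 triple CCF events.
print(alpha_factors([40, 8, 2]))  # → [0.8, 0.16, 0.04]
```

The α-decomposition approach in the record goes further by attributing these global fractions to individual causes, but this simple ratio is the quantity being decomposed.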

  3. Comparisons of Bitcoin Cryptosystem with Other Common Internet Transaction Systems by AHP Technique

    Directory of Open Access Journals (Sweden)

    Davor Maček

    2017-01-01

    Full Text Available This paper describes a proposed methodology for the evaluation of critical systems and the prioritization of critical risks and assets identified in highly secured information systems. Different types of information assets and security environments require different techniques and methods for prioritization and evaluation. In this article, the VECTOR matrix method for prioritization of critical assets and critical risks is explained and integrated into the AHP (Analytic Hierarchy Process) technique as a set of fixed criteria for the evaluation of defined alternatives. The Bitcoin cryptocurrency was compared and evaluated along with other common Internet transaction systems by information security professionals according to the defined VECTOR criteria. The newly proposed hybrid AHP model is presented with potential case studies for future research. This article attempts to assess the security posture of the Bitcoin cryptocurrency in the context of information security risks related to the most common existing online payment systems, such as e-banking, m-banking, and e-commerce.
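For background on the AHP technique used in this record: alternative priorities are commonly derived as the principal eigenvector of a pairwise-comparison matrix, normalised to sum to one. A minimal sketch with a hypothetical 3-criterion comparison matrix (not the paper's VECTOR criteria or data):

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority vector of an AHP pairwise-comparison matrix
    (principal right eigenvector, normalised to sum to 1)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.argmax(eigvals.real)          # largest eigenvalue
    w = np.abs(eigvecs[:, principal].real)       # its eigenvector
    return w / w.sum()

# Hypothetical judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3; criterion 2 is 2x as important as 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_priorities(A).round(3))
```

In a full AHP analysis a consistency ratio would also be computed from the principal eigenvalue to check that the judgments are coherent before accepting the priorities.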

  4. The common ancestry of life

    Directory of Open Access Journals (Sweden)

    Wolf Yuri I

    2010-11-01

    Full Text Available Abstract Background It is common belief that all cellular life forms on earth have a common origin. This view is supported by the universality of the genetic code and the universal conservation of multiple genes, particularly those that encode key components of the translation system. A remarkable recent study claims to provide a formal, homology-independent test of the Universal Common Ancestry hypothesis by comparing the ability of a common-ancestry model and a multiple-ancestry model to predict sequences of universally conserved proteins. Results We devised a computational experiment on a concatenated alignment of universally conserved proteins which shows that the purported demonstration of universal common ancestry is a trivial consequence of significant sequence similarity between the analyzed proteins. The nature and origin of this similarity are irrelevant for the prediction of "common ancestry" by the model-comparison approach. Thus, homology (common origin) of the compared proteins remains an inference from sequence similarity rather than an independent property demonstrated by the likelihood analysis. Conclusion A formal demonstration of the Universal Common Ancestry hypothesis has not been achieved and is unlikely to be feasible in principle. Nevertheless, the evidence in support of this hypothesis provided by comparative genomics is overwhelming. Reviewers This article was reviewed by William Martin, Ivan Iossifov (nominated by Andrey Rzhetsky) and Arcady Mushegian. For the complete reviews, see the Reviewers' Report section.

  5. Examining the media portrayal of obesity through the lens of the Common Sense Model of Illness Representations.

    Science.gov (United States)

    De Brún, Aoife; McCarthy, Mary; McKenzie, Kenneth; McGloin, Aileen

    2015-01-01

    This study examined the Irish media discourse on obesity by employing the Common Sense Model of Illness Representations. A media sample of 368 transcripts was compiled from newspaper articles (n = 346), radio discussions (n = 5), and online news articles (n = 17) on overweight and obesity from the years 2005, 2007, and 2009. Using the Common Sense Model and framing theory to guide the investigation, a thematic analysis was conducted on the media sample. Analysis revealed that the behavioral dimensions of diet and activity levels were the most commonly cited causes of and interventions in obesity. The advertising industry was blamed for obesity, and there were calls for increased government action to tackle the issue. Physical illness and psychological consequences of obesity were prevalent in the sample, and analysis revealed that the economy, regardless of its state, was blamed for obesity. These results are discussed in terms of expectations of audience understandings of the issue and the implications of these dominant portrayals and framings on public support for interventions. The article also outlines the value of a qualitative analytical framework that combines the Common Sense Model and framing theory in the investigation of illness narratives.

  6. A Multi-Level Model of Information Seeking in the Clinical Domain

    Science.gov (United States)

    Hung, Peter W.; Johnson, Stephen B.; Kaufman, David R.; Mendonça, Eneida A.

    2008-01-01

    Objective: Clinicians often have difficulty translating information needs into effective search strategies to find appropriate answers. Information retrieval systems employing an intelligent search agent that generates adaptive search strategies based on human search expertise could be helpful in meeting clinician information needs. A prerequisite for creating such systems is an information seeking model that facilitates the representation of human search expertise. The purpose of developing such a model is to provide guidance to information seeking system development and to shape an empirical research program. Design: The information seeking process was modeled as a complex problem-solving activity. After considering how similarly complex activities had been modeled in other domains, we determined that modeling context-initiated information seeking across multiple problem spaces allows the abstraction of search knowledge into functionally consistent layers. The knowledge layers were identified in the information science literature and validated through our observations of searches performed by health science librarians. Results: A hierarchical multi-level model of context-initiated information seeking is proposed. Each level represents (1) a problem space that is traversed during the online search process, and (2) a distinct layer of knowledge that is required to execute a successful search. Grand strategy determines what information resources will be searched, for what purpose, and in what order. The strategy level represents an overall approach for searching a single resource. Tactics are individual moves made to further a strategy. Operations are mappings of abstract intentions to information resource-specific concrete input. Assessment is the basis of interaction within the strategic hierarchy, influencing the direction of the search. 
Conclusion: The described multi-level model provides a framework for future research and the foundation for development of an

  7. Millennial Students' Mental Models of Information Retrieval

    Science.gov (United States)

    Holman, Lucy

    2009-01-01

    This qualitative study examines first-year college students' online search habits in order to identify patterns in millennials' mental models of information retrieval. The study employed a combination of modified contextual inquiry and concept mapping methodologies to elicit students' mental models. The researcher confirmed previously observed…

  8. Global scientific research commons under the Nagoya Protocol: Towards a collaborative economy model for the sharing of basic research assets.

    Science.gov (United States)

    Dedeurwaerdere, Tom; Melindi-Ghidi, Paolo; Broggiato, Arianna

    2016-01-01

    This paper aims to get a better understanding of the motivational and transaction cost features of building global scientific research commons, with a view to contributing to the debate on the design of appropriate policy measures under the recently adopted Nagoya Protocol. For this purpose, the paper analyses the results of a world-wide survey of managers and users of microbial culture collections, which focused on the role of social and internalized motivations, organizational networks and external incentives in promoting the public availability of upstream research assets. Overall, the study confirms the hypotheses of the social production model of information and shareable goods, but it also shows the need to complete this model. For the sharing of materials, the underlying collaborative economy in excess capacity plays a key role in addition to the social production, while for data, competitive pressures amongst scientists tend to play a bigger role.

  9. Ruling the Commons. Introducing a new methodology for the analysis of historical commons

    Directory of Open Access Journals (Sweden)

    Tine de Moor

    2016-10-01

    Full Text Available Despite significant progress in recent years, the evolution of commons over the long run remains an under-explored area within commons studies. In recent years an international team of historians has worked under the umbrella of the Common Rules Project to design and test a new methodology aimed at advancing our knowledge of the dynamics of institutions for collective action – in particular, commons. The project aims to contribute to the current debate on commons on three fronts. Theoretically, it explicitly draws attention to issues of change and adaptation in the commons – contrasting with more static analyses. Empirically, it highlights the value of historical records as a rich source of information for longitudinal analysis of the functioning of commons. Methodologically, it develops a systematic way of analyzing and comparing commons' regulations across regions and time, setting a number of variables defined on the basis of the "most common denominators" in commons regulation across countries and time periods. In this paper we introduce the project, describe our sources and methodology, and present the preliminary results of our analysis.

  10. The Common Vision. Reviews: Books.

    Science.gov (United States)

    Chattin-McNichols, John

    1998-01-01

    Reviews Marshak's book describing the work of educators Maria Montessori, Rudolf Steiner, Aurobindo Ghose, and Inayat Khan. Maintains that the book gives clear, concise information on each educator and presents a common vision for children and their education; also maintains that it gives theoretical and practical information and discusses…

  11. An information propagation model considering incomplete reading behavior in microblog

    Science.gov (United States)

    Su, Qiang; Huang, Jiajia; Zhao, Xiande

    2015-02-01

    Microblog is one of the most popular communication channels on the Internet, and has already become the third largest source of news and public opinions in China. Although researchers have studied information propagation in microblogs using epidemic models, previous studies have not considered the incomplete reading behavior among microblog users. Consequently, such models cannot fit real situations well. In this paper, we propose an improved model, Microblog-Susceptible-Infected-Removed (Mb-SIR), for information propagation that explicitly considers users' incomplete reading behavior. We also test the effectiveness of the model using real data from Sina Microblog. We demonstrate that the newly proposed model is more accurate in describing information propagation in microblogs. In addition, we investigate the effects of the critical model parameters, e.g., reading rate, spreading rate, and removed rate, through numerical simulations. The simulation results show that, compared with the other parameters, the reading rate plays the most influential role in information propagation performance in microblogs.
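
    A minimal SIR-style sketch of the idea above, with the reading rate scaling the effective spreading rate. The coupling and all parameter values are illustrative assumptions, not the authors' exact Mb-SIR equations:

```python
def simulate(reading_rate, spreading_rate, removed_rate,
             s0=0.99, i0=0.01, steps=500, dt=0.1):
    """Euler-integrate susceptible/infected/removed fractions; return (s, i, r)."""
    s, i, r = s0, i0, 0.0
    for _ in range(steps):
        # only the fraction of contacts that actually read the post can spread it
        new_i = spreading_rate * reading_rate * s * i * dt
        new_r = removed_rate * i * dt
        s, i, r = s - new_i, i + new_i - new_r, r + new_r
    return s, i, r

s_low_reading = simulate(0.2, 0.8, 0.1)[0]   # few users read what they receive
s_high_reading = simulate(0.9, 0.8, 0.1)[0]  # most users read what they receive
print(s_low_reading > s_high_reading)        # True: more reading, wider spread
```

Varying the reading rate while holding the other rates fixed reproduces the qualitative finding that it dominates the eventual reach of a message.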

  12. Beyond attributions: Understanding public stigma of mental illness with the common sense model.

    Science.gov (United States)

    Mak, Winnie W S; Chong, Eddie S K; Wong, Celia C Y

    2014-03-01

    The present study applied the common sense model (i.e., cause, controllability, timeline, consequences, and illness coherence) to understand public attitudes toward mental illness and help-seeking intention, and to examine the mediating role of perceived controllability between causal attributions and both public attitudes and help seeking. Based on a randomized household sample of 941 Chinese community adults in Hong Kong, results of the structural equation modeling demonstrated that people who endorsed cultural lay beliefs tended to perceive the course of mental illness as less controllable, whereas those with psychosocial attributions saw its course as more controllable. The more people perceived the course of mental illness as uncontrollable, chronic, and incomprehensible, the lower their acceptance and the greater their mental illness stigma. Furthermore, those who perceived mental illness as having dire consequences were more likely to report greater stigma and social distance. Conversely, when people were more accepting, they were more likely to seek help from psychological services and reported a shorter social distance. The common sense model provides a multidimensional framework for understanding the public's mental illness perceptions and stigma. Not only should biopsychosocial determinants of mental illness be advocated to the public, but cultural myths about mental illness must also be debunked.

  13. How Qualitative Methods Can be Used to Inform Model Development.

    Science.gov (United States)

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second, using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches that identifies where problems are occurring and further guidance is needed. It can also be applied within model development to facilitate the input of experts to structural development. We recommend that current and future model development would benefit from the greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  14. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki

    2013-06-21

    Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. © The Author 2013.

  15. A framework for scalable parameter estimation of gene circuit models using structural information

    KAUST Repository

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-01-01

    Motivation: Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Results: Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. © The Author 2013.

  16. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit, or "ignorance", of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Information theory thus provides a way to think about model evaluation that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley, 1990; J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
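
    The ignorance score described above is straightforward to compute; a minimal sketch (the forecast numbers are invented):

```python
import math

# Ignorance (logarithmic score) of a probabilistic forecast: the negative
# log2 of the probability assigned to the outcome that actually occurred,
# in bits. Lower is better; a Kelly gambler's expected log-growth is
# reduced by exactly this quantity.
def ignorance(forecast, outcome):
    return -math.log2(forecast[outcome])

# two hypothetical forecasts of the same binary event
sharp = {"rain": 0.9, "dry": 0.1}
climatology = {"rain": 0.5, "dry": 0.5}

score_sharp = ignorance(sharp, "rain")       # about 0.152 bits
score_clim = ignorance(climatology, "rain")  # exactly 1 bit
print(score_sharp, score_clim)
```

Averaged over many forecast-outcome pairs, the score is exactly the per-event code length needed to compress the outcome sequence, which is the data-compression reading of model skill in the abstract.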

  17. A Compositional Relevance Model for Adaptive Information Retrieval

    Science.gov (United States)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  18. Fisher information and quantum potential well model for finance

    Energy Technology Data Exchange (ETDEWEB)

    Nastasiuk, V.A., E-mail: nasa@i.ua

    2015-09-25

    The probability distribution function (PDF) for prices on financial markets is derived by extremization of Fisher information. It is shown how on that basis the quantum-like description for financial markets arises and different financial market models are mapped by quantum mechanical ones. - Highlights: • The financial Schrödinger equation is derived using the principle of minimum Fisher information. • Statistical models for price variation are mapped by the quantum models of coupled particle. • The model of quantum particle in parabolic potential well corresponds to Efficient market.
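
    A compact sketch of the construction described above (the notation and constraint form are our assumptions, following standard minimum-Fisher-information arguments, not the paper's exact derivation):

```latex
% Fisher information of the price PDF p(x):
I[p] \;=\; \int \frac{\big(p'(x)\big)^2}{p(x)}\,dx ,
% which, under the substitution p(x) = \psi(x)^2, becomes
I[p] \;=\; 4\int \big(\psi'(x)\big)^2\,dx .
% Extremizing I subject to observed constraints
% \int f_n(x)\,p(x)\,dx = F_n (multipliers \lambda_n)
% gives the Euler--Lagrange equation
-4\,\psi''(x) \;-\; \Big(\sum_n \lambda_n f_n(x)\Big)\,\psi(x) \;=\; 0 ,
% a stationary Schr\"odinger-type equation in which the constraint
% functions act as an effective potential.
```

Choosing a quadratic constraint function then yields the parabolic-well (harmonic) case that the highlights associate with the efficient market.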

  19. Fisher information and quantum potential well model for finance

    International Nuclear Information System (INIS)

    Nastasiuk, V.A.

    2015-01-01

    The probability distribution function (PDF) for prices on financial markets is derived by extremization of Fisher information. It is shown how on that basis the quantum-like description for financial markets arises and different financial market models are mapped by quantum mechanical ones. - Highlights: • The financial Schrödinger equation is derived using the principle of minimum Fisher information. • Statistical models for price variation are mapped by the quantum models of coupled particle. • The model of quantum particle in parabolic potential well corresponds to Efficient market

  20. Asset Condition, Information Systems and Decision Models

    CERN Document Server

    Willett, Roger; Brown, Kerry; Mathew, Joseph

    2012-01-01

    Asset Condition, Information Systems and Decision Models, is the second volume of the Engineering Asset Management Review Series. The manuscripts provide examples of implementations of asset information systems as well as some practical applications of condition data for diagnostics and prognostics. The increasing trend is towards prognostics rather than diagnostics, hence the need for assessment and decision models that promote the conversion of condition data into prognostic information to improve life-cycle planning for engineered assets. The research papers included here serve to support the on-going development of Condition Monitoring standards. This volume comprises selected papers from the 1st, 2nd, and 3rd World Congresses on Engineering Asset Management, which were convened under the auspices of ISEAM in collaboration with a number of organisations, including CIEAM Australia, Asset Management Council Australia, BINDT UK, and Chinese Academy of Sciences, Beijing University of Chemical Technology, Chin...

  1. Information Models of Acupuncture Analgesia and Meridian Channels

    Directory of Open Access Journals (Sweden)

    Chang Hua Zou

    2010-12-01

    Full Text Available Acupuncture and meridian channels have been major components of Chinese and Eastern Asian medicine—especially for analgesia—for over 2000 years. In recent decades, electroacupuncture (EA) analgesia has been applied clinically and experimentally. However, there were controversial results between different treatment frequencies, or between the active and the placebo treatments; and the mechanisms of the treatments and the related meridian channels are still unknown. In this study, we propose a new term of infophysics therapy and develop information models of acupuncture (or EA) analgesia and meridian channels, to understand the mechanisms and to explain the controversial results, based on Western theories of information, trigonometry and Fourier series, and physics, as well as published biomedical data. We are trying to build a bridge between Chinese medicine and Western medicine by investigating the Eastern acupuncture analgesia and meridian channels with Western sciences; we model the meridians as a physiological system that is mostly constructed with interstices in or between other physiological systems; we consider frequencies, amplitudes and wave numbers of electric field intensity (EFI) as information data. Our modeling results demonstrate that information regulated with acupuncture (or EA) is different from pain information, we provide answers to explain the controversial published results, and suggest that mechanisms of acupuncture (or EA) analgesia could be mostly involved in information regulation of frequencies and amplitudes of EFI as well as neuronal transmitters such as endorphins.

  2. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation.
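
    One of the random-walk processes such generalized models build on is mass diffusion (often called ProbS) on the user-item bipartite network; a minimal sketch with an invented toy ratings matrix:

```python
# Two-step random-walk resource spreading on a user-item bipartite network.
# adj[u][i] = 1 if user u collected item i. The toy matrix is illustrative.
def mass_diffusion(adj, target_user):
    users, items = len(adj), len(adj[0])
    item_deg = [sum(adj[u][i] for u in range(users)) for i in range(items)]
    user_deg = [sum(row) for row in adj]
    # step 1: each item the target collected spreads unit resource to its users
    user_res = [sum(adj[u][i] / item_deg[i]
                    for i in range(items) if adj[target_user][i])
                for u in range(users)]
    # step 2: each user redistributes the received resource to their items
    item_res = [sum(user_res[u] * adj[u][i] / user_deg[u]
                    for u in range(users)) for i in range(items)]
    # recommend uncollected items, ranked by final resource
    return sorted((i for i in range(items) if not adj[target_user][i]),
                  key=lambda i: -item_res[i])

adj = [[1, 1, 0, 0],   # user 0
       [1, 0, 1, 0],   # user 1
       [0, 1, 1, 1]]   # user 2
print(mass_diffusion(adj, 0))  # [2, 3]: item 2 ranked above item 3
```

Reweighting the two spreading steps by powers of the item and user degrees is the kind of degree-information hybrid the abstract generalizes over.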

  3. THE MODEL FOR RISK ASSESSMENT ERP-SYSTEMS INFORMATION SECURITY

    Directory of Open Access Journals (Sweden)

    V. S. Oladko

    2016-12-01

    Full Text Available The article deals with the problem of assessing information security risks in ERP systems. ERP-system functions and architecture are studied. A model of malicious impacts on the levels of the ERP-system architecture is composed. A model-based risk assessment, combining quantitative and qualitative approaches, is developed on the basis of a partial unification of three methods for studying information security risks: security models with full overlapping, the CRAMM technique, and the FRAP technique.

  4. A common type system for clinical natural language processing.

    Science.gov (United States)

    Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G

    2013-01-03

    One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than the surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.

  5. How innovation commons contribute to discovering and developing new technologies

    Directory of Open Access Journals (Sweden)

    Darcy W.E. Allen

    2016-09-01

    Full Text Available In modern economics, the institutions surrounding the creation and development of new technologies are firms, markets and governments. We propose an alternative theory that locates the institutional origin of new technologies further back in the commons when self-organizing groups of technology enthusiasts develop effective governance rules to pool distributed information resources. The ‘innovation commons’ alleviates uncertainty around a nascent technology by pooling distributed information about uses, costs, problems and opportunities. While innovation commons are mostly temporary, because the resource itself – the information about opportunities – is only temporarily valuable, they are a further addition to the Pantheon of commons, and suggest that the institutions of the commons – and the common pool resource of information about applications of the technology – may be far more important in the study of innovation than previously thought.

  6. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second is based on mutual information, which provides a general measure of the strength of non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to uncertainty in its input parameters.
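
    The mutual-information half of such an analysis can be sketched as follows (the toy model standing in for SWMM, the binning scheme, and the sample size are all illustrative assumptions):

```python
import math
import random

def lhs_sample(n):
    """One-dimensional Latin Hypercube sample on [0, 1): one point per stratum."""
    cells = list(range(n))
    random.shuffle(cells)
    return [(c + random.random()) / n for c in cells]

def mutual_info(x, y, bins=8):
    """Histogram (plug-in) estimate of I(X; Y) in bits."""
    n = len(x)
    def binned(v):
        lo, hi = min(v), max(v)
        return [min(int((u - lo) / (hi - lo) * bins), bins - 1) for u in v]
    bx, by = binned(x), binned(y)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0.0) + 1.0 / n
        px[a] = px.get(a, 0.0) + 1.0 / n
        py[b] = py.get(b, 0.0) + 1.0 / n
    return sum(p * math.log2(p / (px[a] * py[b])) for (a, b), p in pxy.items())

random.seed(42)
n = 2000
p1, p2 = lhs_sample(n), lhs_sample(n)                # two sampled parameters
output = [10.0 * a + 0.1 * b for a, b in zip(p1, p2)]  # toy model: p1 dominates
mi1, mi2 = mutual_info(p1, output), mutual_info(p2, output)
print(mi1 > mi2)  # the dominant parameter carries far more information
```

Because the same LHS design is reused, the PRCC half of the analysis would run on exactly the identical (p1, p2, output) samples, as the abstract notes.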

  7. Building a world class information security governance model

    CSIR Research Space (South Africa)

    Lessing, M

    2008-05-01

    Full Text Available practice documents. The resulting model covers all the relevant aspects on strategic, management and technical level when implemented altogether. This model includes the related aspects of Corporate Governance, Information Technology Governance...

  8. A spread willingness computing-based information dissemination model.

    Science.gov (United States)

    Huang, Haojing; Cui, Zhiming; Zhang, Shukui

    2014-01-01

    This paper constructs a spread-willingness-computing-based information dissemination model for social networks. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes the dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors that influence a user's spreading behavior: the strength of the relationship between the nodes, views identity, and frequency of contact. Simulation results show that nodes of different degrees exhibit the same trend in the network, and even if the degree of a node is very small, there is a likelihood of a large area of information dissemination. The weaker the relationship between nodes, the higher the probability of views selection and the higher the frequency of contact with information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. These findings match social networking features and can help in mastering user behavior and in understanding and analyzing the characteristics of information dissemination in social networks.

  9. Building Information Modeling for Managing Design and Construction

    DEFF Research Database (Denmark)

    Berard, Ole Bengt

    Contractors planning and executing construction work encounter many kinds of problems with design information, such as uncoordinated drawings and specifications, missing relevant information, and late delivery of design information. Research has shown that missing design information and unintended … outcome of construction work. Even though contractors regularly encounter design information problems, these issues are accepted as a condition of doing business, and better design information has yet to be defined. Building information modeling has the inherent promise of improving the quality of design … information for work tasks. * Amount of Information – the number of documents, files, and other media should be appropriate for the scope. … The criteria were identified by empirical studies and theory on information quality in the architectural, engineering and construction (AEC) industry and other fields …

  10. Safety Case Development as an Information Modelling Problem

    Science.gov (United States)

    Lewis, Robert

    This paper considers the benefits from applying information modelling as the basis for creating an electronically-based safety case. It highlights the current difficulties of developing and managing large document-based safety cases for complex systems such as those found in Air Traffic Control systems. After a review of current tools and related literature on this subject, the paper proceeds to examine the many relationships between entities that can exist within a large safety case. The paper considers the benefits to both safety case writers and readers from the future development of an ideal safety case tool that is able to exploit these information models. The paper also introduces the idea that the safety case has formal relationships between entities that directly support the safety case argument using a methodology such as GSN, and informal relationships that provide links to direct and backing evidence and to supporting information.

  11. A Comprehensive Model of Cancer-Related Information Seeking Applied to Magazines.

    Science.gov (United States)

    Johnson, J. David; Meischke, Hendrika

    1993-01-01

    Examines a comprehensive model of information seeking resulting from the synthesis of three theoretical research streams: the health belief model, uses and gratifications research, and a model of media exposure. Suggests that models of information seeking from mass media should focus on purely communicative factors. (RS)

  12. Preservice Secondary Teachers' Conceptions from a Mathematical Modeling Activity and Connections to the Common Core State Standards

    Science.gov (United States)

    Stohlmann, Micah; Maiorca, Cathrine; Olson, Travis A.

    2015-01-01

    Mathematical modeling is an essential integrated piece of the Common Core State Standards. However, researchers have shown that mathematical modeling activities can be difficult for teachers to implement. Teachers are more likely to implement mathematical modeling activities if they have their own successful experiences with such activities. This…

  13. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and explained 46–74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion, which suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
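
A minimal information-theoretic proxy for melodic complexity, the Shannon entropy of the melodic-interval distribution, illustrates the redundancy idea. This is a generic sketch, not one of the models refined in the study:

```python
from collections import Counter
from math import log2

def interval_entropy(pitches):
    """Shannon entropy (bits) of the melodic-interval distribution.

    Lower entropy = more redundancy = lower complexity under a
    simple information-theoretic account."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    counts = Counter(intervals)
    n = len(intervals)
    return -sum((c / n) * log2(c / n) for c in counts.values())

repetitive = [60, 62, 60, 62, 60, 62, 60, 62]   # alternating two notes (MIDI)
varied     = [60, 64, 61, 67, 59, 66, 62, 70]   # wide, unpredictable leaps

print(interval_entropy(repetitive))  # low: only two interval types occur
print(interval_entropy(varied))      # high: every interval is distinct
```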

  14. Common and Critical Components Among Community Health Assessment and Community Health Improvement Planning Models.

    Science.gov (United States)

    Pennel, Cara L; Burdine, James N; Prochaska, John D; McLeroy, Kenneth R

    Community health assessment and community health improvement planning are continuous, systematic processes for assessing and addressing health needs in a community. Since there are different models to guide assessment and planning, as well as a variety of organizations and agencies that carry out these activities, there may be confusion in choosing among approaches. By examining the various components of the different assessment and planning models, we are able to identify areas for coordination, ways to maximize collaboration, and strategies to further improve community health. We identified 11 common assessment and planning components across 18 models and requirements, with a particular focus on health department, health system, and hospital models and requirements. These common components included preplanning; developing partnerships; developing vision and scope; collecting, analyzing, and interpreting data; identifying community assets; identifying priorities; developing and implementing an intervention plan; developing and implementing an evaluation plan; communicating and receiving feedback on the assessment findings and/or the plan; planning for sustainability; and celebrating success. Within several of these components, we discuss characteristics that are critical to improving community health. Practice implications include better understanding of different models and requirements by health departments, hospitals, and others involved in assessment and planning to improve cross-sector collaboration, collective impact, and community health. In addition, federal and state policy and accreditation requirements may be revised or implemented to better facilitate assessment and planning collaboration between health departments, hospitals, and others for the purpose of improving community health.

  15. Management of information in development projects – a proposed integrated model

    Directory of Open Access Journals (Sweden)

    C. Bester

    2008-11-01

    The first section of the article focuses on the need for development in Africa and the specific challenges of development operations. It describes the need for a holistic and integrated information management model, as part of the project management body of knowledge, aimed at managing the information flow between communities and development project teams. It is argued that information, and access to information, is crucial in development projects and can therefore be seen as a critical success factor in any development project. The second section of the article describes the three information areas of the holistic and integrated information management model. The final section suggests roles and actions for information managers to facilitate the information processes integral to the model. These processes seek to create a developing information community that aligns itself with the development project, and supports and sustains it.

  16. High Level Information Fusion (HLIF) with nested fusion loops

    Science.gov (United States)

    Woodley, Robert; Gosnell, Michael; Fischer, Amber

    2013-05-01

    Situation modeling and threat prediction require higher levels of data fusion in order to provide actionable information. Beyond the sensor data and sources the analyst has access to, the use of out-sourced and re-sourced data is becoming common. Through the years, some common frameworks have emerged for dealing with information fusion—perhaps the most ubiquitous being the JDL Data Fusion Group and their initial 4-level data fusion model. Since these initial developments, numerous models of information fusion have emerged, hoping to better capture the human-centric process of data analyses within a machine-centric framework. 21st Century Systems, Inc. has developed Fusion with Uncertainty Reasoning using Nested Assessment Characterizer Elements (FURNACE) to address challenges of high level information fusion and handle bias, ambiguity, and uncertainty (BAU) for Situation Modeling, Threat Modeling, and Threat Prediction. It combines JDL fusion levels with nested fusion loops and state-of-the-art data reasoning. Initial research has shown that FURNACE is able to reduce BAU and improve the fusion process by allowing high level information fusion (HLIF) to affect lower levels without the double counting of information or other biasing issues. The initial FURNACE project was focused on the underlying algorithms to produce a fusion system able to handle BAU and repurposed data in a cohesive manner. FURNACE supports analyst's efforts to develop situation models, threat models, and threat predictions to increase situational awareness of the battlespace. FURNACE will not only revolutionize the military intelligence realm, but also benefit the larger homeland defense, law enforcement, and business intelligence markets.

  17. An information search model for online social Networks - MOBIRSE

    Directory of Open Access Journals (Sweden)

    Miguel Angel Niño Zambrano

    2015-09-01

    Online Social Networks (OSNs) have gained great importance among Internet users in recent years. These are sites where it is possible to meet people and to publish and share content in a way that is both easy and free of charge. As a result, the volume of information contained in these websites has grown exponentially, and web search has consequently become an important tool for users to easily find information relevant to their social networking objectives. Making use of ontologies and user profiles can make these searches more effective. This article presents a model for information retrieval in OSNs (MOBIRSE), based on user profiles and ontologies, which aims to improve the relevance of information retrieved on these websites. The social network Facebook was chosen as the case study and the instance of the proposed model. The model was validated using measures such as precision at k and the Kappa statistic to assess its efficiency.
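
Precision at k, one of the validation measures mentioned, is straightforward to compute; the ranking and document IDs below are made up for illustration:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

ranking = ["d3", "d1", "d7", "d2", "d9"]   # system output, best first
relevant = {"d1", "d2", "d4"}              # judged relevant set

print(precision_at_k(ranking, relevant, 3))  # 1 of the top 3 is relevant
print(precision_at_k(ranking, relevant, 5))  # 2 of the top 5 are relevant
```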

  18. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
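
A statistical language model for retrieval, in its simplest query-likelihood form with Jelinek-Mercer smoothing, can be sketched as follows. The toy documents are invented, and this is the generic approach rather than necessarily the exact model introduced in the article:

```python
from collections import Counter
from math import log

def lm_score(query, doc, collection, lam=0.5):
    """log P(query | doc) with Jelinek-Mercer smoothing:
    P(t|d) = (1-lam)*tf(t,d)/|d| + lam*cf(t)/|C|."""
    dtf = Counter(doc)
    ctf = Counter(collection)
    dlen, clen = len(doc), len(collection)
    score = 0.0
    for t in query:
        p = (1 - lam) * dtf[t] / dlen + lam * ctf[t] / clen
        score += log(p) if p > 0 else float("-inf")
    return score

docs = [
    "the language model ranks documents by query likelihood".split(),
    "the weather today is sunny with a light breeze".split(),
]
collection = [t for d in docs for t in d]   # background collection model
query = "language model query".split()

scores = [lm_score(query, d, collection) for d in docs]
print(scores)  # the document sharing query terms scores higher
```

Smoothing with the collection model is what lets documents missing a query term still receive a finite score, which is the key practical trick of these models.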

  19. Study on a Threat-Countermeasure Model Based on International Standard Information

    Directory of Open Access Journals (Sweden)

    Guillermo Horacio Ramirez Caceres

    2008-12-01

    Many international standards exist in the field of IT security. This research is based on the ISO/IEC 15408, 15446, 19791, 13335 and 17799 standards. In this paper, we propose a knowledge base comprising a threat-countermeasure model, based on international standards, for identifying and specifying threats that affect IT environments. In addition, the proposed knowledge base system aims at fusing similar security control policies and objectives in order to create effective security guidelines for specific IT environments. As a result, a knowledge base of security objectives was developed on the basis of the relationships inside the standards as well as the relationships between different standards. In addition, a web application was developed which displays details about the most common threats to information systems and, for each threat, presents a set of related security control policies from different international standards, including ISO/IEC 27002.

  20. INFORMATIONAL MODEL OF MENTAL ROTATION OF FIGURES

    Directory of Open Access Journals (Sweden)

    V. A. Lyakhovetskiy

    2016-01-01

    Subject of Study. The subject of research is the information structure of the internal representations of objects, and the operations over them, used by humans to solve the problem of mental rotation of figures. To analyze this information structure we considered not only the classical dependencies of correct answers on the angle of rotation, but also other dependencies obtained recently in cognitive psychology. Method. The language of technical computing Matlab R2010b was used for developing the information model of mental rotation of figures. Model parameters such as the number of bits in the internal representation, the error probability in a single bit, the discrete rotation angle, the comparison threshold, and the degree of difference during rotation can be changed. Main Results. The model qualitatively reproduces psychological dependencies such as the linear increase of the time of correct answers and of the number of errors with the angle of rotation for identical figures, and the "flat" dependence of the time of correct answers and the number of errors on the angle of rotation for mirrored figures. The simulation results suggest that mental rotation is an iterative process of finding a match between two figures, each step of which can significantly distort the internal representation of the stored objects. Matching is carried out over internal representations that have no high invariance to rotation angle. Practical Significance. The results may be useful for understanding the role of learning (including supervised learning) in the development of effective information representations and operations on them in artificial intelligence systems.
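
The iterative-matching account suggested by the model can be caricatured in code: an internal bit representation is rotated in discrete steps, and each step corrupts bits with some probability, so error rates grow with rotation angle. Everything here (bit width, step size, flip probability) is an illustrative assumption, not the paper's parameterization:

```python
import random

def mentally_rotate(bits, angle, step=30, p_flip=0.02, rng=None):
    """Rotate an internal bit representation in `step`-degree increments;
    each increment independently corrupts each bit with probability p_flip."""
    rng = rng or random.Random()
    rep = list(bits)
    for _ in range(angle // step):
        rep = [b ^ (rng.random() < p_flip) for b in rep]
    return rep

def match_error_rate(angle, trials=2000, n_bits=32, seed=7):
    """Fraction of trials in which the rotated representation no longer
    matches the target exactly (a crude 'wrong judgement' criterion)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        target = [rng.randint(0, 1) for _ in range(n_bits)]
        rotated = mentally_rotate(target, angle, rng=rng)
        if rotated != target:
            errors += 1
    return errors / trials

print(match_error_rate(0))    # no rotation steps, no corruption
print(match_error_rate(60))
print(match_error_rate(180))  # more steps, more accumulated distortion
```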

  1. Report on the Regulatory Experience of Risk-Informed In-service Inspection of Nuclear Power Plant Components and Common Views (consensus document)

    International Nuclear Information System (INIS)

    2004-08-01

    The present report represents the work product of the activities conducted by the Task Force. The TF performed a review and inventory of the existing approaches to risk-informed inservice inspection and testing, and completed its work in 1999 with a Current Practices Document 2, titled Report on risk-informed in-service inspection and in-service testing (EUR 19153 EN). In November 2001, the NRWG held a Special session on risk-informed applications, with emphasis on risk-informed inservice inspection, where results and experiences from pilot studies on risk-informed inservice inspection (RI-ISI), performed in several European countries, were presented and discussed. As a follow-up in May 2002, the TF was reconvened with the objectives to analyse from the regulatory point of view key aspects associated with the application of risk-informed inservice inspection, and to go beyond a state of the art report, presenting a series of recommendations of good practices or common positions reached by the regulators represented in the Task Force. (author)

  2. A Reference Architecture for Space Information Management

    Science.gov (United States)

    Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.

    2006-01-01

    We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
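
The guiding principle, keeping the information model as separately managed data that generic system code interprets, can be illustrated with a toy validator. The model contents and field names below are invented, not drawn from the reference architecture:

```python
# The information model is pure data: it can be versioned, distributed,
# and evolved independently of the system code that interprets it.
OBSERVATION_MODEL = {
    "name": "Observation",
    "fields": {
        "target":     {"type": "str",   "required": True},
        "exposure_s": {"type": "float", "required": True},
        "notes":      {"type": "str",   "required": False},
    },
}

def validate(record, model):
    """Generic interpreter: checks any record against any information model."""
    errors = []
    for field, spec in model["fields"].items():
        if field not in record:
            if spec["required"]:
                errors.append(f"missing required field: {field}")
            continue
        expected = {"str": str, "float": float}[spec["type"]]
        if not isinstance(record[field], expected):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

print(validate({"target": "Mars", "exposure_s": 1.5}, OBSERVATION_MODEL))
print(validate({"target": "Mars"}, OBSERVATION_MODEL))
```

Changing the model (adding a field, tightening a type) requires no change to `validate`, which is the independence property the architecture argues for.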

  3. Research on new information service model of the contemporary library

    International Nuclear Information System (INIS)

    Xin Pingping; Lu Yan

    2010-01-01

    With the development of the Internet and multimedia technology, information services in the contemporary library comprise both traditional and digital information services. Libraries in every country do their best to integrate voluminous information and complex technology in back-office management, while making the front-end interface ever more convenient for users. The essential characteristics of the information service of the contemporary library are integration and human-centredness. In this article, we describe in detail several new information service models of the contemporary library, such as individualized service, reference service and strategic information service. (authors)

  4. The Protein Model Portal--a comprehensive resource for protein structure and model information.

    Science.gov (United States)

    Haas, Juergen; Roth, Steven; Arnold, Konstantin; Kiefer, Florian; Schmidt, Tobias; Bordoli, Lorenza; Schwede, Torsten

    2013-01-01

    The Protein Model Portal (PMP) has been developed to foster effective use of 3D molecular models in biomedical research by providing convenient and comprehensive access to structural information for proteins. Both experimental structures and theoretical models for a given protein can be searched simultaneously and analyzed for structural variability. By providing a comprehensive view on structural information, PMP offers the opportunity to apply consistent assessment and validation criteria to the complete set of structural models available for proteins. PMP is an open project so that new methods developed by the community can contribute to PMP, for example, new modeling servers for creating homology models and model quality estimation servers for model validation. The accuracy of participating modeling servers is continuously evaluated by the Continuous Automated Model EvaluatiOn (CAMEO) project. The PMP offers a unique interface to visualize structural coverage of a protein combining both theoretical models and experimental structures, allowing straightforward assessment of the model quality and hence their utility. The portal is updated regularly and actively developed to include latest methods in the field of computational structural biology. Database URL: http://www.proteinmodelportal.org.

  6. The Short Personality Inventory for DSM-5 and Its Conjoined Structure with the Common Five-Factor Model

    Science.gov (United States)

    Kajonius, Petri J.

    2017-01-01

    Research is currently testing how the new maladaptive personality inventory for DSM (PID-5) and the well-established common Five-Factor Model (FFM) together can serve as an empirical and theoretical foundation for clinical psychology. The present study investigated the official short version of the PID-5 together with a common short version of…

  7. Selecting among five common modelling approaches for integrated environmental assessment and management

    NARCIS (Netherlands)

    Kelly, Rebecca A.; Jakeman, Anthony J.; Barreteau, Olivier; Borsuk, Mark E.; El-Sawah, Sondoss; Hamilton, Serena H.; Henriksen, Hans Jorgen; Kuikka, Sakari; Maier, Holger R.; Rizzoli, Andrea Emilio; van Delden, H.; Voinov, A.

    2013-01-01

    The design and implementation of effective environmental policies need to be informed by a holistic understanding of the system processes (biophysical, social and economic), their complex interactions, and how they respond to various changes. Models, integrating different system processes into a

  8. Towards dynamic reference information models: Readiness for ICT mass customisation

    NARCIS (Netherlands)

    Verdouw, C.N.; Beulens, A.J.M.; Trienekens, J.H.; Verwaart, D.

    2010-01-01

    Current dynamic demand-driven networks make great demands on, in particular, the interoperability and agility of information systems. This paper investigates how reference information models can be used to meet these demands by enhancing ICT mass customisation. It was found that reference models for

  9. Dynamic information architecture system (DIAS) : multiple model simulation management

    International Nuclear Information System (INIS)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-01-01

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations in a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities comprising the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstracting the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct or scenario for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object through which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers can schedule other events, and can create or remove Entities from the …
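
The Entity/Parameter/Aspect separation and the event-driven Simulation Manager described above can be sketched roughly as follows; the class names mirror the abstract, but the code and the plug-in model are invented for illustration and are not DIAS code:

```python
import heapq

class Entity:
    """Domain entity: state lives in parameters, behavior in pluggable aspects."""
    def __init__(self, name, **parameters):
        self.name = name
        self.parameters = dict(parameters)
        self.aspects = {}  # aspect name -> model (any callable) implementing it

    def register(self, aspect_name, model):
        self.aspects[aspect_name] = model

    def behave(self, aspect_name, manager):
        self.aspects[aspect_name](self, manager)

class SimulationManager:
    """Discrete-event scheduler: all model interaction flows through events."""
    def __init__(self):
        self.queue = []
        self.time = 0.0

    def schedule(self, time, entity, aspect_name):
        heapq.heappush(self.queue, (time, entity.name, entity, aspect_name))

    def run(self):
        while self.queue:
            self.time, _, entity, aspect = heapq.heappop(self.queue)
            entity.behave(aspect, self)

# A plug-and-play "model": any callable can drive an entity's behavior,
# so a legacy code wrapper could be swapped in without changing Entity.
def evaporation_model(entity, manager):
    entity.parameters["water_mm"] *= 0.9        # lose 10% per event
    if entity.parameters["water_mm"] > 1.0:     # handlers may schedule events
        manager.schedule(manager.time + 1.0, entity, "evaporate")

watershed = Entity("watershed", water_mm=10.0)
watershed.register("evaporate", evaporation_model)

mgr = SimulationManager()
mgr.schedule(0.0, watershed, "evaporate")
mgr.run()
print(watershed.parameters["water_mm"])
```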

  10. Proposing a Metaliteracy Model to Redefine Information Literacy

    Science.gov (United States)

    Jacobson, Trudi E.; Mackey, Thomas P.

    2013-01-01

    Metaliteracy is envisioned as a comprehensive model for information literacy to advance critical thinking and reflection in social media, open learning settings, and online communities. At this critical time in higher education, an expansion of the original definition of information literacy is required to include the interactive production and…

  11. Implementation of the common phrase index method on the phrase query for information retrieval

    Science.gov (United States)

    Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah

    2017-08-01

    With the development of technology, finding information in news text has become easy, because news text is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query, and their positions, obviously affect the relevance of the documents produced; as a result, the accuracy of the information obtained is also affected. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term-weight calculation, and cosine-similarity calculation; it then displays the document search results in order of cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used for the evaluation stage: first, determining the relevant documents using the kappa statistic; second, determining the system success rate using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgements were suitable for system evaluation. The calculation of precision, recall, and F-measure produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From this result it can be said that the success rate of the system in producing relevant documents is low.
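
The evaluation measures used here are standard and easy to reproduce. F-measure is the harmonic mean of precision and recall, so the paper's reported precision of 0.37 and recall of 0.50 do give roughly 0.43; the judge labels below are fabricated to illustrate Cohen's kappa:

```python
def precision_recall_f(tp, fp, fn):
    """Standard set-based retrieval effectiveness measures."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

def cohen_kappa(a, b):
    """Chance-corrected agreement between two judges' relevance labels."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Harmonic mean of the paper's reported precision and recall.
p, r = 0.37, 0.50
print(round(2 * p * r / (p + r), 2))  # 0.43, matching the reported F-measure

judge_a = [1, 1, 0, 1, 0, 0, 1, 0]    # fabricated relevance judgements
judge_b = [1, 1, 0, 0, 0, 1, 1, 0]
print(cohen_kappa(judge_a, judge_b))
```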

  12. User-Oriented and Cognitive Models of Information Retrieval

    DEFF Research Database (Denmark)

    Skov, Mette; Järvelin, Kalervo; Ingwersen, Peter

    2018-01-01

    The domain of user-oriented and cognitive information retrieval (IR) is first discussed, followed by a discussion of the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models with different emphases on user-oriented and cognitive IR are presented, ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models.

  13. A framework for scalable parameter estimation of gene circuit models using structural information.

    Science.gov (United States)

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is a key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches exploiting domain-specific information may be a key to reverse engineering of complex biological systems. Availability: http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
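
The decomposition idea, integrating each rate equation separately against the other species' trajectory from the previous iteration and then refining, can be sketched on a toy two-equation system. The equations and constants are illustrative, not a gene circuit model from the paper:

```python
def integrate_decoupled(k1, k2, x0, y0, dt=0.001, steps=1000, sweeps=8):
    """Iteratively integrate dx/dt = -k1*x*y and dy/dt = k1*x*y - k2*y
    one equation at a time, holding the other species' trajectory fixed
    from the previous sweep, then refining (a Picard-style iteration)."""
    n = steps + 1
    xs = [x0] * n
    ys = [y0] * n
    for _ in range(sweeps):
        new_xs = [x0]                      # integrate x against current y path
        for i in range(steps):
            new_xs.append(new_xs[-1] + dt * (-k1 * new_xs[-1] * ys[i]))
        new_ys = [y0]                      # integrate y against refreshed x path
        for i in range(steps):
            new_ys.append(new_ys[-1] + dt * (k1 * new_xs[i] - k2) * new_ys[-1])
        xs, ys = new_xs, new_ys
    return xs, ys

def integrate_coupled(k1, k2, x0, y0, dt=0.001, steps=1000):
    """Reference: fully coupled forward-Euler integration."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        x, y = x + dt * (-k1 * x * y), y + dt * (k1 * x * y - k2 * y)
        xs.append(x)
        ys.append(y)
    return xs, ys

xd, yd = integrate_decoupled(2.0, 1.0, 1.0, 0.1)
xc, yc = integrate_coupled(2.0, 1.0, 1.0, 0.1)
print(abs(xd[-1] - xc[-1]))  # sweeps converge toward the coupled solution
```

At the fixed point of the sweeps, the decoupled updates coincide with the coupled Euler scheme, which is why the iteration recovers the coupled trajectory.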

  16. Models, Metaphors and Symbols for Information and Knowledge Systems

    Directory of Open Access Journals (Sweden)

    David Williams

    2014-01-01

    Full Text Available A literature search indicates that Data, Information and Knowledge continue to be placed into a hierarchical construct where it is considered that information is more valuable than data and that information can be processed into becoming precious knowledge. Wisdom continues to be added to the model to further confuse the issue. This model constrains our ability to think more logically about how and why we develop knowledge management systems to support and enhance knowledge- intensive processes, tasks or projects. This paper seeks to summarise development of the Data-Information-Knowledge-Wisdom hierarchy, explore the extensive criticism of it and present a more logical (and accurate construct for the elements of intellectual capital when developing and managing Knowledge Management Systems.

  17. Learning classification models with soft-label information.

    Science.gov (United States)

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin induced thrombocytopenia. The experiments are conducted on the data of 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
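    The paper's own estimators are not reproduced in this record. As a hypothetical sketch of the probabilistic/numeric-label idea, the snippet below fits a one-feature logistic model by gradient descent against soft labels in [0, 1] (expert confidence) instead of hard {0, 1} classes; the data and hyperparameters are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_soft_labels(xs, qs, lr=0.5, epochs=2000):
    """Gradient descent on cross-entropy H(q, p), where q are soft labels
    in [0, 1] rather than hard classes.  The gradient of cross-entropy
    w.r.t. the logit is simply (p - q), so soft labels drop in directly."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, q in zip(xs, qs):
            p = sigmoid(w * x + b)
            err = p - q
            gw += err * x / n
            gb += err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy data: the expert is only "60% sure" about the borderline case x = 0
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
qs = [0.05, 0.20, 0.60, 0.85, 0.95]
w, b = fit_soft_labels(xs, qs)
pred_mid = sigmoid(b)  # the model's probability at the borderline case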

  18. Metabolic Modeling of Common Escherichia coli Strains in Human Gut Microbiome

    Directory of Open Access Journals (Sweden)

    Yue-Dong Gao

    2014-01-01

    Full Text Available The recent high-throughput sequencing has enabled the composition of Escherichia coli strains in the human microbial community to be profiled en masse. However, there are two challenges to address: (1) exploring the genetic differences between E. coli strains in the human gut and (2) dynamic responses of E. coli to diverse stress conditions. As a result, we investigated the E. coli strains in the human gut microbiome using deep sequencing data and reconstructed genome-wide metabolic networks for the three most common E. coli strains, including E. coli HS, UTI89, and CFT073. The metabolic models show obvious strain-specific characteristics, both in network contents and in behaviors. We predicted optimal biomass production for the three models on four different carbon sources (acetate, ethanol, glucose, and succinate) and found that these stress-associated genes were involved in host-microbial interactions and increased in human obesity. In addition, the growth rates are similar among the models, but the flux distributions are different, even in E. coli core reactions. The correlations between human diabetes-associated metabolic reactions in the E. coli models were also predicted. The study provides a systems perspective on E. coli strains in the human gut microbiome and will be helpful in integrating diverse data sources in the following study.

  19. An Object-Oriented Information Model for Policy-based Management of Distributed Applications

    NARCIS (Netherlands)

    Diaz, G.; Gay, V.C.J.; Horlait, E.; Hamza, M.H.

    2002-01-01

    This paper presents an object-oriented information model to support policy-based management of distributed multimedia applications. The information base contains application-level information about the users, the applications, and their profile. Our information model is described in detail and

  20. Integrated Modeling Approach for the Development of Climate-Informed, Actionable Information

    Directory of Open Access Journals (Sweden)

    David R. Judi

    2018-06-01

    Full Text Available Flooding is a prevalent natural disaster with both short- and long-term social, economic, and infrastructure impacts. Changes in intensity and frequency of precipitation (including rain, snow, and rain-on-snow events) create challenges for the planning and management of resilient infrastructure and communities. While there is general acknowledgment that new infrastructure design should account for future climate change, no clear methods or actionable information are available to community planners and designers to ensure resilient designs considering an uncertain climate future. This research demonstrates an approach for an integrated, multi-model, and multi-scale simulation to evaluate future flood impacts. This research used regional climate projections to drive high-resolution hydrology and flood models to evaluate social, economic, and infrastructure resilience for the Snohomish Watershed, WA, USA. Using the proposed integrated modeling approach, the peaks of precipitation and streamflows were found to shift from spring and summer to the earlier winter season. Moreover, clear non-stationarities in future flood risk were discovered under various climate scenarios. This research provides a clear approach for the incorporation of climate science in flood resilience analysis and also provides actionable information on the frequency and intensity of future precipitation events.

  1. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

    Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are under deployment, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application to optimize electricity demand by curtailing/shifting power load when peak load occurs. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources with the growth of the Smart Grid information space. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor building heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of the diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show that the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.

  2. Ensuring consistency and persistence to the Quality Information Model - The role of the GeoViQua Broker

    Science.gov (United States)

    Bigagli, Lorenzo; Papeschi, Fabrizio; Nativi, Stefano; Bastin, Lucy; Masó, Joan

    2013-04-01

    GeoViQua (QUAlity aware VIsualisation for the Global Earth Observation System of Systems) is an FP7 project aiming at complementing the Global Earth Observation System of Systems (GEOSS) with rigorous data quality specifications and quality-aware capabilities, in order to improve reliability in scientific studies and policy decision-making. GeoViQua main scientific and technical objective is to enhance the GEOSS Common Infrastructure (GCI) providing the user community with innovative quality-aware search and visualization tools, which will be integrated in the GEOPortal, as well as made available to other end-user interfaces. To this end, GeoViQua will promote the extension of the current standard metadata for geographic information with accurate and expressive quality indicators. The project will also contribute to the definition of a quality label, the GEOLabel, reflecting scientific relevance, quality, acceptance and societal needs. The concept of Quality Information is very broad. When talking about the quality of a product, this is not limited to geophysical quality but also includes concepts like mission quality (e.g. data coverage with respect to planning). In general, it provides an indication of the overall fitness for use of a specific type of product. Employing and extending several ISO standards such as 19115, 19157 and 19139, a common set of data quality indicators has been selected to be used within the project. The resulting work, in the form of a data model, is expressed in XML Schema Language and encoded in XML. Quality information can be stated both by data producers and by data users, actually resulting in two conceptually distinct data models, the Producer Quality model and the User Quality model (or User Feedback model). A very important issue concerns the association between the quality reports and the affected products that are target of the report. 
This association is usually achieved by means of a Product Identifier (PID), but actually just

  3. Modeling information diffusion in time-varying community networks

    Science.gov (United States)

    Cui, Xuelian; Zhao, Narisa

    2017-12-01

    Social networks are rarely static, and they typically have time-varying network topologies. A great number of studies have modeled temporal networks and explored social contagion processes within these models; however, few of these studies have considered community structure variations. In this paper, we present a study of how the time-varying property of a modular structure influences the information dissemination. First, we propose a continuous-time Markov model of information diffusion where two parameters, mobility rate and community attractiveness, are introduced to address the time-varying nature of the community structure. The basic reproduction number is derived, and the accuracy of this model is evaluated by comparing the simulation and theoretical results. Furthermore, numerical results illustrate that generally both the mobility rate and community attractiveness significantly promote the information diffusion process, especially in the initial outbreak stage. Moreover, the strength of this promotion effect is much stronger when the modularity is higher. Counterintuitively, it is found that when all communities have the same attractiveness, social mobility no longer accelerates the diffusion process. In addition, we show that the local spreading in the advantage group has been greatly enhanced due to the agglomeration effect caused by the social mobility and community attractiveness difference, which thus increases the global spreading.

  4. Acceptance model of a Hospital Information System.

    Science.gov (United States)

    Handayani, P W; Hidayanto, A N; Pinem, A A; Hapsari, I C; Sandhyaduhita, P I; Budi, I

    2017-03-01

    The purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance. This study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0. The study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy) and organizational characteristics (i.e. management support, facilitating conditions, and user involvement), which have a significance level of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals. Based on the results of this study, hospital management and IT developers should develop a deeper understanding of the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS. Copyright © 2016

  5. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have emerged thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to produce digital models that can serve several purposes: heritage management, support for conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuous monitoring of the statue.

  6. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
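    Kelly's exact objective function is not spelled out in this abstract. The sketch below illustrates the general idea under a simplifying assumption: fix the Dirichlet means to the target alpha-factor means (so the mean constraints hold exactly) and choose the single remaining concentration parameter by least squares against hypothetical target marginal variances:

```python
def dirichlet_by_least_squares(means, target_vars, grid=None):
    """Pick Dirichlet parameters alpha_i = c * m_i whose marginal variances
    Var_i = m_i * (1 - m_i) / (c + 1) best match the target variances in
    the least-squares sense.  The mean constraints alpha_i / c = m_i are
    satisfied exactly by construction; only c is free."""
    if grid is None:
        grid = [0.1 * k for k in range(1, 1001)]  # candidate concentrations

    def sse(c):
        return sum((m * (1 - m) / (c + 1) - v) ** 2
                   for m, v in zip(means, target_vars))

    best_c = min(grid, key=sse)
    return [best_c * m for m in means]

# hypothetical alpha-factor means for a three-train common-cause model,
# with invented target variances chosen to keep the prior diffuse
means = [0.95, 0.04, 0.01]
target_vars = [0.005, 0.003, 0.001]
alphas = dirichlet_by_least_squares(means, target_vars)
```

    A small total concentration (the sum of the alphas) is what makes the prior "minimally informative": sparse data can still move the posterior substantially.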

  7. Combining the Generic Entity-Attribute-Value Model and Terminological Models into a Common Ontology to Enable Data Integration and Decision Support.

    Science.gov (United States)

    Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte

    2018-01-01

    The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity attribute value model and could be extended to other domains.
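    As a minimal illustration of the generic entity-attribute-value pattern the authors build on (the class, entity names, and attributes below are invented for illustration, not taken from the DESIREE project):

```python
from collections import defaultdict

class EAVStore:
    """Minimal generic entity-attribute-value store: every fact is an
    (entity, attribute, value) triple, so new clinical attributes can be
    added without any schema change."""

    def __init__(self):
        self._facts = defaultdict(dict)  # entity -> {attribute: value}

    def assert_fact(self, entity, attribute, value):
        self._facts[entity][attribute] = value

    def value(self, entity, attribute):
        return self._facts[entity].get(attribute)

    def entities_where(self, attribute, predicate):
        """Rule-style query: entities whose attribute satisfies a test."""
        return [e for e, attrs in self._facts.items()
                if attribute in attrs and predicate(attrs[attribute])]

store = EAVStore()
store.assert_fact("patient-1", "diagnosis", "breast cancer")
store.assert_fact("patient-1", "tumor_size_mm", 18)
store.assert_fact("patient-2", "diagnosis", "breast cancer")
store.assert_fact("patient-2", "tumor_size_mm", 32)

# a guideline branch conditioned on tumour size becomes a simple query
large = store.entities_where("tumor_size_mm", lambda v: v > 20)
```

    Binding the attribute names to concepts in a termino-ontological resource, as the paper proposes, is what lets ontological reasoning and rule-based inference operate over the same triples.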

  8. Information driving force and its application in agent-based modeling

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2018-04-01

    Exploring the scientific impact of online big-data has attracted much attention of researchers from different fields in recent years. Complex financial systems are typical open systems profoundly influenced by the external information. Based on the large-scale data in the public media and stock markets, we first define an information driving force, and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric in the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Especially, all the key parameters are determined from the empirical analysis rather than from statistical fitting of the simulation results. With our model, both the stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.

  9. Traffic congestion forecasting model for the INFORM System. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Azarm, A.; Mughabghab, S.; Stock, D.

    1995-05-01

    This report describes a computerized traffic forecasting model, developed by Brookhaven National Laboratory (BNL) for a portion of the Long Island INFORM Traffic Corridor. The model has gone through a testing phase and is currently able to make accurate traffic predictions up to one hour forward in time. The model will eventually take on-line traffic data from the INFORM system roadway sensors and make projections as to future traffic patterns, thus allowing operators at the New York State Department of Transportation (D.O.T.) INFORM Traffic Management Center to manage traffic more optimally. It can also form the basis of a travel information system. The BNL computer model developed for this project is called ATOP, for Advanced Traffic Occupancy Prediction. The various modules of the ATOP computer code are currently written in Fortran and run on PC computers (Pentium machines) faster than real time for the section of the INFORM corridor under study. The ATOP code currently contains the following routines: (1) statistical forecasting of traffic flow and occupancy using historical data for similar days and times (long-term knowledge) and recent information from the past hour (short-term knowledge); (2) estimation of the empirical relationships between traffic flow and occupancy using long- and short-term information; (3) mechanistic interpolation using macroscopic traffic models, based on the forecasted traffic flow and occupancy (item 1) and the empirical relationships (item 2), for the specific highway configuration at the time of simulation (construction, lane closure, etc.); and (4) a statistical routine for detection and classification of anomalies and their impact on highway capacity, which is fed back to the previous items.
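    ATOP's Fortran source is not available in this report. The toy function below sketches only the flavor of its first routine: start from a historical profile for a similar day and time (long-term knowledge) and correct it by the latest observed sensor deviation, decayed into the future (short-term knowledge). All numbers and the decay scheme are hypothetical:

```python
def forecast(historical_profile, recent_obs, decay=0.8):
    """One-hour-ahead occupancy forecast: take the historical profile
    (index 0 = now, later indices = future intervals) and add the current
    observed deviation from it, geometrically decayed into the future."""
    deviation = recent_obs[-1] - historical_profile[0]
    return [h + deviation * decay ** k
            for k, h in enumerate(historical_profile[1:], start=1)]

# hypothetical 5-minute occupancy (%) values
historical = [20.0, 22.0, 25.0, 30.0, 34.0]   # profile for similar days
recent = [18.0, 21.0, 26.0]                   # last few sensor readings
pred = forecast(historical, recent)
```

    The decay constant controls how quickly the forecast reverts from today's anomaly back to the long-term pattern, mirroring the report's split between short- and long-term knowledge.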

  10. A Process Model for Goal-Based Information Retrieval

    Directory of Open Access Journals (Sweden)

    Harvey Hyman

    2014-12-01

    Full Text Available In this paper we examine the domain of information search and propose a "goal-based" approach to study search strategy. We describe "goal-based information search" using a framework of Knowledge Discovery. We identify two Information Retrieval (IR) goals using the constructs of Knowledge Acquisition (KA) and Knowledge Explanation (KE). We classify these constructs into two specific information problems: an exploration-exploitation problem and an implicit-explicit problem. Our proposed framework is an extension of prior work in this domain, applying an IR Process Model originally developed for Legal-IR and adapted to Medical-IR. The approach in this paper is guided by the recent ACM-SIG Medical Information Retrieval (MedIR) Workshop definition: "methodologies and technologies that seek to improve access to medical information archives via a process of information retrieval."

  11. Information model design health service childhood cancer for parents and caregivers

    Science.gov (United States)

    Ramli, Syazwani; Muda, Zurina

    2015-05-01

    Most Malaysians do not realize that they suffer from a chronic disease until it is confirmed to be at a critical stage. This is because of a lack of awareness among Malaysians about chronic diseases, especially childhood cancer. According to a report of the National Cancer Council (MAKNA), 11 million adults and children worldwide suffer from cancer and 6 million of them die. Lack of public exposure to this disease leads to health problems for their children. An information model design for a childhood cancer health service for parents and caregivers, delivered through an Android application, can be used by doctors to deliver cancer information to parents and caregivers. The development of this information model integrates health promotion theory, the spiral model, and the lean model to form a new model that can be used as a design model for health service content. The methods used in this study were interviews and questionnaires conducted throughout the study. It is hoped that this information model, delivered as an Android app, can help parents, caregivers, and the public learn more about childhood cancer, raise awareness among them, and serve as a medium for doctors to deliver information to parents and caregivers.

  12. Spatially-Explicit Bayesian Information Entropy Metrics for Calibrating Landscape Transformation Models

    Directory of Open Access Journals (Sweden)

    Kostas Alexandridis

    2013-06-01

    Full Text Available Assessing spatial model performance often presents challenges related to the choice and suitability of traditional statistical methods in capturing the true validity and dynamics of the predicted outcomes. The stochastic nature of many of our contemporary spatial models of land use change necessitates the testing and development of new and innovative methodologies in statistical spatial assessment. In many cases, spatial model performance depends critically on the spatially-explicit prior distributions, characteristics, availability and prevalence of the variables and factors under study. This study explores the statistical spatial characteristics of statistical model assessment of modeling land use change dynamics in a seven-county study area in South-Eastern Wisconsin during the historical period of 1963–1990. The artificial neural network-based Land Transformation Model (LTM) predictions are used to compare simulated with historical land use transformations in urban/suburban landscapes. We introduce a range of Bayesian information entropy statistical spatial metrics for assessing the model performance across multiple simulation testing runs. Bayesian entropic estimates of model performance are compared against information-theoretic stochastic entropy estimates and theoretically-derived accuracy assessments. We argue for the critical role of informational uncertainty across different scales of spatial resolution in informing spatial landscape model assessment. Our analysis reveals how incorporation of spatial and landscape information asymmetry estimates can improve our stochastic assessments of spatial model predictions. Finally, our study shows how spatially-explicit entropic classification accuracy estimates can work closely with dynamic modeling methodologies in improving our scientific understanding of landscape change as a complex adaptive system and process.
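    As a small illustration of entropy-based spatial assessment (a simpler cousin of the paper's Bayesian metrics, not the metrics themselves), the sketch below computes per-cell Shannon entropy across repeated stochastic simulation runs; high-entropy cells are where the model's predictions are least consistent:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of categorical outcomes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def uncertainty_map(runs):
    """Per-cell entropy across multiple stochastic simulation runs.
    `runs` is a list of equally sized grids (flattened to lists) of
    predicted land-use labels."""
    cells = list(zip(*runs))          # group the runs' values per cell
    return [entropy(cell) for cell in cells]

# three hypothetical LTM-style runs over a 4-cell landscape
# (U = urban, N = non-urban); only cell 2 disagrees across runs
runs = [["U", "N", "U", "N"],
        ["U", "N", "N", "N"],
        ["U", "N", "U", "N"]]
emap = uncertainty_map(runs)
```

    Zero-entropy cells are deterministic under the model; averaging `emap` at several grid resolutions gives a crude scale-dependent uncertainty summary of the kind the paper argues for.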

  13. Closed Loop Brain Model of Neocortical Information Based Exchange

    Directory of Open Access Journals (Sweden)

    James eKozloski

    2016-01-01

    Full Text Available Here we describe an 'information based exchange' model of brain function that ascribes to neocortex, basal ganglia, and thalamus distinct network functions. The model allows us to analyze whole brain system set point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel Grand Loop through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principal cell synapses to achieve these transitions. By implementing a system for maximum information based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting state activity, and that different modifications to information based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components towards ever greater complexity and biological realism.

  14. Information Society Visions in the Nordic Countries

    DEFF Research Database (Denmark)

    Henten, Anders; Kristensen, Thomas Myrup

    2000-01-01

    This paper analyses the information society visions put forward by the governments/administrations of the Nordic countries and compares them to the visions advanced at the EU level. The paper suggests that the information society visions constitute a kind of common ideology for almost the whole political spectrum, although it is characterised by a high degree of neo-liberal thinking. It is further argued that there is no distinctly Nordic model for an information society.

  15. A common type system for clinical natural language processing

    Directory of Open Access Journals (Sweden)

    Wu Stephen T

    2013-01-01

    Full Text Available Abstract Background One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System), versions 2.0 and later. Conclusions We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than the surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.

  16. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established with cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as needed to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
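    The forward normal cloud generator mentioned above has a widely published standard form: draw a second-order standard deviation from the entropy and hyper-entropy, then draw a drop and score its membership. The sketch below follows that common formulation (the parameter values are illustrative, not from the paper):

```python
import math
import random

def forward_cloud(ex, en, he, n):
    """Forward normal cloud generator: from the digital characteristics
    (Ex = expectation, En = entropy, He = hyper-entropy), produce n cloud
    drops as (x, membership) pairs."""
    drops = []
    for _ in range(n):
        en_prime = random.gauss(en, he)            # second-order randomness
        x = random.gauss(ex, abs(en_prime))        # the cloud drop itself
        mu = math.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership
        drops.append((x, mu))
    return drops

random.seed(42)
drops = forward_cloud(ex=0.0, en=1.0, he=0.1, n=1000)
memberships = [mu for _, mu in drops]
```

    The hyper-entropy `he` is what models the uncertainty-of-uncertainty in facial expressions: setting it to zero collapses the cloud to an ordinary Gaussian membership curve.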

  17. A multi-model evaluation of aerosols over South Asia: common problems and possible causes

    Science.gov (United States)

    Pan, X.; Chin, M.; Gautam, R.; Bian, H.; Kim, D.; Colarco, P. R.; Diehl, T. L.; Takemura, T.; Pozzoli, L.; Tsigaridis, K.; Bauer, S.; Bellouin, N.

    2015-05-01

    Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, water cycle and human health. These effects are potentially growing owing to rising trends of anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period of 2000-2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire South Asia, the annual mean aerosol optical depth (AOD) is underestimated by a range 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular during the post-monsoon and wintertime periods (i.e., October-January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in models generally occur in the lower troposphere (below 2 km) based on the comparisons of aerosol extinction profiles calculated by the models with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found much lower than in situ measurements in winter. Several possible causes for these common problems of underestimating aerosols in models during the post-monsoon and wintertime periods are identified: the aerosol hygroscopic growth and formation of

  18. Using whole disease modeling to inform resource allocation decisions: economic evaluation of a clinical guideline for colorectal cancer using a single model.

    Science.gov (United States)

    Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine

    2013-06-01

    To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines.
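The constrained-maximization step described above — choosing a configuration of services that maximizes QALY gains without exceeding current expenditure — has the shape of a 0/1 knapsack problem. The sketch below is a generic illustration, not the paper's model: the function name, topics, costs and QALY gains are all invented.

```python
# Toy sketch of the constrained-maximization step: a 0/1 knapsack over
# candidate service changes. All names, costs and QALY gains are invented
# for illustration; this is not the paper's model.
from itertools import combinations

def best_configuration(options, budget):
    """options: list of (name, cost, qaly_gain). Brute-force search for the
    subset with maximal QALY gain whose total cost stays within budget."""
    best_gain, best_set = 0.0, []
    for r in range(len(options) + 1):
        for combo in combinations(options, r):
            cost = sum(c for _, c, _ in combo)
            gain = sum(g for _, _, g in combo)
            if cost <= budget and gain > best_gain:
                best_gain, best_set = gain, [name for name, _, _ in combo]
    return best_gain, best_set

options = [  # (topic, incremental cost, QALY gain) -- hypothetical values
    ("screening_A", 400.0, 3.0),
    ("diagnostic_B", 250.0, 2.2),
    ("adjuvant_C", 600.0, 4.1),
    ("followup_D", 150.0, 1.0),
]
gain, chosen = best_configuration(options, budget=800.0)
```

Brute force is adequate for a handful of guideline topics; with many more options one would switch to dynamic programming.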

  19. Advanced modeling of management processes in information technology

    CERN Document Server

    Kowalczuk, Zdzislaw

    2014-01-01

    This book deals with the issues of modelling the management processes of information technology and IT projects. Its core is the model of information technology management and its component (contextual, local) models describing initial processing and the maturity capsule, as well as a decision-making system represented by a multi-level sequential model of IT technology selection, which acquires a fuzzy rule-based implementation in this work. The work may also be useful for diagnosing the applicability of IT standards in the evaluation of IT organizations. The results of this diagnosis might prove valid for those preparing new standards so that – apart from their own visions – they could, to an even greater extent, take into account the capabilities and needs of the leaders of project and manufacturing teams. The book is intended for IT professionals using the ITIL, COBIT and TOGAF standards in their work. Students of computer science and management who are interested in the issue of IT...

  20. Sustainability Product Properties in Building Information Models

    Science.gov (United States)

    2012-09-01

    preferred carpool parking spots, preferred low-emitting/fuel-efficient vehicle parking spots, bike racks and telecommuting as options to promote good ... most part, these have not been in a computable form. Fallon then stressed the importance of a common conceptual framework, using the IFC model ... organizations would be formed with the help of Mr. Kalin. He stressed the goal of the project was to create templates that would be free to use

  1. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is demonstrated for time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
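The prediction setting above rests on the Takens embedding theorem: reconstruct state vectors from delayed samples of a scalar series, then predict forward. The sketch below is a bare-bones nearest-neighbor predictor on delay vectors, with none of the paper's Fisher-information machinery; `predict_next` and its default parameters are invented for illustration.

```python
# Minimal delay-embedding predictor (illustrative only; the paper's model
# infers its working hypothesis via a minimum Fisher information principle,
# which is not reproduced here).
import math

def embed(series, dim, tau):
    """Takens delay vectors: v_t = (s[t], s[t-tau], ..., s[t-(dim-1)*tau])."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

def predict_next(series, dim=3, tau=1):
    """Forecast s[n] from s[0..n-1] by copying the successor of the
    nearest past delay vector (Euclidean distance)."""
    vecs = embed(series, dim, tau)
    query = vecs[-1]
    best_i, best_d = None, float("inf")
    for i, v in enumerate(vecs[:-1]):           # exclude the query itself
        d = sum((a - b) ** 2 for a, b in zip(v, query))
        if d < best_d:
            best_i, best_d = i, d
    start = (dim - 1) * tau                     # vecs[i] ends at time start+i
    return series[start + best_i + 1]

s = [math.sin(0.3 * t) for t in range(200)]     # toy quasi-periodic series
pred = predict_next(s)                          # forecast of sample 200
actual = math.sin(0.3 * 200)
```

On this smooth toy series the nearest-neighbor forecast lands close to the true next sample; chaotic series such as Mackey-Glass need larger embeddings and more data.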

  2. Modeling the Informal Economy in Mexico. A Structural Equation Approach

    OpenAIRE

    Brambila Macias, Jose

    2008-01-01

    This paper uses annual data for the period 1970-2006 in order to estimate and investigate the evolution of the Mexican informal economy. In order to do so, we model the informal economy as a latent variable and try to explain it through relationships between possible cause and indicator variables using structural equation modeling (SEM). Our results indicate that the Mexican informal sector at the beginning of the 1970’s initially accounted for 40 percent of GDP while slightly decreasing to s...

  3. Modelling of information processes management of educational complex

    Directory of Open Access Journals (Sweden)

    Оксана Николаевна Ромашкова

    2014-12-01

    Full Text Available This work concerns the information model of an educational complex which includes several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is considered, and a matrix management structure is suggested. The basic management information processes of the educational complex were conceptualized.

  4. Teaching modelling of distributed information and control systems to students using the method of simulation modelling

    Directory of Open Access Journals (Sweden)

    A. V. Gabalin

    2017-01-01

    Full Text Available Mathematical modelling is one of the most effective means to study complex systems and processes. Among the most convenient means of mathematical modelling used in the analysis of the functioning of systems of this class are simulation models, which describe the structure and behavior of the system in the form of a program for the PC and allow conducting computer experiments with the aim of obtaining the necessary data on the functioning of the elements and the system as a whole during certain time intervals. Currently, the simulation tools market presents a large number of different simulation systems, and the selection of suitable tools is very important. Specialized programs include GPSS World, MATLAB/Simulink, and AnyLogic. Distributed information and control systems (ICS) are spatially dispersed, multifunctional, coherent sets of stationary and mobile elements with developed technical means for the reception, transmission and processing of information. The task is to determine a rational structure of the ICS whose planned indicators of development and functioning quality meet specified requirements under given structural constraints, characteristics of information flows, and parameters of technical tools. For experimental research on the functioning processes of the described system, a simulation model was developed. This model allows obtaining and evaluating such functional characteristics as the degree of utilization of technical means, the waiting time of information in queues for service, the level of efficiency of transmission and processing of information, the time of forming a single medium, etc. The model also allows evaluating the performance of the system depending on the flight schedule, flight paths, characteristics of technical means, the system structure, failure of individual elements, and other parameters.
The developed simulation model in GPSS allows students to master the subject area deeply enough – the
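The functional characteristics named above — waiting time in queues and utilization of technical means — are the standard outputs of queueing simulations. As a classroom-style companion (in plain Python rather than GPSS, with invented arrival and service rates, not the ICS model from the course), a minimal single-server queue can be simulated with the Lindley recursion:

```python
# Minimal single-server (M/M/1-style) queue sketch in plain Python. It only
# illustrates the kind of metrics such simulation models report -- mean
# waiting time and server utilization. Rates and the function name are
# invented for illustration.
import random

def simulate_queue(n, arrival_mean, service_mean, seed=1):
    rng = random.Random(seed)
    t_arrive, server_free = 0.0, 0.0
    waits, busy = [], 0.0
    for _ in range(n):
        t_arrive += rng.expovariate(1.0 / arrival_mean)   # next arrival time
        start = max(t_arrive, server_free)                # Lindley recursion
        service = rng.expovariate(1.0 / service_mean)
        waits.append(start - t_arrive)                    # time spent in queue
        server_free = start + service
        busy += service
    return sum(waits) / n, busy / server_free             # mean wait, utilization

mean_wait, utilization = simulate_queue(20000, arrival_mean=1.0, service_mean=0.5)
# M/M/1 theory for these rates: utilization rho = 0.5, mean queue wait
# Wq = rho * service_mean / (1 - rho) = 0.5
```

Students can compare the simulated averages against the closed-form M/M/1 values before moving to structures that have no analytic solution.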

  5. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information, with global coverage, presented at widely varying levels of detail, as digital and paper products, and as customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMA), Government Departments, and private business). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  6. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  7. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  8. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated

  9. A promising poison information centre model for Africa

    Directory of Open Access Journals (Sweden)

    Carine Marks

    2016-06-01

    Conclusion: A number of benefits might result from such a poisons centre network hub, including: (1) Improved cooperation between countries on poisoning problems; (2) Harmonisation and strengthening of research and surveillance; (3) Common standards and best practices, e.g. regulating chemicals, data management, and staff training; and (4) Greater bargaining power to secure resources. Further investigation is needed to identify the most suitable location for the network hub, the activities it should fulfil, and the availability of specialists in poisons information who could become members of the hub.

  10. A Study of Prisoner’s Dilemma Game Model with Incomplete Information

    Directory of Open Access Journals (Sweden)

    Xiuqin Deng

    2015-01-01

    Full Text Available The prisoners’ dilemma is a classic game theory problem. In our study, it is regarded as an incomplete information game with unpublicized game strategies. We solve the problem by establishing a machine learning model using Bayes’ formula; the model established is referred to as the Bayes model. Based on the Bayes model, we can predict players’ choices to better complete the unknown information in the game, and we suggest a hash table to improve the space and time complexity. We build a game system with several types of game strategy for testing. In double- or multiplayer games, the Bayes model is superior to other strategy models: the total income using the Bayes model is higher than that of other models. Moreover, from the results of games of the natural model against the Bayes model, as well as the natural model against the TFT model, it is found that the Bayes model accrued more benefits than the TFT model on average. This demonstrates that the Bayes model introduced in this study is feasible and effective, and it therefore provides a novel method of solving the incomplete information game problem.
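A Bayes-formula opponent model of the kind described can be sketched by maintaining a posterior over a small set of candidate strategies and predicting the opponent's next move from it. The strategy set, noise level, and sample history below are invented; this is an illustration of the general technique, not the paper's exact model.

```python
# Illustrative Bayes-rule opponent model for iterated prisoner's dilemma
# (not the paper's exact model). Moves: "C" cooperate, "D" defect.

STRATS = {
    "always_defect":    lambda hist: "D",
    "always_cooperate": lambda hist: "C",
    "tit_for_tat":      lambda hist: hist[-1][0] if hist else "C",  # copy our last move
}

def posterior(prior, history):
    """history: list of (our_move, their_move). Returns P(strategy | moves),
    with a small noise term so no strategy is ruled out entirely."""
    post = dict(prior)
    past = []
    for ours, theirs in history:
        for name, rule in STRATS.items():
            predicted = rule(past)
            post[name] *= 0.9 if predicted == theirs else 0.1
        past.append((ours, theirs))
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

def predict_next(prior, history):
    """Predict the opponent's next move as the posterior-weighted majority."""
    post = posterior(prior, history)
    p_defect = sum(p for name, p in post.items()
                   if STRATS[name](history) == "D")
    return ("D" if p_defect >= 0.5 else "C"), post

prior = {name: 1 / 3 for name in STRATS}          # uniform prior over types
history = [("C", "C"), ("D", "C"), ("C", "D")]    # consistent with tit-for-tat
move, post = predict_next(prior, history)
```

A hash table keyed by observed history, as the abstract suggests, would cache these posterior computations across repeated game states.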

  11. A common mode of origin of power laws in models of market and earthquake

    Science.gov (United States)

    Bhattacharyya, Pratip; Chatterjee, Arnab; Chakrabarti, Bikas K.

    2007-07-01

    We show that there is a common mode of origin for the power laws observed in two different models: (i) the Pareto law for the distribution of money among the agents with random-saving propensities in an ideal gas-like market model and (ii) the Gutenberg-Richter law for the distribution of overlaps in a fractal-overlap model for earthquakes. We find that the power laws appear as the asymptotic forms of ever-widening log-normal distributions for the agents’ money and the overlap magnitude, respectively. The identification of the generic origin of the power laws helps in better understanding and in developing generalized views of phenomena in such diverse areas as economics and geophysics.
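The asymptotic link between widening log-normals and power laws can be made explicit with the local log-log slope of the log-normal density (a standard calculation, stated here as a companion to the abstract rather than taken from the paper):

```latex
% Local log-log slope of the log-normal density:
p(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}
       \exp\!\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right)
\quad\Longrightarrow\quad
\frac{d\ln p}{d\ln x} \;=\; -1 \;-\; \frac{\ln x-\mu}{\sigma^2}.
% Over any fixed window of \ln x the slope tends to the constant -1 as the
% distribution widens (\sigma \to \infty), so the ever-widening log-normals
% for the agents' money and the overlap magnitude become locally
% indistinguishable from power laws.
```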

  12. Quantum mechanics, common sense and the black hole information paradox

    CERN Document Server

    Danielsson, U H; Danielsson, Ulf H.; Schiffer, Marcelo

    1993-01-01

    The purpose of this paper is to analyse, in the light of information theory and with the arsenal of (elementary) quantum mechanics (EPR correlations, copying machines, teleportation, mixing produced in sub-systems owing to a trace operation, etc.) the scenarios available on the market to resolve the so-called black-hole information paradox. We shall conclude that the only plausible ones are those where either the unitary evolution of quantum mechanics is given up, in which information leaks continuously in the course of black-hole evaporation through non-local processes, or those in which the world is polluted by an infinite number of meta-stable remnants.

  13. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...
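Two of the aggregation operators the book treats — the weighted mean and the ordered weighted averaging (OWA) operator — can be sketched in a few lines. The scores and weight vectors below are invented examples.

```python
# Two basic aggregation operators in minimal form: the weighted mean
# (weights attached to sources) and Yager's OWA operator (weights attached
# to rank positions). Example data are invented.

def weighted_mean(values, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, values))

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the values sorted
    largest-first, so one weight vector can model optimism (mass on early
    positions) or pessimism (mass on late positions)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.9, 0.4, 0.7]                        # readings from three sources
wm  = weighted_mean(scores, [0.5, 0.3, 0.2])    # trust source 1 most
opt = owa(scores, [0.6, 0.3, 0.1])              # optimistic: stress best readings
```

With weights (1, 0, 0) OWA reduces to the maximum, with (0, 0, 1) to the minimum, and with equal weights to the plain mean — one family covering a whole range of fusion attitudes.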

  14. Method for Measuring the Information Content of Terrain from Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Lujin Hu

    2015-10-01

    Full Text Available As digital terrain models are indispensable for visualizing and modeling geographic processes, terrain information content is useful for terrain generalization and representation. For terrain generalization, if the terrain information is considered, the generalized terrain may be of higher fidelity. In other words, the richer the terrain information at the terrain surface, the smaller the degree of terrain simplification. Terrain information content is also important for evaluating the quality of rendered terrain, e.g., the rendered web terrain tile service in Google Maps (Google Inc., Mountain View, CA, USA). However, a unified definition and measures for terrain information content have not been established. Therefore, in this paper, a definition and measures for terrain information content from a Digital Elevation Model (DEM, i.e., a digital model or 3D representation of a terrain’s surface data) are proposed, based on the theory of map information content, remote sensing image information content and other geospatial information content. Information entropy was taken as the measuring method for terrain information content. Two experiments were carried out to verify the measurement methods. One is the analysis of terrain information content for different geomorphic types, and the results showed that the more complex the geomorphic type, the richer the terrain information content. The other is the analysis of terrain information content at different resolutions, and the results showed that the finer the resolution, the richer the terrain information. Both experiments verified the reliability of the measurements of terrain information content proposed in this paper.
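An entropy measure of the kind described can be sketched by treating binned elevations as a probability distribution. This toy version (invented grid and bin width, far simpler than the paper's measures) reproduces the qualitative finding that richer relief carries more information:

```python
# Sketch of an entropy-style terrain information measure: Shannon entropy
# of the elevation-value histogram of a small DEM grid. The grid and bin
# width are invented; the paper's measures are richer than this.
import math

def terrain_entropy(dem, bin_width=10.0):
    """dem: 2-D list of elevations. H = -sum p_i log2 p_i over elevation bins."""
    counts, n = {}, 0
    for row in dem:
        for z in row:
            b = int(z // bin_width)
            counts[b] = counts.get(b, 0) + 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat   = [[100.0] * 4 for _ in range(4)]         # uniform plain: zero entropy
rugged = [[100.0, 135.0, 170.0, 205.0],          # strongly varying relief
          [240.0, 275.0, 310.0, 345.0],
          [380.0, 415.0, 450.0, 485.0],
          [520.0, 555.0, 590.0, 625.0]]
h_flat, h_rugged = terrain_entropy(flat), terrain_entropy(rugged)
```

A finer `bin_width` plays the role of finer DEM resolution here: it separates more elevation values into distinct bins and so raises the measured entropy, mirroring the resolution experiment above.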

  15. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
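The estimand in the comparison above can be illustrated with a toy simulation. For a single binary exposure and no other covariates, the relative risk that both robust Poisson and log-binomial regression target reduces to a ratio of outcome proportions; the parameters below are invented and the setup is far simpler than the paper's contamination scenarios.

```python
# Toy simulation in the spirit of the study above (invented parameters).
# With one binary exposure, the fitted relative risk from either model
# equals the ratio of outcome proportions, estimated here directly.
import random

def simulate_rr(n, p0, rr, seed=7):
    """Generate n subjects, alternating exposure; outcome risk is p0 when
    unexposed and p0*rr when exposed. Returns the estimated relative risk."""
    rng = random.Random(seed)
    events = {0: 0, 1: 0}
    counts = {0: 0, 1: 0}
    for i in range(n):
        x = i % 2                        # alternate exposed / unexposed
        p = p0 * rr if x else p0
        events[x] += rng.random() < p    # Bernoulli outcome
        counts[x] += 1
    return (events[1] / counts[1]) / (events[0] / counts[0])

rr_hat = simulate_rr(200000, p0=0.2, rr=1.5)   # true RR = 1.5
```

The two regression approaches differ once continuous covariates, model misspecification, or outliers enter — exactly the scenarios the simulation study above probes.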

  16. An Integrative Behavioral Model of Information Security Policy Compliance

    Directory of Open Access Journals (Sweden)

    Sang Hoon Kim

    2014-01-01

    Full Text Available The authors found the behavioral factors that influence the organization members’ compliance with the information security policy in organizations on the basis of neutralization theory, Theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members’ attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members’ compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training

  17. An integrative behavioral model of information security policy compliance.

    Science.gov (United States)

    Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung

    2014-01-01

    The authors found the behavioral factors that influence the organization members' compliance with the information security policy in organizations on the basis of neutralization theory, Theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training programs suppressing

  18. Validation of the DeLone and McLean Information Systems Success Model.

    Science.gov (United States)

    Ojo, Adebowale I

    2017-01-01

    This study is an adaptation of the widely used DeLone and McLean information system success model in the context of hospital information systems in a developing country. A survey research design was adopted in the study. A structured questionnaire was used to collect data from 442 health information management personnel in five Nigerian teaching hospitals. A structural equation modeling technique was used to validate the model's constructs. It was revealed that system quality significantly influenced use (β = 0.53, p < 0.05). Information quality significantly influenced use (β = 0.24, p < 0.05). Service quality did not significantly influence use (p > 0.05), but it significantly influenced perceived net benefits (β = 0.21, p < 0.05). The study validates the DeLone and McLean information system success model in the context of a hospital information system in a developing country. Importantly, system quality and use were found to be important measures of hospital information system success. It is, therefore, imperative that hospital information systems are designed in such ways that are easy to use, flexible, and functional to serve their purpose.

  19. A common signal detection model accounts for both perception and discrimination of the watercolor effect.

    Science.gov (United States)

    Devinck, Frédéric; Knoblauch, Kenneth

    2012-03-21

    Establishing the relation between perception and discrimination is a fundamental objective in psychophysics, with the goal of characterizing the neural mechanisms mediating perception. Here, we show that a procedure for estimating a perceptual scale based on a signal detection model also predicts discrimination performance. We use a recently developed procedure, Maximum Likelihood Difference Scaling (MLDS), to measure the perceptual strength of a long-range, color, filling-in phenomenon, the Watercolor Effect (WCE), as a function of the luminance ratio between the two components of its generating contour. MLDS is based on an equal-variance, Gaussian, signal detection model and yields a perceptual scale with interval properties. The strength of the fill-in percept increased to 10-15 times the estimate of the internal noise level over a 3-fold increase in the luminance ratio. Each observer's estimated scale predicted discrimination performance in a subsequent paired-comparison task. A common signal detection model accounts for both the appearance and discrimination data. Since signal detection theory provides a common metric for relating discrimination performance and neural response, the results have implications for comparing perceptual and neural response functions.
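The equal-variance Gaussian signal detection model that links the MLDS scale to paired-comparison performance can be written down directly: the probability of judging stimulus i stronger than stimulus j is Φ((ψᵢ − ψⱼ)/√2), with scale values ψ expressed in internal-noise units. The scale values below are invented for illustration.

```python
# Equal-variance Gaussian signal detection prediction for a paired
# comparison: each stimulus evokes a response psi + noise (sd = 1), and
# stimulus i is judged stronger when its response is larger. Scale values
# here are invented, not the paper's data.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_choose_first(psi_i, psi_j):
    """P(judge i stronger than j); difference of two unit-variance
    responses has sd sqrt(2), hence the scaling."""
    return phi((psi_i - psi_j) / math.sqrt(2.0))

# a perceptual scale spanning ~10 noise units, as reported for the WCE
p_near = p_choose_first(5.0, 4.5)    # nearby scale values: near chance
p_far  = p_choose_first(10.0, 2.0)   # distant scale values: near certainty
```

This is the sense in which one model "accounts for both" data sets: the same ψ values estimated from difference-scaling judgments feed directly into predicted discrimination probabilities.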

  20. Common sunflower (Helianthus annuus) interference in soybean (Glycine max)

    International Nuclear Information System (INIS)

    Geier, P.W.; Maddux, L.D.; Moshier, L.J.; Stahlman, P.W.

    1996-01-01

    Multiple weed species in the field combine to cause yield losses and can be described using one of several empirical models. Field studies were conducted to compare observed corn yield loss caused by common sunflower and shattercane populations with predicted yield losses modeled using a multiple-species rectangular hyperbola model, an additive model, or the yield loss model in the decision support system WeedSOFT, and to derive competitive indices for common sunflower and shattercane. Common sunflower and shattercane emerged with corn, and selected densities were established in field experiments at Scandia and Rossville, KS, between 2000 and 2002. The multiple-species rectangular hyperbola model fit pooled data from three of five location-years with a predicted maximum corn yield loss of 60%. The initial slope parameter estimate was 49.2 for common sunflower and 4.2% for shattercane. A ratio of these estimates indicated that common sunflower was 11 times more competitive than shattercane. When common sunflower was assigned a competitive index (CI) value of 10, shattercane CI was 0.9. Predicted yield losses modeled for separate common sunflower or shattercane populations were additive when compared with observed yield losses caused by low-density mixed populations of common sunflower (0 to 0.5 plants m−2) and shattercane (0 to 4 plants m−2). However, a ratio of estimates of these models indicated that common sunflower was only four times as competitive as shattercane, with a CI of 2.5 for shattercane. The yield loss model in WeedSOFT underpredicted the same corn losses by 7.5%. Clearly, both the CI for shattercane and the yield loss model in WeedSOFT need to be reevaluated, and the multiple-species rectangular hyperbola model is proposed. (author)
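The rectangular hyperbola extended to two species can be sketched by summing the species' initial-slope contributions and applying a shared asymptote. The parameter values below take the quoted estimates (initial slopes 49.2 and 4.2, maximum loss 60%) as the I and A parameters; the exact published parameterization may differ, so treat this as an illustration of the functional form, not the fitted model.

```python
# Sketch of a multiple-species rectangular hyperbola (Cousens-style) yield
# loss model. I parameters are the quoted initial slopes (% loss per plant
# per m^2 as density -> 0); A is the quoted 60% maximum loss. Illustrative
# only -- the published model's exact form may differ.

def yield_loss(d_sunflower, d_shattercane, i_sun=49.2, i_shat=4.2, a=60.0):
    """Percent corn yield loss for weed densities in plants per m^2."""
    pressure = i_sun * d_sunflower + i_shat * d_shattercane
    return pressure / (1.0 + pressure / a)

loss  = yield_loss(0.5, 4.0)   # a low-density mixed stand
ratio = 49.2 / 4.2             # initial-slope ratio, roughly 11.7
```

At low densities the model is nearly additive (pressure/A is small), which matches the observation above that separate-species predictions summed well for the low-density mixtures; the hyperbolic denominator only bites as combined pressure approaches the asymptote.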

  1. Information support model and its impact on utility, satisfaction and loyalty of users

    Directory of Open Access Journals (Sweden)

    Sead Šadić

    2016-11-01

    In today’s modern age, information systems are of vital importance for the successful performance of any organization. The most important role of any information system is its information support. This paper develops an information support model and presents the results of a survey examining the effects of that model. The survey was performed among the employees of the Brčko District Government and comprised three phases. The first phase assesses the influence of the quality of information support and of information itself on decision making. The second phase examines the impact of information support in decision making on the perceived usefulness of, and user satisfaction with, information support. The third phase examines the effects of perceived usefulness and of satisfaction with information support on user loyalty. The model is presented using six hypotheses, which were tested by means of multivariate regression analysis. The model shows that the quality of information support and of information is of vital importance in the decision-making process. Perceived usefulness and user satisfaction are of vital importance for the continued use of information support. The model is universal and, if slightly modified, can be used in any sphere of life where the satisfaction of clients and users of a service is measured.

  2. Work and information processing in a solvable model of Maxwell's demon.

    Science.gov (United States)

    Mandal, Dibyendu; Jarzynski, Christopher

    2012-07-17

    We describe a minimal model of an autonomous Maxwell demon, a device that delivers work by rectifying thermal fluctuations while simultaneously writing information to a memory register. We solve exactly for the steady-state behavior of our model, and we construct its phase diagram. We find that our device can also act as a "Landauer eraser", using externally supplied work to remove information from the memory register. By exposing an explicit, transparent mechanism of operation, our model offers a simple paradigm for investigating the thermodynamics of information processing by small systems.

  3. Detecting Hotspot Information Using Multi-Attribute Based Topic Model.

    Directory of Open Access Journals (Sweden)

    Jing Wang

    Microblogging as a kind of social network has become more and more important in our daily lives. Enormous amounts of information are produced and shared on a daily basis. Detecting hot topics in the mountains of information can help people get to the essential information more quickly. However, due to short and sparse features, a large number of meaningless tweets, and other characteristics of microblogs, traditional topic detection methods are often ineffective in detecting hot topics. In this paper, we propose a new topic model named multi-attribute latent dirichlet allocation (MA-LDA), in which the time and hashtag attributes of microblogs are incorporated into the LDA model. By introducing the time attribute, the MA-LDA model can decide whether a word should appear in hot topics or not. Meanwhile, compared with the traditional LDA model, applying the hashtag attribute in the MA-LDA model gives the core words an artificially high ranking in the results, meaning that the expressiveness of the outcomes can be improved. Empirical evaluations on real data sets demonstrate that our method is able to detect hot topics more accurately and efficiently compared with several baselines. Our method provides strong evidence of the importance of the temporal factor in extracting hot topics.

  4. Detecting Hotspot Information Using Multi-Attribute Based Topic Model

    Science.gov (United States)

    Wang, Jing; Li, Li; Tan, Feng; Zhu, Ying; Feng, Weisi

    2015-01-01

    Microblogging as a kind of social network has become more and more important in our daily lives. Enormous amounts of information are produced and shared on a daily basis. Detecting hot topics in the mountains of information can help people get to the essential information more quickly. However, due to short and sparse features, a large number of meaningless tweets, and other characteristics of microblogs, traditional topic detection methods are often ineffective in detecting hot topics. In this paper, we propose a new topic model named multi-attribute latent dirichlet allocation (MA-LDA), in which the time and hashtag attributes of microblogs are incorporated into the LDA model. By introducing the time attribute, the MA-LDA model can decide whether a word should appear in hot topics or not. Meanwhile, compared with the traditional LDA model, applying the hashtag attribute in the MA-LDA model gives the core words an artificially high ranking in the results, meaning that the expressiveness of the outcomes can be improved. Empirical evaluations on real data sets demonstrate that our method is able to detect hot topics more accurately and efficiently compared with several baselines. Our method provides strong evidence of the importance of the temporal factor in extracting hot topics. PMID:26496635

  5. Levy Random Bridges and the Modelling of Financial Information

    OpenAIRE

    Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea

    2009-01-01

    The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...

  6. Thalamic neuron models encode stimulus information by burst-size modulation

    Directory of Open Access Journals (Sweden)

    Daniel Henry Elijah

    2015-09-01

    Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.

  7. Thalamic neuron models encode stimulus information by burst-size modulation.

    Science.gov (United States)

    Elijah, Daniel H; Samengo, Inés; Montemurro, Marcelo A

    2015-01-01

    Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
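    The reverse-correlation analysis described in this record can be sketched as a burst-triggered average. A toy illustration (with 1 ms bins the window sizes match the ~100 ms before / ~20 ms after onset reported above; everything else is assumed):

```python
def burst_triggered_average(stimulus, burst_onsets, pre=100, post=20):
    """Average the stimulus samples in a window running from `pre` samples
    before to `post` samples after each burst onset, skipping onsets whose
    window would fall outside the recording."""
    windows = [stimulus[t - pre:t + post]
               for t in burst_onsets if pre <= t <= len(stimulus) - post]
    n = len(windows)
    # Element-wise mean across windows: the stimulus feature preceding bursts.
    return [sum(column) / n for column in zip(*windows)]
```

    The resulting average is the first step toward the optimal linear feature the authors search for.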

  8. An architecture model for multiple disease management information systems.

    Science.gov (United States)

    Chen, Lichin; Yu, Hui-Chu; Li, Hao-Chun; Wang, Yi-Van; Chen, Huang-Jen; Wang, I-Ching; Wang, Chiou-Shiang; Peng, Hui-Yu; Hsu, Yu-Ling; Chen, Chi-Huang; Chuang, Lee-Ming; Lee, Hung-Chang; Chung, Yufang; Lai, Feipei

    2013-04-01

    Disease management is a program which attempts to overcome the fragmentation of the healthcare system and improve the quality of care. Many studies have proven the effectiveness of disease management. However, case managers spend the majority of their time on documentation and on coordinating the members of the care team. They need a tool that supports their daily practice and streamlines the inefficient workflow. Several discussions have indicated that information technology plays an important role in the era of disease management. Although applications have been developed, it is inefficient to develop an information system for each disease management program individually. The aim of this research is to support the work of disease management, reform the inefficient workflow, and propose an architecture model that enhances the reusability and reduces the development time of information systems. The proposed architecture model has been successfully implemented in two disease management information systems, and the result was evaluated through reusability analysis, time-consumption analysis, pre- and post-implementation workflow analysis, and a user questionnaire survey. The reusability of the proposed model was high, less than half of the time was consumed, and the workflow was improved. Overall user feedback was positive, and the system was rated as highly supportive of daily workflow. The system empowers case managers with better information and leads to better decision making.

  9. Informational model verification of ZVS Buck quasi-resonant DC-DC converter

    Science.gov (United States)

    Vakovsky, Dimiter; Hinov, Nikolay

    2016-12-01

    The aim of this paper is to create a polymorphic informational model of a ZVS Buck quasi-resonant DC-DC converter for modeling the object. The model is built on flexible open standards for defining, storing, publishing and exchanging data in a distributed information environment. The resulting model is useful for creating many variants of different types, with different configurations of the constituent elements and different internal models of the examined object.

  10. Possibilities of water run-off models using geographical information systems

    International Nuclear Information System (INIS)

    Oeverland, H.; Kleeberg, H.B.

    1992-01-01

    The movement of water in a given region is determined by a number of regional factors, e.g. land use and topography. However, the available precipitation-runoff models take little account of this regional information. Geographical information systems, on the other hand, are instruments for efficient management, presentation and evaluation of local information, so the best approach would be a combination of the two types of models. The requirements to be met by such a system are listed; they result from the processes to be modelled (continuous runoff, high-water runoff, mass transfer) but also from the available data and their acquisition and processing. Ten of the best-known precipitation-runoff models are presented and evaluated on the basis of the requirements listed. The basic concept of an integrated model is outlined, and the additional modules required for modelling are defined. (orig./BBR) [de]

  11. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
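    The least-squares construction can be illustrated with a deliberately simplified objective (matching marginal standard deviations; the paper's actual objective differs). Setting alpha_i = N * m_i preserves the target alpha-factor means exactly, so only the precision N is fit:

```python
def min_informative_dirichlet(means, target_sds):
    """Toy least-squares sketch (not the paper's exact method): the marginal of
    a Dirichlet(alpha) with alpha_i = N*m_i is Beta(N*m_i, N*(1-m_i)), whose
    variance is m_i*(1-m_i)/(N+1). Fit the single precision N so the marginal
    spreads match the targets in the least-squares sense."""
    def sq_err(N):
        return sum((m * (1.0 - m) / (N + 1.0) - s * s) ** 2
                   for m, s in zip(means, target_sds))
    N = min((0.1 * k for k in range(1, 2001)), key=sq_err)  # coarse grid search
    return [N * m for m in means]
```

    A small fitted precision keeps the prior diffuse, so posterior estimates remain responsive to sparse common-cause failure data.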

  12. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    Kelly, Dana; Atwood, Corwin

    2011-01-01

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  13. Process-aware information systems : bridging people and software through process technology

    NARCIS (Netherlands)

    Dumas, M.; Aalst, van der W.M.P.; Hofstede, ter A.H.M.

    2005-01-01

    A unifying foundation to design and implement process-aware information systems. This publication takes on the formidable task of establishing a unifying foundation and set of common underlying principles to effectively model, design, and implement process-aware information systems. Authored by

  14. Global Information Enterprise (GIE) Modeling and Simulation (GIESIM)

    National Research Council Canada - National Science Library

    Bell, Paul

    2005-01-01

    ... AND S) toolkits into the Global Information Enterprise (GIE) Modeling and Simulation (GIESim) framework to create effective user analysis of candidate communications architectures and technologies...

  15. Approaching the Affective Factors of Information Seeking: The Viewpoint of the Information Search Process Model

    Science.gov (United States)

    Savolainen, Reijo

    2015-01-01

    Introduction: The article contributes to the conceptual studies of affective factors in information seeking by examining Kuhlthau's information search process model. Method: This random-digit dial telephone survey of 253 people (75% female) living in a rural, medically under-serviced area of Ontario, Canada, follows-up a previous interview study…

  16. An Overview of Models, Methods, and Reagents Developed for Translational Autoimmunity Research in the Common Marmoset (Callithrix jacchus)

    NARCIS (Netherlands)

    Jagessar, S. Anwar; Vierboom, Michel; Blezer, Erwin L. A.; Bauer, Jan; 't Hart, Bert A.; Kap, Yolanda S.

    The common marmoset (Callithrix jacchus) is a small-bodied Neotropical primate and a useful preclinical animal model for translational research into autoimmune-mediated inflammatory diseases (AIMID), such as rheumatoid arthritis (RA) and multiple sclerosis (MS). The animal model for MS established

  17. An overview of models, methods, and reagents developed for translational autoimmunity research in the common marmoset (Callithrix jacchus)

    NARCIS (Netherlands)

    S.A. Jagessar (Anwar); M.P.M. Vierboom (Michel); E. Blezer (Erwin); J. Bauer; B.A. 't Hart (Bert); Y.S. Kap (Yolanda)

    2013-01-01

    textabstractThe common marmoset (Callithrix jacchus) is a small-bodied Neotropical primate and a useful preclinical animal model for translational research into autoimmune-mediated inflammatory diseases (AIMID), such as rheumatoid arthritis (RA) and multiple sclerosis (MS). The animal model for MS

  18. Social insect colony as a biological regulatory system: modelling information flow in dominance networks.

    Science.gov (United States)

    Nandi, Anjan K; Sumana, Annagiri; Bhattacharya, Kunal

    2014-12-06

    Social insects provide an excellent platform to investigate flow of information in regulatory systems since their successful social organization is essentially achieved by effective information transfer through complex connectivity patterns among the colony members. Network representation of such behavioural interactions offers a powerful tool for structural as well as dynamical analysis of the underlying regulatory systems. In this paper, we focus on the dominance interaction networks in the tropical social wasp Ropalidia marginata-a species where behavioural observations indicate that such interactions are principally responsible for the transfer of information between individuals about their colony needs, resulting in a regulation of their own activities. Our research reveals that the dominance networks of R. marginata are structurally similar to a class of naturally evolved information processing networks, a fact confirmed also by the predominance of a specific substructure-the 'feed-forward loop'-a key functional component in many other information transfer networks. The dynamical analysis through Boolean modelling confirms that the networks are sufficiently stable under small fluctuations and yet capable of more efficient information transfer compared to their randomized counterparts. Our results suggest the involvement of a common structural design principle in different biological regulatory systems and a possible similarity with respect to the effect of selection on the organization levels of such systems. The findings are also consistent with the hypothesis that dominance behaviour has been shaped by natural selection to co-opt the information transfer process in such social insect species, in addition to its primal function of mediation of reproductive competition in the colony. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
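    The 'feed-forward loop' substructure highlighted above can be counted directly in a directed interaction network. A minimal sketch (the edge data are hypothetical):

```python
from itertools import permutations

def count_feed_forward_loops(edges):
    """A feed-forward loop is an ordered node triple (x, y, z) carrying the
    edges x->y and y->z plus the shortcut x->z."""
    edge_set = set(edges)
    nodes = {node for edge in edges for node in edge}
    return sum(1 for x, y, z in permutations(nodes, 3)
               if (x, y) in edge_set and (y, z) in edge_set and (x, z) in edge_set)

# A dominance network would be built from observed pairwise interactions, e.g.:
motifs = count_feed_forward_loops([("w1", "w2"), ("w2", "w3"), ("w1", "w3")])
```

    Comparing such motif counts against randomized networks with the same degrees is the standard way to establish the predominance the authors report.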

  19. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    Science.gov (United States)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation of FGIDB. This paper designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database step by step based on the quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for the quality evaluation of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology of the Fundamental Geographic Information Database.

  20. Simulation Experiments with the Model of Information-Psychological Influences on Mass Consciousness

    Directory of Open Access Journals (Sweden)

    Vladimir A. Minaev

    2017-06-01

    The article deals with the problem of researching the dynamics of information-psychological influences on mass consciousness and the possibility of their forecasting and management, which is one of the most important aspects of ensuring the information-psychological security of society and its citizens. To research the dynamics of information-psychological influence on mass consciousness, the article suggests the method of system-dynamic modeling, grounded and implemented on models of complex socio-economic phenomena by J. Forrester in the 1950s. The application of this method to solving problems of information security has been investigated by various foreign scientific teams. The method of system-dynamic modeling makes it possible to display and investigate many factors that critically affect the processes of information-psychological influence on mass consciousness. The following factors are taken into account in the model proposed by the authors: the probability of «enthusiasm» for the ideas embedded in the content of the influences, through interpersonal contact and as a result of the influence of the mass media; the massiveness and regularity of the mass media propagandizing those ideas; the probability of forgetting the embedded idea; the probability of communication on the embedded topic; the probability of «enthusiasm» for the ideas in a single communication contact; the average number of acquaintances with one message in the media; the probability of «enthusiasm» for a new idea from the content of the influences after reading the information in the media; and the number of daily contacts of a person captured by the ideas. The mathematical apparatus on
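    The kind of system-dynamic model described can be sketched as a daily stock-and-flow update driven by three of the listed factors: interpersonal contagion, media exposure, and forgetting. All parameter values below are illustrative assumptions, not the article's:

```python
def simulate_influence(days=60, population=100_000, p_adopt_contact=0.05,
                       contacts_per_day=8, p_adopt_media=0.002, p_forget=0.02):
    """One stock (people captured by the embedded idea) updated daily by
    inflows from interpersonal contact and mass media and an outflow from
    forgetting; returns the stock's trajectory."""
    captured = 1.0
    history = [captured]
    for _ in range(days):
        susceptible = population - captured
        # Chance a susceptible person adopts via at least one daily contact.
        p_day = 1.0 - (1.0 - p_adopt_contact * captured / population) ** contacts_per_day
        inflow = susceptible * (p_day + p_adopt_media)
        outflow = captured * p_forget
        captured = min(population, captured + inflow - outflow)
        history.append(captured)
    return history
```

    Varying the media and forgetting parameters is exactly the kind of simulation experiment the article's title refers to.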

  1. The PDS4 Information Model and its Role in Agile Science Data Curation

    Science.gov (United States)

    Hughes, J. S.; Crichton, D.

    2017-12-01

    PDS4 is an information model-driven service architecture supporting the capture, management, distribution and integration of massive planetary science data captured in distributed data archives world-wide. The PDS4 Information Model (IM), the core element of the architecture, was developed using lessons learned from 20 years of archiving Planetary Science Data and best practices for information model development. The foundational principles were adopted from the Open Archival Information System (OAIS) Reference Model (ISO 14721), the Metadata Registry Specification (ISO/IEC 11179), and W3C XML (Extensible Markup Language) specifications. These provided, respectively, an object-oriented model for archive information systems, a comprehensive schema for data dictionaries and hierarchical governance, and rules for encoding documents electronically. The PDS4 Information Model is unique in that it drives the PDS4 infrastructure by providing the representation of concepts and their relationships, constraints, rules, and operations; a sharable, stable, and organized set of information requirements; and machine-parsable definitions that are suitable for configuring and generating code. This presentation will provide an overview of the PDS4 Information Model and how it is being leveraged to develop and evolve the PDS4 infrastructure and enable agile curation of over 30 years of science data collected by the international Planetary Science community.

  2. Dynamic information architecture system (DIAS) : multiple model simulation management.

    Energy Technology Data Exchange (ETDEWEB)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-05-13

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstraction of the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object with which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers
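    The separation of WHAT from HOW described above can be sketched in a few lines; class and method names here are illustrative, not DIAS's actual API:

```python
class Entity:
    """A domain entity in the DIAS style: Parameters hold state, Aspects name
    behaviors (the WHAT), and any plugged-in model supplies the HOW."""
    def __init__(self, name, **parameters):
        self.name = name
        self.parameters = dict(parameters)  # state properties
        self._aspects = {}                  # behavior name -> model callable

    def register_aspect(self, behavior, model):
        """Plug in, or swap out, the model implementing a behavior."""
        self._aspects[behavior] = model

    def behave(self, behavior):
        """Models communicate only with the Entity, never with each other."""
        return self._aspects[behavior](self.parameters)

# Plug-and-play: an alternative runoff model can replace this one without
# touching the Entity or any other model.
watershed = Entity("watershed", rainfall_mm=12.0, runoff_coeff=0.3)
watershed.register_aspect("runoff", lambda p: p["rainfall_mm"] * p["runoff_coeff"])
```

    Because models see only the Entity's parameters, even a legacy model wrapped in a callable can drive the behavior with no recoding of the rest of the simulation.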

  3. Implications of Information Theory for Computational Modeling of Schizophrenia.

    Science.gov (United States)

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
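    The foundational quantities named here, Shannon information and entropy, are directly computable; a minimal sketch:

```python
from math import log2

def shannon_entropy(probs):
    """H(X) = -sum_i p_i * log2(p_i), in bits; zero-probability outcomes
    contribute nothing by the usual 0*log(0) = 0 convention."""
    return -sum(p * log2(p) for p in probs if p > 0.0)

# A fair coin carries 1 bit of uncertainty; a certain outcome carries none.
coin = shannon_entropy([0.5, 0.5])
```

    Metrics like this give the kind of quantitative handle on disrupted information processing that the article advocates.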

  4. A Product Development Decision Model for Cockpit Weather Information Systems

    Science.gov (United States)

    Sireli, Yesim; Kauffmann, Paul; Gupta, Surabhi; Kachroo, Pushkin

    2003-01-01

    There is a significant market demand for advanced cockpit weather information products. However, it is unclear how to identify the most promising technological options that provide the desired mix of consumer requirements by employing feasible technical systems at a price that achieves market success. This study develops a unique product development decision model that employs Quality Function Deployment (QFD) and Kano's model of consumer choice. This model is specifically designed for exploration and resolution of this and similar information technology related product development problems.

  5. A Product Development Decision Model for Cockpit Weather Information System

    Science.gov (United States)

    Sireli, Yesim; Kauffmann, Paul; Gupta, Surabhi; Kachroo, Pushkin; Johnson, Edward J., Jr. (Technical Monitor)

    2003-01-01

    There is a significant market demand for advanced cockpit weather information products. However, it is unclear how to identify the most promising technological options that provide the desired mix of consumer requirements by employing feasible technical systems at a price that achieves market success. This study develops a unique product development decision model that employs Quality Function Deployment (QFD) and Kano's model of consumer choice. This model is specifically designed for exploration and resolution of this and similar information technology related product development problems.

  6. An information model of a centralized admission campaign in ...

    African Journals Online (AJOL)

    The aim of the work is to structure individual application environments of the information model of a centralized admission campaign in higher education institutions in Russia by modifying the corresponding structure of the Federal information system supporting state final examination and admission procedures. The ...

  7. The most common friend first immunization

    International Nuclear Information System (INIS)

    Nian Fu-Zhong; Hu Cha-Sheng

    2016-01-01

    In this paper, a standard susceptible-infected-recovered-susceptible (SIRS) epidemic model based on the Watts–Strogatz (WS) small-world network model and the Barabási–Albert (BA) scale-free network model is established, and a new immunization scheme — “the most common friend first immunization” — is proposed, in which the node that is the most common friend is immunized first, as a second layer of protection for complex networks. The propagation situations of three different immunization schemes — random immunization, high-risk immunization, and the most common friend first immunization — are studied. At the same time, the dynamic behaviors are also studied on the WS small-world and the BA scale-free networks. Moreover, the analytic and simulated results indicate that the immune effect of the most common friend first immunization is better than that of random immunization, but slightly worse than that of high-risk immunization. However, high-risk immunization still has some limitations; for example, it is difficult to define accurately who one's direct neighbours are in real life. Compared with traditional immunization strategies and their shortcomings, the most common friend first immunization is effective and consistent with real-world conditions. (paper)
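The target-selection step of the scheme can be sketched as follows: rank nodes by how often they appear as a *common* friend of pairs of other nodes, and immunize the top-ranked ones. This is only a plausible reading of the selection rule, with an invented toy graph; the SIRS spreading dynamics from the paper are not reproduced here.

```python
import itertools
from collections import Counter

def most_common_friends(adj, k):
    """Return the k nodes that most often appear as a common friend.

    adj: dict node -> set of neighbours (undirected graph).
    """
    counts = Counter()
    for u, v in itertools.combinations(adj, 2):
        for w in adj[u] & adj[v]:   # w is a friend common to both u and v
            counts[w] += 1
    return [node for node, _ in counts.most_common(k)]

# toy graph: node 0 is friends with everyone, so it mediates most pairs
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
print(most_common_friends(adj, 1))  # [0]
```

Unlike high-risk (acquaintance-based) immunization, this ranking needs only the observed network structure, which matches the abstract's point that real-life direct neighbours are hard to identify.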

  8. Building Information Modelling and Standardised Construction Contracts: a Content Analysis of the GC21 Contract

    Directory of Open Access Journals (Sweden)

    Aaron Manderson

    2015-08-01

    Full Text Available Building Information Modelling (BIM) is seen as a panacea to many of the ills confronting the Architectural, Engineering and Construction (AEC) sector. In spite of its well-documented benefits, the widespread integration of BIM into the project lifecycle is yet to occur. One commonly identified barrier to BIM adoption is the perceived legal risk associated with its integration, coupled with the need for implementation in a collaborative environment. Many existing standardised contracts used in the Australian AEC industry were drafted before the emergence of BIM. As BIM continues to become ingrained in the delivery process, the shortcomings of these existing contracts have become apparent. This paper reports on a study that reviewed and consolidated the contractual and legal concerns associated with BIM implementation. The findings of the review were used to conduct a qualitative content analysis of the GC21 2nd edition, an Australian standardised construction contract, to identify possible changes to facilitate the implementation of BIM in a collaborative environment. The findings identified a number of changes, including the need to adopt a collaborative contract structure with equitable risk and reward mechanisms, recognition of the model as a contract document, and the need for standardisation of communication/information exchange.

  9. MODELING OF TECHNICAL CHANNELS OF INFORMATION LEAKAGE AT DISTRIBUTED CONTROL OBJECTS

    Directory of Open Access Journals (Sweden)

    Aleksander Vladimirovich Karpov

    2018-05-01

    Full Text Available The significantly increased requirements for the functioning of distributed control objects cannot be met solely by widening and strengthening security control measures. The first step in ensuring information security at such objects is analysis of their operating conditions and modeling of the technical channels of information leakage. Developing models of such channels is essentially the only way to study their capabilities fully, and it is aimed at obtaining quantitative assessments of the safe operation of compound objects. These estimates are needed to decide, against the current criterion, how well information is protected from leakage. Existing models were developed for standard concentrated objects and evaluate the level of protection from leakage on each channel separately, which significantly increases both the required protective resource and the time needed to assess the information security of the object as a whole. The article deals with a logical-and-probabilistic method for the security assessment of structurally compound objects. A model of an information leak at distributed control objects is cited as an example. It is recommended to use a software package for automated structural-logical modeling of compound systems, which makes it possible to evaluate the risk of information leakage through an acoustic (loudspeaker) channel. The possibility of information leakage through technical channels is evaluated, and differential characteristics of the safe operation of distributed control objects, such as the positive and negative contributions of the initiating events and conditions that cause a leak, are calculated. Purpose. The aim is a quantitative assessment of data risk, which is necessary for justifying the rational composition of organizational and technical protection measures, as well as a variant of the structure of the information security system from a

  10. Bayesian networks and information theory for audio-visual perception modeling.

    Science.gov (United States)

    Besson, Patricia; Richiardi, Jonas; Bourdin, Christophe; Bringoux, Lionel; Mestre, Daniel R; Vercher, Jean-Louis

    2010-09-01

    Thanks to their different senses, human observers acquire multiple pieces of information from their environment. Complex cross-modal interactions occur during this perceptual process. This article proposes a framework to analyze and model these interactions through a rigorous and systematic data-driven process. This requires considering the general relationships between the physical events or factors involved in the process, not only in quantitative terms, but also in terms of the influence of one factor on another. We use tools from information theory and probabilistic reasoning to derive relationships between the random variables of interest, where the central notion is that of conditional independence. Using mutual information analysis to guide the model elicitation process, a probabilistic causal model encoded as a Bayesian network is obtained. We exemplify the method by using data collected in an audio-visual localization task for human subjects, and we show that it yields a well-motivated model with good predictive ability. The model elicitation process offers new prospects for the investigation of the cognitive mechanisms of multisensory perception.
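Conditional independence, the central notion above, can be tested directly with conditional mutual information: I(X; Y | Z) = 0 exactly when X and Y are independent given Z, so the arc X → Y can be dropped from the network. The three-variable distribution below is an invented common-cause example, not data from the paper.

```python
import math
from collections import Counter

def cond_mutual_information(joint):
    """I(X; Y | Z) in bits from a dict (x, y, z) -> p(x, y, z)."""
    pz, pxz, pyz = Counter(), Counter(), Counter()
    for (x, y, z), p in joint.items():
        pz[z] += p
        pxz[(x, z)] += p
        pyz[(y, z)] += p
    return sum(p * math.log2(pz[z] * p / (pxz[(x, z)] * pyz[(y, z)]))
               for (x, y, z), p in joint.items() if p > 0)

# both cues x and y are copies of a common cause z: they look perfectly
# correlated marginally, yet are conditionally independent given z,
# so no direct edge between them is needed in the Bayesian network
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(cond_mutual_information(joint))  # 0.0
```

This is the kind of screening that lets mutual-information analysis guide which edges a data-driven Bayesian network should contain.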

  11. The illusion of common ground

    DEFF Research Database (Denmark)

    Cowley, Stephen; Harvey, Matthew

    2016-01-01

    When people talk about “common ground”, they invoke shared experiences, convictions, and emotions. In the language sciences, however, ‘common ground’ also has a technical sense. Many taking a representational view of language and cognition seek to explain that everyday feeling in terms of how...... isolated individuals “use” language to communicate. Autonomous cognitive agents are said to use words to communicate inner thoughts and experiences; in such a framework, ‘common ground’ describes a body of information that people allegedly share, hold common, and use to reason about how intentions have......, together with concerted bodily (and vocal) activity, serve to organize, regulate and coordinate both attention and the verbal and non-verbal activity that it gives rise to. Since wordings are normative, they can be used to develop skills for making cultural sense of environments and other peoples’ doings...

  12. The Role Innovative Housing Models Play in the Struggle against Social Exclusion in Cities: The Brisbane Common Ground Model

    Directory of Open Access Journals (Sweden)

    Petra Perolini

    2015-04-01

    Full Text Available The history of housing in Australia is a textbook example of socio-spatial exclusion as described, defined and analysed by commentators from Mumford to Lefebvre. It has been exacerbated by a culture of home ownership that has led to an affordability crisis. An examination of the history reveals that the problems are structural and must be approached not as a practical solution to the public provision of housing, but as a reshaping of lives, a reconnection to community, and as an ethical and equitable “right to the city”. This “Right to the City” has underpinned the Common Ground approach, emerging in a range of cities and adopted in South Brisbane, Queensland Australia. This paper examines the Common Ground approach and the impacts on its residents and in the community with a view to exploring further developments in this direction. A clear understanding of these lessons underpins, and should inform, a new approach to reconnecting the displaced and to developing solutions that not only enhance their lives but also the community at large.

  13. Application of isotopic information for estimating parameters in Philip infiltration model

    Directory of Open Access Journals (Sweden)

    Tao Wang

    2016-10-01

    Full Text Available Minimizing parameter uncertainty is crucial in the application of hydrologic models. Isotopic information in various hydrologic components of the water cycle can expand our knowledge of the dynamics of water flow in the system, provide additional information for parameter estimation, and improve parameter identifiability. This study combined the Philip infiltration model with an isotopic mixing model, using an isotopic mass balance approach to estimate parameters in the Philip infiltration model. Two approaches to parameter estimation were compared: (a) using isotopic information to determine the soil water transmission and then hydrologic information to estimate the soil sorptivity, and (b) using hydrologic information to determine both the soil water transmission and the soil sorptivity. Results of parameter estimation were verified through a rainfall infiltration experiment in a laboratory under rainfall with constant isotopic compositions and uniform initial soil water content conditions. Experimental results showed that approach (a), using isotopic and hydrologic information, estimated the soil water transmission in the Philip infiltration model in a manner that matched measured values well. The results of parameter estimation with approach (a) were better than those with approach (b). It was also found that the analytical precision of hydrogen and oxygen stable isotopes had a significant effect on parameter estimation using isotopic information.
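Philip's two-term equation gives cumulative infiltration as I(t) = S·√t + A·t, with sorptivity S and a transmission term A. Approach (b), estimating both parameters from hydrologic data alone, reduces to a linear least-squares fit in the basis [√t, t]. The sketch below uses synthetic measurements with assumed units (mm, h); it does not reproduce the paper's isotopic mass-balance step.

```python
import math

def fit_philip(times, infiltration):
    """Least-squares estimates of (S, A) for I(t) = S*sqrt(t) + A*t."""
    # normal equations for the two-column design matrix [sqrt(t), t]
    s11 = sum(times)                         # sum of sqrt(t)^2
    s12 = sum(t ** 1.5 for t in times)       # sum of sqrt(t)*t
    s22 = sum(t ** 2 for t in times)
    b1 = sum(math.sqrt(t) * i for t, i in zip(times, infiltration))
    b2 = sum(t * i for t, i in zip(times, infiltration))
    det = s11 * s22 - s12 * s12
    S = (b1 * s22 - b2 * s12) / det
    A = (s11 * b2 - s12 * b1) / det
    return S, A

# synthetic observations generated with S = 2.0, A = 0.5
times = [0.25, 0.5, 1.0, 2.0, 4.0]
obs = [2.0 * math.sqrt(t) + 0.5 * t for t in times]
S, A = fit_philip(times, obs)
print(round(S, 3), round(A, 3))  # 2.0 0.5
```

Approach (a) would instead fix the transmission term from the isotopic mixing model and fit only S, which is why the two approaches can yield different parameter estimates from the same hydrologic data.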

  14. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguard's standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)

  15. Information Modeling for Direct Control of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob

    2013-01-01

    We present an architecture for an unbundled liberalized electricity market system where a virtual power plant (VPP) is able to control a number of distributed energy resources (DERs) directly through a two-way communication link. The aggregator who operates the VPP utilizes the accumulated...... a desired accumulated response. In this paper, we design such an information model based on the markets that the aggregator participates in and based on the flexibility characteristics of the remote controlled DERs. The information model is constructed in a modular manner making the interface suitable...

  16. GRAMMAR RULE BASED INFORMATION RETRIEVAL MODEL FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Nadana Ravishankar

    2015-07-01

    Full Text Available Though Information Retrieval (IR in big data has been an active field of research for past few years; the popularity of the native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information which is present in English and display it in the native language for users. This aim of cross language information retrieval is complicated by unique features of the native languages such as: morphology, compound word formations, word spelling variations, ambiguity, word synonym, other language influence and etc. To overcome some of these issues, the native language is modeled using a grammar rule based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base coupled with the customized ontological system shows considerable potential and is found to show better precision and recall.

  17. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
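Calibration viewed as estimation, as described above, amounts to choosing the parameter value that minimizes disagreement between model outputs and observed targets. The grid-search sketch below illustrates that framing; the structural model, targets, and units are invented for the example, and the paper discusses the concept rather than this specific routine.

```python
def model_output(rate):
    # hypothetical structural model: cumulative prevalence (%) after
    # t periods under a constant per-period event rate
    return [100 * (1 - (1 - rate) ** t) for t in (1, 2, 3)]

observed = [19.0, 34.4, 47.0]   # synthetic calibration targets

def calibrate(candidates):
    """Pick the candidate rate minimizing the sum of squared errors."""
    def sse(rate):
        return sum((m - o) ** 2
                   for m, o in zip(model_output(rate), observed))
    return min(candidates, key=sse)

best = calibrate([i / 100 for i in range(1, 51)])
print(best)  # 0.19
```

Seen this way, the calibrated value is an estimate with identifiability and uncertainty questions attached, which is the point the authors make in contrasting formal estimation with informal input-tuning.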

  18. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects that the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.
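The hybrid idea can be sketched as a mathematical model plus a data-driven correction fitted to the residual the idealization leaves out. The "learning" below is a plain bucketed-mean stand-in, not the Autoprogressive neural-network component of the paper, and the linear "stiffness" model and data are invented.

```python
def math_model(x):
    """Idealized mathematical model (assumed linear response)."""
    return 2.0 * x

def fit_residual(xs, ys):
    """Data-driven component: average observed error per input bucket."""
    buckets = {}
    for x, y in zip(xs, ys):
        buckets.setdefault(round(x), []).append(y - math_model(x))
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def hybrid_model(x, residual):
    """Mathematical model complemented by the informational component."""
    return math_model(x) + residual.get(round(x), 0.0)

# observations from a system whose behaviour the idealization misses
xs = [1.0, 2.0, 3.0]
ys = [2.5, 4.5, 6.5]           # true response here is 2x + 0.5
residual = fit_residual(xs, ys)
print(hybrid_model(2.0, residual))  # 4.5
```

Because the mathematical core still carries the bulk of the response, the data-driven part only has to represent the small missing aspect, which is what makes the hybrid more robust than a purely data-centric model for states outside the training data.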

  19. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. Traditional information retrieval models commonly used in commercial search engines are based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the user, so that a great deal of irrelevant information is returned, burdening the user with picking useful answers out of irrelevant results. To tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is that it enhances search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
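One common way an ontology supports such retrieval is through path-based semantic similarity: concepts close together in the is-a hierarchy score higher than unrelated ones, so a query can match documents beyond exact keywords. The tiny hierarchy and the 1/(1 + path length) scoring below are invented for illustration and are not the paper's specific method.

```python
# minimal is-a hierarchy: child -> parent (None marks the root)
parent = {"poodle": "dog", "dog": "animal", "cat": "animal", "animal": None}

def ancestors(concept):
    """Return the concept followed by its chain of ancestors."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = parent[concept]
    return chain

def similarity(a, b):
    """1 / (1 + length of the is-a path through the lowest common ancestor)."""
    pa, pb = ancestors(a), ancestors(b)
    for up_a, concept in enumerate(pa):
        if concept in pb:
            return 1.0 / (1 + up_a + pb.index(concept))
    return 0.0

print(similarity("poodle", "dog"))   # 0.5
print(similarity("poodle", "cat"))   # 0.25
print(similarity("dog", "dog"))      # 1.0
```

Ranking documents by such concept closeness, rather than by keyword overlap alone, is what lets an ontology-backed engine suppress results that merely share words with the query but fall outside the user's domain of interest.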

  20. The Application of Use Case Modeling in Designing Medical Imaging Information Systems

    International Nuclear Information System (INIS)

    Safdari, Reza; Farzi, Jebraeil; Ghazisaeidi, Marjan; Mirzaee, Mahboobeh; Goodini, Azadeh

    2013-01-01

    Introduction. This essay is aimed at examining the application of use case modeling in analyzing and designing information systems to support Medical Imaging services. Methods. The application of use case modeling in analyzing and designing health information systems was examined using electronic database (PubMed, Google Scholar) resources, and the characteristics of the modeling approach and its effect on the development and design of health information systems were analyzed. Results. The analysis indicated that provident modeling of health information systems should provide quick access to many health data resources, in such a way that patients' data can be used to expand remote services and comprehensive Medical Imaging advice. These experiences also show that progressing through the infrastructure development stages by gradual and repeated evolution of user requirements is more robust, and this can shorten the requirements engineering cycle in the design of Medical Imaging information systems. Conclusion. The use case modeling approach can be effective in directing the problems of health and Medical Imaging information systems towards understanding, focusing on the start and analysis, better planning, repetition, and control

  1. Proposed Model of Information Behaviour in Crisis: The Case of Hurricane Sandy

    Science.gov (United States)

    Lopatovska, Irene; Smiley, Bobby

    2013-01-01

    Introduction: The paper proposes a model of information behaviour in crisis. No previous model has attempted to integrate information resources, information behaviour and needs of the storm-affected communities within the temporal stages of a natural disaster. Method: The study was designed as autoethnography. The data were collected through a…

  2. Common carp disrupt ecosystem structure and function through middle-out effects

    Science.gov (United States)

    Kaemingk, Mark A.; Jolley, Jeffrey C.; Paukert, Craig P.; Willis, David W.; Henderson, Kjetil R.; Holland, Richard S.; Wanner, Greg A.; Lindvall, Mark L.

    2016-01-01

    Middle-out effects or a combination of top-down and bottom-up processes create many theoretical and empirical challenges in the realm of trophic ecology. We propose using specific autecology or species trait (i.e. behavioural) information to help explain and understand trophic dynamics that may involve complicated and non-unidirectional trophic interactions. The common carp (Cyprinus carpio) served as our model species for whole-lake observational and experimental studies; four trophic levels were measured to assess common carp-mediated middle-out effects across multiple lakes. We hypothesised that common carp could influence aquatic ecosystems through multiple pathways (i.e. abiotic and biotic foraging, early life feeding, nutrient). Both studies revealed most trophic levels were affected by common carp, highlighting strong middle-out effects likely caused by common carp foraging activities and abiotic influence (i.e. sediment resuspension). The loss of water transparency, submersed vegetation and a shift in zooplankton dynamics were the strongest effects. Trophic levels furthest from direct pathway effects were also affected (fish life history traits). The present study demonstrates that common carp can exert substantial effects on ecosystem structure and function. Species capable of middle-out effects can greatly modify communities through a variety of available pathways and are not confined to traditional top-down or bottom-up processes.

  3. The Value of Information for Populations in Varying Environments

    Science.gov (United States)

    Rivoire, Olivier; Leibler, Stanislas

    2011-04-01

    The notion of information pervades informal descriptions of biological systems, but formal treatments face the problem of defining a quantitative measure of information rooted in a concept of fitness, which is itself an elusive notion. Here, we present a model of population dynamics where this problem is amenable to a mathematical analysis. In the limit where any information about future environmental variations is common to the members of the population, our model is equivalent to known models of financial investment. In this case, the population can be interpreted as a portfolio of financial assets and previous analyses have shown that a key quantity of Shannon's communication theory, the mutual information, sets a fundamental limit on the value of information. We show that this bound can be violated when accounting for features that are irrelevant in finance but inherent to biological systems, such as the stochasticity present at the individual level. This leads us to generalize the measures of uncertainty and information usually encountered in information theory.
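In the financial analogy above, a population "bets" fractions of itself on environment states, and perfect side information improves the long-run log growth rate by at most the mutual information between the environment and the cue. The toy Kelly-style example below, with two equiprobable environments and even odds fixed at 2 (an assumption of the example, not a quantity from the paper), shows the bound achieved with equality.

```python
import math

p = {"wet": 0.5, "dry": 0.5}    # environment distribution

# without information: bet proportionally (Kelly), payoff odds of 2
# long-run growth rate = sum_x p(x) * log2(b(x) * odds) with b(x) = p(x)
growth_blind = sum(p[x] * math.log2(p[x] * 2) for x in p)

# with perfect information: put everything on the realized state
growth_informed = sum(p[x] * math.log2(1.0 * 2) for x in p)

gain = growth_informed - growth_blind
env_entropy = -sum(q * math.log2(q) for q in p.values())
print(gain == env_entropy)  # True: gain = H(X) = I(X; X) = 1 bit
```

The paper's point is that this classical bound can be violated once individual-level stochasticity, which has no analogue in a financial portfolio, enters the population dynamics.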

  4. Information operation/information warfare modeling and simulation

    OpenAIRE

    Buettner, Raymond

    2000-01-01

    Information Operations have always been a part of warfare. However, this aspect of warfare is having ever-greater importance as forces rely more and more on information as an enabler. Modern information systems make possible very rapid creation, distribution, and utilization of information. These same systems have vulnerabilities that can be exploited by enemy forces. Information force-on-force is important and complex. New tools and procedures are needed for this warfare arena. As these t...

  5. On the Enterprise Modelling of an Educational Information Infrastructure

    NARCIS (Netherlands)

    Widya, I.A.; Volman, C.J.A.M.; Pokraev, S.; de Diana, I.P.F.; Michiels, E.F.; Filipe, Joaquim; Sharp, Bernadette; Miranda, Paula

    2002-01-01

    This paper reports the modelling exercise of an educational information infrastructure that aims to support the organisation of teaching and learning activities suitable for a wide range of didactic policies. The modelling trajectory focuses on capturing invariant structures of relations between

  6. Using Interaction Scenarios to Model Information Systems

    DEFF Research Database (Denmark)

    Bækgaard, Lars; Bøgh Andersen, Peter

    The purpose of this paper is to define and discuss a set of interaction primitives that can be used to model the dynamics of socio-technical activity systems, including information systems, in a way that emphasizes structural aspects of the interaction that occurs in such systems. The primitives are based on a unifying, conceptual definition of the disparate interaction types - a robust model of the types. The primitives can be combined and may thus represent mediated interaction. We present a set of visualizations that can be used to define multiple related interactions, and we present and discuss a number of case studies that indicate that interaction primitives can be useful modeling tools for supplementing conventional flow-oriented modeling of business processes.

  7. Using a logical information model-driven design process in healthcare.

    Science.gov (United States)

    Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen

    2011-01-01

    A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.

  8. Modelling Dynamic Forgetting in Distributed Information Systems

    NARCIS (Netherlands)

    N.F. Höning (Nicolas); M.C. Schut

    2010-01-01

    We describe and model a new aspect in the design of distributed information systems. We build upon a previously described problem at the micro level, which asks how quickly agents should discount (forget) their experience: if they cherish their memories, they can build their reports on

  9. A Stochastic Model for Improving Information Security in Supply Chain Systems

    OpenAIRE

    Ibrahim Al Kattan; Ahmed Al Nunu; Kassem Saleh

    2009-01-01

    This article presents a probabilistic security model for supply chain management systems (SCM) in which the basic goals of security (including confidentiality, integrity, availability and accountability, CIAA) are modeled and analyzed. Consequently, the weak points in system security are identified. A stochastic model using measurable values to describe the information system security of a SCM is introduced. Information security is a crucial and integral part of the network of supply chains. ...

  10. Modelling of information diffusion on social networks with applications to WeChat

    Science.gov (United States)

    Liu, Liang; Qu, Bo; Chen, Bin; Hanjalic, Alan; Wang, Huijuan

    2018-04-01

    Traces of user activities recorded in online social networks open new possibilities for systematically understanding the information diffusion process on social networks. From the online social network WeChat, we collected a large number of information cascade trees, each of which records the spreading trajectory of a message, such as which user created the information and which users viewed or forwarded the information shared by which neighbours. In this work, we propose two heterogeneous non-linear models: one for the topologies of the information cascade trees, and the other for the stochastic process of information diffusion on a social network. Both models are validated against the WeChat data in reproducing and explaining key features of cascade trees. Specifically, we apply the Random Recursive Tree (RRT) to model the growth of cascade trees. The RRT model captures key features, i.e. the average path length and degree variance of a cascade tree in relation to the number of nodes (size) of the tree. Its single identified parameter quantifies the relative depth or broadness of the cascade trees and indicates whether information propagates via star-like broadcasting or viral-like hop-by-hop spreading. The RRT model explains the appearance of hubs, and thus a possibly smaller average path length as the cascade size increases, as observed in WeChat. We further propose the stochastic Susceptible View Forward Removed (SVFR) model to depict dynamic user behaviour, including creating, viewing, forwarding, and ignoring a message on a given social network. Besides the average path length and degree variance of the cascade trees in relation to their sizes, the SVFR model can further explain the power-law cascade size distribution in WeChat and reveals that a user with a large number of friends may actually have a smaller probability of reading a message he or she receives, due to limited attention.
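The baseline RRT is easy to simulate: each new node attaches to a uniformly chosen existing node, which produces the hubs and logarithmic depth mentioned above. The sketch below implements this uniform-attachment baseline and the two tree statistics; it omits the paper's extra parameter that tunes the depth/broadness trade-off.

```python
import random

def random_recursive_tree(n, seed=0):
    """Grow an RRT: each new node attaches to a uniformly chosen earlier node."""
    rng = random.Random(seed)
    parent = {0: None}
    for v in range(1, n):
        parent[v] = rng.randrange(v)   # uniform over the v existing nodes
    return parent

def mean_depth(parent):
    """Average root-to-node path length, a proxy for cascade depth."""
    total = 0
    for v in parent:
        depth, u = 0, v
        while parent[u] is not None:
            u = parent[u]
            depth += 1
        total += depth
    return total / len(parent)

def degree_variance(parent):
    """Variance of node degrees; large values signal broadcast-style hubs."""
    deg = {v: 0 for v in parent}
    for v, p in parent.items():
        if p is not None:
            deg[v] += 1
            deg[p] += 1
    mean = sum(deg.values()) / len(deg)
    return sum((d - mean) ** 2 for d in deg.values()) / len(deg)

tree = random_recursive_tree(1000)
print(0 < mean_depth(tree) < 20)   # mean depth grows like ln(n), ~6.9 here
print(degree_variance(tree) > 0)
```

Fitting these two statistics against their empirical counterparts, as functions of cascade size, is what identifies the model's single parameter and separates star-like broadcasting from viral hop-by-hop spreading.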

  11. Validation of the DeLone and McLean Information Systems Success Model

    OpenAIRE

    Ojo, Adebowale I.

    2017-01-01

    Objectives This study is an adaptation of the widely used DeLone and McLean information system success model in the context of hospital information systems in a developing country. Methods A survey research design was adopted in the study. A structured questionnaire was used to collect data from 442 health information management personnel in five Nigerian teaching hospitals. A structural equation modeling technique was used to validate the model's constructs. Results It was revealed that syst...

  12. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  13. An online database for informing ecological network models: http://kelpforest.ucsc.edu.

    Science.gov (United States)

    Beas-Luna, Rodrigo; Novak, Mark; Carr, Mark H; Tinker, Martin T; Black, August; Caselle, Jennifer E; Hoban, Michael; Malone, Dan; Iles, Alison

    2014-01-01

    Ecological network models and analyses are recognized as valuable tools for understanding the dynamics and resiliency of ecosystems, and for informing ecosystem-based approaches to management. However, few databases exist that can provide the life history, demographic and species interaction information necessary to parameterize ecological network models. Faced with the difficulty of synthesizing the information required to construct models for kelp forest ecosystems along the West Coast of North America, we developed an online database (http://kelpforest.ucsc.edu/) to facilitate the collation and dissemination of such information. Many of the database's attributes are novel, yet the structure is applicable and adaptable to other ecosystem modeling efforts. Information for each taxonomic unit includes stage-specific life history, demography, and body-size allometries. Species interactions include trophic, competitive, facilitative, and parasitic forms. Each data entry is temporally and spatially explicit. The online data entry interface allows researchers anywhere to contribute and access information. Quality control is facilitated by attributing each entry to unique contributor identities and source citations. The database has proven useful as an archive of species and ecosystem-specific information in the development of several ecological network models, for informing management actions, and for educational purposes (e.g., undergraduate and graduate training). To facilitate adaptation of the database by other researchers for other ecosystems, the code and technical details on how to customize this database and apply it to other ecosystems are freely available and located at the following link (https://github.com/kelpforest-cameo/databaseui).

  14. The Retrieval of Information in an Elementary School Library Media Center: An Alternative Method of Classification in the Common School Library, Amherst, Massachusetts.

    Science.gov (United States)

    Cooper, Linda

    1997-01-01

    Discusses the problems encountered by elementary school children in retrieving information from a library catalog, either the traditional card catalog or an OPAC (online public access catalog). An alternative system of classification using colors and symbols is described that was developed in the Common School (Amherst, Massachusetts). (Author/LRW)

  15. Package models and the information crisis of prebiotic evolution.

    Science.gov (United States)

    Silvestre, Daniel A M M; Fontanari, José F

    2008-05-21

    The coexistence between different types of templates has been the choice solution to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353] which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null.
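The "Ld tends to a constant" result has a back-of-envelope analogue worth sketching. This is an illustrative error-threshold calculation, not the paper's stochastic package model: if per-nucleotide copying fidelity is q, the probability that all d templates of length L replicate error-free is q**(L*d), and demanding that this stay above a fixed viability threshold t caps the product L*d at a constant set by fidelity alone.

```python
import math

def max_total_information(q, t=0.5):
    """Largest L*d such that an error-free copy of all templates
    (probability q**(L*d)) still occurs with probability >= t.
    Illustrative only: q is per-nucleotide fidelity, t an assumed
    viability threshold, neither taken from the reviewed model."""
    return math.log(t) / math.log(q)

# higher fidelity -> more total information can be maintained
for q in (0.99, 0.999, 0.9999):
    print(q, round(max_total_information(q)))
```

The cap depends only on q (and the threshold), so doubling d while holding the cap forces L to halve, which mirrors the null-net-gain conclusion of the abstract.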

  16. MODELING INFORMATION SYSTEM AVAILABILITY BY USING BAYESIAN BELIEF NETWORK APPROACH

    Directory of Open Access Journals (Sweden)

    Semir Ibrahimović

    2016-03-01

    Modern information systems are expected to be always-on, providing services to end-users regardless of time and location. This is particularly important for organizations and industries where information systems support real-time operations and mission-critical applications that need to be available on a 24/7/365 basis. Examples of such entities include process industries, telecommunications, healthcare, energy, banking, electronic commerce and a variety of cloud services. This article presents a modified Bayesian Belief Network model for predicting information system availability, introduced initially by Franke, U. and Johnson, P. ("Availability of enterprise IT systems – an expert based Bayesian model", Software Quality Journal 20(2), 369-394, 2012). Based on a thorough review of several dimensions of information system availability, we propose a modified set of determinants. The model is parameterized using a probability elicitation process with the participation of experts from the financial sector of Bosnia and Herzegovina. Model validation was performed using Monte Carlo simulation.
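The core computation in such a model is marginalizing availability over its determinants. A minimal two-determinant sketch follows; the determinant names and all probabilities are invented for illustration and are not the expert-elicited values of the cited model.

```python
# Hypothetical Bayesian-network fragment: availability A depends on
# two binary determinants, H (hardware platform adequate?) and
# O (operations process mature?). All numbers are made up.
P_H = {True: 0.95, False: 0.05}
P_O = {True: 0.90, False: 0.10}
# CPT: P(system available | H, O)
P_A = {(True, True): 0.999, (True, False): 0.97,
       (False, True): 0.95, (False, False): 0.80}

# Marginal availability by enumerating the (independent) parents.
p_avail = sum(P_H[h] * P_O[o] * P_A[(h, o)]
              for h in (True, False) for o in (True, False))
print(round(p_avail, 6))
```

A Monte Carlo validation, as in the article, would instead sample H and O repeatedly and average the simulated availability, converging to the same marginal.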

  17. A cascade model of information processing and encoding for retinal prosthesis.

    Science.gov (United States)

    Pei, Zhi-Jun; Gao, Guan-Xin; Hao, Bo; Qiao, Qing-Li; Ai, Hui-Jian

    2016-04-01

    Retinal prosthesis offers a potential treatment for individuals suffering from photoreceptor degeneration diseases. Establishing biological retinal models and simulating how the biological retina converts incoming light signals into spike trains that can be properly decoded by the brain is a key issue. Several retinal models have been presented, ranging from structural models inspired by the layered architecture to functional models originating from a set of specific physiological phenomena. However, most of these focus on stimulus image compression, edge detection and reconstruction, and do not generate spike trains corresponding to the visual image. In this study, based on state-of-the-art retinal physiological mechanisms, including effective visual information extraction, static nonlinear rectification of biological systems and Poisson coding of neurons, a cascade model of the retina, comprising the outer plexiform layer for information processing and the inner plexiform layer for information encoding, was developed, integrating both the anatomic connections and the functional computations of the retina. Using MATLAB software, spike trains corresponding to the stimulus image were numerically computed in four steps: linear spatiotemporal filtering, static nonlinear rectification, radial sampling and Poisson spike generation. The simulation results suggest that such a cascade model can recreate the visual information processing and encoding functionalities of the retina, which is helpful in developing an artificial retina for the retinally blind.
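The filtering, rectification, and Poisson-encoding steps form what is commonly called a linear-nonlinear-Poisson cascade. A minimal 1-D sketch (the biphasic filter shape and the gain are toy choices, not the paper's fitted parameters, and the radial-sampling step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Linear stage: convolve a 1-D "stimulus" with a biphasic temporal filter
stimulus = rng.normal(size=500)
t = np.arange(20)
kernel = t * np.exp(-t / 4.0) - 0.5 * t * np.exp(-t / 8.0)  # toy filter
drive = np.convolve(stimulus, kernel, mode="same")

# 2. Static nonlinear rectification: firing rates cannot be negative
rate = np.maximum(drive, 0.0) * 0.5   # gain chosen arbitrarily

# 3. Poisson encoding: independent spike counts per time bin
spikes = rng.poisson(rate)
print(spikes[:10])
```

The same three stages generalize to 2-D images by replacing the temporal kernel with a spatiotemporal one (e.g., a difference-of-Gaussians center-surround filter).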

  18. A condensed review of the intelligent user modeling of information retrieval system

    International Nuclear Information System (INIS)

    Choi, Kwang

    2001-10-01

    This study discussed theoretical aspects of user modeling, modeling cases of commercial systems, and elements that need consideration when constructing user models. The results of this study are: 1) comprehensive prior analysis of system users is required to build a user model; 2) user information is collected both directly from users and by inference; 3) a frame structure is well suited to building user models; 4) a prototype user model, grounded in prior user analysis, is essential; 5) a user model builder needs interactive information collection, inference, flexibility, and model-updating functions; 6) a user model builder has to reflect user feedback.

  19. A condensed review of the intelligent user modeling of information retrieval system

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kwang

    2001-10-01

    This study discussed theoretical aspects of user modeling, modeling cases of commercial systems, and elements that need consideration when constructing user models. The results of this study are: 1) comprehensive prior analysis of system users is required to build a user model; 2) user information is collected both directly from users and by inference; 3) a frame structure is well suited to building user models; 4) a prototype user model, grounded in prior user analysis, is essential; 5) a user model builder needs interactive information collection, inference, flexibility, and model-updating functions; 6) a user model builder has to reflect user feedback.

  20. Management information system model supporting the quality assurance of schools in Thailand

    Directory of Open Access Journals (Sweden)

    Daoprakai Raso

    2017-07-01

    Management information systems (MIS) are very important tools for supporting the quality assurance (QA) process in Thai schools. This research therefore aimed to develop an MIS model in two phases. Phase 1 was the design of the MIS model used in Thai school QA; Phase 2 was the evaluation of the model, which consisted of four parts: 1) the MIS cycle, consisting of system investigation, system analysis, system design, system implementation, and system maintenance; 2) the management information system itself, consisting of data collection, data processing, information presentation, information storage, and procedure control; 3) the factors that support the MIS, namely the information tools and equipment used and the information operators; and 4) systems theory, consisting of input, process, and output. The results showed that the level of opinions on all aspects was "high".

  1. Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation.

    Science.gov (United States)

    Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix

    2016-03-01

    To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters that derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
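The generalized partial credit model underlying both the common metric and the reestimated parameters can be sketched directly. The discrimination and step-difficulty values below are illustrative, not the published common-metric PHQ-9 parameters.

```python
import math

def gpcm_probs(theta, a, b):
    """Category response probabilities for one generalized partial
    credit model (GPCM) item. theta: latent depression score;
    a: item discrimination; b: step difficulties (len = categories-1)."""
    # cumulative logits: category 0 has the empty sum = 0
    logits = [0.0]
    for bv in b:
        logits.append(logits[-1] + a * (theta - bv))
    z = [math.exp(v) for v in logits]
    s = sum(z)
    return [p / s for p in z]

# A 4-category, PHQ-9-style item with made-up parameters.
probs = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
print(probs)
```

Scoring with fixed (common-metric) item parameters versus reestimated ones amounts to evaluating these category probabilities under two different (a, b) sets and comparing the resulting latent-score estimates, which is the comparison the abstract reports.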

  2. INFORMATION SYSTEM QUALITY INFLUENCE ON ORGANIZATION PERFORMANCE: A MODIFICATION OF TECHNOLOGY-BASED INFORMATION SYSTEM ACCEPTANCE AND SUCCESS MODEL

    Directory of Open Access Journals (Sweden)

    Trisnawati N.

    2017-12-01

    This study aims to examine the effect of information system quality on technology-based accounting information system usage and its impact on organizational performance in local government. It is based on the Technology Acceptance Model (TAM), the IS Success Model, and research on the success of technology-based information systems, combining previous studies by Seddon and Kiew (1997), Saeed and Helm (2008), and DeLone and McLean (1992). The study used a survey method with 101 respondents drawn from accounting staff working in the Malang and Mojokerto regencies, and Partial Least Squares to examine the research data. The results show that information system quality affects perceived benefit and user satisfaction, and that technology-based accounting information system usage in local government is influenced by perceived benefit and user satisfaction. The study concludes that technology-based accounting information system usage affects the performance of local government organizations.

  3. Applicability of common stomatal conductance models in maize under varying soil moisture conditions.

    Science.gov (United States)

    Wang, Qiuling; He, Qijin; Zhou, Guangsheng

    2018-07-01

    In the context of climate warming, the varying soil moisture caused by precipitation pattern change will affect the applicability of stomatal conductance models, thereby affecting the simulation accuracy of carbon-nitrogen-water cycles in ecosystems. We studied the applicability of four common stomatal conductance models, the Jarvis, Ball-Woodrow-Berry (BWB), Ball-Berry-Leuning (BBL) and unified stomatal optimization (USO) models, based on summer maize leaf gas exchange data from a soil moisture consecutive decrease manipulation experiment. The results showed that the USO model performed best, followed by the BBL model and the BWB model, and the Jarvis model performed worst under varying soil moisture conditions. The effects of soil moisture changed the relative performance among the models. Introducing a water response function improved the performance of the Jarvis, BWB, and USO models, decreasing the normalized root mean square error (NRMSE) by 15.7%, 16.6% and 3.9%, respectively; however, the BBL model's performance degraded, with the NRMSE increasing by 5.3%. The Jarvis, BWB, BBL and USO models were applicable within different ranges of soil relative water content (i.e., 55%-65%, 56%-67%, 37%-79% and 37%-95%, respectively) based on the 95% confidence limits, and introducing a water response function widened the applicable ranges of the Jarvis and BWB models. The USO model performed best with or without the water response function and was applicable under varying soil moisture conditions. Our results provide a basis for selecting appropriate stomatal conductance models under drought conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
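Of the four models compared, the Ball-Woodrow-Berry form is the simplest to state: gs = g0 + g1 * A * hs / Cs. A sketch with illustrative (not maize-fitted) g0 and g1 values, alongside the NRMSE metric used for the model comparison:

```python
def bwb_conductance(A, hs, cs, g0=0.01, g1=9.0):
    """Ball-Woodrow-Berry stomatal conductance: gs = g0 + g1*A*hs/cs.
    A: net assimilation (umol m-2 s-1), hs: relative humidity at the
    leaf surface (0-1), cs: CO2 at the leaf surface (umol mol-1).
    g0, g1 are illustrative values, not parameters fitted to maize."""
    return g0 + g1 * A * hs / cs

def nrmse(obs, pred):
    """Root mean square error normalized by the observed range,
    the goodness-of-fit measure reported in the abstract."""
    n = len(obs)
    rmse = (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5
    return rmse / (max(obs) - min(obs))

print(bwb_conductance(A=20.0, hs=0.6, cs=400.0))
```

A soil-water response function, as tested in the study, would multiply gs (or g1) by a factor that declines with soil relative water content; its exact form is model-specific and not reproduced here.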

  4. Semantic concept-enriched dependence model for medical information retrieval.

    Science.gov (United States)

    Choi, Sungbin; Choi, Jinwook; Yoo, Sooyoung; Kim, Heechun; Lee, Youngho

    2014-02-01

    In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept's explicit importance in the query. By capturing implicit knowledge with regard to the query term relationships and incorporating them into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. Copyright © 2013 Elsevier Inc. All rights reserved.
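The idea of rewarding documents that preserve within-concept term dependencies can be illustrated with a toy scorer. This is an invented simplification, not the paper's SCDM formulation: the real model derives concept pairs from MetaMap and folds the features into a formal (language-model-based) ranking function, whereas here the weights, window size, and scoring are arbitrary.

```python
def dependence_score(doc_tokens, query_terms, concept_pairs,
                     w_term=0.8, w_pair=0.2, window=8):
    """Toy dependence-model scorer: in addition to plain term matches,
    reward documents where both terms of a semantic concept co-occur
    within a small window. All weights and the window are invented."""
    positions = {}
    for i, tok in enumerate(doc_tokens):
        positions.setdefault(tok, []).append(i)
    # unigram feature: raw query-term occurrence counts
    term_score = sum(len(positions.get(t, [])) for t in query_terms)
    # dependence feature: close co-occurrence of concept-term pairs
    pair_score = 0
    for a, b in concept_pairs:
        pair_score += sum(1 for i in positions.get(a, [])
                            for j in positions.get(b, [])
                            if abs(i - j) < window)
    return w_term * term_score + w_pair * pair_score

doc = "chronic kidney disease patients with chronic pain".split()
print(dependence_score(doc, ["chronic", "kidney"], [("kidney", "disease")]))
```

Documents that scatter a concept's terms far apart earn only the unigram credit, which is the intuition behind favoring documents that "preserve implicit dependencies."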

  5. Homology modeling and docking analyses of M. leprae Mur ligases reveals the common binding residues for structure based drug designing to eradicate leprosy.

    Science.gov (United States)

    Shanmugam, Anusuya; Natarajan, Jeyakumar

    2012-06-01

    The multidrug resistance of Mycobacterium leprae (MDR-Mle) underscores the need to develop new anti-leprosy drugs. Since most drugs target a single enzyme, a mutation in the active site renders the antibiotic ineffective. However, structural and mechanistic information on essential bacterial enzymes in a pathway could lead to the development of antibiotics that target multiple enzymes. Peptidoglycan is an important component of the cell wall of M. leprae, and its biosynthesis represents an important target for the development of new antibacterial drugs. Biosynthesis of peptidoglycan is a multi-step process that involves four key Mur ligase enzymes: MurC (EC:6.3.2.8), MurD (EC:6.3.2.9), MurE (EC:6.3.2.13) and MurF (EC:6.3.2.10). Hence, in this work we modeled the three-dimensional structures of the above Mur ligases using homology modeling and analyzed their common binding features. The residues playing an important role in the catalytic activity of each Mur enzyme were predicted by docking the ligases with their substrates and ATP. The conserved sequence motifs significant for ATP binding were identified as probable residues for structure-based drug design. Overall, the study produced a list of significant and common binding residues of the Mur enzymes in the peptidoglycan pathway for multi-targeted therapy.

  6. Microsoft Repository Version 2 and the Open Information Model.

    Science.gov (United States)

    Bernstein, Philip A.; Bergstraesser, Thomas; Carlson, Jason; Pal, Shankar; Sanders, Paul; Shutt, David

    1999-01-01

    Describes the programming interface and implementation of the repository engine and the Open Information Model for Microsoft Repository, an object-oriented meta-data management facility that ships in Microsoft Visual Studio and Microsoft SQL Server. Discusses Microsoft's component object model, object manipulation, queries, and information…

  7. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
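The global quantity being partitioned is DIC = D̄ + pD, where D̄ is the mean posterior deviance and pD = D̄ - D(θ̄) is the effective number of parameters. A minimal sketch of the global computation from posterior deviance draws (the paper's contribution, the partitioning of DIC into per-observation local DIC, leverage, and deviance residuals, is not reproduced here):

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """DIC = mean posterior deviance + effective number of parameters,
    where pD = mean(D) - D(theta_bar). Returns (DIC, pD)."""
    d_bar = np.mean(deviance_samples)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d, p_d

# toy posterior deviance draws and deviance at the posterior mean
dev = np.array([102.0, 98.0, 101.0, 99.0])
total, p_d = dic(dev, 97.5)
print(total, p_d)   # mean deviance 100, pD = 2.5, DIC = 102.5
```

The local version in the paper computes the observation-level contributions to D̄ and pD, so that mapping them reveals where in space a given model fits poorly.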

  8. Harvesting NASA's Common Metadata Repository

    Science.gov (United States)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
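A harvest of CMR metadata is typically paged through the public search API. Below is a sketch of constructing one page's request URL; the endpoint path, format extension, and parameter names are assumptions based on CMR's public search API and should be verified against current CMR documentation before use.

```python
from urllib.parse import urlencode

CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search"  # public search root

def collection_harvest_url(page_size=2000, page_num=1, fmt="echo10",
                           **filters):
    """Build a CMR collection-search URL for one page of a harvest.
    fmt selects the metadata format a partner can ingest (assumed
    extension names); filters become query parameters, e.g.
    provider='LPDAAC_ECS'. Paging parameters are illustrative; large
    harvests should use CMR's scrolling/paging guidance instead."""
    params = {"page_size": page_size, "page_num": page_num}
    params.update(filters)
    return f"{CMR_SEARCH}/collections.{fmt}?{urlencode(params)}"

print(collection_harvest_url(provider="LPDAAC_ECS"))
```

Requesting the native or UMM format, then translating with CMR's Unified Metadata Model tooling as the abstract suggests, keeps a partner's repository convertible when CMR records change.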

  9. Building a model: developing genomic resources for common milkweed (Asclepias syriaca) with low coverage genome sequencing.

    Science.gov (United States)

    Straub, Shannon C K; Fishbein, Mark; Livshultz, Tatyana; Foster, Zachary; Parks, Matthew; Weitemier, Kevin; Cronn, Richard C; Liston, Aaron

    2011-05-04

    Milkweeds (Asclepias L.) have been extensively investigated in diverse areas of evolutionary biology and ecology; however, there are few genetic resources available to facilitate and complement these studies. This study explored how low coverage genome sequencing of the common milkweed (Asclepias syriaca L.) could be useful in characterizing the genome of a plant without prior genomic information and for development of genomic resources as a step toward further developing A. syriaca as a model in ecology and evolution. A 0.5× genome of A. syriaca was produced using Illumina sequencing. A virtually complete chloroplast genome of 158,598 bp was assembled, revealing few repeats and loss of three genes: accD, clpP, and ycf1. A nearly complete rDNA cistron (18S-5.8S-26S; 7,541 bp) and 5S rDNA (120 bp) sequence were obtained. Assessment of polymorphism revealed that the rDNA cistron and 5S rDNA had 0.3% and 26.7% polymorphic sites, respectively. A partial mitochondrial genome sequence (130,764 bp), with identical gene content to tobacco, was also assembled. An initial characterization of repeat content indicated that Ty1/copia-like retroelements are the most common repeat type in the milkweed genome. At least one A. syriaca microread hit 88% of Catharanthus roseus (Apocynaceae) unigenes (median coverage of 0.29×) and 66% of single copy orthologs (COSII) in asterids (median coverage of 0.14×). From this partial characterization of the A. syriaca genome, markers for population genetics (microsatellites) and phylogenetics (low-copy nuclear genes) studies were developed. The results highlight the promise of next generation sequencing for development of genomic resources for any organism. Low coverage genome sequencing allows characterization of the high copy fraction of the genome and exploration of the low copy fraction of the genome, which facilitate the development of molecular tools for further study of a target species and its relatives. This study represents a first

  10. Aggression and Moral Development: Integrating Social Information Processing and Moral Domain Models

    Science.gov (United States)

    Arsenio, William F.; Lemerise, Elizabeth A.

    2004-01-01

    Social information processing and moral domain theories have developed in relative isolation from each other despite their common focus on intentional harm and victimization, and mutual emphasis on social cognitive processes in explaining aggressive, morally relevant behaviors. This article presents a selective summary of these literatures with…

  11. Inform: Efficient Information-Theoretic Analysis of Collective Behaviors

    Directory of Open Access Journals (Sweden)

    Douglas G. Moore

    2018-06-01

    The study of collective behavior has traditionally relied on a variety of different methodological tools, ranging from more theoretical methods such as population or game-theoretic models to empirical ones like Monte Carlo or multi-agent simulations. An approach that is increasingly being explored is the use of information theory as a methodological framework to study the flow of information and the statistical properties of collectives of interacting agents. While a few general-purpose toolkits exist, most of the existing software for information-theoretic analysis of collective systems is limited in scope. We introduce Inform, an open-source framework for efficient information-theoretic analysis that exploits the computational power of a C library while simplifying its use through a variety of wrappers for common higher-level scripting languages. We focus on two such wrappers here: PyInform (Python) and rinform (R). Inform and its wrappers are cross-platform and general-purpose. They include classical information-theoretic measures, measures of information dynamics and information-based methods to study the statistical behavior of collective systems, and expose a lower-level API that allows users to construct measures of their own. We describe the architecture of the Inform framework, study its computational efficiency and use it to analyze three different case studies of collective behavior: biochemical information storage in regenerating planaria, nest-site selection in the ant Temnothorax rugatulus, and collective decision making in multi-agent simulations.
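The classical measures such frameworks compute can be written from scratch in a few lines. The following is a plain-Python illustration of the underlying definitions (plug-in entropy and mutual information), not the PyInform API:

```python
import math
from collections import Counter

def shannon_entropy(series, base=2):
    """Plug-in estimate of H(X) in bits from a discrete time series."""
    counts = Counter(series)
    n = len(series)
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples;
    measures shared information between two agents' state sequences."""
    return (shannon_entropy(xs) + shannon_entropy(ys)
            - shannon_entropy(list(zip(xs, ys))))

print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))
```

Measures of information dynamics such as transfer entropy extend this pattern with time-lagged conditioning; an optimized C backend like Inform's matters once the series and state spaces get large.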

  12. A food web modeling analysis of a Midwestern, USA eutrophic lake dominated by non-native Common Carp and Zebra Mussels

    Science.gov (United States)

    Colvin, Michael E.; Pierce, Clay; Stewart, Timothy W.

    2015-01-01

    Food web modeling is recognized as fundamental to understanding the complexities of aquatic systems. Ecopath is the most common mass-balance model used to represent food webs and quantify trophic interactions among groups. We constructed annual Ecopath models for four consecutive years during the first half-decade of a zebra mussel invasion in shallow, eutrophic Clear Lake, Iowa, USA, to evaluate changes in relative biomass and total system consumption among food web groups, to evaluate the impacts of non-native common carp and zebra mussels on other food web groups, and to interpret those impacts in light of on-going lake restoration. Total living biomass increased each year of the study, with the majority of the increase due to a doubling in planktonic blue-green algae, but several other taxa also increased, including a more than two-order-of-magnitude increase in zebra mussels. Common carp accounted for the largest percentage of total fish biomass throughout the study, even with on-going harvest. Chironomids, common carp, and zebra mussels were the top three consumer groups. Non-native common carp and zebra mussels accounted for an average of 42% of the total system consumption. Despite the relatively high biomass densities of common carp and zebra mussels, their food web impacts were minimal due to excessive benthic and primary production in this eutrophic system. Consumption occurring via benthic pathways dominated system consumption in Clear Lake throughout our study, supporting the argument that benthic food webs are significant in shallow, eutrophic lake ecosystems and must be considered if ecosystem-level understanding is to be obtained.
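Ecopath balances each group i via its master equation, B_i·(P/B)_i·EE_i = Y_i + Σ_j B_j·(Q/B)_j·DC_ji (here omitting export and biomass-accumulation terms). A toy three-group illustration with invented inputs (not the Clear Lake data), solving the balance for the ecotrophic efficiency EE of each group:

```python
import numpy as np

# Toy 3-group system: algae, chironomids, common carp. Numbers invented.
B  = np.array([10.0, 5.0, 1.0])   # biomass (t/km^2)
PB = np.array([50.0, 8.0, 1.2])   # production/biomass per year
QB = np.array([0.0, 30.0, 6.0])   # consumption/biomass (0 for producers)
Y  = np.array([0.0, 0.0, 0.4])    # fishery removal (e.g. carp harvest)
# DC[j, i]: fraction of predator j's diet composed of prey i
DC = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],    # chironomids eat algae
               [0.2, 0.8, 0.0]])   # carp eat algae and chironomids

predation = (B * QB) @ DC          # total consumption of each prey group
EE = (Y + predation) / (B * PB)    # ecotrophic efficiency; balance needs EE <= 1
print(EE)
```

In practice Ecopath is given three of (B, P/B, EE, Q/B) per group and solves the linear system for the fourth; EE > 1 flags a group whose production cannot support the demands placed on it.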

  13. A model for information retrieval driven by conceptual spaces

    OpenAIRE

    Tanase, D.

    2015-01-01

    A retrieval model describes the transformation of a query into a set of documents. The question is: what drives this transformation? For semantic information retrieval type of models this transformation is driven by the content and structure of the semantic models. In this case, Knowledge Organization Systems (KOSs) are the semantic models that encode the meaning employed for monolingual and cross-language retrieval. The focus of this research is the relationship between these meanings’ repre...

  14. A hierarchical modeling of information seeking behavior of school ...

    African Journals Online (AJOL)

    The aim of this study was to investigate the information seeking behavior of school teachers in the public primary schools of rural areas of Nigeria and to draw up a model of their information-seeking behavior. A Cross-sectional survey design research was employed to carry out the research. Findings showed that the ...

  15. The data-driven null models for information dissemination tree in social networks

    Science.gov (United States)

    Zhang, Zhiwei; Wang, Zhenyu

    2017-10-01

    For the purpose of detecting relatedness and co-occurrence between users, as well as the distribution features of nodes along the spreading paths of a social network, this paper explores topological characteristics of information dissemination trees (IDTs), which can be employed indirectly to probe the laws of information dissemination within social networks. Three different null models of IDTs are presented in this article: the statistical-constrained 0-order IDT null model, the random-rewire-broken-edge 0-order IDT null model and the random-rewire-broken-edge 2-order IDT null model. These null models first generate a randomized copy of an actual IDT; then the extended significance profile, which is developed by adding the cascade ratio of the information dissemination path, is exploited not only to evaluate the degree correlation of the two nodes associated with each edge, but also to assess the cascade ratio of information dissemination paths of different lengths. Empirical analysis of several Sina Weibo and Twitter IDTs indicates that the IDT null models presented in this paper perform well in terms of node degree correlation and dissemination-path cascade ratio, and are better able to reveal the features of information dissemination and to fit the situation of real social networks.
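Randomized copies of this kind are commonly produced by degree-preserving double-edge swaps. A minimal sketch for a simple undirected graph follows; the specific edge-selection constraints of the paper's 0-order and 2-order variants (and the tree-structure preservation an IDT requires) are not reproduced:

```python
import random

def degree_preserving_rewire(edges, nswaps, seed=0):
    """Randomize a simple undirected graph by double-edge swaps:
    (a,b),(c,d) -> (a,d),(c,b), rejecting self-loops and duplicate
    edges. The degree sequence is preserved exactly, while
    degree-degree correlations are progressively destroyed."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    done = 0
    while done < nswaps:
        i, j = rng.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:                 # would create a self-loop
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                              # would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

cycle = [(i, (i + 1) % 10) for i in range(10)]
print(degree_preserving_rewire(cycle, 20))
```

Comparing a statistic (e.g. assortativity or the cascade ratio) between the observed tree and an ensemble of such randomized copies yields the significance profile the abstract describes.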

  16. "When information is not enough": A model for understanding BRCA-positive previvors' information needs regarding hereditary breast and ovarian cancer risk.

    Science.gov (United States)

    Dean, Marleah; Scherr, Courtney L; Clements, Meredith; Koruo, Rachel; Martinez, Jennifer; Ross, Amy

    2017-09-01

    To investigate the information needs of BRCA-positive, unaffected patients, referred to as previvors, after testing positive for a deleterious BRCA genetic mutation. 25 qualitative interviews were conducted with previvors. Data were analyzed using the constant comparison method of grounded theory. Analysis revealed a theoretical model of previvors' information needs related to the stage of their health journey. Specifically, a four-stage model was developed based on the data: (1) pre-testing information needs, (2) post-testing information needs, (3) pre-management information needs, and (4) post-management information needs. Two recurring dimensions of desired knowledge also emerged within the stages: personal/social knowledge and medical knowledge. While previvors may be genetically predisposed to develop cancer, they have not been diagnosed with cancer, and therefore have different information needs than cancer patients and cancer survivors. This model can serve as a framework for assisting healthcare providers in meeting the specific information needs of cancer previvors. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Big Atoms for Small Children: Building Atomic Models from Common Materials to Better Visualize and Conceptualize Atomic Structure

    Science.gov (United States)

    Cipolla, Laura; Ferrari, Lia A.

    2016-01-01

    A hands-on approach to introduce the chemical elements and the atomic structure to elementary/middle school students is described. The proposed classroom activity presents Bohr models of atoms using common and inexpensive materials, such as nested plastic balls, colored modeling clay, and small-sized pasta (or small plastic beads).

  18. Conceptual Modeling of Events as Information Objects and Change Agents

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    ...as a totality of an information object and a change agent. When an event is modeled as an information object it is comparable to an entity that exists only at a specific point in time. It has attributes and can be used for querying and specification of constraints. When an event is modeled as a change agent it is comparable to an executable transaction schema. Finally, we briefly compare our approach to object-oriented approaches based on encapsulated objects.

  19. Standardizing the information architecture for spacecraft operations

    Science.gov (United States)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

  20. Probabilistic analysis of "common mode failures"

    International Nuclear Information System (INIS)

    Easterling, R.G.

    1978-01-01

    Common mode failure is a topic of considerable interest in reliability and safety analyses of nuclear reactors. Common mode failures are often discussed in terms of examples: two systems fail simultaneously due to an external event such as an earthquake; two components in redundant channels fail because of a common manufacturing defect; two systems fail because a component common to both fails; the failure of one system increases the stress on other systems and they fail. The common thread running through these is a dependence of some sort, statistical or physical, among multiple failure events. However, the nature of the dependence is not the same in all these examples. An attempt is made to model situations, such as the above examples, which have been termed "common mode failures." In doing so, it is found that standard probability concepts and terms, such as statistically dependent and independent events, and conditional and unconditional probabilities, suffice. Thus, it is proposed that the term "common mode failures" be dropped, at least from technical discussions of these problems. A corollary is that the complementary term, "random failures," should also be dropped. The mathematical model presented may not cover all situations which have been termed "common mode failures," but it provides insight into the difficulty of obtaining estimates of the probabilities of these events.
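
The abstract's point, that statistical dependence rather than a special failure category does the work, can be made concrete with a small numeric example (the probabilities below are invented for illustration):

```python
# Two redundant channels, each with marginal failure probability 1e-3
# (illustrative numbers, not from the source).
p_a = 1e-3
p_b = 1e-3

# If the failure events are statistically independent, both fail with
# probability P(A)P(B).
p_both_independent = p_a * p_b  # 1e-6

# A shared cause (e.g. a common manufacturing defect) makes the events
# dependent: P(A and B) = P(A) * P(B | A), with P(B | A) >> P(B).
p_b_given_a = 0.1
p_both_dependent = p_a * p_b_given_a  # 1e-4

# Dependence inflates the joint failure probability by P(B|A) / P(B).
inflation = p_both_dependent / p_both_independent  # about 100x here
```

Nothing beyond conditional and unconditional probabilities is needed to express the "common mode" effect, which is exactly the paper's argument.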

  1. Model of informational system for freight insurance automation based on digital signature

    OpenAIRE

    Maxim E. SLOBODYANYUK

    2009-01-01

    The article considers a model of an information system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of the information flow in the model, and its components (modules) and their functions. A method for calculating the costs of interactive cargo insurance via the proposed system is described, and the main characteristics and options of existing transport management systems and conceptual cost models are presented.

  2. Model of informational system for freight insurance automation based on digital signature

    Directory of Open Access Journals (Sweden)

    Maxim E. SLOBODYANYUK

    2009-01-01

    The article considers a model of an information system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of the information flow in the model, and its components (modules) and their functions. A method for calculating the costs of interactive cargo insurance via the proposed system is described, and the main characteristics and options of existing transport management systems and conceptual cost models are presented.

  3. A cyber-anima-based model of material conscious information network

    Directory of Open Access Journals (Sweden)

    Jianping Shen

    2017-03-01

    Purpose – This paper aims to study the node modeling, multi-agent architecture and addressing method for the material conscious information network (MCIN), a large-scale, open, self-organized and ecological intelligent network of supply–demand relationships. Design/methodology/approach – This study models the MCIN through node model definition, multi-agent architecture design and addressing method presentation. Findings – A prototype of a novel e-commerce platform based on the MCIN shows the effectiveness and soundness of the MCIN modeling. Compared with the current internet, the authors also find that the MCIN has the advantages of socialization, information integration, collective intelligence, traceability, high robustness, unification of producing and consuming, high scalability and decentralization. Research limitations/implications – Leveraging the dimensions of structure, character, knowledge and experience, a modeling approach for the basic information can fit all kinds of MCIN nodes. With the double-chain structure for both basic and supply–demand information, the MCIN nodes can be modeled comprehensively. The anima-desire-intention-based multi-agent architecture makes the federated agents of the MCIN nodes self-organized and intelligent. The MCIN nodes can be efficiently addressed by the supply–demand-oriented method. However, the implementation of the MCIN is still in process. Practical implications – This paper lays the theoretical foundation for the future networked system of supply–demand relationships and the novel e-commerce platform. Originality/value – The authors believe that the MCIN, first proposed in this paper, is a transformational innovation that facilitates the infrastructure of the future networked system of supply–demand relationships.

  4. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

    Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified that are developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization. Clinical information models are specifications for representing the structure and semantic characteristics of the clinical content in electronic health record systems. This research defines, tests and validates

  5. Supporting Fiscal Aspect of Land Administration through an LADM-based Valuation Information Model

    NARCIS (Netherlands)

    Kara, A.; Çağdaş, V.; Lemmen, C.H.J.; Işıkdağ, Ü.; van Oosterom, P.J.M.; Stubkjær, E.

    2018-01-01

    This paper presents an information system artifact for the fiscal aspect of land administration, a valuation information model for the specification of inventories or databases used in valuation for recurrently levied immovable property taxes. The information model is designed as an extension module

  6. Hazardous materials management using a Cradle-to-Grave Tracking and Information System (CGTIS)

    Energy Technology Data Exchange (ETDEWEB)

    Kjeldgaard, E.; Fish, J.; Campbell, D.; Freshour, N.; Hammond, B.; Bray, O. [Sandia National Labs., Albuquerque, NM (United States); Hollingsworth, M. [Ogden Environmental & Energy Services Co., Inc., Albuquerque, NM (United States)

    1995-03-01

    Hazardous materials management includes interactions among materials, personnel, facilities, hazards, and processes of various groups within a DOE site's environmental, safety & health (ES&H) and line organizations. Although each group is charged with addressing a particular aspect of these properties and interactions, the information it requires must be gathered into a coherent set of common data for accurate and consistent hazardous material management and regulatory reporting. It is these common data requirements which the Cradle-to-Grave Tracking and Information System (CGTIS) is designed to satisfy. CGTIS collects information at the point at which a process begins or a material enters a facility, and maintains that information, for hazards management and regulatory reporting, throughout the entire life-cycle by providing direct on-line links to a site's multitude of databases to bring information together into one common data model.

  7. Hazardous materials management using a Cradle-to-Grave Tracking and Information System (CGTIS)

    International Nuclear Information System (INIS)

    Kjeldgaard, E.; Fish, J.; Campbell, D.; Freshour, N.; Hammond, B.; Bray, O.; Hollingsworth, M.

    1995-03-01

    Hazardous materials management includes interactions among materials, personnel, facilities, hazards, and processes of various groups within a DOE site's environmental, safety & health (ES&H) and line organizations. Although each group is charged with addressing a particular aspect of these properties and interactions, the information it requires must be gathered into a coherent set of common data for accurate and consistent hazardous material management and regulatory reporting. It is these common data requirements which the Cradle-to-Grave Tracking and Information System (CGTIS) is designed to satisfy. CGTIS collects information at the point at which a process begins or a material enters a facility, and maintains that information, for hazards management and regulatory reporting, throughout the entire life-cycle by providing direct on-line links to a site's multitude of databases to bring information together into one common data model.

  8. Ethics and rationality in information-enriched decisions: A model for technical communication

    Science.gov (United States)

    Dressel, S. B.; Carlson, P.; Killingsworth, M. J.

    1993-12-01

    In a technological culture, information has a crucial impact upon decisions, but exactly how information plays into decisions is not always clear. Decisions that are effective, efficient, and ethical must be rational. That is, we must be able to determine and present good reasons for our actions. The topic in this paper is how information relates to good reasons and thereby affects the best decisions. A brief sketch of a model for decision-making, is presented which offers a synthesis of theoretical approaches to argument and to information analysis. Then the model is applied to a brief hypothetical case. The main purpose is to put the model before an interested audience in hopes of stimulating discussion and further research.

  9. Dietary beliefs among informal caregivers regarding common childhood diseases in rural north-west India

    Directory of Open Access Journals (Sweden)

    Rajiv Kumar Gupta

    2017-09-01

    Background: Dietary practices among infants and children are predictors of their growth and development. India being a country of diverse cultures, diversity in beliefs and practices regarding diet during childhood illnesses is expected. Harmful beliefs and practices can contribute to malnutrition among children and can have adverse consequences in already sick children. Aims and Objectives: To assess the dietary knowledge, beliefs and practices of rural caregivers during childhood illnesses. Material & Methods: This cross-sectional descriptive study was conducted among 271 rural informal (parent/family member) caregivers in one of the sub-health centres, which was selected using a simple random sampling technique. In the context of this study, the term informal caregiver was used for a parent or family member of the child, preferably a mother with a child or children aged less than five years. The survey tool was an open-ended, pretested questionnaire developed by public health experts familiar with the culture of the study setting, and it was pilot tested before administration. To recruit the study participants, a house-to-house survey was conducted, and the data thus collected were analyzed in percentages. Results: Informal caregivers had low knowledge of common childhood illnesses as well as of their causes. The majority of them consulted a doctor in the event of a child's illness. 53.81% reduced feeding and 31.93% diluted the diet during a child's illness, but notably 77.85% did not change breastfeeding practice during illness. As far as beliefs regarding dietary practices were concerned, it was found that egg, meat, chicken and jaggery were labelled hot foods while curd, buttermilk and vegetables were labelled cold foods. Rice water and khichadi were preferred in diarrhoea, but spicy food and milk were restricted. Ginger and Tulsi tea were preferred in respiratory infections while ice-cream and

  10. Dietary beliefs among informal caregivers regarding common childhood diseases in rural north-west India

    Directory of Open Access Journals (Sweden)

    Rajiv Kumar Gupta

    2017-09-01

    Background: Dietary practices among infants and children are predictors of their growth and development. India being a country of diverse cultures, diversity in beliefs and practices regarding diet during childhood illnesses is expected. Harmful beliefs and practices can contribute to malnutrition among children and can have adverse consequences in already sick children. Aims and Objectives: To assess the dietary knowledge, beliefs and practices of rural caregivers during childhood illnesses. Material & Methods: This cross-sectional descriptive study was conducted among 271 rural informal (parent/family member) caregivers in one of the sub-health centres, which was selected using a simple random sampling technique. In the context of this study, the term informal caregiver was used for a parent or family member of the child, preferably a mother with a child or children aged less than five years. The survey tool was an open-ended, pretested questionnaire developed by public health experts familiar with the culture of the study setting, and it was pilot tested before administration. To recruit the study participants, a house-to-house survey was conducted, and the data thus collected were analyzed in percentages. Results: Informal caregivers had low knowledge of common childhood illnesses as well as of their causes. The majority of them consulted a doctor in the event of a child's illness. 53.81% reduced feeding and 31.93% diluted the diet during a child's illness, but notably 77.85% did not change breastfeeding practice during illness. As far as beliefs regarding dietary practices were concerned, it was found that egg, meat, chicken and jaggery were labelled hot foods while curd, buttermilk and vegetables were labelled cold foods. Rice water and khichadi were preferred in diarrhoea, but spicy food and milk were restricted. Ginger and Tulsi tea were preferred in respiratory infections while ice

  11. A multiprofessional information model for Brazilian primary care: Defining a consensus model towards an interoperable electronic health record.

    Science.gov (United States)

    Braga, Renata Dutra

    2016-06-01

    To develop a multiprofessional information model to be used in the decision-making process in primary care in Brazil. This was an observational study with a descriptive and exploratory approach, using action research associated with the Delphi method. A group of 13 health professionals made up a panel of experts that, through individual and group meetings, drew up a preliminary health information records model. The questionnaire used to validate this model included four questions based on a Likert scale. These questions evaluated the completeness and relevance of information on each of the four pillars that composed the model. The changes suggested in each round of evaluation were included when accepted by the majority (≥ 50%). This process was repeated as many times as necessary to obtain the desirable and recommended consensus level (> 50%), and the final version became the consensus model. Multidisciplinary health training of the panel of experts allowed a consensus model to be obtained based on four categories of health information, called pillars: Data Collection, Diagnosis, Care Plan and Evaluation. The obtained consensus model was considered valid by the experts and can contribute to the collection and recording of multidisciplinary information in primary care, as well as the identification of relevant concepts for defining electronic health records at this level of complexity in health care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
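
The majority-acceptance rule described above (suggested changes included when accepted by ≥ 50% of the panel) can be sketched as a small tally function. This is a hypothetical sketch: the cutoff of Likert scores 4–5 counting as agreement is an assumption, not stated in the abstract.

```python
def delphi_round(votes):
    """Decide which suggested changes a Delphi round accepts.

    votes maps each suggestion to a list of Likert-scale ballots (1-5).
    A suggestion is accepted when at least half the panel agrees,
    mirroring the >= 50% acceptance rule in the abstract.
    """
    accepted = {}
    for suggestion, ballots in votes.items():
        # Assumption: ballots of 4 or 5 count as agreement.
        agree = sum(1 for b in ballots if b >= 4)
        accepted[suggestion] = agree / len(ballots) >= 0.5
    return accepted
```

In the study's procedure, such a round would be repeated, feeding accepted changes back into the model, until the consensus level exceeds 50% for the model as a whole.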

  12. Modelling the ICE standard with a formal language for information commerce

    NARCIS (Netherlands)

    Wombacher, Andreas; Aberer, K.

    Automating information commerce requires languages to represent the typical information commerce processes. Existing languages and standards either cover only very specific types of business models or are too general to capture in a concise way the specific properties of information commerce

  13. Decoding implicit information from the soil map of Belgium and implications for spatial modelling and soil classification

    Science.gov (United States)

    Dondeyne, Stefaan; Legrain, Xavier; Colinet, Gilles; Van Ranst, Eric; Deckers, Jozef

    2014-05-01

    A systematic soil survey of Belgium was conducted from 1948 to 1991. Field surveys were done at the detailed scale of 1:5,000, with the final maps published at a 1:20,000 scale. Soil surveyors classified soils in the field according to physical and morphogenetic characteristics such as texture, drainage class and profile development. Mapping units are defined as a combination of these characteristics, to which modifiers can be added such as parent material, stoniness or depth to substrata. Interpretation of the map towards predicting soil properties seems straightforward. Consequently, since the soil map was digitized, it has been used for, e.g., hydrological modelling or for estimating soil organic carbon content at sub-national and national level. Besides the explicit information provided by the legend, a wealth of implicit information is embedded in the map. Based on three cases, we illustrate that by decoding this information, properties pertaining to soil drainage or soil organic carbon content can be assessed more accurately. First, the presence or absence of fragipans affects the soil hydraulic conductivity. Although a dedicated symbol exists for fragipans (suffix "...m"), it is only used explicitly in areas where fragipans are not all that common. In the Belgian Ardennes, where fragipans are common, their occurrence is implicitly implied for various soil types mentioned in the explanatory booklets. Second, whenever seasonal or permanent perched water tables were observed, these were indicated by drainage class ".h." or ".i.", respectively. Stagnic properties have been under-reported, as typical stagnic mottling, i.e. when the surfaces of soil peds are lighter and/or paler than the more reddish interior, was not distinguished from mottling due to groundwater gley. Still, by combining information on topography and the occurrence of substratum layers, stagnic properties can be inferred. Thirdly, soils with deep anthropogenic enriched organic matter

  14. AEROMETRIC INFORMATION RETRIEVAL SYSTEM (AIRS) - GEOGRAPHIC, COMMON, AND MAINTENANCE SUBSYSTEM (GCS)

    Science.gov (United States)

    Aerometric Information Retrieval System (AIRS) is a computer-based repository of information about airborne pollution in the United States and various World Health Organization (WHO) member countries. AIRS is administered by the U.S. Environmental Protection Agency, and runs on t...

  15. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael

    2015-01-01

    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...
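
A minimal sketch of this approach, assuming a first-order Markov chain trained on a player's action sequence, with the per-step "error" taken as the probability mass the model did not assign to the action actually taken (the paper's exact error measure may differ):

```python
from collections import defaultdict

def train_markov(actions):
    """Estimate first-order transition probabilities from an action sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(actions, actions[1:]):
        counts[prev][nxt] += 1
    model = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        model[prev] = {a: c / total for a, c in nxts.items()}
    return model

def prediction_error(model, actions):
    """Mean probability mass assigned to everything except the action the
    player actually took next; 0.0 corresponds to fully routinized play."""
    errors = []
    for prev, nxt in zip(actions, actions[1:]):
        p = model.get(prev, {}).get(nxt, 0.0)
        errors.append(1.0 - p)
    return sum(errors) / len(errors)
```

As play becomes routinized, the action sequence grows more predictable and the error between the trained chain and the observed interaction shrinks.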

  16. Collaborative information seeking

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2008-01-01

    Since common ground is pivotal to collaboration, this paper proposes to define collaborative information seeking as the combined activity of information seeking and collaborative grounding. While information-seeking activities are necessary for collaborating actors to acquire new information, the activities involved in information seeking are often performed by varying subgroups of actors. Consequently, collaborative grounding is necessary to share information among collaborating actors and, thereby, establish and maintain the common ground necessary for their collaborative work. By focusing on the collaborative level, collaborative information seeking aims to avoid both individual reductionism and group reductionism, while at the same time recognizing that only some information and understanding need be shared.

  17. NEW MODEL OF QUALITY ASSESSMENT IN PUBLIC ADMINISTRATION - UPGRADING THE COMMON ASSESSMENT FRAMEWORK (CAF)

    Directory of Open Access Journals (Sweden)

    Mirna Macur

    2017-01-01

    In our study, we developed a new model of quality assessment in public administration. The Common Assessment Framework (CAF) is frequently used in continental Europe for this purpose. Its use has many benefits; however, we believe its assessment logic is not adequate for public administration. The upgraded version of the CAF is conceptually different: instead of the analytical and linear CAF we get an instrument that measures an organisation as a network of complex processes. The original and upgraded assessment approaches are presented in the paper and compared in the case of a self-assessment of a selected public administration organisation. The two approaches produced different, sometimes contradictory results. The upgraded model proved to be logically more consistent, and it produced higher interpretation capacity.

  18. Information-theoretic analysis of the dynamics of an executable biological model.

    Directory of Open Access Journals (Sweden)

    Avital Sadot

    To facilitate analysis and understanding of biological systems, large-scale data are often integrated into models using a variety of mathematical and computational approaches. Such models describe the dynamics of the biological system and can be used to study the changes in the state of the system over time. For many model classes, such as discrete or continuous dynamical systems, there exist appropriate frameworks and tools for analyzing system dynamics. However, the heterogeneous information that encodes and bridges molecular and cellular dynamics, inherent to fine-grained molecular simulation models, presents significant challenges to the study of system dynamics. In this paper, we present an algorithmic-information-theory-based approach for the analysis and interpretation of the dynamics of such executable models of biological systems. We apply a normalized compression distance (NCD) analysis to the state representations of a model that simulates immune decision making and immune cell behavior. We show that this analysis successfully captures the essential information in the dynamics of the system, which results from a variety of events including proliferation, differentiation, or perturbations such as gene knock-outs. We demonstrate that this approach can be used for the analysis of executable models, regardless of the modeling framework, and for making experimentally quantifiable predictions.
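
The NCD used in such an analysis has a standard form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. A minimal sketch using zlib as the compressor (the paper's choice of compressor and state encoding may differ):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings,
    approximating Kolmogorov complexity with zlib's compressed length.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Applied to serialized model states sampled over time, small NCD values indicate states that share most of their information content, so jumps in NCD flag events such as differentiation or the effect of a knock-out (the state strings below are invented for illustration).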

  19. Design information questionnaire for a model mixed oxide fuel fabrication facility

    International Nuclear Information System (INIS)

    Glancy, J.E.

    1976-05-01

    The model fuel plant is based on the proposed Westinghouse Anderson, S.C., plant and is typical of plants that will be constructed and operated in 1980 to 1990. A number of plant systems and procedures are uncertain, and in these cases judgment was used in describing relevant parameters in order to provide a complete model on which to design an inspection plan. The model plant does not, therefore, strictly represent any planned facility nor does it strictly represent the ideas of Westinghouse on plant design and material accountability. This report is divided into two sections. The first section is the IAEA Design Information Questionnaire form that contains an outline of all information requested. The second section is a complete listing of design information

  20. Preliminary review of critical shutdown heat removal items for common cause failure susceptibility on LMFBR's. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Allard, L.T.; Elerath, J.G.

    1976-02-01

    This document presents a common cause failure analysis for Critical LMFBR Shutdown Heat Removal Systems. The report is intended to outline a systematic approach to defining areas with significant potential for common causes of failure, and ultimately to provide inputs to the reliability prediction model. A preliminary evaluation of postulated single initiating causes resulting in multiple failures of LMFBR-SHRS items is presented in Appendix C. This document will be periodically updated to reflect new information and activity.