WorldWideScience

Sample records for modeling information based

  1. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, and multimedia information modelling.

  2. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model of the information processing of NPP operators and ii) quantifying that model. To resolve the problems of previous information-theoretic approaches, i.e. the limitations of single-channel approaches, we first develop an information processing model with multiple stages, which contains information flows. The uncertainty of the information is then quantified using Conant's model, an information-theoretic approach.

  3. A Process Model for Goal-Based Information Retrieval

    Directory of Open Access Journals (Sweden)

    Harvey Hyman

    2014-12-01

    In this paper we examine the domain of information search and propose a "goal-based" approach to studying search strategy. We describe "goal-based information search" using a framework of Knowledge Discovery. We identify two Information Retrieval (IR) goals using the constructs of Knowledge Acquisition (KA) and Knowledge Explanation (KE). We classify these constructs into two specific information problems: an exploration-exploitation problem and an implicit-explicit problem. Our proposed framework extends prior work in this domain, applying an IR Process Model originally developed for Legal-IR and adapted to Medical-IR. The approach in this paper is guided by the recent ACM-SIG Medical Information Retrieval (MedIR) Workshop definition: "methodologies and technologies that seek to improve access to medical information archives via a process of information retrieval."

  4. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic m...
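
The abstract does not specify which information-theoretic metrics were used; the following is a minimal sketch of two common candidates, assuming streamflow values binned into a histogram (the bin count and toy data are illustrative, not from the study):

```python
import math
from collections import Counter

def histogram_probs(values, bins, lo, hi):
    """Bin values into equal-width bins and return a probability vector."""
    counts = Counter()
    width = (hi - lo) / bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(values)
    return [counts[i] / n for i in range(bins)]

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q, eps=1e-9):
    """D(P || Q) with a small floor so Q stays strictly positive."""
    return sum(pi * math.log2(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

# Toy daily streamflow series (illustrative numbers only).
observed  = [2.1, 2.3, 5.0, 9.8, 7.5, 3.2, 2.8, 2.2, 2.0, 1.9]
simulated = [2.0, 2.5, 4.0, 8.5, 8.0, 3.5, 3.0, 2.1, 2.2, 2.0]

p = histogram_probs(observed, bins=5, lo=0.0, hi=10.0)
q = histogram_probs(simulated, bins=5, lo=0.0, hi=10.0)
print(shannon_entropy(p), kl_divergence(p, q))
```

A low KL divergence indicates the simulated flow distribution reproduces the observed one, complementing accuracy metrics such as Nash-Sutcliffe efficiency that compare series point by point.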

  5. GRAMMAR RULE BASED INFORMATION RETRIEVAL MODEL FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Nadana Ravishankar

    2015-07-01

    Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information that is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of the native languages such as morphology, compound word formation, word spelling variations, ambiguity, synonymy, and influence from other languages. To overcome some of these issues, the native language is modeled using a grammar rule based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base, coupled with a customized ontological system, shows considerable potential and is found to give better precision and recall.

  6. Semantic reasoning with XML-based biomedical information models.

    Science.gov (United States)

    O'Connor, Martin J; Das, Amar

    2010-01-01

    The Extensible Markup Language (XML) is increasingly being used for biomedical data exchange. The parallel growth in the use of ontologies in biomedicine presents opportunities for combining the two technologies to leverage the semantic reasoning services provided by ontology-based tools. There are currently no standardized approaches for taking XML-encoded biomedical information models and representing and reasoning with them using ontologies. To address this shortcoming, we have developed a workflow and a suite of tools for transforming XML-based information models into domain ontologies encoded using OWL. In this study, we applied semantic reasoning methods to these ontologies to automatically generate domain-level inferences. We successfully used these methods to develop semantic reasoning applications for information models in the HIV and radiological image domains.
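
The authors' OWL toolchain is not reproduced here; as an illustrative sketch only, the general idea of lifting an XML-encoded information model into ontology-style assertions can be approximated as below (the element-to-class mapping, the `inst0` identifier scheme, and the sample record are assumptions, not the paper's workflow):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML fragment of a biomedical information model.
XML = """
<patient id="p1">
  <diagnosis code="B20">HIV disease</diagnosis>
  <medication name="AZT"/>
</patient>
"""

def xml_to_triples(xml_text):
    """Map each element to an rdf:type assertion and each attribute or
    text node to a data-property triple (a toy stand-in for XML-to-OWL)."""
    root = ET.fromstring(xml_text)
    triples = []
    def walk(elem, subject):
        triples.append((subject, "rdf:type", elem.tag.capitalize()))
        for key, value in elem.attrib.items():
            triples.append((subject, key, value))
        if elem.text and elem.text.strip():
            triples.append((subject, "hasValue", elem.text.strip()))
        for i, child in enumerate(elem):
            child_id = f"{subject}/{child.tag}{i}"
            triples.append((subject, "hasPart", child_id))
            walk(child, child_id)
    walk(root, "inst0")
    return triples

for t in xml_to_triples(XML):
    print(t)
```

A reasoner could then classify the resulting individuals against a domain ontology; the real workflow would additionally carry XML Schema types into OWL class restrictions.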

  7. Thermodynamics of information processing based on enzyme kinetics: An exactly solvable model of an information pump.

    Science.gov (United States)

    Cao, Yuansheng; Gong, Zongping; Quan, H T

    2015-06-01

    Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012)] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013)], we propose a minimal model of an information pump and an information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information encoded in a bit stream, or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving chemical potential in small systems.

  8. Thermodynamics of information processing based on enzyme kinetics: An exactly solvable model of an information pump

    Science.gov (United States)

    Cao, Yuansheng; Gong, Zongping; Quan, H. T.

    2015-06-01

    Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012), 10.1073/pnas.1204263109] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013), 10.1103/PhysRevLett.111.030602], we propose a minimal model of an information pump and an information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information encoded in a bit stream, or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving chemical potential in small systems.

  9. Information-based models for finance and insurance

    Science.gov (United States)

    Hoyle, Edward

    2010-10-01

    In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The 'information-based framework' of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure, and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow X_T at a fixed date T. The flow of information about X_T is modelled by an LRB with random terminal value X_T. An explicit expression for the price process is found by working out the discounted conditional expectation of X_T with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.
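
For the single-cash-flow case described, the price has a closed form when the information process is Brownian, ξ_t = σ t X_T + β_tT with β a Brownian bridge on [0, T]: the posterior weight of each outcome x is proportional to p(x) exp[ T/(T−t) (σ x ξ_t − ½ σ² x² t) ]. A sketch under those assumptions (the two-outcome "bond", all parameter values, and the single simulated path are illustrative):

```python
import math, random

def bhm_price(xi_t, t, T, sigma, outcomes, priors, r=0.0):
    """Discounted posterior expectation of X_T given the value xi_t of a
    Brownian information process (BHM conditional-expectation formula)."""
    weights = [p * math.exp((T / (T - t)) * (sigma * x * xi_t - 0.5 * sigma**2 * x**2 * t))
               for x, p in zip(outcomes, priors)]
    z = sum(weights)
    return math.exp(-r * (T - t)) * sum(x * w for x, w in zip(outcomes, weights)) / z

random.seed(1)
T, sigma = 1.0, 1.0
outcomes, priors = [0.0, 1.0], [0.4, 0.6]   # e.g. default vs full repayment

# One simulated value of the information process at time t:
X = 1.0                                      # realized cash flow (hidden from the market)
t = 0.5
bridge = random.gauss(0.0, math.sqrt(t * (T - t) / T))  # Brownian-bridge marginal
xi = sigma * t * X + bridge
price = bhm_price(xi, t, T, sigma, outcomes, priors)
print(price)
```

At t = 0 the formula reduces to the prior expectation of the cash flow; as t approaches T the information process reveals X_T and the price converges to it.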

  10. Closed Loop Brain Model of Neocortical Information Based Exchange

    Directory of Open Access Journals (Sweden)

    James Kozloski

    2016-01-01

    Here we describe an 'information based exchange' model of brain function that ascribes distinct network functions to neocortex, basal ganglia, and thalamus. The model allows us to analyze whole-brain set-point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel Grand Loop through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principal cell synapses to achieve these transitions. By implementing a system for maximum information based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting-state activity, and that different modifications to information based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components towards ever greater complexity and biological realism.

  11. A Spread Willingness Computing-Based Information Dissemination Model

    Science.gov (United States)

    Cui, Zhiming; Zhang, Shukui

    2014-01-01

    This paper constructs a spread-willingness-computing-based information dissemination model for social networks. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors that influence a user's spreading behavior: the strength of the relationship between nodes, identification with the views expressed, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, a large area of information dissemination is still likely. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with the information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. The findings fit social networking features and can help to master the behavior of users and to understand and analyze the characteristics of information dissemination in social networks. PMID:25110738
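
The record does not reproduce the paper's evolution equations; the following is a hedged SIR-style stand-in in which a spread-willingness factor w scales the contagion rate, integrated with Euler steps (the functional form and all rate values are illustrative assumptions, not the authors' equations):

```python
def simulate(beta, mu, w, days=60, dt=0.1, s0=0.99, i0=0.01):
    """Euler integration of SIR-like mean-field equations where the
    effective spreading rate is scaled by a spread-willingness factor w in [0, 1]."""
    S, I, R = s0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = w * beta * S * I * dt   # willingness-scaled contagion
        new_rec = mu * I * dt             # stifling / immunization
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        history.append(I)
    return S, I, R, max(history)

S, I, R, peak_hi = simulate(beta=0.8, mu=0.2, w=0.9)
_, _, _, peak_lo = simulate(beta=0.8, mu=0.2, w=0.3)
print(peak_hi, peak_lo)
```

Raising the willingness factor raises the effective reproduction number and hence the peak of active spreaders, mirroring the paper's qualitative finding that stronger willingness widens dissemination.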

  12. A spread willingness computing-based information dissemination model.

    Science.gov (United States)

    Huang, Haojing; Cui, Zhiming; Zhang, Shukui

    2014-01-01

    This paper constructs a spread-willingness-computing-based information dissemination model for social networks. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors that influence a user's spreading behavior: the strength of the relationship between nodes, identification with the views expressed, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, a large area of information dissemination is still likely. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with the information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. The findings fit social networking features and can help to master the behavior of users and to understand and analyze the characteristics of information dissemination in social networks.

  13. A Spread Willingness Computing-Based Information Dissemination Model

    Directory of Open Access Journals (Sweden)

    Haojing Huang

    2014-01-01

    This paper constructs a spread-willingness-computing-based information dissemination model for social networks. The model takes into account the impact of node degree and the dissemination mechanism, combines complex network theory with the dynamics of infectious diseases, and establishes dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors that influence a user's spreading behavior: the strength of the relationship between nodes, identification with the views expressed, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, a large area of information dissemination is still likely. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with the information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. The findings fit social networking features and can help to master the behavior of users and to understand and analyze the characteristics of information dissemination in social networks.

  14. Construction project investment control model based on instant information

    Institute of Scientific and Technical Information of China (English)

    WANG Xue-tong

    2006-01-01

    Changes in construction conditions always influence project investment by causing the loss of construction work time and extending the duration. To address the difficulty of dynamic control in construction work planning, this article presents a concept of instant optimization that adjusts the operation time of each working procedure to minimize investment change. Based on this concept, its mathematical model is established and a strict mathematical justification is performed. The instant optimization model takes advantage of instant information arising during the construction process to complete timely adjustments of construction; thus the cost efficiency of project investment is maximized.

  15. An information spreading model based on online social networks

    Science.gov (United States)

    Wang, Tao; He, Juanjuan; Wang, Xiaoxia

    2018-01-01

    Online social platforms have become very popular in recent years. In addition to spreading information, users can review or collect information on online social platforms. According to the information spreading rules of online social networks, a new information spreading model, the IRCSS model, is proposed in this paper. It includes a sharing mechanism, a reviewing mechanism, a collecting mechanism, and a stifling mechanism. Mean-field equations are derived to describe the dynamics of the IRCSS model. Moreover, the steady states of reviewers, collectors and stiflers, and the effects of parameters on the peak values of reviewers, collectors and sharers, are analyzed. Finally, numerical simulations are performed on different networks. Results show that the collecting and reviewing mechanisms, as well as the connectivity of the network, make information travel wider and faster; compared to the WS and ER networks, the speed of reviewing, sharing and collecting information is fastest on the BA network.

  16. A stream-based mathematical model for distributed information processing systems - SysLab system model

    OpenAIRE

    Klein, Cornel; Rumpe, Bernhard; Broy, Manfred

    2014-01-01

    In the SysLab project we develop a software engineering method based on a mathematical foundation. The SysLab system model serves as an abstract mathematical model for information systems and their components. It is used to formalize the semantics of all description techniques used, such as object diagrams, state automata, sequence charts, or data-flow diagrams. Based on the requirements for such a reference model, we define the system model, including its different views and their relationships.

  17. Detecting Hotspot Information Using Multi-Attribute Based Topic Model.

    Directory of Open Access Journals (Sweden)

    Jing Wang

    Microblogging as a kind of social network has become more and more important in our daily lives. Enormous amounts of information are produced and shared on a daily basis. Detecting hot topics in this mountain of information can help people get to the essential information more quickly. However, due to short and sparse features, a large number of meaningless tweets, and other characteristics of microblogs, traditional topic detection methods are often ineffective in detecting hot topics. In this paper, we propose a new topic model named multi-attribute latent Dirichlet allocation (MA-LDA), in which the time and hashtag attributes of microblogs are incorporated into the LDA model. By introducing the time attribute, the MA-LDA model can decide whether a word should appear in hot topics or not. Meanwhile, compared with the traditional LDA model, applying the hashtag attribute in MA-LDA gives the core words an artificially high ranking in the results, meaning the expressiveness of outcomes can be improved. Empirical evaluations on real data sets demonstrate that our method is able to detect hot topics more accurately and efficiently than several baselines. Our method provides strong evidence of the importance of the temporal factor in extracting hot topics.
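
A full MA-LDA implementation requires a Gibbs sampler; as a toy illustration only of the two attributes the model adds, the sketch below boosts hashtag terms and filters by a time window when ranking candidate hot-topic words (the sample tweets, window, and boost value are all assumptions):

```python
from collections import Counter

# (text, posting hour) pairs -- illustrative data only.
tweets = [
    ("earthquake #tokyo now", 10),
    ("big #earthquake felt in tokyo", 10),
    ("my lunch today", 3),
    ("#earthquake aftershock reported", 11),
]

def hot_terms(tweets, window=(9, 12), hashtag_boost=2.0):
    """Score terms by frequency inside a time window, boosting hashtag terms.
    A toy stand-in for MA-LDA's time and hashtag attributes."""
    scores = Counter()
    for text, hour in tweets:
        if not (window[0] <= hour <= window[1]):
            continue  # outside the burst window: drop (time attribute)
        for tok in text.split():
            if tok.startswith("#"):
                scores[tok.lstrip("#")] += hashtag_boost  # hashtag attribute
            else:
                scores[tok] += 1.0
    return scores.most_common()

ranking = hot_terms(tweets)
print(ranking)
```

In MA-LDA these two signals enter the topic model's word distributions rather than a flat score, but the effect sketched here is the same: bursty, hashtagged core words rise to the top.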

  18. Detecting Hotspot Information Using Multi-Attribute Based Topic Model

    Science.gov (United States)

    Wang, Jing; Li, Li; Tan, Feng; Zhu, Ying; Feng, Weisi

    2015-01-01

    Microblogging as a kind of social network has become more and more important in our daily lives. Enormous amounts of information are produced and shared on a daily basis. Detecting hot topics in this mountain of information can help people get to the essential information more quickly. However, due to short and sparse features, a large number of meaningless tweets, and other characteristics of microblogs, traditional topic detection methods are often ineffective in detecting hot topics. In this paper, we propose a new topic model named multi-attribute latent Dirichlet allocation (MA-LDA), in which the time and hashtag attributes of microblogs are incorporated into the LDA model. By introducing the time attribute, the MA-LDA model can decide whether a word should appear in hot topics or not. Meanwhile, compared with the traditional LDA model, applying the hashtag attribute in MA-LDA gives the core words an artificially high ranking in the results, meaning the expressiveness of outcomes can be improved. Empirical evaluations on real data sets demonstrate that our method is able to detect hot topics more accurately and efficiently than several baselines. Our method provides strong evidence of the importance of the temporal factor in extracting hot topics. PMID:26496635

  19. Avian Information Systems: Developing Web-Based Bird Avoidance Models

    Directory of Open Access Journals (Sweden)

    Judy Shamoun-Baranes

    2008-12-01

    Collisions between aircraft and birds, so-called "bird strikes," can result in serious damage to aircraft and even in the loss of lives. Information about the distribution of birds in the air and on the ground can be used to reduce the risk of bird strikes and their impact on operations en route and in and around airfields. Although a wealth of bird distribution and density data is collected by numerous organizations, these data are neither readily available nor interpretable by aviation. This paper presents two national efforts, one in the Netherlands and one in the United States, to develop bird avoidance models for aviation. These models integrate data and expert knowledge on bird distributions and migratory behavior to provide hazard maps in the form of GIS-enabled Web services. Both models are in operational use for flight planning and flight alteration and for airfield and airfield-vicinity management. These models and their presentation on the Internet are examples of the type of service that would be very useful in other fields interested in species distribution and movement information, such as conservation, disease transmission and prevention, or assessment and mitigation of anthropogenic risks to nature. We expect that developments in cyber-technology, a transition toward an open-source philosophy, and higher demand for accessible biological data will result in an increase in the number of biological information systems available on the Internet.

  20. An Object-Oriented Information Model for Policy-based Management of Distributed Applications

    NARCIS (Netherlands)

    Diaz, G.; Gay, V.C.J.; Horlait, E.; Hamza, M.H.

    2002-01-01

    This paper presents an object-oriented information model to support policy-based management for distributed multimedia applications. The information base contains application-level information about the users, the applications, and their profiles. Our information model is described in detail.

  1. Fault diagnosis model for power transformers based on information fusion

    Science.gov (United States)

    Dong, Ming; Yan, Zhang; Yang, Li; Judd, Martin D.

    2005-07-01

    Methods used to assess the insulation status of power transformers before they deteriorate to a critical state include dissolved gas analysis (DGA), partial discharge (PD) detection, transfer function techniques, etc. All of these approaches require experience in order to correctly interpret the observations. Artificial intelligence (AI) is increasingly used to improve interpretation of the individual datasets. However, a satisfactory diagnosis may not be obtained if only one technique is used. For example, the exact location of PD cannot be predicted if only DGA is performed. However, using diverse methods may result in different diagnosis solutions, a problem that is addressed in this paper through the introduction of a fuzzy information fusion model. An inference scheme is proposed that yields consistent conclusions and manages the inherent uncertainty in the various methods. With the aid of information fusion, a framework is established that allows different diagnostic tools to be combined in a systematic way. The application of the information fusion technique to insulation diagnostics of transformers is shown to be promising by means of examples.
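
The paper's fuzzy inference scheme is not given in the abstract; the following is a minimal sketch of one simple way to fuse multi-method evidence, using reliability-weighted fuzzy memberships (all membership values, reliabilities, and hypothesis names are illustrative, not from the paper):

```python
# Membership degrees (0..1) of each fault hypothesis under each diagnostic method.
evidence = {
    "DGA":               {"thermal_fault": 0.7, "partial_discharge": 0.5, "normal": 0.1},
    "PD_detection":      {"thermal_fault": 0.2, "partial_discharge": 0.8, "normal": 0.1},
    "transfer_function": {"thermal_fault": 0.3, "partial_discharge": 0.6, "normal": 0.3},
}
# Assumed reliability of each method for this fault class.
reliability = {"DGA": 0.9, "PD_detection": 0.8, "transfer_function": 0.6}

def fuse(evidence, reliability):
    """Reliability-weighted average of fuzzy memberships, renormalized to sum to 1."""
    hypotheses = next(iter(evidence.values())).keys()
    raw = {h: sum(reliability[m] * evidence[m][h] for m in evidence) for h in hypotheses}
    total = sum(raw.values())
    return {h: v / total for h, v in raw.items()}

fused = fuse(evidence, reliability)
diagnosis = max(fused, key=fused.get)
print(diagnosis, fused)
```

The point of fusion is visible even in this toy: DGA alone slightly favors a thermal fault, but combining it with PD detection and transfer-function evidence yields a consistent partial-discharge diagnosis.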

  2. A Model Based on Cocitation for Web Information Retrieval

    Directory of Open Access Journals (Sweden)

    Yue Xie

    2014-01-01

    According to the relationship between authority and cocitation in HITS, we propose a new hyperlink weighting scheme to describe the strength of the relevancy between any two webpages. We then combine hyperlink weight normalization and the random-surfing scheme used in PageRank to justify the new model. In the new model based on cocitation (MBCC), the pages with stronger relevancy are assigned higher values, not just depending on the outlinks. This model combines features of both HITS and PageRank. Finally, we present the results of some numerical experiments, showing that the MBCC ranking agrees with the HITS ranking, especially in the top 10. Meanwhile, MBCC retains the superiority of PageRank, namely the existence and uniqueness of ranking vectors.
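
The exact MBCC weighting is not reproduced in the abstract; the sketch below illustrates the general idea, assuming link weights grow with cocitation strength (number of shared in-neighbours) before the weights are normalized and combined with a PageRank-style random-surfer term (the "1 +" base weight and tiny graph are assumptions):

```python
def cocitation_pagerank(links, d=0.85, iters=100):
    """Power iteration over a link graph whose hyperlink weights are based on
    cocitation strength, as a sketch of an MBCC-style ranking."""
    nodes = sorted(set(links) | {v for vs in links.values() for v in vs})
    inlinks = {n: {u for u, vs in links.items() if n in vs} for n in nodes}
    def weight(u, v):
        # 1 + number of pages citing both u and v: stronger relevancy, higher weight
        return 1.0 + len(inlinks[u] & inlinks[v])
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            flow = 0.0
            for u, vs in links.items():
                if n in vs:
                    total_w = sum(weight(u, v) for v in vs)  # normalize u's out-weights
                    flow += rank[u] * weight(u, n) / total_w
            new[n] = (1 - d) / len(nodes) + d * flow         # random-surfer term
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c", "a"]}
rank = cocitation_pagerank(links)
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

Because every page distributes its full rank through normalized weights, the total rank mass is conserved and the damping term guarantees a unique stationary vector, the PageRank property the abstract says MBCC retains.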

  3. Hybrid modelling framework by using mathematics-based and information-based methods

    International Nuclear Information System (INIS)

    Ghaboussi, J; Kim, J; Elnashai, A

    2010-01-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. The potential of the hybrid methodology is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.
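
The paper's informational components are autoprogressively trained neural networks; as a much simpler stand-in illustrating the same division of labour, the sketch below pairs an idealized mathematical model with an informational component that fits the model's residual by least squares (the "physics" law, the unmodelled term, and the basis choice are all assumptions for illustration):

```python
import math

# "Measured" response: a linear physics-like law plus an unmodelled periodic term.
def measured(x):
    return 2.0 * x + 0.5 * math.sin(3.0 * x)

def math_model(x):
    """Idealized mathematical model: captures the linear part, misses the sine term."""
    return 2.0 * x

# Informational component: fit the residual r(x) = measured - math_model
# by least squares on a single assumed basis function sin(3x).
xs = [i * 0.1 for i in range(100)]
res = [measured(x) - math_model(x) for x in xs]
basis = [math.sin(3.0 * x) for x in xs]
coef = sum(b * r for b, r in zip(basis, res)) / sum(b * b for b in basis)

def hybrid_model(x):
    """Mathematical model complemented by the data-fitted informational component."""
    return math_model(x) + coef * math.sin(3.0 * x)

err_math = max(abs(measured(x) - math_model(x)) for x in xs)
err_hybrid = max(abs(measured(x) - hybrid_model(x)) for x in xs)
print(coef, err_math, err_hybrid)
```

The hybrid's error collapses because the informational component models exactly the aspect the idealized equations left out; in the paper that component is learned from data rather than from a hand-picked basis.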

  4. Model based climate information on drought risk in Africa

    Science.gov (United States)

    Calmanti, S.; Syroka, J.; Jones, C.; Carfagna, F.; Dell'Aquila, A.; Hoefsloot, P.; Kaffaf, S.; Nikulin, G.

    2012-04-01

    The United Nations World Food Programme (WFP) has embarked upon the endeavor of creating a sustainable Africa-wide natural disaster risk management system. A fundamental building block of this initiative is the setup of a drought impact modeling platform called Africa RiskView that aims to quantify and monitor weather-related food security risk in Africa. The modeling approach is based on the Water Requirement Satisfaction Index (WRSI) as the fundamental indicator of agricultural performance, and uses historical records of food assistance operations to project future potential needs for livelihood protection. By using climate change scenarios as an input to Africa RiskView it is possible, in principle, to evaluate the future impact of climate variability on critical issues such as food security and the overall performance of the envisaged risk management system. A necessary preliminary step in this challenging task is the exploration of the sources of uncertainty affecting assessments based on modeled climate change scenarios. For this purpose, a limited set of climate models has been selected in order to verify the relevance of using climate model output data with Africa RiskView and to explore a minimal range of possible sources of uncertainty. This first evaluation exercise started before the setup of the CORDEX framework and relied on model output available at the time. In particular, only one regional downscaling was available for the entire African continent, from the ENSEMBLES project. The analysis shows that current coarse-resolution global climate models cannot directly feed into the Africa RiskView risk-analysis tool. However, regional downscaling may help correct the inherent biases observed in the datasets. Further analysis is performed using the first data available under the CORDEX framework. In particular, we consider a set of simulations driven with boundary conditions from the ERA-Interim reanalysis to evaluate the skill drought

  5. INFORMATION SYSTEM QUALITY INFLUENCE ON ORGANIZATION PERFORMANCE: A MODIFICATION OF TECHNOLOGY-BASED INFORMATION SYSTEM ACCEPTANCE AND SUCCESS MODEL

    Directory of Open Access Journals (Sweden)

    Trisnawati N.

    2017-12-01

    This study aims to examine the effect of information system quality on technology-based accounting information systems usage and its impact on organizational performance in local government. The study is based on the Technology Acceptance Model (TAM), the IS Success Model, and the success of technology-based information systems, combining previous studies by Seddon and Kiew (1997), Saeed and Helm (2008), and DeLone and McLean (1992). The study used a survey method and took 101 respondents from accounting staff working in Malang and Mojokerto regencies, using Partial Least Squares to examine the research data. The results show that information system quality affects perceived benefit and user satisfaction, and that technology-based accounting information systems usage in local government is influenced by perceived benefit and user satisfaction. The results conclude that technology-based accounting information systems usage affects the performance of local government organizations.

  6. An Abstraction-Based Data Model for Information Retrieval

    Science.gov (United States)

    McAllister, Richard A.; Angryk, Rafal A.

    Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entry point to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
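
WordNet access (e.g. via NLTK) is not assumed here; a toy hypernym table stands in for it to illustrate the core idea of indexing each document under every term on a word's abstraction path, so queries can match at any level of abstraction:

```python
# Toy hypernym table standing in for WordNet (a real system would query WordNet).
HYPERNYM = {
    "dog": "canine", "canine": "mammal", "cat": "feline", "feline": "mammal",
    "mammal": "animal", "animal": "entity",
}

def abstraction_path(word):
    """Walk hypernym links from a word up to the root, WordNet-style."""
    path = [word]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

def build_index(docs):
    """Inverted index keyed by every term on each word's abstraction path."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.split():
            for term in abstraction_path(word):
                index.setdefault(term, set()).add(doc_id)
    return index

docs = {1: "dog barks", 2: "cat sleeps", 3: "stock market"}
index = build_index(docs)
print(index["mammal"])
```

A query for "mammal" now retrieves both the dog and cat documents even though neither contains the word, which is the kind of recall gain abstraction-path indexing is meant to provide.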

  7. Landscape Epidemiology Modeling Using an Agent-Based Model and a Geographic Information System

    Directory of Open Access Journals (Sweden)

    S. M. Niaz Arifin

    2015-05-01

    A landscape epidemiology modeling framework is presented which integrates the simulation outputs from an established spatial agent-based model (ABM) of malaria with a geographic information system (GIS). For a study area in Kenya, five landscape scenarios are constructed with varying coverage levels of two mosquito-control interventions. For each scenario, maps are presented to show the average distributions of three output indices obtained from the results of 750 simulation runs. Hot spot analysis is performed to detect statistically significant hot spots and cold spots. Additional spatial analysis is conducted using ordinary kriging with circular semivariograms for all scenarios. The integration of epidemiological simulation-based results with spatial analysis techniques within a single modeling framework can be a valuable tool for conducting a variety of disease control activities, such as exploring new biological insights, monitoring epidemiological landscape changes, and guiding resource allocation for further investigation.

  8. Information Sharing In Shipbuilding based on the Product State Model

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    The paper provides a review of product modelling technologies and the overall architecture of the Product State Model (PSM) environment as a basis for how dynamically updated product data can improve the control of production activities. In particular, the paper focuses on the circumstances prevailing...

  9. Model of informational system for freight insurance automation based on digital signature

    OpenAIRE

    Maxim E. SLOBODYANYUK

    2009-01-01

    The article presents a model of an informational system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of the information flow in the model, and its components (modules) and their functions. It describes a method for calculating the costs of interactive cargo insurance via the proposed system and surveys the main characteristics and options of existing transport management systems and conceptual cost models.

  10. Model of informational system for freight insurance automation based on digital signature

    Directory of Open Access Journals (Sweden)

    Maxim E. SLOBODYANYUK

    2009-01-01

    The article presents a model of an informational system for freight insurance automation based on digital signatures, showing its architecture, a macro flowchart of the information flow in the model, and its components (modules) and their functions. It describes a method for calculating the costs of interactive cargo insurance via the proposed system and surveys the main characteristics and options of existing transport management systems and conceptual cost models.

  11. Temporal expectation and information processing: A model-based analysis

    NARCIS (Netherlands)

    Jepma, M.; Wagenmakers, E.-J.; Nieuwenhuis, S.

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information

  12. Temporal Expectation and Information Processing: A Model-Based Analysis

    Science.gov (United States)

    Jepma, Marieke; Wagenmakers, Eric-Jan; Nieuwenhuis, Sander

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information processing are speeded when subjects use such…

  13. MDA-based interoperability establishment using language independent information models

    OpenAIRE

    Agostinho C.; Cerny J.; Jardim-Goncalves R.

    2012-01-01

    Part 2: Full Papers; International audience; Nowadays, more and more enterprises realize that one important step to success in their business is to create new and innovative products. Often the solution is to abandon the idea of the enterprise as an "isolated island" and to collaborate with others: worldwide non-hierarchical networks are characterized by collaboration and non-centralized decision making. This paper proposes a conceptual model common to the entire business n...

  14. Research on information models for the construction schedule management based on the IFC standard

    Directory of Open Access Journals (Sweden)

    Weirui Xue

    2015-05-01

    Purpose: The purpose of this article is to study the description and extension of the Industry Foundation Classes (IFC) standard in construction schedule management, which enables information exchange and sharing among different information systems and stakeholders and facilitates collaborative construction in construction projects. Design/methodology/approach: Processing and coordinating schedule information is difficult in complex construction projects. Building Information Modeling (BIM) provides a platform for exchanging and sharing information among information systems and stakeholders based on the IFC standard. By analyzing schedule planning, implementation, checking, and control, the information flow in schedule management is described using IDEF. Based on IFC4, an information model for schedule management is established that covers not only every aspect of schedule management but also cost, resource, quality, and risk management. Findings: The information requirements for construction schedule management fall into three aspects: schedule plan information, implementation information, and checking and control information. All three can be described through existing and extended IFC4 entities, from which the information models are established. Originality/value: The main contribution of the article is the construction schedule management information model, which enables information exchange and sharing in construction projects and facilitates the development of application software that meets project requirements.

  15. NEMO. Netherlands Energy demand MOdel. A top-down model based on bottom-up information

    International Nuclear Information System (INIS)

    Koopmans, C.C.; Te Velde, D.W.; Groot, W.; Hendriks, J.H.A.

    1999-06-01

    The NEMO model links energy use to other production factors, (physical) production, energy prices, technological trends and government policies. It uses a 'putty-semiputty' vintage production structure in which new investments, adaptations to existing capital goods (retrofit) and 'good housekeeping' are distinguished. Price elasticities are relatively large in the long term and small in the short term. Most predictions of energy use are based either on econometric models or on 'bottom-up information', i.e. disaggregated lists of technical possibilities for, and costs of, saving energy. Typically, one predicts larger energy-efficiency improvements using bottom-up information than using econometric ('top-down') models. We bridged this so-called 'energy-efficiency gap' by designing our macro/meso model NEMO so that bottom-up (micro) information can be used to estimate most model parameters. In our view, reflected in NEMO, the energy-efficiency gap arises for two reasons. The first is that firms and households use a fairly high discount rate of 15% when evaluating the profitability of energy-efficiency improvements. The second is that our bottom-up information ('ICARUS') for most economic sectors does not (as NEMO does) take account of the fact that new, energy-efficient technology enters the capital stock only gradually. Parameter estimates for 19 sectors point to a long-term technological energy-efficiency improvement trend in Netherlands final energy use of 0.8% per year. The long-term price elasticity is estimated to be 0.29. These values are comparable to other studies based on time-series data. Simulations of the effects of the oil price shocks of the seventies and the subsequent fall of oil prices show that NEMO's price elasticities are consistent with historical data. However, the present pace at which new technologies become available (reflected in NEMO) appears to be lower than in the seventies and eighties. This suggests that it
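
    The 15% discount-rate test attributed to firms and households can be illustrated with a small net-present-value check (the investment figures below are invented for illustration):

```python
# Hypothetical numbers: does an efficiency investment pay off at the 15%
# discount rate the NEMO abstract attributes to firms and households?
def npv(investment, annual_saving, years, rate):
    """Net present value of an up-front investment with constant yearly savings."""
    return -investment + sum(annual_saving / (1 + rate) ** t
                             for t in range(1, years + 1))

# A 1000-euro retrofit saving 200 euro/year for 10 years:
print(round(npv(1000, 200, 10, 0.15), 2))  # just barely profitable at 15%
print(round(npv(1000, 200, 10, 0.05), 2))  # clearly profitable at 5%
```

    The same project that clears a low social discount rate easily is marginal at 15%, which is how a high implicit discount rate shrinks the set of efficiency measures firms actually adopt.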

  16. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  17. A rule-based backchannel prediction model using pitch and pause information

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a

  18. Russian and Foreign Experience of Integration of Agent-Based Models and Geographic Information Systems

    Directory of Open Access Journals (Sweden)

    Konstantin Anatol’evich Gulin

    2016-11-01

    The article provides an overview of the mechanisms for integrating agent-based models and GIS technology developed by Russian and foreign researchers. Its framework rests on a critical analysis of domestic and foreign literature (monographs and scientific articles), applying universal research methods: the systems approach, analysis and synthesis, classification, systematization and grouping, generalization and comparison. The article presents the theoretical and methodological bases for integrating agent-based models and geographic information systems. The concept and essence of agent-based models are explained and their main advantages (compared to other modeling methods) identified. The paper characterizes the operating environment of agents as a key concept in the theory of agent-based modeling. It shows that geographic information systems offer a wide range of information resources for calculation, search, and modeling of the real world in various aspects, acting as an effective tool for displaying the agents' operating environment and bringing the model as close as possible to real conditions. The authors also point to the wide range of possibilities for research in different spatial and temporal contexts, and carry out a comparative analysis of platforms supporting the integration of agent-based models and geographic information systems. Examples of complex socio-economic models are given: a creative-city model and a humanitarian assistance model. In the absence of standards for describing research results, the authors focus on model elements such as the characteristics of the agents and their operating environment, agent behavior, and the rules of interaction between the agents and the external environment. The paper describes the possibilities and prospects of implementing these models.

  19. Information driving force and its application in agent-based modeling

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2018-04-01

    Exploring the scientific impact of online big data has attracted much attention from researchers in different fields in recent years. Complex financial systems are typical open systems profoundly influenced by external information. Based on large-scale data from the public media and stock markets, we first define an information driving force and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric in the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Notably, all the key parameters are determined from the empirical analysis rather than by statistical fitting of the simulation results. With our model, both stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.

  20. QUALITY INSPECTION AND ANALYSIS OF THREE-DIMENSIONAL GEOGRAPHIC INFORMATION MODEL BASED ON OBLIQUE PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    S. Dong

    2018-04-01

    In order to promote the construction of the digital geo-spatial framework in China and accelerate the construction of an informatized mapping system, the three-dimensional geographic information model has emerged. Three-dimensional geographic information models based on oblique photogrammetry offer higher accuracy, shorter production periods and lower cost than traditional methods, and reflect the elevation, position and appearance of features more directly. The technology for producing such models is developing rapidly: market demand and model output have grown substantially, and the associated quality inspection needs are growing with them. A review of the relevant literature shows extensive research on the basic principles and technical characteristics of this technology but comparatively little on quality inspection and analysis. After summarizing the basic principles and technical characteristics of oblique photogrammetry, this paper introduces the inspection contents and methods for three-dimensional geographic information models based on the technology. Drawing on actual inspection work, it summarizes the quality problems of such models, analyzes their causes and puts forward quality control measures. The paper provides technical guidance for the quality inspection of three-dimensional geographic information model data products based on oblique photogrammetry in China and technical support for the vigorous development of such models.

  1. Quality Inspection and Analysis of Three-Dimensional Geographic Information Model Based on Oblique Photogrammetry

    Science.gov (United States)

    Dong, S.; Yan, Q.; Xu, Y.; Bai, J.

    2018-04-01

    In order to promote the construction of the digital geo-spatial framework in China and accelerate the construction of an informatized mapping system, the three-dimensional geographic information model has emerged. Three-dimensional geographic information models based on oblique photogrammetry offer higher accuracy, shorter production periods and lower cost than traditional methods, and reflect the elevation, position and appearance of features more directly. The technology for producing such models is developing rapidly: market demand and model output have grown substantially, and the associated quality inspection needs are growing with them. A review of the relevant literature shows extensive research on the basic principles and technical characteristics of this technology but comparatively little on quality inspection and analysis. After summarizing the basic principles and technical characteristics of oblique photogrammetry, this paper introduces the inspection contents and methods for three-dimensional geographic information models based on the technology. Drawing on actual inspection work, it summarizes the quality problems of such models, analyzes their causes and puts forward quality control measures. The paper provides technical guidance for the quality inspection of three-dimensional geographic information model data products based on oblique photogrammetry in China and technical support for the vigorous development of such models.

  2. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator, addressing the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. We then investigate a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  3. Stochastic Modeling of Usage Patterns in a Web-Based Information System.

    Science.gov (United States)

    Chen, Hui-Min; Cooper, Michael D.

    2002-01-01

    Uses continuous-time stochastic models, mainly based on semi-Markov chains, to derive user state transition patterns, both in rates and in probabilities, in a Web-based information system. Describes search sessions from transaction logs of the University of California's MELVYL library catalog system and discusses sequential dependency. (Author/LRW)
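
    The discrete skeleton of such an analysis, estimating state-transition probabilities from logged sessions, can be sketched as follows (the session data are hypothetical; a semi-Markov model would additionally model the time spent in each state):

```python
# Estimate user state-transition probabilities from logged search sessions.
# Sessions are hypothetical; states mimic catalog-use actions.
from collections import Counter, defaultdict

sessions = [
    ["start", "search", "display", "search", "display", "end"],
    ["start", "search", "display", "end"],
]

# Count transitions between consecutive states across all sessions.
counts = defaultdict(Counter)
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# Normalize counts into transition probabilities per state.
probs = {state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
         for state, c in counts.items()}
print(probs["display"])  # display -> search with prob 1/3, -> end with prob 2/3
```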

  4. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    Science.gov (United States)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.

  5. Ontological Model-Based Transparent Access To Information In A Medical Multi-Agent System

    Directory of Open Access Journals (Sweden)

    Felicia GÎZĂ-BELCIUG

    2012-01-01

    Obtaining a patient's full electronic medical record is an important step in providing quality medical care, but the data in health-unit information systems are highly heterogeneous because each unit may store patients' medical data under a different model. To achieve interoperability and integration of data from the various medical units that hold partial patient medical information, this paper proposes an approach based on multi-agent systems and ontologies. We present an ontological model describing the structure of the data integration process; the system is intended to centralize the information from a patient's partial medical records. The main advantage of the proposed model is the low ratio between the complexity of the model and the amount of information that can be retrieved when generating a patient's complete medical history.

  6. Implementation of Model View Controller (Mvc) Architecture on Building Web-based Information System

    OpenAIRE

    'Uyun, Shofwatul; Ma'arif, Muhammad Rifqi

    2010-01-01

    The purpose of this paper is to introduce the use of the MVC architecture in web-based information systems development. The MVC (Model-View-Controller) architecture is a way to decompose an application into three parts: model, view and controller, originally applied to the graphical user interaction cycle of input, processing and output. Using the MVC architecture, applications can be built to be more modular, reusable, and easier to maintain and migrate. We have developed a management system of sch...

  7. IMPLEMENTATION OF MODEL VIEW CONTROLLER (MVC) ARCHITECTURE ON BUILDING WEB-BASED INFORMATION SYSTEM

    OpenAIRE

    'Uyun, Shofwatul; Ma'arif, Muhammad Rifqi

    2010-01-01

    The purpose of this paper is to introduce the use of the MVC architecture in web-based information systems development. The MVC (Model-View-Controller) architecture is a way to decompose an application into three parts: model, view and controller, originally applied to the graphical user interaction cycle of input, processing and output. Using the MVC architecture, applications can be built to be more modular, reusable, and easier to maintain and migrate. We have developed a management system of sch...
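
    The decomposition the abstract describes can be illustrated with a minimal sketch (a generic example, not the authors' school-management system):

```python
# Minimal MVC decomposition: the model holds state, the view formats output,
# and the controller mediates input -> model update -> view rendering.
class Model:
    def __init__(self):
        self.students = []

    def add(self, name):
        self.students.append(name)

class View:
    @staticmethod
    def render(students):
        return "Students: " + ", ".join(students)

class Controller:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def register(self, name):          # handle one user action end to end
        self.model.add(name)
        return self.view.render(self.model.students)

ctrl = Controller(Model(), View())
print(ctrl.register("Ana"))   # Students: Ana
print(ctrl.register("Budi"))  # Students: Ana, Budi
```

    Because the three parts only meet in the controller, the view can be swapped (say, HTML instead of plain text) without touching the model, which is the modularity benefit the abstract claims.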

  8. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated
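
    The core of the concept-space idea, the same document scoring differently against different users because each personalized concept-base projects it onto different concepts, can be sketched as cosine similarity over concept vectors (all vectors below are invented for illustration):

```python
# Concept-based similarity sketch: one document, two personalized projections.
import math

def cosine(u, v):
    """Cosine similarity of two sparse concept vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# The same document projected onto each user's personal concept-base:
doc_for_alice = {"ml": 0.9, "search": 0.1}
doc_for_bob   = {"db": 0.7, "search": 0.3}
alice_interest = {"ml": 1.0}
bob_interest   = {"ml": 1.0}

print(round(cosine(doc_for_alice, alice_interest), 3))  # high: worth recommending
print(round(cosine(doc_for_bob, bob_interest), 3))      # zero: not shared
```

    Identical interests, different concept-spaces, different similarities: this is why the model treats a resource as "shared" only when it is important to both users.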

  9. An Improved Information Value Model Based on Gray Clustering for Landslide Susceptibility Mapping

    Directory of Open Access Journals (Sweden)

    Qianqian Ba

    2017-01-01

    Landslides, as geological hazards, cause significant casualties and economic losses, so it is necessary to identify landslide-prone areas for prevention work. This paper proposes an improved information value model based on gray clustering (IVM-GC) for landslide susceptibility mapping. The method uses the information value derived from an information value model to achieve susceptibility classification and weight determination of landslide predisposing factors and hence obtains the landslide susceptibility of each study unit through clustering analysis. Using a landslide inventory of Chongqing, China, containing 8435 landslides, three landslide susceptibility maps were generated with the common information value model (IVM), an information value model improved by an analytic hierarchy process (IVM-AHP), and the new improved model. Approximately 70% (5905) of the inventory landslides were used to generate the susceptibility maps, while the remaining 30% (2530) were used to validate the results. The training accuracies of the IVM, IVM-AHP and IVM-GC were 81.8%, 78.7% and 85.2%, respectively, and the prediction accuracies were 82.0%, 78.7% and 85.4%, respectively. The results demonstrate that all three methods perform well in evaluating landslide susceptibility; among them, IVM-GC performs best.
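
    The information value at the heart of all three variants can be sketched with hypothetical counts: a class's information value is the log ratio of its landslide density to the overall density, positive where landslides are over-represented.

```python
# Core of the information value model (IVM), with invented counts:
# IV = ln( (landslides in class / all landslides) / (class area / total area) ).
import math

classes = {            # factor class -> (landslide count, cell count)
    "steep":  (60, 100),
    "gentle": (10, 200),
}
N = sum(n for n, _ in classes.values())  # total landslides
S = sum(s for _, s in classes.values())  # total cells

iv = {c: math.log((n / N) / (s / S)) for c, (n, s) in classes.items()}
print(round(iv["steep"], 3))   # positive: more prone than average
print(round(iv["gentle"], 3))  # negative: more stable than average
```

    Summing the IV of every factor class covering a map unit gives that unit's susceptibility score; the paper's IVM-GC variant then replaces the simple summation step with gray clustering.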

  10. [Development method of healthcare information system integration based on business collaboration model].

    Science.gov (United States)

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, people participating in healthcare information system integration projects usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. This paper proposes a method that uses Business Process Model and Notation (BPMN) to model integration requirements and automatically transforms them into an executable integration configuration. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations, and an integration case in a radiology scenario was used to verify the method.

  11. An Elaboration of a Strategic Alignment Model of University Information Systems based on SAM Model

    Directory of Open Access Journals (Sweden)

    S. Ahriz

    2018-02-01

    An information system underpins a university's ability to anticipate the functions essential to its development and durability. The alignment of the information system, one of the pillars of IT governance, has become a necessity. In this paper, we consider the problem of implementing a strategic alignment model in Moroccan universities. The literature reveals that few studies have examined strategic alignment in the public sector, particularly in higher education institutions; hence we opted for an exploratory approach aimed at better understanding strategic alignment and evaluating the degree of its use within Moroccan universities. Data gained primarily through interviews with top managers and IT managers reveal that alignment is not formalized and that it would be appropriate to implement an alignment model. Implementing our proposed model can help managers maximize returns on IT investment and increase their efficiency.

  12. The Research on Informal Learning Model of College Students Based on SNS and Case Study

    Science.gov (United States)

    Lu, Peng; Cong, Xiao; Bi, Fangyan; Zhou, Dongdai

    2017-03-01

    With the rapid development of network technology, online informal learning has become the main way for college students to acquire knowledge in a variety of subjects. Students' fondness for SNS communities and the characteristics of SNS itself provide a good opportunity for the informal learning of college students. This research first analyzes related work on informal learning and SNS, then discusses the characteristics of informal learning and its theoretical basis. It proposes an informal learning model for college students based on SNS, grounded in the support SNS provides for students' informal learning. Finally, according to the theoretical model and the principles proposed in this study, the informal learning community is implemented using Elgg, an open-source SNS program, and related tools. The research attempts to overcome issues such as the lack of social realism, interactivity, and resource transfer modes in current online informal learning communities, so as to provide a new way of informal learning for college students.

  13. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information across its life-cycle require standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are large and disorganized. Data exchange specifications (DEXs) and templates provide reorganized information models for the data exchange of specific activities in various businesses, showing that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substances and materials in products have to be screened closely by manufacturing industries, which requires fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly under worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly, making it difficult to interpret the data for different product life-cycle phases and application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  14. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  15. On the impact of information delay on location-based relaying: a markov modeling approach

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2012-01-01

    For centralized selection of communication relays, the necessary decision information needs to be collected from the mobile nodes by the access point (centralized decision point). In mobile scenarios, the required information collection and forwarding delays will affect the reliability of the collected information and hence will influence the performance of the relay selection method. This paper analyzes this influence in the decision process for the example of a mobile location-based relay selection approach using a continuous time Markov chain model. The model is used to obtain optimal relay...
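
    A toy version of the delay-versus-reliability trade-off (a deliberate simplification, not the paper's full continuous-time Markov chain): if the relay-relevant state of a node changes at exponential rate lam, information collected d seconds ago is still valid with probability exp(-lam * d).

```python
# Staleness sketch: exponential state changes make collected location
# information decay in validity as the collection/forwarding delay grows.
import math

def info_valid_prob(lam, delay):
    """P(no state change during `delay`) for an exponential change rate `lam`."""
    return math.exp(-lam * delay)

for delay in (0.1, 0.5, 2.0):  # seconds of collection/forwarding delay
    print(delay, round(info_valid_prob(0.5, delay), 3))
```

    The monotone decay is the qualitative effect the paper quantifies: the longer the access point's information pipeline, the less its relay decision reflects the current network state.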

  16. Does the Model of Evaluation Based on Fair Value Answer the Requests of Financial Information Users?

    OpenAIRE

    Mitea Neluta; Sarac Aldea Laura

    2010-01-01

    Does the model of evaluation based on fair value answer the requests of financial information users? The financial statements have as their purpose the presentation of information concerning the enterprise's financial position and the performance and changes of this position which, according to the IASB and FASB, must be credible and useful. Both referentials maintain the existence of several measurement conventions, such as historical cost, current cost, realizable value or act…

  17. Variable cycle control model for intersection based on multi-source information

    Science.gov (United States)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account the variable cell length, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model is put forward for cycle length optimization considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.

  18. Modeling and Security Threat Assessments of Data Processed in Cloud Based Information Systems

    Directory of Open Access Journals (Sweden)

    Darya Sergeevna Simonenkova

    2016-03-01

    Full Text Available The subject of the research is modeling and security threat assessment of data processed in cloud based information systems (CBIS). The method allows determination of the current security threats to a CBIS, the states of the system in which vulnerabilities exist, the level of possible violators and the security properties, and generates recommendations for neutralizing the security threats to the CBIS.

  19. Sustainable Manufacturing via Multi-Scale, Physics-Based Process Modeling and Manufacturing-Informed Design

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-04-01

    This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.

  20. A Model for Web-based Information Systems in E-Retailing.

    Science.gov (United States)

    Wang, Fang; Head, Milena M.

    2001-01-01

    Discusses the use of Web-based information systems (WIS) by electronic retailers to attract and retain consumers and deliver business functions and strategy. Presents an abstract model for WIS design in electronic retailing; discusses customers, business determinants, and business interface; and suggests future research. (Author/LRW)

  1. Quantum-like model of processing of information in the brain based on classical electromagnetic field.

    Science.gov (United States)

    Khrennikov, Andrei

    2011-09-01

    We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on the representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has the covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    Science.gov (United States)

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available and simple assumptions are often used in atmospheric models. As a contribution to improve knowledge on emission heights this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of the probably most comprehensive database of real-world stack information existing in Europe based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
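    The bottom-up calculation of effective emission heights described above can be illustrated with a minimal sketch (not the authors' code): the effective height is the physical stack height plus a plume rise, here computed with the classical Holland (1953) formula, one of the simple expressions used with Gaussian dispersion models. All stack parameter values below are hypothetical.

    ```python
    def effective_stack_height(h_stack, v_gas, d_stack, t_gas, t_air, u_wind,
                               p_mbar=1013.0):
        """Effective emission height = physical stack height + plume rise.

        Plume rise via the Holland (1953) formula:
            dh = (v_s * d / u) * (1.5 + 2.68e-3 * p * d * (T_s - T_a) / T_s)
        with v_s the flue gas exit velocity (m/s), d the stack diameter (m),
        u the wind speed (m/s), p the pressure (mbar), temperatures in kelvin.
        """
        dh = (v_gas * d_stack / u_wind) * (
            1.5 + 2.68e-3 * p_mbar * d_stack * (t_gas - t_air) / t_gas
        )
        return h_stack + dh

    # Hypothetical large point source: 100 m stack, hot buoyant flue gas.
    h_hot = effective_stack_height(100.0, 10.0, 2.0, 400.0, 283.0, 5.0)
    # Same stack with flue gas at ambient temperature: only momentum rise.
    h_cold = effective_stack_height(100.0, 10.0, 2.0, 283.0, 283.0, 5.0)
    ```

    As the abstract notes, the driving parameters are the stack height and the flue gas temperature, velocity and flow rate; the sketch shows why assuming release at the physical stack height alone can significantly misplace the emission in the vertical.
    
    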

  3. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge about contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  4. In-House Communication Support System Based on the Information Propagation Model Utilizes Social Network

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies are now utilizing computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communications only within the same department. Therefore, in our research, we propose an in-house communication support system which is based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network, and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and such exchanges of information could help to build new relationships between the users who are apart on the social network.

  5. Analysis of the Effect of Information System Quality to Intention to Reuse of Employee Management Information System (Simpeg Based on Information Systems Success Model

    Directory of Open Access Journals (Sweden)

    Suryanto Tri Lathif Mardi

    2016-01-01

    Full Text Available This study examines the effect of Information Quality, System Quality and Service Quality on the user intention to reuse the Employee Management Information System (SIMPEG) in universities in the city of Surabaya, based on the theoretical foundation of the DeLone and McLean Information Systems Success (ISS) Model. The questionnaire was distributed to 120 employees of different universities by means of stratified random sampling. The results showed that: (1) there is a significant positive effect of System Quality on Information Quality; (2) there is a significant positive effect of Information Quality on the Intention to Reuse, related to the fulfillment of the user's needs; (3) there is a significant positive effect of System Quality on the Intention to Reuse, related to the fulfillment of the users' needs; (4) there is no effect of Service Quality on the Intention to Reuse. In the end, the results of this study provide analysis and advice to university officials that can be used as a consideration for Information Technology/Information System investment and development in accordance with the Information System Success and Intention to Reuse model.

  6. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics..

  7. New Challenges for the Management of the Development of Information Systems Based on Complex Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    2002-01-01

    The advancements in complexity and sophistication of mathematical models for manufacturing scheduling and control and the increase of the ratio power/cost of computers are beginning to provide the manufacturing industry with new software tools to improve production. A Danish action research project has been initiated with the scope of investigating the questions that mathematical modelling technology poses to traditional information systems development projects. Based on the past body of research, this study proposes a framework to guide decision making for managing projects of information systems development. In a presented case the indications of the model are compared with the decisions taken during the development. The results highlight discrepancies between the structure and predictions of the model and the case observations, especially with regard to the importance given to the users…

  8. Advanced model for expansion of natural gas distribution networks based on geographic information systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Rosado, I.J.; Fernandez-Jimenez, L.A.; Garcia-Garrido, E.; Zorzano-Santamaria, P.; Zorzano-Alba, E. [La Rioja Univ., La Rioja (Spain). Dept. of Electrical Engineering; Miranda, V.; Montneiro, C. [Porto Univ., Porto (Portugal). Faculty of Engineering]|[Inst. de Engenharia de Sistemas e Computadores do Porto, Porto (Portugal)

    2005-07-01

    An advanced geographic information system (GIS) model of natural gas distribution networks was presented. The raster-based model was developed to evaluate costs associated with the expansion of electrical networks due to increased demand in the La Rioja region of Spain. The model was also used to evaluate costs associated with maintenance and amortization of the already existing distribution network. Expansion costs of the distribution network were modelled in various demand scenarios. The model also considered a variety of technical factors associated with pipeline length and topography. Soil and slope data from previous pipeline projects were used to estimate real costs per unit length of pipeline. It was concluded that results obtained by the model will be used by planners to select zones where expansion is economically feasible. 4 refs., 5 figs.

  9. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.

  10. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
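    The "maximum information content, minimum redundancy" criterion can be sketched as a greedy subset search over candidate cross sections (a simplified illustration, not the authors' algorithm): Shannon entropy of a quantized elevation series measures its information content, and conditional entropy given the already selected sections penalizes redundancy. The elevation series below are toy data.

    ```python
    import numpy as np
    from collections import Counter

    def entropy(x, bins=8):
        """Shannon entropy (bits) of a quantized series."""
        q = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
        p = np.array(list(Counter(q).values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    def joint_entropy(x, y, bins=8):
        """Joint Shannon entropy (bits) of two quantized series."""
        qx = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
        qy = np.digitize(y, np.histogram_bin_edges(y, bins=bins))
        p = np.array(list(Counter(zip(qx, qy)).values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    def greedy_select(series, k):
        """Greedily pick k series: start from the most informative one, then
        add the series with the largest conditional entropy H(X|chosen),
        i.e. the most new (non-redundant) information."""
        chosen = [max(range(len(series)), key=lambda i: entropy(series[i]))]
        while len(chosen) < k:
            best = max((i for i in range(len(series)) if i not in chosen),
                       key=lambda i: min(joint_entropy(series[i], series[j])
                                         - entropy(series[j]) for j in chosen))
            chosen.append(best)
        return chosen

    s0 = np.array([1., 5., 2., 8., 3., 9., 4., 7., 0., 6.])  # informative section
    s1 = s0.copy()                                           # fully redundant copy
    s2 = np.array([0., 0., 1., 1., 0., 0., 1., 1., 1., 0.])  # less informative, but new
    picked = greedy_select([s0, s1, s2], 2)
    ```

    The duplicate series `s1` carries no information beyond `s0`, so the greedy step prefers `s2` even though its individual entropy is lower: redundancy, not raw information content, decides the second pick.
    
    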

  11. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  12. Geographic information system-coupling sediment delivery distributed modeling based on observed data.

    Science.gov (United States)

    Lee, S E; Kang, S H

    2014-01-01

    Spatially distributed sediment delivery (SEDD) models are of great interest in estimating the expected effect of changes on soil erosion and sediment yield. However, they can only be applied if the model can be calibrated using observed data. This paper presents a geographic information system (GIS)-based method to calculate the sediment discharge from basins to coastal areas. For this, an SEDD model, with a sediment rating curve method based on observed data, is proposed and validated. The model proposed here has been developed using the combined application of the revised universal soil loss equation (RUSLE) and a spatially distributed sediment delivery ratio, within Model Builder of ArcGIS's software. The model focuses on spatial variability and is useful for estimating the spatial patterns of soil loss and sediment discharge. The model consists of two modules, a soil erosion prediction component and a sediment delivery model. The integrated approach allows for relatively practical and cost-effective estimation of spatially distributed soil erosion and sediment delivery, for gauged or ungauged basins. This paper provides the first attempt at estimating sediment delivery ratio based on observed data in the monsoon region of Korea.
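    The RUSLE-plus-delivery-ratio combination the abstract describes can be sketched in a few lines (a toy illustration, not the authors' ArcGIS Model Builder implementation). The per-cell factor grids and the exponential travel-time form of the sediment delivery ratio, SDR_i = exp(-beta * t_i), are assumptions for the sketch.

    ```python
    import numpy as np

    # Hypothetical per-cell RUSLE factor grids (2x2 raster):
    R  = np.array([[100., 100.], [120., 120.]])   # rainfall erosivity
    K  = np.array([[0.30, 0.25], [0.30, 0.25]])   # soil erodibility
    LS = np.array([[1.2, 0.8], [2.0, 1.5]])       # slope length/steepness
    C  = np.array([[0.20, 0.05], [0.20, 0.05]])   # cover management
    P  = np.ones((2, 2))                          # support practice

    # Gross soil loss per cell (t/ha/yr): A = R * K * LS * C * P
    A = R * K * LS * C * P

    # Spatially distributed delivery ratio, SDR_i = exp(-beta * t_i),
    # with t_i the (hypothetical) flow travel time from cell i to the stream.
    travel_time = np.array([[0.5, 1.0], [0.2, 2.0]])
    beta = 1.0
    SDR = np.exp(-beta * travel_time)

    # Sediment actually delivered to the basin outlet.
    sediment_yield = float((A * SDR).sum())
    ```

    Because each cell's delivery ratio is below one, the delivered yield is always less than the gross erosion; calibrating `beta` against observed sediment discharge is what ties the spatial model to the rating-curve data mentioned in the abstract.
    
    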

  13. Model-free stochastic processes studied with q-wavelet-based informational tools

    International Nuclear Information System (INIS)

    Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2007-01-01

    We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in such a way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.

  14. A cyber-anima-based model of material conscious information network

    Directory of Open Access Journals (Sweden)

    Jianping Shen

    2017-03-01

    Full Text Available Purpose – This paper aims to study the node modeling, multi-agent architecture and addressing method for the material conscious information network (MCIN), which is a large-scale, open-style, self-organized and ecological intelligent network of supply–demand relationships. Design/methodology/approach – This study models the MCIN through node model definition, multi-agent architecture design and addressing method presentation. Findings – The prototype of a novel E-commerce platform based on the MCIN shows the effectiveness and soundness of the MCIN modeling. By comparison with the current Internet, the authors also find that the MCIN has the advantages of socialization, information integration, collective intelligence, traceability, high robustness, unification of producing and consuming, high scalability and decentralization. Research limitations/implications – Leveraging the dimensions of structure, character, knowledge and experience, a modeling approach for the basic information can fit all kinds of MCIN nodes. With the double-chain structure for both basic and supply–demand information, the MCIN nodes can be modeled comprehensively. The anima-desire-intention-based multi-agent architecture makes the federated agents of the MCIN nodes self-organized and intelligent. The MCIN nodes can be efficiently addressed by the supply–demand-oriented method. However, the implementation of the MCIN is still in process. Practical implications – This paper lays the theoretical foundation for the future networked system of supply–demand relationships and a novel E-commerce platform. Originality/value – The authors believe that the MCIN, first proposed in this paper, is a transformational innovation which facilitates the infrastructure of the future networked system of supply–demand relationships.

  15. Modeling web-based information seeking by users who are blind.

    Science.gov (United States)

    Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti

    2011-01-01

    This article describes website information seeking strategies used by users who are blind and compares those with sighted users. It outlines how assistive technologies and website design can aid users who are blind while information seeking. People who are blind and sighted are tested using an assessment tool and performing several tasks on websites. The times and keystrokes are recorded for all tasks as well as commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A proposed model based on the present study for information seeking is described. Keywords are important in the strategies used by both groups of participants and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.

  16. Integrating 3D geological information with a national physically-based hydrological modelling system

    Science.gov (United States)

    Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark

    2016-04-01

    Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes which will add physical realism when coupled to the JULES land surface model. 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. Many of the remaining 28% of catchments performed relatively poorly (NSE < 0.5) … land cover change studies and integrated assessments of groundwater and surface water resources.
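    The NSE thresholds quoted above refer to the Nash-Sutcliffe efficiency, the standard goodness-of-fit score for hydrological models; it is shown here for reference (not the authors' code):

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency:
            NSE = 1 - sum((Qo - Qs)^2) / sum((Qo - mean(Qo))^2)
        NSE = 1 is a perfect fit, NSE = 0 is no better than predicting the
        observed mean flow, and NSE < 0 is worse than the mean."""
        qo = np.asarray(observed, dtype=float)
        qs = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((qo - qs) ** 2) / np.sum((qo - qo.mean()) ** 2)

    obs = [1.0, 3.0, 2.0, 5.0, 4.0]   # hypothetical observed flows
    nse_perfect = nash_sutcliffe(obs, obs)
    nse_mean = nash_sutcliffe(obs, [3.0] * 5)   # constant at the observed mean
    ```

    This makes the quoted figures concrete: "satisfactory" (NSE > 0.5) means the simulation explains more than half of the observed variance beyond a constant-mean prediction.
    
    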

  17. Comparison of co-expression measures: mutual information, correlation, and model based indices.

    Science.gov (United States)

    Song, Lin; Langfelder, Peter; Horvath, Steve

    2012-12-09

    Co-expression measures are often used to define networks among genes. Mutual information (MI) is often used as a generalized correlation measure. It is not clear how much MI adds beyond standard (robust) correlation measures or regression model based association measures. Further, it is important to assess what transformations of these and other co-expression measures lead to biologically meaningful modules (clusters of genes). We provide a comprehensive comparison between mutual information and several correlation measures in 8 empirical data sets and in simulations. We also study different approaches for transforming an adjacency matrix, e.g. using the topological overlap measure. Overall, we confirm close relationships between MI and correlation in all data sets which reflects the fact that most gene pairs satisfy linear or monotonic relationships. We discuss rare situations when the two measures disagree. We also compare correlation and MI based approaches when it comes to defining co-expression network modules. We show that a robust measure of correlation (the biweight midcorrelation transformed via the topological overlap transformation) leads to modules that are superior to MI based modules and maximal information coefficient (MIC) based modules in terms of gene ontology enrichment. We present a function that relates correlation to mutual information which can be used to approximate the mutual information from the corresponding correlation coefficient. We propose the use of polynomial or spline regression models as an alternative to MI for capturing non-linear relationships between quantitative variables. The biweight midcorrelation outperforms MI in terms of elucidating gene pairwise relationships. Coupled with the topological overlap matrix transformation, it often leads to more significantly enriched co-expression modules. Spline and polynomial networks form attractive alternatives to MI in case of non-linear relationships. Our results indicate that MI
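    The close agreement between MI and correlation for linear or monotonic relationships, and the "rare situations when the two measures disagree", can be reproduced with a small experiment using a binned MI estimator (illustrative only, not the authors' code; synthetic data, not gene expression):

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram (binned) estimate of mutual information in nats."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of x
        py = pxy.sum(axis=0, keepdims=True)   # marginal of y
        nz = pxy > 0                          # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y_lin = x + 0.1 * rng.normal(size=5000)     # linear: both measures agree
    y_quad = x**2 + 0.1 * rng.normal(size=5000) # non-monotonic: they disagree

    corr_lin = np.corrcoef(x, y_lin)[0, 1]      # near 1
    corr_quad = np.corrcoef(x, y_quad)[0, 1]    # near 0
    mi_quad = mutual_information(x, y_quad)     # clearly positive
    ```

    The quadratic case is the textbook disagreement: correlation sees no linear trend, while MI still detects the strong dependence. As the abstract argues, most gene pairs behave like the linear case, which is why robust correlation plus polynomial or spline regression covers the remainder.
    
    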

  18. Study on a Threat-Countermeasure Model Based on International Standard Information

    Directory of Open Access Journals (Sweden)

    Guillermo Horacio Ramirez Caceres

    2008-12-01

    Full Text Available Many international standards exist in the field of IT security. This research is based on the ISO/IEC 15408, 15446, 19791, 13335 and 17799 standards. In this paper, we propose a knowledge base comprising a threat countermeasure model based on international standards for identifying and specifying threats which affect IT environments. In addition, the proposed knowledge base system aims at fusing similar security control policies and objectives in order to create effective security guidelines for specific IT environments. As a result, a knowledge base of security objectives was developed on the basis of the relationships inside the standards as well as the relationships between different standards. In addition, a web application was developed which displays details about the most common threats to information systems, and for each threat presents a set of related security control policies from different international standards, including ISO/IEC 27002.

  19. A geographic information system-based 3D city estate modeling and simulation system

    Science.gov (United States)

    Chong, Xiaoli; Li, Sha

    2015-12-01

    This paper introduces a 3D city simulation system based on a geographic information system (GIS), covering all commercial housing in the city. A regional-scale, GIS-based approach is used to capture, describe, and track the geographical attributes of each house in the city. A sorting algorithm of "Benchmark + Parity Rate" is developed to cluster houses with similar spatial and construction attributes. The system is applicable to digital city modeling, city planning, housing evaluation, housing monitoring, and visualizing housing transactions. Finally, taking the Jingtian area of Shenzhen as an example, each unit of the 35,997 houses in the area can be displayed, tagged, and easily tracked by the GIS-based city modeling and simulation system. The results match real market conditions well and can be provided to house buyers as a reference.

  20. An information-motivation-behavioral skills (IMB) model-based intervention for CABG patients.

    Science.gov (United States)

    Zarani, Fariba; Besharat, Mohammad Ali; Sarami, Gholamreza; Sadeghian, Saeed

    2012-12-01

    In order to benefit from a coronary artery bypass graft (CABG) surgery, patients must adhere to medical recommendations and health advices. Despite the importance of adherence in CABG patients, adherence rates are disappointingly low. Despite the low adherence rates, very few articles regarding adherence-enhancing intervention among heart patients have been published. The goal of this study was to assess the effects of the Information-Motivation-Behavioral Skills (IMB) model-based intervention on the IMB model constructs among patients undergoing CABG and to evaluate the relationship of information, motivation, and behavioral skills with adherence. A total of 152 CABG patients were randomly assigned to either an intervention group or to a standard care control group. Participants completed pretest measures and were reassessed 1 month later. Findings showed mixed support for the effectiveness of the intervention. There was a significant effect of IMB intervention on information and motivation of patients, but no significant effect on behavioral skills. Furthermore, the results revealed that intervention constructs (information, motivation, and behavioral skills) were significantly related to patients' adherence. Findings provided initial evidence for the effectiveness of IMB-based interventions on the IMB constructs and supported the importance of these constructs to improve adherence; however, there are additional factors that need to be identified in order to improve behavioral skills more effectively.

  1. An information diffusion model based on retweeting mechanism for online social media

    International Nuclear Information System (INIS)

    Xiong, Fei; Liu, Yun; Zhang, Zhen-jiang; Zhu, Jiang; Zhang, Ying

    2012-01-01

    To characterize information propagation on online microblogs, we propose a diffusion model (SCIR) which contains four possible states: susceptible, contacted, infected, and refractory. Agents that have read the information but have not decided to spread it stay in the contacted state. They may become infected or refractory, and both the infected and refractory states are stable. Results show that during the evolution process, more contacted agents appear in scale-free networks than in regular lattices. The degree-based density of infected agents increases monotonically with degree, but a larger average network degree does not always mean less relaxation time. -- Highlights: ► We study information diffusion on microblogs based on the retweeting mechanism. ► We present a propagation model that contains four states, two of which are absorbing. ► The threshold value of the spreading rate almost approaches zero. ► The degree-based density of infected agents increases monotonically with degree. ► Influences between topics occur only when topics originate in the same neighborhood.
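
The four-state dynamics described above can be sketched in code. The following toy simulation runs SCIR-style spreading on a small ring network; the transition probabilities, the one-step contact-to-decision rule, and the topology are illustrative assumptions rather than the paper's parameterization.

```python
import random

def simulate_scir(neighbors, seed_node, p_read=0.5, p_retweet=0.3, steps=20, rng=None):
    """Toy SCIR cascade: susceptible -> contacted -> infected/refractory.

    neighbors: dict mapping node -> list of adjacent nodes.
    Contacted agents have read the message but not yet decided;
    infected and refractory are absorbing, as in the abstract.
    """
    rng = rng or random.Random(42)
    state = {n: "S" for n in neighbors}
    state[seed_node] = "I"
    for _ in range(steps):
        updates = {}
        for n, s in state.items():
            if s == "S":
                # reading an infected neighbor's message -> contacted
                if any(state[m] == "I" for m in neighbors[n]) and rng.random() < p_read:
                    updates[n] = "C"
            elif s == "C":
                # decide: retweet (infected) or ignore (refractory)
                updates[n] = "I" if rng.random() < p_retweet else "R"
        state.update(updates)
    return state

# tiny ring network of 10 agents
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
final = simulate_scir(ring, seed_node=0)
print(sum(1 for s in final.values() if s in ("I", "R")))
```

On a scale-free graph instead of this ring, the same loop would expose far more susceptible agents per infected node, which is one way to reproduce the abstract's contrast between topologies.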

  2. Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

    Directory of Open Access Journals (Sweden)

    Weiying Wang

    2014-01-01

    Full Text Available Gas turbines are considered one of the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
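
As a minimal sketch of the entropy-as-uniformity idea, the snippet below computes the Shannon entropy of a normalized exhaust-temperature pattern; the thermocouple readings are invented for illustration, and any alarm threshold would be an assumption.

```python
import math

def temperature_entropy(temps):
    """Shannon entropy of the normalized exhaust-temperature distribution.

    Uniform readings give the maximum entropy log2(n); a skewed pattern
    (e.g. one burner running hot or cold) lowers the entropy, flagging
    a possible gas-path fault.
    """
    total = sum(temps)
    probs = [t / total for t in temps]
    return -sum(p * math.log2(p) for p in probs if p > 0)

healthy = [510, 512, 508, 511, 509, 510]   # degrees C, nearly uniform
faulty  = [510, 512, 430, 511, 509, 610]   # one cold and one hot path
print(temperature_entropy(healthy), temperature_entropy(faulty))
```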

  3. Fault detection and diagnosis for gas turbines based on a kernelized information entropy model.

    Science.gov (United States)

    Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei

    2014-01-01

    Gas turbines are considered one of the most important devices in power engineering and have been widely used in power generation, airplanes, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty. It is therefore highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflects the overall state of the gas path of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.

  4. Effective pollutant emission heights for atmospheric transport modelling based on real-world information

    International Nuclear Information System (INIS)

    Pregger, Thomas; Friedrich, Rainer

    2009-01-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity, and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling. - The comprehensive analysis of real-world stack data provides detailed default parameter values for improving the vertical emission distribution in atmospheric modelling
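
A bottom-up calculation of this kind can be sketched as follows, using the widely cited Briggs buoyancy-flux plume-rise formulas often applied with Gaussian dispersion models. The coefficient values and the example stack parameters are standard textbook assumptions supplied here for illustration, not figures from the paper's database.

```python
import math

G = 9.80665  # gravitational acceleration, m/s^2

def effective_stack_height(h_stack, d_stack, v_exit, t_stack, t_ambient, u_wind):
    """Effective emission height = physical stack height + plume rise.

    Briggs buoyancy-flux formulation for neutral conditions:
    F_b = g * v_s * d^2 / 4 * (T_s - T_a) / T_s, then the common
    21.425 / 38.71 final-rise forms. Temperatures in kelvin.
    """
    fb = G * v_exit * d_stack**2 / 4.0 * (t_stack - t_ambient) / t_stack
    if fb < 55.0:
        rise = 21.425 * fb**0.75 / u_wind
    else:
        rise = 38.71 * fb**0.6 / u_wind
    return h_stack + rise

# illustrative power-plant stack: 150 m high, 6 m diameter,
# 20 m/s exit velocity, 420 K flue gas, 288 K ambient, 5 m/s wind
print(effective_stack_height(150, 6, 20, 420, 288, 5))
```

Even this simplified calculation shows why default assumptions matter: a hot, buoyant plume can more than double the physical stack height.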

  5. Applying an expectancy-value model to study motivators for work-task based information seeking

    DEFF Research Database (Denmark)

    Sigaard, Karen Tølbøl; Skov, Mette

    2015-01-01

    on the theory of expectancy-value and on the operationalisation used when the model was first developed. Data for the analysis were collected from a sample of seven informants working as consultants in Danish municipalities. Each participant filled out a questionnaire, kept a log book for a week...... for interpersonal and internal sources increased when the task had high-value motivation or low-expectancy motivation or both. Research limitations/implications: The study is based on a relatively small sample and considers only one motivation theory. This should be addressed in future research along...... with a broadening of the studied group to involve other professions than municipality consultants. Originality/value: Motivational theories from the field of psychology have been used sparsely in studies of information seeking. This study operationalises and verifies such a theory based on a theoretical adaptation...

  6. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of collapse activity and extracted the nine main indexes affecting it as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employed post-earthquake collapse data from the construction of the Yingxiu-Wolong highway, Wenchuan County, China, as training samples for analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy and no errors, and matched the prediction results of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
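
The distance discriminant step can be sketched as below: a sample is assigned to the class whose mean is nearest in Mahalanobis distance. The two class means, the covariance matrix, and the two-dimensional feature vector are synthetic stand-ins for the paper's nine discriminant factors.

```python
import numpy as np

def mahalanobis_classify(x, class_means, pooled_cov):
    """Assign x to the class with the smallest squared Mahalanobis distance.

    class_means: label -> mean vector; pooled_cov: shared covariance.
    Returns the winning label plus the distance of every class.
    """
    inv_cov = np.linalg.inv(pooled_cov)
    dists = {}
    for label, mu in class_means.items():
        diff = np.asarray(x) - np.asarray(mu)
        dists[label] = float(diff @ inv_cov @ diff)  # squared distance
    return min(dists, key=dists.get), dists

# synthetic two-factor example (e.g. normalized gradient, weathering degree)
means = {"low_risk": [0.2, 0.3], "high_risk": [0.8, 0.9]}
cov = np.array([[0.05, 0.01], [0.01, 0.04]])
label, _ = mahalanobis_classify([0.75, 0.85], means, cov)
print(label)
```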

  7. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang; Wang, Suojin; Huang, Jianhua Z.

    2013-01-01

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non

  8. Model for Electromagnetic Information Leakage

    OpenAIRE

    Mao Jian; Li Yongmei; Zhang Jiemin; Liu Jinming

    2013-01-01

    Electromagnetic leakage occurs in operating information equipment and can lead to information leakage. In order to discover the nature of the information contained in electromagnetic leakage, this paper combines electromagnetic theory with information theory as an innovative research method. It outlines a systematic model of electromagnetic information leakage, which theoretically describes the process of information leakage, interception, and reproduction based on electromagnetic radiation, and ana...

  9. Supporting Fiscal Aspect of Land Administration through an LADM-based Valuation Information Model

    NARCIS (Netherlands)

    Kara, A.; Çağdaş, V.; Lemmen, C.H.J.; Işıkdağ, Ü.; van Oosterom, P.J.M.; Stubkjær, E.

    2018-01-01

    This paper presents an information system artifact for the fiscal aspect of land administration, a valuation information model for the specification of inventories or databases used in valuation for recurrently levied immovable property taxes. The information model is designed as an extension module

  10. Exploring nursing e-learning systems success based on information system success model.

    Science.gov (United States)

    Chang, Hui-Chuan; Liu, Chung-Feng; Hwang, Hsin-Ginn

    2011-12-01

    E-learning is thought of as an innovative approach to enhancing nurses' care service knowledge. Extensive research has provided rich information on system development, course design, and nurses' satisfaction with e-learning systems. However, a comprehensive view of nursing e-learning system success is an important but less examined topic. The purpose of this research was to explore the net benefits of nursing e-learning systems based on the updated DeLone and McLean Information System Success Model. The study used a self-administered questionnaire to collect 208 valid responses from nurses at 21 of Taiwan's medium- and large-scale hospitals that have implemented nursing e-learning systems. The result confirms that the model is sufficient to explore nurses' use of e-learning systems in terms of intention to use, user satisfaction, and net benefits. However, while the three exogenous quality factors (system quality, information quality, and service quality) were all found to be critical factors affecting user satisfaction, only information quality showed a direct effect on intention to use. This study provides useful insights for evaluating nursing e-learning system qualities as well as an understanding of nurses' intentions and satisfaction related to performance benefits.

  11. Multi-agent control system with information fusion based comfort model for smart buildings

    International Nuclear Information System (INIS)

    Wang, Zhu; Wang, Lingfeng; Dounis, Anastasios I.; Yang, Rui

    2012-01-01

    Highlights: ► Proposed a model to manage indoor energy and comfort for smart buildings. ► Developed a control system to maximize comfort with minimum energy consumption. ► Information fusion with ordered weighted averaging aggregation is used. ► Multi-agent technology and heuristic intelligent optimization are deployed in developing the control system. -- Abstract: From the perspective of system control, a smart and green building is a large-scale dynamic system with high complexity and a huge amount of information. Proper combination of the available information and effective control of the overall building system turns out to be a big challenge. In this study, we proposed a building indoor energy and comfort management model based on information fusion using ordered weighted averaging (OWA) aggregation. A multi-agent control system with heuristic intelligent optimization is developed to achieve a high level of comfort with the minimum power consumption. Case studies and simulation results are presented and discussed in this paper.
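
The ordered weighted averaging (OWA) aggregation used for information fusion above can be sketched in a few lines; the comfort factors and the weight vector below are illustrative, not the paper's values.

```python
def owa(values, weights):
    """Ordered weighted averaging (Yager's OWA): sort the values in
    descending order, then take the weighted sum with position-based
    weights. Weights attach to ranks, not to specific criteria.
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# hypothetical normalized comfort factors: temperature, light, air quality
comfort_factors = [0.9, 0.6, 0.75]
print(owa(comfort_factors, [0.5, 0.3, 0.2]))  # front-loaded, optimistic weighting
```

Shifting weight toward earlier positions makes the aggregate optimistic (dominated by the best-satisfied factor); shifting it toward later positions makes it pessimistic, a knob a comfort controller can tune.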

  12. INTEGRATIVE METHOD OF TEACHING INFORMATION MODELING IN PRACTICAL HEALTH SERVICE BASED ON MICROSOFT ACCESS QUERIES

    Directory of Open Access Journals (Sweden)

    Svetlana A. Firsova

    2016-06-01

    Full Text Available Introduction: this article explores the pedagogical technology employed to teach medical students the foundations of working with MICROSOFT ACCESS databases. The technology is based on an integrative approach to information modeling in public health practice, drawing upon basic didactic concepts that pertain to the objects and tools of databases created in MICROSOFT ACCESS. The article examines successive steps in teaching the topic “Queries in MICROSOFT ACCESS”, from simple queries to complex ones. Particular attention is paid to such components of the methodological system as the principles and teaching methods, classified according to the degree of learners’ active cognitive engagement. Of particular interest is the diagram relating learning principles, teaching methods, and specific query types. Materials and Methods: the authors used comparative analysis of literature, syllabi, and curricula in medical informatics taught at leading medical universities in Russia. Results: an original technique for teaching query construction in MICROSOFT ACCESS databases is presented for the analysis of information models in practical health care. Discussion and Conclusions: it is argued that the proposed pedagogical technology will significantly improve the effectiveness of teaching the course “Medical Informatics”, which includes the development and application of models to simulate the operation of certain facilities and services of the health system and, in turn, increases the level of information culture of practitioners.

  13. Value-based choice: An integrative, neuroscience-informed model of health goals.

    Science.gov (United States)

    Berkman, Elliot T

    2018-01-01

    Traditional models of health behaviour focus on the roles of cognitive, personality and social-cognitive constructs (e.g. executive function, grit, self-efficacy), and give less attention to the process by which these constructs interact in the moment that a health-relevant choice is made. Health psychology needs a process-focused account of how various factors are integrated to produce the decisions that determine health behaviour. I present an integrative value-based choice model of health behaviour, which characterises the mechanism by which a variety of factors come together to determine behaviour. This model imports knowledge from behavioural economics and neuroscience research on how choices are made into the study of health behaviour, and uses that knowledge to generate novel predictions about how to change health behaviour. I describe anomalies in value-based choice that can be exploited for health promotion, and review neuroimaging evidence about the involvement of midline dopamine structures in tracking and integrating value-related information during choice. I highlight how this knowledge can bring insights to health psychology using the illustrative case of healthy eating. Value-based choice is a viable model for health behaviour and opens new avenues for mechanism-focused intervention.

  14. A Model for Information

    Directory of Open Access Journals (Sweden)

    Paul Walton

    2014-09-01

    Full Text Available This paper uses an approach drawn from the ideas of computer systems modelling to produce a model for information itself. The model integrates evolutionary, static and dynamic views of information and highlights the relationship between symbolic content and the physical world. The model includes what information technology practitioners call “non-functional” attributes, which, for information, include information quality and information friction. The concepts developed in the model enable a richer understanding of Floridi’s questions “what is information?” and “the informational circle: how can information be assessed?” (which he numbers P1 and P12).

  15. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information
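
The joint and penalized formulations described above can be sketched as follows; the notation (θ for perfusion parameters, φ for boundary parameters, α for registration parameters, β for the regularization weight) is assumed here for illustration and is not taken from the paper.

```latex
% Fusing boundary side information b with ECT projection data y:
\log L(\theta, \phi, \alpha \mid y, b)
    = \log L_{\mathrm{ECT}}(\theta, \phi \mid y)
    + \log L_{\mathrm{side}}(\phi, \alpha \mid b)

% Alternatively, boundary regularization via a penalized log-likelihood:
\log L_{\mathrm{pen}}(\theta, \phi \mid y)
    = \log L_{\mathrm{ECT}}(\theta, \phi \mid y) - \beta\, R(\phi)
```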

  16. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  17. An information entropy model on clinical assessment of patients based on the holographic field of meridian

    Science.gov (United States)

    Wu, Jingjing; Wu, Xinming; Li, Pengfei; Li, Nan; Mao, Xiaomei; Chai, Lihe

    2017-04-01

    The meridian system is not only the basis of traditional Chinese medicine (TCM) methods (e.g. acupuncture, massage), but also the core of TCM's basic theory. This paper introduces a new informational perspective for understanding the reality and the holographic field of the meridian. Based on the maximum information entropy principle (MIEP), a dynamic equation for the holographic field has been deduced, which reflects the evolutionary characteristics of the meridian. By using a self-organizing artificial neural network as the algorithm, the evolutionary dynamic equation of the holographic field can be solved to assess the properties of meridians and clinically diagnose the health characteristics of patients. Finally, through cases from clinical patients (e.g. a 30-year-old male patient, an apoplectic patient, an epilepsy patient), we use this model to assess the evolutionary properties of meridians. It is shown that this model not only has significant implications for revealing the essence of the meridian in TCM, but may also play a guiding role in the clinical assessment of patients based on the holographic field of meridians.

  18. An estimation framework for building information modeling (BIM)-based demolition waste by type.

    Science.gov (United States)

    Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook

    2017-12-01

    Most existing studies on demolition waste (DW) quantification do not follow an official standard for estimating the amount and type of DW. Therefore, the existing literature is limited in its ability to estimate DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin at the building design stage; however, the lack of tools hinders such early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, construction materials in the Korean construction classification system were matched with those in the BIM library. Based on this matching, estimates of DW by type were calculated by applying weight/unit-volume factors and rates of DW volume change. To verify the framework, its operation was demonstrated by means of actual BIM modeling and by comparing its results with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
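
The per-type estimate described above (model take-off quantities times weight/unit-volume factors and volume-change rates) can be sketched as follows; the material names and factor values are hypothetical, not the Korean standard factors used in the study.

```python
def estimate_demolition_waste(bim_quantities, factors):
    """Per-type demolition-waste estimate.

    bim_quantities: material -> in-place volume (m^3) taken off the BIM model.
    factors: material -> (unit weight in t/m^3, volume-change/bulking rate).
    Mass = volume * unit weight; loose volume = volume * change rate.
    """
    waste = {}
    for material, volume in bim_quantities.items():
        unit_weight, change_rate = factors[material]
        waste[material] = {
            "mass_t": volume * unit_weight,
            "loose_volume_m3": volume * change_rate,
        }
    return waste

# hypothetical quantities and factors for a small building
quantities = {"concrete": 120.0, "brick": 35.0}
factors = {"concrete": (2.4, 1.6), "brick": (1.9, 1.5)}
result = estimate_demolition_waste(quantities, factors)
print(result["concrete"])
```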

  19. Construction Process Simulation and Safety Analysis Based on Building Information Model and 4D Technology

    Institute of Scientific and Technical Information of China (English)

    HU Zhenzhong; ZHANG Jianping; DENG Ziyin

    2008-01-01

    Time-dependent structural analysis theory has been proved to be more accurate and reliable than commonly used methods during construction. However, so far applications have been limited to partial periods and parts of the structure because of immeasurable artificial intervention. Based on the building information model (BIM) and four-dimensional (4D) technology, this paper proposes an improved structural analysis method, which can generate the structural geometry, resistance model, and loading conditions automatically through a close interlink of the schedule information, architectural model, and material properties. The method was applied to a safety analysis during a continuous and dynamic simulation of the entire construction process. The results show that the organic combination of BIM, 4D technology, construction simulation, and safety analysis of time-dependent structures is feasible and practical. This research also lays a foundation for further research on building lifecycle management combining architectural design, structural analysis, and construction management.

  20. Adaptive design optimization: a mutual information-based approach to model discrimination in cognitive science.

    Science.gov (United States)

    Cavagnaro, Daniel R; Myung, Jay I; Pitt, Mark A; Kujala, Janne V

    2010-04-01

    Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
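
For the fully discrete case, the mutual-information utility of a candidate design can be sketched directly: it is the mutual information between the model indicator and the experimental outcome under that design. The two candidate "retention models" and their predicted response probabilities below are invented for illustration.

```python
import math

def design_utility(prior, likelihoods):
    """Mutual information I(M; Y) between model indicator M and outcome Y
    for one candidate design, in bits.

    prior: p(m) over models; likelihoods: p(y | m, design), one row per model.
    The design maximizing this utility is the most discriminating one.
    """
    n_outcomes = len(likelihoods[0])
    marginal = [sum(p * lik[y] for p, lik in zip(prior, likelihoods))
                for y in range(n_outcomes)]
    mi = 0.0
    for p_m, lik in zip(prior, likelihoods):
        for y in range(n_outcomes):
            if lik[y] > 0:
                mi += p_m * lik[y] * math.log2(lik[y] / marginal[y])
    return mi

prior = [0.5, 0.5]
# design A: the two models predict very different recall probabilities
design_a = [[0.9, 0.1], [0.2, 0.8]]
# design B: the models are nearly indistinguishable
design_b = [[0.55, 0.45], [0.5, 0.5]]
print(design_utility(prior, design_a), design_utility(prior, design_b))
```

Design A carries far more expected information about which model is true, so an adaptive procedure would administer it first and update the prior from the observed outcome.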

  1. Experimental Robot Model Adjustments Based on Force–Torque Sensor Information

    Directory of Open Access Journals (Sweden)

    Santiago Martinez

    2018-03-01

    Full Text Available The computational complexity of humanoid robot balance control is reduced through the application of simplified kinematics and dynamics models. However, these simplifications introduce errors that add to other inherent electromechanical inaccuracies and affect the robotic system. Linear control systems deal with these inaccuracies if they operate around a specific working point, but are less precise if they do not. This work presents a model improvement based on the Linear Inverted Pendulum Model (LIPM) to be applied in a non-linear control system. The aim is to minimize the control error and reduce robot oscillations for multiple working points. The new model, named the Dynamic LIPM (DLIPM), is used to plan the robot behavior with respect to changes in the balance status denoted by the zero moment point (ZMP). Thanks to the use of information from force–torque sensors, an experimental procedure has been applied to characterize the inaccuracies and introduce them into the new model. The experiments consist of balance perturbations similar to those of push-recovery trials, in which step-shaped ZMP variations are produced. The results show that the responses of the robot to balance perturbations are more precise and the mechanical oscillations are reduced without compromising robot dynamics.
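
The LIPM core that the DLIPM extends can be sketched as a discrete simulation of x_ddot = (g / z_com) * (x - zmp); the COM height, time step, and step-shaped ZMP disturbance below are illustrative values, not the paper's experimental settings.

```python
def lipm_step(x, x_dot, zmp, z_com=0.8, g=9.81, dt=0.005):
    """One Euler step of the Linear Inverted Pendulum Model.

    x: COM position, zmp: zero moment point. The COM accelerates
    away from the ZMP, which is what makes the pendulum unstable.
    """
    x_ddot = (g / z_com) * (x - zmp)
    return x + x_dot * dt, x_dot + x_ddot * dt

# step-shaped ZMP perturbation, loosely mimicking a push-recovery trial
x, x_dot = 0.0, 0.0
for i in range(400):                      # 2 s of simulation
    zmp = -0.05 if i >= 100 else 0.0      # ZMP jumps at t = 0.5 s
    x, x_dot = lipm_step(x, x_dot, zmp)
print(x)
```

Without a controller the COM diverges exponentially from the displaced ZMP; that divergence is exactly the error a balance controller built on this model must counteract, and what the sensor-identified DLIPM corrections aim to predict more precisely.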

  2. Business Value of Information Technology Service Quality Based on Probabilistic Business-Driven Model

    Directory of Open Access Journals (Sweden)

    Jaka Sembiring

    2015-08-01

    Full Text Available The business value of information technology (IT) services is often difficult to assess, especially from the point of view of a non-IT manager. This condition can severely impact organizational IT strategic decisions. Various approaches have been proposed to quantify the business value, but some are trapped in technical complexity while others misguide managers into directly and subjectively judging technical entities outside their domain of expertise. This paper describes a method for properly capturing both perspectives based on a probabilistic business-driven model. The proposed model presents a procedure to calculate the business value of IT services. The model also covers IT security services and their business value, an important aspect of IT services that is not covered in previously published research. The impact of changes in the quality of IT services on business value is also discussed. A simulation and a case illustration are provided to show the possible application of the proposed model for a simple business process in an enterprise.

  3. Modeling bidding decision in engineering field with incomplete information: A static game–based approach

    Directory of Open Access Journals (Sweden)

    Zhi-xing Huang

    2016-01-01

    Full Text Available Corporate investment decisions about engineering projects are a key issue in project management. This article studies the process of bidding decision-making in the engineering field under incomplete information and investigates the influence of bidders’ game behavior on investment decisions. Under reasonably assumed scenarios, this article uses a static game-theoretic approach to describe the bidding decision process. With the proposed model, the payoffs of the game participants and the objective function are put forward, and the characteristics of price quotation and the best strategies of bidders under the equilibrium condition are discussed. The results give a better understanding of investment decisions in engineering management and are helpful for tenderees seeking to avoid excessive competition among bidders.
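
A classic closed-form special case illustrates this static incomplete-information setting: in a first-price sealed-bid auction with risk-neutral bidders holding i.i.d. private values uniform on [0, 1], the symmetric Bayesian Nash equilibrium bid is b(v) = v(n - 1)/n. This textbook result is offered only as context; the paper's own engineering-bidding model is more elaborate.

```python
def equilibrium_bid(value, n_bidders):
    """Symmetric equilibrium bid in a first-price sealed-bid auction
    with i.i.d. uniform [0, 1] private values: b(v) = v * (n - 1) / n.

    Bidders shade below their true value; the shading shrinks as
    competition (n) grows.
    """
    return value * (n_bidders - 1) / n_bidders

# more competitors -> bids closer to the true value
print(equilibrium_bid(0.8, 2), equilibrium_bid(0.8, 10))
```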

  4. Agent-Based Model of Information Security System: Architecture and Formal Framework for Coordinated Intelligent Agents Behavior Specification

    National Research Council Canada - National Science Library

    Gorodetski, Vladimir

    2001-01-01

    The contractor will research and further develop the technology supporting an agent-based architecture for an information security system and a formal framework to specify a model of distributed knowledge...

  5. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favored choices for the diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.
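
The model-image step can be sketched with a simple one-dimensional k-means on pixel intensities, which collapses each tissue class to its mean value. The paper's method additionally incorporates spatial information and blends the two projection sets with an exponential weight, which this sketch omits; the synthetic slice and quantile initialization are assumptions for illustration.

```python
import numpy as np

def build_model_image(image, quantile_levels=(0.1, 0.8, 1.0), n_iter=20):
    """Cluster pixel intensities with a 1D k-means (quantile-initialized)
    and replace each pixel by its cluster mean, yielding a
    piecewise-constant 'model image'.
    """
    flat = image.ravel().astype(float)
    centers = np.quantile(flat, quantile_levels)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for k in range(len(centers)):
            members = flat[labels == k]
            if members.size:
                centers[k] = members.mean()
    labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
    return centers[labels].reshape(image.shape)

# synthetic slice: air 0, soft tissue ~40, metal implant ~3000 (HU-like)
img = np.zeros((32, 32))
img[8:24, 8:24] = 40.0
img[15:17, 15:17] = 3000.0
model = build_model_image(img)
diff = img - model  # residual that the projection blending then operates on
print(np.unique(model))
```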

  6. Dynamic relationships between microbial biomass, respiration, inorganic nutrients and enzyme activities: informing enzyme based decomposition models

    Directory of Open Access Journals (Sweden)

    Daryl L Moorhead

    2013-08-01

    Full Text Available We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3 when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C (length) versus N and P (angle) acquiring enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45˚). Reductions in vector angles to < 45˚ for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
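The vector length/angle calculation described above is simple to reproduce. The sketch below uses one common formulation (assumed for illustration, not taken verbatim from the study): x is the proportional investment in C- versus P-acquiring enzymes, y in C- versus N-acquiring enzymes, and an angle above 45° indicates relatively greater P than N investment, matching the P-limitation reading in the abstract.

```python
import math

def enzyme_vector(bg, nag, ap):
    """Vector analysis of enzyme activity ratios.

    bg, nag, ap: activities of β-glucosidase, β-N-acetyl-glucosaminidase,
    and acid phosphatase. Returns (length, angle in degrees).
    """
    x = bg / (bg + ap)   # C- vs P-acquiring investment
    y = bg / (bg + nag)  # C- vs N-acquiring investment
    length = math.hypot(x, y)               # longer vector => stronger C limitation
    angle = math.degrees(math.atan2(y, x))  # > 45 deg => P > N investment
    return length, angle
```

For equal activities the angle is exactly 45°; raising AP relative to NAG shrinks x and pushes the angle above 45°, consistent with the interpretation of P limitation.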

  7. A Bayesian spatial model for neuroimaging data based on biologically informed basis functions.

    Science.gov (United States)

    Huertas, Ismael; Oldehinkel, Marianne; van Oort, Erik S B; Garcia-Solis, David; Mir, Pablo; Beckmann, Christian F; Marquand, Andre F

    2017-11-01

    This spatial model constitutes an elegant alternative to voxel-based approaches in neuroimaging studies; not only are their atoms biologically informed, they are also adaptive to high resolutions, represent high dimensions efficiently, and capture long-range spatial dependencies, which are important and challenging objectives for neuroimaging data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Supporting virtual enterprise design by a web-based information model

    Science.gov (United States)

    Li, Dong; Barn, Balbir; McKay, Alison; de Pennington, Alan

    2001-10-01

    Development of IT and its applications has led to significant changes in business processes. To pursue agility, flexibility and the best service to customers, enterprises focus on their core competence and dynamically build relationships with partners to form virtual enterprises as customer-driven temporary demand chains/networks. Building the networked enterprise requires responsive, interactive decisions instead of a single-direction partner selection process. Benefits and risks of the combination should be systematically analysed, and aggregated information about the value-adding abilities and risks of networks needs to be derived from the interactions of all partners. In this research, a hierarchical information model to assess partnerships for designing virtual enterprises was developed. Internet techniques have been applied to the evaluation process so that interactive decisions can be visualised and made responsively during the design process. The assessment is based on a process in which each partner responds to the requirements of the virtual enterprise by planning its operational process as a bidder. The assessment then produces an aggregated value representing the prospects of the combination of partners given the current bidding. The final design is the combination of partners with the greatest total value-adding capability and the lowest risk.

  9. The experiential health information processing model: supporting collaborative web-based patient education.

    Science.gov (United States)

    O'Grady, Laura A; Witteman, Holly; Wathen, C Nadine

    2008-12-16

    First generation Internet technologies such as mailing lists or newsgroups afforded unprecedented levels of information exchange within a variety of interest groups, including those who seek health information. With the emergence of the World Wide Web, many communication applications were ported to web browsers. One of the driving factors in this phenomenon has been the exchange of experiential or anecdotal knowledge that patients share online, and there is emerging evidence that participation in these forums may be having an impact on people's health decision making. Theoretical frameworks supporting this form of information seeking and learning have yet to be proposed. In this article, we propose an adaptation of Kolb's experiential learning theory to begin to formulate an experiential health information processing model that may contribute to our understanding of online health information seeking behaviour in this context. An experiential health information processing model is proposed that can be used as a research framework. Future research directions include investigating the utility of this model in the online health information seeking context, and studying the impact of collaborating in these online environments on patient decision making and on health outcomes.

  10. The experiential health information processing model: supporting collaborative web-based patient education

    Science.gov (United States)

    O'Grady, Laura A; Witteman, Holly; Wathen, C Nadine

    2008-01-01

    Background First generation Internet technologies such as mailing lists or newsgroups afforded unprecedented levels of information exchange within a variety of interest groups, including those who seek health information. With the emergence of the World Wide Web, many communication applications were ported to web browsers. One of the driving factors in this phenomenon has been the exchange of experiential or anecdotal knowledge that patients share online, and there is emerging evidence that participation in these forums may be having an impact on people's health decision making. Theoretical frameworks supporting this form of information seeking and learning have yet to be proposed. Results In this article, we propose an adaptation of Kolb's experiential learning theory to begin to formulate an experiential health information processing model that may contribute to our understanding of online health information seeking behaviour in this context. Conclusion An experiential health information processing model is proposed that can be used as a research framework. Future research directions include investigating the utility of this model in the online health information seeking context, and studying the impact of collaborating in these online environments on patient decision making and on health outcomes. PMID:19087353

  11. The experiential health information processing model: supporting collaborative web-based patient education

    Directory of Open Access Journals (Sweden)

    Wathen C Nadine

    2008-12-01

    Full Text Available Abstract Background First generation Internet technologies such as mailing lists or newsgroups afforded unprecedented levels of information exchange within a variety of interest groups, including those who seek health information. With the emergence of the World Wide Web, many communication applications were ported to web browsers. One of the driving factors in this phenomenon has been the exchange of experiential or anecdotal knowledge that patients share online, and there is emerging evidence that participation in these forums may be having an impact on people's health decision making. Theoretical frameworks supporting this form of information seeking and learning have yet to be proposed. Results In this article, we propose an adaptation of Kolb's experiential learning theory to begin to formulate an experiential health information processing model that may contribute to our understanding of online health information seeking behaviour in this context. Conclusion An experiential health information processing model is proposed that can be used as a research framework. Future research directions include investigating the utility of this model in the online health information seeking context, and studying the impact of collaborating in these online environments on patient decision making and on health outcomes.

  12. Role-based typology of information technology : Model development and assessment.

    NARCIS (Netherlands)

    Zand, F.; Solaimani, H. (Sam); Beers, van C.

    2015-01-01

    Managers aim to explain how and why IT creates business value, recognize their IT-based capabilities, and select the appropriate IT to enhance and leverage those capabilities. This article synthesizes the Organizational Information Processing Theory and Resource-Based View into a descriptive

  13. Model-based system-of-systems engineering for space-based command, control, communication, and information architecture design

    Science.gov (United States)

    Sindiy, Oleg V.

    This dissertation presents a model-based system-of-systems engineering (SoSE) approach as a design philosophy for architecting in system-of-systems (SoS) problems. SoS refers to a special class of systems in which numerous systems with operational and managerial independence interact to generate new capabilities that satisfy societal needs. Design decisions are more complicated in a SoS setting. A revised Process Model for SoSE is presented to support three phases in SoS architecting: defining the scope of the design problem, abstracting key descriptors and their interrelations in a conceptual model, and implementing computer-based simulations for architectural analyses. The Process Model enables improved decision support considering multiple SoS features and develops computational models capable of highlighting configurations of organizational, policy, financial, operational, and/or technical features. Further, processes for verification and validation of SoS models and simulations are also important due to potential impact on critical decision-making and, thus, are addressed. Two research questions frame the research efforts described in this dissertation. The first concerns how the four key sources of SoS complexity---heterogeneity of systems, connectivity structure, multi-layer interactions, and the evolutionary nature---influence the formulation of SoS models and simulations, trade space, and solution performance and structure evaluation metrics. The second question pertains to the implementation of SoSE architecting processes to inform decision-making for a subset of SoS problems concerning the design of information exchange services in space-based operations domain. These questions motivate and guide the dissertation's contributions. A formal methodology for drawing relationships within a multi-dimensional trade space, forming simulation case studies from applications of candidate architecture solutions to a campaign of notional mission use cases, and

  14. Factors associated with adoption of health information technology: a conceptual model based on a systematic review.

    Science.gov (United States)

    Kruse, Clemens Scott; DeShazo, Jonathan; Kim, Forest; Fulton, Lawrence

    2014-05-23

    The Health Information Technology for Economic and Clinical Health Act (HITECH) allocated $19.2 billion to incentivize adoption of the electronic health record (EHR). Since 2009, Meaningful Use Criteria have dominated information technology (IT) strategy. Health care organizations have struggled to meet expectations and avoid penalties to reimbursements from the Centers for Medicare and Medicaid Services (CMS). Organizational theories attempt to explain factors that influence organizational change, and many theories address changes in organizational strategy. However, due to the complexities of the health care industry, existing organizational theories fall short of demonstrating association with significant health care IT implementations. There is no organizational theory for health care that identifies, groups, and analyzes both internal and external factors of influence for large health care IT implementations such as adoption of the EHR. The purpose of this systematic review is to identify the full spectrum of both internal organizational and external environmental factors associated with the adoption of health information technology (HIT), specifically the EHR. The result is a conceptual model that is commensurate with the complexity of the health care sector. We performed a systematic literature search in PubMed (restricted to English), EBSCO Host, and Google Scholar for both empirical studies and theory-based writing from 1993-2013 that demonstrated association between influential factors and three modes of HIT: EHR, electronic medical record (EMR), and computerized provider order entry (CPOE). We also looked at published books on organizational theories. We took notes and noted trends on adoption factors. These factors were grouped as adoption factors associated with various versions of EHR adoption. The resulting conceptual model summarizes the diversity of independent variables (IVs) and dependent variables (DVs) used in articles, editorials, books, as

  15. Use of stratigraphic, petrographic, hydrogeologic and geochemical information for hydrogeologic modelling based on geostatistical simulation

    International Nuclear Information System (INIS)

    Rohlig, K.J.; Fischer, H.; Poltl, B.

    2004-01-01

    This paper describes the stepwise utilization of geologic information from various sources for the construction of hydrogeological models of a sedimentary site by means of geostatistical simulation. It presents a practical application of aquifer characterisation by firstly simulating hydrogeological units and then the hydrogeological parameters. Due to the availability of a large amount of hydrogeological, geophysical and other data and information, the Gorleben site (Northern Germany) has been used for a case study in order to demonstrate the approach. The study, which has not yet been completed, tries to incorporate as much as possible of the available information and to characterise the remaining uncertainties. (author)

  16. Neurally and ocularly informed graph-based models for searching 3D environments

    Science.gov (United States)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
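The label-propagation step described above — spreading inferred interest labels through a graph of visually similar objects — can be sketched with a standard semi-supervised propagation scheme. This is a generic illustration, not the paper's actual graph construction or classifier; the parameter `alpha`, the iteration count, and the clamping rule are assumptions.

```python
import numpy as np

def propagate_labels(W, y, labeled, alpha=0.9, iters=200):
    """Spread interest scores over an object-similarity graph.

    W: (n, n) symmetric nonnegative similarity matrix.
    y: (n,) initial scores (e.g. 1.0 for objects inferred 'interesting').
    labeled: boolean mask of nodes whose scores stay clamped to y.
    """
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / d[:, None]            # row-normalised transition matrix
    f = y.astype(float).copy()
    for _ in range(iters):
        # mix neighbour average with the initial evidence
        f = alpha * (S @ f) + (1.0 - alpha) * y
        f[labeled] = y[labeled]   # keep known labels fixed
    return f
```

On a chain graph with one labeled "interesting" node, the returned scores decay with graph distance, so unseen objects most similar to labeled ones rank highest — the basis for planning an efficient route to visit them.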

  17. Neurally and ocularly informed graph-based models for searching 3D environments.

    Science.gov (United States)

    Jangraw, David C; Wang, Jun; Lance, Brent J; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions-our implicit 'labeling' of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the 'similar' objects it identifies. We show that by exploiting the subjects' implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.

  18. A Public-key based Information Management Model for Mobile Agents

    OpenAIRE

    Rodriguez, Diego; Sobrado, Igor

    2000-01-01

    Mobile code based computing requires development of protection schemes that allow digital signature and encryption of data collected by the agents in untrusted hosts. These algorithms could not rely on carrying encryption keys if these keys could be stolen or used to counterfeit data by hostile hosts and agents. As a consequence, both information and keys must be protected in a way that only authorized hosts, that is the host that provides information and the server that has sent the mobile a...

  19. IRaPPA: information retrieval based integration of biophysical models for protein assembly selection.

    Science.gov (United States)

    Moal, Iain H; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A; Fernández-Recio, Juan

    2017-06-15

    In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together, how mutations can lead to disease, and facilitates the designing of drugs which prevent or mimic the interaction. Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native appearing in the top 10 solutions for up to 50% of complexes benchmarked, and up to 70% in the top 100. IRaPPA has been implemented in the SwarmDock server ( http://bmm.crick.ac.uk/∼SwarmDock/ ), pyDock server ( http://life.bsc.es/pid/pydockrescoring/ ) and ZDOCK server ( http://zdock.umassmed.edu/ ), with code available on request. moal@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  20. Intervention Strategies Based on Information-Motivation-Behavioral Skills Model for Health Behavior Change: A Systematic Review

    OpenAIRE

    Chang, Sun Ju; Choi, Suyoung; Kim, Se-An; Song, Misoon

    2014-01-01

    Purpose: This study systematically reviewed research on behavioral interventions based on the information-motivation-behavioral skills (IMB) model to investigate specific intervention strategies that focus on information, motivation, and behavioral skills and to evaluate their effectiveness for people with chronic diseases. Methods: A systematic review was conducted in accordance with the guidelines of both the National Evidence-based Healthcare Collaborating Agency and Im and Chang. A lit...

  1. The informational system model of Ukrainian national transport workflow improvement based on electronic signature introduction management

    Directory of Open Access Journals (Sweden)

    Grigoriy NECHAEY

    2007-01-01

    Full Text Available The proposed information system model introduces a new conceptual method for working with e-signatures in transport information systems. The problems and aims that may be solved with the help of this system, and the most important economic and technical advantages of the proposed system in comparison with traditional methods of e-signing, are outlined.

  2. A geo-information theoretical approach to inductive erosion modelling based on terrain mapping units

    NARCIS (Netherlands)

    Suryana, N.

    1997-01-01

    Three main aspects of the research, namely the concept of object orientation, the development of an Inductive Erosion Model (IEM) and the development of a framework for handling uncertainty in the data or information resulting from a GIS are interwoven in this thesis. The first and the second aspect

  3. Architecture Model of Business, Information System and Technology in BAKOSURTANAL Based on TOGAF

    Directory of Open Access Journals (Sweden)

    Iyan Supriyana

    2010-04-01

    Full Text Available Information technology (IT) is a necessity in BAKOSURTANAL to support business related to data and spatial information. Users gain advantages through easy and fast access to data and spatial information. The importance of enterprise architecture (EA) in supporting the organization is proven, because it provides the technology and process structure that are fundamental aspects of IT strategy. An enterprise architecture framework (EAF) accelerates and simplifies the development of EA by ascertaining comprehensive coverage of solutions and ensuring that the resulting EA is always in line with the growth of the enterprise. This paper explains The Open Group Architecture Framework (TOGAF), one of several EAFs. The result shows that the most suitable EAF for BAKOSURTANAL blueprint development is the proposed EA model covering business, information system, and technology architecture, relying on recommended technical foundations that can be implemented.

  4. Information Exchange in Global Logistics Chains : An application for Model-based Auditing (abstract)

    NARCIS (Netherlands)

    Veenstra, A.W.; Hulstijn, J.; Christiaanse, R.; Tan, Y.

    2013-01-01

    An integrated data pipeline has been proposed to meet requirements for supply chain visibility and control. How can data integration be used for risk assessment, monitoring and control in global supply chains? We argue that concepts from model-based auditing can be used to model the ‘ideal’ flow of

  5. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    Directory of Open Access Journals (Sweden)

    Saleh Alwahaishi

    2013-03-01

    Full Text Available The world has changed a lot in the past years. The rapid advances in technology and the changing communication channels have changed the way people work and, for many, where they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT), are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As the use of ICT expands globally, there is a need for further research into cultural aspects and implications of ICT. The acceptance of Information Technology (IT) has become a fundamental part of the research plan for most organizations (Igbaria, 1993). In IT research, numerous theories are used to understand users' adoption of new technologies. Various models have been developed, including the Technology Acceptance Model, the Theory of Reasoned Action, the Theory of Planned Behavior, and, recently, the Unified Theory of Acceptance and Use of Technology. Each of these models has sought to identify the factors which influence a citizen's intention or actual use of information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of Mobile Internet, as an ICT application, in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral Intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT and, in particular, mobile Internet service practitioners and researchers

  6. Carbon emission analysis and evaluation of industrial departments in China: An improved environmental DEA cross model based on information entropy.

    Science.gov (United States)

    Han, Yongming; Long, Chang; Geng, Zhiqiang; Zhang, Keyu

    2018-01-01

    Environmental protection and carbon emission reduction play a crucial role in sustainable development. However, environmental efficiency analysis and evaluation based on the traditional data envelopment analysis (DEA) cross model is subjective and inaccurate, because all elements in a column or a row of the cross-evaluation matrix (CEM) in the traditional DEA cross model are given the same weight. Therefore, this paper proposes an improved environmental DEA cross model based on information entropy to analyze and evaluate the carbon emissions of industrial departments in China. Information entropy is applied to build an entropy distance based on the turbulence of the whole system, and to calculate the weights in the CEM of the environmental DEA cross model in a dynamic way. The theoretical results show that the new weight constructed based on information entropy is unique and globally optimal, as verified using Monte Carlo simulation. Finally, compared with the traditional environmental DEA and DEA cross models, the improved environmental DEA cross model has better efficiency discrimination ability based on the data of industrial departments in China. Moreover, the proposed model can obtain the carbon emission reduction potential of industrial departments to improve energy efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
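The idea of replacing the equal weights of the classical cross-evaluation matrix (CEM) with entropy-derived weights can be illustrated with the standard entropy-weight method. This is a sketch of the general technique, not the paper's exact "entropy distance" construction; normalising per column and weighting by 1 − entropy are assumptions.

```python
import numpy as np

def entropy_weights(cem):
    """Entropy-based column weights for a cross-evaluation matrix.

    cem: (n, m) matrix of positive cross-efficiency scores. Columns whose
    scores vary more across units (lower entropy) are more informative
    and receive larger weights.
    """
    p = cem / cem.sum(axis=0, keepdims=True)        # per-column distribution
    n = cem.shape[0]
    logp = np.log(np.where(p > 0, p, 1.0))          # log p, with 0*log(0) := 0
    e = np.clip(-(p * logp).sum(axis=0) / np.log(n), 0.0, 1.0)  # entropy in [0, 1]
    d = 1.0 - e                                     # diversity of each column
    return d / d.sum()                              # weights summing to 1
```

A perfectly uniform column (all units scored identically) has maximal entropy and so gets essentially zero weight, which is exactly the discrimination-improving behaviour the abstract attributes to the entropy-based CEM.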

  7. A novel model to combine clinical and pathway-based transcriptomic information for the prognosis prediction of breast cancer.

    Directory of Open Access Journals (Sweden)

    Sijia Huang

    2014-09-01

    Full Text Available Breast cancer is the most common malignancy in women worldwide. With increasing awareness of the heterogeneity of breast cancers, better prediction of breast cancer prognosis is much needed for more personalized treatment and disease management. Towards this goal, we have developed a novel computational model for breast cancer prognosis by combining the Pathway Deregulation Score (PDS) based Pathifier algorithm, Cox regression, and the L1-LASSO penalization method. We trained the model on a set of 236 patients with gene expression data and clinical information, and validated its performance on three diversified testing data sets of 606 patients. To evaluate the performance of the model, we conducted survival analysis of the dichotomized groups and compared the areas under the curve based on the binary classification. The resulting genomic prognosis model is composed of fifteen pathways (e.g., the P53 pathway) with previously reported cancer relevance, and it successfully differentiated relapse in the training set (log-rank p-value = 6.25e-12) and three testing data sets (log-rank p-value < 0.0005). Moreover, the pathway-based genomic models consistently performed better than gene-based models on all four data sets. We also find strong evidence that combining genomic information with clinical information improved the p-values of prognosis prediction by at least three orders of magnitude in comparison to using either genomic or clinical information alone. In summary, we propose a novel prognosis model that harnesses pathway-based dysregulation as well as valuable clinical information. The selected pathways in our prognosis model are promising targets for therapeutic intervention.

  8. Informing hydrological models with ground-based time-lapse relative gravimetry: potential and limitations

    DEFF Research Database (Denmark)

    Bauer-Gottwein, Peter; Christiansen, Lars; Rosbjerg, Dan

    2011-01-01

    Coupled hydrogeophysical inversion emerges as an attractive option to improve the calibration and predictive capability of hydrological models. Recently, ground-based time-lapse relative gravity (TLRG) measurements have attracted increasing interest because there is a direct relationship between ... parameter uncertainty decreased significantly when TLRG data was included in the inversion. The forced infiltration experiment caused changes in unsaturated zone storage, which were monitored using TLRG and ground-penetrating radar. A numerical unsaturated zone model was subsequently conditioned on both ...

  9. A computational model for knowledge-driven monitoring of nuclear power plant operators based on information theory

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2006-01-01

    To develop operator behavior models such as IDAC, quantitative models for the cognitive activities of nuclear power plant (NPP) operators in abnormal situations are essential. Among them, only a few quantitative models for monitoring and detection have been developed. In this paper, we propose a computational model for the knowledge-driven monitoring, also known as model-driven monitoring, of NPP operators in abnormal situations, based on information theory. The basic assumption of the proposed model is that the probability that an operator shifts his or her attention to an information source is proportional to the expected information from that source. A small experiment performed to evaluate the feasibility of the proposed model shows that the predictions made by the proposed model correlate highly with the experimental results. Even though it has been argued that heuristics might play an important role in human reasoning, we believe that the proposed model can provide part of the mathematical basis for developing quantitative models for knowledge-driven monitoring of NPP operators when NPP operators are assumed to behave very logically.
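The model's basic assumption has a direct quantitative form: if each information source offers some expected information, the attention-shift probabilities are the normalised expected-information values. In the sketch below, the expected information of an indicator is taken to be the Shannon entropy of its possible readings; this is a simplifying assumption for illustration, not the paper's exact derivation.

```python
import math

def expected_information(p_states):
    """Expected information (bits) from observing an indicator, modelled
    here as the Shannon entropy of its possible readings."""
    return -sum(p * math.log2(p) for p in p_states if p > 0)

def attention_probabilities(expected_infos):
    """Model assumption: P(attend to source i) is proportional to the
    expected information obtainable from source i."""
    total = sum(expected_infos)
    return [ei / total for ei in expected_infos]
```

An indicator with two equally likely readings offers 1 bit; a fixed indicator offers 0 bits and, under this assumption, attracts no attention.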

  10. ISG hybrid powertrain: a rule-based driver model incorporating look-ahead information

    Science.gov (United States)

    Shen, Shuiwen; Zhang, Junzhi; Chen, Xiaojiang; Zhong, Qing-Chang; Thornton, Roger

    2010-03-01

    According to European regulations, if the amount of regenerative braking is determined by the travel of the brake pedal, more stringent standards must be applied; otherwise it may adversely affect the existing vehicle safety system. The use of engine or vehicle speed to derive regenerative braking is one way to avoid strict design standards, but this introduces discontinuity in powertrain torque when the driver releases the accelerator pedal or applies the brake pedal. This is shown to cause oscillations in the pedal input and powertrain torque when a conventional driver model is adopted. Look-ahead information, together with other predicted vehicle states, is used to control the vehicle speed, in particular during deceleration, and to improve the driver model so that oscillations are avoided. The improved driver model makes analysis and validation of the control strategy for an integrated starter generator (ISG) hybrid powertrain possible.

  11. An agent-based information management model of the Chinese pig sector

    NARCIS (Netherlands)

    Osinga, S.A.; Kramer, M.R.; Hofstede, G.J.; Roozmand, O.; Beulens, A.J.M.

    2010-01-01

    This paper investigates the effect of a selected top-down measure (what-if scenario) on actual agent behaviour and total system behaviour by means of an agent-based simulation model, when agents’ behaviour cannot fully be managed because the agents are autonomous. The Chinese pork sector serves as

  12. Information exchange in global logistics chains : An application for model-based auditing,

    NARCIS (Netherlands)

    Veenstra, A.W.; Hulstijn, J.; Christiaanse, R.M.J.; Tan, Y.

    2013-01-01

    An integrated data pipeline has been proposed to meet requirements for visibility, supervision and control in global supply chains. How can data integration be used for risk assessment, monitoring and control in global supply chains? We argue that concepts from model-based auditing can be used to

  13. Integration of Life Cycle Assessment Into Agent-Based Modeling : Toward Informed Decisions on Evolving Infrastructure Systems

    NARCIS (Netherlands)

    Davis, C.B.; Nikoli?, I.; Dijkema, G.P.J.

    2009-01-01

    A method is presented that allows for a life cycle assessment (LCA) to provide environmental information on an energy infrastructure system while it evolves. Energy conversion facilities are represented in an agent-based model (ABM) as distinct instances of technologies with owners capable of making

  14. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  15. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    A study of the algorithm implemented in the early-detection module for excessive losses proves that it can be modelled using absorbing Markov chains. Of particular interest are the probabilistic characteristics of the module's detection algorithm, which reveal the relationship between the reliability indicators of individual elements, or the probability of occurrence of certain events, and the likelihood of transmitting reliable information. The relations identified during the analysis make it possible to set threshold reliability characteristics for the system components.
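The absorbing-chain analysis described in this record can be sketched with the standard fundamental-matrix computation, N = (I - Q)^(-1) and B = N R. The state names and transition probabilities below are hypothetical illustration values, not the paper's data:

```python
# Sketch: absorbing Markov chain analysis of a detection module.
# Transient states: 0 = "checking", 1 = "retransmit";
# absorbing states: 2 = "reliable information delivered", 3 = "loss undetected".
# All probabilities are hypothetical illustration values.

# Q: transient -> transient, R: transient -> absorbing (each row of [Q R] sums to 1)
Q = [[0.2, 0.3],
     [0.4, 0.1]]
R = [[0.45, 0.05],
     [0.40, 0.10]]

def mat_inv_2x2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Fundamental matrix N = (I - Q)^-1: expected visits to each transient state.
I_minus_Q = [[1 - Q[0][0], -Q[0][1]],
             [-Q[1][0], 1 - Q[1][1]]]
N = mat_inv_2x2(I_minus_Q)

# B = N R: probability of ending in each absorbing state, per starting state.
B = mat_mul(N, R)
print("P(reliable | start=checking) =", round(B[0][0], 4))  # 0.875
```

Improving the reliability of an individual element changes an entry of Q or R, and its effect on the probability of delivering reliable information is read directly off B.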

  16. Information in relational data bases

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, R B

    1982-01-01

    A new knowledge representation scheme is proposed for representing incomplete information in relational data bases. The scheme introduces a novel convention for negative information based on modal logic and a novel data structure obtained by introducing tuple flags in the relational model of data. Standard and minimal forms are defined for relations conforming to the new data structure. The conventional relational operators select, project and join are redefined so that they can be used to manipulate relations containing incomplete information. Conditions are presented for the lossless decomposition of relations containing incomplete information. 20 references.

  17. Bio-AIMS Collection of Chemoinformatics Web Tools based on Molecular Graph Information and Artificial Intelligence Models.

    Science.gov (United States)

    Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro

    2015-01-01

    Encoding molecular information into molecular descriptors is the first step in in silico chemoinformatics methods in drug design. Machine learning methods offer a way to find prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or the physical-chemical properties of an atom or group of atoms, to molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, predicting their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, the 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. These molecular graph descriptor-based machine learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.

  18. Exploring User Engagement in Information Networks: Behavioural – based Navigation Modelling, Ideas and Directions

    Directory of Open Access Journals (Sweden)

    Vesna Kumbaroska

    2017-04-01

    Full Text Available The endless array of user behaviors revealed in an online environment is a very good indicator of the user's interests, whether in browsing or in purchasing. One such behavior is navigation behavior; detected user navigation patterns can be used for practical purposes such as improving user engagement, turning browsers into buyers, and personalizing content or the interface. In this regard, our research connects navigation modelling with user engagement. The use of the Generalized Stochastic Petri Net concept for stochastic behavioural-based modelling of the navigation process is proposed for measuring user engagement components. Different types of users are automatically identified and clustered according to their navigation behaviors, so the developed model gives great insight into the navigation process. As part of this study, Peterson's model for measuring user engagement is explored and a direct calculation of its components is illustrated. At the same time, assuming that several user sessions/visits are initialized in a certain time frame, following the Petri net dynamics indicates that the proposed behavioural-based model could be used for calculating user engagement metrics; some basic ideas are discussed and initial directions are given.

  19. Building information modelling (BIM)

    CSIR Research Space (South Africa)

    Conradie, Dirk CU

    2009-02-01

    Full Text Available The concept of a Building Information Model (BIM), also known as a Building Product Model (BPM), is nothing new. A short article on BIM can never cover the entire field, because it is a particularly complex field that is only recently beginning to receive...

  20. Cloud decision model for selecting sustainable energy crop based on linguistic intuitionistic information

    Science.gov (United States)

    Peng, Hong-Gang; Wang, Jian-Qiang

    2017-11-01

    In recent years, sustainable energy crops have become an important energy development strategy topic in many countries. Selecting the most sustainable energy crop is a significant problem that must be addressed during any biofuel production process. The focus of this study is the development of an innovative multi-criteria decision-making (MCDM) method to handle sustainable energy crop selection problems. Given that various uncertain data are encountered in the evaluation of sustainable energy crops, linguistic intuitionistic fuzzy numbers (LIFNs) are introduced to present the information necessary to the evaluation process. Processing qualitative concepts requires the effective support of reliable tools; a cloud model can be used to deal with linguistic intuitionistic information. First, LIFNs are converted and a novel concept of linguistic intuitionistic cloud (LIC) is proposed. The operations, score function and similarity measurement of LICs are defined. Subsequently, the linguistic intuitionistic cloud density-prioritised weighted Heronian mean operator is developed, which serves as the basis for the construction of an applicable MCDM model for sustainable energy crop selection. Finally, an illustrative example is provided to demonstrate the proposed method, and its feasibility and validity are further verified by comparing it with other existing methods.

  1. VALORA: data base system for storage significant information used in the behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can release effluents containing radionuclides into the environment, where they disperse and/or accumulate in the atmosphere, on the terrestrial surface and in surface waters. Radiological impact evaluations require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available and cannot be measured, so carrying out the evaluation requires an extensive search of the literature for the possible values of each parameter under conditions similar to those of the object of study, which can be laborious work. This paper describes the characteristics of the VALORA Database System, developed to organize and automate significant information appearing in different sources (scientific and technical literature) on the parameters used in modelling the behavior of pollutants in the environment, and on the values assigned to these parameters in evaluations of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the implemented calculation model. The VALORA software is one component of a suite of computer tools intended to help solve dispersion and pollutant transfer models. (Author)

  2. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base are described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  3. Testing of money multiplier model for Pakistan: does monetary base carry any information?

    Directory of Open Access Journals (Sweden)

    Muhammad Arshad Khan

    2010-02-01

    Full Text Available This paper tests the constancy and stationarity of the mechanic version of the money multiplier model for Pakistan using monthly data over the period 1972M1-2009M2. We split the data into pre-liberalization (1972M1-1990M12) and post-liberalization (1991M1-2009M2) periods to examine the impact of financial sector reforms. We first examine the constancy and stationarity of the money multiplier; the results suggest that the money multiplier remains non-stationary for the entire sample period and both sub-periods. We then test for cointegration between the money supply and the monetary base and find evidence of cointegration between the two variables for the entire period and the two sub-periods. The coefficient restrictions are satisfied only for the post-liberalization period. Two-way long-run causality between money supply and monetary base is found for the entire period and for post-liberalization. For the post-liberalization period, evidence of short-run causality running from the monetary base to the money supply is also identified. On the whole, the results suggest that the money multiplier model can serve as a framework for conducting short-run monetary policy in Pakistan. However, the monetary authority may consider the co-movements between money supply and reserve money when conducting monetary policy.
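The mechanic money multiplier tested in this record links broad money to the monetary base via m = (1 + c) / (c + r), where c is the currency-deposit ratio and r the reserve-deposit ratio. A minimal sketch with illustrative ratios (not Pakistani data):

```python
# Mechanic money multiplier: broad money = m * monetary base,
# with m = (1 + c) / (c + r). Ratios below are illustrative only.
def money_multiplier(c, r):
    """c: currency-deposit ratio, r: reserve-deposit ratio."""
    return (1 + c) / (c + r)

base = 1000.0                           # monetary base, illustrative units
m = money_multiplier(c=0.25, r=0.10)    # m = 1.25 / 0.35 ~ 3.571
money_supply = m * base
print(round(m, 3), round(money_supply, 1))  # 3.571 3571.4
```

The multiplier's "constancy" tested in the paper amounts to asking whether c and r (and hence m) are stable over time; a non-stationary m is what breaks the mechanic version.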

  4. A data-informed PIF hierarchy for model-based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Groth, Katrina M.; Mosleh, Ali

    2012-01-01

    This paper addresses three problems associated with the use of Performance Shaping Factors in Human Reliability Analysis. (1) There are more than a dozen Human Reliability Analysis (HRA) methods that use Performance Influencing Factors (PIFs) or Performance Shaping Factors (PSFs) to model human performance, but there is no standard set of PIFs shared among the methods, nor is there a framework available to compare the PIFs used in the various methods. (2) The PIFs currently in use are not defined specifically enough to ensure consistent interpretation of similar PIFs across methods. (3) There are few rules governing the creation, definition, and usage of PIF sets. This paper introduces a hierarchical set of PIFs that can be used for both qualitative and quantitative HRA. The proposed PIF set is arranged in a hierarchy that can be collapsed or expanded to meet multiple objectives. The PIF hierarchy has been developed with respect to a set of fundamental principles necessary for PIF sets, which are also introduced in this paper. The paper includes definitions of the PIFs to allow analysts to map the proposed PIFs onto current and future HRA methods. The standardized PIF hierarchy will allow analysts to combine different types of data and will therefore make the best use of the limited data in HRA. The collapsible hierarchy provides the structure necessary to combine multiple types of information without reducing the quality of the information.
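A collapsible hierarchy of the kind described can be sketched as a nested tree whose visible labels depend on the chosen depth. The factor names below are illustrative placeholders, not the paper's actual PIF set:

```python
# Sketch: a collapsible PIF hierarchy as a nested tree.
# Factor names are hypothetical placeholders for illustration.
pif_tree = {
    "Organization": {
        "Training": {"Training quality": {}, "Training frequency": {}},
        "Safety culture": {},
    },
    "Person": {"Knowledge": {}, "Fatigue": {}},
    "Situation": {"Task complexity": {}, "Time pressure": {}},
}

def collapse(tree, depth, prefix=""):
    """Return the PIF labels visible when the hierarchy is collapsed at `depth`.

    depth=1 yields only top-level categories; larger depths expand children,
    so the same structure serves both coarse and fine-grained analyses.
    """
    labels = []
    for name, children in tree.items():
        path = f"{prefix}{name}"
        if depth <= 1 or not children:
            labels.append(path)  # collapsed: stop at this node
        else:
            labels.extend(collapse(children, depth - 1, path + "/"))
    return labels

print(collapse(pif_tree, 1))  # coarse: top-level categories only
print(collapse(pif_tree, 2))  # finer: e.g. 'Organization/Training', ...
```

Quantitative data gathered against fine-grained leaves can be aggregated upward to whichever collapsed level an HRA method needs, which is the mechanism the abstract's "collapsible hierarchy" relies on.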

  5. INFORMATION MODEL OF SOCIAL TRANSFORMATIONS

    Directory of Open Access Journals (Sweden)

    Мария Васильевна Комова

    2013-09-01

    Full Text Available Social transformation is considered as a process of qualitative change in society, creating a new level of organization in all areas of life, in different social formations and in societies of different types of development. The purpose of the study is to create a universal model for studying social transformations based on understanding them as the consequence of information exchange processes in society. After defining the conceptual model of the study, the author uses the following methods: the descriptive method, analysis, synthesis and comparison. Information, objectively existing in all elements and systems of the material world, is an integral attribute of societal transformation as well. The information model of social transformations rests on defining societal transformation as a change in the information that functions in society's information space. The study of social transformations is the study of the information flows circulating in society, which are characterized by different spatial, temporal and structural states. Social transformations are a highly integrated system of social processes and phenomena, the nature, course and consequences of which are affected by factors representing the whole complex of material objects. The integrated information model of social transformations comprises the interaction of the following components: social memory, information space and the social ideal. To determine the dynamics and intensity of social transformations the author uses the notions of an "information threshold of social transformations" and "information pressure". Thus, the universal nature of information leads to considering social transformations as a system of information exchange processes. Social transformations can be extended to any episteme actualized by social needs. The establishment of an information threshold makes it possible to simulate the course of social development, to predict the

  6. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    Full Text Available This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that Random Regret Minimisation (RRM) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that from train-only websites, which in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with shorter search times, specific information matching users' own criteria, and real-time information is regarded as most attractive.

  7. Towards the Building Information Modeling-Based Capital Project Lifecycle Management in the Luxury Yacht Industry

    Directory of Open Access Journals (Sweden)

    Liu Fuyong

    2017-11-01

    Full Text Available Applying BIM-based capital project lifecycle management (CPLM) to the yacht industry is a new approach. This paper explored the feasibility of applying the principles and rationales of BIM to capital project lifecycle management in luxury yacht design, engineering, fabrication, construction and operation. The paper examined the premises and backbone technology of BIM. It then evaluated leading naval engineering and shipbuilding software applications and their development trends through the functional lens of BIM. To systematically investigate a BIM-based approach for CPLM in the luxury yacht industry, the paper proposed and outlined an implementation framework. A case study and a student competition use case were discussed to delineate the core constituents and processes of the proposed framework, and the case for BIM was reviewed. Through the domestic custom luxury yacht design and prototyping student competition, the application of this framework in educational research is demonstrated and an initial quantitative assessment of the framework is carried out. Conclusions: a BIM-based CPLM implementation framework can help the luxury yacht industry capitalize on the global transformation to an information-centric and data-driven new business paradigm in shipbuilding with integrated design, manufacturing and production.

  8. Lotus Base: An integrated information portal for the model legume Lotus japonicus.

    Science.gov (United States)

    Mun, Terry; Bachmann, Asger; Gupta, Vikas; Stougaard, Jens; Andersen, Stig U

    2016-12-23

    Lotus japonicus is a well-characterized model legume widely used in the study of plant-microbe interactions. However, datasets from various Lotus studies are poorly integrated and lack interoperability. We recognize the need for a comprehensive repository that allows dynamic exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population comprising more than 120,000 lines, and serves the end-user tightly integrated data from Lotus, such as the reference genome, annotated proteins, and expression profiling data. We report the integration of expression data from the L. japonicus gene expression atlas project, and the development of tools to cluster and export such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk.

  9. Designing an integrated model based on the indicators Quality and Earned Value for risk management in Information Technology Projects

    OpenAIRE

    TATLARI, Mohammad Reza; KAZEMİPOOR, Hamed

    2015-01-01

    There are two effective factors on Information Technology (IT) projects risk including quality and earned value so that by controlling these two factors and their increased level in IT projects, the corresponding risk can be decreased. Therefore in present study, an integrated model was designed based on quality and earned value indicators for risk management in IT projects on a new and efficient approach. The proposed algorithm included the steps such as preparing a list of several indicator...

  10. A Complex Network Model for Analyzing Railway Accidents Based on the Maximal Information Coefficient

    International Nuclear Information System (INIS)

    Shao Fu-Bo; Li Ke-Ping

    2016-01-01

    Identifying the key influencing factors is an important issue in railway accident analysis. In this paper, employing the maximal information coefficient (MIC), a good measure of dependence for two-variable relationships that can capture a wide range of associations, a complex network model for railway accident analysis is designed in which nodes denote factors of railway accidents and edges are generated between two factors whose MIC value is larger than or equal to the dependent criterion. The variation of the network structure is studied: as the dependent criterion increases, the network becomes approximately scale-free. Moreover, employing the proposed network, important influencing factors are identified. We find that the annual track density-gross tonnage factor is an important factor that is a cut vertex when the dependent criterion is equal to 0.3. From the network, it is also found that railway development is unbalanced across different states, which is consistent with the facts. (paper)
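The construction described here (an edge wherever dependence meets the criterion, then a search for cut vertices) can be sketched as follows. The factor names and dependence scores are hypothetical stand-ins for computed MIC values:

```python
# Sketch: factor-dependence network with threshold edges and cut vertices.
# Names and scores below are hypothetical stand-ins for real MIC values.
factors = ["track_density", "gross_tonnage", "weather", "signal", "speed"]
mic = {
    ("track_density", "gross_tonnage"): 0.80,
    ("track_density", "weather"): 0.35,
    ("track_density", "signal"): 0.40,
    ("weather", "signal"): 0.20,
    ("signal", "speed"): 0.33,
}

def build_edges(threshold):
    """Keep a factor pair as an edge if its dependence meets the criterion."""
    return [pair for pair, score in mic.items() if score >= threshold]

def articulation_points(nodes, edges):
    """Cut vertices via the standard DFS low-link algorithm."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:
            cuts.add(u)

    for n in nodes:
        if n not in disc:
            dfs(n, None)
    return cuts

cuts = articulation_points(factors, build_edges(0.3))
print(cuts)  # with these toy scores, track_density is a cut vertex at 0.3
```

A cut vertex here is a factor whose removal disconnects the dependence network, which is why the abstract highlights such factors as important.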

  11. Using the model statement to elicit information and cues to deceit in interpreter-based interviews.

    Science.gov (United States)

    Vrij, Aldert; Leal, Sharon; Mann, Samantha; Dalton, Gary; Jo, Eunkyung; Shaboltas, Alla; Khaleeva, Maria; Granskaya, Juliana; Houston, Kate

    2017-06-01

    We examined how the presence of an interpreter during an interview affects eliciting information and cues to deceit, while using a method that encourages interviewees to provide more detail (model statement, MS). A total of 199 Hispanic, Korean and Russian participants were interviewed either in their own native language without an interpreter, or through an interpreter. Interviewees either lied or told the truth about a trip they made during the last twelve months. Half of the participants listened to an MS at the beginning of the interview. The dependent variables were 'detail', 'complications', 'common knowledge details', 'self-handicapping strategies' and 'ratio of complications'. In the MS-absent condition, the interviews resulted in less detail when an interpreter was present than when an interpreter was absent. In the MS-present condition, the interviews resulted in a similar amount of detail in the interpreter-present and interpreter-absent conditions. Truthful statements included more complications and fewer common knowledge details and self-handicapping strategies than deceptive statements, and the ratio of complications was higher for truth tellers than for liars. The MS strengthened these results, whereas an interpreter had no effect on them. Copyright © 2017. Published by Elsevier B.V.

  12. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly. PMID:29124062

  13. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG in terms of reduced trauma to the surrounding structures that leads to shortened recovery time. The severe nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to realize the clinic requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on Interactive Multiple Model (IMM estimator, allowing each model to cover a distinguishing feature of the heart motion in underlying dynamics. We find that, at normal state, the nonlinearity of the heart motion with slow time-variant changing dominates the beating process. When an arrhythmia occurs, the irregularity mode, the fast uncertainties with random patterns become the leading factor of the heart motion. We deal with prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employed the signal quality index to adaptively determine the switch transition probability in the framework of IMM. We conduct comparative experiments to evaluate the proposed approach with four distinguished datasets. The test results indicate that the new proposed approach reduces prediction errors significantly.
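The two-mode estimation idea in these records can be sketched with a minimal scalar IMM: two Kalman filters that differ only in process noise, mixed through a Markov mode-transition matrix, with mode probabilities updated from the innovation likelihoods. All dynamics, noise levels and transition probabilities below are hypothetical illustration values, not the paper's heart-motion models:

```python
# Sketch: two-mode Interactive Multiple Model (IMM) estimator on a scalar
# signal. Mode 0 = smooth motion, mode 1 = irregular (arrhythmia-like).
# All parameter values are hypothetical illustration choices.
import math
import random

random.seed(0)

Q = [0.01, 1.0]           # process-noise variance per mode
R = 0.05                  # measurement-noise variance
P_trans = [[0.95, 0.05],  # Markov mode-transition probabilities
           [0.10, 0.90]]

x = [0.0, 0.0]            # per-mode state estimates
P = [1.0, 1.0]            # per-mode estimate variances
mu = [0.5, 0.5]           # mode probabilities

def imm_step(z):
    """One IMM cycle: mix, filter per mode, update mode probabilities."""
    global mu
    # 1) Mixing: blend mode-conditioned estimates using transition priors.
    c = [sum(P_trans[i][j] * mu[i] for i in range(2)) for j in range(2)]
    x0, P0 = [0.0, 0.0], [0.0, 0.0]
    for j in range(2):
        w = [P_trans[i][j] * mu[i] / c[j] for i in range(2)]
        x0[j] = sum(w[i] * x[i] for i in range(2))
        P0[j] = sum(w[i] * (P[i] + (x[i] - x0[j]) ** 2) for i in range(2))
    # 2) Per-mode Kalman predict/update and innovation likelihoods.
    lik = [0.0, 0.0]
    for j in range(2):
        xp, Pp = x0[j], P0[j] + Q[j]   # random-walk prediction
        S = Pp + R                     # innovation variance
        K = Pp / S                     # Kalman gain
        nu = z - xp                    # innovation
        x[j], P[j] = xp + K * nu, (1 - K) * Pp
        lik[j] = math.exp(-0.5 * nu * nu / S) / math.sqrt(2 * math.pi * S)
    # 3) Mode-probability update and combined estimate.
    mu = [lik[j] * c[j] for j in range(2)]
    s = sum(mu)
    mu = [m / s for m in mu]
    return sum(mu[j] * x[j] for j in range(2))

# Smooth quasi-periodic phase, then an erratic phase: the probability of the
# "irregular" mode should rise once the signal turns erratic.
for t in range(50):
    imm_step(math.sin(0.2 * t) + random.gauss(0, 0.1))
smooth_mu1 = mu[1]
for t in range(10):
    imm_step(random.gauss(0, 2.0))
print("P(irregular) smooth phase:", round(smooth_mu1, 3),
      "erratic phase:", round(mu[1], 3))
```

The mode-probability "switch" the abstract describes corresponds to step 3: when the erratic mode's innovations become more likely, its probability takes over without any explicit regime detector.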

  14. Modeling and Development of Medical Information System Based on Support Vector Machine in Web Network

    Directory of Open Access Journals (Sweden)

    Chuanfu Hu

    2017-12-01

    Full Text Available This paper aims to improve and utilize, in real time, the ontology information in the FOAF and vCard ontology designs, applying open relational data technology, querying information results with SPARQL, and sending data in the RDF/JSON format. In addition, it aims to improve the effectiveness and efficiency of extracting patient information from medical information websites. The article covers two web search engines that are used to inform patients about medical care information. The experiment uses Drupal as the main software tool; the Drupal RDF extension module provides some meaningful mappings. In the evaluation, the structure of the experimental test platform is established and system function tests are carried out. The evaluation covers consumers or patients retrieving the latest doctor information, and compares the search capabilities and techniques of our system with those of existing systems.

  15. Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model

    Directory of Open Access Journals (Sweden)

    Mojtaba Salehi

    2013-03-01

    Full Text Available In recent years, the explosion of learning materials in web-based educational systems has made it difficult for learners to locate appropriate materials. Personalized recommendation is an enabling mechanism to overcome the information overload that occurs in new learning environments and to deliver suitable materials to learners. Since users express their opinions based on specific attributes of items, this paper proposes a hybrid recommender system for learning materials based on their attributes, to improve the accuracy and quality of recommendation. The presented system has two main modules: an implicit attribute-based recommender and an explicit attribute-based recommender. In the first module, the weights of implicit or latent attributes of materials for a learner are treated as chromosomes in a genetic algorithm, which optimizes the weights according to historical ratings. Recommendations are then generated by a Nearest Neighborhood Algorithm (NNA) using the optimized weight vectors of implicit attributes, which represent the opinions of learners. In the second module, a preference matrix (PM) is introduced that models the interests of the learner based on explicit attributes of learning materials in a multidimensional information model. A new similarity measure between PMs is then introduced, and recommendations are generated by the NNA. The experimental results show that our proposed method outperforms current algorithms on accuracy measures and can alleviate problems such as cold start and sparsity.
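
    The implicit-attribute module described above can be sketched roughly as follows: a toy genetic algorithm optimizes per-attribute weights against historical ratings, and a similarity-weighted nearest-neighbour rule produces predictions. Items, ratings, and GA settings are all invented toy values, not the paper's.

```python
import math
import random

# Toy data: items described by two attributes, plus the learner's past ratings.
items = {"m1": [1.0, 0.0], "m2": [0.9, 0.1], "m3": [0.0, 1.0]}
ratings = {"m1": 5.0, "m2": 5.0, "m3": 1.0}

def predict(weights, target, history):
    """Weighted nearest-neighbour rating: similarity-weighted mean of ratings."""
    num = den = 0.0
    for it, r in history.items():
        d = math.sqrt(sum(w * (a - b) ** 2
                          for w, a, b in zip(weights, items[it], items[target])))
        s = 1.0 / (1.0 + d)  # turn distance into similarity
        num += s * r
        den += s
    return num / den

def fitness(weights):
    # negative leave-one-out squared error on the historical ratings
    err = 0.0
    for it in ratings:
        rest = {k: v for k, v in ratings.items() if k != it}
        err += (predict(weights, it, rest) - ratings[it]) ** 2
    return -err

random.seed(0)
pop = [[random.random(), random.random()] for _ in range(20)]
for _ in range(30):                       # minimal GA: elitism + blend crossover
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)]
        children.append([max(0.0, w) for w in child])
    pop = parents + children
best = max(pop, key=fitness)
print(best)
```

    With the optimized weights, an unseen item close to highly rated items in attribute space receives a high predicted rating.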

  16. Information model of economy

    Directory of Open Access Journals (Sweden)

    N.S.Gonchar

    2006-01-01

    Full Text Available A new stochastic model of an economy is developed in which consumer choices are treated as dependent random fields. Axioms of such a model are formulated. The existence of random fields of consumer choice and of decision making by firms is proved. New notions of conditionally independent random fields and of random fields of evaluation of information by consumers are introduced. Using the above-mentioned random fields, the random fields of consumer choice and of decision making by firms are constructed. The theory of economic equilibrium is developed.

  17. Towards second-generation smart card-based authentication in health information systems: the secure server model.

    Science.gov (United States)

    Hallberg, J; Hallberg, N; Timpka, T

    2001-01-01

    Conventional smart card-based authentication systems used in health care alleviate some of the security issues in user and system authentication. Existing models still do not cover all security aspects. To enable new protective measures to be developed, an extended model of the authentication process is presented. This model includes a new entity referred to as secure server. Assuming a secure server, a method where the smart card is aware of the status of the terminal integrity verification becomes feasible. The card can then act upon this knowledge and restrict the exposure of sensitive information to the terminal as required in order to minimize the risks. The secure server model can be used to illuminate the weaknesses of current approaches and the need for extensions which alleviate the resulting risks.

  18. A Comprehensive Decision-Making Approach Based on Hierarchical Attribute Model for Information Fusion Algorithms’ Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Lianhui Li

    2014-01-01

    Full Text Available Aiming at the problem of evaluating fusion-algorithm performance in a multiradar information fusion system, we first establish the hierarchical attribute model of the track-relevance performance evaluation model, based on the structural and functional models, and give quantization methods for the evaluation indicators. Second, a combination weighting method is proposed to determine the weights of the evaluation indicators: the objective and subjective weights are determined separately by criteria importance through intercriteria correlation (CRITIC) and by trapezoidal-fuzzy-scale analytic hierarchy process (AHP), and an experience factor is then introduced to obtain the combination weight. Finally, an improved technique for order preference by similarity to ideal solution (TOPSIS), replacing Euclidean distance with Kullback-Leibler divergence (KLD), is used to rank the weighted indicator values of the evaluation objects. An example illustrates the correctness and feasibility of the proposed method.
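
    A minimal sketch of the modified TOPSIS step might look like the following, using the generalised (I-)divergence form of KLD so that it is defined on the weighted, normalised indicator vectors. This is one plausible reading of the substitution, not the paper's exact formulation, and it assumes all indicators are benefit-type and strictly positive.

```python
import numpy as np

def topsis_kld(matrix, weights):
    """TOPSIS-style closeness scores with generalised Kullback-Leibler
    (I-)divergence in place of Euclidean distance. Rows = alternatives,
    columns = benefit-type indicators (all values must be positive)."""
    m = np.asarray(matrix, dtype=float)
    v = m / m.sum(axis=0) * np.asarray(weights)   # weighted, column-normalised
    ideal, anti = v.max(axis=0), v.min(axis=0)    # ideal / anti-ideal solutions
    div = lambda p, q: float(np.sum(p * np.log(p / q) - p + q))  # I-divergence
    d_plus = np.array([div(r, ideal) for r in v])
    d_minus = np.array([div(r, anti) for r in v])
    return d_minus / (d_plus + d_minus)           # relative closeness in [0, 1]
```

    As in classical TOPSIS, the alternative with the largest closeness score is ranked first; only the distance measure changes.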

  19. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with these data and turn them into information for the organization. These technologies are known as Business Intelligence tools. In order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the organization's business model. This paper presents some multidimensional models that can be used in EIS development and proposes a new model suitable for strategic business requests.

  20. Dynamic Model of an Ammonia Synthesis Reactor Based on Open Information

    OpenAIRE

    Jinasena, Asanthi; Lie, Bernt; Glemmestad, Bjørn

    2016-01-01

    Ammonia is a widely used chemical, hence the ammonia manufacturing process has become a standard case study in the scientific community. In the field of mathematical modeling of the dynamics of ammonia synthesis reactors, there is a lack of complete and well documented models. Therefore, the main aim of this work is to develop a complete and well documented mathematical model for observing the dynamic behavior of an industrial ammonia synthesis reactor system. The model is complete enough to ...

  1. Modelling the Effects of Information Campaigns Using Agent-Based Simulation

    National Research Council Canada - National Science Library

    Wragg, Tony

    2006-01-01

    .... The study highlighted the requirement for accurate data concerning a population's social hierarchy, social networks, behavior patterns, human geography and their subsequent impact on the success of both word-of-mouth and mass media driven information campaigns.

  2. An OMG model-based approach for aligning information systems requirements and architectures with business

    OpenAIRE

    Salgado, Carlos Eduardo Rodrigues Teixeira

    2017-01-01

    Doctoral thesis (Doctoral Program in Information Technologies and Systems) The challenges involved in developing information systems (which are able to adapt to rapidly changing business and technological conditions) are directly related to the importance of their alignment with the business counterpart. These challenges comprise issues that cross the management and information systems domains, relating and aligning them in order to attain superior performance for the organiz...

  3. New frontiers in information and production systems modelling and analysis incentive mechanisms, competence management, knowledge-based production

    CERN Document Server

    Novikov, Dmitry; Bakhtadze, Natalia; Zaikin, Oleg

    2016-01-01

    This book demonstrates how to apply modern approaches to complex system control in practical applications involving knowledge-based systems. The dimensions of knowledge-based systems are extended by incorporating new perspectives from control theory, multimodal systems and simulation methods.  The book is divided into three parts: theory, production system and information system applications. One of its main focuses is on an agent-based approach to complex system analysis. Moreover, specialised forms of knowledge-based systems (like e-learning, social network, and production systems) are introduced with a new formal approach to knowledge system modelling.   The book, which offers a valuable resource for researchers engaged in complex system analysis, is the result of a unique cooperation between scientists from applied computer science (mainly from Poland) and leading system control theory researchers from the Russian Academy of Sciences’ Trapeznikov Institute of Control Sciences.

  4. Common data model for natural language processing based on two existing standard information models: CDA+GrAF.

    Science.gov (United States)

    Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D

    2012-08-01

    An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. A resilience-based model for performance evaluation of information systems: the case of a gas company

    Science.gov (United States)

    Azadeh, A.; Salehi, V.; Salehi, R.

    2017-10-01

    Information systems (IS) are strongly influenced by changes in new technology and should react swiftly to external conditions. Resilience engineering is a new method that can enable these systems to absorb changes. In this study, a new framework is presented for performance evaluation of IS that includes DeLone and McLean's factors of success in addition to resilience. Hence, this study attempts to evaluate the impact of resilience on IS with the proposed model at the Iranian Gas Engineering and Development Company, using data obtained from questionnaires and a Fuzzy Data Envelopment Analysis (FDEA) approach. First, the FDEA model with α-cut = 0.05 was identified as the most suitable for this application by running both the Banker, Charnes and Cooper (BCC) and the Charnes, Cooper and Rhodes (CCR) forms of FDEA and selecting the appropriate model based on maximum mean efficiency. Then, the factors were ranked based on the results of a sensitivity analysis, which showed that resilience had a significantly higher impact on the proposed model than the other factors. The results of this study were then verified by conducting the related ANOVA test. This is the first study that examines the impact of resilience on IS using statistical and mathematical approaches.
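
    For reference, the crisp CCR envelopment model underlying the FDEA approach can be sketched as a small linear program; the fuzzy α-cut machinery is omitted and the data below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of decision-making unit j0.
    X: inputs (m x n), Y: outputs (s x n); columns are units.
    Solves: min theta s.t. X@lam <= theta*x0, Y@lam >= y0, lam >= 0."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                         # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])                  # X@lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y@lam <= -y0
    b = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

# two units, one input, one output: unit 1 uses twice the input of unit 0
X = [[2.0, 4.0]]
Y = [[2.0, 2.0]]
print(ccr_efficiency(X, Y, 0), ccr_efficiency(X, Y, 1))
```

    An efficiency of 1 marks a unit on the frontier; a score of 0.5 means the same output could be produced with half the input.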

  6. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    Science.gov (United States)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
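
    The micro-scale step described above (comparing points against a planar BIM surface, separating occluded or non-modelled points by a maximum deviation, and classifying the rest into accuracy bands) might be sketched as follows. The band thresholds are invented placeholders, not the USIBD's actual LOA values.

```python
import numpy as np

# Illustrative accuracy bands in metres; the USIBD LOA specification defines
# the real ranges. Listed from loosest to tightest lower bound.
LOA_BANDS = [(0.05, "LOA20"), (0.015, "LOA30"), (0.005, "LOA40"), (0.0, "LOA50")]

def classify_deviations(points, plane_point, plane_normal, occlusion_max=0.10):
    """Point-to-plane deviation analysis for one planar surface of a BIM
    object: points farther than `occlusion_max` are treated as occluded or
    non-modelled; the rest fall into an accuracy band by absolute deviation."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    d = np.abs((np.asarray(points, float) - np.asarray(plane_point, float)) @ n)
    labels = []
    for dev in d:
        if dev > occlusion_max:
            labels.append("occluded")
            continue
        for bound, name in LOA_BANDS:
            if dev >= bound:
                labels.append(name)
                break
    return d, labels
```

    The per-point labels can then be written back into the point cloud (e.g. as colours) or aggregated per BIM object as parameters, mirroring the integration described in the abstract.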

  7. Neurally and Ocularly Informed Graph-Based Models for Searching 3D Environments

    Science.gov (United States)

    2014-06-03

    … riding in a car. The environment was displayed to the subject on a 30″ Apple Cinema HD display with a 60 Hz refresh rate, and subtended … (simulations in Pohlmeyer et al. (2011), supplementary information). The identity of the image, and the side of the subject’s viewpoint on which it …

  8. Information Technology Security Professionals' Knowledge and Use Intention Based on UTAUT Model

    Science.gov (United States)

    Kassa, Woldeloul

    2016-01-01

    Information technology (IT) security threats and vulnerabilities have become a major concern for organizations in the United States. However, there has been little research on assessing the effect of IT security professionals' knowledge on the use of IT security controls. This study examined the unified theory of acceptance and use of technology…

  9. Electric vehicle energy consumption modelling and prediction based on road information

    NARCIS (Netherlands)

    Wang, J.; Besselink, I.J.M.; Nijmeijer, H.

    The limited driving range is considered as a significant barrier to the spread of electric vehicles. One effective method to reduce “range anxiety” is to offer accurate information to the driver on the remaining driving range. However, the energy consumption during driving is largely determined by

  10. A Costing Model for Project-Based Information and Communication Technology Systems

    Science.gov (United States)

    Stewart, Brian; Hrenewich, Dave

    2009-01-01

    A major difficulty facing IT departments is ensuring that the projects and activities to which information and communications technologies (ICT) resources are committed represent an effective, economic, and efficient use of those resources. This complex problem has no single answer. To determine effective use requires, at the least, a…

  11. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is transformed to fit the input image by an affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, corresponding to rough contour lines of the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  12. A method for evaluating cognitively informed micro-targeted campaign strategies: An agent-based model proof of principle.

    Science.gov (United States)

    Madsen, Jens Koed; Pilditch, Toby D

    2018-01-01

    In political campaigns, perceived candidate credibility influences the persuasiveness of messages. In campaigns aiming to influence people's beliefs, micro-targeted campaigns (MTCs), which target specific voters using their psychological profiles, have become increasingly prevalent. It remains an open question how effective MTCs are, notably in comparison with population-targeted campaign strategies. Using an agent-based model, the paper applies recent insights from cognitive models of persuasion, extending them to the societal level in a novel framework for exploring political campaigning. The paper provides an initial treatment of the complex dynamics of population-level political campaigning in a psychologically informed manner. Model simulations show that MTCs can take advantage of the psychology of the electorate by targeting voters favourably disposed towards the candidate. Relative to broad campaigning, MTCs allow for efficient and adaptive management of complex campaigns. Findings show that disliked MTC candidates can beat liked population-targeting candidates, pointing to societal questions concerning campaign regulation.
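
    A toy version of the comparison, with one candidate micro-targeting favourably disposed voters while the other broadcasts uniformly, could be sketched as below. Population size, nudge strengths, and contact budget are illustrative assumptions, not the paper's calibrated parameters.

```python
import random

def run_election(n_voters=2000, n_targets=800):
    """Toy ABM comparison: candidate A micro-targets the voters most
    favourably disposed towards A with tailored (stronger) messages, while
    candidate B broadcasts a weak uniform message to the whole population."""
    random.seed(1)  # deterministic toy run
    voters = [{"a": random.random(), "b": random.random()}
              for _ in range(n_voters)]
    # B: population-targeted campaign, small nudge to every voter
    for v in voters:
        v["b"] = min(1.0, v["b"] + 0.03)
    # A: contact only the most favourably disposed voters; tailored
    # messages persuade more strongly per contact
    for v in sorted(voters, key=lambda v: v["a"], reverse=True)[:n_targets]:
        v["a"] = min(1.0, v["a"] + 0.15)
    return sum(v["a"] > v["b"] for v in voters) / n_voters

print(run_election())  # candidate A's final vote share
```

    Despite contacting far fewer voters, the targeted candidate converts marginal supporters efficiently, which is the qualitative effect the abstract reports.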

  13. Informed herbivore movement and interplant communication determine the effects of induced resistance in an individual-based model.

    Science.gov (United States)

    Rubin, Ilan N; Ellner, Stephen P; Kessler, André; Morrell, Kimberly A

    2015-09-01

    1. Plant induced resistance to herbivory affects the spatial distribution of herbivores, as well as their performance. In recent years, theories regarding the benefit to plants of induced resistance have shifted from ideas of optimal resource allocation towards a more eclectic set of theories that consider spatial and temporal plant variability and the spatial distribution of herbivores among plants. However, consensus is lacking on whether induced resistance causes increased herbivore aggregation or increased evenness, as both trends have been experimentally documented. 2. We created a spatial individual-based model that can describe many plant-herbivore systems with induced resistance, in order to analyse how different aspects of induced resistance might affect herbivore distribution, and the total damage to a plant population, during a growing season. 3. We analyse the specific effects on herbivore aggregation of informed herbivore movement (preferential movement to less-damaged plants) and of information transfer between plants about herbivore attacks, in order to identify mechanisms driving both aggregation and evenness. We also investigate how the resulting herbivore distributions affect the total damage to plants and aggregation of damage. 4. Even, random and aggregated herbivore distributions can all occur in our model with induced resistance. Highest levels of aggregation occurred in the models with informed herbivore movement, and the most even distributions occurred when the average number of herbivores per plant was low. With constitutive resistance, only random distributions occur. Damage to plants was spatially correlated, unless plants recover very quickly from damage; herbivore spatial autocorrelation was always weak. 5. Our model and results provide a simple explanation for the apparent conflict between experimental results, indicating that both increased aggregation and increased evenness of herbivores can result from induced resistance. We

  14. Semantic-Based Knowledge Management in E-Government: Modeling Attention for Proactive Information Delivery

    Science.gov (United States)

    Samiotis, Konstantinos; Stojanovic, Nenad

    E-government has become almost synonymous with a consumer-led revolution of government services inspired and made possible by the Internet. With technology being the least of the worries for government organizations nowadays, attention is shifting towards managing complexity as one of the basic antecedents of operational and decision-making inefficiency. Complexity has traditionally preoccupied public administrations and has several sources, primarily the cross-functional nature and the degree of legal structuring of administrative work. Both rely strongly on the underlying process and information infrastructure of public organizations. Managing public administration work thus implies managing its processes and information. Knowledge management (KM) and business process reengineering (BPR) have already been deployed successfully by private organizations for the same purposes, and certainly comprise improvement practices worth investigating. Our contribution in this paper concerns the utilization of KM for e-government.

  15. Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    OpenAIRE

    Alwahaishi, Saleh; Snásel, Václav

    2013-01-01

    The world has changed a lot in the past years. The rapid advances in technology and the changing of the communication channels have changed the way people work and, for many, where do they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT) are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As th...

  16. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model

    OpenAIRE

    Saleh Alwahaishi; Václav Snášel

    2013-01-01

    The world has changed a lot in the past years. The rapid advances in technology and the changing of the communication channels have changed the way people work and, for many, where do they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT) are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As th...

  17. Analysis of a SCADA System Anomaly Detection Model Based on Information Entropy

    Science.gov (United States)

    2014-03-27

    … The NTSB report lists alarm management as one of the top five areas for improvement in pipeline SCADA systems (Gerard, 2005). … Zhang, Qin, Wang, and Liang for leak detection in a SCADA-run pipeline system. A concept derived from information theory improved leak detection …

  18. Analyzing the performance of PROSPECT model inversion based on different spectral information for leaf biochemical properties retrieval

    Science.gov (United States)

    Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei

    2018-01-01

    Leaf biochemical constituents provide useful information about major ecological processes. As a fast and nondestructive method, remote sensing is critical for inferring leaf biochemistry via models. The PROSPECT model has been widely applied to retrieving leaf traits from hemispherical reflectance and transmittance. However, measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, as many researchers have done, this study proposes using the transmission spectrum alone, given its increasing availability through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only reflectance, and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or only reflectance than with both spectra.
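
    The transmittance-only inversion idea can be illustrated with a toy forward model in place of PROSPECT-5: a Beer-Lambert transmittance curve whose single biochemical parameter is recovered by least squares from the simulated transmission spectrum alone. The absorption spectrum and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the PROSPECT forward model: Beer-Lambert transmittance as
# a function of one biochemical content c, with an assumed specific
# absorption spectrum k(lambda). Everything here is illustrative.
wavelengths = np.linspace(400, 2400, 50)
k = 0.002 + 0.001 * np.sin(wavelengths / 300.0) ** 2

def forward_transmittance(c):
    return np.exp(-c * k * 100.0)

# simulate a noisy "measured" transmission spectrum for a leaf with c = 3.2
rng = np.random.default_rng(0)
t_meas = forward_transmittance(3.2) + rng.normal(0.0, 0.002, wavelengths.size)

# invert the model from the transmission spectrum alone
res = least_squares(lambda p: forward_transmittance(p[0]) - t_meas,
                    x0=[1.0], bounds=([0.0], [10.0]))
print(res.x[0])  # recovered biochemical content
```

    The real inversion differs mainly in the forward model (the PROSPECT plate model with several biochemical parameters), but the spectral-fitting structure is the same.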

  19. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach.

    Directory of Open Access Journals (Sweden)

    Johannes P M Heinonen

    Full Text Available Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.

  20. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach.

    Science.gov (United States)

    Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J

    2014-01-01

    Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.

  1. Theory of Compliance: Indicator Checklist Statistical Model and Instrument Based Program Monitoring Information System.

    Science.gov (United States)

    Fiene, Richard J.; Woods, Lawrence

    Two unanswered questions about child care are: (1) Does compliance with state child care regulations have a positive impact on children? and (2) Have predictors of program quality been identified? This paper explores a research study and related model that have had some success in answering these questions. Section I, a general introduction,…

  2. Developing, choosing and using landscape evolution models to inform field-based landscape reconstruction studies

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Armitage, J.; Attal, M.; Gorp, van Wouter; Coulthard, T.J.; Schoorl, J.M.

    2017-01-01

    Landscape evolution models (LEMs) are an increasingly popular resource for geomorphologists as they can operate as virtual laboratories where the implications of hypotheses about processes over human to geological timescales can be visualized at spatial scales from catchments to mountain ranges.

  3. Models of language: towards a practice-based account of information in natural language

    NARCIS (Netherlands)

    Andrade-Lotero, E.J.

    2012-01-01

    Edgar Andrade-Lotero investigated two models of linguistic information, focusing in particular on the philosophical presuppositions of these models. One of these models comes from formal semantics; the other is based on a specific investigation into the role of signs in

  4. Combining Empirical Relationships with Data Based Mechanistic Modeling to Inform Solute Tracer Investigations across Stream Orders

    Science.gov (United States)

    Herrington, C.; Gonzalez-Pinzon, R.; Covino, T. P.; Mortensen, J.

    2015-12-01

    Solute transport studies in streams and rivers often begin with the introduction of conservative and reactive tracers into the water column. Information on the transport of these substances is then captured within tracer breakthrough curves (BTCs) and used to estimate, for instance, travel times and dissolved nutrient and carbon dynamics. Traditionally, these investigations have been limited to systems with small discharges, owing to high turbidity (e.g., affecting nitrate signals from SUNA instruments or fluorescence measures) and/or high total dissolved solids (e.g., making the use of salt tracers such as NaCl prohibitively expensive) in larger systems. Additionally, a successful time-of-travel study is valuable for only a single discharge and river stage. We have developed a method to predict tracer BTCs to inform sampling frequencies at small and large stream orders, using empirical relationships developed from multiple tracer injections spanning several orders of magnitude in discharge and reach length. This method was successfully tested in 1st- to 8th-order systems along the Middle Rio Grande River Basin in New Mexico, USA.

  5. Integrated environmental modeling : an SDI - based framework for integrated assessment of agricultural information

    NARCIS (Netherlands)

    Imran, Muhammad; Zurita-Milla, R.; de By, R.A.

    2011-01-01

    Urban villages are widespread in many Chinese cities, providing affordable and accessible housing for rural migrants. These urban villages are developed by the indigenous village population based on a self-help approach and in an unauthorized style. Consequently, urban villages are characterized by

  6. A fuzzy model of a European index based on automatically extracted content information

    NARCIS (Netherlands)

    Milea, D.V.; Almeida, R.J.; Kaymak, U.; Frasincar, F.

    2011-01-01

    In this paper we build on previous work related to predicting the MSCI EURO index based on content analysis of ECB statements. Our focus is on reducing the number of features employed for prediction through feature selection. For this purpose we rely on two methodologies: (stepwise) linear

  7. Agent Based Modelling of Communication Costs: Why Information Can Be Free

    Science.gov (United States)

    Čače, Ivana; Bryson, Joanna J.

    What purposes, other than facilitating the sharing of information, can language have served? First, it may not have evolved to serve any purpose at all. It is possible that language is just a side effect of the large human brain — a spandrel or exaptation — that only became useful later. If language is adaptive, this does not necessarily mean that it is adaptive for the purpose of communication. For example, Dennett (1996) and Chomsky (1980) have stressed the utility of language in thinking. Also, there are different ways to view communication. The purpose of language, according to Dunbar (1993), is to replace grooming as a social bonding process and in this way to ensure the stability of large social groups.

  8. Auto-Mapping and Configuration Method of IEC 61850 Information Model Based on OPC UA

    Directory of Open Access Journals (Sweden)

    In-Jae Shin

    2016-11-01

    The open-platform communication (OPC) unified architecture (UA) (IEC 62541) is introduced as a key technology for realizing a variety of smart grid (SG) use cases, enabling the relevant automation and control tasks. OPC UA can expand interoperability between power systems. The top-level SG management platform needs independent middleware to transparently manage the power information technology (IT) systems, including IEC 61850. To expand interoperability among the many stakeholders and various standards involved, this paper focuses on IEC 61850 for the digital substation. We propose an interconnection method that integrates communication with OPC UA and converts the OPC UA AddressSpace using the system configuration description language (SCL) of IEC 61850. We implemented the mapping process to verify the interconnection method, which can expand interoperability between power systems through OPC UA integration across the various data structures in the smart grid.

  9. Multiple Perspective Approach for the Development of Information Systems Based on Advanced Mathematical Models

    DEFF Research Database (Denmark)

    Carugati, Andrea

    This dissertation presents the results of a three-year case study of an information systems development project in which a scheduling and control system was developed for a manufacturing company. The project goal was to test the feasibility of a new technology called advanced mathematical... Only through negotiation and democratic decision making will it be possible for the team members to have their current weltanschauung represented in decision making. Thirdly, geographical distribution and loose coupling foster individualist rather than group behavior. The more the social tissue is disconnected... to the customers of the system. The use of democratic decision making that brings the team members together on a regular basis contributes both to the reconstruction of the social tissue and to the satisfaction of the development team as a customer of the project. Fourth, the novelty of the technology created problems...

  10. Review evaluation indicators of health information technology course of master's degree in medical sciences universities' based on CIPP Model.

    Science.gov (United States)

    Yarmohammadian, Mohammad Hossein; Mohebbi, Nooshin

    2015-01-01

    The sensitivity of teaching and learning processes in universities emphasizes the necessity of assessing the quality of education, which improves the efficiency and effectiveness of the country's education system. This study was conducted to review and develop evaluation criteria for the health information technology course at the Master of Science level in the Tehran, Shahid Beheshti, Isfahan, Shiraz, and Kashan medical universities in 2012, using the CIPP model. This was an applied, descriptive study with a statistical population of faculty members (23), students (97), directorates (5), and library staff (5), 130 people in total, sampled as a census. To collect data, four questionnaires based on a Likert scale with scores ranging from 1 to 5 were used. The questionnaires' validity was confirmed by consulting health information technology and educational evaluation experts, and their reliability for directorates, faculty, students, and library staff was tested using Cronbach's alpha, giving r = 0.74, r = 0.93, r = 0.98, and r = 0.80, respectively. SPSS software was used for data analysis, with both descriptive and inferential statistics including mean, frequency percentage, standard deviation, and Pearson and Spearman correlations. Drawing on various sources, expert commentary, and the CIPP evaluation model, 139 indicators associated with this course were determined and then evaluated, based on the three factors of context, input, and process, in the areas of professional human resources, academic services, students, directors, faculty, curriculum, budget, facilities, teaching-learning activities, scientific research activities of students and faculty, and the activities of the library staff. This study showed that, in total, the health information technology course at the Master of Science level is relatively good, but trying to improve and correct it in some areas and

  11. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and thus enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/.
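The binomial idea behind this kind of scoring can be sketched in a few lines: under a null model in which each of n theoretical fragment peaks matches a spectrum peak by chance with probability p, the probability of observing k or more matches is a binomial tail, and its negative log makes a natural identification score. This is an illustrative reconstruction, not ProVerB's actual scoring function (the real algorithm also weights peak intensities); all numbers below are invented.

```python
from math import comb, log10

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more
    of n theoretical fragment peaks match by random coincidence."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def match_score(n_theoretical, n_matched, p_random):
    """-log10 of the tail probability; higher means the observed
    number of matches is less plausible under the random model."""
    tail = binomial_tail(n_theoretical, n_matched, p_random)
    return -log10(tail) if tail > 0 else float("inf")

# A spectrum matching 9 of 12 predicted fragments (p_random = 0.05)
# scores far above one matching only 2 of 12.
good = match_score(12, 9, 0.05)
weak = match_score(12, 2, 0.05)
```

Intensity weighting would enter by replacing the flat per-peak probability with one that depends on each peak's intensity rank.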

  12. Using Geographic Information System-based Ecologic Niche Models to Forecast the Risk of Hantavirus Infection in Shandong Province, China

    Science.gov (United States)

    Wei, Lan; Qian, Quan; Wang, Zhi-Qiang; Glass, Gregory E.; Song, Shao-Xia; Zhang, Wen-Yi; Li, Xiu-Jun; Yang, Hong; Wang, Xian-Jun; Fang, Li-Qun; Cao, Wu-Chun

    2011-01-01

    Hemorrhagic fever with renal syndrome (HFRS) is an important public health problem in Shandong Province, China. In this study, we combined ecologic niche modeling with geographic information systems (GIS) and remote sensing techniques to identify the risk factors and affected areas of hantavirus infections in rodent hosts. Land cover and elevation were found to be closely associated with the presence of hantavirus-infected rodent hosts. The averaged area under the receiver operating characteristic curve was 0.864, implying good model performance. The predicted risk maps based on the model were validated with a good fit both against the distribution of hantavirus-infected rodents and against HFRS human case localities. These findings have applications for targeting control and prevention efforts. PMID:21363991
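The reported 0.864 is an averaged area under the ROC curve (AUC). As a reminder of what that metric measures, here is a minimal sketch using the rank-sum identity: AUC equals the probability that a randomly chosen positive site receives a higher predicted risk than a randomly chosen negative one. The scores below are invented for illustration.

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the rank-sum identity: the probability that a random
    positive site outscores a random negative site (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted risks at sites with and without infected rodents:
# 15 of the 16 positive/negative pairs are ordered correctly.
auc = roc_auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.4, 0.3, 0.2])
```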

  13. Modeling and analyzing cascading dynamics of the Internet based on local congestion information

    Science.gov (United States)

    Zhu, Qian; Nie, Jianlong; Zhu, Zhiliang; Yu, Hai; Xue, Yang

    2018-06-01

    Cascading failure has become one of the vital issues in network science. Considering realistic network operational settings, we propose a congestion function to represent the congested extent of a node and construct a local congestion-aware routing strategy with a tunable parameter. We investigate cascading failures on the Internet triggered by deliberate attacks. Simulation results show that the tunable parameter has an optimal value at which the network achieves its maximum level of robustness. The robustness of the network is positively correlated with the tolerance parameter but negatively correlated with the packet generation rate. In addition, there exists a threshold in the proportion of attacked nodes at which the network reaches its lowest robustness. Moreover, by introducing the concept of time delay for information transmission on the Internet, we found that increasing the time delay rapidly decreases the robustness of the network. These findings will be useful for enhancing the robustness of the Internet in the future.
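A local congestion-aware routing rule of the kind described can be sketched as follows: each node scores its neighbors by remaining hop distance plus a congestion term scaled by a tunable parameter, and forwards the packet to the lowest-scoring neighbor. The congestion function, field names, and numbers here are assumptions for illustration, not the paper's exact formulation.

```python
def congestion(queue_len, capacity):
    """Hypothetical congestion function: queued packets relative to capacity."""
    return queue_len / capacity

def choose_next_hop(neighbors, beta):
    """Local congestion-aware rule: score each neighbor by remaining
    hops plus beta times its congestion, and forward to the minimum.
    beta tunes how strongly congestion is avoided."""
    return min(neighbors,
               key=lambda n: n["hops"] + beta * congestion(n["queue"], n["capacity"]))

neighbors = [
    {"id": "a", "hops": 2, "queue": 90, "capacity": 100},  # shorter path, heavily loaded
    {"id": "b", "hops": 3, "queue": 5,  "capacity": 100},  # longer path, nearly idle
]
greedy = choose_next_hop(neighbors, beta=0.0)  # ignores congestion
aware  = choose_next_hop(neighbors, beta=5.0)  # detours around the hot node
```

The paper's finding that an optimal beta exists corresponds to the trade-off visible here: beta too small ignores queues, beta too large inflates path lengths.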

  14. Remote information service access system based on a client-server-service model

    Science.gov (United States)

    Konrad, A.M.

    1996-08-06

    A local host computing system and a remote host computing system are connected by a network, with three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  15. Factors Affecting Acceptance of Hospital Information Systems Based on Extended Technology Acceptance Model: A Case Study in Three Paraclinical Departments.

    Science.gov (United States)

    Nadri, Hamed; Rahimi, Bahlol; Lotfnezhad Afshar, Hadi; Samadbeik, Mahnaz; Garavand, Ali

    2018-04-01

    Regardless of the acceptance of users, information and communication systems can be considered a health intervention designed to improve the care delivered to patients. This study aimed to determine the adoption and use of the extended Technology Acceptance Model (TAM2) by users of the hospital information system (HIS) in paraclinical departments, including laboratory, radiology, and nutrition, and to investigate the key factors in the adoption and use of these systems. A standard questionnaire was used to collect data from 253 users of these systems in the paraclinical departments of eight university hospitals in two cities of Iran. A total of 202 questionnaires with valid responses were used in this study (105 in Urmia and 97 in Khorramabad). The data were processed using LISREL and SPSS software, and the statistical analysis was based on structural equation modeling (SEM). It was found that the original TAM constructs had a significant impact on staff's behavioral intention to adopt HIS in paraclinical departments. The results indicated that the cognitive instrumental processes (job relevance, output quality, result demonstrability, and perceived ease of use), except for result demonstrability, were significant predictors of intention to use, whereas no significant relationship was found between the social influence processes (subjective norm, voluntariness, and image) and the users' behavioral intention to use the system. The results confirmed that several factors in TAM2 that were important in previous studies were not significant in paraclinical departments and in government-owned hospitals. User behavior factors are essential for successful usage of the system and should be considered. The study provides valuable information for hospital system providers and policy makers in understanding the adoption challenges as well as practical guidance for the successful implementation of information

  16. Modelling and Analysis of Automobile Vibration System Based on Fuzzy Theory under Different Road Excitation Information

    Directory of Open Access Journals (Sweden)

    Xue-wen Chen

    2018-01-01

    A fuzzy increment controller is designed for the vibration system of an automobile active suspension with seven degrees of freedom (DOF). To decrease vibration, an active control force is generated by a Proportion-Integration-Differentiation (PID) controller. The controller's parameters are adjusted by a fuzzy increment controller with self-modifying parameter functions, which takes as its input variables the deviation, and its rate of change, between the body's vertical vibration velocity and the desired value at the front and rear suspension positions, based on 49 fuzzy control rules. Using Simulink, the fuzzy increment controller is validated under different road excitations, such as white-noise input with four-wheel correlation in the time domain, sinusoidal input, and pulse input on a C-grade road surface. The simulation results show that the proposed controller obviously reduces vehicle vibration compared with other independent control types on performance indexes such as the root-mean-square values of the body's vertical vibration acceleration and the pitching and rolling angular accelerations.
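The controller structure described, a velocity-form (incremental) PID whose gains a fuzzy layer rescales, can be sketched as below. The fuzzy adjustment is a toy stand-in for the paper's 49-rule table; all gains, rule shapes, and numbers are illustrative assumptions.

```python
class IncrementalPID:
    """Velocity-form PID: each step adds the control increment
    du = kp*(e - e1) + ki*e + kd*(e - 2*e1 + e2)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0   # previous two errors
        self.u = 0.0              # accumulated control force

    def step(self, e):
        self.u += (self.kp * (e - self.e1)
                   + self.ki * e
                   + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return self.u

def fuzzy_gain_scales(e, de):
    """Toy stand-in for the 49-rule fuzzy table: boost kp and cut kd
    when the error and its rate of change are both large."""
    m = min(1.0, abs(e)) * min(1.0, abs(de))  # crude joint rule strength
    return 1.0 + 0.5 * m, 1.0 - 0.3 * m       # (kp scale, kd scale)

pid = IncrementalPID(kp=1.0, ki=0.1, kd=0.0)
u1 = pid.step(1.0)   # error 1.0 -> u = 1.1
u2 = pid.step(0.5)   # increment -0.45 -> u = 0.65
```

In the paper's setup the error would be the deviation of the body's vertical vibration velocity from its desired value at each suspension position, recomputed every sample.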

  17. Intervention Strategies Based on Information-Motivation-Behavioral Skills Model for Health Behavior Change: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Sun Ju Chang, RN, PhD

    2014-09-01

    Conclusion: This review indicates the potential strength of the IMB model as a theoretical framework to develop behavioral interventions. The specific integration strategies delineated for each construct of the model can be utilized to design model-based interventions.

  18. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    Science.gov (United States)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network-certified by the US Army as the Dragon Pulse Information Management System. It is a network-available environment for modeling models, in which models are configured using domain-relevant semantics and use network-available systems, sensors, databases and services as loosely coupled component objects; the resulting models are executable applications. Solutions are based on mission tactics, techniques, and procedures and on subject matter input. Three recent Army use cases are discussed: (a) an ISR system of systems; (b) modeling and simulation behavior validation; (c) a networked digital library with behaviors.

  19. Global Earth Observation System of Systems: Characterizing Uncertainties of Space- based Measurements and Earth System Models Informing Decision Tools

    Science.gov (United States)

    Birk, R. J.; Frederick, M.

    2006-05-01

    The Global Earth Observation System of Systems (GEOSS) framework identifies the benefits of systematically and scientifically networking the capacity of organizations and systems into solutions that benefit nine societal benefit areas. The U.S. Integrated Earth Observation System (IEOS), the U.S. contribution to the GEOSS, focuses on near-term, mid-term, and long-term opportunities to establish integrated system solutions based on the capacities and capabilities of member agencies and affiliations. Scientists at NASA, NOAA, DOE, NSF and other U.S. agencies are evolving the predictive capacity of models of Earth processes based on space-based, airborne and surface-based instruments and their measurements. NASA research activities include advancing the power and accessibility of computational resources (i.e., Project Columbia) to enable robust science data analysis, modeling, and assimilation techniques to advance rapidly. Integrating the resulting observations and predictions into decision support tools requires characterization of the accuracies of a range of input measurements, including temperature and humidity profiles, wind speed, ocean height, sea surface temperature, and atmospheric constituents that are measured globally by U.S.-deployed spacecraft. These measurements are stored in many data formats on many different information systems with widely varying accessibility, and their processing documentation ranges from extremely detailed to very minimal. Integrated and interdisciplinary modeling (enabled by the Earth System Model Framework) enables the types of ensemble analysis that are useful for decision processes associated with energy management, public health risk assessments, and optimizing transportation safety and efficiency. Interdisciplinary approaches challenge systems integrators (both scientists and engineers) to expand beyond the traditional boundaries of particular disciplines to develop, verify and validate, and ultimately benchmark the

  20. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    Science.gov (United States)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images, localizing the thermal images within the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. Using these models, auditors can conduct virtual walk-throughs of buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges of qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred from performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertices in 3D. Then, based on historical weather data reflecting the energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost of this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy-efficient retrofit decision-making. This is a major departure from offhand calculations based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the

  1. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Research on the processing of recognition information has focused on testing the recognition heuristic (RH). On the aggregate, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH or (b) nobody used it but its choice predictions were accidentally in line with the predictions of the strategy actually used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies with a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question the process assumptions for apparent users of the RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.
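The maximum-likelihood strategy classification can be illustrated with choices alone (the study additionally uses response times and confidence ratings): each candidate strategy predicts a choice per trial, a constant application-error rate eps turns predictions into choice probabilities, and each subject is assigned to the strategy with the highest log-likelihood. The trial data and per-strategy predictions below are invented.

```python
from math import log

def loglik(observed, predicted, eps=0.1):
    """Log-likelihood of observed choices under a strategy whose
    predictions are applied with a constant error rate eps."""
    return sum(log(1.0 - eps) if o == p else log(eps)
               for o, p in zip(observed, predicted))

def classify(observed, strategy_predictions, eps=0.1):
    """Assign the subject to the maximum-likelihood strategy."""
    return max(strategy_predictions,
               key=lambda s: loglik(observed, strategy_predictions[s], eps))

observed = ["A", "A", "B", "A", "B", "B"]   # invented choice data for one subject
strategies = {
    "RH":  ["A", "A", "A", "A", "A", "A"],  # always choose the recognized object
    "PCS": ["A", "A", "B", "A", "B", "A"],  # hypothetical PCS predictions
}
best = classify(observed, strategies)       # PCS fits 5/6 trials, RH only 3/6
```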

  2. Isotope-based quantum information

    International Nuclear Information System (INIS)

    Plekhanov, Vladimir G.

    2012-01-01

    The present book introduces the main ideas and techniques of the rapidly progressing field of quantum information and quantum computation using isotope-mixed materials. It starts with an introduction to isotope physics and then describes isotope-based quantum information and quantum computation. The ability to manipulate and control electron and/or nuclear spin in semiconductor devices provides a new route to expand the capabilities of inorganic semiconductor-based electronics and to design innovative devices with potential application in quantum computing. One of the major challenges towards these objectives is to develop semiconductor-based systems and architectures in which the spatial distribution of spins and their properties can be controlled. For instance, to eliminate electron spin decoherence resulting from hyperfine interaction with the nuclear spin background, isotopically controlled (i.e., nuclear spin-depleted) devices are needed. In other emerging concepts, control of the spatial distribution of isotopes with nuclear spins is a prerequisite to implementing quantum bits (qubits). Therefore, stable semiconductor isotopes are important elements in the development of solid-state quantum information. Not only are different algorithms of quantum computation discussed, but different models of quantum computers are also presented. With numerous illustrations, this small book is of great interest for undergraduate students taking courses in mesoscopic physics or nanoelectronics as well as quantum information, and for academic and industrial researchers working in this field.

  3. Information Retrieval Models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Göker, Ayse; Davies, John

    2009-01-01

    Many applications that handle information on the internet would be completely inadequate without the support of information retrieval technology. How would we find information on the world wide web if there were no web search engines? How would we manage our email without spam filtering? Much of the

  4. The Path of New Information Technology Affecting Educational Equality in the New Digital Divide--Based on Information System Success Model

    Science.gov (United States)

    Zheng, Qian; Liang, Chang-Yong

    2017-01-01

    New information technology (new IT) plays an increasingly important role in the field of education, which greatly enriches the teaching means and promotes the sharing of education resources. However, because of the New Digital Divide existing, the impact of new IT on educational equality has yet to be discussed. Based on Information System Success…

  5. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of companies listed on Bursa Malaysia in terms of financial ratios, in order to evaluate stock performance. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings, and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs or outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials industry. The analysis employed Alirezaee and Afsharian's model, in which the originality of the Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) is value-added by the Balance Index. The data cover the year 2015, and the research population includes the listed companies in the construction and materials industry (63 companies). According to the ranking, the proposed model can completely rank the 63 companies using the selected financial ratios.
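A hedged sketch of the CCR efficiency idea under constant returns to scale: in the special single-input, single-output case, the DEA linear program reduces to normalizing each DMU's output/input ratio by the best observed ratio. Real DEA over several financial ratios requires solving one linear program per DMU; the company names and figures below are invented.

```python
def ccr_efficiency(dmus):
    """CCR efficiency under constant returns to scale for the
    single-input, single-output case: each DMU's output/input ratio
    normalized by the best ratio (the DEA frontier)."""
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Invented companies: (input = capital employed, output = net profit)
scores = ccr_efficiency({
    "CoA": (100.0, 20.0),  # ratio 0.200 -> on the frontier
    "CoB": (150.0, 24.0),  # ratio 0.160
    "CoC": (80.0, 10.0),   # ratio 0.125
})
ranking = sorted(scores, key=scores.get, reverse=True)
```

A tie-breaking refinement such as the Balance Index mentioned in the abstract would then separate DMUs that share an efficiency score of 1.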

  6. Modeling gross primary production of agro-forestry ecosystems by assimilation of satellite-derived information in a process-based model.

    Science.gov (United States)

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

    In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. The results showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and better described the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale.

  7. Modeling Gross Primary Production of Agro-Forestry Ecosystems by Assimilation of Satellite-Derived Information in a Process-Based Model

    Directory of Open Access Journals (Sweden)

    Guenther Seufert

    2009-02-01

    In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of (i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and (ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. The results showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and better described the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale.

  8. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model.

    Science.gov (United States)

    Reiser, Leonore; Berardini, Tanya Z; Li, Donghui; Muller, Robert; Strait, Emily M; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

    Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long-term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR) founded the nonprofit organization Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding, but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org. © The Author(s) 2016. Published by Oxford University Press.

  9. Information acquisition during online decision-making: A model-based exploration using eye-tracking data

    NARCIS (Netherlands)

    Shi, W.; Wedel, M.; Pieters, R.

    2013-01-01

    We propose a model of eye-tracking data to understand information acquisition patterns on attribute-by-product matrices, which are common in online choice environments such as comparison websites. The objective is to investigate how consumers gather product and attribute information from moment to

  10. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation, (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

  11. Enhancing the performance of model-based elastography by incorporating additional a priori information in the modulus image reconstruction process

    International Nuclear Information System (INIS)

    Doyley, Marvin M; Srinivasan, Seshadri; Dimidenko, Eugene; Soni, Nirmal; Ophir, Jonathan

    2006-01-01

    Model-based elastography is fraught with problems owing to the ill-posed nature of the inverse elasticity problem. To overcome this limitation, we have recently developed a novel inversion scheme that incorporates a priori information concerning the mechanical properties of the underlying tissue structures, and the variance incurred during displacement estimation, in the modulus image reconstruction process. The information was procured by employing standard strain imaging methodology, and introduced in the reconstruction process through the generalized Tikhonov approach. In this paper, we report the results of experiments conducted on gelatin phantoms to evaluate the performance of modulus elastograms computed with the generalized Tikhonov (GTK) estimation criterion relative to those computed by employing the unweighted least-squares estimation criterion, the weighted least-squares estimation criterion and the standard Tikhonov method (i.e., the generalized Tikhonov method with no modulus prior). The results indicate that modulus elastograms computed with the generalized Tikhonov approach had superior elastographic contrast discrimination and contrast recovery. In addition, image reconstruction was more resilient to structural decorrelation noise when additional constraints were imposed on the reconstruction process through the GTK method.
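The generalized Tikhonov criterion described in this record has a standard closed form: minimize a weighted data misfit plus a penalty that pulls the solution toward a prior. A minimal NumPy sketch, assuming a linearized forward operator A, a weight matrix W built from displacement-estimate variances, and a modulus prior x0 from strain imaging; all names and the synthetic data are illustrative, not the paper's implementation:

```python
import numpy as np

def generalized_tikhonov(A, b, W, L, x0, lam):
    """Solve min_x (b - A x)^T W (b - A x) + lam * ||L (x - x0)||^2.

    Closed form: (A^T W A + lam L^T L) x = A^T W b + lam L^T L x0.
    W encodes measurement variances; x0 is the prior estimate.
    """
    lhs = A.T @ W @ A + lam * (L.T @ L)
    rhs = A.T @ W @ b + lam * (L.T @ (L @ x0))
    return np.linalg.solve(lhs, rhs)

# Tiny synthetic example: recover a smooth "modulus" profile.
rng = np.random.default_rng(0)
n = 20
A = rng.normal(size=(30, n))
x_true = np.linspace(1.0, 2.0, n)
b = A @ x_true + 0.01 * rng.normal(size=30)
W = np.eye(30)            # uniform measurement variance
Lm = np.eye(n)            # identity regularizer
x0 = np.full(n, 1.5)      # coarse prior
x_hat = generalized_tikhonov(A, b, W, Lm, x0, lam=0.1)
```

With lam = 0 the estimate reduces to weighted least squares; with a nonzero prior it has the structure of the GTK criterion described above.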

  12. Multidimensional Models of Information Need

    OpenAIRE

    Yun-jie (Calvin) Xu; Kai Huang (Joseph) Tan

    2009-01-01

    User studies in information science have recognised relevance as a multidimensional construct. An implication of multidimensional relevance is that a user's information need should be modeled by multiple data structures to represent different relevance dimensions. While the extant literature has attempted to model multiple dimensions of a user's information need, the fundamental assumption that a multidimensional model is better than a uni-dimensional model has not been addressed. This study ...

  13. Towards a topological reasoning service for IFC-based building information models in a Semantic Web context

    NARCIS (Netherlands)

    Beetz, J.; Leeuwen, van J.P.; Vries, de B.; Rivard, H.; Cheung, M.M.S.; Melhem, H.G.; Miresco, E.T.; Amor, R.; Ribeiro, F.L.

    2006-01-01

    One of the classic problems identified in the interdisciplinary use of Building Information Models (BIM) is the different representation requirements regarding topology (Eastman 1999). Although this problem has been addressed in several modeling efforts (Augenbro 1995) the most widely spread BIM to

  14. Gender-Based Violence and Armed Conflict: A Community-Informed Socioecological Conceptual Model From Northeastern Uganda

    Science.gov (United States)

    Mootz, Jennifer J.; Stabb, Sally D.; Mollen, Debra

    2017-01-01

    The high prevalence of gender-based violence (GBV) in armed conflict has been documented in various national contexts, but less is known about the complex pathways that constitute the relation between the two. Employing a community-based collaborative approach, we constructed a community-informed socioecological conceptual model from a feminist perspective, detailing how armed conflict relates to GBV in a conflict-affected rural community in Northeastern Uganda. The research questions were as follows: (1) How does the community conceptualize GBV? and (2) How does armed conflict relate to GBV? Nine focus group discussions divided by gender, age, and profession and six key informant interviews were conducted. Participants’ ages ranged from 9 to 80 years (n = 34 girls/women, n = 43 boys/men). Grounded theory was used in analysis. Participants conceptualized eight forms of and 22 interactive variables that contributed to GBV. Armed conflict affected physical violence/quarreling, sexual violence, early marriage, and land grabbing via a direct pathway and four indirect pathways initiated through looting of resources, militarization of the community, death of a parent(s) or husband, and sexual violence. The findings suggest that community, organizational, and policy-level interventions, which include attention to intersecting vulnerabilities for exposure to GBV in conflict-affected settings, should be prioritized. While tertiary psychological interventions with women and girls affected by GBV in these areas should not be eliminated, we suggest that policy makers and members of community and organizational efforts make systemic and structural changes. Online slides for instructors who want to use this article for teaching are available on PWQ’s website at http://journals.sagepub.com/page/pwq/suppl/index PMID:29563663

  16. Information risk and security modeling

    Science.gov (United States)

    Zivic, Predrag

    2005-03-01

    This research paper presentation will feature current frameworks for addressing risk and security modeling and metrics. The paper will analyze technical-level risk and security metrics of Common Criteria/ISO15408, Centre for Internet Security guidelines, NSA configuration guidelines and the metrics used at this level. The view of IT operational standards on security metrics, such as GMITS/ISO13335 and ITIL/ITMS, and architectural guidelines such as ISO7498-2 will be explained. Business process level standards such as ISO17799, COSO and CobiT will be presented with their control approach to security metrics. At the top level, maturity standards such as SSE-CMM/ISO21827, NSA Infosec Assessment and CobiT will be explored and reviewed. For each defined level of security metrics the research presentation will explore the appropriate usage of these standards. The paper will discuss the standards' approaches to conducting risk and security metrics. The research findings will demonstrate the need for a common baseline for both risk and security metrics. This paper will show the relation between the attribute-based common baseline and corporate assets and controls for risk and security metrics. It will be shown that such an approach spans all of the standards mentioned. The proposed 3D visual presentation and the development of the Information Security Model will be analyzed and postulated. The presentation will clearly demonstrate the benefits of the proposed attribute-based approach and a defined risk and security space for modeling and measuring.

  17. Green Template for Life Cycle Assessment of Buildings Based on Building Information Modeling: Focus on Embodied Environmental Impact

    Directory of Open Access Journals (Sweden)

    Sungwoo Lee

    2015-12-01

    Full Text Available The increased popularity of building information modeling (BIM) for application in the construction of eco-friendly green buildings has given rise to techniques for evaluating green buildings constructed using BIM features. Existing BIM-based green building evaluation techniques mostly rely on externally provided evaluation tools, which pose problems associated with interoperability, including a lack of data compatibility and the amount of time required for format conversion. To overcome these problems, this study sets out to develop a template (the “green template”) for evaluating the embodied environmental impact of using a BIM design tool as part of BIM-based building life-cycle assessment (LCA) technology development. First, the BIM level of detail (LOD) was determined for evaluating the embodied environmental impact, and a database of the impact factors of the embodied environmental impact of the major building materials was constructed, thereby adopting an LCA-based approach. Libraries of major building elements were developed using the established database and the compiled evaluation table of the embodied environmental impact of the building materials. Finally, the green template was developed as an embodied environmental impact evaluation tool and a case study was performed to test its applicability. The results of the green template-based embodied environmental impact evaluation of a test building were validated against those of its actual quantity takeoff (2D takeoff), and its reliability was confirmed by an effective error rate of ≤5%. This study aims to develop a system for assessing the impact of the substances discharged from the concrete production process on six environmental impact categories, i.e., global warming (GWP), acidification (AP), eutrophication (EP), abiotic depletion (ADP), ozone depletion (ODP), and photochemical oxidant creation (POCP), using the life cycle assessment (LCA) method. To achieve this, we proposed an LCA method

  18. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    Science.gov (United States)

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on large standard datasets, provided by the BioASQ challenge, show that the proposed method achieves good performance compared with the current state-of-the-art methods. The proposed method significantly outperforms the current state-of-the-art methods by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
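The BM25 scoring step this record relies on can be sketched as follows. Plain whitespace tokens stand in for the paper's stemmed-word and UMLS-concept features, and the toy passages and query are invented for illustration:

```python
import math
from collections import Counter

def bm25_scores(query_terms, passages, k1=1.2, b=0.75):
    """Score candidate passages against query terms with Okapi BM25."""
    docs = [p.lower().split() for p in passages]
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for q in query_terms:
            if q not in tf:
                continue
            idf = math.log(1 + (N - df[q] + 0.5) / (df[q] + 0.5))
            norm = tf[q] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[q] * (k1 + 1) / norm
        scores.append(s)
    return scores

passages = [
    "aspirin inhibits platelet aggregation",
    "statins lower cholesterol levels",
    "aspirin reduces fever and pain",
]
scores = bm25_scores(["aspirin", "platelet"], passages)
best = max(range(len(passages)), key=scores.__getitem__)
```

In the actual method each passage would be a CoreNLP-split sentence and the term set would include UMLS concept identifiers alongside stems.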

  19. Informal information for web-based engineering catalogues

    Science.gov (United States)

    Allen, Richard D.; Culley, Stephen J.; Hicks, Ben J.

    2001-10-01

    Success is highly dependent on the ability of a company to efficiently produce optimal designs. In order to achieve this, companies must minimize time to market and possess the ability to make fully informed decisions at the early phase of the design process. Such decisions may include the choice of component and suppliers, as well as cost and maintenance considerations. Computer modeling and electronic catalogues are becoming the preferred medium for the selection and design of mechanical components. In utilizing these techniques, the designer demands the capability to identify, evaluate and select mechanical components both quantitatively and qualitatively. Quantitative decisions generally encompass performance data included in the formal catalogue representation. It is in the area of qualitative decisions that the use of what the authors call 'Informal Information' is of crucial importance. Thus, 'Informal Information' must often be incorporated into the selection process and selection systems. This would enable more informed decisions to be made more quickly, without the need for information retrieval via discussion with colleagues in the design environment. This paper provides an overview of the use of electronic information in the design of mechanical systems, including a discussion of the limitations of current technology. The importance of Informal Information is discussed and the requirements for associating it with web-based electronic catalogues are developed. The resulting system is based on a flexible XML schema and enables the storage, classification and recall of Informal Information packets. Furthermore, a strategy for the inclusion of Informal Information is proposed, and an example case is used to illustrate the benefits.

  20. Combining Livestock Production Information in a Process-Based Vegetation Model to Reconstruct the History of Grassland Management

    Science.gov (United States)

    Chang, Jinfeng; Ciais, Philippe; Herrero, Mario; Havlik, Petr; Campioli, Matteo; Zhang, Xianzhou; Bai, Yongfei; Viovy, Nicolas; Joiner, Joanna; Wang, Xuhui; hide

    2016-01-01

    Grassland management type (grazed or mown) and intensity (intensive or extensive) play a crucial role in the greenhouse gas balance and surface energy budget of this biome, both at field scale and at large spatial scale. However, global gridded historical information on grassland management intensity is not available. Combining modelled grass-biomass productivity with statistics of the grass-biomass demand by livestock, we reconstruct gridded maps of grassland management intensity from 1901 to 2012. These maps include the minimum area of managed vs. maximum area of unmanaged grasslands and the fraction of mown vs. grazed area at a resolution of 0.5deg by 0.5deg. The grass-biomass demand is derived from a livestock dataset for 2000, extended to cover the period 1901-2012. The grass-biomass supply (i.e. forage grass from mown grassland and biomass grazed) is simulated by the process-based model ORCHIDEE-GM driven by historical climate change, rising CO2 concentration, and changes in nitrogen fertilization. The global area of managed grassland obtained in this study increases from 6.1 x 10(exp 6) km(exp 2) in 1901 to 12.3 x 10(exp 6) km(exp 2) in 2000, although the expansion pathway varies between different regions. ORCHIDEE-GM also simulated an increase in global mean productivity and herbage-use efficiency over managed grassland during the 20th century, indicating a general intensification of grassland management at global scale but with regional differences. The gridded grassland management intensity maps are model dependent because they depend on modelled productivity. Thus specific attention was given to the evaluation of modelled productivity against a series of observations, from site-level net primary productivity (NPP) measurements to two global satellite products of gross primary productivity (GPP) (MODIS-GPP and SIF data). Generally, ORCHIDEE-GM captures the spatial pattern, seasonal cycle, and inter-annual variability of grassland productivity at global
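The supply-vs-demand logic behind the reconstruction (the minimum managed area is the one at which modelled biomass supply just meets livestock demand) can be illustrated with a back-of-envelope function. The units and numbers are hypothetical; the real method operates per 0.5-degree grid cell with ORCHIDEE-GM productivity:

```python
def managed_fraction(grass_demand, productivity, grassland_area):
    """Minimum fraction of a cell's grassland that must be managed so that
    modelled biomass supply meets livestock grass-biomass demand.

    grass_demand: t DM / yr; productivity: t DM / km^2 / yr (managed);
    grassland_area: km^2. Illustrative only, not ORCHIDEE-GM itself.
    """
    supply_if_all_managed = productivity * grassland_area
    if supply_if_all_managed <= 0:
        return 1.0  # demand cannot be met; flag the cell as fully managed
    return min(1.0, grass_demand / supply_if_all_managed)

# Hypothetical cell: 10 km^2 of grassland, 300 t DM/km^2/yr productivity,
# 1200 t DM/yr of livestock demand -> 40% of the grassland must be managed.
frac = managed_fraction(grass_demand=1200.0, productivity=300.0, grassland_area=10.0)
```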

  1. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  2. A Framework for Effective Assessment of Model-based Projections of Biodiversity to Inform the Next Generation of Global Conservation Targets

    Science.gov (United States)

    Myers, B.; Beard, T. D.; Weiskopf, S. R.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.; Casey, K.; Lenton, T. M.; Leidner, A. K.; Ruane, A. C.; Ferrier, S.; Serbin, S.; Matsuda, H.; Shiklomanov, A. N.; Rosa, I.

    2017-12-01

    Biodiversity and ecosystems services underpin political targets for the conservation of biodiversity; however, previous incarnations of these biodiversity-related targets have not relied on integrated model based projections of possible outcomes based on climate and land use change. Although a few global biodiversity models are available, most biodiversity models lie along a continuum of geography and components of biodiversity. Model-based projections of the future of global biodiversity are critical to support policymakers in the development of informed global conservation targets, but the scientific community lacks a clear strategy for integrating diverse data streams in developing, and evaluating the performance of, such biodiversity models. Therefore, in this paper, we propose a framework for ongoing testing and refinement of model-based projections of biodiversity trends and change, by linking a broad variety of biodiversity models with data streams generated by advances in remote sensing, coupled with new and emerging in-situ observation technologies to inform development of essential biodiversity variables, future global biodiversity targets, and indicators. Our two main objectives are to (1) develop a framework for model testing and refining projections of a broad range of biodiversity models, focusing on global models, through the integration of diverse data streams and (2) identify the realistic outputs that can be developed and determine coupled approaches using remote sensing and new and emerging in-situ observations (e.g., metagenomics) to better inform the next generation of global biodiversity targets.

  3. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR models

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders

    2015-01-01

    In this article, we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as the Akaike Information Criterion… The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms… -based method to over-estimate the co-integration rank in relatively small sample sizes…
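Once the maximized log-likelihood of the VAR has been computed for each candidate co-integration rank (which requires a Johansen-type reduced-rank fit), information-criterion rank selection reduces to penalized model choice. A hedged sketch with invented log-likelihoods and parameter counts:

```python
import math

def bic_select_rank(logliks, n_params, T):
    """Pick the co-integration rank minimizing BIC = -2*loglik + k*log(T).

    logliks[r]: maximized log-likelihood of the model with rank r;
    n_params[r]: its free-parameter count; T: sample size.
    """
    bics = [-2.0 * ll + k * math.log(T) for ll, k in zip(logliks, n_params)]
    best_rank = min(range(len(bics)), key=bics.__getitem__)
    return best_rank, bics

# Hypothetical likelihoods for ranks 0..2 of a bivariate VAR with T = 200.
rank, bics = bic_select_rank([-512.4, -498.1, -497.6], [8, 11, 12], T=200)
```

Swapping log(T) for 2 gives AIC; the article's point is that such criteria can be compared against sequential trace-test procedures under heteroskedasticity.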

  4. Conceptual models of information processing

    Science.gov (United States)

    Stewart, L. J.

    1983-01-01

    The conceptual information processing issues are examined. Human information processing is defined as an active cognitive process that is analogous to a system. It is the flow and transformation of information within a human. The human is viewed as an active information seeker who is constantly receiving, processing, and acting upon the surrounding environmental stimuli. Human information processing models are conceptual representations of cognitive behaviors. Models of information processing are useful in representing the different theoretical positions and in attempting to define the limits and capabilities of human memory. It is concluded that an understanding of conceptual human information processing models and their applications to systems design leads to a better human factors approach.

  5. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information

    NARCIS (Netherlands)

    L.M. Lamers (Leida)

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness

  6. Closed-loop EMG-informed model-based analysis of human musculoskeletal mechanics on rough terrains

    NARCIS (Netherlands)

    Varotto, C.; Sawacha, Z.; Gizzi, L; Farina, D.; Sartori, M.

    2017-01-01

    This work aims at estimating the musculoskeletal forces acting in the human lower extremity during locomotion on rough terrains. We employ computational models of the human neuro-musculoskeletal system that are informed by multi-modal movement data including foot-ground reaction forces, 3D marker

  7. Spiral model pilot project information model

    Science.gov (United States)

    1991-01-01

    The objective was an evaluation of the Spiral Model (SM) development approach to allow NASA Marshall to develop an experience base of that software management methodology. A discussion is presented of the Information Model (IM) that was used as part of the SM methodology. A key concept of the SM is the establishment of an IM to be used by management to track the progress of a project. The IM is the set of metrics that is to be measured and reported throughout the life of the project. These metrics measure both the product and the process to ensure the quality of the final delivery item and to ensure the project met programmatic guidelines. The beauty of the SM, along with the IM, is the ability to measure not only the correctness of the specification and implementation of the requirements but to also obtain a measure of customer satisfaction.

  8. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications

  9. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
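A loose sketch of the MIMRCV idea, not the authors' exact algorithm: rank descriptors by a histogram estimate of mutual information with the target property, and admit a descriptor only if it is not nearly collinear with one already admitted. The bin count, correlation threshold and synthetic data are all assumptions:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information between two 1-D arrays."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_features(X, y, k=2, corr_max=0.95):
    """Greedy MI-based selection that skips near-collinear candidates."""
    order = sorted(range(X.shape[1]), key=lambda j: -mutual_info(X[:, j], y))
    chosen = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) < corr_max for i in chosen):
            chosen.append(j)
        if len(chosen) == k:
            break
    return chosen

# Synthetic descriptors: x2 is nearly collinear with x1, so only one of
# the pair should survive alongside the independent descriptor x3.
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = x1 + 0.01 * rng.normal(size=500)
x3 = rng.normal(size=500)
y = x1 + 0.5 * x3 + 0.1 * rng.normal(size=500)
X = np.column_stack([x1, x2, x3])
picked = select_features(X, y, k=2)
```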

  11. Context based multimedia information retrieval

    DEFF Research Database (Denmark)

    Mølgaard, Lasse Lohilahti

    The large amounts of digital media becoming available require that new approaches are developed for retrieving, navigating and recommending the data to users in a way that reflects how we semantically perceive the content. The thesis investigates ways to retrieve and present content for users… topics from a large collection of the transcribed speech to improve retrieval of spoken documents. The context modelling is done using a variant of probabilistic latent semantic analysis (PLSA), to extract properties of the textual sources that reflect how humans perceive context. We perform PLSA… of Wikipedia, as well as text-based semantic similarity. The final aspect investigated is how to include some of the structured data available in Wikipedia to include temporal information. We show that a multiway extension of PLSA makes it possible to extract temporally meaningful topics, better than using
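The PLSA aspect model underlying the context modelling can be sketched with plain EM on a document-term count matrix. This is the basic two-way model, not the thesis's multiway extension, and the toy corpus is invented:

```python
import numpy as np

def plsa(counts, n_topics=2, n_iter=50, seed=0):
    """Fit PLSA with EM on a documents-by-terms count matrix.

    Returns P(topic|doc) and P(term|topic), each row-normalized.
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_topics, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (D, W, Z)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / joint.sum(axis=2, keepdims=True).clip(min=1e-12)
        # M-step: reweight by observed counts and renormalize
        weighted = counts[:, :, None] * resp
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    return p_z_d, p_w_z

# Two obviously separable "topics": docs 0-1 use terms 0-1, docs 2-3 use 2-3.
counts = np.array([
    [5, 4, 0, 0],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [0, 0, 4, 5],
], dtype=float)
p_z_d, p_w_z = plsa(counts)
```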

  12. Identifying appropriate reference data models for comparative effectiveness research (CER) studies based on data from clinical information systems.

    Science.gov (United States)

    Ogunyemi, Omolola I; Meeker, Daniella; Kim, Hyeon-Eui; Ashish, Naveen; Farzaneh, Seena; Boxwala, Aziz

    2013-08-01

    The need for a common format for electronic exchange of clinical data prompted federal endorsement of applicable standards. However, despite obvious similarities, a consensus standard has not yet been selected in the comparative effectiveness research (CER) community. Using qualitative metrics for data retrieval and information loss across a variety of CER topic areas, we compare several existing models from a representative sample of organizations associated with clinical research: the Observational Medical Outcomes Partnership (OMOP), Biomedical Research Integrated Domain Group, the Clinical Data Interchange Standards Consortium, and the US Food and Drug Administration. While the models examined captured a majority of the data elements that are useful for CER studies, data elements related to insurance benefit design and plans were most detailed in OMOP's CDM version 4.0. Standardized vocabularies that facilitate semantic interoperability were included in the OMOP and US Food and Drug Administration Mini-Sentinel data models, but are left to the discretion of the end-user in Biomedical Research Integrated Domain Group and Analysis Data Model, limiting reuse opportunities. Among the challenges we encountered was the need to model data specific to a local setting. This was handled by extending the standard data models. We found that the Common Data Model from the OMOP met the broadest complement of CER objectives. Minimal information loss occurred in mapping data from institution-specific data warehouses onto the data models from the standards we assessed. However, to support certain scenarios, we found a need to enhance existing data dictionaries with local, institution-specific information.

  13. Impacts of Irrigation and Climate Change on Water Security: Using Stakeholder Engagement to Inform a Process-based Crop Model

    Science.gov (United States)

    Leonard, A.; Flores, A. N.; Han, B.; Som Castellano, R.; Steimke, A.

    2016-12-01

    Irrigation is an essential component of agricultural production in arid and semi-arid regions, accounting for a majority of global freshwater withdrawals used for human consumption. Since climate change affects both the spatiotemporal demand and availability of water in irrigated areas, agricultural productivity and water efficiency depend critically on how producers adapt and respond to climate change. It is necessary, therefore, to understand the coevolution and feedbacks between humans and agricultural systems. Integration of social and hydrologic processes can be achieved by active engagement with local stakeholders and applying their expertise to models of coupled human-environment systems. Here, we use a process-based crop simulation model (EPIC) informed by stakeholder engagement to determine how both farm management and climate change influence regional agricultural water use and production in the Lower Boise River Basin (LBRB) of southwest Idaho. Specifically, we investigate how a shift from flood to sprinkler irrigation would affect the watershed's overall agricultural water use under RCP 4.5 and RCP 8.5 climate scenarios. The LBRB comprises about 3500 km2, of which 20% is dedicated to irrigated crops and another 40% to grass/pasture grazing land. Via interviews of stakeholders in the LBRB, we have determined that approximately 70% of irrigated lands in the region are flood irrigated. We model four common crops produced in the LBRB (alfalfa, corn, winter wheat, and sugarbeets) to investigate both the hydrologic and agricultural impacts of irrigation and climatic drivers. Factors influencing farmers' decision to switch from flood to sprinkler irrigation include potential economic benefits, external financial incentives, and providing a buffer against future water shortages. These two irrigation practices are associated with significantly different surface water and energy budgets, and large-scale shifts in practice could substantially impact regional

  14. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
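The covariance-discrepancy idea behind this least squares estimator can be illustrated with a toy one-parameter model; the `implied_cov` parameterization below is invented for illustration and is not the actual Q-/D-diffusion parameterization implemented in the diffIRT package.

```python
# Toy illustration of least-squares calibration: choose the parameter that
# minimizes the discrepancy between the observed and the model-implied
# covariance matrix. The one-parameter implied_cov() is a hypothetical
# stand-in, not the diffusion-model parameterization.

def implied_cov(a):
    # Hypothetical model-implied covariance of (response, log response time)
    return [[1.0, a], [a, a * a + 1.0]]

def discrepancy(s_obs, s_imp):
    # Unweighted least-squares fit function: sum of squared elementwise gaps
    return sum((s_obs[i][j] - s_imp[i][j]) ** 2
               for i in range(2) for j in range(2))

def fit(s_obs, grid):
    # A coarse grid search stands in for a proper numerical minimizer
    return min(grid, key=lambda a: discrepancy(s_obs, implied_cov(a)))

s_obs = [[1.02, 0.48], [0.48, 1.26]]  # observed covariance matrix
a_hat = fit(s_obs, [i / 100.0 for i in range(-200, 201)])
print(a_hat)  # -> 0.49
```

The model-fit test mentioned in the abstract follows the same logic: a large remaining discrepancy at the fitted parameter signals misfit.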

  15. An evaluation of the coping patterns of rape victims: integration with a schema-based information-processing model.

    Science.gov (United States)

    Littleton, Heather

    2007-08-01

    The current study sought to provide an expansion of Resick and Schnicke's information-processing model of interpersonal violence response. Their model posits that interpersonal violence threatens victims' schematic beliefs and that victims can resolve this threat through assimilation, accommodation, or overaccommodation. In addition, it is hypothesized that how victims resolve schematic threat affects their coping strategies. To test this hypothesis, a cluster analysis of rape victims' coping patterns was conducted. Victims' coping patterns were related to distress, self-worth, and rape label in ways consistent with predictions. Thus, future research should focus on the implications of how victims integrate trauma with schemas.

  16. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less studied application for computer-based procedures: field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  17. Integrated modelling of module behavior and energy aspects in mechatronics. Energy optimization of production facilities based on model information; Modellintegration von Verhaltens- und energetischen Aspekten fuer mechatronische Module. Energieoptimierung von Produktionsanlagen auf Grundlage von Modellinformationen

    Energy Technology Data Exchange (ETDEWEB)

    Schuetz, Daniel; Vogel-Heuser, Birgit [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Informationstechnik im Maschinenwesen

    2011-01-15

    In this paper, a modelling approach is presented that merges the operation characteristics and the energy aspects of automation modules into one model. A characteristic of this approach is the state-based behavior model. An example is used to demonstrate how the information in the model can be used for energy-optimized operation controlled by software agents. (orig.)

  18. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...

  19. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning.

    Directory of Open Access Journals (Sweden)

    Abe D Hofman

    We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task for the study of proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning, and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both the rule-based and information-integration perspectives. These models are applied to two different datasets: a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online dataset, in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion between sequential rule-based and information-integration perspectives on cognitive development.
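The information-integration idea can be made concrete with the torque computation underlying a weighted-sum response rule for the balance scale; the free cue weights below are illustrative, not estimates from the paper's Weighted Sum Model.

```python
# Sketch of a weighted-sum (information-integration) rule for the balance
# scale task: each arm's evidence is weight x distance, scaled by free cue
# weights, and the predicted response follows the larger sum. The weight
# values here are illustrative defaults, not fitted parameters.

def weighted_sum_response(left, right, w_weight=1.0, w_distance=1.0):
    """left/right are (number_of_weights, peg_distance) tuples."""
    s_left = (w_weight * left[0]) * (w_distance * left[1])
    s_right = (w_weight * right[0]) * (w_distance * right[1])
    if s_left > s_right:
        return "left"
    if s_right > s_left:
        return "right"
    return "balance"

# Conflict item: more weights on the left, greater distance on the right
print(weighted_sum_response((3, 2), (2, 4)))  # torque 6 vs 8 -> "right"
```

A rule-based account, by contrast, would model discrete solution rules (e.g. "attend to weight only"), which is what the latent classes of the RB LCM capture.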

  20. Why Don’t More Farmers Go Organic? Using A Stakeholder-Informed Exploratory Agent-Based Model to Represent the Dynamics of Farming Practices in the Philippines

    Directory of Open Access Journals (Sweden)

    Laura Schmitt Olabisi

    2015-10-01

    In spite of growing interest in organic agriculture, there has been relatively little research on why farmers might choose to adopt organic methods, particularly in the developing world. To address this shortcoming, we developed an exploratory agent-based model depicting Philippine smallholder farmer decisions to implement organic techniques in rice paddy systems. Our modeling exercise was novel in its combination of three characteristics: first, agent rules were based on focus group data collected in the system of study. Second, a social network structure was built into the model. Third, we utilized variance-based sensitivity analysis to quantify model outcome variability, identify influential drivers, and suggest ways in which further modeling efforts could be focused and simplified. The model results indicated an upper limit on the number of farmers adopting organic methods. The speed of information spread through the social network, crop yields, and the size of a farmer's plot were highly influential in determining agents' adoption rates. The results of this stylized model indicate that rates of organic farming adoption are highly sensitive to the yield drop after switchover to organic techniques, and to the speed of information spread through existing social networks. Further research and model development should focus on these system characteristics.
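A minimal sketch of the adoption dynamic described above, assuming a ring-shaped social network, an invented peer-pressure threshold, and a yield-tolerance rule; none of these are the focus-group-derived rules of the actual model.

```python
import random

# Exploratory sketch of adoption spreading on a social network: an agent
# adopts organic methods once enough neighbours have adopted AND the
# expected post-switch yield ratio stays above a tolerance threshold.
# The ring network and all thresholds are illustrative assumptions.

def simulate(n=30, steps=20, yield_ratio=0.9, tolerance=0.8,
             peer_threshold=0.34, seed=1):
    random.seed(seed)
    neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring
    adopted = {random.randrange(n)}  # one initial organic farmer
    for _ in range(steps):
        for i in range(n):
            if i in adopted:
                continue
            peer_share = sum(j in adopted for j in neighbours[i]) / 2
            if peer_share >= peer_threshold and yield_ratio >= tolerance:
                adopted.add(i)
    return len(adopted)

# A large post-switch yield drop suppresses adoption entirely
print(simulate(yield_ratio=0.9), simulate(yield_ratio=0.5))
```

Consistent with the reported sensitivity results, lowering the post-switch yield ratio below the agents' tolerance shuts adoption down, while above it adoption saturates through the network.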

  1. A geographical information system-based web model of arbovirus transmission risk in the continental United States of America

    Directory of Open Access Journals (Sweden)

    Sarah K. Konrad

    2012-11-01

    A degree-day (DD) model of West Nile virus capable of forecasting real-time transmission risk in the continental United States of America up to one week in advance using a 50-km grid is available online at https://sites.google.com/site/arbovirusmap/. Daily averages of historical risk based on temperatures for 1994-2003 are available at 10-km resolution. Transmission risk maps can be downloaded from 2010 to the present. The model can be adapted to work with any arbovirus for which the temperature-related parameters are known, e.g. Rift Valley fever virus. To more effectively assess virus establishment and transmission, the model incorporates “compound risk” maps and forecasts, which include livestock density as a parameter.
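At the core of such a DD model is a simple accumulation of daily heat units above a developmental threshold. The 14.3 °C threshold and 109 DD requirement below are values reported in the West Nile virus literature, used here only to make the sketch concrete; the gridded forecasting machinery of the online model is not reproduced.

```python
# Sketch of degree-day (DD) accumulation for arbovirus risk: daily heat
# units above a developmental threshold are summed until the total needed
# for extrinsic incubation is reached. Threshold (14.3 degC) and required
# total (109 DD) are example values from the West Nile virus literature.

def degree_days(daily_mean_temps, threshold=14.3):
    return sum(max(0.0, t - threshold) for t in daily_mean_temps)

def incubation_complete(daily_mean_temps, required_dd=109.0, threshold=14.3):
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - threshold)
        if total >= required_dd:
            return day  # first day extrinsic incubation could complete
    return None  # too cool: transmission risk never materializes

temps = [22.0] * 30  # a warm month accumulating 7.7 DD per day
print(incubation_complete(temps))  # -> 15
```

Swapping in another virus's threshold and DD requirement is exactly the adaptation the abstract describes, e.g. for Rift Valley fever virus.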

  2. Using social network analysis and agent-based modelling to explore information flow using common operational pictures for maritime search and rescue operations.

    Science.gov (United States)

    Baber, C; Stanton, N A; Atkinson, J; McMaster, R; Houghton, R J

    2013-01-01

    The concept of common operational pictures (COPs) is explored through the application of social network analysis (SNA) and agent-based modelling to a generic search and rescue (SAR) scenario. Comparing the command structure that might arise from standard operating procedures with the sort of structure that might arise from examining information-in-common, using SNA, shows how one structure could be more amenable to 'command' with the other being more amenable to 'control' - which is potentially more suited to complex multi-agency operations. An agent-based model is developed to examine the impact of information sharing with different forms of COPs. It is shown that networks using common relevant operational pictures (which provide subsets of relevant information to groups of agents based on shared function) could result in better sharing of information and a more resilient structure than networks that use a COP. SNA and agent-based modelling are used to compare different forms of COPs for maritime SAR operations. Different forms of COP change the communications structures in the socio-technical systems in which they operate, which has implications for future design and development of a COP.
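The command-versus-control contrast can be glimpsed even in a toy network metric: linking agents by shared function adds edges beyond the hub-and-spoke command structure. The agents, links, and use of plain density below are illustrative simplifications of the study's social network analysis.

```python
# Toy SNA comparison: a hub-and-spoke "command" structure versus the same
# agents with extra function-based information-in-common links, compared by
# network density. Agents and edges are invented for illustration.

def density(n_agents, edges):
    possible = n_agents * (n_agents - 1) / 2  # undirected, no self-loops
    return len(edges) / possible

agents = ["coordinator", "lifeboat", "helicopter", "coastguard"]
command = {("coordinator", "lifeboat"), ("coordinator", "helicopter"),
           ("coordinator", "coastguard")}                   # hub-and-spoke
info_in_common = command | {("lifeboat", "coastguard"),
                            ("lifeboat", "helicopter")}     # shared-function links

print(density(4, command), density(4, info_in_common))
```

A denser, less centralised structure loses fewer communication paths when any one node drops out, which is one way to read the resilience claim for common relevant operational pictures.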

  3. A New Prediction Model for Transformer Winding Hotspot Temperature Fluctuation Based on Fuzzy Information Granulation and an Optimized Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Li Zhang

    2017-12-01

    Winding hotspot temperature is the key factor affecting the load capacity and service life of transformers. For the early detection of transformer winding hotspot temperature anomalies, a new prediction model for the hotspot temperature fluctuation range based on fuzzy information granulation (FIG) and the chaotic particle swarm optimized wavelet neural network (CPSO-WNN) is proposed in this paper. The raw data are firstly processed by FIG to extract useful information from each time window. The extracted information is then used to construct a wavelet neural network (WNN) prediction model. Furthermore, the structural parameters of WNN are optimized by chaotic particle swarm optimization (CPSO) before it is used to predict the fluctuation range of the hotspot temperature. By analyzing the experimental data with four different prediction models, we find that the proposed method is more effective and is of guiding significance for the operation and maintenance of transformers.
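The FIG step can be sketched as reducing each time window to a (Low, R, Up) granule describing the fluctuation range. Min/median/max are used below as a simplification of triangular fuzzy granules, and the WNN/CPSO stages are not reproduced.

```python
# Simplified sketch of the fuzzy information granulation (FIG) step: each
# window of raw hotspot-temperature readings is granulated into three
# values (Low, R, Up) that bound the fluctuation range and feed the
# downstream predictor. Min / median / max stand in for triangular granules.

def granulate(series, window):
    granules = []
    for start in range(0, len(series) - window + 1, window):
        chunk = sorted(series[start:start + window])
        low, up = chunk[0], chunk[-1]
        mid = chunk[len(chunk) // 2]  # median as the core value R
        granules.append((low, mid, up))
    return granules

readings = [71.2, 73.5, 72.1, 74.0, 75.3, 74.8, 76.1, 75.0]  # degC samples
print(granulate(readings, window=4))
```

Predicting the (Low, Up) pair per future window, rather than a point value, is what lets the model forecast a fluctuation range.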

  4. Information Systems Outsourcing Relationship Model

    Directory of Open Access Journals (Sweden)

    Richard Flemming

    2007-09-01

    Increasing attention is being paid to what determines the success of an information systems outsourcing arrangement. The current research aims to provide an improved understanding of the factors influencing the outcome of an information systems outsourcing relationship and to provide a preliminary validation of an extended outsourcing relationship model through interviews with information systems outsourcing professionals in both the client and vendor of a major Australian outsourcing relationship. It also investigates whether the client and the vendor perceive the relationship differently and, if so, how, and whether the two perspectives are interrelated.

  5. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences, and perform model selection based on an information theoretic approach. However, the resulting models can be expected to have
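The recommended workflow (random background pseudo-absences plus AIC-based model selection) can be sketched with synthetic one-predictor data; the plain gradient-ascent fit below stands in for a real GLM routine.

```python
import math, random

# Sketch of the recommended workflow: sample pseudo-absences at random
# from the background, fit a logistic regression, and compare candidate
# models with AIC = 2k - 2 ln L. Data are synthetic; gradient ascent
# stands in for a proper GLM fitting routine.

random.seed(0)
presences = [(random.gauss(1.5, 0.5), 1) for _ in range(100)]
background = [(random.gauss(0.0, 1.0), 0) for _ in range(100)]  # pseudo-absences
data = presences + background

def fit_logistic(data, steps=3000, lr=0.05):
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

def log_likelihood(data, b0, b1):
    ll = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

def aic(k, ll):
    return 2 * k - 2 * ll

b0, b1 = fit_logistic(data)
ll_full = log_likelihood(data, b0, b1)
b0_null, _ = fit_logistic([(0.0, y) for _, y in data])  # intercept-only model
ll_null = log_likelihood(data, b0_null, 0.0)
print(aic(2, ll_full) < aic(1, ll_null))  # predictor-based model is supported
```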

  6. Towards socio-hydroinformatics: optimal design and integration of citizen-based information in water-system models

    Science.gov (United States)

    Solomatine, Dimitri; Mazzoleni, Maurizio; Alfonso, Leonardo; Chacon Hurtado, Juan Carlos

    2017-04-01

    -hydroinformatics can be a potential application demonstrates that citizens not only play an active role in information capturing, evaluation and communication, but also help to improve models and thus increase flood resilience.

  7. Isotope-based quantum information

    CERN Document Server

    G Plekhanov, Vladimir

    2012-01-01

    The present book introduces the main ideas and techniques of the rapidly progressing field of quantum information and quantum computation using isotope-mixed materials. It starts with an introduction to isotope physics and then describes isotope-based quantum information and quantum computation. The ability to manipulate and control electron and/or nucleus spin in semiconductor devices provides a new route to expand the capabilities of inorganic semiconductor-based electronics and to design innovative devices with potential application in quantum computing. One of the major challenges towards these objectives is to develop semiconductor-based systems and architectures in which the spatial distribution of spins and their properties can be controlled. For instance, to eliminate electron spin decoherence resulting from hyperfine interaction due to nuclear spin background, isotopically controlled devices are needed (i.e., nuclear spin-depleted). In other emerging concepts, the control of the spatial...

  8. Modeling Human Information Acquisition Strategies

    NARCIS (Netherlands)

    Heuvelink, Annerieke; Klein, Michel C. A.; van Lambalgen, Rianne; Taatgen, Niels A.; Rijn, Hedderik van

    2009-01-01

    The focus of this paper is the development of a computational model for intelligent agents that decides on whether to acquire required information by retrieving it from memory or by interacting with the world. First, we present a task for which such decisions have to be made. Next, we discuss an

  9. Elaboration of a velocity model of the Bogota basin (Colombia) based on microtremors arrays measurements, gravity data, and geological information

    Science.gov (United States)

    Pulido Hernandez, N. E.; Senna, S.; Garcia, H. Mr; Montejo, S.; Reyes, J. C.

    2017-12-01

    Bogotá, a megacity with almost 8 million inhabitants, is prone to a significant earthquake hazard due to nearby active faults as well as subduction megathrust earthquakes. The city has been severely affected by many historical earthquakes in the last 500 years, reaching MM intensities of 8 or more in Bogotá. The city is also located at a large lacustrine basin composed of extremely soft soils which may strongly amplify the ground shaking from earthquakes. The basin extends approximately 40 km from North to South, is bounded by the Andes range to the East and South, and sharply deepens towards the West of Bogotá. The city has been the subject of multiple microzonation studies which have contributed to a good knowledge of the geotechnical zonation of the city and the tectonic setting of the region. To improve our knowledge of the seismic risk of the city, among other topics, we started a 5-year project sponsored by SATREPS (a joint program of JICA and JST), entitled "Application of state of the art technologies to strengthen research and response to seismic, volcanic and tsunami events and enhance risk management in Colombia (2015-2019)". In this paper we show our results for the elaboration of a velocity model of the city. To construct a velocity model of the basin we conducted multi-sized microtremor array measurements (radius from 60 cm up to 1000 m) at 41 sites within the city. We calculated dispersion curves and inferred velocity profiles at all the sites. We combine these results with gravity measurements as well as geological information to obtain the initial velocity model of the basin. Acknowledgments: This research is funded by SATREPS (a joint program of JICA and JST).

  10. Assimilating Merged Remote Sensing and Ground based Snowpack Information for Runoff Simulation and Forecasting using Hydrological Models

    Science.gov (United States)

    Infante Corona, J. A.; Lakhankar, T.; Khanbilvardi, R.; Pradhanang, S. M.

    2013-12-01

    Stream flow estimation and flood prediction influenced by snowmelt processes have been studied for the past couple of decades because of their destructive potential and the associated economic losses and casualties. Snow cover that was once quite stable through its season is now variable on shorter time-scales (daily and hourly), and rapid snowmelt can contribute to or cause floods. Therefore, good estimates of snowpack properties on the ground are necessary for accurate prediction of these destructive events. The snow thermal model (SNTHERM) is a 1-dimensional model that analyzes the snowpack properties given the climatological conditions of a particular area. Gridded data will be produced from both in-situ meteorological observations and remote sensing data using interpolation methods; thus, snow water equivalent (SWE) and snowmelt estimates can be obtained. The soil and water assessment tool (SWAT) is a hydrological model capable of predicting runoff quantity and quality of a watershed given its main physical and hydrological properties. The results from SNTHERM will be used as an input for SWAT in order to simulate runoff under snowmelt conditions. This project attempts to improve river discharge estimation by considering both excess rainfall runoff and the snowmelt process, and a better estimate of snowpack properties and their evolution is expected. A coupled use of SNTHERM and SWAT based on in-situ meteorological and remotely sensed data will improve the temporal and spatial resolution of the snowpack characterization and river discharge estimates, and thus flood prediction.

  11. Geographical information system based model of land suitability for good yield of rice in prachuap khiri khan province, thailand

    International Nuclear Information System (INIS)

    Hussain, W.; Sohaib, O.

    2012-01-01

    Correct assessment of land is a major issue in the agricultural sector: exploiting the full capability of land can raise rice cultivation and production. Geographical Information Systems (GIS) provide broad techniques for suitable land classification. This study is a GIS-based land suitability analysis for rice farming in Prachuap Khiri Khan Province, Thailand, where the main livelihood of people is rice farming. The analysis was conducted considering the relationship of rice production with various data layers of elevation, slope, soil pH, rainfall, fertilizer use and land use. ArcView GIS 3.2 software was used to process each layer, and ranking techniques were used to weight each coefficient based on the correlation between rice production and these variables. The analysis showed a positive correlation with these variables in varying degrees, depending on the magnitude and quality of these factors. By combining the GIS data layers through weighted linear combination, suitability classes were derived for rice cultivation. The integrated suitability assessment map and current land use were compared to find suitable land in Prachuap Khiri Khan Province. As a result of this comparison, land suitable for optimum rice production in Prachuap Khiri Khan Province was identified. (author)
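The weighted linear combination step can be sketched per grid cell as a weighted sum of rescaled layer ratings; the layer names, ratings, and weights below are illustrative placeholders rather than the study's calibrated values.

```python
# Sketch of weighted linear combination (WLC) for GIS suitability mapping:
# each data layer's value is rescaled to a common rating scale and
# multiplied by its weight; the weighted ratings are summed per cell.
# Layers, ratings, and weights are invented for illustration.

def wlc_suitability(ratings, weights):
    """ratings/weights: dicts keyed by layer name; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[layer] * ratings[layer] for layer in weights)

# One cell's layer ratings on a 1 (unsuitable) .. 4 (highly suitable) scale
cell = {"rainfall": 4, "soil_pH": 3, "slope": 4, "elevation": 2, "land_use": 3}
weights = {"rainfall": 0.30, "soil_pH": 0.25, "slope": 0.20,
           "elevation": 0.10, "land_use": 0.15}
print(round(wlc_suitability(cell, weights), 2))  # -> 3.4
```

Applying the same computation to every raster cell and thresholding the scores yields the suitability classes the abstract describes.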

  12. Web information retrieval based on ontology

    Science.gov (United States)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on a keyword indexing system and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest, so a lot of irrelevant information is returned and users are burdened with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
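The basic benefit of ontology-backed retrieval over plain keyword matching can be sketched with query expansion over a hand-made concept hierarchy; the mini-ontology and documents below are hypothetical, not a real Semantic Web resource.

```python
# Toy sketch of ontology-backed retrieval: a query term is expanded with
# its sub-concepts from a small hand-made hierarchy, so documents that
# mention only sub-concepts still match. Ontology and documents are
# invented for illustration.

ontology = {  # concept -> direct sub-concepts
    "vehicle": ["car", "bicycle"],
    "car": ["sedan", "suv"],
}

def expand(term):
    terms = {term}
    for sub in ontology.get(term, []):
        terms |= expand(sub)
    return terms

docs = {1: "suv review and pricing", 2: "sedan maintenance tips",
        3: "sailing boat guide"}

def search(query):
    concepts = expand(query)
    return sorted(d for d, text in docs.items()
                  if any(c in text.split() for c in concepts))

print(search("car"))  # keyword match alone would return nothing
```

A semantic-similarity variant would score documents by concept distance in the hierarchy instead of exact membership, which is closer to what the paper discusses.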

  13. NASA's Carbon Cycle OSSE Initiative - Informing future space-based observing strategies through advanced modeling and data assimilation

    Science.gov (United States)

    Ott, L.; Sellers, P. J.; Schimel, D.; Moore, B., III; O'Dell, C.; Crowell, S.; Kawa, S. R.; Pawson, S.; Chatterjee, A.; Baker, D. F.; Schuh, A. E.

    2017-12-01

    Satellite observations of carbon dioxide (CO2) and methane (CH4) are critically needed to improve understanding of the contemporary carbon budget and carbon-climate feedbacks. Though current carbon observing satellites have provided valuable data in regions not covered by surface in situ measurements, limited sampling of key regions and small but spatially coherent biases have limited the ability to estimate fluxes at the time and space scales needed for improved process-level understanding and informed decision-making. Next generation satellites will improve coverage in data sparse regions, either through use of active remote sensing, a geostationary vantage point, or increased swath width, but all techniques have limitations. The relative strengths and weaknesses of these approaches and their synergism have not previously been examined. To address these needs, a significant subset of the US carbon modeling community has come together with support from NASA to conduct a series of coordinated observing system simulation experiments (OSSEs), with close collaboration in framing the experiments and in analyzing the results. Here, we report on the initial phase of this initiative, which focused on creating realistic, physically consistent synthetic CO2 and CH4 observational datasets for use in inversion and signal detection experiments. These datasets have been created using NASA's Goddard Earth Observing System Model (GEOS) to represent the current state of atmospheric carbon as well as best available estimates of expected flux changes. Scenarios represented include changes in urban emissions, release of permafrost soil carbon, changes in carbon uptake in tropical and mid-latitude forests, changes in the Southern Ocean sink, and changes in both anthropogenic and natural methane emissions. This GEOS carbon `nature run' was sampled by instrument simulators representing the most prominent observing strategies with a focus on consistently representing the impacts of

  14. An evidence synthesis of the international knowledge base for new care models to inform and mobilise knowledge for multispecialty community providers (MCPs).

    Science.gov (United States)

    Turner, Alison; Mulla, Abeda; Booth, Andrew; Aldridge, Shiona; Stevens, Sharon; Battye, Fraser; Spilsbury, Peter

    2016-10-01

    NHS England's Five Year Forward View (NHS England, Five Year Forward View, 2014) formally introduced a strategy for new models of care driven by simultaneous pressures to contain costs, improve care and deliver services closer to home through integrated models. This synthesis focuses on a multispecialty community provider (MCP) model. This new model of care seeks to overcome the limitations in current models of care, often based around single condition-focused pathways, in contrast to patient-focused delivery (Royal College of General Practitioners, The 2022 GP: compendium of evidence, 2012) which offers greater continuity of care in recognition of complex needs and multimorbidity. The synthesis, an innovative combination of best fit framework synthesis and realist synthesis, will develop a "blueprint" which articulates how and why MCP models work, to inform design of future iterations of the MCP model. A systematic search will be conducted to identify research and practice-derived evidence to achieve a balance that captures the historical legacy of MCP models but focuses on contemporary evidence. Sources will include bibliographic databases including MEDLINE, PreMEDLINE, CINAHL, Embase, HMIC and Cochrane Library; and grey literature sources. The Best Fit synthesis methodology will be combined with a synthesis following realist principles which are particularly suited to exploring what works, when, for whom and in what circumstances. The aim of this synthesis is to provide decision makers in health and social care with a practical evidence base relating to the multispecialty community provider (MCP) model of care. PROSPERO CRD42016039552 .

  15. Geographic information systems-based expert system modelling for shoreline sensitivity to oil spill disaster in Rivers State, Nigeria

    Directory of Open Access Journals (Sweden)

    Olanrewaju Lawal

    2017-07-01

    In the absence of adequate and appropriate actions, hazards often result in disaster. Oil spills across any environment are very hazardous; thus, oil spill contingency planning, supported by Environmental Sensitivity Index (ESI) mapping, is pertinent. However, a significant data gap exists across many low- and middle-income countries in the area of environmental monitoring. This study developed a geographic information system (GIS)-based expert system (ES) for shoreline sensitivity to oiling. It focused on the biophysical attributes of the shoreline, with Rivers State as a case study. Data on elevation, soil, relative wave exposure and satellite imagery were collated and used for the development of ES decision rules within GIS. Results show that about 70% of the shoreline is lined with swamp forest/mangroves/nypa palm, and 97% has silt and clay as the dominant sediment type. From the ES, six ranks were identified; 61% of the shoreline has a rank of 9 and 19% has a rank of 3 for shoreline sensitivity. A total of 568 km of the 728 km shoreline is highly sensitive (ranks 7–10). There is a clear indication that the study area is a complex mixture of environments sensitive to oil spill. A GIS-based ES with classification rules for shoreline sensitivity provides a rapid and flexible framework for automatic ranking of shoreline sensitivity to oiling. It is expected that this approach will kick-start comprehensive, openly available sensitivity index mapping to support disaster risk management around the oil-producing regions of the country.
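Expert-system classification of this kind reduces to if-then rules over shoreline attributes; the rule thresholds below are invented for illustration and are not the study's actual decision rules.

```python
# Illustrative sketch of expert-system decision rules assigning an
# ESI-style sensitivity rank from shoreline attributes. The specific
# rules and ranks below are hypothetical; the study derived its rules
# from elevation, soil, wave-exposure and land-cover data.

def shoreline_rank(vegetation, sediment, wave_exposure):
    if vegetation in ("mangrove", "swamp forest", "nypa palm"):
        return 9   # sheltered vegetated wetlands: highly sensitive
    if sediment == "silt_clay" and wave_exposure == "low":
        return 7   # sheltered tidal flats
    if sediment == "sand":
        return 3 if wave_exposure == "high" else 5
    return 1       # exposed rocky/armoured shore: least sensitive

segments = [("mangrove", "silt_clay", "low"),
            ("none", "sand", "high"),
            ("none", "silt_clay", "low")]
ranks = [shoreline_rank(*seg) for seg in segments]
print(ranks)  # -> [9, 3, 7]
```

Running such rules over every digitized shoreline segment in the GIS is what produces the automatic ranking the abstract describes.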

  16. Building Information Modeling Comprehensive Overview

    Directory of Open Access Journals (Sweden)

    Sergey Kalinichuk

    2015-07-01

    This article provides a comprehensive review of the recently accelerated development of information technology within the project market, including industrial, engineering, procurement and construction. The author's aim is to cover the last decades of growth of information and communication technology in the construction industry, in particular Building Information Modeling, and to show that the problem of choosing an effective project realization method has not only retained its urgency but has become one of the major conditions of intensive technology development. All of this has created a strong impulse toward shortening project duration and has led to the development of various schedule compression techniques, which have become a focus of modern construction.

  17. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    Science.gov (United States)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
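The core idea of decomposing the total energy into local atomic contributions can be sketched as follows; the two-layer network with random weights and the 8-dimensional descriptors are toy stand-ins for the trained model described above.

```python
import numpy as np

# Each atom's energy is predicted from a local-environment descriptor by a
# small feed-forward net; the total energy is the sum, so evaluation scales
# linearly with atom count and parallelizes trivially. Weights are random
# stand-ins, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0

def atomic_energy(descriptor):
    h = np.tanh(descriptor @ W1 + b1)   # hidden layer
    return float(h @ W2 + b2)           # scalar atomic contribution

def total_energy(descriptors):
    # One network evaluation per atom, summed: linear in system size.
    return sum(atomic_energy(d) for d in descriptors)

mol = rng.normal(size=(5, 8))           # 5 atoms, 8-dim local descriptors
e_total = total_energy(mol)
print(abs(e_total - sum(atomic_energy(d) for d in mol)) < 1e-12)  # True
```

The additivity checked at the end is what makes small-molecule reference data usable for predicting larger structures: every atom contributes through the same local function.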

  18. Conjunction of wavelet transform and SOM-mutual information data pre-processing approach for AI-based Multi-Station nitrate modeling of watersheds

    Science.gov (United States)

    Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika

    2017-05-01

Accurate nitrate load predictions can improve water quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were used for MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to consider seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI for proper choice of cluster agents to be imposed into the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only considers prediction of the outlet nitrate load but also covers predictions for the interior sub-basins. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions by up to 39% compared to the Markovian-based models. Overall, accurate selection of dominant inputs that considers the seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
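The MI-based input selection step can be illustrated with a discrete toy example; the candidate series and the simple plug-in MI estimator below are invented for illustration (the study's estimator may differ).

```python
import math
from collections import Counter

# Plug-in mutual information for discrete series: score each candidate input
# against the target and keep the highest-scoring ones as model inputs.

def mutual_information(x, y):
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

target = [0, 0, 1, 1, 0, 1, 1, 0]          # e.g. binned nitrate load
candidates = {
    "lag1_flow": [0, 0, 1, 1, 0, 1, 1, 0],  # perfectly informative
    "lag2_flow": [0, 1, 1, 0, 1, 1, 0, 0],
    "noise":     [1, 0, 1, 0, 1, 0, 1, 0],
}
scores = {k: mutual_information(v, target) for k, v in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # lag1_flow (identical to the target, so MI is maximal)
```

In practice the continuous streamflow/nitrate series would be binned (or a kernel estimator used) before computing MI, and the top-scoring lags become the FFNN/LSSVM inputs.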

  19. Simulation of Escherichia coli Dynamics in Biofilms and Submerged Colonies with an Individual-Based Model Including Metabolic Network Information.

    Science.gov (United States)

    Tack, Ignace L M M; Nimmegeers, Philippe; Akkermans, Simen; Hashem, Ihab; Van Impe, Jan F M

    2017-01-01

    Clustered microbial communities are omnipresent in the food industry, e.g., as colonies of microbial pathogens in/on food media or as biofilms on food processing surfaces. These clustered communities are often characterized by metabolic differentiation among their constituting cells as a result of heterogeneous environmental conditions in the cellular surroundings. This paper focuses on the role of metabolic differentiation due to oxygen gradients in the development of Escherichia coli cell communities, whereby low local oxygen concentrations lead to cellular secretion of weak acid products. For this reason, a metabolic model has been developed for the facultative anaerobe E. coli covering the range of aerobic, microaerobic, and anaerobic environmental conditions. This metabolic model is expressed as a multiparametric programming problem, in which the influence of low extracellular pH values and the presence of undissociated acid cell products in the environment has been taken into account. Furthermore, the developed metabolic model is incorporated in MICRODIMS, an in-house developed individual-based modeling framework to simulate microbial colony and biofilm dynamics. Two case studies have been elaborated using the MICRODIMS simulator: (i) biofilm growth on a substratum surface and (ii) submerged colony growth in a semi-solid mixed food product. In the first case study, the acidification of the biofilm environment and the emergence of typical biofilm morphologies have been observed, such as the mushroom-shaped structure of mature biofilms and the formation of cellular chains at the exterior surface of the biofilm. The simulations show that these morphological phenomena are respectively dependent on the initial affinity of pioneer cells for the substratum surface and the cell detachment process at the outer surface of the biofilm. In the second case study, a no-growth zone emerges in the colony center due to a local decline of the environmental pH. As a result
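A highly simplified, hypothetical individual-based sketch of the oxygen-driven differentiation described above: cells in a 1-D colony consume oxygen diffusing in from the edges, secrete weak acid when oxygen is scarce, and stop growing where pH falls too low. All constants are invented, and the real MICRODIMS framework is far more detailed.

```python
# Toy 1-D individual-based colony: oxygen enters at the edges, each cell
# consumes it, oxygen-starved cells secrete acid, and low pH halts growth.
# Thresholds and rates are invented for illustration.

def step(oxygen, ph, cells):
    for i, occupied in enumerate(cells):
        if occupied:
            oxygen[i] = max(0.0, oxygen[i] - 0.3)   # consumption
            if oxygen[i] < 0.2:
                ph[i] -= 0.2                        # weak-acid secretion
    # crude diffusion: edges replenished, interior averaged with neighbours
    oxygen[0] = oxygen[-1] = 1.0
    for i in range(1, len(oxygen) - 1):
        oxygen[i] = (oxygen[i - 1] + oxygen[i] + oxygen[i + 1]) / 3

N = 11
oxygen, ph = [1.0] * N, [7.0] * N
cells = [True] * N
for _ in range(30):
    step(oxygen, ph, cells)

growing = [o >= 0.2 and p >= 5.0 for o, p in zip(oxygen, ph)]
print(growing[0], growing[N // 2])  # edge cell grows; centre cell does not
```

Even this crude model reproduces the qualitative pattern in the abstract: a no-growth zone emerges in the colony centre because locally secreted acid depresses the environmental pH.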

  20. Simulation of Escherichia coli Dynamics in Biofilms and Submerged Colonies with an Individual-Based Model Including Metabolic Network Information

    Directory of Open Access Journals (Sweden)

    Ignace L. M. M. Tack

    2017-12-01

    Full Text Available Clustered microbial communities are omnipresent in the food industry, e.g., as colonies of microbial pathogens in/on food media or as biofilms on food processing surfaces. These clustered communities are often characterized by metabolic differentiation among their constituting cells as a result of heterogeneous environmental conditions in the cellular surroundings. This paper focuses on the role of metabolic differentiation due to oxygen gradients in the development of Escherichia coli cell communities, whereby low local oxygen concentrations lead to cellular secretion of weak acid products. For this reason, a metabolic model has been developed for the facultative anaerobe E. coli covering the range of aerobic, microaerobic, and anaerobic environmental conditions. This metabolic model is expressed as a multiparametric programming problem, in which the influence of low extracellular pH values and the presence of undissociated acid cell products in the environment has been taken into account. Furthermore, the developed metabolic model is incorporated in MICRODIMS, an in-house developed individual-based modeling framework to simulate microbial colony and biofilm dynamics. Two case studies have been elaborated using the MICRODIMS simulator: (i) biofilm growth on a substratum surface and (ii) submerged colony growth in a semi-solid mixed food product. In the first case study, the acidification of the biofilm environment and the emergence of typical biofilm morphologies have been observed, such as the mushroom-shaped structure of mature biofilms and the formation of cellular chains at the exterior surface of the biofilm. The simulations show that these morphological phenomena are respectively dependent on the initial affinity of pioneer cells for the substratum surface and the cell detachment process at the outer surface of the biofilm. In the second case study, a no-growth zone emerges in the colony center due to a local decline of the environmental pH.

  1. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also called model transfer) in near infrared (NIR) spectroscopy. NIR data for corn, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS and KS-PDS-KPLS are compared in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be used as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection procedure. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are correlated with the concentrations (y). It can also be used for simultaneous outlier elimination through validation of the calibration. The timing results make clear that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
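For comparison, the Kennard-Stone baseline mentioned above can be sketched as a greedy max-min distance selection; this shows KS only (SIMPLISMA-KPLS additionally uses concentration information), and the data are invented.

```python
import numpy as np

# Kennard-Stone sample selection: start with the two most distant samples in
# X space, then repeatedly add the sample whose nearest already-selected
# neighbour is farthest away, giving a design-like spread over the data.

def kennard_stone(X, k):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
    picked = [int(i) for i in np.unravel_index(d.argmax(), d.shape)]
    while len(picked) < k:
        rest = [i for i in range(len(X)) if i not in picked]
        picked.append(max(rest, key=lambda i: d[i, picked].min()))
    return picked

X = np.array([[0.0], [0.1], [5.0], [9.9], [10.0]])  # toy 1-D "spectra"
print(kennard_stone(X, 3))  # [0, 4, 2]
```

Note that KS looks only at X; the paper's point is that selecting on spectra alone can ignore analyte (y) information, which SIMPLISMA-KPLS brings into the selection.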

  2. An Agent-Based Model of Private Woodland Owner Management Behavior Using Social Interactions, Information Flow, and Peer-To-Peer Networks.

    Directory of Open Access Journals (Sweden)

    Emily Silver Huff

Full Text Available Privately owned woodlands are an important source of timber and ecosystem services in North America and worldwide. Impacts of management on these ecosystems and timber supply from these woodlands are difficult to estimate because complex behavioral theory informs the owner's management decisions. The decision-making environment consists of exogenous market factors, internal cognitive processes, and social interactions with fellow landowners, foresters, and other rural community members. This study seeks to understand how social interactions, information flow, and peer-to-peer networks influence timber harvesting behavior using an agent-based model. This theoretical model includes forested polygons in various states of 'harvest readiness' and three types of agents: forest landowners, foresters, and peer leaders (individuals trained in conservation who use peer-to-peer networking). Agent rules, interactions, and characteristics were parameterized with values from existing literature and an empirical survey of forest landowner attitudes, intentions, and demographics. The model demonstrates that as trust in foresters and peer leaders increases, the percentage of the forest that is harvested sustainably increases. Furthermore, peer leaders can serve to increase landowner trust in foresters. Model output and equations will inform forest policy and extension/outreach efforts. The model also serves as an important testing ground for new theories of landowner decision making and behavior.
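A toy version of the model's central mechanism (peer-leader contact raises landowner trust, and trust drives sustainable harvesting) might look like the following; agent counts, contact rates, and the trust increment are invented parameters, not those of the published model.

```python
import random

# Minimal agent-based sketch: each landowner holds a trust level; meeting a
# peer leader nudges trust upward; average trust proxies the fraction of
# forest harvested sustainably. All parameters are invented.

random.seed(42)

def simulate(n_owners=200, steps=50, peer_leader_rate=0.1, nudge=0.02):
    trust = [random.uniform(0.2, 0.6) for _ in range(n_owners)]
    for _ in range(steps):
        for i in range(n_owners):
            if random.random() < peer_leader_rate:   # met a peer leader
                trust[i] = min(1.0, trust[i] + nudge)
    return sum(trust) / n_owners                     # mean trust

low = simulate(peer_leader_rate=0.0)
high = simulate(peer_leader_rate=0.3)
print(high > low)  # True: more peer-leader contact raises average trust
```

The published model couples such trust dynamics to spatial forest polygons and harvest decisions; this sketch only isolates the trust-nudging loop.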

  3. Information Filtering Based on Users' Negative Opinions

    Science.gov (United States)

    Guo, Qiang; Li, Yang; Liu, Jian-Guo

    2013-05-01

The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)], which yields high diversity but low accuracy. The classical HC model predicts users' potential objects of interest based on the objects they find interesting, regardless of negative opinions. Using users' rating scores, we present an improved user-based HC (UHC) information filtering model that takes into account both positive and negative opinions. First, the objects rated by users are divided into positive and negative categories; then the predicted lists of interesting and disliked objects are generated by the UHC model. Finally, the recommendation lists are constructed by filtering the disliked objects out of the interesting lists. Implementing the new model with nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that considering negative opinions greatly enhances accuracy, measured by the average ranking score: from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, the negative opinions carry much more information than the positive ones; negative opinions are therefore very important for understanding users' online collective behaviors and improving the performance of HC models.
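The split-and-filter idea can be sketched with a trivial popularity score standing in for the heat-conduction model; the ratings, threshold, and scoring below are invented for illustration.

```python
from collections import Counter

# Split each user's rated items into liked/disliked by a rating threshold,
# rank candidates from other users' liked items (popularity stands in for
# the HC diffusion score), and drop anything the user already rated.

ratings = {  # user -> {item: score on a 1-5 scale}; toy data
    "u1": {"a": 5, "b": 2, "c": 4},
    "u2": {"a": 4, "c": 1, "d": 5},
    "u3": {"b": 5, "d": 4, "e": 2},
}
THRESHOLD = 3

def recommend(user):
    liked = {i for i, r in ratings[user].items() if r >= THRESHOLD}
    disliked = {i for i, r in ratings[user].items() if r < THRESHOLD}
    pool = Counter(i for u, rs in ratings.items() if u != user
                   for i, r in rs.items() if r >= THRESHOLD)
    return [i for i, _ in pool.most_common() if i not in liked | disliked]

print(recommend("u1"))  # ['d']
```

The UHC model replaces the popularity score with heat-conduction diffusion run separately over the positive and negative rating graphs, then filters the predicted dislike list out of the predicted interest list, as here.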

  4. How people learn about causal influence when there are many possible causes: A model based on informative transitions.

    Science.gov (United States)

    Derringer, Cory; Rottman, Benjamin Margolin

    2018-05-01

Four experiments tested how people learn cause-effect relations when there are many possible causes of an effect. When there are many cues, even if all the cues together strongly predict the effect, the bivariate relation between each individual cue and the effect can be weak, which can make it difficult to detect the influence of each cue. We hypothesized that when detecting the influence of a cue, in addition to learning from the states of the cues and effect (e.g., a cue is present and the effect is present), which is hypothesized by multiple existing theories of learning, participants would also learn from transitions - how the cues and effect change over time (e.g., a cue turns on and the effect turns on). We found that participants were better able to identify positive and negative cues in an environment in which only one cue changed from one trial to the next, compared to multiple cues changing (Experiments 1A, 1B). Within a single learning sequence, participants were also more likely to update their beliefs about causal strength when one cue changed at a time ('one-change transitions') than when multiple cues changed simultaneously (Experiment 2). Furthermore, learning was impaired when the trials were grouped by the state of the effect (Experiment 3) or when the trials were grouped by the state of a cue (Experiment 4), both of which reduce the number of one-change transitions. We developed a modification of the Rescorla-Wagner algorithm to model this 'Informative Transitions' learning process. Copyright © 2018 Elsevier Inc. All rights reserved.
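The base learning rule the authors modify, the standard Rescorla-Wagner update, can be sketched as follows; the learning rate and trial sequence are invented, and the transition-weighting modification itself is not reproduced here.

```python
# Rescorla-Wagner: the associative strength V of each present cue moves
# toward the outcome in proportion to the shared prediction error.

def rescorla_wagner(trials, n_cues, alpha=0.3):
    V = [0.0] * n_cues
    for cues, outcome in trials:            # cues: 0/1 presence per cue
        prediction = sum(V[i] for i in range(n_cues) if cues[i])
        error = outcome - prediction        # one error shared by all cues
        for i in range(n_cues):
            if cues[i]:
                V[i] += alpha * error       # only present cues are updated
    return V

# one strongly predictive cue (0) and one irrelevant cue (1)
trials = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0)] * 20
V = rescorla_wagner(trials, n_cues=2)
print(V[0] > V[1])  # True: cue 0 acquires more strength than cue 1
```

The paper's modification additionally weights updates by how informative the trial-to-trial transition is, so that one-change transitions drive larger belief revisions than many-change transitions.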

  5. A Conceptually Simple Modeling Approach for Jason-1 Sea State Bias Correction Based on 3 Parameters Exclusively Derived from Altimetric Information

    Directory of Open Access Journals (Sweden)

    Nelson Pires

    2016-07-01

Full Text Available A conceptually simple formulation is proposed for a new empirical sea state bias (SSB) model using information retrieved entirely from altimetric data. Nonparametric regression techniques are used, based on penalized smoothing splines adjusted to each predictor and then combined in a Generalized Additive Model. In addition to the significant wave height (SWH) and wind speed (U10), a mediator parameter given by the mean wave period derived from radar altimetry has proven to improve the model's ability to explain some of the SSB variability, especially in swell ocean regions with medium-to-high SWH and low U10. A collinear analysis of scaled sea level anomaly (SLA) variance differences shows conformity between the proposed model and established SSB models. The new formulation aims to be a fast, reliable and flexible SSB model, in line with well-settled SSB corrections, depending exclusively on altimetric information. The suggested method is computationally efficient and capable of generating a stable model from a small training dataset, a useful feature for forthcoming missions.
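The additive combination of per-predictor smooths can be illustrated with a toy backfitting loop, using cubic polynomials as stand-ins for the paper's penalized smoothing splines; the synthetic SSB relationship is invented.

```python
import numpy as np

# Additive model SSB ~ f1(SWH) + f2(U10) + f3(Tm), fitted by backfitting:
# each univariate smooth is refitted in turn on the partial residuals.
# Cubic polynomials stand in for penalized splines; data are synthetic.

rng = np.random.default_rng(1)
n = 500
swh = rng.uniform(0, 8, n)                 # significant wave height (m)
u10 = rng.uniform(0, 15, n)                # wind speed (m/s)
tm = rng.uniform(2, 12, n)                 # mean wave period (s)
ssb = -0.03 * swh + 0.002 * u10**2 - 0.01 * tm + rng.normal(0, 0.005, n)

preds = [swh, u10, tm]
coefs = [np.zeros(4) for _ in preds]       # cubic coefficients per smooth
for _ in range(20):                        # backfitting sweeps
    for j, x in enumerate(preds):
        partial = ssb - sum(np.polyval(coefs[k], preds[k])
                            for k in range(3) if k != j)
        coefs[j] = np.polyfit(x, partial, 3)

fit = sum(np.polyval(coefs[k], preds[k]) for k in range(3))
print(np.corrcoef(fit, ssb)[0, 1] > 0.9)  # True
```

A production GAM would use penalized splines with smoothness chosen by cross-validation; the backfitting structure, one smooth per altimetric predictor, is the same.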

  6. Topic Models in Information Retrieval

    Science.gov (United States)

    2007-08-01


  7. Model-based scenario planning to inform climate change adaptation in the Northern Great Plains—Final report

    Science.gov (United States)

    Symstad, Amy J.; Miller, Brian W.; Friedman, Jonathan M.; Fisichelli, Nicholas A.; Ray, Andrea J.; Rowland, Erika; Schuurman, Gregor W.

    2017-12-18

Public Summary: We worked with managers in two focal areas to plan for the uncertain future by integrating quantitative climate change scenarios and simulation modeling into scenario planning exercises. In our central North Dakota focal area, centered on Knife River Indian Villages National Historic Site, managers are concerned about how changes in flood severity and growing conditions for native and invasive plants may affect archaeological resources and cultural landscapes associated with the Knife and Missouri Rivers. Climate projections and hydrological modeling based on those projections indicate plausible changes in spring and summer soil moisture ranging from a 7 percent decrease to a 13 percent increase and maximum winter snowpack (important for spring flooding) changes ranging from a 13 percent decrease to a 47 percent increase. Facilitated discussions among managers and scientists exploring the implications of these different climate scenarios for resource management revealed potential conflicts between protecting archeological sites and fostering riparian cottonwood forests. The discussions also indicated the need to prioritize archeological sites for excavation or protection and culturally important plant species for intensive management attention. In our southwestern South Dakota focal area, centered on Badlands National Park, managers are concerned about how changing climate will affect vegetation production, wildlife populations, and erosion of fossils, archeological artifacts, and roads. Climate scenarios explored by managers and scientists in this focal area ranged from a 13 percent decrease to a 33 percent increase in spring precipitation, which is critical to plant growth in the northern Great Plains region, and a slight decrease to a near doubling of intense rain events. Facilitated discussions in this focal area concluded that greater effort should be put into preparing for emergency protection, excavation, and preservation of exposed fossils or

  8. Comparison on information-seeking behavior of postgraduated students in Isfahan University of Medical Sciences and University of Isfahan in writing dissertation based on Kuhlthau model of information search process.

    Science.gov (United States)

    Abedi, Mahnaz; Ashrafi-Rizi, Hasan; Zare-Farashbandi, Firoozeh; Nouri, Rasoul; Hassanzadeh, Akbar

    2014-01-01

Information-seeking behavior has been one of the main focuses of researchers seeking to identify and solve the problems users face in information retrieval. The aim of this research is to compare the information-seeking behavior, while writing dissertations, of postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan, based on the Kuhlthau model of the information search process, in 2012. The research method is a survey, and the data collection tool is the Narmenji questionnaire. The statistical population was all postgraduate students at Isfahan University of Medical Sciences and the University of Isfahan. The sample size was 196 people, selected by stratified random sampling. The statistical analyses were descriptive (mean and frequency) and inferential (independent t test and Pearson's correlation), and the software used was SPSS 20. The findings showed that students at Isfahan University of Medical Sciences followed 20% of the ordered steps of this model and students at the University of Isfahan did not follow the model. In the feelings aspects of the first (Initiation) and sixth (Presentation) stages, and in actions (across all stages), significant differences were found between students from the two universities. There is a significant relationship between gender and both the fourth stage (Formulation) and the total score of feelings in the Kuhlthau model. There was also a significant inverse relationship between the feelings of the third stage (Exploration) and the age of the students. The results showed that, in writing dissertations, there were major differences between students of the two universities in following the Kuhlthau model, with significant differences in some of the stages of feelings and actions of the students' information-seeking behavior.

  9. Information in general medical practices: the information processing model.

    Science.gov (United States)

    Crowe, Sarah; Tully, Mary P; Cantrill, Judith A

    2010-04-01

    The need for effective communication and handling of secondary care information in general practices is paramount. To explore practice processes on receiving secondary care correspondence in a way that integrates the information needs and perceptions of practice staff both clinical and administrative. Qualitative study using semi-structured interviews with a wide range of practice staff (n = 36) in nine practices in the Northwest of England. Analysis was based on the framework approach using N-Vivo software and involved transcription, familiarization, coding, charting, mapping and interpretation. The 'information processing model' was developed to describe the six stages involved in practice processing of secondary care information. These included the amendment or updating of practice records whilst simultaneously or separately actioning secondary care recommendations, using either a 'one-step' or 'two-step' approach, respectively. Many factors were found to influence each stage and impact on the continuum of patient care. The primary purpose of processing secondary care information is to support patient care; this study raises the profile of information flow and usage within practices as an issue requiring further consideration.

  10. ANN multiscale model of anti-HIV drugs activity vs AIDS prevalence in the US at county level based on information indices of molecular graphs and social networks.

    Science.gov (United States)

    González-Díaz, Humberto; Herrera-Ibatá, Diana María; Duardo-Sánchez, Aliuska; Munteanu, Cristian R; Orbegozo-Medina, Ricardo Alfredo; Pazos, Alejandro

    2014-03-24

    This work is aimed at describing the workflow for a methodology that combines chemoinformatics and pharmacoepidemiology methods and at reporting the first predictive model developed with this methodology. The new model is able to predict complex networks of AIDS prevalence in the US counties, taking into consideration the social determinants and activity/structure of anti-HIV drugs in preclinical assays. We trained different Artificial Neural Networks (ANNs) using as input information indices of social networks and molecular graphs. We used a Shannon information index based on the Gini coefficient to quantify the effect of income inequality in the social network. We obtained the data on AIDS prevalence and the Gini coefficient from the AIDSVu database of Emory University. We also used the Balaban information indices to quantify changes in the chemical structure of anti-HIV drugs. We obtained the data on anti-HIV drug activity and structure (SMILE codes) from the ChEMBL database. Last, we used Box-Jenkins moving average operators to quantify information about the deviations of drugs with respect to data subsets of reference (targets, organisms, experimental parameters, protocols). The best model found was a Linear Neural Network (LNN) with values of Accuracy, Specificity, and Sensitivity above 0.76 and AUROC > 0.80 in training and external validation series. This model generates a complex network of AIDS prevalence in the US at county level with respect to the preclinical activity of anti-HIV drugs in preclinical assays. To train/validate the model and predict the complex network we needed to analyze 43,249 data points including values of AIDS prevalence in 2,310 counties in the US vs ChEMBL results for 21,582 unique drugs, 9 viral or human protein targets, 4,856 protocols, and 10 possible experimental measures.
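The Gini coefficient underlying the income-inequality input can be computed in a few lines; this is the standard formula, while the paper's Shannon-information wrapping of it is not reproduced here.

```python
# Gini coefficient of an income vector via the sorted-rank formula:
# G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted ascending
# and i = 1..n. G = 0 means perfect equality; G near 1, extreme inequality.

def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

print(round(gini([1, 1, 1, 1]), 2))   # 0.0  (perfect equality)
print(round(gini([0, 0, 0, 10]), 2))  # 0.75 (high inequality)
```

In the paper, county-level Gini values from AIDSVu feed a Shannon-type information index that enters the ANN alongside the Balaban indices of the drug molecular graphs.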

  11. Strategies for control of sudden oak death in Humboldt County-informed guidance based on a parameterized epidemiological model

    Science.gov (United States)

    João A. N. Filipe; Richard C. Cobb; David M. Rizzo; Ross K. Meentemeyer; Christopher A.. Gilligan

    2010-01-01

Landscape- to regional-scale models of plant epidemics are urgently needed to predict large-scale impacts of disease and assess practicable options for control. While landscape heterogeneity is recognized as a major driver of disease dynamics, epidemiological models are rarely applied to realistic landscape conditions due to computational and data limitations. Here we...

  12. A systematic review and qualitative analysis to inform the development of a new emergency department-based geriatric case management model.

    Science.gov (United States)

    Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce

    2011-06-01

    We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one

  13. Data Model Management for Space Information Systems

    Science.gov (United States)

Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool

  14. A descriptive model of information problem solving while using internet

    NARCIS (Netherlands)

    Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber

    2009-01-01

This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search for information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems.

  15. Early Engagement of Stakeholders with Individual-Based Modeling Can Inform Research for Improving Invasive Species Management: The Round Goby as a Case Study

    Directory of Open Access Journals (Sweden)

    Emma Samson

    2017-11-01

Full Text Available Individual-based models (IBMs) incorporating realistic representations of key range-front processes such as dispersal can be used as tools to investigate the dynamics of invasive species. Managers can apply insights from these models to take effective action to prevent further spread and prioritize measures preventing establishment of invasive species. We highlight here how early-stage IBMs (constructed under constraints of time and data availability) can also play an important role in defining key research priorities for providing key information on the biology of an invasive species in order that subsequent models can provide robust insight into potential management interventions. The round goby, Neogobius melanostomus, is currently spreading through the Baltic Sea, with major negative effects being reported in the wake of its invasion. Together with stakeholders, we parameterize an IBM to investigate the goby's potential spread pattern throughout the Gulf of Gdansk and the Baltic Sea. Model parameters were assigned by integrating information obtained through stakeholder interaction, from the scientific literature, or estimated using an inverse modeling approach when not available. IBMs can provide valuable direction to research on invasive species even when there is limited data and/or time available to parameterize/fit them to the degree to which we might aspire in an ideal world. Co-development of models with stakeholders can be used to recognize important invasion patterns, in addition to identifying and estimating unknown environmental parameters, thereby guiding the direction of future research. Well-parameterized and validated models are not required in the earlier stages of the modeling cycle, where their main utility is as a tool for thought.

  16. A Meteorological Information Mining-Based Wind Speed Model for Adequacy Assessment of Power Systems With Wind Power

    DEFF Research Database (Denmark)

    Guo, Yifei; Gao, Houlei; Wu, Qiuwei

    2017-01-01

    Accurate wind speed simulation is an essential prerequisite to analyze the power systems with wind power. A wind speed model considering meteorological conditions and seasonal variations is proposed in this paper. Firstly, using the path analysis method, the influence weights of meteorological...... systems with wind power. The assessment results of the modified IEEE-RTS79 and IEEE-RTS96 demonstrated the effectiveness and accuracy of the proposed model....

  17. Base Information Transport Infrastructure Wired (BITI Wired)

    Science.gov (United States)

    2016-03-01

    2016 Major Automated Information System Annual Report Base Information Transport Infrastructure Wired (BITI Wired) Defense Acquisition Management...Combat Information Transport System program was restructured into two pre-Major Automated Information System (pre-MAIS) components: Information...Major Automated Information System MAIS OE - MAIS Original Estimate MAR – MAIS Annual Report MDA - Milestone Decision Authority MDD - Materiel

  18. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...

  19. Development of an interactive exploratory web-based modelling platform for informed decision-making and knowledgeable responses to global change

    Science.gov (United States)

    Holman, I.; Harrison, P.; Cojocaru, G.

    2013-12-01

    Informed decision-making and knowledgeable responses to global climate change impacts on natural resources and ecosystem services require access to information resources that are credible, accurate, easy to understand, and appropriate. Too often stakeholders are limited to restricted scientific outputs produced by inaccessible models, generated from a limited number of scenario simulations chosen arbitrarily by researchers. This paper describes the outcomes of the CLIMSAVE project (www.climsave.eu), which has attempted to democratise climate change impacts, adaptation and vulnerability modelling through developing the public domain, interactive, exploratory web-based CLIMSAVE Integrated Assessment (IA) Platform. The Platform aims to enable a wide range of stakeholders to improve their understanding of the impacts, adaptation responses and vulnerability of natural resources and ecosystem services under uncertain futures across Europe. The CLIMSAVE IA Platform contains linked simulation models (of the urban, water, agriculture, forestry, biodiversity and other sectors), IPCC AR4 climate scenarios and CLIMSAVE socio-economic scenarios, enabling users to select their inputs (climate and socioeconomic), rapidly run the models across Europe using their input settings and view their selected Impact (before, or after, adaptation) and Vulnerability (Figure 1) indicators. The CLIMSAVE IA Platform has been designed to promote both cognitive accessibility - the ease of understanding - and practical accessibility - the ease of application. Based upon partner and CLIMSAVE international experts' experience, examination of other participatory model interfaces and potential user requirements, we describe the design concepts and functionality that were identified, incorporated into the prototype CLIMSAVE IA Platform and further refined based on stakeholder feedback. The CLIMSAVE IA Platform is designed to facilitate a two-way iterative process

  20. A Frequency-Based Assignment Model under Day-to-Day Information Evolution of Oversaturated Conditions on a Feeder Bus Service

    Directory of Open Access Journals (Sweden)

    Silin Zhang

    2017-02-01

    Full Text Available Day-to-day information is increasingly being implemented in transit networks worldwide. Feeder bus service (FBS) plays a vital role in a public transit network by providing feeder access to hubs and rails. As a feeder service, a space-time path for frequent passengers is decided by its dynamic strategy procedure, in which a day-to-day information self-learning mechanism is identified and analyzed from our survey data. We formulate a frequency-based assignment model considering day-to-day evolution under oversaturated conditions, which takes into account the residual capacity of the bus and the comfort of sitting or standing. The core of our proposed model is to allocate the passengers on each segment belonging to their own paths according to multi-utilities transformed from the time values and parametric demands, such as frequency, bus capacity, seat comfort, and stop layout. The assignment method, albeit general, allows us to formulate an equivalent optimization problem in terms of interaction between the FBS’ operation and frequent passengers’ rational behaviors. Finally, a real application case is used to test the ability of the modeling framework to capture the theoretical consequences and serve passengers’ dynamic externalities.
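
    At its simplest, a multi-utility allocation of this kind can be approximated by a multinomial-logit split of demand over candidate paths (a generic sketch, not the paper's day-to-day formulation; the path names and utilities below are invented and would in practice fold in frequency, residual capacity and seat comfort):

```python
import math

def logit_assignment(utilities, demand):
    """Split passenger demand across path options with a multinomial
    logit model; each utility is assumed to aggregate time value,
    frequency, bus capacity and seat comfort into a single number."""
    exps = {path: math.exp(u) for path, u in utilities.items()}
    total = sum(exps.values())
    return {path: demand * e / total for path, e in exps.items()}

# hypothetical two-path example: equal utilities split demand evenly
flows = logit_assignment({"path_a": 1.0, "path_b": 1.0}, 100.0)
```

    A day-to-day mechanism would then update the utilities from each day's experienced crowding and re-run the split until flows stabilize.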

  1. Modeling of groundwater potential of the sub-basin of Siriri river, Sergipe state, Brazil, based on Geographic Information System and Remote Sensing

    Directory of Open Access Journals (Sweden)

    Washington Franca Rocha

    2011-08-01

    Full Text Available The use of Geographic Information Systems (GIS) and Remote Sensing for modeling groundwater potential gives support to the analysis and decision-making processes of water resource management in watersheds. The objective of this work consisted of modeling the groundwater potential of the Siriri river sub-basin, Sergipe state, based on its natural environment (soil, land use, slope, drainage density, lineament density, rainfall and geology), using Remote Sensing and a Geographic Information System as an integration environment. The groundwater potential map was produced using digital image processing procedures of the ENVI 4.4 software and map algebra of ArcGIS 9.3®. The Analytic Hierarchy Process (AHP) was used to define the weights of the different criteria (maps). Scores and weights of the different classes were assigned to each map according to their influence on the overall objective of the work. The integration of these maps in a GIS environment and the application of the AHP technique allowed the development of a groundwater potential map with five classes: very low, low, moderate, high and very high. The average flow rates of wells confirm the potential of the Sapucari, Barreiras and Maruim aquifers, since they are the most exploited in this sub-basin, with average flows of 78,113 L/h, 19,332 L/h and 12,085 L/h, respectively.
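
    The AHP weighting step that combines the criterion maps can be sketched as follows (an illustrative reconstruction using the common row geometric-mean approximation; the pairwise judgements are invented, not taken from the study):

```python
import math

def ahp_weights(pairwise):
    """Derive criterion weights from an AHP pairwise comparison matrix
    using the row geometric-mean approximation of Saaty's method."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# hypothetical 3-criterion example (say, geology vs. slope vs. land use):
# geology judged 3x as important as slope, 5x as important as land use
matrix = [
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
]
weights = ahp_weights(matrix)
```

    The resulting weights sum to one and would multiply the reclassified raster layers in the map-algebra overlay.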

  2. Perspectives on why digital ecologies matter: combining population genetics and ecologically informed agent-based models with GIS for managing dipteran livestock pests.

    Science.gov (United States)

    Peck, Steven L

    2014-10-01

    It is becoming clear that handling the inherent complexity found in ecological systems is an essential task for finding ways to control insect pests of tropical livestock such as tsetse flies, and Old and New World screwworms. In particular, challenging multivalent management programs, such as Area Wide Integrated Pest Management (AW-IPM), face daunting problems of complexity at multiple spatial scales, ranging from landscape level processes to those of smaller scales such as the parasite loads of individual animals. Daunting temporal challenges also await resolution, such as matching management time frames to those found on ecological and even evolutionary temporal scales. How does one deal with representing processes with models that involve multiple spatial and temporal scales? Agent-based models (ABM), combined with geographic information systems (GIS), may allow for understanding, predicting and managing pest control efforts in livestock pests. This paper argues that by incorporating digital ecologies in our management efforts clearer and more informed decisions can be made. I also point out the power of these models in making better predictions in order to anticipate the range of outcomes possible or likely. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.

  3. Parsimonious Language Models for Information Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo

    We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
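
    The core estimation idea behind parsimonious language models can be sketched with the standard EM procedure, in which a document model is re-estimated against a fixed background model so that terms the background already explains well are concentrated out (a minimal sketch with invented toy counts; the mixing weight lam = 0.5 is an assumption, not the paper's setting):

```python
def parsimonious_lm(doc_tf, background, lam=0.5, iters=20):
    """EM estimation of a parsimonious document language model:
    probability mass of terms well explained by the background
    model is pushed toward zero, concentrating the document model
    on its distinctive terms."""
    # initialise with the maximum-likelihood document model
    total = sum(doc_tf.values())
    p = {t: tf / total for t, tf in doc_tf.items()}
    for _ in range(iters):
        # E-step: expected term counts under the document/background mixture
        e = {t: doc_tf[t] * (lam * p[t]) / (lam * p[t] + (1 - lam) * background[t])
             for t in doc_tf}
        # M-step: renormalise the expected counts
        z = sum(e.values())
        p = {t: e_t / z for t, e_t in e.items()}
    return p

# toy document: "the" is frequent but common in the background corpus
doc = {"information": 5, "retrieval": 4, "the": 20}
bg = {"information": 0.001, "retrieval": 0.0005, "the": 0.07}
p = parsimonious_lm(doc, bg)
```

    After EM, the stop-word "the" holds far less mass than its raw frequency would give it, while content terms gain mass.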

  4. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    Science.gov (United States)

    Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate the conceptual applicability and practical understandability by clinical staff and members of users teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of clinical context while providing elements for analysis. One of the solutions for filling this gap is to consider the process model itself in the role of a hub as a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed notably regarding to its ability to generate data dictionaries and to be used as a navigation tool through the medium of hospital-wide documentation.

  5. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  6. Knowledge-based information systems in practice

    CERN Document Server

    Jain, Lakhmi; Watada, Junzo; Howlett, Robert

    2015-01-01

    This book contains innovative research from leading researchers who presented their work at the 17th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2013, held in Kitakyushu, Japan, in September 2013. The conference drew a competitive field of 236 contributors, of whom 38 authors expanded their contributions and only 21 were published. A plethora of techniques and innovative applications are represented within this volume. The chapters are organized using four themes. These topics include: data mining, knowledge management, advanced information processes and system modelling applications. Each topic contains multiple contributions and many offer case studies or innovative examples. Anyone that wants to work with information repositories or process knowledge should consider reading one or more chapters focused on their technique of choice. They may also benefit from reading other chapters to assess if an alternative technique represents a more suitable app...

  7. Refreshing Information Literacy: Learning from Recent British Information Literacy Models

    Science.gov (United States)

    Martin, Justine

    2013-01-01

    Models play an important role in helping practitioners implement and promote information literacy. Over time models can lose relevance with the advances in technology, society, and learning theory. Practitioners and scholars often call for adaptations or transformations of these frameworks to articulate the learning needs in information literacy…

  8. Analysis of the quality of hospital information systems in Isfahan teaching hospitals based on the DeLone and McLean model.

    Science.gov (United States)

    Saghaeiannejad-Isfahani, Sakineh; Saeedbakhsh, Saeed; Jahanbakhsh, Maryam; Habibi, Mahboobeh

    2015-01-01

    Quality is one of the most important criteria for the success of an information system, referring to the desirable features of the processing system itself. The aim of this study was to analyze the system quality of hospital information systems (HIS) in teaching hospitals of Isfahan based on the DeLone and McLean model. This research was an applied, analytical-descriptive study performed in teaching hospitals of Isfahan in 2010. The research population consisted of the HIS's users, selected by a random sampling method (n = 228), and system designers and hospital information technology (IT) authorities (n = 52), selected using the census method. The data collection tool was two researcher-designed questionnaires. Questionnaires' reliability was estimated using Cronbach's alpha: 97.1% for the system designers and IT authorities' questionnaire and 92.3% for the system users' questionnaire. Findings showed that the mean system quality score differed significantly across the various HIS and across hospitals (P < 0.05). In general, the Kosar (new version) system and the Rahavard Rayaneh system had the highest and the lowest mean scores, respectively. The overall mean of the system quality criterion was 59.6% across the different HIS and 57.5% across the different hospitals. According to the results, it can be stated that, based on the applied model, the investigated systems were relatively desirable in terms of quality. Thus, in order to achieve an optimal condition, it is necessary to pay particular attention to the factors improving system quality, type of activity, type of specialty and hospital ownership type.
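
    The reliability figures quoted above come from Cronbach's alpha, which can be computed directly from item scores (the generic textbook formula, shown with invented toy data, not the study's questionnaire data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency; `items` is a list of
    columns, one list of respondent scores per questionnaire item."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum_item_vars / sample_var(totals))

# three perfectly consistent items yield alpha = 1
alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
# adding response noise lowers alpha below 1
alpha_noisy = cronbach_alpha([[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]])
```

    Values above roughly 0.9, like the 97.1% and 92.3% reported, indicate high internal consistency of the questionnaire items.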

  9. Function Model for Community Health Service Information

    Science.gov (United States)

    Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong

    In order to construct a function model of community health service (CHS) information for development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify its information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information, which includes 4 super-classes, 15 classes and 28 sub-classes of business functions, 43 business processes and 168 business activities, was then established. This model can facilitate information management system development and workflow refinement.

  10. Item Information in the Rasch Model

    NARCIS (Netherlands)

    Engelen, Ron J.H.; van der Linden, Willem J.; Oosterloo, Sebe J.

    1988-01-01

    Fisher's information measure for the item difficulty parameter in the Rasch model and its marginal and conditional formulations are investigated. It is shown that expected item information in the unconditional model equals information in the marginal model, provided the assumption of sampling

  11. Ontology-based Information Retrieval

    DEFF Research Database (Denmark)

    Styltsvig, Henrik Bulskov

    In this thesis, we will present methods for introducing ontologies in information retrieval. The main hypothesis is that the inclusion of conceptual knowledge such as ontologies in the information retrieval process can contribute to the solution of major problems currently found in information...... retrieval. This utilization of ontologies has a number of challenges. Our focus is on the use of similarity measures derived from the knowledge about relations between concepts in ontologies, the recognition of semantic information in texts and the mapping of this knowledge into the ontologies in use......, as well as how to fuse together the ideas of ontological similarity and ontological indexing into a realistic information retrieval scenario. To achieve the recognition of semantic knowledge in a text, shallow natural language processing is used during indexing that reveals knowledge to the level of noun...

  12. Fisher information framework for time series modeling

    Science.gov (United States)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis, solely in terms of the observed data. Prediction cases are presented employing time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
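
    The Takens embedding step underlying the working hypothesis can be sketched as follows (only the delay-coordinate reconstruction, not the Fisher-information inference itself; the dim and tau values are illustrative choices):

```python
def delay_embed(series, dim=3, tau=1):
    """Takens delay-coordinate embedding: map a scalar time series into
    dim-dimensional state vectors (s_t, s_{t-tau}, ..., s_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - j * tau] for j in range(dim))
            for t in range(start, len(series))]

# embedding an illustrative ramp signal with dim=3, tau=2
vectors = delay_embed(list(range(10)), dim=3, tau=2)
```

    A prediction scheme then fits a map from each state vector to the next observation, e.g. by nearest neighbours or, as in this paper, via a Fisher-information-based ansatz.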

  13. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event......-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...... of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches....

  14. Model Information Exchange System (MIXS).

    Science.gov (United States)

    2013-08-01

    Many travel demand forecast models operate at state, regional, and local levels. While they share the same physical network in overlapping geographic areas, they use different and uncoordinated modeling networks. This creates difficulties for models ...

  15. Biological information systems: Evolution as cognition-based information management.

    Science.gov (United States)

    Miller, William B

    2018-05-01

    An alternative biological synthesis is presented that conceptualizes evolutionary biology as an epiphenomenon of integrated self-referential information management. Since all biological information has inherent ambiguity, the systematic assessment of information is required by living organisms to maintain self-identity and homeostatic equipoise in confrontation with environmental challenges. Through their self-referential attachment to information space, cells are the cornerstone of biological action. That individualized assessment of information space permits self-referential, self-organizing niche construction. That deployment of information and its subsequent selection enacted the dominant stable unicellular informational architectures whose biological expressions are the prokaryotic, archaeal, and eukaryotic unicellular forms. Multicellularity represents the collective appraisal of equivocal environmental information through a shared information space. This concerted action can be viewed as systematized information management to improve information quality for the maintenance of preferred homeostatic boundaries among the varied participants. When reiterated in successive scales, this same collaborative exchange of information yields macroscopic organisms as obligatory multicellular holobionts. Cognition-Based Evolution (CBE) upholds that assessment of information precedes biological action, and the deployment of information through integrative self-referential niche construction and natural cellular engineering antecedes selection. Therefore, evolutionary biology can be framed as a complex reciprocating interactome that consists of the assessment, communication, deployment and management of information by self-referential organisms at multiple scales in continuous confrontation with environmental stresses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological review of health technology assessments

    Directory of Open Access Journals (Sweden)

    Bethany Shinkins

    2017-04-01

    Full Text Available Abstract Background Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: (1) what evidence aside from test accuracy was searched for and synthesised, (2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, (3) how/whether threshold effects were explored, (4) how the potential dependency between multiple tests in a pathway was accounted for, and (5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings

  17. Using Interaction Scenarios to Model Information Systems

    DEFF Research Database (Denmark)

    Bækgaard, Lars; Bøgh Andersen, Peter

    The purpose of this paper is to define and discuss a set of interaction primitives that can be used to model the dynamics of socio-technical activity systems, including information systems, in a way that emphasizes structural aspects of the interaction that occurs in such systems. The primitives...... a number of case studies that indicate that interaction primitives can be useful modeling tools for supplementing conventional flow-oriented modeling of business processes....... are based on a unifying, conceptual definition of the disparate interaction types - a robust model of the types. The primitives can be combined and may thus represent mediated interaction. We present a set of visualizations that can be used to define multiple related interactions and we present and discuss...

  18. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related...... to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using...... a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well (ρ > 0.95) in predicting...
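
    For jointly Gaussian envelope variables, mutual information is a simple function of the envelope correlation, which suggests the following sketch (a simplified stand-in for the paper's critical-band, mmse-based computation; the envelope values are invented, and |rho| < 1 is assumed):

```python
import math

def gaussian_mi(rho):
    """Mutual information (in nats) between two jointly Gaussian
    variables with correlation coefficient rho (requires |rho| < 1)."""
    return -0.5 * math.log(1.0 - rho * rho)

def envelope_mi(clean, degraded):
    """Estimate the correlation between clean and degraded amplitude
    envelopes, then map it to mutual information."""
    n = len(clean)
    mc, md = sum(clean) / n, sum(degraded) / n
    cov = sum((c - mc) * (d - md) for c, d in zip(clean, degraded)) / n
    vc = sum((c - mc) ** 2 for c in clean) / n
    vd = sum((d - md) ** 2 for d in degraded) / n
    return gaussian_mi(cov / math.sqrt(vc * vd))

clean = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
mi_close = envelope_mi(clean, [1.2, 1.9, 3.1, 4.2, 4.8, 6.1])  # mildly degraded
mi_far = envelope_mi(clean, [3.0, 1.0, 4.0, 1.0, 5.0, 2.0])    # heavily degraded
```

    Higher mutual information between clean and degraded envelopes corresponds, under the model's hypothesis, to higher predicted intelligibility.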

  19. Information as a Measure of Model Skill

    Science.gov (United States)

    Roulston, M. S.; Smith, L. A.

    2002-12-01

    Physicist Paul Davies has suggested that rather than the quest for laws that approximate ever more closely to "truth", science should be regarded as the quest for compressibility. The goodness of a model can be judged by the degree to which it allows us to compress data describing the real world. The "logarithmic scoring rule" is a method for evaluating probabilistic predictions of reality that turns this philosophical position into a practical means of model evaluation. This scoring rule measures the information deficit or "ignorance" of someone in possession of the prediction. A more applied viewpoint is that the goodness of a model is determined by its value to a user who must make decisions based upon its predictions. Any form of decision making under uncertainty can be reduced to a gambling scenario. Kelly showed that the value of a probabilistic prediction to a gambler pursuing the maximum return on their bets depends on their "ignorance", as determined from the logarithmic scoring rule, thus demonstrating a one-to-one correspondence between data compression and gambling returns. Thus information theory provides a way to think about model evaluation, that is both philosophically satisfying and practically oriented. P.C.W. Davies, in "Complexity, Entropy and the Physics of Information", Proceedings of the Santa Fe Institute, Addison-Wesley 1990 J. Kelly, Bell Sys. Tech. Journal, 35, 916-926, 1956.
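
    The logarithmic scoring rule described above is one line of code: the "ignorance" of a forecast is the negative log of the probability it assigned to the outcome that occurred (the forecasts below are illustrative; scores are in bits):

```python
import math

def ignorance(forecast, outcome):
    """Logarithmic score ('ignorance') in bits: the information deficit
    of a probabilistic forecast, given the outcome that occurred."""
    return -math.log2(forecast[outcome])

# a climatological 50/50 forecast costs exactly one bit when it rains
base = ignorance({"rain": 0.5, "dry": 0.5}, "rain")
# a sharper forecast that turns out correct costs less
sharp = ignorance({"rain": 0.8, "dry": 0.2}, "rain")
```

    Averaged over many forecast-outcome pairs, the ignorance gap between model and climatology measures the compression achieved and, by Kelly's argument, the attainable gambling return.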

  20. Directory of Energy Information Administration Models 1994

    International Nuclear Information System (INIS)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994

  1. Directory of Energy Information Administration Models 1994

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    This directory revises and updates the 1993 directory and includes 15 models of the National Energy Modeling System (NEMS). Three other new models in use by the Energy Information Administration (EIA) have also been included: the Motor Gasoline Market Model (MGMM), Distillate Market Model (DMM), and the Propane Market Model (PPMM). This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses and requirements. Sources for additional information are identified. Included in this directory are 37 EIA models active as of February 1, 1994.

  2. Investigating the Challenges of Adopting and Implementing Information and Communication Technologies (ICT) by Isfahan High School Teachers: Based On the Model of Barriers in ICT Usage

    Directory of Open Access Journals (Sweden)

    Bibi Eshrat Zaman

    2012-02-01

    Full Text Available The relevance and usefulness of information and communication technologies (ICT) have been investigated in many studies. ICT users, especially teachers, face many challenges that act as inhibiting factors for using ICT in their jobs. The main purpose of this paper was to investigate these challenges from the viewpoint of high school teachers in Isfahan city, based on a model of barriers to ICT use. In the model, barriers are divided into four groups: organizational, managerial, educational, and financial-instrumental. The research was based on a qualitative method, with a descriptive-analysis approach used for analyzing data. For gathering data, a researcher-made questionnaire including 5 open-ended questions was used. The survey population included teachers of all high schools in Isfahan city in the 1387-88 academic year; 110 teachers were selected using a cluster random sampling method. For data analysis, content analysis was used to calculate means and frequencies. Findings indicated that most teachers identified the lack of proper in-service training programs as the most important obstacle to using ICT in teaching. Lack of suitable managerial strategies for implementing ICT in the curriculum, lack of organizational support, and lack of financial resources and equipment in schools, respectively, were other barriers to using ICT in Iranian high schools.

  3. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining the assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. From this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  4. Rule-based Information Integration

    NARCIS (Netherlands)

    de Keijzer, Ander; van Keulen, Maurice

    2005-01-01

    In this report, we show the process of information integration. We specifically discuss the language used for integration. We show that integration consists of two phases, the schema mapping phase and the data integration phase. We formally define transformation rules, conversion, evolution and

  5. Toward risk assessment 2.0: Safety supervisory control and model-based hazard monitoring for risk-informed safety interventions

    International Nuclear Information System (INIS)

    Favarò, Francesca M.; Saleh, Joseph H.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the pervading probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools here developed are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation
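    The danger-index and time-to-accident notions in this abstract can be sketched in a few lines. This is an illustrative toy under assumed definitions (a simple normalized proximity measure and linear extrapolation); the paper's actual state-space formulation is richer, and the variable names and numbers are hypothetical:

    ```python
    def danger_index(x, x_crit):
        """Normalized proximity of a state variable to its critical value:
        0 is nominal, 1 means the critical threshold is reached."""
        return min(max(x / x_crit, 0.0), 1.0)

    def time_to_threshold(h, h_dot, h_crit=1.0):
        """Linearly extrapolate when the hazard level h reaches h_crit at the
        current rate of change h_dot; None if the hazard is not rising."""
        if h >= h_crit:
            return 0.0
        if h_dot <= 0:
            return None
        return (h_crit - h) / h_dot

    # Rejected-takeoff flavour: ground speed approaching the decision speed V1
    v, v1 = 120.0, 150.0           # current speed and decision speed (knots)
    accel = 5.0                    # acceleration (knots per second)
    h = danger_index(v, v1)        # danger index: 0.8
    eta = time_to_threshold(h, accel / v1)   # advance notice, in seconds
    ```

    Monitoring the pair (h, eta) on-line is the kind of diagnostic/prognostic signal the abstract proposes for go/no-go support.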

  6. Workflow management based on information management

    NARCIS (Netherlands)

    Lutters, Diederick; Mentink, R.J.; van Houten, Frederikus J.A.M.; Kals, H.J.J.

    2001-01-01

    In manufacturing processes, the role of the underlying information is of the utmost importance. Based on three different types of integration (function, information and control), as well as the theory of information management and the accompanying information structures, the entire product creation

  7. Granite metallogenic specialization study based on RS information model-A case of hydrothermal uranium and tungsten deposits in Nanling region

    International Nuclear Information System (INIS)

    Huang Hongye; Qin Qiming

    2009-01-01

    According to the granite hydrothermal metallogenic principle, a metallogenic specialization information model for uranium-producing and tungsten-producing granites in the Nanling region is built up, and a group factor system of granite metallogenic specialization is initially proposed using the RS information model. On this basis, the geographical index and coefficients of the information model of granite metallogenic specialization are analyzed, and a metallogenic specialization discrimination criterion is established. Forecasts for non-discriminated massifs basically accord with geological fact, and the results are used in geological metallogenic research, which indicates that the metallogenic specialization information model is objective and operable, realizes quantitative appraisal of metallogenic specialization, and provides a scientific basis for further discriminating ore-forming massifs. (authors)

  8. COMPLEMENTARITY OF HISTORIC BUILDING INFORMATION MODELLING AND GEOGRAPHIC INFORMATION SYSTEMS

    Directory of Open Access Journals (Sweden)

    X. Yang

    2016-06-01

    Full Text Available In this paper, we discuss the potential of integrating semantically rich models from Building Information Modelling (BIM) and Geographical Information Systems (GIS) to build detailed 3D historic models. BIM contributes to the creation of a digital representation having all physical and functional building characteristics in several dimensions, e.g. XYZ (3D), time, and the non-architectural information necessary for the construction and management of buildings. GIS has potential in handling and managing spatial data, especially in exploring spatial relationships, and is widely used in urban modelling. However, when considering heritage modelling, the specificity of irregular historical components makes it problematic to create an enriched model of the complex architectural elements obtained from point clouds. Therefore, some open issues limiting historic building 3D modelling are discussed in this paper: how to deal with the complex elements composing historic buildings in a BIM and GIS environment, how to build the enriched historic model, and why construct different levels of detail? By addressing these problems, conceptualization, documentation and analysis of enriched Historic Building Information Modelling are developed and compared to traditional 3D models aimed primarily at visualization.

  9. Juxta-Vascular Pulmonary Nodule Segmentation in PET-CT Imaging Based on an LBF Active Contour Model with Information Entropy and Joint Vector

    Directory of Open Access Journals (Sweden)

    Rui Hao

    2018-01-01

    Full Text Available The accurate segmentation of pulmonary nodules is an important preprocessing step in the computer-aided diagnosis of lung cancer. However, existing segmentation methods may suffer from edge leakage and cannot segment juxta-vascular pulmonary nodules accurately. To address this problem, a novel automatic segmentation method based on an LBF active contour model with information entropy and a joint vector is proposed in this paper. Our method extracts the area of interest of pulmonary nodules by the standard uptake value (SUV) in Positron Emission Tomography (PET) images, and automatic threshold iteration is used to construct a rough initial contour. The SUV information entropy and the gray-value joint vector of Positron Emission Tomography-Computed Tomography (PET-CT) images are calculated to drive the evolution of the contour curve. At the edge of the pulmonary nodule, evolution stops and accurate segmentation results can be obtained. Experimental results show that our method achieves a 92.35% average dice similarity coefficient, 2.19 mm Hausdorff distance, and a 3.33% false positive rate relative to manual segmentation results. Compared with existing methods, our proposed method segments juxta-vascular pulmonary nodules in PET-CT images more accurately and efficiently.
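    The SUV information entropy that drives the contour evolution can be illustrated with a minimal Shannon-entropy computation over quantized SUV values. This is a sketch only: the bin count and SUV range are assumptions, and the paper combines this term with a gray-value joint vector inside the LBF model:

    ```python
    import math
    from collections import Counter

    def suv_entropy(suv_values, bins=8, suv_max=10.0):
        """Shannon entropy (bits) of a quantized SUV distribution. The bin
        count and SUV range here are illustrative assumptions."""
        quantized = [min(int(v / suv_max * bins), bins - 1) for v in suv_values]
        counts = Counter(quantized)
        n = len(suv_values)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A homogeneous region carries little information; a mixed one carries more
    background = [0.1, 0.2, 0.1, 0.2]     # low, uniform uptake -> entropy 0
    nodule_edge = [0.1, 0.1, 6.0, 6.0]    # background vs hot-spot voxels
    ```

    High local entropy flags the transition region where the contour should slow and settle on the nodule boundary.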

  10. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996) global model of information behaviour and proposes major modifications on the basis of research into the information behaviour of managers conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content and in its graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of computerized information services or of services external to the organization, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered a general model applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental), activating mechanisms, the cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it); 3. stress on the fact that the activating mechanisms can also occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  11. The Information Warfare Life Cycle Model

    Directory of Open Access Journals (Sweden)

    Brett van Niekerk

    2011-11-01

    Full Text Available Information warfare (IW) is a dynamic and developing concept which comprises a number of disciplines. This paper aims to develop a life cycle model for information warfare that is applicable to all of the constituent disciplines. The model aims to be scalable and applicable to civilian and military incidents where information warfare tactics are employed. Existing information warfare models are discussed, and a new model is developed from the common aspects of these existing models. The proposed model is then applied to a variety of incidents to test its applicability and scalability. The proposed model is shown to be applicable to multiple disciplines of information warfare and to be scalable, thus meeting its objectives.

  13. Spinal Cord Injury Model System Information Network

    Science.gov (United States)

    The University of Alabama at Birmingham Spinal Cord Injury Model System (UAB-SCIMS) maintains this Information Network as a resource to promote knowledge in the ...

  14. Modelling the Replication Management in Information Systems

    Directory of Open Access Journals (Sweden)

    Cezar TOADER

    2017-01-01

    Full Text Available In the modern economy, the benefits of Web services are significant because they facilitate the automation of activities in Internet-distributed businesses, as well as cooperation between organizations through the interconnection of processes running in their computer systems. This paper presents the development stages of a model for a reliable information system. It describes communication between processes within a distributed system based on message exchange, and presents the problem of distributed agreement among processes. A list of objectives for fault-tolerant systems is defined, and a framework model for distributed systems is proposed. The framework distinguishes between management operations and execution operations. The proposed model promotes the use of a central process specifically designed for the coordination and control of other application processes. The execution phases and the protocols for the management and execution components are presented. This model of a reliable system could be a foundation for an entire class of distributed system models based on the management of the replication process.

  15. Acceptance model of a Hospital Information System.

    Science.gov (United States)

    Handayani, P W; Hidayanto, A N; Pinem, A A; Hapsari, I C; Sandhyaduhita, P I; Budi, I

    2017-03-01

    The purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance. This study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, all general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work in medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0. The study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy) and organizational characteristics (i.e. management support, facilitating conditions, and user involvement), significantly influenced (p<0.05) users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals. Based on the results of this study, hospital management and IT developers should have more understanding of the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation, to ensure that HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS. Copyright © 2016

  16. GEOGRAPHIC INFORMATION SYSTEM-BASED MODELING AND ANALYSIS FOR SITE SELECTION OF GREEN MUSSEL, Perna viridis, MARICULTURE IN LADA BAY, PANDEGLANG, BANTEN PROVINCE

    Directory of Open Access Journals (Sweden)

    I Nyoman Radiarta

    2011-06-01

    Full Text Available Green mussel is one of the important species cultured in Lada Bay, Pandeglang. To provide the necessary guidance for green mussel mariculture development, finding a suitable site is an important step. This study was conducted to identify suitable sites for green mussel mariculture development using geographic information system (GIS)-based models. Seven important parameters were grouped into two submodels, namely environmental (water temperature, salinity, suspended solids, dissolved oxygen, and bathymetry) and infrastructural (distance to settlements and pond aquaculture). A constraint layer was used to exclude from the suitability maps areas where green mussel mariculture cannot be developed, including areas of floating net fishing activity and areas near the electricity station. Analyses of factors and constraints indicated that about 31% of the potential area with bottom depth less than 25 m was most suitable, showing an ideal condition for green mussel mariculture in the study region. This study shows that the GIS model is a powerful tool for site selection decision making and can be valuable in solving problems in local, regional, and/or continental areas.
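    The factor-and-constraint overlay described here is, at its core, a weighted sum of factor scores masked by exclusion zones. A minimal sketch (the factor scores, weights, and constraint layer below are hypothetical, not the study's data):

    ```python
    def suitability(factors, weights, constraint):
        """Cell-wise weighted overlay of factor-score grids, masked by a
        boolean constraint layer (True = development allowed)."""
        rows, cols = len(constraint), len(constraint[0])
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                if constraint[i][j]:
                    out[i][j] = sum(w * f[i][j] for w, f in zip(weights, factors))
        return out

    depth = [[4, 3], [2, 1]]    # hypothetical bathymetry suitability scores (1-4)
    salin = [[3, 3], [4, 2]]    # hypothetical salinity suitability scores (1-4)
    mask = [[True, True], [True, False]]   # e.g. exclude a fishing zone cell
    scores = suitability([depth, salin], [0.6, 0.4], mask)
    ```

    In a real GIS workflow the grids would be rasters and the weights would come from expert judgment or pairwise comparison, but the overlay logic is the same.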

  17. Enhanced Publications: Data Models and Information Systems

    Directory of Open Access Journals (Sweden)

    Alessia Bardi

    2014-04-01

    Full Text Available “Enhanced publications” are commonly intended as digital publications that consist of a mandatory narrative part (the description of the research conducted) plus related “parts”, such as datasets, other publications, images, tables, workflows, and devices. The state of the art in information systems for enhanced publications has today reached the point where some kind of common understanding is required, in order to provide the methodology and language for scientists to compare, analyse, or simply discuss the multitude of solutions in the field. In this paper, we thoroughly examine the literature with a two-fold aim: firstly, introducing the terminology required to describe and compare structural and semantic features of existing enhanced publication data models; secondly, proposing a classification of enhanced publication information systems based on their main functional goals.

  18. Knowledge base, information search and intention to adopt innovation

    NARCIS (Netherlands)

    Rijnsoever, van F.J.; Castaldi, C.

    2008-01-01

    Innovation is a process that involves searching for new information. This paper builds upon theoretical insights on individual and organizational learning and proposes a knowledge based model of how actors search for information when confronted with innovation. The model takes into account different

  19. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on image statistical information. One aspect neglected by all these methods is that much fuzzy information is contained in the images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory is capable of describing well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good performance. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicability.
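    A common fuzzy similarity measure for such matching is the min/max (intersection-over-union) ratio of membership vectors. The following toy sketch is one plausible instance of similarity-based matching, not the paper's actual algorithm; the membership values are invented:

    ```python
    def fuzzy_similarity(a, b):
        """Fuzzy similarity of two membership-degree vectors (values in [0, 1]),
        using the min/max (fuzzy intersection over fuzzy union) measure."""
        num = sum(min(x, y) for x, y in zip(a, b))
        den = sum(max(x, y) for x, y in zip(a, b))
        return num / den if den else 1.0

    template = [0.9, 0.4, 0.1]     # membership degrees of a template patch
    candidate = [0.8, 0.5, 0.2]    # membership degrees of a candidate patch
    score = fuzzy_similarity(template, candidate)   # close to, but below, 1
    ```

    The navigation matcher would slide the template over the scene image and pick the location maximizing this score.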

  20. Evaluation of location and number of aid post for sustainable humanitarian relief using agent based modeling (ABM) and geographic information system (GIS)

    Science.gov (United States)

    Khair, Fauzi; Sopha, Bertha Maya

    2017-12-01

    One of the crucial phases in disaster management is the response phase, or emergency response phase. It requires a sustainable, well-integrated management system; any errors in this phase significantly increase the number of victims as well as the material damage caused. Policies on the location of aid posts are therefore important decisions. The facts show many failures in providing assistance to refugees due to lack of preparation and poor determination of facilities and aid post locations. This study therefore aims to evaluate the number and location of aid posts for the 2010 Merapi eruption, using an integration of Agent Based Modeling (ABM) and Geographic Information System (GIS) under several scenarios. The ABM approach describes the behaviour of the agents (refugees and volunteers), with their respective characteristics, in the event of a disaster, while GIS spatial data describe the real condition of the Sleman regency road network. The simulation results show that alternative scenarios combining the DERU UGM post, Maguwoharjo Stadium, the Tagana Post, and the Pakem Main Post handle and distribute aid to evacuation barracks better than the initial scenario, leaving fewer unmet demands.

  1. INFORMATION MODEL OF A GENERAL PRACTITIONER

    Directory of Open Access Journals (Sweden)

    S. M. Zlepko

    2016-06-01

    Full Text Available In this paper the authors developed an information model of a family doctor and show its innovation and functionality. The proposed model meets the requirements of the current job description and the criteria of the World Organization of Family Doctors.

  2. Developing, choosing and using landscape evolution models to inform field-based landscape reconstruction studies : Developing, choosing and using landscape evolution models

    NARCIS (Netherlands)

    Temme, A.j.a.m.; Armitage, J.; Attal, M.; Van Gorp, W.; Coulthard, T.j.; Schoorl, J.m.

    2017-01-01

    Landscape evolution models (LEMs) are an increasingly popular resource for geomorphologists as they can operate as virtual laboratories where the implications of hypotheses about processes over human to geological timescales can be visualized at spatial scales from catchments to mountain ranges.

  3. The Influence of Base Rate and Case Information on Health-Risk Perceptions: A Unified Model of Self-Positivity and Self-Negativity

    OpenAIRE

    Dengfeng Yan; Jaideep Sengupta

    2013-01-01

    This research examines how consumers use base rate (e.g., disease prevalence in a population) and case information (e.g., an individual's disease symptoms) to estimate health risks. Drawing on construal level theory, we propose that consumers' reliance on base rate (case information) will be enhanced (weakened) by psychological distance. A corollary of this premise is that self-positivity (i.e., underestimating self-risk vs. other-risk) is likely when the disease base rate is high but the cas...

  4. PROT Project Financing Model Based on Incomplete Information Game

    Institute of Scientific and Technical Information of China (English)

    王艳伟; 刘艳慧; 程静; 高鑫; 张仙

    2015-01-01

    Operational public infrastructure projects, typified by small and medium hydropower projects, suffer from information asymmetry among the stakeholders involved when operated under the PROT project financing model; this gives rise to "moral hazard" in the game among the parties and damages the overall benefit of the PROT project. This paper introduces incomplete information game theory and entropy theory into the PROT project financing model. By analyzing the payoff functions and expected revenues of the three parties (social investors, government, and the public) and the entropy of the project financing system, a PROT project financing model based on an incomplete information game is established. The model gives insight into the intrinsic mechanism of the stakeholders' game, and changes in the project financing system entropy can be used to effectively prevent and monitor the resulting moral hazard. A worked example verifies that the model has good applicability.
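    The expected-payoff and entropy calculations underlying such a model can be sketched as follows. The player types, beliefs, and payoff numbers are invented for illustration and are not taken from the paper:

    ```python
    import math

    def expected_payoff(beliefs, payoffs):
        """Expected payoff given a belief distribution over the other
        party's unobserved types (the incomplete-information ingredient)."""
        return sum(p * u for p, u in zip(beliefs, payoffs))

    def system_entropy(probs):
        """Shannon entropy (bits) of an action distribution; rising entropy
        is the kind of signal the paper uses to monitor moral hazard."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Investor's choice against an 'honest' vs 'opportunistic' government type
    beliefs = [0.7, 0.3]                               # hypothetical beliefs
    u_invest = expected_payoff(beliefs, [10.0, -4.0])  # payoff if investing
    u_wait = expected_payoff(beliefs, [2.0, 2.0])      # payoff if staying out
    ```

    With these numbers investing dominates waiting, but as the believed probability of the opportunistic type rises, and with it the entropy of observed behaviour, the balance shifts; monitoring that entropy is the early-warning role described in the abstract.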

  5. The asymmetric reactions of mean and volatility of stock returns to domestic and international information based on a four-regime double-threshold GARCH model

    Science.gov (United States)

    Chen, Cathy W. S.; Yang, Ming Jing; Gerlach, Richard; Jim Lo, H.

    2006-07-01

    In this paper, we investigate the asymmetric reactions of mean and volatility of stock returns in five major markets to their own local news and the US information via linear and nonlinear models. We introduce a four-regime Double-Threshold GARCH (DTGARCH) model, which allows asymmetry in both the conditional mean and variance equations simultaneously by employing two threshold variables, to analyze the stock markets’ reactions to different types of information (good/bad news) generated from the domestic markets and the US stock market. By applying the four-regime DTGARCH model, this study finds that the interaction between the information of domestic and US stock markets leads to the asymmetric reactions of stock returns and their variability. In addition, this research also finds that the positive autocorrelation reported in the previous studies of financial markets may in fact be mis-specified, and actually due to the local market's positive response to the US stock market.
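    A double-threshold specification of this kind selects one of four parameter sets according to the signs of two threshold variables (domestic and US news). A minimal sketch of the regime selection and one step of a regime-specific GARCH(1,1) variance recursion; the parameter values are hypothetical, not the paper's estimates:

    ```python
    def regime(domestic_news, us_news):
        """Map the signs of the two threshold variables (local and US
        returns) to one of four regimes, indexed 0..3."""
        return 2 * (domestic_news >= 0) + (us_news >= 0)

    def garch_variance(h_prev, eps_prev, params):
        """One step of a regime-specific GARCH(1,1) recursion:
        h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1}."""
        omega, alpha, beta = params
        return omega + alpha * eps_prev ** 2 + beta * h_prev

    # Hypothetical regime-specific parameters (omega, alpha, beta)
    params_by_regime = {
        0: (0.02, 0.10, 0.85),  # bad local news, bad US news
        1: (0.02, 0.08, 0.88),  # bad local news, good US news
        2: (0.01, 0.06, 0.90),  # good local news, bad US news
        3: (0.01, 0.05, 0.92),  # good local news, good US news
    }
    r = regime(-0.5, 1.2)       # negative local return, positive US return
    h = garch_variance(1.0, 0.3, params_by_regime[r])
    ```

    The asymmetry the paper documents shows up precisely as differences among the four parameter sets, in both the mean and variance equations.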

  6. Energy Information Data Base: corporate author entries

    International Nuclear Information System (INIS)

    1980-06-01

    One of the controls for information entered into the data bases created and maintained by the DOE Technical Information Center is the standardized name for the corporate entity or the corporate author. The purpose of Energy Information Data Base: Corporate Author Entries is to provide a means for the consistent citing of the names of organizations in bibliographic records. These entries serve as guides for users of the DOE/RECON computerized data bases who want to locate information originating in particular organizations. The entries in this revision include the corporate entries used in report bibliographic citations since 1973 and list approximately 28,000 corporate sources

  7. Compilation of information on melter modeling

    International Nuclear Information System (INIS)

    Eyler, L.L.

    1996-03-01

    The objective of the task described in this report is to compile information on modeling capabilities for the High-Temperature Melter and the Cold Crucible Melter and to issue a modeling capabilities letter report summarizing existing modeling capabilities. The report is to include strategy recommendations for future modeling efforts to support High Level Waste (HLW) melter development

  8. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  9. Information-Processing Models and Curriculum Design

    Science.gov (United States)

    Calfee, Robert C.

    1970-01-01

    "This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)

  10. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through the attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. Regarding this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer, who strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. Therefore, for such a policy to be effective, it has to be entirely embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of Metropolitan São Paulo. This sampling was based on the representativeness of such industries, particularly with regard to each one's implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of the Cronbach's alpha structure of these security items attests not only that the capacity of those industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  11. Energy information data base: subject thesaurus

    International Nuclear Information System (INIS)

    1979-10-01

    The technical staff of the DOE Technical Information Center, during its subject indexing activities, develops and structures a vocabulary that allows consistent machine storage and retrieval of information necessary to the accomplishment of the DOE mission. This thesaurus incorporates that structured vocabulary. The terminology of this thesaurus is used for the subject control of information announced in DOE Energy Research Abstracts, Energy Abstracts for Policy Analysis, Solar Energy Update, Geothermal Energy Update, Fossil Energy Update, Fusion Energy Update, and Energy Conservation Update. This terminology also facilitates subject searching of the DOE energy information data base, a research in progress data base, a general and practical energy information data base, power reactor docket information data base, nuclear science abstracts data base, and the federal energy information data base on the DOE on-line retrieval system, RECON. The rapid expansion of the DOE's activities will result in a concomitant thesaurus expansion as information relating to new activities is indexed. Only the terms used in the indexing of documents at the Technical Information Center to date are included

  12. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  13. Directory of Energy Information Administration models 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This directory revises and updates the Directory of Energy Information Administration Models 1995, DOE/EIA-0293(95), Energy Information Administration (EIA), U.S. Department of Energy, July 1995. Four models have been deleted in this directory as they are no longer being used: (1) Market Penetration Model for Ground-Water Heat Pump Systems (MPGWHP); (2) Market Penetration Model for Residential Rooftop PV Systems (MPRESPV-PC); (3) Market Penetration Model for Active and Passive Solar Technologies (MPSOLARPC); and (4) Revenue Requirements Modeling System (RRMS).

  14. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  15. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  16. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    OpenAIRE

    Staccini, Pascal M.; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate the conceptual applicability and practical understandability by clinical staff and members of users teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning...

  17. The Nature of Information Science: Changing Models

    Science.gov (United States)

    Robinson, Lyn; Karamuftuoglu, Murat

    2010-01-01

    Introduction: This paper considers the nature of information science as a discipline and profession. Method: It is based on conceptual analysis of the information science literature, and consideration of philosophical perspectives, particularly those of Kuhn and Peirce. Results: It is argued that information science may be understood as a field of…

  18. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
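The fixation rule described above (after each fixation, fixate next wherever expected uncertainty is minimized) can be sketched in a few lines. The stimulus hypotheses, candidate locations, and likelihood values below are invented for illustration; this is not the authors' implementation.

```python
import math

# Toy sketch of the sequential information-maximization rule: choose the
# fixation location whose observation is expected to leave the least
# posterior uncertainty about the stimulus. All numbers are hypothetical.

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_entropy_after(prior, likelihoods):
    """Expected posterior entropy after fixating a location, where
    likelihoods[i] = P(obs = 1 | hypothesis i) at that location."""
    h = 0.0
    for obs in (0, 1):
        joint = [p * (l if obs else 1 - l) for p, l in zip(prior, likelihoods)]
        p_obs = sum(joint)
        if p_obs > 0:
            h += p_obs * entropy([j / p_obs for j in joint])
    return h

prior = [0.25, 0.25, 0.25, 0.25]          # four stimulus hypotheses
locations = {
    "A": [0.9, 0.1, 0.5, 0.5],            # sharply separates hypotheses 0 and 1
    "B": [0.5, 0.5, 0.5, 0.5],            # uninformative everywhere
    "C": [0.6, 0.6, 0.4, 0.4],            # weakly separates the two pairs
}

# Fixate next where expected uncertainty is lowest (information gain highest).
best = min(locations, key=lambda k: expected_entropy_after(prior, locations[k]))
print(best)  # "A"
```

The same greedy rule would be reapplied after updating the prior with each observation, producing a fixation sequence.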

  19. User-Oriented and Cognitive Models of Information Retrieval

    DEFF Research Database (Denmark)

    Skov, Mette; Järvelin, Kalervo; Ingwersen, Peter

    2018-01-01

    The domain of user-oriented and cognitive information retrieval (IR) is first discussed, followed by a discussion on the dimensions and types of models one may build for the domain. The focus of the present entry is on the models of user-oriented and cognitive IR, not on their empirical applications. Several models with different emphases on user-oriented and cognitive IR are presented—ranging from overall approaches and relevance models to procedural models, cognitive models, and task-based models. The present entry does not discuss empirical findings based on the models.

  20. Mobile-Based Dictionary of Information and Communication Technology

    Science.gov (United States)

    Liando, O. E. S.; Mewengkang, A.; Kaseger, D.; Sangkop, F. I.; Rantung, V. P.; Rorimpandey, G. C.

    2018-02-01

    This study aims to design and build a mobile-based dictionary of information and communication technology applications to provide access to information in the form of a glossary of terms in the context of information and communication technologies. The applications built in this study use the Android platform with an SQLite database model. This research uses a prototype development method which covers the stages of Communication, Quick Plan, Quick Design Modeling, Construction of Prototype, Deployment Delivery & Feedback, and Full System Transformation. The application is designed in such a way as to facilitate the user in the process of learning and understanding new terms or vocabulary encountered in the world of information and communication technology. The mobile-based dictionary of Information and Communication Technology that has been built can be an alternative to learning literature. In its simplest form, this application is able to meet the need for a comprehensive and accurate dictionary of Information and Communication Technology terms.
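A glossary lookup of the kind this application performs against its SQLite database can be sketched as follows; the table layout, column names, and sample entries are hypothetical, not taken from the app.

```python
import sqlite3

# Minimal sketch of an SQLite-backed glossary, as a dictionary app might
# use: a single table of terms and definitions, queried by prefix.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE glossary (term TEXT PRIMARY KEY, definition TEXT)")
conn.executemany("INSERT INTO glossary VALUES (?, ?)", [
    ("LAN", "Local Area Network"),
    ("DNS", "Domain Name System: resolves names to IP addresses"),
])

def lookup(prefix):
    """Prefix search, as a dictionary UI typically offers while typing."""
    rows = conn.execute(
        "SELECT term, definition FROM glossary WHERE term LIKE ? ORDER BY term",
        (prefix + "%",),
    )
    return rows.fetchall()

print(lookup("D")[0][0])  # "DNS"
```

On Android the same queries would go through the platform's SQLite API rather than Python's `sqlite3`, but the schema and prefix query carry over directly.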

  1. Early engagement of stakeholders with individual-based modelling can inform research for improving invasive species management: the round goby as a case study

    DEFF Research Database (Denmark)

    Samson, Emma; Hirsch, Philipp E.; Palmer, Stephen C.

    2017-01-01

    Individual-based models (IBMs) incorporating realistic representations of key range-front processes such as dispersal can be used as tools to investigate the dynamics of invasive species. Managers can apply insights from these models to take effective action to prevent further spread and prioriti...

  2. Levy Random Bridges and the Modelling of Financial Information

    OpenAIRE

    Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea

    2009-01-01

    The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...

  3. Perceived Threat and Corroboration: Key Factors That Improve a Predictive Model of Trust in Internet-based Health Information and Advice

    Science.gov (United States)

    Harris, Peter R; Briggs, Pam

    2011-01-01

    Background How do people decide which sites to use when seeking health advice online? We can assume, from related work in e-commerce, that general design factors known to affect trust in the site are important, but in this paper we also address the impact of factors specific to the health domain. Objective The current study aimed to (1) assess the factorial structure of a general measure of Web trust, (2) model how the resultant factors predicted trust in, and readiness to act on, the advice found on health-related websites, and (3) test whether adding variables from social cognition models to capture elements of the response to threatening, online health-risk information enhanced the prediction of these outcomes. Methods Participants were asked to recall a site they had used to search for health-related information and to think of that site when answering an online questionnaire. The questionnaire consisted of a general Web trust questionnaire plus items assessing appraisals of the site, including threat appraisals, information checking, and corroboration. It was promoted on the hungersite.com website. The URL was distributed via Yahoo and local print media. We assessed the factorial structure of the measures using principal components analysis and modeled how well they predicted the outcome measures using structural equation modeling (SEM) with EQS software. Results We report an analysis of the responses of participants who searched for health advice for themselves (N = 561). Analysis of the general Web trust questionnaire revealed 4 factors: information quality, personalization, impartiality, and credible design. In the final SEM model, information quality and impartiality were direct predictors of trust. However, variables specific to eHealth (perceived threat, coping, and corroboration) added substantially to the ability of the model to predict variance in trust and readiness to act on advice on the site. 
The final model achieved a satisfactory fit: χ²(5) = 10

  4. Information retrieval system based on INIS tapes

    International Nuclear Information System (INIS)

    Pultorak, G.

    1976-01-01

    An information retrieval system based on the INIS computer tapes is described. It includes the three main elements of a computerized information system: a data base on a machine-readable medium, a collection of queries which represent the information needs from the data base, and a set of programs by which the actual retrieval is done, according to the user's queries. The system is built for the center's computer, a CDC 3600, and its special features characterize, to a certain degree, the structure of the programs. (author)

  5. Information Clustering Based on Fuzzy Multisets.

    Science.gov (United States)

    Miyamoto, Sadaaki

    2003-01-01

    Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…
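As a rough illustration of the fuzzy multiset model, here is a sketch of a union operation on term membership grades, treating each element's grades as a descending sequence and taking elementwise maxima. The documents and grade values are invented, and the abstract above only summarizes Miyamoto's algorithms, so this should be read as an assumption-laden sketch rather than his definition.

```python
# Sketch of a fuzzy multiset union: each element (e.g. a term) carries a
# multiset of membership grades; grades are sorted descending, padded with
# zeros to equal length, and combined elementwise by max. All values invented.

def _norm(seq, n):
    """Descending grade sequence padded with zeros to length n."""
    return sorted(seq, reverse=True) + [0.0] * (n - len(seq))

def fms_union(a, b):
    out = {}
    for x in set(a) | set(b):
        u, v = a.get(x, []), b.get(x, [])
        n = max(len(u), len(v), 1)
        out[x] = [max(p, q) for p, q in zip(_norm(u, n), _norm(v, n))]
    return out

# Two documents' term memberships ("web" occurs twice in d1 with grades 0.9, 0.4).
d1 = {"web": [0.9, 0.4], "search": [0.7]}
d2 = {"web": [0.6], "cluster": [0.8]}

print(fms_union(d1, d2)["web"])  # [0.9, 0.4]
```

Intersection would take elementwise minima instead, and cluster centers can then be defined over these grade sequences.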

  6. Directory of energy information administration models 1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-13

    This updated directory has been published annually; after this issue, it will be published only biennially. The Disruption Impact Simulator Model in use by EIA is included. Model descriptions have been updated according to revised documentation approved during the past year. This directory contains descriptions about each model, including title, acronym, purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included are 37 EIA models active as of February 1, 1995. The first group is the National Energy Modeling System (NEMS) models. The second group is all other EIA models that are not part of NEMS. Appendix A identifies major EIA modeling systems and the models within these systems. Appendix B is a summary of the 'Annual Energy Outlook' Forecasting System.

  7. Directory of Energy Information Administration Models 1993

    Energy Technology Data Exchange (ETDEWEB)

    1993-07-06

    This directory contains descriptions about each model, including the title, acronym, purpose, followed by more detailed information on characteristics, uses, and requirements. Sources for additional information are identified. Included in this directory are 35 EIA models active as of May 1, 1993. Models that run on personal computers are identified by "PC" as part of the acronym. EIA is developing new models, a National Energy Modeling System (NEMS), and is making changes to existing models to include new technologies, environmental issues, conservation, and renewables, as well as extend the forecast horizon. Other parts of the Department are involved in this modeling effort. A fully operational model is planned which will integrate completed segments of NEMS for its first official application--preparation of EIA's Annual Energy Outlook 1994. Abstracts for the new models will be included in next year's version of this directory.

  8. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  9. Information-Theoretic Perspectives on Geophysical Models

    Science.gov (United States)

    Nearing, Grey

    2016-04-01

    To test any hypothesis about any dynamic system, it is necessary to build a model that places that hypothesis into the context of everything else that we know about the system: initial and boundary conditions and interactions between various governing processes (Hempel and Oppenheim, 1948, Cartwright, 1983). No hypothesis can be tested in isolation, and no hypothesis can be tested without a model (for a geoscience-related discussion see Clark et al., 2011). Science is (currently) fundamentally reductionist in the sense that we seek some small set of governing principles that can explain all phenomena in the universe, and such laws are ontological in the sense that they describe the object under investigation (Davies, 1990 gives several competing perspectives on this claim). However, since we cannot build perfect models of complex systems, any model that does not also contain an epistemological component (i.e., a statement, like a probability distribution, that refers directly to the quality of the information from the model) is falsified immediately (in the sense of Popper, 2002) given only a small number of observations. Models necessarily contain both ontological and epistemological components, and what this means is that the purpose of any robust scientific method is to measure the amount and quality of information provided by models. I believe that any viable philosophy of science must be reducible to this statement. The first step toward a unified theory of scientific models (and therefore a complete philosophy of science) is a quantitative language that applies to both ontological and epistemological questions. Information theory is one such language: Cox' (1946) theorem (see Van Horn, 2003) tells us that probability theory is the (only) calculus that is consistent with Classical Logic (Jaynes, 2003; chapter 1), and information theory is simply the integration of convex transforms of probability ratios (integration reduces density functions to scalar
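The epistemological component discussed above (a statement about the quality of a model's information) can be illustrated with a toy scoring rule: measure, in bits, how surprised a model's predictive distribution is by what is actually observed. The data and the two models below are invented, not taken from the abstract.

```python
import math

# Minimal sketch of scoring a probabilistic model by the information its
# predictive distribution carries about observed outcomes. A model that is
# confident and well calibrated accrues less surprisal per observation,
# i.e. it provides more information about the system. Data are hypothetical.

def surprise_bits(model, outcome):
    """Shannon surprisal -log2 p(outcome): lower means more informative."""
    return -math.log2(model[outcome])

observations = ["wet", "wet", "dry", "wet"]

sharp_model = {"wet": 0.8, "dry": 0.2}   # confident, roughly calibrated
vague_model = {"wet": 0.5, "dry": 0.5}   # maximally uncertain

def mean_surprisal(model):
    return sum(surprise_bits(model, o) for o in observations) / len(observations)

print(mean_surprisal(sharp_model) < mean_surprisal(vague_model))  # True
```

Averaged surprisal differences between models are, up to sampling error, estimates of Kullback-Leibler divergences, which is the bridge to the information-theoretic framing of the abstract.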

  10. THE MODEL FOR RISK ASSESSMENT ERP-SYSTEMS INFORMATION SECURITY

    Directory of Open Access Journals (Sweden)

    V. S. Oladko

    2016-12-01

    Full Text Available The article deals with the problem of assessing information security risks in ERP-systems. ERP-system functions and architecture are studied. A model of malicious impacts on the levels of the ERP-system architecture is composed. A model-based risk assessment, combining quantitative and qualitative approaches to risk assessment, is developed, built on a partial unification of three methods for studying information security risks: security models with full overlapping, the CRAMM technique, and the FRAP technique.

  11. Protection and security of data base information

    Directory of Open Access Journals (Sweden)

    Mariuţa ŞERBAN

    2011-06-01

    Full Text Available Data bases are one of the most important components in every large informatics system which stores and processes data and information. Because data bases contain all of the valuable information about a company, its clients, and its financial activity, they represent one of the key elements in the structure of an organization, which imposes imperatives such as confidentiality, integrity and ease of data access. The current paper discusses the integrity of data bases, which refers to the validity and the coherence of stored data. Usually, integrity is defined in terms of constraints, that is, rules regarding coherence which the data base cannot infringe. Data base integrity refers to information correctness and involves detecting, correcting and preventing errors that might affect the data comprised by the data bases.

  12. [Lack of access to information on oral health problems among adults: an approach based on the theoretical model for literacy in health].

    Science.gov (United States)

    Roberto, Luana Leal; Noronha, Daniele Durães; Souza, Taiane Oliveira; Miranda, Ellen Janayne Primo; Martins, Andréa Maria Eleutério de Barros Lima; Paula, Alfredo Maurício Batista De; Ferreira, Efigênia Ferreira E; Haikal, Desirée Sant'ana

    2018-03-01

    This study sought to investigate factors associated with the lack of access to information on oral health among adults. It is a cross-sectional study, carried out among 831 adults (35-44 years of age). The dependent variable was access to information on how to avoid oral problems, and the independent variables were gathered into subgroups according to the theoretical model for literacy in health. Binary logistic regression was carried out, and results were corrected by the design effect. It was observed that 37.5% had no access to information about dental problems. The lack of access was higher among adults who had lower per capita income, were dissatisfied with the dental services provided, did not use dental floss, had unsatisfactory physical control of the quality of life, and self-perceived their oral health as fair/poor/very poor. The likelihood of not having access to information about dental problems among those dissatisfied with the dental services used was 3.28 times higher than for those satisfied with the dental services used. Thus, decreased access to information was related to unfavorable conditions among adults. Health services should ensure appropriate information to their users in order to increase health literacy levels and improve satisfaction and equity.

  13. Complementarity of information sent via different bases

    DEFF Research Database (Denmark)

    Wu, Shengjun; Yu, Sixia; Mølmer, Klaus

    2009-01-01

    We discuss quantitatively the complementarity of information transmitted by a quantum system prepared in a basis state in one out of several different mutually unbiased bases (MUBs). We obtain upper bounds on the information available to a receiver who has no knowledge of which MUB was chosen...

  14. Information and Communication Technology and School Based ...

    African Journals Online (AJOL)

    Information and Communication technology and school based assessment (SBA) is practice that broadens the form mode, means and scope of assessment in the school using modern technologies in order to facilitate and enhance learning. This study sought to ascertain the efficacy of Information and Communication ...

  15. A Model for an Electronic Information Marketplace

    Directory of Open Access Journals (Sweden)

    Wei Ge

    2005-11-01

    Full Text Available As the information content on the Internet increases, the task of locating desired information and assessing its quality becomes increasingly difficult. This development causes users to be more willing to pay for information that is focused on specific issues, verifiable, and available upon request. Thus, the nature of the Internet opens up the opportunity for information trading. In this context, the Internet can not only be used to close the transaction, but also to deliver the product - desired information - to the user. Early attempts to implement such business models have fallen short of expectations. In this paper, we discuss the limitations of such practices and present a modified business model for information trading, which uses a reverse auction approach together with a multiple-buyer price discovery process.

  16. Agricultural information dissemination using ICTs: A review and analysis of information dissemination models in China

    Directory of Open Access Journals (Sweden)

    Yun Zhang

    2016-03-01

    Full Text Available Over the last three decades, China’s agriculture sector has been transformed from the traditional to modern practice through the effective deployment of Information and Communication Technologies (ICTs). Information processing and dissemination have played a critical role in this transformation process. Many studies in relation to agriculture information services have been conducted in China, but few of them have attempted to provide a comprehensive review and analysis of different information dissemination models and their applications. This paper aims to review and identify the ICT based information dissemination models in China and to share the knowledge and experience in applying emerging ICTs in disseminating agriculture information to farmers and farm communities to improve productivity and economic, social and environmental sustainability. The paper reviews and analyzes the development stages of China’s agricultural information dissemination systems and different mechanisms for agricultural information service development and operations. Seven ICT-based information dissemination models are identified and discussed. Success cases are presented. The findings provide a useful direction for researchers and practitioners in developing future ICT based information dissemination systems. It is hoped that this paper will also help other developing countries to learn from China’s experience and best practice in their endeavor of applying emerging ICTs in agriculture information dissemination and knowledge transfer.

  17. Information pricing based on trusted system

    Science.gov (United States)

    Liu, Zehua; Zhang, Nan; Han, Hongfeng

    2018-05-01

    Personal information has become a valuable commodity in today's society, so our goal is to develop realistic price points and a pricing system. First of all, we improve the existing BLP system to prevent cascading incidents and design a 7-layer model. From the cost of encryption in each layer, we develop PI price points. In addition, we use association rule mining algorithms from data mining to calculate the importance of information, in order to optimize the informational hierarchies of different attribute types within a multi-level trusted system. Finally, we use a normal distribution model to predict the encryption level distribution for users in different classes, and then calculate information prices through a linear programming model with the help of the encryption level distribution above.
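A minimal sketch of the layered pricing idea, assuming (hypothetically) that each trust layer adds a fixed encryption cost and that a mined importance score scales the price linearly; the layer costs and function shape below are invented, not the paper's model.

```python
# Hypothetical sketch: price an information item protected up to a given
# layer of a 7-layer trusted system as the cumulative encryption cost of
# the layers it passes through, scaled by an importance weight (the role
# played by association-rule mining in the paper). All numbers invented.

layer_cost = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # per-layer encryption cost

def price(level, importance):
    """Price of an item protected up to `level` (1-7)."""
    if not 1 <= level <= len(layer_cost):
        raise ValueError("level must be between 1 and 7")
    return importance * sum(layer_cost[:level])

print(round(price(3, importance=1.2), 2))  # 1.2 * (1.0 + 1.5 + 2.0)
```

The paper's full model additionally distributes users over encryption levels with a normal distribution and solves a linear program; this sketch only captures the cost-accumulation step.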

  18. Geospatial Information System Capability Maturity Models

    Science.gov (United States)

    2017-06-01

    To explore how State departments of transportation (DOTs) evaluate geospatial tool applications and services within their own agencies, particularly their experiences using capability maturity models (CMMs) such as the Urban and Regional Information ...

  19. THE INFORMATION MODEL «SOCIAL EXPLOSION»

    Directory of Open Access Journals (Sweden)

    Alexander Chernyavskiy

    2012-01-01

    Full Text Available The article examines and analyzes the construction of the information model «social explosion», which corresponds to the newest «colored» revolutions. Analysis of the model reveals effective approaches to the initiation of this explosion through the use of contemporary information communications such as cellular networks and the mobile Internet.

  20. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
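The machinery summarized above reduces, in its simplest form, to AIC = 2k - 2 ln L, where k is the number of estimated parameters and L the maximized likelihood. A toy comparison of two Gaussian models on made-up data (not an example from the book):

```python
import math

# Sketch of AIC-based model selection: AIC = 2k - 2 ln L. The model with
# the lower AIC is estimated to lose less Kullback-Leibler information
# relative to the true data-generating process. Data are invented.

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def gaussian_loglik(data, mu, sigma):
    n = len(data)
    return (-n / 2 * math.log(2 * math.pi * sigma**2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma**2))

data = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0]

# Model 1: mean fixed at 0 (k = 1: sigma only) -- badly misspecified.
# Model 2: fitted mean (k = 2: mu and sigma) -- pays a parameter penalty.
mu_hat = sum(data) / len(data)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in data) / len(data))

aic_fixed = aic(gaussian_loglik(data, 0.0, sigma_hat), k=1)
aic_fitted = aic(gaussian_loglik(data, mu_hat, sigma_hat), k=2)

print(aic_fitted < aic_fixed)  # True: the extra parameter is worth its cost
```

Here the fit improvement from estimating the mean dwarfs the 2-per-parameter penalty; with data actually centered near 0, the penalty could tip the comparison the other way, which is the point of the criterion.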

  1. Energy Information Data Base: serial titles

    International Nuclear Information System (INIS)

    1980-06-01

    The Department of Energy Technical Information Center (TIC) is responsible for creating bibliographic data bases that are used in the announcement and retrieval of publications dealing with all phases of energy. The TIC interactive information processing system makes use of a number of computerized authorities so that consistency can be maintained and indexes can be produced. One such authority is the Energy Information Data Base: Serial Titles. This authority contains the full and abbreviated journal title, country of publication, CODEN, and certain codes. This revision replaces previous revisions of this document

  2. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others

    1997-08-01

    This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key of the modelling process is to build a prototype model and use the model as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) Determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) Horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) Determining the model's direction based on the paleocurrent statistics; (4) Estimating the sandbody communication by photomosaic and profiles; (6) Estimating reservoir properties distribution within sandbody by lithofacies analysis; and (7) Building the reservoir model in sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of Youshashan oil field has been built by using this method.

  3. Millennial Students' Mental Models of Information Retrieval

    Science.gov (United States)

    Holman, Lucy

    2009-01-01

    This qualitative study examines first-year college students' online search habits in order to identify patterns in millennials' mental models of information retrieval. The study employed a combination of modified contextual inquiry and concept mapping methodologies to elicit students' mental models. The researcher confirmed previously observed…

  4. Enterprise Modelling for an Educational Information Infrastructure

    NARCIS (Netherlands)

    Widya, I.A.; Michiels, E.F.; Volman, C.J.A.M.; Pokraev, S.; de Diana, I.P.F.; Filipe, J.; Sharp, B.; Miranda, P.

    2001-01-01

    This paper reports the modelling exercise of an educational information infrastructure that aims to support the organisation of teaching and learning activities suitable for a wide range of didactic policies. The modelling trajectory focuses on capturing invariant structures of relations between

  5. Multi-dimensional indoor location information model

    NARCIS (Netherlands)

    Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.

    2013-01-01

    Aiming at the increasing requirements of seamless indoor and outdoor navigation and location service, a Chinese standard of Multidimensional Indoor Location Information Model is being developed, which defines ontology of indoor location. The model is complementary to 3D concepts like CityGML and

  6. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
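The compression argument above can be made concrete with a toy sketch: instead of testing all 2^n subsets of data points, sweep the data once and extend a model while points remain compatible with it, starting a new model (and flagging something unusual) when a point breaks the fit. The constant-rate track model, data, and tolerance below are invented, not from the system described.

```python
# Toy model-based temporal reasoning: group time-stamped readings into
# runs ("tracks") compatible with a constant-rate model v = v0 + rate*(t - t0).
# A point that violates the current model starts a new track, which is the
# cue that something unusual or unanticipated happened. Values invented.

def segment_tracks(times, values, rate=1.0, tol=0.2):
    """Group (t, v) points into runs compatible with the rate model."""
    tracks, current = [], [(times[0], values[0])]
    for t, v in zip(times[1:], values[1:]):
        t0, v0 = current[0]
        if abs(v - (v0 + rate * (t - t0))) <= tol:   # fits the current model
            current.append((t, v))
        else:                                        # model violated: new track
            tracks.append(current)
            current = [(t, v)]
    tracks.append(current)
    return tracks

times  = [0, 1, 2, 3, 4, 5]
values = [0.0, 1.1, 2.0, 7.0, 8.1, 9.0]   # the jump at t=3 is "out of the norm"

print(len(segment_tracks(times, values)))  # 2
```

The single sweep touches each point once, so the cost is linear in n rather than exponential, which is exactly the compression the abstract describes.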

  7. Information theory based approaches to cellular signaling.

    Science.gov (United States)

    Waltermann, Christian; Klipp, Edda

    2011-10-01

    Cells interact with their environment and they have to react adequately to internal and external changes, such as changes in nutrient composition, physical properties like temperature or osmolarity, and other stresses. More specifically, they must be able to evaluate whether the external change is significant or just in the range of noise. Based on multiple external parameters they have to compute an optimal response. Cellular signaling pathways are considered as the major means of information perception and transmission in cells. Here, we review different attempts to quantify information processing on the level of individual cells. We refer to Shannon entropy, mutual information, and informal measures of signaling pathway cross-talk and specificity. Information theory in systems biology has been successfully applied to identification of optimal pathway structures, mutual information and entropy as system response in sensitivity analysis, and quantification of input and output information. While the study of information transmission within the framework of information theory in technical systems is an advanced field with high impact in engineering and telecommunication, its application to biological objects and processes is still restricted to specific fields such as neuroscience, structural and molecular biology. However, in systems biology, which deals with a holistic understanding of biochemical systems and cellular signaling, examples of the application of information theory have only recently emerged. This article is part of a Special Issue entitled Systems Biology of Microorganisms. Copyright © 2011 Elsevier B.V. All rights reserved.
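
    One of the measures the review refers to, mutual information, can be computed directly from a joint input-output distribution. A minimal sketch (the 10%-error binary channel is an invented example, not from the paper):

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A signal read correctly 90% of the time (binary symmetric channel):
eps = 0.1
joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
         (1, 0): 0.5 * eps, (1, 1): 0.5 * (1 - eps)}
bits = mutual_information(joint)   # ≈ 0.531 bits per signaling event
```

    With no noise (eps = 0) the same code returns 1 bit, the full capacity of a binary signal.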

  8. BIM. Building Information Model. Special issue; BIM. Building Information Model. Themanummer

    Energy Technology Data Exchange (ETDEWEB)

    Van Gelder, A.L.A. [Arta and Consultancy, Lage Zwaluwe (Netherlands); Van den Eijnden, P.A.A. [Stichting Marktwerking Installatietechniek, Zoetermeer (Netherlands); Veerman, J.; Mackaij, J.; Borst, E. [Royal Haskoning DHV, Nijmegen (Netherlands); Kruijsse, P.M.D. [Wolter en Dros, Amersfoort (Netherlands); Buma, W. [Merlijn Media, Waddinxveen (Netherlands); Bomhof, F.; Willems, P.H.; Boehms, M. [TNO, Delft (Netherlands); Hofman, M.; Verkerk, M. [ISSO, Rotterdam (Netherlands); Bodeving, M. [VIAC Installatie Adviseurs, Houten (Netherlands); Van Ravenswaaij, J.; Van Hoven, H. [BAM Techniek, Bunnik (Netherlands); Boeije, I.; Schalk, E. [Stabiplan, Bodegraven (Netherlands)

    2012-11-15

    A series of 14 articles illustrates the various aspects of the Building Information Model (BIM). The essence of BIM is to capture information about the building process and the building product.

  9. Information Literacy for Health Professionals: Teaching Essential Information Skills with the Big6 Information Literacy Model

    Science.gov (United States)

    Santana Arroyo, Sonia

    2013-01-01

    Health professionals frequently do not possess the necessary information-seeking abilities to conduct an effective search in databases and Internet sources. Reference librarians may teach health professionals these information and technology skills through the Big6 information literacy model (Big6). This article aims to address this issue. It also…

  10. Energy Information Data Base: corporate author entries

    International Nuclear Information System (INIS)

    1978-06-01

    The DOE Energy Information Data Base has been created and is maintained by the DOE Technical Information Center. One of the controls for information entered into the base is the standardized name of the corporate entity or the corporate author. The purpose of this list of authorized or standardized corporate entries is to provide a means for the consistent citing of the names of organizations in bibliographic records. It also serves as a guide for users who retrieve information from a bibliographic data base and who want to locate information originating in particular organizations. This authority is a combination of entries established by the Technical Information Center and the International Atomic Energy Agency's International Nuclear Information System (INIS). The format calls, in general, for the name of the organization represented by the literature being cataloged to be cited as follows: the largest element, the place, the smallest element, e.g., Brigham Young Univ., Provo, Utah (USA), Dept. of Chemical Engineering. Code numbers are assigned to each entry to provide manipulation by computer. Cross references are used to reflect name changes and invalid entries.
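
    The authority-file mechanism the record describes can be sketched as a small lookup structure. Everything here is hypothetical for illustration (the code number and the cross-reference string are invented; only the standardized entry format is taken from the record):

```python
# Authority file: code number -> standardized "largest element, place,
# smallest element" form, as in the record's Brigham Young example.
authority = {
    9001234: "Brigham Young Univ., Provo, Utah (USA), Dept. of Chemical Engineering",
}

# Cross references map former or invalid names to the authorized code.
cross_refs = {
    "BYU Chemical Engineering Dept.": 9001234,
}

def resolve(name):
    """Return (code, standardized entry) for a cited corporate name."""
    code = cross_refs.get(name)
    if code is None:
        raise KeyError(f"no authorized entry for {name!r}")
    return code, authority[code]

code, entry = resolve("BYU Chemical Engineering Dept.")
```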

  11. Listener: a probe into information based material specification

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette; Karmon, Ayelet

    2011-01-01

    This paper presents the thinking and making of the architectural research probe Listener. Developed as an interdisciplinary collaboration between textile design and architecture, Listener explores how information based fabrication technologies are challenging the material practices of architecture....... The paper investigates how textile design can be understood as a model for architectural production providing new strategies for material specification and allowing the thinking of material as inherently variegated and performative. The paper traces the two fold information based strategies present...

  12. Information retrieval models foundations and relationships

    CERN Document Server

    Roelleke, Thomas

    2013-01-01

    Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term-Frequency times Inverse-Document-Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best-Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). Also, the early 2000s saw the arrival of divergence from randomness (DFR).Regarding in
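
    The TF-IDF weighting named above as the vector-space scheme can be computed in a few lines. A minimal sketch (the tokenized toy corpus is invented for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights per document for a tokenized corpus."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()}
            for d in docs]

docs = [["model", "retrieval", "model"], ["retrieval", "query"], ["query"]]
weights = tfidf(docs)
# "model" occurs only in doc 0, so it gets the highest weight there, while
# "retrieval" (appearing in two of three docs) is discounted by its IDF.
```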

  13. Five-factor model personality disorder prototypes in a community sample: self- and informant-reports predicting interview-based DSM diagnoses.

    Science.gov (United States)

    Lawton, Erin M; Shields, Andrew J; Oltmanns, Thomas F

    2011-10-01

    The need for an empirically validated, dimensional system of personality disorders is becoming increasingly apparent. While a number of systems have been investigated in this regard, the five-factor model of personality has demonstrated the ability to adequately capture personality pathology. In particular, the personality disorder prototypes developed by Lynam and Widiger (2001) have been tested in a number of samples. The goal of the present study is to extend this literature by validating the prototypes in a large, representative community sample of later middle-aged adults using both self and informant reports. We found that the prototypes largely work well in this age group. Schizoid, Borderline, Histrionic, Narcissistic, and Avoidant personality disorders demonstrate good convergent validity, with a particularly strong pattern of discriminant validity for the latter four. Informant-reported prototypes show similar patterns to self reports for all analyses. This demonstrates that informants are not succumbing to halo representations of the participants, but are rather describing participants in nuanced ways. It is important that informant reports add significant predictive validity for Schizoid, Antisocial, Borderline, Histrionic, and Narcissistic personality disorders. Implications of our results and directions for future research are discussed.

  14. Five-Factor Model personality disorder prototypes in a community sample: Self- and informant-reports predicting interview-based DSM diagnoses

    Science.gov (United States)

    Lawton, Erin M.; Shields, Andrew J.; Oltmanns, Thomas F.

    2011-01-01

    The need for an empirically-validated, dimensional system of personality disorders is becoming increasingly apparent. While a number of systems have been investigated in this regard, the five-factor model of personality has demonstrated the ability to adequately capture personality pathology. In particular, the personality disorder prototypes developed by Lynam and Widiger (2001) have been tested in a number of samples. The goal of the present study is to extend this literature by validating the prototypes in a large, representative community sample of later middle-aged adults using both self and informant reports. We found that the prototypes largely work well in this age group. Schizoid, Borderline, Histrionic, Narcissistic, and Avoidant personality disorders demonstrate good convergent validity, with a particularly strong pattern of discriminant validity for the latter four. Informant-reported prototypes show similar patterns to self reports for all analyses. This demonstrates that informants are not succumbing to halo representations of the participants, but are rather describing participants in nuanced ways. Importantly, informant reports add significant predictive validity for Schizoid, Antisocial, Borderline, Histrionic, and Narcissistic personality disorders. Implications of our results and directions for future research are discussed. PMID:22200006

  15. Propagating semantic information in biochemical network models

    Directory of Open Access Journals (Sweden)

    Schulz Marvin

    2012-01-01

    Full Text Available Background: To enable automatic searches, alignments, and model combination, the elements of systems biology models need to be compared and matched across models. Elements can be identified by machine-readable biological annotations, but assigning such annotations and matching non-annotated elements is tedious work and calls for automation. Results: A new method called "semantic propagation" allows the comparison of model elements based not only on their own annotations, but also on annotations of surrounding elements in the network. One may either propagate feature vectors, describing the annotations of individual elements, or quantitative similarities between elements from different models. Based on semantic propagation, we align partially annotated models and find annotations for non-annotated model elements. Conclusions: Semantic propagation and model alignment are included in the open-source library semanticSBML, available on sourceforge. Online services for model alignment and for annotation prediction can be used at http://www.semanticsbml.org.
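
    The feature-vector variant of the propagation idea can be sketched as follows. This is an illustrative toy, not the semanticSBML implementation (the mixing rule, the alpha parameter, and the annotation identifier are invented): each element's feature vector is blended with its network neighbours', so un-annotated elements inherit annotation signal from their surroundings.

```python
def propagate(features, edges, alpha=0.5, rounds=3):
    """features: {node: {annotation: weight}}; edges: {node: [neighbours]}."""
    for _ in range(rounds):
        new = {}
        for node, neigh in edges.items():
            # keep a (1 - alpha) share of the node's own annotations
            mixed = {k: (1 - alpha) * v for k, v in features.get(node, {}).items()}
            # mix in an alpha share averaged over the neighbours
            for nb in neigh:
                for k, v in features.get(nb, {}).items():
                    mixed[k] = mixed.get(k, 0) + alpha * v / len(neigh)
            new[node] = mixed
        features = new
    return features

feats = {"A": {"GO:0008152": 1.0}, "B": {}}
edges = {"A": ["B"], "B": ["A"]}
result = propagate(feats, edges)
# "B" now carries part of A's annotation weight, so a non-annotated
# element can still be matched against elements in another model.
```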

  16. An ecologically based model of alcohol-consumption decision making: evidence for the discriminative and predictive role of contextual reward and punishment information.

    Science.gov (United States)

    Bogg, Tim; Finn, Peter R

    2009-05-01

    Using insights from Ecological Systems Theory and Reinforcement Sensitivity Theory, the current study assessed the utility of a series of hypothetical role-based alcohol-consumption scenarios that varied in their presentation of rewarding and punishing information. The scenarios, along with measures of impulsive sensation seeking and a self-report of weekly alcohol consumption, were administered to a sample of alcohol-dependent and non-alcohol-dependent college-age individuals (N = 170). The results showed scenario attendance decisions were largely unaffected by alcohol-dependence status and variations in contextual reward and punishment information. In contrast to the attendance findings, the results for the alcohol-consumption decisions showed alcohol-dependent individuals reported a greater frequency of deciding to drink, as well as indicating greater alcohol consumption in the contexts of complementary rewarding or nonpunishing information. Regression results provided evidence for the criterion-related validity of scenario outcomes in an account of diagnostic alcohol problems. The results are discussed in terms of the conceptual and predictive gains associated with an assessment approach to alcohol-consumption decision making that combines situational information organized and balanced through the frameworks of Ecological Systems Theory and Reinforcement Sensitivity Theory.

  17. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neuron or brain cortices based on biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information-processing view. PMID:27876847

  18. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express the neuron or brain cortices based on biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information-processing view.

  19. Conceptual Modeling of Events as Information Objects and Change Agents

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    as a totality of an information object and a change agent. When an event is modeled as an information object it is comparable to an entity that exists only at a specific point in time. It has attributes and can be used for querying and specification of constraints. When an event is modeled as a change agent...... it is comparable to an executable transaction schema. Finally, we briefly compare our approach to object-oriented approaches based on encapsulated objects....

  20. Energy Information Data Base: corporate author entries

    International Nuclear Information System (INIS)

    1980-03-01

    One of the controls for information entered into the data bases created and maintained by the DOE Technical Information Center is the standardized name for the corporate entity or the corporate author. The purpose of Energy Information Data Base: Corporate Author Entries (TID-4585-R1) and this supplemental list of authorized or standardized corporate entries is to provide a means for the consistent citing of the names of organizations in bibliographic records. In general, an entry in Corporate Author Entries consists of the seven-digit code number assigned to the particular corporate entity, the two-letter country code, the largest element of the corporate name, the location of the corporate entity, and the smallest element of the corporate name (if provided). This supplement [DOE/TIC-4585-R1(Suppl.5)] contains additions to the base document (TID-4585-R1) and is intended to be used with that publication.

  1. Information encryption systems based on Boolean functions

    Directory of Open Access Journals (Sweden)

    Aureliu Zgureanu

    2011-02-01

    Full Text Available An information encryption system based on Boolean functions is proposed. Information is processed using multidimensional matrices, performing logical operations on these matrices. The system's security rests on the complexity of constructing systems of Boolean functions that depend on many variables (tens and hundreds). Such a system of functions represents the private key; it varies both during the encryption and decryption of information and in the transition from one message to another.
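
    The basic mechanism can be illustrated with a toy far weaker than the proposed system (the particular Boolean function and three-bit key are invented for the example): a Boolean function of the key variables and the position generates a keystream bit, which is XORed with each message bit.

```python
def keystream_bit(key_bits, i):
    """A sample Boolean function of key variables and the position index."""
    a = key_bits[i % len(key_bits)]
    b = key_bits[(i + 1) % len(key_bits)]
    c = i & 1
    return (a & b) ^ c

def crypt(bits, key_bits):
    """XOR with the Boolean-function keystream; applying it twice inverts."""
    return [m ^ keystream_bit(key_bits, i) for i, m in enumerate(bits)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
key = [1, 0, 1]
cipher = crypt(msg, key)
assert crypt(cipher, key) == msg   # XOR with the same keystream decrypts
```

    A real instance would draw the keystream from a large, varying system of many-variable Boolean functions, which is where the claimed security comes from.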

  2. Improving information for community-based adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Huq, Saleemul

    2011-10-15

    Community-based adaptation aims to empower local people to cope with and plan for the impacts of climate change. In a world where knowledge equals power, you could be forgiven for thinking that enabling this type of adaptation boils down to providing local people with information. Conventional approaches to planning adaptation rely on 'expert' advice and credible 'science' from authoritative information providers such as the Intergovernmental Panel on Climate Change. But to truly support the needs of local communities, this information needs to be more site-specific, more user-friendly and more inclusive of traditional knowledge and existing coping practices.

  3. Information model of the 'Ukryttya' object

    International Nuclear Information System (INIS)

    Batij, E.V.; Ermolenko, A.A.; Kotlyarov, V.T.

    2008-01-01

    This paper describes the design principles and content of the 'Ukryttya' object information model developed at the Institute for Safety Problems of NPP. The system's client/server architecture (allowing simultaneous access by many users), together with Autodesk Map Guide and ASP.NET technologies, avoids the typical defects of single-user 'stand-alone desktop' information systems

  4. Mathematical models of information and stochastic systems

    CERN Document Server

    Kornreich, Philipp

    2008-01-01

    From ancient soothsayers and astrologists to today's pollsters and economists, probability theory has long been used to predict the future on the basis of past and present knowledge. Mathematical Models of Information and Stochastic Systems shows that the amount of knowledge about a system plays an important role in the mathematical models used to foretell the future of the system. It explains how this known quantity of information is used to derive a system's probabilistic properties. After an introduction, the book presents several basic principles that are employed in the remainder of the t

  5. The Esri 3D city information model

    International Nuclear Information System (INIS)

    Reitz, T; Schubiger-Banz, S

    2014-01-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing urban environments in 3D is an increasingly important and complex undertaking. To help solve this problem, Esri has released the ArcGIS for 3D Cities solution, which provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases.

  6. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Rasmussen, Martin; Ulrich, Thomas; Groth, Katrina; Smith, Curtis

    2016-01-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
    • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
    • Consideration of a PRA context
    • Incorporation of a solid psychological basis for operator performance
    • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
    This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  7. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rasmussen, Martin [Norwegian Univ. of Science and Technology, Trondheim (Norway). Social Research; Herberger, Sarah [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ulrich, Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
    • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
    • Consideration of a PRA context
    • Incorporation of a solid psychological basis for operator performance
    • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
    This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  8. Aligning building information model tools and construction management methods

    NARCIS (Netherlands)

    Hartmann, Timo; van Meerveld, H.J.; Vossebeld, N.; Adriaanse, Adriaan Maria

    2012-01-01

    Few empirical studies exist that can explain how different Building Information Model (BIM) based tool implementation strategies work in practical contexts. To help overcome this gap, this paper describes the implementation of two BIM based tools, the first, to support the activities at an

  9. Informed Systems: Enabling Collaborative Evidence Based Organizational Learning

    Directory of Open Access Journals (Sweden)

    Mary M. Somerville

    2015-12-01

    Full Text Available Objective – In response to unrelenting disruptions in academic publishing and higher education ecosystems, the Informed Systems approach supports evidence based professional activities to make decisions and take actions. This conceptual paper presents two core models, Informed Systems Leadership Model and Collaborative Evidence-Based Information Process Model, whereby co-workers learn to make informed decisions by identifying the decisions to be made and the information required for those decisions. This is accomplished through collaborative design and iterative evaluation of workplace systems, relationships, and practices. Over time, increasingly effective and efficient structures and processes for using information to learn further organizational renewal and advance nimble responsiveness amidst dynamically changing circumstances. Methods – The integrated Informed Systems approach to fostering persistent workplace inquiry has its genesis in three theories that together activate and enable robust information usage and organizational learning. The information- and learning-intensive theories of Peter Checkland in England, which advance systems design, stimulate participants’ appreciation during the design process of the potential for using information to learn. Within a co-designed environment, intentional social practices continue workplace learning, described by Christine Bruce in Australia as informed learning enacted through information experiences. In addition, in Japan, Ikujiro Nonaka’s theories foster information exchange processes and knowledge creation activities within and across organizational units. In combination, these theories promote the kind of learning made possible through evolving and transferable capacity to use information to learn through design and usage of collaborative communication systems with associated professional practices. Informed Systems therein draws from three antecedent theories to create an original

  10. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. 
The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  11. Language-based multimedia information retrieval

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Gauvain, J.L.; Hiemstra, Djoerd; Netter, K.

    2000-01-01

    This paper describes various methods and approaches for language-based multimedia information retrieval, which have been developed in the projects POP-EYE and OLIVE and which will be developed further in the MUMIS project. All of these projects aim at supporting automated indexing of video material

  12. Study on geo-information modelling

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana

    2006-01-01

    Roč. 5, č. 5 (2006), s. 1108-1113 ISSN 1109-2777 Institutional research plan: CEZ:AV0Z10750506 Keywords : control GIS * geo-information modelling * uncertainty * spatial temporal approach Web Services Subject RIV: BC - Control Systems Theory

  13. Modelling Dynamic Forgetting in Distributed Information Systems

    NARCIS (Netherlands)

    N.F. Höning (Nicolas); M.C. Schut

    2010-01-01

    We describe and model a new aspect in the design of distributed information systems. We build upon a previously described problem on the microlevel, which asks how quickly agents should discount (forget) their experience: If they cherish their memories, they can build their reports on
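
    One simple discounting rule that the microlevel question points at is an exponentially weighted average, where the gamma parameter sets how quickly old experience fades from an agent's report. This is an illustrative sketch of the trade-off, not the paper's actual model (the rule, gamma values, and data are invented):

```python
def discounted_estimate(observations, gamma=0.7):
    """Recency-weighted mean: estimate += (1 - gamma) * (obs - estimate)."""
    estimate = observations[0]
    for obs in observations[1:]:
        estimate += (1 - gamma) * (obs - estimate)
    return estimate

history = [1.0, 1.0, 1.0, 0.0, 0.0]              # the environment changed
fast = discounted_estimate(history, gamma=0.3)   # forgets quickly
slow = discounted_estimate(history, gamma=0.95)  # cherishes memories
# fast tracks the recent 0.0 observations, while slow still reports a
# value close to the old 1.0 regime.
```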

  14. Asset Condition, Information Systems and Decision Models

    CERN Document Server

    Willett, Roger; Brown, Kerry; Mathew, Joseph

    2012-01-01

    Asset Condition, Information Systems and Decision Models, is the second volume of the Engineering Asset Management Review Series. The manuscripts provide examples of implementations of asset information systems as well as some practical applications of condition data for diagnostics and prognostics. The increasing trend is towards prognostics rather than diagnostics, hence the need for assessment and decision models that promote the conversion of condition data into prognostic information to improve life-cycle planning for engineered assets. The research papers included here serve to support the on-going development of Condition Monitoring standards. This volume comprises selected papers from the 1st, 2nd, and 3rd World Congresses on Engineering Asset Management, which were convened under the auspices of ISEAM in collaboration with a number of organisations, including CIEAM Australia, Asset Management Council Australia, BINDT UK, and Chinese Academy of Sciences, Beijing University of Chemical Technology, Chin...

  15. Improving Agent Based Modeling of Critical Incidents

    Directory of Open Access Journals (Sweden)

    Robert Till

    2010-04-01

    Full Text Available Agent Based Modeling (ABM) is a powerful method that has been used to simulate potential critical incidents in the infrastructure and built environments. This paper will discuss the modeling of some critical incidents currently simulated using ABM and how they may be expanded and improved by using better physiological modeling, psychological modeling, modeling the actions of interveners, and introducing Geographic Information Systems (GIS) and open source models.
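
    The core ABM loop behind such incident simulations can be sketched minimally. This toy is not from the paper (the corridor geometry and speed ranges are invented; per-agent speed stands in crudely for the physiological modeling discussed): agents on a corridor move toward an exit each time step, and the loop counts evacuation time.

```python
import random

def evacuate(positions, speeds, max_steps=100):
    """Steps until every agent reaches the exit at position 0."""
    for step in range(1, max_steps + 1):
        # each agent advances toward the exit by its own speed
        positions = [max(0.0, p - s) for p, s in zip(positions, speeds)]
        if all(p == 0.0 for p in positions):
            return step
    return max_steps

random.seed(1)
positions = [random.uniform(5, 20) for _ in range(10)]   # start locations
speeds = [random.uniform(0.5, 1.5) for _ in range(10)]   # per-agent speed
steps = evacuate(positions, speeds)
```

    Richer models would add interactions between agents (congestion, panic, interveners) and replace the corridor with GIS-derived geometry.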

  16. Trust-based information system architecture for personal wellness.

    Science.gov (United States)

    Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd

    2014-01-01

Modern eHealth, ubiquitous health and personal wellness systems operate in an unsecure and ubiquitous information space where no predefined trust exists. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider, and to use it to design personal privacy policies for the trustworthy use of health and wellness services. For trust calculation, a novel set of measurable context-aware and health-information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service-provider-specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method, based on sequential steps that combine the results of the analysis of privacy and trust concerns with the selection of trust and privacy services, was used to develop the information system architecture. Its services (e.g. trust calculation, decision support, policy management and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.

  17. An Algebraic Approach to Knowledge Bases Informational Equivalence

    OpenAIRE

    Plotkin, B.; Plotkin, T.

    2003-01-01

    In this paper we study the notion of knowledge from the positions of universal algebra and algebraic logic. We consider first order knowledge which is based on first order logic. We define categories of knowledge and knowledge bases. These notions are defined for the fixed subject of knowledge. The key notion of informational equivalence of two knowledge bases is introduced. We use the idea of equivalence of categories in this definition. We prove that for finite models there is a clear way t...

  18. Evaluation of clinical information modeling tools.

    Science.gov (United States)

    Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak

    2016-11-01

Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus, and applied to evaluate the currently available tools. Of the 11 initiatives developing tools for clinical information modeling that were identified, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to the management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria, related to displaying semantic relationships between concepts and to communication with terminology servers, had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.

  19. METHODOLOGICAL APPROACH TO ANALYSIS AND EVALUATION OF INFORMATION PROTECTION IN INFORMATION SYSTEMS BASED ON VULNERABILITY DANGER

    Directory of Open Access Journals (Sweden)

    Y. M. Krotiuk

    2008-01-01

Full Text Available The paper considers a methodological approach to the analysis and estimation of information security in information systems, based on the analysis of vulnerabilities and the extent of their danger. By vulnerability danger is meant the complexity of exploiting the vulnerability as part of an information system. The necessary and sufficient conditions for exploiting a vulnerability are determined in the paper. The paper proposes a generalized model of attack realization, which is used as a basis for constructing an attack realization model for the exploitation of a particular vulnerability. A criterion for estimating information protection in information systems, based on the estimation of vulnerability danger, is formulated. The proposed approach makes it possible to obtain a quantitative estimation of information system security on the basis of the proposed schemes for the realization of typical attacks against the distinguished classes of vulnerabilities. The methodological approach is used for choosing among variants for the realization of protection mechanisms in information systems, as well as for estimating information safety in operating information systems.

  20. Web-based Construction Information Management System

    Directory of Open Access Journals (Sweden)

    David Scott

    2012-11-01

Full Text Available Centralised information systems that are accessible to all parties in a construction project are powerful tools in the quest to improve efficiency and to enhance the flow of information within the construction industry. This report points out the maturity of the necessary IT technology and the availability and suitability of existing commercial products. Some of these products have been studied and analysed. An evaluation and selection process based on the functions offered in the products and their utility is presented. A survey of local construction personnel has been used to collect the typical weighting data and performance criteria used in the evaluation process.

  1. Process and building information modelling in the construction industry by using information delivery manuals and model view definitions

    DEFF Research Database (Denmark)

    Karlshøj, Jan

    2012-01-01

The construction industry is gradually increasing its use of structured information and building information modelling. To date, the industry has suffered from the disadvantages of a project-based organizational structure and ad hoc solutions. Furthermore, it is not used to formalizing the flow...... of information and specifying exactly which objects and properties are needed for each process and which information is produced by the processes. The present study is based on reviewing the existing methodology of Information Delivery Manuals (IDM) from Buildingsmart, which is also ISO standard 29481...... Part 1; and the Model View Definition (MVD) methodology developed by Buildingsmart and BLIS. The research also includes a review of concrete IDM development projects that have been developed over the last five years. Although the study has identified interest in the IDM methodology in a number...

  2. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs

  3. Recommender system based on scarce information mining.

    Science.gov (United States)

    Lu, Wei; Chung, Fu-Lai; Lai, Kunfeng; Zhang, Liang

    2017-09-01

Guessing what a user may like is now a typical interface for video recommendation. Nowadays, the highly popular user-generated content sites provide various sources of information, such as tags, for recommendation tasks. Motivated by a real-world online video recommendation problem, this work targets the long-tail phenomena of user behavior and the sparsity of item features. A personalized compound recommendation framework for online video recommendation, called the Dirichlet mixture probit model for information scarcity (DPIS), is hence proposed. Assuming that each clicking sample is generated from a representation of user preferences, DPIS models the sample-level topic proportions as a multinomial item vector, and utilizes topical clustering on the user side for recommendation through a probit classifier. As demonstrated by the real-world application, the proposed DPIS achieves better performance in accuracy, perplexity, and diversity of coverage than traditional methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Information Modeling for Direct Control of Distributed Energy Resources

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob

    2013-01-01

    We present an architecture for an unbundled liberalized electricity market system where a virtual power plant (VPP) is able to control a number of distributed energy resources (DERs) directly through a two-way communication link. The aggregator who operates the VPP utilizes the accumulated...... a desired accumulated response. In this paper, we design such an information model based on the markets that the aggregator participates in and based on the flexibility characteristics of the remote controlled DERs. The information model is constructed in a modular manner making the interface suitable...

  5. Ontology-based information standards development

    OpenAIRE

    Heravi, Bahareh Rahmanzadeh

    2012-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Standards may be argued to be important enablers for achieving interoperability as they aim to provide unambiguous specifications for error-free exchange of documents and information. By implication, therefore, it is important to model and represent the concept of a standard in a clear, precise and unambiguous way. Although standards development organisations usually provide guidelines for th...

  6. Fast mutual-information-based contrast enhancement

    Science.gov (United States)

    Cao, Gang; Yu, Lifang; Tian, Huawei; Huang, Xianglin; Wang, Yongbin

    2017-07-01

Recently, T. Celik proposed an effective image contrast enhancement (CE) method based on spatial mutual information and PageRank (SMIRANK). According to state-of-the-art evaluation criteria, it achieves the best visual enhancement quality among existing global CE methods. However, SMIRANK runs much more slowly than its counterparts, such as histogram equalization (HE) and adaptive gamma correction. Low computational complexity is also required of a good CE algorithm. In this paper, we propose a fast SMIRANK algorithm, called FastSMIRANK. It integrates both spatial and gray-level downsampling into the generation of the pixel-value mapping function. Moreover, the computation of rank vectors is sped up by replacing PageRank with a simple yet efficient row-based operation on the mutual information matrix. Extensive experimental results show that the proposed FastSMIRANK accelerates SMIRANK by about 20 times and is even faster than HE, while preserving comparable enhancement quality.
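As a point of reference for the histogram equalization (HE) baseline that the abstract compares against, here is a minimal pure-Python sketch of global HE for 8-bit grayscale values. The sample pixel list is an invented illustration, and real implementations would operate on image arrays (e.g. via NumPy or OpenCV) rather than plain lists:

```python
# Minimal sketch of global histogram equalization (HE) for 8-bit gray levels.
# Pixel values are assumed to lie in 0..255; the input list is illustrative.

def histogram_equalize(pixels, levels=256):
    """Map each pixel value through the normalized cumulative histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Build the cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Standard HE mapping: stretch the CDF over the full gray-level range.
    mapping = [round((levels - 1) * c / n) for c in cdf]
    return [mapping[p] for p in pixels]

# A low-contrast toy "image" spreads out over the full dynamic range.
enhanced = histogram_equalize([10, 10, 50, 50, 200, 200, 200, 255])
print(enhanced)
```

This is the kind of single-pass, histogram-only computation that makes HE so much faster than rank-based methods such as SMIRANK.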

  7. An evaluation of web-based information.

    Science.gov (United States)

    Murphy, Rebecca; Frost, Susie; Webster, Peter; Schmidt, Ulrike

    2004-03-01

    To evaluate the quality of web-based information on the treatment of eating disorders and to investigate potential indicators of content quality. Two search engines were queried to obtain 15 commonly accessed websites about eating disorders. Two reviewers evaluated the characteristics, quality of content, and accountability of the sites. Intercorrelations between variables were calculated. The overall quality of the sites was poor based on the outcome measures used. All quality of content measures correlated with a measure of accountability (Silberg, W.M., Lundberg, G.D., & Mussachio, R.A., 1993). There is a lack of quality information on the treatment of eating disorders on the web. Although accountability criteria may be useful indicators of content quality, there is a need to investigate whether these can be usefully applied to other mental health areas. Copyright 2004 by Wiley Periodicals, Inc. Int J Eat Disord 35: 145-154, 2004.

  8. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

    Information plays a crucial role during the entire life-cycle of a product. It has been shown that engineers frequently consult colleagues to obtain the information they require to solve problems. However, the industrial world is now more transient and key personnel move to other companies...... or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown...... the recall, while maintaining the high precision, a learning approach that makes identification decisions based on a probability model, rather than simply looking up the presence of the pre-defined variations, looks promising. This paper presents the results of developing such a probability-based entity...

  9. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized, and advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data to the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used to characterize component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision-analytic interpretation is given to the decision making process. (author)
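The abstract's use of Bayes' theorem to recalculate component-availability figures when new information arrives can be illustrated with a minimal sketch. The prior and the likelihood values below are invented for illustration; they are not from the paper:

```python
# Hedged sketch: Bayesian update of a component-availability estimate,
# in the spirit of the rule-based decision model described above.
# All numbers are illustrative assumptions.

def bayes_update(prior_available, p_obs_given_available, p_obs_given_failed):
    """Return P(available | observation) via Bayes' theorem."""
    p_obs = (p_obs_given_available * prior_available
             + p_obs_given_failed * (1.0 - prior_available))
    return p_obs_given_available * prior_available / p_obs

# Prior: the component is available with probability 0.95.
# New evidence: an alarm that fires with probability 0.05 when the
# component is healthy and 0.90 when it has failed.
posterior = bayes_update(0.95, 0.05, 0.90)
print(round(posterior, 3))
```

A single piece of strong contrary evidence pulls the availability estimate down sharply, which is exactly the kind of recalculation the decision element would perform as new process information arrives.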

  10. Multiagent Based Information Dissemination in Vehicular Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    S.S. Manvi

    2009-01-01

Full Text Available Vehicular Ad hoc Networks (VANETs) are a compelling application of ad hoc networks because of the potential to access specific context information (e.g. traffic conditions, service updates, route planning) and to deliver multimedia services (Voice over IP, in-car entertainment, instant messaging, etc.). This paper proposes an agent-based information dissemination model for VANETs. A two-tier agent architecture is employed, comprising: 1) 'lightweight', network-facing, mobile agents; and 2) 'heavyweight', application-facing, norm-aware agents. The limitations of VANETs lead us to consider a hybrid wireless network architecture that includes Wireless LAN/Cellular and ad hoc networking for analyzing the proposed model. The proposed model provides flexibility, adaptability and maintainability for traffic information dissemination in VANETs, and supports robust and agile network management. The model has been simulated in various network scenarios to evaluate the effectiveness of the approach.

  11. Safety Case Development as an Information Modelling Problem

    Science.gov (United States)

    Lewis, Robert

    This paper considers the benefits from applying information modelling as the basis for creating an electronically-based safety case. It highlights the current difficulties of developing and managing large document-based safety cases for complex systems such as those found in Air Traffic Control systems. After a review of current tools and related literature on this subject, the paper proceeds to examine the many relationships between entities that can exist within a large safety case. The paper considers the benefits to both safety case writers and readers from the future development of an ideal safety case tool that is able to exploit these information models. The paper also introduces the idea that the safety case has formal relationships between entities that directly support the safety case argument using a methodology such as GSN, and informal relationships that provide links to direct and backing evidence and to supporting information.

  12. Information operation/information warfare modeling and simulation

    OpenAIRE

    Buettner, Raymond

    2000-01-01

    Information Operations have always been a part of warfare. However, this aspect of warfare is having ever-greater importance as forces rely more and more on information as an enabler. Modern information systems make possible very rapid creation, distribution, and utilization of information. These same systems have vulnerabilities that can be exploited by enemy forces. Information force-on-force is important and complex. New tools and procedures are needed for this warfare arena. As these t...

  13. Ontology for cell-based geographic information

    Science.gov (United States)

    Zheng, Bin; Huang, Lina; Lu, Xinhai

    2009-10-01

Interoperability is a key notion in geographic information science (GIS) for the sharing of geographic information (GI). It requires seamless translation among different information sources. Ontology is enrolled in GI discovery to settle semantic conflicts, because its natural-language appearance and logical hierarchical structure are considered able to provide better context for both human understanding and machine cognition in describing locations and relationships in the geographic world. At present, however, most studies on field ontology are deduced from philosophical themes and are not applicable to the raster representation in GIS, which is a kind of field-like phenomenon but does not physically coincide with the general concept of the philosophical field (which mostly derives from concepts in physics). That is why we specifically discuss cell-based GI ontology in this paper. The discussion starts with an investigation of the physical characteristics of cell-based raster GI. Then, a unified cell-based GI ontology framework for the recognition of raster objects is introduced, from which a conceptual interface connecting human epistemology and the computer world, the so-called "endurant-occurrant window", is developed for better raster GI discovery and sharing.

  14. Organization model and formalized description of nuclear enterprise information system

    International Nuclear Information System (INIS)

    Yuan Feng; Song Yafeng; Li Xudong

    2012-01-01

The organization model is one of the most important models of a Nuclear Enterprise Information System (NEIS). A scientific and reasonable organization model is a prerequisite for the robustness and extensibility of a NEIS, and is also the foundation for the integration of heterogeneous systems. Firstly, the paper describes the conceptual model of the NEIS in an ontology chart, which provides a consistent semantic framework for the organization. Then it discusses the relations between the concepts in detail. Finally, it gives a formalized description of the organization model of the NEIS based on a six-tuple array. (authors)

  15. Modeling Interoperable Information Systems with 3LGM² and IHE.

    Science.gov (United States)

    Stäubert, S; Schaaf, M; Jahn, F; Brandner, R; Winter, A

    2015-01-01

Strategic planning of information systems (IS) in healthcare requires descriptions of the current and the future IS state. Enterprise architecture planning (EAP) tools like the 3LGM² tool help to build up and to analyze IS models. A model of the planned architecture can be derived from an analysis of current-state IS models. Building an interoperable IS, i.e. an IS consisting of interoperable components, can be considered a relevant strategic information management goal for many IS in healthcare. Integrating the Healthcare Enterprise (IHE) is an initiative which targets interoperability by using established standards. The aims are: to link IHE concepts to 3LGM² concepts within the 3LGM² tool; to describe how an information manager can be supported in handling the complex IHE world and in planning interoperable IS using 3LGM² models; and to describe how developers or maintainers of IHE profiles can be supported by the representation of IHE concepts in 3LGM². Conceptualization and concept mapping methods are used to assign IHE concepts, such as domains, integration profiles, actors and transactions, to the concepts of the three-layer graph-based meta-model (3LGM²). IHE concepts were successfully linked to 3LGM² concepts. An IHE master model, i.e. an abstract model for IHE concepts, was modeled with the help of the 3LGM² tool. Two IHE domains were modeled in detail (ITI, QRPH). We describe two use cases for the representation of IHE concepts and IHE domains as 3LGM² models. Information managers can use the IHE master model as a reference model for modeling interoperable IS based on IHE profiles during EAP activities. IHE developers are supported in analyzing the consistency of IHE concepts with the help of the IHE master model and the functions of the 3LGM² tool. The complex relations between IHE concepts can be modeled by using the EAP method 3LGM². The 3LGM² tool offers visualization and analysis features which are now available for the IHE master model. Thus information managers and IHE

  16. Information and organization in public health institutes: an ontology-based modeling of the entities in the reception-analysis-report phases.

    Science.gov (United States)

    Pozza, Giandomenico; Borgo, Stefano; Oltramari, Alessandro; Contalbrigo, Laura; Marangon, Stefano

    2016-09-08

Ontologies are widely used both in the life sciences and in the management of public and private companies. Typically, the different offices in an organization develop their own models and related ontologies to capture specific tasks and goals. Although there might be overall coordination, the use of distinct ontologies can jeopardize the integration of data across the organization, since data sharing and reusability are sensitive to modeling choices. The paper provides a study of the entities that are typically found in the reception, analysis and report phases in public institutes in the life science domain. Ontological considerations and techniques are introduced, and their implementation is exemplified by studying the Istituto Zooprofilattico Sperimentale delle Venezie (IZSVe), a public veterinary institute with different geographical locations and several laboratories. Different modeling issues are discussed, such as the identification and characterization of the main entities in these phases; the classification of the (types of) data; and the clarification of the contexts and the roles of the involved entities. The study is based on a foundational ontology and shows how it can be extended to a comprehensive and coherent framework comprising the institute's different roles, processes and data. In particular, it shows how to use notions lying at the borderline between ontology and applications, like that of knowledge object. The paper aims to help the modeler to understand the core viewpoint of the organization and to improve data transparency. The study shows that the entities at play can be analyzed within a single ontological perspective, allowing us to isolate a single ontological framework for the whole organization. This facilitates the development of coherent representations of the entities and related data, and fosters the use of integrated software for data management and reasoning across the company.

  17. The Design and Implement of Tourism Information System Based on GIS

    Science.gov (United States)

    Chunchang, Fu; Nan, Zhang

Starting from the concept of a geographic information system (GIS), this paper discusses the main contents of geographic information systems and the key technologies of a GIS-based tourism information system. It then analyzes the specific requirements and goals of a tourism information system, and presents a relational database model and the methods for realizing the tourism information system within a GIS.

  18. Formal approach to modeling of modern Information Systems

    Directory of Open Access Journals (Sweden)

    Bálint Molnár

    2016-01-01

Full Text Available Most recently, the concept of the business document has started to play a double role. On one hand, a business document (a word-processing text or calculation sheet) can be used as a specification tool; on the other hand, the business document is an immanent constituent of business processes, and thereby an essential component of business Information Systems. The recent tendency is that the majority of documents and their contents within business Information Systems remain in semi-structured format, and a lesser part of the documents is transformed into schemas of structured databases. In order to keep the emerging situation in hand, we suggest the creation of (1) a theoretical framework for modeling business Information Systems; and (2) a design method for practical application, based on the theoretical model that provides the structuring principles. The modeling approach, which focuses on documents and their interrelationships with business processes, assists in perceiving the activities of modern Information Systems.

  19. An information search model for online social Networks - MOBIRSE

    Directory of Open Access Journals (Sweden)

    Miguel Angel Niño Zambrano

    2015-09-01

Full Text Available Online Social Networks (OSNs) have been gaining great importance among Internet users in recent years. These are sites where it is possible to meet people and to publish and share content in a way that is both easy and free of charge. As a result, the volume of information contained in these websites has grown exponentially, and web search has consequently become an important tool for users to easily find information relevant to their social networking objectives. Making use of ontologies and user profiles can make these searches more effective. This article presents a model for Information Retrieval in OSNs (MOBIRSE), based on user profiles and ontologies, which aims to improve the relevance of the information retrieved on these websites. The social network Facebook was chosen as the case study and as the instance of the proposed model. The model was validated using measures such as precision-at-k and the Kappa statistic to assess its efficiency.
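The precision-at-k measure used in this kind of retrieval validation can be sketched in a few lines. The result list and relevance judgments below are invented illustrations, not MOBIRSE data:

```python
# Hedged sketch: precision-at-k for a ranked retrieval result.
# The retrieved list and relevant set are made-up examples.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

retrieved = ["post3", "post1", "post7", "post2", "post9"]  # ranked results
relevant = {"post1", "post2", "post4"}                     # judged relevant
p_at_3 = precision_at_k(retrieved, relevant, 3)
print(p_at_3)
```

Because only the top-k positions are inspected, the measure rewards a system for ranking relevant items early, which matches the way users scan search results in an OSN.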

  20. Introduction of Building Information Modeling (BIM) Technologies in Construction

    Science.gov (United States)

    Milyutina, M. A.

    2018-05-01

The issues involved in introducing building information modeling (BIM) into the construction industry are considered in this work. The advantages of this approach and the prospects for a near-term transition to new technologies for design, construction process management, and operation are stated. The importance of developing pilot projects that identify the ways and means of verifying the regulatory and technical base, as well as the economic indicators of the transition to building information technologies in construction, is noted.

  1. Implementation of Web-based Information Systems in Distributed Organizations

    DEFF Research Database (Denmark)

    Bødker, Keld; Pors, Jens Kaaber; Simonsen, Jesper

    2004-01-01

    This article presents results elicited from studies conducted in relation to implementing a web-based information system throughout a large distributed organization. We demonstrate the kind of expectations and conditions for change that management face in relation to open-ended, configurable......, and context specific web-based information systems like Lotus QuickPlace. Our synthesis from the empirical findings is related to two recent models, the improvisational change management model suggested by Orlikowski and Hofman (1997), and Gallivan's (2001) model for organizational adoption and assimilation....... In line with comparable approaches from the knowledge management area (Dixon 2000; Markus 2001), we relate to, refine, and operationalize the models from an overall organizational view by identifying and characterizing four different and general implementation contexts...

  2. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    Science.gov (United States)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of `HAMIE the robot' who offers varying levels of information understanding dependent on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  3. Infographic Modeling Based on 3D Laser Surveying for Informed Universal Design in Archaeological Areas: the Case of Oppidum of the Ancient City of Tusculum

    Science.gov (United States)

    Cemoli, L.; D'Auria, S.; De Silla, F.; Pucci, S.; Strollo, R. M.

    2017-08-01

    The valorisation of archaeological sites is a fundamental action for the social and economic development of a country. An archaeological park is often a territory characterized by significant testimonies of antiquity and of great landscape value. For this reason, it should be configured as an authentic outdoor museum, enriched by natural, environmental, architectural and urban components. Fulfilling these requirements demands a coherent scientific project of preservation, fruition and valorisation of the area, one that merges the different components necessary for the establishment of an archaeological museum-park. One of the most critical aspects related to the fruition of archaeological sites is the accessibility of areas and routes, which were not always - if ever - designed for people with reduced mobility, whether permanent or temporary (for example the elderly, the obese, or the visually impaired). In general, an established principle in new design is to address the widest possible range of users, in accordance with the international guidelines summarized in the concept of Universal Design. In particular, this paper presents the use of three-dimensional models obtained from laser scanning surveys for the design of walking trails for people with reduced mobility in the Tusculum Archaeological-Cultural Park. The work was based on a three-dimensional survey with terrestrial laser scanning for the construction and control of the complex morphology of the site, and on the subsequent integration of models of the intervention into the three-dimensional "as-built" reality of the site. The resulting infographic model made it possible to study and simulate the impact of the routes for people with reduced mobility, and to verify their efficiency in the historical and landscape context. Moreover, it was possible to verify the placement of other facilities under the real conditions of the site.

  4. INFOGRAPHIC MODELING BASED ON 3D LASER SURVEYING FOR INFORMED UNIVERSAL DESIGN IN ARCHAEOLOGICAL AREAS: THE CASE OF OPPIDUM OF THE ANCIENT CITY OF TUSCULUM

    Directory of Open Access Journals (Sweden)

    L. Cemoli

    2017-08-01

    Full Text Available The valorisation of archaeological sites is a fundamental action for the social and economic development of a country. An archaeological park is often a territory characterized by significant testimonies of antiquity and of great landscape value. For this reason, it should be configured as an authentic outdoor museum, enriched by natural, environmental, architectural and urban components. Fulfilling these requirements demands a coherent scientific project of preservation, fruition and valorisation of the area, one that merges the different components necessary for the establishment of an archaeological museum-park. One of the most critical aspects related to the fruition of archaeological sites is the accessibility of areas and routes, which were not always - if ever - designed for people with reduced mobility, whether permanent or temporary (for example the elderly, the obese, or the visually impaired). In general, an established principle in new design is to address the widest possible range of users, in accordance with the international guidelines summarized in the concept of Universal Design. In particular, this paper presents the use of three-dimensional models obtained from laser scanning surveys for the design of walking trails for people with reduced mobility in the Tusculum Archaeological-Cultural Park. The work was based on a three-dimensional survey with terrestrial laser scanning for the construction and control of the complex morphology of the site, and on the subsequent integration of models of the intervention into the three-dimensional "as-built" reality of the site. The resulting infographic model made it possible to study and simulate the impact of the routes for people with reduced mobility, and to verify their efficiency in the historical and landscape context. Moreover, it was possible to verify the placement of other facilities under the real conditions of the site.

  5. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

    The Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are under deployment, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application, optimizing electricity demand by curtailing or shifting power load when peaks occur. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources as the Smart Grid information space grows. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor building heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of this diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show that the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.
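    As a rough illustration of the integration pattern described in this record, the sketch below represents device metadata as subject-predicate-object triples and answers a cross-source query over them. The class and property names (SmartMeter, feedsBuilding, etc.) are invented for illustration and are not taken from the paper's ontology; a real implementation would use an RDF store rather than Python lists.

```python
# Minimal sketch of semantic-style information integration for demand response.
# All class and property names here are illustrative, not the paper's ontology.

triples = [
    ("meter42", "rdf:type", "SmartMeter"),
    ("meter42", "feedsBuilding", "bldgA"),
    ("bldgA", "rdf:type", "CommercialBuilding"),
    ("hvac7", "monitorsBuilding", "bldgA"),
    ("hvac7", "rdf:type", "HVACSystem"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Integrate across sources: which devices relate to building 'bldgA'?
devices = {s for s, p, o in query(triples, o="bldgA")}
print(devices)  # the smart meter and the HVAC unit linked to the building
```

    The point of the triple representation is exactly the extensibility the abstract argues for: a new information source (say, a weather feed) adds triples without changing the query machinery.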

  6. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

    Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and the underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods most used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis at the voxel level. It has good scalability properties, and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to place the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens the way for rapid analysis and visualization of the data on different spatial levels, as well as for automatically finding a suitable number of decomposition components.
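    The first half of the pipeline described here (pairwise mutual information between voxel time series, converted into a distance matrix) can be sketched in a few lines. This is a minimal illustration with a simple histogram MI estimator and three synthetic "voxels"; the distance transform is an assumption of ours, and MDS plus clustering would then operate on the matrix D.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram-based mutual information estimate (in bits)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
t = rng.normal(size=500)                              # shared "network" signal
voxels = np.array([t + 0.1 * rng.normal(size=500),    # two coupled voxels
                   t + 0.1 * rng.normal(size=500),
                   rng.normal(size=500)])             # one independent voxel

n = len(voxels)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # high mutual information -> small distance (one possible transform)
        D[i, j] = 1.0 / (1.0 + mutual_info(voxels[i], voxels[j]))
print(D)  # coupled voxels end up much closer to each other than to the third
```

    In the full method, D would be embedded with multidimensional scaling and the embedded points clustered; here the two coupled voxels already stand out as a tight pair.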

  7. INFORMATIONAL MODEL OF MENTAL ROTATION OF FIGURES

    Directory of Open Access Journals (Sweden)

    V. A. Lyakhovetskiy

    2016-01-01

    Full Text Available Subject of Study. The subject of research is the information structure of objects' internal representations, and the operations over them that people use to solve the problem of mental rotation of figures. To analyze this informational structure we considered not only the classical dependencies of correct answers on the angle of rotation, but also other dependencies obtained recently in cognitive psychology. Method. The technical computing language Matlab R2010b was used to develop the information model of mental rotation of figures. Model parameters such as the number of bits in the internal representation, the error probability for a single bit, the discrete rotation angle, the comparison threshold, and the degree of difference during rotation can be changed. Main Results. The model qualitatively reproduces such psychological dependencies as the linear increase of the time of correct answers and of the number of errors with the angle of rotation for identical figures, and the "flat" dependence of the time of correct answers and the number of errors on the angle of rotation for mirrored figures. The simulation results suggest that mental rotation is an iterative process of finding a match between two figures, each step of which can significantly distort the internal representation of the stored objects. Matching is carried out on internal representations that are not highly invariant to rotation angle. Practical Significance. The results may be useful for understanding the role of learning (including supervised learning) in the development of effective information representations and operations over them in artificial intelligence systems.
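    The iterative match-and-rotate process the results suggest can be caricatured as follows. The abstract describes the model only qualitatively, so everything below (bit-vector figures, cyclic shift as a rotation step, per-step bit-flip noise, Hamming comparison) is our own simplified assumption, not the paper's implementation.

```python
import random

def mental_rotation(figure, target, step_err=0.0, max_steps=12, threshold=0):
    """Iteratively 'rotate' an internal bit representation one discrete step
    at a time, comparing against the target after each step; every step may
    corrupt bits with probability step_err. Returns (matched, steps taken).
    All parameter names are illustrative."""
    rng = random.Random(42)
    current = list(figure)
    for step in range(max_steps + 1):
        if sum(a != b for a, b in zip(current, target)) <= threshold:
            return True, step
        current = current[1:] + current[:1]           # one discrete rotation step
        current = [b ^ 1 if rng.random() < step_err else b
                   for b in current]                  # accumulated representation noise
    return False, max_steps

figure = [1, 0, 1, 1, 0, 0, 1, 0]
target = figure[3:] + figure[:3]      # the same figure, rotated by three steps
print(mental_rotation(figure, target))  # (True, 3): matched after 3 steps
```

    Raising step_err makes each rotation step distort the stored representation, so larger rotation angles fail more often, qualitatively matching the error-versus-angle dependency the model reproduces.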

  8. Design and analysis of information model hotel complex

    Directory of Open Access Journals (Sweden)

    Garyaev Nikolai

    2016-01-01

    Full Text Available The article analyzes innovation in 3D modeling and the development of process design approaches based on visualization of information technology and computer-aided design systems. It also discusses the problems arising in modern design and approaches to addressing them.

  9. Information behavior versus communication: application models in multidisciplinary settings

    Directory of Open Access Journals (Sweden)

    Cecília Morena Maria da Silva

    2015-05-01

    Full Text Available This paper deals with information behavior as support for models of communication design in the areas of Information Science, Librarianship and Music. The proposed communication models are based on the models of Tubbs and Moss (2003), Garvey and Griffith (1972) as adapted by Hurd (1996), and Wilson (1999). Accordingly, the following questions arose: (i) what informational skills are required of librarians who act as mediators in the scholarly communication process, and what is users' informational behavior in the educational environment?; (ii) what are the needs of music-related researchers, and how do they produce, seek, use and access the scientific knowledge of their area?; and (iii) how do the contexts involved in scientific collaboration processes influence scientific production in the information science field in Brazil? The article includes a literature review on information behavior and its place in scientific communication, considering the influence of the context and/or situation of the objects involved in the motivating issues. The hypothesis is that user information behavior in different contexts and situations influences the definition of a scientific communication model. Finally, it is concluded that the same concept or set of concepts can be used from different perspectives, thus reaching different results.

  10. How informative are slip models for aftershock forecasting?

    Science.gov (United States)

    Bach, Christoph; Hainzl, Sebastian

    2013-04-01

    Coulomb stress changes (ΔCFS) have been recognized as a major trigger mechanism for earthquakes; in particular, aftershock distributions and the spatial patterns of ΔCFS are often found to be correlated. However, the Coulomb stress calculations are based on slip inversions and receiver fault mechanisms, both of which contain large uncertainties. In particular, slip inversions are usually non-unique and often differ strongly for the same earthquake. Here we address the information content of those inversions with respect to aftershock forecasting. We therefore compare the slip models to randomized fractal slip models which are constrained only by fault information and moment magnitude. The uncertainty of the aftershock mechanisms is accounted for by using many receiver fault orientations, and by calculating ΔCFS at several depth layers. The stress change is then converted into an aftershock probability map using a clock advance model. To estimate the information content of the slip models, we use an Epidemic Type Aftershock Sequence (ETAS) model approach introduced by Bach and Hainzl (2012), in which the spatial probability density of direct aftershocks is related to the ΔCFS calculations. Besides the directly triggered aftershocks, this approach also takes secondary aftershock triggering into account. We quantify our results by calculating the information gain of the randomized slip models relative to the corresponding published slip model. As case studies, we investigate the aftershock sequences of several well-known main shocks such as 1992 Landers, 1999 Hector Mine, 2004 Parkfield, and 2002 Denali. First results show large differences in the information content of slip models. For some cases up to 90% of the random slip models are found to perform better than the originally published model, while for other cases only a few random models perform better than the published slip model.
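    One common way to quantify such an information gain is the mean log-likelihood gain per observed aftershock of one spatial forecast over a reference; the study's exact formulation may differ, and the probability values below are purely hypothetical.

```python
import math

def information_gain(p_model, p_ref):
    """Mean log-likelihood gain per observed aftershock of one spatial
    forecast over a reference (a common forecast-comparison measure;
    the study's exact definition may differ)."""
    assert len(p_model) == len(p_ref)
    return sum(math.log(pm / pr) for pm, pr in zip(p_model, p_ref)) / len(p_model)

# Probability densities each model assigns to the observed aftershock locations
# (hypothetical numbers for four aftershocks):
published = [0.20, 0.05, 0.10, 0.15]    # published slip model
randomized = [0.25, 0.10, 0.10, 0.20]   # one randomized fractal slip model
g = information_gain(randomized, published)
print(g)  # positive: this randomized model forecasts the observed events better
```

    A positive gain for a large fraction of randomized models is exactly the situation the abstract reports for some of the case studies.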

  11. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

    Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
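    A minimal sketch of the PRCC computation described here: rank-transform inputs and output, remove the linear effect of the other parameters from both, and correlate the residuals. Plain uniform sampling stands in for a true LHS design, and the toy "model output" replaces an actual SWMM run.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    rank-transform everything, then correlate the residuals left after
    regressing out the other parameters."""
    Xr = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # per-column ranks
    yr = np.argsort(np.argsort(y)).astype(float)
    out = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(float(np.corrcoef(rx, ry)[0, 1]))
    return out

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))     # stand-in for LHS samples of 3 parameters
y = 5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # toy model output
r = prcc(X, y)
print(r)  # first parameter dominates; third (irrelevant) is near zero
```

    The ranking by |PRCC| is then what identifies the few key parameters the abstract refers to.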

  12. Secure information transfer based on computing reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Szmoski, R.M.; Ferrari, F.A.S. [Department of Physics, Universidade Estadual de Ponta Grossa, 84030-900, Ponta Grossa (Brazil); Pinto, S.E. de S, E-mail: desouzapinto@pq.cnpq.br [Department of Physics, Universidade Estadual de Ponta Grossa, 84030-900, Ponta Grossa (Brazil); Baptista, M.S. [Institute for Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen (United Kingdom); Viana, R.L. [Department of Physics, Universidade Federal do Parana, 81531-990, Curitiba, Parana (Brazil)

    2013-04-01

    There is a broad area of research to ensure that information is transmitted securely. Within this scope, chaos-based cryptography takes a prominent role due to its nonlinear properties. Using these properties, we propose a secure mechanism for transmitting data that relies on chaotic networks. We use a nonlinear on–off device to cipher the message, and the transfer entropy to retrieve it. We analyze the system capability for sending messages, and we obtain expressions for the operating time. We demonstrate the system efficiency for a wide range of parameters. We find similarities between our method and the reservoir computing.
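    The retrieval step in this scheme relies on transfer entropy. A minimal plug-in estimator with history length 1 (a generic estimator on symbol sequences, not the paper's chaotic-network construction) can be sketched as follows.

```python
from collections import Counter
import math

def transfer_entropy(x, y):
    """Transfer entropy TE(X -> Y) in bits for symbol sequences,
    estimated from plain counts with history length 1."""
    n = len(y) - 1
    c_xyy = Counter((y[t + 1], y[t], x[t]) for t in range(n))
    c_xy = Counter((y[t], x[t]) for t in range(n))
    c_yy = Counter((y[t + 1], y[t]) for t in range(n))
    c_y = Counter(y[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in c_xyy.items():
        p = c / n                                   # p(y1, y0, x0)
        p_cond_full = c / c_xy[(y0, x0)]            # p(y1 | y0, x0)
        p_cond_self = c_yy[(y1, y0)] / c_y[y0]      # p(y1 | y0)
        te += p * math.log2(p_cond_full / p_cond_self)
    return te

# The receiver's signal copies the sender with one step of delay, so
# information flows from x to y but much less in the reverse direction.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

    In the paper's setting the asymmetry of this quantity is what lets the receiver recover the ciphered message from the network dynamics.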

  13. INIS information retrieval based on IBM's IRMS

    International Nuclear Information System (INIS)

    Gadjokov, V.; Schmid, H.; Del Bigio, G.

    1975-01-01

    An information retrieval system for the INIS data base is described. It allows for batch processing on an IBM/360 or /370 computer operated under OS or VS. The program package consists basically of IBM's IRMS system which was converted from DOS to OS and adapted for INIS requirements. Sections 1-9 present the system from the user's point of view, deliberately omitting all the programming details. Program descriptions with data set definitions and file formats are given in sections 10-12. (author)

  14. Implications of Information Theory for Computational Modeling of Schizophrenia.

    Science.gov (United States)

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
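    The foundational quantity behind several of the concepts listed above is Shannon entropy. A minimal plug-in estimate from an observed symbol sequence, offered purely as an illustration of the metric, not as anything specific to the article:

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Shannon entropy (bits per symbol) of an observed symbol sequence."""
    n = len(seq)
    return sum((c / n) * math.log2(n / c) for c in Counter(seq).values())

# A perfectly predictable signal carries no information; a uniform one is maximal.
print(entropy_bits("aaaaaaaa"))  # 0.0
print(entropy_bits("abababab"))  # 1.0
print(entropy_bits("aabbccdd"))  # 2.0
```

    Metrics of this kind are what the authors propose for evaluating the integrity of information processing, e.g. comparing the effective entropy of neural responses across conditions.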

  15. A cloud-based information repository for bridge monitoring applications

    Science.gov (United States)

    Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.

    2016-04-01

    This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, particularly with instrumented sensors, collects a significant amount of data. In addition to sensor data, a wide variety of information such as bridge geometry, analysis models and sensor descriptions needs to be stored. Data management plays an important role in facilitating data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed as a means to enable integration and facilitate interoperability, current BrIM standards support mostly information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software package widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for the data repository. Cloud service infrastructure is deployed to enhance the scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.
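    The kind of extended record such a repository stores (geometry plus analysis model plus sensor descriptions) can be sketched as a single NoSQL-style document. All field names below are invented for illustration; they are not the actual OpenBrIM or SensorML tags.

```python
# Illustrative document-style record combining bridge geometry, analysis-model
# and sensor metadata, in the spirit of the extended BrIM schema described above.
bridge_doc = {
    "bridge": {"name": "Telegraph Road Bridge", "spans": 3},
    "analysis_model": {"tool": "CSI Bridge", "elements": 412},
    "sensors": [
        {"id": "acc-01", "type": "accelerometer", "location": "midspan"},
        {"id": "str-02", "type": "strain gauge", "location": "girder G2"},
    ],
}

def sensors_of_type(doc, sensor_type):
    """Simple query over the document, as a NoSQL find() might express it."""
    return [s["id"] for s in doc["sensors"] if s["type"] == sensor_type]

print(sensors_of_type(bridge_doc, "accelerometer"))  # ['acc-01']
```

    Because the document is schema-flexible, new entity types (e.g. a weather station description) can be added without migrating existing records, which is one reason NoSQL stores suit this kind of heterogeneous monitoring data.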

  16. Multi-UAV Doppler Information Fusion for Target Tracking Based on Distributed High Degrees Information Filters

    Directory of Open Access Journals (Sweden)

    Hamza Benzerrouk

    2018-03-01

    Full Text Available Multi-Unmanned Aerial Vehicle (UAV) Doppler-based target tracking has not been widely investigated, specifically when using modern nonlinear information filters. A high-degree Gauss–Hermite information filter, as well as a seventh-degree cubature information filter (CIF), is developed to improve on the fifth-degree and third-degree CIFs proposed in the most recent related literature. These algorithms are applied to maneuvering target tracking based on radar Doppler range/range-rate signals. To achieve this purpose, different measurement models such as range-only, range-rate, and bearing-only tracking are used in the simulations. In this paper, the mobile sensor target tracking problem is addressed and solved by a higher-degree class of quadrature information filters (HQIFs). A centralized fusion architecture based on distributed information filtering is proposed and yields excellent results. Three highly dynamic UAVs are simulated, with synchronized Doppler measurements broadcast in parallel channels to the control center for global information fusion. Interesting results are obtained, demonstrating the superiority of certain classes of higher-degree quadrature information filters.
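    The property that makes centralized fusion of distributed information filters attractive is that estimates simply add in information form. The toy linear-Gaussian example below shows that generic mechanism only; it is not the paper's high-degree quadrature filters, and all numbers are made up.

```python
import numpy as np

def to_information(x, P):
    """Convert a (mean, covariance) estimate to information form:
    information matrix Y = P^-1 and information vector y = P^-1 x."""
    Y = np.linalg.inv(P)
    return Y, Y @ x

# Two UAVs report local estimates of the same 2-state target:
x1, P1 = np.array([10.0, 1.0]), np.diag([4.0, 1.0])   # UAV 1 (uncertain in state 0)
x2, P2 = np.array([12.0, 1.2]), np.diag([1.0, 4.0])   # UAV 2 (uncertain in state 1)

Y1, y1 = to_information(x1, P1)
Y2, y2 = to_information(x2, P2)
Y, y = Y1 + Y2, y1 + y2             # centralized fusion is just a sum
P_fused = np.linalg.inv(Y)
x_fused = P_fused @ y
print(x_fused)   # lies between the two estimates, weighted by confidence
```

    The fused covariance is smaller than either contributor's, and each state component is pulled toward the UAV that measured it more confidently; the same additivity is what lets many UAVs broadcast in parallel to the control center.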

  17. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  18. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    Full Text Available This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of the algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking capabilities of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal problem solution with fewer target evaluations, improve optimization effectiveness, and achieve effective fusion of information.

  19. Presenting a model for display and user interface specifications of web based OPACs on the basis of available universal standards and experts views in order to compare the Iranian library and information centers OPACs

    OpenAIRE

    Zavaraqi, Rasoul

    2005-01-01

    The aim of this study is to present a model for the display and user interface specifications of web-based OPACs on the basis of available universal standards and experts' views, in order to compare the current Iranian library and information centers' OPACs. Three methods were used for data collection in this research: literature review, a survey of opinions by means of a checklist, and evaluation of the available web-based OPACs. The community of Iranian experts in OPAC issues and all of 6 available ...

  20. A review of building information modelling

    Science.gov (United States)

    Wang, Wen; Han, Rui

    2018-05-01

    Building Information Modelling (BIM) is widely seen as a catalyst for innovation and productivity. It is becoming standard for new construction and is the most significant technology changing how we design, build, use and manage buildings. It is a dominant technological trend in the software industry and, although the theoretical groundwork was laid in the previous century, it remains a popular topic in academic research. This study discusses BIM, and its results can provide better and more comprehensive choices for building owners, designers, and developers in the future.

  1. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

    Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems, applying the model-driven development method to a component of the model theory approach. The experiment showed that the method reduced development effort by more than 30%.

  2. Building information modelling (BIM): now and beyond

    Directory of Open Access Journals (Sweden)

    Salman Azhar

    2015-10-01

    Full Text Available Building Information Modeling (BIM), also called n-D Modeling or Virtual Prototyping Technology, is a revolutionary development that is quickly reshaping the Architecture-Engineering-Construction (AEC) industry. BIM is both a technology and a process. The technology component of BIM helps project stakeholders to visualize what is to be built in a simulated environment and to identify any potential design, construction or operational issues. The process component enables close collaboration and encourages integration of the roles of all stakeholders on a project. The paper presents an overview of BIM with a focus on its core concepts, applications in the project life cycle, and benefits for project stakeholders, with the help of case studies. The paper also elaborates on risks and barriers to BIM implementation and on future trends.

  3. Building information modelling (BIM): now and beyond

    Directory of Open Access Journals (Sweden)

    Salman Azhar

    2012-12-01

    Full Text Available Building Information Modeling (BIM), also called n-D Modeling or Virtual Prototyping Technology, is a revolutionary development that is quickly reshaping the Architecture-Engineering-Construction (AEC) industry. BIM is both a technology and a process. The technology component of BIM helps project stakeholders to visualize what is to be built in a simulated environment and to identify any potential design, construction or operational issues. The process component enables close collaboration and encourages integration of the roles of all stakeholders on a project. The paper presents an overview of BIM with a focus on its core concepts, applications in the project life cycle, and benefits for project stakeholders, with the help of case studies. The paper also elaborates on risks and barriers to BIM implementation and on future trends.

  4. CONCEPTUAL MODEL OF INFORMATION SYSTEM OF THE AGRICULTURAL ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Uladzimir Buts

    2017-02-01

    Full Text Available Abstract. The research subject is represented by the theoretical and practical issues of using information resources in agricultural business. The research aim is the formation of a conceptual model of an information system for agricultural enterprises according to the requirements of sustainable development. Research methods. The work is prepared on the basis of several scientific methods and approaches, including monographic, analytical, and computational-constructive methods of mathematical and structural-logic simulation of information systems. Research results. Based on the assessment of the results of research on information systems in agribusiness, as reflected in the theoretical review, the author designed the principles of an information system for the agricultural enterprise aimed at sustainable development of agribusiness. Sphere of application of the research results. State and regional authorities of economic regulation; agricultural enterprises and farmers.

  5. A Compositional Relevance Model for Adaptive Information Retrieval

    Science.gov (United States)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant to the current problem-solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However, most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and requires neither a priori statistical computation nor an extended training period. It is currently being implemented in the Computer Integrated Documentation system, which enables integration of various technical documents in a hypertext framework.
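    The record-feedback-then-generalize behavior described above can be caricatured in a few lines. The real network's structure and update rules are not specified in the abstract, so the representation below (context features as set elements, additive weights) is purely our assumption.

```python
# Toy sketch of context-dependent relevance learned from user feedback,
# loosely in the spirit of the compositional relevance network described above.
from collections import defaultdict

relevance = defaultdict(float)   # (reference, context-feature) -> learned weight

def feedback(reference, context, useful):
    """Reinforce or weaken a reference for each feature of the current context."""
    for feature in context:
        relevance[(reference, feature)] += 1.0 if useful else -1.0

def score(reference, context):
    """Generalize: sum learned weights over the features of a (possibly new) context."""
    return sum(relevance[(reference, f)] for f in context)

feedback("doc-17", {"task:diagnosis", "role:technician"}, useful=True)
feedback("doc-17", {"task:diagnosis", "role:manager"}, useful=True)
feedback("doc-99", {"task:diagnosis", "role:technician"}, useful=False)

# A new, partially overlapping context still ranks doc-17 above doc-99:
ctx = {"task:diagnosis", "role:engineer"}
print(score("doc-17", ctx), score("doc-99", ctx))  # 2.0 -1.0
```

    No offline statistics or training phase is needed: weights accumulate incrementally from feedback, which mirrors the low-cost, no-pretraining property claimed for the model.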

  6. Multiscale information modelling for heart morphogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Abdulla, T; Imms, R; Summers, R [Department of Electronic and Electrical Engineering, Loughborough University, Loughborough (United Kingdom); Schleich, J M, E-mail: T.Abdulla@lboro.ac.u [LTSI Signal and Image Processing Laboratory, University of Rennes 1, Rennes (France)

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  7. Multiscale information modelling for heart morphogenesis

    International Nuclear Information System (INIS)

    Abdulla, T; Imms, R; Summers, R; Schleich, J M

    2010-01-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  8. Physically-based Canopy Reflectance Model Inversion of Vegetation Biophysical-Structural Information from Terra-MODIS Imagery in Boreal and Mountainous Terrain for Ecosystem, Climate and Carbon Models using the BIOPHYS-MFM Algorithm

    Science.gov (United States)

    Peddle, D. R.; Hall, F.

    2009-12-01

    The BIOPHYS algorithm provides innovative and flexible methods for the inversion of canopy reflectance models (CRM) to derive essential biophysical structural information (BSI) for quantifying vegetation state and disturbance, and for input to ecosystem, climate and carbon models. Based on spectral, angular, temporal and scene geometry inputs that can be provided or automatically derived, the BIOPHYS Multiple-Forward Mode (MFM) approach generates look-up tables (LUTs) that comprise reflectance data, structural inputs over specified or computed ranges, and the associated CRM output from forward-mode runs. Image pixel and model LUT spectral values are then matched, and the BSI corresponding to the LUT matches is output as the retrieval result. BIOPHYS-MFM has been extensively used with agencies in Canada and the USA over the past decade (Peddle et al 2000-09; Soenen et al 2005-09; Gamon et al 2004; Cihlar et al 2003), such as CCRS, CFS, AICWR, NASA LEDAPS, BOREAS and MODIS Science Teams, and for the North American Carbon Program. The algorithm generates BSI products such as land cover, biomass, stand volume, stem density, height, crown closure, leaf area index (LAI) and branch area, crown dimension, productivity, topographic correction, structural change from harvest, forest fires and mountain pine beetle damage, and water/hydrology applications. BIOPHYS-MFM has been applied in different locations in Canada (six provinces from Newfoundland to British Columbia) and the USA (NASA COVER, MODIS and LEDAPS sites) using 7 different CRM models and a variety of imagery (e.g. MODIS, Landsat, SPOT, IKONOS, airborne MSV, MMR, casi, Probe-1, AISA). In this paper we summarise the BIOPHYS-MFM algorithm and results from Terra-MODIS imagery from MODIS validation sites at Kananaskis, Alberta, in the Canadian Rocky Mountains, and from the Boreal Ecosystem Atmosphere Study (BOREAS) in Saskatchewan, Canada. At the montane Rocky Mountain site, BIOPHYS-MFM density estimates were within
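The forward-mode LUT construction and spectral-matching step described in this record can be sketched as follows. The toy one-argument-per-parameter canopy reflectance model and the two structural parameters (LAI, crown cover) are hypothetical stand-ins for the actual CRMs and parameter ranges the algorithm uses:

```python
def build_lut(crm, lai_grid, cover_grid):
    """Run the canopy reflectance model in forward mode over a grid of
    structural parameters, recording (parameters, simulated spectrum)."""
    rows = []
    for lai in lai_grid:
        for cover in cover_grid:
            rows.append(((lai, cover), crm(lai, cover)))
    return rows

def invert(pixel, lut):
    """Retrieve structure for an image pixel by nearest spectral match
    (RMSE over bands) against the forward-mode runs in the LUT."""
    def rmse(spectrum):
        return (sum((s - p) ** 2 for s, p in zip(spectrum, pixel))
                / len(pixel)) ** 0.5
    return min(lut, key=lambda row: rmse(row[1]))[0]
```

The same pattern extends to more parameters and more spectral bands; only the grid loops and the spectrum length change.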

  9. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
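A minimal instance of such a statistical language model for retrieval is the query-likelihood model with Jelinek-Mercer smoothing, sketched below. This is a textbook formulation, not necessarily the exact model the article introduces:

```python
import math
from collections import Counter

def score(query, doc, collection, lam=0.5):
    """Query likelihood with Jelinek-Mercer smoothing:
    log P(q|d) = sum_t log( lam*P(t|d) + (1-lam)*P(t|collection) ),
    where documents are token lists and `collection` is all tokens."""
    d, c = Counter(doc), Counter(collection)
    dl, cl = len(doc), len(collection)
    total = 0.0
    for t in query:
        p = lam * d[t] / dl + (1 - lam) * c[t] / cl
        if p == 0:  # term absent from the entire collection
            return float("-inf")
        total += math.log(p)
    return total
```

Documents are then ranked by this score for a given query; smoothing ensures a document is not eliminated merely because it lacks one query term.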

  10. Building information models for astronomy projects

    Science.gov (United States)

    Ariño, Javier; Murga, Gaizka; Campo, Ramón; Eletxigerra, Iñigo; Ampuero, Pedro

    2012-09-01

    A Building Information Model is a digital representation of the physical and functional characteristics of a building. BIMs represent the geometrical characteristics of the building, but also properties such as bills of quantities, definitions of COTS components, the status of material in the different stages of the project, project economic data, etc. The BIM methodology, which is well established in the Architecture, Engineering and Construction (AEC) domain for conventional buildings, has been brought one step forward in its application to astronomical/scientific facilities. In these facilities steel/concrete structures have high dynamic and seismic requirements, M&E installations are complex, and a large amount of special equipment and mechanisms is involved as a fundamental part of the facility. The detailed design definition is typically implemented by different design teams in specialized design software packages. To allow the coordinated work of the different engineering teams, the overall model, and its associated engineering database, is progressively integrated using coordination and roaming software which can be used before the construction phase starts for checking interferences, planning the construction sequence, studying maintenance operations, reporting to the project office, etc. This integrated design and construction approach allows the construction sequence to be planned efficiently (4D), a powerful tool to study and analyze in detail alternative construction sequences and to coordinate the work of different construction teams. In addition, the engineering, construction and operational databases can be linked to the virtual model (6D), which gives end users an invaluable tool for lifecycle management, as all the facility information can be easily accessed, added or replaced. This paper presents the BIM methodology as implemented by IDOM, with the E-ELT and ATST Enclosures as application examples.

  11. Computational Methods for Physical Model Information Management: Opening the Aperture

    International Nuclear Information System (INIS)

    Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.

    2015-01-01

    The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel with the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference of the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission-focused applications. (author)

  12. A Memory-based Robot Architecture based on Contextual Information

    OpenAIRE

    Pratama, Ferdian; Mastrogiovanni, Fulvio; Chong, Nak Young

    2014-01-01

    In this paper, we present a preliminary conceptual design for a robot long-term memory architecture based on the notion of context. Contextual information is used to organize the data flow between Working Memory (including Perceptual Memory) and Long-Term Memory components. We discuss the major influence of the notion of context within Episodic Memory on Semantic and Procedural Memory, respectively. We address how the occurrence of specific object-related events in time impacts on the semanti...

  13. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  14. National health information infrastructure model: a milestone for health information management education realignment.

    Science.gov (United States)

    Meidani, Zahra; Sadoughi, Farhnaz; Ahmadi, Maryam; Maleki, Mohammad Reza; Zohoor, Alireza; Saddik, Basema

    2012-01-01

    Challenges and drawbacks of the health information management (HIM) curriculum at the Master's degree level were examined, including the lack of well-established computing sciences and its inadequacy for developing specific competencies. Information management was condensed to the hospital setting to intensify the indispensability of a well-organized educational campaign. The healthcare information dimensions of a national health information infrastructure (NHII) model present novel requirements for HIM education. Articles related to challenges and barriers to adoption of the personal health record (PHR), the core component of the personal health dimension of an NHII, were searched through sources including Science Direct, ProQuest, and PubMed. Through a literature review, concerns about the PHR that are associated with HIM functions and responsibilities were extracted. For the community/public health dimension of the NHII the main components have been specified, and the targeted information was gathered through literature review, e-mail, and navigation of international and national organizations; again, topics related to HIM were identified. Using an information system (decision support system, artificial neural network, etc.) to support PHR media and content, patient education, patient-HIM communication skills, consumer health information, conducting a surveillance system in other areas of healthcare such as a risk factor surveillance system, occupational health, using an information system to analyze aggregated data including a geographic information system, data mining, online analytical processing, public health vocabulary and classification systems, and emerging automated coding systems pose major knowledge gaps in HIM education. Combining all required skills and expertise to handle the personal and public dimensions of healthcare information in a single curriculum is simply impractical. Role expansion and role extension for HIM professionals should be defined based on the essence of

  15. The Knowledge Base Interface for Parametric Grid Information

    International Nuclear Information System (INIS)

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-01-01

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.

  16. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael

    2015-01-01

    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...
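A discrete-time, discrete-space Markov chain trained on a player's action sequence, together with an information-theoretic measure of how surprising subsequent play is under that chain, can be sketched as follows; the function names and the mean-surprisal measure are illustrative assumptions, not the paper's exact formulation:

```python
import math
from collections import defaultdict

def train(seq):
    """Estimate first-order transition probabilities from an action sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {s: {t: n / sum(d.values()) for t, n in d.items()}
            for s, d in counts.items()}

def surprise(model, seq):
    """Mean negative log2-probability of the observed transitions under the
    trained chain; low values indicate predictable, routinized play."""
    nll, n = 0.0, 0
    for a, b in zip(seq, seq[1:]):
        p = model.get(a, {}).get(b, 1e-9)  # tiny floor for unseen transitions
        nll -= math.log2(p)
        n += 1
    return nll / n
```

A perfectly routinized (deterministic) action loop yields zero surprise; exploratory, goal-directed play yields higher values.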

  17. Information Governance: A Model for Security in Medical Practice

    Directory of Open Access Journals (Sweden)

    Patricia A.H. Williams

    2007-03-01

    Full Text Available Information governance is becoming an important aspect of organisational accountability. Considering that information is an integral asset of most organisations, the protection of this asset will increasingly rely on organisational capabilities in security. In the medical arena this information is primarily sensitive patient-based information. Previous research has shown that the application of security measures is a low priority for primary care medical practice and that awareness of the risks is seriously underestimated. Consequently, information security governance will be a key issue for medical practice in the future. Information security governance is a relatively new term and there is little existing research into how to meet governance requirements. The limited research that exists describes information security governance frameworks at a strategic level. However, since medical practice is already lagging in the implementation of appropriate security, such a definition may not be practical, although it is obviously desirable. This paper describes an ongoing action research project undertaken in the area of medical information security, and presents a tactical approach model aimed at addressing information security governance and the protection of medical data.

  18. Activity-Based Information Integrating the operations strategy

    Directory of Open Access Journals (Sweden)

    José Augusto da Rocha de Araujo

    2005-12-01

    Full Text Available In the globalized world, companies seek new operations strategies to ensure worldwide corporate success. This article analyzes how cost management models, both traditional and activity-based, aid the planning and management of globalized corporate operations. The efficacy of the models' application depends on their alignment with the competitive strategy. Companies must evaluate the nature of the competition and its competitive priorities; they should then define the necessary and sufficient level of dependence on cost information. In this article, three dependence levels are presented: operational, decision support, and strategic control. The results of the research show the importance of alignment between the cost management model and the competitive strategy for corporate success, and confirm the adequacy of the activity-based costing model as a supporting tool for decision making in a global strategy. Case studies of world-class companies in Brazil are presented.

  19. Information Models of Acupuncture Analgesia and Meridian Channels

    Directory of Open Access Journals (Sweden)

    Chang Hua Zou

    2010-12-01

    Full Text Available Acupuncture and meridian channels have been major components of Chinese and Eastern Asian medicine—especially for analgesia—for over 2000 years. In recent decades, electroacupuncture (EA) analgesia has been applied clinically and experimentally. However, there have been controversial results between different treatment frequencies, and between active and placebo treatments; and the mechanisms of the treatments and the related meridian channels are still unknown. In this study, we propose the new term of infophysics therapy and develop information models of acupuncture (or EA) analgesia and meridian channels, to understand the mechanisms and to explain the controversial results, based on Western theories of information, trigonometry and Fourier series, and physics, as well as published biomedical data. We are trying to build a bridge between Chinese medicine and Western medicine by investigating Eastern acupuncture analgesia and meridian channels with Western sciences; we model the meridians as a physiological system that is mostly constructed with interstices in or between other physiological systems; we consider frequencies, amplitudes and wave numbers of electric field intensity (EFI) as information data. Our modeling results demonstrate that information regulated with acupuncture (or EA) is different from pain information; we provide answers to explain the controversial published results, and suggest that the mechanisms of acupuncture (or EA) analgesia could mostly involve information regulation of the frequencies and amplitudes of EFI as well as neuronal transmitters such as endorphins.

  20. An information based approach to improving overhead imagery collection

    Science.gov (United States)

    Sourwine, Matthew J.; Hintz, Kenneth J.

    2011-06-01

    Recent growth in commercial imaging satellite development has resulted in a complex and diverse set of systems. To simplify this environment for both customer and vendor, an information-based sensor management model was built to integrate tasking and scheduling systems. By establishing a relationship between image quality and information, tasking by NIIRS can be used to measure the customer's required information content. Focused on a reduction in uncertainty about a target of interest, the sensor manager finds the best sensors to complete the task given the functions of the active suite of imaging sensors. This is done by determining which satellite will meet customer information and timeliness requirements with a low likelihood of interference at the highest rate of return.

  1. Information spreading in Delay Tolerant Networks based on nodes' behaviors

    Science.gov (United States)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2014-07-01

    Information spreading in DTNs (Delay Tolerant Networks) adopts a store-carry-forward method, in which nodes receive messages from others directly. However, it is hard to judge whether the information is safe in this communication mode. In this case, a node may observe other nodes' behaviors. At present, there is no theoretical model to describe the varying rule of nodes' trust levels. In addition, due to the uncertainty of connectivity in a DTN, it is hard for a node to obtain the global state of the network. Therefore, a rational model of a node's trust level should be a function of the node's own observations. For example, if a node finds k nodes carrying a message, it may trust the information with probability p(k). This paper does not explore the real distribution of p(k), but instead presents a unifying theoretical framework to evaluate the performance of information spreading in the above case. This framework is an extension of the traditional SI (susceptible-infected) model, and is applicable when p(k) conforms to any distribution. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Finally, we explore the impact of the nodes' behaviors under certain special distributions through numerical results.
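The acceptance rule described above, where a contacted node trusts a message with probability p(k) after observing k carriers, can be illustrated with a toy homogeneous-mixing simulation. The paper's framework is analytical (an extended SI model); this sketch only mimics the acceptance rule, and the contact model and parameters are assumptions:

```python
import random

def simulate(n_nodes, beta, p, steps, seed=0):
    """SI-style spread where node 0 starts as the only carrier, each carrier
    contacts every susceptible node with probability beta per step, and a
    contacted node accepts the message with probability p(k), where k is
    the number of carriers it has observed so far."""
    rng = random.Random(seed)
    carriers = {0}
    seen = [0] * n_nodes  # carriers observed by each node so far
    for _ in range(steps):
        new = set()
        for node in range(n_nodes):
            if node in carriers:
                continue
            # Count this step's contacts from current carriers.
            contacts = sum(1 for _ in carriers if rng.random() < beta)
            if contacts:
                seen[node] += contacts
                if rng.random() < p(seen[node]):
                    new.add(node)
        carriers |= new
    return len(carriers)
```

Any distribution can be plugged in for p, e.g. `p = lambda k: 1 - 0.5 ** k` to model trust that grows with the number of observed carriers.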

  2. Traffic congestion forecasting model for the INFORM System. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Azarm, A.; Mughabghab, S.; Stock, D.

    1995-05-01

    This report describes a computerized traffic forecasting model, developed by Brookhaven National Laboratory (BNL) for a portion of the Long Island INFORM Traffic Corridor. The model has gone through a testing phase, and currently is able to make accurate traffic predictions up to one hour forward in time. The model will eventually take on-line traffic data from the INFORM system roadway sensors and make projections as to future traffic patterns, thus allowing operators at the New York State Department of Transportation (D.O.T.) INFORM Traffic Management Center to more optimally manage traffic. It can also form the basis of a travel information system. The BNL computer model developed for this project is called ATOP, for Advanced Traffic Occupancy Prediction. The various modules of the ATOP computer code are currently written in Fortran and run on PC computers (Pentium machines) faster than real time for the section of the INFORM corridor under study. The routines currently contained in the ATOP code are as follows: (1) statistical forecasting of traffic flow and occupancy using historical data for similar days and times (long-term knowledge) and recent information from the past hour (short-term knowledge); (2) estimation of the empirical relationships between traffic flow and occupancy using long- and short-term information; (3) mechanistic interpolation using macroscopic traffic models, based on the forecasted traffic flow and occupancy (item 1) and the empirical relationships (item 2), for the specific highway configuration at the time of simulation (construction, lane closure, etc.); and (4) a statistical routine for detection and classification of anomalies and their impact on highway capacity, which are fed back to the previous items.
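The first routine, blending long-term (historical profile for similar days and times) and short-term (past-hour) knowledge, might look like the following simplified sketch. The linear extrapolation and the blend weight are assumptions for illustration, not ATOP's actual Fortran implementation:

```python
def forecast(historical_profile, recent, alpha=0.6):
    """Blend a one-step linear extrapolation of recent sensor readings
    (short-term knowledge) with the historical value for this time of day
    (long-term knowledge). `alpha` weights the short-term component."""
    trend = recent[-1] - recent[0] if len(recent) > 1 else 0.0
    # Per-interval trend continued one step beyond the last observation.
    short_term = recent[-1] + trend / max(len(recent) - 1, 1)
    return alpha * short_term + (1 - alpha) * historical_profile
```

In an operating system the weight would itself be tuned from data, shifting toward the historical profile when recent readings are noisy or missing.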

  3. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop an information processing model having multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload.
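Conant's decomposition rests on standard Shannon quantities. A minimal sketch of the transmission (mutual information) between an operator's input and output signals, assuming simple discretized observations, is:

```python
import math
from collections import Counter

def H(xs):
    """Shannon entropy (bits) of a sequence of discrete observations."""
    n = len(xs)
    return -sum((k / n) * math.log2(k / n) for k in Counter(xs).values())

def transmission(xs, ys):
    """T(X;Y) = H(X) + H(Y) - H(X,Y): the portion of the output signal
    that is statistically determined by the input signal."""
    return H(xs) + H(ys) - H(list(zip(xs, ys)))
```

When the operator's output perfectly tracks the input, T equals the input entropy; under overload, information reduction would show up as T falling below the input entropy.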

  4. Public health component in building information modeling

    Science.gov (United States)

    Trufanov, A. I.; Rossodivita, A.; Tikhomirov, A. A.; Berestneva, O. G.; Marukhina, O. V.

    2018-05-01

    The building information modelling (BIM) concept has established itself as an effective and practical approach to plan, design, construct, and manage buildings and infrastructure. Analysis of the governance literature has shown that BIM-developed tools do not fully take into account the growing demands of the ecology and health fields. In this connection, it is possible to offer a way of adapting such tools to take due account of the sanitary and hygienic specifications of materials used in the construction industry. It is proposed to do this through the introduction of assessments that meet the requirements of national sanitary standards. This approach is demonstrated in a case study of the Revit® program.

  5. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  6. Dynamic Information Management and Exchange for Command and Control Applications, Modelling and Enforcing Category-Based Access Control via Term Rewriting

    Science.gov (United States)

    2015-03-01

    a hotel and a hospital. 2. Event handler for emergency policies (item 2 above): this has been implemented in two UG projects, one project developed a...Workshop on Logical and Semantic Frameworks, with Applications, Brasilia, Brazil, September 2014. Electronic Notes in Theoretical Computer Science (to...[3] S. Barker. The next 700 access control models or a unifying meta-model? In SACMAT 2009, 14th ACM Symposium on

  7. Development and validation of a preoperative prediction model for colorectal cancer T-staging based on MDCT images and clinical information.

    Science.gov (United States)

    Sa, Sha; Li, Jing; Li, Xiaodong; Li, Yongrui; Liu, Xiaoming; Wang, Defeng; Zhang, Huimao; Fu, Yu

    2017-08-15

    This study aimed to establish and evaluate the efficacy of a prediction model for colorectal cancer T-staging. T-staging was positively correlated with the level of carcinoembryonic antigen (CEA), expression of carbohydrate antigen 19-9 (CA19-9), wall deformity, blurred outer edges, fat infiltration, infiltration into the surrounding tissue, tumor size, and wall thickness. Age, location, enhancement rate, and enhancement homogeneity were negatively correlated with T-staging. The predictive results of the model were consistent with the pathological gold standard, with a kappa value of 0.805. The total accuracy of staging improved from 51.04% to 86.98% with the proposed model. The clinical, imaging, and pathological data of 611 patients with colorectal cancer (419 patients in the training group and 192 patients in the validation group) were collected. A Spearman correlation analysis was used to validate the relationship between these factors and pathological T-staging. A prediction model was trained with the random forest algorithm. T-staging of the patients in the validation group was predicted by both the prediction model and the traditional method. Consistency, accuracy, sensitivity, specificity, and area under the curve (AUC) were used to compare the efficacy of the two methods. The newly established comprehensive model can improve the predictive efficiency of preoperative colorectal cancer T-staging.
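The random-forest step can be illustrated with a deliberately tiny, dependency-free ensemble of bootstrap threshold stumps. The feature layout, labels, and all names below are hypothetical; a real reproduction of the study would use a full random forest implementation on the clinical and imaging variables listed above:

```python
import random

def fit_forest(X, y, n_trees=25, seed=1):
    """Bootstrap ensemble of one-feature threshold stumps, a minimal
    stand-in for a random forest (illustrative only)."""
    rng = random.Random(seed)
    stumps = []
    n, d = len(X), len(X[0])
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        f = rng.randrange(d)                        # random feature
        t = X[rng.choice(idx)][f]                   # random split threshold
        left = [y[i] for i in idx if X[i][f] <= t]
        right = [y[i] for i in idx if X[i][f] > t]
        vote = lambda lab: max(set(lab), key=lab.count) if lab else y[0]
        stumps.append((f, t, vote(left), vote(right)))
    return stumps

def predict(forest, x):
    """Majority vote of the stumps' leaf labels for one sample."""
    votes = [(l if x[f] <= t else r) for f, t, l, r in forest]
    return max(set(votes), key=votes.count)
```

Bagging plus random feature selection is what gives the real algorithm its robustness to the correlated predictors (CEA, CA19-9, wall thickness, etc.) reported in the study.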

  8. Using a High-Resolution Ensemble Modeling Method to Inform Risk-Based Decision-Making at Taylor Park Dam, Colorado

    Science.gov (United States)

    Mueller, M.; Mahoney, K. M.; Holman, K. D.

    2015-12-01

    The Bureau of Reclamation (Reclamation) is responsible for the safety of Taylor Park Dam, located in central Colorado at an elevation of 9300 feet. A key aspect of dam safety is anticipating extreme precipitation, runoff and the associated inflow of water to the reservoir within a probabilistic framework for risk analyses. The Cooperative Institute for Research in Environmental Sciences (CIRES) has partnered with Reclamation to improve understanding and estimation of precipitation in the western United States, including the Taylor Park watershed. A significant challenge is that Taylor Park Dam is located in a relatively data-sparse region, surrounded by mountains exceeding 12,000 feet. To better estimate heavy precipitation events in this basin, a high-resolution modeling approach is used. The Weather Research and Forecasting (WRF) model is employed to simulate events that have produced observed peaks in streamflow at the location of interest. Importantly, an ensemble of model simulations are run on each event so that uncertainty bounds (i.e., forecast error) may be provided such that the model outputs may be more effectively used in Reclamation's risk assessment framework. Model estimates of precipitation (and the uncertainty thereof) are then used in rainfall runoff models to determine the probability of inflows to the reservoir for use in Reclamation's dam safety risk analyses.

  9. Entropic information of dynamical AdS/QCD holographic models

    Energy Technology Data Exchange (ETDEWEB)

    Bernardini, Alex E., E-mail: alexeb@ufscar.br [Departamento de Física, Universidade Federal de São Carlos, PO Box 676, 13565-905, São Carlos, SP (Brazil); Rocha, Roldão da, E-mail: roldao.rocha@ufabc.edu.br [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, UFABC, 09210-580, Santo André (Brazil)

    2016-11-10

    The Shannon-based conditional entropy that underlies five-dimensional Einstein–Hilbert gravity coupled to a dilaton field is investigated in the context of dynamical holographic AdS/QCD models. Considering the UV and IR dominance limits of such AdS/QCD models, the conditional entropy is shown to shed some light on the meson classification schemes, which corroborates the existence of light-flavor mesons of lower spins in Nature. Our analysis is supported by a correspondence between statistical mechanics and information entropy which establishes the physical grounds for the Shannon information entropy, also in the context of statistical mechanics, and provides some specifics for accurately extending the entropic discussion to continuous modes of physical systems. On entropic informational grounds, the conditional entropy allows one to identify the lower experimental/phenomenological occurrence of higher-spin mesons in Nature. Moreover, it introduces a quantitative theoretical apparatus for studying the instability of high-spin light-flavor mesons.

  10. Neighborhood Hypergraph Based Classification Algorithm for Incomplete Information System

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2015-01-01

    Full Text Available The problem of classification in incomplete information systems is a hot issue in intelligent information processing. The hypergraph is a new intelligent method for machine learning. However, it is hard to process an incomplete information system with the traditional hypergraph, for two reasons: (1) the hyperedges are generated randomly in the traditional hypergraph model; (2) the existing methods are unsuitable for dealing with incomplete information systems because of the missing values. In this paper, we propose a novel classification algorithm for incomplete information systems based on the hypergraph model and rough set theory. First, we initialize the hypergraph. Second, we classify the training set by the neighborhood hypergraph. Third, under the guidance of rough sets, we replace the poor hyperedges. After that, we obtain a good classifier. The proposed approach is tested on 15 data sets from the UCI machine learning repository and compared with some existing methods, such as C4.5, SVM, NaiveBayes, and KNN. The experimental results show that the proposed algorithm has better performance in terms of Precision, Recall, AUC, and F-measure.
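    The comparison metrics named at the end of the abstract can be computed directly from a binary confusion matrix. A minimal sketch of Precision, Recall and F-measure (AUC additionally needs ranked scores, so it is omitted here); the labels are toy data:

```python
# Precision, Recall and F1 from paired true/predicted binary labels.
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r, f)  # 2/3 for all three on this toy example
```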

  11. Building Information Modelling in Denmark and Iceland

    DEFF Research Database (Denmark)

    Jensen, Per Anker; Jóhannesson, Elvar Ingi

    2013-01-01

    with BIM is studied. Based on findings from both parts, ideas and recommendations are put forward for the Icelandic building industry about feasible ways of implementing BIM. Findings – Among the results are that the use of BIM is very limited in the Icelandic companies compared to the other Nordic...... for making standards and guidelines related to BIM. Public building clients are also encouraged to consider initiating projects based on making simple building models of existing buildings in order to introduce the BIM technology to the industry. Icelandic companies are recommended to start implementing BIM...... countries. Research limitations/implications – The research is limited to the Nordic countries in Europe, but many recommendations could be relevant to other countries. Practical implications – It is recommended to the Icelandic building authorities to get into cooperation with their Nordic counterparts...

  12. Display of the information model accounting system

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2011-12-01

    Full Text Available This paper presents the accounting information system in public companies, the business technology matrix and the data flow diagram. The paper describes the purpose and goals of the accounting process, the sub-process matrix and data classes. The data flow in the accounting process and the so-called general ledger module are described in detail. The activities of preparing financial statements and determining the financial results of companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs depict indicators of the company's business success, indebtedness and efficiency coefficients based on balance sheets and profit and loss reports.

  13. Managing geometric information with a data base management system

    Science.gov (United States)

    Dube, R. P.

    1984-01-01

    The strategies for managing computer based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.

  14. AN INFORMATION SERVICE MODEL FOR REMOTE SENSING EMERGENCY SERVICES

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2017-09-01

    Full Text Available This paper presents a method for building a semantic access environment, which addresses the problem of identifying the correct natural-disaster emergency knowledge and returning it to those who request it. The study data is a natural-disaster knowledge text set. First, based on the remote sensing emergency knowledge database, we utilize the semantic network to extract the key words in the input document set. Then, using semantic analysis based on word segmentation and PLSA, we establish the semantic access environment to identify the requirements of users and match the emergency knowledge in the database. Finally, a user preference model is established, which helps the system return the corresponding information to different users. The results indicate that semantic analysis can process natural-disaster knowledge effectively, which will realize a diversified information service, enhance the precision of information retrieval and satisfy the requirements of users.

  15. Modeling of information flows in natural gas storage facility

    Science.gov (United States)

    Ranjbari, Leyla; Bahar, Arifah; Aziz, Zainal Abdul

    2013-09-01

    The paper considers natural-gas storage valuation based on the information-based pricing framework of Brody-Hughston-Macrina (BHM). As opposed to many studies in which the associated filtration is considered pre-specified, this work constructs the filtration in terms of the information provided to the market. The value of the storage is given by the sum of the discounted expectations of the cash flows under a risk-neutral measure, conditional on the constructed filtration with the Brownian bridge noise term. In order to model the flow of information about the cash flows, we assume the existence of a fixed pricing kernel in a liquid, homogeneous and incomplete market without arbitrage.
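    In the BHM framework the market filtration is generated by an information process of the form ξ_t = σ·t·X_T + β_t, where X_T is the terminal cash flow and β_t a Brownian bridge that vanishes at 0 and T (the noise term mentioned above). A simulation sketch; σ, T and the cash-flow distribution are illustrative assumptions:

```python
# Simulate one path of a BHM-style information process with Brownian bridge noise.
import numpy as np

rng = np.random.default_rng(1)
T, steps, sigma = 1.0, 250, 0.8
t = np.linspace(0.0, T, steps + 1)

# Brownian bridge from a standard Brownian path W: beta_t = W_t - (t/T) * W_T
W = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(T / steps), size=steps))])
bridge = W - (t / T) * W[-1]

X_T = rng.normal(loc=1.0, scale=0.2)   # terminal cash flow (illustrative distribution)
xi = sigma * t * X_T + bridge          # information process observed by the market
print(xi[-1], sigma * T * X_T)         # at t = T the noise vanishes and xi reveals X_T
```

The bridge pins the noise to zero at both endpoints, so the process starts uninformative and reveals the cash flow exactly at maturity, which is the mechanism the valuation conditions on.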

  16. MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM

    Directory of Open Access Journals (Sweden)

    A. G. Korobeynikov

    2015-05-01

    Full Text Available Subject of research. The paper deals with a mathematical model for assessing information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risks mean the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the transport problem in a stochastic setting, drawing on methods from mathematical modeling, graph theory, probability theory and Markov chains. The mathematical model is created in several stages. At the initial stage, the capacity of different sites as a function of time is calculated; on the basis of information received from the information and logistics system, the weight matrix is formed and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities of resource locations at concrete vertices as functions of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case information risk is measured in units of damage, corresponding to the monetary unit which the information and logistics system operates with. Main results. The operability of the presented mathematical model is shown on a concrete example of transportation of material resources where places of shipment and delivery, routes and their capacity, the greatest possible damage and admissible risk are specified. The calculations presented on a diagram showed
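    Two of the stages described above can be sketched together: Dijkstra's algorithm finds the minimum route, and information risk is then the greatest possible damage multiplied by the route-passing probability, following the abstract's definition. The graph weights, per-leg probabilities and damage value are illustrative:

```python
# Minimum route via Dijkstra, then risk = damage x route-passing probability.
import heapq

def dijkstra(graph, src):
    """Shortest distance from src to every reachable vertex."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0)], "C": []}
dist = dijkstra(graph, "A")      # minimum route A -> B -> C costs 3.0

p_route = 0.95 * 0.90            # multiplication theorem over the two legs
damage = 1_000_000               # greatest possible damage, in monetary units
risk = damage * p_route          # risk expressed in units of damage
print(dist["C"], risk)
```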

  17. CRISP. Information Security Models and Their Economics

    International Nuclear Information System (INIS)

    Gustavsson, R.; Mellstrand, P.; Tornqvist, B.

    2005-03-01

    The deliverable D1.6 includes background material and specifications of a CRISP framework for the protection of information assets related to power net management and the management of business operations related to energy services. During the project it was discovered by the CRISP consortium that the original description of WP1.6 was not adequate for the project as such. The main insight was that the original emphasis on cost-benefit analysis of security protection measures was too early to address in the project. This issue is of course crucial in itself but requires new models of consequence analysis that still remain to be developed, especially for the new business models we are investigating in the CRISP project. The updated and approved version of the WP1.6 description, together with the also updated WP2.4 focus on dependable ICT support of power grid operations, constitutes an integrated approach towards dependable and secure future utilities and their business processes. This document (D1.6) is a background to deliverable D2.4. Together they provide a dependability and security framework for the three CRISP experiments in WP3.

  18. Modeling the reemergence of information diffusion in social network

    Science.gov (United States)

    Yang, Dingda; Liao, Xiangwen; Shen, Huawei; Cheng, Xueqi; Chen, Guolong

    2018-01-01

    Information diffusion in networks is an important research topic in various fields. Existing studies either focus on modeling the process of information diffusion, e.g., independent cascade model and linear threshold model, or investigate information diffusion in networks with certain structural characteristics such as scale-free networks and small world networks. However, there are still several phenomena that have not been captured by existing information diffusion models. One of the prominent phenomena is the reemergence of information diffusion, i.e., a piece of information reemerges after the completion of its initial diffusion process. In this paper, we propose an optimized information diffusion model by introducing a new informed state into traditional susceptible-infected-removed model. We verify the proposed model via simulations in real-world social networks, and the results indicate that the model can reproduce the reemergence of information during the diffusion process.
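    The idea of adding an informed state to the susceptible-infected-removed (SIR) model can be sketched as a discrete-time compartmental simulation: spreaders may become informed-but-silent, and silent nodes may resume spreading, producing reemergence. The rates and the reactivation mechanism below are illustrative assumptions, not the authors' exact equations:

```python
# SIR extended with an "informed" compartment M whose members can reactivate.
def simulate(steps=200, beta=0.3, gamma=0.1, xi=0.02, rho=0.005):
    S, I, R, M = 0.99, 0.01, 0.0, 0.0   # susceptible, spreading, removed, informed-but-silent
    infected = []
    for _ in range(steps):
        new_inf = beta * S * I          # contagion
        recovered = gamma * I           # permanent removal
        silenced = xi * I               # spreaders become informed but stop spreading
        reactivated = rho * M           # informed nodes resume spreading -> reemergence
        S -= new_inf
        I += new_inf - recovered - silenced + reactivated
        R += recovered
        M += silenced - reactivated
        infected.append(I)
    return infected

traj = simulate()
print(max(traj))   # peak fraction of active spreaders
```

Because every flow moves mass between compartments, the total population is conserved; the reactivation term is what lets the infected fraction rise again after its initial decay.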

  19. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  20. Agricultural Library Information Retrieval Based on Improved Semantic Algorithm

    OpenAIRE

    Meiling , Xie

    2014-01-01

    International audience; To support users to quickly access information they need from the agricultural library’s vast information and to improve the low intelligence query service, a model for intelligent library information retrieval was constructed. The semantic web mode was introduced and the information retrieval framework was designed. The model structure consisted of three parts: Information data integration, user interface and information retrieval match. The key method supporting retr...

  1. Regional Analysis of Remote Sensing Based Evapotranspiration Information

    Science.gov (United States)

    Geli, H. M. E.; Hain, C.; Anderson, M. C.; Senay, G. B.

    2017-12-01

    Recent research findings on modeling actual evapotranspiration (ET) using remote sensing data and methods have proven the ability of these methods to address a wide range of hydrological and water resources issues, including river basin water balance for improved water resources management, drought monitoring, drought impact and socioeconomic responses, agricultural water management, optimization of land use for water conservation, and water allocation agreements, among others. However, there is still a critical need to identify the appropriate type of ET information that can address each of these issues. The current trend of increasing demand for water due to population growth, coupled with variable and limited water supply due to drought, especially in arid and semiarid regions, has highlighted the need for such information. To properly address these issues, different spatial and temporal resolutions of ET information will need to be used. For example, agricultural water management applications require ET information at field (30-m) and daily time scales, while for river basin hydrologic analysis relatively coarser spatial and temporal scales can be adequate for such regional applications. The objective of this analysis is to evaluate the potential of using integrated ET information that can address some of these issues collectively. This analysis will highlight efforts to address some of the issues that are applicable to New Mexico, including assessment of the statewide water budget as well as drought impact and socioeconomic responses, which all require ET information but at different spatial and temporal scales. This analysis will provide an evaluation of four remote sensing based ET models, including ALEXI, DisALEXI, SSEBop, and SEBAL3.0. The models will be compared with ground-based observations from eddy covariance towers and water balance calculations. Remote sensing data from Landsat, MODIS, and VIIRS sensors will be used to provide ET

  2. MODELING INFORMATION SYSTEM AVAILABILITY BY USING BAYESIAN BELIEF NETWORK APPROACH

    Directory of Open Access Journals (Sweden)

    Semir Ibrahimović

    2016-03-01

    Full Text Available Modern information systems are expected to be always-on, providing services to end-users regardless of time and location. This is particularly important for organizations and industries where information systems support real-time operations and mission-critical applications that need to be available on a 24 × 7 × 365 basis. Examples of such entities include process industries, telecommunications, healthcare, energy, banking, electronic commerce and a variety of cloud services. This article presents a modified Bayesian Belief Network model for predicting information system availability, introduced initially by Franke, U. and Johnson, P. (in “Availability of enterprise IT systems – an expert-based Bayesian model”, Software Quality Journal 20(2), 369-394, 2012). Based on a thorough review of several dimensions of information system availability, we proposed a modified set of determinants. The model is parameterized by using a probability elicitation process with the participation of experts from the financial sector of Bosnia and Herzegovina. The model validation was performed using Monte Carlo simulation.
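    The Monte Carlo validation step can be illustrated with a toy availability model: sample whether each determinant is up and count the fraction of trials in which the whole system is available. The determinant names, probabilities and the series-system assumption are illustrative, not the expert-elicited values of the study:

```python
# Monte Carlo estimate of system availability vs the closed-form product.
import random

random.seed(42)
determinants = {"hardware": 0.999, "software": 0.995, "network": 0.998}

def trial():
    # the system is available only if every determinant is up (series assumption)
    return all(random.random() < p for p in determinants.values())

n = 100_000
availability = sum(trial() for _ in range(n)) / n
analytic = 1.0
for p in determinants.values():
    analytic *= p                      # independent series system: product of probabilities
print(availability, analytic)
```

For independent determinants the closed form is exact; Monte Carlo becomes useful precisely when, as in a Bayesian Belief Network, the determinants are dependent and no simple product applies.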

  3. Canonical analysis based on mutual information

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2015-01-01

    combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. While CCA is ideal for Gaussian data, CIA facilitates...
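    The measure CIA substitutes for correlation can be sketched with a simple histogram-based estimator of mutual information between two 1-D variables; the binning choice and the synthetic data are illustrative assumptions:

```python
# Histogram-based mutual information estimate between two samples.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint distribution over bins
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.3 * rng.normal(size=5000)            # strongly dependent pair
z = rng.normal(size=5000)                      # independent of x
print(mutual_information(x, y), mutual_information(x, z))
```

Unlike correlation, the MI estimate responds to any statistical dependence captured by the joint histogram, not just second-order (linear) structure, which is the point of replacing CCA's criterion.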

  4. Gamified Android Based Academic Information System

    Science.gov (United States)

    Setiana, Henry; Hansun, Seng

    2017-01-01

    Students are often lazy when it comes to studying, and how to motivate students was one of the problems in the educational world. To overcome these matters, we will implement the gamification method in an Academic Information System. An Academic Information System is software used for providing information and arranging administration which connected…

  5. Information Models, Data Requirements, and Agile Data Curation

    Science.gov (United States)

    Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron

    2015-04-01

    The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during systems development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.

  6. CSNS control cable information management system based on web

    International Nuclear Information System (INIS)

    Lu Huihui; Wang Chunhong; Li Luofeng; Liu Zhengtong; Lei Bifeng

    2014-01-01

    This paper presents an approach to modeling the data of a great number of control devices and cables with complicated relations at CSNS (China Spallation Neutron Source). The CSNS accelerator control cable database was created using MySQL, and a Web-based control cable information management system was built on top of it. During the development of the database, the design ideas of the IRMIS database were studied and the actual situation of CSNS accelerator control cables was investigated. A control cable database model fitting the requirements was designed. This system will be of great convenience for managing and maintaining CSNS control devices and cables in the future. (authors)

  7. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.
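    The selection rule itself is simple: among L transmit antennas, pick the one whose shadowing coefficient is largest. A Monte Carlo sketch, drawing shadowing from a Gamma distribution in line with the Generalized-K (gamma-gamma) composite model; the shape/scale values and L are illustrative assumptions:

```python
# Shadowing-based transmit antenna selection: keep the max coefficient per trial.
import numpy as np

rng = np.random.default_rng(7)
L, trials = 4, 10_000
shadowing = rng.gamma(shape=2.0, scale=1.0, size=(trials, L))  # one coefficient per antenna per trial
selected = shadowing.argmax(axis=1)   # index of the selected antenna in each trial
gain = shadowing.max(axis=1)          # shadowing coefficient actually used

# Selection diversity: the chosen coefficient beats the single-antenna average.
print(gain.mean(), shadowing.mean())
```

Because the selection depends only on the slowly varying shadowing coefficients, the feedback channel and channel estimation effort are reduced relative to selection on the full composite channel, which is the motivation stated in the abstract.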

  8. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan; Yilmaz, Ahmet Oǧuz; Alouini, Mohamed-Slim; Kucur, Oǧuz

    2011-01-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.

  9. Risk based modelling

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Baker, A.E.

    1993-01-01

    Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability-of-failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk based ranking which can be used to assist the process of deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk based ranking assessment carried out for nuclear power plant pipework. (author)

  10. An Integrative Behavioral Model of Information Security Policy Compliance

    Directory of Open Access Journals (Sweden)

    Sang Hoon Kim

    2014-01-01

    Full Text Available The authors found the behavioral factors that influence the organization members’ compliance with the information security policy in organizations on the basis of neutralization theory, the theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members’ attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members’ compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training

  11. An integrative behavioral model of information security policy compliance.

    Science.gov (United States)

    Kim, Sang Hoon; Yang, Kyung Hoon; Park, Sunyoung

    2014-01-01

    The authors found the behavioral factors that influence the organization members' compliance with the information security policy in organizations on the basis of neutralization theory, Theory of planned behavior, and protection motivation theory. Depending on the theory of planned behavior, members' attitudes towards compliance, as well as normative belief and self-efficacy, were believed to determine the intention to comply with the information security policy. Neutralization theory, a prominent theory in criminology, could be expected to provide the explanation for information system security policy violations. Based on the protection motivation theory, it was inferred that the expected efficacy could have an impact on intentions of compliance. By the above logical reasoning, the integrative behavioral model and eight hypotheses could be derived. Data were collected by conducting a survey; 194 out of 207 questionnaires were available. The test of the causal model was conducted by PLS. The reliability, validity, and model fit were found to be statistically significant. The results of the hypotheses tests showed that seven of the eight hypotheses were acceptable. The theoretical implications of this study are as follows: (1) the study is expected to play a role of the baseline for future research about organization members' compliance with the information security policy, (2) the study attempted an interdisciplinary approach by combining psychology and information system security research, and (3) the study suggested concrete operational definitions of influencing factors for information security policy compliance through a comprehensive theoretical review. Also, the study has some practical implications. First, it can provide the guideline to support the successful execution of the strategic establishment for the implement of information system security policies in organizations. Second, it proves that the need of education and training programs suppressing

  12. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies...... are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed...

  13. Optimized combination model and algorithm of parking guidance information configuration

    Directory of Open Access Journals (Sweden)

    Tian Ye

    2011-01-01

    Full Text Available Operators of parking guidance and information (PGI) systems often have difficulty in providing the best car park availability information to drivers in periods of high demand. A new PGI configuration model based on the optimized combination method was proposed through analysis of parking choice behavior. This article first describes a parking choice behavioral model incorporating drivers' perceptions of waiting times at car parks based on PGI signs. This model was used to predict the influence of PGI signs on the overall performance of the traffic system. Then relationships were developed for estimating the arrival rates at car parks based on driver characteristics, car park attributes and the car park availability information displayed on PGI signs. A mathematical program was formulated to determine the optimal PGI sign configuration to minimize total travel time. A genetic algorithm was used to identify solutions that significantly reduced queue lengths and total travel time compared with existing practices. These procedures were applied to an existing PGI system operating in Deqing Town and Xiuning City. Significant reductions in the total travel time of parking vehicles were achieved with the optimized PGI configuration. This would reduce traffic congestion and lead to various environmental benefits.
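    The optimization step can be sketched as a compact genetic algorithm over binary display configurations. The travel-time objective below is a toy stand-in (the paper's objective couples arrival rates and queue lengths), and the number of candidate sign messages, population size and rates are illustrative assumptions:

```python
# Genetic algorithm minimizing a toy travel-time objective over binary configs.
import random

random.seed(3)
N = 8  # number of candidate sign messages (illustrative)

def travel_time(config):
    # toy objective: each enabled message removes delay, but over-signing adds cost
    return 100 - 8 * sum(config) + 0.9 * sum(config) ** 2

def ga(pop_size=30, generations=60, mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=travel_time)                 # lower travel time is better
        parents = pop[: pop_size // 2]            # elitist selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]                             # one-point crossover
            child = [g ^ (random.random() < mut) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=travel_time)

best = ga()
print(best, travel_time(best))
```

On this toy objective the optimum enables four messages; the GA structure (selection, crossover, mutation) is the same whatever objective replaces `travel_time`.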

  14. The influence of climatic changes on distribution pattern of six typical Kobresia species in Tibetan Plateau based on MaxEnt model and geographic information system

    Science.gov (United States)

    Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua

    2018-01-01

    The issue that climatic change has great influence on species distribution is currently of great interest in field of biogeography. Six typical Kobresia species are selected from alpine grassland of Tibetan Plateau (TP) as research objects which are the high-quality forage for local husbandry, and their distribution changes are modeled in four periods by using MaxEnt model and GIS technology. The modeling results have shown that the distribution of these six typical Kobresia species in TP was strongly affected by two factors of "the annual precipitation" and "the precipitation in the wettest and driest quarters of the year". The modeling results have also shown that the most suitable habitats of K. pygmeae were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of TP during the Last Interglacial period, and gradually merged into a bigger area; K. robusta and K. tibetica were located in the area around Qinghai Lake and the hinterland of TP, but they did not integrate into one area all the time, and K. capillifolia were located in the area around Qinghai Lake and extended to the southwest of the original distributing area, whereas K. macrantha were mainly distributed along the area of the Himalayan mountain chain, which had the smallest distribution area among them, and all these six Kobresia species can be divided into four types of "retreat/expansion" styles according to the changes of suitable habitat areas during the four periods; all these change styles are the result of long-term adaptations of the different species to the local climate changes in regions of TP and show the complexity of relationships between different species and climate. The research results have positive reference value to the protection of species diversity and sustainable development of the local husbandry in TP.

  15. Five-Factor Model personality disorder prototypes in a community sample: Self- and informant-reports predicting interview-based DSM diagnoses

    OpenAIRE

    Lawton, Erin M.; Shields, Andrew J.; Oltmanns, Thomas F.

    2011-01-01

    The need for an empirically validated, dimensional system of personality disorders is becoming increasingly apparent. While a number of systems have been investigated in this regard, the five-factor model of personality has demonstrated the ability to adequately capture personality pathology. In particular, the personality disorder prototypes developed by Lynam and Widiger (2001) have been tested in a number of samples. The goal of the present study is to extend this literature by validating ...

  16. Contexts for concepts: Information modeling for semantic interoperability

    NARCIS (Netherlands)

    Oude Luttighuis, P.H.W.M.; Stap, R.E.; Quartel, D.

    2011-01-01

    Conceptual information modeling is a well-established practice, aimed at preparing the implementation of information systems, the specification of electronic message formats, and the design of information processes. Today's ever more connected world, however, poses new challenges for conceptual ...

  17. 5D Building Information Modelling – A Practicability Review

    Directory of Open Access Journals (Sweden)

    Lee Xia Sheng

    2016-01-01

    Quality, time and cost are the three most important elements in any construction project. Building information that arrives timely and accurately in multiple dimensions facilitates a refined decision-making process that can improve construction quality, time and cost. 5-dimensional Building Information Modelling, or 5D BIM, is an emerging trend in the construction industry that integrates all the major information from the initial design to the final construction stage; the integrated information is then arranged and communicated through Virtual Design and Construction (VDC). This research gauges the practicability of 5D BIM with an action-research-type pilot study, by means of hands-on modelling of a conceptual bungalow design in one of the most popular BIM tools. A bungalow is selected as the study subject to simulate the major stages of the 5D BIM digital workflow. The process starts with developing drawings (2D) into a digital model (3D), followed by the incorporation of time (4D) and cost (5D). Observations focus on the major factors that affect the practicability of 5D BIM, including the modelling effort, interoperability, information output and limitations. This research concludes that 5D BIM has a high level of practicability, which further differentiates BIM from Computer Aided Design (CAD). The integration of information not only enhances the efficiency and accuracy of the process at all stages, but also enables decision makers to form a sophisticated interpretation of the information that is almost impossible with a conventional 2D CAD workflow. Although it is possible to incorporate more than 5 dimensions of information, excessive information may escalate complexity unfavourably for BIM implementation. 5D BIM has achieved a significant level of practicability; further research should be conducted to streamline implementation. Once 5D BIM is matured and widely ...
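
    The 2D-to-5D progression the abstract describes amounts to attaching schedule (4D) and cost (5D) data to each 3D model element so that time-phased cost can be queried. A minimal, hypothetical sketch of that data structure (element names, quantities, and rates are invented for illustration, not taken from the pilot study):

```python
# Minimal sketch of the 5D idea: each model element carries a quantity
# taken off the 3D model, a schedule slot (4D), and a cost rate (5D),
# so cumulative cost at any point in the schedule can be queried.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    volume_m3: float      # 3D: quantity taken off the model
    start_week: int       # 4D: scheduled start
    unit_cost: float      # 5D: cost rate per m3

    @property
    def cost(self) -> float:
        return self.volume_m3 * self.unit_cost

model = [
    Element("foundation slab", 32.0, start_week=1, unit_cost=180.0),
    Element("ground-floor walls", 18.5, start_week=3, unit_cost=210.0),
    Element("roof structure", 9.0, start_week=6, unit_cost=350.0),
]

def cost_by_week(elements, week):
    """Cumulative cost of all work scheduled to start by the given week."""
    return sum(e.cost for e in elements if e.start_week <= week)

print(cost_by_week(model, 3))   # slab + walls only
print(cost_by_week(model, 6))   # full model cost
```

    In a real BIM tool the quantities come from the model geometry rather than being typed in, which is where the interoperability and modelling-effort factors observed in the study come into play.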

  18. Model-based DSL frameworks

    NARCIS (Netherlands)

    Ivanov, Ivan; Bézivin, J.; Jouault, F.; Valduriez, P.

    2006-01-01

    More than five years ago, the OMG proposed the Model Driven Architecture (MDA™) approach to deal with the separation of platform-dependent and platform-independent aspects in information systems. Since then, the initial idea of MDA has evolved, and Model Driven Engineering (MDE) is being increasingly promoted to ...

  19. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound that are likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTF) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional ...
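
    A lowpass spectrotemporal MTF of the kind described can be sketched as a 2D Fourier-domain mask over a spectrogram: modulation components faster than the temporal cutoff (rate, in Hz) or finer than the spectral cutoff (scale, in cycles/octave) are zeroed. The sketch below is a hedged, simplified illustration of that idea; the cutoffs match the 16 Hz / 2 cyc/oct figures quoted above, but the grid sizes and test signal are invented, and the actual model's cortical filter bank is far richer than a hard mask.

```python
# Hedged sketch of a separable lowpass spectrotemporal modulation filter,
# in the spirit of the model's MTF (lowpass in both rate and scale).
# Grid sizes and the toy input are illustrative, not the paper's values.

import numpy as np

def lowpass_mtf(spectrogram, dt, dchan, rate_cut=16.0, scale_cut=2.0):
    """Zero modulation components above the rate/scale cutoffs via 2D FFT."""
    S = np.fft.fft2(spectrogram)
    rates = np.fft.fftfreq(spectrogram.shape[1], d=dt)       # temporal, Hz
    scales = np.fft.fftfreq(spectrogram.shape[0], d=dchan)   # spectral, cyc/oct
    mask = (np.abs(scales)[:, None] <= scale_cut) & (np.abs(rates)[None, :] <= rate_cut)
    return np.real(np.fft.ifft2(S * mask))

# Toy spectrogram: a slow 4 Hz envelope on 8 channels, plus a fast 40 Hz
# ripple on channel 0 that lies above the 16 Hz rate cutoff.
t = np.arange(0, 1.0, 0.005)                  # 200 frames, 5 ms hop
spec = np.tile(np.sin(2 * np.pi * 4 * t), (8, 1))
spec[0] += np.sin(2 * np.pi * 40 * t)
out = lowpass_mtf(spec, dt=0.005, dchan=0.125)  # 8 channels per octave
print(out.shape)                                # (8, 200)
```

    The filtered output retains the 4 Hz envelope while the 40 Hz ripple is removed, mirroring the loss of fast temporal dynamics at the cortical stage.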

  20. Radiology information system: a workflow-based approach

    International Nuclear Information System (INIS)

    Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; Aalst, W.M.P. van der

    2009-01-01

    Introducing workflow management technology in healthcare appears promising for addressing the problem that current healthcare information systems cannot sufficiently support process management, although several challenges remain. The purpose of this paper is to study the method of developing a workflow-based information system, using the radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. Legacy systems could be treated as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces into standard ones. Finally, a workflow dashboard was designed and implemented to provide an integrated view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes and enhance process management in the department. It can also provide a more workflow-aware integration method compared with other methods, such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, richer process-management functionality and more workflow-aware integration. The work of this paper is an initial endeavor toward introducing workflow management technology in healthcare. (orig.)
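
    The architecture described — tasks as loosely coupled components assembled by a workflow engine, with legacy systems wrapped behind the same task interface — can be sketched in a few lines. Everything below is a hypothetical illustration: the task names, the `LegacyRIS` class, and the trivial sequential engine are invented, and a real deployment would use a full workflow management system rather than a list of callables.

```python
# Hedged sketch: radiology tasks as loosely coupled components that a
# minimal workflow engine assembles; a legacy system is wrapped in an
# adapter so it can be scheduled like any other task. Names are invented.

class LegacyRIS:
    """Stand-in legacy system with a non-workflow-aware interface."""
    def write_report(self, study_id):
        return f"report for {study_id}"

def register_order(ctx):
    ctx["study_id"] = "ST-001"

def acquire_images(ctx):
    ctx["images"] = ["img1", "img2"]

def legacy_reporting_adapter(legacy):
    """Wrap the legacy call behind the standard task interface."""
    def task(ctx):
        ctx["report"] = legacy.write_report(ctx["study_id"])
    return task

def run_workflow(tasks, ctx=None):
    """Minimal sequential engine: execute tasks in process order."""
    ctx = ctx or {}
    for task in tasks:
        task(ctx)
    return ctx

# Assemble the process from components, legacy adapter included:
process = [register_order, acquire_images, legacy_reporting_adapter(LegacyRIS())]
result = run_workflow(process)
print(result["report"])   # report for ST-001
```

    Because each component only reads and writes the shared context through the one task interface, reordering or swapping components — the flexibility the deployment results emphasize — requires changing only the process definition, not the components themselves.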