WorldWideScience

Sample records for models enable interpretation

  1. Interpretive Structural Modeling Of Implementation Enablers For Just In Time In ICPI

    Directory of Open Access Journals (Sweden)

    Nitin Upadhye

    2014-12-01

    Full Text Available Indian Corrugated Packaging Industries (ICPI) face tough competition in terms of product cost, quality, product delivery, flexibility, and, finally, customer demand. As their customers, mostly OEMs, are asking for Just in Time (JIT) deliveries, ICPI must implement JIT in their systems. The term "JIT" denotes a system that utilizes less, in terms of all inputs, to create the same outputs as those created by a traditional mass production system, while contributing increased varieties for the end customer (Womack et al. 1990). JIT focuses on abolishing or reducing Muda ("Muda", the Japanese word for waste) and on maximizing or fully utilizing activities that add value from the customer's perspective. There is a lack of awareness in identifying the right enablers of JIT implementation. Therefore, this study has tried to identify the enablers from the literature review and from experts' opinions in corrugated packaging industries, and has developed the relationship matrix to examine the driving power and dependence between them. In this study, modeling has been done in order to understand the interrelationships between the enablers, with the help of Interpretive Structural Modeling (ISM) and Cross Impact Matrix Multiplication Applied to Classification (MICMAC) analysis, for the performance of Indian corrugated packaging industries.
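
    To make the ISM/MICMAC step concrete, the sketch below computes driving power and dependence from a binary reachability matrix. It is a minimal illustration under standard ISM conventions; the enabler names and the example matrix are hypothetical and are not taken from this study.

        import numpy as np

        # Hypothetical adjacency matrix: A[i, j] = 1 means enabler i influences
        # enabler j (diagonal set to 1, as is usual in ISM).
        enablers = ["Top management support", "Training", "Supplier reliability", "Layout"]
        A = np.array([[1, 1, 1, 1],
                      [0, 1, 0, 1],
                      [0, 0, 1, 1],
                      [0, 0, 0, 1]])

        # Reachability matrix: Boolean transitive closure of A.
        R = A > 0
        while True:
            R_next = R | ((R.astype(int) @ R.astype(int)) > 0)
            if np.array_equal(R_next, R):
                break
            R = R_next

        # MICMAC: driving power = row sums, dependence power = column sums.
        for name, drive, depend in zip(enablers, R.sum(axis=1), R.sum(axis=0)):
            print(f"{name}: driving power = {drive}, dependence = {depend}")

    Enablers with high driving power and low dependence sit at the bottom of the ISM hierarchy, and MICMAC classifies enablers into autonomous, dependent, linkage and driving quadrants from these two scores.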

  2. Modeling and interpretation of line observations*

    Directory of Open Access Journals (Sweden)

    Kamp Inga

    2015-01-01

    Full Text Available Models for the interpretation of line observations from protoplanetary disks are summarized. The spectrum ranges from 1D LTE slab models to 2D thermo-chemical radiative transfer models, and their use depends largely on the type/nature of the observational data being analyzed. I discuss the various types of observational data and their interpretation in the context of disk physical and chemical properties. The simplest spatially and spectrally unresolved data are line fluxes, which can be interpreted using so-called Boltzmann diagrams. The interpretation is often tricky due to optical depth and non-LTE effects and requires care. Line profiles contain kinematic information and thus indirectly the spatial origin of the emission. Using series of line profiles, we can for example deduce radial temperature gradients in disks (the CO pure rotational ladder). Spectro-astrometry of e.g. CO ro-vibrational line profiles probes the disk structure in the 1–30 AU region, where planet formation through core accretion should be most efficient. Spatially and spectrally resolved line images from (sub)mm interferometers are the richest datasets we have to date, and they enable us to unravel exciting details of the radial and vertical disk structure such as winds and asymmetries.
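
    As a pointer to what a Boltzmann (rotation) diagram does with such line fluxes, the standard optically thin, LTE relation is sketched below; symbols follow the usual conventions and are not specific to this chapter.

        \[
        \ln\!\left(\frac{N_u}{g_u}\right)
          = \ln\!\left(\frac{N_{\mathrm{tot}}}{Q(T_{\mathrm{ex}})}\right) - \frac{E_u}{k_B T_{\mathrm{ex}}},
        \qquad
        N_u = \frac{4\pi\, F_{ul}}{A_{ul}\, h\, \nu_{ul}\, \Omega_s},
        \]

    Plotting ln(N_u/g_u) against E_u for a ladder of lines gives a straight line whose slope is -1/(k_B T_ex) and whose intercept yields the total column density; optical depth and non-LTE effects bend or scatter the points, which is the caveat raised in the abstract.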

  3. Enabling model customization and integration

    Science.gov (United States)

    Park, Minho; Fishwick, Paul A.

    2003-09-01

    Until fairly recently, the ideas of dynamic model content and presentation were treated synonymously. For example, if one were to take a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, then one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language, and then the model can be represented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, to the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration - the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.

  4. Model-Agnostic Interpretability of Machine Learning

    OpenAIRE

    Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos

    2016-01-01

    Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred f...
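
    As a generic illustration of the model-agnostic idea (treating the fitted model purely as a black box), the sketch below estimates feature importance by permuting one feature at a time and measuring the drop in held-out accuracy. This is a standard technique chosen for brevity, not necessarily the method proposed by the authors, and the dataset is just a convenient scikit-learn example.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Any black-box classifier will do; the interpretation step never looks inside it.
        model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
        baseline = model.score(X_te, y_te)

        rng = np.random.default_rng(0)
        importance = []
        for j in range(X_te.shape[1]):
            X_perm = X_te.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature-target link
            importance.append(baseline - model.score(X_perm, y_te))

        for j in np.argsort(importance)[::-1][:5]:
            print(f"feature {j}: accuracy drop {importance[j]:.3f}")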

  5. Modeling and interpretation of images*

    Directory of Open Access Journals (Sweden)

    Min Michiel

    2015-01-01

    Full Text Available Imaging protoplanetary disks is a challenging but rewarding task. It is challenging because of the glare of the central star outshining the weak signal from the disk at shorter wavelengths and because of the limited spatial resolution at longer wavelengths. It is rewarding because it contains a wealth of information on the structure of the disks and can directly probe things like gaps and spiral structure. Because it is so challenging, telescopes are often pushed to their limits to get a signal. Proper interpretation of these images therefore requires intimate knowledge of the instrumentation, the detection method, and the image processing steps. In this chapter I will give some examples and stress some issues that are important when interpreting images from protoplanetary disks.

  6. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
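
    A minimal sketch of the kind of post-estimation output the article recommends, predicted probabilities and average marginal effects from a multinomial logit, using the statsmodels library and synthetic data (the variable names are invented stand-ins, not the entry-mode data analysed in the paper):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic data: a 3-category outcome (e.g., entry mode) and two predictors.
        rng = np.random.default_rng(1)
        n = 500
        df = pd.DataFrame({"experience": rng.normal(size=n), "size": rng.normal(size=n)})
        utility = np.column_stack([np.zeros(n),
                                   0.8 * df.experience - 0.3 * df["size"],
                                   -0.5 * df.experience + 0.6 * df["size"]])
        probs = np.exp(utility) / np.exp(utility).sum(axis=1, keepdims=True)
        df["choice"] = [rng.choice(3, p=p) for p in probs]

        X = sm.add_constant(df[["experience", "size"]])
        res = sm.MNLogit(df["choice"], X).fit(disp=False)

        # Predicted probabilities over a range of one predictor, others held at their means:
        # the graphical representation the article recommends.
        grid = np.column_stack([np.ones(5),
                                np.linspace(-2, 2, 5),
                                np.full(5, df["size"].mean())])
        print(res.predict(grid))            # one probability column per outcome

        # Average marginal effects on each outcome probability.
        print(res.get_margeff().summary())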

  7. Realising the Uncertainty Enabled Model Web

    Science.gov (United States)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address

  8. Interpretation of commonly used statistical regression models.

    Science.gov (United States)

    Kasza, Jessica; Wolfe, Rory

    2014-01-01

    A review of some regression models commonly used in respiratory health applications is provided in this article. Simple linear regression, multiple linear regression, logistic regression and ordinal logistic regression are considered. The focus of this article is on the interpretation of the regression coefficients of each model, which are illustrated through the application of these models to a respiratory health research study. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
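
    A small worked example of the coefficient interpretation this review is concerned with, using simulated data rather than the respiratory study it draws on: for logistic regression, exponentiating a coefficient gives an odds ratio.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated respiratory-style data: smoking roughly triples the odds of symptoms.
        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({"smoker": rng.integers(0, 2, n), "age": rng.uniform(20, 80, n)})
        logit_p = -2.0 + np.log(3.0) * df.smoker + 0.02 * (df.age - 50)
        df["symptoms"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        res = smf.logit("symptoms ~ smoker + age", data=df).fit(disp=False)

        # exp(coefficient) is the odds ratio, the usual interpretation of a logistic model.
        odds_ratios = np.exp(res.params)
        print(odds_ratios)   # the 'smoker' entry should be close to 3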

  9. Environmental Models as a Service: Enabling Interoperability ...

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
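
    The verb-noun pattern described above can be sketched with a small web framework. This is an illustrative toy, assuming Flask; the resource names, payload, and the trivial "model" are invented and do not correspond to any particular agency's API.

        from flask import Flask, jsonify, request

        app = Flask(__name__)
        runs = {}  # in-memory store of model runs, keyed by id

        # Noun: /models/<name>/runs   Verb: POST consumes inputs and creates a run.
        @app.route("/models/<name>/runs", methods=["POST"])
        def create_run(name):
            run_id = len(runs) + 1
            inputs = request.get_json(force=True)
            # A real service would dispatch the named model here; we just echo a result.
            runs[run_id] = {"model": name, "inputs": inputs,
                            "output": sum(inputs.get("values", []))}
            return jsonify({"id": run_id}), 201

        # Verb: GET retrieves the stored resource identified by the URL (the noun).
        @app.route("/models/<name>/runs/<int:run_id>", methods=["GET"])
        def get_run(name, run_id):
            return jsonify(runs.get(run_id, {})), 200

        if __name__ == "__main__":
            app.run(port=8080)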

  10. Adapting Modeling & Simulation for Network Enabled Operations

    Science.gov (United States)

    2011-03-01


  11. Modeling-Enabled Systems Nutritional Immunology

    Science.gov (United States)

    Verma, Meghna; Hontecillas, Raquel; Abedi, Vida; Leber, Andrew; Tubau-Juni, Nuria; Philipson, Casandra; Carbo, Adria; Bassaganya-Riera, Josep

    2016-01-01

    This review highlights the fundamental role of nutrition in the maintenance of health, the immune response, and disease prevention. Emerging global mechanistic insights in the field of nutritional immunology cannot be gained through reductionist methods alone or by analyzing a single nutrient at a time. We propose to investigate nutritional immunology as a massively interacting system of interconnected multistage and multiscale networks that encompass hidden mechanisms by which nutrition, microbiome, metabolism, genetic predisposition, and the immune system interact to delineate health and disease. The review sets an unconventional path to apply complex science methodologies to nutritional immunology research, discovery, and development through “use cases” centered around the impact of nutrition on the gut microbiome and immune responses. Our systems nutritional immunology analyses, which include modeling and informatics methodologies in combination with pre-clinical and clinical studies, have the potential to discover emerging systems-wide properties at the interface of the immune system, nutrition, microbiome, and metabolism. PMID:26909350

  12. Modeling-Enabled Systems Nutritional Immunology

    Directory of Open Access Journals (Sweden)

    Meghna eVerma

    2016-02-01

    Full Text Available This review highlights the fundamental role of nutrition in the maintenance of health, the immune response and disease prevention. Emerging global mechanistic insights in the field of nutritional immunology cannot be gained through reductionist methods alone or by analyzing a single nutrient at a time. We propose to investigate nutritional immunology as a massively interacting system of interconnected multistage and multiscale networks that encompass hidden mechanisms by which nutrition, microbiome, metabolism, genetic predisposition and the immune system interact to delineate health and disease. The review sets an unconventional path to applying complex science methodologies to nutritional immunology research, discovery and development through ‘use cases’ centered around the impact of nutrition on the gut microbiome and immune responses. Our systems nutritional immunology analyses, which include modeling and informatics methodologies in combination with pre-clinical and clinical studies, have the potential to discover emerging systems-wide properties at the interface of the immune system, nutrition, microbiome, and metabolism.

  13. Green communication: The enabler to multiple business models

    DEFF Research Database (Denmark)

    Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv

    2010-01-01

    Companies stand at the forefront of a new business model reality with new potentials that will change their basic understanding and practice of running their business models radically. One of the drivers of this change is green communication, its strong relation to green business models, and its possibility to enable lower energy consumption. This paper shows how green communication enables innovation of green business models and multiple business models running simultaneously in different markets to different customers.

  14. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke.

    Science.gov (United States)

    Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C

    2012-06-01

    To enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Cohort study. Rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left hemisphere lesion, and 269 [43%] right hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left hemisphere lesion, and 93 [39%] right hemisphere lesion) undergoing rehabilitation after stroke. Not applicable. Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. The RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated. The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  15. An expert system for dispersion model interpretation

    International Nuclear Information System (INIS)

    Skyllingstad, E.D.; Ramsdell, J.V.

    1988-10-01

    A prototype expert system designed to diagnose dispersion model uncertainty is described in this paper with application to a puff transport model. The system obtains qualitative information from the model user and through an expert-derived knowledge base, performs a rating of the current simulation. These results can then be used in combination with dispersion model output for deciding appropriate evacuation measures. Ultimately, the goal of this work is to develop an expert system that may be operated accurately by an individual uneducated in meteorology or dispersion modeling. 5 refs., 3 figs

  16. Lumped parameter models for the interpretation of environmental tracer data

    International Nuclear Information System (INIS)

    Maloszewski, P.; Zuber, A.

    1996-01-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs
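
    For orientation, the lumped-parameter approach relates input and output tracer concentrations through a convolution with a model-specific transit-time (weighting) function. A minimal sketch of the standard formulation, showing only the two simplest weighting functions (piston flow and exponential):

        \[
        C_{\mathrm{out}}(t) = \int_0^{\infty} C_{\mathrm{in}}(t-\tau)\, g(\tau)\, e^{-\lambda\tau}\, d\tau,
        \qquad
        g_{\mathrm{PFM}}(\tau) = \delta(\tau - T),
        \qquad
        g_{\mathrm{EM}}(\tau) = \frac{1}{T}\, e^{-\tau/T},
        \]

    Here T is the mean transit time (the fitted parameter), λ is the decay constant of the tracer (zero for stable tracers), and g(τ) is the weighting function that distinguishes the PFM, EM, EPM, LPM and DM variants listed in the abstract.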

  17. Lumped parameter models for the interpretation of environmental tracer data

    Energy Technology Data Exchange (ETDEWEB)

    Maloszewski, P [GSF-Inst. for Hydrology, Oberschleissheim (Germany); Zuber, A [Institute of Nuclear Physics, Cracow (Poland)

    1996-10-01

    Principles of the lumped-parameter approach to the interpretation of environmental tracer data are given. The following models are considered: the piston flow model (PFM), exponential flow model (EM), linear model (LM), combined piston flow and exponential flow model (EPM), combined linear flow and piston flow model (LPM), and dispersion model (DM). The applicability of these models for the interpretation of different tracer data is discussed for a steady state flow approximation. Case studies are given to exemplify the applicability of the lumped-parameter approach. Description of a user-friendly computer program is given. (author). 68 refs, 25 figs, 4 tabs.

  18. BIM-enabled Conceptual Modelling and Representation of Building Circulation

    OpenAIRE

    Lee, Jin Kook; Kim, Mi Jeong

    2014-01-01

    This paper describes how a building information modelling (BIM)-based approach for building circulation enables us to change the process of building design in terms of its computational representation and processes, focusing on the conceptual modelling and representation of circulation within buildings. BIM has been designed for use by several BIM authoring tools, in particular with the widely known interoperable industry foundation classes (IFCs), which follow an object-oriented data modelli...

  19. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  20. Model-Based Integration and Interpretation of Data

    DEFF Research Database (Denmark)

    Petersen, Johannes

    2004-01-01

    Data integration and interpretation plays a crucial role in supervisory control. The paper defines a set of generic inference steps for the data integration and interpretation process based on a three-layer model of system representations. The three-layer model is used to clarify the combination of constraint and object-centered representations of the work domain, throwing new light on the basic principles underlying the data integration and interpretation process of Rasmussen's abstraction hierarchy as well as other model-based approaches combining constraint and object-centered representations. Based...

  1. Interpreting, measuring, and modeling soil respiration

    Science.gov (United States)

    Michael G. Ryan; Beverly E. Law

    2005-01-01

    This paper reviews the role of soil respiration in determining ecosystem carbon balance, and the conceptual basis for measuring and modeling soil respiration. We developed it to provide background and context for this special issue on soil respiration and to synthesize the presentations and discussions at the workshop. Soil respiration is the largest component of...

  2. Graphical interpretation of numerical model results

    International Nuclear Information System (INIS)

    Drewes, D.R.

    1979-01-01

    Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements

  3. Interpreting Stone's model of Berry phases

    International Nuclear Information System (INIS)

    Carra, Paolo

    2004-01-01

    We show that a simple quantum-mechanical model, put forward by Stone some time ago, affords a description of site magnetoelectricity, a phenomenon which takes place in crystals (and molecular systems) when space inversion is locally broken and coexistence of electric and magnetic moments is permitted by the site point group. We demonstrate this by identifying a local order parameter, which is odd under both space inversion and time reversal. This order parameter (a magnetic quadrupole) characterizes Stone's ground state. Our results indicate that the model, extended to a lattice of sites, could be relevant to the study of electronic properties of transition-metal oxides. A generalization of Stone's Hamiltonian to cover cases of different symmetry is also discussed. (letter to the editor)

  4. Superconnections: an interpretation of the standard model

    Directory of Open Access Journals (Sweden)

    Gert Roepstorff

    2000-07-01

    Full Text Available The mathematical framework of superbundles as pioneered by D. Quillen suggests that one consider the Higgs field as a natural constituent of a superconnection. I propose to take as superbundle the exterior algebra obtained from a Hermitian vector bundle of rank n where n=2 for the electroweak theory and n=5 for the full Standard Model. The present setup is similar to but avoids the use of non-commutative geometry.

  5. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
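
    A rough sketch of the clustering step that CROM builds on, assuming (in line with the published cluster-based reduced-order modelling literature) that snapshots are grouped by k-means and that transitions between cluster labels are counted to form a Markov transition matrix. The random data stand in for flow snapshots; the sparsity-enabled sensing discussed in the paper is not reproduced here.

        import numpy as np
        from sklearn.cluster import KMeans

        # Stand-in "snapshots": rows are time samples of a high-dimensional state.
        rng = np.random.default_rng(0)
        snapshots = rng.normal(size=(1000, 50))

        # Step 1: coarse-grain the state space into k clusters (centroids).
        k = 5
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(snapshots)

        # Step 2: estimate the cluster transition (Markov) matrix
        # P[i, j] = probability of moving from cluster i to cluster j in one time step.
        P = np.zeros((k, k))
        for a, b in zip(labels[:-1], labels[1:]):
            P[a, b] += 1
        P /= np.maximum(P.sum(axis=1, keepdims=True), 1)

        print(np.round(P, 2))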

  6. Factors affecting strategic plan implementation using interpretive structural modeling (ISM).

    Science.gov (United States)

    Bahadori, Mohammadkarim; Teymourzadeh, Ehsan; Tajik, Hamidreza; Ravangard, Ramin; Raadabadi, Mehdi; Hosseini, Seyed Mojtaba

    2018-06-11

    Purpose Strategic planning is the best tool for managers seeking an informed presence and participation in the market without surrendering to changes. Strategic planning enables managers to achieve their organizational goals and objectives. Hospital goals, such as improving service quality and increasing patient satisfaction cannot be achieved if agreed strategies are not implemented. The purpose of this paper is to investigate the factors affecting strategic plan implementation in one teaching hospital using interpretive structural modeling (ISM). Design/methodology/approach The authors used a descriptive study involving experts and senior managers; 16 were selected as the study sample using a purposive sampling method. Data were collected using a questionnaire designed and prepared based on previous studies. Data were analyzed using ISM. Findings Five main factors affected strategic plan implementation. Although all five variables and factors are top level, "senior manager awareness and participation in the strategic planning process" and "creating and maintaining team participation in the strategic planning process" had maximum drive power. "Organizational structure effects on the strategic planning process" and "Organizational culture effects on the strategic planning process" had maximum dependence power. Practical implications Identifying factors affecting strategic plan implementation is a basis for healthcare quality improvement by analyzing the relationship among factors and overcoming the barriers. Originality/value The authors used ISM to analyze the relationship between factors affecting strategic plan implementation.

  7. Risk analysis: divergent models and convergent interpretations

    Science.gov (United States)

    Carnes, B. A.; Gavrilova, N.

    2001-01-01

    Material presented at a NASA-sponsored workshop on risk models for exposure conditions relevant to prolonged space flight are described in this paper. Analyses used mortality data from experiments conducted at Argonne National Laboratory on the long-term effects of external whole-body irradiation on B6CF1 mice by 60Co gamma rays and fission neutrons delivered as a single exposure or protracted over either 24 or 60 once-weekly exposures. The maximum dose considered was restricted to 1 Gy for neutrons and 10 Gy for gamma rays. Proportional hazard models were used to investigate the shape of the dose response at these lower doses for deaths caused by solid-tissue tumors and tumors of either connective or epithelial tissue origin. For protracted exposures, a significant mortality effect was detected at a neutron dose of 14 cGy and a gamma-ray dose of 3 Gy. For single exposures, radiation-induced mortality for neutrons also occurred within the range of 10-20 cGy, but dropped to 86 cGy for gamma rays. Plots of risk relative to control estimated for each observed dose gave a visual impression of nonlinearity for both neutrons and gamma rays. At least for solid-tissue tumors, male and female mortality was nearly identical for gamma-ray exposures, but mortality risks for females were higher than for males for neutron exposures. As expected, protracting the gamma-ray dose reduced mortality risks. Although curvature consistent with that observed visually could be detected by a model parameterized to detect curvature, a relative risk term containing only a simple term for total dose was usually sufficient to describe the dose response. Although detectable mortality for the three pathology end points considered typically occurred at the same level of dose, the highest risks were almost always associated with deaths caused by tumors of epithelial tissue origin.
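
    For readers unfamiliar with the modelling framework mentioned here, a minimal proportional hazards example using the lifelines package and simulated dose-response data (not the Argonne mouse data analysed in the paper):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        # Simulated survival data: higher dose shortens time to event.
        rng = np.random.default_rng(0)
        n = 1000
        dose = rng.uniform(0, 1, n)                      # e.g., dose in Gy
        time = rng.exponential(scale=20 * np.exp(-0.8 * dose))
        observed = time < 30                             # administrative censoring at t = 30
        df = pd.DataFrame({"dose": dose,
                           "time": np.minimum(time, 30),
                           "event": observed.astype(int)})

        # Proportional hazards: the log-hazard is linear in dose, so exp(coef) is the
        # hazard ratio per unit dose.
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        cph.print_summary()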

  8. Conceptual design interpretations, mindset and models

    CERN Document Server

    Andreasen, Mogens Myrup; Cash, Philip

    2015-01-01

    Maximising reader insights into the theory, models, methods and fundamental reasoning of design, this book addresses design activities in industrial settings, as well as the actors involved. This approach offers readers a new understanding of design activities and related functions, properties and dispositions. Presenting a ‘design mindset’ that seeks to empower students, researchers, and practitioners alike, it features a strong focus on how designers create new concepts to be developed into products, and how they generate new business and satisfy human needs.   Employing a multi-faceted perspective, the book supplies the reader with a comprehensive worldview of design in the form of a proposed model that will empower their activities as student, researcher or practitioner. We draw the reader into the core role of design conceptualisation for society, for the development of industry, for users and buyers of products, and for citizens in relation to public systems. The book also features original con...

  9. Philosophical perspectives on quantum chaos: Models and interpretations

    Science.gov (United States)

    Bokulich, Alisa Nicole

    2001-09-01

    The problem of quantum chaos is a special case of the larger problem of understanding how the classical world emerges from quantum mechanics. While we have learned that chaos is pervasive in classical systems, it appears to be almost entirely absent in quantum systems. The aim of this dissertation is to determine what implications the interpretation of quantum mechanics has for attempts to explain the emergence of classical chaos. There are three interpretations of quantum mechanics that have set out programs for solving the problem of quantum chaos: the standard interpretation, the statistical interpretation, and the deBroglie-Bohm causal interpretation. One of the main conclusions of this dissertation is that an interpretation alone is insufficient for solving the problem of quantum chaos and that the phenomenon of decoherence must be taken into account. Although a completely satisfactory solution of the problem of quantum chaos is still outstanding, I argue that the deBroglie-Bohm interpretation with the help of decoherence outlines the most promising research program to pursue. In addition to making a contribution to the debate in the philosophy of physics concerning the interpretation of quantum mechanics, this dissertation reveals two important methodological lessons for the philosophy of science. First, issues of reductionism and intertheoretic relations cannot be divorced from questions concerning the interpretation of the theories involved. Not only is the exploration of intertheoretic relations a central part of the articulation and interpretation of an individual theory, but the very terms used to discuss intertheoretic relations, such as `state' and `classical limit', are themselves defined by particular interpretations of the theory. The second lesson that emerges is that, when it comes to characterizing the relationship between classical chaos and quantum mechanics, the traditional approaches to intertheoretic relations, namely reductionism and

  10. Formal Modeling and Verification of Opportunity-enabled Risk Management

    OpenAIRE

    Aldini, Alessandro; Seigneur, Jean-Marc; Ballester Lafuente, Carlos; Titi, Xavier; Guislain, Jonathan

    2015-01-01

    With the advent of the Bring-Your-Own-Device (BYOD) trend, mobile work is achieving a widespread diffusion that challenges the traditional view of security standard and risk management. A recently proposed model, called opportunity-enabled risk management (OPPRIM), aims at balancing the analysis of the major threats that arise in the BYOD setting with the analysis of the potential increased opportunities emerging in such an environment, by combining mechanisms of risk estimation with trust an...

  11. Extended Smoluchowski models for interpreting relaxation phenomena in liquids

    International Nuclear Information System (INIS)

    Polimeno, A.; Frezzato, D.; Saielli, G.; Moro, G.J.; Nordio, P.L.

    1998-01-01

    Interpretation of the dynamical behaviour of single molecules or collective modes in liquids has been increasingly centered, in the last decade, on complex liquid systems, including ionic solutions, polymeric liquids, supercooled fluids and liquid crystals. This has been made necessary by the need to interpret dynamical data obtained by advanced experiments, like optical Kerr effect and time-dependent fluorescence shift experiments, two-dimensional Fourier-transform and high-field electron spin resonance, and scattering experiments like quasi-elastic neutron scattering. This communication is centered on the definition, treatment and application of several extended stochastic models, which have proved to be very effective tools for interpreting and rationalizing complex relaxation phenomena in liquid structures. First, applications of standard Fokker-Planck equations for the orientational relaxation of molecules in isotropic and ordered liquid phases are reviewed. In particular, attention is focused on the interpretation of neutron scattering in nematics. Next, an extended stochastic model is used to interpret time-domain resolved fluorescence emission experiments. A two-body stochastic model allows the theoretical interpretation of dynamical Stokes shift effects in fluorescence emission spectra, performed on probes in isotropic and ordered polar phases. Finally, for the case of isotropic fluids made of small rigid molecules, a very detailed model is considered, which includes as basic ingredients a Fokker-Planck description of the molecular vibrational motion and the slow diffusive motion of a persistent cage structure, together with the decay processes related to the changing structure of the cage. (author)

  12. BIM-Enabled Conceptual Modelling and Representation of Building Circulation

    Directory of Open Access Journals (Sweden)

    Jin Kook Lee

    2014-08-01

    Full Text Available This paper describes how a building information modelling (BIM)-based approach for building circulation enables us to change the process of building design in terms of its computational representation and processes, focusing on the conceptual modelling and representation of circulation within buildings. BIM has been designed for use by several BIM authoring tools, in particular with the widely known interoperable industry foundation classes (IFCs), which follow an object-oriented data modelling methodology. Advances in BIM authoring tools, using space objects and their relations defined in an IFC's schema, have made it possible to model, visualize and analyse circulation within buildings prior to their construction. Agent-based circulation has long been an interdisciplinary topic of research across several areas, including design computing, computer science, architectural morphology, human behaviour and environmental psychology. Such conventional approaches to building circulation are centred on navigational knowledge about built environments, and represent specific circulation paths and regulations. This paper, however, places emphasis on the use of ‘space objects’ in BIM-enabled design processes rather than on circulation agents, the latter of which are not defined in the IFCs' schemas. By introducing and reviewing some associated research and projects, this paper also surveys how such a circulation representation is applicable to the analysis of building circulation-related rules.

  13. DEFINE: A Service-Oriented Dynamically Enabling Function Model

    Directory of Open Access Journals (Sweden)

    Tan Wei-Yi

    2017-01-01

    In this paper, we introduce an innovative Dynamically Enable Function In Network Equipment (DEFINE) model to allow tenants to get network services quickly. First, DEFINE decouples an application into different functional components and connects these components in a reconfigurable way. Second, DEFINE provides a programmable interface to third parties, who can develop their own processing modules according to their own needs. To verify the effectiveness of this model, we set up an evaluation network with an FPGA-based OpenFlow switch prototype and deployed several applications on it. Our results show that DEFINE has excellent flexibility and performance.

  14. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  15. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    traditional limitations and potentially achieving better performance. The vision is that networks should be able to monitor themselves, reason upon changes in self and environment, act towards the achievement of specific goals, and learn from experience. The concept of a Cognitive Engine (CE) supporting cognitive functions, as part of network elements, enabling the above-said autonomic capabilities is gathering attention. Awareness of the self and the world is an important aspect of a cognitive engine being autonomic. This is achieved through embedding their models in the engine, but the complexity... of the cognitive engine that incorporates a context space based information structure into its knowledge model. I propose a set of guiding principles behind a cognitive system to be autonomic and use them with additional requirements to build a detailed architecture for the cognitive engine. I define a context space...

  16. Cognitive Models of Professional Communication Discourse on Teaching the Interpreters

    Directory of Open Access Journals (Sweden)

    Moshchanskaya Y. Y.

    2012-01-01

    Full Text Available The paper is devoted to the discourse of professional institutional communication and its modeling for training interpreters. The aim of the study is to analyze the cognitive models of the above discourse in relation to the present stage of development of cognitive linguistics. The author concludes by emphasizing the paradigmatic and syntagmatic orientation of the selected cognitive models and outlines the constant and variable factors for developing a didactic model of the professional communication discourse. The paper presents a discourse-analysis model of professional communication based on a systematic approach and designed for the case study of mediated communication. The obtained results can be used for training both interpreters and other professionals for whom discursive competence is a key one.

  17. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from 11 onwards) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.
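
    The same model family can also be fitted outside SPSS. As a hedged analogue (using Python's statsmodels rather than the SPSS procedure the article describes), here are the random-intercept and random-coefficients models from the list above, fitted to simulated pupils-in-schools data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated two-level data: pupils (rows) nested in schools (groups).
        rng = np.random.default_rng(0)
        schools = np.repeat(np.arange(30), 25)
        school_effect = rng.normal(0, 2, 30)[schools]
        x = rng.normal(size=schools.size)
        y = 50 + school_effect + 3 * x + rng.normal(0, 5, schools.size)
        df = pd.DataFrame({"y": y, "x": x, "school": schools})

        # (1) One-way ANOVA with random effects: random intercept only.
        m1 = smf.mixedlm("y ~ 1", df, groups=df["school"]).fit()
        print(m1.summary())

        # (4) Regression with random coefficients: random intercept and random slope for x.
        m4 = smf.mixedlm("y ~ x", df, groups=df["school"], re_formula="~x").fit()
        print(m4.summary())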

  18. Development of interpretation models for PFN uranium log analysis

    International Nuclear Information System (INIS)

    Barnard, R.W.

    1980-11-01

    This report presents the models for interpretation of borehole logs for the PFN (Prompt Fission Neutron) uranium logging system. Two models have been developed, the counts-ratio model and the counts/dieaway model. Both are empirically developed, but can be related to the theoretical bases for PFN analysis. The models try to correct for the effects of external factors (such as probe or formation parameters) in the calculation of uranium grade. The theoretical bases and calculational techniques for estimating uranium concentration from raw PFN data and other parameters are discussed. Examples and discussions of borehole logs are included

  19. Reduced ENSO Variability at the LGM Revealed by an Isotope-Enabled Earth System Model

    Science.gov (United States)

    Zhu, Jiang; Liu, Zhengyu; Brady, Esther; Otto-Bliesner, Bette; Zhang, Jiaxu; Noone, David; Tomas, Robert; Nusbaumer, Jesse; Wong, Tony; Jahn, Alexandra

    2017-01-01

    Studying the El Niño-Southern Oscillation (ENSO) in the past can help us better understand its dynamics and improve its future projections. However, both paleoclimate reconstructions and model simulations of ENSO strength at the Last Glacial Maximum (LGM; 21 ka B.P.) have led to contradictory results. Here we perform model simulations using the recently developed water isotope-enabled Community Earth System Model (iCESM). For the first time, model-simulated oxygen isotopes are directly compared with those from ENSO reconstructions using individual foraminifera analysis (IFA). We find that the LGM ENSO is most likely weaker compared with the preindustrial. The iCESM suggests that the total variance of the IFA records may only reflect changes in the annual cycle instead of ENSO variability as previously assumed. Furthermore, the interpretation of subsurface IFA records can be substantially complicated by the habitat depth of thermocline-dwelling foraminifera and their vertical migration with a temporally varying thermocline.

  20. Radiation transport modelling for the interpretation of oblique ECE measurements

    Directory of Open Access Journals (Sweden)

    Denk Severin S.

    2017-01-01

    Since radiation transport modelling is required for the interpretation of oblique ECE diagnostics, we present in this paper an extended forward model that supports oblique lines of sight. To account for the refraction of the line of sight, ray tracing in the cold plasma approximation was added to the model. Furthermore, an absorption coefficient valid for arbitrary propagation was implemented. Using the revised model, it is shown that for the oblique ECE Imaging diagnostic at ASDEX Upgrade there can be a significant difference between the cold resonance position and the point from which most of the observed radiation originates.
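
    As background for the forward modelling discussed above, the radiation transport along each (refracted) ray follows the standard transfer equation; a minimal sketch with conventional symbols, not the specific notation of the paper:

        \[
        \frac{dI_\omega}{ds} = j_\omega - \alpha_\omega\, I_\omega ,
        \]

    where s is the path length along the traced ray, j_ω is the emissivity and α_ω the absorption coefficient (here valid for arbitrary propagation angle). Integrating this along the oblique line of sight yields the observed radiation temperature, which is why the apparent emission point can differ from the cold resonance position.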

  1. Life course models: improving interpretation by consideration of total effects.

    Science.gov (United States)

    Green, Michael J; Popham, Frank

    2017-06-01

    Life course epidemiology has used models of accumulation and critical or sensitive periods to examine the importance of exposure timing in disease aetiology. These models are usually used to describe the direct effects of exposures over the life course. In comparison with consideration of direct effects only, we show how consideration of total effects improves interpretation of these models, giving clearer notions of when it will be most effective to intervene. We show how life course variation in the total effects depends on the magnitude of the direct effects and the stability of the exposure. We discuss interpretation in terms of total, direct and indirect effects and highlight the causal assumptions required for conclusions as to the most effective timing of interventions. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association.

  2. Interpretation of searches for supersymmetry with simplified models

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; et al. (CMS Collaboration)
U.; Mulders, M.; Musella, P.; Nesvold, E.; Orsini, L.; Palencia Cortezon, E.; Perez, E.; Perrozzi, L.; Petrilli, A.; Pfeiffer, A.; Pierini, M.; Pimiä, M.; Piparo, D.; Polese, G.; Quertenmont, L.; Racz, A.; Reece, W.; Rodrigues Antunes, J.; Rolandi, G.; Rovelli, C.; Rovere, M.; Sakulin, H.; Santanastasio, F.; Schäfer, C.; Schwick, C.; Segoni, I.; Sekmen, S.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Tsirou, A.; Veres, G. I.; Vlimant, J. R.; Wöhri, H. K.; Worm, S. D.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Gabathuler, K.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; König, S.; Kotlinski, D.; Langenegger, U.; Meier, F.; Renker, D.; Rohe, T.; Bäni, L.; Bortignon, P.; Buchmann, M. A.; Casal, B.; Chanon, N.; Deisher, A.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Eugster, J.; Freudenreich, K.; Grab, C.; Hits, D.; Lecomte, P.; Lustermann, W.; Marini, A. C.; Martinez Ruiz del Arbol, P.; Mohr, N.; Moortgat, F.; Nägeli, C.; Nef, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pape, L.; Pauss, F.; Peruzzi, M.; Ronga, F. J.; Rossini, M.; Sala, L.; Sanchez, A. K.; Starodumov, A.; Stieger, B.; Takahashi, M.; Tauscher, L.; Thea, A.; Theofilatos, K.; Treille, D.; Urscheler, C.; Wallny, R.; Weber, H. A.; Wehrli, L.; Amsler, C.; Chiochia, V.; De Visscher, S.; Favaro, C.; Ivova Rikova, M.; Kilminster, B.; Millan Mejias, B.; Otiougova, P.; Robmann, P.; Snoek, H.; Tupputi, S.; Verzetti, M.; Chang, Y. H.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Li, S. W.; Lin, W.; Lu, Y. J.; Singh, A. P.; Volpe, R.; Yu, S. S.; Bartalini, P.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Dietz, C.; Grundler, U.; Hou, W. -S.; Hsiung, Y.; Kao, K. Y.; Lei, Y. J.; Lu, R. -S.; Majumder, D.; Petrakou, E.; Shi, X.; Shiu, J. G.; Tzeng, Y. M.; Wan, X.; Wang, M.; Asavapibhop, B.; Srimanobhas, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Karaman, T.; Karapinar, G.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sogut, K.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, L. N.; Vergili, M.; Akin, I. V.; Aliev, T.; Bilin, B.; Bilmis, S.; Deniz, M.; Gamsizkan, H.; Guler, A. M.; Ocalan, K.; Ozpineci, A.; Serin, M.; Sever, R.; Surat, U. E.; Yalvac, M.; Yildirim, E.; Zeyrek, M.; Gülmez, E.; Isildak, B.; Kaya, M.; Kaya, O.; Ozkorucuklu, S.; Sonmez, N.; Cankocak, K.; Levchuk, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Frazier, R.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Kreczko, L.; Metson, S.; Newbold, D. M.; Nirunpong, K.; Poll, A.; Senkin, S.; Smith, V. J.; Williams, T.; Basso, L.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Jackson, J.; Kennedy, B. W.; Olaiya, E.; Petyt, D.; Radburn-Smith, B. C.; Shepherd-Themistocleous, C. H.; Tomalin, I. R.; Womersley, W. J.; Bainbridge, R.; Ball, G.; Beuselinck, R.; Buchmuller, O.; Colling, D.; Cripps, N.; Cutajar, M.; Dauncey, P.; Davies, G.; Della Negra, M.; Ferguson, W.; Fulcher, J.; Futyan, D.; Gilbert, A.; Guneratne Bryer, A.; Hall, G.; Hatherell, Z.; Hays, J.; Iles, G.; Jarvis, M.; Karapostoli, G.; Lyons, L.; Magnan, A. -M.; Marrouche, J.; Mathias, B.; Nandi, R.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Pioppi, M.; Raymond, D. M.; Rogerson, S.; Rose, A.; Ryan, M. 
J.; Seez, C.; Sharp, P.; Sparrow, A.; Stoye, M.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Wakefield, S.; Wardle, N.; Whyntie, T.; Chadwick, M.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Martin, W.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Hatakeyama, K.; Liu, H.; Scarborough, T.; Charaf, O.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Heister, A.; St. John, J.; Lawson, P.; Lazic, D.; Rohlf, J.; Sperka, D.; Sulak, L.; Alimena, J.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Jabeen, S.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Nguyen, D.; Segala, M.; Sinthuprasith, T.; Speer, T.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Dolen, J.; Erbacher, R.; Gardner, M.; Houtz, R.; Ko, W.; Kopecky, A.; Lander, R.; Mall, O.; Miceli, T.; Pellett, D.; Ricci-Tam, F.; Rutherford, B.; Searle, M.; Smith, J.; Squires, M.; Tripathi, M.; Vasquez Sierra, R.; Yohay, R.; Andreev, V.; Cline, D.; Cousins, R.; Duris, J.; Erhan, S.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Jarvis, C.; Rakness, G.; Schlein, P.; Traczyk, P.; Valuev, V.; Weber, M.; Babb, J.; Clare, R.; Dinardo, M. E.; Ellison, J.; Gary, J. W.; Giordano, F.; Hanson, G.; Liu, H.; Long, O. R.; Luthra, A.; Nguyen, H.; Paramesvaran, S.; Sturdy, J.; Sumowidagdo, S.; Wilken, R.; Wimpenny, S.; Andrews, W.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; Evans, D.; Holzner, A.; Kelley, R.; Lebourgeois, M.; Letts, J.; Macneill, I.; Mangano, B.; Padhi, S.; Palmer, C.; Petrucciani, G.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Sudano, E.; Tadel, M.; Tu, Y.; Vartak, A.; Wasserbaech, S.; Würthwein, F.; Yagil, A.; Yoo, J.; Barge, D.; Bellan, R.; Campagnari, C.; D’Alfonso, M.; Danielson, T.; Flowers, K.; Geffert, P.; Golf, F.; Incandela, J.; Justus, C.; Kalavase, P.; Kovalskyi, D.; Krutelyov, V.; Lowette, S.; Magaña Villalba, R.; Mccoll, N.; Pavlunin, V.; Ribnik, J.; Richman, J.; Rossin, R.; Stuart, D.; To, W.; West, C.; Apresyan, A.; Bornheim, A.; Chen, Y.; Di Marco, E.; Duarte, J.; Gataullin, M.; Ma, Y.; Mott, A.; Newman, H. B.; Rogan, C.; Spiropulu, M.; Timciuc, V.; Veverka, J.; Wilkinson, R.; Xie, S.; Yang, Y.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carroll, R.; Ferguson, T.; Iiyama, Y.; Jang, D. W.; Liu, Y. F.; Paulini, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Drell, B. R.; Ford, W. T.; Gaz, A.; Luiggi Lopez, E.; Smith, J. G.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Eggert, N.; Gibbons, L. K.; Heltsley, B.; Hopkins, W.; Khukhunaishvili, A.; Kreis, B.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Vaughan, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Burkett, K.; Butler, J. N.; Chetluru, V.; Cheung, H. W. K.; Chlebana, F.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gao, Y.; Green, D.; Gutsche, O.; Hanlon, J.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kunori, S.; Kwan, S.; Leonidopoulos, C.; Linacre, J.; Lincoln, D.; Lipton, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Mishra, K.; Mrenna, S.; Musienko, Y.; Newman-Holmes, C.; O’Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Spalding, W. 
J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitmore, J.; Wu, W.; Yang, F.; Yun, J. C.; Acosta, D.; Avery, P.; Bourilkov, D.; Chen, M.; Cheng, T.; Das, S.; De Gruttola, M.; Di Giovanni, G. P.; Dobur, D.; Drozdetskiy, A.; Field, R. D.; Fisher, M.; Fu, Y.; Furic, I. K.; Gartner, J.; Hugon, J.; Kim, B.; Konigsberg, J.; Korytov, A.; Kropivnitskaya, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Park, M.; Remington, R.; Rinkevicius, A.; Sellers, P.; Skhirtladze, N.; Snowball, M.; Yelton, J.; Zakaria, M.; Gaultney, V.; Hewamanage, S.; Lebolo, L. M.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, T.; Askew, A.; Bochenek, J.; Chen, J.; Diamond, B.; Gleyzer, S. V.; Haas, J.; Hagopian, S.; Hagopian, V.; Jenkins, M.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Dorney, B.; Hohlmann, M.; Kalakhety, H.; Vodopiyanov, I.; Yumiceva, F.; Adams, M. R.; Anghel, I. M.; Apanasevich, L.; Bai, Y.; Bazterra, V. E.; Betts, R. R.; Bucinskaite, I.; Callner, J.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Khalatyan, S.; Lacroix, F.; O’Brien, C.; Silkworth, C.; Strom, D.; Turner, P.; Varelas, N.; Akgun, U.; Albayrak, E. A.; Bilki, B.; Clarida, W.; Duru, F.; Griffiths, S.; Merlo, J. -P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Newsom, C. R.; Norbeck, E.; Onel, Y.; Ozok, F.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yetkin, T.; Yi, K.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Giurgiu, G.; Gritsan, A. V.; Guo, Z. J.; Hu, G.; Maksimovic, P.; Swartz, M.; Whitbeck, A.; Baringer, P.; Bean, A.; Benelli, G.; Kenny, R. P.; Murray, M.; Noonan, D.; Sanders, S.; Stringer, R.; Tinti, G.; Wood, J. S.; Barfuss, A. F.; Bolton, T.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Shrestha, S.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kirn, M.; Kolberg, T.; Lu, Y.; Marionneau, M.; Mignerey, A. C.; Pedro, K.; Skuja, A.; Temple, J.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Bauer, G.; Bendavid, J.; Busza, W.; Butz, E.; Cali, I. A.; Chan, M.; Dutta, V.; Gomez Ceballos, G.; Goncharov, M.; Kim, Y.; Klute, M.; Krajczar, K.; Levin, A.; Luckey, P. D.; Ma, T.; Nahn, S.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Rudolph, M.; Stephans, G. S. F.; Stöckli, F.; Sumorok, K.; Sung, K.; Velicanu, D.; Wenger, E. A.; Wolf, R.; Wyslouch, B.; Yang, M.; Yilmaz, Y.; Yoon, A. S.; Zanetti, M.; Zhukova, V.; Cooper, S. I.; Dahmes, B.; De Benedetti, A.; Franzoni, G.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Sasseville, M.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Cremaldi, L. M.; Kroeger, R.; Perera, L.; Rahmat, R.; Sanders, D. A.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Eads, M.; Keller, J.; Kravchenko, I.; Lazo-Flores, J.; Malik, S.; Snow, G. R.; Godshalk, A.; Iashvili, I.; Jain, S.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Haley, J.; Nash, D.; Orimoto, T.; Trocino, D.; Wood, D.; Zhang, J.; Anastassov, A.; Hahn, K. A.; Kubik, A.; Lusito, L.; Mucia, N.; Odell, N.; Ofierzynski, R. A.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Velasco, M.; Won, S.; Antonelli, L.; Berry, D.; Brinkerhoff, A.; Chan, K. M.; Hildreth, M.; Jessop, C.; Karmgard, D. 
J.; Kolb, J.; Lannon, K.; Luo, W.; Lynch, S.; Marinelli, N.; Morse, D. M.; Pearson, T.; Planer, M.; Ruchti, R.; Slaunwhite, J.; Valls, N.; Wayne, M.; Wolf, M.; Bylsma, B.; Durkin, L. S.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Puigh, D.; Rodenburg, M.; Vuosalo, C.; Williams, G.; Winer, B. L.; Berry, E.; Elmer, P.; Halyo, V.; Hebda, P.; Hegeman, J.; Hunt, A.; Jindal, P.; Koay, S. A.; Lopes Pegna, D.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Raval, A.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Brownson, E.; Lopez, A.; Mendez, H.; Ramirez Vargas, J. E.; Alagoz, E.; Barnes, V. E.; Benedetti, D.; Bolla, G.; Bortoletto, D.; De Mattia, M.; Everett, A.; Hu, Z.; Jones, M.; Koybasi, O.; Kress, M.; Laasanen, A. T.; Leonardo, N.; Maroussov, V.; Merkel, P.; Miller, D. H.; Neumeister, N.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Vidal Marono, M.; Yoo, H. D.; Zablocki, J.; Zheng, Y.; Guragain, S.; Parashar, N.; Adair, A.; Akgun, B.; Boulahouache, C.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Chung, Y. S.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Miner, D. C.; Vishnevskiy, D.; Zielinski, M.; Bhatti, A.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Lungu, G.; Malik, S.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Rekovic, V.; Robles, J.; Rose, K.; Salur, S.; Schnetzer, S.; Seitz, C.; Somalwar, S.; Stone, R.; Thomas, S.; Walker, M.; Cerizza, G.; Hollingsworth, M.; Spanier, S.; Yang, Z. C.; York, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Safonov, A.; Sakuma, T.; Sengupta, S.; Suarez, I.; Tatarinov, A.; Toback, D.; Akchurin, N.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Jeong, C.; Kovitanggoon, K.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Florez, C.; Greene, S.; Gurrola, A.; Johns, W.; Kurt, P.; Maguire, C.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Balazs, M.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Lin, C.; Neu, C.; Wood, J.; Gollapinni, S.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sakharov, A.; Anderson, M.; Belknap, D. A.; Borrello, L.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Friis, E.; Gray, L.; Grogg, K. S.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Klukas, J.; Lanaro, A.; Lazaridis, C.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Palmonari, F.; Pierro, G. A.; Ross, I.; Savin, A.; Smith, W. H.; Swanson, J.

    2013-09-01

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  3. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    Science.gov (United States)

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys and hashed fingerprints. It has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising the causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is incurred, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models were built using multiple learning algorithms, including support vector machine and random forest, on public Ames mutagenicity data with a variety of fingerprint descriptors. These models performed well in both internal and external validation, with accuracies around 82%, and were used to evaluate the interpretation algorithm. The interpretations revealed links that agree closely with understood mechanisms of Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.
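
    As an illustration of the general idea described above (probing a trained model's behaviour on parts of the query structure), the sketch below attributes a random forest's prediction to individual fingerprint bits by switching them off one at a time. The data, bit indices and helper function are invented stand-ins; this is not the authors' fragmentation algorithm.

```python
# Minimal sketch: probe a fingerprint-based classifier by toggling bits off and
# recording which bits, when removed, weaken an "active" prediction. Synthetic
# data stands in for real Ames fingerprints; the attribution scheme is a
# simplified stand-in for the fragment-based method described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64))           # 500 "molecules" x 64 fingerprint bits
y = (X[:, 3] & X[:, 17] | X[:, 42]).astype(int)  # toy "mutagenicity" rule

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def bit_attributions(model, fp):
    """Change in predicted P(active) when each set bit is switched off."""
    base = model.predict_proba(fp.reshape(1, -1))[0, 1]
    deltas = {}
    for bit in np.flatnonzero(fp):
        probe = fp.copy()
        probe[bit] = 0
        deltas[bit] = base - model.predict_proba(probe.reshape(1, -1))[0, 1]
    return base, deltas

query = X[0]
p_active, deltas = bit_attributions(model, query)
activating = sorted(deltas, key=deltas.get, reverse=True)[:3]
print(f"P(active) = {p_active:.2f}; bits contributing most to activation: {activating}")
```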

  4. Cranking model interpretation of weakly coupled bands in Hg isotopes

    International Nuclear Information System (INIS)

    Guttormsen, M.; Huebel, H.

    1982-01-01

    The positive-parity yrast states of the transitional ^{189-198}Hg isotopes are interpreted within the Bengtsson and Frauendorf version of the cranking model. The very sharp backbendings can be explained by small interaction matrix elements between the ground and s-bands. The experimentally observed large aligned angular momenta and the low band-crossing frequencies are well reproduced in the calculations. (orig.)

  5. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    Science.gov (United States)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes
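
    For readers unfamiliar with the numerical kernels such a framework partitions, the hedged sketch below shows the textbook pseudospectral operation of differentiating a periodic field via FFTs. It is generic NumPy code and does not reflect SpF's actual API or data structures.

```python
# Illustrative pseudospectral "kernel": differentiate a periodic 1-D field by
# transforming to wavenumber space, multiplying by ik, and transforming back.
# Generic textbook operation, not part of the SpF framework.
import numpy as np

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x)                                   # field with known derivative 3*cos(3x)

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi    # angular wavenumbers
du_dx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

print("max error vs analytic derivative:", np.max(np.abs(du_dx - 3.0 * np.cos(3.0 * x))))
```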

  6. Hybrid Decision Making: When Interpretable Models Collaborate With Black-Box Models

    OpenAIRE

    Wang, Tong

    2018-01-01

    Interpretable machine learning models have received increasing interest in recent years, especially in domains where humans are involved in the decision-making process. However, the possible loss of the task performance for gaining interpretability is often inevitable. This performance downgrade puts practitioners in a dilemma of choosing between a top-performing black-box model with no explanations and an interpretable model with unsatisfying task performance. In this work, we propose a nove...
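
    The abstract is truncated, but the general pattern of interpretable/black-box collaboration can be sketched as follows: a transparent rule decides the cases it covers and defers the rest to a black-box model. The data, the rule and the models are assumptions for illustration, not the method proposed in the paper.

```python
# Sketch of hybrid decision making: a one-feature threshold rule decides the
# examples it covers; everything else is deferred to a black-box classifier.
# Data and the rule itself are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = ((X[:, 0] > 1.0) | (X[:, 1] * X[:, 2] > 0.5)).astype(int)

black_box = GradientBoostingClassifier(random_state=0).fit(X[:800], y[:800])

def hybrid_predict(X):
    """Interpretable rule where it applies, black box otherwise."""
    rule_fires = X[:, 0] > 1.0              # transparent, human-readable rule
    preds = black_box.predict(X)
    preds[rule_fires] = 1
    return preds, rule_fires.mean()         # fraction decided interpretably

preds, coverage = hybrid_predict(X[800:])
print(f"accuracy={np.mean(preds == y[800:]):.2f}, interpretable coverage={coverage:.2f}")
```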

  7. An IT-enabled supply chain model: a simulation study

    Science.gov (United States)

    Cannella, Salvatore; Framinan, Jose M.; Barbosa-Póvoa, Ana

    2014-11-01

    During the last decades, supply chain collaboration practices and the underlying enabling technologies have evolved from the classical electronic data interchange (EDI) approach to web-based and radio frequency identification (RFID)-enabled collaboration. In this field, most of the literature has focused on the study of optimal parameters for reducing suppliers' total cost by adopting operational research (OR) techniques. Herein we are interested in showing that the considered information technology (IT)-enabled structure is resilient, that is, it works well across a reasonably broad range of parameter settings. By adopting a methodological approach based on system dynamics, we study a multi-tier collaborative supply chain. Results show that the IT-enabled supply chain improves operational performance and customer service level. Nonetheless, the benefits for geographically dispersed networks are smaller.
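
    As a flavour of the kind of system-dynamics experiment described, the sketch below simulates a toy two-echelon supply chain and compares order variability when the upstream tier sees end-customer demand (information sharing) versus only downstream orders. All parameters, the ordering policy and the demand series are invented for illustration and do not reproduce the authors' model.

```python
# Toy two-echelon supply chain: each tier uses an order-up-to policy driven by an
# exponentially smoothed forecast of the signal it observes. With information
# sharing the supplier forecasts end-customer demand; without it, it forecasts
# the retailer's (more variable) order stream. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
demand = 20 + rng.normal(0, 2, size=500)                    # end-customer demand

def order_stream(signal, alpha=0.3, lead=2):
    """Orders = observed demand + change in base stock (forecast x (lead+1))."""
    forecast, orders = signal[0], []
    for d in signal:
        new_forecast = forecast + alpha * (d - forecast)
        orders.append(max(d + (lead + 1) * (new_forecast - forecast), 0.0))
        forecast = new_forecast
    return np.array(orders)

retailer_orders = order_stream(demand)
supplier_without_sharing = order_stream(retailer_orders)    # sees only orders
supplier_with_sharing = order_stream(demand)                # sees real demand

print("retailer order variance:          ", round(retailer_orders.var(), 1))
print("supplier order variance, no share:", round(supplier_without_sharing.var(), 1))
print("supplier order variance, sharing: ", round(supplier_with_sharing.var(), 1))
```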

  8. New Cosmological Model and Its Implications on Observational Data Interpretation

    Directory of Open Access Journals (Sweden)

    Vlahovic Branislav

    2013-09-01

    The paradigm of ΛCDM cosmology works impressively well and, with the concept of inflation, it explains the universe after the time of decoupling. However, there are still a few concerns: after much effort there is no detection of dark matter, and there are significant problems in the theoretical description of dark energy. We will consider a variant of the cosmological spherical shell model within the FRW formalism and will compare it with the standard ΛCDM model. We will show that our new topological model satisfies cosmological principles and is consistent with all observable data, but that it may require a new interpretation of some data. We will consider the constraints imposed on the model, for instance the allowed range of the size and thickness of the shell, by the supernova luminosity distance and CMB data. In this model the propagation of light is confined along the shell, with the consequence that the observed CMB originated from one point or a limited region of space. This allows the uniformity of the CMB to be interpreted without an inflation scenario. In addition, it removes any constraints on the uniformity of the universe at the early stage and opens the possibility that the universe was not uniform and that the creation of galaxies and large structures is due to inhomogeneities that originated in the Big Bang.

  9. Creating Data and Modeling Enabled Hydrology Instruction Using Collaborative Approach

    Science.gov (United States)

    Merwade, V.; Rajib, A.; Ruddell, B. L.; Fox, S.

    2017-12-01

    Hydrology instruction typically involves teaching the hydrologic cycle and the processes associated with it, such as precipitation, evapotranspiration, infiltration, runoff generation and hydrograph analysis. With the availability of observed and remotely sensed data related to many hydrologic fluxes, there is an opportunity to use these data for place-based learning in hydrology classrooms. However, it is not always easy or possible for an instructor to complement an existing hydrology course with new material that requires time and technical expertise the instructor may not have. The work presented here describes an effort in which students create data- and modeling-driven instruction material as part of their class assignment for a hydrology course at Purdue University. The data-driven hydrology education project within the Science Education Resources Center (SERC) is used as a platform to publish and share the instruction material so it can be used by future students in the same course or any other course anywhere in the world. Students in the class were divided into groups, and each group was assigned a topic such as precipitation, evapotranspiration, streamflow, flow duration curves or frequency analysis. Each student in the group was then asked to obtain data and perform some analysis for an area with a specific land-use characteristic, such as urban, rural or agricultural. The student contributions were then organized into learning units so that, for example, someone can perform a flow duration curve or flood frequency analysis and see how it changes for a rural versus an urban area. The hydrology education project within the SERC cyberinfrastructure enables any other instructor to adopt this material as-is or with modification to suit his/her place-based instruction needs.
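
    The flow duration curve exercise mentioned above reduces to ranking flows and pairing them with exceedance probabilities; a minimal sketch with made-up "rural" and "urban" flow series is shown below.

```python
# Flow duration curve: sort daily flows in descending order and pair each with
# its exceedance probability. Flow series here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
rural = rng.lognormal(mean=2.0, sigma=0.6, size=365)   # steadier flows (illustrative)
urban = rng.lognormal(mean=2.0, sigma=1.1, size=365)   # flashier, more variable flows

def flow_duration_curve(q):
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)   # Weibull plotting position
    return exceedance, q_sorted

for name, q in [("rural", rural), ("urban", urban)]:
    p, qs = flow_duration_curve(q)
    q10 = qs[np.searchsorted(p, 0.10)]     # flow exceeded 10% of the time
    q90 = qs[np.searchsorted(p, 0.90)]     # flow exceeded 90% of the time
    print(f"{name}: Q10={q10:.1f}, Q90={q90:.1f}, Q10/Q90={q10 / q90:.1f}")
```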

  10. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Science.gov (United States)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  11. Evaluating topic model interpretability from a primary care physician perspective.

    Science.gov (United States)

    Arnold, Corey W; Oh, Andrea; Chen, Shawn; Speier, William

    2016-02-01

    Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated in a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon Signed-Rank Tests for Paired Samples were used to evaluate differences between different topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann-Whitney U tests for each of the tasks. While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization.
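
    A hedged sketch of the kind of pipeline evaluated in this record (fitting an LDA topic model and listing each topic's top words for human review) is shown below; it uses scikit-learn and a tiny stand-in corpus rather than clinical reports.

```python
# Fit a small LDA topic model and print the top words per topic, the raw material
# for coherence / word-intrusion style evaluations. The corpus is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "chest pain shortness of breath ecg ordered",
    "diabetes follow up glucose insulin adjusted",
    "hypertension blood pressure medication refill",
    "cough fever chest xray pneumonia suspected",
    "glucose high insulin dose increased diabetes",
    "blood pressure elevated lisinopril started",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)
vocab = vectorizer.get_feature_names_out()

for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```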

  12. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    Keywords: interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects

  13. Stability of the matrix model in operator interpretation

    Directory of Open Access Journals (Sweden)

    Katsuta Sakai

    2017-12-01

    The IIB matrix model is one of the candidates for a nonperturbative formulation of string theory, and it is believed that the model contains gravitational degrees of freedom in some manner. In some preceding works, it was proposed that the matrix model describes curved space, with the matrices representing differential operators defined on a principal bundle. In this paper, we study the dynamics of the model in this interpretation and point out the necessity of the principal bundle from the viewpoint of stability and diffeomorphism invariance. We also compute the one-loop correction, which yields a mass term for each field due to the principal bundle. We find that stability is not violated.

  14. Help seeking in older Asian people with dementia in Melbourne: using the Cultural Exchange Model to explore barriers and enablers.

    Science.gov (United States)

    Haralambous, Betty; Dow, Briony; Tinney, Jean; Lin, Xiaoping; Blackberry, Irene; Rayner, Victoria; Lee, Sook-Meng; Vrantsidis, Freda; Lautenschlager, Nicola; Logiudice, Dina

    2014-03-01

    The prevalence of dementia is increasing in Australia. Limited research is available on access to Cognitive Dementia and Memory Services (CDAMS) for people with dementia from Culturally and Linguistically Diverse (CALD) communities. This study aimed to determine the barriers and enablers to accessing CDAMS for people with dementia and their families of Chinese and Vietnamese backgrounds. Consultations with community members, community workers and health professionals were conducted using the "Cultural Exchange Model" framework. For carers, barriers to accessing services included the complexity of the health system, lack of time, travel required to get to services, language barriers, interpreters and lack of knowledge of services. Similarly, community workers and health professionals identified language, interpreters, and community perceptions as key barriers to service access. Strategies to increase knowledge included providing information via radio, printed material and education in community group settings. The "Cultural Exchange Model" enabled engagement with and modification of the approaches to meet the needs of the targeted CALD communities.

  15. Cognitive model of image interpretation for artificial intelligence applications

    International Nuclear Information System (INIS)

    Raju, S.

    1988-01-01

    A cognitive model of imaging diagnosis was devised to aid in the development of expert systems that assist in the interpretation of diagnostic images. In this cognitive model, a small set of observations that are strongly predictive of a particular diagnosis leads to a search for other observations that would support this diagnosis but are not necessarily specific for it. Then a set of alternative diagnoses is considered. This is followed by a search for observations that might allow differentiation of the primary diagnostic consideration from the alternatives. The production rules needed to implement this model can be classified into three major categories, each of which has certain general characteristics. Knowledge of these characteristics simplifies the development of these expert systems.

  16. LIME: 3D visualisation and interpretation of virtual geoscience models

    Science.gov (United States)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described and case studies from outcrop geology, in hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel

  17. On the interpretation of weight vectors of linear models in multivariate neuroimaging.

    Science.gov (United States)

    Haufe, Stefan; Meinecke, Frank; Görgen, Kai; Dähne, Sven; Haynes, John-Dylan; Blankertz, Benjamin; Bießmann, Felix

    2014-02-15

    models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness of an often encountered problem and provides a theoretical basis for conducting more interpretable multivariate neuroimaging analyses.
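
    The paper's central prescription, transforming backward-model weights into forward-model activation patterns (A proportional to cov(X)·w) before interpreting them, can be sketched as follows on simulated two-channel data; the simulation is only meant to show why raw weights can mislead.

```python
# Sketch of the weight-vector pitfall: a linear backward model (regression of the
# target on the data) can put a large weight on a noise-only channel; transforming
# the weights into an activation pattern, a ~ cov(X) @ w / var(s_hat), recovers the
# true forward model. Simulated two-channel example.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
s = rng.normal(size=n)                     # latent signal of interest
distractor = rng.normal(size=n)            # structured noise shared by both channels

x1 = s + distractor                        # channel 1 carries signal + noise
x2 = distractor                            # channel 2 carries only the noise
X = np.column_stack([x1, x2])

w, *_ = np.linalg.lstsq(X, s, rcond=None)  # backward model: s_hat = X @ w
s_hat = X @ w
a = np.cov(X, rowvar=False) @ w / s_hat.var()    # forward-model activation pattern

print("backward weights  :", np.round(w, 2))     # non-zero weight on the noise channel
print("activation pattern:", np.round(a, 2))     # ~[1, 0]: only channel 1 carries signal
```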

  18. Interpreting Marginal Effects in the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2014-01-01

    This paper presents the challenges when researchers interpret results about relationships between variables from discrete choice models with multiple outcomes. The recommended approach is demonstrated by testing predictions from transaction cost theory on a sample of 246 Scandinavian firms that have entered foreign markets. Through the application of a multinomial logit model, careful analysis of the marginal effects is performed through graphical representations, marginal effects at the mean, average marginal effects and elasticities. I show that increasing cultural distance is associated with a substantial increase in the probability of entering a foreign market using a joint venture, while increases in the unpredictability in the host country environment are associated with a lower probability of wholly owned subsidiaries and a higher probability of exporting entries.
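
    A hedged sketch of computing average marginal effects for a multinomial choice model by finite differences on predicted probabilities is shown below; it uses scikit-learn's multinomial logistic regression on invented covariates and outcomes, not the paper's data or estimation code.

```python
# Average marginal effect of one covariate in a multinomial choice model,
# approximated by finite differences on predicted outcome probabilities.
# Covariates (e.g. "cultural distance") and outcomes are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
cultural_distance = rng.normal(size=n)
unpredictability = rng.normal(size=n)
X = np.column_stack([cultural_distance, unpredictability])

# Toy latent utilities for three entry modes: export, joint venture, wholly owned
util = np.column_stack([0.2 * X[:, 1], 0.8 * X[:, 0], -0.5 * X[:, 1]])
y = (util + rng.gumbel(size=util.shape)).argmax(axis=1)

# Recent scikit-learn versions use the multinomial formulation by default (lbfgs).
model = LogisticRegression(max_iter=1000).fit(X, y)

def average_marginal_effects(model, X, j, eps=1e-4):
    """dP(outcome)/dx_j averaged over the sample, one value per outcome."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, j] += eps
    X_lo[:, j] -= eps
    return (model.predict_proba(X_hi) - model.predict_proba(X_lo)).mean(axis=0) / (2 * eps)

print("AME of cultural distance:", np.round(average_marginal_effects(model, X, 0), 3))
```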

  19. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.
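
    One simple way to extract "which signals matter" from such a predictor is permutation importance, sketched below on synthetic diagnostic channels; this is a generic illustration, not a machine-specific disruption-prediction workflow.

```python
# Rank diagnostic signals by how much shuffling each one degrades a disruption
# classifier's held-out accuracy (permutation importance). Signals and labels
# are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 2000
signals = {
    "locked_mode": rng.normal(size=n),
    "density": rng.normal(size=n),
    "radiated_power": rng.normal(size=n),
}
X = np.column_stack(list(signals.values()))
# toy ground truth: disruptions driven mainly by the locked-mode amplitude
y = (X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(signals, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")
```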

  20. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently desired in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Quick procedures, sufficient experience, and advanced skills are therefore required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help technologists understand the complicated procedure. We interviewed three technologists who were highly skilled in mammography, and 14 factors were retrieved using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. Male technologists were regarded as a negative factor. Factors concerned with explanation were at the upper nodes, and particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.

  1. Enabling full field physics based OPC via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-03-01

    As EUV lithography marches closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intra-field effects have necessitated building OPC infrastructure that operates with field-position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models in which static input models are assigned to specific x/y-positions within the field. OPC and simulation could then assign the proper static model based on simulation-level placement. However, in the realm of 7 nm and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of EPE error. Dynamic Model Generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for EMF, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  2. Impact of convective activity on precipitation δ18O in isotope-enabled models

    Science.gov (United States)

    Hu, J.; Emile-Geay, J.; Dee, S.

    2017-12-01

    The δ18O signal preserved in paleo-archives (e.g. speleothem, tree ring cellulose, ice cores) is widely used to reconstruct precipitation or temperature. In the tropics, the inverse relationship between precipitation δ18O and rainfall amount, namely the "amount effect" [Dansgaard, Tellus, 1964], is often used to interpret precipitation δ18O. However, recent studies have shown that precipitation δ18O is also influenced by precipitation type [Kurita et al, JGR, 2009; Moerman et al, EPSL, 2013], and recent observations indicate that it is negatively correlated with the fraction of precipitation associated with stratiform clouds [Aggarwal et al, Nature Geosci, 2016]. It is thus important to determine to what extent isotope-enabled climate models can reproduce these relationships. Here we do so using output from LMDZ, CAM2, and isoGSM from the Stable Water Isotope Intercomparison Group, Phase 2 (SWING2) project and results of SPEEDY-IER [Dee et al, JGR, 2015] from an AMIP-style experiment. The results show that these models simulate the "amount effect" well in the tropics, and that the relationship between precipitation δ18O and precipitation is reversed in many places in the mid-latitudes, in accordance with observations [Bowen, JGR, 2008]. These models can also all reproduce the negative correlation between monthly precipitation δ18O and stratiform precipitation proportion in the mid-latitudes (30°N-50°N; 50°S-30°S), but in the tropics (30°S-30°N) the models show a positive correlation instead. The reason for this bias will be investigated with idealized experiments with SPEEDY-IER. Correctly simulating the impact of convective activity on precipitation δ18O in isotope-enabled models will improve our interpretation of paleoclimate proxies with respect to hydroclimate variability. P. K. Aggarwal et al. (2016), Nature Geosci., 9, 624-629, doi:10.1038/ngeo2739. G. J. Bowen (2008), J. Geophys. Res., 113, D05113, doi:10.1029/2007JD009295. W. Dansgaard (1964), Tellus, 16(4), 436

  3. GeoPro: Technology to Enable Scientific Modeling

    International Nuclear Information System (INIS)

    C. Juan

    2004-01-01

    Development of the ground-water flow model for the Death Valley Regional Groundwater Flow System (DVRFS) required integration of numerous supporting hydrogeologic investigations. The results from recharge, discharge, hydraulic properties, water level, pumping, model boundaries, and geologic studies were integrated to develop the required conceptual and 3-D framework models, and the flow model itself. To support the complex modeling process and the needs of the multidisciplinary DVRFS team, a hardware and software system called GeoPro (Geoscience Knowledge Integration Protocol) was developed. A primary function of GeoPro is to manage the large volume of disparate data compiled for the 100,000-square-kilometer area of southern Nevada and California. The data are primarily from previous investigations and regional flow models developed for the Nevada Test Site and Yucca Mountain projects. GeoPro utilizes relational database technology (Microsoft SQL Server(trademark)) to store and manage these tabular point data, groundwater flow model ASCII data, 3-D hydrogeologic framework data, 2-D and 2.5-D GIS data, and text documents. Data management consists of versioning, tracking, and reporting data changes as multiple users access the centralized database. GeoPro also supports the modeling process by automating the routine data transformations required to integrate project software. This automation is also crucial to streamlining pre- and post-processing of model data during model calibration. Another function of GeoPro is to facilitate the dissemination and use of the model data and results through web-based documents by linking and allowing access to the underlying database and analysis tools. The intent is to convey to end-users the complex flow model product in a manner that is simple, flexible, and relevant to their needs. GeoPro is evolving from a prototype system to a production-level product. Currently the DVRFS pre- and post-processing modeling tools are being re

  4. Enabling Accessibility Through Model-Based User Interface Development.

    Science.gov (United States)

    Ziegler, Daniel; Peissner, Matthias

    2017-01-01

    Adaptive user interfaces (AUIs) can increase the accessibility of interactive systems. They provide personalized display and interaction modes to fit individual user needs. Most AUI approaches rely on model-based development, which is considered relatively demanding. This paper explores strategies to make model-based development more attractive for mainstream developers.

  5. A Customizable Dashboarding System for Watershed Model Interpretation

    Science.gov (United States)

    Easton, Z. M.; Collick, A.; Wagena, M. B.; Sommerlot, A.; Fuka, D.

    2017-12-01

    Stakeholders, including policymakers, agricultural water managers, and small farm managers, can benefit from the outputs of commonly run watershed models. However, the information that each stakeholder needs is different. While policymakers are often interested in the broader effects that small farm management may have on a watershed during extreme events or over long periods, farmers are often interested in field-specific effects at daily or seasonal periods. To provide stakeholders with the ability to analyze and interpret data from large-scale watershed models, we have developed a framework that can support custom exploration of the large datasets produced. For the volume of data produced by these models, SQL-based data queries are not efficient; thus, we employ a "Not Only SQL" (NO-SQL) query language, which allows data to scale in both quantity and query volumes. We demonstrate a stakeholder-customizable Dashboarding system that allows stakeholders to create custom 'dashboards' to summarize model output specific to their needs. Dashboarding is a dynamic, purpose-based visual interface for displaying one-to-many database linkages, so that information can be presented for a single time period or dynamically monitored over time, and it allows a user to quickly define focus areas of interest for their analysis. We utilize a single watershed model that is run four times daily with a combined set of climate projections, which are then indexed and added to an ElasticSearch datastore. ElasticSearch is a NO-SQL search engine built on top of Apache Lucene, a free and open-source information retrieval software library. Aligned with the ElasticSearch project is the open source visualization and analysis system, Kibana, which we utilize for custom stakeholder dashboarding. The dashboards create a visualization of the stakeholder-selected analysis and can be extended to recommend robust strategies to support decision-making.

  6. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. This thesis investigates how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs
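
    To make the Markov random field idea concrete, the sketch below segments a noisy synthetic image with iterated conditional modes, balancing a data term against a smoothness prior. It is a generic MRF illustration, not the thesis's implementation (which additionally encodes organ-arrangement knowledge as an external field).

```python
# Binary MRF segmentation by iterated conditional modes (ICM): each pixel label
# minimizes a data term (squared difference from a class mean) plus a Potts-style
# smoothness term counting disagreeing 4-neighbours. Image is synthetic.
import numpy as np

rng = np.random.default_rng(7)
truth = np.zeros((64, 64), dtype=int)
truth[16:48, 16:48] = 1                               # bright "organ" on dark background
image = truth + rng.normal(scale=0.6, size=truth.shape)

means = np.array([0.0, 1.0])                          # class intensity means
beta = 1.5                                            # smoothness weight
labels = (image > 0.5).astype(int)                    # initial guess by thresholding

for _ in range(10):                                   # ICM sweeps
    for i in range(64):
        for j in range(64):
            neighbours = [labels[x, y]
                          for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                          if 0 <= x < 64 and 0 <= y < 64]
            costs = [(image[i, j] - means[k]) ** 2
                     + beta * sum(n != k for n in neighbours)
                     for k in (0, 1)]
            labels[i, j] = int(np.argmin(costs))

print("pixel agreement with ground truth:", np.mean(labels == truth))
```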

  7. Healthcare waste management: an interpretive structural modeling approach.

    Science.gov (United States)

    Thakur, Vikas; Anbanandam, Ramesh

    2016-06-13

    Purpose - The World Health Organization identified infectious healthcare waste as a threat to the environment and human health. India's current medical waste management system has limitations, which lead to ineffective and inefficient waste handling practices. Hence, the purpose of this paper is to: first, identify the important barriers that hinder India's healthcare waste management (HCWM) systems; second, classify operational, tactical and strategic issues to discuss the managerial implications at different management levels; and third, assign all barriers to four quadrants depending upon their driving and dependence power. Design/methodology/approach - India's HCWM system barriers were identified through the literature, field surveys and brainstorming sessions. Interrelationships among all the barriers were analyzed using interpretive structural modeling (ISM). Fuzzy-Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis was used to classify HCWM barriers into four groups. Findings - In total, 25 HCWM system barriers were identified and placed in 12 different ISM model hierarchy levels. Fuzzy-MICMAC analysis placed eight barriers in the second quadrant, five in the third and 12 in the fourth quadrant to define their relative ISM model importance. Research limitations/implications - The study's main limitation is that all the barriers were identified through a field survey and brainstorming sessions conducted only in Uttarakhand, a northern state of India. The problems in implementing HCWM practices may differ by region; hence, the current study needs to be replicated in different Indian states to define the waste disposal strategies for hospitals. Practical implications - The model will help hospital managers and Pollution Control Boards to plan their resources accordingly and make policies targeting key performance areas. Originality/value - The study is the first attempt to identify India's HCWM system barriers and prioritize
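
    The ISM/MICMAC machinery used in such studies reduces to matrix operations on a binary influence matrix: compute the reachability (transitive closure), read off driving and dependence power, and partition elements into levels. A small sketch with an invented five-barrier matrix follows; it is illustrative only.

```python
# Minimal ISM helper: transitive closure of a binary "barrier i influences
# barrier j" matrix, driving/dependence power (the MICMAC axes), and level
# partitioning. The 5x5 example matrix (0-based indices) is invented.
import numpy as np

A = np.array([[1, 1, 0, 0, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 1, 0, 1]], dtype=bool)

R = A.copy()                                     # reachability via Warshall's algorithm
for k in range(len(A)):
    R |= np.outer(R[:, k], R[k, :])

driving = R.sum(axis=1)                          # how many barriers each one reaches
dependence = R.sum(axis=0)                       # how many barriers reach it
print("driving power:   ", driving)
print("dependence power:", dependence)

remaining, level = set(range(len(A))), 1
while remaining:                                 # level partition: top levels first
    idx = sorted(remaining)
    top = [i for i in idx
           if {j for j in idx if R[i, j]} <= {j for j in idx if R[j, i]}]
    print(f"level {level}: barriers {top}")
    remaining -= set(top)
    level += 1
```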

  8. The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading.

    Science.gov (United States)

    Hurley, Susan

    2008-02-01

    Imitation, deliberation, and mindreading are characteristically human sociocognitive skills. Research on imitation and its role in social cognition is flourishing across various disciplines. Imitation is surveyed in this target article under headings of behavior, subpersonal mechanisms, and functions of imitation. A model is then advanced within which many of the developments surveyed can be located and explained. The shared circuits model (SCM) explains how imitation, deliberation, and mindreading can be enabled by subpersonal mechanisms of control, mirroring, and simulation. It is cast at a middle, functional level of description, that is, between the level of neural implementation and the level of conscious perceptions and intentional actions. The SCM connects shared informational dynamics for perception and action with shared informational dynamics for self and other, while also showing how the action/perception, self/other, and actual/possible distinctions can be overlaid on these shared informational dynamics. It avoids the common conception of perception and action as separate and peripheral to central cognition. Rather, it contributes to the situated cognition movement by showing how mechanisms for perceiving action can be built on those for active perception. The SCM is developed heuristically, in five layers that can be combined in various ways to frame specific ontogenetic or phylogenetic hypotheses. The starting point is dynamic online motor control, whereby an organism is closely attuned to its embedding environment through sensorimotor feedback. Onto this are layered functions of prediction and simulation of feedback, mirroring, simulation of mirroring, monitored inhibition of motor output, and monitored simulation of input. Finally, monitored simulation of input specifying possible actions plus inhibited mirroring of such possible actions can generate information about the possible as opposed to actual instrumental actions of others, and the

  9. Enabling Cross-Discipline Collaboration Via a Functional Data Model

    Science.gov (United States)

    Lindholm, D. M.; Wilson, A.; Baltzer, T.

    2016-12-01

    Many research disciplines have very specialized data models that are used to express the detailed semantics that are meaningful to that community and easily utilized by their data analysis tools. While invaluable to members of that community, such expressive data structures and metadata are of little value to potential collaborators from other scientific disciplines. Many data interoperability efforts focus on the difficult task of computationally mapping concepts from one domain to another to facilitate discovery and use of data. Although these efforts are important and promising, we have found that a great deal of discovery and dataset understanding still happens at the level of less formal, personal communication. However, a significant barrier to inter-disciplinary data sharing that remains is one of data access. Scientists and data analysts continue to spend inordinate amounts of time simply trying to get data into their analysis tools. Providing data in a standard file format is often not sufficient since data can be structured in many ways. Adhering to more explicit community standards for data structure and metadata does little to help those in other communities. The Functional Data Model specializes the Relational Data Model (used by many database systems) by defining relations as functions between independent (domain) and dependent (codomain) variables. Given that arrays of data in many scientific data formats generally represent functionally related parameters (e.g. temperature as a function of space and time), the Functional Data Model is quite relevant for these datasets as well. The LaTiS software framework implements the Functional Data Model and provides a mechanism to expose an existing data source as a LaTiS dataset. LaTiS datasets can be manipulated using a Functional Algebra and output in any number of formats. LASP has successfully used the Functional Data Model and its implementation in the LaTiS software framework to bridge the gap between
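    To make the idea concrete, the sketch below treats a dataset as a function from domain variables to codomain variables, with a single "functional algebra" operation for filtering the domain. The class and method names are illustrative assumptions for this abstract, not the LaTiS API.

```python
# Minimal sketch of the Functional Data Model idea: a dataset is a function from
# independent (domain) variables to dependent (codomain) variables.
# Names and structure are illustrative, not the LaTiS framework's actual API.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class FunctionalDataset:
    domain: Tuple[str, ...]        # e.g. ("time", "station")
    codomain: Tuple[str, ...]      # e.g. ("temperature_K",)
    samples: Dict[Tuple, Tuple]    # mapping from domain tuples to codomain tuples

    def evaluate(self, point: Tuple) -> Tuple:
        """Look up the dependent values at a domain point."""
        return self.samples[point]

    def select(self, predicate: Callable[[Tuple], bool]) -> "FunctionalDataset":
        """A simple functional-algebra operation: filter the domain."""
        kept = {p: v for p, v in self.samples.items() if predicate(p)}
        return FunctionalDataset(self.domain, self.codomain, kept)

# Usage: temperature as a function of (time, station)
ds = FunctionalDataset(
    domain=("time", "station"),
    codomain=("temperature_K",),
    samples={(0, "A"): (280.1,), (1, "A"): (281.4,), (0, "B"): (275.0,)},
)
print(ds.evaluate((1, "A")))                      # (281.4,)
print(ds.select(lambda p: p[1] == "A").samples)   # only station A remains
```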

  10. Postural control model interpretation of stabilogram diffusion analysis

    Science.gov (United States)

    Peterka, R. J.

    2000-01-01

    Collins and De Luca [Collins JJ, De Luca CJ (1993) Exp Brain Res 95: 308-318] introduced a new method known as stabilogram diffusion analysis that provides a quantitative statistical measure of the apparently random variations of center-of-pressure (COP) trajectories recorded during quiet upright stance in humans. This analysis generates a stabilogram diffusion function (SDF) that summarizes the mean square COP displacement as a function of the time interval between COP comparisons. SDFs have a characteristic two-part form that suggests the presence of two different control regimes: a short-term open-loop control behavior and a longer-term closed-loop behavior. This paper demonstrates that a very simple closed-loop control model of upright stance can generate realistic SDFs. The model consists of an inverted pendulum body with torque applied at the ankle joint. This torque includes a random disturbance torque and a control torque. The control torque is a function of the deviation (error signal) between the desired upright body position and the actual body position, and is generated in proportion to the error signal, the derivative of the error signal, and the integral of the error signal [i.e. a proportional, integral and derivative (PID) neural controller]. The control torque is applied with a time delay representing conduction, processing, and muscle activation delays. Variations in the PID parameters and the time delay generate variations in SDFs that mimic real experimental SDFs. This model analysis allows one to interpret experimentally observed changes in SDFs in terms of variations in neural controller and time delay parameters rather than in terms of open-loop versus closed-loop behavior.
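    The following is a hedged sketch of the kind of model described above: an inverted pendulum stabilized by a delayed PID ankle torque plus a random disturbance torque, with a mean-square-displacement curve computed from the simulated sway as an SDF analogue. All parameter values are illustrative guesses, not the values fitted in the paper.

```python
# Toy closed-loop stance model: inverted pendulum + delayed PID torque + noise.
# Parameters (inertia, stiffness, gains, delay) are assumed, not Peterka's fits.
import numpy as np

def simulate_sway(T=60.0, dt=0.005, delay=0.15,
                  Kp=1200.0, Kd=350.0, Ki=50.0, noise=2.0, seed=0):
    rng = np.random.default_rng(seed)
    J, mgh = 70.0, 600.0              # body inertia and gravitational stiffness (assumed)
    n = int(T / dt)
    d = int(delay / dt)               # feedback delay in time steps
    theta = np.zeros(n)
    omega, integ = 0.0, 0.0
    for k in range(1, n):
        e = theta[max(k - 1 - d, 0)]                               # delayed error (target = upright)
        de = (e - theta[max(k - 2 - d, 0)]) / dt                   # delayed error derivative
        integ += e * dt
        torque = -(Kp * e + Kd * de + Ki * integ)                  # PID control torque
        torque += noise * rng.standard_normal() / np.sqrt(dt)      # random disturbance torque
        alpha = (mgh * theta[k - 1] + torque) / J                  # toppling + applied torque
        omega += alpha * dt
        theta[k] = theta[k - 1] + omega * dt
    return theta

def diffusion_function(x, dt, max_lag=10.0):
    """Mean square displacement vs. time interval (analogue of an SDF)."""
    lags = np.arange(1, int(max_lag / dt))
    return lags * dt, np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

lag, msd = diffusion_function(simulate_sway(), 0.005)
```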

  11. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven

    2007-01-01

    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available on the book's Website, www.dsmbook....

  12. Scidac-Data: Enabling Data Driven Modeling of Exascale Computing

    Science.gov (United States)

    Mubarak, Misbah; Ding, Pengfei; Aliaga, Leo; Tsaris, Aristeidis; Norman, Andrew; Lyon, Adam; Ross, Robert

    2017-10-01

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab data center on the organization, movement, and consumption of high energy physics (HEP) data. The project analyzes the analysis patterns and data organization that have been used by NOvA, MicroBooNE, MINERvA, CDF, D0, and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization, and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We present the use of a subset of the SciDAC-Data distributions, acquired from analysis of approximately 71,000 HEP workflows run on the Fermilab data center and corresponding to over 9 million individual analysis jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in high performance computing (HPC) and high throughput computing (HTC) environments. In particular we describe how the Sequential Access via Metadata (SAM) data-handling system in combination with the dCache/Enstore-based data archive facilities has been used to develop radically different models for analyzing the HEP data. We also show how the simulations may be used to assess the impact of design choices in archive facilities.
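    As a rough illustration of the queuing simulations mentioned above, the sketch below drains a list of jobs through a fixed number of worker slots and reports per-job wait times. The workload numbers and the single-stage FIFO structure are made up for illustration; they are not the SciDAC-Data distributions or the actual simulator.

```python
# Minimal discrete-event sketch of a job queue: jobs with arrival and service
# times are assigned to the first free worker slot. Illustrative only.
import heapq

def simulate(jobs, slots=4):
    """jobs: list of (arrival_time, service_time). Returns per-job wait times."""
    jobs = sorted(jobs)
    free_at = [0.0] * slots            # times at which each worker slot becomes free
    heapq.heapify(free_at)
    waits = []
    for arrival, service in jobs:
        start = max(arrival, heapq.heappop(free_at))   # wait for a free slot if needed
        waits.append(start - arrival)
        heapq.heappush(free_at, start + service)
    return waits

# five toy jobs on two slots: the third and fourth have to wait
print(simulate([(0, 5), (1, 3), (2, 4), (2, 1), (10, 2)], slots=2))
```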

  13. Model sparsity and brain pattern interpretation of classification models in neuroimaging

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Churchill, Nathan W

    2012-01-01

    Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a ‘brain map’ derived from the classification model. In this ...

  14. Quality Systems. A Thermodynamics-Related Interpretive Model

    Directory of Open Access Journals (Sweden)

    Stefano A. Lollai

    2017-08-01

    Full Text Available In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs and systems theories. A measure of entropy is proposed for QMSs, including a virtual document entropy and an entropy linked to processes and organisation. QMSs are also interpreted in light of Cybernetics, and interrelations between Information Theory and quality are also highlighted. A measure for the information content of quality documents is proposed. Such parameters can be used as adequacy indices for QMSs. From the discussed approach, suggestions for organising QMSs are also derived. Further interpretive thermodynamic-based criteria for QMSs are also proposed. The work represents the first attempt to treat quality organisational systems according to a thermodynamics-related approach. At this stage, no data are available to compare statements in the paper.

  15. An interpretable LSTM neural network for autoregressive exogenous model

    OpenAIRE

    Guo, Tian; Lin, Tao; Lu, Yao

    2018-01-01

    In this paper, we propose an interpretable LSTM recurrent neural network, i.e., multi-variable LSTM for time series with exogenous variables. Currently, widely used attention mechanism in recurrent neural networks mostly focuses on the temporal aspect of data and falls short of characterizing variable importance. To this end, our multi-variable LSTM equipped with tensorized hidden states is developed to learn variable specific representations, which give rise to both temporal and variable lev...

  16. Software Infrastructure to Enable Modeling & Simulation as a Service (M&SaaS), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase 2 project will produce a software service infrastructure that enables most modeling and simulation (M&S) activities from code development and...

  17. Comparing heat flow models for interpretation of precast quadratic pile heat exchanger thermal response tests

    DEFF Research Database (Denmark)

    Alberdi Pagola, Maria; Poulsen, Søren Erbs; Loveridge, Fleur

    2018-01-01

    This paper investigates the applicability of currently available analytical, empirical and numerical heat flow models for interpreting thermal response tests (TRT) of quadratic cross section precast pile heat exchangers. A 3D finite element model (FEM) is utilised for interpreting five TRTs by in...

  18. Analysis of Challenges for Management Education in India Using Total Interpretive Structural Modelling

    Science.gov (United States)

    Mahajan, Ritika; Agrawal, Rajat; Sharma, Vinay; Nangia, Vinay

    2016-01-01

    Purpose: The purpose of this paper is to identify challenges for management education in India and explain their nature, significance and interrelations using total interpretive structural modelling (TISM), an innovative version of Warfield's interpretive structural modelling (ISM). Design/methodology/approach: The challenges have been drawn from…

  19. Concept Communication and Interpretation of Illness: A Holistic Model of Understanding in Nursing Practice.

    Science.gov (United States)

    Nordby, Halvor

    To ensure patient communication in nursing, certain conditions must be met that enable successful exchange of beliefs, thoughts, and other mental states. The conditions that have received most attention in the nursing literature are derived from general communication theories, psychology, and ethical frameworks of interpretation. This article focuses on a condition more directly related to an influential coherence model of concept possession from recent philosophy of mind and language. The basic ideas in this model are (i) that the primary source of understanding of illness experiences is communicative acts that express concepts of illness, and (ii) that the key to understanding patients' concepts of illness is to understand how they depend on patients' lifeworlds. The article argues that (i) and (ii) are especially relevant in caring practice since it has been extensively documented that patients' perspectives on disease and illness are shaped by their subjective horizons. According to coherentism, nurses need to focus holistically on patients' horizons in order to understand the meaning of patients' expressions of meaning. Furthermore, the coherence model implies that fundamental aims of understanding can be achieved only if nurses recognize the interdependence of patients' beliefs and experiences of ill health. The article uses case studies to elucidate how the holistic implications of coherentism can be used as conceptual tools in nursing.

  20. INTEGRATION OF QSAR AND SAR METHODS FOR THE MECHANISTIC INTERPRETATION OF PREDICTIVE MODELS FOR CARCINOGENICITY

    Directory of Open Access Journals (Sweden)

    Natalja Fjodorova

    2012-07-01

    Full Text Available The knowledge-based Toxtree expert system (SAR approach) was integrated with the statistically based counter propagation artificial neural network (CP ANN) model (QSAR approach) to contribute to a better mechanistic understanding of a carcinogenicity model for non-congeneric chemicals using Dragon descriptors and carcinogenic potency for rats as a response. The transparency of the CP ANN algorithm was demonstrated using an intrinsic mapping technique, specifically Kohonen maps. Chemical structures were represented by Dragon descriptors that express the structural and electronic features of molecules such as their shape and electronic surrounding related to reactivity of molecules. It was illustrated how the descriptors are correlated with particular structural alerts (SAs) for carcinogenicity with a recognized mechanistic link to carcinogenic activity. Moreover, the Kohonen mapping technique enables one to examine the separation of carcinogens and non-carcinogens (for rats) within a family of chemicals with a particular SA for carcinogenicity. The mechanistic interpretation of models is important for the evaluation of safety of chemicals.

  1. Knowledge-Based Decision Model Construction for Dynamic Interpretation Tasks

    National Research Council Canada - National Science Library

    Wellman, Michael

    1997-01-01

    ...) is highly variable, precluding specification of a fixed model in advance. The project yielded technical results in four areas of reasoning and decision making under uncertainty involving model construction: (1...

  2. The fractional volatility model: An agent-based interpretation

    Science.gov (United States)

    Vilela Mendes, R.

    2008-06-01

    Based on the criteria of mathematical simplicity and consistency with empirical market data, a model with volatility driven by fractional noise has been constructed which provides a fairly accurate mathematical parametrization of the data. Here, some features of the model are reviewed and extended to account for leverage effects. Using agent-based models, one tries to find which agent strategies and (or) properties of the financial institutions might be responsible for the features of the fractional volatility model.
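    A minimal sketch of a price process of this flavour follows: log-volatility driven by an approximate fractional Brownian motion (built from a truncated power-law kernel), with returns given by that volatility times white noise. The kernel construction, the Hurst exponent and all parameters are illustrative assumptions, not the paper's calibrated model.

```python
# Toy "fractional volatility" path: sigma_t = sigma0 * exp(v * B_H(t)), r_t = sigma_t * eps_t,
# with B_H approximated as B_H(t_k) ~ sum_{j<k} (t_k - t_j)^(H - 1/2) dW_j. Illustrative only.
import numpy as np

def fractional_volatility_path(n=2000, H=0.8, vol_of_vol=0.3, sigma0=0.01, seed=1):
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(n)          # noise driving the (log-)volatility
    eps = rng.standard_normal(n)         # noise driving the returns
    t = np.arange(1, n + 1)
    kernel = t ** (H - 0.5)              # power-law weights of past increments
    bh = np.array([np.dot(kernel[:k][::-1], dW[:k]) for k in range(1, n + 1)])
    bh /= bh.std()                       # crude normalisation of the fBm proxy
    sigma = sigma0 * np.exp(vol_of_vol * bh)
    returns = sigma * eps
    return returns, sigma

r, s = fractional_volatility_path()
print(r[:5], s[:5])
```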

  3. Biomechanical interpretation of a free-breathing lung motion model

    International Nuclear Information System (INIS)

    Zhao Tianyu; White, Benjamin; Lamb, James; Low, Daniel A; Moore, Kevin L; Yang Deshan; Mutic, Sasa; Lu Wei

    2011-01-01

    The purpose of this paper is to develop a biomechanical model for free-breathing motion and compare it to a published heuristic five-dimensional (5D) free-breathing lung motion model. An ab initio biomechanical model was developed to describe the motion of lung tissue during free breathing by analyzing the stress–strain relationship inside lung tissue. The first-order approximation of the biomechanical model was equivalent to a heuristic 5D free-breathing lung motion model proposed by Low et al in 2005 (Int. J. Radiat. Oncol. Biol. Phys. 63 921–9), in which the motion was broken down to a linear expansion component and a hysteresis component. To test the biomechanical model, parameters that characterize expansion, hysteresis and angles between the two motion components were reported independently and compared between two models. The biomechanical model agreed well with the heuristic model within 5.5% in the left lungs and 1.5% in the right lungs for patients without lung cancer. The biomechanical model predicted that a histogram of angles between the two motion components should have two peaks at 39.8° and 140.2° in the left lungs and 37.1° and 142.9° in the right lungs. The data from the 5D model verified the existence of those peaks at 41.2° and 148.2° in the left lungs and 40.1° and 140° in the right lungs for patients without lung cancer. Similar results were also observed for the patients with lung cancer, but with greater discrepancies. The maximum-likelihood estimation of hysteresis magnitude was reported to be 2.6 mm for the lung cancer patients. The first-order approximation of the biomechanical model fit the heuristic 5D model very well. The biomechanical model provided new insights into breathing motion with specific focus on motion trajectory hysteresis.
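    For readers unfamiliar with the 5D decomposition, the short sketch below computes the angle between a tidal-volume (expansion) motion component and an airflow (hysteresis) motion component. The two vectors are synthetic stand-ins for fitted per-voxel parameters, not data from the study.

```python
# Angle between the expansion and hysteresis motion-component vectors of the 5D model.
# The component values below are made up for illustration.
import numpy as np

def angle_deg(a, b):
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

alpha = np.array([1.2, 0.4, 3.0])    # motion per unit tidal volume, mm / L (assumed)
beta = np.array([-0.3, 0.9, 0.5])    # motion per unit airflow, mm / (L/s) (assumed)
print(f"angle between components: {angle_deg(alpha, beta):.1f} deg")
```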

  4. Futures Business Models for an IoT Enabled Healthcare Sector: A Causal Layered Analysis Perspective

    OpenAIRE

    Julius Francis Gomes; Sara Moqaddemerad

    2016-01-01

    Purpose: To facilitate futures business research by proposing a novel way to combine business models as a conceptual tool with futures research techniques. Design: A futures perspective is adopted to foresight business models of the Internet of Things (IoT) enabled healthcare sector by using business models as a futures business research tool. In doing so, business models is coupled with one of the most prominent foresight methodologies, Causal Layered Analysis (CLA). Qualitative analysis...

  5. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, high dimensionality, sparsity, and class imbalance of electronic health data and the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population, by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the international classification of diseases 9th-revision clinical modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from California, State Inpatient Databases, Healthcare Cost and Utilization Project between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression resulting in models that are easier to interpret by fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the use of a Tree-Lasso model was as competitive in terms of accuracy (measured by area under the receiver operating characteristic curve-AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided more interpretable models in terms of high-level diagnoses. Additionally, interpretations of models are in accordance with existing medical understanding of pediatric readmission. Best performing models have
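    As a hedged stand-in for the sparse models compared above, the sketch below fits a plain L1-regularised (Lasso) logistic regression on synthetic data and reports AUC and coefficient sparsity. The Tree-Lasso variant, which groups coefficients along the ICD-9-CM hierarchy, is not available in scikit-learn, so this only illustrates the baseline side of the comparison; all data and parameters are invented.

```python
# L1 (Lasso) logistic regression on synthetic sparse features: AUC + number of
# non-zero coefficients as a crude interpretability proxy. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))              # 50 synthetic "diagnosis" features
true_w = np.zeros(50)
true_w[:5] = 1.5                                 # only a few features actually matter
y = (X @ true_w + rng.standard_normal(1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)))
```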

  6. Implementations and interpretations of the talbot-ogden infiltration model

    KAUST Repository

    Seo, Mookwon

    2014-11-01

    The interaction between surface and subsurface hydrology flow systems is important for water supplies. Accurate, efficient numerical models are needed to estimate the movement of water through unsaturated soil. We investigate a water infiltration model and develop very fast serial and parallel implementations that are suitable for a computer with a graphical processing unit (GPU).

  7. ASPECTS OF MATHEMATICAL MODELING AND INTERPRETATION OF A MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    Mihaela ALDEA

    2013-05-01

    Full Text Available In the paper developing we started from a model that allows a detailed decoding of causalrelationships and getting the laws that determine the evolution of the phenomenon.The model chosen for the study is a discrete event system applicable to optimize the transport systemused in pottery. In order to simulate the manufacturing process we chose Matlab package that contains pntoollibrary, by which can be realized modeling of analyzed graphs. Since the timings of manufacture are very highand the process simulation is conducted with difficulty, we divided the graph according to the transport system.

  8. Experimental software for modeling and interpreting educational data analysis processes

    Directory of Open Access Journals (Sweden)

    Natalya V. Zorina

    2017-12-01

    Full Text Available Problems, tasks and processes of educational data mining are considered in this article. The objective is to create a fundamentally new information system of the University using the results educational data analysis. One of the functions of such a system is knowledge extraction from accumulated in the operation process data. The creation of the national system of this type is an iterative and time-consuming process requiring the preliminary studies and incremental prototyping modules. The novelty of such systems is that there is a lack of those using this methodology of the development, for this purpose a number of experiments was carried out in order to collect data, choose appropriate methods for the study and to interpret them. As a result of the experiment, the authors were available sources available for analysis in the information environment of the home university. The data were taken from the semester performance, obtained from the information system of the training department of the Institute of IT MTU MIREA, the data obtained as a result of the independent work of students and data, using specially designed Google-forms. To automate the collection of information and analysis of educational data, an experimental software package was created. As a methodology for developing the experimental software complex, a decision was made using the methodologies of rational-empirical complexes (REX and single-experimentation program technologies (TPEI. The details of the program implementation of the complex are described in detail, conclusions are given about the availability of the data sources used, and conclusions are drawn about the prospects for further development.

  9. Implementations and interpretations of the talbot-ogden infiltration model

    KAUST Repository

    Seo, Mookwon; Cerwinsky, Derrick; Gahalaut, Krishan Pratap Singh; Douglas, Craig C.

    2014-01-01

    The interaction between surface and subsurface hydrology flow systems is important for water supplies. Accurate, efficient numerical models are needed to estimate the movement of water through unsaturated soil. We investigate a water infiltration

  10. Development of Interpretable Predictive Models for BPH and Prostate Cancer.

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, J A

    2015-01-01

    Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. Statistical dependence with PC and BPH was found for prostate volume (P-value BPH prediction. PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced.
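    A hedged sketch of the model-building step described above follows: a shallow decision tree validated with leave-one-out cross-validation on synthetic PSA/volume/age data. Feature ranges, labels and thresholds are invented for illustration; they are not the study's data or fitted model.

```python
# Decision tree + leave-one-out validation on synthetic three-class data
# (0 = neither pathology, 1 = BPH, 2 = PC). Illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n = 150
X = np.column_stack([
    rng.uniform(3, 20, n),      # PSA (ng/mL)
    rng.uniform(20, 90, n),     # prostate volume (mL)
    rng.integers(50, 85, n),    # age (years)
])
y = np.where(X[:, 1] > 60, 1, np.where(X[:, 0] > 10, 2, 0))   # synthetic outcome rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```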

  11. Making sense of war : Using the interpretation comparison model to understand the Iraq conflict

    NARCIS (Netherlands)

    Stapel, Diederik A.; Marx, David M.

    2007-01-01

    The current research addressed the issue of how people use the past to compare and interpret the present. Using the logic of the Interpretation Comparison Model (ICM) we examined two factors (distinctness of past events and ambiguity of target event) that may influence how people make sense of a

  12. A statistical model for interpreting computerized dynamic posturography data

    Science.gov (United States)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
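    The snippet below is a simplified illustration of fitting a censored latent-score model by maximum likelihood: scores recorded as zero are treated as left-censored observations of a latent variable (a Tobit-style likelihood). The paper's model is richer, letting a fall occur probabilistically given the latent ES, so this sketch should be read only as a starting point, with invented data and parameters.

```python
# Tobit-style quasi-ML fit: latent scores are normal, falls are recorded as 0
# and enter the likelihood through the normal CDF. Illustrative only.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
latent = rng.normal(loc=40, scale=25, size=300)     # latent equilibrium scores
observed = np.clip(latent, 0, None)                 # losses of balance recorded as ES = 0

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    censored = observed == 0
    ll = np.sum(stats.norm.logpdf(observed[~censored], mu, sigma))   # observed scores
    ll += np.sum(stats.norm.logcdf((0 - mu) / sigma) * censored)     # censored (fall) trials
    return -ll

fit = optimize.minimize(neg_log_lik, x0=[30.0, np.log(20.0)], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated latent mean {mu_hat:.1f}, sd {sigma_hat:.1f}")
```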

  13. A different interpretation of the nuclear shell model

    International Nuclear Information System (INIS)

    Fabre de la Ripelle, M.

    1984-12-01

    In the first order approximation the nucleons are moving in a collective well extracted from the two-body N-N interaction. The nuclear shell model is explained by the structure of the first order solution of the Schroedinger equation. In the next step the two-body correlations generated by the N-N potential are introduced into the wave function.

  14. Stieltjes electrostatic model interpretation for bound state problems

    Indian Academy of Sciences (India)

    In this paper, it is shown that the Stieltjes electrostatic model and the quantum Hamilton Jacobi formalism are analogous to each other. This analogy allows the bound state problem to be mimicked as unit moving imaginary charges iℏ, which are placed in between the two fixed imaginary charges arising due to the classical turning ...

  15. Montmorillonite dissolution kinetics: Experimental and reactive transport modeling interpretation

    Science.gov (United States)

    Cappelli, Chiara; Yokoyama, Shingo; Cama, Jordi; Huertas, F. Javier

    2018-04-01

    The dissolution kinetics of K-montmorillonite was studied at 25 °C, acidic pH (2-4) and 0.01 M ionic strength by means of well-mixed flow-through experiments. The variations of Si, Al and Mg over time resulted in high releases of Si and Mg and Al deficit, which yielded long periods of incongruent dissolution before reaching stoichiometric steady state. This behavior was caused by simultaneous dissolution of nanoparticles and cation exchange between the interlayer K and released Ca, Mg and Al and H. Since Si was only involved in the dissolution reaction, it was used to calculate steady-state dissolution rates, RSi, over a wide solution saturation state (ΔGr ranged from -5 to -40 kcal mol^-1). The effects of pH and the degree of undersaturation (ΔGr) on the K-montmorillonite dissolution rate were determined using RSi. Employing dissolution rates farthest from equilibrium, the catalytic pH effect on the K-montmorillonite dissolution rate was expressed as Rdiss = k·aH^(0.56±0.05), whereas using all dissolution rates, the ΔGr effect was expressed as a non-linear f(ΔGr) function, Rdiss = k·[1 - exp(-3.8 × 10^-4 · (|ΔGr|/RT)^2.13)]. The functionality of this expression is similar to the equations reported for dissolution of Na-montmorillonite at pH 3 and 50 °C (Metz, 2001) and Na-K-Ca-montmorillonite at pH 9 and 80 °C (Cama et al., 2000; Marty et al., 2011), which lends support to the use of a single f(ΔGr) term to calculate the rate over the pH range 0-14. Thus, we propose a rate law that also accounts for the effect of pOH and temperature by using the pOH-rate dependence and the apparent activation energy proposed by Rozalén et al. (2008) and Amram and Ganor (2005), respectively, and normalizing the dissolution rate constant with the edge surface area of the K-montmorillonite. 1D reactive transport simulations of the experimental data were performed using the Crunchflow code (Steefel et al., 2015) to quantitatively interpret the evolution of the released cations.
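    To make the units and functional form of the two quoted rate expressions explicit, the short sketch below simply evaluates them. Only the exponents and the f(ΔGr) form come from the abstract; the rate constant k and the proton activity are placeholder values.

```python
# Direct evaluation of R = k * a_H^0.56 (far from equilibrium) and of the full
# rate law R = k * [1 - exp(-3.8e-4 * (|dGr|/RT)^2.13)]. k and a_H are placeholders.
import numpy as np

R_GAS = 1.987e-3   # gas constant, kcal mol^-1 K^-1
T = 298.15         # temperature, K (25 degrees C)

def rate_far_from_equilibrium(k, a_H):
    """pH dependence of the dissolution rate far from equilibrium."""
    return k * a_H ** 0.56

def rate_with_gibbs_term(k, delta_Gr):
    """Full rate law with the non-linear f(deltaGr) term (deltaGr in kcal/mol)."""
    x = np.abs(delta_Gr) / (R_GAS * T)
    return k * (1.0 - np.exp(-3.8e-4 * x ** 2.13))

print(rate_far_from_equilibrium(k=1e-12, a_H=10 ** -3))   # roughly pH 3
print(rate_with_gibbs_term(k=1e-12, delta_Gr=-20.0))      # moderately undersaturated
```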

  16. An exotic k-essence interpretation of interactive cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Forte, Monica [Universidad de Buenos Aires, Departamento de Fisica, Facultad de ciencias Exactas y Naturales, Buenos Aires (Argentina)

    2016-01-15

    We define a generalization of scalar fields with non-canonical kinetic term which we call exotic k-essence or, briefly, exotik. These fields are generated by the global description of cosmological models with two interactive fluids in the dark sector and under certain conditions they correspond to usual k-essences. The formalism is applied to the cases of constant potential and of inverse square potential and also we develop the purely exotik version for the modified holographic Ricci type (MHR) of dark energy, where the equations of state are not constant. With the kinetic function F = 1 + mx and the inverse square potential we recover, through the interaction term, the identification between k-essences and quintessences of an exponential potential, already known for Friedmann-Robertson-Walker and Bianchi type I geometries. Worked examples are shown that include the self-interacting MHR and also models with crossing of the phantom divide line (PDL). (orig.)

  17. Guiding center model to interpret neutral particle analyzer results

    Science.gov (United States)

    Englert, G. W.; Reinmann, J. J.; Lauver, M. R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about mean azimuthal drift yields energy distributions which compared well with experiment. Use of data obtained with a bending magnet on the NPA showed that separation between energy distribution curves of various mass species correlates well with a drift divided by mean cyclotron energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimates of ion density and E field variation with plasma radius.

  18. Modelling and interpretation of gas detection using remote laser pointers.

    Science.gov (United States)

    Hodgkinson, J; van Well, B; Padgett, M; Pride, R D

    2006-04-01

    We have developed a quantitative model of the performance of laser pointer style gas leak detectors, which are based on remote detection of backscattered radiation. The model incorporates instrumental noise limits, the reflectivity of the target background surface and a mathematical description of gas leak dispersion in constant wind speed and turbulence conditions. We have investigated optimum instrument performance and limits of detection in simulated leak detection situations. We predict that the optimum height for instruments is at eye level or above, giving an operating range of 10 m or more for most background surfaces, in wind speeds of up to 2.5 m/s. For ground-based leak sources, we find laser pointer measurements are dominated by gas concentrations over a short distance close to the target surface, making their readings intuitive to end users in most cases. This finding is consistent with the results of field trials.
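    The sketch below illustrates the measurement geometry such an instrument relies on: the reading is essentially a path-integrated gas concentration along the line of sight, here through a very simple Gaussian-plume leak. The plume parameters, fixed plume widths and geometry are invented for illustration and are not the paper's calibrated dispersion model.

```python
# Path-integrated concentration along a laser line of sight through a toy
# Gaussian plume (ground-level point source at the origin, wind along +x).
import numpy as np

def plume_concentration(x, y, z, Q=1e-3, u=2.0, sigma_y=0.5, sigma_z=0.3):
    """Concentration (kg/m^3) at (x, y, z); parameters are illustrative."""
    if x <= 0:
        return 0.0
    c0 = Q / (2 * np.pi * u * sigma_y * sigma_z)
    return c0 * np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-z**2 / (2 * sigma_z**2))

def path_integrated(start, end, n=500):
    """Average concentration times path length (simple Riemann approximation)."""
    start, end = np.asarray(start), np.asarray(end)
    pts = np.linspace(0, 1, n)[:, None] * (end - start) + start
    length = np.linalg.norm(end - start)
    values = np.array([plume_concentration(*p) for p in pts])
    return values.mean() * length

# instrument at eye level aiming down-range through the plume, 5 m downwind of the leak
print(path_integrated(start=(5.0, -3.0, 1.6), end=(5.0, 3.0, 0.2)))
```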

  19. An exotic k-essence interpretation of interactive cosmological models

    International Nuclear Information System (INIS)

    Forte, Monica

    2016-01-01

    We define a generalization of scalar fields with non-canonical kinetic term which we call exotic k-essence or, briefly, exotik. These fields are generated by the global description of cosmological models with two interactive fluids in the dark sector and under certain conditions they correspond to usual k-essences. The formalism is applied to the cases of constant potential and of inverse square potential and also we develop the purely exotik version for the modified holographic Ricci type (MHR) of dark energy, where the equations of state are not constant. With the kinetic function F = 1 + mx and the inverse square potential we recover, through the interaction term, the identification between k-essences and quintessences of an exponential potential, already known for Friedmann-Robertson-Walker and Bianchi type I geometries. Worked examples are shown that include the self-interacting MHR and also models with crossing of the phantom divide line (PDL). (orig.)

  20. Learning Behavior Models for Interpreting and Predicting Traffic Situations

    OpenAIRE

    Gindele, Tobias

    2014-01-01

    In this thesis, we present Bayesian state estimation and machine learning methods for predicting traffic situations. The cognitive ability to assess situations and behaviors of traffic participants, and to anticipate possible developments is an essential requirement for several applications in the traffic domain, especially for self-driving cars. We present a method for learning behavior models from unlabeled traffic observations and develop improved learning methods for decision trees.

  1. Designing a Knowledge Management Excellence Model Based on Interpretive Structural Modeling

    Directory of Open Access Journals (Sweden)

    Mirza Hassan Hosseini

    2014-09-01

    Full Text Available Despite the development of an appropriate academic and experiential background for knowledge management and its manifestation as a competitive advantage, many organizations have failed in its effective utilization. Among the reasons for this failure are methodological deficiencies in the recognition and translation of KM dimensions and the lack of a systematic approach to establishing causal relationships among KM factors. This article attempts to design an organizational knowledge management excellence model. Library research, interviews with experts and interpretive structural modeling (ISM) were used in order to identify the factors of KM excellence and determine the relationships between them. Accordingly, 9 key criteria of KM excellence as well as 29 sub-criteria were extracted, and the relationships and sequence of factors were defined and developed in 5 levels for designing an organizational KM excellence model. Finally, the concepts were applied in defense organizations to illustrate the proposed methodology.

  2. Futures Business Models for an IoT Enabled Healthcare Sector: A Causal Layered Analysis Perspective

    Directory of Open Access Journals (Sweden)

    Julius Francis Gomes

    2016-12-01

    Full Text Available Purpose: To facilitate futures business research by proposing a novel way to combine business models as a conceptual tool with futures research techniques. Design: A futures perspective is adopted to foresight business models of the Internet of Things (IoT) enabled healthcare sector by using business models as a futures business research tool. In doing so, business models is coupled with one of the most prominent foresight methodologies, Causal Layered Analysis (CLA). Qualitative analysis provides deeper understanding of the phenomenon through the layers of CLA: litany, social causes, worldview and myth. Findings: It is difficult to predict the far future for a technology oriented sector like healthcare. This paper presents three scenarios for short-, medium- and long-term future. Based on these scenarios we also present a set of business model elements for different future time frames. This paper shows a way to combine business models with CLA, a foresight methodology, in order to apply business models in futures business research. Besides offering early results for futures business research, this study proposes a conceptual space to work with individual business models for managerial stakeholders. Originality / Value: Much research on business models has offered conceptualization of the phenomenon, innovation through business model and transformation of business models. However, existing literature does not offer much on using business model as a futures research tool. Enabled by futures thinking, we collected key business model elements and building blocks for the futures market and analyzed them through the CLA framework.

  3. Modeling sequential context effects in diagnostic interpretation of screening mammograms.

    Science.gov (United States)

    Alamudun, Folami; Paulus, Paige; Yoon, Hong-Jun; Tourassi, Georgia

    2018-07-01

    Prior research has shown that physicians' medical decisions can be influenced by sequential context, particularly in cases where successive stimuli exhibit similar characteristics when analyzing medical images. This type of systematic error is known to psychophysicists as sequential context effect as it indicates that judgments are influenced by features of and decisions about the preceding case in the sequence of examined cases, rather than being based solely on the peculiarities unique to the present case. We determine if radiologists experience some form of context bias, using screening mammography as the use case. To this end, we explore correlations between previous perceptual behavior and diagnostic decisions and current decisions. We hypothesize that a radiologist's visual search pattern and diagnostic decisions in previous cases are predictive of the radiologist's current diagnostic decisions. To test our hypothesis, we tasked 10 radiologists of varied experience to conduct blind reviews of 100 four-view screening mammograms. Eye-tracking data and diagnostic decisions were collected from each radiologist under conditions mimicking clinical practice. Perceptual behavior was quantified using the fractal dimension of gaze scanpath, which was computed using the Minkowski-Bouligand box-counting method. To test the effect of previous behavior and decisions, we conducted a multifactor fixed-effects ANOVA. Further, to examine the predictive value of previous perceptual behavior and decisions, we trained and evaluated a predictive model for radiologists' current diagnostic decisions. ANOVA tests showed that previous visual behavior, characterized by fractal analysis, previous diagnostic decisions, and image characteristics of previous cases are significant predictors of current diagnostic decisions. Additionally, predictive modeling of diagnostic decisions showed an overall improvement in prediction error when the model is trained on additional information about
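    The perceptual feature described above, the fractal dimension of a gaze scanpath, can be estimated with a few lines of box counting. The sketch below uses a synthetic random walk in place of real eye-tracking fixations; grid sizes and the trace itself are illustrative.

```python
# Minkowski-Bouligand (box-counting) estimate of the fractal dimension of a 2D scanpath.
# The scanpath here is a synthetic random walk, not eye-tracking data.
import numpy as np

def box_counting_dimension(points, grid_sizes=(2, 4, 8, 16, 32, 64)):
    pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)   # normalise to [0, 1]^2
    counts = []
    for n in grid_sizes:
        idx = np.minimum((pts * n).astype(int), n - 1)             # box index of each point
        counts.append(len(np.unique(idx, axis=0)))                 # number of occupied boxes
    # slope of log(occupied boxes) vs log(boxes per side) estimates the dimension
    slope, _ = np.polyfit(np.log(grid_sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(7)
scanpath = np.cumsum(rng.standard_normal((5000, 2)), axis=0)       # synthetic gaze trace
print("estimated fractal dimension:", round(box_counting_dimension(scanpath), 2))
```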

  4. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  5. Global Analysis, Interpretation, and Modelling: First Science Conference

    Science.gov (United States)

    Sahagian, Dork

    1995-01-01

    Topics considered include: Biomass of termites and their emissions of methane and carbon dioxide - A global database; Carbon isotope discrimination during photosynthesis and the isotope ratio of respired CO2 in boreal forest ecosystems; Estimation of methane emission from rice paddies in mainland China; Climate and nitrogen controls on the geography and timescales of terrestrial biogeochemical cycling; Potential role of vegetation feedback in the climate sensitivity of high-latitude regions - A case study at 6000 years B.P.; Interannual variation of carbon exchange fluxes in terrestrial ecosystems; and Variations in modeled atmospheric transport of carbon dioxide and the consequences for CO2 inversions.

  6. Exploring How Usage-Focused Business Models Enable Circular Economy through Digital Technologies

    Directory of Open Access Journals (Sweden)

    Gianmarco Bressanelli

    2018-02-01

    Full Text Available Recent studies advocate that digital technologies are key enabling factors for the introduction of servitized business models. At the same time, these technologies support the implementation of the circular economy (CE) paradigm into businesses. Despite this general agreement, the literature still overlooks how digital technologies enable such a CE transition. To fill the gap, this paper develops a conceptual framework, based on the literature and a case study of a company implementing a usage-focused servitized business model in the household appliance industry. This study focuses on the Internet of Things (IoT), Big Data, and analytics, and identifies eight specific functionalities enabled by such technologies (improving product design, attracting target customers, monitoring and tracking product activity, providing technical support, providing preventive and predictive maintenance, optimizing the product usage, upgrading the product, enhancing renovation and end-of-life activities). By investigating how these functionalities affect three CE value drivers (increasing resource efficiency, extending lifespan, and closing the loop), the conceptual framework developed in this paper advances knowledge about the role of digital technologies as an enabler of the CE within usage-focused business models. Finally, this study shows how digital technologies help overcome the drawback of usage-focused business models for the adoption of CE pointed out by previous literature.

  7. Interpretation of Higgs and Susy searches in MSUGRA and GMSB Models

    International Nuclear Information System (INIS)

    Vivie, J.B. de

    1999-10-01

    Higgs and SUSY searches performed by the ALEPH Experiment at LEP are interpreted in the framework of two constrained R-parity conserving models: Minimal Supergravity and minimal Gauge Mediated Supersymmetry Breaking. (author)

  8. Guiding center model to interpret neutral particle analyzer results

    International Nuclear Information System (INIS)

    Englert, G.W.; Reinmann, J.J.; Lauver, M.R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about mean azimuthal drift yields energy distributions which compared well with experiment. Use of data obtained with a bending magnet on the NPA showed that separation between energy distribution curves of various mass species correlates well with a drift divided by mean cyclotron energy parameter of the theory. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimates of ion density and E field variation with plasma radius. (U.S.)

  9. Mechanistic interpretation of glass reaction: Input to kinetic model development

    International Nuclear Information System (INIS)

    Bates, J.K.; Ebert, W.L.; Bradley, J.P.; Bourcier, W.L.

    1991-05-01

    Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90 °C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically with preferred leaching of alkali metals and boron. High resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs

  10. The interacting boson model: its formulation, application, extension and interpretation

    International Nuclear Information System (INIS)

    Barrett, B.R.

    1981-01-01

    The goal of this article is to review the present status of the Interacting Boson Model (IBM) for describing the collective properties of medium and heavy mass nuclei, with particular emphasis being given to the work on the IBM at the University of Arizona. First, a concise review of the basic phenomenological IBM, as developed by Arima and Iachello for only one kind of boson, is presented. Next, the extension of the IBM to both proton and neutron bosons is outlined. This latter model is known as the IBM-2. The application of the IBM-2 to the tungsten isotopes by the University of Arizona group is discussed, followed by their calculations for the mercury isotopes. In the case of the mercury isotopes an extended form of the IBM-2 is developed in order to treat the configuration mixing of two entirely different structures which occur in the same energy region. The relationship between the bosons and the underlying fermionic structure of the nucleus is discussed using the generalized seniority scheme of Talmi. Work by the Arizona group to calculate the phenomenological parameters of the IBM-2 using these generalized seniority ideas is described, along with their results, which agree quite well with the empirical values. Efforts by the University of Arizona group to determine the influence of terms left out of the basic IBM, such as the g boson, using second-order perturbation theory are described. In conclusion, a discussion of the limitations as well as the usefulness of the IBM is given along with its exciting possibilities for the future of nuclear structure physics. (author)

  11. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is 1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, 2) to discuss common pitfalls and methodological errors in developing a model, and 3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
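    Two of the reporting steps discussed above, turning logistic regression coefficients into odds ratios and summarising discrimination with the ROC AUC, are illustrated in the hedged sketch below. The data are synthetic and the three variables are placeholders for the financial ratios used in the study.

```python
# Odds ratios (exp of the coefficients) and ROC AUC for a toy growth-prediction
# logistic regression. Data, coefficients and variable meanings are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 1471
X = rng.standard_normal((n, 3))                   # e.g. liquidity, leverage, profitability ratios
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1 = high-growth company (synthetic)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])              # multiplicative change in odds per unit increase
print("odds ratios:", np.round(odds_ratios, 2))
print("AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```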

  12. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

    Full Text Available Few studies have investigated the influence of “information capital,” through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in and benefits for corporate performance.

  13. Supernova 1987A Interpreted through the SLIP Pulsar Model

    Science.gov (United States)

    Middleditch, John

    2010-01-01

    The model of pulsar emission through superluminally induced polarization currents (SLIP) predicts that pulsations produced by such currents, induced by a rotating, magnetized body at many light cylinder radii, as would be the case for a neutron star born within any star of >1.5 solar masses, will drive pulsations close to the axis of rotation. Such highly collimated pulsations (), and later, in less collimated form, the bipolarity of SN 1987A itself. The pulsations and jet interacted with circumstellar material (CM), to produce features observed in the very early light curve which correspond to: 1) the entry of the pulsed beam into the CM; 2) the entry of the 0.95 c particles into the CM; 3) the exit of the pulsed beam from the CM (with contributions in the B and I bands -- the same as later inferred/observed for its 2.14 ms pulsations); and 4) the exit of the fastest particles from the CM. Because of the energy requirements of the jet in these early stages, the spindown required of its pulsar could exceed 1e-5 Hz/s at a rotation rate of 500 Hz. There is no reason to suggest that this mechanism is not universally applicable to all SNe with gaseous remnants remaining, and thus SN 1987A is the Rosetta Stone for 99% of SNe, gamma-ray bursts, and millisecond pulsars. This work was supported in part by the Department of Energy through the Los Alamos Directed Research Grant DR20080085.

  14. Collaborative Cloud Manufacturing: Design of Business Model Innovations Enabled by Cyberphysical Systems in Distributed Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Erwin Rauch

    2016-01-01

    Full Text Available Collaborative cloud manufacturing, as a concept of distributed manufacturing, allows different opportunities for changing the logic of generating and capturing value. Cyberphysical systems and the technologies behind them are the enablers for new business models which have the potential to be disruptive. This paper introduces the topics of distributed manufacturing as well as cyberphysical systems. Furthermore, the main business model clusters of distributed manufacturing systems are described, including collaborative cloud manufacturing. The paper aims to provide support for developing business model innovations based on collaborative cloud manufacturing. Therefore, three business model architecture types of a differentiated business logic are discussed, taking into consideration the parameters which have an influence and the design of the business model and its architecture. As a result, new business models can be developed systematically and new ideas can be generated to boost the concept of collaborative cloud manufacturing within all sustainable business models.

  15. MODELLING THE FUTURE MUSIC TEACHERS’ READINESS TO PERFORMING AND INTERPRETIVE ACTIVITY DURING INSTRUMENTAL TRAINING

    Directory of Open Access Journals (Sweden)

    Chenj Bo

    2016-11-01

    Full Text Available One of the main fields of training future music teachers in the Ukrainian system of higher education is instrumental music training, which includes the skills of performing and interpretive activities. The aim of the article is to design a model of the future music teachers’ readiness for performing and interpretive activities in musical and instrumental training. The process of modelling is based on several interrelated scientific approaches, including systemic, personality-centered, reflective, competence, active and creative ones. While designing a model of future music teachers’ readiness for musical interpretive activities, its philosophical, informative, interactive, hedonistic and creative functions are taken into account. Important theoretical and methodological factors are thought to be principles of musical and pedagogical education: culture correspondence and reflection; unity of emotional and conscious, artistic and technical items in musical education; purposeful interrelations and art and pedagogical communication between teachers and students; intensification of music and creative activity. The above-mentioned pedagogical phenomenon is subdivided into four components: motivation-oriented, cognitive-evaluating, performance-independent, and creative and productive. For each component relevant criteria and indicators are identified. The stages of future music teachers’ readiness for performing and interpretative activity are highlighted: an information searching one, which contributes to the implementation of complex diagnostic methods (surveys, questionnaires, testing); a regulative and performing one, which is characterized by future music teachers’ immersion into music performing and interpretative activities; an operational and reflective stage, which involves activation of mechanisms of future music teachers’ self-knowledge, self-realization, and formation of skills of independent, artistic and expressive interpretation of various music genres and styles; projective and

  16. Enabling interoperability in planetary sciences and heliophysics: The case for an information model

    Science.gov (United States)

    Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.

    2018-01-01

    The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) level standards for trusted digital archives, information model development, and metadata registries. Whereas controlled vocabularies provide a basic level of interoperability by supplying a common set of terms for communication between both machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The Information Model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type of interoperability most directly enabled by an information model.

  17. Multi-dimensional knowledge translation: enabling health informatics capacity audits using patient journey models.

    Science.gov (United States)

    Catley, Christina; McGregor, Carolyn; Percival, Jennifer; Curry, Joanne; James, Andrew

    2008-01-01

    This paper presents a multi-dimensional approach to knowledge translation, enabling results obtained from a survey evaluating the uptake of Information Technology within Neonatal Intensive Care Units to be translated into knowledge, in the form of health informatics capacity audits. Survey data, which span multiple roles, patient care scenarios, levels, and hospitals, are translated into patient journey models using a structured data modeling approach. The data model is defined such that users can develop queries to generate patient journey models based on a pre-defined Patient Journey Model architecture (PaJMa). PaJMa models are then analyzed to build capacity audits. Capacity audits offer a sophisticated view of health informatics usage, providing not only details of what IT solutions a hospital utilizes, but also answering the questions of when, how and why, by determining when the IT solutions are integrated into the patient journey, how they support the patient information flow, and why they improve the patient journey.

  18. Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm

    Science.gov (United States)

    Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will

    2016-01-01

    With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish a workflow for, and demonstrate, a unique set of web application tools linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema and NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which is captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.

  19. Interpretation of moving EM dipole-dipole measurements using thin plate models

    International Nuclear Information System (INIS)

    Oksama, M.; Suppala, I.

    1998-01-01

    The three-dimensional inversion of electromagnetic data is still rather problematic, because forward modelling programs are usually time consuming. They are based on numerical methods such as finite element or integral equation methods. In this study, a specific interpretation model has been chosen: two thin plates located in a horizontally layered earth with two layers. The model is rather limited, but in a few geological cases it is relevant. This interpretation method has been applied to two geophysical EM systems, the slingram system and the airborne electromagnetic system of the Geological Survey of Finland (GTK)

  20. Interpretation of moving EM dipole-dipole measurements using thin plate models

    Energy Technology Data Exchange (ETDEWEB)

    Oksama, M.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)

    1998-09-01

    The three-dimensional inversion of electromagnetic data is still rather problematic, because forward modelling programs are usually time consuming. They are based on numerical methods such as finite element or integral equation methods. In this study, a specific interpretation model has been chosen: two thin plates located in a horizontally layered earth with two layers. The model is rather limited, but in a few geological cases it is relevant. This interpretation method has been applied to two geophysical EM systems, the slingram system and the airborne electromagnetic system of the Geological Survey of Finland (GTK) 5 refs.

  1. Using the model statement to elicit information and cues to deceit in interpreter-based interviews.

    Science.gov (United States)

    Vrij, Aldert; Leal, Sharon; Mann, Samantha; Dalton, Gary; Jo, Eunkyung; Shaboltas, Alla; Khaleeva, Maria; Granskaya, Juliana; Houston, Kate

    2017-06-01

    We examined how the presence of an interpreter during an interview affects eliciting information and cues to deceit, while using a method that encourages interviewees to provide more detail (model statement, MS). A total of 199 Hispanic, Korean and Russian participants were interviewed either in their own native language without an interpreter, or through an interpreter. Interviewees either lied or told the truth about a trip they made during the last twelve months. Half of the participants listened to a MS at the beginning of the interview. The dependent variables were 'detail', 'complications', 'common knowledge details', 'self-handicapping strategies' and 'ratio of complications'. In the MS-absent condition, the interviews resulted in less detail when an interpreter was present than when an interpreter was absent. In the MS-present condition, the interviews resulted in a similar amount of detail in the interpreter present and absent conditions. Truthful statements included more complications and fewer common knowledge details and self-handicapping strategies than deceptive statements, and the ratio of complications was higher for truth tellers than liars. The MS strengthened these results, whereas an interpreter had no effect on these results. Copyright © 2017. Published by Elsevier B.V.

  2. Interpretation of the results of statistical measurements. [search for basic probability model

    Science.gov (United States)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  3. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...

  4. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
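
    As a rough illustration of the functional form behind such an input-output model, the sketch below evaluates a discrete second-order Volterra expansion of an input train; the kernels, memory length and input are made-up placeholders for illustration, not the fitted kernels of the mechanistic synapse model.

    ```python
    import numpy as np

    def volterra_response(u, k0, k1, k2):
        """Discrete Volterra expansion up to 2nd order:
        y[n] = k0 + sum_i k1[i] u[n-i] + sum_{i,j} k2[i,j] u[n-i] u[n-j]."""
        M = len(k1)                               # kernel memory length
        y = np.full(len(u), k0, dtype=float)
        u_pad = np.concatenate([np.zeros(M - 1), u])
        for n in range(len(u)):
            window = u_pad[n:n + M][::-1]         # u[n], u[n-1], ..., u[n-M+1]
            y[n] += k1 @ window + window @ k2 @ window
        return y

    # Hypothetical kernels (decaying exponentials), not the paper's fitted values.
    M = 50
    t = np.arange(M)
    k1 = 0.8 * np.exp(-t / 10.0)
    k2 = -0.05 * np.outer(np.exp(-t / 15.0), np.exp(-t / 15.0))
    u = (np.random.rand(500) < 0.05).astype(float)   # sparse input event train
    y = volterra_response(u, k0=0.0, k1=k1, k2=k2)
    ```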

  6. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how it relates to uncertainty representation, management and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics are used to make critical decisions.
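
    For readers unfamiliar with the estimation step, the following minimal sketch tracks a scalar health indicator and its degradation rate with a linear Kalman filter and extrapolates the mean state to a failure threshold to obtain a point estimate of remaining useful life. The state-space matrices, noise levels and threshold are illustrative assumptions, not the configuration used for the electronics components in the article.

    ```python
    import numpy as np

    # Minimal linear Kalman filter tracking a degradation level and its rate.
    # State x = [health, rate]; illustrative noise levels, not the paper's values.
    F = np.array([[1.0, 1.0],
                  [0.0, 1.0]])           # constant-rate degradation model
    H = np.array([[1.0, 0.0]])           # only the health indicator is measured
    Q = np.diag([1e-4, 1e-5])            # process noise covariance
    R = np.array([[1e-2]])               # measurement noise covariance

    def kf_step(x, P, z):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        return x, P

    def rul_estimate(x, threshold):
        """Deterministic extrapolation of the mean state to the failure threshold;
        a full prognostic treatment would also propagate the state uncertainty."""
        health, rate = x
        return np.inf if rate >= 0 else (threshold - health) / rate
    ```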

  7. Implementing novel models of posttreatment care for cancer survivors: Enablers, challenges and recommendations.

    Science.gov (United States)

    Jefford, Michael; Kinnane, Nicole; Howell, Paula; Nolte, Linda; Galetakis, Spiridoula; Bruce Mann, Gregory; Naccarella, Lucio; Lai-Kwon, Julia; Simons, Katherine; Avery, Sharon; Thompson, Kate; Ashley, David; Haskett, Martin; Davies, Elise; Whitfield, Kathryn

    2015-12-01

    The American Society of Clinical Oncology and US Institute of Medicine emphasize the need to trial novel models of posttreatment care, and disseminate findings. In 2011, the Victorian State Government (Australia) established the Victorian Cancer Survivorship Program (VCSP), funding six 2-year demonstration projects, targeting end of initial cancer treatment. Projects considered various models, enrolling people of differing cancer types, age and residential areas. We sought to determine common enablers of success, as well as challenges/barriers. Throughout the duration of the projects, a formal "community of practice" met regularly to share experiences. Projects provided regular formal progress reports. An analysis framework was developed to synthesize key themes and identify critical enablers and challenges. Two external reviewers examined final project reports. Discussion with project teams clarified content. Survivors reported interventions to be acceptable, appropriate and effective. Strong clinical leadership was identified as a critical success factor. Workforce education was recognized as important. Partnerships with consumers, primary care and community organizations; risk stratified pathways with rapid re-access to specialist care; and early preparation for survivorship, self-management and shared care models supported positive project outcomes. Tailoring care to individual needs and predicted risks was supported. Challenges included: lack of valid assessment and prediction tools; limited evidence to support novel care models; workforce redesign; and effective engagement with community-based care and issues around survivorship terminology. The VCSP project outcomes have added to growing evidence around posttreatment care. Future projects should consider the identified enablers and challenges when designing and implementing survivorship care. © 2015 Wiley Publishing Asia Pty Ltd.

  8. An Observation Capability Metadata Model for EO Sensor Discovery in Sensor Web Enablement Environments

    Directory of Open Access Journals (Sweden)

    Chuli Hu

    2014-10-01

    Full Text Available Accurate and fine-grained discovery of diverse Earth observation (EO) sensors ensures a comprehensive response to collaborative observation-required emergency tasks. This discovery remains a challenge in an EO sensor web environment. In this study, we propose an EO sensor observation capability metadata model that reuses and extends the existing sensor observation-related metadata standards to enable the accurate and fine-grained discovery of EO sensors. The proposed model is composed of five sub-modules, namely, ObservationBreadth, ObservationDepth, ObservationFrequency, ObservationQuality and ObservationData. The model is applied to different types of EO sensors and is formalized by the Open Geospatial Consortium Sensor Model Language 1.0. The GeosensorQuery prototype retrieves the qualified EO sensors based on the provided geo-event. An actual application to flood emergency observation in the Yangtze River Basin in China is conducted, and the results indicate that sensor inquiry can accurately achieve fine-grained discovery of qualified EO sensors and obtain enriched observation capability information. In summary, the proposed model enables an efficient encoding system that ensures minimum unification to represent the observation capabilities of EO sensors. The model functions as a foundation for the efficient discovery of EO sensors. In addition, the definition and development of this proposed EO sensor observation capability metadata model is a helpful step in extending the Sensor Model Language (SensorML) 2.0 Profile for the description of the observation capabilities of EO sensors.
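
    A possible way to picture the structure of the metadata model is the sketch below, which encodes the five sub-modules as plain Python classes. All field names are assumptions made for illustration; the actual model is formalized in OGC SensorML rather than in code.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Illustrative encoding of the five sub-modules; field names are assumptions.
    @dataclass
    class ObservationBreadth:
        swath_width_km: float
        spatial_coverage_wkt: str          # footprint geometry

    @dataclass
    class ObservationDepth:
        spectral_bands: List[str]
        spatial_resolution_m: float

    @dataclass
    class ObservationFrequency:
        revisit_time_h: float

    @dataclass
    class ObservationQuality:
        radiometric_accuracy: float
        geolocation_accuracy_m: float

    @dataclass
    class ObservationData:
        product_formats: List[str] = field(default_factory=list)

    @dataclass
    class EOSensorCapability:
        sensor_id: str
        breadth: ObservationBreadth
        depth: ObservationDepth
        frequency: ObservationFrequency
        quality: ObservationQuality
        data: ObservationData
    ```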

  9. Analysis of diet optimization models for enabling conditions for hypertrophic muscle enlargement in athletes

    Directory of Open Access Journals (Sweden)

    L. Matijević

    2013-01-01

    Full Text Available In this study, mathematical models were created and used in diet optimization for an athlete (a recreational bodybuilder) in the pre-tournament period. The main aim was to determine weekly menus that can enable conditions for hypertrophic muscle enlargement and reduce body fat mass. Each daily offer was planned to contain six to seven meals while respecting several of the user’s personal demands. The optimal carbohydrate, fat and protein ratio in the diet for enabling hypertrophy, recommended in the literature, is 43:30:27, and this was chosen as the target in this research. Variables included in the models were the available dishes; the constraints were the observed values of the offers: price, mass of consumed food, energy, water and content of different nutrients. The general idea was to create the models and to compare different programs in solving the problem. LINDO and MS Excel were recognized as widely used and were chosen for model testing and examination. Both programs suggested weekly menus that were acceptable to the user and met all recommendations and demands. Weekly menus were analysed and compared. Sensitivity tests from both programs were used to detect possible critical points in the menu. The programs produced slightly different results but with very high correlation between the proposed weekly intakes (R2=0.99856, p<0.05), so both can be successfully used in the pre-tournament period of bodybuilding and are recommended for this complex task.
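
    The menu selection described here is essentially a linear program, and the sketch below shows how a comparable problem could be posed with scipy: minimize price subject to a target energy intake and the 43:30:27 carbohydrate:fat:protein energy split. The food composition table and bounds are invented for illustration and do not reproduce the LINDO or MS Excel models of the study.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Grams of carbohydrate, fat, protein and price per 100 g serving.
    # Composition values are illustrative, not those used in the study.
    foods   = ["oats", "chicken", "olive oil", "rice", "cottage cheese"]
    carb    = np.array([60.0,  0.0,   0.0, 78.0,  4.0])
    fat     = np.array([ 7.0,  4.0, 100.0,  1.0,  4.0])
    protein = np.array([13.0, 23.0,   0.0,  7.0, 12.0])
    price   = np.array([0.15, 0.80,  0.60, 0.20, 0.45])

    energy = 4 * carb + 9 * fat + 4 * protein        # kcal per 100 g
    target_kcal = 2800.0
    ratio = np.array([0.43, 0.30, 0.27])             # carb:fat:protein energy split

    # Minimise price subject to the target energy and the 43:30:27 energy ratio
    # (fixing the carbohydrate and fat shares fixes the protein share as well).
    A_eq = np.vstack([
        energy,                        # total energy = target
        4 * carb - ratio[0] * energy,  # carbohydrate energy share = 43 %
        9 * fat  - ratio[1] * energy,  # fat energy share = 30 %
    ])
    b_eq = np.array([target_kcal, 0.0, 0.0])
    res = linprog(price, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 10)] * len(foods), method="highs")
    print(dict(zip(foods, np.round(res.x, 2))))      # servings of 100 g
    ```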

  10. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    Science.gov (United States)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  11. Interpretable Predictive Models for Knowledge Discovery from Home-Care Electronic Health Records

    Directory of Open Access Journals (Sweden)

    Bonnie L. Westra

    2011-01-01

    Full Text Available The purpose of this methodological study was to compare methods of developing predictive rules that are parsimonious and clinically interpretable from electronic health record (EHR) home visit data, contrasting logistic regression with three data mining classification models. We address three problems commonly encountered in EHRs: the value of including clinically important variables with little variance, handling imbalanced datasets, and ease of interpretation of the resulting predictive models. Logistic regression and three classification models using Ripper, decision trees, and Support Vector Machines were applied to a case study for one outcome of improvement in oral medication management. Predictive rules for logistic regression, Ripper, and decision trees are reported and results compared using F-measures for data mining models and area under the receiver-operating characteristic curve for all models. The rules generated by the three classification models provide potentially novel insights into mining EHRs beyond those provided by standard logistic regression, and suggest steps for further study.
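
    A minimal sketch of the kind of comparison described above, using scikit-learn on a synthetic imbalanced data set: logistic regression versus a shallow decision tree, evaluated with the F-measure and the area under the ROC curve, with the tree's rules printed as a readable surrogate for interpretability. The data and settings are stand-ins, not the home-care EHR data, and Ripper is omitted because it is not part of scikit-learn.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score, roc_auc_score

    # Synthetic imbalanced data standing in for the EHR outcome
    # "improvement in oral medication management".
    X, y = make_classification(n_samples=2000, n_features=15,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "logistic": LogisticRegression(max_iter=1000, class_weight="balanced"),
        "tree": DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                                       random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        prob = model.predict_proba(X_te)[:, 1]
        print(name, "F1 =", round(f1_score(y_te, pred), 3),
              "AUC =", round(roc_auc_score(y_te, prob), 3))

    # The shallow tree yields human-readable rules, one aspect of interpretability.
    print(export_text(models["tree"]))
    ```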

  12. A mathematical model for interpretable clinical decision support with applications in gynecology.

    Directory of Open Access Journals (Sweden)

    Vanya M C A Van Belle

    Full Text Available Over time, methods for the development of clinical decision support (CDS) systems have evolved from interpretable and easy-to-use scoring systems to very complex and non-interpretable mathematical models. In order to accomplish effective decision support, CDS systems should provide information on how the model arrives at a certain decision. To address the issue of incompatibility between performance, interpretability and applicability of CDS systems, this paper proposes an innovative model structure, automatically leading to interpretable and easily applicable models. The resulting models can be used to guide clinicians when deciding upon the appropriate treatment, estimating patient-specific risks and to improve communication with patients. We propose the interval coded scoring (ICS) system, which imposes that the effect of each variable on the estimated risk is constant within consecutive intervals. The number and position of the intervals are automatically obtained by solving an optimization problem, which additionally performs variable selection. The resulting model can be visualised by means of appealing scoring tables and color bars. ICS models can be used within software packages, in smartphone applications, or on paper, which is particularly useful for bedside medicine and home-monitoring. The ICS approach is illustrated on two gynecological problems: diagnosis of malignancy of ovarian tumors using a dataset containing 3,511 patients, and prediction of first trimester viability of pregnancies using a dataset of 1,435 women. Comparison of the performance of the ICS approach with a range of prediction models proposed in the literature illustrates the ability of ICS to combine optimal performance with the interpretability of simple scoring systems. The ICS approach can improve patient-clinician communication and will provide additional insights in the importance and influence of available variables. Future challenges include extensions of the

  13. Enabling Real-time Water Decision Support Services Using Model as a Service

    Science.gov (United States)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at the University of Texas at Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application have been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.
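
    From a client's point of view, invoking such a published workflow service reduces to an HTTP request. The sketch below shows the general pattern with the requests library; the endpoint URL and payload fields are hypothetical and do not correspond to the actual RAPID service interface.

    ```python
    import requests

    # Hypothetical endpoint and payload fields, used only to illustrate the
    # "model as a service" call pattern; the real service defines its own API.
    SERVICE_URL = "https://example.org/workflows/rapid-routing/run"

    payload = {
        "basin": "San Antonio-Guadalupe",
        "start": "2014-06-01",
        "end": "2014-06-07",
        "runoff_source": "land-surface-model",
    }
    response = requests.post(SERVICE_URL, json=payload, timeout=300)
    response.raise_for_status()
    result = response.json()             # e.g., links to discharge time series
    print(result.get("status"), result.get("output_url"))
    ```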

  14. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these six models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (four model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe, with the random intercepts of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe), including the random intercepts of the lines and the GK method, had important savings in computing time as compared with the multi-environment G×E interaction models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
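
    To make the two kernel methods concrete, the sketch below builds a linear (GBLUP-type) relationship matrix and a Gaussian kernel from a marker matrix. The Gaussian kernel uses a bandwidth h on median-scaled squared Euclidean distances, a common convention in this literature; the exact scaling and the random marker data here are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def gblup_kernel(M):
        """Linear (GBLUP-type) genomic relationship kernel from a marker matrix M
        (lines x markers), scaled by the number of markers."""
        Z = M - M.mean(axis=0)                 # centre marker codes
        return Z @ Z.T / M.shape[1]

    def gaussian_kernel(M, h=1.0):
        """Gaussian kernel on median-scaled squared Euclidean distances."""
        D2 = squareform(pdist(M, metric="sqeuclidean"))
        return np.exp(-h * D2 / np.median(D2[D2 > 0]))

    # Toy marker matrix (0/1/2 codes); real data sets have thousands of markers.
    rng = np.random.default_rng(0)
    M = rng.integers(0, 3, size=(20, 200)).astype(float)
    K_gb, K_gk = gblup_kernel(M), gaussian_kernel(M)
    ```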

  15. An approach to the interpretation of backpropagation neural network models in QSAR studies.

    Science.gov (United States)

    Baskin, I I; Ait, A O; Halberstam, N M; Palyulin, V A; Zefirov, N S

    2002-03-01

    An approach to the interpretation of backpropagation neural network models for quantitative structure-activity and structure-property relationship (QSAR/QSPR) studies is proposed. The method is based on analyzing the first and second moments of the distribution of the values of the first and the second partial derivatives of neural network outputs with respect to inputs calculated at data points. The use of such statistics makes it possible not only to obtain essentially the same characteristics as for traditional "interpretable" statistical methods, such as linear regression analysis, but also to reveal important additional information regarding the non-linear character of QSAR/QSPR relationships. The approach is illustrated by an example of interpreting a backpropagation neural network model for predicting the position of the long-wave absorption band of cyanine dyes.
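
    The core computation of the approach, the first partial derivatives of the network output with respect to the inputs evaluated at the data points, can be sketched for a one-hidden-layer network as follows. The weights and data are random placeholders, and only first-order derivatives and their first two moments are shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs, n_hidden, n_samples = 5, 8, 100

    # Random one-hidden-layer network with tanh units (weights are illustrative).
    W1, b1 = rng.normal(size=(n_inputs, n_hidden)), rng.normal(size=n_hidden)
    w2, b2 = rng.normal(size=n_hidden), rng.normal()
    X = rng.normal(size=(n_samples, n_inputs))

    def output(X):
        return np.tanh(X @ W1 + b1) @ w2 + b2

    def d_output_d_inputs(X):
        """Analytic first derivatives dy/dx_i at each data point."""
        H = np.tanh(X @ W1 + b1)                  # hidden activations
        return ((1.0 - H ** 2) * w2) @ W1.T       # shape (n_samples, n_inputs)

    grads = d_output_d_inputs(X)
    # First and second moments of the derivative distribution, as used for
    # judging the relative influence and non-linearity of each input.
    print("mean dy/dx per input:", grads.mean(axis=0))
    print("std  dy/dx per input:", grads.std(axis=0))
    ```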

  16. Dream interpretation, affect, and the theory of neuronal group selection: Freud, Winnicott, Bion, and Modell.

    Science.gov (United States)

    Shields, Walker

    2006-12-01

    The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.

  17. Modeling sports highlights using a time-series clustering framework and model interpretation

    Science.gov (United States)

    Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay

    2005-01-01

    In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events against a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which uses only audience cheering as the key highlight class.
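
    A simplified sketch of the mixture-modelling step, using scikit-learn's GaussianMixture with the Bayesian information criterion as an MDL-style stand-in for selecting the number of components; the feature matrix is random and merely plays the role of low-level audio features from a key audio class.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Random stand-in for low-level audio feature frames of the "highlight" class;
    # real features would come from labelled sports audio.
    rng = np.random.default_rng(0)
    features = np.vstack([rng.normal(0, 1, size=(300, 12)),
                          rng.normal(3, 1, size=(200, 12))])

    # Choose the number of mixture components with BIC, used here as an
    # MDL-style model selection criterion (the paper uses an MDL-GMM).
    best_model, best_bic = None, np.inf
    for k in range(1, 8):
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=0).fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_model, best_bic = gmm, bic

    print("selected components:", best_model.n_components)
    print("mixture weights:", np.round(best_model.weights_, 2))
    ```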

  18. SUV navigator enables rapid [18F]-FDG PET/CT image interpretation compared with 2D ROI and 3D VOI evaluations

    International Nuclear Information System (INIS)

    Okizaki, Atsutaka; Nakayama, Michihiro; Ishitoya, Shunta; Nakajima, Kaori; Yamashina, Masaaki; Aburano, Tamio; Takahashi, Koji

    2017-01-01

    Positron emission tomography (PET) with the maximum standardized uptake value (SUVmax) is a useful technique for assessing malignant tumors. Measurements of SUVmax in multiple lesions per patient frequently require many time-consuming procedures. To address this issue, we designed a novel interface named SUV Navigator (SUVnavi), and the purpose of this study was to investigate its utility. We measured SUVmax in 661 lesions from 100 patients with malignant tumors. Diagnoses and SUVmax measurements were made with the SUVnavi, 2D, and 3D methods. SUV measurement accuracy for each method was also evaluated. The average reduction in time with SUVnavi was 53.8% versus 2D and 37.5% versus 3D; the time required with SUVnavi was significantly shorter than with 2D and 3D (P < 0.001 and P < 0.001, respectively). The time reduction and lesion number had a positive correlation (P < 0.001 and P < 0.001, respectively). SUVmax agreed with the precise SUVmax in all lesions measured with SUVnavi and 3D, but in only 466 of 661 lesions (70.5%) measured with 2D. In conclusion, SUVnavi may be useful for rapid [18F]-fluorodeoxyglucose positron emission tomography/computed tomography ([18F]-FDG PET/CT) image interpretation without reducing the accuracy of SUVmax measurement. (author)

  19. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied on subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol

  20. Kinetic Modeling of Accelerated Stability Testing Enabled by Second Harmonic Generation Microscopy.

    Science.gov (United States)

    Song, Zhengtian; Sarkar, Sreya; Vogt, Andrew D; Danzer, Gerald D; Smith, Casey J; Gualtieri, Ellen J; Simpson, Garth J

    2018-04-03

    The low limits of detection afforded by second harmonic generation (SHG) microscopy coupled with image analysis algorithms enabled quantitative modeling of the temperature-dependent crystallization of active pharmaceutical ingredients (APIs) within amorphous solid dispersions (ASDs). ASDs, in which an API is maintained in an amorphous state within a polymer matrix, are finding increasing use to address solubility limitations of small-molecule APIs. Extensive stability testing is typically performed for ASD characterization, the time frame for which is often dictated by the earliest detectable onset of crystal formation. Here, a study of accelerated stability testing on ritonavir, a human immunodeficiency virus (HIV) protease inhibitor, has been conducted. Under accelerated stability testing conditions of 50 °C/75% RH and 40 °C/75% RH, ritonavir crystallization kinetics from amorphous solid dispersions were monitored by SHG microscopy. SHG microscopy coupled with image analysis yielded limits of detection for ritonavir crystals as low as 10 ppm, which is about 2 orders of magnitude lower than other methods currently available for crystallinity detection in ASDs. The four-decade dynamic range of SHG microscopy enabled quantitative modeling with an established Johnson-Mehl-Avrami-Kolmogorov (JMAK) kinetic model. From the SHG images, nucleation and crystal growth rates were independently determined.
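
    As an illustration of the kinetic modelling step, the sketch below fits the JMAK (Avrami) expression X(t) = 1 - exp(-(kt)^n) to crystallized-fraction data with scipy; the time points and fractions are synthetic, standing in for the ppm-level crystallinity quantified from SHG images.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def jmak(t, k, n):
        """JMAK (Avrami) crystallized fraction: X(t) = 1 - exp(-(k t)^n)."""
        return 1.0 - np.exp(-(k * t) ** n)

    # Synthetic crystallinity fractions versus time (h), standing in for the
    # fractions quantified from SHG images during accelerated stability testing.
    t = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])
    X = jmak(t, k=0.05, n=1.8) + np.random.default_rng(0).normal(0, 0.002, t.size)

    (k_fit, n_fit), _ = curve_fit(jmak, t, X, p0=[0.1, 1.0])
    print(f"rate constant k = {k_fit:.3f} 1/h, Avrami exponent n = {n_fit:.2f}")
    ```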

  1. Ames Culture Chamber System: Enabling Model Organism Research Aboard the international Space Station

    Science.gov (United States)

    Steele, Marianne

    2014-01-01

    Understanding the genetic, physiological, and behavioral effects of spaceflight on living organisms and elucidating the molecular mechanisms that underlie these effects are high priorities for NASA. Certain organisms, known as model organisms, are widely studied to help researchers better understand how all biological systems function. Small model organisms such as nematodes, slime mold, bacteria, green algae, yeast, and moss can be used to study the effects of micro- and reduced gravity at both the cellular and systems level over multiple generations. Many model organisms have sequenced genomes and published data sets on their transcriptomes and proteomes that enable scientific investigations of the molecular mechanisms underlying the adaptations of these organisms to space flight.

  2. Estimation and interpretation of genetic effects with epistasis using the NOIA model.

    Science.gov (United States)

    Alvarez-Castro, José M; Carlborg, Orjan; Rönnegård, Lars

    2012-01-01

    We introduce this communication with a brief outline of the historical landmarks in genetic modeling, especially concerning epistasis. Then, we present methods for the use of genetic modeling in QTL analyses. In particular, we summarize the essential expressions of the natural and orthogonal interactions (NOIA) model of genetic effects. Our motivation for reviewing that theory here is twofold. First, this review presents a digest of the expressions for the application of the NOIA model, which are often mixed with intermediate and additional formulae in the original articles. Second, we make the required theory handy for the reader to relate the genetic concepts to the particular mathematical expressions underlying them. We illustrate those relations by providing graphical interpretations and a diagram summarizing the key features for applying genetic modeling with epistasis in comprehensive QTL analyses. Finally, we briefly review some examples of the application of NOIA to real data and the way it improves the interpretability of the results.

  3. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large amount of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allows for an entire software package along with all dependences to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that have taken weeks and months can now be performed in minutes of time. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  4. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Science.gov (United States)

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.

  5. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Massaine Bandeira e Sousa

    2017-06-01

    Full Text Available Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.

  6. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
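
    A rough, non-Bayesian sketch of the linear-versus-kernel contrast described above: ridge regression on marker effects compared with RBF kernel ridge regression (a simple stand-in for RKHS regression), scored by the correlation between observed and cross-validated predicted values. The synthetic marker data only mimic the shape of the CIMMYT wheat set; none of the Bayesian implementations of the study are reproduced.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    # Synthetic lines x markers data with a mildly non-linear genetic signal;
    # a stand-in for the 306 CIMMYT wheat lines with 1717 DArT markers.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(300, 500)).astype(float)
    y = X[:, :20].sum(axis=1) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 1.0, 300)

    models = {
        "linear ridge (marker effects)": Ridge(alpha=10.0),
        "RBF kernel ridge (RKHS-like)": KernelRidge(kernel="rbf", alpha=1.0,
                                                    gamma=1.0 / X.shape[1]),
    }
    for name, model in models.items():
        pred = cross_val_predict(model, X, y, cv=5)
        acc = np.corrcoef(y, pred)[0, 1]      # predictive correlation
        print(f"{name}: r = {acc:.2f}")
    ```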

  7. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization

    Science.gov (United States)

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-01-01

    Motivation: Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given, using specific implementations the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

  8. Enhancing CIDOC-CRM and compatible models with the concept of multiple interpretation

    Science.gov (United States)

    Van Ruymbeke, M.; Hallot, P.; Billen, R.

    2017-08-01

    Modelling cultural heritage and archaeological objects is used as much for management as for research purposes. To ensure the sustainable benefit of digital data, models benefit from taking the data specificities of historical and archaeological domains into account. Starting from a conceptual model tailored to storing these specificities, we present, in this paper, an extended mapping to CIDOC-CRM and its compatible models. Offering an ideal framework to structure and highlight the best modelling practices, these ontologies are essentially dedicated to storing semantic data which provides information about cultural heritage objects. Based on this standard, our proposal focuses on multiple interpretation and sequential reality.

  9. Risk of the Maritime Supply Chain System Based on Interpretative Structural Model

    Directory of Open Access Journals (Sweden)

    Jiang He

    2017-11-01

    Full Text Available Marine transportation is the most important transport mode in international trade, but the maritime supply chain faces many risks. At present, most research on maritime supply chain risk focuses on risk identification and risk management, and rarely carries out a quantitative analysis of the logical structure of each influencing factor. This paper uses the interpretative structural model to analyse the maritime supply chain risk system. On the basis of a comprehensive literature analysis and expert opinion, the paper puts forward 16 factors of the maritime supply chain risk system. The interpretative structural model is used to construct the maritime supply chain risk system, and the model is then optimized. The model reveals the structure of the maritime supply chain risk system and its forming process, provides a scientific basis for controlling maritime supply chain risk, and puts forward corresponding suggestions for its prevention and control.
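
    The two core computations of interpretative structural modelling, deriving the reachability matrix by Boolean transitive closure of the factor adjacency matrix and then partitioning the factors into levels, can be sketched as follows; the 5-factor adjacency matrix is illustrative and unrelated to the 16 maritime risk factors of the paper.

    ```python
    import numpy as np

    def reachability(A):
        """Boolean transitive closure of the adjacency matrix (with self-loops),
        i.e. the final reachability matrix used in ISM."""
        R = ((A + np.eye(len(A))) > 0).astype(int)
        while True:
            R_next = ((R @ R) > 0).astype(int)
            if np.array_equal(R_next, R):
                return R
            R = R_next

    def level_partition(R):
        """Iterative ISM level partitioning: a factor joins the current level when
        its reachability set is contained in its antecedent set."""
        remaining, levels = set(range(len(R))), []
        while remaining:
            level = [i for i in remaining
                     if {j for j in remaining if R[i, j]} <=
                        {j for j in remaining if R[j, i]}]
            levels.append(level)
            remaining -= set(level)
        return levels

    # Illustrative 5-factor adjacency matrix (1 = row factor influences column).
    A = np.array([[0, 1, 0, 0, 1],
                  [0, 0, 1, 0, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0],
                  [0, 0, 1, 0, 0]])
    R = reachability(A)
    print("levels (top level first):", level_partition(R))
    ```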

  10. Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'

    Science.gov (United States)

    Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno

    2015-04-01

    Interoperability is a prerequisite for partners involved in a collaboration. As a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach that allows specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process prior to process implementation. To enable the verification of these interoperability requirements, it is necessary first and foremost to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. Afterwards, a verification technique must be introduced, and model checking is the preferred option herein. This paper focuses on application of the model checker UPPAAL in order to verify interoperability requirements for the given collaborative process model. First, this step entails translating the collaborative process model from BPMN into the UPPAAL modelling language called 'Network of Timed Automata'. Second, it becomes necessary to formalise interoperability requirements into properties with the dedicated UPPAAL language, i.e. the temporal logic TCTL.

  11. Spatial Interpretation of Tower, Chamber and Modelled Terrestrial Fluxes in a Tropical Forest Plantation

    Science.gov (United States)

    Whidden, E.; Roulet, N.

    2003-04-01

    Interpretation of a site average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurement may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site average flux are investigated for an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiologic controls. A diffusion model is used to compare tower, chamber, and model data by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint-averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Agreement between methods improves with spatial interpretation, showing its importance for estimating a site average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.
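
    The footprint weighting mentioned above amounts to a weighted average of patch-level fluxes, as in the short sketch below; the fluxes and weights are placeholders, since in practice the weights for each averaging period would come from the diffusion (footprint) model.

    ```python
    import numpy as np

    # Fluxes measured or modelled for surface patches within the tower footprint
    # (umol CO2 m-2 s-1); values and weights are placeholders, and the weights
    # would be supplied by a footprint / diffusion model for each period.
    patch_flux = np.array([2.1, 3.4, 1.2, 4.0])      # e.g., chamber or model output
    footprint_weight = np.array([0.40, 0.25, 0.20, 0.15])

    footprint_flux = np.average(patch_flux, weights=footprint_weight)
    print(f"footprint-weighted flux: {footprint_flux:.2f} umol m-2 s-1")
    ```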

  12. Modelling research on determining shape coefficients for subdivision interpretation in γ-ray spectral logging

    International Nuclear Information System (INIS)

    Yin Wangming; She Guanjun; Tang Bin

    2011-01-01

    This paper first describes the physical meaning of the shape coefficients used in the subdivision interpretation of γ-ray logging; it then discusses the theory and method for determining practical shape coefficients with a logging model, and defines a formula for approximately calculating the coefficients. A great deal of experimental work has been performed with an HPGe γ-ray spectrometer, yielding satisfactory results that validate the efficiency of the modelling method. (authors)

  13. Applying total interpretive structural modeling to study factors affecting construction labour productivity

    Directory of Open Access Journals (Sweden)

    Sayali Shrikrishna Sandbhor

    2014-03-01

    Full Text Available The construction sector has always been dependent on manpower, and most of the activities carried out on any construction site are labour intensive. Since the productivity of any project depends directly on the productivity of labour, it is a prime responsibility of the employer to enhance labour productivity. Measures to improve it depend on analysis of the positive and negative factors affecting productivity, with major attention given to factors that decrease labour productivity; factor analysis is thus an integral part of any study aiming to improve productivity. Interpretive structural modeling is a methodology for identifying and summarizing relationships among factors which define an issue or problem, and it provides a means to arrange the factors in order of their complexity. This study uses the latest version of interpretive structural modeling, i.e. total interpretive structural modeling, to analyze factors negatively affecting construction labour productivity. It establishes interpretive relationships among these factors, facilitating improvement in the overall productivity of a construction site.
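
    For readers unfamiliar with the (total) interpretive structural modeling workflow, the central computational step is the transitively closed reachability matrix derived from expert pairwise judgements. The sketch below is a minimal illustration with a hypothetical 4-factor direct-relation matrix, not the matrix from the study.

    ```python
    import numpy as np

    # Hypothetical structural self-interaction data for 4 factors, as a binary
    # direct-relation matrix A (A[i, j] = 1 if factor i influences factor j).
    A = np.array([[0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])

    # Initial reachability matrix includes self-reachability.
    M = (A + np.eye(4, dtype=int)) > 0

    # Enforce transitivity (Warshall-style closure), the core (T)ISM step.
    for k in range(4):
        M = M | (M[:, [k]] & M[[k], :])

    print(M.astype(int))
    ```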

  14. Antagonism and Mutual Dependency. Critical Models of Performance and “Piano Interpretation Schools”

    Directory of Open Access Journals (Sweden)

    Rui Cruz

    2011-12-01

    Full Text Available The aim of this paper is to polarize and, coincidentally, intersect two different concepts, in terms of a distinction/analogy between “piano interpretation schools” and “critical models”. The former, with its prior connotations of both empiricism and dogmatism, and not directly shaped by aesthetic criteria or interpretational ideals, depends mainly on the aural and oral tradition as well as the teacher-student legacy; the latter ideally employs the generic criteria of interpretativeness, which can be measured in accordance with an aesthetic formula and can include features such as non-obviousness, inferentiality, lack of consensus, concern with meaning or significance, concern with structure or design, etc. The relative autonomy of the former is a challenge to the latter, which embraces the range of perspectives available in the horizon of the history of ideas about music and interpretation. The effort of recognizing models of criticism within musical interpretation creates a vehicle for new understandings of the nature and historical development of Western classical piano performance, also promoting the production of quality critical argument and the communication of key performance tendencies and styles.

  15. Supporting interpretation of dynamic simulation. Application to chemical kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, B

    1998-04-22

    Numerous scientific and technical domains make constant use of dynamic simulations, and such simulators are being put in the hands of a growing number of users. This phenomenon is due both to the extraordinary increase in computing performance and to better graphical user interfaces, which make simulation models easy to operate. But simulators are still computer programs that produce series of numbers from other series of numbers, even if those numbers are displayed graphically. This thesis presents new interaction paradigms between a dynamic simulator and its user. The simulator produces its own interpretation of its results, thanks to a dedicated object-based representation of its domain: it highlights dominant cyclic mechanisms identified by estimates of their instantaneous loop gains, uses a notion of episodes to split the simulation into homogeneous time intervals, and complements this with animations that rely on the graphical structure of the system. These new approaches are demonstrated with examples from chemical kinetics, because of the energetic and exemplary character of the behaviours encountered. They are implemented in the Spike software (Software Platform for Interactive Chemical Kinetics Experiments). Similar concepts are also shown in two other domains: interpretation of seismic wave propagation, and simulation of large projects. (author) 95 refs.
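
    The episode idea can be illustrated with a minimal sketch (not the Spike implementation): a simulated trajectory is split into homogeneous intervals wherever the monotonicity of a monitored variable changes; the trajectory below is synthetic.

    ```python
    import numpy as np

    # Synthetic "species concentration" trajectory standing in for simulator output.
    t = np.linspace(0.0, 10.0, 501)
    c = np.exp(-t) + 0.3 * np.sin(2.0 * t)

    # Episode boundaries: indices where the sign of the time derivative changes,
    # i.e. where the monotonic behaviour of the variable switches.
    sign = np.sign(np.gradient(c, t))
    breaks = np.where(np.diff(sign) != 0)[0] + 1

    episodes = np.split(np.arange(t.size), breaks)
    for k, idx in enumerate(episodes):
        print(f"episode {k}: t = {t[idx[0]]:.2f} .. {t[idx[-1]]:.2f}")
    ```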

  16. [Interpretative method as a synthesis of explicative, teleologic and analogic models].

    Science.gov (United States)

    Yáñez Cortés, R

    1980-06-01

    To establish the basis of the interpretative method is congruous with finding a solid basis--epistemologically speaking--for analytic theory. This basis would be the means to transform the theory into a real science, with the necessary adequation among method, act and object of knowledge. It is only from a scientific standpoint that psychoanalytic theory will be able to face successfully the reductionisms that menace it, be it biologistic naturalism, with its explanations of psychic phenomena by means of mechanisms and biologic models, or speculative ideologies, with their nucleus of technical praxis which make it impossible for the social-factic sciences to become real sciences. We propose as the interpretative method the union of two models: the teleologic one, which makes possible the appearance of intelligible, contingent and variable explanations between an antecedent and a consequent, and the analogic model, with its two moments, the comparative and the symbolic. These moments make possible the comparison and the union between antecedent and consequent, bearing in mind the "natural" ambiguity of the subject-object in question. The principal objective of the method--as a regulative idea in the Kantian sense--would be the search for univocity as regards the choice of one and only one sense, from all the possible senses, that "explains" the motive relationship or motive-end relationship, in order to make the interpretation scientific. This status of scientificity should obey the rules of explanation: that the interpretations be derived effectively from the presupposed theory, that they really explain what they claim to explain, and that they are not contradictory or contrary on the same ontologic level. We postulate that the synthesis of the two mentioned models, the teleologic-explanative and the analogic one, allows us to find a possibility to make clear the "dark" sense of the noun interpretation and in this way the feasibility of

  17. Statistical analysis of road-vehicle-driver interaction as an enabler to designing behavioural models

    International Nuclear Information System (INIS)

    Chakravarty, T; Chowdhury, A; Ghose, A; Bhaumik, C; Balamuralidhar, P

    2014-01-01

    Telematics is an important technology enabler for intelligent transportation systems. By deploying on-board diagnostic devices, the signatures of vehicle vibration, along with location and time, are recorded. Detailed analyses of the collected signatures offer deep insights into the state of the objects under study. Towards that objective, we carried out experiments by deploying a telematics device in one of the office buses that ferries employees to the office and back. Data were collected from a 3-axis accelerometer and GPS, together with speed and time, for all journeys. In this paper, we present initial results of this exercise, applying statistical methods to derive information through systematic analysis of the data collected over four months. It is demonstrated that the higher-order derivative of the measured Z-axis acceleration samples displays the properties of a Weibull distribution when the time axis is replaced by the amplitude of the processed acceleration data. This observation offers a method to predict future behaviour, where deviations from the prediction are classified as context-based aberrations or progressive degradation of the system. In addition, we capture the relationship between vehicle speed and the median of the jerk-energy samples using regression analysis. Such results offer an opportunity to develop a robust method to model road-vehicle interaction, thereby enabling prediction of driving behaviour, condition-based maintenance, etc.
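
    A minimal sketch of the two statistical steps described above, using synthetic data in place of the recorded accelerometer signals: a Weibull fit to jerk-like samples and a linear regression of median jerk energy on speed.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical processed-acceleration (jerk-like) samples standing in for the
    # recorded Z-axis data; the paper reports these display Weibull-like behaviour.
    jerk = 2.5 * rng.weibull(1.8, size=2000)

    # Fit a two-parameter Weibull (location fixed at zero).
    shape, loc, scale = stats.weibull_min.fit(jerk, floc=0)
    print(f"Weibull fit: shape={shape:.2f}, scale={scale:.2f}")

    # Hypothetical speed vs. median jerk-energy relationship via linear regression.
    speed = rng.uniform(10.0, 80.0, size=200)                      # km/h
    median_jerk_energy = 0.05 * speed + rng.normal(0.0, 0.4, 200)  # arbitrary units
    slope, intercept, r, p, stderr = stats.linregress(speed, median_jerk_energy)
    print(f"regression: slope={slope:.3f}, r^2={r**2:.2f}")
    ```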

  18. Crossing statistic: Bayesian interpretation, model selection and resolving dark energy parametrization problem

    International Nuclear Information System (INIS)

    Shafieloo, Arman

    2012-01-01

    By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistics [1] can be used trivially for the purpose of model selection among cosmological models. In this approach, to falsify a cosmological model there is no need to compare it with other models or to assume any particular form of parametrization for cosmological quantities such as the luminosity distance, the Hubble parameter or the equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without putting priors on the underlying actual model of the universe and its parameters, hence the issue of dark energy parametrization is resolved. It is also shown that the sensitivity of the method to the intrinsic dispersion of the data is small, which is another important characteristic of the method when testing cosmological models against data with high uncertainties.

  19. A Novel Experimental and Modelling Strategy for Nanoparticle Toxicity Testing Enabling the Use of Small Quantities

    Directory of Open Access Journals (Sweden)

    Marinda van Pomeren

    2017-11-01

    Full Text Available Metallic nanoparticles (NPs) differ from other metal forms with respect to their large surface-to-volume ratio and the resulting inherent reactivity. Each new modification to a nanoparticle alters the surface-to-volume ratio, the fate and subsequently the toxicity of the particle. Newly engineered NPs are commonly available only in low quantities, whereas rather large amounts are generally needed for fate characterization and effect studies. This challenge is especially relevant for those NPs that combine low inherent toxicity with low bioavailability. Therefore, within our study, we developed new testing strategies that enable working with low quantities of NPs. The experimental testing method was tailor-made for NPs, and we also developed translational models based on different dose metrics, allowing dose-response predictions to be determined for NPs. Both the experimental method and the predictive models were verified on the basis of experimental effect data collected using zebrafish embryos exposed to metallic NPs of different chemical compositions and shapes. It was found that the variance in the effect data in the dose-response predictions was best explained by the minimal diameter of the NPs, and the data confirmed that the predictive model is widely applicable to soluble metallic NPs. The experimental and modelling approach developed in our study supports the development of (eco)toxicity assays tailored to nano-specific features.
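
    A hedged sketch of the dose-response idea: a two-parameter log-logistic curve is fitted to hypothetical embryo effect data expressed against a minimal-diameter-based exposure metric. The values and the curve form are illustrative, not those of the published model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical effect data: fraction of affected zebrafish embryos versus an
    # exposure metric (here imagined as re-expressed via the minimal NP diameter,
    # the metric the study found to explain the variance best).
    dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # arbitrary metric units
    effect = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97])

    def log_logistic(d, ec50, slope):
        """Two-parameter log-logistic dose-response curve bounded in [0, 1]."""
        return 1.0 / (1.0 + (ec50 / d) ** slope)

    (ec50, slope), _ = curve_fit(log_logistic, dose, effect, p0=[1.0, 1.0])
    print(f"EC50 = {ec50:.2f}, slope = {slope:.2f}")
    ```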

  20. Modeling of RFID-Enabled Real-Time Manufacturing Execution System in Mixed-Model Assembly Lines

    Directory of Open Access Journals (Sweden)

    Zhixin Yang

    2015-01-01

    Full Text Available To respond quickly to diverse product demands, mixed-model assembly lines are widely adopted in discrete manufacturing industries. Besides the complexity of material distribution, mixed-model assembly involves a variety of components, different process plans and fast production changes, which greatly increase the difficulty of agile production management. Aiming at breaking through the bottlenecks in existing production management, a novel RFID-enabled manufacturing execution system (MES), featuring real-time and wireless information interaction capability, is proposed to identify various manufacturing objects, including WIPs, tools and operators, and to trace their movements throughout the production processes. However, subject to constraints in terms of safety stock, machine assignment, setup and scheduling requirements, the optimization of the RFID-enabled MES model for production planning and scheduling is an NP-hard problem. A new heuristic generalized Lagrangian decomposition approach is proposed for model optimization, which decomposes the model into three subproblems: computation of the optimal configuration of RFID sensor networks, optimization of production planning subject to machine setup cost and safety stock constraints, and optimization of scheduling for minimized overtime. RFID signal processing methods that can handle unreliable, redundant and missing tag events are also described in detail. The model validity is discussed through algorithm analysis and verified through numerical simulation. The proposed design scheme has important reference value for applications of RFID in multiple manufacturing fields, and also lays a research foundation for leveraging digital and networked manufacturing systems towards intelligence.

  1. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    Science.gov (United States)

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models generally achieve better prediction accuracy than Free-Wilson analysis. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures; for most of the studied data sets, these contributions show a significant correlation with those of a corresponding Free-Wilson analysis. These results suggest that the R-group contributions can be used to interpret bioactivity data and highlight that the R-group signature-based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
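
    A simplified sketch of the idea (not the authors' implementation): compounds sharing a scaffold are encoded by binary R-group descriptors standing in for R-group signatures, an SVM is trained on them, and per-descriptor contributions are read off. With a linear kernel the contributions are simply the model weights, analogous to Free-Wilson substituent contributions; all data below are hypothetical.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    # Hypothetical data: 6 compounds on a common scaffold, encoded by binary
    # indicators of which R-groups they carry (a stand-in for R-group signatures).
    X = np.array([[1, 0, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
    y = np.array([6.2, 5.8, 7.1, 6.9, 6.5, 6.0])   # hypothetical pIC50 values

    model = SVR(kernel="linear", C=1.0).fit(X, y)

    # For a linear kernel the per-descriptor contributions are the model weights.
    for i, w in enumerate(model.coef_.ravel()):
        print(f"R-group descriptor {i}: contribution {w:+.2f}")
    ```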

  2. RWater - A Novel Cyber-enabled Data-driven Educational Tool for Interpreting and Modeling Hydrologic Processes

    Science.gov (United States)

    Rajib, M. A.; Merwade, V.; Zhao, L.; Song, C.

    2014-12-01

    Explaining the complex cause-and-effect relationships in the hydrologic cycle can be challenging in a classroom that relies on traditional teaching approaches. With observed rainfall, streamflow and other hydrology data available on the internet, it is possible to provide students with tools to explore these relationships and enhance their learning experience. From this perspective, a new online educational tool, called RWater, was developed using Purdue University's HUBzero technology. RWater's key features are: (i) access to the R software from any Java-supported web browser; (ii) no installation of any software on the user's computer; (iii) all work and resulting data are stored in the user's working directory on the RWater server; and (iv) no prior programming experience with R is necessary. In its current version, RWater can dynamically extract streamflow data from any USGS gaging station, without any need for post-processing, for use in the educational modules. By following data-driven modules, students can write small scripts in R and thereby create visualizations to identify the effect of rainfall distribution and watershed characteristics on runoff generation, investigate the impacts of land use and climate change on streamflow, and explore changes in extreme hydrologic events at actual locations. Each module contains relevant definitions, instructions on data extraction and coding, as well as conceptual questions based on the analyses the students perform. In order to assess its suitability for classroom implementation and to evaluate users' perception of its utility, the current version of RWater has been tested with three different groups: (i) high school students; (ii) middle and high school teachers; and (iii) upper undergraduate/graduate students. The survey results from these trials suggest that RWater has the potential to improve students' understanding of various relationships in the hydrologic cycle, supporting effective dissemination of hydrology education from K-12 to the graduate level. RWater is publicly available at: https://mygeohub.org/tools/rwater.
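
    A classroom-style exercise of the kind RWater supports can be sketched as follows, here in Python with synthetic rainfall-runoff data standing in for a USGS gage record: compute monthly totals and an annual runoff ratio.

    ```python
    import numpy as np
    import pandas as pd

    # Hypothetical daily rainfall (mm) and runoff depth (mm) for one water year;
    # in RWater these data would come from a USGS gaging station.
    rng = np.random.default_rng(1)
    dates = pd.date_range("2013-10-01", periods=365, freq="D")
    rain = rng.gamma(0.4, 8.0, size=365)
    runoff = 0.35 * pd.Series(rain).rolling(5, min_periods=1).mean().to_numpy()

    df = pd.DataFrame({"rain_mm": rain, "runoff_mm": runoff}, index=dates)

    # A typical classroom exercise: monthly totals and the annual runoff ratio.
    monthly = df.resample("MS").sum()
    runoff_ratio = df["runoff_mm"].sum() / df["rain_mm"].sum()
    print(monthly.head())
    print(f"annual runoff ratio = {runoff_ratio:.2f}")
    ```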

  3. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely: (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.

  4. Interpretive Structural Model of Key Performance Indicators for Sustainable Maintenance Evaluation in Rubber Industry

    Science.gov (United States)

    Amrina, E.; Yulianto, A.

    2018-03-01

    Sustainable maintenance is a new challenge for manufacturing companies seeking to realize sustainable development. In this paper, an interpretive structural model is developed to evaluate sustainable maintenance in the rubber industry. Initial key performance indicators (KPIs) are identified from the literature and then validated by academic and industry experts. As a result, three factors (economic, social and environmental), divided into a total of thirteen indicators, are proposed as the KPIs for sustainable maintenance evaluation in the rubber industry. Interpretive structural modeling (ISM) is applied to develop a network structure model of the KPIs consisting of three levels. The results show that the economic factor is regarded as the basic factor, the social factor as the intermediate factor, and the environmental factor as the leading factor. Two indicators of the social factor, labour relationship and training and education, have both high driving and high dependence power and are therefore categorized as unstable indicators needing further attention. All indicators of the environmental factor and one indicator of the social factor are identified as the most influential indicators. It is hoped that the interpretive structural model can aid rubber companies in evaluating sustainable maintenance performance.
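
    The driving/dependence-power classification mentioned above (the MICMAC step) reduces to row and column sums of the final reachability matrix. The sketch below uses a hypothetical 5-KPI reachability matrix, not the matrix from the study.

    ```python
    import numpy as np

    # Hypothetical final reachability matrix for 5 KPIs (1 = row KPI reaches
    # column KPI, including transitive links and self-reachability).
    R = np.array([[1, 1, 1, 1, 1],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0],
                  [0, 1, 1, 1, 1]])

    driving = R.sum(axis=1)      # driving power: how many KPIs each one reaches
    dependence = R.sum(axis=0)   # dependence power: by how many KPIs each is reached

    half = R.shape[0] / 2
    for i, (d, p) in enumerate(zip(driving, dependence), start=1):
        quadrant = ("autonomous" if d <= half and p <= half else
                    "dependent" if d <= half else
                    "independent (driver)" if p <= half else
                    "linkage (unstable)")
        print(f"KPI {i}: driving={d}, dependence={p} -> {quadrant}")
    ```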

  5. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2014-01-01

    Full Text Available This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. Type curves were then developed based on this model, and parameter sensitivity was analyzed. Our research shows that the type curves have five segments with different flow regimes: (I) a wellbore storage section, (II) an intermediate (transient) flow section, (III) a mid-radial flow section, (IV) a crossflow section (from the low-permeability layer to the high-permeability layer), and (V) a systematic radial flow section. Polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the numerical solution based on flow mechanisms with observed polymer flooding field test data highlights the potential of this interpretation method in formation evaluation and enhanced oil recovery (EOR).
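
    As general background (not the authors' numerical model), flow regimes such as those listed above are commonly diagnosed on a log-log plot of the pressure change and its logarithmic (Bourdet-type) derivative; a minimal sketch with a hypothetical pressure response:

    ```python
    import numpy as np

    # Hypothetical build-up/drawdown response; real data would come from the
    # logging-tool pressure transient measurements.
    t = np.logspace(-3, 2, 200)            # elapsed time, hours
    dp = 25.0 * np.log1p(t / 0.05)         # pressure change, psi (illustrative)

    # Logarithmic pressure derivative d(dp)/d(ln t); its shape on a log-log plot
    # separates wellbore storage, crossflow and radial-flow sections.
    dpdlnt = np.gradient(dp, np.log(t))

    # During infinite-acting radial flow the derivative flattens to a plateau.
    print(f"late-time derivative plateau ~ {dpdlnt[-20:].mean():.1f} psi")
    ```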

  6. Boundary effects relevant for the string interpretation of σ-models

    International Nuclear Information System (INIS)

    Behrndt, K.; Dorn, H.

    1991-01-01

    First, a short discussion is given of the dependence of the renormalization counterterms and β-functions on the on/off boundary position for generalized σ-models on manifolds with boundary. Treating the energy-momentum tensor of such models as a two-dimensional distribution, one can show that, contrary to first impressions, this does not imply any obstruction to the string interpretation of such models. The analysis is also extended to the effect of dual loop corrections to string-induced equations of motion. (orig.)

  7. Using global magnetospheric models for simulation and interpretation of Swarm external field measurements

    DEFF Research Database (Denmark)

    Moretto, T.; Vennerstrøm, Susanne; Olsen, Nils

    2006-01-01

    simulated external contributions relevant for internal field modeling. These have proven very valuable for the design and planning of the up-coming multi-satellite Swarm mission. In addition, a real event simulation was carried out for a moderately active time interval when observations from the Orsted...... it consistently underestimates the dayside region 2 currents and overestimates the horizontal ionospheric closure currents in the dayside polar cap. Furthermore, with this example we illustrate the great benefit of utilizing the global model for the interpretation of Swarm external field observations and......, likewise, the potential of using Swarm measurements to test and improve the global model....

  8. Risk of the Maritime Supply Chain System Based on Interpretative Structural Model

    OpenAIRE

    Jiang He; Xiong Wei; Cao Yonghui

    2017-01-01

    Marine transportation is the most important transport mode in international trade, but the maritime supply chain faces many risks. At present, most research on maritime supply chain risk focuses on risk identification and risk management, and rarely carries out quantitative analysis of the logical structure of the influencing factors. This paper uses the interpretative structural model to analyse the maritime supply chain risk system. On the basis of com...

  9. Testing, Modeling, and Monitoring to Enable Simpler, Cheaper, Longer-lived Surface Caps

    International Nuclear Information System (INIS)

    Piet, S. J.; Breckenridge, R. P.; Burns, D. E.

    2003-01-01

    Society has and will continue to generate hazardous wastes whose risks must be managed. For exceptionally toxic, long-lived, and feared waste, the solution is deep burial, e.g., deep geological disposal at Yucca Mtn. For some waste, recycle or destruction/treatment is possible. The alternative for other wastes is storage at or near the ground level (in someone's back yard); most of these storage sites include a surface barrier (cap) to prevent downward water migration. Some of the hazards will persist indefinitely. As society and regulators have demanded additional proof that caps are robust against more threats and for longer time periods, the caps have become increasingly complex and expensive. As in other industries, increased complexity will eventually increase the difficulty in estimating performance, in monitoring system/component performance, and in repairing or upgrading barriers as risks are managed. An approach leading to simpler, less expensive, longer-lived, more manageable caps is needed. Our project, which started in April 2002, aims to catalyze a Barrier Improvement Cycle (iterative learning and application) and thus enable Remediation System Performance Management (doing the right maintenance neither too early nor too late). The knowledge gained and the capabilities built will help verify the adequacy of past remedial decisions, improve barrier management, and enable improved solutions for future decisions. We believe it will be possible to develop simpler, longer-lived, less expensive caps that are easier to monitor, manage, and repair. The project is planned to: (a) improve the knowledge of degradation mechanisms in times shorter than service life; (b) improve modeling of barrier degradation dynamics; (c) develop sensor systems to identify early degradation; and (d) provide a better basis for developing and testing of new barrier systems. This project combines selected exploratory studies (benchtop and field scale), coupled effects accelerated

  10. Structural interpretation of El Hierro (Canary Islands) rifts system from gravity inversion modelling

    Science.gov (United States)

    Sainz-Maza, S.; Montesinos, F. G.; Martí, J.; Arnoso, J.; Calvo, M.; Borreguero, A.

    2017-08-01

    Recent volcanism on El Hierro Island is mostly concentrated along three elongated and narrow zones which converge at the center of the island. These zones of extensive volcanism have been identified as rift zones. The presence of similar structures is common in many volcanic oceanic islands, so understanding their origin, dynamics and structure is important for conducting hazard assessment in such environments. There is still no consensus on the origin of the El Hierro rift zones, which have been associated with mantle uplift or interpreted as resulting from gravitational spreading and flank instability. To further understand the internal structure and origin of the El Hierro rift systems, and starting from previous gravity studies, we developed a new 3D gravity inversion model for its shallower layers, gathering a detailed picture of this part of the island which has permitted a new interpretation of these rifts. Previous models had already identified a main central magma accumulation zone and several shallower high-density bodies. The new model allows a better resolution of the pathways that connect both levels and the surface. Our results do not point to any correspondence between the upper parts of these pathways and the rifts identified at the surface. No clear evidence of progression toward deeper parts of the volcanic system is shown, so we interpret them as very shallow structures, probably originating from local extensional stresses derived from gravitational loading and flank instability, which are exploited to facilitate the lateral transport of magma when it arrives close to the surface.

  11. Radar imaging of glaciovolcanic stratigraphy, Mount Wrangell caldera, Alaska - Interpretation model and results

    Science.gov (United States)

    Clarke, Garry K. C.; Cross, Guy M.; Benson, Carl S.

    1989-01-01

    Glaciological measurements and an airborne radar sounding survey of the glacier lying in Mount Wrangell caldera raise many questions concerning the glacier thermal regime and volcanic history of Mount Wrangell. An interpretation model has been developed that allows the depth variation of temperature, heat flux, pressure, density, ice velocity, depositional age, and thermal and dielectric properties to be calculated. Some predictions of the interpretation model are that the basal ice melting rate is 0.64 m/yr and the volcanic heat flux is 7.0 W/sq m. By using the interpretation model to calculate two-way travel time and propagation losses, radar sounding traces can be transformed to give estimates of the variation of power reflection coefficient as a function of depth and depositional age. Prominent internal reflecting zones are located at depths of approximately 59-91m, 150m, 203m, and 230m. These internal reflectors are attributed to buried horizons of acidic ice, possibly intermixed with volcanic ash, that were deposited during past eruptions of Mount Wrangell.

  12. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    Science.gov (United States)

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on the parameters of the marginal model of the mixed effect model with the Box-Cox transformation, based on an asymptotic theory approach. We also provided a robust variance estimator for the maximum likelihood estimator of the parameters of this model in consideration of model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion, in the context of mixed-effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
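
    The back-transformation idea behind the model median can be sketched as follows; the transformation parameter and group means are hypothetical, and this is not the authors' exact estimator or its variance calculation.

    ```python
    import numpy as np
    from scipy.special import inv_boxcox

    # With responses modelled on a Box-Cox scale, the model median on the original
    # scale is the inverse Box-Cox transform of the linear predictor (illustration).
    lam = 0.3                        # hypothetical Box-Cox transformation parameter
    mu_treat, mu_control = 4.2, 3.6  # hypothetical group means on the transformed scale

    median_treat = inv_boxcox(mu_treat, lam)
    median_control = inv_boxcox(mu_control, lam)
    print(f"model medians: treatment={median_treat:.1f}, control={median_control:.1f}")
    print(f"median difference = {median_treat - median_control:.1f}")
    ```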

  13. Analysis of bias in groundwater modelling due to the interpretation of site characterization data

    International Nuclear Information System (INIS)

    Clark, K.J.; Impey, M.D.; Ikeda, T.; McEwen, T.; White, M.

    1997-01-01

    Bias is a difference between model and reality. Bias can be introduced at any stage of the modelling process during a site characterization or performance assessment program. It is desirable to understand such bias so as to be able to optimally design and interpret a site characterization program. The objective of this study was to examine the source and effect of bias due to the assumptions modellers have to make because reality cannot be fully characterized in the prediction of groundwater fluxes. A well-defined synthetic reality was therefore constructed for this study. A limited subset of these data were independently interpreted and used to compute groundwater fluxes across specified boundaries in a cross section. The modelling results were compared to the true solutions derived using the full dataset. This study clarified and identified the large number of assumptions and judgments which have to be made when modelling a limited site characterization dataset. It is concluded that bias is introduced at each modelling stage, and that it is not necessarily detectable by the modellers even if multiple runs with varied parameter values are undertaken

  14. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation when determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of the data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which is used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after training can therefore be used to build an efficient classifier for differentiating newly observed data in real time on the basis of the pre-trained models. We illustrate the implementation of the HBGM approach with ultrasonic measurements used for cement evaluation of cased wells in the oil industry.

  15. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments like KM3NeT are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  16. Model-based MPC enables curvilinear ILT using either VSB or multi-beam mask writers

    Science.gov (United States)

    Pang, Linyong; Takatsukasa, Yutetsu; Hara, Daisuke; Pomerantsev, Michael; Su, Bo; Fujimura, Aki

    2017-07-01

    Inverse Lithography Technology (ILT) is becoming the choice for Optical Proximity Correction (OPC) of advanced technology nodes in IC design and production. Multi-beam mask writers promise significant mask writing time reduction for complex ILT-style masks. Before multi-beam mask writers become the mainstream tools in mask production, VSB writers will continue to be the tool of choice to write both curvilinear ILT and Manhattanized ILT masks. To enable VSB mask writers for complex ILT-style masks, model-based mask process correction (MB-MPC) is required to: (1) make reasonable corrections for complex edges of those features that exhibit relatively large deviations from both curvilinear ILT and Manhattanized ILT designs; (2) control and manage both Edge Placement Errors (EPE) and shot count; and (3) ease the migration to future multi-beam mask writers and serve as an effective backup solution during the transition. In this paper, a solution meeting all these requirements, MB-MPC with GPU acceleration, is presented. One model calibration per process allows accurate correction regardless of the target mask writer.

  17. The Role of Stochastic Models in Interpreting the Origins of Biological Chirality

    Directory of Open Access Journals (Sweden)

    Gábor Lente

    2010-04-01

    Full Text Available This review summarizes recent stochastic modeling efforts in theoretical research aimed at interpreting the origins of biological chirality. Stochastic kinetic models, especially those based on the continuous-time discrete-state approach, have great potential for modeling absolute asymmetric reactions, experimental examples of which have been reported in the past decade. An overview of the relevant mathematical background is given, and several examples are presented to show how the significant numerical problems characteristic of the use of stochastic models can be overcome by non-trivial, but elementary, algebra. In these stochastic models, a particulate view of matter is used rather than the concentration-based view of traditional chemical kinetics, which uses continuous functions to describe the properties of the system. This has the advantage of giving an adequate description of single-molecule events, which were probably important in the origin of biological chirality. The presented models can interpret and predict the random distribution of enantiomeric excess among repeated experiments, which is the most striking feature of absolute asymmetric reactions. It is argued that the stochastic kinetic approach should be used much more widely in the relevant literature.
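
    A minimal discrete-state stochastic sketch of the kind of calculation such models involve: repeated simulation of a hypothetical autocatalytic scheme and inspection of the resulting distribution of enantiomeric excess. The scheme and rate constants are illustrative, not taken from the review; since only the final composition matters here, the embedded jump chain of the continuous-time process is sampled.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(n_substrate=1000, k_spont=1e-4, k_auto=1e-2):
        """One stochastic run of a hypothetical chiral scheme:
        A -> R, A -> S (spontaneous) and A + R -> 2R, A + S -> 2S (autocatalytic).
        Only the embedded jump chain is sampled, since waiting times do not affect
        the final composition; returns the final enantiomeric excess."""
        A, R, S = n_substrate, 0, 0
        while A > 0:
            rates = np.array([k_spont * A, k_spont * A,
                              k_auto * A * R, k_auto * A * S])
            j = rng.choice(4, p=rates / rates.sum())   # next reaction channel
            A -= 1
            if j in (0, 2):
                R += 1
            else:
                S += 1
        return (R - S) / (R + S)

    ee = np.array([simulate() for _ in range(200)])
    print(f"mean ee = {ee.mean():+.3f}, spread (std) of ee = {ee.std():.3f}")
    ```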

  18. PBPK and population modelling to interpret urine cadmium concentrations of the French population

    Energy Technology Data Exchange (ETDEWEB)

    Béchaux, Camille, E-mail: Camille.bechaux@anses.fr [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France); Bodin, Laurent [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France); Clémençon, Stéphan [Telecom ParisTech, 46 rue Barrault, 75634 Paris Cedex 13 (France); Crépet, Amélie [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France)

    2014-09-15

    As cadmium accumulates mainly in the kidney, urinary concentrations are considered relevant data for assessing the risk related to cadmium. The French Nutrition and Health Survey (ENNS) recorded the concentration of cadmium in the urine of the French population. However, as with all biomonitoring data, it needs to be linked to external exposure in order to be interpreted in terms of sources of exposure and for risk management purposes. The objective of this work is thus to interpret the cadmium biomonitoring data of the French population in terms of dietary and cigarette-smoke exposures. Dietary and smoking habits recorded in the ENNS study were combined with contamination levels in food and cigarettes to assess individual exposures. A PBPK model was used within a Bayesian population model to link this external exposure with the measured urinary concentrations. In this model, the level of past exposure was corrected by a scaling function that accounts for a trend in French dietary exposure. The result is a model able to explain the current urinary concentrations measured in the French population through current and past exposure levels. The risk related to cadmium exposure in the general French population was then assessed from external and internal critical values corresponding to kidney effects. The model was also applied to predict the possible urinary concentrations of the French population in 2030, assuming no further changes in exposure levels. This scenario leads to significantly lower concentrations and consequently lower related risk. - Highlights: • Interpretation of urine cadmium concentrations in France • PBPK and Bayesian population modelling of cadmium exposure • Assessment of the historic time-trend of the cadmium exposure in France • Risk assessment from current and future external and internal exposure.
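
    Not the published PBPK model, but a one-compartment caricature of the same idea: the kidney cadmium burden integrates a time-varying intake with slow first-order elimination, so a historic exposure trend shapes today's burden. All parameter values below are hypothetical.

    ```python
    import numpy as np

    half_life_years = 15.0                 # assumed long biological half-life
    k_elim = np.log(2) / half_life_years
    f_to_kidney = 0.05                     # hypothetical fraction of intake reaching the kidney

    years = np.arange(1980, 2031)
    intake_ug_per_day = np.where(years < 2005, 12.0, 8.0)   # hypothetical declining trend

    burden = 0.0
    for yr, intake in zip(years, intake_ug_per_day):
        # yearly update: first-order loss of the existing burden plus new uptake
        burden = burden * np.exp(-k_elim) + f_to_kidney * intake * 365.0
        if yr in (2006, 2030):
            print(f"{yr}: kidney burden ~ {burden / 1000:.1f} mg")
    ```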

  19. PBPK and population modelling to interpret urine cadmium concentrations of the French population

    International Nuclear Information System (INIS)

    Béchaux, Camille; Bodin, Laurent; Clémençon, Stéphan; Crépet, Amélie

    2014-01-01

    As cadmium accumulates mainly in the kidney, urinary concentrations are considered relevant data for assessing the risk related to cadmium. The French Nutrition and Health Survey (ENNS) recorded the concentration of cadmium in the urine of the French population. However, as with all biomonitoring data, it needs to be linked to external exposure in order to be interpreted in terms of sources of exposure and for risk management purposes. The objective of this work is thus to interpret the cadmium biomonitoring data of the French population in terms of dietary and cigarette-smoke exposures. Dietary and smoking habits recorded in the ENNS study were combined with contamination levels in food and cigarettes to assess individual exposures. A PBPK model was used within a Bayesian population model to link this external exposure with the measured urinary concentrations. In this model, the level of past exposure was corrected by a scaling function that accounts for a trend in French dietary exposure. The result is a model able to explain the current urinary concentrations measured in the French population through current and past exposure levels. The risk related to cadmium exposure in the general French population was then assessed from external and internal critical values corresponding to kidney effects. The model was also applied to predict the possible urinary concentrations of the French population in 2030, assuming no further changes in exposure levels. This scenario leads to significantly lower concentrations and consequently lower related risk. - Highlights: • Interpretation of urine cadmium concentrations in France • PBPK and Bayesian population modelling of cadmium exposure • Assessment of the historic time-trend of the cadmium exposure in France • Risk assessment from current and future external and internal exposure

  20. Theoretical interpretation of the nuclear structure of 88Se within the ACM and the QPM models.

    Science.gov (United States)

    Gratchev, I. N.; Thiamova, G.; Alexa, P.; Simpson, G. S.; Ramdhane, M.

    2018-02-01

    The four-parameter algebraic collective model (ACM) Hamiltonian is used to describe the nuclear structure of 88Se. It is shown that the ACM is capable of providing a reasonable description of the excitation energies and relative positions of the ground-state band and the γ band. The most probable interpretation of the nuclear structure of 88Se is that of a transitional nucleus. The Quasiparticle-plus-Phonon Model (QPM) was also applied to describe the nuclear motion in 88Se. Preliminary calculations show that the collectivity of the second excited 2+ state is weak and that this state contains a strong two-quasiparticle component.

  1. Interpreting and Understanding Logits, Probits, and other Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Karlson, Kristian Bernt; Holm, Anders

    2018-01-01

    Methods textbooks in sociology and other social sciences routinely recommend the use of the logit or probit model when an outcome variable is binary, an ordered logit or ordered probit when it is ordinal, and a multinomial logit when it has more than two categories. But these methodological guidelines take little or no account of a body of work that, over the past 30 years, has pointed to problematic aspects of these nonlinear probability models and, particularly, to difficulties in interpreting their parameters. In this chapter, we draw on that literature to explain the problems, show

  2. The application of release models to the interpretation of rare gas coolant activities

    International Nuclear Information System (INIS)

    Wise, C.

    1985-01-01

    Much research is carried out into the release of fission products from UO2 fuel and from failed pins. A significant application of this data is to define models of release which can be used to interpret measured coolant activities of rare gas isotopes. Such interpretation is necessary to extract operationally relevant parameters, such as the number and size of failures in the core and the 131I that might be released during depressurization faults. The latter figure forms part of the safety case for all operating CAGRs. This paper describes and justifies the models which are used in the ANAGRAM program to interpret CAGR coolant activities, highlighting any remaining uncertainties. The various methods by which the program can extract relevant information from the measurements are outlined, and examples are given of the analysis of coolant data. These analyses point to a generally well understood picture of fission gas release from low temperature failures. Areas of higher temperature release are identified where further research would be beneficial to coolant activity analysis. (author)

  3. Interpretation of the quasi-elastic neutron scattering on PAA by rotational diffusion models

    International Nuclear Information System (INIS)

    Bata, L.; Vizi, J.; Kugler, S.

    1974-10-01

    First, the most important data determined by other methods for para-azoxyanisole (PAA) are collected. This molecule performs a rotational-oscillational motion around the mean molecular direction, and the details of this motion can be determined by inelastic neutron scattering. Quasielastic neutron scattering measurements were carried out without an orienting magnetic field on a time-of-flight facility with a neutron beam of 4.26 meV. For the interpretation of the results two models, the spherical rotational diffusion model and the circular random walk model, are investigated. The comparison shows that the circular random walk model (with N=8 sites, diameter d=4 Å and rate constant K=10^10 s^-1) fits the quasi-elastic neutron scattering very well, while the spherical rotational diffusion model appears to be incorrect. (Sz.N.Z.)

  4. Digital structural interpretation of mountain-scale photogrammetric 3D models (Kamnik Alps, Slovenia)

    Science.gov (United States)

    Dolžan, Erazem; Vrabec, Marko

    2015-04-01

    From the earliest days of geological science, mountainous terrains, with their extreme topographic relief and sparse to non-existent vegetation, have been utilized to great advantage for gaining 3D insight into geological structure. But whereas Alpine vistas may offer perfect panoramic views of geology, the steep mountain slopes and vertical cliffs make it very time-consuming and difficult (if not impossible) to acquire quantitative mapping data such as precisely georeferenced traces of geological boundaries and attitudes of structural planes. We faced this problem in mapping the central Kamnik Alps of northern Slovenia, which are built up from a Mid- to Late Triassic succession of carbonate rocks. Polyphase brittle tectonic evolution, monotonous lithology and the presence of a temporally and spatially irregular facies boundary between bedded platform carbonates and massive reef limestones considerably complicate the structural interpretation of this otherwise perfectly exposed, but hardly accessible, massif. We used Agisoft Photoscan Structure-from-Motion photogrammetric software to process a series of overlapping high-resolution (~0.25 m ground resolution) vertical aerial photographs, originally acquired by the Geodetic Authority of the Republic of Slovenia for surveying purposes, to derive very detailed 3D triangular mesh models of the terrain and associated photographic textures. Phototextures are crucial for geological interpretation of the models as they provide additional levels of detail and lithological information which is not resolvable from the geometrical mesh models alone. We then exported the models to Paradigm Gocad software to refine and optimize the meshing. Structural interpretation of the models, including mapping of traces and surfaces of faults and stratigraphic boundaries and determining dips of structural planes, was performed in the MVE Move suite, which offers a range of useful tools for digital mapping and interpretation. The photogrammetric model was complemented by
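
    A minimal sketch of one of the quantitative steps mentioned above, determining the orientation of a structural plane from points digitized on a photogrammetric model: a least-squares plane fit followed by conversion of the plane normal to dip direction/dip. The coordinates are hypothetical and this is not the MVE Move implementation.

    ```python
    import numpy as np

    # Hypothetical points digitized on one bedding/fault surface (x = east,
    # y = north, z = up, in metres).
    pts = np.array([[451210.0, 5140305.0, 1875.0],
                    [451255.0, 5140298.0, 1868.0],
                    [451230.0, 5140350.0, 1851.0],
                    [451270.0, 5140340.0, 1846.0]])

    centered = pts - pts.mean(axis=0)
    # Normal of the best-fit plane = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:                  # make the normal point upward
        n = -n

    dip = np.degrees(np.arccos(n[2]))                       # angle from horizontal
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    print(f"dip direction/dip ~ {dip_direction:05.1f}/{dip:04.1f}")
    ```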

  5. Uniform angular overlap model interpretation of the crystal field effect in U(5+) fluoride compounds

    Energy Technology Data Exchange (ETDEWEB)

    Gajek, Z.; Mulak, J. (W. Trzebiatowski Inst. of Low Temperature and Structure Research, Polish Academy of Sciences, Wroclaw (Poland))

    1990-11-01

    The uniform interpretation of the crystal field effect in three different U(5+) fluoride compounds, CsUF6, α-UF5 and β-UF5, within the angular overlap model (AOM) is given. Some characteristic relations between the AOM parameters and their distance dependencies resulting from ab initio calculations are introduced and examined from a phenomenological point of view. The traditional simplest approach with only one independent parameter, i.e. e_σ with e_π:e_σ = 0.32 and e_δ = 0, is shown to provide a consistent interpretation of the crystal field effect of the whole class of the compounds. The parameters obtained for one compound are easily and successfully extrapolated to others. The specificity and importance of the e_δ parameter for 5f^1 systems is discussed. (orig.)

  6. On the use of musculoskeletal models to interpret motor control strategies from performance data

    Science.gov (United States)

    Cheng, Ernest J.; Loeb, Gerald E.

    2008-06-01

    The intrinsic viscoelastic properties of muscle are central to many theories of motor control. Much of the debate over these theories hinges on varying interpretations of these muscle properties. In the present study, we describe methods whereby a comprehensive musculoskeletal model can be used to make inferences about motor control strategies that would account for behavioral data. Muscle activity and kinematic data from a monkey were recorded while the animal performed a single degree-of-freedom pointing task in the presence of pseudo-random torque perturbations. The monkey's movements were simulated by a musculoskeletal model with accurate representations of musculotendon morphometry and contractile properties. The model was used to quantify the impedance of the limb while moving rapidly, the differential action of synergistic muscles, the relative contribution of reflexes to task performance and the completeness of recorded EMG signals. Current methods to address these issues in the absence of musculoskeletal models were compared with the methods used in the present study. We conclude that musculoskeletal models and kinetic analysis can improve the interpretation of kinematic and electrophysiological data, in some cases by illuminating shortcomings of the experimental methods or underlying assumptions that may otherwise escape notice.

  7. Interpreting space-based trends in carbon monoxide with multiple models

    Directory of Open Access Journals (Sweden)

    S. A. Strode

    2016-06-01

    Full Text Available We use a series of chemical transport model and chemistry climate model simulations to investigate the observed negative trends in MOPITT CO over several regions of the world, and to examine the consistency of time-dependent emission inventories with observations. We find that simulations driven by the MACCity inventory, used for the Chemistry Climate Modeling Initiative (CCMI, reproduce the negative trends in the CO column observed by MOPITT for 2000–2010 over the eastern United States and Europe. However, the simulations have positive trends over eastern China, in contrast to the negative trends observed by MOPITT. The model bias in CO, after applying MOPITT averaging kernels, contributes to the model–observation discrepancy in the trend over eastern China. This demonstrates that biases in a model's average concentrations can influence the interpretation of the temporal trend compared to satellite observations. The total ozone column plays a role in determining the simulated tropospheric CO trends. A large positive anomaly in the simulated total ozone column in 2010 leads to a negative anomaly in OH and hence a positive anomaly in CO, contributing to the positive trend in simulated CO. These results demonstrate that accurately simulating variability in the ozone column is important for simulating and interpreting trends in CO.

  8. The use of cloud enabled building information models – an expert analysis

    Directory of Open Access Journals (Sweden)

    Alan Redmond

    2015-10-01

    Full Text Available The dependency of today’s construction professionals on singular commercial applications for design creates the risk of being dictated to by the language-tools they use. This unwitting conversion to the constraints of a particular computer application’s style reduces one’s association with cutting-edge design, as no single computer application can support all of the tasks associated with building design and production. Interoperability depicts the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. Cloud computing is a centralized heterogeneous platform that enables different applications to be connected to each other through remote data servers. However, the possibility of providing an interoperable process based on binding several construction applications through a single repository platform, ‘cloud computing’, required further analysis. The following Delphi questionnaires analysed the information-exchange opportunities of Building Information Modelling (BIM) as a possible solution for the integration of applications on a cloud platform. The survey is structured to: (i) identify the most appropriate applications for advancing interoperability at the early design stage; (ii) detect the most severe barriers to BIM implementation from a business and legal viewpoint; (iii) examine the need for standards to address information exchange between the design team; and (iv) explore the use of the most common interfaces for exchanging information. The anticipated findings will assist in identifying a model that will enhance the standardized passing of information between systems at the feasibility design stage of a construction project.

  9. The use of cloud enabled building information models – an expert analysis

    Directory of Open Access Journals (Sweden)

    Alan Redmond

    2012-12-01

    Full Text Available The dependency of today’s construction professionals on singular commercial applications for design creates the risk of being dictated to by the language-tools they use. This unwitting conversion to the constraints of a particular computer application’s style reduces one’s association with cutting-edge design, as no single computer application can support all of the tasks associated with building design and production. Interoperability depicts the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. Cloud computing is a centralized heterogeneous platform that enables different applications to be connected to each other through remote data servers. However, the possibility of providing an interoperable process based on binding several construction applications through a single repository platform, ‘cloud computing’, required further analysis. The following Delphi questionnaires analysed the information-exchange opportunities of Building Information Modelling (BIM) as a possible solution for the integration of applications on a cloud platform. The survey is structured to: (i) identify the most appropriate applications for advancing interoperability at the early design stage; (ii) detect the most severe barriers to BIM implementation from a business and legal viewpoint; (iii) examine the need for standards to address information exchange between the design team; and (iv) explore the use of the most common interfaces for exchanging information. The anticipated findings will assist in identifying a model that will enhance the standardized passing of information between systems at the feasibility design stage of a construction project.

  10. A grey DEMATEL-based approach for modeling enablers of green innovation in manufacturing organizations.

    Science.gov (United States)

    Gupta, Himanshu; Barua, Mukesh Kumar

    2018-04-01

    Incorporating green practices into the manufacturing process has gained momentum over the past few years and is a matter of great concern for manufacturers and researchers alike. Regulatory pressures in developed countries have forced organizations to adopt green practices; however, this issue still lacks attention in developing economies like India. There is an urgent need to identify enablers of green innovation for manufacturing organizations and to single out the most prominent among them. This study first identifies enablers of green innovation and then establishes a causal relationship among them to identify the enablers that can drive the others. Grey DEMATEL (Decision Making Trial and Evaluation Laboratory) methodology is used for establishing the causal relationship among enablers. The novelty of this study lies in the fact that no past study has identified the enablers of green innovation and then established the causal relationships among them. A total of 21 enablers of green innovation have been identified; the research indicates developing green manufacturing capabilities, resources for green innovation, ease of getting loans from financial institutions, and environmental regulations as the most influential enablers of green innovation. Managerial and practical implications of the research are also presented to assist managers of the case company in adopting green innovation practices at their end.
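
    As an illustration of the computational core of DEMATEL, the sketch below runs the crisp (non-grey) calculation on a small hypothetical direct-influence matrix; the grey-number interval arithmetic of the grey variant and the study's actual 21 enablers and expert ratings are not reproduced here (assumes numpy).

```python
import numpy as np

# Hypothetical 4x4 direct-influence matrix: entry [i, j] is the averaged
# expert rating (0-4) of how strongly enabler i influences enabler j.
D = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Normalize by the largest row sum so the series N + N^2 + ... converges.
N = D / D.sum(axis=1).max()

# Total-relation matrix T = N (I - N)^{-1}.
T = N @ np.linalg.inv(np.eye(len(D)) - N)

r = T.sum(axis=1)          # influence dispatched by each enabler
c = T.sum(axis=0)          # influence received by each enabler
prominence = r + c         # overall importance of the enabler
relation = r - c           # > 0: net cause (driver), < 0: net effect

for i, (p, rel) in enumerate(zip(prominence, relation)):
    role = "cause" if rel > 0 else "effect"
    print(f"enabler {i}: prominence={p:.2f}, relation={rel:+.2f} ({role})")
```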

  11. Expert judgment based multi-criteria decision model to address uncertainties in risk assessment of nanotechnology-enabled food products

    International Nuclear Information System (INIS)

    Flari, Villie; Chaudhry, Qasim; Neslo, Rabin; Cooke, Roger

    2011-01-01

    Currently, risk assessment of nanotechnology-enabled food products is considered difficult due to the large number of uncertainties involved. We developed an approach which could address some of the main uncertainties through the use of expert judgment. Our approach employs a multi-criteria decision model, based on probabilistic inversion that enables capturing experts’ preferences in regard to safety of nanotechnology-enabled food products, and identifying their opinions in regard to the significance of key criteria that are important in determining the safety of such products. An advantage of these sample-based techniques is that they provide out-of-sample validation and therefore a robust scientific basis. This validation in turn adds predictive power to the model developed. We achieved out-of-sample validation in two ways: (1) a portion of the expert preference data was excluded from the model’s fitting and was then predicted by the model fitted on the remaining rankings and (2) a (partially) different set of experts generated new scenarios, using the same criteria employed in the model, and ranked them; their ranks were compared with ranks predicted by the model. The degree of validation in each method was less than perfect but reasonably substantial. The validated model we applied captured and modelled experts’ preferences regarding safety of hypothetical nanotechnology-enabled food products. It appears therefore that such an approach can provide a promising route to explore further for assessing the risk of nanotechnology-enabled food products.

  12. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-05

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software, thoroughly documented, capable of handling complex molecules, written in a modern programming language (python, cython, C and C++ when performance is needed) and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart, more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group. © 2016 Wiley Periodicals, Inc.
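
    For readers unfamiliar with the underlying idea, the following toy loop sketches the traditional RMC scheme that fullrmc generalizes: random single-atom moves, accepted when they improve (or, with a Metropolis-like probability, when they worsen) the fit to target data. It is not fullrmc's API, the grouped machine-learning-driven moves described above are not represented, and the "experimental" histogram is synthetic (assumes numpy).

```python
import numpy as np

rng = np.random.default_rng(0)

n_atoms, box = 64, 10.0                      # toy periodic system, arbitrary units
pos = rng.uniform(0.0, box, size=(n_atoms, 3))
bins = np.linspace(0.5, box / 2, 40)

def histogram(p):
    """Minimum-image pair-distance histogram (crude stand-in for g(r))."""
    d = p[:, None, :] - p[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n_atoms, 1)]
    h, _ = np.histogram(r, bins=bins)
    return h / h.sum()

g_target = histogram(rng.uniform(0.0, box, size=(n_atoms, 3)))   # synthetic "experimental" data
chi2 = ((histogram(pos) - g_target) ** 2).sum()

for step in range(5000):
    i = rng.integers(n_atoms)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(scale=0.3, size=3)) % box   # random single-atom move
    new_chi2 = ((histogram(trial) - g_target) ** 2).sum()
    # Accept improvements always; accept deteriorations with a small probability.
    if new_chi2 < chi2 or rng.random() < np.exp((chi2 - new_chi2) / 1e-4):
        pos, chi2 = trial, new_chi2

print(f"final chi^2 = {chi2:.3e}")
```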

  13. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity.

    Science.gov (United States)

    Barretina, Jordi; Caponigro, Giordano; Stransky, Nicolas; Venkatesan, Kavitha; Margolin, Adam A; Kim, Sungjoon; Wilson, Christopher J; Lehár, Joseph; Kryukov, Gregory V; Sonkin, Dmitriy; Reddy, Anupama; Liu, Manway; Murray, Lauren; Berger, Michael F; Monahan, John E; Morais, Paula; Meltzer, Jodi; Korejwa, Adam; Jané-Valbuena, Judit; Mapa, Felipa A; Thibault, Joseph; Bric-Furlong, Eva; Raman, Pichai; Shipway, Aaron; Engels, Ingo H; Cheng, Jill; Yu, Guoying K; Yu, Jianjun; Aspesi, Peter; de Silva, Melanie; Jagtap, Kalpana; Jones, Michael D; Wang, Li; Hatton, Charles; Palescandolo, Emanuele; Gupta, Supriya; Mahan, Scott; Sougnez, Carrie; Onofrio, Robert C; Liefeld, Ted; MacConaill, Laura; Winckler, Wendy; Reich, Michael; Li, Nanxin; Mesirov, Jill P; Gabriel, Stacey B; Getz, Gad; Ardlie, Kristin; Chan, Vivien; Myer, Vic E; Weber, Barbara L; Porter, Jeff; Warmuth, Markus; Finan, Peter; Harris, Jennifer L; Meyerson, Matthew; Golub, Todd R; Morrissey, Michael P; Sellers, William R; Schlegel, Robert; Garraway, Levi A

    2012-03-28

    The systematic translation of cancer genomic data into knowledge of tumour biology and therapeutic possibilities remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacological annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacological profiles for 24 anticancer drugs across 479 of the cell lines, this collection allowed identification of genetic, lineage, and gene-expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Together, our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of 'personalized' therapeutic regimens.

  14. The Cancer Cell Line Encyclopedia enables predictive modeling of anticancer drug sensitivity

    Science.gov (United States)

    Barretina, Jordi; Caponigro, Giordano; Stransky, Nicolas; Venkatesan, Kavitha; Margolin, Adam A.; Kim, Sungjoon; Wilson, Christopher J.; Lehár, Joseph; Kryukov, Gregory V.; Sonkin, Dmitriy; Reddy, Anupama; Liu, Manway; Murray, Lauren; Berger, Michael F.; Monahan, John E.; Morais, Paula; Meltzer, Jodi; Korejwa, Adam; Jané-Valbuena, Judit; Mapa, Felipa A.; Thibault, Joseph; Bric-Furlong, Eva; Raman, Pichai; Shipway, Aaron; Engels, Ingo H.; Cheng, Jill; Yu, Guoying K.; Yu, Jianjun; Aspesi, Peter; de Silva, Melanie; Jagtap, Kalpana; Jones, Michael D.; Wang, Li; Hatton, Charles; Palescandolo, Emanuele; Gupta, Supriya; Mahan, Scott; Sougnez, Carrie; Onofrio, Robert C.; Liefeld, Ted; MacConaill, Laura; Winckler, Wendy; Reich, Michael; Li, Nanxin; Mesirov, Jill P.; Gabriel, Stacey B.; Getz, Gad; Ardlie, Kristin; Chan, Vivien; Myer, Vic E.; Weber, Barbara L.; Porter, Jeff; Warmuth, Markus; Finan, Peter; Harris, Jennifer L.; Meyerson, Matthew; Golub, Todd R.; Morrissey, Michael P.; Sellers, William R.; Schlegel, Robert; Garraway, Levi A.

    2012-01-01

    The systematic translation of cancer genomic data into knowledge of tumor biology and therapeutic avenues remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacologic annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number, and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacologic profiles for 24 anticancer drugs across 479 of the lines, this collection allowed identification of genetic, lineage, and gene expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Altogether, our results suggest that large, annotated cell line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of “personalized” therapeutic regimens. PMID:22460905
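
    The kind of expression-based sensitivity predictor such a collection supports can be sketched with a regularized regression on a synthetic stand-in matrix, as below; the gene, drug and cell-line data are invented for illustration and the pipeline is not the paper's (assumes numpy and scikit-learn).

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-in data: 479 cell lines x 1,000 expression features and one drug's
# response value; the real CCLE matrices are far larger and not synthetic.
n_lines, n_genes = 479, 1000
X = rng.normal(size=(n_lines, n_genes))
true_coef = np.zeros(n_genes)
true_coef[:10] = rng.normal(size=10)            # a few genes drive sensitivity
y = X @ true_coef + rng.normal(scale=0.5, size=n_lines)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Regularized regression: predict drug response from expression features.
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", round(model.score(X_test, y_test), 3))
top = np.argsort(np.abs(model.coef_))[::-1][:5]
print("top predictive features:", top)
```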

  15. Interpretive modelling of scrape-off plasmas on the MAST tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, J. [Euratom/UKAEA Fusion Association, Culham Science Centre, D2/2.01 Fusion Association, Abingdon, Oxfordshire OX14 3DB (United Kingdom); University of York, Heslington, York (United Kingdom)], E-mail: james.harrison@ukaea.org.uk; Lisgo, S. [Euratom/UKAEA Fusion Association, Culham Science Centre, D2/2.01 Fusion Association, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Counsell, G.F. [Fusion for Energy, Barcelona (Spain); Gibson, K. [University of York, Heslington, York (United Kingdom); Dowling, J. [Euratom/UKAEA Fusion Association, Culham Science Centre, D2/2.01 Fusion Association, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Trojan, L. [University of Manchester, Oxford Road, Manchester (United Kingdom); Reiter, D. [IPP, Forschungszentrum Juelich GmbH, EURATOM Association, D-52425 Juelich (Germany)

    2009-06-15

    Electrical currents in the scrape-off layer (SOL) of MAST are modelled using an interpretive Onion-Skin Model (OSM) constrained with experimental data from MAST diagnostics. The model was extended to include the effects of the magnetic mirror force, which has a strong influence on the particle and momentum balance in spherical tokamaks, such as MAST. These modifications serve to more accurately model the parallel electric fields present in the MAST SOL, which can alter plasma dynamics via the E x B drift. Simulations show that the electrical current at the divertor targets is predominantly thermoelectric, whereas Pfirsch-Schlueter currents have a greater contribution to the total current in the bulk of the SOL plasma.

  16. Parallel approach to identifying the well-test interpretation model using a neurocomputer

    Science.gov (United States)

    May, Edward A., Jr.; Dagli, Cihan H.

    1996-03-01

    The well test is one of the primary diagnostic and predictive tools used in the analysis of oil and gas wells. In these tests, a pressure recording device is placed in the well and the pressure response is recorded over time under controlled flow conditions. The interpreted results are indicators of the well's ability to flow and the damage done to the formation surrounding the wellbore during drilling and completion. The results are used for many purposes, including reservoir modeling (simulation) and economic forecasting. The first step in the analysis is the identification of the Well-Test Interpretation (WTI) model, which determines the appropriate solution method. Mis-identification of the WTI model occurs due to noise and non-ideal reservoir conditions. Previous studies have shown that a feed-forward neural network using the backpropagation algorithm can be used to identify the WTI model. One of the drawbacks to this approach is, however, training time, which can run into days of CPU time on personal computers. In this paper a similar neural network is applied using both a personal computer and a neurocomputer. Input data processing, network design, and performance are discussed and compared. The results show that the neurocomputer greatly eases the burden of training and allows the network to outperform a similar network running on a personal computer.
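
    A minimal sketch of the idea - a small feed-forward network classifying a pressure-derivative-like feature vector into one of several hypothetical WTI model classes - is given below; the synthetic features, class signatures and network architecture are placeholders and do not reproduce the paper's input processing or its neurocomputer implementation (assumes numpy and scikit-learn).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in features: sampled points of a log-log pressure-derivative
# curve for three hypothetical WTI model classes.
n_per_class, n_features, n_classes = 200, 30, 3
X, y = [], []
for c in range(n_classes):
    base = np.sin(np.linspace(0, np.pi, n_features)) * (c + 1)   # crude class signature
    X.append(base + rng.normal(scale=0.3, size=(n_per_class, n_features)))
    y.append(np.full(n_per_class, c))
X, y = np.vstack(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Small feed-forward network trained by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("WTI model identification accuracy:", round(clf.score(X_test, y_test), 3))
```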

  17. Fractal Geometry Enables Classification of Different Lung Morphologies in a Model of Experimental Asthma

    Science.gov (United States)

    Obert, Martin; Hagner, Stefanie; Krombach, Gabriele A.; Inan, Selcuk; Renz, Harald

    2015-06-01

    Animal models represent the basis of our current understanding of the pathophysiology of asthma and are of central importance in the preclinical development of drug therapies. The characterization of irregular lung shapes is a major issue in radiological imaging of mice in these models. The aim of this study was to find out whether differences in lung morphology can be described by fractal geometry. Healthy and asthmatic mouse groups, before and after an acute asthma attack induced by methacholine, were studied. In vivo flat-panel-based high-resolution computed tomography (CT) was used for thoracic imaging of the mice. The digital image data of the mice's lungs were segmented from the surrounding tissue. The lungs were then divided by image gray-level thresholds into two additional subsets: one subset contains essentially the air-transporting bronchial system, while the other corresponds mainly to the blood vessel system. We estimated the fractal dimension of all sets of the different mouse groups using the mass-radius relation (mrr). We found that the air-transporting subset of the bronchial lung tissue enables a complete and significant differentiation between all four mouse groups (mean D of control mice before methacholine treatment: 2.64 ± 0.06; after treatment: 2.76 ± 0.03; asthma mice before methacholine treatment: 2.37 ± 0.16; after treatment: 2.71 ± 0.03; p < 0.05). We conclude that the concept of fractal geometry allows a well-defined, quantitative, numerical and objective differentiation of lung shapes — most likely applicable also in human asthma diagnostics.
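
    The mass-radius relation itself is straightforward to apply: count the voxels of the segmented subset within growing radii of the centre of mass and fit the slope of log M(r) against log r. The sketch below does this on a random stand-in volume (which, being non-fractal, yields D close to 3); the CT segmentation pipeline is not shown (assumes numpy).

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in binary volume: voxels of a segmented (e.g. bronchial) subset.
vox = rng.random((64, 64, 64)) < 0.05
coords = np.argwhere(vox).astype(float)
center = coords.mean(axis=0)

# Mass-radius relation: M(r) ~ r^D, so D is the slope of log M(r) vs log r.
radii = np.geomspace(4, 30, 10)
dist = np.linalg.norm(coords - center, axis=1)
mass = np.array([(dist <= r).sum() for r in radii])

D, _ = np.polyfit(np.log(radii), np.log(mass), 1)
print(f"estimated fractal dimension D = {D:.2f}")
```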

  18. Enabling Parametric Optimal Ascent Trajectory Modeling During Early Phases of Design

    Science.gov (United States)

    Holt, James B.; Dees, Patrick D.; Diaz, Manuel J.

    2015-01-01

    -modal due to the interaction of various constraints. Additionally, when these obstacles are coupled with the Program to Optimize Simulated Trajectories (POST) [1], an industry-standard but difficult-to-use program for optimizing ascent trajectories, expert trajectory analysts are required to optimize a vehicle's ascent trajectory effectively. As has been pointed out, the paradigm of trajectory optimization remains a very manual one, because applying modern computational resources to POST is still a challenging problem: the nuances and difficulties involved in correctly utilizing, and therefore automating, the program present a large obstacle. In order to address these issues, the authors discuss a methodology that has been developed. The methodology is two-fold: first, a set of heuristics is introduced that was captured while working with expert analysts to replicate the current state of the art; second, the power of modern computing is leveraged to evaluate multiple trajectories simultaneously and thereby enable exploration of the trajectory design space early during the pre-conceptual and conceptual phases of design. When this methodology is coupled with design of experiments in order to train surrogate models, the authors were able to visualize the trajectory design space, enabling parametric optimal ascent trajectory information to be introduced alongside other pre-conceptual and conceptual design tools. The potential impact of this methodology's success would be a fully automated POST evaluation suite for conceptual and preliminary design trade studies. This will enable engineers to characterize the ascent trajectory's sensitivity to design changes in an arbitrary number of dimensions and to find settings for trajectory-specific variables that result in optimal performance for a "dialed-in" launch vehicle design. The effort described in this paper was developed for the Advanced Concepts Office [2] at NASA Marshall
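
    The design-of-experiments plus surrogate-model step can be illustrated in a few lines: sample the normalized design space, fit a cheap response surface to the expensive evaluations, and interrogate the surrogate instead of the real tool. The stand-in objective below is hypothetical and in no way represents POST (assumes numpy).

```python
import numpy as np

rng = np.random.default_rng(4)

def trajectory_metric(x):
    """Hypothetical stand-in for an expensive trajectory evaluation:
    a penalty as a function of two normalized trajectory variables."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.1 * x[0] * x[1]

# Design of experiments: space-filling random samples of the design space.
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = np.array([trajectory_metric(x) for x in X])

# Quadratic response-surface surrogate fitted by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Cheap surrogate evaluation over a dense grid to explore the design space
# and locate a near-optimal setting without further expensive runs.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101)), -1).reshape(-1, 2)
pred = features(grid) @ coef
best = grid[np.argmin(pred)]
print("surrogate optimum near:", best.round(2))
```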

  19. Analysis and interpretation of borehole hydraulic tests in deep boreholes: principles, model development, and applications

    International Nuclear Information System (INIS)

    Pickens, J.F.; Grisak, G.E.; Avis, J.D.; Belanger, D.W.

    1987-01-01

    A review of the literature on hydraulic testing and interpretive methods, particularly in low-permeability media, indicates a need for a comprehensive hydraulic testing interpretive capability. Physical limitations on boreholes, such as caving and erosion during continued drilling, as well as the high costs associated with deep-hole rigs and testing equipment, often necessitate testing under nonideal conditions with respect to antecedent pressures and temperatures. In these situations, which are common in the high-level nuclear waste programs throughout the world, the interpretive requirements include the ability to quantitatively account for thermally induced pressure responses and borehole pressure history (resulting in a time-dependent pressure profile around the borehole) as well as equipment compliance effects in low-permeability intervals. A numerical model was developed to provide the capability to handle these antecedent conditions. Sensitivity studies and practical applications are provided to illustrate the importance of thermal effects and antecedent pressure history. It is demonstrated theoretically, and with examples from the Swiss (National Genossenschaft fuer die Lagerung radioaktiver Abfaelle) regional hydrogeologic characterization program, that pressure changes (expressed as hydraulic head) of the order of tens to hundreds of meters can result from 1° to 2°C temperature variations during shut-in (packer isolated) tests in low-permeability formations. Misinterpreted formation pressures and hydraulic conductivities can also result from inaccurate antecedent pressure history. Interpretation of representative formation properties and pressures requires that antecedent pressure information and test period temperature data be included as an integral part of the hydraulic test analyses.

  20. Validation of a model of left ventricular segmentation for interpretation of SPET myocardial perfusion images

    International Nuclear Information System (INIS)

    Aepfelbacher, F.C.; Johnson, R.B.; Schwartz, J.G.; Danias, P.G.; Chen, L.; Parker, R.A.; Parker, A.J.

    2001-01-01

    Several models of left ventricular segmentation have been developed that assume a standard coronary artery distribution, and are currently used for interpretation of single-photon emission tomography (SPET) myocardial perfusion imaging. This approach has the potential for incorrect assignment of myocardial segments to vascular territories, possibly over- or underestimating the number of vessels with significant coronary artery disease (CAD). We therefore sought to validate a 17-segment model of myocardial perfusion by comparing the predefined coronary territory assignment with the actual angiographically derived coronary distribution. We examined 135 patients who underwent both coronary angiography and stress SPET imaging within 30 days. Individualized coronary distribution was determined by review of the coronary angiograms and used to identify the coronary artery supplying each of the 17 myocardial segments of the model. The actual coronary distribution was used to assess the accuracy of the assumed coronary distribution of the model. The sensitivities and specificities of stress SPET for detection of CAD in individual coronary arteries and the classification regarding perceived number of diseased coronary arteries were also compared between the two coronary distributions (actual and assumed). The assumed coronary distribution corresponded to the actual coronary anatomy in all but one segment (3). The majority of patients (80%) had 14 or more concordant segments. Sensitivities and specificities of stress SPET for detection of CAD in the coronary territories were similar, with the exception of the RCA territory, for which specificity for detection of CAD was better for the angiographically derived coronary artery distribution than for the model. There was 95% agreement between assumed and angiographically derived coronary distributions in classification to single- versus multi-vessel CAD. Reassignment of a single segment (segment 3) from the LCX to the LAD

  1. Validation of a model of left ventricular segmentation for interpretation of SPET myocardial perfusion images

    Energy Technology Data Exchange (ETDEWEB)

    Aepfelbacher, F.C.; Johnson, R.B.; Schwartz, J.G.; Danias, P.G. [Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States); Chen, L.; Parker, R.A. [Biometrics Center, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States); Parker, A.J. [Nuclear Medicine Division, Department of Radiology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA (United States)

    2001-11-01

    Several models of left ventricular segmentation have been developed that assume a standard coronary artery distribution, and are currently used for interpretation of single-photon emission tomography (SPET) myocardial perfusion imaging. This approach has the potential for incorrect assignment of myocardial segments to vascular territories, possibly over- or underestimating the number of vessels with significant coronary artery disease (CAD). We therefore sought to validate a 17-segment model of myocardial perfusion by comparing the predefined coronary territory assignment with the actual angiographically derived coronary distribution. We examined 135 patients who underwent both coronary angiography and stress SPET imaging within 30 days. Individualized coronary distribution was determined by review of the coronary angiograms and used to identify the coronary artery supplying each of the 17 myocardial segments of the model. The actual coronary distribution was used to assess the accuracy of the assumed coronary distribution of the model. The sensitivities and specificities of stress SPET for detection of CAD in individual coronary arteries and the classification regarding perceived number of diseased coronary arteries were also compared between the two coronary distributions (actual and assumed). The assumed coronary distribution corresponded to the actual coronary anatomy in all but one segment (3). The majority of patients (80%) had 14 or more concordant segments. Sensitivities and specificities of stress SPET for detection of CAD in the coronary territories were similar, with the exception of the RCA territory, for which specificity for detection of CAD was better for the angiographically derived coronary artery distribution than for the model. There was 95% agreement between assumed and angiographically derived coronary distributions in classification to single- versus multi-vessel CAD. Reassignment of a single segment (segment 3) from the LCX to the LAD

  2. Institutional analysis of milkfish supply chain using interpretive structural modelling (ISM) (case study of UD. Bunda Foods, Sidoarjo District)

    Science.gov (United States)

    Silalahi, R. L. R.; Mustaniroh, S. A.; Ikasari, D. M.; Sriulina, R. P.

    2018-03-01

    UD. Bunda Foods is an SME located in the district of Sidoarjo. UD. Bunda Foods has problems in maintaining the quality assurance of its milkfish and in developing marketing strategies. Addressing these problems enables UD. Bunda Foods to compete with other similar SMEs and to market its product for further expansion of the business. The objectives of this study were to determine the model of the institutional structure of the milkfish supply chain and to determine the elements, the sub-elements, and the relationships among the elements. The method used in this research was Interpretive Structural Modeling (ISM), involving 5 experts as respondents, consisting of 1 practitioner, 1 academician, and 3 government organisation employees. The results showed that there were two key elements: the requirement and goal elements. Based on the Drive Power-Dependence (DP-D) matrix, the key sub-elements of the requirement element, consisting of raw material continuity, appropriate marketing strategy, and production capital, were positioned in the Linkage sector quadrant. The DP-D matrix for the key sub-elements of the goal element showed a similar position. The findings suggest several managerial implications for UD. Bunda Foods, including establishing good relationships with all involved institutions, obtaining capital assistance, and attending the marketing training provided by the government.
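
    The mechanics of ISM - building the reachability matrix by transitive closure and partitioning elements into levels, plus the MICMAC-style driving power and dependence counts - can be sketched as below; the adjacency matrix and element labels are hypothetical and do not reproduce the study's expert data (assumes numpy).

```python
import numpy as np

# Hypothetical structural self-interaction input: A[i, j] = 1 means
# sub-element i influences ("leads to") sub-element j, per expert judgement.
labels = ["raw material continuity", "marketing strategy", "production capital",
          "product quality", "institutional support"]
A = np.array([
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
], dtype=bool)

# Reachability matrix: transitive closure of (A OR I) via Warshall's algorithm.
R = A | np.eye(len(A), dtype=bool)
for k in range(len(A)):
    R = R | (R[:, [k]] & R[[k], :])

# Level partitioning: an element whose reachability set (within the remaining
# elements) is contained in its antecedent set sits at the current top level.
remaining, level = set(range(len(A))), 1
while remaining:
    idx = sorted(remaining)
    top = [i for i in idx
           if {j for j in idx if R[i, j]} <= {j for j in idx if R[j, i]}]
    print(f"level {level}: {[labels[i] for i in top]}")
    remaining -= set(top)
    level += 1

# MICMAC-style driving power and dependence from the reachability matrix.
print("driving power:", R.sum(axis=1), "dependence:", R.sum(axis=0))
```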

  3. Opening Pandora's Box: The impact of open system modeling on interpretations of anoxia

    Science.gov (United States)

    Hotinski, Roberta M.; Kump, Lee R.; Najjar, Raymond G.

    2000-06-01

    The geologic record preserves evidence that vast regions of ancient oceans were once anoxic, with oxygen levels too low to sustain animal life. Because anoxic conditions have been postulated to foster deposition of petroleum source rocks and have been implicated as a kill mechanism in extinction events, the genesis of such anoxia has been an area of intense study. Most previous models of ocean oxygen cycling proposed, however, have either been qualitative or used closed-system approaches. We reexamine the question of anoxia in open-system box models in order to test the applicability of closed-system results over long timescales and find that open and closed-system modeling results may differ significantly on both short and long timescales. We also compare a scenario with basinwide diffuse upwelling (a three-box model) to a model with upwelling concentrated in the Southern Ocean (a four-box model). While a three-box modeling approach shows that only changes in high-latitude convective mixing rate and character of deepwater sources are likely to cause anoxia, four-box model experiments indicate that slowing of thermohaline circulation, a reduction in wind-driven upwelling, and changes in high-latitude export production may also cause dysoxia or anoxia in part of the deep ocean on long timescales. These results suggest that box models must capture the open-system and vertically stratified nature of the ocean to allow meaningful interpretations of long-lived episodes of anoxia.

  4. Evaluating the skills of isotope-enabled general circulation models against in situ atmospheric water vapor isotope observations

    DEFF Research Database (Denmark)

    Steen-Larsen, Hans Christian; Risi, C.; Werner, M.

    2017-01-01

    The skills of isotope-enabled general circulation models are evaluated against atmospheric water vapor isotopes. We have combined in situ observations of surface water vapor isotopes spanning multiple field seasons (2010, 2011, and 2012) from the top of the Greenland Ice Sheet (NEEM site: 77.45°N......: 2014). This allows us to benchmark the ability to simulate the daily water vapor isotope variations from five different simulations using isotope-enabled general circulation models. Our model-data comparison documents clear isotope biases both on top of the Greenland Ice Sheet (1-11% for δ18O and 4...... boundary layer water vapor isotopes of the Baffin Bay region show strong influence on the water vapor isotopes at the NEEM deep ice core-drilling site in northwest Greenland. Our evaluation of the simulations using isotope-enabled general circulation models also documents wide intermodel spatial...

  5. Microscopic creep models and the interpretation of stress-dip tests during creep

    International Nuclear Information System (INIS)

    Poirier, J.P.

    1976-09-01

    A critical analysis is made of the principal divergent viewpoints concerning stress-dip tests. The raw data are examined and interpreted in the light of various creep models. The following problems are discussed: is the reverse strain anelastic or plastic; is the zero creep rate periodic due to recovery or is it spurious; can the existence or inexistence of an internal stress be deduced from stress-dip tests; can stress-dip tests allow one to determine whether glide is jerky or viscous; and can the internal stress be measured by stress-dip tests?

  6. Comprehensive Interpretation of the Laboratory Experiments Results to Construct Model of the Polish Shale Gas Rocks

    Science.gov (United States)

    Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin

    2018-03-01

    More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameter size, total surface area, and natural radioactivity. Results of pyrolysis, i.e. TOC (Total Organic Carbon) together with S1 and S2 - parameters used to determine the hydrocarbon generation potential of rocks - were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments and Pressure Decay Permeability measurements, together with water immersion porosimetry and the adsorption/desorption of nitrogen vapors method, were carried out along with a comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations and, in some cases, the large dispersion of data and discrepancies in the property values obtained from different methods were the basis for building a shale gas rock model for well logging interpretation. The model was verified against the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.

  7. Entropy-Based Model for Interpreting Life Systems in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Guo-lian Kang

    2008-01-01

    Full Text Available Traditional Chinese medicine (TCM) treats qi as the core of the human life systems. Starting with a hypothetical correlation between TCM qi and the entropy theory, we address in this article a holistic model for evaluating and unveiling the rule of TCM life systems. Several new concepts, such as acquired life entropy (ALE), acquired life entropy flow (ALEF) and acquired life entropy production (ALEP), are propounded to interpret TCM life systems. Using the entropy theory, mathematical models are established for ALE, ALEF and ALEP, which reflect the evolution of life systems. Some criteria are given on physiological activities and pathological changes of the body in different stages of life. Moreover, a real-data-based simulation shows that life entropies of the human body at different ages, with Cold and Hot constitutions and in different seasons in North China coincide with the manifestations of qi as well as the life evolution in TCM descriptions. In particular, based on the comparative and quantitative analysis, the entropy-based model can nicely describe the evolution of life entropies in Cold and Hot individuals, thereby fitting the Yin–Yang theory in TCM. Thus, this work establishes a novel approach to interpret the fundamental principles in TCM, and provides an alternative understanding for the complex life systems.

  8. Interpretation of ensembles created by multiple iterative rebuilding of macromolecular models

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Adams, Paul D.; Moriarty, Nigel W.; Zwart, Peter; Read, Randy J.; Turk, Dusan; Hung, Li-Wei

    2007-01-01

    Heterogeneity in ensembles generated by independent model rebuilding principally reflects the limitations of the data and of the model-building process rather than the diversity of structures in the crystal. Automation of iterative model building, density modification and refinement in macromolecular crystallography has made it feasible to carry out this entire process multiple times. By using different random seeds in the process, a number of different models compatible with experimental data can be created. Sets of models were generated in this way using real data for ten protein structures from the Protein Data Bank and using synthetic data generated at various resolutions. Most of the heterogeneity among models produced in this way is in the side chains and loops on the protein surface. Possible interpretations of the variation among models created by repetitive rebuilding were investigated. Synthetic data were created in which a crystal structure was modelled as the average of a set of ‘perfect’ structures and the range of models obtained by rebuilding a single starting model was examined. The standard deviations of coordinates in models obtained by repetitive rebuilding at high resolution are small, while those obtained for the same synthetic crystal structure at low resolution are large, so that the diversity within a group of models cannot generally be a quantitative reflection of the actual structures in a crystal. Instead, the group of structures obtained by repetitive rebuilding reflects the precision of the models, and the standard deviation of coordinates of these structures is a lower bound estimate of the uncertainty in coordinates of the individual models

  9. An interpretation of the behavior of EoS/GE models for asymmetric systems

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios; Panayiotis, Vlamos

    2000-01-01

    or zero pressure or at other conditions (system's pressure, constant volume packing fraction). In a number of publications over the last years, the achievements and the shortcomings of the various EoS/G(E) models have been presented via phase equilibrium calculations. This short communication provides an explanation of several literature EoS/G(E) models, especially those based on zero-reference pressure (PSRK, MHV1, MHV2), in the prediction of phase equilibria for asymmetric systems as well as an interpretation of the LCVM and kappa-MHV1 models, which provide an empirical - yet, as shown here, theoretically justified - solution to these problems. (C) 2000 Elsevier Science Ltd. All rights reserved.

  10. Enabling School Structure, Collective Responsibility, and a Culture of Academic Optimism: Toward a Robust Model of School Performance in Taiwan

    Science.gov (United States)

    Wu, Jason H.; Hoy, Wayne K.; Tarter, C. John

    2013-01-01

    Purpose: The purpose of this research is twofold: to test a theory of academic optimism in Taiwan elementary schools and to expand the theory by adding new variables, collective responsibility and enabling school structure, to the model. Design/methodology/approach: Structural equation modeling was used to test, refine, and expand an…

  11. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
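
    A minimal version of such a comparison - cross-validated predictive performance plus a per-feature view of what each model relies on - is sketched below on synthetic data; impurity-based importances and linear coefficients stand in for the feature-contribution scoring schemes assessed in the paper, and none of the benchmark data sets are used (assumes numpy and scikit-learn).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a descriptor/fingerprint matrix with a binary activity label.
X, y = make_classification(n_samples=500, n_features=200, n_informative=20, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
svm = LinearSVC(C=1.0, max_iter=5000)

# Cross-validated predictive performance of the nonlinear vs. linear model.
for name, model in [("Random Forest", rf), ("linear SVM", svm)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy").mean()
    print(f"{name}: balanced accuracy = {acc:.3f}")

# Per-feature view: impurity-based importances for the forest versus
# the signed coefficients of the linear model.
rf.fit(X, y)
svm.fit(X, y)
print("top RF features:   ", np.argsort(rf.feature_importances_)[::-1][:5])
print("top |SVM| features:", np.argsort(np.abs(svm.coef_[0]))[::-1][:5])
```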

  12. Under the pile. Understanding subsurface dynamics of historical cities trough geophysical models interpretation

    Science.gov (United States)

    Bernardes, Paulo; Pereira, Bruno; Alves, Mafalda; Fontes, Luís; Sousa, Andreia; Martins, Manuela; Magalhães, Fernanda; Pimenta, Mário

    2017-04-01

    Braga is one of the oldest cities of the Iberian NW and, as such, the research team studying the city's historical core for the past 40 years is often confronted with the unpredictability factor lying beneath an urban site with such a long construction history. In fact, Braga has kept redesigning its urban structure over itself for the past 2000 years, leaving us with a research object filled with an impressive set of construction footprints from the various planning decisions taken in the city along its historical path. Aiming for a predictive understanding of the subsoil, we have used near-surface geophysics in an effort to minimize the areas of intervention for traditional archaeological survey techniques. The Seminário de Santiago integrated geophysical survey is an example of the difficulties of interpreting geophysical models in very complex subsurface scenarios. This geophysical survey was planned in order to aid the requalification project being designed for this set of historical buildings, which are estimated to date back to the 16th century and were built over one of the main urban arteries of both the Roman and medieval layers of Braga. We used both GPR and ERT methods for the geophysical survey, but for the purpose of this article we will focus on the use of ERT alone. For the interpretation of the geophysical models we cross-referenced the dense knowledge of the buildings' construction phases with the complex geophysical data collected, using mathematical processing and volume-based visualization techniques and resorting to the Res2Inv©, Paraview© and Voxler® software. At the same time we tried to pinpoint the noise caused by the infrastructural interventions of the past 30 years, in which the buildings' water and sanitation systems were replaced and for which we had no design plans, despite their recent occurrence. The deep impact of these replacement actions revealed by the archaeological

  13. New Interpretations of the Rayn Anticlines in the Arabian Basin Inferred from Gravity Modelling

    Science.gov (United States)

    AlMogren, S. M.; Mukhopadhyay, M.

    2014-12-01

    The Rayn Anticlines comprise a regularly spaced set of super-giant anticlines oriented NNW, developed due to E-W compression in the Arabian Basin. The most prominent of these is the Ghawar Anticline, followed by the Summan and Khurais Anticlines and the Qatar Arch. The gravity anomaly is largely characteristic of both the Rayn Anticlines and their smaller-scale counterpart, the Jinadriah Anticline in the Riyadh Salt Basin. It displays a bipolar gravity field - a zone of gravity high running along the fold axis, flanked by asymmetric gravity lows. Available structural models commonly infer structural uplift for the median gravity high but ignore the flanking lows. Here we interpret the bipolar gravity anomaly as due primarily to such anticline structures, while the flanking gravity lows are due to greater sediment thickness, largely compacted and deformed over the basement depressions. Further complexities are created by the salt layer and its migration in the lower horizons of the sediment strata. Such a diagnostic gravity anomaly pattern is taken here as evidence for basement tectonics due to the prevailing crustal dynamics in the Arabian Basin. Density inversion provides details on the subsurface density variation due to the folding and structural configuration of the sediment layers, including the salt layer, affected by basement deformation. This interpretation is largely supported by the gravity forward and inversion models given in the present study, which are partly constrained by the available seismic, MT and deep resistivity lines and by surface geologic mapping. Most of the oil-gas fields in this part of the Arabian Basin are further known for salt diapirism. In this study, gravity interpretation helps in the identification of salt diapirism directly overlying the basement, given here first for the Jinadriah Anticline; this is then extended to a regional geologic cross-section traversing the Rayn Anticlines to infer the probable subsurface continuation of salt diapirs directly overlying

  14. Interpretation of hydraulic conductivity data and parameter evaluation for groundwater flow models

    International Nuclear Information System (INIS)

    Niemi, A.

    1991-01-01

    The report reviews recent developments in evaluating effective permeabilities for groundwater flow models, starting from methods of well test interpretation and proceeding to the principles of parameter estimation. Basic concepts of parameter evaluation as well as expressions derived for effective permeabilities in traditional porous media are described. Due to the assumptions made, these often do not apply to fractured media. Specific features of fractured media are discussed, including approaches used to determine the size of a possible REV and questions related to the application of stochastic theories. Due to the difficulties encountered when applying traditional deterministic models to fractured media, stochastic and fracture network approaches have been developed. The application of these techniques is still under development, the main questions to be resolved being related to the scarcity of data

  15. Customer involvement in greening the supply chain: an interpretive structural modeling methodology

    Science.gov (United States)

    Kumar, Sanjay; Luthra, Sunil; Haleem, Abid

    2013-04-01

    The role of customers in green supply chain management needs to be identified and recognized as an important research area. This paper is an attempt to explore the involvement aspect of customers in the greening of the supply chain (SC). An empirical research approach has been used to collect primary data to rank different variables for effective customer involvement in green concept implementation in the SC. An interpretive structural model has been presented, and variables have been classified using matrice d'impacts croisés multiplication appliquée à un classement (MICMAC) analysis. Contextual relationships among variables have been established using experts' opinions. The research may help practicing managers to understand the interaction among variables affecting customer involvement. Further, this understanding may be helpful in framing the policies and strategies to green the SC. Analyzing the interaction among variables for effective customer involvement in greening the SC, in order to develop the structural model in the Indian perspective, is an effort towards promoting environmental consciousness.

  16. Interpreting predictive maps of disease: highlighting the pitfalls of distribution models in epidemiology

    Directory of Open Access Journals (Sweden)

    Nicola A. Wardrop

    2014-11-01

    Full Text Available The application of spatial modelling to epidemiology has increased significantly over the past decade, delivering enhanced understanding of the environmental and climatic factors affecting disease distributions and providing spatially continuous representations of disease risk (predictive maps). These outputs provide significant information for disease control programmes, allowing spatial targeting and tailored interventions. However, several factors (e.g. sampling protocols or temporal disease spread) can influence predictive mapping outputs. This paper proposes a conceptual framework which defines several scenarios and their potential impact on resulting predictive outputs, using simulated data to provide an exemplar. It is vital that researchers recognise these scenarios and their influence on predictive models and their outputs, as a failure to do so may lead to inaccurate interpretation of predictive maps. As long as these considerations are kept in mind, predictive mapping will continue to contribute significantly to epidemiological research and disease control planning.

  17. First Principles Modeling and Interpretation of Ionization-Triggered Charge Migration in Molecules

    Science.gov (United States)

    Bruner, Adam; Hernandez, Sam; Mauger, Francois; Abanador, Paul; Gaarde, Mette; Schafer, Ken; Lopata, Ken

    Modeling attosecond coherent charge migration in molecules is important for understanding initial steps of photochemistry and light harvesting processes. Ionization triggered hole migration can be difficult to characterize and interpret as the dynamics can be convoluted with excited states. Here, we introduce a real-time time-dependent density functional theory (RT-TDDFT) approach for modeling such dynamics from first principles. To isolate the specific hole dynamics from excited states, Fourier transform analysis and orbital occupations are used to provide a spatial hole representation in the frequency domain. These techniques are applied to hole transfer across a thiophene dimer as well as core-hole triggered valence motion in nitrosobenzene. This work was supported by U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award No. DE-SC0012462.

  18. Measurement and interpretation of swarm parameters and their application in plasma modelling

    International Nuclear Information System (INIS)

    Petrovic, Z Lj; Dujko, S; Maric, D; Malovic, G; Nikitovic, Z; Sasic, O; Jovanovic, J; Stojanovic, V; Radmilovic-Radenovic, M

    2009-01-01

    In this review paper, we discuss the current status of the physics of charged particle swarms, mainly electrons, with plasma modelling in mind. The measurements of the swarm coefficients and the availability of the data are briefly discussed. We try to give a summary of the past ten years and cite the main reviews and databases, which store the majority of the earlier work. The need to reinitiate swarm experiments, and where and how these would be useful, is pointed out. We also add some guidance on how to find information on ions and fast neutrals. Most space is devoted to the interpretation of transport data, the analysis of kinetic phenomena, and the accuracy of calculation and proper use of transport data in plasma models. We have tried to show which aspects of the kinetic theory developed for swarm physics and which segments of the data would be important for further improvement of plasma models. Several examples are then given where actual models are mostly based on the physics of swarms; these include Townsend discharges, afterglows, breakdown and some atmospheric phenomena. Finally, we stress that, while complex, some of the results from the kinetic theory of swarms and the related phenomenology must be used either to test the plasma models or even to bring new physics or higher accuracy and reliability to the models. (review article)

  19. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    Science.gov (United States)

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided.
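
    The gist of such a visualization can be sketched by training a naïve Bayesian classifier on binary fingerprint-like features and inspecting the learned per-feature log-odds, as below; the data are synthetic and the interactive graphical layer described in the paper is not reproduced (assumes numpy and scikit-learn).

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(5)

# Synthetic stand-in for binary structural fingerprints (rows: compounds,
# columns: features) with an active/inactive label; features 0-4 are made
# artificially enriched among the actives.
n, p = 400, 50
X = rng.random((n, p)) < 0.2
y = rng.random(n) < 0.5
X[y, :5] |= rng.random((y.sum(), 5)) < 0.6

clf = BernoulliNB().fit(X, y)

# The learned parameters themselves are what one would visualize: per-feature
# log-odds of a present feature toward the "active" class (positive favors activity).
log_odds = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
for j in np.argsort(log_odds)[::-1][:5]:
    print(f"feature {j:2d}: log-odds contribution {log_odds[j]:+.2f}")
```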

  20. A new interpretation and validation of variance based importance measures for models with correlated inputs

    Science.gov (United States)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.
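
    The distinction can be illustrated numerically for a quadratic polynomial with correlated inputs: the sketch below estimates a first-order index for X1 as such (which absorbs what X1 shares with X2 through the correlation) and again after removing the part of X1 explained by X2. This is a generic regression-based illustration of the idea, not the analytical indices derived in the paper (assumes numpy).

```python
import numpy as np

rng = np.random.default_rng(6)

# Correlated standard-normal inputs (rho = 0.6) and a quadratic-polynomial model,
# the model family used for the analytical derivation in the abstract.
rho, n = 0.6, 200_000
cov = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
x1, x2 = X[:, 0], X[:, 1]
y = 1.0 * x1 + 2.0 * x2 + 0.5 * x1 * x2

def first_order_index(z, y, bins=50):
    """Var(E[Y | Z]) / Var(Y), estimated from conditional means over bins of Z."""
    edges = np.quantile(z, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(z, edges[1:-1]), 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_mean - y.mean()) ** 2, weights=counts) / y.var()

# Contribution of X1 including what it shares with X2 through the correlation.
s1_total = first_order_index(x1, y)

# Uncorrelated contribution: use only the part of X1 orthogonal to X2.
x1_res = x1 - (np.cov(x1, x2)[0, 1] / x2.var()) * x2
s1_uncorr = first_order_index(x1_res, y)

print(f"total contribution of X1:        {s1_total:.3f}")
print(f"uncorrelated contribution of X1: {s1_uncorr:.3f}")
```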

  1. Model Interpretation of Topological Spatial Analysis for the Visually Impaired (Blind) Implemented in Google Maps

    Directory of Open Access Journals (Sweden)

    Marcelo Franco Porto

    2013-06-01

    Full Text Available Technological innovations promote the availability of geographic information on the Internet through Web GIS such as Google Earth and Google Maps. These systems contribute to the teaching and diffusion of geographical knowledge, instigating the recognition of the space we live in and leading to the creation of a spatial identity. In these products available on the Web, the interpretation and analysis of spatial information give priority to one of the human senses: vision. Because this representation of information is transmitted visually (images and vectors), a portion of the population is excluded from part of this knowledge, since categories of analysis of geographic data such as borders, territory, and space can only be understood by people who can see. This paper deals with the development of a model of interpretation of topological spatial analysis, based on the synthesis of voice and sounds, that can be used by the visually impaired (blind). The implementation of a prototype in Google Maps and the usability tests performed are also examined. For the development work it was necessary to define the model of topological spatial analysis, focusing on computational implementation, which allows users to interpret the spatial relationships of regions (countries, states and municipalities), recognizing their limits, neighborhoods and extension beyond their own spatial relationships. With this goal in mind, several interface and usability guidelines were drawn up for use by the visually impaired (blind). We conducted a detailed study of the Google Maps API (Application Programming Interface), which was the environment selected for prototype development, and studied the information available to the users of that system. The prototype was developed based on the synthesis of voice and sounds, implementing the proposed model in the C# language in the .NET environment. To measure the efficiency and effectiveness of the prototype, usability

  2. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    Science.gov (United States)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a

  3. Toward quantitative prediction of charge mobility in organic semiconductors: tunneling enabled hopping model.

    Science.gov (United States)

    Geng, Hua; Peng, Qian; Wang, Linjun; Li, Haijiao; Liao, Yi; Ma, Zhiying; Shuai, Zhigang

    2012-07-10

    A tunneling-enabled hopping mechanism is proposed, providing a practical tool to quantitatively assess charge mobility in organic semiconductors. The paradoxical phenomena in TIPS-pentacene are thereby explained: optical probes indicate localized charges while transport measurements show band-like behavior. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Interpretive sociology of foreign policy: “agent” model of state behavior on the international arena

    Directory of Open Access Journals (Sweden)

    Ivan Nikolaevich Timofeev

    2017-12-01

    Full Text Available The article revisits the utility of sociological theories for students of international relations. The failure of IR scholars to predict the Ukrainian crisis revealed the limits of realism, which nevertheless remains the most influential IR theory. These limits prompt a rethinking of the prospects for convergence between IR and sociological theories. The pros and cons of holistic constructivist theory are examined. The article develops an “agent-focused” model composed of concepts from Max Weber’s interpretive sociology, Graham Allison’s typology of decision-making models, and Mark Haas’s model of the ideological origins of great-power politics. In doing so, it also revisits the concept of identity as a means to understand “social facts” and their influence on foreign policy. The emphasis on the “agent” rather than the “structure” is approached as an alternative to the holistic constructivism of Alexander Wendt and his epigones. The “agent” model is intended to be better suited to the study of great powers, which play an active role in setting the “structure’s” parameters. Three different approaches to the “agent” are considered: the “agent” as a state, as a bureaucratic body or structure within the state, and as decision-makers and their staff. The model is designed for further empirical research on Russian foreign policy.

  5. Characterization of Rock Mechanical Properties Using Lab Tests and Numerical Interpretation Model of Well Logs

    Directory of Open Access Journals (Sweden)

    Hao Xu

    2016-01-01

    Full Text Available The tight gas reservoir in the fifth member of the Xujiahe formation contains heterogeneous interlayers of sandstone and shale that are low in both porosity and permeability. Elastic characteristics of sandstone and shale are analyzed in this study based on petrophysics tests. The tests indicate that sandstone and mudstone samples have different stress-strain relationships. The rock tends to exhibit elastic-plastic deformation. The compressive strength correlates with confinement pressure and elastic modulus. The results based on thin-bed log interpretation match dynamic Young’s modulus and Poisson’s ratio predicted by theory. The compressive strength is calculated from density, elastic impedance, and clay contents. The tensile strength is calibrated using compressive strength. Shear strength is calculated with an empirical formula. Finally, log interpretation of rock mechanical properties is performed on the fifth member of the Xujiahe formation. Natural fractures in downhole cores and rock microscopic failure in the samples in the cross section demonstrate that tensile fractures were primarily observed in sandstone, and shear fractures can be observed in both mudstone and sandstone. Based on different elasticity and plasticity of different rocks, as well as the characteristics of natural fractures, a fracture propagation model was built.
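    As a reference for the log-based elastic properties mentioned above, the snippet below sketches the standard dynamic-moduli relations that convert density and sonic (P- and S-wave) velocities into a dynamic Young's modulus and Poisson's ratio. It is a minimal sketch with illustrative input values; the paper's empirical correlations for compressive and tensile strength (from density, elastic impedance, and clay content) are calibration-specific and are not reproduced.

```python
# Minimal sketch of the standard dynamic-moduli relations used when deriving
# Young's modulus and Poisson's ratio from density and sonic logs.
import numpy as np

def dynamic_moduli(rho, vp, vs):
    """rho in kg/m^3, vp and vs in m/s. Returns (Young's modulus in Pa, Poisson's ratio)."""
    g = rho * vs**2                                       # dynamic shear modulus
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))  # dynamic Poisson's ratio
    e = 2.0 * g * (1.0 + nu)                              # dynamic Young's modulus
    return e, nu

# Example with illustrative tight-sandstone-like values
E, nu = dynamic_moduli(rho=2550.0, vp=4200.0, vs=2600.0)
print(f"E = {E / 1e9:.1f} GPa, nu = {nu:.2f}")
```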

  6. Proper interpretation of dissolved nitrous oxide isotopes, production pathways, and emissions requires a modelling approach.

    Science.gov (United States)

    Thuss, Simon J; Venkiteswaran, Jason J; Schiff, Sherry L

    2014-01-01

    Stable isotopes (δ15N and δ18O) of the greenhouse gas N2O provide information about the sources and processes leading to N2O production and emission from aquatic ecosystems to the atmosphere. In turn, this describes the fate of nitrogen in the aquatic environment since N2O is an obligate intermediate of denitrification and can be a by-product of nitrification. However, due to exchange with the atmosphere, the δ values at typical concentrations in aquatic ecosystems differ significantly from both the source of N2O and the N2O emitted to the atmosphere. A dynamic model, SIDNO, was developed to explore the relationship between the isotopic ratios of N2O, N2O source, and the emitted N2O. If the N2O production rate or isotopic ratios vary, then the N2O concentration and isotopic ratios may vary or be constant, not necessarily concomitantly, depending on the synchronicity of production rate and source isotopic ratios. Thus prima facie interpretation of patterns in dissolved N2O concentrations and isotopic ratios is difficult. The dynamic model may be used to correctly interpret diel field data and allows for the estimation of the gas exchange coefficient, N2O production rate, and the production-weighted δ values of the N2O source in aquatic ecosystems. Combining field data with these modelling efforts allows this critical piece of nitrogen cycling and N2O flux to the atmosphere to be assessed.

  7. Interpretation of experiments and modeling of internal strains in Beryllium using a polycrystal model

    International Nuclear Information System (INIS)

    Tome, C.; Bourke, M.A.M.; Daymond, M.R.

    2000-01-01

    The elastic and plastic anisotropy of Be have been examined during a uniaxial compression test, by in-situ monitoring in a pulsed neutron beam. Comparisons between the measured hkil strains and the predictions from an elasto-plastic self-consistent (EPSC) model are made. Agreement is qualitatively correct for most planes in the elasto-plastic regime. Possible mechanisms responsible for the quantitative discrepancies between model and experiment are discussed

  8. Model-supported interpretation of Cedars-Sinai 201Tl SPECT polar maps

    International Nuclear Information System (INIS)

    Petta, P.

    1994-10-01

    Cardiac scintigraphic imaging yields information about regional heart muscle perfusion distribution. The scintigraphic technique does not directly depict the coronary arteries. Inferring alterations of the supplying vessels from the characteristics of abnormally perfused areas of the myocardium is the difficult task in the interpretation of these image data. We investigate ways of applying model-based techniques to this end. Encoding of a model of myocardial perfusion as background knowledge supplied to a first-order inductive learner yielded classifiers capable of identifying presence of coronary artery disease down to the level of determination of affected vessels with an accuracy comparable to other diagnostic systems for this domain. We also identified criteria setting a limit to the performance obtainable by any single approach, such as machine learning or probabilistic techniques. This led to the realization of a model-supported diagnostic system, integrating an abductive perfusion model with heuristics embodying other domain knowledge, such as common variations of vessel anatomy and information related to the image-delivering process, including typical image artefacts. This system achieves excellent accuracy in the identification of diseased vessels and is additionally capable of locating stenosed vessel segments of affected arteries with satisfactory precision. (author)

  9. A numerical cloud model to interpret the isotope content of hailstones

    International Nuclear Information System (INIS)

    Jouzel, J.; Brichet, N.; Thalmann, B.; Federer, B.

    1980-07-01

    Measurements of the isotope content of hailstones are frequently used to deduce their trajectories and updraft speeds within severe storms. The interpretation was made in the past on the basis of an adiabatic equilibrium model in which the stones grew exclusively by interaction with droplets and vapor. Using the 1D steady-state model of Hirsch with parametrized cloud physics, these unrealistic assumptions were dropped and the effects of interactions between droplets, drops, ice crystals and graupel on the concentrations of stable isotopes in hydrometeors were taken into account. The construction of the model is briefly discussed. The resulting height profiles of D and 18O in hailstones deviate substantially from the equilibrium case, rendering most earlier trajectory calculations invalid. It is also seen that in the lower cloud layers the ice of the stones is richer due to relaxation effects, but at higher cloud layers (T(a) < 0 °C) the ice is much poorer in isotopes. This yields a broader spread of the isotope values in the interval 0 > T(a) > -35 °C or, alternatively, it means that hailstones with a very large range of measured isotope concentrations grow in a smaller and therefore more realistic temperature interval. The use of the model in practice will be demonstrated

  10. ARTEFACT MOBILE DATA MODEL TO SUPPORT CULTURAL HERITAGE DATA COLLECTION AND INTERPRETATION

    Directory of Open Access Journals (Sweden)

    Z. S. Mohamed-Ghouse

    2012-07-01

    Full Text Available This paper discusses the limitation of existing data structures in mobile mapping applications to support archaeologists in managing the details of artefacts (any object made or modified by a human culture and later recovered by an archaeological endeavour) excavated at a cultural heritage site. The current data structure in the mobile mapping application allows archaeologists to record only one artefact per test pit location. In reality, more than one artefact can be excavated from the same test pit location. A spatial data model called the Artefact Mobile Data Model (AMDM) was developed applying existing Relational Data Base Management System (RDBMS) techniques to overcome the limitation. The data model was implemented in a mobile database environment called SprintDB Pro, which was in turn connected to the ArcPad 7.1 mobile mapping application through Open Data Base Connectivity (ODBC). In addition, the design of a user-friendly application built on top of AMDM to interpret and record the technology associated with each artefact excavated in the field is also discussed in the paper. In summary, the paper discusses the design and implementation of a data model to facilitate the collection of artefacts in the field using an integrated mobile mapping and database approach.
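    The one-to-many relationship that motivates AMDM (one test pit, many artefact records) is easy to see in relational terms. The sketch below is illustrative only: it uses SQLite rather than SprintDB Pro/ODBC, and the table and column names are assumptions, not the published AMDM schema.

```python
# Toy relational sketch of a one-to-many test-pit/artefact structure (not the AMDM schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_pit (
    pit_id    INTEGER PRIMARY KEY,
    easting   REAL,
    northing  REAL
);
CREATE TABLE artefact (
    artefact_id INTEGER PRIMARY KEY,
    pit_id      INTEGER NOT NULL REFERENCES test_pit(pit_id),
    material    TEXT,
    technology  TEXT,
    depth_cm    REAL
);
""")
conn.execute("INSERT INTO test_pit VALUES (1, 314500.0, 5812300.0)")
conn.executemany(
    "INSERT INTO artefact (pit_id, material, technology, depth_cm) VALUES (?, ?, ?, ?)",
    [(1, "chert", "flake", 12.0), (1, "quartzite", "core", 20.5)],  # two finds, one pit
)
for row in conn.execute(
    "SELECT artefact_id, material, technology FROM artefact WHERE pit_id = 1"
):
    print(row)
```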

  11. Analysis of interactions among the barriers to JIT production: interpretive structural modelling approach

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-09-01

    `Survival of the fittest' is the reality in modern global competition. Organizations around the globe are adopting or willing to embrace just-in-time (JIT) production to reinforce their competitiveness. Even though JIT is one of the most powerful inventory management methodologies, it is not free from barriers. Barriers derail the implementation of a JIT production system. One of the most significant tasks of top management is to identify and understand the relationships between the barriers to JIT production in order to alleviate their adverse effects. The aims of this paper are to study the barriers hampering the implementation of successful JIT production and to analyse the interactions among the barriers using the interpretive structural modelling technique. Twelve barriers have been identified after reviewing the literature. This paper offers a roadmap for preparing an action plan to tackle the barriers to successful implementation of JIT production.
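    Since interpretive structural modelling recurs throughout these records, a minimal sketch of its core calculation may help: starting from an expert-judged binary "barrier i influences barrier j" matrix, compute the transitive reachability matrix and read off each barrier's driving power and dependence (the axes used in a MICMAC plot). The 4x4 adjacency matrix below is an illustrative assumption, not the paper's twelve-barrier data.

```python
# Hedged sketch of the reachability / driving-power / dependence step of ISM.
import numpy as np

def reachability(adjacency):
    """Transitive closure (Warshall-style) of a binary influence matrix."""
    n = adjacency.shape[0]
    reach = adjacency | np.eye(n, dtype=int)           # every element reaches itself
    for k in range(n):
        reach = reach | (reach[:, [k]] & reach[[k], :])
    return reach

# Illustrative expert judgments: barrier 0 drives 1, 1 drives 2, 2 drives 3
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=int)

R = reachability(A)
driving_power = R.sum(axis=1)   # how many barriers each one ultimately influences
dependence = R.sum(axis=0)      # how many barriers ultimately influence it
print(R, driving_power, dependence, sep="\n")
```

    Level partitioning and the final digraph follow from intersecting each element's reachability and antecedent sets, which can be built from the same matrix R.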

  12. Interpretation for "high"-Tc of the totally interconnected solution of the Ma and Lee model

    International Nuclear Information System (INIS)

    Wiecko, C.

    1988-09-01

    The previously presented totally interconnected (mean-field) approximation of the Ma and Lee model captures very well many ingredients of the present understanding of high-Tc superconductors. The picture is that of a disordered grain with a variable number of particles available for an attractive on-site pairing interaction, embedded in a reservoir of normal particles which fixes the chemical potential. An interesting effect, in which Tc is first absent and then rises sharply and decays slowly with disorder, appears when the pairing coupling is weak compared with the hopping probability for single particles. An interpretation is given in terms of one-particle Anderson localization theory and standard mechanisms. (author). 13 refs, 4 figs

  13. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N

    2017-01-01

    PAPER\\ud Effects of waveform model systematics on the interpretation of GW150914\\ud B P Abbott1, R Abbott1, T D Abbott2, M R Abernathy3, F Acernese4,5, K Ackley6, C Adams7, T Adams8, P Addesso9,144, R X Adhikari1, V B Adya10, C Affeldt10, M Agathos11, K Agatsuma11, N Aggarwal12, O D Aguiar13, L Aiello14,15, A Ain16, P Ajith17, B Allen10,18,19, A Allocca20,21, P A Altin22, A Ananyeva1, S B Anderson1, W G Anderson18, S Appert1, K Arai1, M C Araya1, J S Areeda23, N Arnaud24, K G Arun25, S Ascenz...

  14. A Time-Space Symmetry Based Cylindrical Model for Quantum Mechanical Interpretations

    Science.gov (United States)

    Vo Van, Thuan

    2017-12-01

    Following a bi-cylindrical model of geometrical dynamics, our study shows that a 6D gravitational equation leads to a geodesic description in an extended symmetrical time-space, which fits Hubble-like expansion on a microscopic scale. As a duality, the geodesic solution is mathematically equivalent to the basic Klein-Gordon-Fock equations of free massive elementary particles, in particular the squared Dirac equations of leptons. The quantum indeterminism is shown to originate from space-time curvatures. Interpretation of some important issues of quantum mechanical reality is carried out in comparison with the 5D space-time-matter theory. A solution of the lepton mass hierarchy is proposed by extending to higher-dimensional curvatures of time-like hyper-spherical surfaces beyond those of the cylindrical dynamical geometry. As a result, reasonable charged lepton mass ratios have been calculated, which could be tested experimentally.

  15. Modeling and Simulation With Operational Databases to Enable Dynamic Situation Assessment & Prediction

    Science.gov (United States)

    2010-11-01

    subsections discuss the design of the simulations. 3.12.1 Lanchester5D Simulation: A Lanchester simulation was developed to conduct performance benchmarks using the WarpIV Kernel and HyperWarpSpeed. The Lanchester simulation contains a user-definable number of grid cells in which blue and red forces engage in battle using Lanchester equations. Having a user-definable number of grid cells enables the simulation to be stressed with high entity
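    For readers unfamiliar with them, the classical aimed-fire Lanchester equations referenced above couple the attrition of two forces as dB/dt = -r*R and dR/dt = -b*B. The sketch below is a simple Euler integration with illustrative coefficients and initial strengths; it is not the cited Lanchester5D/HyperWarpSpeed implementation.

```python
# Minimal Euler-step sketch of aimed-fire Lanchester attrition in one grid cell.
def lanchester(blue0, red0, b_eff, r_eff, dt=0.01, steps=1000):
    blue, red = blue0, red0
    for _ in range(steps):
        d_blue = -r_eff * red * dt   # blue losses proportional to red strength
        d_red = -b_eff * blue * dt   # red losses proportional to blue strength
        blue = max(blue + d_blue, 0.0)
        red = max(red + d_red, 0.0)
        if blue == 0.0 or red == 0.0:
            break
    return blue, red

# Illustrative engagement: coefficients and initial strengths are assumptions
print(lanchester(blue0=100.0, red0=80.0, b_eff=0.05, r_eff=0.04))
```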

  16. Causality in cancer research: a journey through models in molecular epidemiology and their philosophical interpretation

    Directory of Open Access Journals (Sweden)

    Paolo Vineis

    2017-06-01

    Full Text Available In the last decades, Systems Biology (including cancer research) has been driven by technology, statistical modelling and bioinformatics. In this paper we try to bring biological and philosophical thinking back. We thus aim at making different traditions of thought compatible: (a) causality in epidemiology and in philosophical theorizing, notably the “sufficient-component-cause framework” and the “mark transmission” approach; (b) new acquisitions about disease pathogenesis, e.g. the “branched model” in cancer, and the role of biomarkers in this process; (c) the burgeoning of omics research, with a large number of “signals” and associations that need to be interpreted. In the paper we first summarize the current views on carcinogenesis, and then explore the relevance of current philosophical interpretations of “cancer causes”. We try to offer a unifying framework to incorporate biomarkers and omic data into causal models, referring to a position called “evidential pluralism”. According to this view, causal reasoning is based on both “evidence of difference-making” (e.g. associations) and “evidence of underlying biological mechanisms”. We conceptualize the way scientists detect and trace signals in terms of information transmission, which is a generalization of the mark transmission theory developed by the philosopher Wesley Salmon. Our approach is capable of helping us conceptualize how heterogeneous factors, such as micro- and macro-biological and psycho-social factors, are causally linked. This is important not only to understand cancer etiology, but also to design public health policies that target the right causal factors at the macro-level.

  17. Risk assessment by integrating interpretive structural modeling and Bayesian network, case of offshore pipeline project

    International Nuclear Information System (INIS)

    Wu, Wei-Shing; Yang, Chen-Feng; Chang, Jung-Chuan; Château, Pierre-Alexandre; Chang, Yang-Chi

    2015-01-01

    The sound development of marine resource usage relies on a strong maritime engineering industry. The perilous marine environment poses the highest risk to all maritime work. It is therefore imperative to reduce the risk associated with maritime work by using analytical methods beyond engineering techniques alone. This study addresses this issue by using an integrated interpretive structural modeling (ISM) and Bayesian network (BN) approach in a risk assessment context. Mitigating or managing maritime risk relies primarily on domain expert experience and knowledge. ISM can be used to incorporate expert knowledge in a systematic manner and helps to impose order and direction on the complex relationships that exist among system elements. Working with experts, this research used ISM to clearly specify an engineering risk factor relationship represented by a cause–effect diagram, which forms the structure of the BN. The expert subjective judgments were further transformed into a prior and conditional probability set to be embedded in the BN. We used the BN to evaluate the risks of two offshore pipeline projects in Taiwan. The results indicated that the BN can provide explicit risk information to support better project management. - Highlights: • We adopt an integrated method for risk assessment of offshore pipeline projects. • We conduct semi-structured interviews with the experts for risk factor identification. • Interpretive structural modeling helps to form the digraph of the Bayesian network (BN). • We perform the risk analysis with the experts by building a BN. • Risk evaluations of two case studies using the BN show the effectiveness of the methods.

  18. Using physiologically based models for clinical translation: predictive modelling, data interpretation or something in-between?

    Science.gov (United States)

    Niederer, Steven A; Smith, Nic P

    2016-12-01

    Heart disease continues to be a significant clinical problem in Western society. Predictive models and simulations that integrate physiological understanding with patient information derived from clinical data have huge potential to contribute to improving our understanding of both the progression and treatment of heart disease. In particular they provide the potential to improve patient selection and optimisation of cardiovascular interventions across a range of pathologies. Currently a significant proportion of this potential is still to be realised. In this paper we discuss the opportunities and challenges associated with this realisation. Reviewing the successful elements of model translation for biophysically based models and the emerging supporting technologies, we propose three distinct modes of clinical translation. Finally we outline the challenges ahead that will be fundamental to overcome if the ultimate goal of fully personalised clinical cardiac care is to be achieved. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  19. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    Science.gov (United States)

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while the "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep
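    The forward computation implied by this description is compact enough to sketch: a single Gaussian pooling field (the "where" parameters) shared across all feature maps, and one weight per feature map (the "what" parameters). The shapes, the Gaussian form and the random inputs below are illustrative assumptions; parameter estimation, which is the substance of the paper, is not shown.

```python
# Hedged sketch of the fwRF forward prediction: shared Gaussian pooling ("where")
# plus one weight per feature map ("what").
import numpy as np

def gaussian_pool_field(h, w, x0, y0, sigma):
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()                                 # normalized pooling field

def fwrf_predict(feature_maps, weights, x0, y0, sigma):
    """feature_maps: (K, H, W) array; weights: (K,) 'what' parameters."""
    k, h, w = feature_maps.shape
    g = gaussian_pool_field(h, w, x0, y0, sigma)       # shared 'where' field
    pooled = (feature_maps * g).sum(axis=(1, 2))       # one pooled value per map
    return float(weights @ pooled)                     # predicted voxel response

rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 32, 32))               # illustrative feature maps
w = rng.standard_normal(8)
print(fwrf_predict(fmaps, w, x0=16.0, y0=16.0, sigma=4.0))
```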

  20. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    Science.gov (United States)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.;  Mlawer, E.;  Morcrette, J.-J.;  O'Hirok, W.;  Räisänen, P.;  Ramaswamy, V.;  Ritter, B.;  Rozanov, E.;  Schlesinger, M.;  Shibata, K.;  Sporyshev, P.;  Sun, Z.;  Wendisch, M.;  Wood, N.;  Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for

  1. Enabling proactive agricultural drainage reuse for improved water quality through collaborative networks and low-complexity data-driven modelling

    OpenAIRE

    Zia, Huma

    2015-01-01

    With increasing prevalence of Wireless Sensor Networks (WSNs) in agriculture and hydrology, there exists an opportunity for providing a technologically viable solution for the conservation of already scarce fresh water resources. In this thesis, a novel framework is proposed for enabling a proactive management of agricultural drainage and nutrient losses at farm scale where complex models are replaced by in-situ sensing, communication and low complexity predictive models suited to an autonomo...

  2. Enabling Integrated Decision Making for Electronic-Commerce by Modelling an Enterprise's Sharable Knowledge.

    Science.gov (United States)

    Kim, Henry M.

    2000-01-01

    An enterprise model, a computational model of knowledge about an enterprise, is a useful tool for integrated decision-making by e-commerce suppliers and customers. Sharable knowledge, once represented in an enterprise model, can be integrated by the modeled enterprise's e-commerce partners. Presents background on enterprise modeling, followed by…

  3. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.
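    To make the latent-CTMC idea concrete, the sketch below computes the panel-data log-likelihood for a toy model in which two observed disease states are each backed by two latent states: latent transition probabilities come from the matrix exponential of the rate matrix Q, and a forward recursion marginalizes over the latent states at each observation time. Q, the latent-to-observed mapping, the initial distribution and the observation times are illustrative assumptions; the paper's EM algorithm for estimating Q is not shown.

```python
# Hedged sketch of a panel-data likelihood for a latent CTMC (toy parameters).
import numpy as np
from scipy.linalg import expm

# Four latent states; 0,1 map to observed state "healthy" (0), 2,3 to "ill" (1)
Q = np.array([[-0.6,  0.3,  0.2,  0.1],
              [ 0.1, -0.5,  0.2,  0.2],
              [ 0.0,  0.1, -0.4,  0.3],
              [ 0.1,  0.0,  0.2, -0.3]])
emit = np.array([[1.0, 0.0],     # deterministic latent -> observed mapping
                 [1.0, 0.0],
                 [0.0, 1.0],
                 [0.0, 1.0]])

def panel_loglik(times, observed, pi0):
    """times: observation times; observed: observed state index at each time."""
    alpha = pi0 * emit[:, observed[0]]
    for (t_prev, t_next), obs in zip(zip(times, times[1:]), observed[1:]):
        P = expm(Q * (t_next - t_prev))     # latent transition probabilities over the gap
        alpha = (alpha @ P) * emit[:, obs]  # propagate, then condition on the observation
    return np.log(alpha.sum())

print(panel_loglik(times=[0.0, 1.0, 2.5],
                   observed=[0, 0, 1],
                   pi0=np.array([0.5, 0.5, 0.0, 0.0])))
```

    The matrix exponential is what keeps the latent formulation tractable, while the aggregation of several latent states per disease state yields non-exponential sojourn times in the observed process.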

  4. Application of a numerical model in the interpretation of a leaky aquifer test

    International Nuclear Information System (INIS)

    Schroth, B.; Narasimhan, T.N.

    1997-01-01

    The potential use of numerical models in aquifer analysis is by no means a new concept; yet relatively few engineers and scientists are taking advantage of this powerful tool that is more convenient to use now than ever before. In this technical note the authors present an example of using a numerical model in an integrated analysis of data from a three-layer leaky aquifer system involving well-bore storage, skin effects, variable discharge, and observation wells in the pumped aquifer and in an unpumped aquifer. The modeling detail may differ for other cases. The intent is to show that interpretation can be achieved with reduced bias by reducing assumptions in regard to system geometry, flow rate, and other details. A multiwell aquifer test was carried out at a site on the western part of the Lawrence Livermore National Laboratory (LLNL), located about 60 kilometers east of San Francisco. The test was conducted to hydraulically characterize one part of the site and thus help develop remediation strategies to alleviate the ground-water contamination

  5. Two Higgs Doublet Model and Model Independent Interpretation of Neutral Higgs Boson Searches

    CERN Document Server

    Abbiendi, G.; Ainsley, C.; Akesson, P.F.; Alexander, G.; Allison, John; Anderson, K.J.; Arcelli, S.; Asai, S.; Ashby, S.F.; Axen, D.; Azuelos, G.; Bailey, I.; Ball, A.H.; Barberio, E.; Barlow, Roger J.; Baumann, S.; Behnke, T.; Bell, Kenneth Watson; Bella, G.; Bellerive, A.; Benelli, G.; Bentvelsen, S.; Bethke, S.; Biebel, O.; Bloodworth, I.J.; Boeriu, O.; Bock, P.; Bohme, J.; Bonacorsi, D.; Boutemeur, M.; Braibant, S.; Bright-Thomas, P.; Brigliadori, L.; Brown, Robert M.; Burckhart, H.J.; Cammin, J.; Capiluppi, P.; Carnegie, R.K.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Clarke, P.E.L.; Clay, E.; Cohen, I.; Cooke, O.C.; Couchman, J.; Couyoumtzelis, C.; Coxe, R.L.; Csilling, A.; Cuffiani, M.; Dado, S.; Dallavalle, G.Marco; Dallison, S.; de Roeck, A.; de Wolf, E.; Dervan, P.; Desch, K.; Dienes, B.; Dixit, M.S.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Estabrooks, P.G.; Etzion, E.; Fabbri, F.; Fanti, M.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Futyan, D.I.; Gagnon, P.; Gary, J.W.; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Glenzinski, D.; Goldberg, J.; Grandi, C.; Graham, K.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Hajdu, C.; Hanson, G.G.; Hansroul, M.; Hapke, M.; Harder, K.; Harel, A.; Harin-Dirac, M.; Hauke, A.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hocker, James Andrew; Hoffman, Kara Dion; Homer, R.J.; Honma, A.K.; Horvath, D.; Hossain, K.R.; Howard, R.; Huntemeyer, P.; Igo-Kemenes, P.; Ishii, K.; Jacob, F.R.; Jawahery, A.; Jeremie, H.; Jones, C.R.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kartvelishvili, V.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Kokott, T.P.; Komamiya, S.; Kowalewski, Robert V.; Kress, T.; Krieger, P.; von Krogh, J.; Kuhl, T.; Kupper, M.; Kyberd, P.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Lawson, I.; Layter, J.G.; Leins, A.; Lellouch, D.; Letts, J.; Levinson, L.; Liebisch, R.; Lillich, J.; List, B.; Littlewood, C.; Lloyd, A.W.; Lloyd, S.L.; Loebinger, F.K.; Long, G.D.; Losty, M.J.; Lu, J.; Ludwig, J.; Macchiolo, A.; Macpherson, A.; Mader, W.; Marcellini, S.; Marchant, T.E.; Martin, A.J.; Martin, J.P.; Martinez, G.; Mashimo, T.; Mattig, Peter; McDonald, W.John; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Mendez-Lorenzo, P.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Mohr, W.; Montanari, A.; Mori, T.; Nagai, K.; Nakamura, I.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oakham, F.G.; Odorici, F.; Ogren, H.O.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Patt, J.; Pfeifenschneider, P.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rembser, C.; Renkel, P.; Rick, H.; Rodning, N.; Roney, J.M.; Rosati, S.; Roscoe, K.; Rossi, A.M.; Rozen, Y.; Runge, K.; Runolfsson, O.; Rust, D.R.; Sachs, K.; Saeki, T.; Sahr, O.; Sarkisyan, E.K.G.; Sbarra, C.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Shepherd-Themistocleous, C.H.; Sherwood, P.; Siroli, G.P.; Skuja, A.; Smith, A.M.; Snow, G.A.; Sobie, R.; Soldner-Rembold, S.; Spagnolo, S.; Sproston, M.; Stahl, A.; Stephens, K.; Stoll, K.; Strom, David M.; 
Strohmer, R.; Stumpf, L.; Surrow, B.; Talbot, S.D.; Tarem, S.; Taylor, R.J.; Teuscher, R.; Thiergen, M.; Thomas, J.; Thomson, M.A.; Torrence, E.; Towers, S.; Toya, D.; Trefzger, T.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Vachon, B.; Vannerem, P.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; White, J.S.; Wilson, G.W.; Wilson, J.A.; Wyatt, T.R.; Yamashita, S.; Zacek, V.; Zer-Zion, D.

    2001-01-01

    Searches for the neutral Higgs bosons h0 and A0 are used to obtain limits on the Type II Two Higgs Doublet Model (2HDM(II)) with no CP violation in the Higgs sector and no additional particles besides the five Higgs bosons. The analysis combines approximately 170 pb-1 of data collected with the OPAL detector at √s ≈ 189 GeV with previous runs at √s ≈ mZ and √s ≈ 183 GeV. The searches are sensitive to the h0, A0 -> qq, gg, tau+tau- and h0 -> A0A0 decay modes of the Higgs bosons. For the first time, the 2HDM(II) parameter space is explored in a detailed scan, and new flavour-independent analyses are applied to examine regions in which the neutral Higgs bosons decay predominantly into light quarks or gluons. Model-independent limits are also given.

  6. Model-unrestricted scattering potentials for light ions and their interpretation in the folding model

    International Nuclear Information System (INIS)

    Ermer, M.; Clement, H.; Frank, G.; Grabmayr, P.; Heberle, N.; Wagner, G.J.

    1989-01-01

    High-quality data for elastic proton, deuteron and α-particle scattering on 40Ca and 208Pb at 26-30 MeV/N have been analyzed in terms of the model-unrestricted Fourier-Bessel concept. While extracted scattering potentials show substantial deviations from Woods-Saxon shapes, their real central parts are well described by folding calculations using a common effective nucleon-nucleon interaction with a weak density dependence. (orig.)

  7. Underground gas storage Lobodice geological model development based on 3D seismic interpretation

    International Nuclear Information System (INIS)

    Kopal, L.

    2015-01-01

    Aquifer-type underground gas storage (UGS) Lobodice was developed in the Central Moravian part of the Carpathian Foredeep in the Czech Republic 50 years ago. In order to improve knowledge of the UGS geological structure, a 3D seismic survey was performed in 2009. The reservoir is rather shallow (400-500 m below the surface) and is located in a complicated setting, so the field acquisition phase faced many limitations. This article describes the workflow from 3D seismic field data acquisition to geological model creation. The outcomes of this workflow define the geometry of the UGS reservoir, its tectonics, the structure spill point, the cap rock, and the sealing features of the structure. Improved geological knowledge of the reservoir enables lower-risk placement of new wells for increasing the UGS withdrawal rate. (authors)

  8. Interpreting expression data with metabolic flux models: predicting Mycobacterium tuberculosis mycolic acid production.

    Directory of Open Access Journals (Sweden)

    Caroline Colijn

    2009-08-01

    Full Text Available Metabolism is central to cell physiology, and metabolic disturbances play a role in numerous disease states. Despite its importance, the ability to study metabolism at a global scale using genomic technologies is limited. In principle, complete genome sequences describe the range of metabolic reactions that are possible for an organism, but cannot quantitatively describe the behaviour of these reactions. We present a novel method for modeling metabolic states using whole-cell measurements of gene expression. Our method, which we call E-Flux (as a combination of flux and expression), extends the technique of Flux Balance Analysis by modeling maximum flux constraints as a function of measured gene expression. In contrast to previous methods for metabolically interpreting gene expression data, E-Flux utilizes a model of the underlying metabolic network to directly predict changes in metabolic flux capacity. We applied E-Flux to Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB). Key components of mycobacterial cell walls are mycolic acids, which are targets for several first-line TB drugs. We used E-Flux to predict the impact of 75 different drugs, drug combinations, and nutrient conditions on mycolic acid biosynthesis capacity in M. tuberculosis, using a public compendium of over 400 expression arrays. We tested our method using a model of mycolic acid biosynthesis as well as on a genome-scale model of M. tuberculosis metabolism. Our method correctly predicts seven of the eight known fatty acid inhibitors in this compendium and makes accurate predictions regarding the specificity of these compounds for fatty acid biosynthesis. Our method also predicts a number of additional potential modulators of TB mycolic acid biosynthesis. E-Flux thus provides a promising new approach for algorithmically predicting metabolic state from gene expression data.
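    The core of the approach lends itself to a toy sketch: an ordinary flux balance linear program whose upper flux bounds are scaled by relative gene expression. The three-reaction network, the expression values and the linear scaling rule below are illustrative assumptions, not the genome-scale M. tuberculosis model or the exact E-Flux bound function.

```python
# Toy sketch of expression-constrained flux balance analysis (E-Flux-style bounds).
import numpy as np
from scipy.optimize import linprog

# Reactions: v1 uptake -> A, v2: A -> B (gene g2), v3: B -> product (gene g3)
S = np.array([[1.0, -1.0,  0.0],    # steady-state balance of metabolite A
              [0.0,  1.0, -1.0]])   # steady-state balance of metabolite B
expression = {"g2": 120.0, "g3": 30.0}            # hypothetical expression values
v_max = 10.0
scale = max(expression.values())
bounds = [
    (0.0, v_max),                                 # uptake, no gene constraint
    (0.0, v_max * expression["g2"] / scale),      # expression-scaled bound for v2
    (0.0, v_max * expression["g3"] / scale),      # expression-scaled bound for v3
]
c = np.array([0.0, 0.0, -1.0])                    # maximize v3 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("max product flux:", res.x[2])              # limited here by g3 expression
```

    Comparing the optimal objective under different expression profiles is, in miniature, how changes in biosynthetic capacity are predicted across conditions.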

  9. A formal approach to the analysis of clinical computer-interpretable guideline modeling languages.

    Science.gov (United States)

    Grando, M Adela; Glasspool, David; Fox, John

    2012-01-01

    To develop proof strategies to formally study the expressiveness of workflow-based languages, and to investigate their applicability to clinical computer-interpretable guideline (CIG) modeling languages. We propose two strategies for studying the expressiveness of workflow-based languages based on a standard set of workflow patterns expressed as Petri nets (PNs) and notions of congruence and bisimilarity from process calculus. Proof that a PN-based pattern P can be expressed in a language L can be carried out semi-automatically. Proof that a language L cannot provide the behavior specified by a PN P requires proof by exhaustion based on analysis of cases and cannot be performed automatically. The proof strategies are generic, but we exemplify their use with a particular CIG modeling language, PROforma. To illustrate the method we evaluate the expressiveness of PROforma against three standard workflow patterns and compare our results with a previous, similar but informal comparison. We show that the two proof strategies are effective in evaluating a CIG modeling language against standard workflow patterns. We find that using the proposed formal techniques we obtain different results from a comparable previously published but less formal study. We discuss the utility of these analyses as the basis for principled extensions to CIG modeling languages. Additionally we explain how the same proof strategies can be reused to prove the satisfaction of patterns expressed in the declarative language CIGDec. The proof strategies we propose are useful tools for analysing the expressiveness of CIG modeling languages. This study provides good evidence of the benefits of applying formal methods of proof over semi-formal ones. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. The value of soil respiration measurements for interpreting and modeling terrestrial carbon cycling

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Claire L.; Bond-Lamberty, Ben; Desai, Ankur R.; Lavoie, Martin; Risk, Dave; Tang, Jianwu; Todd-Brown, Katherine; Vargas, Rodrigo

    2016-11-16

    A recent acceleration of model-data synthesis activities has leveraged many terrestrial carbon (C) datasets, but utilization of soil respiration (RS) data has not kept pace with other types such as eddy covariance (EC) fluxes and soil C stocks. Here we argue that RS data, including non-continuous measurements from survey sampling campaigns, have unrealized value and should be utilized more extensively and creatively in data synthesis and modeling activities. We identify three major challenges in interpreting RS data, and discuss opportunities to address them. The first challenge is that when RS is compared to ecosystem respiration (RECO) measured from EC towers, it is not uncommon to find substantial mismatch, indicating one or both flux methodologies are unreliable. We argue the most likely cause of mismatch is unreliable EC data, and there is an unrecognized opportunity to utilize RS for EC quality control. The second challenge is that RS integrates belowground heterotrophic (RH) and autotrophic (RA) activity, whereas modelers generally prefer partitioned fluxes, and few models include an explicit RS output. Opportunities exist to use the total RS flux for data assimilation and model benchmarking methods rather than less-certain partitioned fluxes. Pushing for more experiments that not only partition RS but also monitor the age of RA and RH, as well as for the development of belowground RA components in models, would allow for more direct comparison between measured and modeled values. The third challenge is that soil respiration is generally measured at a very different resolution than that needed for comparison to EC or ecosystem- to global-scale models. Measuring soil fluxes with finer spatial resolution and more extensive coverage, and downscaling EC fluxes to match the scale of RS, will improve chamber and tower comparisons. Opportunities also exist to estimate RH at regional scales by implementing decomposition functional types, akin to plant functional

  11. Modeling extracellular electrical stimulation: I. Derivation and interpretation of neurite equations.

    Science.gov (United States)

    Meffin, Hamish; Tahayori, Bahman; Grayden, David B; Burkitt, Anthony N

    2012-12-01

    Neuroprosthetic devices, such as cochlear and retinal implants, work by directly stimulating neurons with extracellular electrodes. This is commonly modeled using the cable equation with an applied extracellular voltage. In this paper a framework for modeling extracellular electrical stimulation is presented. To this end, a cylindrical neurite with confined extracellular space in the subthreshold regime is modeled in three-dimensional space. Through cylindrical harmonic expansion of Laplace's equation, we derive the spatio-temporal equations governing different modes of stimulation, referred to as longitudinal and transverse modes, under two types of boundary conditions. The longitudinal mode is described by the well-known cable equation; however, the transverse modes are described by a novel ordinary differential equation. For the longitudinal mode, we find that different electrotonic length constants apply under the two different boundary conditions. Equations connecting current density to voltage boundary conditions are derived and used to calculate the trans-impedance of the neurite plus thin extracellular sheath. A detailed explanation of depolarization mechanisms and the dominant current pathway under different modes of stimulation is provided. The analytic results derived here enable the estimation of a neurite's membrane potential under extracellular stimulation, hence bypassing the heavy computational cost of using numerical methods.
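    As a point of reference for the longitudinal mode mentioned above, the sketch below solves a steady-state passive cable driven by the second spatial derivative of an extracellular potential (the classical "activating function" picture). The geometry, constants, sign convention and point-source field are illustrative assumptions and do not reproduce the paper's derivation for a confined extracellular sheath.

```python
# Steady-state passive cable under an extracellular field (illustrative sketch only).
import numpy as np

n, h = 201, 5e-6                 # number of nodes and node spacing [m]
lam = 200e-6                     # assumed electrotonic length constant [m]
x = (np.arange(n) - n // 2) * h

# Extracellular potential from a hypothetical point source 50 um above the fibre
z = 50e-6
Ve = 1.0 / np.sqrt(x**2 + z**2)  # arbitrary units

# Discretize lam^2 * Vm'' - Vm = -lam^2 * Ve'', with Vm clamped to rest at the far ends
act = lam**2 * np.gradient(np.gradient(Ve, h), h)      # activating-function term
A = np.zeros((n, n))
k = lam**2 / h**2
for i in range(1, n - 1):
    A[i, i - 1] = k
    A[i, i] = -2.0 * k - 1.0
    A[i, i + 1] = k
A[0, 0] = A[-1, -1] = 1.0
rhs = -act
rhs[0] = rhs[-1] = 0.0
Vm = np.linalg.solve(A, rhs)
print("peak depolarization (a.u.):", Vm.max())
```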

  12. A simple conceptual model to interpret the 100 000 years dynamics of paleo-climate records

    Directory of Open Access Journals (Sweden)

    C. S. Quiroga Lombard

    2010-10-01

    Full Text Available Spectral analyses performed on records of cosmogenic nuclides reveal a group of dominant spectral components during the Holocene period. Only a few of them are related to known solar cycles, i.e., the De Vries/Suess, Gleissberg and Hallstatt cycles. The origin of the others remains uncertain. On the other hand, time series of North Atlantic atmospheric/sea surface temperatures during the last ice age display the existence of repeated large-scale warming events, called Dansgaard-Oeschger (DO) events, spaced around multiples of 1470 years. The De Vries/Suess and Gleissberg cycles, with periods close to 1470/7 (~210 years) and 1470/17 (~86.5 years), have been proposed to explain these observations. In this work we found that a conceptual bistable model forced with the De Vries/Suess and Gleissberg cycles plus noise displays a group of dominant frequencies similar to those obtained in the Fourier spectra of paleo-climate records during the Holocene. Moreover, we show that by simply changing the noise amplitude in the model we obtain power spectra similar to those corresponding to GISP2 (Greenland Ice Sheet Project 2) δ18O during the last ice age. These results give a general dynamical framework which allows us to interpret the main characteristics of paleoclimate records from the last 100 000 years.
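    The conceptual model described above can be sketched as an overdamped double-well system forced by the De Vries/Suess (~210 yr) and Gleissberg (~86.5 yr) periods plus white noise. The well shape, relaxation timescale, forcing amplitudes and noise level in the sketch below are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch: noisy, periodically forced bistable (double-well) system.
import numpy as np

rng = np.random.default_rng(1)
dt, n_years = 1.0, 100_000                       # 1-year step over 100 kyr
t = np.arange(n_years) * dt
a1, a2, sigma, tau = 0.02, 0.012, 0.15, 50.0     # forcing amplitudes, noise level, relaxation time [yr]
forcing = a1 * np.sin(2 * np.pi * t / 210.0) + a2 * np.sin(2 * np.pi * t / 86.5)

x = np.empty(n_years)
x[0] = 1.0                                       # start in one of the two wells
for i in range(1, n_years):
    drift = (x[i - 1] - x[i - 1] ** 3) / tau     # double-well drift, dV/dx = (x^3 - x)/tau
    x[i] = (x[i - 1] + dt * (drift + forcing[i - 1])
            + np.sqrt(dt) * sigma * rng.standard_normal())

# Power spectrum of the synthetic record, for comparison with proxy spectra
freq = np.fft.rfftfreq(n_years, d=dt)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
peak = 1.0 / freq[1:][power[1:].argmax()]
print(f"strongest spectral peak near {peak:.0f} years")
```

    Varying sigma in such a sketch changes how often noise-assisted transitions between the wells occur, which is the knob the paper turns to move between Holocene-like and glacial (DO-like) spectra.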

  13. Applying Interpretive Structural Modeling to Cost Overruns in Construction Projects in the Sultanate of Oman

    Directory of Open Access Journals (Sweden)

    K. Alzebdeh

    2015-06-01

    Full Text Available Cost overruns in construction projects are a problem faced by project managers, engineers, and clients throughout the Middle East. Globally, several studies in the literature have focused on identifying the causes of these overruns and used statistical methods to rank them according to their impacts. None of these studies have considered the interactions among these factors. This paper examines interpretive structural modelling (ISM) as a viable technique for modelling complex interactions among factors responsible for cost overruns in construction projects in the Sultanate of Oman. In particular, thirteen interrelated factors associated with cost overruns were identified, along with their contextual interrelationships. Application of ISM leads to organizing these factors in a hierarchical structure which effectively demonstrates their interactions in a simple way. Four factors were found to be at the root of cost overruns: instability of the US dollar, changes in governmental regulations, faulty cost estimation, and poor coordination among project parties. Taking appropriate actions to minimize the influence of these factors can ultimately lead to better control of future project costs. This study is of value to managers and decision makers because it provides a powerful yet very easy-to-apply approach for investigating the problem of cost overruns and other similar issues.

  14. Interpreting the R_K(*) anomaly in the colored Zee-Babu model

    Science.gov (United States)

    Guo, Shu-Yuan; Han, Zhi-Long; Li, Bin; Liao, Yi; Ma, Xiao-Dong

    2018-03-01

    We consider the feasibility of interpreting the R_K(*) anomaly in the colored Zee-Babu model. The model generates neutrino masses at two loops with the help of a scalar leptoquark S ∼ (3, 3, -1/3) and a scalar diquark ω ∼ (6, 1, -2/3), and contributes to the transition b → s ℓ-ℓ+ via the exchange of a leptoquark S at tree level. Under constraints from lepton flavor violating (LFV) and flavor changing neutral current (FCNC) processes, and direct collider searches for heavy particles, we acquire certain parameter space that can accommodate the R_K(*) anomaly for both normal (NH) and inverted (IH) hierarchies of neutrino masses. We further examine the LFV decays of the B meson, and find a strong correlation with the neutrino mass hierarchy, i.e., Br(B+ → K+ μ± τ∓) ≳ Br(B+ → K+ μ± e∓) ≈ Br(B+ → K+ τ± e∓) for NH, while Br(B+ → K+ μ± τ∓) ≪ Br(B+ → K+ μ± e∓) ≈ Br(B+ → K+ τ± e∓) for IH. Among these decays, only B+ → K+ μ± e∓ in the case of NH is promising at LHCb Run II, while for IH all LFV decays are hard to detect in the near future.

  15. A mathematical model for interpreting in vitro rhGH release from laminar implants.

    Science.gov (United States)

    Santoveña, A; García, J T; Oliva, A; Llabrés, M; Fariña, J B

    2006-02-17

    Recombinant human growth hormone (rhGH), used mainly for the treatment of growth hormone deficiency in children, requires daily subcutaneous injections. The use of controlled release formulations with appropriate rhGH release kinetics reduces the frequency of medication, improving patient compliance and quality of life. Biodegradable implants are a valid alternative, offering the feasibility of a regular release rate after administering a single dose, though there is the slight disadvantage of a very minor surgical operation. Three laminar implant formulations (F(1), F(2) and F(3)) were produced by different manufacturing procedures using solvent-casting techniques with the same poly(D,L-lactic-co-glycolic) acid (PLGA) polymer (Mw = 48 kDa). A correlation in vitro between polymer matrix degradation and drug release rate from these formulations was found, and a mathematical model was developed to interpret it. This model was applied to each formulation. The results obtained were explained in terms of manufacturing parameters with the aim of elucidating whether drug release occurs only by diffusion or erosion, or by a combination of both mechanisms. Controlling the manufacturing method and the resultant changes in polymer structure facilitates a suitable rhGH release profile for different rhGH deficiency treatments.

  16. Mars’ Low Dissipation Factor at 11-h - Interpretation from Anelasticity-Based Dissipation Model

    Science.gov (United States)

    Castillo-Rogez, Julie; Choukroun, M.

    2010-10-01

    We explore the information contained in the ratio of the tidal Love number k2 to the dissipation factor Q characterizing the response of Mars to the tides exerted by its satellite Phobos (11-h period). Assuming that Mars can be approximated as a Maxwell body, Bills et al. [1] have inferred an average Martian mantle viscosity of 8.7×10^14 Pa s. Such a low viscosity appears inconsistent with models of Mars' thermal evolution and current heat budget. Alternative explanations include the presence of partial melt in the mantle [2], or the presence of an aquifer in the crust [3]. We revisit the interpretation of Mars' k2/Q using a laboratory-based attenuation model that accounts for material viscoelasticity and anelasticity. As a first step, we have computed Mars' k2/Q for an interior model that includes a solid inner core, a liquid core layer, a mantle, and a crust (consistent with the observed moment of inertia, and k2 measured at the orbital period), and searched for the range of mantle viscosities that can explain the observed k2/Q. Successful models are characterized by an average mantle viscosity between 10^18 and 10^22 Pa s, which rules out the presence of partial melt in the mantle. We can narrow down that range by performing a more detailed calculation of the mineralogy and temperature profiles. Preliminary results will be presented at the meeting. References: [1] Bills et al. (2005) JGR 110, E00704; [2] Ruedas et al. (2009) White paper to the NRC Planetary Science decadal survey; [3] Bills et al. (2009) LPS 40, 1712. MC is supported by a NASA Postdoctoral Program Fellowship, administered by Oak Ridge Associated Universities. This work has been conducted at the Jet Propulsion Laboratory, California Institute of Technology, under a contract to NASA. Government sponsorship acknowledged.

  17. DOMstudio: an integrated workflow for Digital Outcrop Model reconstruction and interpretation

    Science.gov (United States)

    Bistacchi, Andrea

    2015-04-01

    Different remote sensing technologies, including photogrammetry and LIDAR, allow collecting 3D datasets that can be used to create 3D digital representations of outcrop surfaces, called Digital Outcrop Models (DOM), or sometimes Virtual Outcrop Models (VOM). Irrespective of the remote sensing technique used, DOMs can be represented either by photorealistic point clouds (PC-DOM) or textured surfaces (TS-DOM). The first are datasets composed of millions of points with XYZ coordinates and RGB colour, whilst the latter are triangulated surfaces onto which images of the outcrop have been mapped or "textured" (applying a technology originally developed for movies and videogames). Here we present a workflow that allows exploiting in an integrated and efficient, yet flexible way, both kinds of dataset: PC-DOMs and TS-DOMs. The workflow is composed of three main steps: (1) data collection and processing, (2) interpretation, and (3) modelling. Data collection can be performed with photogrammetry, LIDAR, or other techniques. The quality of photogrammetric datasets obtained with Structure From Motion (SFM) techniques has shown a tremendous improvement over the past few years, and this is becoming the most effective way to collect DOM datasets. The main advantages of photogrammetry over LIDAR are the very simple and lightweight field equipment (a digital camera) and the arbitrary spatial resolution, which can be increased simply by getting closer to the outcrop or by using a different lens. It must be noted that concerns about the precision of close-range photogrammetric surveys, which were justified in the past, are no longer a problem if modern software and acquisition schemes are applied. In any case, LIDAR is a well-tested technology and it is still very common. Irrespective of the data collection technology, the output will be a photorealistic point cloud and a collection of oriented photos, plus additional imagery in special projects (e.g. infrared images

  18. Developing an Interpretive Structural Model (ISM) in order to Achieve Agility via Information Technology in a Manufacturing Organization

    Directory of Open Access Journals (Sweden)

    Ali Mohammadi

    2012-12-01

    Full Text Available Agility is considered the ability to respond quickly to changes and a major factor for success and survival in today's business. The purpose of this research is to offer a conceptual model using Interpretive Structural Modeling (ISM). To this end, after reviewing the literature and theoretical background, the indices related to achieving agility via information technology (IT) were identified, and then a four-level interpretive structural model was constructed. Findings show that the tendency and commitment of top managers, the organizational climate, and the alignment of strategic planning with information technology planning are the major factors affecting the achievement of agility via information technology (IT).

  19. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  20. Clustering and interpretation of local earthquake tomography models in the southern Dead Sea basin

    Science.gov (United States)

    Bauer, Klaus; Braeuer, Benjamin

    2016-04-01

    The Dead Sea transform (DST) marks the boundary between the Arabian and the African plates. Ongoing left-lateral relative plate motion and strike-slip deformation started in the Early Miocene (ca. 20 Ma) and has produced a total shift of 107 km to the present. The Dead Sea basin (DSB), located in the central part of the DST, is one of the largest pull-apart basins in the world. It was formed from the step-over of different fault strands at a major segment boundary of the transform fault system. The basin development was accompanied by deposition of clastics and evaporites and subsequent salt diapirism. Ongoing deformation within the basin and activity of the boundary faults are indicated by increased seismicity. The internal architecture of the DSB and the crustal structure around the DST have been the subject of several large scientific projects carried out since 2000. Here we report on a local earthquake tomography study from the southern DSB. In 2006-2008, a dense seismic network consisting of 65 stations was operated for 18 months in the southern part of the DSB and surrounding regions. Altogether 530 well-constrained seismic events with 13,970 P- and 12,760 S-wave arrival times were used in a travel time inversion for Vp and Vp/Vs velocity structure and seismicity distribution. The work flow included 1D inversion, 2.5D and 3D tomography, and resolution analysis. We demonstrate a possible strategy for integrating several tomographic models, such as Vp, Vs and Vp/Vs, into a combined lithological interpretation. We analyzed the tomographic models derived by 2.5D inversion using neural network clustering techniques. The method allows us to identify major lithologies by their petrophysical signatures. Remapping the clusters into the subsurface reveals the distribution of basin sediments, prebasin sedimentary rocks, and crystalline basement. The DSB shows an asymmetric structure with thickness variation from 5 km in the west to 13 km in the east. Most importantly, a well-defined body
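
    As an illustration of the clustering step, the sketch below groups collocated Vp and Vp/Vs values into classes and remaps the class labels onto the model grid. The arrays are synthetic placeholders and k-means is used instead of the neural-network (self-organizing map type) clustering applied in the study, so this only shows the mechanics of identifying petrophysical signatures.

```python
# Hedged sketch: cluster collocated Vp and Vp/Vs model values so that each
# cluster can be read as a petrophysical signature, then remap the cluster
# labels onto the 2-D model grid. The synthetic arrays stand in for the 2.5D
# tomographic models; the study itself used neural-network clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
nx, nz = 120, 60                       # hypothetical grid (distance x depth)
vp = 4.0 + 2.5 * rng.random((nz, nx))  # km/s, placeholder for the Vp model
vpvs = 1.6 + 0.3 * rng.random((nz, nx))

features = np.column_stack([vp.ravel(), vpvs.ravel()])
features = StandardScaler().fit_transform(features)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
lithology_map = labels.reshape(nz, nx)  # remapped clusters ~ basin fill, pre-basin rocks, basement, ...

for k in range(4):
    m = labels == k
    print(f"cluster {k}: mean Vp={vp.ravel()[m].mean():.2f} km/s, "
          f"mean Vp/Vs={vpvs.ravel()[m].mean():.2f}, cells={m.sum()}")
```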

  1. GeneTopics - interpretation of gene sets via literature-driven topic models

    Science.gov (United States)

    2013-01-01

    Background Annotation of a set of genes is often accomplished through comparison to a library of labelled gene sets such as biological processes or canonical pathways. However, this approach might fail if the employed libraries are not up to date with the latest research, don't capture relevant biological themes or are curated at a different level of granularity than is required to appropriately analyze the input gene set. At the same time, the vast biomedical literature offers an unstructured repository of the latest research findings that can be tapped to provide thematic sub-groupings for any input gene set. Methods Our proposed method relies on a gene-specific text corpus and extracts commonalities between documents in an unsupervised manner using a topic model approach. We automatically determine the number of topics summarizing the corpus and calculate a gene relevancy score for each topic allowing us to eliminate non-specific topics. As a result we obtain a set of literature topics in which each topic is associated with a subset of the input genes providing directly interpretable keywords and corresponding documents for literature research. Results We validate our method based on labelled gene sets from the KEGG metabolic pathway collection and the genetic association database (GAD) and show that the approach is able to detect topics consistent with the labelled annotation. Furthermore, we discuss the results on three different types of experimentally derived gene sets, (1) differentially expressed genes from a cardiac hypertrophy experiment in mice, (2) altered transcript abundance in human pancreatic beta cells, and (3) genes implicated by GWA studies to be associated with metabolite levels in a healthy population. In all three cases, we are able to replicate findings from the original papers in a quick and semi-automated manner. Conclusions Our approach provides a novel way of automatically generating meaningful annotations for gene sets that are directly
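
    To make the topic-model step concrete, here is a small hypothetical sketch that fits latent Dirichlet allocation to a toy gene-centred corpus and prints the top keywords per topic. The original method's automatic selection of the topic number and its gene relevancy score for filtering non-specific topics are not reproduced here.

```python
# Hedged sketch: fit an LDA topic model to a tiny, hypothetical gene-centred
# corpus and list the top keywords per topic. The documents are placeholders
# for abstracts retrieved for the input gene set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "cardiac hypertrophy calcium signalling myocyte growth",
    "beta cell insulin secretion glucose transcript abundance",
    "lipid metabolite levels gwas association cholesterol",
    "myocyte remodelling fibrosis cardiac stress response",
    "insulin resistance pancreatic islet beta cell function",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {t}: {', '.join(top)}")
```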

  2. Contagion effect of enabling or coercive use of costing model within the managerial couple in lean organizations

    DEFF Research Database (Denmark)

    Kristensen, Thomas; Israelsen, Poul

    In the lean strategy, enabling formalization behaviour is expected at the lower levels of management for the strategy to be successful. We study the contagion effect between the superior (middle manager) and the lower-level manager. This effect is proposed to be a dominant contingency variable for the use of costing models at the lower levels of management. Thus the use of costing models at the middle-manager level is an important key to success with the lean package.

  3. A novel methodology for interpreting air quality measurements from urban streets using CFD modelling

    Science.gov (United States)

    Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming

    2011-09-01

    In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary, the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assess the representativeness of monitoring locations for population
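
    Step (iii) boils down to a frequency-weighted combination of the per-scenario concentration fields. The sketch below assumes the scenario fields are already normalised and stored as arrays; shapes, the number of scenarios, and the frequencies are all placeholders.

```python
# Hedged sketch of step (iii): combine per-scenario CFD concentration fields
# into a long-term average, weighting each scenario by its observed frequency.
# Field shapes, scenario definitions, and normalisation are placeholders.
import numpy as np

# Normalised concentration fields for each wind scenario, stored as
# (n_scenarios, ny, nx); here random placeholders for the CFD database.
rng = np.random.default_rng(1)
n_scenarios, ny, nx = 8, 40, 40
c_star = rng.random((n_scenarios, ny, nx))

# Frequency of occurrence of each scenario over the averaging period (monthly
# or annual), derived from the wind-speed/direction record; must sum to 1.
freq = rng.random(n_scenarios)
freq /= freq.sum()

# The long-term averaged field is the frequency-weighted sum of scenario fields.
c_avg = np.tensordot(freq, c_star, axes=1)   # shape (ny, nx)
print(c_avg.shape, c_avg.mean())
```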

  4. Interpretation of two compact planetary nebulae, IC 4997 and NGC 6572, with aid of theoretical models.

    Science.gov (United States)

    Hyung, S; Aller, L H

    1993-01-15

    Observations of two dense compact planetary nebulae secured with the Hamilton Echelle spectrograph at Lick Observatory, combined with previously published UV spectra secured with the International Ultraviolet Explorer, enable us to probe the electron densities and temperatures (plasma diagnostics) and ionic concentrations in these objects. The diagnostic diagrams show that no homogeneous model will work for these nebulae. NGC 6572 may consist of an inner toroidal ring of density 25,000 atoms/cm³ and an outer conical shell of density 10,000 atoms/cm³. The simplest model of IC 4997 suggests a thick inner shell with a density of about 10⁷ atoms/cm³ and an outer envelope of density 10,000 atoms/cm³. The abundances of all elements heavier than He appear to be less than the solar values in NGC 6572, whereas He, C, N, and O may be more abundant in IC 4997 than in the sun. IC 4997 presents puzzling problems.

  5. Systemic therapy and the social relational model of disability: enabling practices with people with intellectual disability

    OpenAIRE

    Haydon-Laurelut, Mark

    2009-01-01

    Therapy has been critiqued for personalizing the political (Kitzinger, 1993). The social-relational model (Thomas, 1999) is one theoretical resource for understanding the practices of therapy through a political lens. The social model(s) have viewed therapy with suspicion. This paper highlights – using composite case examples and the author's primary therapeutic modality, systemic therapy – some systemic practices with adults with Intellectual Disability (ID) that enact a position that it is s...

  6. Interpretation of TOMS Observations of Tropical Tropospheric Ozone with a Global Model and In Situ Observations

    Science.gov (United States)

    Martin, Randall V.; Jacob, Daniel J.; Logan, Jennifer A.; Bey, Isabelle; Yantosca, Robert M.; Staudt, Amanda C.; Fiore, Arlene M.; Duncan, Bryan N.; Liu, Hongyu; Ginoux, Paul

    2004-01-01

    We interpret the distribution of tropical tropospheric ozone columns (TTOCs) from the Total Ozone Mapping Spectrometer (TOMS) by using a global three-dimensional model of tropospheric chemistry (GEOS-CHEM) and additional information from in situ observations. The GEOS-CHEM TTOCs capture 44% of the variance of monthly mean TOMS TTOCs from the convective cloud differential (CCD) method with no global bias. Major discrepancies are found over northern Africa and south Asia, where the TOMS TTOCs do not capture the seasonal enhancements from biomass burning found in the model and in aircraft observations. A characteristic feature of these northern tropical enhancements, in contrast to southern tropical enhancements, is that they are driven by the lower troposphere, where the sensitivity of TOMS is poor due to Rayleigh scattering. We develop an efficiency correction to the TOMS retrieval algorithm that accounts for the variability of ozone in the lower troposphere. This efficiency correction increases TTOCs over biomass burning regions by 3-5 Dobson units (DU) and decreases them by 2-5 DU over oceanic regions, improving the agreement between CCD TTOCs and in situ observations. Applying the correction to CCD TTOCs reduces by approximately DU the magnitude of the "tropical Atlantic paradox" [Thompson et al., 2000], i.e. the presence of a TTOC enhancement over the southern tropical Atlantic during the northern African biomass burning season in December-February. We reproduce the remainder of the paradox in the model and explain it by the combination of upper tropospheric ozone production from lightning NOx, persistent subsidence over the southern tropical Atlantic as part of the Walker circulation, and cross-equatorial transport of upper tropospheric ozone from northern midlatitudes in the African "westerly duct." These processes in the model can also account for the observed 13-17 DU persistent wave-1 pattern in TTOCs, with a maximum above the tropical Atlantic and a minimum

  7. Interpretive Medicine

    Science.gov (United States)

    Reeve, Joanne

    2010-01-01

    Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the

  8. Enabling intelligent copernicus services for carbon and water balance modeling of boreal forest ecosystems - North State

    Science.gov (United States)

    Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi

    2015-04-01

    The objective of project North State, funded by Framework Program 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, in order to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated with remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the objective of completely automatic estimation. North State has two sites for intensive studies, in southern and northern Finland respectively, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated and the models applied over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to the earth observation variables. Flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented. An example of a completely automatic land cover class prediction is also reported.

  9. ICoNOs MM: The IT-enabled Collaborative Networked Organizations Maturity Model

    NARCIS (Netherlands)

    Santana Tapia, R.G.

    2009-01-01

    The focus of this paper is to introduce a comprehensive model for assessing and improving the maturity of business-IT alignment (B-ITa) in collaborative networked organizations (CNOs): the ICoNOs MM. This two-dimensional maturity model (MM) addresses five levels of maturity as well as four domains to

  10. Investigating dye performance and crosstalk in fluorescence enabled bioimaging using a model system

    DEFF Research Database (Denmark)

    Arppe, Riikka; R. Carro-Temboury, Miguel; Hempel, Casper

    2017-01-01

    ...cross-talk of fluorophores on the detected fluorescence signal. The described model system comprises lanthanide(III) ion-doped Linde Type A zeolites dispersed in a PVA film stained with fluorophores. We tested: F18, MitoTracker Red and ATTO647N. This model system allowed comparing the performance of the fluorophores...

  11. A controlled human malaria infection model enabling evaluation of transmission-blocking interventions

    NARCIS (Netherlands)

    Collins, K.A.; Wang, C.Y.; Adams, M.; Mitchell, H.; Rampton, M.; Elliott, S.; Reuling, I.J.; Bousema, T.; Sauerwein, R.; Chalon, S.; Mohrle, J.J.; McCarthy, J.S.

    2018-01-01

    BACKGROUND: Drugs and vaccines that can interrupt the transmission of Plasmodium falciparum will be important for malaria control and elimination. However, models for early clinical evaluation of candidate transmission-blocking interventions are currently unavailable. Here, we describe a new model

  12. Understanding influential factors on implementing green supply chain management practices: An interpretive structural modelling analysis.

    Science.gov (United States)

    Agi, Maher A N; Nishant, Rohit

    2017-03-01

    In this study, we establish a set of 19 influential factors on the implementation of Green Supply Chain Management (GSCM) practices and analyse the interaction between these factors and their effect on the implementation of GSCM practices using the Interpretive Structural Modelling (ISM) method and the "Matrice d'Impacts Croisés Multiplication Appliquée à un Classement" (MICMAC) analysis on data compiled from interviews with supply chain (SC) executives based in the Gulf countries (Middle East region). The study reveals a strong influence and driving power of the nature of the relationships between SC partners on the implementation of GSCM practices. In particular, we found that dependence, trust, and durability of the relationship with SC partners have a very high influence. In addition, the size of the company, top management commitment, the implementation of quality management, and employee training and education exert a critical influence on the implementation of GSCM practices. Contextual elements such as the industry sector and region, and their effect on the prominence of specific factors, are also highlighted through our study. Finally, implications for research and practice are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
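
    The ISM/MICMAC mechanics referred to here can be illustrated with a small hypothetical example: from an assumed initial reachability matrix, the transitive closure gives the final reachability matrix, row and column sums give the MICMAC driving power and dependence, and the standard reachability/antecedent-set rule partitions factors into levels. The factor names and matrix below are illustrative only, not the 19 factors of the study.

```python
# Hedged sketch of the ISM/MICMAC mechanics with a hypothetical reachability
# matrix for five factors; the study's own factors and data are not reproduced.
import numpy as np

factors = ["trust", "dependence", "top mgmt commitment", "training", "GSCM adoption"]
A = np.array([            # A[i, j] = 1 if factor i influences factor j
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
], dtype=bool)

# Transitive closure (Warshall) gives the final reachability matrix.
R = A.copy()
for k in range(len(factors)):
    R |= np.outer(R[:, k], R[k, :])

driving = R.sum(axis=1)      # MICMAC driving power
dependence = R.sum(axis=0)   # MICMAC dependence

# Level partitioning: a factor sits on the current level when its reachability
# set equals the intersection of its reachability and antecedent sets.
remaining, level = set(range(len(factors))), 1
while remaining:
    reach = {i: {j for j in remaining if R[i, j]} for i in remaining}
    ante = {i: {j for j in remaining if R[j, i]} for i in remaining}
    this_level = [i for i in remaining if reach[i] == reach[i] & ante[i]]
    for i in this_level:
        print(f"level {level}: {factors[i]} (driving={driving[i]}, dependence={dependence[i]})")
    remaining -= set(this_level)
    level += 1
```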

  13. On the zigzagging causality model of EPR correlations and on the interpretation of quantum mechanics

    Science.gov (United States)

    de Beauregard, O. Costa

    1988-09-01

    Being formalized inside the S-matrix scheme, the zigzagging causality model of EPR correlations has full Lorentz and CPT invariance. EPR correlations, proper or reversed, and Wheeler's smoky dragon metaphor are respectively pictured in spacetime or in momentum-energy space, as V-shaped, A-shaped, or C-shaped ABC zigzags, with a summation at B over virtual states |B><B|. The formal parallelism breaks down at the level of interpretation because (A|C) = |<A|C>|². CPT invariance implies the Fock and Watanabe principle that, in quantum mechanics, retarded (advanced) waves are used for prediction (retrodiction), an expression of which is <Ψ|UΦ> = <U⁻¹Ψ|Φ>, with |Φ> denoting a preparation, |Ψ> a measurement, and U the evolution operator. The transformation |Ψ> = |UΦ> or |Φ> = |U⁻¹Ψ> exchanges the “preparation representation” and the “measurement representation” of a system and is ancillary in the formalization of the quantum chance game by the “wavelike algebra” of conditional amplitude. In 1935 EPR overlooked that a conditional amplitude <A|C> = Σ<A|B><B|C> between the two distant measurements is at stake, and that only measurements actually performed do make sense. The reversibility <A|B> = <B|A>* implies that causality is CPT-invariant, or arrowless, at the microlevel. Arrowed causality is a macroscopic emergence, corollary to wave retardation and probability increase. Factlike irreversibility states repression, not suppression, of “blind statistical retrodiction”—that is, of “final cause.”

  14. Factors Influencing Implementation of OHSAS 18001 in Indian Construction Organizations: Interpretive Structural Modeling Approach.

    Science.gov (United States)

    Rajaprasad, Sunku Venkata Siva; Chalapathi, Pasupulati Venkata

    2015-09-01

    Construction activity has made considerable breakthroughs in the past two decades on the back of increases in development activities, government policies, and public demand. At the same time, occupational health and safety issues have become a major concern to construction organizations. The unsatisfactory safety performance of the construction industry has always been highlighted, since the safety management system is a neglected area and is not implemented systematically in Indian construction organizations. Due to a lack of enforcement of the applicable legislation, most construction organizations are forced to opt for the implementation of the Occupational Health Safety Assessment Series (OHSAS) 18001 to improve safety performance. In order to better understand the factors influencing the implementation of OHSAS 18001, an interpretive structural modeling approach has been applied and the factors have been classified using matrice d'impacts croisés-multiplication appliquée à un classement (MICMAC) analysis. The study proposes an underlying theoretical framework to identify factors and to help the management of Indian construction organizations understand the interaction among factors influencing the implementation of OHSAS 18001. Safety culture, continual improvement, morale of employees, and safety training have been identified as dependent variables. Safety performance, sustainable construction, and a conducive working environment have been identified as linkage variables. Management commitment and safety policy have been identified as the driver variables. Management commitment has the maximum driving power, and the most influential factor is safety policy, which states clearly the commitment of top management towards occupational safety and health.

  15. The ethics of absolute relativity: An eschatological ontological model for interpreting the Sermon on the Mount

    Directory of Open Access Journals (Sweden)

    Andre van Oudtshoorn

    2014-01-01

    Full Text Available Jesus' imperatives in the Sermon on the Mount continue to play a significant role in Christian ethical discussions. The tension between the radical demands of Jesus and the impossibility of living this out within the everyday world has been noted by many scholars. In this article, an eschatological-ontological model, based on the social construction of reality, is developed to show that this dialectic is not necessarily an embarrassment to the church but, instead, belongs to the essence of the church as the recipient of the Spirit of Christ and as called by him to exist now in terms of the coming new age that has already been realised in Christ. The absolute demands of Jesus' imperatives, it is argued, must relativise all other interpretations of reality whilst the world, in turn, relativises Jesus' own definition of what 'is' and therefore also the injunctions to his disciples on how to live within this world. This process of radical relativisation provides a critical framework for Christian living. The church must expect, and do, the impossible within this world through her faith in Christ who recreates and redefines reality. The church's ethical task, it is further argued, is to participate with the Spirit in the construction of signs of this new reality in Christ in this world through her actions marked by faith, hope and love.

  16. Studies on Interpretive Structural Model for Forest Ecosystem Management Decision-Making

    Science.gov (United States)

    Liu, Suqing; Gao, Xiumei; Zen, Qunying; Zhou, Yuanman; Huang, Yuequn; Han, Weidong; Li, Linfeng; Li, Jiping; Pu, Yingshan

    Characterized by their openness, complexity and large scale, forest ecosystems interweave with the social system, the economic system and other natural ecosystems, thus complicating both their study and their management decision-making. According to the theories of sustainable development, hierarchy-competence levels, cybernetics and feedback, 25 factors that affect forest ecosystem management have been chosen from human society, the economy and nature, and systematically analyzed by developing an interpretive structural model (ISM) to reveal their relationships and positions in forest ecosystem management. The ISM consists of 7 layers, with the 3 objectives for ecosystem management being the top (seventh) layer. The ratio between agricultural production value and industrial production value, as a basis of management decision-making in forest ecosystems, becomes the first layer at the bottom because it has great impacts on the values of society and the development trends of forestry, while the factors of climatic environment, extent of intensive management, management measures, input-output ratio, and landscape and productivity are arranged from the second to the sixth layers respectively.

  17. Open Knee: Open Source Modeling & Simulation to Enable Scientific Discovery and Clinical Care in Knee Biomechanics

    Science.gov (United States)

    Erdemir, Ahmet

    2016-01-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical function of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor intensive reproduction of model development steps can be avoided. The interested parties can immediately utilize readily available models for scientific discovery and for clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties and interactions. Three use cases illustrate customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators is also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next generation knee models is noted. These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  18. Geomorphic Map of Worcester County, Maryland, Interpreted from a LIDAR-Based, Digital Elevation Model

    Science.gov (United States)

    Newell, Wayne L.; Clark, Inga

    2008-01-01

    A recently compiled mosaic of a LIDAR-based digital elevation model (DEM) is presented with geomorphic analysis of new macro-topographic details. The geologic framework of the surficial and near surface late Cenozoic deposits of the central uplands, Pocomoke River valley, and the Atlantic Coast includes Cenozoic to recent sediments from fluvial, estuarine, and littoral depositional environments. Extensive Pleistocene (cold climate) sandy dune fields are deposited over much of the terraced landscape. The macro-scale details from the LIDAR image resolve, at the 2-meter scale, the shapes of individual dunes and fields of translocated sand sheets. Most terrace surfaces are overprinted with circular to elliptical rimmed basins that represent complex histories of ephemeral ponds that were formed, drained, and overprinted by younger basins. The terrains of composite ephemeral ponds and the dune fields are inter-shingled at their margins, indicating contemporaneous erosion, deposition, re-arrangement, and possible internal deformation of the surficial deposits. The aggregate of these landform details and their deposits is interpreted as the product of arid, cold climate processes that were common to the mid-Atlantic region during the Last Glacial Maximum. In the Pocomoke valley and its larger tributaries, erosional remnants of sandy flood plains with anastomosing channels indicate the dynamics of former hydrology and sediment load of the watershed that prevailed at the end of the Pleistocene. As the climate warmed and precipitation increased during the transition from late Pleistocene to Holocene, dune fields were stabilized by vegetation, and the stream discharge increased. The increased discharge and greater local relief of streams graded to lower sea levels stimulated down cutting and created the deeply incised valleys out onto the continental shelf. These incised valleys have been filling with fluvial to intertidal deposits that record the rising sea

  19. The Policy Dystopia Model: An Interpretive Analysis of Tobacco Industry Political Activity.

    Science.gov (United States)

    Ulucanlar, Selda; Fooks, Gary J; Gilmore, Anna B

    2016-09-01

    Tobacco industry interference has been identified as the greatest obstacle to the implementation of evidence-based measures to reduce tobacco use. Understanding and addressing industry interference in public health policy-making is therefore crucial. Existing conceptualisations of corporate political activity (CPA) are embedded in a business perspective and do not attend to CPA's social and public health costs; most have not drawn on the unique resource represented by internal tobacco industry documents. Building on this literature, including systematic reviews, we develop a critically informed conceptual model of tobacco industry political activity. We thematically analysed published papers included in two systematic reviews examining tobacco industry influence on taxation and marketing of tobacco; we included 45 of 46 papers in the former category and 20 of 48 papers in the latter (n = 65). We used a grounded theory approach to build taxonomies of "discursive" (argument-based) and "instrumental" (action-based) industry strategies and from these devised the Policy Dystopia Model, which shows that the industry, working through different constituencies, constructs a metanarrative to argue that proposed policies will lead to a dysfunctional future of policy failure and widely dispersed adverse social and economic consequences. Simultaneously, it uses diverse, interlocking insider and outsider instrumental strategies to disseminate this narrative and enhance its persuasiveness in order to secure its preferred policy outcomes. Limitations are that many papers were historical (some dating back to the 1970s) and focused on high-income regions. The model provides an evidence-based, accessible way of understanding diverse corporate political strategies. It should enable public health actors and officials to preempt these strategies and develop realistic assessments of the industry's claims.

  20. Interpretation of duoplasmatron-type ion sources from a model of the discharge

    International Nuclear Information System (INIS)

    Lejeune, C.

    1971-06-01

    Rational improvement of the performance of these sources requires precise knowledge of the emitting ionized medium, on which all of their properties depend. Ion production mechanisms in the discharge have been studied, together with the transport of the ions towards the extraction hole. The source properties are described, in a new way, as a function of the discharge modes. The discharge is characterized by the existence of a mode change, related to the lowering of the neutral-atom density in the anode column (arc starvation). The complementarity of probe measurements and of the energy-spectrum analysis of the charges emitted through the anode hole made it possible to obtain the axial potential profile, to reveal an energetic electron beam extracted from the cathodic plasma by the potential difference across the striction sheath, and to determine the radial electron-density profile in the anode column. Analysis of the results yields a simple picture of the plasmas and of the laws governing them in each of the important modes. The theoretical axial distributions of density and potential have been calculated as a function of the independent parameters (anode pressure and arc intensity) and of three secondary parameters characterizing the energy exchange (electron temperature) and the magnetic field topography. The agreement between the model predictions and the experimental variations of the source properties - more specifically with gas nature and geometry - makes it possible to give similitude rules for duoplasmatron sources. The discharge model has also made it possible to interpret the luminous emission spectra from the anode column. It has been shown theoretically that the peculiar conditions of ionization and excitation allow the column to be used as an amplifying medium in the optical domain. This plasma has been used successfully as the active medium of an ion laser in continuous mode [fr

  1. Finger Thickening during Extra-Heavy Oil Waterflooding: Simulation and Interpretation Using Pore-Scale Modelling.

    Directory of Open Access Journals (Sweden)

    Mohamed Regaieg

    Full Text Available Although thermal methods have been popular and successfully applied in heavy oil recovery, they are often found to be uneconomic or impractical. Therefore, alternative production protocols are being actively pursued and interesting options include water injection and polymer flooding. Indeed, such techniques have been successfully tested in recent laboratory investigations, where X-ray scans performed on homogeneous rock slabs during water flooding experiments have shown evidence of an interesting new phenomenon: post-breakthrough, highly dendritic water fingers have been observed to thicken and coalesce, forming braided water channels that improve sweep efficiency. However, these experimental studies involve displacement mechanisms that are still poorly understood, and so the optimization of this process for eventual field application is still somewhat problematic. Ideally, a combination of two-phase flow experiments and simulations should be put in place to help understand this process more fully. To this end, a fully dynamic network model is described and used to investigate finger thickening during water flooding of extra-heavy oils. The displacement physics has been implemented at the pore scale and this is followed by a successful benchmarking exercise of the numerical simulations against the groundbreaking micromodel experiments reported by Lenormand and co-workers in the 1980s. A range of slab-scale simulations has also been carried out and compared with the corresponding experimental observations. We show that the model is able to replicate finger architectures similar to those observed in the experiments and go on to reproduce and interpret, for the first time to our knowledge, finger thickening following water breakthrough. We note that this phenomenon has been observed here in homogeneous (i.e. un-fractured) media; the presence of fractures could be expected to exacerbate such fingering still further. Finally, we examine the impact of
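
    As a pared-down illustration of pore-network displacement modelling, the sketch below runs quasi-static invasion percolation on a 2-D lattice of random capillary entry thresholds, i.e. the capillary-dominated end-member of the Lenormand diagram used in the benchmarking. The fully dynamic network model described above additionally solves for viscous pressure fields and time stepping, which this toy version omits.

```python
# Hedged sketch: quasi-static invasion percolation on a 2-D pore lattice,
# the capillary-dominated limit of two-phase displacement. Thresholds are
# random placeholders; viscous effects and trapping are deliberately ignored.
import heapq
import numpy as np

rng = np.random.default_rng(42)
nx, ny = 60, 30
entry_pressure = rng.random((ny, nx))       # random capillary entry thresholds
invaded = np.zeros((ny, nx), dtype=bool)

# Invade from the left face; always advance through the most easily invaded
# (lowest entry pressure) site on the current front.
front = [(entry_pressure[j, 0], j, 0) for j in range(ny)]
heapq.heapify(front)
steps = 0
while front and steps < 600:                # stop early: fingers, not full sweep
    p, j, i = heapq.heappop(front)
    if invaded[j, i]:
        continue
    invaded[j, i] = True
    steps += 1
    for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        jj, ii = j + dj, i + di
        if 0 <= jj < ny and 0 <= ii < nx and not invaded[jj, ii]:
            heapq.heappush(front, (entry_pressure[jj, ii], jj, ii))

print("invaded fraction:", invaded.mean())
print("\n".join("".join("#" if invaded[j, i] else "." for i in range(nx)) for j in range(ny)))
```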

  2. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation (presentation)

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  3. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation.

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  4. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly) providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  5. Improving the effectiveness of ecological site descriptions: General state-and-transition models and the Ecosystem Dynamics Interpretive Tool (EDIT)

    Science.gov (United States)

    Bestelmeyer, Brandon T.; Williamson, Jeb C.; Talbot, Curtis J.; Cates, Greg W.; Duniway, Michael C.; Brown, Joel R.

    2016-01-01

    State-and-transition models (STMs) are useful tools for management, but they can be difficult to use and have limited content. STMs created for groups of related ecological sites could simplify and improve their utility. The amount of information linked to models can be increased using tables that communicate management interpretations and important within-group variability. We created a new web-based information system (the Ecosystem Dynamics Interpretive Tool) to house STMs, associated tabular information, and other ecological site data and descriptors. Fewer, more informative, better organized, and easily accessible STMs should increase the accessibility of science information.

  6. The DSET Tool Library: A software approach to enable data exchange between climate system models

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, J. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    Climate modeling is a computationally intensive process. Until recently computers were not powerful enough to perform the complex calculations required to simulate the earth's climate. As a result standalone programs were created that represent components of the earth's climate (e.g., Atmospheric Circulation Model). However, recent advances in computing, including massively parallel computing, make it possible to couple the components forming a complete earth climate simulation. The ability to couple different climate model components will significantly improve our ability to predict climate accurately and reliably. Historically each major component of the coupled earth simulation is a standalone program designed independently with different coordinate systems and data representations. In order for two component models to be coupled, the data of one model must be mapped to the coordinate system of the second model. The focus of this project is to provide a general tool to facilitate the mapping of data between simulation components, with an emphasis on using object-oriented programming techniques to provide polynomial interpolation, line and area weighting, and aggregation services.
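
    A minimal sketch of the grid-mapping task the library addresses: a field defined on one component's coarse latitude/longitude grid is interpolated onto another component's finer grid. Grids and the analytic test field are hypothetical, and only simple bilinear interpolation is shown, whereas the DSET Tool Library also provides polynomial interpolation, line and area weighting, and aggregation.

```python
# Hedged sketch of the coordinate-mapping step: regrid a field from one
# component's lat/lon grid onto another's by bilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Source grid (e.g. an atmosphere component), coarse 4-degree resolution.
lat_src = np.arange(-88.0, 90.0, 4.0)
lon_src = np.arange(0.0, 360.0, 4.0)
field_src = np.cos(np.deg2rad(lat_src))[:, None] * np.sin(np.deg2rad(lon_src))[None, :]

# Target grid (e.g. an ocean component), finer 1-degree resolution.
lat_dst = np.arange(-80.0, 80.5, 1.0)
lon_dst = np.arange(0.0, 356.5, 1.0)

interp = RegularGridInterpolator((lat_src, lon_src), field_src, method="linear")
latg, long_ = np.meshgrid(lat_dst, lon_dst, indexing="ij")
field_dst = interp(np.stack([latg, long_], axis=-1))
print(field_src.shape, "->", field_dst.shape)
```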

  7. Laboratory-based Interpretation of Seismological Models: Dealing with Incomplete or Incompatible Experimental Data (Invited)

    Science.gov (United States)

    Jackson, I.; Kennett, B. L.; Faul, U. H.

    2009-12-01

    In parallel with cooperative developments in seismology during the past 25 years, there have been phenomenal advances in mineral/rock physics making laboratory-based interpretation of seismological models increasingly useful. However, the assimilation of diverse experimental data into a physically sound framework for seismological application is not without its challenges, as demonstrated by two examples. In the first example, that of equation-of-state and elasticity data, an appropriate, thermodynamically consistent framework involves finite-strain expansion of the Helmholtz free energy incorporating the Debye approximation to the lattice vibrational energy, as advocated by Stixrude and Lithgow-Bertelloni. Within this context, pressure, specific heat and entropy, thermal expansion, elastic constants and their adiabatic and isothermal pressure derivatives are all calculable without further approximation in an internally consistent manner. The opportunities and challenges of assimilating a wide range of sometimes marginally incompatible experimental data into a single model of this type will be demonstrated with reference to MgO, unquestionably the most thoroughly studied mantle mineral. A neighbourhood-algorithm inversion has identified a broadly satisfactory model, but uncertainties in key parameters associated particularly with pressure calibration remain sufficiently large as to preclude definitive conclusions concerning lower-mantle chemical composition and departures from adiabaticity. The second example is the much less complete dataset concerning seismic-wave dispersion and attenuation emerging from low-frequency forced-oscillation experiments. Significant progress has been made during the past decade towards an understanding of high-temperature, micro-strain viscoelastic relaxation in upper-mantle materials, especially as regards the roles of oscillation period, temperature, grain size and melt fraction. However, the influence of other potentially important
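
    The kind of thermodynamically consistent framework described here can be sketched, in highly simplified form, as a third-order Birch-Murnaghan compression curve combined with a Mie-Grüneisen-Debye thermal pressure. The MgO-like parameter values below are rough literature numbers used purely for illustration and are not the fitted model discussed in the abstract.

```python
# Hedged sketch: third-order Birch-Murnaghan compression plus a
# Mie-Grüneisen-Debye thermal pressure, with approximate MgO-like parameters.
import numpy as np
from scipy.integrate import quad

R = 8.314462618          # J/(mol K)
n = 2                    # atoms per formula unit (MgO)
V0 = 11.24e-6            # m^3/mol
K0, K0p = 160e9, 4.0     # Pa, dimensionless
theta0, gamma0, q = 770.0, 1.5, 1.0
T0 = 300.0

def debye_energy(T, theta):
    """Lattice vibrational energy in the Debye approximation, J/mol."""
    if T <= 0.0:
        return 0.0
    integral, _ = quad(lambda x: x**3 / np.expm1(x), 0.0, theta / T)
    return 9.0 * n * R * T * (T / theta) ** 3 * integral

def pressure(V, T):
    """Total pressure (Pa) at molar volume V and temperature T."""
    f = 0.5 * ((V0 / V) ** (2.0 / 3.0) - 1.0)           # Eulerian finite strain
    p_cold = 3.0 * K0 * f * (1.0 + 2.0 * f) ** 2.5 * (1.0 + 1.5 * (K0p - 4.0) * f)
    gamma = gamma0 * (V / V0) ** q
    theta = theta0 * np.exp((gamma0 - gamma) / q)
    p_th = (gamma / V) * (debye_energy(T, theta) - debye_energy(T0, theta))
    return p_cold + p_th

for vv0 in (1.0, 0.9, 0.8, 0.7):
    print(f"V/V0={vv0:.2f}  P(2000 K) = {pressure(vv0 * V0, 2000.0) / 1e9:6.1f} GPa")
```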

  8. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model

    Science.gov (United States)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.

    2012-10-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes were constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamic (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impacts Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models

  9. The Blackboard Model of Computer Programming Applied to the Interpretation of Passive Sonar Data

    National Research Council Canada - National Science Library

    Liebing, David

    1997-01-01

    ... (location, course, speed, classification, etc.). At present the potential volume of data produced by modern sonar systems is so large that unless some form of computer assistance is provided with the interpretation of this data, information...

  10. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

  11. Subject-enabled analytics model on measurement statistics in health risk expert system for public health informatics.

    Science.gov (United States)

    Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun

    2017-11-01

    This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation respectively. The proposed model was evaluated by computing prior research relating to the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospital. The simulation validity was verified against commercial statistics software. The model was installed in a stand-alone computer and in a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. IMPEx : enabling model/observational data comparison in planetary plasma sciences

    Science.gov (United States)

    Génot, V.; Khodachenko, M.; Kallio, E. J.; Al-Ubaidi, T.; Alexeev, I. I.; Topf, F.; Gangloff, M.; André, N.; Bourrel, N.; Modolo, R.; Hess, S.; Perez-Suarez, D.; Belenkaya, E. S.; Kalegaev, V.

    2013-09-01

    The FP7 IMPEx infrastructure, whose general goal is to encourage and facilitate inter-comparison between observational and model data in planetary plasma sciences, has now been established for two years. This presentation will give a tour of the different achievements of this period. Within the project, data originate from multiple sources: large observational databases (CDAWeb, AMDA at CDPP, CLWeb at IRAP), simulation databases for hybrid and MHD codes (FMI, LATMOS), and a planetary magnetic field models database with online services (SINP). Each of these databases proposes dedicated access to its models and runs (HWA@FMI, LATHYS@LATMOS, SMDC@SINP). To gather this large data ensemble, IMPEx offers a distributed framework in which these data may be visualized, analyzed, and shared thanks to interoperable tools; they comprise AMDA - an online space physics analysis tool -, 3DView - a tool for data visualization in 3D planetary context -, and CLWeb - an online space physics visualization tool. A simulation data model, based on SPASE, has been designed to ease data exchange within the infrastructure. From the communication point of view, the VO paradigm has been retained and the architecture is based on web services and the IVOA SAMP protocol. The presentation will focus on how the tools may be operated synchronously to manipulate these heterogeneous data sets. Use cases based on in-flight missions and associated model runs will be proposed for the demonstration. Finally, the motivation and functionalities of the future IMPEx portal will be presented. As the requirements for and potential benefits of joining the IMPEx infrastructure will be shown, the presentation can be seen as an invitation to other modeling teams in the community that may be interested in promoting their results via IMPEx.

  13. Integrating semantics and procedural generation: key enabling factors for declarative modeling of virtual worlds

    NARCIS (Netherlands)

    Bidarra, R.; Kraker, K.J. de; Smelik, R.M.; Tutenel, T.

    2010-01-01

    Manual content creation for virtual worlds can no longer satisfy the increasing demand arising from areas as entertainment and serious games, simulations, movies, etc. Furthermore, currently deployed modeling tools basically do not scale up: while they become more and more specialized and complex,

  14. Developmental Impact Analysis of an ICT-Enabled Scalable Healthcare Model in BRICS Economies

    Directory of Open Access Journals (Sweden)

    Dhrubes Biswas

    2012-06-01

    Full Text Available This article highlights the need for initiating a healthcare business model in a grassroots, emerging-nation context. This article's backdrop is a history of chronic anomalies afflicting the healthcare sector in India and similarly placed BRICS nations. In these countries, a significant percentage of populations remain deprived of basic healthcare facilities and emergency services. Community (primary) care services are being offered by public and private stakeholders as a panacea to the problem. Yet, there is an urgent need for specialized (tertiary) care services at all levels. As a response to this challenge, an all-inclusive health-exchange system (HES) model, which utilizes information communication technology (ICT) to provide solutions in rural India, has been developed. The uniqueness of the model lies in its innovative hub-and-spoke architecture and its emphasis on affordability, accessibility, and availability to the masses. This article describes a developmental impact analysis (DIA) that was used to assess the impact of this model. The article contributes to the knowledge base of readers by making them aware of the healthcare challenges emerging nations are facing and ways to mitigate those challenges using entrepreneurial solutions.

  15. Neonatal tolerance induction enables accurate evaluation of gene therapy for MPS I in a canine model.

    Science.gov (United States)

    Hinderer, Christian; Bell, Peter; Louboutin, Jean-Pierre; Katz, Nathan; Zhu, Yanqing; Lin, Gloria; Choa, Ruth; Bagel, Jessica; O'Donnell, Patricia; Fitzgerald, Caitlin A; Langan, Therese; Wang, Ping; Casal, Margret L; Haskins, Mark E; Wilson, James M

    2016-09-01

    High fidelity animal models of human disease are essential for preclinical evaluation of novel gene and protein therapeutics. However, these studies can be complicated by exaggerated immune responses against the human transgene. Here we demonstrate that dogs with a genetic deficiency of the enzyme α-l-iduronidase (IDUA), a model of the lysosomal storage disease mucopolysaccharidosis type I (MPS I), can be rendered immunologically tolerant to human IDUA through neonatal exposure to the enzyme. Using MPS I dogs tolerized to human IDUA as neonates, we evaluated intrathecal delivery of an adeno-associated virus serotype 9 vector expressing human IDUA as a therapy for the central nervous system manifestations of MPS I. These studies established the efficacy of the human vector in the canine model, and allowed for estimation of the minimum effective dose, providing key information for the design of first-in-human trials. This approach can facilitate evaluation of human therapeutics in relevant animal models, and may also have clinical applications for the prevention of immune responses to gene and protein replacement therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Quality Concerns in Technical Education in India: A Quantifiable Quality Enabled Model

    Science.gov (United States)

    Gambhir, Victor; Wadhwa, N. C.; Grover, Sandeep

    2016-01-01

    Purpose: The paper aims to discuss current Technical Education scenarios in India. It proposes modelling the factors affecting quality in a technical institute and then applying a suitable technique for assessment, comparison and ranking. Design/methodology/approach: The paper chose a graph theoretic approach for quantification of quality-enabled…

  17. Single-shot spiral imaging enabled by an expanded encoding model: Demonstration in diffusion MRI.

    Science.gov (United States)

    Wilm, Bertram J; Barmet, Christoph; Gross, Simon; Kasper, Lars; Vannesjo, S Johanna; Haeberlin, Max; Dietrich, Benjamin E; Brunner, David O; Schmid, Thomas; Pruessmann, Klaas P

    2017-01-01

    The purpose of this work was to improve the quality of single-shot spiral MRI and demonstrate its application for diffusion-weighted imaging. Image formation is based on an expanded encoding model that accounts for dynamic magnetic fields up to third order in space, nonuniform static B0, and coil sensitivity encoding. The encoding model is determined by B0 mapping, sensitivity mapping, and concurrent field monitoring. Reconstruction is performed by iterative inversion of the expanded signal equations. Diffusion-tensor imaging with single-shot spiral readouts is performed in a phantom and in vivo, using a clinical 3T instrument. Image quality is assessed in terms of artefact levels, image congruence, and the influence of the different encoding factors. Using the full encoding model, diffusion-weighted single-shot spiral imaging of high quality is accomplished both in vitro and in vivo. Accounting for actual field dynamics, including higher orders, is found to be critical to suppress blurring, aliasing, and distortion. Enhanced image congruence permitted data fusion and diffusion tensor analysis without coregistration. Use of an expanded signal model largely overcomes the traditional vulnerability of spiral imaging with long readouts. It renders single-shot spirals competitive with echo-planar readouts and thus deploys shorter echo times and superior readout efficiency for diffusion imaging and further prospective applications. Magn Reson Med 77:83-91, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  18. Attention Deficit Hyperactivity Disorder and Scholastic Achievement: A Model of Mediation via Academic Enablers

    Science.gov (United States)

    Volpe, Robert J.; DuPaul, George J.; DiPerna, James C.; Jitendra, Asha K.; Lutz, J. Gary; Tresco, Katy; Junod, Rosemary Vile

    2006-01-01

    The current study examined the influence of symptoms of attention deficit hyperactivity disorder (ADHD) on student academic achievement in reading and in mathematics in a sample of 146 first- through fourth-grade students, 103 of whom were identified as having ADHD and academic problems in reading and/or math. A theoretical model was examined…

  19. Relationships of radiation track structure to biological effect: a re-interpretation of the parameters of the Katz model

    International Nuclear Information System (INIS)

    Goodhead, D.T.

    1989-01-01

    The Katz track-model of cell inactivation has been more successful than any other biophysical model in fitting and predicting inactivation of mammalian cells exposed to a wide variety of ionising radiations. Although the model was developed as a parameterised phenomenological description, without necessarily implying any particular mechanistic processes, the present analysis attempts to interpret it and thereby benefit further from its success to date. A literal interpretation of the parameters leads to contradictions with other experimental and theoretical information, especially since the fitted parameters imply very large (> ∼ 4 μm) subcellular sensitive sites which each require very large amounts (> ∼ 100 keV) of energy deposition in order to be inactivated. Comparisons of these fits with those for cell mutation suggest a re-interpretation in terms of (1) very much smaller sites and (2) a clearer distinction between the ion-kill and γ-kill modes of inactivation. It is suggested that this re-interpretation may be able to guide future development of the phenomenological Katz model and also parameterisation of mechanistic biophysical models. (author)

  20. Revisiting the generation and interpretation of climate models experiments for adaptation decision-making (Invited)

    Science.gov (United States)

    Ranger, N.; Millner, A.; Niehoerster, F.

    2010-12-01

    Traditionally, climate change risk assessments have taken a roughly four-stage linear ‘chain’ of moving from socioeconomic projections, to climate projections, to primary impacts and then finally onto economic and social impact assessment. Adaptation decisions are then made on the basis of these outputs. The escalation of uncertainty through this chain is well known, resulting in an ‘explosion’ of uncertainties in the final risk and adaptation assessment. The space of plausible future risk scenarios is growing ever wider with the application of new techniques which aim to explore uncertainty ever more deeply, such as those used in the recent ‘probabilistic’ UK Climate Projections 2009, and the stochastic integrated assessment models, for example PAGE2002. This explosion of uncertainty can make decision-making problematic, particularly given that the uncertainty information communicated cannot be treated as strictly probabilistic and is therefore not an easy fit with standard decision-making under uncertainty approaches. Additional problems can arise from the fact that the uncertainty estimated for different components of the ‘chain’ is rarely directly comparable or combinable. Here, we explore the challenges and limitations of using current projections for adaptation decision-making. We report the findings of a recent report completed for the UK Adaptation Sub-Committee on approaches to deal with these challenges and make robust adaptation decisions today. To illustrate these approaches, we take a number of illustrative case studies, including a case of adaptation to hurricane risk on the US Gulf Coast. This is a particularly interesting case as it involves urgent adaptation of long-lived infrastructure but requires interpreting highly uncertain climate change science and modelling, i.e. projections of Atlantic basin hurricane activity. An approach we outline is reversing the linear chain of assessments to put the economics and decision

  1. Energy Consumption Model and Measurement Results for Network Coding-enabled IEEE 802.11 Meshed Wireless Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin

    2012-01-01

    This paper presents an energy model and energy measurements for network coding enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed are limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...... for a system with and without network coding support. While network coding reduces the number of radio transmissions, the operational activity on the devices due to coding will be increased. We derive an analytical model for the energy consumption and compare it to real measurements for which we build...... a flexible, low cost tool to be able to measure at any given node in a meshed network. We verify the precision of our tool by comparing it to a sophisticated device. Our main results in this paper are the derivation of an analytical energy model, the implementation of a distributed energy measurement testbed...
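
    As an illustration of the transmission-count bookkeeping that such an energy model involves, the sketch below compares a plain store-and-forward relay with a network-coded relay for the Alice-relay-Bob exchange; the per-packet energy figures and the 4-versus-3 transmission accounting are illustrative assumptions, not values or results from the paper.

    # Illustrative transmission-count energy model for a two-way relay
    # (Alice <-> relay <-> Bob) exchange, with and without network coding.
    # All energy figures are assumed placeholders, not measurements.

    TX_ENERGY_J = 0.002       # assumed energy per packet transmission (J)
    RX_ENERGY_J = 0.001       # assumed energy per packet reception (J)
    CODING_ENERGY_J = 0.0003  # assumed per-packet XOR/coding overhead at the relay (J)

    def energy_without_coding(packets_per_direction: int) -> float:
        # Each packet is forwarded separately: 4 transmissions and 4 receptions
        # per exchanged pair (Alice->relay, relay->Bob, Bob->relay, relay->Alice).
        tx = 4 * packets_per_direction
        rx = 4 * packets_per_direction
        return tx * TX_ENERGY_J + rx * RX_ENERGY_J

    def energy_with_coding(packets_per_direction: int) -> float:
        # The relay XORs the two packets and broadcasts once: 3 transmissions,
        # but both end nodes receive the coded packet (4 receptions) and the
        # relay spends extra energy on coding.
        tx = 3 * packets_per_direction
        rx = 4 * packets_per_direction
        coding = packets_per_direction * CODING_ENERGY_J
        return tx * TX_ENERGY_J + rx * RX_ENERGY_J + coding

    if __name__ == "__main__":
        n = 1000
        print("plain forwarding:", energy_without_coding(n), "J")
        print("network coding:  ", energy_with_coding(n), "J")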

  2. Composition of uppermost mantle beneath the Northern Fennoscandia - numerical modeling and petrological interpretation

    Science.gov (United States)

    Virshylo, Ivan; Kozlovskaya, Elena; Prodaivoda, George; Silvennoinen, Hanna

    2013-04-01

    The study of the uppermost mantle beneath northern Fennoscandia is based on data from the POLENET/LAPNET passive seismic array. First, arrivals of P-waves from teleseismic events were inverted into a P-wave velocity model using non-linear tomography (Silvennoinen et al., in preparation). The second stage was a numerical petrological interpretation of the above velocity model. This study presents an estimation of the mineralogical composition of the uppermost mantle as a result of numerical modeling. There are many studies concerning the calculation of seismic velocities for polymineral media under high pressure and temperature conditions (Afonso, Fernàndez, Ranalli, Griffin, & Connolly, 2008; Fullea et al., 2009; Hacker, 2004; Xu, Lithgow-Bertelloni, Stixrude, & Ritsema, 2008). The elastic properties under high pressure and temperature (PT) conditions were modelled using the expanded Hooke's law - Duhamel-Neumann equation, which allows computation of thermoelastic strains. Furthermore, we used a matrix model with multi-component inclusions that has no restrictions on the shape, orientation or concentration of inclusions. The stochastic method of conditional moments with the Mori-Tanaka computation scheme (Prodaivoda, Khoroshun, Nazarenko, & Vyzhva, 2000) is applied instead of the traditional Voigt-Reuss-Hill and Hashin-Shtrikman equations. We developed software for both forward and inverse problem calculation. The inverse algorithm uses methods of global non-linear optimization. We prefer a "model-based" approach for this ill-posed problem, which means that the problem is solved using geological and geophysical constraints for each parameter of the a priori and final models. Additionally, we check at least several different hypotheses explaining how it is possible to get a solution with a good fit to the observed data. If the a priori model is close to the real medium, the nearest solution would be found by the inversion. Otherwise, the global optimization is searching inside the

  3. A classical mechanics model for the interpretation of piezoelectric property data

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Andrew J., E-mail: a.j.bell@leeds.ac.uk [Institute for Materials Research, School of Chemical and Process Engineering, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2015-12-14

    In order to provide a means of understanding the relationship between the primary electromechanical coefficients and simple crystal chemistry parameters for piezoelectric materials, a static analysis of a 3 atom, dipolar molecule has been undertaken to derive relationships for elastic compliance s^E, dielectric permittivity ε^X, and piezoelectric charge coefficient d in terms of an effective ionic charge and two inter-atomic force constants. The relationships demonstrate the mutual interdependence of the three coefficients, in keeping with experimental evidence from a large dataset of commercial piezoelectric materials. It is shown that the electromechanical coupling coefficient k is purely an expression of the asymmetry in the two force constants or bond compliances. The treatment is extended to show that the quadratic electrostriction relation between strain and polarization, in both centrosymmetric and non-centrosymmetric systems, is due to the presence of a non-zero 2nd order term in the bond compliance. Comparison with experimental data explains the counter-intuitive, positive correlation of k with s^E and ε^X and supports the proposition that high piezoelectric activity in single crystals is dominated by large compliance coupled with asymmetry in the sub-cell force constants. However, the analysis also shows that in polycrystalline materials, the dielectric anisotropy of the constituent crystals can be more important for attaining large charge coefficients. The model provides a completely new methodology for the interpretation of piezoelectric and electrostrictive property data and suggests methods for rapid screening for high activity in candidate piezoelectric materials, both experimentally and by novel interrogation of ab initio calculations.
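
    For orientation, the mutual interdependence of the three coefficients referred to above is conventionally summarised by the textbook definition of the electromechanical coupling factor; this is standard background rather than a result specific to the paper:

    \[ k^{2} = \frac{d^{2}}{s^{E}\,\varepsilon^{X}} \]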

  4. A classical mechanics model for the interpretation of piezoelectric property data

    International Nuclear Information System (INIS)

    Bell, Andrew J.

    2015-01-01

    In order to provide a means of understanding the relationship between the primary electromechanical coefficients and simple crystal chemistry parameters for piezoelectric materials, a static analysis of a 3 atom, dipolar molecule has been undertaken to derive relationships for elastic compliance s^E, dielectric permittivity ε^X, and piezoelectric charge coefficient d in terms of an effective ionic charge and two inter-atomic force constants. The relationships demonstrate the mutual interdependence of the three coefficients, in keeping with experimental evidence from a large dataset of commercial piezoelectric materials. It is shown that the electromechanical coupling coefficient k is purely an expression of the asymmetry in the two force constants or bond compliances. The treatment is extended to show that the quadratic electrostriction relation between strain and polarization, in both centrosymmetric and non-centrosymmetric systems, is due to the presence of a non-zero 2nd order term in the bond compliance. Comparison with experimental data explains the counter-intuitive, positive correlation of k with s^E and ε^X and supports the proposition that high piezoelectric activity in single crystals is dominated by large compliance coupled with asymmetry in the sub-cell force constants. However, the analysis also shows that in polycrystalline materials, the dielectric anisotropy of the constituent crystals can be more important for attaining large charge coefficients. The model provides a completely new methodology for the interpretation of piezoelectric and electrostrictive property data and suggests methods for rapid screening for high activity in candidate piezoelectric materials, both experimentally and by novel interrogation of ab initio calculations.

  5. Describing the clinical reasoning process: application of a model of enablement to a pediatric case.

    Science.gov (United States)

    Furze, Jennifer; Nelson, Kelly; O'Hare, Megan; Ortner, Amanda; Threlkeld, A Joseph; Jensen, Gail M

    2013-04-01

    Clinical reasoning is a core tenet of physical therapy practice leading to optimal patient care. The purpose of this case was to describe the outcomes, subjective experience, and reflective clinical reasoning process for a child with cerebral palsy using the International Classification of Functioning, Disability, and Health (ICF) model. Application of the ICF framework to a 9-year-old boy with spastic triplegic cerebral palsy was utilized to capture the interwoven factors present in this case. Interventions in the pool occurred twice weekly for 1 h over a 10-week period. Immediately post and 4 months post-intervention, the child made functional and meaningful gains. The family unit also developed an enjoyment of exercising together. Each individual family member described psychological, emotional, or physical health improvements. Reflection using the ICF model as a framework to discuss clinical reasoning can highlight important factors contributing to effective patient management.

  6. A probabilistic generative model for quantification of DNA modifications enables analysis of demethylation pathways.

    Science.gov (United States)

    Äijö, Tarmo; Huang, Yun; Mannerström, Henrik; Chavez, Lukas; Tsagaratou, Ageliki; Rao, Anjana; Lähdesmäki, Harri

    2016-03-14

    We present a generative model, Lux, to quantify DNA methylation modifications from any combination of bisulfite sequencing approaches, including reduced, oxidative, TET-assisted, chemical-modification assisted, and methylase-assisted bisulfite sequencing data. Lux models all cytosine modifications (C, 5mC, 5hmC, 5fC, and 5caC) simultaneously together with experimental parameters, including bisulfite conversion and oxidation efficiencies, as well as various chemical labeling and protection steps. We show that Lux improves the quantification and comparison of cytosine modification levels and that Lux can process any oxidized methylcytosine sequencing data sets to quantify all cytosine modifications. Analysis of targeted data from Tet2-knockdown embryonic stem cells and T cells during development demonstrates DNA modification quantification at unprecedented detail, quantifies active demethylation pathways and reveals 5hmC localization in putative regulatory regions.
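
    As a toy illustration of the kind of likelihood such a generative model builds on (and not the actual Lux model, which treats several modification states and experimental steps jointly in a Bayesian setting), the sketch below estimates a single-site methylation level from standard bisulfite sequencing counts while accounting for an imperfect conversion efficiency; all numbers are assumptions.

    import numpy as np
    from scipy.stats import binom

    # Toy single-cytosine model for standard bisulfite sequencing (BS-seq).
    # A methylated cytosine is protected and read as "C"; an unmethylated one
    # is converted and read as "T" with probability equal to the conversion
    # efficiency. This sketches only one ingredient of a Lux-style model.

    def p_read_C(theta, conversion_eff):
        """Probability that a read shows 'C' given methylation level theta."""
        return theta + (1.0 - theta) * (1.0 - conversion_eff)

    def ml_methylation(n_C, n_total, conversion_eff=0.99):
        """Grid-search maximum-likelihood estimate of the methylation level."""
        grid = np.linspace(0.0, 1.0, 1001)
        loglik = binom.logpmf(n_C, n_total, p_read_C(grid, conversion_eff))
        return float(grid[np.argmax(loglik)])

    if __name__ == "__main__":
        # Example: 37 of 50 reads show 'C' at this cytosine.
        print(ml_methylation(n_C=37, n_total=50))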

  7. A model to enable indirect manufacturing options transactions between organisations: An application to the ceramic industry

    International Nuclear Information System (INIS)

    Rodriguez-Rodriguez, R.; Gomez-Gasquet, P.; Oltra-Badenes, R. F.

    2014-01-01

    In the current competitive contexts, it is widely accepted and proved that inter-enterprise collaboration lead in many occasions to better results. The Spanish ceramic industry must improve, dropping its manufacturing costs in order to be able to compete with low cost products coming from Asia. In this sense, this work presents the main results obtained from applying an innovative model, which facilitates the transfer of manufacturing options between two ceramic enterprises that share a common supplier in the scenario where one of them needs more manufacturing capacity than the one booked according to its demand forecast and the another need less. Then, some decisional mechanisms are applied, which output the values for certain parameters in order to augment the benefit of all the three participants. With the application of this model better organisational results both economic and of service level are achieved. (Author)

  8. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central. It comprises geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and the characterization of fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models, which stem from wide-angle seismic refraction data, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.
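
    A minimal example of the kind of forward gravity calculation that underlies such density modeling is sketched below: the vertical gravity anomaly of a buried sphere of anomalous density, using the standard point-mass formula. The depths, radii and density contrasts are illustrative assumptions, not values from the Costa Rica model.

    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def sphere_gravity_anomaly(x, depth, radius, delta_rho):
        """Vertical gravity anomaly (mGal) of a buried sphere along a profile.

        x: horizontal distances from the point above the sphere centre (m)
        depth: depth to sphere centre (m); radius: sphere radius (m)
        delta_rho: density contrast (kg/m^3)
        """
        volume = 4.0 / 3.0 * np.pi * radius**3
        gz = G * delta_rho * volume * depth / (x**2 + depth**2) ** 1.5  # m/s^2
        return gz * 1e5  # 1 m/s^2 = 1e5 mGal

    if __name__ == "__main__":
        # Illustrative low-density body (all values are assumptions).
        profile = np.linspace(-20e3, 20e3, 9)
        print(sphere_gravity_anomaly(profile, depth=8e3, radius=3e3, delta_rho=-150.0))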

  9. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    Science.gov (United States)

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-01-01

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609

  10. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    Directory of Open Access Journals (Sweden)

    Muhammad Golam Kibria

    2015-09-01

    Full Text Available User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  11. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    Science.gov (United States)

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  12. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems to provide uplink connectivity to massive numbers of connected things. To characterize the scalability of cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
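
    The stochastic-geometry ingredient of such an analysis can be illustrated with a short Monte Carlo sketch: IoT devices are scattered as a Poisson point process around a base station and the aggregate uplink interference is computed under a power-law path loss. This covers only the spatial part; the paper's coupled queueing (temporal) analysis and transmission strategies are not reproduced, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_aggregate_interference(density, cell_radius, tx_power, path_loss_exp, n_trials=5000):
        """Monte Carlo estimate of the mean aggregate uplink interference at a base
        station at the origin, from IoT devices scattered as a Poisson point process
        (PPP) in a disc, under a power-law path loss r**(-alpha)."""
        area = np.pi * cell_radius**2
        totals = np.empty(n_trials)
        for i in range(n_trials):
            n_devices = rng.poisson(density * area)
            # Uniform points in the disc: radius ~ cell_radius * sqrt(U)
            r = cell_radius * np.sqrt(rng.random(n_devices))
            r = np.maximum(r, 1.0)  # clip to 1 m to avoid the near-field singularity
            totals[i] = np.sum(tx_power * r ** (-path_loss_exp))
        return totals.mean()

    if __name__ == "__main__":
        # Assumed illustrative parameters: 1e-4 devices/m^2, 500 m cell, 100 mW, alpha = 3.5.
        print(mean_aggregate_interference(1e-4, 500.0, 0.1, 3.5), "W")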

  13. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems to provide uplink connectivity to massive numbers of connected things. To characterize the scalability of cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.

  14. Interpreting aerosol lifetimes using the GEOS-Chem model and constraints from radionuclide measurements

    Energy Technology Data Exchange (ETDEWEB)

    Croft, B. [Dalhousie Univ., Halifax (Canada). Dept. of Physics and Atmospheric Science; Pierce, J.R. [Dalhousie Univ., Halifax (Canada). Dept. of Physics and Atmospheric Science; Colorado State Univ., Fort Collins, CO (United States); Martin, R.V. [Dalhousie Univ., Halifax (Canada). Dept. of Physics and Atmospheric Science; Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States)

    2014-07-01

    Fukushima emissions), but similar to the mean lifetime of 3.9 days for the 137Cs emissions injected with a uniform spread through the model's Northern Hemisphere boundary layer. Simulated e-folding times were insensitive to emission parameters (altitude, location, and time), suggesting that these measurement-based e-folding times provide a robust constraint on simulated e-folding times. Despite the reasonable global mean agreement of GEOS-Chem with measurement e-folding times, site by site comparisons yield differences of up to a factor of two, which suggest possible deficiencies in either the model transport, removal processes or the representation of 137Cs removal, particularly in the tropics and at high latitudes. There is an ongoing need to develop constraints on aerosol lifetimes, but these measurement-based constraints must be carefully interpreted given the sensitivity of mean lifetimes and e-folding times to both mixing and removal processes.
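
    The e-folding times discussed above are typically obtained by fitting an exponential decay to an observed or simulated concentration time series; a minimal sketch of such a fit on synthetic data (not measurements from the study) is given below.

    import numpy as np

    def efolding_time(days, concentration):
        """Estimate the e-folding time (days) of a decaying time series by
        fitting ln(concentration) = a - t / tau with a least-squares line."""
        slope, _ = np.polyfit(days, np.log(concentration), 1)
        return -1.0 / slope

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t = np.arange(0.0, 30.0)                       # days after emission
        true_tau = 7.0                                 # assumed true e-folding time
        c = 100.0 * np.exp(-t / true_tau) * np.exp(0.05 * rng.standard_normal(t.size))
        print(efolding_time(t, c))                     # recovers a value close to 7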

  15. Predictability and interpretability of hybrid link-level crash frequency models for urban arterials compared to cluster-based and general negative binomial regression models.

    Science.gov (United States)

    Najaf, Pooya; Duddu, Venkata R; Pulugurtha, Srinivas S

    2018-03-01

    Machine learning (ML) techniques have higher prediction accuracy compared to conventional statistical methods for crash frequency modelling. However, their black-box nature limits the interpretability. The objective of this research is to combine both ML and statistical methods to develop hybrid link-level crash frequency models with high predictability and interpretability. For this purpose, the M5' model trees method (M5') is introduced and applied to classify the crash data and then calibrate a model for each homogeneous class. The data for 1134 and 345 randomly selected links on urban arterials in the city of Charlotte, North Carolina were used to develop and validate the models, respectively. The outputs from the hybrid approach are compared with the outputs from cluster-based negative binomial regression (NBR) and general NBR models. Findings indicate that M5' has high predictability and is very reliable to interpret the role of different attributes on crash frequency compared to other developed models.
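
    A compact stand-in for the hybrid idea is sketched below: a shallow decision tree first partitions links into homogeneous classes, and a separate count-regression model is then calibrated within each class. A Poisson GLM is used here in place of both the M5' machinery and negative binomial regression, and the link features and crash counts are synthetic placeholders.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(0)

    # Synthetic link features: [AADT (1000 veh/day), length (mi), speed limit (mph)].
    X = np.column_stack([
        rng.uniform(5.0, 60.0, 1000),
        rng.uniform(0.1, 2.0, 1000),
        rng.choice([35.0, 45.0, 55.0], 1000),
    ])
    # Synthetic crash counts whose rate depends on the features.
    rate = 0.01 * X[:, 0] * X[:, 1] * (1.0 + (X[:, 2] - 35.0) / 40.0)
    y = rng.poisson(rate)

    # Step 1: a shallow tree partitions links into homogeneous classes
    # (a stand-in for the M5' model-tree split).
    tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100, random_state=0).fit(X, y)
    leaf_ids = tree.apply(X)

    # Step 2: calibrate one count-regression model per class
    # (Poisson GLM here, standing in for negative binomial regression).
    class_models = {lf: PoissonRegressor(max_iter=500).fit(X[leaf_ids == lf], y[leaf_ids == lf])
                    for lf in np.unique(leaf_ids)}

    def predict_crash_frequency(X_new):
        leaves = tree.apply(X_new)
        return np.array([class_models[lf].predict(row[None, :])[0]
                         for lf, row in zip(leaves, X_new)])

    print(predict_crash_frequency(X[:5]))
    print(y[:5])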

  16. On the Physical Interpretation of the Saleh-Valenzuela Model and the definition of its power delay profiles

    NARCIS (Netherlands)

    Meijerink, Arjan; Molisch, Andreas F.

    2014-01-01

    The physical motivation and interpretation of the stochastic propagation channel model of Saleh and Valenzuela are discussed in detail. This motivation mainly relies on assumptions on the stochastic properties of the positions of transmitter, receiver and scatterers in the propagation environment,
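
    For reference, a minimal generator of one Saleh-Valenzuela impulse-response realization is sketched below (Poisson cluster arrivals, Poisson ray arrivals within clusters, and a double-exponential power delay profile); the rate and decay parameters are illustrative assumptions, not values discussed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def saleh_valenzuela_realization(Lambda=0.1, lam=1.0, Gamma=30.0, gamma=5.0, t_max=200.0):
        """One Saleh-Valenzuela channel impulse-response realization.

        Lambda: cluster arrival rate (1/ns); lam: ray arrival rate within a cluster (1/ns);
        Gamma, gamma: cluster and ray power-decay time constants (ns); t_max: window (ns).
        Returns (delays_ns, complex_gains); ray amplitudes are Rayleigh with mean-square
        value following the double-exponential power delay profile."""
        delays, gains = [], []
        T = 0.0                                    # first cluster arrives at t = 0
        while T < t_max:
            tau = 0.0
            while T + tau < t_max:
                mean_power = np.exp(-T / Gamma) * np.exp(-tau / gamma)
                amplitude = rng.rayleigh(scale=np.sqrt(mean_power / 2.0))
                phase = rng.uniform(0.0, 2.0 * np.pi)
                delays.append(T + tau)
                gains.append(amplitude * np.exp(1j * phase))
                tau += rng.exponential(1.0 / lam)  # next ray within the cluster
            T += rng.exponential(1.0 / Lambda)     # next cluster
        return np.array(delays), np.array(gains)

    if __name__ == "__main__":
        d, g = saleh_valenzuela_realization()
        print(len(d), "paths; first delays (ns):", np.round(d[:5], 2))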

  17. Teacher Effectiveness Examined as a System: Interpretive Structural Modeling and Facilitation Sessions with U.S. and Japanese Students

    Science.gov (United States)

    Georgakopoulos, Alexia

    2009-01-01

    This study challenges narrow definitions of teacher effectiveness and uses a systems approach to investigate teacher effectiveness as a multi-dimensional, holistic phenomenon. The methods of Nominal Group Technique and Interpretive Structural Modeling were used to assist U.S. and Japanese students separately construct influence structures during…

  18. SciDAC-Data, A Project to Enabling Data Driven Modeling of Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, M.; Ding, P.; Aliaga, L.; Tsaris, A.; Norman, A.; Lyon, A.; Ross, R.

    2016-10-10

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab Data Center on the organization, movement, and consumption of High Energy Physics data. The project will analyze the analysis patterns and data organization that have been used by the NOvA, MicroBooNE, MINERvA and other experiments, to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership class exascale computing facilities. We will address the use of the SciDAC-Data distributions acquired from the Fermilab Data Center’s analysis workflows, corresponding to around 71,000 HEP jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in HPC environments. In particular we describe in detail how the Sequential Access via Metadata (SAM) data handling system in combination with the dCache/Enstore based data archive facilities have been analyzed to develop radically different models of HEP data analysis. We present how the simulation may be used to analyze the impact of design choices in archive facilities.

  19. Remote patient management: technology-enabled innovation and evolving business models for chronic disease care.

    Science.gov (United States)

    Coye, Molly Joel; Haselkorn, Ateret; DeMello, Steven

    2009-01-01

    Remote patient management (RPM) is a transformative technology that improves chronic care management while reducing net spending for chronic disease. Broadly deployed within the Veterans Health Administration and in many small trials elsewhere, RPM has been shown to support patient self-management, shift responsibilities to non-clinical providers, and reduce the use of emergency department and hospital services. Because transformative technologies offer major opportunities to advance national goals of improved quality and efficiency in health care, it is important to understand their evolution, the experiences of early adopters, and the business models that may support their deployment.

  20. Genome-scale modeling enables metabolic engineering of Saccharomyces cerevisiae for succinic acid production.

    Science.gov (United States)

    Agren, Rasmus; Otero, José Manuel; Nielsen, Jens

    2013-07-01

    In this work, we describe the application of a genome-scale metabolic model and flux balance analysis for the prediction of succinic acid overproduction strategies in Saccharomyces cerevisiae. The top three single gene deletion strategies, Δmdh1, Δoac1, and Δdic1, were tested using knock-out strains cultivated anaerobically on glucose, coupled with physiological and DNA microarray characterization. While Δmdh1 and Δoac1 strains failed to produce succinate, Δdic1 produced 0.02 C-mol/C-mol glucose, in close agreement with model predictions (0.03 C-mol/C-mol glucose). Transcriptional profiling suggests that succinate formation is coupled to mitochondrial redox balancing, and more specifically, reductive TCA cycle activity. While far from industrial titers, this proof-of-concept suggests that in silico predictions coupled with experimental validation can be used to identify novel and non-intuitive metabolic engineering strategies.
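
    The flux balance analysis used for such predictions reduces to a linear program: maximize an objective flux subject to the steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch (not the genome-scale S. cerevisiae model used in the study) is given below.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network: A_ext -> A (uptake, v1), A -> B (v2), B -> product (v3).
    # Internal metabolites: A, B. Steady state requires S @ v = 0.
    S = np.array([
        [1.0, -1.0,  0.0],   # A: produced by v1, consumed by v2
        [0.0,  1.0, -1.0],   # B: produced by v2, consumed by v3
    ])

    c = np.array([0.0, 0.0, -1.0])                     # maximize v3 (linprog minimizes)
    bounds = [(0.0, 10.0), (0.0, 10.0), (0.0, None)]   # uptake limited to 10 units

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x)                    # expected: [10, 10, 10]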

  1. Computational Laboratory Astrophysics to Enable Transport Modeling of Protons and Hydrogen in Stellar Winds, the ISM, and other Astrophysical Environments

    Science.gov (United States)

    Schultz, David

    As recognized prominently by the APRA program, interpretation of NASA astrophysical mission observations requires significant products of laboratory astrophysics, for example, spectral lines and transition probabilities, electron-, proton-, or heavy-particle collision data. Availability of these data underpin robust and validated models of astrophysical emissions and absorptions, energy, momentum, and particle transport, dynamics, and reactions. Therefore, measured or computationally derived, analyzed, and readily available laboratory astrophysics data significantly enhances the scientific return on NASA missions such as HST, Spitzer, and JWST. In the present work a comprehensive set of data will be developed for the ubiquitous proton-hydrogen and hydrogen-hydrogen collisions in astrophysical environments including ISM shocks, supernova remnants and bubbles, HI clouds, young stellar objects, and winds within stellar spheres, covering the necessary wide range of energy- and charge-changing channels, collision energies, and most relevant scattering parameters. In addition, building on preliminary work, a transport and reaction simulation will be developed incorporating the elastic and inelastic collision data collected and produced. The work will build upon significant previous efforts of the principal investigators and collaborators, will result in a comprehensive data set required for modeling these environments and interpreting NASA astrophysical mission observations, and will benefit from feedback from collaborators who are active users of the work proposed.

  2. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Science.gov (United States)

    Choi, Ickwon; Chung, Amy W; Suscovich, Todd J; Rerks-Ngarm, Supachai; Pitisuttithum, Punnee; Nitayaphan, Sorachai; Kaewkungwal, Jaranit; O'Connell, Robert J; Francis, Donald; Robb, Merlin L; Michael, Nelson L; Kim, Jerome H; Alter, Galit; Ackerman, Margaret E; Bailey-Kellogg, Chris

    2015-04-01

    The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
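
    A minimal sketch of the cross-validated feature-to-function modeling described above is given below, using a random forest regressor on synthetic data; it stands in for the general approach only and uses neither the RV144 data nor the specific learners of the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 100 vaccinees x 20 antibody features (e.g. IgG subclass
    # titers against different antigens), and one functional readout (e.g. an ADCP score).
    X = rng.normal(size=(100, 20))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.5, size=100)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print("cross-validated R^2 per fold:", np.round(r2_scores, 2))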

  3. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  4. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    International Nuclear Information System (INIS)

    Ehlert, Kurt; Loewe, Laurence

    2014-01-01

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise
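
    For context, the sketch below implements a naive Gillespie Direct Method for a birth-death system; every propensity is recomputed after every event, which is precisely the cost that Lazy Updating defers when a hub species participates in many reactions. The reaction rates are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0):
        """Naive Gillespie Direct Method for  0 -> X  (rate k_birth)
        and  X -> 0  (rate k_death * x).  All propensities are recomputed
        after every event; Lazy Updating would postpone updates that depend
        on frequently changing 'hub' species."""
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_end:
            a = np.array([k_birth, k_death * x])   # propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)          # time to the next event
            if rng.random() * a0 < a[0]:
                x += 1                              # birth
            else:
                x -= 1                              # death
            times.append(t)
            states.append(x)
        return np.array(times), np.array(states)

    if __name__ == "__main__":
        t, x = gillespie_birth_death()
        print("final count:", x[-1], "(steady-state mean = k_birth/k_death = 100)")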

  5. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior information approach derived from terrain-based navigation. The motivation for this work is not only for real-time state estimation, but also for accurately reconstructing the actual path that the vehicle traversed to contextualize the gathered data, with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration, ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2

  6. Analysis and interpretation of the model of a Faraday cage for electromagnetic compatibility testing

    Directory of Open Access Journals (Sweden)

    Nenad V. Munić

    2014-02-01

    Full Text Available In order to improve the work of the Laboratory for Electromagnetic Compatibility Testing in the Technical Test Center (TTC), we investigated the influence of the Faraday cage on measurement results. The primary goal of this study is the simulation of the fields in the cage, especially around resonant frequencies, in order to be able to predict results of measurements of devices under test in the anechoic chamber or in any other environment. We developed simulation (computer) models of the cage step by step, by using the Wipl-D program and by comparing the numerical results with measurements as well as by resolving difficulties due to the complex structure and imperfections of the cage. The subject of this paper is to present these simulation models and the corresponding results of the computations and measurements. Construction of the cage: The cage is made of steel plates with the dimensions 1.25 m x 2.5 m. The base of the cage is a square; the footprint interior dimensions are 3.76 m x 3.76 m, and the height is 2.5 m. The cage ceiling is lowered by plasticized aluminum strips. The strips are loosely attached to the carriers which are screwed to the ceiling. The cage has four ventilation openings (two on the ceiling and two on one wall), made of honeycomb waveguide holes. In one corner of the cage, there is a single door with springs made of beryllium bronze. For frequencies of a few tens of MHz, the skin effect is fully developed in the cage walls. By measuring the input impedance of the wire line parallel to a wall of the cage, we calculated the surface losses of the cage plates. In addition, we used a magnetic probe to detect shield discontinuities. We generated a strong current at a frequency of 106 kHz outside the cage and measured the magnetic field inside the cage at the places of cage shield discontinuities. In this paper, we showed the influence of these places on the measurement results, especially on the qualitative and quantitative

  7. Modeling ductal carcinoma in situ: a HER2-Notch3 collaboration enables luminal filling.

    LENUS (Irish Health Repository)

    Pradeep, C-R

    2012-02-16

    A large fraction of ductal carcinoma in situ (DCIS), a non-invasive precursor lesion of invasive breast cancer, overexpresses the HER2/neu oncogene. The ducts of DCIS are abnormally filled with cells that evade apoptosis, but the underlying mechanisms remain incompletely understood. We overexpressed HER2 in mammary epithelial cells and observed growth factor-independent proliferation. When grown in extracellular matrix as three-dimensional spheroids, control cells developed a hollow lumen, but HER2-overexpressing cells populated the lumen by evading apoptosis. We demonstrate that HER2 overexpression in this cellular model of DCIS drives transcriptional upregulation of multiple components of the Notch survival pathway. Importantly, luminal filling required upregulation of a signaling pathway comprising Notch3, its cleaved intracellular domain and the transcriptional regulator HES1, resulting in elevated levels of c-MYC and cyclin D1. In line with HER2-Notch3 collaboration, drugs intercepting either arm reverted the DCIS-like phenotype. In addition, we report upregulation of Notch3 in hyperplastic lesions of HER2 transgenic animals, as well as an association between HER2 levels and expression levels of components of the Notch pathway in tumor specimens of breast cancer patients. Therefore, it is conceivable that the integration of the Notch and HER2 signaling pathways contributes to the pathophysiology of DCIS.

  8. Nuclear criticality safety: general. 3. Tokaimura Criticality Accident: Point Model Stochastic Neutronic Interpretation

    International Nuclear Information System (INIS)

    Mechitoua, Boukhmes

    2001-01-01

    step is based on the knowledge of the reactivity insertion. 2. Initiation probability for one neutron, P(t). 3. Initiation probability with the neutron source, P_S(t). Japanese specialists told us that the accident happened during the seventh batch pouring. They estimated k_eff before and at the end of this operation: after the sixth batch, K = 0.981, and at the end of the seventh batch, K = 1.030. When the accident happened (neutron burst), 3 $ was inserted in 15 s, so if we suppose a linear insertion, we have a slope equal to 20 c/s. We may write K(t) = 1 + wt with w = 0.2 β = 0.00160/s. During the accident, there was between 14 and 16 kg of uranium with an enrichment of 18.8%. We have calculated P_S(t) and we have taken into account six internal source levels: 1. spontaneous fission: 150 to 170 to 200 n/s; 2. (α, n) reactions and others of this type, and amplification of the internal source during the delayed critical phase: 500 to 1000 to 2000 n/s. In Fig. 2, we can see that the initiation occurred almost surely before 7 s and with a probability close to 0.46 before 2 s with a source of 200 n/s. With a source of 2000 n/s, we have higher initiation probabilities; for example, the initiation occurred almost surely before 2 s and with a probability close to 0.77 before 1 s after the critical time. These results are interesting because they show that a supercritical system does not lead immediately to initiation. One may have a short supercritical excursion with no neutron production. The point model approach is useful for gaining a good understanding of what the stochastic neutronic contribution can be for the interpretation of criticality accidents. The results described in this paper may be useful for the interpretation of the time delay between the critical state time and the neutron burst. The thought process we have described may be used in the 'real world', that is, with multigroup or continuous-energy simulations.

  9. Conceptual model and economic experiments to explain nonpersistence and enable mechanism designs fostering behavioral change.

    Science.gov (United States)

    Djawadi, Behnud Mir; Fahr, René; Turk, Florian

    2014-12-01

    Medical nonpersistence is a worldwide problem of striking magnitude. Although many fields of studies including epidemiology, sociology, and psychology try to identify determinants for medical nonpersistence, comprehensive research to explain medical nonpersistence from an economics perspective is rather scarce. The aim of the study was to develop a conceptual framework that augments standard economic choice theory with psychological concepts of behavioral economics to understand how patients' preferences for discontinuing with therapy arise over the course of the medical treatment. The availability of such a framework allows the targeted design of mechanisms for intervention strategies. Our conceptual framework models the patient as an active economic agent who evaluates the benefits and costs for continuing with therapy. We argue that a combination of loss aversion and mental accounting operations explains why patients discontinue with therapy at a specific point in time. We designed a randomized laboratory economic experiment with a student subject pool to investigate the behavioral predictions. Subjects continue with therapy as long as experienced utility losses have to be compensated. As soon as previous losses are evened out, subjects perceive the marginal benefit of persistence lower than in the beginning of the treatment. Consequently, subjects start to discontinue with therapy. Our results highlight that concepts of behavioral economics capture the dynamic structure of medical nonpersistence better than does standard economic choice theory. We recommend that behavioral economics should be a mandatory part of the development of possible intervention strategies aimed at improving patients' compliance and persistence behavior. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  10. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    Science.gov (United States)

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

    Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
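
    The classical k-spectrum kernel that the paper's kernels extend can be written in a few lines: count the overlapping k-mers in each sequence and take the dot product of the count vectors. The sketch below shows this baseline only; the di-mismatch and shape-augmented kernels are not reproduced.

    from collections import Counter

    def kmer_counts(seq, k):
        """Count all overlapping k-mers in a DNA sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def spectrum_kernel(seq1, seq2, k=3):
        """Classical k-spectrum kernel: dot product of k-mer count vectors."""
        c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
        return sum(c1[kmer] * c2[kmer] for kmer in c1.keys() & c2.keys())

    if __name__ == "__main__":
        print(spectrum_kernel("ACGTACGTGG", "ACGTGGTTAC", k=3))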

  11. Modeling and Inversion Methods for the Interpretation of Resistivity Logging Tool Response

    NARCIS (Netherlands)

    Anderson, B.I.

    2001-01-01

    The electrical resistivity measured by well logging tools is one of the most important rock parameters for indicating the amount of hydrocarbons present in a reservoir. The main interpretation challenge is to invert the measured data, solving for the true resistivity values in each zone of a

  12. Fuzzy knot theory interpretation of Yang-Mills instantons and Witten's 5-Brane model

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    A knot theory interpretation of 't Hooft's instanton based on hyperbolic volume, crossing numbers and exceptional Lie symmetry groups is given. Subsequently it is shown that although instantons and particle-like states of heterotic superstrings may appear to be different concepts, on a very deep fuzzy level they are not.

  13. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Science.gov (United States)

    2012-09-01

    interpreting the state vector as the health indicator and a threshold is used on this variable in order to compute EOL (end-of-life) and RUL. Here, we...End-of-life (EOL) would match the true spread and would not change from one experiment to another. This is, however, in practice impossible to achieve
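
    A minimal sketch of the general scheme described above (not the paper's algorithm) is given below: a Kalman filter tracks a scalar health indicator and its degradation rate, and the filtered mean is then propagated forward to a failure threshold to obtain an EOL/RUL estimate; the degradation model, noise levels and threshold are assumptions.

    import numpy as np

    def kf_rul(measurements, dt=1.0, threshold=10.0, q=1e-3, r=0.25):
        """Constant-degradation-rate Kalman filter on a health indicator, followed
        by deterministic propagation of the filtered mean to a failure threshold.
        State: [health, degradation_rate]. Returns (filtered_health, rul_steps)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        H = np.array([[1.0, 0.0]])
        Q = q * np.eye(2)
        x = np.array([measurements[0], 0.0])
        P = np.eye(2)
        for z in measurements:
            # predict
            x = F @ x
            P = F @ P @ F.T + Q
            # update
            y = z - H @ x
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        # Propagate the mean forward until the health indicator crosses the threshold.
        rul, xf = 0, x.copy()
        while xf[0] < threshold and rul < 10000:
            xf = F @ xf
            rul += 1
        return x[0], rul

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t = np.arange(60.0)
        z = 0.1 * t + rng.normal(scale=0.5, size=t.size)   # noisy degrading indicator
        health, rul = kf_rul(z)
        print("filtered health:", round(float(health), 2), "estimated RUL:", rul, "steps")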

  14. A Model for Teaching Midrash in the Primary School: Forming Understandings of Rabbinic Interpretation of Scripture

    Science.gov (United States)

    Sigel, Deena

    2010-01-01

    In Jewish primary schools, religious education is centred on the study of Torah. At Sinai, according to Jewish tradition, Moses received the Torah in two parts: a written tradition (Hebrew scripture) and an oral tradition. The oral tradition contained much scriptural "interpretation" known, in Hebrew, as midrash. Midrash continued to be…

  15. DLNA: a simple one-dimensional dynamical model as a possible interpretation of fragment size distribution in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Lacroix, D.; Dayras, R.

    1996-08-01

    The possibility of interpreting multifragmentation data obtained from heavy-ion collisions at intermediate energies with a new type of model, the DLNA (Dynamical Limited Nuclear Aggregation), is discussed. This model is connected to a more general class of models presenting Self-Organized Criticality (SOC). It is shown that the fragment size distributions exhibit a power-law dependence comparable to those obtained in second-order phase transition or percolation models. Fluctuations in terms of scaled factorial moments and cumulants are also studied: no signal of intermittency is seen. (K.A.)

  16. MALDI-TOF-MS with PLS Modeling Enables Strain Typing of the Bacterial Plant Pathogen Xanthomonas axonopodis

    Science.gov (United States)

    Sindt, Nathan M.; Robison, Faith; Brick, Mark A.; Schwartz, Howard F.; Heuberger, Adam L.; Prenni, Jessica E.

    2018-02-01

    Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is a fast and effective tool for microbial species identification. However, current approaches are limited to species-level identification even when genetic differences are known. Here, we present a novel workflow that applies the statistical method of partial least squares discriminant analysis (PLS-DA) to MALDI-TOF-MS protein fingerprint data of Xanthomonas axonopodis, an important bacterial plant pathogen of fruit and vegetable crops. Mass spectra of 32 X. axonopodis strains were used to create a mass spectral library and PLS-DA was employed to model the closely related strains. A robust workflow was designed to optimize the PLS-DA model by assessing the model performance over a range of signal-to-noise ratios (s/n) and mass filter (MF) thresholds. The optimized parameters were observed to be s/n = 3 and MF = 0.7. The model correctly classified 83% of spectra withheld from the model as a test set. A new decision rule was developed, termed the rolled-up Maximum Decision Rule (ruMDR), and this method improved identification rates to 92%. These results demonstrate that MALDI-TOF-MS protein fingerprints of bacterial isolates can be utilized to enable identification at the strain level. Furthermore, the open-source framework of this workflow allows for broad implementation across various instrument platforms as well as integration with alternative modeling and classification algorithms.
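    For readers unfamiliar with PLS-DA, the sketch below shows the general pattern of fitting a PLS regression on one-hot class labels and taking the arg-max of the predictions; it uses synthetic data and scikit-learn as one possible implementation, and does not reproduce the paper's signal-to-noise filtering or ruMDR decision rule.

    ```python
    # Hedged sketch of PLS-DA for strain-level classification of mass-spectral
    # fingerprints, using scikit-learn's PLSRegression on one-hot class labels.
    # Spectra and strain labels below are synthetic placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_strains, n_spectra, n_bins = 4, 80, 500            # hypothetical sizes
    y = rng.integers(0, n_strains, size=n_spectra)        # strain labels
    X = rng.normal(size=(n_spectra, n_bins)) + y[:, None] * 0.2  # toy "spectra"

    Y = np.eye(n_strains)[y]                              # one-hot encode classes
    pls = PLSRegression(n_components=5).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)                  # predicted strain index
    print("training accuracy:", (pred == y).mean())
    ```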

  17. Improving Service Quality in Technical Education: Use of Interpretive Structural Modeling

    Science.gov (United States)

    Debnath, Roma Mitra; Shankar, Ravi

    2012-01-01

    Purpose: The purpose of this paper is to identify the relevant enablers and barriers related to technical education. It seeks to critically analyze the relationship amongst them so that policy makers can focus on relevant parameters to improve the service quality of technical education. Design/methodology/approach: The present study employs the…

  18. Using Water and Agrochemicals in the Soil, Crop and Vadose Environment (WAVE) Model to Interpret Nitrogen Balance and Soil Water Reserve Under Different Tillage Managements

    Directory of Open Access Journals (Sweden)

    Zare Narjes

    2014-10-01

    Full Text Available Applying models to interpret soil, water and plant relationships under different conditions enables us to study different management scenarios and then to determine the optimum option. The aim of this study was to use the Water and Agrochemicals in the soil, crop and Vadose Environment (WAVE) model to predict water content, nitrogen balance and its components over a corn crop season under both conventional tillage (CT) and direct seeding into mulch (DSM). In this study a corn crop was cultivated at the Irstea experimental station in Montpellier, France under both CT and DSM. Model input data were weather data, nitrogen content in both the soil and the mulch at the beginning of the season, and the amounts and dates of irrigation and nitrogen application. The results show good agreement between measurements and model simulations (nRMSE < 10%). Using the model outputs, the nitrogen balance and its components were compared with measured data in both systems. The amounts of N leached in the validation period were 10 and 8 kg ha–1 in the CT and DSM plots, respectively; these results therefore showed better performance of DSM in comparison with CT. Simulated nitrogen leaching from CT and DSM can help us to assess the groundwater pollution risk caused by these two systems.
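    The nRMSE agreement criterion quoted above can be computed along the following lines; normalizing by the mean of the measurements is one common convention (the study's exact definition is not stated here), and the measured/simulated values below are placeholders rather than the study's data.

    ```python
    # Minimal sketch of a normalized RMSE between measured and simulated values,
    # one way to express an "nRMSE < 10%" agreement criterion. Values are
    # hypothetical placeholders.
    import numpy as np

    def nrmse(measured, simulated):
        measured, simulated = np.asarray(measured), np.asarray(simulated)
        rmse = np.sqrt(np.mean((simulated - measured) ** 2))
        return 100.0 * rmse / measured.mean()

    soil_water_meas = [0.21, 0.24, 0.22, 0.26]   # hypothetical observations
    soil_water_sim  = [0.20, 0.25, 0.23, 0.27]   # hypothetical model output
    print(f"nRMSE = {nrmse(soil_water_meas, soil_water_sim):.1f} %")
    ```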

  19. Construction and Optimization of a Heterologous Pathway for Protocatechuate Catabolism in Escherichia coli Enables Bioconversion of Model Aromatic Compounds.

    Science.gov (United States)

    Clarkson, Sonya M; Giannone, Richard J; Kridelbaugh, Donna M; Elkins, James G; Guss, Adam M; Michener, Joshua K

    2017-09-15

    The production of biofuels from lignocellulose yields a substantial lignin by-product stream that currently has few applications. Biological conversion of lignin-derived compounds into chemicals and fuels has the potential to improve the economics of lignocellulose-derived biofuels, but few microbes are able both to catabolize lignin-derived aromatic compounds and to generate valuable products. While Escherichia coli has been engineered to produce a variety of fuels and chemicals, it is incapable of catabolizing most aromatic compounds. Therefore, we engineered E. coli to catabolize protocatechuate, a common intermediate in lignin degradation, as the sole source of carbon and energy via heterologous expression of a nine-gene pathway from Pseudomonas putida KT2440. We next used experimental evolution to select for mutations that increased growth with protocatechuate more than 2-fold. Increasing the strength of a single ribosome binding site in the heterologous pathway was sufficient to recapitulate the increased growth. After optimization of the core pathway, we extended the pathway to enable catabolism of a second model compound, 4-hydroxybenzoate. These engineered strains will be useful platforms to discover, characterize, and optimize pathways for conversions of lignin-derived aromatics. IMPORTANCE Lignin is a challenging substrate for microbial catabolism due to its polymeric and heterogeneous chemical structure. Therefore, engineering microbes for improved catabolism of lignin-derived aromatic compounds will require the assembly of an entire network of catabolic reactions, including pathways from genetically intractable strains. Constructing defined pathways for aromatic compound degradation in a model host would allow rapid identification, characterization, and optimization of novel pathways. We constructed and optimized one such pathway in E. coli to enable catabolism of a model aromatic compound, protocatechuate, and then extended the pathway to a related

  20. GATE V6: a major enhancement of the GATE simulation platform enabling modelling of CT and radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Jan, S; Becheva, E [DSV/I2BM/SHFJ, Commissariat a l' Energie Atomique, Orsay (France); Benoit, D; Rehfeld, N; Stute, S; Buvat, I [IMNC-UMR 8165 CNRS-Paris 7 and Paris 11 Universities, 15 rue Georges Clemenceau, 91406 Orsay Cedex (France); Carlier, T [INSERM U892-Cancer Research Center, University of Nantes, Nantes (France); Cassol, F; Morel, C [Centre de physique des particules de Marseille, CNRS-IN2P3 and Universite de la Mediterranee, Aix-Marseille II, 163, avenue de Luminy, 13288 Marseille Cedex 09 (France); Descourt, P; Visvikis, D [INSERM, U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Brest (France); Frisson, T; Grevillot, L; Guigues, L; Sarrut, D; Zahra, N [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U630, INSA-Lyon, Universite Lyon 1, Centre Leon Berard (France); Maigne, L; Perrot, Y [Laboratoire de Physique Corpusculaire, 24 Avenue des Landais, 63177 Aubiere Cedex (France); Schaart, D R [Delft University of Technology, Radiation Detection and Medical Imaging, Mekelweg 15, 2629 JB Delft (Netherlands); Pietrzyk, U, E-mail: buvat@imnc.in2p3.fr [Reseach Center Juelich, Institute of Neurosciences and Medicine and Department of Physics, University of Wuppertal (Germany)

    2011-02-21

    GATE (Geant4 Application for Emission Tomography) is a Monte Carlo simulation platform developed by the OpenGATE collaboration since 2001 and first publicly released in 2004. Dedicated to the modelling of planar scintigraphy, single photon emission computed tomography (SPECT) and positron emission tomography (PET) acquisitions, this platform is widely used to assist PET and SPECT research. A recent extension of this platform, released by the OpenGATE collaboration as GATE V6, now also enables modelling of x-ray computed tomography and radiation therapy experiments. This paper presents an overview of the main additions and improvements implemented in GATE since the publication of the initial GATE paper (Jan et al 2004 Phys. Med. Biol. 49 4543-61). This includes new models available in GATE to simulate optical and hadronic processes, novelties in modelling tracer, organ or detector motion, new options for speeding up GATE simulations, examples illustrating the use of GATE V6 in radiotherapy applications and CT simulations, and preliminary results regarding the validation of GATE V6 for radiation therapy applications. Upon completion of extensive validation studies, GATE is expected to become a valuable tool for simulations involving both radiotherapy and imaging.

  1. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three methods (the Kaplan-Meier estimator, the Cox model, and the dynamic Bayesian network). However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
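    On the classical side of this comparison, a Kaplan-Meier fit and a Cox proportional hazards fit can be sketched as below, here with the lifelines package as one possible implementation and purely synthetic follow-up data; the dynamic Bayesian network side is not shown.

    ```python
    # Sketch of the "classical" methods in the comparison (Kaplan-Meier and Cox
    # proportional hazards) on synthetic follow-up data, using lifelines as one
    # possible implementation. Not the cervical screening data of the study.
    import numpy as np
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    rng = np.random.default_rng(1)
    n = 200
    df = pd.DataFrame({
        "time": rng.exponential(5.0, size=n),        # follow-up time (years)
        "event": rng.integers(0, 2, size=n),         # 1 = event observed
        "age": rng.normal(45, 10, size=n),           # hypothetical covariate
    })

    kmf = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
    print(kmf.median_survival_time_)

    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "p"]])
    ```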

  2. Intuitive and interpretable visual communication of a complex statistical model of disease progression and risk.

    Science.gov (United States)

    Jieyi Li; Arandjelovic, Ognjen

    2017-07-01

    Computer science and machine learning in particular are increasingly lauded for their potential to aid medical practice. However, the highly technical nature of the state of the art techniques can be a major obstacle in their usability by health care professionals and thus, their adoption and actual practical benefit. In this paper we describe a software tool which focuses on the visualization of predictions made by a recently developed method which leverages data in the form of large scale electronic records for making diagnostic predictions. Guided by risk predictions, our tool allows the user to explore interactively different diagnostic trajectories, or display cumulative long term prognostics, in an intuitive and easily interpretable manner.

  3. Application of the Perceptual Factors, Enabling and Reinforcing Model on Pap Smear Screening in Iranian Northern Women

    Directory of Open Access Journals (Sweden)

    Abolhassan Naghibi

    2016-03-01

    Full Text Available Background and Purpose: Cervical cancer is the most prevalent cancer among women in the world. Cervical cancer has no symptoms and can be treated if diagnosed in the first stage of the disease. The aim of this study was to survey the factors affecting Pap smear testing in terms of the perceptual, enabling and reinforcing constructs of the PEN-3 model in women. Materials and Methods: This was a descriptive cross-sectional study. The sample comprised 416 married women selected by random sampling. The questionnaire had 50 questions based on the PEN-3 model constructs. Data were analyzed by descriptive statistics and logistic regression in SPSS 20. Results: The mean age of the women was 32.70 ± 21.00 years. Knowledge of risk factors and screening methods for cervical cancer was 37.2%. About 40% of the women had a history of Pap smears. The most important perceptual factors were a family history of the disease, encouragement of others to have a Pap smear, and fear of cervical cancer being detected. The most important enabling factors were the presence of expert health personnel to provide training and perform the Pap smear test (50.3%) and lack of time or being too busy to have the test (23.2%). The reinforcing factors were media advice (41.3%), doctors' advice (32.5%), and neglect and forgetfulness (36.2%). Conclusion: This study showed that Pap smear screening behavior is affected by personal, family, cultural and economic factors. Application of PEN-3 can be effective in planning and designing intervention programs for cervical cancer screening.

  4. Enablers and inhibitors of the implementation of the Casalud Model, a Mexican innovative healthcare model for non-communicable disease prevention and control.

    Science.gov (United States)

    Tapia-Conyer, Roberto; Saucedo-Martinez, Rodrigo; Mujica-Rosales, Ricardo; Gallardo-Rincon, Hector; Campos-Rivera, Paola Abril; Lee, Evan; Waugh, Craig; Guajardo, Lucia; Torres-Beltran, Braulio; Quijano-Gonzalez, Ursula; Soni-Gallardo, Lidia

    2016-07-22

    The Mexican healthcare system is under increasing strain due to the rising prevalence of non-communicable diseases (especially type 2 diabetes), mounting costs, and a reactive curative approach focused on treating existing diseases and their complications rather than preventing them. Casalud is a comprehensive primary healthcare model that enables proactive prevention and disease management throughout the continuum of care, using innovative technologies and a patient-centred approach. Data were collected over a 2-year period in eight primary health clinics (PHCs) in two states in central Mexico to identify and assess enablers and inhibitors of the implementation process of Casalud. We used mixed quantitative and qualitative data collection tools: surveys, in-depth interviews, and participant and non-participant observations. Transcripts and field notes were analyzed and coded using Framework Analysis, focusing on defining and describing enablers and inhibitors of the implementation process. We identified seven recurring topics in the analyzed textual data. Four topics were categorized as enablers: political support for the Casalud model, alignment with current healthcare trends, ongoing technical improvements (to ease adoption and support), and capacity building. Three topics were categorized as inhibitors: administrative practices, health clinic human resources, and the lack of a shared vision of the model. Enablers are located at PHCs and across all levels of government, and include political support for, and the technological validity of, the model. The main inhibitor is the persistence of obsolete administrative practices at both state and PHC levels, which puts the administrative feasibility of the model's implementation in jeopardy. Constructing a shared vision around the model could facilitate the implementation of Casalud as well as circumvent administrative inhibitors. In order to overcome PHC-level barriers, it is crucial to have an efficient and

  5. Visual Environment for Rich Data Interpretation (VERDI) program for environmental modeling systems

    Science.gov (United States)

    VERDI is a flexible, modular, Java-based program used for visualizing multivariate gridded meteorology, emissions and air quality modeling data created by environmental modeling systems such as the CMAQ model and WRF.

  6. Linguistics in Text Interpretation

    DEFF Research Database (Denmark)

    Togeby, Ole

    2011-01-01

    A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, and definitions of the concepts 'presupposition' and 'implicature'.

  7. Interpretative commenting.

    Science.gov (United States)

    Vasikaran, Samuel

    2008-08-01

    * Clinical laboratories should be able to offer interpretation of the results they produce. * At a minimum, contact details for interpretative advice should be available on laboratory reports. * Interpretative comments may be verbal or written and printed. * Printed comments on reports should be offered judiciously, only where they would add value; no comment preferred to inappropriate or dangerous comment. * Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available. * Standard tied comments ("canned" comments) can have some limited use. * Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians and where clinical details are available. * Interpretative commenting should only be provided by appropriately trained and credentialed personnel. * Audit of comments and continued professional development of personnel providing them are important for quality assurance.

  8. Two-zone model for the broadband Crab nebula spectrum: microscopic interpretation

    Directory of Open Access Journals (Sweden)

    Fraschetti F.

    2017-01-01

    Full Text Available We develop a simple two-zone interpretation of the broadband baseline Crab nebula spectrum between 10⁻⁵ eV and ~100 TeV by using two distinct log-parabola energetic electron distributions. We determine analytically the very-high-energy photon spectrum as originating from inverse-Compton scattering of the far-infrared soft ambient photons within the nebula off a first population of electrons energized at the nebula termination shock. The broad and flat 200 GeV peak jointly observed by Fermi/LAT and MAGIC is naturally reproduced. The synchrotron radiation from a second energetic electron population explains the spectrum from the radio range up to ~10 keV. From the observations we infer the energy dependence of the microscopic probability that the accelerating electrons remain in the proximity of the shock.
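    For reference, the log-parabola particle distribution referred to above has the standard form N(E) = N0 (E/E0)^-(a + b log10(E/E0)); the sketch below evaluates it with placeholder parameters, not the fitted Crab nebula values.

    ```python
    # Minimal sketch of a log-parabola particle distribution of the standard form
    # N(E) = N0 * (E/E0)**(-(a + b*log10(E/E0))). Parameter values are
    # placeholders for illustration, not the paper's fitted values.
    import numpy as np

    def log_parabola(E, N0=1.0, E0=1.0, a=2.0, b=0.1):
        return N0 * (E / E0) ** (-(a + b * np.log10(E / E0)))

    E = np.logspace(-2, 6, 9)      # energies in arbitrary units
    print(log_parabola(E))
    ```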

  9. Models everywhere. How a fully integrated model-based test environment can enable progress in the future

    Energy Technology Data Exchange (ETDEWEB)

    Ben Gaid, Mongi; Lebas, Romain; Fremovici, Morgan; Font, Gregory; Le Solliec, Gunael [IFP Energies nouvelles, Rueil-Malmaison (France); Albrecht, Antoine [D2T Powertrain Engineering, Rueil-Malmaison (France)

    2011-07-01

    The aim of this paper is to demonstrate how advanced modelling approaches coupled with powerful tools allow a complete and coherent test environment suite to be set up. Based on a real study focused on the development of a Euro 6 hybrid powertrain with a Euro 5 turbocharged diesel engine, the authors present how a diesel engine simulator, including an in-cylinder phenomenological approach to predict the raw emissions, can be coupled with a DOC and DPF after-treatment system and embedded in the complete hybrid powertrain to be used in various test environments: - coupled with the control software in a multi-model, multi-core simulation platform with test automation features, allowing the simulation to run faster than real time; - exported to a real-time hardware-in-the-loop platform with the ECU and hardware actuators; - embedded at the experimental engine test bed to perform driving cycles such as the NEDC or FTP cycles with the hybrid powertrain management. Thanks to this complete and versatile test platform suite, xMOD/Morphee, all the key issues of a full hybrid powertrain can be addressed efficiently and at low cost compared to experimental powertrain prototypes: consumption minimisation, energy optimisation, thermal exhaust management, NOx/soot trade-offs, NO/NO2 ratios, etc. Having a good balance between the versatility and compliance of model-oriented test platforms such as the one presented in this paper is the best way to take maximum benefit of the models developed at each stage of powertrain development. (orig.)

  10. Construction and Optimization of a Heterologous Pathway for Protocatechuate Catabolism in Escherichia coli Enables Bioconversion of Model Aromatic Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Clarkson, Sonya M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Biosciences Division; Giannone, Richard J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Chemical Sciences Division; Kridelbaugh, Donna M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Biosciences Division; Elkins, James G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Biosciences Division; Guss, Adam M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Biosciences Division; Michener, Joshua K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Biosciences Division, BioEnergy Science Center; Vieille, Claire [Michigan State Univ., East Lansing, MI (United States)

    2017-07-21

    The production of biofuels from lignocellulose yields a substantial lignin by-product stream that currently has few applications. Biological conversion of lignin-derived compounds into chemicals and fuels has the potential to improve the economics of lignocellulose-derived biofuels, but few microbes are able both to catabolize lignin-derived aromatic compounds and to generate valuable products. While Escherichia coli has been engineered to produce a variety of fuels and chemicals, it is incapable of catabolizing most aromatic compounds. Therefore, we engineered E. coli to catabolize protocatechuate, a common intermediate in lignin degradation, as the sole source of carbon and energy via heterologous expression of a nine-gene pathway from Pseudomonas putida KT2440. Then, we used experimental evolution to select for mutations that increased growth with protocatechuate more than 2-fold. Increasing the strength of a single ribosome binding site in the heterologous pathway was sufficient to recapitulate the increased growth. After optimization of the core pathway, we extended the pathway to enable catabolism of a second model compound, 4-hydroxybenzoate. These engineered strains will be useful platforms to discover, characterize, and optimize pathways for conversions of lignin-derived aromatics.

    IMPORTANCE Lignin is a challenging substrate for microbial catabolism due to its polymeric and heterogeneous chemical structure. Therefore, engineering microbes for improved catabolism of lignin-derived aromatic compounds will require the assembly of an entire network of catabolic reactions, including pathways from genetically intractable strains. Constructing defined pathways for aromatic compound degradation in a model host would allow rapid identification, characterization, and optimization of novel pathways.

  11. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    Abbott, B. P.; Abbott, R.; Adhikari, R. X.; Ananyeva, A.; Anderson, S. B.; Appert, S.; Arai, K.; Araya, M. C.; Barayoga, J. C.; Barish, B. C.; Berger, B. K.; Billingsley, G.; Biscans, S; Blackburn, J. K.; Bork, R.

    2017-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's e...

  12. Effects of waveform model systematics on the interpretation of GW150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K.M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, M.J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Boer, M.; Bogaert, J.G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y; Cheng, H. -P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, A. J. K.; Chua, S. S. Y.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.A.; Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fong, H.; Forsyth, S. S.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.A.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J.G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.E.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G.F.; Libson, A.; Littenberg, T. 
B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGrath Hoareau, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A. L.; Miller, B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, Brian C J; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P.G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Gutierrez-Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Castro-Perez, J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, D.M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. 
H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, António Dias da; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, R. J. E.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson-Moore, P.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifir, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van Den Brand, J. F.J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.M.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J.L.; Wu, D.S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S.J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; Boyle, M.; Chu, I.W.T.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano-Vinuales, A.

    2017-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions,

  13. An Outcrop-based Detailed Geological Model to Test Automated Interpretation of Seismic Inversion Results

    NARCIS (Netherlands)

    Feng, R.; Sharma, S.; Luthi, S.M.; Gisolf, A.

    2015-01-01

    Previously, Tetyukhina et al. (2014) developed a geological and petrophysical model based on the Book Cliffs outcrops that contained eight lithotypes. For reservoir modelling purposes, this model is judged to be too coarse because in the same lithotype it contains reservoir and non-reservoir

  14. Objective interpretation as conforming interpretation

    Directory of Open Access Journals (Sweden)

    Lidka Rodak

    2011-12-01

    Full Text Available The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in practical discourse, focusing on the understanding offered by the judicature. The thesis of the article is that objective interpretation, as identified with the textualist position, is not possible to uphold and should rather be linked with conforming interpretation. What this actually implies is that it is not the virtues of certainty and predictability, which are usually associated with objectivity, but coherence that forms the foundation of the applicability of objectivity in law. What can be observed from the analyses is that both conforming interpretation and objective interpretation play the role of arguments in the interpretive discourse, arguments that provide justification that an interpretation is not arbitrary or subjective. With regard to an important part of the ideology of legal application, namely the conviction that decisions should be taken on the basis of law in order to exclude arbitrariness, objective interpretation can be read as the question of what kind of authority "supports" a certain interpretation, an interpretation that is almost never free of judicial creativity and judicial activism. One can say that objective and conforming interpretation are simply further arguments used in legal discourse.

  15. Numerical models: Detailing and simulation techniques aimed at comparison with experimental data, support to test result interpretation

    International Nuclear Information System (INIS)

    Lin Chiwen

    2001-01-01

    This part of the presentation discusses the modelling details required and the simulation techniques available for analyses, facilitating the comparison with experimental data and providing support for the interpretation of test results. It is organised to cover the following topics: analysis inputs; basic modelling requirements for the reactor coolant system; methods applicable to the reactor cooling system; consideration of damping values and integration time steps; typical analytic models used for analysis of the reactor pressure vessel and internals; hydrodynamic mass and fluid damping for the internals analysis; impact elements for fuel analysis; and the PEI theorem and its applications. The intention of these topics is to identify the key parameters associated with the analysis models and analytical methods. This should provide a proper basis for useful comparison with the test results

  16. Impact of the interfaces for wind and wave modeling - interpretation using COAWST, SAR and point measurements

    DEFF Research Database (Denmark)

    Air and sea interact: winds generate waves and waves affect the winds. This topic is ever relevant for offshore activities such as shipping, port routines, and wind farm operation and maintenance. In a coupled modeling system, the atmospheric modeling and the wave modeling interfere with each... ... use the stress directly, thus avoiding the uncertainties caused by parameterizations. This study examines the efficiency of the wave impact transfer to the atmospheric modeling through the two types of interfaces, roughness length and stress, through the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The roughness length has been calculated using seven schemes (Charnock, Fan, Oost, Drennan, Liu, Andreas, Taylor-Yelland). The stress approach is applied through a wave boundary layer model in SWAN. The experiments are done for a case where the Synthetic Aperture Radar (SAR) image...

  17. Animal-Assisted Therapy for persons with disabilities based on canine tail language interpretation via fuzzy emotional behavior model.

    Science.gov (United States)

    Phanwanich, Warangkhana; Kumdee, Orrawan; Ritthipravat, Panrasee; Wongsawat, Yodchanan

    2011-01-01

    Animal-Assisted Therapy (AAT) is the science that employs the merit of human-animal interaction to alleviate mental and physical problems of persons with disabilities. However, to achieve the goal of AAT for persons with severe disabilities (e.g. spinal cord injury and amyotrophic lateral sclerosis), real-time animal language interpretation is needed. Since canine behaviors can be visually distinguished from the tail, this paper proposes the automatic real-time interpretation of canine tail language for human-canine interaction in the case of persons with severe disabilities. Canine tail language is captured via two 3-axis accelerometers. Direction and frequency are selected as the features of interest. New fuzzy rules and a center of gravity (COG)-based defuzzification method are proposed in order to interpret the features as three canine emotional behaviors, i.e., agitated, happy, and scared, as well as their blended emotional behaviors. The emotional behavior model is evaluated on a simulated dog. The average recognition rate on a real dog is 93.75%.
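    The center-of-gravity defuzzification step named above is, in general form, a centroid of the aggregated output membership function; the sketch below illustrates the generic calculation with a made-up membership function, not the paper's canine-emotion rule base.

    ```python
    # Generic sketch of center-of-gravity (centroid) defuzzification over an
    # aggregated output membership function. The membership function below is a
    # made-up example, not the paper's canine-emotion rule base.
    import numpy as np

    x = np.linspace(0.0, 1.0, 101)             # output universe (e.g. "happiness")
    mu = np.maximum(0.0, 1.0 - np.abs(x - 0.7) / 0.3)  # triangular membership
    mu = np.minimum(mu, 0.6)                   # clipped by a fired rule strength

    cog = np.sum(x * mu) / np.sum(mu)          # center of gravity
    print(f"defuzzified output: {cog:.3f}")
    ```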

  18. "On Clocks and Clouds:" Confirming and Interpreting Climate Models as Scientific Hypotheses (Invited)

    Science.gov (United States)

    Donner, L.

    2009-12-01

    The certainty of climate change projected under various scenarios of emissions using general circulation models is an issue of vast societal importance. Unlike numerical weather prediction, a problem to which general circulation models are also applied, projected climate changes usually lie outside of the range of external forcings for which the models generating these changes have been directly evaluated. This presentation views climate models as complex scientific hypotheses and thereby frames these models within a well-defined process of both advancing scientific knowledge and recognizing its limitations. Karl Popper's Logik der Forschung (The Logic of Scientific Discovery, 1934) and 1965 essay “On Clocks and Clouds” capture well the methodologies and challenges associated with constructing climate models. Indeed, the process of a problem situation generating tentative theories, refined by error elimination, characterizes aptly the routine of general circulation model development. Limitations on certainty arise from the distinction Popper perceived in types of natural processes, which he exemplified by clocks, capable of exact measurement, and clouds, subject only to statistical approximation. Remarkably, the representation of clouds in general circulation models remains the key uncertainty in understanding atmospheric aspects of climate change. The asymmetry of hypothesis falsification by negation and much vaguer development of confidence in hypotheses consistent with some of their implications is an important practical challenge to confirming climate models. The presentation will discuss the ways in which predictions made by climate models for observable aspects of the present and past climate can be regarded as falsifiable hypotheses. The presentation will also include reasons why “passing” these tests does not provide complete confidence in predictions about the future by climate models. Finally, I will suggest that a “reductionist” view, in

  19. EarthServer - an FP7 project to enable the web delivery and analysis of 3D/4D models

    Science.gov (United States)

    Laxton, John; Sen, Marcus; Passmore, James

    2013-04-01

    EarthServer aims at open access and ad-hoc analytics on big Earth Science data, based on the OGC geoservice standards Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS). The WCS model defines "coverages" as a unifying paradigm for multi-dimensional raster data, point clouds, meshes, etc., thereby addressing a wide range of Earth Science data including 3D/4D models. WCPS allows declarative SQL-style queries on coverages. The project is developing a pilot implementing these standards, and will also investigate the use of GeoSciML to describe coverages. Integration of WCPS with XQuery will in turn allow coverages to be queried in combination with their metadata and GeoSciML description. The unified service will support navigation, extraction, aggregation, and ad-hoc analysis on coverage data from SQL. Clients will range from mobile devices to high-end immersive virtual reality, and will enable 3D model visualisation using web browser technology coupled with developing web standards. EarthServer is establishing open-source client and server technology intended to be scalable to Petabyte/Exabyte volumes, based on distributed processing, supercomputing, and cloud virtualization. Implementation will be based on the existing rasdaman server technology. Services using rasdaman technology are being installed serving the atmospheric, oceanographic, geological, cryospheric, planetary and general earth observation communities. The geology service (http://earthserver.bgs.ac.uk/) is being provided by BGS and at present includes satellite imagery, superficial thickness data, onshore DTMs and 3D models for the Glasgow area. It is intended to extend the data sets available to include 3D voxel models. Use of the WCPS standard allows queries to be constructed against single or multiple coverages. For example, on a single coverage, data for a particular area can be selected, or data with a particular range of pixel values. Queries on multiple surfaces can be
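    As a hedged illustration of the SQL-style coverage queries described above, a WCPS query can be submitted over HTTP roughly as follows; the endpoint URL, coverage name, query wording and request parameters here are assumptions for illustration, not the actual BGS service interface.

    ```python
    # Hedged sketch of issuing a WCPS-style query over HTTP, in the spirit of the
    # SQL-like coverage queries described for EarthServer. Endpoint, coverage
    # name and request parameters are placeholders/assumptions.
    import requests

    endpoint = "https://example.org/rasdaman/ows"          # placeholder endpoint
    wcps_query = (
        'for c in (ExampleElevationCoverage) '              # placeholder coverage
        'return encode(c[Lat(55.0:56.0), Long(-4.5:-3.5)], "image/tiff")'
    )

    resp = requests.get(endpoint, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",   # WCS processing extension (assumed)
        "query": wcps_query,
    })
    print(resp.status_code, resp.headers.get("Content-Type"))
    ```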

  20. Use of modeling and simulation in the planning, analysis and interpretation of ultrasonic testing

    International Nuclear Information System (INIS)

    Algernon, Daniel; Grosse, Christian U.

    2016-01-01

    Acoustic testing methods such as ultrasound and impact echo are an important tool in building diagnostics. The range of applications includes thickness measurements, the representation of the internal component geometry, the detection of voids (gravel pockets) and delaminations, and possibly the location of grouting faults in the interior of metallic cladding tubes of tendon ducts. Basically, acoustic methods for non-destructive testing (NDT) are based on the excitation of elastic waves that interact with the target object (e.g. a discontinuity in the component) at an acoustic interface. From the signal received at the component surface, this interaction is to be detected and interpreted to draw conclusions about the presence of the target object, and optionally to determine (approximately) its size and position. Although the basic underlying physical principles of the application of elastic waves in NDT are known, their application can be complicated by complex conditions such as restricted access, component geometries, or the type and form of the reflectors. Estimating the chances of success of a test is therefore often non-trivial. These circumstances highlight the importance of using simulations, which provide a theoretically sound basis for testing and make it easy to optimize test systems. A variety of simulation methods can be deployed; particularly common are the finite element method, the elastodynamic finite integration technique (EFIT) and semi-analytical calculation methods. [de]
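    As a toy illustration of the general idea of simulating wave propagation for test planning (standing in for none of the production methods named above, such as FEM or EFIT), a 1-D scalar-wave finite-difference sketch might look like this, with placeholder material values:

    ```python
    # Toy 1-D finite-difference sketch of the scalar wave equation, only to
    # illustrate simulating wave propagation for test planning. Material values
    # and source are placeholders; this is not FEM, EFIT or any named code.
    import numpy as np

    nx, nt = 400, 800
    dx, dt = 0.005, 1.0e-6          # grid spacing [m], time step [s]
    c = 4000.0                      # wave speed, placeholder [m/s]
    r2 = (c * dt / dx) ** 2         # squared CFL number (must stay <= 1)

    u_prev, u, u_next = (np.zeros(nx) for _ in range(3))
    src = nx // 4                   # source location (impact point)

    for it in range(nt):
        u[src] += np.exp(-((it - 60) / 20.0) ** 2)      # smooth impact pulse
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u, u_next = u, u_next, u_prev

    print("peak amplitude near the far boundary:", np.abs(u[-20:]).max())
    ```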

  1. The strong non-reciprocity of metamaterial absorber: characteristic, interpretation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Li Yuanxun; Xie Yunsong; Zhang Huaiwu; Liu Yingli; Wen Qiye; Ling Weiwei, E-mail: liyuanxun@uestc.edu.c [State Key Laboratory of Electronic Thin Film and Integrated Devices, University of Electronic Science and Technology of China, Chengdu, 610054 (China)

    2009-05-07

    We simulated the metamaterial absorbers in two propagation conditions and observed the universal phenomenon of strong non-reciprocity. It is found that this non-reciprocity cannot be well interpreted using the effective medium theory, which indicates that the design and understanding of the metamaterial absorber based on the proposed effective medium theory may not be applicable. The reason is that the metamaterial absorber does not satisfy the homogeneous-effective limit. We therefore put forward a three-parameter modified effective medium theory to fully describe the metamaterial absorbers. We have also investigated the relationships of the S-parameters and absorptance among the metamaterial absorbers and the two components inside them. The power absorption distributions in these three structures are then discussed in detail. It can be concluded that the absorption is derived from the ERR structure and is largely enhanced by the coupling mechanism, and that the strong non-reciprocity results from the different roles which the wire structure plays in the two propagation conditions.
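    The absorptance referred to above follows from the scattering parameters via the standard relation A = 1 - |S11|^2 - |S21|^2, evaluated separately for the two propagation conditions; the sketch below uses made-up S-parameter values, not the simulated structures of the paper.

    ```python
    # Minimal sketch of the standard relation between absorptance and the
    # scattering parameters, A = 1 - |S11|**2 - |S21|**2, evaluated for the two
    # propagation conditions separately. The S-parameter values are made up;
    # a directional asymmetry shows up as the two absorptances differing.
    import numpy as np

    def absorptance(s11, s21):
        return 1.0 - np.abs(s11) ** 2 - np.abs(s21) ** 2

    # hypothetical complex S-parameters at one frequency, forward / backward
    A_forward = absorptance(s11=0.10 + 0.05j, s21=0.08 - 0.02j)
    A_backward = absorptance(s11=0.60 + 0.30j, s21=0.08 - 0.02j)
    print(f"A_forward = {A_forward:.3f}, A_backward = {A_backward:.3f}")
    ```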

  2. The strong non-reciprocity of metamaterial absorber: characteristic, interpretation and modelling

    International Nuclear Information System (INIS)

    Li Yuanxun; Xie Yunsong; Zhang Huaiwu; Liu Yingli; Wen Qiye; Ling Weiwei

    2009-01-01

    We simulated the metamaterial absorbers in two propagation conditions and observed the universal phenomenon of strong non-reciprocity. It is found that this non-reciprocity cannot be well interpreted using the effective medium theory, which indicates that the design and understanding of the metamaterial absorber based on the proposed effective medium theory may not be applicable. The reason is that the metamaterial absorber does not satisfy the homogeneous-effective limit. We therefore put forward a three-parameter modified effective medium theory to fully describe the metamaterial absorbers. We have also investigated the relationships of the S-parameters and absorptance among the metamaterial absorbers and the two components inside them. The power absorption distributions in these three structures are then discussed in detail. It can be concluded that the absorption is derived from the ERR structure and is largely enhanced by the coupling mechanism, and that the strong non-reciprocity results from the different roles which the wire structure plays in the two propagation conditions.

  3. Towards a Good Practice Model for an Entrepreneurial HEI: Perspectives of Academics, Enterprise Enablers and Graduate Entrepreneurs

    Science.gov (United States)

    Williams, Perri; Fenton, Mary

    2013-01-01

    This paper reports on an examination of the perspectives of academics, enterprise enablers and graduate entrepreneurs of an entrepreneurial higher education institution (HEI). The research was conducted in Ireland among 30 graduate entrepreneurs and 15 academics and enterprise enablers (enterprise development agency personnel) to provide a…

  4. Modelling as a tool when interpreting biodegradation of micro pollutants in activated sludge systems

    DEFF Research Database (Denmark)

    Press-Kristensen, Kåre; Lindblom, Erik Ulfson; Henze, Mogens

    2007-01-01

    The aims of the present work were to improve the biodegradation of the endocrine-disrupting micro pollutant bisphenol A (BPA), used as a model compound, in an activated sludge system and to underline the importance of modelling the system. Previous results have shown that BPA is mainly degraded under

  5. Long-term development of how students interpret a model; Complementarity of contexts and mathematics

    NARCIS (Netherlands)

    Vos, Pauline; Roorda, Gerrit; Stillman, Gloria Ann; Blum, Werner; Kaiser, Gabriele

    2017-01-01

    When students engage in rich mathematical modelling tasks, they have to handle real-world contexts and mathematics in chorus. This is not easy. In this chapter, contexts and mathematics are perceived as complementary, which means they can be integrated. Based on four types of approaches to modelling

  6. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    Science.gov (United States)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. We also consider the response of a horizontally layered anisotropic earth computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite-difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the parameters of a layered anisotropic earth model, such as the horizontal and vertical resistivities and the thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully for both synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation over the iterations and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
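    A generic, minimal particle swarm optimization loop of the kind used for such parameter estimation is sketched below; a toy misfit function stands in for the anisotropic DC-resistivity forward model, and the swarm settings and "true" model are placeholders.

    ```python
    # Generic minimal particle swarm optimization (PSO) sketch, minimizing a toy
    # misfit in place of the 1-D anisotropic DC-resistivity forward model.
    # Swarm parameters and the "true" model are placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    true_model = np.array([50.0, 200.0, 10.0])   # e.g. rho_h, rho_v, thickness

    def misfit(m):
        # Stand-in objective; a real application would compare forward-modelled
        # apparent resistivities with field data here.
        return np.sum((m - true_model) ** 2)

    n_particles, n_iter, dim = 30, 200, 3
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration terms
    pos = rng.uniform(1.0, 500.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([misfit(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([misfit(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("estimated model:", np.round(gbest, 2))
    ```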

  7. Use of eHealth technologies to enable the implementation of musculoskeletal Models of Care: Evidence and practice.

    Science.gov (United States)

    Slater, Helen; Dear, Blake F; Merolli, Mark A; Li, Linda C; Briggs, Andrew M

    2016-06-01

    Musculoskeletal (MSK) conditions are the second leading cause of morbidity-related burden of disease globally. EHealth is a potentially critical factor that enables the implementation of accessible, sustainable and more integrated MSK models of care (MoCs). MoCs serve as a vehicle to drive evidence into policy and practice through changes at a health system, clinician and patient level. The use of eHealth to implement MoCs is intuitive, given the capacity to scale technologies to deliver system and economic efficiencies, to contribute to sustainability, to adapt to low-resource settings and to mitigate access and care disparities. We follow a practice-oriented approach to describing the 'what' and 'how' to harness eHealth in the implementation of MSK MoCs. We focus on the practical application of eHealth technologies across care settings to those MSK conditions contributing most substantially to the burden of disease, including osteoarthritis and inflammatory arthritis, skeletal fragility-associated conditions and persistent MSK pain. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Proposal for a Conceptual Model for Evaluating Lean Product Development Performance: A Study of LPD Enablers in Manufacturing Companies

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    The instability in today's market and the emerging customer demand for mass-customized products are driving companies to seek cost-effective and time-efficient improvements in their production systems, and this has led to real pressure to adopt new development architectures and operational parameters to remain competitive in the market. Among the architectures adopted is the integration of lean thinking into the product development process. However, due to a lack of clear understanding of lean performance and its measurement, many companies are unable to implement and fully integrate the lean principle into their product development process. Without proper performance measurement, the performance level of the organizational value stream remains unknown and the specific areas of improvement related to the LPD program cannot be tracked, which results in poor decision making during LPD implementation. This paper therefore presents a conceptual model for evaluating LPD performance by identifying and analysing the core existing LPD enablers (Chief Engineer, Cross-functional teams, Set-based engineering, Poka-yoke (mistake-proofing), Knowledge-based environment, Value-focused planning and development, Top management support, Technology, Supplier integration, Workforce commitment and Continuous improvement culture) for assessing LPD performance.

  9. Using models to interpret the impact of roadside barriers on near-road air quality

    Science.gov (United States)

    Amini, Seyedmorteza; Ahangar, Faraz Enayati; Schulte, Nico; Venkatram, Akula

    2016-08-01

    The question this paper addresses is whether semi-empirical dispersion models based on data from controlled wind tunnel and tracer experiments can describe data collected downwind of a sound barrier next to a real-world urban highway. Both models are based on the mixed wake model described in Schulte et al. (2014). The first neglects the effects of stability on dispersion, and the second accounts for reduced entrainment into the wake of the barrier under unstable conditions. The models were evaluated with data collected downwind of a kilometer-long barrier next to the I-215 freeway adjacent to the University of California campus in Riverside. The data included measurements of 1) ultrafine particle (UFP) concentrations at several distances from the barrier, 2) micrometeorological variables upwind and downwind of the barrier, and 3) traffic flow separated into automobiles and trucks. Because the emission factor for UFP is highly uncertain, we treated it as a model parameter whose value is obtained by fitting model estimates to observations of UFP concentrations measured at distances where the barrier impact is not dominant. Both models provide adequate descriptions of both the magnitude and the spatial variation of observed concentrations. The good performance of the models reinforces the conclusion from Schulte et al. (2014) that the presence of the barrier is equivalent to shifting the line sources on the road upwind by a distance of about HU/u∗, where H is the barrier height, U is the wind velocity at half of the barrier height, and u∗ is the friction velocity. The models predict that a 4 m barrier results in a 35% reduction in average concentration within 40 m (10 times the barrier height) of the barrier, relative to the no-barrier site. This concentration reduction is 55% if the barrier height is doubled.
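
    As a quick illustration of the HU/u∗ shift rule quoted above, a short calculation follows; only the 4 m barrier height comes from the abstract, while the wind speed and friction velocity values are assumptions chosen purely to show the arithmetic.

    ```python
    # Illustrative calculation of the equivalent upwind shift HU/u* described above.
    H = 4.0        # barrier height (m), from the abstract
    U = 2.0        # wind speed at half the barrier height (m/s), assumed
    u_star = 0.25  # friction velocity (m/s), assumed

    shift = H * U / u_star
    print(f"Equivalent upwind shift of the line sources: {shift:.0f} m")  # 32 m
    ```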

  10. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  11. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  12. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  13. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    Science.gov (United States)

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
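
    To make the difference between the two calculation models concrete, here is a minimal sketch that counts the residues each model treats as dye-binding; the peptide sequence is hypothetical, and the counting convention is an assumption based only on the description above.

    ```python
    def binding_residues(sequence, model="M2"):
        """Count residues assumed to bind Coomassie G-250: Arg/Lys (M1) or Arg/Lys/His (M2)."""
        targets = {"M1": "RK", "M2": "RKH"}[model]
        return sum(sequence.count(aa) for aa in targets)

    # Hypothetical peptide sequence, for illustration only
    seq = "MKHRLLRGEKAHKV"
    print(binding_residues(seq, "M1"), binding_residues(seq, "M2"))
    ```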

  14. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e. measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  15. Managing risks in the fisheries supply chain using House of Risk Framework (HOR) and Interpretive Structural Modeling (ISM)

    Science.gov (United States)

    Nguyen, T. L. T.; Tran, T. T.; Huynh, T. P.; Ho, T. K. D.; Le, A. T.; Do, T. K. H.

    2018-04-01

    One of the sectors contributing importantly to the development of the Vietnamese economy is the fishery industry. However, in recent years many difficulties have been encountered in managing the performance of fishery supply chain operations as a whole. In this paper, a framework for supply chain risk management (SCRM) is proposed. Initially, all the activities are mapped using the Supply Chain Operations Reference (SCOR) model. Next, the risk ranking is analyzed with the House of Risk. Furthermore, interpretive structural modeling (ISM) is used to identify inter-relationships among supply chain risks and to visualize the risks according to their levels. For illustration, the model has been tested in several case studies with fishery companies in Can Tho, Mekong Delta. This study identifies 22 risk events and 20 risk agents through the supply chain. The resulting risk priorities could be used in a further House of Risk stage with proactive actions in future studies.
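
    For readers unfamiliar with the ISM step mentioned above, the sketch below shows the generic computation of a final reachability matrix by transitive closure together with MICMAC-style driving power and dependence sums; the 4x4 example matrix is hypothetical and not taken from the study.

    ```python
    import numpy as np

    def transitive_closure(A):
        """Final reachability matrix: transitive closure of the initial 0/1 relation (Warshall-style)."""
        R = A.copy().astype(bool)
        n = len(R)
        np.fill_diagonal(R, True)          # each element reaches itself
        for k in range(n):
            # i reaches j if i reaches k and k reaches j
            R |= np.outer(R[:, k], R[k, :])
        return R.astype(int)

    # Hypothetical initial reachability matrix for four risk agents (1 = row influences column)
    A = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    R = transitive_closure(A)
    driving_power = R.sum(axis=1)   # MICMAC: how many elements each one reaches
    dependence    = R.sum(axis=0)   # MICMAC: by how many elements each one is reached
    print(R, driving_power, dependence, sep="\n")
    ```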

  16. A model based system for the interpretation of MR human brain scans

    International Nuclear Information System (INIS)

    Kapouleas, I.; Kulikowski, C.A.

    1988-01-01

    This paper describes a prototype system for identifying and characterizing Multiple Sclerosis (MS) lesions in the brain from magnetic resonance (MR) images. The system is designed to obtain an initial segmentation of each cross-sectional image with low level vision methods, and then derive successive refinements of image subregions through a model-driven approach that correlates relevant information from T1 and T2 images and 3-D information from complementary cross-sections when necessary. The system uses a b-spline surface model of the brain that matches the characteristics of the individual's brain. The normal internal structures of the brain are then scaled proportionately before carrying out the successive refinement operations for the detection of the MS lesions. The low level vision and the solid modeling components of the system have been successfully tested on several hundred images from a number of MR patient studies. The first steps of model fitting have been implemented and show promising results.

  17. Mathematical interpretation of Brownian motor model: Limit cycles and directed transport phenomena

    Science.gov (United States)

    Yang, Jianqiang; Ma, Hong; Zhong, Suchuang

    2018-03-01

    In this article, we first suggest that the attractor of the Brownian motor model is one of the reasons for the directed transport phenomenon of a Brownian particle. We take the classical Smoluchowski-Feynman (SF) ratchet model as an example to investigate the relationship between limit cycles and the directed transport phenomenon of the Brownian particle. We study the existence and variation of limit cycles of the SF ratchet model under changing parameters through mathematical methods. The influences of these parameters on the directed transport phenomenon of a Brownian particle are then analyzed through numerical simulations. Reasonable mathematical explanations for the directed transport phenomenon of a Brownian particle in the SF ratchet model are also formulated on the basis of the existence and variation of the limit cycles and the numerical simulations. These mathematical explanations provide a theoretical basis for applying these theories in physics, biology, chemistry, and engineering.

  18. Model Interpretation of Climate Signals: Application to the Asian Monsoon Climate

    Science.gov (United States)

    Lau, William K. M.

    2002-01-01

    This is an invited review paper intended to be published as a chapter in the book "The Global Climate System: Patterns, Processes and Teleconnections" (Cambridge University Press). The author begins with an introduction followed by a primer on climate models, including a description of various modeling strategies and methodologies used for climate diagnostics and predictability studies. Results from the CLIVAR Monsoon Model Intercomparison Project (MMIP) were used to illustrate the application of the strategies to modeling the Asian monsoon. It is shown that state-of-the-art atmospheric GCMs have reasonable capability in simulating the seasonal mean large-scale monsoon circulation and its response to El Nino. However, most models fail to capture the climatological as well as interannual anomalies of regional-scale features of the Asian monsoon. In general, they over-estimate the intensity and/or misplace the locations of the monsoon convection over the Bay of Bengal and of the zones of heavy rainfall near the steep topography of the Indian subcontinent, Indonesia, Indo-China and the Philippines. The intensity of convection in the equatorial Indian Ocean is generally weaker in models compared to observations. Most important, an endemic problem in all models is the weakness and lack of definition of the Mei-yu rainbelt of East Asia; in particular, the part of the Mei-yu rainbelt over the East China Sea and southern Japan is under-represented. All models seem to possess a certain amount of intraseasonal variability, but monsoon transitions, such as onsets and breaks, are less well defined than observed. Evidence is provided that a better simulation of the annual cycle and intraseasonal variability is a pre-requisite for better simulation and better prediction of interannual anomalies.

  19. The practical use of resistance modelling to interpret the gas separation properties of hollow fiber membranes

    International Nuclear Information System (INIS)

    Ahmad Fauzi Ismail; Shilton, S.J.

    2000-01-01

    A simple resistance modelling methodology is presented for gas transport through asymmetric polymeric membranes. The methodology allows fine structural properties, such as active layer thickness and surface porosity, to be determined from experimental gas permeation data. This paper, which could be regarded as a practical guide, shows that resistance modelling, if accompanied by realistic working assumptions, need not be difficult and can provide valuable insight into the relationships between membrane fabrication conditions and the performance of gas separation membranes. (Author)

  20. A criticism of big bang cosmological models based on interpretation of the red shift

    Energy Technology Data Exchange (ETDEWEB)

    Kierein, J.W. (Ball Aerospace Systems Div., Boulder, CO (USA))

    1988-08-01

    The interaction of light with the intergalactic plasma produces the Hubble red shift versus distance relationship. This interaction also produces an isotropic long-wavelength background radiation from the plasma. Intrinsic red shifts in quasars and other objects are similarly explained, showing why they are exceptions to Hubble's law. Because the red shift is not a Doppler shift, big bang cosmological models should be replaced with static models. (author).

  1. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined

  2. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Full Text Available Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (initial differences in lung cancer risk between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: The hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  3. Predictive modelling of the dielectric response of plasmonic substrates: application to the interpretation of ellipsometric spectra

    Science.gov (United States)

    Pugliara, A.; Bayle, M.; Bonafos, C.; Carles, R.; Respaud, M.; Makasheva, K.

    2018-03-01

    A predictive model of plasmonic substrates suitable for reading ellipsometric spectra is presented in this work. We focus on plasmonic substrates containing a single layer of silver nanoparticles (AgNPs) embedded in silica matrices. The model uses the Abeles matrix formalism and is based on the quasistatic approximation of the classical Maxwell-Garnett mixing rule, while accounting for the electronic confinement effect through the damping parameter. It is applied to samples elaborated by (i) RF-diode sputtering followed by Plasma Enhanced Chemical Vapor Deposition (PECVD) and (ii) Low Energy Ion Beam Synthesis (LE-IBS), which represent situations of increasing complexity that can be accounted for by the model. It allows extraction of the main characteristics of the AgNP population: average size, volume fraction and distance of the AgNP layer from the matrix free surface. Model validation is achieved through comparison with results obtained from transmission electron microscopy, confirming its applicability. The advantages and limitations of the proposed model are discussed, after an eccentricity-based statistical analysis, along with further developments related to the quality of the comparison between the model-generated and the experimentally recorded ellipsometric spectra.
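
    A minimal sketch of the quasistatic Maxwell-Garnett mixing rule referred to above follows, with a free-path-type confinement correction added to the Drude damping. All numerical values, the Drude form of the silver dielectric function and the damping correction are generic assumptions rather than the fitted parameters of the paper.

    ```python
    import numpy as np

    def eps_drude(omega, eps_inf, omega_p, gamma):
        """Drude-like metal dielectric function (free-electron part only)."""
        return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

    def maxwell_garnett(eps_i, eps_m, f):
        """Quasistatic Maxwell-Garnett effective permittivity for inclusions eps_i in matrix eps_m."""
        return eps_m * (eps_i * (1 + 2*f) + 2*eps_m * (1 - f)) / (eps_i * (1 - f) + eps_m * (2 + f))

    # Generic, assumed parameters (photon energy grid in eV)
    omega = np.linspace(1.5, 4.5, 400)
    eps_inf, omega_p, gamma_bulk = 4.0, 9.0, 0.02
    v_f, radius = 0.9, 2.0                      # Fermi velocity and particle radius in matched units (assumed)
    gamma = gamma_bulk + v_f / radius           # confinement-enhanced damping (assumed free-path correction)
    eps_ag = eps_drude(omega, eps_inf, omega_p, gamma)
    eps_eff = maxwell_garnett(eps_ag, eps_m=2.1, f=0.1)   # f = AgNP volume fraction, eps_m ~ silica
    ```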

  4. Interpreting the cosmic far-infrared background anisotropies using a gas regulator model

    Science.gov (United States)

    Wu, Hao-Yi; Doré, Olivier; Teyssier, Romain; Serra, Paolo

    2018-04-01

    The cosmic far-infrared background (CFIRB) is a powerful probe of the history of the star formation rate (SFR) and the connection between baryons and dark matter across cosmic time. In this work, we explore to what extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. This model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. We model the large-scale galaxy bias and small-scale shot noise self-consistently, and we constrain our model using the CFIRB power spectra measured by Planck. Because of the simplicity of the physical model, the goodness of fit is limited. We compare our model predictions with the observed correlation between the CFIRB and gravitational lensing, bolometric infrared luminosity functions, and submillimetre source counts. The strong clustering of the CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 M⊙ at z = 2, higher than the mass associated with the peak of the star formation efficiency. We also find that the far-infrared luminosities of haloes above 10^12 M⊙ are higher than expected from the SFR observed in ultraviolet and optical surveys.
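
    For orientation, a generic gas-regulator (bathtub) continuity equation can be integrated in a few lines; the sketch below uses an assumed accretion history and illustrative coefficients and is not the calibrated model of the paper.

    ```python
    import numpy as np

    def gas_regulator(t, dt, inflow_rate, t_dep=1.0, eta=1.0, R=0.4):
        """Integrate dMgas/dt = inflow - (1 - R + eta) * SFR, with SFR = Mgas / t_dep."""
        m_gas, m_star = 0.0, 0.0
        gas, stars = [], []
        for ti in t:
            sfr = m_gas / t_dep
            m_gas += (inflow_rate(ti) - (1.0 - R + eta) * sfr) * dt
            m_star += (1.0 - R) * sfr * dt
            gas.append(m_gas)
            stars.append(m_star)
        return np.array(gas), np.array(stars)

    # Assumed exponentially declining accretion history, purely illustrative (time in Gyr)
    t = np.arange(0.0, 10.0, 0.01)
    gas, stars = gas_regulator(t, 0.01, lambda ti: 10.0 * np.exp(-ti / 3.0))
    ```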

  5. A simple interpretation of Hubbert's model of resource exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Bardi, U.; Lavacchi, A. [Dipartimento di Chimica, Universita di Firenze, Via della Lastruccia 3, Sesto Fiorentino FI (Italy); Bardi, U.; Lavacchi, A. [ASPO - Association for the Study of Peak Oil and Gas, Italian section, c/o Dipartimento di Chimica, Universita di Firenze, 50019 Sesto Fiorentino (Italy)

    2009-07-01

    The well-known 'Hubbert curve' assumes that the production curve of a crude oil in a free market economy is 'bell shaped' and symmetric. The model was first applied in the 1950s as a way of forecasting the production of crude oil in the US lower 48 states. Today, variants of the model are often used for describing the worldwide production of crude oil, which is supposed to reach a global production peak ('peak oil') and to decline afterwards. The model has also been shown to be generally valid for mineral resources other than crude oil and for slowly renewable biological resources such as whales. Despite its widespread use, Hubbert's model is sometimes criticized for being arbitrary, and its underlying assumptions are rarely examined. In the present work, we use a simple model to generate the bell-shaped curve using the smallest possible number of assumptions, also taking into account the 'Energy Return to Energy Invested' (EROI or EROEI) parameter. We show that this model can reproduce several historical cases, even for resources other than crude oil, and provides a useful tool for understanding the general mechanisms of resource exploitation and the future of energy production in the world's economy. (author)
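
    One common way to generate the bell-shaped curve discussed above is as the derivative of a logistic cumulative production; the sketch below uses that standard form with illustrative parameters and does not include the EROEI extension proposed in the paper.

    ```python
    import numpy as np

    def hubbert_production(t, urr, b, t_peak):
        """Logistic-derivative Hubbert curve: production rate peaking at t_peak."""
        return urr * b * np.exp(-b * (t - t_peak)) / (1.0 + np.exp(-b * (t - t_peak)))**2

    t = np.arange(1900, 2101)
    p = hubbert_production(t, urr=250.0, b=0.06, t_peak=2005)   # illustrative units and values
    print(t[p.argmax()])   # production peaks at t_peak (2005 here)
    ```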

  6. Teaching Real Data Interpretation with Models (TRIM): Analysis of Student Dialogue in a Large-Enrollment Cell and Developmental Biology Course

    Science.gov (United States)

    Zagallo, Patricia; Meddleton, Shanice; Bolger, Molly S.

    2016-01-01

    We present our design for a cell biology course to integrate content with scientific practices, specifically data interpretation and model-based reasoning. A 2-year research project within this course allowed us to understand how students interpret authentic biological data in this setting. Through analysis of written work, we measured the extent…

  7. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    Directory of Open Access Journals (Sweden)

    Gurutzeta Guillera-Arroita

    Full Text Available In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating in detail their arguments, using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation, by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, considering model extensions where appropriate.
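
    A minimal simulation sketch of the confounding described above follows: with constant detection probability p and K repeat visits, the naive estimator targets psi * (1 - (1 - p)^K) rather than the true occupancy psi. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    psi, p, K, n_sites = 0.6, 0.3, 3, 5000       # illustrative occupancy, detection, visits, sites

    occupied = rng.random(n_sites) < psi
    detections = rng.random((n_sites, K)) < p
    detected_at_least_once = (detections & occupied[:, None]).any(axis=1)

    naive = detected_at_least_once.mean()        # naive "occupancy" estimate
    expected_naive = psi * (1 - (1 - p)**K)      # what the naive estimator actually targets
    print(naive, expected_naive, psi)            # naive ~0.39 underestimates psi = 0.6
    ```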

  8. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    Science.gov (United States)

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
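
    For reference, one common form of the RMSEA mentioned above can be computed directly from the model chi-square, its degrees of freedom and the sample size; the sketch below uses that textbook formula with illustrative inputs and is not the simulation code of the study.

    ```python
    import math

    def rmsea(chi2, df, n):
        """Root mean square error of approximation from a model chi-square test (one common form)."""
        return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

    print(rmsea(chi2=180.0, df=90, n=500))   # illustrative values, roughly 0.045
    ```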

  9. Development and assessment of modular models of calculation for the interpretation of rod-melting experiments

    International Nuclear Information System (INIS)

    Tuerk, W.

    1980-01-01

    Using recalculations of rod-melting experiments as an example, it is shown how a modular simulation model for complex systems can be formulated within the scope of RSYST1. The procedure of code development as well as the physical and numerical methods and approximations of the simulation model are described. To each important physical process a code module is assigned. The individual modules describe heat production, rod heat-up, rod oxidation, the rod environment, rod deformation by thermal expansion and can buckling, melting of the rod, rod failure, and flowing off of the melted mass. A comparison of the results for the overall model with the results of different experiments indicates that the phenomena during heat-up and melting of the rod are treated in agreement with the experiments. The results of the calculation model and its submodels are thus largely supported by experiments. Therefore further predictions with a high level of confidence can be made with the model within the scope of reactor safety research. (orig.) [de

  10. Penultimate interpretation.

    Science.gov (United States)

    Neuman, Yair

    2010-10-01

    Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as 'O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one should pay for substituting the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious for a low-dimensional symbolic representation. This price is reification--objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation'--a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.

  11. Photoelectrolysis at the oxide-electrolyte interface as interpreted through the 'transition' layer model

    Science.gov (United States)

    Kalia, R. K.; Weber, Michael F.; Schumacher, L.; Dignam, M. J.

    1980-12-01

    A transition layer model of the oxide-electrolyte interface, proposed earlier by one of us, is outlined and then examined in the light of experimental data relating primarily to photoelectrolysis of water at semiconducting oxide electrodes. The model provides useful insight into the behaviour of the system and allows a calculation of the minimum bias potential needed for photoelectrolysis, thus illuminating the origin of the requirement for such an external bias. In order to electrolyse water without a bias, the model requires an n-type oxide to be sufficiently reduced so that it is thermodynamically capable of chemically reducing water to produce hydrogen at 1 atm pressure. Similarly, for bias-free operation, a p-type metal oxide must be thermodynamically unstable with respect to the release of oxygen at 1 atm pressure. In the face of these requirements it is apparent that oxide stability is bound to be in general a serious problem for nonstoichiometric single metal oxides.

  12. A simple, single-substrate model to interpret intra-annual stable isotope signals in tree-ring cellulose

    Science.gov (United States)

    Ogée, J.; Barbour, M. M.; Wingate, L.; Bert, D.; Bosc, A.; Stievenard, M.; Lambrot, C.; Pierre, M.; Bariac, T.; Dewar, R. C.

    2009-04-01

    High-resolution intra-annual measurements of the carbon and oxygen stable isotope composition of cellulose in annual tree rings (δ13Ccellulose and δ18Ocellulose, respectively) reveal well-defined seasonal patterns that could contain valuable records of past climate and tree function. Interpreting these signals is nonetheless complex because they not only record the signature of current assimilates, but also depend on carbon allocation dynamics within the trees. Here, we present a simple, single-substrate model for wood growth containing only 12 main parameters. The model is used to interpret an isotopic intra-annual chronology collected in an even-aged maritime pine plantation growing in the South-West of France, where climate, soil and flux variables were also monitored. The empirical δ13Ccellulose and δ18Ocellulose exhibit dynamic seasonal patterns, with clear differences between years and individuals, that are mostly captured by the model. In particular, the amplitude of both signals is reproduced satisfactorily as well as the sharp 18O enrichment at the beginning of 1997 and the less pronounced 13C and 18O depletion observed at the end of the latewood. Our results suggest that the single-substrate hypothesis is a good approximation for tree ring studies on Pinus pinaster, at least for the environmental conditions covered by this study. A sensitivity analysis revealed that, in the early wood, the model was particularly sensitive to the date when cell wall thickening begins (twt). We therefore propose to use the model to reconstruct time series of twt and explore how climate influences this key parameter of xylogenesis.

  13. The MEXICO project (Model Experiments in Controlled Conditions): The database and first results of data processing and interpretation

    International Nuclear Information System (INIS)

    Snel, H; Schepers, J G; Montgomerie, B

    2007-01-01

    The MEXICO (Model Experiments in Controlled Conditions) project was an FP5 project, partly financed by the European Commission. The main objective was to create a database of detailed aerodynamic and load measurements on a wind turbine model, in a large and high-quality wind tunnel, to be used for model validation and improvement. Here, model stands both for the extended BEM modelling used in state-of-the-art design and certification software, and for CFD modelling of the rotor and near-wake flow. For this purpose a three-bladed 4.5 m diameter wind tunnel model was built and instrumented. The wind tunnel experiments were carried out in the open section (9.5 × 9.5 m²) of the Large Scale Facility of the DNW (German-Netherlands) during a six-day campaign in December 2006. The measurement conditions cover three operational tip speed ratios, many blade pitch angles, three yaw misalignment angles and a small number of unsteady cases in the form of pitch ramps and rotor speed ramps. One of the most important feats of the measurement program was the flow field mapping with stereo PIV techniques. Overall, the measurement campaign was very successful. The paper describes the now existing database and discusses a number of highlights from early data processing and interpretation. It should be stressed that all results are first results; no tunnel correction has been performed so far, nor has the necessary checking of data quality been completed.

  14. Viral epidemics in a cell culture: novel high resolution data and their interpretation by a percolation theory based model.

    Directory of Open Access Journals (Sweden)

    Balázs Gönci

    2010-12-01

    Full Text Available Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extreme high resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation.

  15. Viral epidemics in a cell culture: novel high resolution data and their interpretation by a percolation theory based model.

    Science.gov (United States)

    Gönci, Balázs; Németh, Valéria; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás

    2010-12-20

    Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extreme high resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical physics inspired approaches to our data, such as fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation theory based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation.
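
    A minimal sketch in the spirit of the percolation-type interpretation above: immobile cells on a square lattice, with each infected cell infecting susceptible 4-neighbours with a fixed probability per step. The grid size, infection probability and periodic boundaries are assumptions for illustration, not the calibrated model of the paper.

    ```python
    import numpy as np

    def spread(n=200, p_inf=0.3, steps=60, seed=0):
        """SI spreading on an n x n grid of immobile cells; returns the infection mask."""
        rng = np.random.default_rng(seed)
        infected = np.zeros((n, n), dtype=bool)
        infected[n // 2, n // 2] = True                     # single initially infected cell
        for _ in range(steps):
            # cells with at least one infected 4-neighbour (periodic boundaries via np.roll)
            nbr = (np.roll(infected, 1, 0) | np.roll(infected, -1, 0) |
                   np.roll(infected, 1, 1) | np.roll(infected, -1, 1))
            new = nbr & ~infected & (rng.random((n, n)) < p_inf)
            infected |= new
        return infected

    grid = spread()
    print(grid.sum(), "cells infected")   # size of the infected cluster after 60 steps
    ```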

  16. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  17. Interpretation and extrapolation of ecological responses in model ecosystems stressed with non-persistent insecticides

    NARCIS (Netherlands)

    Wijngaarden, van R.P.A.

    2006-01-01

    This thesis aims to contribute to the discussion concerning whether micro- and mesocosm studies can serve as adequate models for robust risk assessment of pesticides. For this purpose, results from freshwater micro- and mesocosm experiments conducted under different experimental conditions are

  18. Process-based interpretation of conceptual hydrological model performance using a multinational catchment set

    Science.gov (United States)

    Poncelet, Carine; Merz, Ralf; Merz, Bruno; Parajka, Juraj; Oudin, Ludovic; Andréassian, Vazken; Perrin, Charles

    2017-08-01

    Most previous assessments of hydrologic model performance are fragmented, based on small numbers of catchments, different methods or time periods, and do not link the results to landscape or climate characteristics. This study uses large-sample hydrology to identify major catchment controls on daily runoff simulations. It is based on a conceptual lumped hydrological model (GR6J), a collection of 29 catchment characteristics, a multinational set of 1103 catchments located in Austria, France, and Germany, and four runoff model efficiency criteria. Two analyses are conducted to assess how features and criteria are linked: (i) a one-dimensional analysis based on the Kruskal-Wallis test and (ii) a multidimensional analysis based on regression trees and investigating the interplay between features. The catchment features most affecting model performance are the flashiness of precipitation and streamflow (computed as the ratio of absolute day-to-day fluctuations to the total amount in a year), the seasonality of evaporation, the catchment area, and the catchment aridity. Nonflashy, nonseasonal, large, and nonarid catchments show the best performance for all the tested criteria. We argue that this higher performance is due to fewer nonlinear responses (higher correlation between precipitation and streamflow) and lower input and output variability for such catchments. Finally, we show that, compared to national sets, multinational sets increase the transferability of results because they explore a wider range of hydroclimatic conditions.
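
    The flashiness feature defined in the abstract (the ratio of absolute day-to-day fluctuations to the total amount in a year) can be computed directly; the sketch below uses synthetic daily values as an assumption, purely to show the arithmetic.

    ```python
    import numpy as np

    def flashiness(daily):
        """Ratio of absolute day-to-day fluctuations to the annual total (Richards-Baker style index)."""
        daily = np.asarray(daily, dtype=float)
        return np.abs(np.diff(daily)).sum() / daily.sum()

    # Synthetic daily streamflow for one year, illustrative only
    rng = np.random.default_rng(0)
    q = rng.gamma(2.0, 2.5, 365)
    print(flashiness(q))
    ```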

  19. Household Labour Supply in Britain and Denmark: Some Interpretations Using a Model of Pareto Optimal Behaviour

    DEFF Research Database (Denmark)

    Barmby, Tim; Smith, Nina

    1996-01-01

    This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...

  20. Energetic protons at Mars: interpretation of SLED/Phobos-2 observations by a kinetic model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2012-11-01

    Full Text Available Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars–solar wind interaction significantly modulated the Martian energetic particle environment.

  1. Energetic protons at Mars. Interpretation of SLED/Phobos-2 observations by a kinetic model

    International Nuclear Information System (INIS)

    Kallio, E.; Alho, M.; Jarvinen, R.; Dyadechkin, S.; McKenna-Lawlor, S.; Afonin, V.V.

    2012-01-01

    Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars-solar wind interaction significantly modulated the Martian energetic particle environment. (orig.)

  2. Interpretation and modeling of a subsurface injection test, 200 East Area, Hanford, Washington

    International Nuclear Information System (INIS)

    Smoot, J.L.; Lu, A.H.

    1994-11-01

    A tracer experiment was conducted in 1980 and 1981 in the unsaturated zone in the southeast portion of the Hanford 200 East Area near the Plutonium-Uranium Extraction (PUREX) facility. The field design consisted of a central injection well with 32 monitoring wells within an 8-m radius. Water containing radioactive and other tracers was injected weekly during the experiment. The unique features of the experiment were the documented control of the inputs, the experiment's three-dimensional nature, the in-situ measurement of radioactive tracers, and the use of multiple injections. The spacing of the test wells provided reasonable lag distribution for spatial correlation analysis. Preliminary analyses indicated spatial correlation on the order of 400 to 500 cm in the vertical direction. Previous researchers found that two-dimensional axisymmetric modeling of moisture content generally underpredicts lateral spreading and overpredicts vertical movement of the injected water. Incorporation of anisotropic hydraulic properties resulted in the best model predictions. Three-dimensional modeling incorporated the geologic heterogeneity of discontinuous layers and lenses of sediment apparent in the site geology. Model results were compared statistically with measured experimental data and indicate reasonably good agreement with vertical and lateral field moisture distributions

  3. Energetic protons at Mars. Interpretation of SLED/Phobos-2 observations by a kinetic model

    Energy Technology Data Exchange (ETDEWEB)

    Kallio, E.; Alho, M.; Jarvinen, R.; Dyadechkin, S. [Finnish Meteorological Institute, Helsinki (Finland); McKenna-Lawlor, S. [Space Technology Ireland, Maynooth, Co. Kildare (Ireland); Afonin, V.V. [Space Research Institute, Moscow (Russian Federation)

    2012-07-01

    Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 RM). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge neutralizing fluid. The case study shows that the model successfully reproduced several of the observed features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars-solar wind interaction significantly modulated the Martian energetic particle environment. (orig.)

  4. QCD and Fermi gas model interpretations of the E.M.C. effect

    International Nuclear Information System (INIS)

    Close, F.E.

    1986-07-01

    It is suggested that there is a correspondence between the quantum chromo-dynamic (QCD) approach and the conventional model of nucleon binding which leads to nuclear properties being related to the anomalous dimensions of QCD. This in turn may lead to a 'unified' approach to nuclear and quark-gluon physics. A discussion is given with respect to the EMC effect. (UK)

  5. Computational models for interpretation of wave function imaging in cross-sectional STM of quantum dots

    NARCIS (Netherlands)

    Maksym, P.A.; Roy, M.; Wijnheijmer, A.P.; Koenraad, P.M.

    2008-01-01

    Computational models are used to investigate the role of electron-electron interactions in cross-sectional STM of cleaved quantum dots. If correlation effects are weak, the tunnelling current reflects the nodal structure of the non-interacting dot states. If correlation is strong, peaks in the

  6. Interpreting Physics

    CERN Document Server

    MacKinnon, Edward

    2012-01-01

    This book is the first to offer a systematic account of the role of language in the development and interpretation of physics. A historical-conceptual analysis of the co-evolution of mathematical and physical concepts leads to the classical/quantum interface. Bohrian orthodoxy stresses the indispensability of classical concepts and the functional role of mathematics. This book analyses ways of extending, and then going beyond, this orthodoxy. Finally, the book analyzes how a revised interpretation of physics impacts on basic philosophical issues: conceptual revolutions, realism, and r

  7. Interpretation and modelling of fission product Ba and Mo releases from fuel

    Science.gov (United States)

    Brillant, G.

    2010-02-01

    The release mechanisms of two fission products (namely barium and molybdenum) in severe accident conditions are studied using the VERCORS experimental observations. Barium is observed to be mostly released under reducing conditions while molybdenum release is mostly observed under oxidizing conditions. In addition, the volatility of some precipitates in fuel is evaluated by thermodynamic equilibrium calculations. The polymeric species (MoO3)n are calculated to contribute largely to the molybdenum partial pressure, and barium volatility is greatly enhanced if the gas atmosphere is reducing. Analytical models of fission product release from fuel are proposed for barium and molybdenum. Finally, these models have been integrated into the ASTEC/ELSA code and validation calculations have been performed on several experimental tests.

  8. Study of nickel nuclei by (p,d) and (p,t) reactions. Shell model interpretation

    International Nuclear Information System (INIS)

    Kong-A-Siou, D.-H.

    1975-01-01

    The experimental techniques employed at the Nuclear Science Institute (Grenoble) and at Michigan State University are described. The development of the transition amplitude calculation of the one- or two-nucleon transfer reactions is described first, after which the principle of shell model calculations is outlined. The choices of configuration space and two-body interactions are discussed. The DWBA method of analysis is studied in more detail. The effects of different approximations and the influence of the parameters are examined. Special attention is paid to the j-dependence of the form of the angular distributions, an effect not explained in the standard DWBA framework. The results are analysed and a large section is devoted to a comparative study of the experimental results obtained and those from other nuclear reactions. The spectroscopic data obtained are compared with the results of shell model calculations. [fr]

  9. Interactions of Cosmic Rays around the Universe. Models for UHECR data interpretation

    Directory of Open Access Journals (Sweden)

    Boncioli Denise

    2017-01-01

    Full Text Available Ultra high energy cosmic rays (UHECRs) are expected to be accelerated in astrophysical sources and to travel through extragalactic space before hitting the Earth's atmosphere. They interact both with the environment in the source and with the intergalactic photon fields they encounter, causing different processes at various scales depending on the photon energy in the nucleus rest frame. UHECR interactions are sensitive to uncertainties in the extragalactic background spectrum and in the photo-disintegration models.

  10. Development of analytical and numerical models for the assessment and interpretation of hydrogeological field tests

    International Nuclear Information System (INIS)

    Mironenko, V.A.; Rumynin, V.G.; Konosavsky, P.K.; Pozdniakov, S.P.; Shestakov, V.M.; Roshal, A.A.

    1994-07-01

    Mathematical models of the flow and tracer tests in fractured aquifers are being developed for the further study of radioactive waste migration in ground water at the Lake Area, which is associated with one of the waste disposal sites in Russia. The choice of testing methods, tracer types (chemical or thermal) and the appropriate models are determined by the nature of the ongoing ground-water pollution processes and the hydrogeological features of the site under consideration. Special importance is attached to the increased density of wastes as well as to the possible redistribution of solutes both in the liquid phase and in the absorbed state (largely, on fracture surfaces). This allows for studying physical-and-chemical (hydrogeochemical) interaction parameters which are hard to obtain (considering the fractured structure of the rock mass) in the laboratory. Moreover, a theoretical substantiation is being given to the field methods of studying the properties of a fractured stratum aimed at the further construction of the drainage system or the subsurface flow barrier (cutoff wall), as well as the monitoring system that will evaluate the reliability of these ground-water protection measures. The proposed mathematical models are based on a tight combination of analytical and numerical methods, the former being preferred in solving the principal (2D axisymmetrical) class of problems. The choice of appropriate problems is based on the close feedback with subsequent field tests in the Lake Area. 63 refs

  11. A micromechanical interpretation of the temperature dependence of Beremin model parameters for French RPV steel

    International Nuclear Information System (INIS)

    Mathieu, Jean-Philippe; Inal, Karim; Berveiller, Sophie; Diard, Olivier

    2010-01-01

    The local approach to brittle fracture for low-alloyed steels is discussed in this paper. A bibliographical introduction highlights general trends and consensual points of the topic and raises debatable aspects. French RPV steel 16MND5 (equ. ASTM A508 Cl.3) is then used as a model material to study the influence of temperature on brittle fracture. A micromechanical modelling of brittle fracture at the elementary volume scale, already used in previous work, is then recalled. It involves a multiscale modelling of microstructural plasticity which has been tuned on experimental measurements of inter-phase and inter-granular stress heterogeneities. The fracture probability of the elementary volume can then be computed using a randomly attributed defect size distribution based on a realistic carbide distribution. This defect distribution is then deterministically correlated to stress heterogeneities simulated within the microstructure using a weakest-link hypothesis on the elementary volume, which results in a deterministic stress to fracture. Repeating the process makes it possible to compute Weibull parameters on the elementary volume. This tool is then used to investigate the physical mechanisms that could explain the already experimentally observed temperature dependence of Beremin's parameters for 16MND5 steel. It is shown that, assuming the hypotheses made in this work about cleavage micro-mechanisms are correct, the effective equivalent surface energy (i.e. surface energy plus plastically dissipated energy when blunting the crack tip) for propagating a crack has to be temperature dependent to explain the temperature evolution of Beremin's parameters.
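
    For readers unfamiliar with the weakest-link formulation referred to here, the sketch below shows, in simplified form, how a Weibull stress and a cleavage failure probability are typically computed in a Beremin-type model from maximum principal stresses in the plastified volume. The stresses, cell volumes and Weibull parameters are illustrative placeholders, not the values identified for 16MND5.

        import numpy as np

        def weibull_stress(sigma_i, volumes, m, v0):
            """Weibull stress of the classical Beremin model:
            sigma_w = (sum_k sigma_I,k**m * V_k / V0)**(1/m) over plastified cells."""
            sigma_i = np.asarray(sigma_i, dtype=float)
            volumes = np.asarray(volumes, dtype=float)
            return np.sum(sigma_i**m * volumes / v0) ** (1.0 / m)

        def failure_probability(sigma_w, sigma_u, m):
            """Weakest-link cleavage probability: Pf = 1 - exp(-(sigma_w/sigma_u)**m)."""
            return 1.0 - np.exp(-(sigma_w / sigma_u) ** m)

        # Illustrative maximum principal stresses (MPa) and cell volumes (mm^3)
        # in the plastic zone of an elementary volume.
        sigma_i = np.array([1450.0, 1600.0, 1750.0, 1900.0, 2050.0])
        volumes = np.array([0.020, 0.015, 0.010, 0.005, 0.002])

        m, sigma_u, v0 = 22.0, 2600.0, 0.001      # illustrative Weibull parameters
        sw = weibull_stress(sigma_i, volumes, m, v0)
        print(f"sigma_w = {sw:.0f} MPa, Pf = {failure_probability(sw, sigma_u, m):.3f}")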

  12. Development of analytical and numerical models for the assessment and interpretation of hydrogeological field tests

    Energy Technology Data Exchange (ETDEWEB)

    Mironenko, V.A.; Rumynin, V.G.; Konosavsky, P.K. [St. Petersburg Mining Inst. (Russian Federation); Pozdniakov, S.P.; Shestakov, V.M. [Moscow State Univ. (Russian Federation); Roshal, A.A. [Geosoft-Eastlink, Moscow (Russian Federation)

    1994-07-01

    Mathematical models of the flow and tracer tests in fractured aquifers are being developed for the further study of radioactive waste migration in ground water at the Lake Area, which is associated with one of the waste disposal sites in Russia. The choice of testing methods, tracer types (chemical or thermal) and the appropriate models are determined by the nature of the ongoing ground-water pollution processes and the hydrogeological features of the site under consideration. Special importance is attached to the increased density of wastes as well as to the possible redistribution of solutes both in the liquid phase and in the absorbed state (largely, on fracture surfaces). This allows for studying physical-and-chemical (hydrogeochemical) interaction parameters which are hard to obtain (considering the fractured structure of the rock mass) in the laboratory. Moreover, a theoretical substantiation is being given to the field methods of studying the properties of a fractured stratum aimed at the further construction of the drainage system or the subsurface flow barrier (cutoff wall), as well as the monitoring system that will evaluate the reliability of these ground-water protection measures. The proposed mathematical models are based on a tight combination of analytical and numerical methods, the former being preferred in solving the principal (2D axisymmetrical) class of problems. The choice of appropriate problems is based on the close feedback with subsequent field tests in the Lake Area. 63 refs.

  13. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits

    Science.gov (United States)

    2018-01-01

    Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. PMID:29408930

  14. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2018-02-01

    Full Text Available Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures-recurrent connections, shared feed-forward projections, and shared gain fluctuations-on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.
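
    One of the architectures discussed, shared gain fluctuations, already shows how population-averaged covariances can depend on firing rates: if the spike count of neuron i is Poisson with mean g*r_i*T for a common random gain g, then Cov(n_i, n_j) is approximately r_i*r_j*T^2*Var(g) for i different from j. The Python sketch below checks this relation by simulation; the rates, gain statistics and trial count are arbitrary choices and the code is not the authors' circuit model.

        import numpy as np

        rng = np.random.default_rng(3)

        rates = np.array([5.0, 10.0, 20.0, 40.0])   # firing rates (spikes/s), arbitrary
        T = 1.0                                      # counting window (s)
        n_trials = 20000
        gain_sd = 0.2                                # std of the shared multiplicative gain

        # Shared-gain model: on each trial every neuron's mean count is scaled by the
        # same random gain g, then counts are drawn independently from Poisson laws.
        g = np.clip(1.0 + gain_sd * rng.standard_normal(n_trials), 0.0, None)
        counts = rng.poisson(np.outer(g, rates) * T)          # shape (n_trials, n_neurons)

        cov = np.cov(counts, rowvar=False)
        i, j = 1, 3
        predicted = rates[i] * rates[j] * T**2 * gain_sd**2
        print(f"simulated Cov(n{i}, n{j}) = {cov[i, j]:.2f}, "
              f"rate-product prediction = {predicted:.2f}")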

  15. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    Directory of Open Access Journals (Sweden)

    M. Bolot

    2013-08-01

    Full Text Available The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener–Bergeron–Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.

  16. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    Science.gov (United States)

    Bolot, M.; Legras, B.; Moyer, E. J.

    2013-08-01

    The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.
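
    The "isotopic distillation" that re-equilibration with supercooled droplets is said to buffer can be illustrated by the elementary Rayleigh fractionation law, in which the vapour isotope ratio evolves as R = R0 * f**(alpha - 1) with f the fraction of vapour remaining. The Python sketch below shows only that single process with a constant fractionation factor; it deliberately omits the kinetic and mixed-phase effects that are the focus of the paper, and the numbers are illustrative.

        import numpy as np

        R_VSMOW = 1.5576e-4        # approximate D/H ratio of the VSMOW standard

        def delta_permil(ratio):
            """Convert a D/H ratio to a delta value (permil) relative to VSMOW."""
            return (ratio / R_VSMOW - 1.0) * 1000.0

        def rayleigh_vapour(delta0, alpha, f):
            """Rayleigh distillation of the remaining vapour: R = R0 * f**(alpha - 1),
            with f the fraction of the initial vapour not yet condensed."""
            r0 = (delta0 / 1000.0 + 1.0) * R_VSMOW
            return delta_permil(r0 * np.asarray(f) ** (alpha - 1.0))

        # Illustrative values: cloud-base vapour at -80 permil and a constant
        # liquid-vapour fractionation factor applied during progressive condensation.
        alpha = 1.10
        f = np.linspace(1.0, 0.1, 10)
        for fi, d in zip(f, rayleigh_vapour(-80.0, alpha, f)):
            print(f"f = {fi:4.2f}   deltaD(vapour) = {d:7.1f} permil")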

  17. Auxiliary matrices for the six-vertex model at q^N = 1 and a geometric interpretation of its symmetries

    International Nuclear Information System (INIS)

    Korff, Christian

    2003-01-01

    The construction of auxiliary matrices for the six-vertex model at a root of unity is investigated from a quantum group theoretic point of view. Employing the concept of intertwiners associated with the quantum loop algebra U_q(sl~_2) at q^N = 1, a three-parameter family of auxiliary matrices is constructed. The elements of this family satisfy a functional relation with the transfer matrix allowing one to solve the eigenvalue problem of the model and to derive the Bethe ansatz equations. This functional relation is obtained from the decomposition of a tensor product of evaluation representations and involves auxiliary matrices with different parameters. Because of this dependence on additional parameters, the auxiliary matrices break in general the finite symmetries of the six-vertex model, such as spin-reversal or spin-conservation. More importantly, they also lift the extra degeneracies of the transfer matrix due to the loop symmetry present at rational coupling values. The extra parameters in the auxiliary matrices are shown to be directly related to the elements in the enlarged centre Z of the algebra U_q(sl~_2) at q^N = 1. This connection provides a geometric interpretation of the enhanced symmetry of the six-vertex model at rational coupling. The parameters labelling the auxiliary matrices can be interpreted as coordinates on a hypersurface Spec Z ⊂ C^4 which remains invariant under the action of an infinite-dimensional group G of analytic transformations, called the quantum coadjoint action

  18. Gas/aerosol Partitioning Parameterisation For Global Modelling: A Physical Interpretation of The Relationship Between Activity Coefficients and Relative Humidity

    Science.gov (United States)

    Metzger, S.; Dentener, F. J.; Lelieveld, J.; Pandis, S. N.

    A computationally efficient model (EQSAM) to calculate gas/aerosol partitioning of semi-volatile inorganic aerosol components has been developed for use in global atmospheric chemistry and climate models; presented at the EGS 2001. We introduce and discuss here the physics behind the parameterisation upon which the EQuilibrium Simplified Aerosol Model (EQSAM) is based. The parameterisation, which approximates the activity coefficient calculation sufficiently accurately for global modelling, is based on a method that directly relates aerosol activity coefficients to the ambient relative humidity, assuming chemical equilibrium. It therefore provides an interesting alternative to the computationally expensive iterative activity coefficient calculation methods presently used in thermodynamic gas/aerosol equilibrium models (EQMs). The parameterisation can also be used, however, in dynamical models that calculate mass transfer between the liquid/solid aerosol phases and the gas phase explicitly; dynamical models often incorporate an EQM to calculate the aerosol composition. The gain of the parameterisation is that the entire system of gas/aerosol equilibrium partitioning can be solved non-iteratively, a substantial advantage in global modelling. Since we have already demonstrated at the EGS 2001 that EQSAM yields results similar to those of current state-of-the-art equilibrium models, we focus here on a discussion of our physical interpretation of the parameterisation; the identification of the parameters needed is crucial. Given the lack of reliable data, the best way to thoroughly validate the parameterisation for global modelling applications is implementation in current state-of-the-art gas/aerosol partitioning routines, which are embedded in e.g. a global atmospheric chemistry transport model, by comparing the results of the parameterisation against the ones based on the widely used activity coefficient calculation methods (i.e. Bromley, Kusik-Meissner or Pitzer). Then

  19. Zigzagging causality model of EPR correlations and on the interpretation of quantum mechanics

    International Nuclear Information System (INIS)

    de Beauregard, O.C.

    1988-01-01

    Being formalized inside the S-matrix scheme, the zigzagging causality model of EPR correlations has full Lorentz and CPT invariance. EPR correlations, proper or reversed, and Wheeler's smoky dragon metaphor are respectively pictured in spacetime or in momentum-energy space as V-shaped, Λ-shaped, or C-shaped ABC zigzags, with a summation at B over virtual states |B⟩⟨B|. This reversibility implies that causality is CPT-invariant, or arrowless, at the microlevel. Arrowed causality is a macroscopic emergence, corollary to wave retardation and probability increase. Factlike irreversibility states repression, not suppression, of blind statistical retrodiction, that is, of final cause

  20. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments, and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code

  1. Performing Interpretation

    Science.gov (United States)

    Kothe, Elsa Lenz; Berard, Marie-France

    2013-01-01

    Utilizing a/r/tographic methodology to interrogate interpretive acts in museums, multiple areas of inquiry are raised in this paper, including: which knowledge is assigned the greatest value when preparing a gallery talk; what lies outside of disciplinary knowledge; how invitations to participate invite and disinvite in the same gesture; and what…

  2. Interpretation of quarks having fractional quantum numbers as structural quasi-particles by means of the composite model with integral quantum numbers

    International Nuclear Information System (INIS)

    Tyapkin, A.A.

    1976-01-01

    The problem is raised on the interpretation of quarks having fractional quantum numbers as structural quasi-particles. A new composite model is proposed on the basis of the fundamental triplet representation of fermions having integral quantum numbers

  3. A SEMI-ANALYTICAL LINE TRANSFER MODEL TO INTERPRET THE SPECTRA OF GALAXY OUTFLOWS

    International Nuclear Information System (INIS)

    Scarlata, C.; Panagia, N.

    2015-01-01

    We present a semi-analytical line transfer model (SALT) to study the absorption and re-emission line profiles from expanding galactic envelopes. The envelopes are described as a superposition of shells with density and velocity varying with the distance from the center. We adopt the Sobolev approximation to describe the interaction between the photons escaping from each shell and the remainder of the envelope. We include the effect of multiple scatterings within each shell, properly accounting for the atomic structure of the scattering ions. We also account for the effect of a finite circular aperture on actual observations. For equal geometries and density distributions, our models reproduce the main features of the profiles generated with more complicated transfer codes. Also, our SALT line profiles nicely reproduce the typical asymmetric resonant absorption line profiles observed in star-forming/starburst galaxies, whereas these absorption profiles cannot be reproduced with thin shells moving at a fixed outflow velocity. We show that scattered resonant emission fills in the resonant absorption profiles, with a strength that is different for each transition. Observationally, the effect of resonant filling depends on both the outflow geometry and the size of the outflow relative to the spectroscopic aperture. Neglecting these effects will lead to incorrect values of gas covering fraction and column density. When a fluorescent channel is available, the resonant profiles alone cannot be used to infer the presence of scattered re-emission. Conversely, the presence of emission lines of fluorescent transitions reveals that emission filling cannot be neglected

  4. The Australian methane budget: Interpreting surface and train-borne measurements using a chemistry transport model

    Science.gov (United States)

    Fraser, Annemarie; Chan Miller, Christopher; Palmer, Paul I.; Deutscher, Nicholas M.; Jones, Nicholas B.; Griffith, David W. T.

    2011-10-01

    We investigate the Australian methane budget from 2005-2008 using the GEOS-Chem 3D chemistry transport model, focusing on the relative contribution of emissions from different sectors and the influence of long-range transport. To evaluate the model, we use in situ surface measurements of methane, methane dry air column average (XCH4) from ground-based Fourier transform spectrometers (FTSs), and train-borne surface concentration measurements from an in situ FTS along the north-south continental transect. We use gravity anomaly data from Gravity Recovery and Climate Experiment to describe the spatial and temporal distribution of wetland emissions and scale it to a prior emission estimate, which better describes observed atmospheric methane variability at tropical latitudes. The clean air sites of Cape Ferguson and Cape Grim are the least affected by local emissions, while Wollongong, located in the populated southeast with regional coal mining, samples the most locally polluted air masses (2.5% of the total air mass versus Asia, accounting for ˜25% of the change in surface concentration above background. At Cape Ferguson and Cape Grim, emissions from ruminant animals are the largest source of methane above background, at approximately 20% and 30%, respectively, of the surface concentration. At Wollongong, emissions from coal mining are the largest source above background representing 60% of the surface concentration. The train data provide an effective way of observing transitions between urban, desert, and tropical landscapes.

  5. Interpretable Active Learning

    OpenAIRE

    Phillips, Richard L.; Chang, Kyu Hyun; Friedler, Sorelle A.

    2017-01-01

    Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. W...

  6. Modeling of the Enceladus water vapor jets for interpreting UVIS star and solar occultation observations

    Science.gov (United States)

    Portyankina, Ganna; Esposito, Larry W.; Aye, Klaus-Michael; Hansen, Candice J.

    2015-11-01

    One of the most spectacular discoveries of the Cassini mission is jets emanating from the southern pole of Saturn's moon Enceladus. The composition of the jets is water vapor and salty ice grains with traces of organic compounds. Jets, merging into a wide plume at a distance, are observed by multiple instruments on Cassini. Recent observations of the visible dust plume by the Cassini Imaging Science Subsystem (ISS) identified as many as 98 jet sources located along "tiger stripes" [Porco et al. 2014]. There is a recent controversy over whether some of these jets are "optical illusions" caused by geometrical overlap of continuous source eruptions along the "tiger stripes" in the field of view of ISS [Spitale et al. 2015]. Cassini's Ultraviolet Imaging Spectrograph (UVIS) observed occultations of several stars and the Sun by the water vapor plume of Enceladus. During the solar occultation separate collimated gas jets were detected inside the background plume [Hansen et al., 2006 and 2011]. These observations directly provide data about water vapor column densities along the line of sight of the UVIS instrument and could help distinguish between the presence of only localized or also continuous sources. We use Monte Carlo simulations and Direct Simulation Monte Carlo (DSMC) to model the plume of Enceladus with multiple (or continuous) jet sources. The models account for molecular collisions, gravitational and Coriolis forces. The models result in the 3-D distribution of water vapor density and surface deposition patterns. Comparison between the simulation results and column densities derived from UVIS observations provides constraints on the physical characteristics of the plume and jets. The specific geometry of the UVIS observations helps to estimate the production rates and velocity distribution of the water molecules emitted by the individual jets. Hansen, C. J. et al., Science 311:1422-1425 (2006); Hansen, C. J. et al., GRL 38:L11202 (2011).
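
    The comparison step mentioned above amounts to integrating a model number density along the instrument line of sight to obtain a column density. The Python sketch below performs that integration for a deliberately simple toy jet (inverse-square radial fall-off with a Gaussian angular profile about the jet axis); the geometry and parameters are placeholders and the sketch is not the DSMC plume model used in the study.

        import numpy as np

        def jet_density(points, source, axis, n0, r0, half_angle_deg):
            """Number density (m^-3) of a toy jet: 1/r^2 fall-off from the source
            combined with a Gaussian angular profile about the jet axis."""
            rel = points - source
            r = np.linalg.norm(rel, axis=-1)
            cos_theta = np.clip(rel @ axis / np.maximum(r, 1e-9), -1.0, 1.0)
            theta = np.arccos(cos_theta)
            sigma = np.radians(half_angle_deg)
            return n0 * (r0 / np.maximum(r, r0)) ** 2 * np.exp(-0.5 * (theta / sigma) ** 2)

        def column_density(p0, direction, density_func, s_max, n_steps=2000):
            """Trapezoidal integration of the density along a line of sight (m^-2)."""
            s = np.linspace(0.0, s_max, n_steps)
            pts = p0[None, :] + s[:, None] * direction[None, :]
            vals = density_func(pts)
            return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * (s[1] - s[0])))

        # Illustrative jet: source at the origin, axis along +z, placeholder parameters.
        source = np.zeros(3)
        axis = np.array([0.0, 0.0, 1.0])
        dens = lambda pts: jet_density(pts, source, axis, n0=1e15, r0=1e3, half_angle_deg=15.0)

        # Line of sight perpendicular to the jet axis, passing 50 km above the source.
        p0 = np.array([-3.0e5, 0.0, 5.0e4])
        los = np.array([1.0, 0.0, 0.0])
        print(f"column density ~ {column_density(p0, los, dens, s_max=6.0e5):.2e} m^-2")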

  7. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. Use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  8. Analytic modeling, simulation and interpretation of broadband beam coupling impedance bench measurements

    Energy Technology Data Exchange (ETDEWEB)

    Niedermayer, U., E-mail: niedermayer@temf.tu-darmstadt.de [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Eidam, L. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Boine-Frankenheim, O. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); GSI Helmholzzentrum für Schwerionenforschung, Planckstraße 1, 64291 Darmstadt (Germany)

    2015-03-11

    First, a generalized theoretical approach towards beam coupling impedances and stretched-wire measurements is introduced. Applied to a circular symmetric setup, this approach allows to compare beam and wire impedances. The conversion formulas for TEM scattering parameters from measurements to impedances are thoroughly analyzed and compared to the analytical beam impedance solution. A proof of validity for the distributed impedance formula is given. The interaction of the beam or the TEM wave with dispersive material such as ferrite is discussed. The dependence of the obtained beam impedance on the relativistic velocity β is investigated and found as material property dependent. Second, numerical simulations of wakefields and scattering parameters are compared. The applicability of scattering parameter conversion formulas for finite device length is investigated. Laboratory measurement results for a circularly symmetric test setup, i.e. a ferrite ring, are shown and compared to analytic and numeric models. The optimization of the measurement process and error reduction strategies are discussed.

  9. Hot-spots of primary productivity: An Alternative interpretation to Conventional upwelling models

    Science.gov (United States)

    van Ruth, Paul D.; Ganf, George G.; Ward, Tim M.

    2010-12-01

    The eastern Great Australian Bight (EGAB) forms part of the Southern and Indian Oceans and is an area of high ecological and economic importance. Although it supports a commercial fishery, quantitative estimates of the primary productivity underlying this industry are open to debate. Estimates range from 500 mg C m-2 day-1. Part of this variation may be due to the unique upwelling circulation of shelf waters in summer/autumn (November-April), which shares some similarities with highly productive eastern boundary current upwelling systems, but differs due to the influence of a northern boundary current, the Flinders Current, and a wide continental shelf. This study examines spatial variations in primary productivity in the EGAB during the upwelling seasons of 2005 and 2006. Daily integral productivity calculated using the vertically generalised production model (VGPM) showed a high degree of spatial variation. Productivity was low (modelled with the VGPM, which uses surface measures of phytoplankton biomass to calculate productivity. Macro-nutrient concentrations could not be used to explain the difference in the low and high productivities (silica > 1 μmol L-1, nitrate/nitrite > 0.4 μmol L-1, phosphate > 0.1 μmol L-1). Mixing patterns or micro-nutrient concentrations are possible explanations for spatial variations in primary productivity in the EGAB. On a global scale, daily rates of primary productivity of the EGAB lie between the highly productive eastern boundary current upwelling systems, and less productive coastal regions of western and south eastern Australia, and the oligotrophic ocean. However, daily productivity rates in the upwelling hotspots of the EGAB rival productivities in the Benguela and Humboldt currents.
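
    For context, the vertically generalised production model mentioned above estimates depth-integrated daily production from surface chlorophyll, an optimal carbon fixation rate, surface irradiance, euphotic depth and day length. The sketch below is a minimal rendering of that standard VGPM form (after Behrenfeld and Falkowski, 1997) with invented input values; in the full model the optimal fixation rate is derived from sea surface temperature, whereas here it is supplied directly, and the constants should be checked against the original reference before any quantitative use.

        def vgpm_npp(chl_surf, pb_opt, e0, z_eu, day_length):
            """Daily depth-integrated production (mg C m^-2 day^-1) in the VGPM form
            NPP = 0.66125 * PBopt * E0 / (E0 + 4.1) * Chl * Zeu * Dirr.

            chl_surf   surface chlorophyll concentration (mg Chl m^-3)
            pb_opt     optimal carbon fixation rate (mg C (mg Chl)^-1 h^-1)
            e0         sea-surface daily PAR (mol quanta m^-2 day^-1)
            z_eu       euphotic depth (m)
            day_length photoperiod (hours)
            """
            light_term = e0 / (e0 + 4.1)
            return 0.66125 * pb_opt * light_term * chl_surf * z_eu * day_length

        # Invented upwelling-season values, not measurements from the EGAB surveys.
        npp = vgpm_npp(chl_surf=1.2, pb_opt=4.5, e0=45.0, z_eu=40.0, day_length=13.5)
        print(f"VGPM daily productivity ~ {npp:.0f} mg C m-2 day-1")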

  10. Exploring the Gross Schoenebeck (Germany) geothermal site using a statistical joint interpretation of magnetotelluric and seismic tomography models

    Energy Technology Data Exchange (ETDEWEB)

    Munoz, Gerard; Bauer, Klaus; Moeck, Inga; Schulze, Albrecht; Ritter, Oliver [Deutsches GeoForschungsZentrum (GFZ), Telegrafenberg, 14473 Potsdam (Germany)

    2010-03-15

    Exploration for geothermal resources is often challenging because there are no geophysical techniques that provide direct images of the parameters of interest, such as porosity, permeability and fluid content. Magnetotelluric (MT) and seismic tomography methods yield information about subsurface distribution of resistivity and seismic velocity on similar scales and resolution. The lack of a fundamental law linking the two parameters, however, has limited joint interpretation to a qualitative analysis. By using a statistical approach in which the resistivity and velocity models are investigated in the joint parameter space, we are able to identify regions of high correlation and map these classes (or structures) back onto the spatial domain. This technique, applied to a seismic tomography-MT profile in the area of the Gross Schoenebeck geothermal site, allows us to identify a number of classes in accordance with the local geology. In particular, a high-velocity, low-resistivity class is interpreted as related to areas with thinner layers of evaporites; regions where these sedimentary layers are highly fractured may be of higher permeability. (author)
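
    A minimal sketch of this kind of joint-parameter-space classification: co-located resistivity and velocity values are clustered (here with k-means, one possible choice among several), and the class labels are mapped back onto the spatial grid. The grids, number of classes and preprocessing below are illustrative assumptions, not the actual Gross Schoenebeck workflow.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # Placeholder co-located models on a common 2-D grid (nz x nx):
        # resistivity in ohm.m and P-wave velocity in km/s.
        nz, nx = 60, 200
        resistivity = 10 ** rng.normal(1.5, 0.6, size=(nz, nx))
        velocity = rng.normal(4.5, 0.8, size=(nz, nx))

        # Work in the joint parameter space (log10 resistivity, velocity),
        # standardised so both parameters carry comparable weight.
        features = np.column_stack([np.log10(resistivity).ravel(), velocity.ravel()])
        features = (features - features.mean(axis=0)) / features.std(axis=0)

        # Identify classes (structures) as clusters in the joint parameter space.
        n_classes = 4
        labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)

        # Map the classes back onto the spatial domain for geological interpretation.
        class_section = labels.reshape(nz, nx)
        for k in range(n_classes):
            print(f"class {k}: {np.mean(class_section == k) * 100:.1f}% of the section")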

  11. Simplified mathematical models for interpreting the results of tests carried out by labelling the whole piezometric column in water wells

    International Nuclear Information System (INIS)

    Munera, H.A.

    1974-01-01

    Approximate methods used to interpret the results of tests based on radioactive tracer dilution in a single water well by labelling the whole piezometric column are described; these simple mathematical models have been used to obtain semi-quantitative data on the apparent (horizontal) velocity in non-homogeneous aquifers with flow velocities of metres per day. Measurements have also been made in a homogeneous aquifer with velocities of centimetres per day. Interpretation is based on determination of the average concentration for the various well zones; this involves recognition of a mean velocity for each region. All the tracer dilution effects that are not due to horizontal or vertical flow between two zones, i.e. convection, artificial mixing, diffusion and so on, are grouped together as a single term, which is taken arbitrarily to be proportional to the difference in concentration between the regions under consideration; its value is obtained from the experimental dilution curve. The model was applied to the solution of the three cases encountered most frequently during our measurements in Colombia: (a) when the well penetrates a permeable zone and an adjacent impermeable zone; (b) when the well penetrates a permeable zone contained between impermeable regions; and (c) when the well traverses an aquifer with two adjacent zones of different permeability contained between impermeable zones. The shape of the dilution curve (logarithm of concentration versus time, usually with two or more slopes) is predicted by the model, the approximate nature of which is consistent with the fact that the method of labelling the whole piezometric column is semi-quantitative. The results obtained for measurements made when there are considerable vertical flows are apparently correct, but there is no other experimental measurement available to confirm them. (author)
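
    The semi-quantitative interpretation described above rests on the classical point-dilution relation: with horizontal flow through the labelled column, the tracer concentration decays exponentially and the slope of log concentration versus time yields an apparent filtration velocity via the geometric factor V/(alpha*A) of the test section. The sketch below fits a single slope for a cylindrical section; the borehole radius, flow-distortion factor and synthetic dilution curve are illustrative, and the multi-slope, multi-zone treatment of the report is not reproduced.

        import numpy as np

        def apparent_velocity(times_h, conc, radius_m, alpha=2.0):
            """Apparent horizontal velocity from a single-slope point-dilution curve.

            Exponential dilution C(t) = C0 * exp(-lam * t) gives
            v_a = lam * V / (alpha * A) = lam * pi * r / (2 * alpha)
            for a cylindrical section of radius r (V/A = pi*r/2); alpha is the
            flow-distortion factor of the well screen (illustratively 2 here).
            """
            lam_per_h = -np.polyfit(times_h, np.log(conc), 1)[0]   # decay rate (1/h)
            return lam_per_h * np.pi * radius_m / (2.0 * alpha)    # m/h

        # Synthetic dilution curve: count rate proportional to tracer concentration.
        t = np.arange(0.0, 48.0, 4.0)              # hours since labelling
        c = 1000.0 * np.exp(-0.03 * t)             # placeholder single-slope decay

        v = apparent_velocity(t, c, radius_m=0.05)
        print(f"apparent velocity ~ {v * 1000 * 24:.1f} mm/day")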

  12. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Science.gov (United States)

    Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji

    2017-01-01

    In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixture based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples.
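
    As background for the likelihood ratios discussed above, the toy calculation below shows the qualitative (allele-presence-only) form of a single-locus LR for a two-person mixture, where the alternative hypothesis replaces the person of interest by one unknown, unrelated contributor. It deliberately ignores peak heights, drop-out and the other effects that the quantitative continuous model of Kongoh accounts for; the allele labels and frequencies are invented.

        def lr_two_person_mixture(mixture, victim, poi, freqs):
            """Qualitative single-locus LR for a two-person mixture.
            Hp: victim + POI are the contributors.
            Hd: victim + one unknown, unrelated person are the contributors."""
            mixture = set(mixture)
            # Numerator: under Hp the observed alleles must be exactly explained.
            p_hp = 1.0 if set(victim) | set(poi) == mixture else 0.0
            # Denominator: probability that a random genotype drawn from the mixture
            # alleles covers every allele the victim does not explain.
            required = mixture - set(victim)
            alleles = sorted(mixture)
            p_hd = 0.0
            for i, a in enumerate(alleles):
                for b in alleles[i:]:
                    if required <= {a, b}:
                        p_hd += freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]
            return p_hp / p_hd if p_hd > 0 else float("inf")

        # Invented locus: the mixture shows alleles {12, 14, 16, 17}.
        freqs = {12: 0.10, 14: 0.25, 16: 0.08, 17: 0.30}
        lr = lr_two_person_mixture({12, 14, 16, 17}, victim=(16, 17), poi=(12, 14), freqs=freqs)
        print(f"single-locus LR = {lr:.1f}")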

  13. Development and validation of open-source software for DNA mixture interpretation based on a quantitative continuous model.

    Directory of Open Access Journals (Sweden)

    Sho Manabe

    Full Text Available In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software "Kongoh" for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information of peak heights in the DNA profile and considers the effect of artifacts and allelic drop-out. By using this software, the likelihoods of 1-4 persons' contributions are calculated, and the most optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on the experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution in true contributors and non-contributors by using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution even for small amounts or degraded DNA samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy of the estimated number of contributors than another software based on the quantitative continuous model. Therefore, Kongoh is useful in accurately interpreting DNA evidence like mixtures and small amounts or degraded DNA samples.

  14. Total Productive Maintenance And Role Of Interpretive Structural Modeling And Structural Equation Modeling In Analyzing Barriers In Its Implementation A Literature Review

    OpenAIRE

    Prasanth S. Poduval; Dr. Jagathy Raj V. P.; Dr. V. R. Pramod

    2015-01-01

    Abstract - The aim of the authors is to present a review of the literature on Total Productive Maintenance and the barriers to implementation of Total Productive Maintenance (TPM). The paper begins with a brief description of TPM and the barriers to its implementation. Interpretive Structural Modeling (ISM) and its role in analyzing the barriers to TPM implementation is explained in brief. Applications of ISM in analyzing issues in various fields are highlighted with special emphasis on TPM. T...

  15. How a joint interpretation of seismic scattering, velocity, and attenuation models explains the nature of the Campi Flegrei (Italy).

    Science.gov (United States)

    Calo, M.; Tramelli, A.

    2017-12-01

    Seismic P and S velocity models (and their ratio Vp/Vs) help illuminate the geometrical structure of the bodies and give insight into the presence of water, molten or gas-saturated regions. Seismic attenuation represents the anelastic behavior of the medium. Due to its dependence on temperature, fluid content and the presence of cracks, this parameter is also largely used to characterize the structures of volcanoes and geothermal areas. Scattering attenuation is related, in the upper crust, to the amount, size and organization of the fractures, giving complementary information on the state of the medium. Therefore a joint interpretation of these models provides an exhaustive view of the elastic parameters in volcanic regions. Campi Flegrei is an active caldera marked by strong vertical deformations of the ground called bradyseisms, and several models have been proposed to describe the nature and the geometry of the bodies responsible for the bradyseisms. Here we show Vp, Vp/Vs, Qp and scattering models carried out by applying an enhanced seismic tomography method that combines the double-difference approach (Zhang and Thurber, 2003) and the Weighted Average Method (Calò et al., 2009, Calò et al., 2011, 2013). The data used are the earthquakes recorded during the largest bradyseism crisis of the 1980s. Our method allowed us to image structures with linear dimensions of 0.5-1.2 km, an improvement in resolving power of at least a factor of two over other published models (e.g. Priolo et al., 2012). The joint interpretation of seismic models allowed us to discern small anomalous bodies at shallow depth (0.5-2.0 km) marked by relatively low Vp, high Vp/Vs ratio and low Qp values, associated with the presence of a shallow, water-saturated geothermal reservoir, from regions with low Vp, low Vp/Vs and low Qp related to the gas-saturated part of the reservoir. At greater depth (2-3.5 km) bodies with high Vp and Vp/Vs and low Qp are associated with magmatic intrusions. The Scattering

  16. Use of modeling and simulation in the planning, analysis and interpretation of ultrasonic testing; Einsatz von Modellierung und Simulation bei der Planung, Analyse und Interpretation von Ultraschallpruefungen

    Energy Technology Data Exchange (ETDEWEB)

    Algernon, Daniel [SVTI Schweizerischer Verein fuer technische Inspektionen, Wallisellen (Switzerland). ZfP-Labor; Grosse, Christian U. [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Zerstoerungsfreie Pruefung

    2016-05-01

    Acoustic testing methods such as ultrasound and impact echo are an important tool in building diagnostics. The range of applications includes thickness measurements, the imaging of the internal component geometry, and the detection of voids (gravel pockets), delaminations or, possibly, grouting faults in the interior of metallic cladding tubes of tendon ducts. Basically, acoustic methods for non-destructive testing (NDT) are based on the excitation of elastic waves that interact with the target object (e.g. a discontinuity to be detected in the component) at acoustic interfaces. From the signal received at the component surface this interaction is to be detected and interpreted to draw conclusions about the presence of the target object and, optionally, to determine its approximate size and position. Although the basic physical principles underlying the application of elastic waves in NDT are known, testing can be complicated by complex conditions in the form of restricted access, component geometries, or the type and form of reflectors. Estimating the chances of success of a test is therefore often not trivial. These circumstances highlight the importance of simulations, which provide a theoretically sound basis for test planning and make it easy to optimize test systems. The available simulation methods are varied; commonly used are, in particular, the finite element method, the elastodynamic finite integration technique and semi-analytical calculation methods.

  17. A neoinstitutionalist interpretation of the changes in the Russian oil model

    International Nuclear Information System (INIS)

    Locatelli, Catherine; Rossiaud, Sylvain

    2011-01-01

    This paper deals with the current change in the institutional and organizational framework of the Russian oil industry. Regarding this evolution, the main characteristic is the increasing involvement of national oil companies in upstream activities. The point is to explain this reorganization by relying on the New Institutional Economics framework. These theoretical works highlight that the institutional environment and governance structures complement each other. We argue that the current reorganization is an attempt to increase the coherence of the institutional arrangement governing the transactions between the Russian state and the private oil companies. - Highlights: → In this study, we analyse the evolution of the Russian oil model since the reform of the 1990s. → The aim of the present paper is to study reasons and consequences of renewed state control of Russia's oil industry, and the likelihood and manner of stabilization of the institutional framework in which operators are working. → One of the central hypotheses in this paper therefore is that NOCs are a substitute for weak and contested ownership rights. They are used by the State to protect its ownership rights.

  18. Mountains on Io: High-resolution Galileo observations, initial interpretations, and formation models

    Science.gov (United States)

    Turtle, E.P.; Jaeger, W.L.; Keszthelyi, L.P.; McEwen, A.S.; Milazzo, M.; Moore, J.; Phillips, C.B.; Radebaugh, J.; Simonelli, D.; Chuang, F.; Schuster, P.; Alexander, D.D.A.; Capraro, K.; Chang, S.-H.; Chen, A.C.; Clark, J.; Conner, D.L.; Culver, A.; Handley, T.H.; Jensen, D.N.; Knight, D.D.; LaVoie, S.K.; McAuley, M.; Mego, V.; Montoya, O.; Mortensen, H.B.; Noland, S.J.; Patel, R.R.; Pauro, T.M.; Stanley, C.L.; Steinwand, D.J.; Thaller, T.F.; Woncik, P.J.; Yagi, G.M.; Yoshimizu, J.R.; Alvarez Del Castillo, E.M.; Beyer, R.; Branston, D.; Fishburn, M.B.; Muller, Birgit; Ragan, R.; Samarasinha, N.; Anger, C.D.; Cunningham, C.; Little, B.; Arriola, S.; Carr, M.H.; Asphaug, E.; Morrison, D.; Rages, K.; Banfield, D.; Bell, M.; Burns, J.A.; Carcich, B.; Clark, B.; Currier, N.; Dauber, I.; Gierasch, P.J.; Helfenstein, P.; Mann, M.; Othman, O.; Rossier, L.; Solomon, N.; Sullivan, R.; Thomas, P.C.; Veverka, J.; Becker, T.; Edwards, K.; Gaddis, L.; Kirk, R.; Lee, E.; Rosanova, T.; Sucharski, R.M.; Beebe, R.F.; Simon, A.; Belton, M.J.S.; Bender, K.; Fagents, S.; Figueredo, P.; Greeley, R.; Homan, K.; Kadel, S.; Kerr, J.; Klemaszewski, J.; Lo, E.; Schwarz, W.; Williams, D.; Williams, K.; Bierhaus, B.; Brooks, S.; Chapman, C.R.; Merline, B.; Keller, J.; Tamblyn, P.; Bouchez, A.; Dyundian, U.; Ingersoll, A.P.; Showman, A.; Spitale, J.; Stewart, S.; Vasavada, A.; Breneman, H.H.; Cunningham, W.F.; Johnson, T.V.; Jones, T.J.; Kaufman, J.M.; Klaasen, K.P.; Levanas, G.; Magee, K.P.; Meredith, M.K.; Orton, G.S.; Senske, D.A.; West, A.; Winther, D.; Collins, G.; Fripp, W.J.; Head, J. W.; Pappalardo, R.; Pratt, S.; Prockter, L.; Spaun, N.; Colvin, T.; Davies, M.; DeJong, E.M.; Hall, J.; Suzuki, S.; Gorjian, Z.; Denk, T.; Giese, B.; Koehler, U.; Neukum, G.; Oberst, J.; Roatsch, T.; Tost, W.; Wagner, R.; Dieter, N.; Durda, D.; Geissler, P.; Greenberg, R.J.; Hoppa, G.; Plassman, J.; Tufts, R.; Fanale, F.P.; Granahan, J.C.

    2001-01-01

    During three close flybys in late 1999 and early 2000 the Galileo spacecraft acquired new observations of the mountains that tower above Io's surface. These images have revealed surprising variety in the mountains' morphologies. They range from jagged peaks several kilometers high to lower, rounded structures. Some are very smooth, others are covered by numerous parallel ridges. Many mountains have margins that are collapsing outward in large landslides or series of slump blocks, but a few have steep, scalloped scarps. From these observations we can gain insight into the structure and material properties of Io's crust as well as into the erosional processes acting on Io. We have also investigated formation mechanisms proposed for these structures using finite-element analysis. Mountain formation might be initiated by global compression due to the high rate of global subsidence associated with Io's high resurfacing rate; however, our models demonstrate that this hypothesis lacks a mechanism for isolating the mountains. The large fraction (~40%) of mountains that are associated with paterae suggests that in some cases these features are tectonically related. Therefore we have also simulated the stresses induced in Io's crust by a combination of a thermal upwelling in the mantle with global lithospheric compression and have shown that this can focus compressional stresses. If this mechanism is responsible for some of Io's mountains, it could also explain the common association of mountains with paterae. Copyright 2001 by the American Geophysical Union.

  19. The System Dynamics Model User Sustainability Explorer (SD-MUSE): a user-friendly tool for interpreting system dynamic models

    Science.gov (United States)

    System Dynamics (SD) models are useful for holistic integration of data to evaluate indirect and cumulative effects and inform decisions. Complex SD models can provide key insights into how decisions affect the three interconnected pillars of sustainability. However, the complexi...

  20. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    Science.gov (United States)

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    clinical utility of the proposed sepsis prediction model.

  1. Structural modeling of the Vichada impact structure from interpreted ground gravity and magnetic anomalies

    International Nuclear Information System (INIS)

    Hernandez, Orlando; Khurama, Sait; Alexander, Gretta C

    2011-01-01

    A prominent positive free-air gravity anomaly mapped over a roughly 50-km diameter basin is consistent with a mascon centered on (4 degrees 30 minutes N, 69 degrees 15 minutes W) in the Vichada Department, Colombia, South America. Ground follow-up gravity and magnetic anomalies were modeled, confirming the regional free-air gravity anomalies. These potential field anomalies suggest a hidden complex impact basin structure filled with Tertiary sedimentary rocks and recent Quaternary deposits. Negative Bouguer anomalies of 8 mGal to 15 mGal amplitude are associated with a concentric sedimentary basin with a thickness varying from 100 m to 500 m in the outer rings to 700 m to 1000 m at the center of the impact crater basin. Strong positive magnetic anomalies of 100 nT to 300 nT amplitude indicate the presence of a local Precambrian crystalline basement that was affected by intensive faulting producing tectonic blocks dipping toward the center of the structure, showing a typical domino structure of impact craters such as that of Sudbury, Ontario, Canada. Basic to intermediate mineralized veins and dikes with contrasting density and magnetic susceptibility properties could be emplaced along these faulting zones, as inferred from local gravity and magnetic highs. The geologic mapping of the area is limited by the flat topography and absence of outcrops/geomorphologic units. Nevertheless, local normal faults along the inner ring together with radially sparse irregular blocks over flat terrain can be associated with terraced rims or collapse of the inner crater structure and ejecta blanket, respectively. A detailed airborne electromagnetic survey is recommended to confirm the gravity and magnetic anomalies, together with a seismic program to evaluate the economic implications for energy and mineral exploration of the Vichada impact structure.
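
    As a rough, back-of-the-envelope consistency check (not part of the original record), the infinite Bouguer slab approximation relates a sediment-filled basin's gravity low to its thickness h and density contrast Δρ:

$$ \Delta g \approx 2\pi G\,\Delta\rho\,h $$

    With an assumed contrast of Δρ ≈ -350 kg/m³ between the basin fill and the crystalline basement, thicknesses of 700 m to 1000 m give Δg of roughly -10 to -15 mGal, in line with the reported Bouguer amplitudes, while the 100 m to 500 m outer rings would contribute only about -1 to -7 mGal.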

  2. Interpreting conjunctions.

    Science.gov (United States)

    Bott, Lewis; Frisson, Steven; Murphy, Gregory L

    2009-04-01

    The interpretation generated from a sentence of the form P and Q can often be different to that generated by Q and P, despite the fact that "and" has a symmetric truth-conditional meaning. We experimentally investigated to what extent this difference in meaning is due to the connective "and" and to what extent it is due to the order of mention of the events in the sentence. In three experiments, we collected interpretations of sentences in which we varied the presence of the conjunction, the order of mention of the events, and the type of relation holding between the events (temporally vs. causally related events). The results indicated that the effect of using a conjunction was dependent on the discourse relation between the events. Our findings contradict a narrative marker theory of "and", but provide partial support for a single-unit theory derived from Carston (2002). The results are discussed in terms of conjunction processing and implicatures of temporal order.

  3. Scanning Tunneling Microscopy - image interpretation

    International Nuclear Information System (INIS)

    Maca, F.

    1998-01-01

    The basic ideas of image interpretation in Scanning Tunneling Microscopy are presented using simple quantum-mechanical models and illustrated with examples of successful application. The importance of correctly interpreting the images produced by this brilliant experimental surface technique is stressed.

  4. Interpreting the nonlinear dielectric response of glass-formers in terms of the coupling model

    International Nuclear Information System (INIS)

    Ngai, K. L.

    2015-01-01

    Nonlinear dielectric measurements at high electric fields of glass-forming glycerol and propylene carbonate were initially carried out to elucidate the dynamically heterogeneous nature of the structural α-relaxation. Recently, the measurements were extended to sufficiently high frequencies to investigate the nonlinear dielectric response of faster processes, including the so-called excess wing (EW), appearing as a second power law at high frequencies in the loss spectra of many glass formers without a resolved secondary relaxation. While a strong increase of dielectric constant and loss is found in the nonlinear dielectric response of the α-relaxation, there is a lack of significant change in the EW. This difference in nonlinear dielectric properties between the EW and the α-relaxation, which came as a surprise to the experimentalists who found it, is explained in the framework of the coupling model by identifying the EW with the nearly constant loss (NCL) of caged molecules, originating from the anharmonicity of the intermolecular potential. The NCL is terminated at longer times (lower frequencies) by the onset of the primitive relaxation, which is followed sequentially by relaxation processes involving increasing numbers of molecules until the terminal Kohlrausch α-relaxation is reached. These intermediate faster relaxations, combined to form the so-called Johari-Goldstein (JG) β-relaxation, are spatially and dynamically heterogeneous, and hence exhibit nonlinear dielectric effects, as found in glycerol and propylene carbonate, where the JG β-relaxation is not resolved, and in D-sorbitol, where it is resolved. Like the linear susceptibility, χ1(f), the frequency dispersion of the third-order dielectric susceptibility, χ3(f), was found to depend primarily on the α-relaxation time, and to be independent of temperature T and pressure P. I show that this property of the frequency dispersions of χ1(f) and χ3(f) is the characteristic of the many-body relaxation.

  5. Implementation of the NCRP wound model for interpretation of bioassay data for intake of radionuclides through contaminated wounds

    International Nuclear Information System (INIS)

    Ishigure, Nobuhito

    2009-01-01

    Emergency response preparedness for radiological accidents involving wound contamination has become more important, considering the ongoing expansion of nuclear fuel cycle activities in the nuclear industry. The US National Council on Radiation Protection and Measurements (NCRP) proposed a biokinetic and dosimetric model for the intake of radionuclides through contaminated wounds in 2007. The present paper describes the implementation of this NCRP wound model for the prediction of the systemic behaviour of some important radioactive elements encountered in workplaces related to the nuclear industry. The NCRP wound model was linked to the current ICRP systemic model at each blood compartment, and the simultaneous differential equations for the radioactivity content of each compartment and of excreta were solved with the Runge-Kutta method. The calculated wound, whole-body or specific-organ retention and daily urinary or faecal excretion rates of some selected elements will be useful for the interpretation of bioassay data and dose assessment in cases of wound contamination. (author)
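
    The computational core described above, first-order transfers between compartments integrated with a Runge-Kutta scheme, can be sketched as follows. The compartment layout and every rate constant below are purely illustrative placeholders, not the NCRP Report 156 or ICRP values; in the real implementation the wound compartments feed the blood compartment of the element-specific ICRP systemic model, giving a much larger system of the same linear form.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of a first-order compartment model of the kind used in wound
# biokinetics (rate constants below are illustrative, NOT the NCRP/ICRP values).
k_wound_blood = 0.5   # 1/day, transfer from wound site to blood (assumed)
k_blood_organ = 1.5   # 1/day, uptake from blood into a systemic organ (assumed)
k_blood_urine = 1.0   # 1/day, excretion from blood to urine (assumed)
k_organ_blood = 0.05  # 1/day, recycling from organ back to blood (assumed)

def rhs(t, q):
    wound, blood, organ, urine = q
    d_wound = -k_wound_blood * wound
    d_blood = (k_wound_blood * wound + k_organ_blood * organ
               - (k_blood_organ + k_blood_urine) * blood)
    d_organ = k_blood_organ * blood - k_organ_blood * organ
    d_urine = k_blood_urine * blood           # cumulative urinary excretion
    return [d_wound, d_blood, d_organ, d_urine]

# Unit intake deposited in the wound at t = 0; integrate with an explicit Runge-Kutta scheme.
sol = solve_ivp(rhs, (0, 30), [1.0, 0.0, 0.0, 0.0], method="RK45",
                t_eval=np.arange(0, 31))
daily_urine = np.diff(sol.y[3])               # daily urinary excretion rate
print("wound retention, day 7:", sol.y[0][7])
print("urinary excretion on day 7:", daily_urine[6])
```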

  6. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition when an object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexity using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. The biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will recognize targets reliably.

  7. Interpreting the nonlinear dielectric response of glass-formers in terms of the coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Ngai, K. L. [CNR-IPCF, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy and Dipartimento di Fisica, Università di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa (Italy)

    2015-03-21

    Nonlinear dielectric measurements at high electric fields of glass-forming glycerol and propylene carbonate were initially carried out to elucidate the dynamically heterogeneous nature of the structural α-relaxation. Recently, the measurements were extended to sufficiently high frequencies to investigate the nonlinear dielectric response of faster processes, including the so-called excess wing (EW), appearing as a second power law at high frequencies in the loss spectra of many glass formers without a resolved secondary relaxation. While a strong increase of dielectric constant and loss is found in the nonlinear dielectric response of the α-relaxation, there is a lack of significant change in the EW. This difference in nonlinear dielectric properties between the EW and the α-relaxation, which came as a surprise to the experimentalists who found it, is explained in the framework of the coupling model by identifying the EW with the nearly constant loss (NCL) of caged molecules, originating from the anharmonicity of the intermolecular potential. The NCL is terminated at longer times (lower frequencies) by the onset of the primitive relaxation, which is followed sequentially by relaxation processes involving increasing numbers of molecules until the terminal Kohlrausch α-relaxation is reached. These intermediate faster relaxations, combined to form the so-called Johari-Goldstein (JG) β-relaxation, are spatially and dynamically heterogeneous, and hence exhibit nonlinear dielectric effects, as found in glycerol and propylene carbonate, where the JG β-relaxation is not resolved, and in D-sorbitol, where it is resolved. Like the linear susceptibility, χ1(f), the frequency dispersion of the third-order dielectric susceptibility, χ3(f), was found to depend primarily on the α-relaxation time, and to be independent of temperature T and pressure P. I show that this property of the frequency dispersions of χ1(f) and χ3(f) is the characteristic of the many-body relaxation.

  8. Using simulation to interpret a discrete time survival model in a complex biological system: fertility and lameness in dairy cows.

    Directory of Open Access Journals (Sweden)

    Christopher D Hudson

    Full Text Available The ever-growing volume of data routinely collected and stored in everyday life presents researchers with a number of opportunities to gain insight and make predictions. This study aimed to demonstrate the usefulness, in a specific clinical context, of a simulation-based technique called probabilistic sensitivity analysis (PSA) in interpreting the results of a discrete time survival model based on a large dataset of routinely collected dairy herd management data. Data from 12,515 dairy cows (from 39 herds) were used to construct a multilevel discrete time survival model in which the outcome was the probability of a cow becoming pregnant during a given two-day period of risk, with the presence or absence of a recorded lameness event during various time frames relative to the risk period amongst the potential explanatory variables. A separate simulation model was then constructed to evaluate the wider clinical implications of the model results (i.e. the potential for a herd's incidence rate of lameness to influence its overall reproductive performance) using PSA. Although the discrete time survival analysis revealed some relatively large associations between lameness events and risk of pregnancy (for example, occurrence of a lameness case within 14 days of a risk period was associated with a 25% reduction in the risk of the cow becoming pregnant during that risk period), PSA revealed that, when viewed in the context of a realistic clinical situation, a herd's lameness incidence rate is highly unlikely to influence its overall reproductive performance to a meaningful extent in the vast majority of situations. Construction of a simulation model within a PSA framework proved to be a very useful additional step to aid contextualisation of the results from a discrete time survival model, especially where the research is designed to guide on-farm management decisions at population (i.e. herd) rather than individual level.
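
    The basic PSA mechanics, drawing each uncertain input from a distribution, re-running a simple outcome model for every draw, and summarising the spread of results, can be sketched as below. The distributions, the baseline pregnancy risk, and the toy herd-level calculation are all assumptions for illustration, not the published herd simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal probabilistic sensitivity analysis (PSA) sketch, not the authors' herd model:
# sample uncertain inputs from distributions, run a simple outcome model for each draw,
# and summarise the spread of the output.
n_draws = 5000
# Relative risk of conception in risk periods affected by a recent lameness event (assumed).
rr_lame = rng.normal(0.75, 0.05, n_draws)
# Herd lameness burden: fraction of risk periods affected by a recent lameness event (assumed).
frac_affected = rng.uniform(0.02, 0.20, n_draws)
baseline_preg_risk = 0.04   # assumed baseline probability of pregnancy per 2-day risk period

# Herd-level mean risk of pregnancy per risk period under each parameter draw.
herd_risk = baseline_preg_risk * (1 - frac_affected + frac_affected * rr_lame)
relative_change = herd_risk / baseline_preg_risk - 1

print("median change in herd-level pregnancy risk: %.1f%%" % (100 * np.median(relative_change)))
print("95%% interval: %.1f%% to %.1f%%" %
      tuple(100 * np.percentile(relative_change, [2.5, 97.5])))
```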

  9. [The enigma of the biological interpretation of the linear-quadratic model finally resolved? A summary for non-mathematicians].

    Science.gov (United States)

    Bodgi, L; Canet, A; Granzotto, A; Britel, M; Puisieux, A; Bourguignon, M; Foray, N

    2016-06-01

    The linear-quadratic (LQ) model is the only mathematical formula linking cellular survival and radiation dose that is sufficiently consensual to help radiation oncologists and radiobiologists describe radiation-induced events. However, this formula, proposed in the 1970s, and the α and β parameters on which it is based have long remained without a relevant biological meaning. From a collection of cutaneous fibroblasts with different radiosensitivities, built up over 12 years by more than 50 French radiation oncologists, we recently pointed out that the ATM protein, a major actor of the radiation response, diffuses from the cytoplasm to the nucleus after irradiation. The evidence of this nuclear shuttling of ATM allowed us to provide a biological interpretation of the mathematical features of the LQ model, validated against a hundred radiosensitive cases. A mechanistic explanation of the radiosensitivity of syndromes caused by mutation of cytoplasmic proteins, and of the low-dose hypersensitivity phenomenon, has also been proposed. In this review, we present our resolution of the LQ model in the most didactic way. Copyright © 2016 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
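
    For readers unfamiliar with the notation, the standard form of the LQ model writes the surviving fraction S after a single dose D as

$$ S(D) = e^{-\alpha D - \beta D^{2}} $$

    where α (Gy⁻¹) weights the dose-linear term, β (Gy⁻²) the dose-squared term, and the α/β ratio gives the dose at which the two contributions are equal; it is these α and β parameters that the work above seeks to tie to the nuclear shuttling of ATM.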

  10. Environment characterization as an aid to wheat improvement: interpreting genotype-environment interactions by modelling water-deficit patterns in North-Eastern Australia.

    Science.gov (United States)

    Chenu, K; Cooper, M; Hammer, G L; Mathews, K L; Dreccer, M F; Chapman, S C

    2011-03-01

    Genotype-environment interactions (GEI) limit genetic gain for complex traits such as tolerance to drought. Characterization of the crop environment is an important step in understanding GEI. A modelling approach is proposed here to characterize broadly (large geographic area, long-term period) and locally (field experiment) drought-related environmental stresses, which enables breeders to analyse their experimental trials with regard to the broad population of environments that they target. Water-deficit patterns experienced by wheat crops were determined for drought-prone north-eastern Australia, using the APSIM crop model to account for the interactions of crops with their environment (e.g. the feedback of plant growth on water depletion). Simulations based on more than 100 years of historical climate data were conducted for representative locations, soils, and management systems, for a check cultivar, Hartog. The three main environment types identified differed in their patterns of simulated water stress around flowering and during grain filling. Over the entire region, the terminal drought-stress pattern was most common (50% of production environments), followed by a flowering stress (24%), although the frequencies of occurrence of the three types varied greatly across regions, years, and management. This environment classification was applied to 16 trials relevant to late-stage testing of a breeding programme. The incorporation of the independently determined environment types in a statistical analysis assisted interpretation of the GEI for yield among the 18 representative genotypes by reducing the relative effect of GEI compared with genotypic variance, and helped to identify opportunities to improve breeding and germplasm-testing strategies for this region.
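
    The environment-typing step, grouping simulated seasonal water-stress time courses into a few recurring patterns, can be illustrated with a small sketch. The synthetic stress curves, the thermal-time axis, and the choice of three k-means clusters are assumptions standing in for the APSIM output and the classification actually used.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Sketch of environment typing: cluster simulated water-stress time courses
# (synthetic curves indexed by thermal time around flowering, not real APSIM output)
# into a few recurring stress patterns.
n_env, n_steps = 300, 40
t = np.linspace(-300, 450, n_steps)          # thermal time relative to flowering (assumed axis)

def stress_curve(onset, severity):
    """Water-stress index in [0, 1]; 1 = no stress, lower values = more stress."""
    return 1 - severity / (1 + np.exp(-(t - onset) / 60.0))

onsets = rng.uniform(-200, 400, n_env)
severities = rng.uniform(0.2, 0.9, n_env)
X = np.array([stress_curve(o, s) for o, s in zip(onsets, severities)])
X += rng.normal(0, 0.03, X.shape)            # a little noise on top of the smooth curves

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for k in range(3):
    frequency = np.mean(kmeans.labels_ == k)
    print(f"environment type {k}: {frequency:.0%} of simulated environments")
```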

  11. Objective interpretation as conforming interpretation

    OpenAIRE

    Lidka Rodak

    2011-01-01

    The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in the practical discourse, focusing on the understanding offered by judicature. The thesis of the article is that objective interpretation, as identified with the textualists' position, is not possible to uphold, and ...

  12. Localized Smart-Interpretation

    Science.gov (United States)

    Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom

    2014-05-01

    The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model the geologist needs to rely on the geophysical data. The problem, however, is that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that (1) allows all geophysical data to be considered when building the geological model, (2) is fast, and (3) allows quantification of the geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as, for example, the depth to the base of a groundwater reservoir. First we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpolation through f(d,m). As the geological expert proceeds with the interpretation, the number of interpreted data points from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f
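
    A minimal sketch of the idea, not the authors' implementation, is to treat f(d,m) as a regression model that is refit every time the expert adds a pick, so that predictions (with uncertainties) at uninterpreted locations improve as the interpretation proceeds. The two attributes in m, the synthetic "expert" used to generate picks, and the Bayesian linear model are all placeholders.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)

# Sketch of learning f(d, m) from an expert's picks: m holds quantified information at a
# location (here two hypothetical attributes, e.g. a resistivity-derived value and surface
# elevation), d is the interpreted depth to a layer boundary. A Bayesian linear model stands
# in for the statistical model; it is refit as picks accumulate.
def synthetic_pick(m):
    # Hypothetical "true" expert behaviour, used only to generate example picks.
    return 40.0 + 0.8 * m[0] - 0.1 * m[1] + rng.normal(0, 1.5)

model = BayesianRidge()
picks_m, picks_d = [], []
for i in range(1, 31):
    m = rng.uniform([0, 0], [50, 120])        # attributes at the newly interpreted location
    picks_m.append(m)
    picks_d.append(synthetic_pick(m))
    if i >= 5:                                 # need a few picks before fitting
        model.fit(np.array(picks_m), np.array(picks_d))

# Predict the interpretation (with uncertainty) at an uninterpreted location.
m_new = np.array([[30.0, 80.0]])
d_pred, d_std = model.predict(m_new, return_std=True)
print(f"predicted depth: {d_pred[0]:.1f} m +/- {d_std[0]:.1f} m")
```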

  13. Barriers to implement green supply chain management in automobile industry using interpretive structural modeling technique: An Indian perspective

    Directory of Open Access Journals (Sweden)

    Sunil Luthra

    2011-07-01

    Full Text Available Purpose: Green Supply Chain Management (GSCM) has received growing attention in the last few years. Most automobile manufacturers are setting up their own plants in the competitive Indian market. Due to public awareness and economic, environmental or legislative reasons, the requirement for GSCM has increased. In this context, this study aims to develop a structural model of the barriers to implementing GSCM in the Indian automobile industry. Design/methodology/approach: We have identified various barriers and the contextual relationships among them. Classification of the barriers has been carried out based upon their dependence and driving power with the help of MICMAC analysis. In addition, a structural model of the barriers to implementing GSCM in the Indian automobile industry has been put forward using the Interpretive Structural Modeling (ISM) technique. Findings: Eleven relevant barriers have been identified from the literature and subsequent discussions with experts from academia and industry. Of these, five barriers have been identified as dependent variables, three as driver variables and three as linkage variables. No barrier has been identified as an autonomous variable. Four barriers have been identified as top-level barriers and one as a bottom-level barrier. Removal of these barriers is also discussed. Research limitations/implications: A hypothetical model of these barriers has been developed based upon experts' opinions. The conclusions so drawn may be further modified for application to real-world problems. Practical implications: A clear understanding of these barriers will help organizations to prioritize better and manage their resources in an efficient and effective way. Originality/value: Through this paper we contribute to identifying the barriers to implementing GSCM in the Indian automobile industry and to prioritizing them.
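
    The MICMAC step referred to above reduces to simple arithmetic on the final reachability matrix: row sums give driving power, column sums give dependence, and the two together place each barrier in one of four quadrants. The 5×5 binary matrix below is a made-up toy example, not the study's 11-barrier matrix.

```python
import numpy as np

# MICMAC classification sketch on a small hypothetical final reachability matrix
# (entry [i, j] = 1 means barrier i influences barrier j, including transitive links).
R = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
])

driving_power = R.sum(axis=1)   # how many barriers each barrier influences
dependence = R.sum(axis=0)      # by how many barriers each barrier is influenced
median = (R.shape[0] + 1) / 2   # simple cut-off between "high" and "low"

for i, (dr, de) in enumerate(zip(driving_power, dependence)):
    if dr > median and de > median:
        cls = "linkage"
    elif dr > median:
        cls = "driver (independent)"
    elif de > median:
        cls = "dependent"
    else:
        cls = "autonomous"
    print(f"barrier {i + 1}: driving power={dr}, dependence={de} -> {cls}")
```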

  14. Interpreting consumer preferences: physicohedonic and psychohedonic models yield different information in a coffee-flavored dairy beverage.

    Science.gov (United States)

    Li, Bangde; Hayes, John E; Ziegler, Gregory R

    2014-09-01

    Designed experiments provide product developers with feedback on the relationship between formulation and consumer acceptability. While actionable, this approach typically assumes a simple psychophysical relationship between ingredient concentration and perceived intensity. This assumption may not be valid, especially in cases where perceptual interactions occur. Additional information can be gained by considering the liking-intensity function, as single ingredients can influence more than one perceptual attribute. Here, 20 coffee-flavored dairy beverages were formulated using a fractional mixture design that varied the amounts of coffee extract, fluid milk, sucrose, and water. Overall liking (liking) was assessed by 388 consumers using an incomplete block design (4 out of 20 prototypes) to limit fatigue; all participants also rated the samples for intensity of coffee flavor (coffee), milk flavor (milk), sweetness (sweetness) and thickness (thickness). Across product means, the concentration variables explained 52% of the variance in liking in a main-effects multiple regression. The amounts of sucrose (β = 0.46) and milk (β = 0.46) contributed significantly to the model (p's < 0.02), while coffee extract (β = -0.17; p = 0.35) did not. A comparable model based on the perceived intensities explained 63% of the variance in mean liking; sweetness (β = 0.53) and milk (β = 0.69) contributed significantly to the model (p's < 0.04), while the influence of coffee flavor (β = 0.48) was positive but only marginal (p = 0.09). Since a strong linear relationship existed between coffee extract concentration and coffee flavor, this discrepancy between the two models was unexpected, and probably indicates that adding more coffee extract also adds a negative attribute, e.g. too much bitterness. In summary, modeling liking as a function of both perceived intensity and physical concentration provides a richer interpretation of consumer data.
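
    The contrast drawn above, regressing mean liking on physical concentrations versus on perceived intensities, can be reproduced in miniature on synthetic data. The 20 "prototypes", the psychophysical functions, and the bitterness penalty below are assumptions chosen only to illustrate why the two models can tell different stories.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Physicohedonic model: liking ~ ingredient levels.
# Psychohedonic model:  liking ~ perceived attribute intensities.
n = 20                                   # number of prototypes (assumed)
sucrose = rng.uniform(0, 8, n)           # hypothetical ingredient levels
milk = rng.uniform(20, 80, n)
coffee = rng.uniform(0.5, 3.0, n)

# Assumed perception: intensities track concentration but saturate or interact.
sweetness = np.sqrt(sucrose) + rng.normal(0, 0.2, n)
milk_flavor = np.log1p(milk) + rng.normal(0, 0.2, n)
coffee_flavor = coffee + 0.3 * coffee**2 + rng.normal(0, 0.2, n)
bitterness = 0.8 * coffee**2 + rng.normal(0, 0.2, n)

# Assumed hedonics: liking driven by perception, with a penalty for excess bitterness.
liking = (3 + 1.2 * sweetness + 0.8 * milk_flavor + 0.5 * coffee_flavor
          - 0.6 * bitterness + rng.normal(0, 0.3, n))

X_phys = np.column_stack([sucrose, milk, coffee])
X_psych = np.column_stack([sweetness, milk_flavor, coffee_flavor])
physico = LinearRegression().fit(X_phys, liking)
psycho = LinearRegression().fit(X_psych, liking)

print("R2, concentrations as predictors:       ", round(physico.score(X_phys, liking), 2))
print("R2, perceived intensities as predictors:", round(psycho.score(X_psych, liking), 2))
```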

  15. Laboratory Experiments and Modeling for Interpreting Field Studies of Secondary Organic Aerosol Formation Using an Oxidation Flow Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Jose-Luis [Univ. of Colorado, Boulder, CO (United States)

    2016-02-01

    This grant was originally funded for the deployment of a suite of aerosol instrumentation by our group, in collaboration with other research groups and DOE/ARM, to the Ganges Valley in India (GVAX) to study aerosol sources and processing. Much of the first year of this grant was focused on preparations for GVAX. That campaign was cancelled for political reasons, and in consultation with our program manager the research under this grant was refocused on the application of oxidation flow reactors (OFRs) for investigating secondary organic aerosol (SOA) formation and organic aerosol (OA) processing in the field and laboratory through a series of laboratory and modeling studies. We developed a gas-phase photochemical model of an OFR which was used to 1) explore the sensitivities of key output variables (e.g., OH exposure, O3, HO2/OH) to controlling factors (e.g., water vapor, external reactivity, UV irradiation), 2) develop simplified OH exposure estimation equations, 3) investigate under what conditions non-OH chemistry may be important, and 4) help guide the design of future experiments to avoid conditions with undesired chemistry, for a wide range of conditions applicable to ambient, laboratory, and source studies. Uncertainties in the model were quantified and modeled OH exposure was compared to tracer-decay measurements of OH exposure in the lab and field. Laboratory studies using OFRs were conducted to explore aerosol yields and composition from anthropogenic and biogenic VOCs as well as crude oil evaporates. Various aspects of the modeling and laboratory results and tools were applied to the interpretation of ambient and source measurements using OFRs. Additionally, novel measurement methods were used to study gas/particle partitioning. The research conducted was highly successful, and details of the key results are summarized in this report through narrative text, figures, and a complete list of publications acknowledging this grant.
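
    The simplified OH exposure estimation mentioned above commonly starts from the decay of a tracer with a known OH rate constant; a minimal sketch of that relation is given below. The rate constant, the tracer concentrations, and the assumed ambient OH level are illustrative numbers, not values from this project.

```python
import math

# OH exposure inferred from the decay of a tracer with a known OH rate constant
# (generic first-order relation; numbers below are illustrative, not campaign data).
k_oh = 9.0e-13           # cm^3 molecule^-1 s^-1, approximate OH + SO2 rate constant (assumed)
c0, c_final = 10.0, 4.0  # tracer mixing ratios (ppb) before and after the reactor (assumed)

oh_exposure = -math.log(c_final / c0) / k_oh   # integrated [OH] dt, molecules cm^-3 s
# Equivalent atmospheric aging at an assumed mean ambient [OH] of 1.5e6 molecules cm^-3:
days = oh_exposure / (1.5e6 * 86400)
print(f"OH exposure: {oh_exposure:.2e} molecules cm^-3 s  (~{days:.1f} equivalent days)")
```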

  16. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Science.gov (United States)

    Service-oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...

  17. Using snowflake surface-area-to-volume ratio to model and interpret snowfall triple-frequency radar signatures

    Science.gov (United States)

    Gergely, Mathias; Cooper, Steven J.; Garrett, Timothy J.

    2017-10-01

    The snowflake microstructure determines the microwave scattering properties of individual snowflakes and has a strong impact on snowfall radar signatures. In this study, individual snowflakes are re