WorldWideScience

Sample records for models enable interpretation

  1. Interpretive Structural Modeling Of Implementation Enablers For Just In Time In ICPI

    Directory of Open Access Journals (Sweden)

    Nitin Upadhye

    2014-12-01

Full Text Available Indian Corrugated Packaging Industries (ICPI) face tough competition in terms of product cost, quality, delivery, flexibility and, ultimately, customer demand. As their customers, mostly OEMs, are asking for Just in Time (JIT) deliveries, ICPI must implement JIT in their systems. The term "JIT" denotes a system that uses fewer inputs to create the same outputs as a traditional mass production system, while offering increased variety to the end customer (Womack et al. 1990). JIT focuses on abolishing or reducing muda ("muda" is the Japanese word for waste) and on maximizing, or fully utilizing, activities that add value from the customer's perspective. There is a lack of awareness in identifying the right enablers of JIT implementation. This study therefore identifies enablers from the literature and from expert opinion in corrugated packaging industries, and develops a relationship matrix to examine the driving power and dependence between them. Modeling is carried out to establish the interrelationships between the enablers using Interpretive Structural Modeling (ISM) and Cross Impact Matrix Multiplication Applied to Classification (MICMAC) analysis for the performance of Indian corrugated packaging industries.
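
The core ISM/MICMAC computation this abstract refers to can be sketched in a few lines: build a binary influence matrix, take its transitive closure to obtain the reachability matrix, then read driving power off the rows and dependence off the columns. A minimal pure-Python sketch follows; the enabler names and matrix entries are illustrative assumptions, not values from the study.

```python
# Hypothetical JIT enablers (illustrative only, not from the study).
enablers = ["Top management commitment", "Supplier reliability",
            "Employee training", "Process standardisation"]

# Binary adjacency matrix from the structural self-interaction matrix:
# A[i][j] = 1 means enabler i influences enabler j (diagonal = 1).
A = [[1, 1, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]

# Transitive closure (Warshall's algorithm) gives the final reachability matrix.
n = len(enablers)
R = [row[:] for row in A]
for k in range(n):
    for i in range(n):
        for j in range(n):
            R[i][j] = R[i][j] or (R[i][k] and R[k][j])

# MICMAC classification inputs: driving power = row sum, dependence = column sum.
driving = [sum(row) for row in R]
dependence = [sum(R[i][j] for i in range(n)) for j in range(n)]
for name, drv, dep in zip(enablers, driving, dependence):
    print(f"{name}: driving power={drv}, dependence={dep}")
```

Enablers with high driving power and low dependence would sit at the bottom of the ISM hierarchy as root causes; high-dependence enablers sit at the top.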

  2. Formal modelling of cognitive interpretation

    OpenAIRE

Rukšenas, R.; Curzon, P.; Back, J.; Blandford, A.

    2007-01-01

    We formally specify the interpretation stage in a dual state space human-computer interaction cycle. This is done by extending/reorganising our previous cognitive architecture. In particular, we focus on shape related aspects of the interpretation process associated with device input prompts. A cash-point example illustrates our approach. Using the SAL model checking environment, we show how the extended cognitive architecture facilitates detection of prompt-shape induced human error. © Sprin...

  3. Interpretive and Formal Models of Discourse Processing.

    Science.gov (United States)

    Bulcock, Jeffrey W.; Beebe, Mona J.

    Distinguishing between interpretive and formal models of discourse processing and between qualitative and quantitative research, this paper argues that formal models are the analogues of interpretive models, and that the two are complementary. It observes that interpretive models of reading are being increasingly derived from qualitative research…

  4. Modeling and interpretation of images*

    Directory of Open Access Journals (Sweden)

    Min Michiel

    2015-01-01

Full Text Available Imaging protoplanetary disks is a challenging but rewarding task. It is challenging because of the glare of the central star outshining the weak signal from the disk at shorter wavelengths, and because of the limited spatial resolution at longer wavelengths. It is rewarding because the images contain a wealth of information on the structure of the disks and can (directly) probe features like gaps and spiral structure. Because it is so challenging, telescopes are often pushed to their limits to get a signal. Proper interpretation of these images therefore requires intimate knowledge of the instrumentation, the detection method, and the image processing steps. In this chapter I will give some examples and stress some issues that are important when interpreting images of protoplanetary disks.

  5. Realising the Uncertainty Enabled Model Web

    Science.gov (United States)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address

  6. Chain graph models and their causal interpretations

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Richardson, Thomas S.

    2002-01-01

Chain graphs are a natural generalization of directed acyclic graphs and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional independence hypotheses that they represent. There are many simple and apparently plausible, but ultimately fallacious, interpretations of chain graphs that are often invoked, implicitly or explicitly. These interpretations also lead to flawed methods for applying background knowledge to model selection. We present a valid interpretation by showing how the distribution corresponding to a chain graph may be generated from the equilibrium distributions of dynamic models with feed-back. These dynamic interpretations lead to a simple theory of intervention, extending the theory developed for directed acyclic graphs. Finally, we contrast chain graph models under this interpretation with simultaneous equation models which have...

  7. Aggregate driver model to enable predictable behaviour

    Science.gov (United States)

    Chowdhury, A.; Chakravarty, T.; Banerjee, T.; Balamuralidhar, P.

    2015-09-01

The categorization of driving styles, particularly in terms of aggressiveness and skill, is an emerging area of interest under the broader theme of intelligent transportation. Two discriminatory techniques can be applied for such categorization: a micro-scale (event based) model and a macro-scale (aggregate) model. It is believed that an aggregate model will reveal many interesting aspects of human-machine interaction; for example, we may be able to understand the propensity of individuals to carry out a given task over longer periods of time. A useful driver model may include the adaptive capability of the human driver, aggregated as the individual propensity to control speed/acceleration. Towards that objective, we carried out experiments by deploying a smartphone-based application for data collection by a group of drivers. Data were primarily collected from GPS measurements, including position and speed on a second-by-second basis, for a number of trips over a two-month period. Analysing the data set, aggregate models for individual drivers were created and their natural aggressiveness was deduced. In this paper, we present the initial results for 12 drivers. It is shown that the higher-order moments of the acceleration profile are an important parameter and identifier of journey quality. It is also observed that the kurtosis of the acceleration profile carries major information about driving style. This observation leads to two different ranking systems based on acceleration data. Such driving behaviour models can be integrated with vehicle and road models and used to generate behavioural models for real traffic scenarios.
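
The kurtosis-based ranking idea above can be illustrated with a short stdlib-only sketch: differentiate second-by-second speed samples to get an acceleration profile, then compute its Pearson kurtosis. The function and the sample speed traces are illustrative assumptions, not the authors' code or data.

```python
import statistics

def acceleration_kurtosis(speeds_mps):
    """Pearson kurtosis of the acceleration profile.

    speeds_mps: speed samples in m/s taken one second apart, so the
    first difference approximates acceleration in m/s^2.
    """
    acc = [b - a for a, b in zip(speeds_mps, speeds_mps[1:])]
    mean = statistics.fmean(acc)
    var = statistics.fmean((x - mean) ** 2 for x in acc)
    if var == 0:
        return float("nan")  # constant acceleration: kurtosis undefined
    m4 = statistics.fmean((x - mean) ** 4 for x in acc)
    return m4 / var ** 2

# A smooth driver vs. one with occasional harsh acceleration and braking:
smooth = [10, 10.5, 11, 11.4, 11.9, 12.3, 12.8]
harsh = [10, 10.2, 14.5, 14.6, 9.0, 9.1, 9.2]
print(acceleration_kurtosis(smooth), acceleration_kurtosis(harsh))
```

The heavy-tailed (harsh) profile yields a larger kurtosis, which is what makes the statistic a candidate identifier of aggressive driving style.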

  8. Green communication: The enabler to multiple business models

    DEFF Research Database (Denmark)

    Lindgren, Peter; Clemmensen, Suberia; Taran, Yariv

    2010-01-01

Companies stand at the forefront of a new business model reality with new potentials that will radically change their basic understanding and practice of running their business models. One of the drivers of this change is green communication, with its strong relation to green business models and its potential to enable lower energy consumption. This paper shows how green communication enables innovation of green business models, and multiple business models running simultaneously in different markets for different customers.

  9. Modeling-Enabled Systems Nutritional Immunology

    Directory of Open Access Journals (Sweden)

Meghna Verma

    2016-02-01

    Full Text Available This review highlights the fundamental role of nutrition in the maintenance of health, the immune response and disease prevention. Emerging global mechanistic insights in the field of nutritional immunology cannot be gained through reductionist methods alone or by analyzing a single nutrient at a time. We propose to investigate nutritional immunology as a massively interacting system of interconnected multistage and multiscale networks that encompass hidden mechanisms by which nutrition, microbiome, metabolism, genetic predisposition and the immune system interact to delineate health and disease. The review sets an unconventional path to applying complex science methodologies to nutritional immunology research, discovery and development through ‘use cases’ centered around the impact of nutrition on the gut microbiome and immune responses. Our systems nutritional immunology analyses, that include modeling and informatics methodologies in combination with pre-clinical and clinical studies, have the potential to discover emerging systems-wide properties at the interface of the immune system, nutrition, microbiome, and metabolism.

  10. Interpretation of test data with dynamic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Biba, P. [Southern California Edison, San Clemente, CA (United States). San Onofre Nuclear Generating Station

    1999-11-01

The in-service testing of many important-to-safety components, such as valves, pumps, etc., is often performed while the plant is either shut down or the particular system is in a test mode. Thus the test conditions may differ from the actual operating conditions under which the components would be required to operate. In addition, the components must function under various postulated accident scenarios, which cannot be duplicated during normal plant operation. This paper deals with a method of interpreting the test data with a dynamic model, which allows the evaluation of the many factors affecting system performance, in order to assure component and system operability.

  11. Detailed Modeling and Response of Demand Response Enabled Appliances

    Energy Technology Data Exchange (ETDEWEB)

    Vyakaranam, Bharat; Fuller, Jason C.

    2014-04-14

Proper modeling of end-use loads is very important in order to predict their behavior and how they interact with the power system, including voltage and temperature dependencies, power system and load control functions, and the complex interactions that occur between devices in such an interconnected system. This paper develops multi-state, time-variant residential appliance models with demand response (DR) enabled capabilities in the GridLAB-D simulation environment. These models represent not only the baseline instantaneous power demand and energy consumption, but also the control systems developed by GE Appliances to enable response to demand response signals, and the change in behavior of the appliance in response to the signal. These DR-enabled appliances are simulated to estimate their capability to reduce peak demand and energy consumption.

  12. ORGANIZING SCENARIO VARIABLES BY APPLYING THE INTERPRETATIVE STRUCTURAL MODELING (ISM

    Directory of Open Access Journals (Sweden)

    Daniel Estima de Carvalho

    2009-10-01

Full Text Available The scenario building method is a mode of thought, applied in an optimized, strategic manner, based on trends and uncertain events concerning a large variety of potential results that may impact the future of an organization. In this study, the objective is to contribute towards a possible improvement in Godet's and Schoemaker's scenario preparation methods by employing Interpretative Structural Modeling (ISM) as a tool for the analysis of variables. Given that this is an exploratory theme, bibliographical research with tool definition and analysis, extraction of examples from the literature, and a comparison exercise of the referred methods were undertaken. It was verified that ISM may substitute or complement the original tools for the analysis of scenario variables in Godet's and Schoemaker's methods, given that it enables an in-depth analysis of the relations between variables in a shorter period of time, facilitating both the structuring and the construction of possible scenarios. Key-words: Strategy. Future studies. Interpretative Structural Modeling.

  13. Interpretations

    Science.gov (United States)

    Bellac, Michel Le

    2014-11-01

    Although nobody can question the practical efficiency of quantum mechanics, there remains the serious question of its interpretation. As Valerio Scarani puts it, "We do not feel at ease with the indistinguishability principle (that is, the superposition principle) and some of its consequences." Indeed, this principle which pervades the quantum world is in stark contradiction with our everyday experience. From the very beginning of quantum mechanics, a number of physicists--but not the majority of them!--have asked the question of its "interpretation". One may simply deny that there is a problem: according to proponents of the minimalist interpretation, quantum mechanics is self-sufficient and needs no interpretation. The point of view held by a majority of physicists, that of the Copenhagen interpretation, will be examined in Section 10.1. The crux of the problem lies in the status of the state vector introduced in the preceding chapter to describe a quantum system, which is no more than a symbolic representation for the Copenhagen school of thought. Conversely, one may try to attribute some "external reality" to this state vector, that is, a correspondence between the mathematical description and the physical reality. In this latter case, it is the measurement problem which is brought to the fore. In 1932, von Neumann was first to propose a global approach, in an attempt to build a purely quantum theory of measurement examined in Section 10.2. This theory still underlies modern approaches, among them those grounded on decoherence theory, or on the macroscopic character of the measuring apparatus: see Section 10.3. Finally, there are non-standard interpretations such as Everett's many worlds theory or the hidden variables theory of de Broglie and Bohm (Section 10.4). Note, however, that this variety of interpretations has no bearing whatsoever on the practical use of quantum mechanics. There is no controversy on the way we should use quantum mechanics!

  14. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke.

    Science.gov (United States)

    Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C

    2012-06-01

    To enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Cohort study. Rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left hemisphere lesion, and 269 [43%] right hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left hemisphere lesion, and 93 [39%] right hemisphere lesion) undergoing rehabilitation after stroke. Not applicable. Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. The RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated. The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
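
Start-and-stop rules of the kind established above exploit the hierarchical (easy-to-hard) item ordering: items easier than the start point are credited without testing, and testing stops at the first failure. A hypothetical sketch of such scoring follows; the item ordering, start index, and rules are illustrative, not the published RMI rules.

```python
def score_with_rules(ask, items_easy_to_hard, start_index):
    """Score a dichotomous hierarchical scale using start-and-stop rules.

    ask(item) returns True/False for one item. Items before the start
    index are credited without asking; testing stops at the first failed
    item, and the remaining harder items are scored as failed.
    """
    total = start_index  # items easier than the start point are credited
    asked = 0
    for i in range(start_index, len(items_easy_to_hard)):
        asked += 1
        if ask(items_easy_to_hard[i]):
            total += 1
        else:
            break  # stop rule: harder items assumed failed
    return total, asked

# Simulated patient who can perform the first 8 of 15 ordered items:
can_do = lambda item: item < 8
total, asked = score_with_rules(can_do, list(range(15)), start_index=4)
print(total, asked)  # full score of 8 recovered from asking only 5 items
```

Under a well-fitting double monotonicity model the shortcut score rarely disagrees with the full administration, which is the adequacy the authors checked (3% and 5% discrepancy rates).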

  15. Plasma Modeling Enabled Technology Development Empowered by Fundamental Scattering Data

    Science.gov (United States)

    Kushner, Mark J.

    2016-05-01

    Technology development increasingly relies on modeling to speed the innovation cycle. This is particularly true for systems using low temperature plasmas (LTPs) and their role in enabling energy efficient processes with minimal environmental impact. In the innovation cycle, LTP modeling supports investigation of fundamental processes that seed the cycle, optimization of newly developed technologies, and prediction of performance of unbuilt systems for new applications. Although proof-of-principle modeling may be performed for idealized systems in simple gases, technology development must address physically complex systems that use complex gas mixtures that now may be multi-phase (e.g., in contact with liquids). The variety of fundamental electron and ion scattering, and radiation transport data (FSRD) required for this modeling increases as the innovation cycle progresses, while the accuracy required of that data depends on the intended outcome. In all cases, the fidelity, depth and impact of the modeling depends on the availability of FSRD. Modeling and technology development are, in fact, empowered by the availability and robustness of FSRD. In this talk, examples of the impact of and requirements for FSRD in the innovation cycle enabled by plasma modeling will be discussed using results from multidimensional and global models. Examples of fundamental studies and technology optimization will focus on microelectronics fabrication and on optically pumped lasers. Modeling of systems as yet unbuilt will address the interaction of atmospheric pressure plasmas with liquids. Work supported by DOE Office of Fusion Energy Science and the National Science Foundation.

  16. A Modeling Perspective on Interpreting Rates of Change in Context

    Science.gov (United States)

    Ärlebäck, Jonas B.; Doerr, Helen M.; O'Neil, AnnMarie H.

    2013-01-01

    Functions provide powerful tools for describing change, but research has shown that students find difficulty in using functions to create and interpret models of changing phenomena. In this study, we drew on a models and modeling perspective to design an instructional approach to develop students' abilities to describe and interpret rates of…

  17. Conversations about Art: A Disruptive Model of Interpretation.

    Science.gov (United States)

    Gooding-Brown, Jane

    This paper describes a disruptive model of interpretation which explores positions in discursive practices embedded in visual culture as a means of understanding self and difference. The model understands interpretation as a Foucauldian technique of the self, and its use may give art teachers and students strategies for understanding the social…

  18. Introducing the Leadership in Enabling Occupation (LEO) model.

    Science.gov (United States)

    Townsend, Elizabeth A; Polatajko, Helene J; Craik, Janet M; von Zweck, Claudia M

    2011-10-01

    Occupational therapy is a broad profession yet access to services remains restricted and uneven across Canada. Access to the potential breadth of occupational therapy is severely restrained by complex supply, retention, and funding challenges. To improve access to occupational therapy, widespread leadership is needed by all practitioners. This brief report introduces the Leadership in Enabling Occupation (LEO) Model, which displays the inter-relationship of four elements of everyday leadership as described in "Positioning Occupational Therapy for Leadership," Section IV, of Enabling Occupation II: Advancing a Vision of Health, Well-being and Justice through Occupation (Townsend & Polatajko, 2007). All occupational therapists have the power to develop leadership capacity within and beyond designated leadership positions. LEO is a leadership tool to extend all occupational therapists' strategic use of scholarship, new accountability approaches, existing and new funding, and workforce planning to improve access to occupational therapy.

  19. Injecting Abstract Interpretations into Linear Cost Models

    Directory of Open Access Journals (Sweden)

    David Cachera

    2010-06-01

    Full Text Available We present a semantics based framework for analysing the quantitative behaviour of programs with regard to resource usage. We start from an operational semantics equipped with costs. The dioid structure of the set of costs allows for defining the quantitative semantics as a linear operator. We then present an abstraction technique inspired from abstract interpretation in order to effectively compute global cost information from the program. Abstraction has to take two distinct notions of order into account: the order on costs and the order on states. We show that our abstraction technique provides a correct approximation of the concrete cost computations.
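
The dioid structure the abstract mentions can be made concrete with the (min, +) tropical dioid: with min as "addition" and + as "multiplication", matrix multiplication of a transition-cost matrix computes cheapest multi-step costs, so the quantitative semantics acts as a linear operator. This is a generic tropical-algebra sketch, not the paper's formalism.

```python
INF = float("inf")  # dioid zero for (min, +): no transition

def minplus_matmul(A, B):
    """Matrix product in the (min, +) dioid."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Cost of one program step between abstract states 0..2 (hypothetical values).
C = [[INF, 2, INF],
     [INF, INF, 3],
     [1, INF, INF]]

# C "squared" in (min, +): cheapest cost of exactly two steps.
C2 = minplus_matmul(C, C)
print(C2[0][2])  # two steps 0 -> 1 -> 2 cost 2 + 3 = 5
```

Iterating the product (a Kleene-star style fixpoint) yields global cost information for arbitrarily long executions, which is what the abstraction in the paper approximates.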

  20. The importance of structural model availability on seismic interpretation

    Science.gov (United States)

    Alcalde, Juan; Bond, Clare E.; Johnson, Gareth; Butler, Robert W. H.; Cooper, Mark A.; Ellis, Jennifer F.

    2017-04-01

    Interpretation of faults in seismic images is central to the creation of geological models of the subsurface. The use of prior knowledge acquired through learning allows interpreters to move from singular observations to reasoned interpretations based on the conceptual models available to them. The amount and variety of fault examples available in textbooks, articles and training exercises is therefore likely to be a determinant factor in the interpreters' ability to interpret realistic fault geometries in seismic data. We analysed the differences in fault type and geometry interpreted in seismic data by students before and after completing a masters module in structural geology, and compared them to the characteristics of faults represented in the module and textbooks. We propose that the observed over-representation of normal-planar faults in early teaching materials influences the interpretation of data, making this fault type and geometry dominant in the pre-module interpretations. However, when the students were exposed to a greater range in fault models in the module, the range of fault type and geometry increased. This work explores the role of model availability in interpretation and advocates for the use of realistic fault models in training materials.

  1. Interpretation models and charts of production profiles in horizontal wells

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

Stratified flow is common due to gravity segregation, and flow regimes are very complex because of borehole inclination; therefore, conventional production logging tools cannot be effectively applied in horizontal wells, which significantly increases the difficulty of log interpretation. In this paper, firstly, overseas progress in updated integration tools for horizontal wells and production profile interpretation methods is briefly discussed. Secondly, by means of theoretical study and experimental simulations, we have obtained a production profile interpretation model and experimental interpretation charts, which have been calibrated using improved downhole technology and optimization methods. Finally, we have interpreted well X with the production profile interpretation software we designed, which shows that the methods are useful for production profile interpretation in horizontal wells.

  2. BIM-enabled Conceptual Modelling and Representation of Building Circulation

    Directory of Open Access Journals (Sweden)

    Jin Kook Lee

    2014-08-01

Full Text Available This paper describes how a building information modelling (BIM)-based approach to building circulation enables us to change the process of building design in terms of its computational representation and processes, focusing on the conceptual modelling and representation of circulation within buildings. BIM is supported by several BIM authoring tools, in particular through the widely known interoperable industry foundation classes (IFCs), which follow an object-oriented data modelling methodology. Advances in BIM authoring tools, using space objects and their relations defined in the IFC schema, have made it possible to model, visualize and analyse circulation within buildings prior to their construction. Agent-based circulation has long been an interdisciplinary topic of research across several areas, including design computing, computer science, architectural morphology, human behaviour and environmental psychology. Such conventional approaches to building circulation are centred on navigational knowledge about built environments, and represent specific circulation paths and regulations. This paper, however, places emphasis on the use of ‘space objects’ in BIM-enabled design processes rather than on circulation agents, the latter of which are not defined in the IFC schemas. By introducing and reviewing some associated research and projects, this paper also surveys how such a circulation representation is applicable to the analysis of building circulation-related rules.

  3. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and orthogonally dual-polarized antennas (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  4. Interpretive and Critical Phenomenological Crime Studies: A Model Design

    Science.gov (United States)

    Miner-Romanoff, Karen

    2012-01-01

    The critical and interpretive phenomenological approach is underutilized in the study of crime. This commentary describes this approach, guided by the question, "Why are interpretive phenomenological methods appropriate for qualitative research in criminology?" Therefore, the purpose of this paper is to describe a model of the interpretive…

  5. Space Partitioning for Privacy Enabled 3D City Models

    Science.gov (United States)

    Filippovska, Y.; Wichmann, A.; Kada, M.

    2016-10-01

Due to recent technological progress, the capturing and processing of highly detailed 3D data has become extensive. Despite all the prospects of potential uses, data that includes personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security. This becomes especially critical if the data is visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can be very different for each application. As privacy is a complex and versatile topic, the focus of this work lies particularly on the visualization of 3D urban data sets. For the purpose of privacy enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery is anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines are smoothed, and transition zones are established between privacy regions to achieve a harmonious visual appearance. It is demonstrated by example how the proposed method generates privacy enabled 3D city models.

  6. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  7. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
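
    The mechanics behind interpreting such a model can be sketched in a few lines. The snippet below is an illustrative reconstruction, not code from the article: coefficients and outcome names are invented. It shows how an MLM turns a covariate value into outcome probabilities via the softmax over outcome utilities (with a baseline outcome fixed at zero), and how a marginal effect can be approximated numerically.

```python
import math

def mnl_probabilities(x, coefs):
    """Predicted outcome probabilities for a multinomial logit.

    coefs maps each non-baseline outcome to (intercept, slope) for a
    single covariate x; the baseline outcome's utility is fixed at 0.
    Coefficient values here are illustrative, not from the article.
    """
    utilities = {"baseline": 0.0}
    for outcome, (b0, b1) in coefs.items():
        utilities[outcome] = b0 + b1 * x
    denom = sum(math.exp(u) for u in utilities.values())
    return {k: math.exp(u) / denom for k, u in utilities.items()}

coefs = {"choice_A": (0.5, 0.8), "choice_B": (-0.2, 0.3)}
probs = mnl_probabilities(1.0, coefs)

# Finite-difference approximation of the marginal effect of x on P(choice_A):
# raw coefficients are hard to read directly, but this quantity is not.
eps = 1e-6
me_a = (mnl_probabilities(1.0 + eps, coefs)["choice_A"] - probs["choice_A"]) / eps
```

Note that the sign of a raw MLM coefficient need not match the sign of the marginal effect on that outcome's probability, which is one reason the article recommends reporting effects on the probability scale.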

  8. Model-Based Integration and Interpretation of Data

    DEFF Research Database (Denmark)

    Petersen, Johannes

    2004-01-01

    Data integration and interpretation plays a crucial role in supervisory control. The paper defines a set of generic inference steps for the data integration and interpretation process based on a three-layer model of system representations. The three-layer model is used to clarify the combination...... of constraint and object-centered representations of the work domain throwing new light on the basic principles underlying the data integration and interpretation process of Rasmussen's abstraction hierarchy as well as other model-based approaches combining constraint and object-centered representations. Based...

  9. Implementation of a capsular bag model to enable sufficient lens stabilization within a mechanical eye model

    Science.gov (United States)

    Bayer, Natascha; Rank, Elisabet; Traxler, Lukas; Beckert, Erik; Drauschke, Andreas

    2015-03-01

    Cataract still remains the leading cause of blindness, affecting 20 million people worldwide. To restore the patient's vision the natural lens is removed and replaced by an intraocular lens (IOL). In modern cataract surgery the posterior capsular bag is maintained to prevent inflammation and to enable stabilization of the implant. Refractive changes following cataract surgery are attributable to lens misalignments occurring due to postoperative shifts and tilts of the artificial lens. Mechanical eye models allow a preoperative investigation of the impact of such misalignments and are crucial to improve the quality of the patients' sense of sight. Furthermore, the success of sophisticated IOLs that correct high-order aberrations depends on a critical evaluation of the lens position. A new type of IOL holder is designed and implemented into a preexisting mechanical eye model. A physiological representation of the capsular bag is realized with an integrated film element to guarantee lens stabilization and centering. The positioning sensitivity of the IOL is evaluated by performing shifts and tilts in reference to the optical axis. The modulation transfer function is used to measure the optical quality at each position. Lens stability tests within the holder itself are performed by determining the modulation transfer function before and after the measurement sequence. Mechanical stability and reproducible measurement results are guaranteed with the novel capsular bag model, which allows a precise interpretation of postoperative lens misalignments. The integrated film element offers additional stabilization during the measurement routine without damaging the haptics or deteriorating the optical performance.

  10. Item Hierarchy-Based Analysis of the Rivermead Mobility Index Resulted in Improved Interpretation and Enabled Faster Scoring in Patients Undergoing Rehabilitation After Stroke

    NARCIS (Netherlands)

    Roorda, Leo D.; Green, John R.; Houwink, Annemieke; Bagley, Pam J.; Smith, Jane; Molenaar, Ivo W.; Geurts, Alexander C.

    2012-01-01

    Roorda LD, Green JR, Houwink A, Bagley PJ, Smith J, Molenaar IW, Geurts AC. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke. Arch Phys Med Rehabil 2012;93:1091-6. Object

  11. Superconnections: an interpretation of the standard model

    Directory of Open Access Journals (Sweden)

    Gert Roepstorff

    2000-07-01

    Full Text Available The mathematical framework of superbundles as pioneered by D. Quillen suggests that one consider the Higgs field as a natural constituent of a superconnection. I propose to take as superbundle the exterior algebra obtained from a Hermitian vector bundle of rank n where n=2 for the electroweak theory and n=5 for the full Standard Model. The present setup is similar to but avoids the use of non-commutative geometry.

  12. Modelling and Interpretation of Adsorption Isotherms

    Directory of Open Access Journals (Sweden)

    Nimibofa Ayawei

    2017-01-01

    Full Text Available The need to design low-cost adsorbents for the detoxification of industrial effluents has been a growing concern for most environmental researchers. So modelling of experimental data from adsorption processes is a very important means of predicting the mechanisms of various adsorption systems. Therefore, this paper presents an overall review of the applications of adsorption isotherms, the use of linear regression analysis, nonlinear regression analysis, and error functions for optimum adsorption data analysis.
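
    As a concrete illustration of the linear-regression route this review surveys, the sketch below fits the Langmuir isotherm via its common linearised form; the equilibrium data and parameter values are synthetic, invented purely for illustration.

```python
# Linear-regression fit of the Langmuir isotherm (synthetic data, not from
# the article): q_e = q_m * K_L * C_e / (1 + K_L * C_e).
# The linearised form C_e/q_e = C_e/q_m + 1/(q_m*K_L) is a straight line,
# so ordinary least squares recovers q_m and K_L from its slope/intercept.

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic equilibrium data generated with q_m = 2.0, K_L = 0.5
ce = [0.5, 1.0, 2.0, 4.0, 8.0]
qe = [2.0 * 0.5 * c / (1 + 0.5 * c) for c in ce]

slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
q_m = 1 / slope            # monolayer capacity, should recover ~2.0
k_l = slope / intercept    # Langmuir constant, should recover ~0.5
```

With noisy real data, the review's point applies: the choice between linearised and nonlinear regression, and the error function used, can change the recovered parameters, so the fit should be checked with more than one error metric.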

  13. Conceptual design interpretations, mindset and models

    CERN Document Server

    Andreasen, Mogens Myrup; Cash, Philip

    2015-01-01

    Maximising reader insights into the theory, models, methods and fundamental reasoning of design, this book addresses design activities in industrial settings, as well as the actors involved. This approach offers readers a new understanding of design activities and related functions, properties and dispositions. Presenting a ‘design mindset’ that seeks to empower students, researchers, and practitioners alike, it features a strong focus on how designers create new concepts to be developed into products, and how they generate new business and satisfy human needs.   Employing a multi-faceted perspective, the book supplies the reader with a comprehensive worldview of design in the form of a proposed model that will empower their activities as student, researcher or practitioner. We draw the reader into the core role of design conceptualisation for society, for the development of industry, for users and buyers of products, and for citizens in relation to public systems. The book also features original con...

  14. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    Energy Technology Data Exchange (ETDEWEB)

    Heinz Pitsch

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  15. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    Energy Technology Data Exchange (ETDEWEB)

    Pitsch, Heinz

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the frame of the present project, these techniques are applied, assessed, and improved for hydrogen enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation; a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustor with high hydrogen content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  16. A forward modeling approach for interpreting impeller flow logs.

    Science.gov (United States)

    Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T

    2010-01-01

    A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
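
    The model-selection step can be illustrated with a minimal sketch. The residual sums of squares and parameter counts below are hypothetical, not values from the paper's Chalk-aquifer logs; the sketch only shows how Akaike's Information Criterion penalises extra inflow zones unless they buy a sufficiently better fit.

```python
import math

def aic(rss, n, k):
    """Akaike's Information Criterion for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k, trading goodness of fit against the
    number of fitted parameters k. Lower is better."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical candidate flow-log models with increasing numbers of
# inflow zones: (residual sum of squares, number of parameters).
candidates = {
    "1 zone":  (12.0, 2),
    "2 zones": (4.0, 4),   # large improvement in fit
    "3 zones": (3.8, 6),   # marginal improvement, more parameters
}
n = 50  # number of depth samples in the flow log

scores = {name: aic(rss, n, k) for name, (rss, k) in candidates.items()}
best = min(scores, key=scores.get)  # "2 zones" wins: the third zone
                                    # does not justify its extra parameters
```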

  17. Modeling Change Over Time: Conceptualization, Measurement, Analysis, and Interpretation

    Science.gov (United States)

    2009-11-12

    [Report documentation page; only fragments are recoverable.] Reporting period ending 29-11-2008. Title: Modeling Change Over Time: Conceptualization, Measurement, Analysis, and Interpretation. The record references the Multilevel Modeling Portal (www.ats.ucla.edu/stat/mlm/) and the Web site of the Center for Multilevel Modeling (http://multilevel.ioe.ac.uk/index.html).

  18. Interpreting metabolomic profiles using unbiased pathway models.

    Directory of Open Access Journals (Sweden)

    Rahul C Deo

    2010-02-01

    Full Text Available Human disease is heterogeneous, with similar disease phenotypes resulting from distinct combinations of genetic and environmental factors. Small-molecule profiling can address disease heterogeneity by evaluating the underlying biologic state of individuals through non-invasive interrogation of plasma metabolite levels. We analyzed metabolite profiles from an oral glucose tolerance test (OGTT) in 50 individuals, 25 with normal (NGT) and 25 with impaired glucose tolerance (IGT). Our focus was to elucidate underlying biologic processes. Although we initially found little overlap between changed metabolites and preconceived definitions of metabolic pathways, the use of unbiased network approaches identified significant concerted changes. Specifically, we derived a metabolic network with edges drawn between reactant and product nodes in individual reactions and between all substrates of individual enzymes and transporters. We searched for "active modules"--regions of the metabolic network enriched for changes in metabolite levels. Active modules identified relationships among changed metabolites and highlighted the importance of specific solute carriers in metabolite profiles. Furthermore, hierarchical clustering and principal component analysis demonstrated that changed metabolites in the OGTT naturally grouped according to the activities of the System A and L amino acid transporters, the osmolyte carrier SLC6A12, and the mitochondrial aspartate-glutamate transporter SLC25A13. Comparison between NGT and IGT groups supported blunted glucose- and/or insulin-stimulated activities in the IGT group. Using unbiased pathway models, we offer evidence supporting the important role of solute carriers in the physiologic response to glucose challenge and conclude that carrier activities are reflected in individual metabolite profiles of perturbation experiments. Given the involvement of transporters in human disease, metabolite profiling may contribute to improved

  19. Toward Holistic Scene Understanding: Feedback Enabled Cascaded Classification Models.

    Science.gov (United States)

    Li, Congcong; Kowdle, Adarsh; Saxena, Ashutosh; Chen, Tsuhan

    2012-07-01

    Scene understanding includes many related subtasks, such as scene categorization, depth estimation, object detection, etc. Each of these subtasks is often notoriously hard, and state-of-the-art classifiers already exist for many of them. These classifiers operate on the same raw image and provide correlated outputs. It is desirable to have an algorithm that can capture such correlation without requiring any changes to the inner workings of any classifier. We propose Feedback Enabled Cascaded Classification Models (FE-CCM), that jointly optimizes all the subtasks while requiring only a "black box" interface to the original classifier for each subtask. We use a two-layer cascade of classifiers, which are repeated instantiations of the original ones, with the output of the first layer fed into the second layer as input. Our training method involves a feedback step that allows later classifiers to provide earlier classifiers information about which error modes to focus on. We show that our method significantly improves performance in all the subtasks in the domain of scene understanding, where we consider depth estimation, scene categorization, event categorization, object detection, geometric labeling, and saliency detection. Our method also improves performance in two robotic applications: an object-grasping robot and an object-finding robot.

  20. Authentication Model Based Bluetooth-enabled Mobile Phone

    Directory of Open Access Journals (Sweden)

    Rania Abdelhameed

    2005-01-01

    Full Text Available Authentication is a mechanism to establish proof of identity; the authentication process ensures who a particular user is. Current PC and laptop user authentication is typically performed once and holds until it is explicitly revoked by the user, or the user is asked to frequently re-establish his identity, which encourages him to disable authentication. Zero-Interaction Authentication (ZIA) provides a solution to this problem. In ZIA, a user wears a small authentication token that communicates with a laptop over a short-range wireless link. ZIA combines authentication with file encryption. Here we propose Laptop-user Authentication Based on Mobile phone (LABM). In our model of authentication, a user uses his Bluetooth-enabled mobile phone, which works as an authentication token that provides authentication for the laptop over a Bluetooth wireless link, in the spirit of transient authentication, without combining it with an encrypting file system. The user authenticates to the mobile phone infrequently. In turn, the mobile phone continuously authenticates to the laptop by means of the short-range wireless link.
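
    The transient-authentication idea described above can be sketched as a periodic challenge-response between laptop and token. This is an illustrative reconstruction only: the shared-secret provisioning, function names and the HMAC-based scheme are assumptions for the sketch, not details taken from the article.

```python
# Hedged sketch of transient authentication: the laptop periodically
# challenges the token (the Bluetooth-enabled phone), which proves
# possession of a shared secret via an HMAC over the challenge.
import hashlib
import hmac
import os

SHARED_SECRET = os.urandom(32)  # provisioned once during phone-laptop pairing

def phone_respond(challenge: bytes) -> bytes:
    """Runs on the token (mobile phone): MAC the laptop's challenge."""
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def laptop_verify() -> bool:
    """Runs on the laptop once per authentication period."""
    challenge = os.urandom(16)
    response = phone_respond(challenge)  # in reality: sent over Bluetooth
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(response, expected)

authenticated = laptop_verify()  # repeated on a timer; failure locks the laptop
```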

  1. Baroclinic instability in the two-layer model. Interpretations

    Energy Technology Data Exchange (ETDEWEB)

    Egger, Joseph [Meteorological Inst., Univ. of Munich (Germany)

    2009-10-15

    Two new interpretations of the well-known instability criterion of the two-layer model of baroclinic instability are given, whereby a slight generalization of this model is also introduced by admitting an interface on top with a reduced gravity g. It is found that instability sets in when the horizontal potential temperature advection by the barotropic mode becomes more important than the vertical temperature advection due to this mode. The second interpretation is based on potential vorticity (PV) thinking. Instability implies a dominance of the vertical PV coupling coefficient compared to horizontal mean-state PV advection generated at the same level. The interface damps with decreasing g. (orig.)

  2. [How to fit and interpret multilevel models using SPSS].

    Science.gov (United States)

    Pardo, Antonio; Ruiz, Miguel A; San Martín, Rafael

    2007-05-01

    Hierarchical or multilevel models are used to analyse data when cases belong to known groups and sample units are selected both from the individual level and from the group level. In this work, the multilevel models most commonly discussed in the statistical literature are described, explaining how to fit these models using the SPSS program (any version from the 11th onwards) and how to interpret the outcomes of the analysis. Five particular models are described, fitted, and interpreted: (1) one-way analysis of variance with random effects, (2) regression analysis with means-as-outcomes, (3) one-way analysis of covariance with random effects, (4) regression analysis with random coefficients, and (5) regression analysis with means- and slopes-as-outcomes. All models are explained, trying to make them understandable to researchers in health and behaviour sciences.

  3. Help seeking in older Asian people with dementia in Melbourne: using the Cultural Exchange Model to explore barriers and enablers.

    Science.gov (United States)

    Haralambous, Betty; Dow, Briony; Tinney, Jean; Lin, Xiaoping; Blackberry, Irene; Rayner, Victoria; Lee, Sook-Meng; Vrantsidis, Freda; Lautenschlager, Nicola; Logiudice, Dina

    2014-03-01

    The prevalence of dementia is increasing in Australia. Limited research is available on access to Cognitive Dementia and Memory Services (CDAMS) for people with dementia from Culturally and Linguistically Diverse (CALD) communities. This study aimed to determine the barriers and enablers to accessing CDAMS for people with dementia and their families of Chinese and Vietnamese backgrounds. Consultations with community members, community workers and health professionals were conducted using the "Cultural Exchange Model" framework. For carers, barriers to accessing services included the complexity of the health system, lack of time, travel required to get to services, language barriers, interpreters and lack of knowledge of services. Similarly, community workers and health professionals identified language, interpreters, and community perceptions as key barriers to service access. Strategies to increase knowledge included providing information via radio, printed material and education in community group settings. The "Cultural Exchange Model" enabled engagement with and modification of the approaches to meet the needs of the targeted CALD communities.

  4. Interpretation Problems in Modelling Complex Artifacts for Diagnosis

    DEFF Research Database (Denmark)

    Lind, Morten

    1996-01-01

    The paper analyse the interpretation problems involved in building models for diagnosis of industrial systems. It is shown that the construction of a fault tree of a plant is based on general diagnostic knowledge and an extensive body of plant knowledge. It is also shown that the plant knowledge ...

  5. Interpreting Marginal Effects in the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2014-01-01

    This paper presents the challenges when researchers interpret results about relationships between variables from discrete choice models with multiple outcomes. The recommended approach is demonstrated by testing predictions from transaction cost theory on a sample of 246 Scandinavian firms that h...

  6. Continuous media interpretation of supersymmetric Wess-Zumino type models

    Energy Technology Data Exchange (ETDEWEB)

    Letelier, P.S. [Universidade Estadual de Campinas (Brazil). Departamento de Matematica Aplicada; Zanchin, V.T. [Departamento de Fisica-CCNE, Universidade Federal de Santa Maria, 97119, Santa Maria, R.S. (Brazil)

    1995-02-20

    Supersymmetric Wess-Zumino type models are considered as classical material media that can be interpreted as fluids of ordered strings with heat flow along the strings, or a mixture of fluids of ordered strings with either a cloud of particles or a flux of directed radiation. ((orig.))

  7. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  8. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    Science.gov (United States)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present an architecture for coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, uncovering their metadata through BMI functions. Once a BMI-enabled model is exposed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2014), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example is presented of integrating a set of TopoFlow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014
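
    A minimal sketch of the BMI pattern this abstract relies on: a model exposes initialize/update/finalize plus introspection and getter/setter calls, so a coupling framework like EMELI can drive it without knowing its internals. The LinearReservoir component and its physics below are hypothetical; only the shape of the interface follows the CSDMS BMI named above.

```python
# Hypothetical BMI-style component: the coupler sees only the interface,
# never the physics (here a toy storage law dS/dt = inflow - k*S,
# stepped with explicit Euler).

class LinearReservoir:
    def initialize(self, k=0.1, storage=1.0, dt=1.0):
        self._k, self._s, self._dt, self._t = k, storage, dt, 0.0
        self._inflow = 0.0

    # Introspection: this is what lets a framework auto-wire components.
    def get_input_var_names(self):
        return ("inflow",)

    def get_output_var_names(self):
        return ("storage",)

    def set_value(self, name, value):
        self._inflow = value

    def get_value(self, name):
        return self._s

    def update(self):
        self._s += self._dt * (self._inflow - self._k * self._s)
        self._t += self._dt

    def finalize(self):
        pass

# A coupler drives the component purely through the interface:
model = LinearReservoir()
model.initialize()
for _ in range(10):
    model.set_value("inflow", 0.2)  # would come from an upstream service
    model.update()
storage = model.get_value("storage")
model.finalize()
```

Serving such a component over the web then amounts to mapping each BMI call to an endpoint, which is what lets a client initialize, execute and query the model remotely as the abstract describes.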

  9. Interpretation of searches for supersymmetry with simplified models

    Science.gov (United States)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Aguilo, E.; Bergauer, T.; Dragicevic, M.; Erö, J.; Fabjan, C.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Pernicka, M.; Rabady, D.; Rahbaran, B.; Rohringer, C.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Taurok, A.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Bansal, M.; Bansal, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Luyckx, S.; Mucibello, L.; Ochesanu, S.; Roland, B.; Rougny, R.; Selvaggi, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D'Hondt, J.; Gonzalez Suarez, R.; Kalogeropoulos, A.; Maes, M.; Olbrechts, A.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Villella, I.; Clerbaux, B.; De Lentdecker, G.; Dero, V.; Gay, A. P. R.; Hreus, T.; Léonard, A.; Marage, P. E.; Mohammadi, A.; Reis, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Adler, V.; Beernaert, K.; Cimmino, A.; Costantini, S.; Garcia, G.; Grunewald, M.; Klein, B.; Lellouch, J.; Marinov, A.; Mccartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Walsh, S.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Bruno, G.; Castello, R.; Ceard, L.; Delaere, C.; du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Lemaitre, V.; Liao, J.; Militaru, O.; Nuttens, C.; Pagano, D.; Pin, A.; Piotrzkowski, K.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Alves, G. A.; Correa Martins Junior, M.; Martins, T.; Pol, M. E.; Souza, M. H. G.; Aldá Júnior, W. L.; Carvalho, W.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Malbouisson, H.; Malek, M.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Soares Jorge, L.; Sznajder, A.; Vilela Pereira, A.; Anjos, T. 
S.; Bernardes, C. A.; Dias, F. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Lagana, C.; Marinho, F.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Genchev, V.; Iaydjiev, P.; Piperov, S.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Tcholakov, V.; Trayanov, R.; Vutova, M.; Dimitrov, A.; Hadjiiska, R.; Kozhuharov, V.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Jiang, C. H.; Liang, D.; Liang, S.; Meng, X.; Tao, J.; Wang, J.; Wang, X.; Wang, Z.; Xiao, H.; Xu, M.; Zang, J.; Zhang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Guo, Y.; Li, W.; Liu, S.; Mao, Y.; Qian, S. J.; Teng, H.; Wang, D.; Zhang, L.; Zou, W.; Avila, C.; Gomez, J. P.; Gomez Moreno, B.; Osorio Oliveros, A. F.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Plestina, R.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Duric, S.; Kadija, K.; Luetic, J.; Mekterovic, D.; Morovic, S.; Attikis, A.; Galanti, M.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Finger, M.; Finger, M., Jr.; Assran, Y.; Elgammal, S.; Ellithi Kamel, A.; Mahmoud, M. A.; Mahrous, A.; Radi, A.; Kadastik, M.; Müntel, M.; Raidal, M.; Rebane, L.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Heikkinen, A.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Ungaro, D.; Wendland, L.; Banzuzi, K.; Karjalainen, A.; Korpela, A.; Tuuva, T.; Besancon, M.; Choudhury, S.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Millischer, L.; Nayak, A.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Benhabib, L.; Bianchini, L.; Bluj, M.; Busson, P.; Charlot, C.; Daci, N.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Florent, A.; Granier de Cassagnac, R.; Haguenauer, M.; Miné, P.; Mironov, C.; Naranjo, I. 
N.; Nguyen, M.; Ochando, C.; Paganini, P.; Sabes, D.; Salerno, R.; Sirois, Y.; Veelken, C.; Zabi, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Bodin, D.; Brom, J.-M.; Cardaci, M.; Chabert, E. C.; Collard, C.; Conte, E.; Drouhin, F.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Juillot, P.; Le Bihan, A.-C.; Van Hove, P.; Fassi, F.; Mercier, D.; Beauceron, S.; Beaupere, N.; Bondu, O.; Boudoul, G.; Brochet, S.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Sgandurra, L.; Sordini, V.; Tschudi, Y.; Verdier, P.; Viret, S.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Calpas, B.; Edelhoff, M.; Feld, L.; Heracleous, N.; Hindrichs, O.; Jussen, R.; Klein, K.; Merz, J.

    2013-09-01

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 fb⁻¹. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  10. Interpretation of searches for supersymmetry with simplified models

    CERN Document Server

    Chatrchyan, Serguei; Sirunyan, Albert M; Tumasyan, Armen; Adam, Wolfgang; Aguilo, Ernest; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Fabjan, Christian; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kiesenhofer, Wolfgang; Knünz, Valentin; Krammer, Manfred; Krätschmer, Ilse; Liko, Dietrich; Mikulec, Ivan; Pernicka, Manfred; Rabady, Dinyar; Rahbaran, Babak; Rohringer, Christine; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Taurok, Anton; Waltenberger, Wolfgang; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Bansal, Monika; Bansal, Sunil; Cornelis, Tom; De Wolf, Eddi A; Janssen, Xavier; Luyckx, Sten; Mucibello, Luca; Ochesanu, Silvia; Roland, Benoit; Rougny, Romain; Selvaggi, Michele; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Van Spilbeeck, Alex; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Gonzalez Suarez, Rebeca; Kalogeropoulos, Alexis; Maes, Michael; Olbrechts, Annik; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Clerbaux, Barbara; De Lentdecker, Gilles; Dero, Vincent; Gay, Arnaud; Hreus, Tomas; Léonard, Alexandre; Marage, Pierre Edouard; Mohammadi, Abdollah; Reis, Thomas; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wang, Jian; Adler, Volker; Beernaert, Kelly; Cimmino, Anna; Costantini, Silvia; Garcia, Guillaume; Grunewald, Martin; Klein, Benjamin; Lellouch, Jérémie; Marinov, Andrey; Mccartin, Joseph; Ocampo Rios, Alberto Andres; Ryckbosch, Dirk; Strobbe, Nadja; Thyssen, Filip; Tytgat, Michael; Walsh, Sinead; Yazgan, Efe; Zaganidis, Nicolas; Basegmez, Suzan; Bruno, Giacomo; Castello, Roberto; Ceard, Ludivine; Delaere, Christophe; Du Pree, Tristan; Favart, Denis; Forthomme, Laurent; Giammanco, Andrea; Hollar, Jonathan; Lemaitre, Vincent; Liao, Junhui; Militaru, Otilia; Nuttens, Claude; Pagano, Davide; Pin, Arnaud; Piotrzkowski, Krzysztof; Vizan Garcia, Jesus Manuel; 
Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Hammad, Gregory Habib; Alves, Gilvan; Correa Martins Junior, Marcos; Martins, Thiago; Pol, Maria Elena; Henrique Gomes E Souza, Moacyr; Aldá Júnior, Walter Luiz; Carvalho, Wagner; Custódio, Analu; Da Costa, Eliza Melo; De Jesus Damiao, Dilson; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Malbouisson, Helena; Malek, Magdalena; Matos Figueiredo, Diego; Mundim, Luiz; Nogima, Helio; Prado Da Silva, Wanda Lucia; Santoro, Alberto; Soares Jorge, Luana; Sznajder, Andre; Vilela Pereira, Antonio; Souza Dos Anjos, Tiago; Bernardes, Cesar Augusto; De Almeida Dias, Flavia; Tomei, Thiago; De Moraes Gregores, Eduardo; Lagana, Caio; Da Cunha Marinho, Franciole; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Genchev, Vladimir; Iaydjiev, Plamen; Piperov, Stefan; Rodozov, Mircho; Stoykova, Stefka; Sultanov, Georgi; Tcholakov, Vanio; Trayanov, Rumen; Vutova, Mariana; Dimitrov, Anton; Hadjiiska, Roumyana; Kozhuharov, Venelin; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Jiang, Chun-Hua; Liang, Dong; Liang, Song; Meng, Xiangwei; Tao, Junquan; Wang, Jian; Wang, Xianyou; Wang, Zheng; Xiao, Hong; Xu, Ming; Zang, Jingjing; Zhang, Zhen; Asawatangtrakuldee, Chayanit; Ban, Yong; Guo, Yifei; Li, Wenbo; Liu, Shuai; Mao, Yajun; Qian, Si-Jin; Teng, Haiyun; Wang, Dayong; Zhang, Linlin; Zou, Wei; Avila, Carlos; Gomez, Juan Pablo; Gomez Moreno, Bernardo; Osorio Oliveros, Andres Felipe; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Plestina, Roko; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Duric, Senka; Kadija, Kreso; Luetic, Jelena; Mekterovic, Darko; Morovic, Srecko; Attikis, Alexandros; Galanti, Mario; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Finger, Miroslav; Finger Jr, Michael; Assran, Yasser; Elgammal, Sherif; Ellithi Kamel, Ali; Mahmoud, Mohammed; Mahrous, Ayman; Radi, Amr; 
Kadastik, Mario; Müntel, Mait; Raidal, Martti; Rebane, Liis; Tiko, Andres; Eerola, Paula; Fedi, Giacomo; Voutilainen, Mikko; Härkönen, Jaakko; Heikkinen, Mika Aatos; Karimäki, Veikko; Kinnunen, Ritva; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Peltola, Timo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Ungaro, Donatella; Wendland, Lauri; Banzuzi, Kukka; Karjalainen, Ahti; Korpela, Arja; Tuuva, Tuure; Besancon, Marc; Choudhury, Somnath; Dejardin, Marc; Denegri, Daniel; Fabbro, Bernard

    2013-01-01

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.
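
    The limit-setting logic described above can be sketched simply: a point in a simplified-model parameter space is excluded when the predicted product of production cross section and branching fraction exceeds the published upper limit at that mass. The function and all numbers below are purely illustrative, not CMS results or CMS code.

```python
# Illustrative sketch (not CMS software): a simplified-model point is
# excluded at 95% CL when its predicted sigma x BR exceeds the
# published upper limit. All numbers below are fabricated.

def excluded(sigma_times_br_theory, sigma_upper_limit):
    """True if the model point is excluded (theory above the limit)."""
    return sigma_times_br_theory > sigma_upper_limit

# Hypothetical mass scan: (mass in GeV, theory sigma*BR in pb, limit in pb)
scan = [(300, 2.0, 0.5), (500, 0.3, 0.4), (700, 0.02, 0.3)]
excluded_masses = [m for m, th, ul in scan if excluded(th, ul)]
print(excluded_masses)
```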

  11. Life course models: improving interpretation by consideration of total effects.

    Science.gov (United States)

    Green, Michael J; Popham, Frank

    2016-12-28

    Life course epidemiology has used models of accumulation and critical or sensitive periods to examine the importance of exposure timing in disease aetiology. These models are usually used to describe the direct effects of exposures over the life course. In comparison with consideration of direct effects only, we show how consideration of total effects improves interpretation of these models, giving clearer notions of when it will be most effective to intervene. We show how life course variation in the total effects depends on the magnitude of the direct effects and the stability of the exposure. We discuss interpretation in terms of total, direct and indirect effects and highlight the causal assumptions required for conclusions as to the most effective timing of interventions.
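
    The distinction between direct and total effects can be illustrated with a minimal linear sketch, assuming a two-period life course in which an early exposure also acts through a later exposure that tracks it; all coefficients are invented for illustration, not taken from the paper.

```python
# Hedged toy model (not the authors' code): early exposure X1 affects
# outcome Y directly (b1) and indirectly via a later exposure X2 that
# tracks X1 with stability s and has its own direct effect b2.
b1, b2, s = 0.2, 0.5, 0.8   # assumed direct effects and exposure stability

direct_effect_X1 = b1                 # effect of X1 holding X2 fixed
indirect_effect_X1 = s * b2           # effect transmitted through X2
total_effect_X1 = b1 + s * b2         # what intervening on X1 changes

# The total effect, not the direct effect, indicates the payoff of an
# early intervention when the exposure is stable over the life course.
print(direct_effect_X1, total_effect_X1)
```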

  12. The Marine Virtual Laboratory: enabling efficient ocean model configuration

    Directory of Open Access Journals (Sweden)

    P. R. Oke

    2015-11-01

    Full Text Available The technical steps involved in configuring a regional ocean model are analogous for all community models. All require the generation of a model grid and the preparation and interpolation of topography, initial conditions, and forcing fields. Each task in configuring a regional ocean model is straightforward, but the process of downloading and reformatting data can be time-consuming. For an experienced modeller, the configuration of a new model domain can take as little as a few hours, but for an inexperienced modeller it can take much longer. In pursuit of technical efficiency, the Australian ocean modelling community has developed the Web-based MARine Virtual Laboratory (WebMARVL). WebMARVL allows a user to quickly and easily configure an ocean general circulation or wave model through a simple interface, reducing the time to configure a regional model to a few minutes. Through WebMARVL, a user is prompted to define the basic options needed for a model configuration, including the model, run duration, spatial extent, and input data. Once all aspects of the configuration are selected, a series of data extraction, reprocessing, and repackaging services are run, and a "take-away bundle" is prepared for download. Building on the capabilities developed under Australia's Integrated Marine Observing System, WebMARVL also extracts all of the available observations for the chosen time-space domain. The user is able to download the take-away bundle and use it to run the model of their choice. Models supported by WebMARVL include three community ocean general circulation models and two community wave models. The model configuration from the take-away bundle is intended to be a starting point for scientific research. The user may subsequently refine the details of the model set-up to improve the model performance for the given application. In this study, WebMARVL is described along with a series of results from test cases comparing Web

  13. Reduced ENSO variability at the LGM revealed by an isotope-enabled Earth system model

    Science.gov (United States)

    Zhu, Jiang; Liu, Zhengyu; Brady, Esther; Otto-Bliesner, Bette; Zhang, Jiaxu; Noone, David; Tomas, Robert; Nusbaumer, Jesse; Wong, Tony; Jahn, Alexandra; Tabor, Clay

    2017-07-01

    Studying the El Niño-Southern Oscillation (ENSO) in the past can help us better understand its dynamics and improve its future projections. However, both paleoclimate reconstructions and model simulations of ENSO strength at the Last Glacial Maximum (LGM; 21 ka B.P.) have led to contradictory results. Here we perform model simulations using the recently developed water isotope-enabled Community Earth System Model (iCESM). For the first time, model-simulated oxygen isotopes are directly compared with those from ENSO reconstructions using individual foraminifera analysis (IFA). We find that the LGM ENSO is most likely weaker than in the preindustrial period. The iCESM suggests that the total variance of the IFA records may only reflect changes in the annual cycle instead of ENSO variability as previously assumed. Furthermore, the interpretation of subsurface IFA records can be substantially complicated by the habitat depth of thermocline-dwelling foraminifera and their vertical migration with a temporally varying thermocline.
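
    The point about IFA total variance can be illustrated with a toy simulation (not iCESM): each foraminifer records the temperature of a random month, so the population variance mixes the annual cycle with ENSO-like interannual noise, and a dominant annual cycle masks changes in ENSO strength. All amplitudes below are arbitrary.

```python
# Toy model of individual foraminifera analysis (IFA): each "shell"
# samples SST at a random phase of the year; total variance combines
# seasonal (amp^2/2) and ENSO-like interannual (enso_std^2) components.
import math, random
random.seed(0)

def ifa_variance(annual_amp, enso_std, n=20000):
    vals = []
    for _ in range(n):
        month = random.uniform(0, 12)
        seasonal = annual_amp * math.sin(2 * math.pi * month / 12)
        enso = random.gauss(0, enso_std)
        vals.append(seasonal + enso)
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / n

# Halving the ENSO variability barely changes the total IFA variance
# when the annual cycle dominates (all values arbitrary, in deg C).
print(ifa_variance(2.0, 0.5), ifa_variance(2.0, 0.25))
```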

  14. A human brainstem glioma xenograft model enabled for bioluminescence imaging

    OpenAIRE

    Hashizume, Rintaro; Ozawa, Tomoko; Dinca, Eduard B.; Banerjee, Anuradha; Prados, Michael D.; James, Charles D.; Gupta, Nalin

    2009-01-01

    Despite the use of radiation and chemotherapy, the prognosis for children with diffuse brainstem gliomas is extremely poor. There is a need for relevant brainstem tumor models that can be used to test new therapeutic agents and delivery systems in pre-clinical studies. We report the development of a brainstem-tumor model in rats and the application of bioluminescence imaging (BLI) for monitoring tumor growth and response to therapy as part of this model. Luciferase-modified human glioblastoma...

  15. Modelling and interpreting spectral energy distributions of galaxies with BEAGLE

    CERN Document Server

    Chevallard, Jacopo

    2016-01-01

    We present a new-generation tool to model and interpret spectral energy distributions (SEDs) of galaxies, which incorporates in a consistent way the production of radiation and its transfer through the interstellar and intergalactic media. This flexible tool, named BEAGLE (for BayEsian Analysis of GaLaxy sEds), allows one to build mock galaxy catalogues as well as to interpret any combination of photometric and spectroscopic galaxy observations in terms of physical parameters. The current version of the tool includes versatile modelling of the emission from stars and photoionized gas, attenuation by dust, and accounting for different instrumental effects. We show a first application of the BEAGLE tool to the interpretation of broadband SEDs of a published sample of $\sim 10^4$ galaxies at redshifts $0.1 \lesssim z \lesssim 8$. We find that the constraints derived on photometric redshifts using this multi-purpose tool are comparable to those obtained using public, dedicated photometric-redshift codes and ...

  16. Backward Causation in Complex Action Model --- Superdeterminism and Transactional Interpretations

    CERN Document Server

    Nielsen, Holger B

    2010-01-01

    It is shown that the transactional interpretation of quantum mechanics, referred back to Feynman and Wheeler's time-reversal-symmetric radiation theory, is reminiscent of our complex action model. In this complex action model the initial conditions are in principle even calculable. Thus it philosophically points towards superdeterminism, but in fact the Bell theorem problem is solved in our complex action model by removing the significance of signals running slower than the velocity of light. As earlier published, our model predicts that the LHC should suffer some failure before producing as many Higgs particles as the SSC accelerator would have produced. In the present article, we point out that a card game on whether to restrict LHC running, which we have proposed as a test of our model, will under all circumstances be a success.

  17. An Agent Memory Model Enabling Rational and Biased Reasoning

    NARCIS (Netherlands)

    Heuvelink, A.; Klein, M.C.A.; Treur, J.

    2008-01-01

    This paper presents an architecture for a memory model that facilitates versatile reasoning mechanisms over the beliefs stored in an agent's belief base. Based on an approach for belief aggregation, a model is introduced for controlling both the formation of abstract and complex beliefs and the

  18. Medicare Care Choices Model Enables Concurrent Palliative and Curative Care.

    Science.gov (United States)

    2015-01-01

    On July 20, 2015, the federal Centers for Medicare & Medicaid Services (CMS) announced the hospices that had been selected to participate in the Medicare Care Choices Model. Fewer than half of Medicare beneficiaries use the hospice care for which they are eligible. Current Medicare regulations preclude concurrent palliative and curative care. Under the Medicare Care Choices Model, dually eligible Medicare beneficiaries may elect to receive supportive care services typically provided by hospice while continuing to receive curative services. This report describes how CMS has expanded the model from an originally anticipated 30 Medicare-certified hospices to over 140 Medicare-certified hospices and extended the duration of the model from 3 to 5 years. Medicare-certified hospice programs that will participate in the model are listed.

  19. New Cosmological Model and Its Implications on Observational Data Interpretation

    Directory of Open Access Journals (Sweden)

    Vlahovic Branislav

    2013-09-01

    Full Text Available The paradigm of ΛCDM cosmology works impressively well and, with the concept of inflation, it explains the universe after the time of decoupling. However, there are still a few concerns: after much effort there is no detection of dark matter, and there are significant problems in the theoretical description of dark energy. We will consider a variant of the cosmological spherical shell model within the FRW formalism and compare it with the standard ΛCDM model. We will show that our new topological model satisfies cosmological principles and is consistent with all observable data, but that it may require a new interpretation of some data. We will also consider the constraints imposed on the model, for instance the allowed range for the size and thickness of the shell, by the supernova luminosity distance and CMB data. In this model the propagation of light is confined along the shell, with the consequence that the observed CMB originated from a single point or a limited region of space. This allows the uniformity of the CMB to be interpreted without an inflation scenario. In addition, it removes any constraints on the uniformity of the universe at the early stage and opens the possibility that the universe was not uniform and that the creation of galaxies and large structures is due to inhomogeneities that originated in the Big Bang.

  20. A Conversation Model Enabling Intelligent Agents to Give Emotional Support

    OpenAIRE

    Van der Zwaan, J.M.; Dignum, V; Jonker, C.M.

    2012-01-01

    In everyday life, people frequently talk to others to help them deal with negative emotions. To some extent, everybody is capable of comforting other people, but so far conversational agents are unable to deal with this type of situation. To provide intelligent agents with the capability to give emotional support, we propose a domain-independent conversational model that is based on topics suggested by cognitive appraisal theories of emotion and the 5-phase model that is used to structure onl...

  1. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects
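
    The median odds ratio listed among this record's keywords can be computed directly from the random-intercept variance; below is a minimal sketch using only the Python standard library, applying the standard formula for normally distributed random effects.

```python
# Median odds ratio (MOR) for a logistic regression with a normally
# distributed random intercept of variance sigma^2:
#   MOR = exp( sqrt(2 * sigma^2) * Phi^-1(0.75) )
# where Phi^-1 is the standard normal quantile function.
from math import exp, sqrt
from statistics import NormalDist

def median_odds_ratio(sigma2):
    """Median of the odds ratio between two random clusters."""
    return exp(sqrt(2 * sigma2) * NormalDist().inv_cdf(0.75))

# With sigma^2 = 1, the median cluster-to-cluster odds ratio is ~2.6;
# with no between-cluster heterogeneity it is exactly 1.
print(median_odds_ratio(1.0), median_odds_ratio(0.0))
```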

  2. The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading.

    Science.gov (United States)

    Hurley, Susan

    2008-02-01

    Imitation, deliberation, and mindreading are characteristically human sociocognitive skills. Research on imitation and its role in social cognition is flourishing across various disciplines. Imitation is surveyed in this target article under headings of behavior, subpersonal mechanisms, and functions of imitation. A model is then advanced within which many of the developments surveyed can be located and explained. The shared circuits model (SCM) explains how imitation, deliberation, and mindreading can be enabled by subpersonal mechanisms of control, mirroring, and simulation. It is cast at a middle, functional level of description, that is, between the level of neural implementation and the level of conscious perceptions and intentional actions. The SCM connects shared informational dynamics for perception and action with shared informational dynamics for self and other, while also showing how the action/perception, self/other, and actual/possible distinctions can be overlaid on these shared informational dynamics. It avoids the common conception of perception and action as separate and peripheral to central cognition. Rather, it contributes to the situated cognition movement by showing how mechanisms for perceiving action can be built on those for active perception.

    The SCM is developed heuristically, in five layers that can be combined in various ways to frame specific ontogenetic or phylogenetic hypotheses. The starting point is dynamic online motor control, whereby an organism is closely attuned to its embedding environment through sensorimotor feedback. Onto this are layered functions of prediction and simulation of feedback, mirroring, simulation of mirroring, monitored inhibition of motor output, and monitored simulation of input. Finally, monitored simulation of input specifying possible actions plus inhibited mirroring of such possible actions can generate information about the possible as opposed to actual instrumental actions of others, and the

  3. Internal models for interpreting neural population activity during sensorimotor control.

    Science.gov (United States)

    Golub, Matthew D; Yu, Byron M; Chase, Steven M

    2015-01-01

    To successfully guide limb movements, the brain takes in sensory information about the limb, internally tracks the state of the limb, and produces appropriate motor commands. It is widely believed that this process uses an internal model, which describes our prior beliefs about how the limb responds to motor commands. Here, we leveraged a brain-machine interface (BMI) paradigm in rhesus monkeys and novel statistical analyses of neural population activity to gain insight into moment-by-moment internal model computations. We discovered that a mismatch between subjects' internal models and the actual BMI explains roughly 65% of movement errors, as well as long-standing deficiencies in BMI speed control. We then used the internal models to characterize how the neural population activity changes during BMI learning. More broadly, this work provides an approach for interpreting neural population activity in the context of how prior beliefs guide the transformation of sensory input to motor output.

  4. Model for the Interpretation of Hyperspectral Remote-Sensing Reflectance

    Science.gov (United States)

    Lee, Zhongping; Carder, Kendall L.; Hawes, Steve K.; Steward, Robert G.; Peacock, Thomas G.; Davis, Curtiss O.

    1994-01-01

    Remote-sensing reflectance is easier to interpret for the open ocean than for coastal regions because the optical signals are highly coupled to the phytoplankton (e.g., chlorophyll) concentrations. For estuarine or coastal waters, variable terrigenous colored dissolved organic matter (CDOM), suspended sediments, and bottom reflectance, all factors that do not covary with the pigment concentration, confound data interpretation. In this research, remote-sensing reflectance models are suggested for coastal waters, to which contributions that are due to bottom reflectance, CDOM fluorescence, and water Raman scattering are included. Through the use of two parameters to model the combination of the backscattering coefficient and the Q factor, excellent agreement was achieved between the measured and modeled remote-sensing reflectance for waters from the West Florida Shelf to the Mississippi River plume. These waters cover a range of chlorophyll of 0.2-40 mg/cu m and gelbstoff absorption at 440 nm from 0.02-0.4/m. Data with a spectral resolution of 10 nm or better, which is consistent with that provided by the airborne visible and infrared imaging spectrometer (AVIRIS) and spacecraft spectrometers, were used in the model evaluation.

  5. Enabling linear model for the IMGC-02 absolute gravimeter

    CERN Document Server

    Nagornyi, V D; Svitlov, S

    2013-01-01

    The measurement procedures of most rise-and-fall absolute gravimeters have to resolve a singularity at the apex of the trajectory caused by the discrete fringe counting in Michelson-type interferometers. Traditionally the singularity is addressed by implementing non-linear models of the trajectory, but these introduce problems of their own, such as bias, non-uniqueness, and instability of the gravity estimates. Using the IMGC-02 gravimeter as an example, we show that the measurement procedure of rise-and-fall gravimeters can be based on linear models, which successfully resolve the singularity and provide rigorous estimates of the gravity value. The linear models also facilitate further enhancements of the instrument, such as accounting for new types of disturbances and active compensation for vibrations.
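
    A sketch of what "linear model" means here: the free-fall trajectory z(t) = z0 + v0·t − (g/2)·t² is quadratic in time but linear in the parameters (z0, v0, g), so ordinary least squares recovers g without iteration. This is an idealized illustration with synthetic data, not the IMGC-02 processing chain.

```python
# Fit g from a synthetic rise-and-fall trajectory using a model that is
# linear in its parameters (illustrative only, not the IMGC-02 code).
import numpy as np

g_true = 9.80665
t = np.linspace(0.0, 0.4, 200)              # seconds
z = 0.1 + 1.96 * t - 0.5 * g_true * t**2    # apex near t = 0.2 s
z += np.random.default_rng(0).normal(0, 1e-6, t.size)  # fringe-scale noise

# Design matrix [1, t, t^2]: the least-squares problem is linear even
# though the trajectory itself is quadratic in t.
A = np.vstack([np.ones_like(t), t, t**2]).T
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
g_est = -2.0 * coef[2]
print(g_est)
```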

  6. Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2011-09-01

    In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.

  7. Enabling Cross-Discipline Collaboration Via a Functional Data Model

    Science.gov (United States)

    Lindholm, D. M.; Wilson, A.; Baltzer, T.

    2016-12-01

    Many research disciplines have very specialized data models that are used to express the detailed semantics that are meaningful to that community and easily utilized by their data analysis tools. While invaluable to members of that community, such expressive data structures and metadata are of little value to potential collaborators from other scientific disciplines. Many data interoperability efforts focus on the difficult task of computationally mapping concepts from one domain to another to facilitate discovery and use of data. Although these efforts are important and promising, we have found that a great deal of discovery and dataset understanding still happens at the level of less formal, personal communication. However, a significant barrier to inter-disciplinary data sharing that remains is one of data access. Scientists and data analysts continue to spend inordinate amounts of time simply trying to get data into their analysis tools. Providing data in a standard file format is often not sufficient, since data can be structured in many ways. Adhering to more explicit community standards for data structure and metadata does little to help those in other communities. The Functional Data Model specializes the Relational Data Model (used by many database systems) by defining relations as functions between independent (domain) and dependent (codomain) variables. Given that arrays of data in many scientific data formats generally represent functionally related parameters (e.g. temperature as a function of space and time), the Functional Data Model is quite relevant for these datasets as well. The LaTiS software framework implements the Functional Data Model and provides a mechanism to expose an existing data source as a LaTiS dataset. LaTiS datasets can be manipulated using a Functional Algebra and output in any number of formats. LASP has successfully used the Functional Data Model and its implementation in the LaTiS software framework to bridge the gap between
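
    The core idea of the Functional Data Model can be sketched in a few lines: a dataset is a function from independent (domain) values to dependent (codomain) values, so any data source wrapped this way can be accessed uniformly. The class and values below are hypothetical illustrations, not the LaTiS API.

```python
# Minimal sketch of the Functional Data Model (hypothetical, not LaTiS):
# a dataset IS a function from domain values to codomain tuples.
class FunctionalDataset:
    def __init__(self, samples):
        # samples: mapping from domain value -> codomain tuple
        self._samples = dict(samples)

    def __call__(self, domain_value):
        # Evaluating the dataset at a domain value yields the codomain.
        return self._samples[domain_value]

    def domain(self):
        return sorted(self._samples)

# Temperature as a function of time (made-up values, kelvin).
temperature = FunctionalDataset({0: (288.1,), 1: (288.4,), 2: (288.0,)})
print(temperature.domain(), temperature(1))
```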

  8. Modelling and interpreting spectral energy distributions of galaxies with BEAGLE

    Science.gov (United States)

    Chevallard, Jacopo; Charlot, Stéphane

    2016-10-01

    We present a new-generation tool to model and interpret spectral energy distributions (SEDs) of galaxies, which incorporates in a consistent way the production of radiation and its transfer through the interstellar and intergalactic media. This flexible tool, named BEAGLE (for BayEsian Analysis of GaLaxy sEds), allows one to build mock galaxy catalogues as well as to interpret any combination of photometric and spectroscopic galaxy observations in terms of physical parameters. The current version of the tool includes versatile modelling of the emission from stars and photoionized gas, attenuation by dust and accounting for different instrumental effects, such as spectroscopic flux calibration and line spread function. We show a first application of the BEAGLE tool to the interpretation of broad-band SEDs of a published sample of ˜ 10^4 galaxies at redshifts 0.1 ≲ z ≲ 8. We find that the constraints derived on photometric redshifts using this multipurpose tool are comparable to those obtained using public, dedicated photometric-redshift codes and quantify this result in a rigorous statistical way. We also show how the post-processing of BEAGLE output data with the PYTHON extension PYP-BEAGLE allows the characterization of systematic deviations between models and observations, in particular through posterior predictive checks. The modular design of the BEAGLE tool allows easy extensions to incorporate, for example, the absorption by neutral galactic and circumgalactic gas, and the emission from an active galactic nucleus, dust and shock-ionized gas. Information about public releases of the BEAGLE tool will be maintained on http://www.jacopochevallard.org/beagle.

  9. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen as approximately the average pairwise...... for factors with differences in number of levels. For mixed models, where in general the relevant error terms for the fixed effects are not the pure residual error, it is suggested to base the d-prime-like interpretation on the residual error. The methods are illustrated on a multifactorial sensory profile...... inherently challenging effect size measure estimates in ANOVA settings....

  10. Domain-specific modeling enabling full code generation

    CERN Document Server

    Kelly, Steven

    2007-01-01

    Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This book introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams. Two authorities in the field explain what DSM is, why it works, and how to successfully create and use a DSM solution to improve productivity and quality. Divided into four parts, the book covers: background and motivation; fundamentals; in-depth examples; and creating DSM solutions. There is an emphasis throughout the book on practical guidelines for implementing DSM, including how to identify the necessary language constructs, how to generate full code from models, and how to provide tool support for a new DSM language. The example cases described in the book are available on the book's Website, www.dsmbook....

  11. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    the qualitative models in a cognitive engine. Further I use the methodology in multiple functional scenarios of cognitive networks including self- optimization and self- monitoring. In the case of self-optimization, I integrate principles from monotonicity analysis to evaluate and enhance qualitative models......Complexity of communication networks is ever increasing and getting complicated by their heterogeneity and dynamism. Traditional techniques are facing challenges in network performance management. Cognitive networking is an emerging paradigm to make networks more intelligent, thereby overcoming...... traditional limitations and potentially achieving better performance. The vision is that, networks should be able to monitor themselves, reason upon changes in self and environment, act towards the achievement of specific goals and learn from experience. The concept of a Cognitive Engine (CE) supporting...

  12. LIME: 3D visualisation and interpretation of virtual geoscience models

    Science.gov (United States)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described and case studies from outcrop geology, in hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel

  13. Enabling analytical and Modeling Tools for Enhanced Disease Surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Dawn K. Manley

    2003-04-01

    Early detection, identification, and warning are essential to minimize casualties from a biological attack. For covert attacks, sick people are likely to provide the first indication of an attack. An enhanced medical surveillance system that synthesizes distributed health indicator information and rapidly analyzes the information can dramatically increase the number of lives saved. Current surveillance methods to detect both biological attacks and natural outbreaks are hindered by factors such as distributed ownership of information, incompatible data storage and analysis programs, and patient privacy concerns. Moreover, because data are not widely shared, few data mining algorithms have been tested on and applied to diverse health indicator data. This project addressed both integration of multiple data sources and development and integration of analytical tools for rapid detection of disease outbreaks. As a first prototype, we developed an application to query and display distributed patient records. This application incorporated need-to-know access control and incorporated data from standard commercial databases. We developed and tested two different algorithms for outbreak recognition. The first is a pattern recognition technique that searches for space-time data clusters that may signal a disease outbreak. The second is a genetic algorithm to design and train neural networks (GANN) that we applied toward disease forecasting. We tested these algorithms against influenza, respiratory illness, and Dengue Fever data. Through this LDRD in combination with other internal funding, we delivered a distributed simulation capability to synthesize disparate information and models for earlier recognition and improved decision-making in the event of a biological attack. The architecture incorporates user feedback and control so that a user's decision inputs can impact the scenario outcome as well as integrated security and role-based access-control for communicating
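
    A minimal sketch of outbreak recognition on health-indicator counts, assuming a simple threshold rule rather than the project's space-time clustering or GANN algorithms; all counts below are fabricated.

```python
# Toy surveillance rule (illustrative, not the project's algorithms):
# flag a possible outbreak when today's syndromic count exceeds the
# historical mean by more than `threshold` standard deviations.
from statistics import mean, stdev

def outbreak_alarm(history, today, threshold=3.0):
    mu, sd = mean(history), stdev(history)
    return (today - mu) > threshold * sd

# Ten days of fabricated baseline counts of respiratory-illness visits.
baseline = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9]
print(outbreak_alarm(baseline, 11), outbreak_alarm(baseline, 30))
```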

  14. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts-such as a baseline, scatter effects or noise-and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but which method or combination of methods should be used for the specific data being analyzed is difficult to select. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  15. Software Infrastructure to Enable Modeling & Simulation as a Service (M&SaaS) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR Phase 2 project will produce a software service infrastructure that enables most modeling and simulation (M&S) activities from code development and...

  16. Compact Ocean Models Enable Onboard AUV Autonomy and Decentralized Adaptive Sampling

    Science.gov (United States)

    2014-09-30

Approved for public release; distribution is unlimited. Frolov, S., R. Kudela, and J. Bellingham, "onboard autonomy of underwater vehicles", in Proc. AGU Ocean Science Meeting, Salt Lake City, UT. [published]

  17. Pedagogic Models, Teachers' Frames of Interpretation and Assessment Practices.

    Science.gov (United States)

    Sakonidis, Haralambos; Tsatsaroni, Anna; Lamnias, Costas

    2002-01-01

    Constructed a theoretical framework to connect the internal structure of specialized educational discourse with the frames of interpretation that teachers used in dealing with teaching, learning, and assessment. Data from Greek elementary school teachers indicated that teachers' interpretive frames related to the serial languages of traditional,…

  18. Postural control model interpretation of stabilogram diffusion analysis

    Science.gov (United States)

    Peterka, R. J.

    2000-01-01

Collins and De Luca [Collins JJ, De Luca CJ (1993) Exp Brain Res 95:308-318] introduced a new method known as stabilogram diffusion analysis that provides a quantitative statistical measure of the apparently random variations of center-of-pressure (COP) trajectories recorded during quiet upright stance in humans. This analysis generates a stabilogram diffusion function (SDF) that summarizes the mean square COP displacement as a function of the time interval between COP comparisons. SDFs have a characteristic two-part form that suggests the presence of two different control regimes: a short-term open-loop control behavior and a longer-term closed-loop behavior. This paper demonstrates that a very simple closed-loop control model of upright stance can generate realistic SDFs. The model consists of an inverted pendulum body with torque applied at the ankle joint. This torque includes a random disturbance torque and a control torque. The control torque is a function of the deviation (error signal) between the desired upright body position and the actual body position, and is generated in proportion to the error signal, the derivative of the error signal, and the integral of the error signal [i.e. a proportional, integral and derivative (PID) neural controller]. The control torque is applied with a time delay representing conduction, processing, and muscle activation delays. Variations in the PID parameters and the time delay generate variations in SDFs that mimic real experimental SDFs. This model analysis allows one to interpret experimentally observed changes in SDFs in terms of variations in neural controller and time delay parameters rather than in terms of open-loop versus closed-loop behavior.
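The model described above is straightforward to sketch numerically: an inverted pendulum driven by a random disturbance torque and a delayed PID control torque, from which a mean-square-displacement curve (the SDF) can be computed. The inertia, stiffness, gains, and delay below are illustrative assumptions, not Peterka's published parameter values:

```python
import numpy as np

def simulate_sway(Kp=1000.0, Ki=50.0, Kd=300.0, delay=0.15,
                  duration=60.0, dt=0.005, noise=1.0, seed=0):
    """Inverted-pendulum stance with a delayed PID controller and a random
    disturbance torque; returns the body sway angle over time."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    lag = int(delay / dt)          # feedback delay in integration steps
    J, mgh = 70.0, 700.0           # illustrative inertia, gravity stiffness
    theta = np.zeros(n)
    omega, integ = 0.0, 0.0
    for t in range(1, n):
        # the controller only sees the state `lag` steps in the past
        past, past1 = max(t - lag, 0), max(t - lag - 1, 0)
        err = -theta[past]
        d_err = -(theta[past] - theta[past1]) / dt
        integ += err * dt
        torque = Kp * err + Kd * d_err + Ki * integ
        disturbance = noise * rng.standard_normal() / np.sqrt(dt)
        alpha = (mgh * theta[t - 1] + torque + disturbance) / J
        omega += alpha * dt
        theta[t] = theta[t - 1] + omega * dt
    return theta

def stabilogram_diffusion(theta, dt=0.005, max_lag=2.0):
    """Mean-square displacement versus time interval (the SDF)."""
    lags = np.arange(1, int(max_lag / dt))
    sdf = np.array([np.mean((theta[l:] - theta[:-l]) ** 2) for l in lags])
    return lags * dt, sdf
```

Varying the PID gains and the delay, as in the paper, changes the shape of the resulting SDF.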

  19. Animal model and pharmacokinetic interpretation of nicotine poisoning in man.

    Science.gov (United States)

    Brady, M E; Ritschel, W A; Saelinger, D A; Cacini, W; Patterson, A J

    1979-01-01

    The purpose of the study was to find an animal model and possible pharmacolokinetic interpretation of the fact that a patient survived an accidental sc poisoning with a nicotine-containing animal tranquilizing dart. The same dose size of 3.58 mg/kg causing poisoning in man was administered to rabbits iv and sc. Blood samples were obtained for nicotine analysis by cardiac punctures; and blood pressure, respiration rate, and saliva flow were measured. Analysis of the original solution used in the dart excluded the possibility of sub-potency. The extent of unchanged drug reaching systemic circulation (extent of bioavailability) upon sc administration was 83%. Hence, the possibility of survival in man due to rapid tissue metabolism was ruled out. The pharmacokinetic analysis revealed a significant reduction in sc plasma levels during the first half hour which is reported as the most critical period for patients experiencing nicotine intoxication. The disposition of nicotine in the rabbit, i.e. distribution and elimination, are identical upon iv and sc administration. The reduced toxicity, i.e. blood pressure and saliva flow rate, upon sc dosing may be explained by the difference in plasma level peaks between sc and iv administration.

  20. Interpreting linear support vector machine models with heat map molecule coloring

    Directory of Open Access Journals (Sweden)

    Rosenbaum Lars

    2011-03-01

    Full Text Available Abstract Background Model-based virtual screening plays an important role in the early drug discovery stage. The outcomes of high-throughput screenings are a valuable source for machine learning algorithms to infer such models. Besides a strong performance, the interpretability of a machine learning model is a desired property to guide the optimization of a compound in later drug discovery stages. Linear support vector machines showed to have a convincing performance on large-scale data sets. The goal of this study is to present a heat map molecule coloring technique to interpret linear support vector machine models. Based on the weights of a linear model, the visualization approach colors each atom and bond of a compound according to its importance for activity. Results We evaluated our approach on a toxicity data set, a chromosome aberration data set, and the maximum unbiased validation data sets. The experiments show that our method sensibly visualizes structure-property and structure-activity relationships of a linear support vector machine model. The coloring of ligands in the binding pocket of several crystal structures of a maximum unbiased validation data set target indicates that our approach assists to determine the correct ligand orientation in the binding pocket. Additionally, the heat map coloring enables the identification of substructures important for the binding of an inhibitor. Conclusions In combination with heat map coloring, linear support vector machine models can help to guide the modification of a compound in later stages of drug discovery. Particularly substructures identified as important by our method might be a starting point for optimization of a lead compound. The heat map coloring should be considered as complementary to structure based modeling approaches. As such, it helps to get a better understanding of the binding mode of an inhibitor.
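The weight-based coloring idea can be sketched in simplified form: each active fingerprint bit of a compound carries a linear SVM weight, which is distributed over the atoms of the substructure that set the bit and normalized for a heat-map color scale. The fingerprint-to-atom mapping and the even splitting of a bit's weight are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def atom_importance(weights, fingerprint, bit_to_atoms, n_atoms):
    """Distribute the weight of every active fingerprint bit over the atoms
    of the substructure that set it; normalize to [-1, 1] for coloring."""
    scores = np.zeros(n_atoms)
    for bit in np.flatnonzero(fingerprint):
        atoms = bit_to_atoms.get(bit, [])
        for a in atoms:
            # split the bit's weight evenly across its substructure atoms
            scores[a] += weights[bit] / len(atoms)
    peak = np.max(np.abs(scores))
    return scores / peak if peak > 0 else scores
```

Positive scores would be rendered in one color (activity-increasing) and negative scores in another, atom by atom.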

  1. Model sparsity and brain pattern interpretation of classification models in neuroimaging

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Churchill, Nathan W

    2012-01-01

    Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a ‘brain map’ derived from the classification model...

  2. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  3. New Model for Ecosystem Management Interpretation: Target Audiences on Military Lands.

    Science.gov (United States)

    Jacobson, Susan K.; Marynowski, Susan B.

    1998-01-01

    An interpretation model focusing on audience characteristics guided development of an ecosystem-management interpretive program targeting military leaders and planners at Eglin Air Force Base (Florida). Of five interpretative media tested, print mass media were most successful in increasing ecosystem knowledge and enhancing attitudes of both…

  4. Futures Business Models for an IoT Enabled Healthcare Sector: A Causal Layered Analysis Perspective

    Directory of Open Access Journals (Sweden)

    Julius Francis Gomes

    2016-12-01

Full Text Available Purpose: To facilitate futures business research by proposing a novel way to combine business models as a conceptual tool with futures research techniques. Design: A futures perspective is adopted to foresight business models of the Internet of Things (IoT) enabled healthcare sector by using business models as a futures business research tool. In doing so, business models are coupled with one of the most prominent foresight methodologies, Causal Layered Analysis (CLA). Qualitative analysis provides deeper understanding of the phenomenon through the layers of CLA: litany, social causes, worldview and myth. Findings: It is difficult to predict the far future for a technology-oriented sector like healthcare. This paper presents three scenarios for the short-, medium- and long-term future. Based on these scenarios we also present a set of business model elements for different future time frames. This paper shows a way to combine business models with CLA, a foresight methodology, in order to apply business models in futures business research. Besides offering early results for futures business research, this study proposes a conceptual space to work with individual business models for managerial stakeholders. Originality / Value: Much research on business models has offered conceptualization of the phenomenon, innovation through business models and transformation of business models. However, the existing literature does not offer much on using the business model as a futures research tool. Enabled by futures thinking, we collected key business model elements and building blocks for the futures market and analyzed them through the CLA framework.

  5. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  6. IT-enabled dynamic capability on performance: An empirical study of BSC model

    Directory of Open Access Journals (Sweden)

    Adilson Carlos Yoshikuni

    2017-05-01

Full Text Available Few studies have investigated the influence of “information capital,” through IT-enabled dynamic capability, on corporate performance, particularly in economic turbulence. Our study investigates the causal relationship between performance perspectives of the balanced scorecard using partial least squares path modeling. Using data on 845 Brazilian companies, we conduct a quantitative empirical study of firms during an economic crisis and observe the following interesting results. Operational and analytical IT-enabled dynamic capability had positive effects on business process improvement and corporate performance. Results pertaining to mediation (endogenous variables) and moderation (control variables) clarify IT’s role in and benefits for corporate performance.

  7. Towards a Comprehensive Model of Stereotypy: Integrating Operant and Neurobiological Interpretations

    Science.gov (United States)

    Lanovaz, Marc J.

    2011-01-01

    The predominant models on the emergence and maintenance of stereotypy in individuals with developmental disabilities are based on operant and neurobiological interpretations of the behavior. Although the proponents of the two models maintain largely independent lines of research, operant and neurobiological interpretations of stereotypy are not…

  8. Analysis of Challenges for Management Education in India Using Total Interpretive Structural Modelling

    Science.gov (United States)

    Mahajan, Ritika; Agrawal, Rajat; Sharma, Vinay; Nangia, Vinay

    2016-01-01

    Purpose: The purpose of this paper is to identify challenges for management education in India and explain their nature, significance and interrelations using total interpretive structural modelling (TISM), an innovative version of Warfield's interpretive structural modelling (ISM). Design/methodology/approach: The challenges have been drawn from…

  9. Modelling and interpreting biologically crusted dryland soil sub-surface structure using automated micropenetrometry

    Science.gov (United States)

    Hoon, Stephen R.; Felde, Vincent J. M. N. L.; Drahorad, Sylvie L.; Felix-Henningsen, Peter

    2015-04-01

Soil penetrometers are used routinely to determine the shear strength of soils and deformable sediments both at the surface and throughout a depth profile in disciplines as diverse as soil science, agriculture, geoengineering and alpine avalanche-safety (e.g. Grunwald et al. 2001, Van Herwijnen et al. 2009). Generically, penetrometers comprise two principal components: an advancing probe, and a transducer; the latter to measure the pressure or force required to cause the probe to penetrate or advance through the soil or sediment. The force transducer employed to determine the pressure can range, for example, from a simple mechanical spring gauge to an automatically data-logged electronic transducer. Automated computer control of the penetrometer step size and probe advance rate enables precise measurements to be made down to a resolution of tens of microns (e.g. the automated electronic micropenetrometer (EMP) described by Drahorad 2012). Here we discuss the determination, modelling and interpretation of biologically crusted dryland soil sub-surface structures using automated micropenetrometry. We outline a model enabling the interpretation of depth-dependent penetration resistance (PR) profiles and their spatial differentials using the model equations σ(z) = σ_c0 + Σ_n [σ_n(z) + a_n·z + b_n·z²] and dσ/dz = Σ_n [dσ_n(z)/dz + Fr_n(z)], where σ_c0 and σ_n are the plastic deformation stresses for the surface and the nth soil structure (e.g. soil crust, layer, horizon or void) respectively, and Fr_n(z)·dz is the frictional work done per unit volume by sliding the penetrometer rod an incremental distance, dz, through the nth layer. Both σ_n(z) and Fr_n(z) are related to soil structure. They determine the form of σ(z) measured by the EMP transducer. The model enables pores (regions of zero deformation stress) to be distinguished from changes in layer structure or probe friction. We have applied this method to both artificial calibration soils in the
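The layered resistance model can be sketched numerically: sum each layer's plastic deformation stress and depth terms over the depths it occupies, and flag depth intervals where resistance falls back to near the surface level as pores. The layer parameters and the pore threshold below are hypothetical, for illustration only:

```python
import numpy as np

def penetration_resistance(z, layers, sigma_c0=0.05):
    """Evaluate sigma(z) = sigma_c0 + sum_n [sigma_n + a_n*z + b_n*z**2],
    with layers given as (z_top, z_bottom, sigma_n, a_n, b_n) tuples.
    Depths covered by no layer contribute only the surface term."""
    sigma = np.full_like(z, sigma_c0, dtype=float)
    for z_top, z_bot, s_n, a_n, b_n in layers:
        mask = (z >= z_top) & (z <= z_bot)   # layer's depth interval
        sigma[mask] += s_n + a_n * z[mask] + b_n * z[mask] ** 2
    return sigma

def find_pores(z, sigma, threshold=0.1):
    """Flag depths where resistance drops to near-zero deformation stress."""
    return z[sigma < threshold]
```

Differentiating the profile (e.g. with `np.gradient`) would give the dσ/dz term the abstract uses to separate layer changes from rod friction.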

  10. INTEGRATION OF QSAR AND SAR METHODS FOR THE MECHANISTIC INTERPRETATION OF PREDICTIVE MODELS FOR CARCINOGENICITY

    Directory of Open Access Journals (Sweden)

    Natalja Fjodorova

    2012-07-01

Full Text Available The knowledge-based Toxtree expert system (SAR approach) was integrated with the statistically based counter propagation artificial neural network (CP ANN) model (QSAR approach) to contribute to a better mechanistic understanding of a carcinogenicity model for non-congeneric chemicals, using Dragon descriptors and carcinogenic potency for rats as a response. The transparency of the CP ANN algorithm was demonstrated using an intrinsic mapping technique, specifically Kohonen maps. Chemical structures were represented by Dragon descriptors that express the structural and electronic features of molecules, such as their shape and the electronic surroundings related to the reactivity of molecules. It was illustrated how the descriptors are correlated with particular structural alerts (SAs) for carcinogenicity with a recognized mechanistic link to carcinogenic activity. Moreover, the Kohonen mapping technique enables one to examine the separation of carcinogens and non-carcinogens (for rats) within a family of chemicals with a particular SA for carcinogenicity. The mechanistic interpretation of models is important for the evaluation of the safety of chemicals.

  11. Collaborative Cloud Manufacturing: Design of Business Model Innovations Enabled by Cyberphysical Systems in Distributed Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Erwin Rauch

    2016-01-01

    Full Text Available Collaborative cloud manufacturing, as a concept of distributed manufacturing, allows different opportunities for changing the logic of generating and capturing value. Cyberphysical systems and the technologies behind them are the enablers for new business models which have the potential to be disruptive. This paper introduces the topics of distributed manufacturing as well as cyberphysical systems. Furthermore, the main business model clusters of distributed manufacturing systems are described, including collaborative cloud manufacturing. The paper aims to provide support for developing business model innovations based on collaborative cloud manufacturing. Therefore, three business model architecture types of a differentiated business logic are discussed, taking into consideration the parameters which have an influence and the design of the business model and its architecture. As a result, new business models can be developed systematically and new ideas can be generated to boost the concept of collaborative cloud manufacturing within all sustainable business models.

  12. Adoption of information technology enabled innovations by primary care physicians: model and questionnaire development.

    OpenAIRE

    Dixon, D. R.; Dixon, B. J.

    1994-01-01

    A survey instrument was developed based on a model of the substantive factors influencing the adoption of Information Technology (IT) enabled innovations by physicians. The survey was given to all faculty and residents in a Primary Care teaching institution. Computerized literature searching was the IT innovation studied. The results support the role of the perceived ease of use and the perceived usefulness of an innovation as well as the intent to use an innovation as factors important for i...

  13. Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm

    Science.gov (United States)

    Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will

    2016-01-01

    With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish workflow for and demonstrate a unique set of web application tools for linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema and NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which is captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.

  14. COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data

    Science.gov (United States)

    Plis, Sergey M.; Sarwate, Anand D.; Wood, Dylan; Dieringer, Christopher; Landis, Drew; Reed, Cory; Panta, Sandeep R.; Turner, Jessica A.; Shoemaker, Jody M.; Carter, Kim W.; Thompson, Paul; Hutchison, Kent; Calhoun, Vince D.

    2016-01-01

    The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and “closed” repositories by developing privacy-preserving versions of widely-used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to “pooled-data” solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user friendly privacy enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions. PMID:27594820
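The iterate-across-sites idea can be sketched with a toy decentralized ridge regression in which only gradients, never raw data, leave each site. This illustrates the principle of converging to the pooled-data solution, not COINSTAC's actual implementation or API:

```python
import numpy as np

def decentralized_ridge(sites, lam=0.1, lr=0.1, iters=1000):
    """Fit ridge regression across sites without pooling raw data: each
    site computes a gradient on its local (X, y); only the gradients are
    aggregated by the coordinator."""
    d = sites[0][0].shape[1]
    n_total = sum(len(y) for _, y in sites)
    w = np.zeros(d)
    for _ in range(iters):
        grad = lam * w
        for X, y in sites:                        # in a real deployment this
            grad += X.T @ (X @ w - y) / n_total   # loop runs remotely per site
        w -= lr * grad
    return w

def pooled_ridge(X, y, lam=0.1):
    """Reference solution computed as if all data were in one place."""
    d, n = X.shape[1], len(y)
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
```

After enough iterations the decentralized estimate matches the pooled closed-form solution, mirroring the "as if the entire data were in hand" convergence described above.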

  15. Implementations and interpretations of the talbot-ogden infiltration model

    KAUST Repository

    Seo, Mookwon

    2014-11-01

    The interaction between surface and subsurface hydrology flow systems is important for water supplies. Accurate, efficient numerical models are needed to estimate the movement of water through unsaturated soil. We investigate a water infiltration model and develop very fast serial and parallel implementations that are suitable for a computer with a graphical processing unit (GPU).

  16. Re-orienting a remote acute care model towards a primary health care approach: key enablers.

    Science.gov (United States)

    Carroll, Vicki; Reeve, Carole A; Humphreys, John S; Wakerman, John; Carter, Maureen

    2015-01-01

The objective of this study was to identify the key enablers of change in re-orienting a remote acute care model to comprehensive primary healthcare delivery. The setting of the study was a 12-bed hospital in Fitzroy Crossing, Western Australia. Individual key informant, in-depth interviews were completed with five of six identified senior leaders involved in the development of the Fitzroy Valley Health Partnership. Interviews were recorded and transcripts were thematically analysed by two investigators for shared views about the enabling factors strengthening primary healthcare delivery in a remote region of Australia. Participants described the establishment of a culturally relevant primary healthcare service, using a community-driven, 'bottom up' approach characterised by extensive community participation. The formal partnership across the government and community controlled health services was essential, both to enable change to occur and to provide sustainability in the longer term. A hierarchy of major themes emerged. These included community participation, community readiness and desire for self-determination; linkages in the form of a government community controlled health service partnership; leadership; adequate infrastructure; enhanced workforce supply; supportive policy; and primary healthcare funding. The strong united leadership shown by the community and the health service enabled barriers to be overcome and it maximised the opportunities provided by government policy changes. The concurrent alignment around a common vision enabled implementation of change. The key principle learnt from this study is the importance of community and health service relationships and local leadership around a shared vision for the re-orientation of community health services.

  17. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    Science.gov (United States)

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration.
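A minimal sketch of a multivariate calibration of the kind described is PLS1 regression via the NIPALS algorithm, fit on synthetic spectrum-like data. The paper's actual models, preprocessing, and CHO process data are proprietary; everything below is an illustrative assumption:

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """PLS1 regression (NIPALS): returns a coefficient vector B mapping
    mean-centered spectra to the mean-centered response."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y                      # weight vector
        w /= np.linalg.norm(w)
        t = X @ w                        # scores
        tt = t @ t
        p = X.T @ t / tt                 # spectral loadings
        q = (y @ t) / tt                 # response loading
        X = X - np.outer(t, p)           # deflate X and y
        y = y - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)
```

New spectra are predicted as `(X_new - X_train_mean) @ B + y_train_mean`; in a bioreactor setting the response would be a metabolite concentration measured offline during calibration runs.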

  18. ASPECTS OF MATHEMATICAL MODELING AND INTERPRETATION OF A MANUFACTURING SYSTEM

    Directory of Open Access Journals (Sweden)

    Mihaela ALDEA

    2013-05-01

Full Text Available In developing this paper we started from a model that allows a detailed decoding of causal relationships and derivation of the laws that determine the evolution of the phenomenon. The model chosen for the study is a discrete event system applicable to optimizing the transport system used in pottery manufacturing. In order to simulate the manufacturing process we chose the Matlab package, which contains the pntool library, through which the modeling of the analyzed graphs can be realized. Since the timings of manufacture are very high and the process simulation is conducted with difficulty, we divided the graph according to the transport system.

  19. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132

  20. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348
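The logistic-regression half of the approach can be sketched as follows. The feature set (volume, PSA, DRE, IPSS) follows the abstract, but the training data here are synthetic and all coefficients are hypothetical, not the paper's fitted model:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Logistic regression by batch gradient descent. Columns of X might be
    standardized volume, PSA, DRE and IPSS; y = 1 for a positive finding."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of the log-loss
    return w

def predict_proba(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

A clinical decision-support version would threshold the probability (or route low-confidence cases to "reject both"), and would be validated with leave-one-out cross-validation as in the study.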

  1. Service and business model for technology enabled and home-based cardiac rehabilitation programs.

    Science.gov (United States)

    Sarela, Antti; Whittaker, Frank; Korhonen, Ilkka

    2009-01-01

    Cardiac rehabilitation programs are comprehensive lifestyle programs aimed at preventing the recurrence of a cardiac event. However, current programs globally have significantly low levels of uptake. A home-based model can be a viable alternative to hospital-based programs. We developed and analysed a service and business model for home-based cardiac rehabilitation based on personal mentoring using mobile phones and web services. We analysed the different organizational and economic aspects of setting up and running the home-based program and propose a potential business model for a sustainable and viable service. The model can be extended to the management of other chronic conditions to enable the transition from hospital- and care-centre-based treatments to sustainable home-based care.

  2. A MULTI-OBJECTIVE ROBUST OPERATION MODEL FOR ELECTRONIC MARKET ENABLED SUPPLY CHAIN WITH UNCERTAIN DEMANDS

    Institute of Scientific and Technical Information of China (English)

    Jiawang XU; Xiaoyuan HUANG; Nina YAN

    2007-01-01

    A multi-objective robust operation model is proposed in this paper for an electronic market enabled supply chain consisting of multiple suppliers and multiple customers with uncertain demands. Suppliers in this supply chain provide many kinds of products to different customers directly or through the electronic market. Uncertain demands are described as a scenario set with certain probabilities; the supply chain operation model is constructed using the robust optimization method based on scenario analysis. The proposed operation model is a multi-objective programming problem satisfying several conflicting objectives, such as meeting the demands of all customers, minimizing the system cost, keeping the availability of suppliers' capacities above a certain level, and ensuring robustness of the decision to uncertain demands. The results of numerical examples show that the solution of the model is conservative; however, it can effectively ensure the robustness of the operation of the supply chain.

  3. Some Observations on the Identification and Interpretation of the 3PL IRT Model

    Science.gov (United States)

    Azevedo, Caio Lucidius Naberezny

    2009-01-01

    The paper by Maris, G., & Bechger, T. (2009), entitled "On Interpreting the Model Parameters for the Three Parameter Logistic Model," addressed two important questions concerning the three parameter logistic (3PL) item response theory (IRT) model (and, in a broader sense, concerning all IRT models). The first one is related to the model…

  4. Adoption of mobile learning among 3g-enabled handheld users using extended technology acceptance model

    Directory of Open Access Journals (Sweden)

    Fadare Oluwaseun Gbenga

    2013-12-01

    This paper examines various constructs of an extended Technology Acceptance Model (TAM) that theoretically influence the adoption and acceptability of mobile learning among 3G-enabled mobile users. The mobile learning activities used for this study were drawn from the behaviourist and "learning and teaching support" educational paradigms. Online and manual survey instruments were used to gather data, and structural equation modelling techniques were then employed to explain the adoption processes of the hypothesized research model. A theoretical model, ETAM, was developed based on TAM. Our results show that the psychometric constructs of TAM can be extended, and that ETAM is well suited and a good pedagogical tool for understanding mobile learning with 3G-enabled handheld devices in the southwest part of Nigeria. Cognitive constructs, attitude toward m-learning, and self-efficacy play significant roles in influencing behavioural intention for mobile learning, of which self-efficacy is the most important construct. Implications of the results and directions for future research are discussed.

  5. Geometric interpretation for the interacting-boson-fermion model

    Energy Technology Data Exchange (ETDEWEB)

    Leviatan, A.

    1988-08-11

    A geometrically oriented approach for studying the interacting-boson-fermion model for odd-A nuclei is presented. A deformed single-particle Hamiltonian is derived by means of an algebraic Born-Oppenheimer treatment. Observables concerning the spectrum and transitions are calculated for the case of a single-j fermion coupled to a prolate core, at large boson number and arbitrary deformations.

  6. Stoichiometric plant-herbivore models and their interpretation

    NARCIS (Netherlands)

    Kuang, Y.; Huisman, J.; Elser, J.J.

    2004-01-01

    The purpose of this note is to mechanistically formulate a mathematically tractable model that specifically deals with the dynamics of plant-herbivore interaction in a closed phosphorus (P)-limiting environment. The key to our approach is the employment of the plant cell P quota and the Droop

  7. Interpretation of electrochemical impedance spectroscopy (EIS) circuit model for soils

    Institute of Scientific and Technical Information of China (English)

    韩鹏举; 张亚芬; 陈幼佳; 白晓红

    2015-01-01

    Based on the three different kinds of conductive paths in the microstructure of soil and the theory of electrochemical impedance spectroscopy (EIS), an integrated equivalent circuit model and impedance formula for soils are proposed, containing six meaningful resistance and reactance parameters. Considering the conductive properties of soils and dispersion effects, mathematical equations for the impedance under various circuit models were deduced and studied. The mathematical expression presents two semicircles in the theoretical EIS Nyquist spectrum; the depression of one semicircle's center is neglected to simplify the equivalent model. Based on the measured parameters of the EIS Nyquist spectrum, meaningful soil parameters can easily be determined. Additionally, EIS was used to investigate soil properties at different water contents, along with the mathematical relationships and mechanisms between the physical parameters and water content. In the Bode graphs, the magnitude of the impedance decreases as the testing frequency and water content increase. The proposed model helps in better understanding soil microstructure and properties, and offers more reasonable explanations of EIS spectra.
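The two-semicircle Nyquist behaviour described above is characteristic of two parallel resistor-capacitor stages in series with a lead resistance. A sketch of that canonical form (the parameter values below are illustrative, not soil-derived):

```python
def impedance(omega, r0, r1, c1, r2, c2):
    """Z(w) = R0 + R1/(1 + j*w*R1*C1) + R2/(1 + j*w*R2*C2):
    a series resistance plus two parallel-RC stages, which traces two
    semicircles in the Nyquist (Re Z vs. -Im Z) plane."""
    z = complex(r0, 0.0)
    z += r1 / (1 + 1j * omega * r1 * c1)
    z += r2 / (1 + 1j * omega * r2 * c2)
    return z

# illustrative parameters chosen so the two time constants are well separated
params = dict(r0=50.0, r1=400.0, c1=1e-6, r2=2000.0, c2=1e-3)
z_low = impedance(1e-3, **params)   # low frequency:  Re Z -> R0 + R1 + R2
z_high = impedance(1e9, **params)   # high frequency: Re Z -> R0
```

Fitting such an expression to a measured spectrum yields the resistance and capacitance parameters directly, which is how the measured Nyquist plot is turned into meaningful soil parameters.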

  8. A statistical model for interpreting computerized dynamic posturography data

    Science.gov (United States)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
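The latent-variable idea can be sketched as a censored-normal (tobit-like) log-likelihood: a recorded fall contributes the probability mass of the latent ES lying at or below zero, while an observed score contributes the normal density. This is a simplification of the model above, which additionally makes loss of balance conditionally probabilistic given the realized latent score; the scores and parameters below are invented:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def censored_loglik(scores, mu, sigma):
    """Log-likelihood of equilibrium scores under a latent normal ES;
    a recorded 0 ('fall') is treated as the latent score being <= 0."""
    ll = 0.0
    for s in scores:
        if s == 0:                               # fall: censored observation
            ll += math.log(normal_cdf((0 - mu) / sigma))
        else:                                    # directly observed score
            z = (s - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
    return ll

scores = [72, 80, 65, 0, 78, 70, 0, 85]          # hypothetical ES trials
ll = censored_loglik(scores, mu=70.0, sigma=25.0)
```

Maximizing such a likelihood over mu and sigma (and over regression coefficients when mu depends on covariates) is what allows inference despite the mixed discrete-continuous response.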

  9. Interpretation of Higgs and SUSY searches in MSUGRA and GMSB models

    CERN Document Server

    De Vivie de Régie, J B

    2000-01-01

    Higgs and SUSY searches performed by the ALEPH experiment at LEP are interpreted in the framework of two constrained R-parity conserving models: minimal supergravity and minimal gauge mediated supersymmetry breaking. (4 refs).

  10. An exotic k-essence interpretation of interactive cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Forte, Monica [Universidad de Buenos Aires, Departamento de Fisica, Facultad de ciencias Exactas y Naturales, Buenos Aires (Argentina)

    2016-01-15

    We define a generalization of scalar fields with non-canonical kinetic term which we call exotic k-essence or, briefly, exotik. These fields are generated by the global description of cosmological models with two interactive fluids in the dark sector and under certain conditions they correspond to usual k-essences. The formalism is applied to the cases of constant potential and of inverse square potential and also we develop the purely exotik version for the modified holographic Ricci type (MHR) of dark energy, where the equations of state are not constant. With the kinetic function F = 1 + mx and the inverse square potential we recover, through the interaction term, the identification between k-essences and quintessences of an exponential potential, already known for Friedmann-Robertson-Walker and Bianchi type I geometries. Worked examples are shown that include the self-interacting MHR and also models with crossing of the phantom divide line (PDL). (orig.)

  12. Interpreting network communicability with stochastic models and data

    CERN Document Server

    Colman, Ewan

    2016-01-01

    The recently introduced concept of dynamic communicability is a valuable tool for ranking the importance of nodes in a temporal network. Two metrics, broadcast score and receive score, were introduced to measure the centrality of a node with respect to a model of contagion based on time-respecting walks. This article examines the temporal and structural factors influencing these metrics by considering a versatile stochastic temporal network model. We analytically derive formulae to accurately predict the expectation of the broadcast and receive scores when one or more columns in a temporal edge-list are shuffled. These methods are then applied to two publicly available data-sets and we quantify how much the centrality of each individual depends on structural or temporal influences. From our analysis we highlight two practical contributions: a way to control for temporal variation when computing dynamic communicability, and the conclusion that the broadcast and receive scores can, under a range of circumstance...
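The broadcast and receive scores are built from the time-ordered product Q = prod_k (I - a*A_k)^(-1) of resolvents of the snapshot adjacency matrices, as in the dynamic communicability of Grindrod and Higham: row sums of Q give broadcast centrality, column sums give receive centrality. A small pure-Python sketch (the Neumann-series inverse and the tiny 3-node network are illustrative):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def resolvent(A, a, terms=50):
    """Approximate (I - a*A)^(-1) = I + aA + (aA)^2 + ... (valid for small a)."""
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in out]
    for _ in range(terms):
        power = matmul(power, [[a * x for x in row] for row in A])
        out = [[out[i][j] + power[i][j] for j in range(n)] for i in range(n)]
    return out

# two snapshots of a 3-node temporal network (directed adjacency matrices)
A1 = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]   # node 0 contacts 1 and 2 first
A2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]   # then node 1 contacts 2
Q = matmul(resolvent(A1, 0.3), resolvent(A2, 0.3))
broadcast = [sum(row) for row in Q]                          # row sums
receive = [sum(Q[i][j] for i in range(3)) for j in range(3)]  # column sums
```

The time ordering matters: the walk 0 -> 1 -> 2 respects the snapshot order and contributes to Q[0][2], which is exactly the "time-respecting walk" structure the shuffling analysis in this paper perturbs.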

  13. Interpretation of topologically restricted measurements in lattice sigma-models

    CERN Document Server

    Bautista, Irais; Gerber, Urs; Hofmann, Christoph P; Mejía-Díaz, Héctor; Prado, Lilian

    2014-01-01

    We consider models with topological sectors, and difficulties with their Monte Carlo simulation. In particular we are concerned with the situation where a simulation has an extremely long auto-correlation time with respect to the topological charge. Then reliable numerical measurements are possible only within single topological sectors. The challenge is to assemble such restricted measurements to obtain an approximation for the full-fledged result, which corresponds to the correct sampling over the entire set of configurations. Under certain conditions this is possible, and it provides in addition an estimate for the topological susceptibility chi_t. Moreover, the evaluation of chi_t might be feasible even from data in just one topological sector, based on the correlation of the topological charge density. Here we present numerical test results for these techniques in the framework of non-linear sigma-models.

  14. Enabling Real-time Water Decision Support Services Using Model as a Service

    Science.gov (United States)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at University of Texas Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application has been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.
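RAPID's river routing is based on a Muskingum-type scheme, so the computation wrapped by such a workflow service can be sketched for a single reach (the K, X, and hydrograph values below are illustrative, not calibrated):

```python
def muskingum(inflow, k=2.0, x=0.2, dt=1.0):
    """Route an inflow hydrograph through one reach using the Muskingum
    method: O[t] = c1*I[t] + c2*I[t-1] + c3*O[t-1], with c1 + c2 + c3 = 1.
    k is the storage time constant, x the weighting factor, dt the step."""
    d = 2 * k * (1 - x) + dt
    c1 = (dt - 2 * k * x) / d
    c2 = (dt + 2 * k * x) / d
    c3 = (2 * k * (1 - x) - dt) / d
    out = [inflow[0]]                       # assume initial steady state
    for t in range(1, len(inflow)):
        out.append(c1 * inflow[t] + c2 * inflow[t - 1] + c3 * out[-1])
    return out

# a flood pulse entering a reach with steady baseflow of 10 units
inflow = [10, 10, 30, 60, 40, 20, 10, 10, 10, 10, 10, 10]
outflow = muskingum(inflow)
```

The routed hydrograph is attenuated and delayed relative to the inflow, and decays back to baseflow; a "Model as a Service" deployment simply exposes such a computation (over a whole river network, in parallel) behind a Web endpoint.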

  15. Model Transformation for Model Driven Development of Semantic Web Enabled Multi-Agent Systems

    NARCIS (Netherlands)

    Kardas, G.; Göknil, Arda; Dikenelli, O.; Topaloglu, N.Y.; Weyns, D.; Holvoet, T.

    2007-01-01

    Model Driven Development (MDD) provides an infrastructure that simplifies Multi-agent System (MAS) development by increasing the abstraction level. In addition to defining models, transformation process for those models is also crucial in MDD. On the other hand, MAS modeling should also take care of

  17. Stieltjes electrostatic model interpretation for bound state problems

    Indian Academy of Sciences (India)

    K V S Shiv Chaitanya

    2014-07-01

    In this paper, it is shown that the Stieltjes electrostatic model and the quantum Hamilton-Jacobi formalism are analogous to each other. This analogy allows the bound state problem to be mimicked by unit moving imaginary charges $i\hbar$, which are placed between the two fixed imaginary charges arising from the classical turning points of the potential. The interaction potential between the unit moving imaginary charges $i\hbar$ is given by the logarithm of the wave function. For an exactly solvable potential, this system attains its stable equilibrium position at the zeros of the orthogonal polynomials, depending upon the interval of the classical turning points.
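The equilibrium condition can be checked directly for an exactly solvable case: the zeros x_i of the physicists' Hermite polynomial H_n satisfy sum over j != i of 1/(x_i - x_j) = x_i, which is precisely the balance between the logarithmic repulsion of the moving "charges" and the confining potential. A quick numerical check for H_3(x) = 8x^3 - 12x, whose zeros are 0 and +/- sqrt(3/2):

```python
import math

# zeros of the physicists' Hermite polynomial H_3(x) = 8x^3 - 12x
zeros = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]

def equilibrium_residual(xs):
    """Max violation of sum_{j != i} 1/(x_i - x_j) = x_i over all zeros:
    zero residual means the charges sit at electrostatic equilibrium."""
    worst = 0.0
    for i, xi in enumerate(xs):
        force = sum(1.0 / (xi - xj) for j, xj in enumerate(xs) if j != i)
        worst = max(worst, abs(force - xi))
    return worst

residual = equilibrium_residual(zeros)
```

The same check passes for any n (the zeros of H_n follow from the three-term recurrence), which is the content of Stieltjes' classical result invoked here.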

  18. A new model for enabling innovation in appropriate technology for sustainable development

    Directory of Open Access Journals (Sweden)

    Joshua Pearce

    2012-08-01

    The task of providing for basic human necessities such as food, water, shelter, and employment is growing as the world’s population continues to expand amid climate destabilization. One of the greatest challenges to development and innovation is access to relevant knowledge for quick technological dissemination. However, with the rise and application of advanced information technologies there is a great opportunity for knowledge building, community interaction, innovation, and collaboration using various online platforms. This article examines the potential of a novel model to enable innovation for collaborative enterprise, learning, and appropriate technology development on a global scale.

  19. A framework for structural modelling of an RFID-enabled intelligent distributed manufacturing control system

    Directory of Open Access Journals (Sweden)

    Barenji, Ali Vatankhah

    2014-08-01

    A modern manufacturing facility typically contains several distributed control systems, such as machining stations, assembly stations, and material handling and storage systems. Integrating Radio Frequency Identification (RFID) technology into these control systems provides a basis for monitoring and configuring their components in real-time. With the right structural modelling, it is then possible to evaluate designs and translate them into new operational applications almost immediately. This paper proposes an architecture for the structural modelling of an intelligent distributed control system for a manufacturing facility, utilising RFID technology. Emphasis is placed on a requirements analysis of the manufacturing system, the design of RFID-enabled intelligent distributed control systems using Unified Modelling Language (UML) diagrams, and the use of efficient algorithms and tools for the implementation of these systems.

  20. Ames Culture Chamber System: Enabling Model Organism Research Aboard the international Space Station

    Science.gov (United States)

    Steele, Marianne

    2014-01-01

    Understanding the genetic, physiological, and behavioral effects of spaceflight on living organisms and elucidating the molecular mechanisms that underlie these effects are high priorities for NASA. Certain organisms, known as model organisms, are widely studied to help researchers better understand how all biological systems function. Small model organisms such as nematodes, slime mold, bacteria, green algae, yeast, and moss can be used to study the effects of micro- and reduced gravity at both the cellular and systems level over multiple generations. Many model organisms have sequenced genomes and published data sets on their transcriptomes and proteomes that enable scientific investigations of the molecular mechanisms underlying the adaptations of these organisms to space flight.

  1. A Computational Model of Syntactic Processing Ambiguity Resolution from Interpretation

    CERN Document Server

    Niv, M

    1994-01-01

    Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fact, the process of ambiguity resolution is almost always unconscious. It is not infallible, however, as example 1 demonstrates. 1. The horse raced past the barn fell. This sentence is perfectly grammatical, as is evident when it appears in the following context: 2. Two horses were being shown off to a prospective buyer. One was raced past a meadow, and the other was raced past a barn. ... Grammatical yet unprocessable sentences such as 1 are called `garden-path sentences.' Their existence provides an opportunity to investigate the human sentence processing mechanism by studying how and when it fails. The aim of this thesis is to construct a computational model of language understanding which can predict processing difficulty. The data to be modeled are known examples of garden path and non-garden path sentences, and other results from psycholinguistics. It is widely believed that there are two distinct loci...

  2. Virtual Particle Interpretation of Quantum Mechanics - a non-dualistic model of QM with a natural probability interpretation

    CERN Document Server

    Karimäki, Janne Mikael

    2012-01-01

    An interpretation of non-relativistic quantum mechanics is presented in the spirit of Erwin Madelung's hydrodynamic formulation of QM and Louis de Broglie's and David Bohm's pilot wave models. The aims of the approach are as follows: 1) to have a clear ontology for QM, 2) to describe QM in a causal way, 3) to get rid of the wave-particle dualism in pilot wave theories, 4) to provide a theoretical framework for describing creation and annihilation of particles, and 5) to provide a possible connection between particle QM and virtual particles in QFT. These goals are achieved if the wave function is replaced by a fluid of so-called virtual particles. It is also assumed that in this fluid of virtual particles there exist a few real particles and that only these real particles can be directly observed. This has relevance for the measurement problem in QM, and it is found that quantum probabilities arise in a very natural way from the structure of the theory. The model presented here is very similar to a recent computati...

  3. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Science.gov (United States)

    Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-01-01

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four genomic-enabled prediction models: (1) a single-environment, main genotypic effect model (SM); (2) a multi-environment, main genotypic effects model (MM); (3) a multi-environment, single variance G×E deviation model (MDs); and (4) a multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, denoted GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (the HEL and USP data sets), comprising different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK over models with GB were smaller than those achieved for GY. These gains in prediction accuracy also decreased as the prediction problem became more difficult. PMID:28455415
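The two kernels compared above can be sketched from an n x p marker matrix X: the linear GBLUP kernel is K = XX'/p, and the Gaussian kernel is K(i, j) = exp(-d_ij^2 / (h * q)), where d_ij is the Euclidean distance between marker profiles, q a scaling constant (a median or quantile of the squared distances; normalisation choices vary across studies), and h a bandwidth. A toy sketch with invented coded markers:

```python
import math

def gblup_kernel(X):
    """Linear kernel K = X X' / p for an n x p marker matrix."""
    p = len(X[0])
    return [[sum(a * b for a, b in zip(xi, xj)) / p for xj in X] for xi in X]

def gaussian_kernel(X, h=1.0):
    """GK(i,j) = exp(-||xi - xj||^2 / (h * q)), with q the median of the
    pairwise squared distances (assumed nonzero for distinct lines)."""
    n = len(X)
    d2 = [[sum((a - b) ** 2 for a, b in zip(xi, xj)) for xj in X] for xi in X]
    flat = sorted(d2[i][j] for i in range(n) for j in range(i + 1, n))
    q = flat[len(flat) // 2]
    return [[math.exp(-v / (h * q)) for v in row] for row in d2]

# toy coded markers (-1, 0, 1) for four hypothetical lines
X = [[1, -1, 0, 1], [1, 0, 0, 1], [-1, 1, 1, -1], [0, 1, -1, -1]]
GB = gblup_kernel(X)
GK = gaussian_kernel(X)
```

Either kernel is then plugged into the same mixed-model machinery; the nonlinearity of GK in the markers is what lets it capture some of the epistatic and G×E structure that the linear GB kernel misses.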

  4. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  5. Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'

    Science.gov (United States)

    Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno

    2015-04-01

    Interoperability is a prerequisite for partners involved in performing a collaboration. As a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach for specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process, prior to process implementation. To enable the verification of these interoperability requirements, it is first necessary to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. A verification technique must then be introduced, and model checking is the preferred option herein. This paper focuses on the application of the model checker UPPAAL to verify interoperability requirements for the given collaborative process model. This entails, first, translating the collaborative process model from BPMN into the UPPAAL modelling language called 'Network of Timed Automata', and second, formalising the interoperability requirements into properties in the dedicated UPPAAL language, i.e. the temporal logic TCTL.

  6. Enabling HCCI modeling: The RIOT/CMCS Web Service for Automatic Reaction Mechanism Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Oluwole, O; Pitz, W J; Schuchardt, K; Rahn, L A; Green, Jr., W H; Leahy, D; Pancerella, C; Sjöberg, M; Dec, J

    2005-12-12

    New approaches are being developed to facilitate multidisciplinary collaborative research of Homogeneous Charge Compression Ignition (HCCI) combustion processes. In this paper, collaborative sharing of the Range Identification and Optimization Toolkit (RIOT) and related data and models is discussed. RIOT is a developmental approach to reduce the computational complexity of detailed chemical kinetic mechanisms, enabling their use in modeling kinetically-controlled combustion applications such as HCCI. These approaches are being developed and piloted as a part of the Collaboratory for Multiscale Chemical Sciences (CMCS) project. The capabilities of the RIOT code are shared through a portlet in the CMCS portal that allows easy specification and processing of RIOT inputs, remote execution of RIOT, tracking of data pedigree and translation of RIOT outputs (such as the reduced model) to a table view and to the commonly-used CHEMKIN mechanism format. The reduced model is thus immediately ready to be used for more efficient simulation of the chemically reacting system of interest. This effort is motivated by the need to improve computational efficiency in modeling HCCI systems. Preliminary use of the web service to obtain reduced models for this application has yielded computational speedup factors of up to 20 as presented in this paper.
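The flavor of automatic mechanism reduction can be conveyed by a naive stand-in: estimate each reaction's contribution to the total rate over a set of sampled states, and drop reactions that never exceed a tolerance. RIOT's actual range identification and optimization is considerably more sophisticated; everything below (reaction names, rate functions, states) is invented for illustration:

```python
def reduce_mechanism(reactions, states, tol=0.05):
    """Keep a reaction if, in at least one sampled state, it carries more
    than `tol` of the total rate. `reactions` maps name -> rate function."""
    kept = set()
    for state in states:
        rates = {name: rate(state) for name, rate in reactions.items()}
        total = sum(rates.values())
        for name, r in rates.items():
            if total > 0 and r / total > tol:
                kept.add(name)
    return {n: f for n, f in reactions.items() if n in kept}

# toy mechanism: rates as simple functions of temperature (illustrative)
reactions = {
    "R1: fuel + O2 -> products": lambda T: 5.0 * T,
    "R2: minor side channel":    lambda T: 0.001 * T,
    "R3: radical branching":     lambda T: 0.04 * T * T,
}
reduced = reduce_mechanism(reactions, states=[300.0, 1500.0], tol=0.05)
```

The payoff is the same as in the paper: a smaller mechanism that behaves like the detailed one over the sampled operating range, which is what makes the reported ~20x simulation speedups possible.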

  7. Spin models inferred from patient data faithfully describe HIV fitness landscapes and enable rational vaccine design

    CERN Document Server

    Shekhar, Karthik; Ferguson, Andrew L; Barton, John P; Kardar, Mehran; Chakraborty, Arup K

    2013-01-01

    Mutational escape from vaccine induced immune responses has thwarted the development of a successful vaccine against AIDS, whose causative agent is HIV, a highly mutable virus. Knowing the virus' fitness as a function of its proteomic sequence can enable rational design of potent vaccines, as this information can focus vaccine induced immune responses to target mutational vulnerabilities of the virus. Spin models have been proposed as a means to infer intrinsic fitness landscapes of HIV proteins from patient-derived viral protein sequences. These sequences are the product of non-equilibrium viral evolution driven by patient-specific immune responses, and are subject to phylogenetic constraints. How can such sequence data allow inference of intrinsic fitness landscapes? We combined computer simulations and variational theory à la Feynman to show that, in most circumstances, spin models inferred from patient-derived viral sequences reflect the correct rank order of the fitness of mutant viral strains. Our f...

  8. Modeling of surface myoelectric signals--Part II: Model-based signal interpretation.

    Science.gov (United States)

    Merletti, R; Roy, S H; Kupa, E; Roatta, S; Granata, A

    1999-07-01

    Experimental electromyogram (EMG) data from the human biceps brachii were simulated using the model described in [10] of this work. A multichannel linear electrode array, spanning the length of the biceps, was used to detect monopolar and bipolar signals, from which double differential signals were computed, during either voluntary or electrically elicited isometric contractions. For relatively low-level voluntary contractions (10%-30% of maximum force), individual firings of three to four different motor units were identified and their waveforms were closely approximated by the model. Motor unit parameters such as depth, size, fiber orientation and length, location of innervation and tendinous zones, propagation velocity, and source width were estimated using the model. Two applications of the model are described. The first analyzes the effects of electrode rotation with respect to the muscle fiber direction and shows the possibility of conduction velocity (CV) over- and under-estimation. The second focuses on the myoelectric manifestations of fatigue during a sustained electrically elicited contraction and the interrelationship between muscle fiber CV, spectral and amplitude variables, and the length of the depolarization zone. It is concluded that a) surface EMG detection using an electrode array, when combined with a model of signal propagation, provides a useful method for understanding the physiological and anatomical determinants of EMG waveform characteristics, and b) the model provides a way for the interpretation of fatigue plots.

  9. Social networks enabled coordination model for cost management of patient hospital admissions.

    Science.gov (United States)

    Uddin, Mohammed Shahadat; Hossain, Liaquat

    2011-09-01

    In this study, we introduce a social networks enabled coordination model for exploring the effect of the network positions of "patient," "physician," and "hospital" actors, in a patient-centered care network that evolves during the patient hospitalization period, on the total cost of coordination. An actor is a node, representing an entity such as an individual or organization in a social network. In our analysis of actor networks and coordination in the healthcare literature, we identified a significant gap: a number of promising hospital coordination models (e.g., the Guided Care Model, the Chronic Care Model) have been developed for the current healthcare system focusing on quality of service and patient satisfaction, rather than on the cost of coordination. The health insurance dataset for total hip replacement (THR) from the Hospital Contribution Fund, a prominent Australian health insurance company, is analyzed to examine our proposed coordination model. We consider the network attributes of degree, connectedness, in-degree, out-degree, and tie strength to measure the network positions of actors. To measure the cost of coordination for a particular hospital, the average of total hospitalization expenses over all THR admissions is used. Results show that the network positions of "patient," "physician," and "hospital" actors, across all admissions to a particular hospital, affect the average total hospitalization expenses of that hospital. These results can be used as guidelines to set up a cost-effective healthcare practice structure for patient hospitalization expenses.

  10. Statistical analysis of road-vehicle-driver interaction as an enabler to designing behavioural models

    Science.gov (United States)

    Chakravarty, T.; Chowdhury, A.; Ghose, A.; Bhaumik, C.; Balamuralidhar, P.

    2014-03-01

    Telematics is an important technology enabler for intelligent transportation systems. By deploying on-board diagnostic devices, the signatures of vehicle vibration, along with location and time, are recorded. Detailed analysis of the collected signatures offers deep insights into the state of the objects under study. Towards that objective, we carried out experiments by deploying a telematics device in one of the office buses that ferry employees to the office and back. Data were collected from a 3-axis accelerometer and GPS, along with speed and time, for all journeys. In this paper, we present initial results of the above exercise by applying statistical methods to derive information through systematic analysis of the data collected over four months. It is demonstrated that the higher-order derivative of the measured Z-axis acceleration samples displays the properties of a Weibull distribution when the time axis is replaced by the amplitude of the processed acceleration data. This observation offers a method to predict future behaviour, where deviations from the prediction are classified as context-based aberrations or progressive degradation of the system. In addition, we capture the relationship between the speed of the vehicle and the median of the jerk energy samples using regression analysis. Such results offer an opportunity to develop a robust method for modelling road-vehicle interaction, thereby enabling prediction of driving behaviour, condition-based maintenance, and the like.
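
    A sketch of the two steps described above: deriving jerk from the Z-axis acceleration samples, then regressing the median jerk energy on vehicle speed with ordinary least squares. The sampling interval and all sample values are invented; this is not the authors' pipeline.

```python
dt = 0.1  # sampling interval in seconds (assumption)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_jerk_energy(accel_z):
    # Jerk as the first difference of acceleration over the sampling interval.
    jerk = [(b - a) / dt for a, b in zip(accel_z, accel_z[1:])]
    return median([j * j for j in jerk])

# One (speed, Z-acceleration trace) pair per journey segment -- invented data.
segments = [
    (20.0, [0.0, 0.1, 0.05, 0.2, 0.1]),
    (40.0, [0.0, 0.3, -0.1, 0.4, 0.0]),
    (60.0, [0.0, 0.5, -0.3, 0.6, -0.2]),
]
xs = [s for s, _ in segments]
ys = [median_jerk_energy(a) for _, a in segments]

# Ordinary least squares for y = slope * x + intercept.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
```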

  11. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...

  12. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple...

  13. Using AMMI, factorial regression and partial least squares regression models for interpreting genotype x environment interaction.

    NARCIS (Netherlands)

    Vargas, M.; Crossa, J.; Eeuwijk, van F.A.; Ramirez, M.E.; Sayre, K.

    1999-01-01

    Partial least squares (PLS) and factorial regression (FR) are statistical models that incorporate external environmental and/or cultivar variables for studying and interpreting genotype × environment interaction (GEI). The Additive Main effect and Multiplicative Interaction (AMMI) model uses only th

  14. Interpretation of the results of statistical measurements. [search for basic probability model

    Science.gov (United States)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  15. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  16. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have a direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.

  17. Modeling of RFID-Enabled Real-Time Manufacturing Execution System in Mixed-Model Assembly Lines

    Directory of Open Access Journals (Sweden)

    Zhixin Yang

    2015-01-01

    Full Text Available To quickly respond to diverse product demands, mixed-model assembly lines are widely adopted in discrete manufacturing industries. Besides the complexity in material distribution, mixed-model assembly involves a variety of components, different process plans and fast production changes, which greatly increase the difficulty of agile production management. Aiming at breaking through the bottlenecks in existing production management, a novel RFID-enabled manufacturing execution system (MES), featured with real-time and wireless information interaction capability, is proposed to identify various manufacturing objects, including WIPs, tools, and operators, and to trace their movements throughout the production processes. However, subject to constraints in terms of safety stock, machine assignment, setup, and scheduling requirements, the optimization of the RFID-enabled MES model for production planning and scheduling is an NP-hard problem. A new heuristic generalized Lagrangian decomposition approach is proposed for model optimization, which decomposes the model into three subproblems: computation of the optimal configuration of RFID sensor networks, optimization of production planning subject to machine setup cost and safety stock constraints, and optimization of scheduling for minimized overtime. RFID signal processing methods that can handle unreliable, redundant, and missing tag events are also described in detail. The validity of the model is discussed through algorithm analysis and verified through numerical simulation. The proposed design scheme has important reference value for the application of RFID in multiple manufacturing fields, and also lays a research foundation for advancing digital, networked manufacturing systems towards intelligence.

  18. Graphic Methods for Interpreting Longitudinal Dyadic Patterns From Repeated-Measures Actor-Partner Interdependence Models

    DEFF Research Database (Denmark)

    Perry, Nicholas; Baucom, Katherine; Bourne, Stacia

    2017-01-01

    Researchers commonly use repeated-measures actor–partner interdependence models (RM-APIM) to understand how romantic partners change in relation to one another over time. However, traditional interpretations of the results of these models do not fully or correctly capture the dyadic temporal...... patterns estimated in RM-APIM. Interpretation of results from these models largely focuses on the meaning of single-parameter estimates in isolation from all the others. However, considering individual coefficients separately impedes the understanding of how these associations combine to produce...... to improve the understanding and presentation of dyadic patterns of association described by standard RM-APIMs. The current article briefly reviews the conceptual foundations of RM-APIMs, demonstrates how change-as-outcome RM-APIMs and VFDs can aid interpretation of standard RM-APIMs, and provides a tutorial...

  19. Study about Interpretation Models and Algorithm of Water-Flooded Formation Based on Resistivity

    Institute of Scientific and Technical Information of China (English)

    WANG Yinghui; TAN Dehui; WANG Qiongfang; CAI Hongjie

    2005-01-01

    Many oil fields around the world are developed by water injection, which makes them difficult to interpret from well-logging information. EPT and C/O logging can identify residual oil saturation or movable oil, but they are only suitable for oil reservoirs with porosity over 20% and are limited by borehole conditions. Additionally, the Archie model fits the static rather than the dynamic oil reservoir. Water-flooded (WF) oil zones (dynamic oil reservoirs) with low porosity and low permeability (LPP) are therefore even more difficult to interpret. Resistivity logging series are the dominant tools for WF formations, so it is significantly important to research new interpretation models and algorithms based on resistivity well-logging for WF oil zones with LPP. A set of new interpretation models for the water-flooded zone (WFZ) is established according to the "U"-type curve from experimentation, as well as from mathematical analysis. The well-known Archie model is only a special case of these new models under particular conditions. Most importantly, these new models are applicable from the exploration stage through the development stage of an oil field. Finally, the algorithm process and application results of these models are described.

  20. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    Science.gov (United States)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, both positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade were computed for a range of incidence angles in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  1. Cyber Enabled Collaborative Environment for Data and Modeling Driven Curriculum Modules for Hydrology and Geoscience Education

    Science.gov (United States)

    Merwade, V.; Ruddell, B. L.; Manduca, C. A.; Fox, S.; Kirk, K. B.

    2012-12-01

    With access to emerging datasets and computational tools, there is a need to bring these capabilities into hydrology and geoscience classrooms. However, developing curriculum modules that use data and models to augment classroom teaching is hindered by a steep technology learning curve, rapid technology turnover, and the lack of an organized community cyberinfrastructure (CI) for the dissemination, publication, and sharing of the latest tools and curriculum material for hydrology and geoscience education. The objective of this project is to overcome some of these limitations by developing a cyber-enabled collaborative environment for publishing, sharing and adoption of data- and modeling-driven curriculum modules in hydrology and geoscience classrooms. The CI is based on Carleton College's Science Education Resource Center (SERC) Content Management System. Building on its existing community authoring capabilities, the system is being extended to allow assembly of new teaching activities by drawing on a collection of interchangeable building blocks, each of which represents a step in the modeling process. This poster presentation will describe the structure of the CI, the type and description of the modules that are under development, and the approach that will be used in assessing students' learning from using the modules.

  2. Testing, Modeling, and Monitoring to Enable Simpler, Cheaper, Longer-Lived Surface Caps

    Energy Technology Data Exchange (ETDEWEB)

    Piet, Steven James; Breckenridge, Robert Paul; Burns, Douglas Edward

    2003-02-01

    Society has and will continue to generate hazardous wastes whose risks must be managed. For exceptionally toxic, long-lived, and feared waste, the solution is deep burial, e.g., deep geological disposal at Yucca Mtn. For some waste, recycle or destruction/treatment is possible. The alternative for other wastes is storage at or near the ground level (in someone’s back yard); most of these storage sites include a surface barrier (cap) to prevent downward water migration. Some of the hazards will persist indefinitely. As society and regulators have demanded additional proof that caps are robust against more threats and for longer time periods, the caps have become increasingly complex and expensive. As in other industries, increased complexity will eventually increase the difficulty in estimating performance, in monitoring system/component performance, and in repairing or upgrading barriers as risks are managed. An approach leading to simpler, less expensive, longer-lived, more manageable caps is needed. Our project, which started in April 2002, aims to catalyze a Barrier Improvement Cycle (iterative learning and application) and thus enable Remediation System Performance Management (doing the right maintenance neither too early nor too late). The knowledge gained and the capabilities built will help verify the adequacy of past remedial decisions, improve barrier management, and enable improved solutions for future decisions. We believe it will be possible to develop simpler, longer-lived, less expensive caps that are easier to monitor, manage, and repair. The project is planned to: a) improve the knowledge of degradation mechanisms in times shorter than service life; b) improve modeling of barrier degradation dynamics; c) develop sensor systems to identify early degradation; and d) provide a better basis for developing and testing of new barrier systems. 
This project combines selected exploratory studies (benchtop and field scale), coupled effects accelerated aging

  3. Testing, Modeling, and Monitoring to Enable Simpler, Cheaper, Longer-lived Surface Caps

    Energy Technology Data Exchange (ETDEWEB)

    Piet, S. J.; Breckenridge, R. P.; Burns, D. E.

    2003-02-25

    Society has and will continue to generate hazardous wastes whose risks must be managed. For exceptionally toxic, long-lived, and feared waste, the solution is deep burial, e.g., deep geological disposal at Yucca Mtn. For some waste, recycle or destruction/treatment is possible. The alternative for other wastes is storage at or near the ground level (in someone's back yard); most of these storage sites include a surface barrier (cap) to prevent downward water migration. Some of the hazards will persist indefinitely. As society and regulators have demanded additional proof that caps are robust against more threats and for longer time periods, the caps have become increasingly complex and expensive. As in other industries, increased complexity will eventually increase the difficulty in estimating performance, in monitoring system/component performance, and in repairing or upgrading barriers as risks are managed. An approach leading to simpler, less expensive, longer-lived, more manageable caps is needed. Our project, which started in April 2002, aims to catalyze a Barrier Improvement Cycle (iterative learning and application) and thus enable Remediation System Performance Management (doing the right maintenance neither too early nor too late). The knowledge gained and the capabilities built will help verify the adequacy of past remedial decisions, improve barrier management, and enable improved solutions for future decisions. We believe it will be possible to develop simpler, longer-lived, less expensive caps that are easier to monitor, manage, and repair. The project is planned to: (a) improve the knowledge of degradation mechanisms in times shorter than service life; (b) improve modeling of barrier degradation dynamics; (c) develop sensor systems to identify early degradation; and (d) provide a better basis for developing and testing of new barrier systems. This project combines selected exploratory studies (benchtop and field scale), coupled effects

  4. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    Science.gov (United States)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.

  5. Towards a realistic interpretation of quantum physics providing a physical model of the natural world

    CERN Document Server

    Santos, Emilio

    2012-01-01

    The advantage of a realistic interpretation of quantum mechanics providing a physical model of the quantum world is stressed. After some critical comments on the most popular interpretations, the difficulties for such a model are pointed out and possible solutions proposed. In particular, the existence of discrete states, quantum jumps, the alleged lack of objective properties, measurement theory, the probabilistic character of quantum physics, wave-particle duality and the Bell inequalities are discussed. It is conjectured that an intuitive picture of the quantum world could be obtained that is compatible with the quantum predictions for actual experiments, although maybe incompatible with alleged predictions for ideal, unrealizable experiments.

  6. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    Science.gov (United States)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured

  7. Stochastic Hybrid Systems Modeling and Middleware-enabled DDDAS for Next-generation US Air Force Systems

    Science.gov (United States)

    2017-03-30

    AFRL-AFOSR-VA-TR-2017-0075. Stochastic Hybrid Systems Modeling and Middleware-enabled DDDAS for Next-generation US Air Force Systems. Aniruddha... Distribution approved for public release. Air Force Research Laboratory, AF Office of Scientific Research (AFOSR)/RTA2, 4/6/2017. Period of performance: Sep 2013 to 31 Dec 2016.

  8. Interpretable Predictive Models for Knowledge Discovery from Home-Care Electronic Health Records

    Directory of Open Access Journals (Sweden)

    Bonnie L. Westra

    2011-01-01

    Full Text Available The purpose of this methodological study was to compare methods of developing predictive rules that are parsimonious and clinically interpretable from electronic health record (EHR) home visit data, contrasting logistic regression with three data mining classification models. We address three problems commonly encountered in EHRs: the value of including clinically important variables with little variance, handling imbalanced datasets, and ease of interpretation of the resulting predictive models. Logistic regression and three classification models using Ripper, decision trees, and support vector machines were applied to a case study for one outcome, improvement in oral medication management. Predictive rules for logistic regression, Ripper, and decision trees are reported, and results are compared using F-measures for the data mining models and area under the receiver-operating characteristic curve for all models. The rules generated by the three classification models provide potentially novel insights into mining EHRs beyond those provided by standard logistic regression, and suggest steps for further study.
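
    The F-measure used above to compare the data mining classifiers is a simple function of confusion-matrix counts, and is informative on imbalanced datasets where accuracy alone misleads. A minimal sketch follows; the counts are for a hypothetical rule, not the study's results.

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-beta score from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical rule that flags 40 of 50 improved cases with 20 false alarms.
score = f_measure(tp=40, fp=20, fn=10)
```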

  9. Development of Disciplined Interpretation Using Computational Modeling in the Elementary Science Classroom

    CERN Document Server

    Farris, Amy Voss; Sengupta, Pratim

    2016-01-01

    Studies of scientists building models show that the development of scientific models involves a great deal of subjectivity. However, science as experienced in school settings typically emphasizes an overly objective and rationalistic view. In this paper, we argue for focusing on the development of disciplined interpretation as an epistemic and representational practice that progressively deepens students' computational modeling in science by valuing, rather than deemphasizing, the subjective nature of the experience of modeling. We report results from a study in which fourth grade children engaged in computational modeling throughout the academic year. We present three salient themes that characterize the development of students' disciplined interpretations in terms of their development of computational modeling as a way of seeing and doing science.

  10. A new interpretation of the Keller-Segel model based on multiphase modelling.

    Science.gov (United States)

    Byrne, Helen M; Owen, Markus R

    2004-12-01

    In this paper an alternative derivation and interpretation are presented of the classical Keller-Segel model of cell migration due to random motion and chemotaxis. A multiphase modelling approach is used to describe how a population of cells moves through a fluid containing a diffusible chemical to which the cells are attracted. The cells and fluid are viewed as distinct components of a two-phase mixture. The principles of mass and momentum balance are applied to each phase, and appropriate constitutive laws imposed to close the resulting equations. A key assumption here is that the stress in the cell phase is influenced by the concentration of the diffusible chemical. By restricting attention to one-dimensional cartesian geometry we show how the model reduces to a pair of nonlinear coupled partial differential equations for the cell density and the chemical concentration. These equations may be written in the form of the Patlak-Keller-Segel model, naturally including density-dependent nonlinearities in the cell motility coefficients. There is a direct relationship between the random motility and chemotaxis coefficients, both depending in an inter-related manner on the chemical concentration. We suggest that this may explain why many chemicals appear to stimulate both chemotactic and chemokinetic responses in cell populations. After specialising our model to describe slime mold we then show how the functional form of the chemical potential that drives cell locomotion influences the ability of the system to generate spatial patterns. The paper concludes with a summary of the key results and a discussion of avenues for future research.
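
    A minimal explicit finite-difference sketch of the one-dimensional system described above, with cell density u and chemical concentration c evolving as u_t = (D u_x - chi u c_x)_x and c_t = Dc c_xx + a u - b c. Constant motility and chemotaxis coefficients are used rather than the paper's concentration-dependent forms, and all parameter values are invented; the flux form conserves total cell mass under zero-flux boundaries.

```python
N, L = 50, 1.0
dx, dt = L / N, 1e-4
D, chi, Dc, a, b = 0.1, 1.0, 1.0, 1.0, 0.5  # invented parameters

# Uniform cell density with a small central bump; no chemical initially.
u = [1.0 + (0.1 if N // 3 <= i < 2 * N // 3 else 0.0) for i in range(N)]
c = [0.0] * N

def step(u, c):
    # Cell flux J = -D u_x + chi u c_x at cell interfaces;
    # boundary fluxes stay zero, so total cell mass is conserved.
    flux = [0.0] * (N + 1)
    for i in range(1, N):
        du = (u[i] - u[i - 1]) / dx
        dc = (c[i] - c[i - 1]) / dx
        um = 0.5 * (u[i] + u[i - 1])
        flux[i] = -D * du + chi * um * dc
    u2 = [u[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(N)]
    c2 = []
    for i in range(N):
        left = c[i - 1] if i > 0 else c[i]     # zero-gradient boundaries
        right = c[i + 1] if i < N - 1 else c[i]
        lap = (left - 2.0 * c[i] + right) / dx ** 2
        c2.append(c[i] + dt * (Dc * lap + a * u[i] - b * c[i]))
    return u2, c2

for _ in range(200):
    u, c = step(u, c)
```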

  11. Dark Matter at the LHC and IceCube - a Simplified Model Interpretation

    CERN Document Server

    Heisig, Jan

    2015-01-01

    We present an interpretation of searches for Dark Matter in a simplified model approach. Considering Majorana fermion Dark Matter and a neutral vector mediator with axial-vector interactions, we explore mono-jet searches at the LHC and searches for neutrinos from Dark Matter annihilation in the Sun at IceCube, and place new limits on the model parameter space. Further, we compare the simplified model with its effective field theory approximation and discuss the validity of the latter.

  12. Cognitive Elements of Empowerment: An "Interpretive" Model of Intrinsic Task Motivation

    OpenAIRE

    Thomas, Kenneth W.; Velthouse, Betty A.

    1990-01-01

    This article presents a cognitive model of empowerment. Here, empowerment is defined as increased intrinsic task motivation, and our subsequent model identifies four cognitions (task assessments) as the basis for worker empowerment: sense of impact, competence, meaningfulness, and choice. Adopting an interpretive perspective, we have used the model also to describe cognitive processes through which workers reach these conclusions. Central to the processes we describe are ...

  13. Assessing pharmacokinetics of different doses of fosfomycin in laboratory rats enables adequate exposure for pharmacodynamic models.

    Science.gov (United States)

    Poeppl, Wolfgang; Lingscheid, Tilman; Bernitzky, Dominik; Donath, Oliver; Reznicek, Gottfried; Zeitlinger, Markus; Burgmann, Heinz

    2014-01-01

    Fosfomycin has been the subject of numerous pharmacodynamic in vivo models in recent years. The present study set out to determine fosfomycin pharmacokinetics in laboratory rats to enable adequate dosing regimens in future rodent models. Fosfomycin was given intraperitoneally as single doses of 75, 200 and 500 mg/kg bodyweight to 4 Sprague-Dawley rats per dose group. Blood samples were collected over 8 h and fosfomycin concentrations were determined by HPLC-mass spectrometry. Fosfomycin showed a dose-proportional pharmacokinetic profile, indicated by a correlation of 0.99 between maximum concentration and area under the concentration-time curve (AUC). The mean AUC0-8 values after intraperitoneal administration of 75, 200 or 500 mg/kg bodyweight fosfomycin were 109.4, 387.0 and 829.1 µg·h/ml, respectively. In conclusion, a dosing regimen of 200-500 mg/kg 3 times daily is appropriate to obtain serum concentrations in laboratory rats that closely mimic human serum concentrations over time.
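
    The quantities reported above can be illustrated with the linear trapezoidal rule for AUC and a dose-proportionality check (dose versus AUC should be near-linearly correlated across the three dose groups). The concentration values below are invented, not the measured rat data.

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

times = [0, 0.5, 1, 2, 4, 8]       # hours
profiles = {                        # dose (mg/kg) -> mean conc (ug/ml), invented
    75:  [0, 40, 35, 25, 10, 2],
    200: [0, 140, 125, 90, 35, 8],
    500: [0, 300, 270, 190, 75, 16],
}
aucs = {dose: auc_trapezoid(times, c) for dose, c in profiles.items()}

# Pearson correlation of dose vs AUC; a value near 1 indicates
# dose-proportional pharmacokinetics.
xs, ys = list(aucs), [aucs[d] for d in aucs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / (sum((x - mx) ** 2 for x in xs) ** 0.5
           * sum((y - my) ** 2 for y in ys) ** 0.5)
```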

  14. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  15. Enabling Grid Computing resources within the KM3NeT computing model

    Science.gov (United States)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  16. A rotamer library to enable modeling and design of peptoid foldamers.

    Science.gov (United States)

    Renfrew, P Douglas; Craven, Timothy W; Butterfoss, Glenn L; Kirshenbaum, Kent; Bonneau, Richard

    2014-06-18

    Peptoids are a family of synthetic oligomers composed of N-substituted glycine units. Along with other "foldamer" systems, peptoid oligomer sequences can be predictably designed to form a variety of stable secondary structures. It is not yet evident if foldamer design can be extended to reliably create tertiary structure features that mimic more complex biomolecular folds and functions. Computational modeling and prediction of peptoid conformations will likely play a critical role in enabling complex biomimetic designs. We introduce a computational approach to provide accurate conformational and energetic parameters for peptoid side chains needed for successful modeling and design. We find that peptoids can be described by a "rotamer" treatment, similar to that established for proteins, in which the peptoid side chains display rotational isomerism to populate discrete regions of the conformational landscape. Because of the insufficient number of solved peptoid structures, we have calculated the relative energies of side-chain conformational states to provide a backbone-dependent (BBD) rotamer library for a set of 54 different peptoid side chains. We evaluated two rotamer library development methods that employ quantum mechanics (QM) and/or molecular mechanics (MM) energy calculations to identify side-chain rotamers. We show by comparison to experimental peptoid structures that both methods provide an accurate prediction of peptoid side chain placements in folded peptoid oligomers and at protein interfaces. We have incorporated our peptoid rotamer libraries into ROSETTA, a molecular design package previously validated in the context of protein design and structure prediction.
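
    The backbone-dependent rotamer treatment described above can be pictured as a lookup table keyed on discretized backbone dihedrals, where each entry lists candidate side-chain conformers with relative energies that are Boltzmann-weighted into probabilities. The sketch below illustrates the idea in Python; the bin width, chi angles, and energies are invented for illustration and are not values from the actual library or from ROSETTA.

```python
import math

# Minimal sketch of a backbone-dependent (BBD) rotamer lookup.
# Keys are (phi, psi) backbone bins in degrees; values are candidate
# side-chain chi1 rotamers paired with relative QM/MM energies in
# kcal/mol. All numbers here are illustrative only.
BBD_LIBRARY = {
    (-60, -40): [(-60.0, 0.0), (60.0, 1.2), (180.0, 0.8)],
}

def nearest_bin(phi, psi, width=10.0):
    """Snap backbone dihedrals onto the library's bin grid."""
    snap = lambda a: round(a / width) * width
    return (snap(phi), snap(psi))

def rotamer_probabilities(phi, psi, kT=0.593):
    """Boltzmann-weight the rotamer energies for this backbone bin."""
    rotamers = BBD_LIBRARY[nearest_bin(phi, psi)]
    weights = [math.exp(-e / kT) for _, e in rotamers]
    z = sum(weights)
    return [(chi, w / z) for (chi, _), w in zip(rotamers, weights)]

probs = rotamer_probabilities(-62.0, -41.0)
best_chi = max(probs, key=lambda p: p[1])[0]
```

A design package would sample side-chain placements from such a distribution rather than always taking the single lowest-energy rotamer.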

  17. Dream interpretation, affect, and the theory of neuronal group selection: Freud, Winnicott, Bion, and Modell.

    Science.gov (United States)

    Shields, Walker

    2006-12-01

    The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.

  18. The use of cloud enabled building information models – an expert analysis

    Directory of Open Access Journals (Sweden)

    Alan Redmond

    2012-12-01

    Full Text Available The dependency of today’s construction professionals on singular commercial applications for design creates the risk of being dictated to by the language-tools they use. This unwitting conformance to the constraints of a particular computer application’s style reduces one’s association with cutting-edge design, as no single computer application can support all of the tasks associated with building design and production. Interoperability depicts the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. Cloud computing is a centralized heterogeneous platform that enables different applications to be connected to each other through remote data servers. However, the possibility of providing an interoperable process by binding several construction applications through a single repository platform, ‘cloud computing’, required further analysis. The following Delphi questionnaires analysed the information-exchange opportunities of Building Information Modelling (BIM) as a possible solution for the integration of applications on a cloud platform. The survey structure is modelled to: (i) identify the most appropriate applications for advancing interoperability at the early design stage; (ii) detect the most severe barriers to BIM implementation from a business and legal viewpoint; (iii) examine the need for standards to address information exchange between the design team; and (iv) explore the use of the most common interfaces for exchanging information. The anticipated findings will assist in identifying a model that will enhance the standardized passing of information between systems at the feasibility design stage of a construction project.

  19. The use of cloud enabled building information models – an expert analysis

    Directory of Open Access Journals (Sweden)

    Alan Redmond

    2015-10-01

    Full Text Available The dependency of today’s construction professionals on singular commercial applications for design creates the risk of being dictated to by the language-tools they use. This unwitting conformance to the constraints of a particular computer application’s style reduces one’s association with cutting-edge design, as no single computer application can support all of the tasks associated with building design and production. Interoperability depicts the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. Cloud computing is a centralized heterogeneous platform that enables different applications to be connected to each other through remote data servers. However, the possibility of providing an interoperable process by binding several construction applications through a single repository platform, ‘cloud computing’, required further analysis. The following Delphi questionnaires analysed the information-exchange opportunities of Building Information Modelling (BIM) as a possible solution for the integration of applications on a cloud platform. The survey structure is modelled to: (i) identify the most appropriate applications for advancing interoperability at the early design stage; (ii) detect the most severe barriers to BIM implementation from a business and legal viewpoint; (iii) examine the need for standards to address information exchange between the design team; and (iv) explore the use of the most common interfaces for exchanging information. The anticipated findings will assist in identifying a model that will enhance the standardized passing of information between systems at the feasibility design stage of a construction project.

  20. How WM Load Influences Linguistic Processing in Adults : A Computational Model of Pronoun Interpretation in Discourse

    NARCIS (Netherlands)

    van Rij, Jacolien; van Rijn, Hedderik; Hendriks, Petra

    2013-01-01

    This paper presents a study of the effect of working memory load on the interpretation of pronouns in different discourse contexts: stories with and without a topic shift. We discuss a computational model (in ACT-R, Anderson, 2007) to explain how referring expressions are acquired and used. On the b

  1. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    that exists between GSCM practices with regard to their adoption within Brazilian electrical/electronic industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...
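
    As background on the method itself: ISM starts from pairwise "practice i helps achieve practice j" judgments, completes them into a final reachability matrix by transitive closure, and MICMAC then classifies each element by its driving power (row sum) and dependence (column sum). A minimal sketch in Python, using a purely hypothetical 4-practice matrix rather than the study's actual data:

```python
import numpy as np

def transitive_closure(adj):
    """Boolean transitive closure (Warshall's algorithm) of a binary
    relation, including self-reachability, as used to build the ISM
    final reachability matrix."""
    n = adj.shape[0]
    reach = adj.astype(bool) | np.eye(n, dtype=bool)
    for k in range(n):
        reach |= reach[:, [k]] & reach[[k], :]
    return reach.astype(int)

def micmac(reach):
    """MICMAC scores: driving power = row sums, dependence = column sums."""
    return reach.sum(axis=1), reach.sum(axis=0)

# Hypothetical initial reachability matrix for 4 practices:
# entry [i, j] = 1 means practice i helps achieve practice j.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
reach = transitive_closure(adj)
driving, dependence = micmac(reach)
```

In this chain-shaped example the first practice ends up with the highest driving power and the lowest dependence, i.e. it would sit at the bottom of the ISM hierarchy as the driving enabler.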

  2. RWater - A Novel Cyber-enabled Data-driven Educational Tool for Interpreting and Modeling Hydrologic Processes

    Science.gov (United States)

    Rajib, M. A.; Merwade, V.; Zhao, L.; Song, C.

    2014-12-01

    Explaining the complex cause-and-effect relationships in the hydrologic cycle can often be challenging in a classroom using traditional teaching approaches. With the availability of observed rainfall, streamflow, and other hydrology data on the internet, it is possible to give students the tools to explore these relationships and enhance their learning experience. From this perspective, a new online educational tool, called RWater, was developed using Purdue University's HUBzero technology. RWater's unique features include: (i) accessibility, including the R software, from any Java-supported web browser; (ii) no installation of any software on the user's computer; (iii) storage of all work and resulting data in the user's working directory on the RWater server; and (iv) no requirement of prior programming experience with R. In its current version, RWater can dynamically extract streamflow data from any USGS gaging station, without any need for post-processing, for use in the educational modules. By following data-driven modules, students can write small scripts in R and thereby create visualizations to identify the effect of rainfall distribution and watershed characteristics on runoff generation, investigate the impacts of land use and climate change on streamflow, and explore changes in extreme hydrologic events at actual locations. Each module contains relevant definitions, instructions on data extraction and coding, and conceptual questions based on the analyses the students perform. To assess its suitability for classroom implementation, and to evaluate users' perception of its utility, the current version of RWater has been tested with three different groups: (i) high school students; (ii) middle and high school teachers; and (iii) upper undergraduate/graduate students. The survey results from these trials suggest that RWater has the potential to improve students' understanding of various relationships in the hydrologic cycle, leading towards effective dissemination of hydrology education ranging from K-12 to the graduate level. RWater is publicly available for use at: https://mygeohub.org/tools/rwater.
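
    The rainfall-runoff modules described above reduce to a few lines of scripting once the data are in hand. A sketch of one such computation, done here in Python with made-up monthly totals (RWater itself has students write R against live USGS gauge records):

```python
# Sketch of the kind of rainfall-runoff analysis an RWater module asks
# students to script. The numbers are synthetic, standing in for a
# rainfall series and the corresponding streamflow expressed as a
# depth over the watershed.
rainfall_mm = [120.0, 95.0, 60.0, 30.0, 45.0, 80.0]   # monthly totals
runoff_mm   = [48.0, 38.0, 21.0, 6.0, 9.0, 28.0]      # streamflow depth

def runoff_coefficient(rain, runoff):
    """Fraction of rainfall leaving the watershed as streamflow."""
    return sum(runoff) / sum(rain)

rc = runoff_coefficient(rainfall_mm, runoff_mm)
```

Comparing this coefficient across land-use scenarios or time periods is exactly the sort of cause-and-effect exploration the modules aim at.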

  3. The embedded feature model for the interpretation of chromospheric contrast profiles

    Science.gov (United States)

    Steinitz, R.; Gebbie, K. B.; Bar, V.

    1977-01-01

    Contrast profiles obtained from chromospheric filtergrams and spectra of bright and dark mottles have to date been interpreted almost exclusively in terms of Becker's cloud model. Here we demonstrate the failure of this model to account in a physically consistent way for the observed contrasts. As an alternative, we introduce an embedded-feature model, restricting our discussion in this paper to stationary features. Our model is then characterized by three independent parameters: the density of absorbing atoms, the geometrical depth, and the profile of the absorption coefficient. An analytic approximation to the contrast resulting from such a model reproduces well the observed behavior of all types of contrast profiles.

  4. Enhancing CIDOC-CRM and compatible models with the concept of multiple interpretation

    Science.gov (United States)

    Van Ruymbeke, M.; Hallot, P.; Billen, R.

    2017-08-01

    Modelling cultural heritage and archaeological objects is used as much for management as for research purposes. To ensure the sustainable benefit of digital data, models benefit from taking the data specificities of historical and archaeological domains into account. Starting from a conceptual model tailored to storing these specificities, we present, in this paper, an extended mapping to CIDOC-CRM and its compatible models. Offering an ideal framework to structure and highlight the best modelling practices, these ontologies are essentially dedicated to storing semantic data which provides information about cultural heritage objects. Based on this standard, our proposal focuses on multiple interpretation and sequential reality.

  5. Enhancing CIDOC-CRM and compatible models with the concept of multiple interpretation

    Directory of Open Access Journals (Sweden)

    M. Van Ruymbeke

    2017-08-01

    Full Text Available Modelling cultural heritage and archaeological objects is used as much for management as for research purposes. To ensure the sustainable benefit of digital data, models benefit from taking the data specificities of historical and archaeological domains into account. Starting from a conceptual model tailored to storing these specificities, we present, in this paper, an extended mapping to CIDOC-CRM and its compatible models. Offering an ideal framework to structure and highlight the best modelling practices, these ontologies are essentially dedicated to storing semantic data which provides information about cultural heritage objects. Based on this standard, our proposal focuses on multiple interpretation and sequential reality.

  6. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package, "fullrmc", for atomic or rigid-body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide a fully modular, fast, and flexible software package, thoroughly documented, supporting complex molecules, written in modern programming languages (Python, Cython, C and C++ where performance is needed), and complying with modern programming practices. fullrmc's approach to solving an atomic or molecular structure differs from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves, endorsed with reinforcement machine learning, to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort, and to apply smarter and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or exploring, through and beyond disallowed positions and energy barriers, the unrestricted three-dimensional space around a group.
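
    The "traditional RMC" baseline that fullrmc generalizes is easy to sketch: perturb a randomly chosen atom and keep the move only if agreement with the target data does not worsen. The toy 1D version below is plain Python, not fullrmc's API, and the target histogram is invented for illustration:

```python
import random

random.seed(0)  # deterministic toy run

def chi2(model, target):
    """Squared misfit between model data and target data."""
    return sum((m - t) ** 2 for m, t in zip(model, target))

def pair_histogram(positions, bins, width):
    """Crude pair-distance histogram, the 'model data' RMC fits."""
    hist = [0] * bins
    for i, xi in enumerate(positions):
        for xj in positions[i + 1:]:
            b = int(abs(xi - xj) / width)
            if b < bins:
                hist[b] += 1
    return hist

def rmc_step(positions, target, bins, width, max_move=0.1):
    """One traditional RMC move: perturb a random atom, keep the move
    only if the fit to the target data does not get worse."""
    old = chi2(pair_histogram(positions, bins, width), target)
    i = random.randrange(len(positions))
    trial = positions[:]
    trial[i] += random.uniform(-max_move, max_move)
    new = chi2(pair_histogram(trial, bins, width), target)
    return (trial, new) if new <= old else (positions, old)

# Toy 1D system fitted against a hypothetical target histogram.
positions = [0.0, 0.9, 2.1, 3.2]
target = pair_histogram([0.0, 1.0, 2.0, 3.0], bins=4, width=1.0)
cost = chi2(pair_histogram(positions, 4, 1.0), target)
for _ in range(500):
    positions, cost = rmc_step(positions, target, 4, 1.0)
```

fullrmc's contribution is to replace the single-atom random move here with learned, physically meaningful moves applied to user-defined groups.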

  7. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity.

    Science.gov (United States)

    Barretina, Jordi; Caponigro, Giordano; Stransky, Nicolas; Venkatesan, Kavitha; Margolin, Adam A; Kim, Sungjoon; Wilson, Christopher J; Lehár, Joseph; Kryukov, Gregory V; Sonkin, Dmitriy; Reddy, Anupama; Liu, Manway; Murray, Lauren; Berger, Michael F; Monahan, John E; Morais, Paula; Meltzer, Jodi; Korejwa, Adam; Jané-Valbuena, Judit; Mapa, Felipa A; Thibault, Joseph; Bric-Furlong, Eva; Raman, Pichai; Shipway, Aaron; Engels, Ingo H; Cheng, Jill; Yu, Guoying K; Yu, Jianjun; Aspesi, Peter; de Silva, Melanie; Jagtap, Kalpana; Jones, Michael D; Wang, Li; Hatton, Charles; Palescandolo, Emanuele; Gupta, Supriya; Mahan, Scott; Sougnez, Carrie; Onofrio, Robert C; Liefeld, Ted; MacConaill, Laura; Winckler, Wendy; Reich, Michael; Li, Nanxin; Mesirov, Jill P; Gabriel, Stacey B; Getz, Gad; Ardlie, Kristin; Chan, Vivien; Myer, Vic E; Weber, Barbara L; Porter, Jeff; Warmuth, Markus; Finan, Peter; Harris, Jennifer L; Meyerson, Matthew; Golub, Todd R; Morrissey, Michael P; Sellers, William R; Schlegel, Robert; Garraway, Levi A

    2012-03-28

    The systematic translation of cancer genomic data into knowledge of tumour biology and therapeutic possibilities remains challenging. Such efforts should be greatly aided by robust preclinical model systems that reflect the genomic diversity of human cancers and for which detailed genetic and pharmacological annotation is available. Here we describe the Cancer Cell Line Encyclopedia (CCLE): a compilation of gene expression, chromosomal copy number and massively parallel sequencing data from 947 human cancer cell lines. When coupled with pharmacological profiles for 24 anticancer drugs across 479 of the cell lines, this collection allowed identification of genetic, lineage, and gene-expression-based predictors of drug sensitivity. In addition to known predictors, we found that plasma cell lineage correlated with sensitivity to IGF1 receptor inhibitors; AHR expression was associated with MEK inhibitor efficacy in NRAS-mutant lines; and SLFN11 expression predicted sensitivity to topoisomerase inhibitors. Together, our results indicate that large, annotated cell-line collections may help to enable preclinical stratification schemata for anticancer agents. The generation of genetic predictions of drug response in the preclinical setting and their incorporation into cancer clinical trial design could speed the emergence of 'personalized' therapeutic regimens.
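
    The simplest form of the expression-based predictor screening described above is a per-gene correlation between expression and drug response across cell lines. A sketch with fabricated numbers, chosen only to mimic the shape of the reported SLFN11/topoisomerase-inhibitor association and not drawn from actual CCLE data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: the simplest expression-based predictor
    screen, asking whether a gene's expression tracks sensitivity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: expression of one gene vs. drug activity
# (e.g. negative log IC50) across six cell lines.
expression = [1.2, 3.4, 2.1, 4.0, 0.5, 2.8]
activity = [0.9, 2.9, 1.8, 3.6, 0.4, 2.5]
r = pearson_r(expression, activity)
```

At CCLE scale this screen is run for every gene against every drug profile, with the strongest correlates (after multiple-testing control) becoming candidate biomarkers.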

  8. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    Science.gov (United States)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality of service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  9. Coloration of the Chilean Bellflower, Nolana paradoxa, interpreted with a scattering and absorbing layer stack model

    OpenAIRE

    Stavenga, Doekele G; van der Kooi, Casper J.

    2015-01-01

    Main conclusion An absorbing-layer-stack model allows quantitative analysis of the light flux in flowers and the resulting reflectance spectra. It provides insight in how plants can optimize their flower coloration for attracting pollinators. The coloration of flowers is due to the combined effect of pigments and light-scattering structures. To interpret flower coloration, we applied an optical model that considers a flower as a stack of layers, where each layer can be treated with the Kubelk...

  10. Extraction and interpretation of gammaN-->Delta form factors within a dynamical model

    Energy Technology Data Exchange (ETDEWEB)

    B. Juliá-Díaz, T.-S. H. Lee, T. Sato, and L. C. Smith

    2007-01-01

    Within the dynamical model of Refs. [Phys. Rev. C54, 2660 (1996); C63, 055201 (2001)], we perform an analysis of recent data of pion electroproduction reactions at energies near the Δ(1232) resonance. We discuss possible interpretations of the extracted bare and dressed γN → Δ form factors in terms of relativistic constituent quark models and Lattice QCD calculations. Possible future developments are discussed.

  11. Quantum Structures of Model-Universe: Questioning the Everett Interpretation of Quantum Mechanics

    CERN Document Server

    Jeknic-Dugic, J; Francom, A

    2011-01-01

    Our objective is to demonstrate an inconsistency with both the original and modern Everettian Many Worlds Interpretations. We do this by examining two important corollaries of the universally valid quantum mechanics in the context of the Quantum Brownian Motion (QBM) model: "Entanglement Relativity" and the "parallel occurrence of decoherence." We conclude that the highlighted inconsistency demands that either there is a privileged spatial structure of the QBM model universe or that the Everettian Worlds are not physically real.

  12. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    -correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably conceived of and interpreted in terms of (and to some extent even calculated by means of) this long-run Phillips curve.

  13. Exploring prospective secondary mathematics teachers' interpretation of student thinking through analysing students' work in modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-09-01

    Researchers point out the importance of teachers' knowledge of student thinking, and the role of examining student work in various contexts, in developing a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work embodying solutions to mathematical modelling tasks. The data were collected from 25 prospective mathematics teachers enrolled in an undergraduate course through four 2-week-long cycles. Analysis of the data revealed that the prospective teachers interpreted students' thinking in four ways: describing, questioning, explaining, and comparing. Moreover, whereas some of the prospective teachers tended to attend more to the meaning of students' ways of thinking as they engaged with students' work in depth over time and with experience, others continued to focus only on judging the accuracy of students' thinking. The implications of the findings for understanding and developing prospective teachers' ways of interpreting students' thinking are discussed.

  14. Enabling Parametric Optimal Ascent Trajectory Modeling During Early Phases of Design

    Science.gov (United States)

    Holt, James B.; Dees, Patrick D.; Diaz, Manuel J.

    2015-01-01

    -modal due to the interaction of various constraints. Additionally, when these obstacles are coupled with The Program to Optimize Simulated Trajectories [1] (POST), an industry-standard program for optimizing ascent trajectories that is difficult to use, effectively optimizing a vehicle's ascent trajectory requires expert trajectory analysts. As has been pointed out, the paradigm of trajectory optimization is still a very manual one, because using modern computational resources with POST remains a challenging problem. The nuances and difficulties involved in correctly utilizing, and therefore automating, the program present a large problem. In order to address these issues, the authors will discuss a methodology that has been developed. The methodology is two-fold: first, a set of heuristics will be introduced and discussed that were captured while working with expert analysts to replicate the current state of the art; secondly, the power of modern computing is leveraged to evaluate multiple trajectories simultaneously and thereby enable exploration of the trajectory's design space early during the pre-conceptual and conceptual phases of design. When this methodology is coupled with design of experiments to train surrogate models, the authors were able to visualize the trajectory design space, enabling parametric optimal ascent trajectory information to be introduced alongside other pre-conceptual and conceptual design tools. The potential impact of this methodology's success would be a fully automated POST evaluation suite for conceptual and preliminary design trade studies. This will enable engineers to characterize the ascent trajectory's sensitivity to design changes in an arbitrary number of dimensions, and to find settings for trajectory-specific variables which result in optimal performance for a "dialed-in" launch vehicle design.
The effort described in this paper was developed for the Advanced Concepts Office [2] at NASA Marshall

  15. Antagonism and Mutual Dependency. Critical Models of Performance and “Piano Interpretation Schools”

    Directory of Open Access Journals (Sweden)

    Rui Cruz

    2011-12-01

    Full Text Available To polarize and, coincidentally, intersect two different concepts, in terms of a distinction/analogy between “piano interpretation schools” and “critical models”, is the aim of this paper. The former, with its prior connotations of both empiricism and dogmatism, and not directly shaped by aesthetic criteria or interpretational ideals, depends mainly on the aural and oral tradition as well as the teacher-student legacy; the latter ideally employs the generic criteria of interpretativeness, which can be measured in accordance with an aesthetic formula and can include features such as non-obviousness, inferentiality, lack of consensus, concern with meaning or significance, concern with structure or design, etc. The relative autonomy of the former is a challenge to the latter, which embraces the range of perspectives available in the horizon of the history of ideas about music and interpretation. The effort of recognizing models of criticism within musical interpretation creates a vehicle for new understandings of the nature and the historical development of Western classical piano performance, promoting also the production of quality critical argument and the communication of key performance tendencies and styles.

  16. Neurobiological model of stimulated dopamine neurotransmission to interpret fast-scan cyclic voltammetry data.

    Science.gov (United States)

    Harun, Rashed; Grassi, Christine M; Munoz, Miranda J; Torres, Gonzalo E; Wagner, Amy K

    2015-03-02

    Fast-scan cyclic voltammetry (FSCV) is an electrochemical method that can assess real-time in vivo dopamine (DA) concentration changes to study the kinetics of DA neurotransmission. Electrical stimulation of dopaminergic (DAergic) pathways can elicit FSCV DA responses that largely reflect a balance of DA release and reuptake. Interpretation of these evoked DA responses requires a framework to discern the contributions of DA release and reuptake. The current, widely implemented interpretive framework for doing so is the Michaelis-Menten (M-M) model, which is grounded on two assumptions: (1) DA release rate is constant during stimulation, and (2) DA reuptake occurs through dopamine transporters (DAT) in a manner consistent with M-M enzyme kinetics. Though the M-M model can simulate evoked DA responses that rise convexly, the response types that predominate in the ventral striatum, it cannot simulate dorsal striatal responses that rise concavely. Based on current neurotransmission principles and experimental FSCV data, we developed a novel, quantitative, neurobiological framework to interpret DA responses that assumes DA release decreases exponentially during stimulation and continues post-stimulation at a diminishing rate. Our model also incorporates dynamic M-M kinetics to describe DA reuptake as a process of decreasing reuptake efficiency. We demonstrate that this quantitative, neurobiological model is an extension of the traditional M-M model that can simulate heterogeneous regional DA responses following manipulation of stimulation duration, frequency, and DA pharmacology. The proposed model can advance our interpretive framework for future in vivo FSCV studies examining regional DA kinetics and their alteration by disease and DA pharmacology. Copyright © 2015 Elsevier B.V. All rights reserved.
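
    The proposed model's core departure from the M-M framework (exponentially decaying release during stimulation, combined with M-M reuptake) can be sketched as a one-compartment ODE integrated by forward Euler. All parameter values below are illustrative, not fitted values from the paper:

```python
import math

def simulate_da(t_stim=2.0, t_total=4.0, dt=0.001,
                r0=1.0, tau=0.8, vmax=4.0, km=0.2):
    """Forward-Euler integration of
        dC/dt = R(t) - Vmax * C / (Km + C)
    where the release term R(t) decays exponentially during
    stimulation (the departure from the constant-release M-M model)
    and is zero afterwards. Parameter values are illustrative."""
    c, trace = 0.0, []
    steps = round(t_total / dt)
    for i in range(steps):
        t = i * dt
        release = r0 * math.exp(-t / tau) if t < t_stim else 0.0
        c += dt * (release - vmax * c / (km + c))
        c = max(c, 0.0)  # concentration cannot go negative
        trace.append(c)
    return trace

trace = simulate_da()
peak = max(trace)
```

With decaying release, the simulated response peaks early in the stimulation and then falls, the concave-rise/early-peak shape the constant-release model cannot reproduce.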

  17. Enabling School Structure, Collective Responsibility, and a Culture of Academic Optimism: Toward a Robust Model of School Performance in Taiwan

    Science.gov (United States)

    Wu, Jason H.; Hoy, Wayne K.; Tarter, C. John

    2013-01-01

    Purpose: The purpose of this research is twofold: to test a theory of academic optimism in Taiwan elementary schools and to expand the theory by adding new variables, collective responsibility and enabling school structure, to the model. Design/methodology/approach: Structural equation modeling was used to test, refine, and expand an…

  18. Energy Consumption Model and Measurement Results for Network Coding-enabled IEEE 802.11 Meshed Wireless Networks

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Rasmussen, Ulrik Wilken; Hundebøll, Martin

    2012-01-01

    This paper presents an energy model and energy measurements for network coding enabled wireless meshed networks based on IEEE 802.11 technology. The energy model and the energy measurement testbed is limited to a simple Alice and Bob scenario. For this toy scenario we compare the energy usages...

  19. Supporting interpretation of dynamic simulation. Application to chemical kinetic models; Aides à l'interprétation de simulations dynamiques. Application aux modèles de cinétique chimique

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, B.

    1998-04-22

    Numerous scientific and technical domains make constant use of dynamical simulations, and such simulators are being put in the hands of a growing number of users. This phenomenon is due both to the extraordinary increase in computing performance and to better graphical user interfaces, which make simulation models easy to operate. But simulators are still computer programs which produce series of numbers from other series of numbers, even if the results are displayed graphically. This thesis presents new interaction paradigms between a dynamical simulator and its user. The simulator produces a self-made interpretation of its results, thanks to a dedicated object-based representation of its domain: it shows dominant cyclic mechanisms identified by their instantaneous loop-gain estimates, it uses a notion of episodes to split the simulation into homogeneous time intervals, and it completes this with animations that rely on the graphical structure of the system. These new approaches are demonstrated with examples from chemical kinetics, because of the energetic and exemplary character of the behaviours encountered. They are implemented in the Spike software (Software Platform for Interactive Chemical Kinetics Experiments). Similar concepts are also shown in two other domains: interpretation of seismic wave propagation, and simulation of large projects. (author) 95 refs.

  20. Numerical well testing interpretation model and applications in crossflow double-layer reservoirs by polymer flooding.

    Science.gov (United States)

    Yu, Haiyang; Guo, Hui; He, Youwei; Xu, Hainan; Li, Lei; Zhang, Tiantian; Xian, Bo; Du, Song; Cheng, Shiqing

    2014-01-01

    This work presents a numerical well testing interpretation model and analysis techniques to evaluate formations using pressure transient data acquired with logging tools in crossflow double-layer reservoirs under polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. Type curves were then developed based on this model, and parameter sensitivity was analyzed. Our research shows that the type curves have five segments with different flow regimes: (I) a wellbore storage section, (II) an intermediate flow (transient) section, (III) a mid-radial flow section, (IV) a crossflow section (from the low-permeability layer to the high-permeability layer), and (V) a systematic radial flow section. Polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs under polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the analysis of the numerical solution, based on flow mechanisms, with observed polymer flooding field test data highlights the potential of this interpretation method for formation evaluation and enhanced oil recovery (EOR).

  1. Numerical Well Testing Interpretation Model and Applications in Crossflow Double-Layer Reservoirs by Polymer Flooding

    Directory of Open Access Journals (Sweden)

    Haiyang Yu

    2014-01-01

    Full Text Available This work presents a numerical well testing interpretation model and analysis techniques to evaluate formation by using pressure transient data acquired with logging tools in crossflow double-layer reservoirs by polymer flooding. A well testing model is established based on rheology experiments and by considering shear, diffusion, convection, inaccessible pore volume (IPV), permeability reduction, wellbore storage effects, and skin factors. The type curves were then developed based on this model, and parameter sensitivity was analyzed. Our research shows that the type curves have five segments with different flow status: (I) wellbore storage section, (II) intermediate flow section (transient section), (III) mid-radial flow section, (IV) crossflow section (from the low permeability layer to the high permeability layer), and (V) systematic radial flow section. The polymer flooding field tests prove that our model can accurately determine formation parameters in crossflow double-layer reservoirs by polymer flooding. Moreover, formation damage caused by polymer flooding can also be evaluated by comparing the interpreted permeability with the initial layered permeability before polymer flooding. Comparison of the analysis of the numerical solution based on flow mechanisms with observed polymer flooding field test data highlights the potential for the application of this interpretation method in formation evaluation and enhanced oil recovery (EOR).

  2. Integrable models for quantum media excited by laser radiation: a method, physical interpretation, and examples

    OpenAIRE

    Savva, Vadim A.; Zelenkov, Vadim I.

    2014-01-01

    A method to build various integrable models for the description of coherent excitation of multilevel media by laser pulses is suggested. Distribution functions over the energy levels of quantum systems depending on the time and frequency detuning are obtained. The distributions follow from exact solutions of the Schrödinger equation and give the complete dynamical description of laser-excited quantum multilevel systems. Interpretation based on the Fourier spectra of the probability amplitudes of a qua...

  3. Model-based interpretation of the ECG: a methodology for temporal and spatial reasoning.

    OpenAIRE

    Tong, D. A.; Widman, L. E.

    1992-01-01

    A new software architecture for automatic interpretation of the electrocardiogram is presented. Using the hypothesize-and-test paradigm, a semi-quantitative physiological model and production rule-based knowledge are combined to reason about time- and space-varying characteristics of complex heart rhythms. A prototype system implementing the methodology accepts a semi-quantitative description of the onset and morphology of the P waves and QRS complexes that are observed in the body-surface el...

  4. Structural interpretation of El Hierro (Canary Islands) rifts system from gravity inversion modelling

    Science.gov (United States)

    Sainz-Maza, S.; Montesinos, F. G.; Martí, J.; Arnoso, J.; Calvo, M.; Borreguero, A.

    2017-08-01

    Recent volcanism on El Hierro Island is mostly concentrated along three elongated and narrow zones which converge at the center of the island. These zones of extensive volcanism have been identified as rift zones. The presence of similar structures is common in many volcanic oceanic islands, so understanding their origin, dynamics, and structure is important for conducting hazard assessment in such environments. There is still no consensus on the origin of the El Hierro rift zones, which have been associated with mantle uplift or interpreted as resulting from gravitational spreading and flank instability. To further understand the internal structure and origin of the El Hierro rift systems, starting from previous gravity studies, we developed a new 3D gravity inversion model for its shallower layers, obtaining a detailed picture of this part of the island, which has permitted a new interpretation of these rifts. Previous models had already identified a main central magma accumulation zone and several shallower high density bodies. The new model allows a better resolution of the pathways that connect both levels with the surface. Our results do not point to any correspondence between the upper parts of these pathways and the rifts identified at the surface. No clear evidence of progression toward deeper parts of the volcanic system is seen, so we interpret them as very shallow structures, probably originated by local extensional stresses derived from gravitational loading and flank instability, which facilitate the lateral transport of magma when it arrives close to the surface.
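
    The forward problem underlying such inversions — predicting the surface gravity anomaly of a buried density contrast — can be sketched with the simplest possible body, a buried point-mass excess. The depth, excess mass, and profile below are invented for illustration and are not values from the El Hierro model:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly_z(x, depth, excess_mass):
    """Vertical gravity anomaly (m/s^2) at surface offset x from a buried point mass."""
    r2 = x * x + depth * depth
    return G * excess_mass * depth / r2 ** 1.5

# Anomaly profile over a hypothetical shallow dense body: 1e12 kg excess at 2 km depth
profile = [(x, gravity_anomaly_z(x, 2000.0, 1e12)) for x in range(-5000, 5001, 1000)]
```

    A 3D inversion iteratively adjusts a grid of such density bodies until the summed forward responses match the observed anomaly field.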

  5. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
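
    The classification step at the heart of this approach — posterior inference over states of nature given a measurement — can be sketched with a minimal discrete example. The two hypothetical cement states, their priors, and the Gaussian measurement parameters below are invented for illustration; the real HBGM couples many measurement modalities hierarchically:

```python
import math

def gaussian_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(measurement, states):
    """Bayes rule: P(state | z) is proportional to P(z | state) * P(state)."""
    weights = {name: prior * gaussian_pdf(measurement, mu, sigma)
               for name, (prior, mu, sigma) in states.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical states of nature: (prior, expected measurement, spread)
states = {"good_cement": (0.7, 5.0, 1.0), "poor_cement": (0.3, 2.0, 1.0)}
print(posterior(4.8, states))  # a measurement near the good-cement mean
```

    Training the network offline with forward-model simulations amounts to learning these likelihood parameters, after which classifying a new observation is a single cheap posterior evaluation.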

  6. Testing the Youth Physical Activity Promotion Model: Fatness and Fitness as Enabling Factors

    Science.gov (United States)

    Chen, Senlin; Welk, Gregory J.; Joens-Matre, Roxane R.

    2014-01-01

    As the prevalence of childhood obesity increases, it is important to examine possible differences in psychosocial correlates of physical activity between normal weight and overweight children. The study examined fatness (weight status) and (aerobic) fitness as Enabling factors related to youth physical activity within the Youth Physical Activity…

  8. Interpretation of isotopic data in groundwater-rock systems: Model development and application to Sr isotope data from Yucca Mountain

    Science.gov (United States)

    Johnson, Thomas M.; Depaolo, Donald J.

    1994-05-01

    A model enabling extraction of hydrologic information from spatial and temporal patterns in measurements of isotope ratios in water-rock systems is presented. The model describes the evolution of isotope ratios in response to solute transport and water-rock interaction. In advective systems, a single dimensionless parameter (a Damköhler number, ND) dominates in determining the distance over which isotopic equilibrium between the water and rock is approached. Some isotope ratios act as conservative tracers (ND ≪ 1), while others reflect only interaction with the local host rock (ND ≫ 1). If ND is close to one (i.e., the distance for equilibration is close to the length scale of observation), isotope ratio measurements can be used to determine ND, which in turn may yield information concerning reaction rates, or spatial variations in water velocity. Zones of high velocity (e.g., as a result of greater fracture density), or less reactive zones, may be identified through observation of their lower ND values. The model is applied to paleohydrologic interpretations of Sr isotope data from calcite fracture fillings in drill cores from Yucca Mountain, Nevada (Marshall et al., 1992). The results agree with other studies suggesting "fast path" transport in the unsaturated zone. Also, we find that the data do not give a conclusive indication of paleowater table elevation because of the effects of water-rock interaction.
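
    The role of the Damköhler number can be illustrated with the simplified steady-state form of such a model: along an advective flow path, the water's isotope ratio relaxes exponentially toward the local rock value over a length scale set by N_D (constant rock ratio and first-order exchange assumed; the Sr ratios below are invented, not Yucca Mountain data):

```python
import math

def water_isotope_ratio(x, length, n_d, r_inflow, r_rock):
    """Steady-state ratio along the flow path: exponential relaxation toward
    the rock value over the scale length / n_d (first-order water-rock exchange)."""
    return r_rock + (r_inflow - r_rock) * math.exp(-n_d * x / length)

L_scale, r_in, r_rk = 1000.0, 0.712, 0.709   # hypothetical 87Sr/86Sr values
conservative = water_isotope_ratio(L_scale, L_scale, 0.01, r_in, r_rk)   # N_D << 1
equilibrated = water_isotope_ratio(L_scale, L_scale, 100.0, r_in, r_rk)  # N_D >> 1
```

    With N_D << 1 the ratio barely changes over the observation scale (a conservative tracer); with N_D >> 1 it records only the local host rock, which is why intermediate N_D is the informative regime.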

  9. The Role of Stochastic Models in Interpreting the Origins of Biological Chirality

    Directory of Open Access Journals (Sweden)

    Gábor Lente

    2010-04-01

    Full Text Available This review summarizes recent stochastic modeling efforts in the theoretical research aimed at interpreting the origins of biological chirality. Stochastic kinetic models, especially those based on the continuous time discrete state approach, have great potential in modeling absolute asymmetric reactions, experimental examples of which have been reported in the past decade. An overview of the relevant mathematical background is given and several examples are presented to show how the significant numerical problems characteristic of the use of stochastic models can be overcome by non-trivial, but elementary algebra. In these stochastic models, a particulate view of matter is used rather than the concentration-based view of traditional chemical kinetics, which uses continuous functions to describe the properties of the system. This has the advantage of giving an adequate description of single-molecule events, which were probably important in the origin of biological chirality. The presented models can interpret and predict the random distribution of enantiomeric excess among repetitive experiments, which is the most striking feature of absolute asymmetric reactions. It is argued that the use of the stochastic kinetic approach should be much more widespread in the relevant literature.
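
    The continuous-time discrete-state approach can be sketched with a minimal Gillespie-type simulation of a Frank-like autocatalytic scheme (rate constants and molecule counts invented for illustration, not taken from any system in the review). Repeated runs reproduce the key qualitative feature: a random distribution of enantiomeric excess across repetitions:

```python
import random

def simulate_ee(n_substrate=200, k_spont=1e-4, k_auto=1.0, rng=random):
    """One stochastic trajectory: each achiral substrate molecule A converts to
    L or D, either spontaneously (k_spont) or autocatalytically (k_auto * copies)."""
    n_a, n_l, n_d = n_substrate, 0, 0
    while n_a > 0:
        a_l = n_a * (k_spont + k_auto * n_l)   # propensity of A -> L
        a_d = n_a * (k_spont + k_auto * n_d)   # propensity of A -> D
        if rng.random() < a_l / (a_l + a_d):
            n_l += 1
        else:
            n_d += 1
        n_a -= 1
    return (n_l - n_d) / (n_l + n_d)           # enantiomeric excess

rng = random.Random(42)
ee_samples = [simulate_ee(rng=rng) for _ in range(50)]
```

    With strong autocatalysis, the first chance conversion is amplified, so individual runs end near ee = +1 or -1 even though the ensemble is symmetric — the particulate, single-molecule effect the concentration-based view cannot capture.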

  10. Interpreting Space-Based Trends in Carbon Monoxide with Multiple Models

    Science.gov (United States)

    Strode, Sarah A.; Worden, Helen M.; Damon, Megan; Douglass, Anne R.; Duncan, Bryan N.; Emmons, Louisa K.; Lamarque, Jean-Francois; Manyin, Michael; Oman, Luke D.; Rodriguez, Jose M.; Strahan, Susan E.; Tilmes, Simone

    2016-01-01

    We use a series of chemical transport model and chemistry climate model simulations to investigate the observed negative trends in MOPITT CO over several regions of the world, and to examine the consistency of time-dependent emission inventories with observations. We find that simulations driven by the MACCity inventory, used for the Chemistry Climate Modeling Initiative (CCMI), reproduce the negative trends in the CO column observed by MOPITT for 2000-2010 over the eastern United States and Europe. However, the simulations have positive trends over eastern China, in contrast to the negative trends observed by MOPITT. The model bias in CO, after applying MOPITT averaging kernels, contributes to the model-observation discrepancy in the trend over eastern China. This demonstrates that biases in a model's average concentrations can influence the interpretation of the temporal trend compared to satellite observations. The total ozone column plays a role in determining the simulated tropospheric CO trends. A large positive anomaly in the simulated total ozone column in 2010 leads to a negative anomaly in OH and hence a positive anomaly in CO, contributing to the positive trend in simulated CO. These results demonstrate that accurately simulating variability in the ozone column is important for simulating and interpreting trends in CO.

  11. PBPK and population modelling to interpret urine cadmium concentrations of the French population

    Energy Technology Data Exchange (ETDEWEB)

    Béchaux, Camille, E-mail: Camille.bechaux@anses.fr [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France); Bodin, Laurent [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France); Clémençon, Stéphan [Telecom ParisTech, 46 rue Barrault, 75634 Paris Cedex 13 (France); Crépet, Amélie [ANSES, French Agency for Food, Environmental and Occupational Health Safety, 27-31 Avenue du Général Leclerc, 94701 Maisons-Alfort (France)

    2014-09-15

    As cadmium accumulates mainly in the kidney, urinary concentrations are considered relevant data for assessing the risk related to cadmium. The French Nutrition and Health Survey (ENNS) recorded the concentration of cadmium in the urine of the French population. However, as with all biomonitoring data, it needs to be linked to external exposure for it to be interpreted in terms of sources of exposure and for risk management purposes. The objective of this work is thus to interpret the cadmium biomonitoring data of the French population in terms of dietary and cigarette smoke exposures. Dietary and smoking habits recorded in the ENNS study were combined with contamination levels in food and cigarettes to assess individual exposures. A PBPK model was used in a Bayesian population model to link this external exposure with the measured urinary concentrations. In this model, the level of past exposure was corrected by a scaling function which accounts for a trend in French dietary exposure. This resulted in a model able to explain the current urinary concentrations measured in the French population through current and past exposure levels. Risk related to cadmium exposure in the general French population was then assessed from external and internal critical values corresponding to kidney effects. The model was also applied to predict the possible urinary concentrations of the French population in 2030, assuming there will be no more changes in exposure levels. This scenario leads to significantly lower concentrations and consequently lower related risk. - Highlights: • Interpretation of urine cadmium concentrations in France • PBPK and Bayesian population modelling of cadmium exposure • Assessment of the historic time-trend of cadmium exposure in France • Risk assessment from current and future external and internal exposure.
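
    The qualitative shape of such a model — slow accumulation of body burden under chronic intake, with excretion proportional to the accumulated burden — can be sketched with a one-compartment toxicokinetic toy. The half-life, intake, and absorption values are invented for illustration; the actual analysis uses a full multi-compartment PBPK model inside a Bayesian population framework:

```python
import math

def body_burden(years, daily_intake_ug=10.0, absorbed_fraction=0.05,
                half_life_years=20.0):
    """One-compartment burden under constant chronic intake:
    dB/dt = a*I - k*B  =>  B(t) = (a*I/k) * (1 - exp(-k*t))."""
    k = math.log(2) / half_life_years          # first-order elimination rate
    yearly_uptake = daily_intake_ug * 365.0 * absorbed_fraction
    return (yearly_uptake / k) * (1.0 - math.exp(-k * years))

burden_30 = body_burden(30.0)   # micrograms retained after 30 years of exposure
burden_60 = body_burden(60.0)   # approaching the steady-state plateau
```

    Because the half-life is measured in decades, current urinary concentrations integrate decades of past exposure, which is why the ANSES model must correct past intake with a time-trend scaling function before it can explain present-day biomonitoring data.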

  12. Up-Scaling Field Observations to Ground Truth Seismic Interpretations and Test Dynamic Models of Deep Water Rifted Margins: What are the Challenges?

    Science.gov (United States)

    Manatschal, G.; Nirrengarten, M.; Epin, M. E.

    2015-12-01

    Recent advances in the study of rifted margins have resulted from the development of new, high-resolution seismic imaging methods and dynamic modelling that make it possible to image the crustal-scale structure of rifted margins and to experiment with the conditions under which they formed. However, both the parameter space used and the seismic interpretations and model results need to be ground-truthed by direct observations and data. In the case of deep-water rifted margins, the problem is that drill-hole data are expensive, rare, and only available from a handful of examples worldwide. In contrast, remnants preserving kilometre-scale outcrops of former deep-water rifted margins have been described from the Alps and the Pyrenees in Western Europe. These large-scale outcrops provide direct access to mantle and crustal rocks and the associated sedimentary sequences and magmatic additions. The combination of world-class outcrops and classical, field-based mapping and analytical methods can provide the missing data necessary to calibrate and test dynamic models as well as to ground-truth seismic interpretations. In my presentation I will use observations and data from key outcrops of the most distal fossil Alpine Tethys margins exposed in SE Switzerland, with the aim of describing the deformation processes and conditions during final rifting and of testing rift modes (semi-ductile flow vs. brittle poly-phase faulting). I will in particular focus on the way strain is distributed and the bulk rheology evolves during hyper-extension and mantle exhumation, and compare the observations with model results and seismic interpretations. Up- and down-scaling observations and data and bridging multiple spatial and temporal scales is key to understanding the large-scale extensional processes that are at the origin of the formation of hyper-extended and exhumed mantle domains. The major challenge is to understand how the learnings obtained from the well-documented examples in the Alps and Pyrenees can be used

  13. Digital structural interpretation of mountain-scale photogrammetric 3D models (Kamnik Alps, Slovenia)

    Science.gov (United States)

    Dolžan, Erazem; Vrabec, Marko

    2015-04-01

    From the earliest days of geological science, mountainous terrains with their extreme topographic relief and sparse to non-existent vegetation were utilized to great advantage for gaining 3D insight into geological structure. But whereas Alpine vistas may offer perfect panoramic views of geology, the steep mountain slopes and vertical cliffs make it very time-consuming and difficult (if not impossible) to acquire quantitative mapping data such as precisely georeferenced traces of geological boundaries and attitudes of structural planes. We faced this problem in mapping the central Kamnik Alps of northern Slovenia, which are built up from a Mid- to Late Triassic succession of carbonate rocks. Polyphase brittle tectonic evolution, monotonous lithology, and the presence of a temporally and spatially irregular facies boundary between bedded platform carbonates and massive reef limestones considerably complicate the structural interpretation of the otherwise perfectly exposed but hardly accessible massif. We used Agisoft Photoscan Structure-from-Motion photogrammetric software to process a series of overlapping high-resolution (~0.25 m ground resolution) vertical aerial photographs, originally acquired by the Geodetic Authority of the Republic of Slovenia for surveying purposes, to derive very detailed 3D triangular mesh models of terrain and associated photographic textures. Phototextures are crucial for geological interpretation of the models as they provide additional levels of detail and lithological information which is not resolvable from geometrical mesh models alone. We then exported the models to Paradigm Gocad software to refine and optimize the meshing. Structural interpretation of the models, including mapping of traces and surfaces of faults and stratigraphic boundaries and determining dips of structural planes, was performed in the MVE Move suite, which offers a range of useful tools for digital mapping and interpretation. The photogrammetric model was complemented by

  14. Analysis, Interpretation, and Recognition of Facial Action Units and Expressions Using Neuro-Fuzzy Modeling

    CERN Document Server

    Khademi, Mahmoud; Manzuri-Shalmani, Mohammad T; Kiaei, Ali A

    2010-01-01

    In this paper an accurate real-time sequence-based system for representation, recognition, interpretation, and analysis of the facial action units (AUs) and expressions is presented. Our system has the following characteristics: 1) employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal information, we developed a classification scheme based on neuro-fuzzy modeling of the AU intensity, which is robust to intensity variations, 2) using both geometric and appearance-based features, and applying efficient dimension reduction techniques, our system is robust to illumination changes and it can represent the subtle changes as well as temporal information involved in formation of the facial expressions, and 3) by continuous values of intensity and employing top-down hierarchical rule-based classifiers, we can develop accurate human-interpretable AU-to-expression converters. Extensive experiments on Cohn-Kanade database show the superiority of the proposed method, in comparison with support vect...

  15. Seismic Sedimentology Interpretation Method of Meandering Fluvial Reservoir:From Model to Real Data

    Institute of Scientific and Technical Information of China (English)

    Tao Zhang; Xianguo Zhang; Chengyan Lin; Jingfeng Yu; Shouxiu Zhang

    2015-01-01

    Reservoir architecture of meandering river deposition is complex, and traditional seismic facies interpretation methods cannot characterize it when layer thickness is below seismic vertical resolution. In this study, a seismic sedimentology interpretation method and workflow for point bar characterization are built. Firstly, the influences of seismic frequency and sandstone thickness on seismic reflection are analyzed by outcrop detection with ground penetrating radar (GPR) and seismic forward modeling. It is found that (1) sandstone thickness can influence the seismic reflection of point bar architecture: as sandstone thickness increases from 1/4 wavelength (λ) to λ/2, seismic reflection geometries vary from ambiguous reflection and "V"-type reflection to "X"-type reflection; and (2) seismic frequency can influence reservoirs' seismic reflection geometry: seismic events follow inclined lateral-aggradation surfaces, which are isochronous depositional boundaries, in high-frequency seismic data, while the events extend along lithologic surfaces, which are level, in low-frequency data. Secondly, a strata-slice interpretation method for thin-layer depositional characterization is discussed with seismic forward modeling. Lastly, a method and workflow based on the above study is built, which includes seismic frequency analysis, 90° phasing, stratal slicing, and integrated interpretation of slices and seismic profiles. This method is applied to real data from Tiger Shoal, the Gulf of Mexico. Two episodes of meandering fluvial deposition are recognized in the study layer. Sandstone of the lower unit, which formed at a low base-level stage, has a limited distribution. Sandstone distribution dimensions and channel sinuosity become larger in the upper layer, which is a high base-level deposit.

  16. Contagion effect of enabling or coercive use of costing model within the managerial couple in lean organizations

    DEFF Research Database (Denmark)

    Kristensen, Thomas; Israelsen, Poul

    In the lean strategy, enabling formalization behaviour at the lower levels of management is expected to be necessary for success. We study the contagion effect between the superior (the middle manager) and the lower-level manager. This effect is proposed to be a dominant contingency variable for the use of costing models at the lower levels of management. Thus the use of costing models at the middle manager level is an important key to being successful with the lean package.

  17. Explicit kinetic heterogeneity: mechanistic models for interpretation of labeling data in heterogeneous populations

    Energy Technology Data Exchange (ETDEWEB)

    Ganusov, Vitaly V [Los Alamos National Laboratory

    2008-01-01

    Estimation of division and death rates of lymphocytes in different conditions is vital for quantitative understanding of the immune system. Deuterium, in the form of deuterated glucose or heavy water, can be used to measure rates of proliferation and death of lymphocytes in vivo. Inferring these rates from labeling and delabeling curves has been subject to considerable debate with different groups suggesting different mathematical models for that purpose. We show that the three models that are most commonly used are in fact mathematically identical and differ only in their interpretation of the estimated parameters. By extending these previous models, we here propose a more mechanistic approach for the analysis of data from deuterium labeling experiments. We construct a model of 'kinetic heterogeneity' in which the total cell population consists of many sub-populations with different rates of cell turnover. In this model, for a given distribution of the rates of turnover, the predicted fraction of labeled DNA accumulated and lost can be calculated. Our model reproduces several previously made experimental observations, such as a negative correlation between the length of the labeling period and the rate at which labeled DNA is lost after label cessation. We demonstrate the reliability of the new explicit kinetic heterogeneity model by applying it to artificially generated datasets, and illustrate its usefulness by fitting experimental data. In contrast to previous models, the explicit kinetic heterogeneity model (1) provides a mechanistic way of interpreting labeling data; (2) allows for a non-exponential loss of labeled cells during delabeling, and (3) can be used to describe data with variable labeling length.
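
    The model's central prediction — the average loss rate after label withdrawal decreases as the labeling period lengthens, because longer labeling recruits the slower subpopulations — can be reproduced with a two-subpopulation version of the kinetic heterogeneity model. The subpopulation fractions and turnover rates below are invented for illustration:

```python
import math

# Hypothetical kinetic heterogeneity: (fraction of cells, turnover rate per day)
SUBPOPULATIONS = [(0.2, 0.5), (0.8, 0.01)]

def labeled_fraction(t, label_end):
    """Fraction of labeled DNA: uptake until label_end, then per-subpopulation decay."""
    total = 0.0
    for alpha, d in SUBPOPULATIONS:
        uptake = alpha * (1.0 - math.exp(-d * min(t, label_end)))
        if t <= label_end:
            total += uptake
        else:
            total += uptake * math.exp(-d * (t - label_end))
    return total

def initial_loss_rate(label_end, dt=0.01):
    """Normalized loss rate of labeled DNA just after label cessation."""
    f0 = labeled_fraction(label_end, label_end)
    f1 = labeled_fraction(label_end + dt, label_end)
    return (f0 - f1) / (f0 * dt)

short_label, long_label = initial_loss_rate(2.0), initial_loss_rate(20.0)
```

    A short labeling period loads mostly the fast subpopulation, so the label disappears quickly; a long period also loads the slow subpopulation, diluting the average loss rate — the negative correlation the abstract describes.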

  19. Enabling Integrated Decision Making for Electronic-Commerce by Modelling an Enterprise's Sharable Knowledge.

    Science.gov (United States)

    Kim, Henry M.

    2000-01-01

    An enterprise model, a computational model of knowledge about an enterprise, is a useful tool for integrated decision-making by e-commerce suppliers and customers. Sharable knowledge, once represented in an enterprise model, can be integrated by the modeled enterprise's e-commerce partners. Presents background on enterprise modeling, followed by…

  1. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
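
    The interpretability advantage of the linear side of such a comparison is easy to see in miniature: a least-squares fit assigns each descriptor a coefficient that is directly its contribution to predicted activity. The two-descriptor toy data set below is invented (standing in for real QSAR descriptors), solved with the 2x2 normal equations in pure Python:

```python
def fit_two_descriptor_ols(rows):
    """Ordinary least squares for y = w1*x1 + w2*x2 via the 2x2 normal equations."""
    s11 = sum(x1 * x1 for x1, _, _ in rows)
    s12 = sum(x1 * x2 for x1, x2, _ in rows)
    s22 = sum(x2 * x2 for _, x2, _ in rows)
    b1 = sum(x1 * y for x1, _, y in rows)
    b2 = sum(x2 * y for _, x2, y in rows)
    det = s11 * s22 - s12 * s12
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

# Toy activities generated from known contributions: +2.0 per ring, -0.5 per halogen
data = [(1, 0, 2.0), (2, 1, 3.5), (0, 2, -1.0), (3, 1, 5.5), (1, 3, 0.5)]
w_ring, w_halogen = fit_two_descriptor_ols(data)
```

    Interpreting a Random Forest requires extra machinery (such as the feature-contribution methods the paper compares) precisely because no such per-descriptor coefficient exists in the fitted forest.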

  2. New interpretation of arterial stiffening due to cigarette smoking using a structurally motivated constitutive model

    DEFF Research Database (Denmark)

    Enevoldsen, Marie Sand; Henneberg, Kaj-Åge; Jensen, Jørgen Arendt

    2011-01-01

    Cigarette smoking is the leading self-inflicted risk factor for cardiovascular diseases; it causes arterial stiffening with serious sequelae including atherosclerosis and abdominal aortic aneurysms. This work presents a new interpretation of arterial stiffening caused by smoking based on data...... caused by smoking was reflected by consistent increase in an elastin-associated parameter and moreover by marked increase in the collagen-associated parameters. That is, we suggest that arterial stiffening due to cigarette smoking appears to be isotropic, which may allow simpler phenomenological models...

  3. New interpretation of arterial stiffening due to cigarette smoking using a structurally motivated constitutive model

    DEFF Research Database (Denmark)

    Enevoldsen, Majken; Henneberg, K-A; Jensen, J A

    2011-01-01

    Cigarette smoking is the leading self-inflicted risk factor for cardiovascular diseases; it causes arterial stiffening with serious sequelae including atherosclerosis and abdominal aortic aneurysms. This work presents a new interpretation of arterial stiffening caused by smoking based on data...... by smoking was reflected by a consistent increase in an elastin-associated parameter and, moreover, by a marked increase in the collagen-associated parameters. That is, we suggest that arterial stiffening due to cigarette smoking appears to be isotropic, which may allow simpler phenomenological models to capture...

  4. Entropy-Based Model for Interpreting Life Systems in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Guo-lian Kang

    2008-01-01

    Full Text Available Traditional Chinese medicine (TCM) treats qi as the core of the human life systems. Starting with a hypothetical correlation between TCM qi and the entropy theory, we address in this article a holistic model for evaluating and unveiling the rule of TCM life systems. Several new concepts such as acquired life entropy (ALE), acquired life entropy flow (ALEF) and acquired life entropy production (ALEP) are propounded to interpret TCM life systems. Using the entropy theory, mathematical models are established for ALE, ALEF and ALEP, which reflect the evolution of life systems. Some criteria are given on physiological activities and pathological changes of the body in different stages of life. Moreover, a real data-based simulation shows that the life entropies of human bodies of different ages, with Cold and Hot constitutions, and in different seasons in North China coincide with the manifestations of qi as well as the life evolution in TCM descriptions. Especially, based on the comparative and quantitative analysis, the entropy-based model can nicely describe the evolution of life entropies in Cold and Hot individuals, thereby fitting the Yin–Yang theory in TCM. Thus, this work establishes a novel approach to interpret the fundamental principles in TCM, and provides an alternative understanding for the complex life systems.

  5. Caribbean sclerosponge radiocarbon measurements re-interpreted in terms of U/Th age models

    Energy Technology Data Exchange (ETDEWEB)

    Rosenheim, Brad E. [Woods Hole Oceanographic Institution, Department of Geology and Geophysics, MS 8, Woods Hole, MA 02546 (United States)]. E-mail: brosenheim@whoi.edu; Swart, Peter K. [University of Miami, Rosenstiel School of Marine and Atmospheric Science, Division of Marine Geology and Geophysics, Miami, FL (United States)

    2007-06-15

    Previously unpublished AMS radiocarbon measurements of a sclerosponge from Tongue of the Ocean (TOTO), Bahamas, as well as preliminary data from an investigation of the radiocarbon records of sclerosponges living at different depths in the adjacent Bahamas basin, Exuma Sound, are interpreted in terms of U-series age models. The data are compared to an existing Caribbean sclerosponge radiocarbon bomb curve measured using standard gas proportional beta counting and used to interpret a {sup 210}Pb age model. The {delta}{sup 14}C records from the sclerosponges illustrate a potential for use of radiocarbon both as a tracer of subsurface water masses and as an additional age constraint on recently sampled sclerosponges. By using an independent age model, this study lays the framework for utilizing sclerosponges from different locations in the tropics and subtropics and different depths within their wide depth range (0-250 m) to constrain changes in production of subtropical underwater in the Atlantic Ocean. This framework is significant because the proxy approach is necessary to supplement the short and coarse time series being used to constrain variability in the formation of Caribbean subtropical underwater, the return flow of a shallow circulation cell responsible for nearly 10% of the heat transported poleward in the N. Atlantic.

  6. Semiotic Interpretation of Lotka–Volterra Model and its Usage in Knowledge Management

    Directory of Open Access Journals (Sweden)

    Evdokimov Kirill E.

    2016-01-01

    Full Text Available The convergence of NBICS technologies makes relevant the exact definition of the spectrum of objective goals pursued by this self-organizing system of technologies. The authors consider the objective goals of this system of technologies as "semiotic attractors", and the tasks related to knowledge management in the NBICS-technologies niche as management of the competition between the goals that cause processes of creation, transmission, reception, usage and duplication of new knowledge. The competitive interaction of these goals (and their symbolizations) was researched on the grounds of the Lotka–Volterra model. An original interpretation of the Lotka–Volterra model is proposed on the basis of the stated interconnection between the stages of complex systems' non-linear dynamics, the information mechanisms of this self-organization, and the semiotic results of the stages of information processes. This synthesis of the synergetic, cybernetic and semiotic paradigms is implemented on the grounds of A. N. Whitehead's process philosophy. The semiotic interpretation of the model allowed determining the order of the goals' conversion and defining the stages of dynamics at which this transformation by means of knowledge management is constructive.

  7. Pre-Trip Expectations and Post-Trip Satisfaction with Marine Tour Interpretation in Hawaii: Applying the Norm Activation Model

    Science.gov (United States)

    Littlejohn, Kerrie; Needham, Mark D.; Szuster, Brian W.; Jordan, Evan J.

    2016-01-01

    This article examines environmental education by focusing on recreationist expectations for interpretation on marine tours, satisfaction with this interpretation and whether expectations were met, and how these perceptions correlate with components of the norm activation model. Recreationists were surveyed before and after tours to Molokini, Hawaii (n…

  8. On the Practical Interpretability of Cross-Lagged Panel Models: Rethinking a Developmental Workhorse.

    Science.gov (United States)

    Berry, Daniel; Willoughby, Michael T

    2017-07-01

    Reciprocal feedback processes between experience and development are central to contemporary developmental theory. Autoregressive cross-lagged panel (ARCL) models represent a common analytic approach intended to test such dynamics. The authors demonstrate that, despite the ARCL model's intuitive appeal, it typically (a) fails to align with the theoretical processes that it is intended to test and (b) yields estimates that are difficult to interpret meaningfully. Specifically, using a Monte Carlo simulation and two empirical examples concerning the reciprocal relation between spanking and child aggression, it is shown that the cross-lagged estimates derived from the ARCL model reflect a weighted, and typically uninterpretable, amalgam of between- and within-person associations. The authors highlight one readily implemented respecification that better addresses these multiple levels of inference. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
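The between/within-person confounding described in this record can be reproduced in a few lines. In the simulation below the true within-person cross-lagged effect of x on y is zero, yet a pooled ARCL-style regression recovers a clearly nonzero cross-lag because a stable person-level trait drives both variables. The setup is a minimal sketch under invented parameters, not the authors' Monte Carlo design.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
trait = rng.normal(size=n)              # stable person-level confound
x1 = trait + rng.normal(size=n)         # e.g., "spanking" at time 1
y1 = trait + rng.normal(size=n)         # e.g., "aggression" at time 1
# True within-person cross-lagged effect of x1 on y2 is ZERO by construction:
y2 = trait + 0.3 * y1 + rng.normal(size=n)

# Pooled ARCL-style regression y2 ~ 1 + y1 + x1 (ignores the person level)
X = np.column_stack([np.ones(n), y1, x1])
beta, *_ = np.linalg.lstsq(X, y2, rcond=None)
print(beta.round(3))  # beta[2], the cross-lag estimate, is biased away from zero
```

The nonzero cross-lag estimate here is purely an artifact of the unmodeled between-person trait, which is the amalgamation problem the article analyzes.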

  9. The Marine Virtual Laboratory (version 2.1): enabling efficient ocean model configuration

    Science.gov (United States)

    Oke, Peter R.; Proctor, Roger; Rosebrock, Uwe; Brinkman, Richard; Cahill, Madeleine L.; Coghlan, Ian; Divakaran, Prasanth; Freeman, Justin; Pattiaratchi, Charitha; Roughan, Moninya; Sandery, Paul A.; Schaeffer, Amandine; Wijeratne, Sarath

    2016-09-01

    The technical steps involved in configuring a regional ocean model are analogous for all community models. All require the generation of a model grid, preparation and interpolation of topography, initial conditions, and forcing fields. Each task in configuring a regional ocean model is straightforward - but the process of downloading and reformatting data can be time-consuming. For an experienced modeller, the configuration of a new model domain can take as little as a few hours - but for an inexperienced modeller, it can take much longer. In pursuit of technical efficiency, the Australian ocean modelling community has developed the Web-based MARine Virtual Laboratory (WebMARVL). WebMARVL allows a user to quickly and easily configure an ocean general circulation or wave model through a simple interface, reducing the time to configure a regional model to a few minutes. Through WebMARVL, a user is prompted to define the basic options needed for a model configuration, including the model, run duration, spatial extent, and input data. Once all aspects of the configuration are selected, a series of data extraction, reprocessing, and repackaging services are run, and a "take-away bundle" is prepared for download. Building on the capabilities developed under Australia's Integrated Marine Observing System, WebMARVL also extracts all of the available observations for the chosen time-space domain. The user is able to download the take-away bundle and use it to run the model of his or her choice. Models supported by WebMARVL include three community ocean general circulation models and two community wave models. The model configuration from the take-away bundle is intended to be a starting point for scientific research. The user may subsequently refine the details of the model set-up to improve the model performance for the given application. 
In this study, WebMARVL is described along with a series of results from test cases comparing WebMARVL-configured models to observations

  10. NUMERICAL MODELS AS ENABLING TOOLS FOR TIDAL-STREAM ENERGY EXTRACTION AND ENVIRONMENTAL IMPACT ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaoqing; Wang, Taiping

    2016-06-24

    This paper presents a modeling study conducted to evaluate tidal-stream energy extraction and its associated potential environmental impacts using a three-dimensional unstructured-grid coastal ocean model, which was coupled with a water-quality model and a tidal-turbine module.

  11. Under the pile. Understanding subsurface dynamics of historical cities through geophysical model interpretation

    Science.gov (United States)

    Bernardes, Paulo; Pereira, Bruno; Alves, Mafalda; Fontes, Luís; Sousa, Andreia; Martins, Manuela; Magalhães, Fernanda; Pimenta, Mário

    2017-04-01

    Braga is one of the oldest cities in the Iberian northwest, and the research team studying the city's historical core for the past 40 years is therefore often confronted with the unpredictability of what lies beneath an urban site with such a long construction history. In fact, Braga has kept redesigning its urban structure over itself for the past 2000 years, leaving us with a research object filled with an impressive set of construction footprints from the various planning decisions that were taken in the city along its historical path. Aiming for a predictive understanding of the subsoil, we have used near-surface geophysics in an effort to minimize the areas of intervention for traditional archaeological survey techniques. The Seminário de Santiago integrated geophysical survey is an example of the difficulties of interpreting geophysical models in very complex subsurface scenarios. This geophysical survey was planned in order to aid the requalification project being designed for this set of historical buildings, which are estimated to date back to the 16th century, and which were built over one of the main urban arteries of both the Roman and medieval layers of Braga. We used both GPR and ERT methods for the geophysical survey, but for the purpose of this article, we will focus on the use of ERT alone. For the interpretation of the geophysical models, we cross-referenced the dense existing knowledge of the building's construction phases with the complex geophysical data collected, using mathematical processing and volume-based visualization techniques and resorting to the Res2Inv©, Paraview© and Voxler® software packages. At the same time, we tried to pinpoint the noise caused by the past 30 years' infrastructural interventions involving the replacement of the building's water and sanitation systems, for which, despite their recent date, no design plans were available. The deep impact of these replacement actions revealed by the archaeological

  12. Environmental Model Interoperability Enabled by Open Geospatial Standards - Results of a Feasibility Study (Invited)

    Science.gov (United States)

    Benedict, K. K.; Yang, C.; Huang, Q.

    2010-12-01

    The availability of high-speed research networks such as the US National Lambda Rail and the GÉANT network, scalable on-demand commodity computing resources provided by public and private "cloud" computing systems, and increasing demand for rapid access to the products of environmental models for both research and public policy development contribute to a growing need for the evaluation and development of environmental modeling systems that distribute processing, storage, and data delivery capabilities between network connected systems. In an effort to address the feasibility of developing a standards-based distributed modeling system in which model execution systems are physically separate from data storage and delivery systems, the research project presented in this paper developed a distributed dust forecasting system in which two nested atmospheric dust models are executed at George Mason University (GMU, in Fairfax, VA) while data and model output processing services are hosted at the University of New Mexico (UNM, in Albuquerque, NM). Exchange of model initialization and boundary condition parameters between the servers at UNM and the model execution systems at GMU is accomplished through Open Geospatial Consortium (OGC) Web Coverage Services (WCS) and Web Feature Services (WFS) while model outputs are pushed from GMU systems back to UNM using a REST web service interface. In addition to OGC and non-OGC web services for exchange between UNM and GMU, the servers at UNM also provide access to the input meteorological model products, intermediate and final dust model outputs, and other products derived from model outputs through OGC WCS, WFS, and OGC Web Map Services (WMS). The performance of the nested versus non-nested models is assessed in this research, with the results of the performance analysis providing the core content of the produced feasibility study. System integration diagram illustrating the storage and service platforms hosted at the Earth Data

  13. An interpretation model of GPR point data in tunnel geological prediction

    Science.gov (United States)

    He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya

    2017-02-01

    GPR (Ground Penetrating Radar) point data plays an indispensable role in tunnel geological prediction. However, little research has been done on GPR point data, and the existing results do not meet the actual requirements of projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. First, the GPR point data is transformed by the WD to obtain a map of the joint time-frequency distribution; second, the joint distribution maps are classified by the deep CNN. The approximate location of the geological target is determined by observing the time-frequency map in parallel. Finally, the GPR point data is interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data) is 91.83% at the 200th iteration. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for the development of tunnel construction and excavation plans.
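A minimal sketch of the first step of such a pipeline is shown below, using an ordinary STFT spectrogram as a simpler stand-in for the Wigner distribution; the resulting frequency-by-time map is the kind of image that would be fed to a deep CNN classifier. The synthetic chirp-like trace is invented for illustration and is not GPR data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                              # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
# Synthetic "point" trace: a rising-frequency reflection plus noise
sig = np.sin(2 * np.pi * (50 + 100 * t) * t) + 0.1 * rng.normal(size=t.size)

# Time-frequency map (STFT spectrogram as a stand-in for the Wigner distribution)
f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128)
print(Sxx.shape)  # (frequencies, time segments): an image ready for a CNN
```

In the real pipeline each such map would be labeled by geological class and the CNN trained on the resulting image set.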

  14. Interpretation of Vector-like Quark Searches: Heavy Gluons in Composite Higgs Models

    CERN Document Server

    Araque, Juan Pedro; Santiago, Jose

    2015-01-01

    Pair production of new vector-like quarks in pp collisions is considered model independent as it is usually dominated by QCD production. We discuss the interpretation of vector-like quark searches in the case that QCD is not the only relevant production mechanism for the new quarks. In particular we consider the effect of a new massive color octet vector boson with sizeable decay branching ratio into the new quarks. We pay special attention to the sensitivity of the Large Hadron Collider experiments, both in run-1 and early run-2, to differences in the kinematical distributions from the different production mechanisms. We have found that even though there can be significant differences in some kinematical distributions at the parton level, the differences are washed out at the reconstruction level. Thus, the published experimental results can be reinterpreted in models with heavy gluons by simply rescaling the production cross section.

  15. Interpreting predictive maps of disease: highlighting the pitfalls of distribution models in epidemiology

    Directory of Open Access Journals (Sweden)

    Nicola A. Wardrop

    2014-11-01

    Full Text Available The application of spatial modelling to epidemiology has increased significantly over the past decade, delivering enhanced understanding of the environmental and climatic factors affecting disease distributions and providing spatially continuous representations of disease risk (predictive maps. These outputs provide significant information for disease control programmes, allowing spatial targeting and tailored interventions. However, several factors (e.g. sampling protocols or temporal disease spread can influence predictive mapping outputs. This paper proposes a conceptual framework which defines several scenarios and their potential impact on resulting predictive outputs, using simulated data to provide an exemplar. It is vital that researchers recognise these scenarios and their influence on predictive models and their outputs, as a failure to do so may lead to inaccurate interpretation of predictive maps. As long as these considerations are kept in mind, predictive mapping will continue to contribute significantly to epidemiological research and disease control planning.

  16. Discussion of using artificial neural nets to identify the well-test interpretation model

    Energy Technology Data Exchange (ETDEWEB)

    Yeung, K. (Univ. of Alberta, Edmonton, Alberta (Canada)); Chakrabarty, C. (Golder Associates, Nottingham (United Kingdom)); Wu, S. (Univ. of Melbourne (Australia))

    1994-09-01

    Use of artificial neural nets (ANNs) to identify noisy and apparently unrecognizable patterns is common for many real-world problems, ranging from applications such as speech recognition to stock market prediction. ANN approaches are often good candidates for recognizing patterns when rigid mathematical models do not exist or are insufficient to meet a full-scale identification requirement. Al-Kaabi and Lee's proposal of using ANNs to identify the well-test interpretation model is appropriate because well-test data are often highly nonlinear and noisy. The purpose of this discussion is to present some of the authors' results in a similar study and to suggest a simple technique that would enhance the use of ANNs in Al-Kaabi and Lee's approach.
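A toy version of this idea, with an off-the-shelf multilayer perceptron classifying two stylized, noisy "pressure-derivative" curve shapes, might look as follows. The curve shapes, noise level, and network settings are illustrative assumptions, not the setup of the discussed papers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
t = np.logspace(0, 3, 30)  # log-spaced "time" samples

def make_curve(label):
    # Stylized derivative shapes: class 0 flat (radial flow), class 1 rising (boundary)
    base = np.ones_like(t) if label == 0 else t / t[-1]
    return base + 0.05 * rng.normal(size=t.size)

X = np.array([make_curve(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X[:150], y[:150])
print(clf.score(X[150:], y[150:]))  # held-out classification accuracy
```

Real well-test responses are far noisier and the candidate interpretation models far more numerous, which is exactly why preprocessing tricks of the kind the discussion suggests matter.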

  17. Customer involvement in greening the supply chain: an interpretive structural modeling methodology

    Science.gov (United States)

    Kumar, Sanjay; Luthra, Sunil; Haleem, Abid

    2013-04-01

    The role of customers in green supply chain management needs to be identified and recognized as an important research area. This paper is an attempt to explore the involvement aspect of customers towards greening of the supply chain (SC). An empirical research approach has been used to collect primary data to rank different variables for effective customer involvement in green concept implementation in the SC. An interpretive structural model has been presented, and variables have been classified using matrice d'impacts croisés multiplication appliquée à un classement (MICMAC) analysis. Contextual relationships among variables have been established using experts' opinions. The research may help practicing managers to understand the interaction among variables affecting customer involvement. Further, this understanding may be helpful in framing the policies and strategies to green the SC. Analyzing the interaction among variables for effective customer involvement in greening the SC to develop the structural model in the Indian perspective is an effort towards promoting environmental consciousness.
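The driving power and dependence used in ISM/MICMAC analysis (here and in record 1 above) are simply the row and column sums of the final reachability matrix, after which each variable falls into one of four clusters. A minimal sketch with a hypothetical 4-variable reachability matrix:

```python
import numpy as np

# Hypothetical final reachability matrix: R[i, j] = 1 if variable i reaches j
R = np.array([
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

driving = R.sum(axis=1)      # row sums: how many variables each one drives
dependence = R.sum(axis=0)   # column sums: how many variables drive it

def micmac(drv, dep, mid):
    """Classify a variable into the four MICMAC clusters."""
    if drv > mid and dep > mid:
        return "linkage"
    if drv > mid:
        return "driver"
    if dep > mid:
        return "dependent"
    return "autonomous"

mid = R.shape[0] / 2
for i, (dr, de) in enumerate(zip(driving, dependence)):
    print(f"V{i + 1}: driving={dr}, dependence={de}, cluster={micmac(dr, de, mid)}")
```

High-driving/low-dependence variables are the enablers practitioners should act on first; high-dependence variables are outcomes of the rest of the system.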

  18. Nested by design: model fitting and interpretation in a mixed model era

    National Research Council Canada - National Science Library

    Schielzeth, Holger; Nakagawa, Shinichi; Freckleton, Robert

    2013-01-01

    ...‐effects models offer a powerful framework to do so. Nested effects can usually be fitted using the syntax for crossed effects in mixed models, provided that the coding reflects implicit nesting...

  19. Usage Intention Framework Model: A Fuzzy Logic Interpretation of the Classical Utaut Model

    Science.gov (United States)

    Sandaire, Johnny

    2009-01-01

    A fuzzy conjoint analysis (FCA: Turksen, 1992) model for enhancing management decision in the technology adoption domain was implemented as an extension to the UTAUT model (Venkatesh, Morris, Davis, & Davis, 2003). Additionally, a UTAUT-based Usage Intention Framework Model (UIFM) introduced a closed-loop feedback system. The empirical evidence…

  1. Open Knee: Open Source Modeling & Simulation to Enable Scientific Discovery and Clinical Care in Knee Biomechanics

    Science.gov (United States)

    Erdemir, Ahmet

    2016-01-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical function of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor intensive reproduction of model development steps can be avoided. The interested parties can immediately utilize readily available models for scientific discovery and for clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties and interactions. Three use cases illustrate customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators is also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next generation knee models is noted. 
These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  2. Parallelization and High-Performance Computing Enables Automated Statistical Inference of Multi-scale Models.

    Science.gov (United States)

    Jagiella, Nick; Rickert, Dennis; Theis, Fabian J; Hasenauer, Jan

    2017-02-22

    Mechanistic understanding of multi-scale biological processes, such as cell proliferation in a changing biological tissue, is readily facilitated by computational models. While tools exist to construct and simulate multi-scale models, the statistical inference of the unknown model parameters remains an open problem. Here, we present and benchmark a parallel approximate Bayesian computation sequential Monte Carlo (pABC SMC) algorithm, tailored for high-performance computing clusters. pABC SMC is fully automated and returns reliable parameter estimates and confidence intervals. By running the pABC SMC algorithm for ∼10^6 hr, we parameterize multi-scale models that accurately describe quantitative growth curves and histological data obtained in vivo from individual tumor spheroid growth in media droplets. The models capture the hybrid deterministic-stochastic behaviors of 10^5-10^6 cells growing in a 3D dynamically changing nutrient environment. The pABC SMC algorithm reliably converges to a consistent set of parameters. Our study demonstrates a proof of principle for robust, data-driven modeling of multi-scale biological systems and the feasibility of multi-scale model parameterization through statistical inference.
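The flavor of approximate Bayesian computation underlying pABC SMC can be conveyed with a plain rejection sampler; pABC SMC improves on this by sequentially tightening the tolerance and parallelizing the simulations across a cluster. The toy model below, estimating a normal mean from a summary statistic, is an illustrative assumption, not the paper's multi-scale tumor model.

```python
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(3.0, 1.0, size=200)   # "data" from an unknown mean near 3

def simulate(theta):
    """Forward simulation of the (toy) model at parameter theta."""
    return rng.normal(theta, 1.0, size=200)

# Plain ABC rejection: keep prior draws whose simulated summary is close to the data.
# pABC SMC refines this with a sequence of shrinking tolerances and parallel workers.
prior_draws = rng.uniform(0, 6, size=20000)
accepted = [th for th in prior_draws
            if abs(simulate(th).mean() - observed.mean()) < 0.1]
print(len(accepted), np.mean(accepted))  # accepted count and posterior-mean estimate
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes the approach viable for simulation-only multi-scale models.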

  3. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Osval A. Montesinos-López

    2017-06-01

    Full Text Available There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments.

  4. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation.

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  5. Environmental Models as a Service: Enabling Interoperability through RESTful Endpoints and API Documentation (presentation)

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantag...

  7. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  8. Parametric Generation of Polygonal Tree Models for Rendering on Tessellation-Enabled Hardware

    OpenAIRE

    Nystad, Jørgen

    2010-01-01

    The main contribution of this thesis is a parametric method for the generation of single-mesh polygonal tree models that follow natural rules as indicated by da Vinci in his notebooks. Following these rules allows for a relatively simple scheme of connecting branches to parent branches. Proper branch connection is a requirement for gaining the benefits of subdivision. Techniques for proper texture coordinate generation and subdivision are also explored. The result is a tree model generation scheme ...

  9. Enabling Energy-Awareness in the Semantic 3d City Model of Vienna

    Science.gov (United States)

    Agugiaro, G.

    2016-09-01

    This paper presents and discusses the first results regarding selection, analysis, preparation and eventual integration of a number of energy-related datasets, chosen in order to enrich a CityGML-based semantic 3D city model of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. The still-in-development Energy Application Domain Extension (ADE) is a CityGML extension conceived to specifically model, manage and store energy-related features and attributes for buildings. The work presented in this paper is embedded within the European Marie-Curie ITN project "CINERGY, Smart cities with sustainable energy systems", which aims, among other things, at developing urban decision making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is therefore vital to set up a common, unique and spatio-semantically coherent urban data model to be used as the information hub for all applications being developed. This paper reports on the experience gained so far: it describes the test area in Vienna, Austria, and the available data sources, and it shows and exemplifies the main data integration issues and the strategies developed to solve them in order to obtain the enriched 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.

  10. The DSET Tool Library: A software approach to enable data exchange between climate system models

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, J. [Lawrence Livermore National Lab., CA (United States)

    1994-12-01

    Climate modeling is a computationally intensive process. Until recently computers were not powerful enough to perform the complex calculations required to simulate the earth's climate. As a result, standalone programs were created that represent components of the earth's climate (e.g., Atmospheric Circulation Model). However, recent advances in computing, including massively parallel computing, make it possible to couple the components forming a complete earth climate simulation. The ability to couple different climate model components will significantly improve our ability to predict climate accurately and reliably. Historically, each major component of the coupled earth simulation is a standalone program designed independently with different coordinate systems and data representations. In order for two component models to be coupled, the data of one model must be mapped to the coordinate system of the second model. The focus of this project is to provide a general tool to facilitate the mapping of data between simulation components, with an emphasis on using object-oriented programming techniques to provide polynomial interpolation, line and area weighting, and aggregation services.

  11. Enabling high-quality observations of surface imperviousness for water runoff modelling from unmanned aerial vehicles

    Science.gov (United States)

    Tokarczyk, Piotr; Leitao, Joao Paulo; Rieckermann, Jörg; Schindler, Konrad; Blumensaat, Frank

    2015-04-01

    Modelling rainfall-runoff in urban areas is increasingly applied to support flood risk assessment, particularly against the background of a changing climate and increasing urbanization. These models typically rely on high-quality data for rainfall and surface characteristics of the area. While recent research in urban drainage has been focusing on providing spatially detailed rainfall data, the technological advances in remote sensing that ease the acquisition of detailed land-use information are less prominently discussed within the community. The relevance of such methods increases because, in many parts of the globe, accurate land-use information is lacking where detailed image data are unavailable. Modern unmanned aerial vehicles (UAVs) allow acquiring high-resolution images on a local level at comparably low cost, performing on-demand repetitive measurements, and obtaining a degree of detail tailored to the purpose of the study. In this study, we investigate for the first time the possibility to derive high-resolution imperviousness maps for urban areas from UAV imagery and to use this information as input for urban drainage models. To do so, an automatic processing pipeline with a modern classification method is tested and applied in a state-of-the-art urban drainage modelling exercise. In a real-life case study in the area of Lucerne, Switzerland, we compare imperviousness maps generated from a consumer micro-UAV and standard large-format aerial images acquired by the Swiss national mapping agency (swisstopo). After assessing their correctness, we perform an end-to-end comparison, in which they are used as an input for an urban drainage model. Then, we evaluate the influence which different image data sources and their processing methods have on hydrological and hydraulic model performance. We analyze the surface runoff of the 307 individual sub-catchments regarding relevant attributes, such as peak runoff and volume. Finally, we evaluate the model
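    In the simplest case, turning a classified image into a drainage-model input reduces to counting the fraction of impervious pixels per sub-catchment. A hedged sketch (the land-cover class names are invented for illustration and are not the classes used in the study):

```python
def imperviousness(classified, impervious_labels=("roof", "road", "pavement")):
    """Fraction of pixels in a classified raster (list of rows of class
    labels) that belong to impervious land-cover classes. Computed per
    sub-catchment, this fraction is a typical runoff-model input."""
    pixels = [label for row in classified for label in row]
    return sum(label in impervious_labels for label in pixels) / len(pixels)
```

Comparing this fraction between the UAV-derived and swisstopo-derived maps, catchment by catchment, is one way the end-to-end differences in modelled peak runoff and volume can be traced back to the input data.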

  12. Modeling and interpreting speckle pattern formation in swept-source optical coherence tomography (Conference Presentation)

    Science.gov (United States)

    Demidov, Valentin; Vitkin, I. Alex; Doronin, Alexander; Meglinski, Igor

    2017-03-01

    We report on the development of a unified Monte-Carlo based computational model for exploring speckle pattern formation in swept-source optical coherence tomography (OCT). OCT is a well-established optical imaging modality capable of acquiring cross-sectional images of turbid media, including biological tissues, utilizing back-scattered low-coherence light. The obtained OCT images include characteristic features known as speckles. Currently, there is growing interest in OCT speckle patterns due to their potential application for quantitative analysis of a medium's optical properties. Here we consider the mechanisms of OCT speckle pattern formation for swept-source OCT approaches and introduce further developments of a Monte-Carlo based model for simulation of OCT signals and images. The model takes into account the polarization and coherence properties of light, the mutual interference of back-scattered waves, and their interference with the reference waves. We present a corresponding detailed description of the algorithm for modeling these light-medium interactions. The developed model is employed for generation of swept-source OCT images, analysis of OCT speckle formation and interpretation of the experimental results. The obtained simulation results are compared with selected analytical solutions and experimental studies utilizing various sizes/concentrations of scattering microspheres.
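    The mechanism named above, mutual interference of back-scattered waves and their interference with a reference wave, can be illustrated with a toy random-phasor sum. This is a drastic simplification of the paper's polarization-aware Monte Carlo model; all amplitudes and parameter values below are arbitrary:

```python
import math
import cmath
import random

def speckle_intensity(n_scatterers, seed, wavelength=1.3e-6, depth_spread=1e-4):
    """Toy speckle model: coherently sum unit-amplitude back-scattered
    phasors with random round-trip path lengths, then interfere the
    total field with a constant zero-phase reference wave and detect
    the intensity |E|^2."""
    rng = random.Random(seed)
    k = 2.0 * math.pi / wavelength
    field = sum(cmath.exp(1j * 2.0 * k * rng.uniform(0.0, depth_spread))
                for _ in range(n_scatterers))
    reference = complex(n_scatterers, 0.0)  # arbitrary reference amplitude
    return abs(field + reference) ** 2
```

Re-running with different random scatterer configurations (different seeds) yields different intensities, which is the essence of speckle: the pattern encodes the microscopic arrangement of scatterers, not just their bulk optical properties.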

  13. Development of an HT-BP neural network system for the identification of well test interpretation model

    Energy Technology Data Exchange (ETDEWEB)

    Sung, W.; Hanyang, U.; Yoo, I. [and others]

    1995-12-31

    The neural network technique, a field of artificial intelligence (AI), has proved to be a good model classifier in all areas of engineering and, especially, has gained considerable acceptance in well test interpretation model (WTIM) identification in petroleum engineering. Conventionally, identification of the WTIM has been approached by graphical analysis methods that require an experienced expert. Recently, a neural network technique equipped with a back propagation (BP) learning algorithm was presented; it differs from symbolic AI techniques, which must be accompanied by data preparation procedures such as smoothing, segmenting, and symbolic transformation. In this paper, we developed a BP neural network with the Hough transform (HT) technique to overcome the data selection problem and to use a single neural network rather than sequential nets. The Hough transform method has proved to be a powerful tool for shape detection in image processing and computer vision. Along these lines, a number of exercises were conducted with actual well test data in two steps. First, the newly developed AI model, namely ANNIS (Artificial intelligence Neural Network Identification System), was utilized to identify the WTIM. Second, we obtained reservoir characteristics with the well test model equipped with a modified Levenberg-Marquardt method. The results show that ANNIS proved to be a reliable model for data having noisy, missing, and extraneous points. They also demonstrate that reservoir parameters were successfully estimated.
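    The Hough transform's robustness to noisy and missing points, which is what makes it attractive as a front end to the classifier, can be sketched for the simplest case of straight-line detection (the actual system detects well-test curve shapes; the bin counts and step sizes here are illustrative):

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, r_step=1.0):
    """Vote in (theta, r) space for the line r = x*cos(theta) + y*sin(theta)
    through each point; the most-voted cell is the dominant line.
    Returns ((theta_index, r_bin), vote_count)."""
    acc = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(r / r_step))] += 1
    return acc.most_common(1)[0]

# Ten collinear points on y = x: all ten votes pile into one cell.
cell, votes = hough_lines([(i, i) for i in range(10)])
```

Because each point contributes an independent vote, a few missing or spurious points only lower the peak slightly instead of breaking the detection, which is exactly the property exploited for shaky well-test data.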

  14. Characterization of Rock Mechanical Properties Using Lab Tests and Numerical Interpretation Model of Well Logs

    Directory of Open Access Journals (Sweden)

    Hao Xu

    2016-01-01

    Full Text Available The tight gas reservoir in the fifth member of the Xujiahe formation contains heterogeneous interlayers of sandstone and shale that are low in both porosity and permeability. Elastic characteristics of sandstone and shale are analyzed in this study based on petrophysics tests. The tests indicate that sandstone and mudstone samples have different stress-strain relationships. The rock tends to exhibit elastic-plastic deformation. The compressive strength correlates with confinement pressure and elastic modulus. The results based on thin-bed log interpretation match dynamic Young’s modulus and Poisson’s ratio predicted by theory. The compressive strength is calculated from density, elastic impedance, and clay contents. The tensile strength is calibrated using compressive strength. Shear strength is calculated with an empirical formula. Finally, log interpretation of rock mechanical properties is performed on the fifth member of the Xujiahe formation. Natural fractures in downhole cores and rock microscopic failure in the samples in the cross section demonstrate that tensile fractures were primarily observed in sandstone, and shear fractures can be observed in both mudstone and sandstone. Based on different elasticity and plasticity of different rocks, as well as the characteristics of natural fractures, a fracture propagation model was built.

  15. Trade-offs between accuracy and interpretability in von Bertalanffy random-effects models of growth.

    Science.gov (United States)

    Vincenzi, Simone; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J; Mangel, Marc

    2016-07-01

    Better understanding of variation in growth will always be an important problem in ecology. Individual variation in growth can arise from a variety of processes; for example, individuals within a population vary in their intrinsic metabolic rates and behavioral traits, which may influence their foraging dynamics and access to resources. However, when adopting a growth model, we face trade-offs between model complexity, biological interpretability of parameters, and goodness of fit. We explore how different formulations of the von Bertalanffy growth function (vBGF) with individual random effects and environmental predictors affect these trade-offs. In the vBGF, the growth of an organism results from a dynamic balance between anabolic and catabolic processes. We start from a formulation of the vBGF that models the anabolic coefficient (q) as a function of the catabolic coefficient (k), a coefficient related to the properties of the environment (γ), and a parameter that determines the relative importance of behavior and environment in determining growth (ψ). We treat the vBGF parameters as a function of individual random effects and environmental variables. We use simulations to show how different functional forms and individual or group variability in the growth function's parameters provide a very flexible description of growth trajectories. We then consider a case study of two fish populations of Salmo marmoratus and Salmo trutta to test the goodness of fit and predictive power of the models, along with the biological interpretability of vBGF's parameters when using different model formulations. The best models, according to AIC, included individual variability in both k and γ and cohort as a predictor of growth trajectories, and are consistent with the hypothesis that habitat selection is more important than behavioral and metabolic traits in determining lifetime growth trajectories of the two fish species. Model predictions of individual growth trajectories were
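    For readers unfamiliar with the vBGF, the standard length-at-age form with a lognormal individual random effect on the catabolic coefficient k can be sketched as follows. The paper's q, γ, ψ parameterization is richer than this; the asymptotic length, rate, and variance values below are arbitrary placeholders:

```python
import math
import random

def vbgf_length(t, L_inf, k, t0=0.0):
    """Standard von Bertalanffy growth: length approaches the asymptotic
    size L_inf at rate k, starting from theoretical age t0."""
    return L_inf * (1.0 - math.exp(-k * (t - t0)))

def simulate_trajectories(n, L_inf=60.0, k_mean=0.3, log_sd=0.15,
                          ages=range(1, 11), seed=1):
    """Simulate n individual growth trajectories, with individual
    variability modeled as a lognormal random effect on k."""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n):
        k_i = k_mean * math.exp(rng.gauss(0.0, log_sd))
        trajectories.append([vbgf_length(t, L_inf, k_i) for t in ages])
    return trajectories
```

Moving the random effect between k, γ, or both, and adding predictors such as cohort, is what generates the family of model formulations whose fit and interpretability the paper compares.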

  16. In Silico Model for Developmental Toxicity: How to Use QSAR Models and Interpret Their Results.

    Science.gov (United States)

    Marzo, Marco; Roncaglioni, Alessandra; Kulkarni, Sunil; Barton-Maclaren, Tara S; Benfenati, Emilio

    2016-01-01

    Modeling developmental toxicity has been a challenge for (Q)SAR model developers due to the complexity of the endpoint. Recently, some new in silico methods have been developed, introducing the possibility to evaluate the integration of existing methods by taking advantage of various modeling perspectives. It is important that the model user is aware of the underlying basis of the different models in general, as well as the considerations and assumptions relative to the specific predictions that are obtained from these different models for the same chemical. The evaluation of the predictions needs to be done on a case-by-case basis, checking the analogs (possibly using structural, physicochemical, and toxicological information); for this purpose, the assessment of the applicability domain of the models provides further confidence in the model prediction. In this chapter, we present some examples illustrating an approach to combine human-based rules and statistical methods to support the prediction of developmental toxicity; we also discuss assumptions and uncertainties of the methodology.

  17. A learning-enabled neuron array IC based upon transistor channel models of biological phenomena.

    Science.gov (United States)

    Brink, S; Nease, S; Hasler, P; Ramakrishnan, S; Wunderlich, R; Basu, A; Degnan, B

    2013-02-01

    We present a single-chip array of 100 biologically based electronic neuron models interconnected to each other and to the outside environment through 30,000 synapses. The chip was fabricated in a standard 350 nm CMOS IC process. Our approach used dense circuit models of synaptic behavior, including biological computation and learning, as well as transistor channel models. We use Address-Event Representation (AER) spike communication for inputs and outputs to this IC. We present the IC architecture and infrastructure, including the IC chip, configuration tools, and testing platform. We present measurements of a small network of neurons, of STDP neuron dynamics, and of a compiled spiking-neuron winner-take-all (WTA) topology, all realized on this IC.

  18. ENABLING “ENERGY-AWARENESS” IN THE SEMANTIC 3D CITY MODEL OF VIENNA

    Directory of Open Access Journals (Sweden)

    G. Agugiaro

    2016-09-01

    Full Text Available This paper presents and discusses the first results regarding selection, analysis, preparation and eventual integration of a number of energy-related datasets, chosen in order to enrich a CityGML-based semantic 3D city model of Vienna. CityGML is an international standard conceived specifically as an information and data model for semantic city models at urban and territorial scale. The still-in-development Energy Application Domain Extension (ADE) is a CityGML extension conceived to specifically model, manage and store energy-related features and attributes for buildings. The work presented in this paper is embedded within the European Marie-Curie ITN project “CINERGY, Smart cities with sustainable energy systems”, which aims, among other things, at developing urban decision making and operational optimisation software tools to minimise non-renewable energy use in cities. Given the scope and scale of the project, it is therefore vital to set up a common, unique and spatio-semantically coherent urban data model to be used as the information hub for all applications being developed. This paper reports on the experience gained so far: it describes the test area in Vienna, Austria, and the available data sources, and it shows and exemplifies the main data integration issues and the strategies developed to solve them in order to obtain the enriched 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.

  19. Multi-dimensional Magnetotelluric Modeling of General Anisotropy and Its Implication for Structural Interpretation

    Science.gov (United States)

    Guo, Z.; Wei, W.; Egbert, G. D.

    2015-12-01

    Although electrical anisotropy is likely at various scales in the Earth, present 3D inversion codes only allow for isotropic models. In fact, any effects of anisotropy present in real data can always be accommodated by (possibly fine-scale) isotropic structures. This suggests that some complex structures found in 3D inverse solutions (e.g., the alternating elongate conductive and resistive "streaks" of Meqbel et al. (2014)) may actually represent anisotropic layers. As a step towards better understanding how anisotropy is manifest in 3D inverse models, and to better incorporate anisotropy in 3D MT interpretations, we have implemented new 1D, 2D, and 3D forward modeling codes which allow for general anisotropy and are implemented in MATLAB using an object-oriented (OO) approach. The 1D code is used primarily to provide boundary conditions (BCs). For the 2D case we have used the OO approach to quickly develop and compare several variants, including different formulations (three coupled electric field components; one electric and one magnetic component coupled) and different discretizations (staggered and fixed grids). The 3D case is implemented in integral form on a staggered grid, using either 1D or 2D BCs. Iterative solvers, including divergence correction, allow solution for large model grids. As an initial application of these codes we are conducting synthetic inversion tests. We construct test models by replacing streaky conductivity layers, as found at the top of the mantle in the EarthScope models of Meqbel et al. (2014), with simpler smoothly varying anisotropic layers. The modeling process is iterated to obtain a reasonable match to actual data. Synthetic data generated from these 3D anisotropic models can then be inverted with a 3D code (ModEM) and compared to the inversions obtained with actual data. Results will be assessed, taking into account the diffusive nature of EM imaging, to better understand how actual anisotropy is mapped to structure by 3D

  20. Neonatal tolerance induction enables accurate evaluation of gene therapy for MPS I in a canine model.

    Science.gov (United States)

    Hinderer, Christian; Bell, Peter; Louboutin, Jean-Pierre; Katz, Nathan; Zhu, Yanqing; Lin, Gloria; Choa, Ruth; Bagel, Jessica; O'Donnell, Patricia; Fitzgerald, Caitlin A; Langan, Therese; Wang, Ping; Casal, Margret L; Haskins, Mark E; Wilson, James M

    2016-09-01

    High fidelity animal models of human disease are essential for preclinical evaluation of novel gene and protein therapeutics. However, these studies can be complicated by exaggerated immune responses against the human transgene. Here we demonstrate that dogs with a genetic deficiency of the enzyme α-l-iduronidase (IDUA), a model of the lysosomal storage disease mucopolysaccharidosis type I (MPS I), can be rendered immunologically tolerant to human IDUA through neonatal exposure to the enzyme. Using MPS I dogs tolerized to human IDUA as neonates, we evaluated intrathecal delivery of an adeno-associated virus serotype 9 vector expressing human IDUA as a therapy for the central nervous system manifestations of MPS I. These studies established the efficacy of the human vector in the canine model, and allowed for estimation of the minimum effective dose, providing key information for the design of first-in-human trials. This approach can facilitate evaluation of human therapeutics in relevant animal models, and may also have clinical applications for the prevention of immune responses to gene and protein replacement therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Quality Concerns in Technical Education in India: A Quantifiable Quality Enabled Model

    Science.gov (United States)

    Gambhir, Victor; Wadhwa, N. C.; Grover, Sandeep

    2016-01-01

    Purpose: The paper aims to discuss current Technical Education scenarios in India. It proposes modelling the factors affecting quality in a technical institute and then applying a suitable technique for assessment, comparison and ranking. Design/methodology/approach: The paper chose a graph-theoretic approach for quantification of quality-enabled…

  2. Thermal modelling approaches to enable mitigation measures implementation for salmonid gravel stages in hydropeaking rivers

    Science.gov (United States)

    Casas-Mulet, R.; Alfredsen, K. T.

    2016-12-01

    The dewatering of salmon spawning redds due to hydropeaking operations can lead to mortality at early life stages, with a higher impact on the alevin stage, which has a lower tolerance to dewatering than the eggs. Targeted flow-related mitigation measures can reduce such mortality, but it is essential to understand how hydropeaking changes thermal regimes in rivers and may impact embryo development; only then can optimal measures be implemented at the right development stage. We present a set of experimental approaches and modelling tools for the estimation of hatch and swim-up dates based on water temperature data in the river Lundesokna (Norway). We identified critical periods for gravel-stage survival and, by comparing hydropeaking vs. unregulated thermal and hydrological regimes, we established potential flow-release measures to minimise mortality. Modelling outcomes were then used to assess the cost-efficiency of each measure. The combination of modelling tools used in this study was overall satisfactory, and their application can be useful especially in systems where little field data is available. Targeted measures built on well-informed modelling approaches can be pre-tested based on their efficiency to mitigate dewatering effects vs. the hydropower system's capacity to release or conserve water for power production. Overall, environmental flow releases targeting specific ecological objectives can provide more cost-effective options than conventional operational rules complying with general legislation.
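    Estimating hatch or swim-up dates from water temperature, as the modelling tools above do, is commonly based on accumulated thermal units (degree-days). A hedged sketch of that idea (the 480 degree-day requirement and the zero-degree threshold are placeholder values, not figures from the study):

```python
def hatch_day(daily_temp_c, required_degree_days=480.0, threshold=0.0):
    """Accumulate thermal units (degrees above `threshold` per day) and
    return the 0-based day index on which the hatching requirement is
    first met, or None if the temperature series is too short."""
    total = 0.0
    for day, temp in enumerate(daily_temp_c):
        total += max(temp - threshold, 0.0)
        if total >= required_degree_days:
            return day
    return None
```

Running the same accumulation on hydropeaking-influenced vs. unregulated temperature series shifts the predicted hatch and swim-up dates, which is what identifies the critical windows for flow-release measures.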

  3. Developmental Impact Analysis of an ICT-Enabled Scalable Healthcare Model in BRICS Economies

    Directory of Open Access Journals (Sweden)

    Dhrubes Biswas

    2012-06-01

    Full Text Available This article highlights the need for initiating a healthcare business model in a grassroots, emerging-nation context. This article's backdrop is a history of chronic anomalies afflicting the healthcare sector in India and similarly placed BRICS nations. In these countries, a significant percentage of the population remains deprived of basic healthcare facilities and emergency services. Community (primary) care services are being offered by public and private stakeholders as a panacea to the problem. Yet, there is an urgent need for specialized (tertiary) care services at all levels. As a response to this challenge, an all-inclusive health-exchange system (HES) model, which utilizes information communication technology (ICT) to provide solutions in rural India, has been developed. The uniqueness of the model lies in its innovative hub-and-spoke architecture and its emphasis on affordability, accessibility, and availability to the masses. This article describes a developmental impact analysis (DIA) that was used to assess the impact of this model. The article contributes to the knowledge base of readers by making them aware of the healthcare challenges emerging nations are facing and ways to mitigate those challenges using entrepreneurial solutions.

  4. Plant parameters for plant functional groups of western rangelands to enable process-based simulation modeling

    Science.gov (United States)

    Regional environmental assessments with process-based models require realistic estimates of plant parameters for the primary plant functional groups in the region. “Functional group” in this context is an operational term, based on similarities in plant type and in plant parameter values. Likewise...

  5. Alternative 3D Modeling Approaches Based on Complex Multi-Source Geological Data Interpretation

    Institute of Scientific and Technical Information of China (English)

    李明超; 韩彦青; 缪正建; 高伟

    2014-01-01

    Due to the complex nature of multi-source geological data, it is difficult to rebuild every geological structure through a single 3D modeling method. The multi-source data interpretation method put forward in this analysis is based on a database-driven pattern and focuses on the discrete and irregular features of geological data. The geological data from a variety of sources, covering a range of accuracy, resolution, quantity and quality, are classified and integrated according to their reliability and consistency for 3D modeling. A new interpolation-approximation fitting construction algorithm for geological surfaces with the non-uniform rational B-spline (NURBS) technique is then presented. The NURBS technique can retain the balance among the requirements for accuracy, surface continuity and data storage of geological structures. Finally, four alternative 3D modeling approaches are demonstrated with reference to some examples, which are selected according to the data quantity and accuracy specification. The proposed approaches offer flexible modeling patterns for different practical engineering demands.
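    The NURBS technique named above evaluates curves and surfaces as weighted blends of control points through the Cox-de Boor recursion. A minimal curve-evaluation sketch (one parametric dimension only; the paper's surface fitting extends this to two, and the values below are illustrative):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: the i-th B-spline basis function of
    degree p at parameter u. Valid for u inside the clamped knot
    range, i.e. knots[p] <= u < knots[-p-1]."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, ctrl, weights, p, knots):
    """Rational blend: weighted basis combination of control values.
    The weights are what make NURBS 'rational' and give local control
    over how strongly each data point attracts the surface."""
    num = den = 0.0
    for i, (c, w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, u, knots) * w
        num += b * c
        den += b
    return num / den
```

The interpolation-approximation trade-off in the paper comes from choosing how many control points (and knots) to spend: more control points reproduce scattered geological data exactly, fewer smooth over noise and reduce storage.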

  6. Interpreting microarray data to build models of microbial genetic regulation networks

    Science.gov (United States)

    Sokhansanj, Bahrad A.; Garnham, Janine B.; Fitch, J. Patrick

    2002-06-01

    Microarrays and DNA chips are an efficient, high-throughput technology for measuring temporal changes in the expression of messenger RNA (mRNA) from thousands of genes (often the entire genome of an organism) in a single experiment. A crucial drawback of microarray experiments is that results are inherently qualitative: data are generally neither quantitatively repeatable, nor may microarray spot intensities be calibrated to in vivo mRNA concentrations. Nevertheless, microarrays represent by far the cheapest and fastest way to obtain information about a cell's global genetic regulatory networks. Besides poor signal characteristics, the massive volume of data produced by microarray experiments poses challenges for visualization, interpretation and model building. Towards initial model development, we have developed a Java tool for visualizing the spatial organization of gene expression in bacteria. We are also developing an approach to inferring and testing qualitative fuzzy logic models of gene regulation using microarray data. Because we are developing and testing qualitative hypotheses that do not require quantitative precision, our statistical evaluation of experimental data is limited to checking for validity and consistency. Our goals are to maximize the impact of inexpensive microarray technology, bearing in mind that biological models and hypotheses are typically qualitative.

  7. Interpreting Microarray Data to Build Models of Microbial Genetic Regulation Networks

    Energy Technology Data Exchange (ETDEWEB)

    Sokhansanj, B; Garnham, J B; Fitch, J P

    2002-01-23

    Microarrays and DNA chips are an efficient, high-throughput technology for measuring temporal changes in the expression of messenger RNA (mRNA) from thousands of genes (often the entire genome of an organism) in a single experiment. A crucial drawback of microarray experiments is that results are inherently qualitative: data are generally neither quantitatively repeatable, nor may microarray spot intensities be calibrated to in vivo mRNA concentrations. Nevertheless, microarrays represent by far the cheapest and fastest way to obtain information about a cell's global genetic regulatory networks. Besides poor signal characteristics, the massive volume of data produced by microarray experiments poses challenges for visualization, interpretation and model building. Towards initial model development, we have developed a Java tool for visualizing the spatial organization of gene expression in bacteria. We are also developing an approach to inferring and testing qualitative fuzzy logic models of gene regulation using microarray data. Because we are developing and testing qualitative hypotheses that do not require quantitative precision, our statistical evaluation of experimental data is limited to checking for validity and consistency. Our goals are to maximize the impact of inexpensive microarray technology, bearing in mind that biological models and hypotheses are typically qualitative.

  8. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    Science.gov (United States)

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize real-world physical devices and information, forming virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting raw data into meaningful information, connecting that information to form knowledge, and storing and reusing the objects in the knowledge base can all be expressed by the semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual-world knowledge model on a Web of Object platform is proposed. To realize context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of an emergency service.

  9. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    Directory of Open Access Journals (Sweden)

    Muhammad Golam Kibria

    2015-09-01

    Full Text Available User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize real-world physical devices and information, forming virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting raw data into meaningful information, connecting that information to form knowledge, and storing and reusing the objects in the knowledge base can all be expressed by the semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual-world knowledge model on a Web of Object platform is proposed. To realize context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of an emergency service.

  10. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    Industries need to adopt environmental management concepts in traditional supply chain management. Green supply chain management (GSCM) is an established concept to ensure environment-friendly activities in industry. This paper identifies the relationship of driving and dependence...... that exists between GSCM practices with regard to their adoption within the Brazilian electrical/electronics industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...... as a vital role among other practices. Commitment to GSCM from senior managers and cooperation with customers for cleaner production occupy the highest level. © 2013 Taylor & Francis....

  11. Interpretation of graphene mobility data by means of a semiclassical Monte Carlo transport model

    Science.gov (United States)

    Bresciani, M.; Palestri, P.; Esseni, D.; Selmi, L.; Szafranek, B.; Neumaier, D.

    2013-11-01

    In this paper we compare experimental data and simulations based on a semiclassical model in order to investigate the relative importance of several scattering mechanisms for the mobility of graphene nano-ribbons. Furthermore, some new experimental results complementing the range of ribbon widths available in the literature are also reported. We show that scattering with remote phonons originating in the substrate insulator can appreciably reduce the mobility of graphene and should not be neglected in the interpretation of graphene mobility data. In fact, by accounting for remote phonon scattering we could reproduce fairly well the experimentally observed dependence of the mobility on the ribbon width, the temperature and the inversion density, whereas the agreement with experiments is much worse when remote phonons are not included in the calculations.

  12. Analysis of interactions among the barriers to JIT production: interpretive structural modelling approach

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2015-12-01

    `Survival of the fittest' is the reality in modern global competition. Organizations around the globe are adopting, or are willing to embrace, just-in-time (JIT) production to reinforce their competitiveness. Even though JIT is one of the most powerful inventory management methodologies, it is not free from barriers, which derail the implementation of a JIT production system. One of the most significant tasks of top management is to identify and understand the relationships between the barriers to JIT production in order to mitigate their adverse effects. The aims of this paper are to study the barriers hampering successful implementation of JIT production and to analyse the interactions among these barriers using the interpretive structural modelling technique. Twelve barriers have been identified from a review of the literature. This paper offers a roadmap for preparing an action plan to tackle the barriers to successful implementation of JIT production.
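
    The core ISM step that the records above rely on is mechanical: pairwise influence judgments form an adjacency matrix, its transitive closure gives the final reachability matrix, and row/column sums give the MICMAC driving power and dependence. A minimal sketch over a hypothetical 4-barrier adjacency matrix (the barriers and links are illustrative, not from the paper):

```python
# Illustrative ISM/MICMAC computation on a hypothetical 4-barrier
# adjacency matrix (entry [i][j] = 1 means "barrier i influences j").
def transitive_closure(adj):
    """Warshall's algorithm: final reachability matrix, self-loops included."""
    n = len(adj)
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return [[int(v) for v in row] for row in reach]

adj = [
    [0, 1, 1, 0],   # barrier 1 drives barriers 2 and 3
    [0, 0, 1, 0],   # barrier 2 drives barrier 3
    [0, 0, 0, 0],   # barrier 3 drives none directly
    [1, 0, 0, 0],   # barrier 4 drives barrier 1
]
reach = transitive_closure(adj)
driving = [sum(row) for row in reach]           # MICMAC driving power
dependence = [sum(col) for col in zip(*reach)]  # MICMAC dependence
print(driving, dependence)
```

Plotting each barrier at (dependence, driving power) yields the usual MICMAC quadrants (autonomous, dependent, linkage, driver); here barrier 4 has the highest driving power and barrier 3 the highest dependence.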

  13. Model-based interpretation of the ECG: a methodology for temporal and spatial reasoning.

    Science.gov (United States)

    Tong, D A; Widman, L E

    1993-06-01

    A new software architecture for automatic interpretation of the electrocardiographic rhythm is presented. Using the hypothesize-and-test paradigm, a semiquantitative physiological model and production rule-based knowledge are combined to reason about time- and space-varying characteristics of complex heart rhythms. A prototype system implementing the methodology accepts a semiquantitative description of the onset and morphology of the P waves and QRS complexes that are observed in the body-surface electrocardiogram. A beat-by-beat explanation of the origin and consequences of each wave is produced. The output is in the standard cardiology laddergram format. The current prototype generates the full differential diagnosis of narrow-complex tachycardia and correctly diagnoses complex rhythms, such as atrioventricular (AV) nodal reentrant tachycardia with either hidden or visible P waves and varying degrees of AV block.

  14. RNA-seq in the tetraploid Xenopus laevis enables genome-wide insight in a classic developmental biology model organism.

    Science.gov (United States)

    Amin, Nirav M; Tandon, Panna; Osborne Nishimura, Erin; Conlon, Frank L

    2014-04-01

    Advances in sequencing technology have reshaped the landscape of developmental biology research. The dissection of genetic networks in model and non-model organisms has been greatly enhanced with high-throughput sequencing technologies. RNA-seq has revolutionized the ability to perform developmental biology research in organisms without a published genome sequence. Here, we describe a protocol for developmental biologists to perform RNA-seq on dissected tissue or whole embryos. We start with the isolation of RNA and generation of sequencing libraries. We further show how to interpret and analyze the large amount of sequencing data that is generated in RNA-seq. We explore the ability to examine differential expression, gene duplication, transcript assembly, alternative splicing and SNP discovery. For the purposes of this article, we use Xenopus laevis as the model organism to discuss uses of RNA-seq in an organism without a fully annotated genome sequence. Copyright © 2013. Published by Elsevier Inc.

  15. Time-varying effect modeling with longitudinal data truncated by death: conditional models, interpretations, and inference.

    Science.gov (United States)

    Estes, Jason P; Nguyen, Danh V; Dalrymple, Lorien S; Mu, Yi; Şentürk, Damla

    2016-05-20

    Recent studies found that infection-related hospitalization was associated with increased risk of cardiovascular (CV) events, such as myocardial infarction and stroke in the dialysis population. In this work, we develop time-varying effects modeling tools in order to examine the CV outcome risk trajectories during the time periods before and after an initial infection-related hospitalization. For this, we propose partly conditional and fully conditional partially linear generalized varying coefficient models (PL-GVCMs) for modeling time-varying effects in longitudinal data with substantial follow-up truncation by death. An unconditional model, which implicitly targets an immortal population, is not a relevant target of inference in applications involving a population with high mortality, like the dialysis population. A partly conditional model characterizes the outcome trajectory for the dynamic cohort of survivors, where each point in the longitudinal trajectory represents a snapshot of the population relationships among subjects who are alive at that time point. In contrast, a fully conditional approach models the time-varying effects of the population stratified by the actual time of death, where the mean response characterizes individual trends in each cohort stratum. We compare and contrast partly and fully conditional PL-GVCMs in our aforementioned application using hospitalization data from the United States Renal Data System. For inference, we develop generalized likelihood ratio tests. Simulation studies examine the efficacy of estimation and inference procedures.

  16. A probabilistic generative model for quantification of DNA modifications enables analysis of demethylation pathways.

    Science.gov (United States)

    Äijö, Tarmo; Huang, Yun; Mannerström, Henrik; Chavez, Lukas; Tsagaratou, Ageliki; Rao, Anjana; Lähdesmäki, Harri

    2016-03-14

    We present a generative model, Lux, to quantify DNA methylation modifications from any combination of bisulfite sequencing approaches, including reduced, oxidative, TET-assisted, chemical-modification assisted, and methylase-assisted bisulfite sequencing data. Lux models all cytosine modifications (C, 5mC, 5hmC, 5fC, and 5caC) simultaneously together with experimental parameters, including bisulfite conversion and oxidation efficiencies, as well as various chemical labeling and protection steps. We show that Lux improves the quantification and comparison of cytosine modification levels and that Lux can process any oxidized methylcytosine sequencing data sets to quantify all cytosine modifications. Analysis of targeted data from Tet2-knockdown embryonic stem cells and T cells during development demonstrates DNA modification quantification at unprecedented detail, quantifies active demethylation pathways and reveals 5hmC localization in putative regulatory regions.
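
    As a rough illustration of the generative idea (a simplified sketch in the spirit of Lux, not its actual implementation), the code below models the probability that a read reports 'C' under plain bisulfite and oxidative bisulfite experiments, given assumed modification fractions and conversion/oxidation efficiencies, and evaluates a binomial log-likelihood for observed counts. All fractions, efficiencies, and counts are hypothetical:

```python
import math

def p_read_C(frac, bs_eff, ox_eff=0.0, oxidative=False):
    """Probability that a sequencing read reports 'C' at one cytosine.
    Simplified generative assumptions: unmethylated C converts (reads 'T')
    with probability bs_eff; 5mC is protected; 5hmC is protected unless
    the oxidative step first turns it into convertible 5fC."""
    fC, f5mC, f5hmC = frac
    p = (1 - bs_eff) * fC + f5mC
    if oxidative:
        p += f5hmC * (1 - ox_eff)            # escaped oxidation: protected
        p += f5hmC * ox_eff * (1 - bs_eff)   # oxidized but unconverted
    else:
        p += f5hmC                           # plain BS: 5hmC protected
    return p

def binom_loglik(k, n, p):
    """Log-likelihood of observing k 'C' readouts among n reads."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

frac = (0.2, 0.5, 0.3)  # hypothetical (C, 5mC, 5hmC) fractions
p_bs = p_read_C(frac, bs_eff=0.99)
p_ox = p_read_C(frac, bs_eff=0.99, ox_eff=0.95, oxidative=True)
ll = binom_loglik(80, 100, p_bs)
print(p_bs, p_ox, ll)
```

Because the two experiments respond differently to 5hmC, combining their likelihoods makes the individual modification fractions identifiable, which is the point of modeling the experimental parameters explicitly.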

  17. Cultural Resources as Sustainability Enablers: Towards a Community-Based Cultural Heritage Resources Management (COBACHREM) Model

    Directory of Open Access Journals (Sweden)

    Susan O. Keitumetse

    2013-12-01

    Full Text Available People inhabit and change environments using socio-cultural and psycho-social behaviors and processes. People use their socio-cultural understanding of phenomena to interact with the environment. People are carriers of cultural heritage. These characteristics make cultural values ubiquitous in all people-accessed and people-inhabited geographic spaces of the world, making people readily available assets through which environmental sustainability can be implemented. Yet, people’s conservation development is rarely planned using cultural resources. It is against this background that a Community-Based Cultural Heritage Resources Management (COBACHREM) model is initiated as a new approach that outlines the symbiosis between cultural heritage, environment and various stakeholders, with a view to creating awareness about neglected conservation indicators inherent in cultural resources that are well placed to complement already existing natural resources conservation indicators. The model constitutes a two-phased process with four (04) levels of operation, namely level I (production), level II (reproduction) and level III (consumption), which distinguish specific components of cultural heritage resources to be monitored at level IV for sustainability using identified cultural conservation indicators. Monitored indicators, which are limitless, constitute work in progress of the model and will be constantly reviewed, renewed and updated through time. Examples of monitoring provided in this article are the development of a cultural competency-based training curriculum that will assist communities to transform cultural information into certifiable intellectual (educational) and culture-economic (tourism) assets. Another monitoring example is the mainstreaming of community cultural qualities into already existing environmental conservation frameworks, such as eco-certification, to infuse new layers of conservation indicators that enrich resource sustainability.

  18. Robust Workflow Systems + Flexible Geoprocessing Services = Geo-enabled Model Web?

    OpenAIRE

    GRANELL CANUT CARLOS

    2013-01-01

    The chapter begins by briefly exploring the concept of modeling in geosciences, which notably benefits from advances in the integration of geoprocessing services and workflow systems. In section 3, we provide a comprehensive background on the technology trends we treat in the chapter. On the one hand, we deal with workflow systems, normally categorized in the literature as scientific and business workflow systems (Barga and Gannon 2007). In particular, we introduce some prominent examples of scient...

  19. Distributed analysis of simultaneous EEG-fMRI time-series: modeling and interpretation issues.

    Science.gov (United States)

    Esposito, Fabrizio; Aragri, Adriana; Piccoli, Tommaso; Tedeschi, Gioacchino; Goebel, Rainer; Di Salle, Francesco

    2009-10-01

    Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) represent brain activity in terms of a reliable anatomical localization and a detailed temporal evolution of neural signals. Simultaneous EEG-fMRI recordings offer the possibility to greatly enrich the significance and the interpretation of the single modality results because the same neural processes are observed from the same brain at the same time. Nonetheless, the different physical nature of the measured signals by the two techniques renders the coupling not always straightforward, especially in cognitive experiments where spatially localized and distributed effects coexist and evolve temporally at different temporal scales. The purpose of this article is to illustrate the combination of simultaneously recorded EEG and fMRI signals exploiting the principles of EEG distributed source modeling. We define a common source space for fMRI and EEG signal projection and gather a conceptually unique framework for the spatial and temporal comparative analysis. We illustrate this framework in a graded-load working-memory simultaneous EEG-fMRI experiment based on the n-back task where sustained load-dependent changes in the blood-oxygenation-level-dependent (BOLD) signals during continuous item memorization co-occur with parametric changes in the EEG theta power induced at each single item. In line with previous studies, we demonstrate on two single-subject cases how the presented approach is capable of colocalizing in midline frontal regions two phenomena simultaneously observed at different temporal scales, such as the sustained negative changes in BOLD activity and the parametric EEG theta synchronization. We discuss the presented approach in relation to modeling and interpretation issues typically arising in simultaneous EEG-fMRI studies.

  20. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
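
    The paper's stochastic-geometry and queueing analysis is beyond a short sketch, but the qualitative trade-off between persistent transmission and backoff can be illustrated with a toy slotted random-access simulation. Everything here (device count, transmit probability, backoff window) is a hypothetical choice for illustration, not a parameter from the paper:

```python
import random

def simulate(n_devices, slots, p_tx, backoff_max=0, seed=1):
    """Toy slotted random access: each non-deferred device transmits with
    probability p_tx; a slot succeeds only when exactly one device
    transmits.  After a collision, colliding devices defer for a random
    number of slots (backoff_max == 0 means fully persistent)."""
    rng = random.Random(seed)
    defer = [0] * n_devices
    successes = 0
    for _ in range(slots):
        txers = [i for i in range(n_devices)
                 if defer[i] == 0 and rng.random() < p_tx]
        defer = [max(0, d - 1) for d in defer]
        if len(txers) == 1:
            successes += 1
        elif len(txers) > 1 and backoff_max:
            for i in txers:
                defer[i] = rng.randint(1, backoff_max)
    return successes / slots

persistent = simulate(50, 2000, 0.05)                   # no backoff
with_backoff = simulate(50, 2000, 0.05, backoff_max=8)  # random backoff
print(persistent, with_backoff)
```

Sweeping the device count in such a simulation exposes the scalability cliff the abstract describes: success probability collapses under persistency as load grows, while backoff trades delay for stability.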

  1. Better interpretation of snow remote sensing data with physics-based models

    Science.gov (United States)

    Sandells, M.; Davenport, I. J.; Quaife, T. L.; Flerchinger, G. N.; Marks, D. G.; Gurney, R. J.

    2012-12-01

    Interpretation of remote sensing data requires a model and some assumptions, and the quality of the end product depends on the accuracy and appropriateness of these. Snow is a vital component of the water cycle, both socially and economically, so accurate monitoring of this resource is important. However, the snow mass products from passive microwave data may have large errors in them, and were deemed too unreliable for consideration in the latest Intergovernmental Panel on Climate Change Assessment Report. The SSM/I passive microwave snow mass retrieval algorithm uses a linear brightness temperature difference model, and assumptions that snow has a fixed grain diameter of 0.8mm and density of 300 kg m-3. In reality, the properties of the snow vary in time and space depending on its thermal history, and scattering of microwave radiation is very sensitive to snow properties. If snow mass retrievals are to be made from remote sensing data, then these properties must be known rather well. Layered physics-based models are capable of simulating the evolution of profiles of temperature, water content in the snow or soil, and snow grain size. These simulations could be used to provide information to help understand remote sensing data. Additional information from other remote sensing sources could enhance the accuracy of the product. For example, surface snow grain size can be obtained from near-infrared reflectance observations, and these data can be used to constrain the physically-based model, as could thermal observations. Here, we will present a new method that could be used to derive better estimates of snow mass and soil moisture. The system is comprised of a physically-based model of the snow and soil to derive snow and soil properties, a snow microwave emission model to estimate the satellite observations and ancillary data to constrain the physically-based model. These components will be used to estimate snow mass from passive microwave data with data
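
    The linear brightness-temperature-difference retrieval whose fixed-snow-property assumptions are criticized above fits in a few lines. The sketch below uses a Chang-type form with a commonly cited coefficient of 4.8 mm/K; the input brightness temperatures are hypothetical:

```python
def swe_chang(tb19h, tb37h, coeff=4.8):
    """Chang-type linear retrieval: snow water equivalent (mm) from the
    19 GHz minus 37 GHz horizontally polarized brightness-temperature
    difference (K).  The single coefficient embeds the fixed grain-size
    and density assumptions discussed in the text, which is exactly why
    the retrieval degrades when real snow deviates from them."""
    return max(0.0, coeff * (tb19h - tb37h))

print(swe_chang(245.0, 220.0))  # hypothetical brightness temperatures
```

The physically based approach described above effectively replaces the fixed coefficient with grain-size and density profiles evolved by the snow model, so the emission model sees realistic scattering properties instead of climatological constants.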

  2. SciDAC-Data, A Project to Enabling Data Driven Modeling of Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, M.; Ding, P.; Aliaga, L.; Tsaris, A.; Norman, A.; Lyon, A.; Ross, R.

    2016-10-10

    The SciDAC-Data project is a DOE-funded initiative to analyze and exploit two decades of information and analytics that have been collected by the Fermilab Data Center on the organization, movement, and consumption of High Energy Physics (HEP) data. The project will analyze the analysis patterns and data organization that have been used by the NOvA, MicroBooNE, MINERvA and other experiments to develop realistic models of HEP analysis workflows and data processing. The SciDAC-Data project aims to provide both realistic input vectors and corresponding output data that can be used to optimize and validate simulations of HEP analysis. These simulations are designed to address questions of data handling, cache optimization and workflow structures that are the prerequisites for modern HEP analysis chains to be mapped and optimized to run on the next generation of leadership-class exascale computing facilities. We will address the use of the SciDAC-Data distributions, acquired from the Fermilab Data Center’s analysis workflows and corresponding to around 71,000 HEP jobs, as the input to detailed queuing simulations that model the expected data consumption and caching behaviors of the work running in HPC environments. In particular, we describe in detail how the Sequential Access via Metadata (SAM) data handling system, in combination with the dCache/Enstore-based data archive facilities, has been analyzed to develop radically different models of the analysis of HEP data. We present how the simulation may be used to analyze the impact of design choices in archive facilities.

  3. Remote patient management: technology-enabled innovation and evolving business models for chronic disease care.

    Science.gov (United States)

    Coye, Molly Joel; Haselkorn, Ateret; DeMello, Steven

    2009-01-01

    Remote patient management (RPM) is a transformative technology that improves chronic care management while reducing net spending for chronic disease. Broadly deployed within the Veterans Health Administration and in many small trials elsewhere, RPM has been shown to support patient self-management, shift responsibilities to non-clinical providers, and reduce the use of emergency department and hospital services. Because transformative technologies offer major opportunities to advance national goals of improved quality and efficiency in health care, it is important to understand their evolution, the experiences of early adopters, and the business models that may support their deployment.

  4. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    Science.gov (United States)

    Fuchs, J. T.; Dunlap, B. H.; Clemens, J. C.; Meza, J. A.; Dennihy, E.; Koester, D.

    2017-03-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the log g–Teff plane are consistent for these three systematics.

  5. Understanding Systematics in ZZ Ceti Model Fitting to Enable Differential Seismology

    CERN Document Server

    Fuchs, J T; Clemens, J C; Meza, J A; Dennihy, E; Koester, D

    2016-01-01

    We are conducting a large spectroscopic survey of over 130 Southern ZZ Cetis with the Goodman Spectrograph on the SOAR Telescope. Because it employs a single instrument with high UV throughput, this survey will both improve the signal-to-noise of the sample of SDSS ZZ Cetis and provide a uniform dataset for model comparison. We are paying special attention to systematics in the spectral fitting and quantify three of those systematics here. We show that relative positions in the $\\log{g}$-$T_{\\rm eff}$ plane are consistent for these three systematics.

  6. Upgrading aquifer test interpretations with numerical axisymmetric flow models using MODFLOW in the Doñana area (Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Bravo, N.; Guardiola-Albert, C.

    2011-07-01

    Though axisymmetric modelling is not widely used, it can be incorporated into MODFLOW by tricking the grids with a log-scaling method to simulate the radial flow to a well and to upgrade hydraulic properties. Furthermore, it may reduce computer runtimes considerably by decreasing the number of dimensions. The Almonte-Marismas aquifer is a heterogeneous multi-layer aquifer underlying the Doñana area, one of the most important wetlands in Europe. The characterization of hydraulic conductivity is of great importance, because this factor is included in the regional groundwater model, the main water-management support tool in the area. Classical interpretations of existing pumping tests have never taken into account anisotropy, heterogeneity and large head gradients. Thus, to improve the characterization of hydraulic conductivity in the groundwater model, five existing pumping tests, located in different hydrogeological areas, were modelled numerically to represent radial flow in different parts of the aquifer. These numerical simulations have proved to be suitable for reproducing groundwater flow during a pumping test, for corroborating hypotheses concerning unconfined or semi-confined aquifers, and even for estimating different hydraulic conductivity values for each lithological layer drilled, which constitutes the main improvement of this model in comparison with classical methods. A comparison of the results shows that the values of the numerical model are similar to those obtained by classical analytic techniques but are always lower for the most permeable layer. It is also clear that the less complex the lithological distribution, the more accurate the estimations of hydraulic conductivity. (Author) 46 refs.
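
    The log-scaling trick mentioned above amounts to choosing column widths that grow geometrically with distance from the well, so near-well gradients are resolved with few columns. A minimal sketch with hypothetical dimensions (in a real MODFLOW model the transmissive and storage properties of each column must also be scaled with radial distance to mimic axisymmetry):

```python
def radial_grid(r_well, r_outer, n_cols):
    """Column edges growing geometrically (log-spaced) from the well
    radius to the outer boundary, emulating radial flow to a well on a
    rectangular MODFLOW grid."""
    ratio = (r_outer / r_well) ** (1.0 / n_cols)
    edges = [r_well * ratio ** i for i in range(n_cols + 1)]
    widths = [edges[i + 1] - edges[i] for i in range(n_cols)]
    return edges, widths

# hypothetical geometry: 0.1 m well radius, 1 km outer boundary
edges, widths = radial_grid(r_well=0.1, r_outer=1000.0, n_cols=40)
print(len(widths), edges[0], edges[-1])
```

Forty log-spaced columns span four orders of magnitude in radius here, whereas uniform columns fine enough for the near-well region would need thousands, which is where the runtime saving comes from.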

  7. Structure classification from the joint interpretation of seismic and magnetotelluric models

    Science.gov (United States)

    Bedrosian, P. A.; Maercklin, N.; Ritter, O.; Ryberg, T.; Weckmann, U.

    2004-12-01

    Magnetotelluric (MT) and seismic methods provide information about the conductivity and velocity structure of the subsurface on similar scales and resolutions. The independent electrical and seismic tomograms can be combined, using a classification approach, to map lithologic, tectonic, and hydrologic boundaries. The method employed is independent of theoretical/empirical relations linking electrical and seismic parameters, and based solely on the statistical correlation of physical property models in parameter space. Regions of high correlation (classes) can in turn be examined in the spatial domain. The spatial distribution of these clusters, and the boundaries between them, provide structural information not always evident from the individual models. The method is applied to coincident seismic velocity and electrical resistivity models from two active transform margins. Along the San Andreas Fault, classification studies reveal the strong lithological contrast across the fault, suggesting it is sub-vertical in the upper crust throughout central California. A possible hydrologic boundary is further identified to the northeast of the fault. Classification studies along the Dead Sea Transform reflect the dominant lithologies surrounding the fault, and suggest the fault is again vertical in the upper crust, but offset to the east of the surface trace. There are indications that the basement is uplifted by ˜ 2 km east of the fault. These results suggest a quantitative, joint interpretation of MT and seismic data can greatly improve our ability to delineate lithologic, tectonic, and hydrologic boundaries, thus overcoming some of the resolution limitations inherent to the MT and seismic methods.
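
    The classification idea, correlating the two physical-property models in parameter space rather than through an empirical rock-physics relation, can be illustrated with a plain k-means clustering of co-located (resistivity, velocity) values. The data below are hypothetical, and the published work may use a different clustering scheme:

```python
def kmeans(points, k, iters=20):
    """Plain k-means in parameter space.  Each co-located grid cell is a
    point (log10 resistivity, P-velocity); mapping the cluster ids back
    to the spatial domain outlines structural units."""
    centers = points[:k]  # simple deterministic init for this sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# hypothetical co-located (log10 resistivity in ohm-m, Vp in km/s) cells
cells = [(1.0, 3.0), (1.1, 3.1), (0.9, 2.9),   # conductive, slow unit
         (3.0, 6.0), (3.1, 6.1), (2.9, 5.9)]   # resistive, fast unit
centers = sorted(kmeans(cells, k=2))
print(centers)
```

A sharp spatial boundary between the two recovered classes is the kind of feature interpreted above as a lithologic or tectonic contact, such as the sub-vertical fault planes.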

  8. The non-power model of the genetic code: a paradigm for interpreting genomic information.

    Science.gov (United States)

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo

    2016-03-13

    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. Also, it provides us with new tools for the analysis of genomic sequences. We review briefly three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. © 2016 The Author(s).

  9. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody-dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.
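
    The cross-validation logic described above can be sketched with a deliberately simple one-feature regressor and k-fold splitting; the actual study uses far richer models and many antibody features, and all numbers below are synthetic:

```python
def fit_ols(xs, ys):
    """One-feature least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def cross_val_mse(xs, ys, k=4):
    """k-fold cross-validation: fit on k-1 folds, score the held-out
    fold, so the reported error reflects prediction, not memorization."""
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    sq_err = 0.0
    for held in folds:
        train = [i for i in range(n) if i not in held]
        b, a = fit_ols([xs[i] for i in train], [ys[i] for i in train])
        sq_err += sum((ys[i] - (a + b * xs[i])) ** 2 for i in held)
    return sq_err / n

# synthetic "antibody feature" vs "effector function" measurements
xs = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 1.8, 2.0]
ys = [0.3, 0.8, 1.1, 1.9, 2.6, 3.1, 3.5, 4.1]
print(cross_val_mse(xs, ys))
```

The held-out error, rather than the in-sample fit, is what supports the claim that features "robustly predict" functional outcomes.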

  10. Modular degradable dendrimers enable small RNAs to extend survival in an aggressive liver cancer model

    Science.gov (United States)

    Zhou, Kejin; Nguyen, Liem H.; Miller, Jason B.; Yan, Yunfeng; Kos, Petra; Xiong, Hu; Li, Lin; Hao, Jing; Minnig, Jonathan T.; Siegwart, Daniel J.

    2016-01-01

    RNA-based cancer therapies are hindered by the lack of delivery vehicles that avoid cancer-induced organ dysfunction, which exacerbates carrier toxicity. We address this issue by reporting modular degradable dendrimers that achieve the required combination of high potency to tumors and low hepatotoxicity to provide a pronounced survival benefit in an aggressive genetic cancer model. More than 1,500 dendrimers were synthesized using sequential, orthogonal reactions where ester degradability was systematically integrated with chemically diversified cores, peripheries, and generations. A lead dendrimer, 5A2-SC8, provided a broad therapeutic window: identified as potent [EC50 75 mg/kg dendrimer repeated dosing). Delivery of let-7g microRNA (miRNA) mimic inhibited tumor growth and dramatically extended survival. Efficacy stemmed from a combination of a small RNA with the dendrimer’s own negligible toxicity, therefore illuminating an underappreciated complication in treating cancer with RNA-based drugs. PMID:26729861

  11. Enabling Dark Energy Science with Deep Generative Models of Galaxy Images

    CERN Document Server

    Ravanbakhsh, Siamak; Mandelbaum, Rachel; Schneider, Jeff; Poczos, Barnabas

    2016-01-01

    Understanding the nature of dark energy, the mysterious force driving the accelerated expansion of the Universe, is a major challenge of modern cosmology. The next generation of cosmological surveys, specifically designed to address this issue, rely on accurate measurements of the apparent shapes of distant galaxies. However, shape measurement methods suffer from various unavoidable biases and therefore will rely on a precise calibration to meet the accuracy requirements of the science analysis. This calibration process remains an open challenge as it requires large sets of high quality galaxy images. To this end, we study the application of deep conditional generative models in generating realistic galaxy images. In particular we consider variations on conditional variational autoencoder and introduce a new adversarial objective for training of conditional generative networks. Our results suggest a reliable alternative to the acquisition of expensive high quality observations for generating the calibration d...

  12. A transgenic quail model that enables dynamic imaging of amniote embryogenesis.

    Science.gov (United States)

    Huss, David; Benazeraf, Bertrand; Wallingford, Allison; Filla, Michael; Yang, Jennifer; Fraser, Scott E; Lansford, Rusty

    2015-08-15

    Embryogenesis is the coordinated assembly of tissues during morphogenesis through changes in individual cell behaviors and collective cell movements. Dynamic imaging, combined with quantitative analysis, is ideal for investigating fundamental questions in developmental biology involving cellular differentiation, growth control and morphogenesis. However, a reliable amniote model system that is amenable to the rigors of extended, high-resolution imaging and cell tracking has been lacking. To address this shortcoming, we produced a novel transgenic quail that ubiquitously expresses nuclear localized monomer cherry fluorescent protein (chFP). We characterize the expression pattern of chFP and provide concrete examples of how Tg(PGK1:H2B-chFP) quail can be used to dynamically image and analyze key morphogenetic events during embryonic stages X to 11.

  13. A Gamma-Knife-Enabled Mouse Model of Cerebral Single-Hemisphere Delayed Radiation Necrosis.

    Directory of Open Access Journals (Sweden)

    Xiaoyu Jiang

    Full Text Available To develop a Gamma Knife-based mouse model of late time-to-onset cerebral radiation necrosis (RN) with serial evaluation by magnetic resonance imaging (MRI) and histology. Mice were irradiated with the Leksell Gamma Knife® (GK) Perfexion™ (Elekta AB; Stockholm, Sweden) with total single-hemispheric radiation doses (TRD) of 45- to 60-Gy, delivered in one to three fractions. RN was measured using T2-weighted MR images, while confirmation of tissue damage was assessed histologically by hematoxylin & eosin, trichrome, and PTAH staining. MRI measurements demonstrate that TRD is a more important determinant of both time-to-onset and progression of RN than fractionation. The development of RN is significantly slower in mice irradiated with 45-Gy than with 50- or 60-Gy, where RN development is similar. Irradiated mouse brains demonstrate all of the pathologic features observed clinically in patients with confirmed RN. A semi-quantitative (0 to 3) histologic grading system, capturing both the extent and severity of injury, is described and illustrated. Tissue damage, as assessed by a histologic score, correlates well with total necrotic volume measured by MRI (correlation coefficient = 0.948, p<0.0001) and with post-irradiation time (correlation coefficient = 0.508, p<0.0001). Following GK irradiation, mice develop late time-to-onset cerebral RN with histology mirroring clinical observations. MR imaging provides reliable quantification of the necrotic volume that correlates well with histologic score. This mouse model of RN will provide a platform for mechanism-of-action studies, the identification of imaging biomarkers of RN, and the development of clinical studies for improved mitigation and neuroprotection.

  14. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu [Laboratory of Genetics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715 (United States)

    2014-11-28

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
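A deliberately minimal sketch of the Lazy Updating idea on top of a direct-method SSA (hypothetical code, not the authors' Sorting Direct Method implementation): between hub refreshes only the fired reaction's own propensity is recomputed, and all propensities are refreshed once the hub count drifts past a relative threshold. This under-updating is exact only for simple networks; the point is the bookkeeping pattern, not a production simulator.

```python
import random

def gillespie_lazy(x, reactions, t_end, hub="ATP", rel_thresh=0.05, seed=1):
    """Direct-method SSA with Lazy Updating of a hub species.

    x         -- dict mapping species name to count (mutated in place)
    reactions -- list of (propensity_fn, stoichiometry_dict) pairs
    hub       -- the highly connected species whose updates are postponed
    """
    rng = random.Random(seed)
    t = 0.0
    props = [rate_fn(x) for rate_fn, _ in reactions]  # cached propensities
    hub_ref = x[hub]  # hub count at the last full refresh
    while True:
        total = sum(props)
        if total <= 0:
            break
        t += rng.expovariate(total)
        if t >= t_end:
            break
        # choose a reaction proportional to its (possibly stale) propensity
        r = rng.uniform(0.0, total)
        for i, (rate, stoich) in enumerate(reactions):
            r -= props[i]
            if r <= 0:
                break
        for species, change in stoich.items():
            x[species] += change
        if abs(x[hub] - hub_ref) > rel_thresh * max(hub_ref, 1):
            # hub drifted past the threshold: pay for a full refresh now
            props = [rate_fn(x) for rate_fn, _ in reactions]
            hub_ref = x[hub]
        else:
            # lazy path: only the fired reaction's propensity is recomputed
            props[i] = rate(x)
    return x
```

Run on a single hub-consuming reaction (e.g. ATP hydrolysis at rate 0.1·[ATP]), total molecule count is conserved while most time steps avoid a full propensity sweep.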

  15. Concepts, Instruments, and Model Systems that Enabled the Rapid Evolution of Surface Science

    Energy Technology Data Exchange (ETDEWEB)

    Somorjai, Gabor A.; Park, Jeong Y.

    2009-01-10

    Over the past forty years, surface science has evolved to become both an atomic scale and a molecular scale science. Gerhard Ertl's group has made major contributions in the field of molecular scale surface science, focusing on vacuum studies of adsorption chemistry on single crystal surfaces. In this review, we outline three important aspects which have led to recent advances in surface chemistry: the development of new concepts, in situ instruments for molecular scale surface studies at buried interfaces (solid-gas and solid-liquid), and new model nanoparticle surface systems, in addition to single crystals. Combined molecular beam surface scattering and low energy electron diffraction (LEED) surface structure studies on metal single crystal surfaces revealed concepts, including adsorbate-induced surface restructuring and the unique activity of defects, atomic steps, and kinks on metal surfaces. We have combined high pressure catalytic reaction studies with ultra high vacuum (UHV) surface characterization techniques using a UHV chamber equipped with a high pressure reaction cell. New instruments, such as high pressure sum frequency generation (SFG) vibrational spectroscopy and scanning tunneling microscopy (STM), which permit molecular-level surface studies, have been developed. Tools that access broad ranges of pressures can be used for both the in situ characterization of solid-gas and solid-liquid buried interfaces and the study of catalytic reaction intermediates. The model systems for the study of molecular surface chemistry have evolved from single crystals to nanoparticles in the 1-10 nm size range, which are currently the preferred media in catalytic reaction studies.

  16. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior-information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurate reconstruction of the actual path that the vehicle traversed, in order to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable short-term predictions of an ocean model, and of the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by (1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and (2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate the geographical location of an underwater glider to 2
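As a toy illustration of the terrain-based half of the approach (hypothetical code; the actual glider localization is a far richer estimator), a 1-D fix can be obtained by searching a bathymetry map near the dead-reckoned position for the cell whose depth best matches the measured depth:

```python
def terrain_correct(idx_dr, depth_meas, bathymetry, window):
    """Snap a dead-reckoned grid index to the best-matching bathymetry cell.

    idx_dr     -- dead-reckoned position as an index into the bathymetry grid
    depth_meas -- depth measured by the vehicle at its true position
    window     -- search half-width in cells (dead-reckoning uncertainty)

    Returns the index within the window whose mapped depth is closest to
    the measurement. Informative only where bathymetric relief is
    significant, as the abstract notes.
    """
    lo = max(0, idx_dr - window)
    hi = min(len(bathymetry), idx_dr + window + 1)
    return min(range(lo, hi), key=lambda i: abs(bathymetry[i] - depth_meas))
```

With a monotone slope (e.g. depths 10, 20, 30, 40, 50) a single depth reading pins the position to within one cell; over flat bathymetry every candidate matches equally and the fix carries no information.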

  17. Interpreting expression data with metabolic flux models: predicting Mycobacterium tuberculosis mycolic acid production.

    Directory of Open Access Journals (Sweden)

    Caroline Colijn

    2009-08-01

    Full Text Available Metabolism is central to cell physiology, and metabolic disturbances play a role in numerous disease states. Despite its importance, the ability to study metabolism at a global scale using genomic technologies is limited. In principle, complete genome sequences describe the range of metabolic reactions that are possible for an organism, but cannot quantitatively describe the behaviour of these reactions. We present a novel method for modeling metabolic states using whole cell measurements of gene expression. Our method, which we call E-Flux (a combination of flux and expression), extends the technique of Flux Balance Analysis by modeling maximum flux constraints as a function of measured gene expression. In contrast to previous methods for metabolically interpreting gene expression data, E-Flux utilizes a model of the underlying metabolic network to directly predict changes in metabolic flux capacity. We applied E-Flux to Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB). Key components of mycobacterial cell walls are mycolic acids, which are targets for several first-line TB drugs. We used E-Flux to predict the impact of 75 different drugs, drug combinations, and nutrient conditions on mycolic acid biosynthesis capacity in M. tuberculosis, using a public compendium of over 400 expression arrays. We tested our method using a model of mycolic acid biosynthesis as well as on a genome-scale model of M. tuberculosis metabolism. Our method correctly predicts seven of the eight known fatty acid inhibitors in this compendium and makes accurate predictions regarding the specificity of these compounds for fatty acid biosynthesis. Our method also predicts a number of additional potential modulators of TB mycolic acid biosynthesis. E-Flux thus provides a promising new approach for algorithmically predicting metabolic state from gene expression data.
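In E-Flux, measured expression levels set upper bounds on reaction fluxes; the full method then solves a linear program over the genome-scale stoichiometric model. For a single unbranched pathway the optimum reduces to the tightest expression-scaled bound, which the following sketch illustrates (gene names and expression values are hypothetical placeholders, not data from the study):

```python
def eflux_capacity(path_genes, expression, vmax=1.0):
    """E-Flux-style flux capacity of an unbranched pathway.

    Each reaction's upper bound is vmax scaled by its gene's expression
    relative to the maximum observed expression; the achievable pathway
    flux is the minimum of those bounds. (A genome-scale model instead
    maximizes an objective flux subject to S·v = 0 and these bounds.)
    """
    top = max(expression.values())
    bounds = {g: vmax * expression[g] / top for g in path_genes}
    return min(bounds.values()), bounds
```

For example, if one gene in the pathway is expressed at a fifth of the maximum level, the whole pathway's predicted capacity drops to a fifth of vmax, regardless of the other bounds.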

  18. Modeling ductal carcinoma in situ: a HER2-Notch3 collaboration enables luminal filling.

    LENUS (Irish Health Repository)

    Pradeep, C-R

    2012-02-16

    A large fraction of ductal carcinoma in situ (DCIS), a non-invasive precursor lesion of invasive breast cancer, overexpresses the HER2/neu oncogene. The ducts of DCIS are abnormally filled with cells that evade apoptosis, but the underlying mechanisms remain incompletely understood. We overexpressed HER2 in mammary epithelial cells and observed growth factor-independent proliferation. When grown in extracellular matrix as three-dimensional spheroids, control cells developed a hollow lumen, but HER2-overexpressing cells populated the lumen by evading apoptosis. We demonstrate that HER2 overexpression in this cellular model of DCIS drives transcriptional upregulation of multiple components of the Notch survival pathway. Importantly, luminal filling required upregulation of a signaling pathway comprising Notch3, its cleaved intracellular domain and the transcriptional regulator HES1, resulting in elevated levels of c-MYC and cyclin D1. In line with HER2-Notch3 collaboration, drugs intercepting either arm reverted the DCIS-like phenotype. In addition, we report upregulation of Notch3 in hyperplastic lesions of HER2 transgenic animals, as well as an association between HER2 levels and expression levels of components of the Notch pathway in tumor specimens of breast cancer patients. Therefore, it is conceivable that the integration of the Notch and HER2 signaling pathways contributes to the pathophysiology of DCIS.

  19. Enabling Health Reform through Regional Health Information Exchange: A Model Study from China

    Directory of Open Access Journals (Sweden)

    Jianbo Lei

    2017-01-01

    Full Text Available Objective. To investigate and share the major challenges and experiences of building a regional health information exchange system in China in the context of health reform. Methods. This study used interviews, focus groups, a field study, and a literature review to collect insights and analyze data. The study examined Xinjin’s approach to developing and implementing a health information exchange project, using exchange usage data for analysis. Results. Within three years and after spending approximately $2.4 million (15 million RMB), Xinjin County was able to build a complete, unified, and shared information system and many electronic health record components to integrate and manage health resources for 198 health institutions in its jurisdiction, thus becoming a model of regional health information exchange for facilitating health reform. Discussion. Costs, benefits, experiences, and lessons were discussed, and the unique characteristics of the Xinjin case and a comparison with US cases were analyzed. Conclusion. The Xinjin regional health information exchange system is different from most of the others due to its government-led, government-financed approach. Centralized and coordinated efforts played an important role in its operation. Regional health information exchange systems have been proven critical for meeting the global challenges of health reform.

  20. Connectivity Homology Enables Inter-Species Network Models of Synthetic Lethality

    Science.gov (United States)

    Jacunski, Alexandra; Dixon, Scott J.; Tatonetti, Nicholas P.

    2015-01-01

    Synthetic lethality is a genetic interaction wherein two otherwise nonessential genes cause cellular inviability when knocked out simultaneously. Drugs can mimic genetic knock-out effects; therefore, our understanding of promiscuous drugs, polypharmacology-related adverse drug reactions, and multi-drug therapies, especially cancer combination therapy, may be informed by a deeper understanding of synthetic lethality. However, the colossal experimental burden in humans necessitates in silico methods to guide the identification of synthetic lethal pairs. Here, we present SINaTRA (Species-INdependent TRAnslation), a network-based methodology that discovers genome-wide synthetic lethality in translation between species. SINaTRA uses connectivity homology, defined as biological connectivity patterns that persist across species, to identify synthetic lethal pairs. Importantly, our approach does not rely on genetic homology or structural and functional similarity, and it significantly outperforms models utilizing these data. We validate SINaTRA by predicting synthetic lethality in S. pombe using S. cerevisiae data, then identify over one million putative human synthetic lethal pairs to guide experimental approaches. We highlight the translational applications of our algorithm for drug discovery by identifying clusters of genes significantly enriched for single- and multi-drug cancer therapies. PMID:26451775

  1. Evaluating the skills of isotope-enabled general circulation models against in situ atmospheric water vapor isotope observations

    Science.gov (United States)

    Steen-Larsen, H. C.; Risi, C.; Werner, M.; Yoshimura, K.; Masson-Delmotte, V.

    2017-01-01

    The skills of isotope-enabled general circulation models are evaluated against atmospheric water vapor isotopes. We have combined in situ observations of surface water vapor isotopes spanning multiple field seasons (2010, 2011, and 2012) from the top of the Greenland Ice Sheet (NEEM site: 77.45°N, 51.05°W, 2484 m above sea level) with observations from the marine boundary layer of the North Atlantic and Arctic Ocean (Bermuda Islands 32.26°N, 64.88°W, year: 2012; south coast of Iceland 63.83°N, 21.47°W, year: 2012; South Greenland 61.21°N, 47.17°W, year: 2012; Svalbard 78.92°N, 11.92°E, year: 2014). This allows us to benchmark the ability to simulate the daily water vapor isotope variations from five different simulations using isotope-enabled general circulation models. Our model-data comparison documents clear isotope biases both on top of the Greenland Ice Sheet (1-11‰ for δ18O and 4-19‰ for d-excess depending on model and season) and in the marine boundary layer (maximum differences for the following: Bermuda δ18O = 1‰, d-excess = 3‰; South coast of Iceland δ18O = 2‰, d-excess = 5‰; South Greenland δ18O = 4‰, d-excess = 7‰; Svalbard δ18O = 2‰, d-excess = 7‰). We find that the simulated isotope biases are not just explained by simulated biases in temperature and humidity. Instead, we argue that these isotope biases are related to a poor simulation of the spatial structure of the marine boundary layer water vapor isotopic composition. Furthermore, we specifically show that the marine boundary layer water vapor isotopes of the Baffin Bay region show strong influence on the water vapor isotopes at the NEEM deep ice core-drilling site in northwest Greenland. Our evaluation of the simulations using isotope-enabled general circulation models also documents wide intermodel spatial variability in the Arctic. This stresses the importance of a coordinated water vapor isotope-monitoring network in order to discriminate amongst these model
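The d-excess values compared above follow the standard Dansgaard definition, a fixed linear combination of the two measured isotope ratios (both in ‰):

```python
def d_excess(delta_D, delta_18O):
    """Deuterium excess: d = δD - 8·δ18O (Dansgaard; both inputs in ‰)."""
    return delta_D - 8.0 * delta_18O
```

Because the coefficient 8 is fixed, a model bias of a few ‰ in δ18O translates into a d-excess bias several times larger, which is why the reported d-excess biases exceed the δ18O biases at every site.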

  2. The value of soil respiration measurements for interpreting and modeling terrestrial carbon cycling

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Claire L.; Bond-Lamberty, Ben; Desai, Ankur R.; Lavoie, Martin; Risk, Dave; Tang, Jianwu; Todd-Brown, Katherine; Vargas, Rodrigo

    2016-11-16

    A recent acceleration of model-data synthesis activities has leveraged many terrestrial carbon (C) datasets, but utilization of soil respiration (RS) data has not kept pace with other types such as eddy covariance (EC) fluxes and soil C stocks. Here we argue that RS data, including non-continuous measurements from survey sampling campaigns, have unrealized value and should be utilized more extensively and creatively in data synthesis and modeling activities. We identify three major challenges in interpreting RS data, and discuss opportunities to address them. The first challenge is that when RS is compared to ecosystem respiration (RECO) measured from EC towers, it is not uncommon to find substantial mismatch, indicating one or both flux methodologies are unreliable. We argue the most likely cause of mismatch is unreliable EC data, and there is an unrecognized opportunity to utilize RS for EC quality control. The second challenge is that RS integrates belowground heterotrophic (RH) and autotrophic (RA) activity, whereas modelers generally prefer partitioned fluxes, and few models include an explicit RS output. Opportunities exist to use the total RS flux for data assimilation and model benchmarking methods rather than less-certain partitioned fluxes. Pushing for more experiments that not only partition RS but also monitor the age of RA and RH, as well as for the development of belowground RA components in models, would allow for more direct comparison between measured and modeled values. The third challenge is that soil respiration is generally measured at a very different resolution than that needed for comparison to EC or ecosystem- to global-scale models. Measuring soil fluxes with finer spatial resolution and more extensive coverage, and downscaling EC fluxes to match the scale of RS, will improve chamber and tower comparisons. Opportunities also exist to estimate RH at regional scales by implementing decomposition functional types, akin to plant functional

  3. A suite of R packages for web-enabled modeling and analysis of surface waters

    Science.gov (United States)

    Read, J. S.; Winslow, L. A.; Nüst, D.; De Cicco, L.; Walker, J. I.

    2014-12-01

    Researchers often create redundant methods for downloading, manipulating, and analyzing data from online resources. Moreover, the reproducibility of science can be hampered by complicated and voluminous data, lack of time for documentation and long-term maintenance of software, and fear of exposing programming skills. The combination of these factors can encourage unshared one-off programmatic solutions instead of openly provided reusable methods. Federal and academic researchers in the water resources and informatics domains have collaborated to address these issues. The result of this collaboration is a suite of modular R packages that can be used independently or as elements in reproducible analytical workflows. These documented and freely available R packages were designed to fill basic needs for the effective use of water data: the retrieval of time-series and spatial data from web resources (dataRetrieval, geoknife), performing quality assurance and quality control checks of these data with robust statistical methods (sensorQC), the creation of useful data derivatives (including physically- and biologically-relevant indices; GDopp, LakeMetabolizer), and the execution and evaluation of models (glmtools, rLakeAnalyzer). Here, we share details and recommendations for the collaborative coding process, and highlight the benefits of an open-source tool development pattern with a popular programming language in the water resources discipline (such as R). We provide examples of reproducible science driven by large volumes of web-available data using these tools, explore benefits of accessing packages as standardized web processing services (WPS) and present a working platform that allows domain experts to publish scientific algorithms in a service-oriented architecture (WPS4R). We assert that in the era of open data, tools that leverage these data should also be freely shared, transparent, and developed in an open innovation environment.

  4. Conceptual model and economic experiments to explain nonpersistence and enable mechanism designs fostering behavioral change.

    Science.gov (United States)

    Djawadi, Behnud Mir; Fahr, René; Turk, Florian

    2014-12-01

    Medical nonpersistence is a worldwide problem of striking magnitude. Although many fields of studies including epidemiology, sociology, and psychology try to identify determinants for medical nonpersistence, comprehensive research to explain medical nonpersistence from an economics perspective is rather scarce. The aim of the study was to develop a conceptual framework that augments standard economic choice theory with psychological concepts of behavioral economics to understand how patients' preferences for discontinuing with therapy arise over the course of the medical treatment. The availability of such a framework allows the targeted design of mechanisms for intervention strategies. Our conceptual framework models the patient as an active economic agent who evaluates the benefits and costs for continuing with therapy. We argue that a combination of loss aversion and mental accounting operations explains why patients discontinue with therapy at a specific point in time. We designed a randomized laboratory economic experiment with a student subject pool to investigate the behavioral predictions. Subjects continue with therapy as long as experienced utility losses have to be compensated. As soon as previous losses are evened out, subjects perceive the marginal benefit of persistence lower than in the beginning of the treatment. Consequently, subjects start to discontinue with therapy. Our results highlight that concepts of behavioral economics capture the dynamic structure of medical nonpersistence better than does standard economic choice theory. We recommend that behavioral economics should be a mandatory part of the development of possible intervention strategies aimed at improving patients' compliance and persistence behavior. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. Interpreting the 750 GeV diphoton excess in the minimal dilaton model

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Junjie [Wuhan University, Center for Theoretical Physics, School of Physics and Technology, Wuhan (China); Henan Normal University, Department of Physics, Xinxiang (China); Shang, Liangliang [Henan Normal University, Department of Physics, Xinxiang (China); Su, Wei; Zhang, Yang [Institute of Theoretical Physics, Academia Sinica, State Key Laboratory of Theoretical Physics, Beijing (China); Zhu, Jinya [Wuhan University, Center for Theoretical Physics, School of Physics and Technology, Wuhan (China)

    2016-05-15

    We try to interpret the 750 GeV diphoton excess in the minimal dilaton model, which extends the SM by adding one linearized dilaton field and vector-like fermions. We first show by analytic formulas in this framework that the production rates of the γγ, gg, Zγ, ZZ, WW*, tt̄, and hh signals at the 750 GeV resonance are only sensitive to the dilaton-Higgs mixing angle θ_S and the parameter η ≡ vN_X/f, where f is the dilaton decay constant and N_X denotes the number of the fermions. Then we scan the two parameters by considering various theoretical and experimental constraints to find the solutions to the diphoton excess. We conclude that the model can predict the central value of the diphoton rate without conflicting with any constraints. The signatures of our explanation at the LHC Run II and the vacuum stability at high energy scale are also discussed. (orig.)

  6. Interpretation of machine-learning-based disruption models for plasma control

    Science.gov (United States)

    Parsons, Matthew S.

    2017-08-01

    While machine learning techniques have been applied within the context of fusion for predicting plasma disruptions in tokamaks, they are typically interpreted with a simple ‘yes/no’ prediction or perhaps a probability forecast. These techniques take input signals, which could be real-time signals from machine diagnostics, to make a prediction of whether a transient event will occur. A major criticism of these methods is that, due to the nature of machine learning, there is no clear correlation between the input signals and the output prediction result. Here is proposed a simple method that could be applied to any existing prediction model to determine how sensitive the state of a plasma is at any given time with respect to the input signals. This is accomplished by computing the gradient of the decision function, which effectively identifies the quickest path away from a disruption as a function of the input signals and therefore could be used in a plasma control setting to avoid them. A numerical example is provided for illustration based on a support vector machine model, and the application to real data is left as an open opportunity.
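The sensitivity measure proposed in the abstract is the gradient of the scalar decision function with respect to the input signals. A minimal, model-agnostic central-difference sketch (the signal names and the trained model are left abstract; this is not the paper's code):

```python
import numpy as np

def decision_gradient(decision_fn, x, eps=1e-5):
    """Central-difference gradient of a scalar decision function at state x.

    decision_fn maps an input-signal vector to a scalar score (e.g. the
    signed distance from a support vector machine's separating hyperplane).
    The negative gradient points along the locally quickest path away from
    the disruptive side of the boundary.
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (decision_fn(x + step) - decision_fn(x - step)) / (2.0 * eps)
    return grad
```

Any trained classifier exposing a scalar decision score can be dropped in as `decision_fn`, which is what makes the method applicable "to any existing prediction model" as the abstract claims.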

  7. Interpretation of CEMP(s) and CEMP(s + r) Stars with AGB Models

    CERN Document Server

    Bisterzo, S; Straniero, O; Aoki, W; 10.1071/AS08055

    2009-01-01

    Asymptotic Giant Branch (AGB) stars play a fundamental role in s-process nucleosynthesis during their thermal pulsing phase. The theoretical predictions obtained by AGB models at different masses, s-process efficiencies, dilution factors, and initial r-enrichments are compared with spectroscopic observations of Carbon-Enhanced Metal-Poor stars enriched in s-process elements, CEMP(s), collected from the literature. We discuss here five stars as examples: CS 22880-074, CS 22942-019, CS 29526-110, HE 0202-2204, and LP 625-44. All these objects lie on the main sequence or on the giant branch, clearly before the TP-AGB stage: the hypothesis of mass transfer from an AGB companion would explain the observed s-process enhancement. CS 29526-110 and LP 625-44 are CEMP(s+r) objects, and are interpreted assuming that the molecular cloud from which the binary system formed was already enriched in r-process elements by SNII pollution. In several cases, the observed s-process distribution may be accounted for by AGB models...

  8. Interpreting the 750 GeV diphoton excess in the minimal dilaton model

    Science.gov (United States)

    Cao, Junjie; Shang, Liangliang; Su, Wei; Zhang, Yang; Zhu, Jinya

    2016-05-01

    We try to interpret the 750 GeV diphoton excess in the minimal dilaton model, which extends the SM by adding one linearized dilaton field and vector-like fermions. We first show by analytic formulas in this framework that the production rates of the γγ, gg, Zγ, ZZ, WW*, tt̄, and hh signals at the 750 GeV resonance are only sensitive to the dilaton-Higgs mixing angle θ_S and the parameter η ≡ vN_X/f, where f is the dilaton decay constant and N_X denotes the number of the fermions. Then we scan the two parameters by considering various theoretical and experimental constraints to find the solutions to the diphoton excess. We conclude that the model can predict the central value of the diphoton rate without conflicting with any constraints. The signatures of our explanation at the LHC Run II and the vacuum stability at high energy scale are also discussed.

  9. Interpreting the 750 GeV diphoton excess in the Minimal Dilaton Model

    CERN Document Server

    Cao, Junjie; Su, Wei; Zhang, Yang; Zhu, Jingya

    2016-01-01

    We try to interpret the 750 GeV diphoton excess in the Minimal Dilaton Model, which extends the SM by adding one linearized dilaton field and vector-like fermions. We first show by analytic formulae in this framework that the production rates of the $\gamma\gamma$, $gg$, $Z\gamma$, $ZZ$, $WW$, $t\bar{t}$ and $hh$ signals at the $750 {\rm GeV}$ resonance are only sensitive to the dilaton-Higgs mixing angle $\theta_S$ and the parameter $\eta \equiv v N_X/f$, where $f$ is the dilaton decay constant and $N_X$ denotes the number of the fermions. Then we scan the two parameters by considering various theoretical and experimental constraints to find the solutions to the diphoton excess. We conclude that the model can predict the central value of the diphoton rate without conflicting with any constraints. The signatures of our explanation at the LHC Run II and the stability of the vacuum at high energy scale are also discussed.

  10. Modeling of Information Sharing Enablers for building Trust in Indian Manufacturing Industry: An Integrated ISM and Fuzzy MICMAC Approach

    Directory of Open Access Journals (Sweden)

    M K KHURANA

    2010-06-01

    Full Text Available Trust is regarded as one of the most critical and essential ingredients in most business activities and in collaborative relationships among supply chain members. Building and maintaining trust among supply chain members depends mainly upon a continued commitment to communication and information sharing. Trust becomes critical when uncertainty and asymmetric information are present in the transactions of a supply chain. An information sharing system is therefore of critical importance for the creation and maintenance of trust, which concerns both the receipt and the dissemination of information. The present research aims to provide a comprehensive framework for the various important factors of an information sharing system that affect the level of trust in supply chain management. ISM and Fuzzy MICMAC have been deployed to identify and classify the key information sharing enablers that influence trust, based on their direct and indirect relationships. In this paper, the role of the different information sharing factors responsible for infusing trust has been analyzed. An integrated model of information sharing enablers has been developed which may help supply chain managers identify and classify the important criteria for their needs and reveal the direct and indirect effects of each criterion on the trust-building process in supply chain management.
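The driving power and dependence used in the ISM/MICMAC classification come from the transitive closure of the binary self-relation matrix: after closure, each enabler's row sum is its driving power and its column sum its dependence. A minimal sketch (illustrative only, not the authors' implementation):

```python
def ism_micmac(adj):
    """Driving power and dependence from an ISM adjacency matrix.

    adj is a 0/1 self-relation matrix with adj[i][i] == 1. Warshall's
    algorithm yields the final reachability matrix; row sums give each
    enabler's driving power, column sums its dependence.
    """
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    driving = [sum(row) for row in reach]
    dependence = [sum(reach[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence
```

Plotting each enabler at (dependence, driving power) then yields the usual MICMAC quadrants: autonomous, dependent, linkage, and driver variables.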

  11. Mars’ Low Dissipation Factor at 11-h - Interpretation from Anelasticity-Based Dissipation Model

    Science.gov (United States)

    Castillo-Rogez, Julie; Choukroun, M.

    2010-10-01

    We explore the information contained in the ratio of the tidal Love number k2 to the dissipation factor Q characterizing the response of Mars to the tides exerted by its satellite Phobos (11-h period). Assuming that Mars can be approximated as a Maxwell body, Bills et al. [1] have inferred an average viscosity of the Martian mantle of 8.7×10^14 Pa s. Such a low viscosity appears inconsistent with Mars’ thermal evolution and current heat budget models. Alternative explanations include the presence of partial melt in the mantle [2], or the presence of an aquifer in the crust [3]. We revisit the interpretation of Mars’ k2/Q using a laboratory-based attenuation model that accounts for material viscoelasticity and anelasticity. As a first step, we have computed Mars’ k2/Q for an interior model that includes a solid inner core, a liquid core layer, a mantle, and crust (consistent with the observed moment of inertia, and k2 measured at the orbital period), and searched for the range of mantle viscosities that can explain the observed k2/Q. Successful models are characterized by an average mantle viscosity between 10^18 and 10^22 Pa s, which rules out the presence of partial melt in the mantle. We can narrow down that range by performing a more detailed calculation of the mineralogy and temperature profiles. Preliminary results will be presented at the meeting. References: [1] Bills et al. (2005) JGR 110, E00704; [2] Ruedas et al. (2009) White paper to the NRC Planetary Science decadal survey; [3] Bills et al. (2009) LPS 40, 1712. MC is supported by a NASA Postdoctoral Program Fellowship, administered by Oak Ridge Associated Universities. This work has been conducted at the Jet Propulsion Laboratory, California Institute of Technology, under a contract to NASA. Government sponsorship acknowledged.
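
In the weak-dissipation limit of a homogeneous Maxwell body, 1/Q ≈ tan δ = μ/(ωη), so a measured Q maps to a viscosity η = Qμ/ω. The sketch below applies this at the 11-h Phobos forcing period; the rigidity value is an assumed, illustrative number, and this single-layer formula is far cruder than the layered interior models used in the study.

```python
import math

omega = 2 * math.pi / (11 * 3600.0)  # 11-h tidal forcing, rad/s

mu = 7.0e10  # assumed effective mantle rigidity in Pa (illustrative)

def maxwell_viscosity(Q, mu, omega):
    """Maxwell body, weak-dissipation limit: 1/Q ~ tan(delta) = mu/(omega*eta),
    hence eta = Q*mu/omega; valid only while omega*eta/mu >> 1."""
    return Q * mu / omega

for Q in (50, 100, 200):
    print(f"Q = {Q:3d} -> eta ~ {maxwell_viscosity(Q, mu, omega):.1e} Pa s")
```

The point of the exercise is the scaling: for fixed rigidity, the inferred viscosity is simply proportional to Q, so uncertainty in the measured k2/Q propagates linearly into the viscosity estimate.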

  12. DOMstudio: an integrated workflow for Digital Outcrop Model reconstruction and interpretation

    Science.gov (United States)

    Bistacchi, Andrea

    2015-04-01

    Different Remote Sensing technologies, including photogrammetry and LIDAR, allow collecting 3D dataset that can be used to create 3D digital representations of outcrop surfaces, called Digital Outcrop Models (DOM), or sometimes Virtual Outcrop Models (VOM). Irrespective of the Remote Sensing technique used, DOMs can be represented either by photorealistic point clouds (PC-DOM) or textured surfaces (TS-DOM). The former are datasets composed of millions of points with XYZ coordinates and RGB colour, whilst the latter are triangulated surfaces onto which images of the outcrop have been mapped or "textured" (applying a technology originally developed for movies and videogames). Here we present a workflow that allows exploiting in an integrated and efficient, yet flexible way, both kinds of dataset: PC-DOMs and TS-DOMs. The workflow is composed of three main steps: (1) data collection and processing, (2) interpretation, and (3) modelling. Data collection can be performed with photogrammetry, LIDAR, or other techniques. The quality of photogrammetric datasets obtained with Structure From Motion (SFM) techniques has shown a tremendous improvement over the past few years, and this is becoming the most effective way to collect DOM datasets. The main advantages of photogrammetry over LIDAR are represented by the very simple and lightweight field equipment (a digital camera), and by the arbitrary spatial resolution, which can be increased simply by moving closer to the outcrop or by using a different lens. It must be noted that concerns about the precision of close-range photogrammetric surveys, which were justified in the past, are no longer a problem if modern software and acquisition schemas are applied. In any case, LIDAR is a well-tested technology and it is still very common. Irrespective of the data collection technology, the output will be a photorealistic point cloud and a collection of oriented photos, plus additional imagery in special projects (e.g. infrared images

  13. First Multitarget Chemo-Bioinformatic Model To Enable the Discovery of Antibacterial Peptides against Multiple Gram-Positive Pathogens.

    Science.gov (United States)

    Speck-Planche, Alejandro; Kleandrova, Valeria V; Ruso, Juan M; Cordeiro, M N D S

    2016-03-28

    Antimicrobial peptides (AMPs) have emerged as promising therapeutic alternatives to fight against the diverse infections caused by different pathogenic microorganisms. In this context, theoretical approaches in bioinformatics have paved the way toward the creation of several in silico models capable of predicting antimicrobial activities of peptides. All current models have several significant handicaps, which prevent the efficient search for highly active AMPs. Here, we introduce the first multitarget (mt) chemo-bioinformatic model devoted to performing alignment-free prediction of antibacterial activity of peptides against multiple Gram-positive bacterial strains. The model was constructed from a data set containing 2488 cases of AMPs sequences assayed against at least 1 out of 50 Gram-positive bacterial strains. This mt-chemo-bioinformatic model displayed percentages of correct classification higher than 90.00% in both training and prediction (test) sets. For the first time, two computational approaches derived from basic concepts in genetics and molecular biology were applied, allowing the calculations of the relative contributions of any amino acid (in a defined position) to the antibacterial activity of an AMP and depending on the bacterial strain used in the biological assay. The present mt-chemo-bioinformatic model constitutes a powerful tool to enable the discovery of potent and versatile AMPs.

  14. Interpreting Climate Model Projections of Extreme Weather Events for Decision Makers

    Science.gov (United States)

    Vavrus, S. J.; Notaro, M.

    2014-12-01

    The proliferation of output from climate model ensembles, such as CMIP3 and CMIP5, has greatly expanded access to future projections, but there is no accepted blueprint for how this data should be interpreted. Decision makers are thus faced with difficult questions when trying to utilize such information: How reliable are the multi-model mean projections? How should the changes simulated by outlier models be treated? How can raw projections of temperature and precipitation be translated into probabilities? The multi-model average is often regarded as the most accurate single estimate of future conditions, but higher-order moments representing the variance and skewness of the distribution of projections provide important information about uncertainty. We have analyzed a set of statistically downscaled climate model projections from the CMIP3 archive to conduct an assessment of extreme weather events at a level designed to be relevant for decision makers. Our analysis uses the distribution of 13 GCM projections to derive the inter-model standard deviation (and coefficient of variation, COV), skewness, and percentile ranges for simulated changes in extreme heat, cold, and precipitation during the middle and late 21st century for the A1B emissions scenario. These metrics help to establish the overall confidence level across the entire range of projections (via the inter-model COV), relative confidence in the simulated high-end versus low-end changes (via skewness), and probabilistic uncertainty bounds derived from a bootstrapping technique. Over our analysis domain centered on the United States Midwest, some primary findings include: (1) Greater confidence in projections of less extreme cold than more extreme heat and intense precipitation, (2) Greater confidence in the low-end than high-end projections of extreme heat, and (3) Higher spatial and temporal variability in the confidence of projected increases of heavy precipitation. In addition, our bootstrapping
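
The ensemble metrics described (inter-model coefficient of variation, skewness, and bootstrap percentile bounds) can be computed as below. The 13 "projections" are made-up numbers standing in for a downscaled GCM ensemble, not actual CMIP3 output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projected changes in extreme-heat days from a 13-member
# ensemble (illustrative numbers only)
proj = np.array([8.1, 9.4, 10.2, 7.5, 12.8, 9.9, 11.0,
                 8.8, 10.5, 9.2, 13.6, 7.9, 10.1])

mean = proj.mean()
cov = proj.std(ddof=1) / mean                          # inter-model COV
skew = ((proj - mean) ** 3).mean() / proj.std() ** 3   # moment skewness

# Bootstrap 90% uncertainty bounds on the ensemble-mean change
boot = rng.choice(proj, size=(10_000, proj.size)).mean(axis=1)
lo, hi = np.percentile(boot, [5, 95])
print(f"COV={cov:.2f} skew={skew:.2f} 90% bounds=({lo:.2f}, {hi:.2f})")
```

A low COV signals tight inter-model agreement; positive skewness flags a long high-end tail (lower confidence in the largest projected changes), matching the way these moments are used in the assessment.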

  15. Clustering and interpretation of local earthquake tomography models in the southern Dead Sea basin

    Science.gov (United States)

    Bauer, Klaus; Braeuer, Benjamin

    2016-04-01

    The Dead Sea transform (DST) marks the boundary between the Arabian and the African plates. Ongoing left-lateral relative plate motion and strike-slip deformation started in the Early Miocene (20 Ma) and have produced a total shift of 107 km to the present. The Dead Sea basin (DSB), located in the central part of the DST, is one of the largest pull-apart basins in the world. It was formed from the step-over of different fault strands at a major segment boundary of the transform fault system. The basin development was accompanied by deposition of clastics and evaporites and subsequent salt diapirism. Ongoing deformation within the basin and activity of the boundary faults are indicated by increased seismicity. The internal architecture of the DSB and the crustal structure around the DST were the subject of several large scientific projects carried out since 2000. Here we report on a local earthquake tomography study from the southern DSB. In 2006-2008, a dense seismic network consisting of 65 stations was operated for 18 months in the southern part of the DSB and surrounding regions. Altogether 530 well-constrained seismic events with 13,970 P- and 12,760 S-wave arrival times were used in a travel time inversion for the Vp and Vp/Vs velocity structure and the seismicity distribution. The workflow included 1D inversion, 2.5D and 3D tomography, and resolution analysis. We demonstrate a possible strategy for integrating several tomographic models such as Vp, Vs and Vp/Vs into a combined lithological interpretation. We analyzed the tomographic models derived by 2.5D inversion using neural network clustering techniques. The method allows us to identify major lithologies by their petrophysical signatures. Remapping the clusters into the subsurface reveals the distribution of basin sediments, prebasin sedimentary rocks, and crystalline basement. The DSB shows an asymmetric structure with thickness variation from 5 km in the west to 13 km in the east. Most importantly, a well-defined body
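
A minimal sketch of clustering tomographic models by petrophysical signature: here plain k-means stands in for the neural-network clustering used in the study, and the (Vp, Vp/Vs) values for "sediments" and "basement" are invented for illustration.

```python
import numpy as np

def kmeans(X, k=2, iters=20):
    """Plain k-means (a stand-in for the neural-network clustering used in
    the study), seeded along the data bounding box for separated initial centers."""
    centers = np.linspace(X.min(axis=0), X.max(axis=0), k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
# Invented (Vp [km/s], Vp/Vs) signatures: slow, high-Vp/Vs basin sediments
# versus fast, low-Vp/Vs crystalline basement
sediments = rng.normal([3.5, 1.95], 0.05, size=(100, 2))
basement = rng.normal([6.2, 1.73], 0.05, size=(100, 2))
X = np.vstack([sediments, basement])

labels, centers = kmeans(X)
print(np.round(centers[np.argsort(centers[:, 0])], 2))  # ~[[3.5 1.95] [6.2 1.73]]
```

Remapping `labels` back onto the grid positions of the samples is what produces the lithology map described in the abstract: each cluster center is a petrophysical signature, and its members outline the corresponding body in the subsurface.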

  16. Modeling and interpreting biological effects of mixtures in the environment: introduction to the metal mixture modeling evaluation project.

    Science.gov (United States)

    Van Genderen, Eric; Adams, William; Dwyer, Robert; Garman, Emily; Gorsuch, Joseph

    2015-04-01

    The fate and biological effects of chemical mixtures in the environment are receiving increased attention from the scientific and regulatory communities. Understanding the behavior and toxicity of metal mixtures poses unique challenges for incorporating metal-specific concepts and approaches, such as bioavailability and metal speciation, in multiple-metal exposures. To avoid the use of oversimplified approaches to assess the toxicity of metal mixtures, a collaborative 2-yr research project and multistakeholder group workshop were conducted to examine and evaluate available higher-tiered chemical speciation-based metal mixtures modeling approaches. The Metal Mixture Modeling Evaluation project and workshop achieved 3 important objectives related to modeling and interpretation of biological effects of metal mixtures: 1) bioavailability models calibrated for single-metal exposures can be integrated to assess mixture scenarios; 2) the available modeling approaches perform consistently well for various metal combinations, organisms, and endpoints; and 3) several technical advancements have been identified that should be incorporated into speciation models and environmental risk assessments for metals.

  17. Electrostatic component of binding energy: Interpreting predictions from poisson-boltzmann equation and modeling protocols.

    Science.gov (United States)

    Chakavorty, Arghya; Li, Lin; Alexov, Emil

    2016-10-30

    Macromolecular interactions are essential for understanding numerous biological processes and are typically characterized by the binding free energy. An important component of the binding free energy is the electrostatic term, which is frequently modeled via solutions of the Poisson-Boltzmann Equation (PBE). However, numerous works have shown that the electrostatic component (ΔΔGelec) of the binding free energy is very sensitive to the parameters used and to the modeling protocol. This has prompted some researchers to question the robustness of PBE in predicting ΔΔGelec. We argue that the sensitivity of the absolute ΔΔGelec calculated with PBE using different input parameters and definitions does not indicate a deficiency of PBE; rather, this is what should be expected. We show how the apparent sensitivity should be interpreted in terms of the underlying changes in the numerical and physical parameters. We demonstrate that the PBE approach is robust within each considered force field (CHARMM-27, AMBER-94, and OPLS-AA) once the corresponding structures are energy minimized. This observation holds despite the use of two different molecular surface definitions, again indicating that PBE delivers consistent results within a particular force field. The fact that PBE-delivered ΔΔGelec values may differ when calculated with different modeling protocols is not a deficiency of PBE, but a natural result of the differences in force field parameters and in the potential functions used for energy minimization. In addition, while the absolute ΔΔGelec values calculated with different force fields differ, their ordering remains practically the same, allowing for consistent ranking regardless of the force field used. © 2016 Wiley Periodicals, Inc.

  18. Enabling the dynamic coupling between sensor web and Earth system models - The Self-Adaptive Earth Predictive Systems (SEPS) framework

    Science.gov (United States)

    di, L.; Yu, G.; Chen, N.

    2007-12-01

    The self-adaptation concept is a central piece of control theory, widely and successfully used in engineering and military systems. Such a system contains a predictor and a measurer. The predictor takes an initial condition and makes an initial prediction, and the measurer then measures the state of a real-world phenomenon. A built-in feedback mechanism automatically feeds the measurement back to the predictor, which compares the measurement against the prediction, calculates the prediction error, and adjusts its internal state based on the error. Thus, the predictor learns from the error and makes a more accurate prediction in the next step. By adopting the self-adaptation concept, we propose the Self-adaptive Earth Predictive System (SEPS) concept for enabling the dynamic coupling between the sensor web and Earth system models. The concept treats Earth System Models (ESM) and Earth Observations (EO) as integral components of the SEPS coupled by the SEPS framework. EO measures the Earth system state while ESM predicts the evolution of the state. A feedback mechanism processes EO measurements and feeds them into the ESM during model runs or as initial conditions. A feed-forward mechanism analyzes the ESM predictions against science goals for scheduling optimized/targeted observations. The SEPS framework automates the feedback and feed-forward mechanisms (the FF-loop). Based on open consensus-based standards, a general SEPS framework can be developed to support the dynamic, interoperable coupling between ESMs and EO. Such a framework can support the plug-and-play capability of both ESMs and diverse sensors and data systems as long as they support the standard interfaces. This presentation discusses the SEPS concept, the service-oriented architecture (SOA) of the SEPS framework, the standards of choice for the framework, and the implementation. The presentation also presents examples of SEPS to demonstrate dynamic, interoperable, and live coupling of
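
The FF-loop idea (prediction, measurement, error feedback, adjusted prediction) can be illustrated with a toy scalar predictor; the phenomenon, initial condition, and gain parameter below are all illustrative and not part of the SEPS framework itself.

```python
# Toy version of the feedback half of the FF-loop: a predictor whose internal
# state is corrected by each sensor measurement, so the prediction error shrinks.
true_state = 25.0   # state of the "real world" phenomenon
estimate = 10.0     # predictor's initial condition
gain = 0.5          # fraction of the error fed back into the predictor

errors = []
for step in range(10):
    prediction = estimate                # predictor output
    measurement = true_state             # sensor-web observation (noise-free)
    error = measurement - prediction     # feedback signal
    estimate += gain * error             # predictor adjusts its internal state
    errors.append(abs(error))

print(errors[0], round(errors[-1], 4))   # 15.0 0.0293
```

With each cycle the error shrinks by the gain factor, which is the "learns from the error" behavior the abstract describes; the feed-forward half, scheduling targeted observations from predicted error, would sit on top of this loop.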

  19. Understanding influential factors on implementing green supply chain management practices: An interpretive structural modelling analysis.

    Science.gov (United States)

    Agi, Maher A N; Nishant, Rohit

    2017-03-01

    In this study, we establish a set of 19 influential factors on the implementation of Green Supply Chain Management (GSCM) practices and analyse the interaction between these factors and their effect on the implementation of GSCM practices using the Interpretive Structural Modelling (ISM) method and the "Matrice d'Impacts Croisés Multiplication Appliquée à un Classement" (MICMAC) analysis on data compiled from interviews with supply chain (SC) executives based in the Gulf countries (Middle East region). The study reveals a strong influence and driving power of the nature of the relationships between SC partners on the implementation of GSCM practices. We especially found that dependence, trust, and durability of the relationship with SC partners have a very high influence. In addition, the size of the company, the top management commitment, the implementation of quality management and the employees training and education exert a critical influence on the implementation of GSCM practices. Contextual elements such as the industry sector and region and their effect on the prominence of specific factors are also highlighted through our study. Finally, implications for research and practice are discussed.

  20. AIRSAR observations of the Gulf Stream with interpretation from sea truth and modeling

    Science.gov (United States)

    Valenzuela, G. R.; Chubb, S. R.; Marmorino, G. O.; Trump, C. L.; Lee, J. S.; Cooper, A. L.; Askari, F.; Keller, W. C.; Kaiser, J. A. C.; Mied, R. P.

    1991-01-01

    On 20 Jul., the JPL/DC-8 synthetic aperture radar (SAR) participated in the 17-21 Jul. 1990 NRL Gulf Stream (GS) experiment in preparation for the SIR-C missions in 1993, 1994, and 1996, for calibration purposes and to check modes and techniques for operation at our experimental site off the east coast of the US. During this experiment, coordinated and near-simultaneous measurements were performed from ship (R/V Cape Henlopen) and other aircraft (NADC/P-3 and NRL/P-3) to address scientific questions relating to the origin of 'slick-like' features observed by Scully-Power, the refraction and modulation of waves by variable currents, the effect of current and thermal fronts on radar imagery signatures, and the modification of Kelvin ship wakes by fronts. The JPL/DC-8 and NADC/P-3 SARs are fully polarimetric systems. Their composite frequency range varies between P- and X-band. We describe in detail the Airborne SAR (AIRSAR) participation in the Jul. 1990 GS experiment and present preliminary results of the ongoing analysis and interpretation of the radar imagery in the context of ground truth, other remote measurements, and modeling efforts.

  1. On the zigzagging causality model of EPR correlations and on the interpretation of quantum mechanics

    Science.gov (United States)

    de Beauregard, O. Costa

    1988-09-01

    Being formalized inside the S-matrix scheme, the zigzagging causality model of EPR correlations has full Lorentz and CPT invariance. EPR correlations, proper or reversed, and Wheeler's smoky dragon metaphor are respectively pictured in spacetime or in momentum-energy space as V-shaped, Λ-shaped, or C-shaped ABC zigzags, with a summation at B over virtual states |B⟩⟨B|. The formal parallelism breaks down at the level of interpretation because the conditional probability is (A|C) = |⟨A|C⟩|². CPT invariance implies the Fock and Watanabe principle that, in quantum mechanics, retarded (advanced) waves are used for prediction (retrodiction), an expression of which is ⟨Ψ|U|Φ⟩ = ⟨Ψ|UΦ⟩ = ⟨U⁻¹Ψ|Φ⟩, with |Φ⟩ denoting a preparation, |Ψ⟩ a measurement, and U the evolution operator. The transformation |Ψ⟩ = |UΦ⟩ or |Φ⟩ = |U⁻¹Ψ⟩ exchanges the “preparation representation” and the “measurement representation” of a system and is ancillary in the formalization of the quantum chance game by the “wavelike algebra” of conditional amplitudes. In 1935 EPR overlooked that a conditional amplitude ⟨A|C⟩ = Σ_B ⟨A|B⟩⟨B|C⟩ between the two distant measurements is at stake, and that only measurements actually performed do make sense. The reversibility ⟨A|C⟩ = ⟨C|A⟩* implies that causality is CPT-invariant, or arrowless, at the microlevel. Arrowed causality is a macroscopic emergence, corollary to wave retardation and probability increase. Factlike irreversibility states repression, not suppression, of “blind statistical retrodiction”, that is, of “final cause.”

  2. Response of pine forest to disturbance of pine wood nematode with interpretative structural model

    Institute of Scientific and Technical Information of China (English)

    Juan SHI; Youqing LUO; Xiaosu YAN; Weiping CHEN; Ping JIANG

    2009-01-01

    Pine wood nematode (PWN, Bursaphelenchus xylophilus), originating from North America, causes destructive pine wilt disease. Different pine forest ecosystems have different resistances to B. xylophilus, and after its invasion the resilience and restoration direction of different ecosystems also vary. In this study, an interpretative structural model was applied to analyse the response of a pine forest ecosystem to PWN disturbance. The result showed that a five-degree multi-stage hierarchical system affects the response of the pine forest ecosystem to PWN disturbance, in which the direct affecting factors are resistance and resilience. Furthermore, the analysis of the 2nd-, 3rd- and 4th-degree factors showed that not only do the distribution pattern of plant species and the pines' ecological features affect the resistance of the pine forest ecosystem, but removal of attacked trees and other measures also influence the resistance by indirectly affecting the damage degree of Monochamus alternatus and the distribution pattern of plant species. As for resilience, it is influenced directly by soil factors, hydrology, surrounding species provenance and the biological characteristics of the second and jointly dominant species, and climate factors can also have a direct or indirect effect on it by affecting the above factors. Among the fifth-degree elements, elevation, gradient and slope direction, topographical factors, diversity of geographical location and improvement of prevention technology all influence the response of the pine forest ecosystem to PWN disturbance.

  3. The Policy Dystopia Model: An Interpretive Analysis of Tobacco Industry Political Activity.

    Directory of Open Access Journals (Sweden)

    Selda Ulucanlar

    2016-09-01

    Full Text Available Tobacco industry interference has been identified as the greatest obstacle to the implementation of evidence-based measures to reduce tobacco use. Understanding and addressing industry interference in public health policy-making is therefore crucial. Existing conceptualisations of corporate political activity (CPA) are embedded in a business perspective and do not attend to CPA's social and public health costs; most have not drawn on the unique resource represented by internal tobacco industry documents. Building on this literature, including systematic reviews, we develop a critically informed conceptual model of tobacco industry political activity. We thematically analysed published papers included in two systematic reviews examining tobacco industry influence on taxation and marketing of tobacco; we included 45 of 46 papers in the former category and 20 of 48 papers in the latter (n = 65). We used a grounded theory approach to build taxonomies of "discursive" (argument-based) and "instrumental" (action-based) industry strategies and from these devised the Policy Dystopia Model, which shows that the industry, working through different constituencies, constructs a metanarrative to argue that proposed policies will lead to a dysfunctional future of policy failure and widely dispersed adverse social and economic consequences. Simultaneously, it uses diverse, interlocking insider and outsider instrumental strategies to disseminate this narrative and enhance its persuasiveness in order to secure its preferred policy outcomes. Limitations are that many papers were historical (some dating back to the 1970s) and focused on high-income regions. The model provides an evidence-based, accessible way of understanding diverse corporate political strategies. It should enable public health actors and officials to preempt these strategies and develop realistic assessments of the industry's claims.

  4. The Policy Dystopia Model: An Interpretive Analysis of Tobacco Industry Political Activity

    Science.gov (United States)

    Ulucanlar, Selda; Fooks, Gary J.; Gilmore, Anna B.

    2016-01-01

    Background Tobacco industry interference has been identified as the greatest obstacle to the implementation of evidence-based measures to reduce tobacco use. Understanding and addressing industry interference in public health policy-making is therefore crucial. Existing conceptualisations of corporate political activity (CPA) are embedded in a business perspective and do not attend to CPA’s social and public health costs; most have not drawn on the unique resource represented by internal tobacco industry documents. Building on this literature, including systematic reviews, we develop a critically informed conceptual model of tobacco industry political activity. Methods and Findings We thematically analysed published papers included in two systematic reviews examining tobacco industry influence on taxation and marketing of tobacco; we included 45 of 46 papers in the former category and 20 of 48 papers in the latter (n = 65). We used a grounded theory approach to build taxonomies of “discursive” (argument-based) and “instrumental” (action-based) industry strategies and from these devised the Policy Dystopia Model, which shows that the industry, working through different constituencies, constructs a metanarrative to argue that proposed policies will lead to a dysfunctional future of policy failure and widely dispersed adverse social and economic consequences. Simultaneously, it uses diverse, interlocking insider and outsider instrumental strategies to disseminate this narrative and enhance its persuasiveness in order to secure its preferred policy outcomes. Limitations are that many papers were historical (some dating back to the 1970s) and focused on high-income regions. Conclusions The model provides an evidence-based, accessible way of understanding diverse corporate political strategies. It should enable public health actors and officials to preempt these strategies and develop realistic assessments of the industry’s claims. PMID:27649386

  5. Geomorphic Map of Worcester County, Maryland, Interpreted from a LIDAR-Based, Digital Elevation Model

    Science.gov (United States)

    Newell, Wayne L.; Clark, Inga

    2008-01-01

    A recently compiled mosaic of a LIDAR-based digital elevation model (DEM) is presented with geomorphic analysis of new macro-topographic details. The geologic framework of the surficial and near-surface late Cenozoic deposits of the central uplands, Pocomoke River valley, and the Atlantic Coast includes Cenozoic to recent sediments from fluvial, estuarine, and littoral depositional environments. Extensive Pleistocene (cold climate) sandy dune fields are deposited over much of the terraced landscape. The LIDAR image reveals, at 2-meter-scale resolution, details of the shapes of individual dunes and fields of translocated sand sheets. Most terrace surfaces are overprinted with circular to elliptical rimmed basins that represent complex histories of ephemeral ponds that were formed, drained, and overprinted by younger basins. The terrains of composite ephemeral ponds and the dune fields are inter-shingled at their margins, indicating contemporaneous erosion, deposition, re-arrangement, and possible internal deformation of the surficial deposits. The aggregate of these landform details and their deposits is interpreted as the product of arid, cold-climate processes that were common to the mid-Atlantic region during the Last Glacial Maximum. In the Pocomoke valley and its larger tributaries, erosional remnants of sandy flood plains with anastomosing channels indicate the dynamics of the former hydrology and sediment load of the watershed that prevailed at the end of the Pleistocene. As the climate warmed and precipitation increased during the transition from late Pleistocene to Holocene, dune fields were stabilized by vegetation and stream discharge increased. The increased discharge and greater local relief of streams graded to lower sea levels stimulated downcutting and created the deeply incised valleys out onto the continental shelf. These incised valleys have been filling with fluvial to intertidal deposits that record the rising sea

  6. Significance of Kinetics for Sorption on Inorganic Colloids: Modeling and Data Interpretation Issues

    Science.gov (United States)

    Painter, S.; Cvetkovic, V.; Pickett, D.; Turner, D.

    2001-12-01

    Irreversible or slowly reversible attachment to inorganic colloids is a process that may enhance radionuclide transport in the environment. An understanding of sorption kinetics is critical in evaluating this process. A two-site kinetic model for sorption on inorganic colloids is developed and used to evaluate laboratory data. This model was developed as an alternative to the equilibrium colloid sorption model employed by the U.S. Department of Energy (DOE) in their performance assessment for the proposed repository for high-level nuclear waste at Yucca Mountain, Nevada. The model quantifies linear first-order sorption on two types of hypothetical sites (fast and slow) characterized by two pairs of rates (forward and reverse). We use the model to explore data requirements for long-term predictive calculations and to evaluate laboratory kinetic sorption data of Lu et al. Five batch sorption data sets are considered with Pu(V) as the tracer and montmorillonite, hematite, silica, and smectite as colloids. Using asymptotic results applicable on the 240 hour time-scale of the experiments, a robust estimation procedure is developed for the fast-site partitioning coefficient and the slow forward rate. The estimated range for the partition coefficient is 1.1-76 L/g; the range for the slow forward rate is 0.0017-0.02 L/h. Comparison of one-site and two-site sorption interpretations reveals the difficulty in discriminating between the two models for montmorillonite and to a lesser extent for hematite. For silica and smectite the two-site model clearly provides a better representation of the data as compared with a single site model. Kinetic data for silica are available for different colloid concentrations (0.2 g/L and 1.0 g/L). For the range of experimental conditions considered, the forward rate appears to be independent of the colloid concentration. The slow reverse rate cannot be estimated on the time scale of the experiments; we estimate the detection limits for the
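
A minimal sketch of the two-site (fast/slow) first-order kinetic sorption model described above, with made-up rate constants chosen inside the quoted ranges: the fast site reaches quasi-equilibrium (yielding the fast-site partition coefficient) while slow, effectively irreversible uptake continues over the 240-h experimental window.

```python
# Two-site (fast/slow) first-order sorption in a batch system, explicit Euler.
# C: aqueous tracer, Sf/Ss: fast- and slow-site sorbed concentrations (per L).
# Rate constants are invented, but lie within the ranges quoted above.
m_c = 1.0                      # colloid concentration, g/L
kf_fwd, kf_rev = 5.0, 0.5      # fast site: L/(g h) forward, 1/h reverse
ks_fwd, ks_rev = 0.01, 0.0     # slow site: reverse rate below detection

dt, t_end = 0.01, 240.0        # hours (time scale of the experiments)
C, Sf, Ss = 1.0, 0.0, 0.0
for _ in range(int(t_end / dt)):
    jf = kf_fwd * m_c * C - kf_rev * Sf   # net flux onto fast sites
    js = ks_fwd * m_c * C - ks_rev * Ss   # net flux onto slow sites
    C -= (jf + js) * dt
    Sf += jf * dt
    Ss += js * dt

Kd_fast = Sf / (m_c * C)   # apparent fast-site partition coefficient, L/g
print(round(Kd_fast, 2), round(Ss, 3))  # Kd_fast ~ kf_fwd/kf_rev at quasi-equilibrium
```

Because the slow reverse rate is set to zero, the slow-site inventory only grows over the experiment, which is why such a rate cannot be estimated on this time scale, only bounded.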

  7. Estimating and interpreting migration of Amazonian forests using spatially implicit and semi-explicit neutral models.

    Science.gov (United States)

    Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans

    2017-06-01

    With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data, and discussing the results and their interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots, or migration from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, the estimate was shown to be the additive effect of migration from adjacent plots and from the metacommunity. It was only accurate when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two, however, proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Application of the migration estimates to simulate field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration, and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, we have to admit that migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. The parameter m of neutral models then appears more as an emergent property revealed by neutral theory instead of
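
A spatially implicit neutral local community with migration parameter m can be simulated in a few lines: each death is replaced by a local offspring with probability 1-m or by a migrant drawn from a fixed metacommunity with probability m. All parameter values and the similarity measure below are illustrative, not those of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(42)

# Spatially implicit neutral community: J individuals, S species; each death
# is replaced by a local offspring (prob 1-m) or a metacommunity migrant
# (prob m). Parameter values are illustrative.
S, J, m, deaths = 20, 500, 0.1, 20_000
meta = rng.dirichlet(np.ones(S))       # metacommunity relative abundances
local = rng.integers(0, S, size=J)     # initial local community

for _ in range(deaths):
    dead = rng.integers(J)
    if rng.random() < m:
        local[dead] = rng.choice(S, p=meta)      # immigrant
    else:
        local[dead] = local[rng.integers(J)]     # local recruitment

freq = np.bincount(local, minlength=S) / J
similarity = 1 - 0.5 * np.abs(freq - meta).sum()  # overlap with species pool
print(round(float(similarity), 2))
```

Running this with larger m drives `similarity` toward 1, which is exactly the sense in which the abstract says estimated migration measures homogenization between the local community and the regional species pool rather than migration per se.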

  8. Finger Thickening during Extra-Heavy Oil Waterflooding: Simulation and Interpretation Using Pore-Scale Modelling

    Science.gov (United States)

    Bondino, Igor; Hamon, Gerald

    2017-01-01

    Although thermal methods have been popular and successfully applied in heavy oil recovery, they are often found to be uneconomic or impractical. Therefore, alternative production protocols are being actively pursued and interesting options include water injection and polymer flooding. Indeed, such techniques have been successfully tested in recent laboratory investigations, where X-ray scans performed on homogeneous rock slabs during water flooding experiments have shown evidence of an interesting new phenomenon–post-breakthrough, highly dendritic water fingers have been observed to thicken and coalesce, forming braided water channels that improve sweep efficiency. However, these experimental studies involve displacement mechanisms that are still poorly understood, and so the optimization of this process for eventual field application is still somewhat problematic. Ideally, a combination of two-phase flow experiments and simulations should be put in place to help understand this process more fully. To this end, a fully dynamic network model is described and used to investigate finger thickening during water flooding of extra-heavy oils. The displacement physics has been implemented at the pore scale and this is followed by a successful benchmarking exercise of the numerical simulations against the groundbreaking micromodel experiments reported by Lenormand and co-workers in the 1980s. A range of slab-scale simulations has also been carried out and compared with the corresponding experimental observations. We show that the model is able to replicate finger architectures similar to those observed in the experiments and go on to reproduce and interpret, for the first time to our knowledge, finger thickening following water breakthrough. We note that this phenomenon has been observed here in homogeneous (i.e. un-fractured) media: the presence of fractures could be expected to exacerbate such fingering still further. Finally, we examine the impact of several system

  9. A battery model that enables consideration of realistic anisotropic environment surrounding an active material particle and its application

    Science.gov (United States)

    Lin, Xianke; Lu, Wei

    2017-07-01

    This paper proposes a model that enables consideration of the realistic anisotropic environment surrounding an active material particle by incorporating both diffusion and migration of lithium ions and electrons in the particle. This model makes it possible to quantitatively evaluate effects such as fracture on capacity degradation. In contrast, the conventional model assumes an isotropic environment and considers only diffusion in the active particle, and so cannot capture the effect of fracture, since it would predict results contradictory to experimental observations. With the developed model we have investigated the effects of active material electronic conductivity, particle size, and State of Charge (SOC) swing window when fracture exists. The study shows that the low electronic conductivity of the active material has a significant impact on the lithium ion pattern. Fracture increases the resistance to electron transport and therefore reduces lithium intercalation/deintercalation. Particle size plays an important role in lithium ion transport: smaller particle sizes are preferable for mitigating capacity loss when fracture occurs. The study also shows that operating at high SOC reduces the impact of fracture.

  10. GATE V6: a major enhancement of the GATE simulation platform enabling modelling of CT and radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Jan, S; Becheva, E [DSV/I2BM/SHFJ, Commissariat a l' Energie Atomique, Orsay (France); Benoit, D; Rehfeld, N; Stute, S; Buvat, I [IMNC-UMR 8165 CNRS-Paris 7 and Paris 11 Universities, 15 rue Georges Clemenceau, 91406 Orsay Cedex (France); Carlier, T [INSERM U892-Cancer Research Center, University of Nantes, Nantes (France); Cassol, F; Morel, C [Centre de physique des particules de Marseille, CNRS-IN2P3 and Universite de la Mediterranee, Aix-Marseille II, 163, avenue de Luminy, 13288 Marseille Cedex 09 (France); Descourt, P; Visvikis, D [INSERM, U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Brest (France); Frisson, T; Grevillot, L; Guigues, L; Sarrut, D; Zahra, N [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U630, INSA-Lyon, Universite Lyon 1, Centre Leon Berard (France); Maigne, L; Perrot, Y [Laboratoire de Physique Corpusculaire, 24 Avenue des Landais, 63177 Aubiere Cedex (France); Schaart, D R [Delft University of Technology, Radiation Detection and Medical Imaging, Mekelweg 15, 2629 JB Delft (Netherlands); Pietrzyk, U, E-mail: buvat@imnc.in2p3.fr [Reseach Center Juelich, Institute of Neurosciences and Medicine and Department of Physics, University of Wuppertal (Germany)

    2011-02-21

    GATE (Geant4 Application for Emission Tomography) is a Monte Carlo simulation platform developed by the OpenGATE collaboration since 2001 and first publicly released in 2004. Dedicated to the modelling of planar scintigraphy, single photon emission computed tomography (SPECT) and positron emission tomography (PET) acquisitions, this platform is widely used to assist PET and SPECT research. A recent extension of this platform, released by the OpenGATE collaboration as GATE V6, now also enables modelling of x-ray computed tomography and radiation therapy experiments. This paper presents an overview of the main additions and improvements implemented in GATE since the publication of the initial GATE paper (Jan et al 2004 Phys. Med. Biol. 49 4543-61). This includes new models available in GATE to simulate optical and hadronic processes, novelties in modelling tracer, organ or detector motion, new options for speeding up GATE simulations, examples illustrating the use of GATE V6 in radiotherapy applications and CT simulations, and preliminary results regarding the validation of GATE V6 for radiation therapy applications. Upon completion of extensive validation studies, GATE is expected to become a valuable tool for simulations involving both radiotherapy and imaging.

  11. Lightning arrester models enabling highly accurate lightning surge analysis; Koseidona kaminari surge kaiseki wo kano ni suru hiraiki model

    Energy Technology Data Exchange (ETDEWEB)

    Ueda, T. [Chubu Electric Power Co. Inc., Nagoya (Japan); Funabashi, T.; Hagiwara, T.; Watanabe, H. [Meidensha Corp., Tokyo (Japan)

    1998-12-28

    Introduced herein are a dynamic behavior model for lightning arresters designed for power stations and substations, and a flashover model for a lightning arresting device designed for transmission lines, both developed by the authors. The authors base their zinc oxide lightning arrester model on the conventional static V-I characteristics, supplemented with the difference in voltage between the static and dynamic characteristics. The model is easily simulated using EMTP (Electromagnetic Transients Program) and similar tools. There is good agreement between the results of calculations performed using this model and actually measured values. Lightning arresting devices for transmission have come into practical use, and their effectiveness is reported on various occasions. For the proper application of such devices, an analysis model capable of faithfully describing the flashover characteristics of arcing horns, which are installed in great numbers along transmission lines, and of the lightning arresting devices themselves is required. The authors have newly developed a flashover model for these devices and used it for the analysis of lightning surges. The actually measured discharge characteristics of lightning arresting devices for transmission agree well with the values calculated using the model. (NEDO)

  12. Extended temperature-accelerated dynamics: enabling long-time full-scale modeling of large rare-event systems.

    Science.gov (United States)

    Bochenkov, Vladimir; Suetin, Nikolay; Shankar, Sadasivan

    2014-09-07

    A new method, the Extended Temperature-Accelerated Dynamics (XTAD), is introduced for modeling the long-timescale evolution of large rare-event systems. The method is based on the Temperature-Accelerated Dynamics approach [M. Sørensen and A. Voter, J. Chem. Phys. 112, 9599 (2000)], but uses full-scale parallel molecular dynamics simulations to probe the potential energy surface of the entire system, combined with adaptive on-the-fly system decomposition for analyzing the energetics of rare events. The method removes limitations on feasible system size and makes it possible to handle simultaneous diffusion events, including both large-scale concerted and local transitions. Due to its intrinsically parallel algorithm, XTAD not only allows studies of various diffusion mechanisms in solid state physics, but also opens the avenue for atomistic simulations of a range of technologically relevant processes in materials science, such as thin film growth on nano- and microstructured surfaces.

  13. Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

    Science.gov (United States)

    Dinov, Ivo D

    2016-01-01

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and its hallmark will be 'team science'.

  14. Resistor network as a model of fractures in granitic rocks - model for ERT interpretation in crystalline rocks

    Science.gov (United States)

    Vilhelm, Jan; Jirků, Jaroslav; Janeček, Josef; Slavík, Lubomír; Bárta, Jaroslav

    2017-04-01

    Recently we have developed and tested a system for long-term monitoring of underground excavation stability in granitic rocks. It is based on repeated ultrasonic time-of-flight measurements and electrical resistivity tomography (ERT). The ERT measurement is performed directly on the rock wall using 48 electrodes with a spacing of 20 centimeters. Based on the sensitivity function, the maximum penetration depth of the ERT can be expected to be about 1.5 m. The observed time changes in apparent resistivity are expected to be mainly the result of changes in fracture water saturation. To gain some basic knowledge about the relation between electrical resistivity in the rock fracture zone and its saturation, a series of laboratory tests on rock samples with different porosities and saturations was performed. A crystalline rock with a sparse network of fractures is a highly inhomogeneous medium and can hardly be treated as the 2D layered model usually assumed in ERT inversion. Therefore, we prepared a resistor-network model for the qualitative and quantitative interpretation of the observed apparent resistivity changes. Some preliminary results of our experience with this new type of resistivity model are presented. The results can be used for underground storage monitoring projects. Acknowledgments: This work was partially supported by the Technology Agency of the Czech Republic, project No. TA 0302408
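    The resistor-network idea can be illustrated with a minimal nodal-analysis sketch: fracture segments become resistors, and the resistance seen between two electrode nodes follows from Kirchhoff's laws. This is an illustrative reconstruction, not the authors' code; node numbering and values are hypothetical.

```python
import numpy as np

def network_resistance(n_nodes, edges, src, sink):
    """Resistance between nodes src and sink of a resistor network.

    edges: list of (i, j, R_ohms) resistors. Builds the nodal conductance
    matrix, injects 1 A at src / extracts it at sink, grounds the sink to
    remove the singularity, and solves for the node voltages.
    """
    G = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    I = np.zeros(n_nodes)
    I[src], I[sink] = 1.0, -1.0
    keep = [k for k in range(n_nodes) if k != sink]   # ground the sink node
    V = np.zeros(n_nodes)
    V[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
    return V[src] - V[sink]                           # R = V / (1 A)

# A direct 100-ohm path and a 50+50-ohm path via node 2, in parallel
edges = [(0, 1, 100.0), (0, 2, 50.0), (2, 1, 50.0)]
print(network_resistance(3, edges, 0, 1))  # ≈ 50 ohm
```

Saturation changes in a fracture map onto changes in individual edge resistances, so repeated ERT apparent-resistivity readings can be compared against the network response.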

  15. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model

    Science.gov (United States)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.

    2012-10-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m) and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions on the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, with a Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes was constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b); the MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD, and were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile, the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models

  16. Using Enabling Technologies to Facilitate the Comparison of Satellite Observations with the Model Forecasts for Hurricane Study

    Science.gov (United States)

    Li, P.; Knosp, B.; Hristova-Veleva, S. M.; Niamsuwan, N.; Johnson, M. P.; Shen, T. P. J.; Tanelli, S.; Turk, J.; Vu, Q. A.

    2014-12-01

    Due to their complexity and volume, the satellite data are underutilized in today's hurricane research and operations. To better utilize these data, we developed the JPL Tropical Cyclone Information System (TCIS) - an Interactive Data Portal providing fusion between Near-Real-Time satellite observations and model forecasts to facilitate model evaluation and improvement. We have collected satellite observations and model forecasts in the Atlantic Basin and the East Pacific for the hurricane seasons since 2010 and supported the NASA Airborne Campaigns for Hurricane Study such as the Genesis and Rapid Intensification Processes (GRIP) in 2010 and the Hurricane and Severe Storm Sentinel (HS3) from 2012 to 2014. To enable the direct inter-comparisons of the satellite observations and the model forecasts, the TCIS was integrated with the NASA Earth Observing System Simulator Suite (NEOS3) to produce synthetic observations (e.g. simulated passive microwave brightness temperatures) from a number of operational hurricane forecast models (HWRF and GFS). An automated process was developed to trigger NEOS3 simulations via web services given the location and time of satellite observations, monitor the progress of the NEOS3 simulations, display the synthetic observation and ingest them into the TCIS database when they are done. In addition, three analysis tools, the joint PDF analysis of the brightness temperatures, ARCHER for finding the storm-center and the storm organization and the Wave Number Analysis tool for storm asymmetry and morphology analysis were integrated into TCIS to provide statistical and structural analysis on both observed and synthetic data. Interactive tools were built in the TCIS visualization system to allow the spatial and temporal selections of the datasets, the invocation of the tools with user specified parameters, and the display and the delivery of the results. 
In this presentation, we will describe the key enabling technologies behind the design of

  17. On interpretation

    Directory of Open Access Journals (Sweden)

    Michał Januszkiewicz

    2013-01-01

    Full Text Available The article entitled “On interpretation” is an attempt to formulate a viewpoint on the issue of textual interpretation. It presents different ideas related to interpretation, including especially those that are concerned with a text’s meaning and with the way in which it is interpreted by the reader. The author proposes another interpretation method which he calls transactional. The primary concern is how to possibly justify the fundamental character of interpretation and interpretative activity while at the same time preserving and respecting the relative autonomy of an interpreted text.

  18. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of the complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built up of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and the characterization of fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models stemming from wide-angle seismic refraction, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.
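    As an illustration of the forward-modeling step, the zeroth-order building block of any density model is the gravity effect of a slab of anomalous density (the Bouguer slab approximation). All numbers below are hypothetical and not taken from the paper.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(delta_rho, thickness):
    """Gravity effect (mGal) of an infinite horizontal slab:
    dg = 2*pi*G*delta_rho*h, with density contrast delta_rho (kg/m^3)
    and thickness h (m); 1 m/s^2 = 1e5 mGal."""
    return 2.0 * math.pi * G * delta_rho * thickness * 1e5

# A 2 km thick volcanic pile 300 kg/m^3 lighter than its surroundings
print(bouguer_slab_mgal(-300.0, 2000.0))  # ≈ -25.2 mGal, a gravity low
```

A low-density Quaternary volcanic pile thus produces a negative anomaly of the right order of magnitude; the full 3D model refines this with realistic body geometries.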

  19. Enabling Interoperation of High Performance, Scientific Computing Applications: Modeling Scientific Data with the Sets & Fields (SAF) Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M C; Reus, J F; Matzke, R P; Arrighi, W J; Schoof, L A; Hitt, R T; Espen, P K; Butler, D M

    2001-02-07

    This paper describes the Sets and Fields (SAF) scientific data modeling system. It is a revolutionary approach to the interoperation of high performance, scientific computing applications based upon rigorous, math-oriented data modeling principles. Previous technologies have either required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever-expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model and an overview of SAF, along with preliminary results from its use in a few ASCI codes, are discussed.

  20. A Cyber Enabled Collaborative Environment for Creating, Sharing and Using Data and Modeling Driven Curriculum Modules for Hydrology Education

    Science.gov (United States)

    Merwade, V.; Ruddell, B. L.; Fox, S.; Iverson, E. A. R.

    2014-12-01

    With access to emerging datasets and computational tools, there is a need to bring these capabilities into hydrology classrooms. However, developing curriculum modules that use data and models to augment classroom teaching is hindered by a steep technology learning curve, rapid technology turnover, and the lack of an organized community cyberinfrastructure (CI) for the dissemination, publication, and sharing of the latest tools and curriculum material for hydrology and geoscience education. The objective of this project is to overcome some of these limitations by developing a cyber-enabled collaborative environment for publishing, sharing and adopting data- and modeling-driven curriculum modules in hydrology and geoscience classrooms. The CI is based on Carleton College's Science Education Resource Center (SERC) Content Management System. Building on its existing community authoring capabilities, the system is being extended to allow assembly of new teaching activities by drawing on a collection of interchangeable building blocks, each of which represents a step in the modeling process. Currently the system hosts more than 30 modules or steps, which can be combined to create multiple learning units. Two specific units, Unit Hydrograph and Rational Method, have been used in undergraduate hydrology classrooms at Purdue University and Arizona State University. The structure of the CI and the lessons learned from its implementation, including preliminary results from student assessments of learning, will be presented.
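    One of the units mentioned above, the Rational Method, reduces to the one-line formula Q = C i A. A minimal sketch in SI units, with hypothetical inputs:

```python
def rational_method_peak_flow(c, i_mm_per_hr, area_km2):
    """Rational Method peak discharge in SI form:
    Q (m^3/s) = C * i (mm/hr) * A (km^2) / 3.6,
    where C is the dimensionless runoff coefficient and 3.6 is the
    unit-conversion factor."""
    return c * i_mm_per_hr * area_km2 / 3.6

# Example: C = 0.6, 25 mm/hr design storm, 2 km^2 catchment
print(rational_method_peak_flow(0.6, 25.0, 2.0))  # ≈ 8.33 m^3/s
```

A classroom module built from such a step lets students vary C, i and A and see the peak-flow response directly.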

  1. Inkjet-based deposition of polymer thin films enabled by a lubrication model incorporating nano-scale parasitics

    Science.gov (United States)

    Singhal, Shrawan; Meissl, Mario J.; Bonnecaze, Roger T.; Sreenivasan, S. V.

    2013-09-01

    Thin film lubrication theory has been widely used to model multi-scale fluid phenomena. Variations of it have also found application in fluid-based manufacturing process steps for micro- and nano-scale devices over large areas, where a natural disparity in length scales exists. Here, a novel inkjet material deposition approach has been enabled by an enhanced thin film lubrication theory that accounts for nano-scale substrate parasitics. This approach includes fluid interactions with a thin flexible superstrate, leading to a new process called Jet and Coat of Thin-films (JCT). Numerical solutions of the model have been verified, and also validated against controlled experiments of polymer film deposition with good agreement. Understanding gleaned from the experimentally validated model has then been used to guide JCT process synthesis, resulting in a substantial reduction in the influence of parasitics and a concomitant improvement in film thickness uniformity. Polymer films ranging from 20 to 500 nm mean thickness have been demonstrated with a standard deviation of less than 2% of the mean film thickness. The JCT process offers advantages over spin coating, which is not compatible with roll-to-roll processing or large-area processing for displays. It also improves on techniques such as knife-edge coating and slot-die coating, which are limited in the range of film thicknesses that can be deposited without compromising uniformity.

  2. Numerical Modelling and Geological Interpretation of Geothermal Fields in Black Sea

    Science.gov (United States)

    Kostyanev, Simeon; Trapov, Georgi; Dimovski, Stefan; Vasilev, Atanas; Stoyanov, Velislav; Kostadinov, Evgeni

    2013-04-01

    A numerical solution to the thermal conductivity equation was carried out along three profiles: the Varna-Sukhumi profile and two transverse profiles. The purpose of this paper is a more detailed study of the distribution in depth of the thermal field in the light of the latest geological and geophysical data concerning the age and structure of the sedimentary rocks and the Black Sea basement. Updated seismic and tomographic data about the sedimentary formation and the regional basement were obtained and employed in order to refine the results of the previous studies. Calculations were carried out along a geological profile using real properties of the sedimentary rocks and basement; they show that the regional temperature along the Moho plane varies from 420 to 754 °C. The heat flow along the same plane varies from 15-20 to 29-41 mW/m2. The part of the heat flow that is caused by radiogenic sources amounts to 17-30 mW/m2. The modelling results are presented as sections that illustrate the distribution of temperature and heat flow in depth. This article arises from Project No. 226592, entitled "UP-GRADE BLACK SEA SCIENTIFIC NETWORK", carried out between 1st January 2009 and 12th December 2011 as part of the Seventh Framework Programme (FP7). A team from the University of Mining and Geology, Sofia, took part in the project, developing a geothermal database for the Black Sea basin. Part of the data was employed for the modeling of the geothermal field along the Varna-Sukhumi profile. A catalogue is being prepared that will comprise all geothermal data for the Black Sea available so far, which at present number more than 750. The authors wish to thank the Project Management for the provided opportunity to work on this problem.
The numerical modelling, analysis and interpretation of geothermal data will contribute to the study of the geological evolution of the lithosphere of the Black Sea depression.
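    The kind of calculation described above rests on the steady-state 1-D heat conduction equation with radiogenic heat production. A minimal sketch with hypothetical parameter values (not the paper's):

```python
def geotherm(z, t0, q0, k, a):
    """Steady-state 1-D conductive temperature at depth z (m):
    T(z) = T0 + (q0/k) z - (A/(2k)) z^2,
    with surface temperature T0 (°C), surface heat flow q0 (W/m^2),
    thermal conductivity k (W/m/K), and uniform radiogenic heat
    production A (W/m^3)."""
    return t0 + (q0 / k) * z - (a / (2.0 * k)) * z ** 2

# Hypothetical crustal column: q0 = 60 mW/m^2, k = 2.5 W/m/K, A = 1 uW/m^3
print(geotherm(30e3, 0.0, 0.060, 2.5, 1.0e-6))  # ≈ 540 °C at 30 km depth
```

The quadratic term is the radiogenic contribution; setting A = 0 recovers the purely conductive linear geotherm, which is why separating the radiogenic share of the heat flow (17-30 mW/m2 above) matters for the deep temperature estimates.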

  3. Composition of uppermost mantle beneath the Northern Fennoscandia - numerical modeling and petrological interpretation

    Science.gov (United States)

    Virshylo, Ivan; Kozlovskaya, Elena; Prodaivoda, George; Silvennoinen, Hanna

    2013-04-01

    Our study of the uppermost mantle beneath northern Fennoscandia is based on the data of the POLENET/LAPNET passive seismic array. First, arrivals of P-waves from teleseismic events were inverted into a P-wave velocity model using non-linear tomography (Silvennoinen et al., in preparation). The second stage was a numerical petrological interpretation of the above velocity model. This study presents an estimation of the mineralogical composition of the uppermost mantle as a result of numerical modeling. There are many studies concerning the calculation of seismic velocities for polymineral media under high pressure and temperature conditions (Afonso, Fernàndez, Ranalli, Griffin, & Connolly, 2008; Fullea et al., 2009; Hacker, 2004; Xu, Lithgow-Bertelloni, Stixrude, & Ritsema, 2008). The elastic properties under high pressure and temperature (PT) conditions were modelled using the expanded Hooke's law (the Duhamel-Neumann equation), which allows computation of thermoelastic strains. Furthermore, we used a matrix model with multi-component inclusions that places no restrictions on the shape, orientation or concentration of inclusions. The stochastic method of conditional moments with the Mori-Tanaka computation scheme (Prodaivoda, Khoroshun, Nazarenko, & Vyzhva, 2000) is applied instead of the traditional Voigt-Reuss-Hill and Hashin-Shtrikman equations. We developed software for both the forward and the inverse problem. The inverse algorithm uses methods of global non-linear optimization. We prefer a "model-based" approach for this ill-posed problem, meaning that it is solved using geological and geophysical constraints on each parameter of the a priori and final models. Additionally, we check at least several different hypotheses explaining how it is possible to arrive at a solution with a good fit to the observed data. If the a priori model is close to the real medium, the nearest solution will be found by the inversion. Otherwise, the global optimization is searching inside the

  4. Teacher Effectiveness Examined as a System: Interpretive Structural Modeling and Facilitation Sessions with U.S. and Japanese Students

    Science.gov (United States)

    Georgakopoulos, Alexia

    2009-01-01

    This study challenges narrow definitions of teacher effectiveness and uses a systems approach to investigate teacher effectiveness as a multi-dimensional, holistic phenomenon. The methods of Nominal Group Technique and Interpretive Structural Modeling were used to assist U.S. and Japanese students in separately constructing influence structures during…

  5. On the Physical Interpretation of the Saleh-Valenzuela Model and the definition of its power delay profiles

    NARCIS (Netherlands)

    Meijerink, Arjan; Molisch, Andreas F.

    2014-01-01

    The physical motivation and interpretation of the stochastic propagation channel model of Saleh and Valenzuela are discussed in detail. This motivation mainly relies on assumptions on the stochastic properties of the positions of transmitter, receiver and scatterers in the propagation environment,

  6. A classical mechanics model for the interpretation of piezoelectric property data

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Andrew J., E-mail: a.j.bell@leeds.ac.uk [Institute for Materials Research, School of Chemical and Process Engineering, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2015-12-14

    In order to provide a means of understanding the relationship between the primary electromechanical coefficients and simple crystal chemistry parameters for piezoelectric materials, a static analysis of a 3-atom dipolar molecule has been undertaken to derive relationships for the elastic compliance s^E, dielectric permittivity ε^X, and piezoelectric charge coefficient d in terms of an effective ionic charge and two inter-atomic force constants. The relationships demonstrate the mutual interdependence of the three coefficients, in keeping with experimental evidence from a large dataset of commercial piezoelectric materials. It is shown that the electromechanical coupling coefficient k is purely an expression of the asymmetry in the two force constants, or bond compliances. The treatment is extended to show that the quadratic electrostriction relation between strain and polarization, in both centrosymmetric and non-centrosymmetric systems, is due to the presence of a non-zero second-order term in the bond compliance. Comparison with experimental data explains the counter-intuitive positive correlation of k with s^E and ε^X, and supports the proposition that high piezoelectric activity in single crystals is dominated by large compliance coupled with asymmetry in the sub-cell force constants. However, the analysis also shows that in polycrystalline materials, the dielectric anisotropy of the constituent crystals can be more important for attaining large charge coefficients. The model provides a completely new methodology for the interpretation of piezoelectric and electrostrictive property data and suggests methods for rapid screening for high activity in candidate piezoelectric materials, both experimentally and by novel interrogation of ab initio calculations.

  7. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    Science.gov (United States)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
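    The comparison described above can be miniaturized as follows: a linear baseline versus a non-parametric learner on synthetic seasonal rainfall-runoff data, scored by out-of-sample RMSE. The k-nearest-neighbour regressor here is a simple stand-in for the paper's machine learning methods, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic seasonal forcing: monthly rainfall and a nonlinear runoff response
rain = 50 + 40 * np.sin(np.linspace(0, 8 * np.pi, 240)) + rng.normal(0, 5, 240)
flow = 0.002 * np.maximum(rain - 30, 0) ** 2 + rng.normal(0, 0.5, 240)

train, test = slice(0, 180), slice(180, 240)

# Baseline: ordinary least squares (linear in rainfall)
A = np.c_[rain[train], np.ones(180)]
coef, *_ = np.linalg.lstsq(A, flow[train], rcond=None)
pred_lin = np.c_[rain[test], np.ones(60)] @ coef

# Non-parametric alternative: k-nearest-neighbour regression (k = 5)
def knn_predict(x_train, y_train, x_new, k=5):
    d = np.abs(x_train[None, :] - x_new[:, None])   # pairwise distances
    idx = np.argsort(d, axis=1)[:, :k]              # k nearest neighbours
    return y_train[idx].mean(axis=1)

pred_knn = knn_predict(rain[train], flow[train], rain[test])

rmse = lambda p: float(np.sqrt(np.mean((p - flow[test]) ** 2)))
print(rmse(pred_lin), rmse(pred_knn))  # compare out-of-sample RMSE
```

The linear model cannot follow the threshold-quadratic runoff response, which is the kind of structural limitation the paper probes when it contrasts regression baselines with flexible learners.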

  8. Interpreting aerosol lifetimes using the GEOS-Chem model and constraints from radionuclide measurements

    Science.gov (United States)

    Croft, B.; Pierce, J. R.; Martin, R. V.

    2014-04-01

mean lifetime of 3.9 days for the 137Cs emissions injected with a uniform spread through the model's Northern Hemisphere boundary layer. Simulated e-folding times were insensitive to emission parameters (altitude, location, and time), suggesting that these measurement-based e-folding times provide a robust constraint on simulated e-folding times. Despite the reasonable global mean agreement of GEOS-Chem with measurement e-folding times, site-by-site comparisons yield differences of up to a factor of two, which suggest possible deficiencies in the model transport, in removal processes, or in the representation of 137Cs removal, particularly in the tropics and at high latitudes. There is an ongoing need to develop constraints on aerosol lifetimes, but these measurement-based constraints must be carefully interpreted given the sensitivity of mean lifetimes and e-folding times to both mixing and removal processes.
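An e-folding time of the kind constrained here can be estimated from a concentration time series by a least-squares fit to the log-linear decay. The input below is synthetic with a known answer, not the paper's radionuclide measurements:

```python
import math

def e_folding_time(times, concentrations):
    """E-folding time from a least-squares fit of ln(concentration) vs. time.

    For exponential decay, the slope of ln(concentration) is -1/tau,
    so tau = -1/slope. Synthetic illustration only.
    """
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope

# Synthetic decay with a known 14-day e-folding time
days = [0, 7, 14, 21, 28]
conc = [math.exp(-t / 14.0) for t in days]
print(round(e_folding_time(days, conc), 1))  # 14.0
```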

  9. Modeling and interpretation of Q logs in carbonate rock using a double porosity model and well logs

    Science.gov (United States)

    Parra, Jorge O.; Hackert, Chris L.

    2006-03-01

Attenuation data extracted from full waveform sonic logs are sensitive to vuggy and matrix porosities in a carbonate aquifer. This is consistent with the synthetic attenuation (1/Q) as a function of depth at the borehole-sonic source-peak frequency of 10 kHz. We use velocity and density versus porosity relationships based on core and well log data to determine the matrix, secondary, and effective bulk moduli. The attenuation model requires the bulk modulus of the primary and secondary porosities. We use a double porosity model that allows us to investigate attenuation at the mesoscopic scale. Thus, the secondary and primary porosities in the aquifer should respond with different changes in fluid pressure. The results show a high permeability region with a Q that varies from 25 to 50 and correlates with the stiffer part of the carbonate formation. This pore structure permits water to flow between the interconnected vugs and the matrix. In this region the double porosity model predicts a decrease in the attenuation at lower frequencies that is associated with fluid flowing from the more compliant high-pressure regions (interconnected vug space) to the relatively stiff, low-pressure regions (matrix). The chalky limestone with a low Q of 17 is formed by a muddy porous matrix with soft pores. This low permeability region correlates with the low matrix bulk modulus. A low Q of 18 characterizes the soft sandy carbonate rock above the vuggy carbonate. This paper demonstrates the use of attenuation logs for discriminating between lithologies and provides information on the pore structure when integrated with cores and other well logs. In addition, the paper demonstrates the practical application of a new double porosity model to interpret the attenuation at sonic frequencies by achieving a good match between measured and modeled attenuation.

  10. An Interpretation of Tevatron SUSY Trilepton Search Results in mSUGRA and in a Model-independent Fashion

    CERN Document Server

    Dube, Sourabh; Somalwar, Sunil; Sood, Alexander

    2008-01-01

Both the CDF and D0 experiments at the Tevatron search for supersymmetry using the golden three-lepton and missing energy "trilepton" signature of chargino-neutralino production. However, the experimental results are presented for specific parameter values of a given model or for custom-made scenarios in the region of sensitivity. By breaking down search sensitivity according to the tau-lepton content of the trileptons, we are able to present in this paper a recipe to extend the interpretation of the Tevatron trilepton search results to the general mSUGRA model. We also attempt to interpret the search results in a model-independent fashion by expressing them in terms of relevant sparticle masses instead of specific parameters of a model such as mSUGRA.

  11. Demand response-enabled model predictive HVAC load control in buildings using real-time electricity pricing

    Science.gov (United States)

    Avci, Mesut

A practical, cost- and energy-efficient model predictive control (MPC) strategy is proposed for HVAC load control under dynamic real-time electricity pricing. The MPC strategy is built on a proposed model that jointly minimizes the total energy consumption, and hence the cost of electricity for the user, and the deviation of the inside temperature from the consumer's preference. An algorithm that assigns temperature set-points (reference temperatures) to price ranges based on the consumer's discomfort tolerance index is developed. A practical parameter prediction model is also designed for mapping between the HVAC load and the inside temperature. The prediction model and the produced temperature set-points are integrated as inputs into the MPC controller, which is then used to generate control signals for the AC unit. To investigate and demonstrate the effectiveness of the proposed approach, a simulation-based experimental analysis is presented using real-life pricing data. An actual prototype for the proposed HVAC load control strategy is then built, and a series of prototype experiments are conducted similar to the simulation studies. The experiments reveal that the MPC strategy can lead to significant reductions in overall energy consumption and cost savings for the consumer. Results suggest that by providing an efficient response strategy for consumers, the proposed MPC strategy can enable utility providers to adopt efficient demand management policies using real-time pricing. Finally, a cost-benefit analysis is performed to display the economic feasibility of implementing such a controller as part of a building energy management system, and the payback period is identified, considering the cost of the prototype build and the cost savings, to help the adoption of this controller in the building HVAC control industry.
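The set-point assignment step described above (mapping price ranges to reference temperatures via a discomfort tolerance index) can be sketched as follows. The price bands, comfort temperature, and tolerance values are illustrative assumptions, not those of the thesis:

```python
def setpoint_for_price(price, comfort_temp=22.0, tolerance=3.0,
                       price_bands=(0.10, 0.20, 0.30)):
    """Map a real-time electricity price ($/kWh) to a cooling set-point.

    The set-point relaxes away from the comfort temperature as the price
    crosses successive bands, by at most `tolerance` degrees (the
    consumer's discomfort tolerance index). All values are illustrative.
    """
    # Count how many price bands the current price exceeds
    level = sum(price > band for band in price_bands)
    step = tolerance / len(price_bands)
    return comfort_temp + level * step

print(setpoint_for_price(0.05))  # 22.0: cheap power, hold the comfort temperature
print(setpoint_for_price(0.25))  # 24.0: expensive power, drift upward
```

An MPC controller would then track these set-points while minimizing predicted energy use over its horizon.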

  12. Analysis and interpretation of the model of a Faraday cage for electromagnetic compatibility testing

    Directory of Open Access Journals (Sweden)

    Nenad V. Munić

    2014-02-01

In order to improve the work of the Laboratory for Electromagnetic Compatibility Testing in the Technical Test Center (TTC), we investigated the influence of the Faraday cage on measurement results. The primary goal of this study is the simulation of the fields in the cage, especially around resonant frequencies, in order to be able to predict results of measurements of devices under test in the anechoic chamber or in any other environment. We developed simulation (computer) models of the cage step by step, by using the Wipl-D program and by comparing the numerical results with measurements, as well as by resolving difficulties due to the complex structure and imperfections of the cage. The subject of this paper is to present these simulation models and the corresponding results of the computations and measurements. Construction of the cage: The cage is made of steel plates with the dimensions 1.25 m x 2.5 m. The base of the cage is a square; the footprint interior dimensions are 3.76 m x 3.76 m, and the height is 2.5 m. The cage ceiling is lowered by plasticized aluminum strips. The strips are loosely attached to the carriers which are screwed to the ceiling. The cage has four ventilation openings (two on the ceiling and two on one wall), made of honeycomb waveguide holes. In one corner of the cage, there is a single door with springs made of beryllium bronze. For frequencies of a few tens of MHz, the skin effect is fully developed in the cage walls. By measuring the input impedance of the wire line parallel to a wall of the cage, we calculated the surface losses of the cage plates. In addition, we used a magnetic probe to detect shield discontinuities. We generated a strong current at a frequency of 106 kHz outside the cage and measured the magnetic field inside the cage at the places of cage shield discontinuities. In this paper, we showed the influence of these places on the measurement results, especially on the qualitative and quantitative

  13. Making Tree Ensembles Interpretable

    OpenAIRE

    Hara, Satoshi; Hayashi, Kohei

    2016-01-01

Tree ensembles, such as random forests and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited. In this paper, we propose a post-processing method that improves the model interpretability of tree ensembles. After learning a complex tree ensemble in a standard way, we approximate it by a simpler model that is interpretable for humans. To obtain the simpler model, we derive the EM algorithm minimizing the KL divergence from the ...
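The surrogate idea — approximating a complex ensemble with a model simple enough to read — can be illustrated with a one-split "stump" fitted by least squares. This is only a sketch of the general idea; the paper's actual method minimizes KL divergence with an EM algorithm, which this does not reproduce:

```python
def best_single_split(xs, ys):
    """Best one-split 'stump' surrogate for predictions (xs, ys).

    Returns (threshold, left_mean, right_mean) minimizing squared error.
    Illustrates the surrogate idea only, not the paper's EM/KL method.
    """
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

# Pretend these are a tree ensemble's predictions on a 1-D input grid
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.0, 0.2, 0.9, 1.1, 1.0]
print(best_single_split(xs, ys))  # splits at 2.0, means ~0.1 and ~1.0
```

The surrogate compresses the ensemble's behavior into a single human-readable rule, at the cost of fidelity.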

  14. Mapping socio-economic scenarios of land cover change: a GIS method to enable ecosystem service modelling.

    Science.gov (United States)

    Swetnam, R D; Fisher, B; Mbilinyi, B P; Munishi, P K T; Willcock, S; Ricketts, T; Mwakalila, S; Balmford, A; Burgess, N D; Marshall, A R; Lewis, S L

    2011-03-01

We present a GIS method to interpret qualitatively expressed socio-economic scenarios in quantitative map-based terms. (i) We built scenarios with local stakeholders and experts to define how major land cover classes may change under different sets of drivers; (ii) we formalized these as spatially explicit rules, for example, agriculture can only occur on certain soil types; (iii) we created a future land cover map, which can then be used to model ecosystem services. We illustrate this for carbon storage in the Eastern Arc Mountains of Tanzania using two scenarios: the first based on sustainable development, the second on 'business as usual' with continued forest-woodland degradation and poor protection of existing forest reserves. Between 2000 and 2025, 4% of carbon stocks were lost under the first scenario, compared to 41% under the second. Quantifying the impacts of differing future scenarios using the method we document here will be important if payments for ecosystem services are to be used to change policy in order to maintain critical ecosystem services.

  15. Exploring Prospective Secondary Mathematics Teachers' Interpretation of Student Thinking through Analysing Students' Work in Modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-01-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that…

  16. Modeling and Inversion Methods for the Interpretation of Resistivity Logging Tool Response

    NARCIS (Netherlands)

    Anderson, B.I.

    2001-01-01

The electrical resistivity measured by well logging tools is one of the most important rock parameters for indicating the amount of hydrocarbons present in a reservoir. The main interpretation challenge is to invert the measured data, solving for the true resistivity values in each zone of a reservoir.

  17. Towards a Good Practice Model for an Entrepreneurial HEI: Perspectives of Academics, Enterprise Enablers and Graduate Entrepreneurs

    Science.gov (United States)

    Williams, Perri; Fenton, Mary

    2013-01-01

    This paper reports on an examination of the perspectives of academics, enterprise enablers and graduate entrepreneurs of an entrepreneurial higher education institution (HEI). The research was conducted in Ireland among 30 graduate entrepreneurs and 15 academics and enterprise enablers (enterprise development agency personnel) to provide a…

  18. Bipole-dipole interpretation with three-dimensional models (including a field study of Las Alturas, New Mexico)

    Energy Technology Data Exchange (ETDEWEB)

    Hohmann, G.W.; Jiracek, G.R.

    1979-09-01

    The bipole-dipole responses of three-dimensional (3D) prisms were studied using an integral equation numerical solution. Although response patterns are quite complex, the bipole-dipole method appears to be a useful, efficient means of mapping the areal distribution of resistivity. However, 3D modeling is required for quantitative interpretation. Computer time for our solution varies from negligible for small bodies to 6 minutes on a UNIVAC 1108 for the largest possible body (85 cubes). Bipole-dipole response varies significantly with bipole orientation and position, but simply changing the distance between the bipole and the body does not greatly affect the response. However, the response is complex and interpretation ambiguous if both transmitter electrodes are located directly over a body. Boundaries of shallow bodies are much better resolved than those of deep bodies. Conductive bodies produce false polarization highs that can confuse interpretation. It is difficult to distinguish the effects of depth and resistivity contrast, and, as with all electrical methods, depth extent is difficult to resolve. Interactive interpretation of bipole-dipole field results from a geothermal prospect in New Mexico illustrates the value of the 3D modeling technique.

  19. Interpretability formalized

    NARCIS (Netherlands)

    Joosten, Joost Johannes

    2004-01-01

The dissertation is in the first place a treatment of mathematical interpretations. Interpretations themselves will be studied, but they will also be used to study formal theories. Interpretations, when used in comparing theories, tell us, in a natural way, something about the proof-strength of formal theories.

  20. The Thermodynamic Flow-Force Interpretation of Root Nutrient Uptake Kinetics: A Powerful Formalism for Agronomic and Phytoplanktonic Models.

    Science.gov (United States)

    Le Deunff, Erwan; Tournier, Pierre-Henri; Malagoli, Philippe

    2016-01-01

The ion influx isotherms obtained by measuring unidirectional influx across root membranes with radioactive or stable tracers are mostly interpreted by enzyme-substrate-like modeling. However, recent analyses from ion transporter mutants clearly demonstrate the inadequacy of the conventional interpretation of ion isotherms. Many genetically distinct carriers are involved in the root catalytic function. Parameters Vmax and Km deduced from this interpretation cannot therefore be regarded as microscopic parameters of a single transporter, but are instead macroscopic parameters (Vmax(app) and Km(app), the apparent maximum velocity and affinity constant) that depend on weighted activities of multiple transporters along the root. The flow-force interpretation based on the thermodynamic principle of irreversible processes is an alternative macroscopic modeling approach for ion influx isotherms in which the macroscopic parameters Lj (overall conductance of the root system for the substrate j) and πj (the thermodynamic parameter at which Jj = 0) have a straightforward meaning with respect to the biological sample studied. They characterize the efficiency of the entire root catalytic structure without deducing molecular characteristics. Here we present the basic principles of this theory and how its use can be tested and improved by changing root pre- and post-wash procedures before influx measurements in order to come as close as possible to equilibrium conditions. In addition, the constant values of Vmax and Km in the Michaelis-Menten (MM) formalism of the enzyme-substrate interpretation do not reflect variations in response to temperature, nutrient status or nutrient regimes. The linear formalism of the flow-force approach, which integrates the temperature effect on nutrient uptake, could usefully replace the MM formalism in one- to three-dimensional models of plants and phytoplankton. This formalism offers a simplification of parametrization to help find more realistic analytical
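The two formalisms contrasted above can be sketched side by side: the saturating Michaelis-Menten isotherm versus a linear flow-force form in which influx is proportional to a thermodynamic driving force. The exact driving-force expression (here, ln(C) minus a zero-flux point) and all parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def mm_influx(C, Vmax=10.0, Km=0.5):
    """Michaelis-Menten isotherm: saturating enzyme-substrate form."""
    return Vmax * C / (Km + C)

def flow_force_influx(C, L=4.0, pi=-3.0):
    """Linear flow-force form: influx proportional to the driving force,
    taken here as ln(C) minus the zero-flux point pi. The exact form
    and the parameter values are illustrative assumptions."""
    return L * (math.log(C) - pi)

# MM saturates at Vmax; the flow-force form is linear in ln(C) and
# vanishes where ln(C) equals pi.
for C in (0.01, 0.1, 1.0, 10.0):
    print(C, round(mm_influx(C), 3), round(flow_force_influx(C), 3))
```

The flow-force parameters (L, pi) describe the whole root catalytic structure rather than a single transporter, which is the point of the macroscopic interpretation.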

  1. Optogenetics enables functional analysis of human embryonic stem cell–derived grafts in a Parkinson’s disease model

    Science.gov (United States)

    Steinbeck, Julius A; Choi, Se Joon; Mrejeru, Ana; Ganat, Yosif; Deisseroth, Karl; Sulzer, David; Mosharov, Eugene V; Studer, Lorenz

    2016-01-01

Recent studies have shown evidence of behavioral recovery after transplantation of human pluripotent stem cell (PSC)-derived neural cells in animal models of neurological disease [1-4]. However, little is known about the mechanisms underlying graft function. Here we use optogenetics to modulate in real time electrophysiological and neurochemical properties of mesencephalic dopaminergic (mesDA) neurons derived from human embryonic stem cells (hESCs). In mice that had recovered from lesion-induced Parkinsonian motor deficits, light-induced selective silencing of graft activity rapidly and reversibly re-introduced the motor deficits. The re-introduction of motor deficits was prevented by the dopamine agonist apomorphine. These results suggest that functionality depends on graft neuronal activity and dopamine release. Combining optogenetics, slice electrophysiology and pharmacological approaches, we further show that mesDA-rich grafts modulate host glutamatergic synaptic transmission onto striatal medium spiny neurons in a manner reminiscent of endogenous mesDA neurons. Thus, application of optogenetics in cell therapy can link transplantation, animal behavior and postmortem analysis to enable the identification of mechanisms that drive recovery. PMID:25580598

  2. Proposal for a Conceptual Model for Evaluating Lean Product Development Performance: A Study of LPD Enablers in Manufacturing Companies

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

The instability in today's market and customers' emerging demands for mass-customized products are driving companies to seek cost-effective and time-efficient improvements in their production systems, and this has led to real pressure to adopt new developmental architectures and operational parameters in order to remain competitive in the market. Among the developmental architectures adopted is the integration of lean thinking into the product development process. However, for lack of a clear understanding of lean performance and its measurement, many companies are unable to implement and fully integrate the lean principle into their product development process; without proper performance measurement, the performance level of the organizational value stream remains unknown and the specific areas of improvement related to the LPD program cannot be tracked, resulting in poor decision-making in the LPD implementation. This paper therefore presents a conceptual model for evaluating LPD performance by identifying and analysing the core existing LPD enablers (Chief Engineer, cross-functional teams, set-based engineering, Poka-yoke (mistake-proofing), knowledge-based environment, value-focused planning and development, top management support, technology, supplier integration, workforce commitment, and continuous improvement culture).

  3. Optogenetics enables functional analysis of human embryonic stem cell-derived grafts in a Parkinson's disease model.

    Science.gov (United States)

    Steinbeck, Julius A; Choi, Se Joon; Mrejeru, Ana; Ganat, Yosif; Deisseroth, Karl; Sulzer, David; Mosharov, Eugene V; Studer, Lorenz

    2015-02-01

    Recent studies have shown evidence of behavioral recovery after transplantation of human pluripotent stem cell (PSC)-derived neural cells in animal models of neurological disease. However, little is known about the mechanisms underlying graft function. Here we use optogenetics to modulate in real time electrophysiological and neurochemical properties of mesencephalic dopaminergic (mesDA) neurons derived from human embryonic stem cells (hESCs). In mice that had recovered from lesion-induced Parkinsonian motor deficits, light-induced selective silencing of graft activity rapidly and reversibly re-introduced the motor deficits. The re-introduction of motor deficits was prevented by the dopamine agonist apomorphine. These results suggest that functionality depends on graft neuronal activity and dopamine release. Combining optogenetics, slice electrophysiology and pharmacological approaches, we further show that mesDA-rich grafts modulate host glutamatergic synaptic transmission onto striatal medium spiny neurons in a manner reminiscent of endogenous mesDA neurons. Thus, application of optogenetics in cell therapy can link transplantation, animal behavior and postmortem analysis to enable the identification of mechanisms that drive recovery.

  4. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. Using an X-Ray microCT-derived soil microaggregate physical model combined

  5. The Connecting South West Ontario (cSWO) Benefits Model: An Approach for the Collaborative Capture of Value of Electronic Health Records and Enabling Technology.

    Science.gov (United States)

    Alexander, Ted; Huebner, Lori-Anne; Alarakhia, Mohamed; Hollohan, Kirk

    2017-01-01

This paper explains the benefits model developed and deployed by the connecting South West Ontario (cSWO) program. The cSWO approach is founded on the principle of enabling clinical and organizational value, and on the recognition that such enabling requires a collaborative approach that can include several perspectives. We describe our approach, which aims to create a four-part harmony among change management and adoption, best-practice research and quality indicators, data analytics, and clinical value production.

  6. Open-source Peer-to-Peer Environment to Enable Sensor Web Architecture: Application to Geomagnetic Observations and Modeling

    Science.gov (United States)

    Holland, M.; Pulkkinen, A.

    2007-12-01

A flexible, dynamic, and reliable secure peer-to-peer (P2P) communication environment is under development at NASA's Goddard Space Flight Center (GSFC). Popular open-source P2P software technology provides a self-organizing, self-healing ad hoc "virtual network overlay" protocol suite. The current effort builds a proof-of-concept geomagnetic Sensor Web upon this foundation. Our long-term objective is to enable an evolution of many types of distributed Earth system sensors and related processing/storage components into elements of an operational Sensor Web via integration into this P2P Environment. In general, the Environment distributes data communication tasks among the sensors (viewed as peers, each assigned a peer-role) and controls the flow of data. This work encompasses dynamic discovery, monitoring, control, and configuration as well as autonomous operations, real-time modeling and data processing, and secure ubiquitous communications. We currently restrict our communications to be within the secure GSFC network environment, and have integrated "simulated" (via historical data) geomagnetic sensors. Each remote sensor has operating modes to manage (from remote interfaces) and is designed to have features nearly indistinguishable from a live magnetometer. We have implemented basic identity-management features (organized around GSFC identity-management practices), providing mechanisms which restrict data-serving privileges to authorized users, and which allow improved trust and accountability among users of the Environment. Data-serving peers digitally "sign" their services, and their data-browsing counterparts will only accept the products of services whose signature (and hence identity) can be verified. The current usage scenario involves modeling-peers, which operate within the same Environment as the sensors and also have operating modes to remotely manage, portraying a near-real-time global representation of geomagnetic activity from dynamic sensor

  7. A novel humanized GLP-1 receptor model enables both affinity purification and Cre-LoxP deletion of the receptor.

    Directory of Open Access Journals (Sweden)

    Lucy S Jun

Class B G protein-coupled receptors (GPCRs) are important regulators of endocrine physiology, and peptide-based therapeutics targeting some of these receptors have proven effective at treating disorders such as hypercalcemia, osteoporosis, and type 2 diabetes mellitus (T2DM). As next-generation efforts attempt to develop novel non-peptide, orally available molecules for these GPCRs, new animal models expressing human receptor orthologs may be required because small-molecule ligands make fewer receptor contacts, and thus the impact of amino acid differences across species may be substantially greater. The objective of this report was to generate and characterize a new mouse model of the human glucagon-like peptide-1 receptor (hGLP-1R), a class B GPCR for which established peptide therapeutics exist for the treatment of T2DM. hGLP-1R knock-in mice express the receptor from the murine Glp-1r locus. Glucose tolerance tests and gastric emptying studies show hGLP-1R mice and their wild-type littermates display similar physiological responses for glucose metabolism, insulin secretion, and gastric transit, and treatment with the GLP-1R agonist, exendin-4, elicits similar responses in both groups. Further, ex vivo assays show insulin secretion from humanized islets is glucose-dependent and enhanced by GLP-1R agonists. To enable additional utility, the targeting construct of the knock-in line was engineered to contain both flanking LoxP sites and a C-terminal FLAG epitope. Anti-FLAG affinity purification shows strong expression of hGLP-1R in islets, lung, and stomach. We crossed the hGLP-1R line with Rosa26Cre mice and generated global Glp-1r-/- animals. Immunohistochemistry of pancreas from humanized and knock-out mice identified a human GLP-1R-specific antibody that detects the GLP-1R in human pancreas as well as in the pancreas of hGLP-1r knock-in mice. This new hGLP-1R model will allow tissue-specific deletion of the GLP-1R, purification of potential

  8. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall

    2016-01-01

Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) a classical statistical approach, using the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by the three methods. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches. PMID:28163973
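The classical side of the comparison can be made concrete with a minimal Kaplan-Meier estimator (the Cox model and the dynamic Bayesian network are beyond a short sketch). The follow-up times and censoring flags below are invented, not the screening data from the paper:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) at event times.
    Censored subjects at time t are counted as still at risk at t.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        removed = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
        i += removed
    return curve

# Invented data: events at times 1, 2, 3; censoring at 2 and 4
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
print(curve)  # survival drops at each observed event time
```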

  9. A healthy fear of the unknown: perspectives on the interpretation of parameter fits from computational models in neuroscience.

    Directory of Open Access Journals (Sweden)

    Matthew R Nassar

    2013-04-01

Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
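The central point — that omitting a real behavioral factor biases the fitted parameters — can be reproduced in miniature: generate a learning curve from a model with forgetting, then fit a model without forgetting. The model form and all numbers are illustrative toys, not the reinforcement-learning models used in the paper:

```python
def true_learning_curve(alpha=0.5, decay=0.1, trials=20):
    """Value estimates under learning rate `alpha` AND forgetting `decay`,
    with constant unit reward. All numbers are illustrative."""
    v, curve = 0.0, []
    for _ in range(trials):
        v = (1.0 - decay) * (v + alpha * (1.0 - v))
        curve.append(v)
    return curve

def fit_alpha_no_decay(target, grid=200):
    """Grid-search least-squares fit of a model that omits forgetting."""
    best_a, best_sse = None, float("inf")
    for i in range(1, grid):
        a = i / grid
        v, sse = 0.0, 0.0
        for y in target:
            v = v + a * (1.0 - v)
            sse += (v - y) ** 2
        if sse < best_sse:
            best_a, best_sse = a, sse
    return best_a

data = true_learning_curve()          # generated WITH forgetting
a_hat = fit_alpha_no_decay(data)      # fitted WITHOUT forgetting
print(a_hat)  # biased well below the true learning rate of 0.5
```

The incomplete model still fits the curve reasonably well, yet its learning-rate estimate absorbs the neglected forgetting and is systematically biased.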

  10. A Microeconomic Interpretation of the Maximum Entropy Estimator of Multinomial Logit Models and Its Equivalence to the Maximum Likelihood Estimator

    Directory of Open Access Journals (Sweden)

    Louis de Grange

    2010-09-01

Full Text Available Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
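The stated equivalence can be checked numerically in a toy setting (illustrative utilities, not values from the paper): the entropy-maximizing distribution under a mean-utility constraint has the logit form p_j ∝ exp(θ v_j), and the θ that satisfies the moment condition coincides with the maximum-likelihood θ.

```python
import math

v = [1.0, 2.0, 3.5]                 # utilities of three alternatives (made up)
theta_true = 0.8

def logit(theta):
    w = [math.exp(theta * vi) for vi in v]
    z = sum(w)
    return [wi / z for wi in w]

shares = logit(theta_true)          # "observed" choice shares
target = sum(s * vi for s, vi in zip(shares, v))   # observed mean utility

# (a) Entropy-max view: find theta so the model reproduces the mean utility.
lo, hi = -10.0, 10.0
for _ in range(100):                # bisection on the monotone moment condition
    mid = 0.5 * (lo + hi)
    m = sum(p * vi for p, vi in zip(logit(mid), v))
    lo, hi = (mid, hi) if m < target else (lo, mid)
theta_entropy = 0.5 * (lo + hi)

# (b) Maximum-likelihood view: maximize sum_j shares_j * log p_j(theta).
def loglik(theta):
    return sum(s * math.log(p) for s, p in zip(shares, logit(theta)))

theta_ml = max((t / 1000 for t in range(-3000, 3001)), key=loglik)

print(round(theta_entropy, 3), round(theta_ml, 3))  # both ≈ 0.8
```

Both routes recover the generating θ, which is the dual relationship the abstract describes: matching the constraint of the entropy problem is the first-order condition of the logit likelihood.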

  11. Linguistics in Text Interpretation

    DEFF Research Database (Denmark)

    Togeby, Ole

    2011-01-01

A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, with definitions of the concepts 'presupposition' and 'implicature'.

  12. Visual Environment for Rich Data Interpretation (VERDI) program for environmental modeling systems

    Science.gov (United States)

    VERDI is a flexible, modular, Java-based program used for visualizing multivariate gridded meteorology, emissions and air quality modeling data created by environmental modeling systems such as the CMAQ model and WRF.

  13. Using global magnetospheric models for simulation and interpretation of Swarm external field measurements

    DEFF Research Database (Denmark)

    Moretto, T.; Vennerstrøm, Susanne; Olsen, Nils

    2006-01-01

    We have used a global model of the solar wind magnetosphere interaction to model the high latitude part of the external contributions to the geomagnetic field near the Earth. The model also provides corresponding values for the electric field. Geomagnetic quiet conditions were modeled to provide...

  14. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic imaging (fMRI data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others--i.e. small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. 
Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a
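The bootstrap-plus-permutation idea can be sketched with a toy decoder (this is a hypothetical mean-difference classifier on synthetic data, not FaSTGLZ or elastic-net on fMRI): bootstrap resamples give a reproducibility measure for the learned weight map, while label permutations give a null distribution for accuracy.

```python
import random
import statistics

random.seed(0)
NFEAT, N = 6, 40

def make_data(labels):
    """Toy data: 2 informative features out of 6, unit Gaussian noise."""
    return [[(1.0 if (l == 1 and j < 2) else 0.0) + random.gauss(0, 1)
             for j in range(NFEAT)] for l in labels]

labels = [i % 2 for i in range(N)]
X = make_data(labels)

def fit(X, y):
    """Mean-difference 'decoder' with a midpoint decision threshold."""
    m1 = [statistics.mean(x[j] for x, l in zip(X, y) if l == 1) for j in range(NFEAT)]
    m0 = [statistics.mean(x[j] for x, l in zip(X, y) if l == 0) for j in range(NFEAT)]
    w = [a - b for a, b in zip(m1, m0)]
    b = sum(wi * (a + c) / 2 for wi, a, c in zip(w, m1, m0))
    return w, b

def accuracy(model, X, y):
    w, b = model
    hits = sum(1 for x, l in zip(X, y)
               if (sum(wi * xi for wi, xi in zip(w, x)) > b) == (l == 1))
    return hits / len(y)

# Bootstrap: reproducibility of the learned "map" (weight signs).
boot_w = []
for _ in range(200):
    idx = [random.randrange(N) for _ in range(N)]
    boot_w.append(fit([X[i] for i in idx], [labels[i] for i in idx])[0])
sign_consistency = [sum(1 for w in boot_w if w[j] > 0) / len(boot_w)
                    for j in range(NFEAT)]

# Permutation: null distribution of (training) accuracy under shuffled labels.
null_acc = []
for _ in range(200):
    y = labels[:]
    random.shuffle(y)
    null_acc.append(accuracy(fit(X, y), X, y))

acc = accuracy(fit(X, labels), X, labels)
print("accuracy:", acc, "signal-feature sign consistency:", sign_consistency[:2])
```

The two resampling loops are independent axes, which is what makes an accuracy-versus-reproducibility trade-off (and a Pareto front over regularization settings) possible in the full method.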

  15. Two-zone model for the broadband Crab nebula spectrum: microscopic interpretation

    Directory of Open Access Journals (Sweden)

    Fraschetti F.

    2017-01-01

Full Text Available We develop a simple two-zone interpretation of the broadband baseline Crab nebula spectrum between 10⁻⁵ eV and ~100 TeV by using two distinct log-parabola energetic electron distributions. We determine analytically the very-high-energy photon spectrum as originating from inverse-Compton scattering of the far-infrared soft ambient photons within the nebula off a first population of electrons energized at the nebula termination shock. The broad and flat 200 GeV peak jointly observed by Fermi/LAT and MAGIC is naturally reproduced. The synchrotron radiation from a second energetic electron population explains the spectrum from the radio range up to ~10 keV. We infer from observations the energy dependence of the microscopic probability that the accelerating electrons remain in proximity of the shock.

  16. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks.

    Science.gov (United States)

    Caywood, Matthew S; Roberts, Daniel M; Colombe, Jeffrey B; Greenwald, Hal S; Weiland, Monica Z

    2016-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as "black boxes" that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model's predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy.
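For readers unfamiliar with GPR, the posterior-mean computation at its core can be sketched in a few lines (a hypothetical 1-D toy with a fixed RBF kernel and a naive solver; the paper's EEG models additionally involve kernel hyperparameter fitting and feature-relevance analysis):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, y):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Training data from a smooth function (stand-in for workload vs. EEG features).
xs = [i * 0.5 for i in range(13)]             # inputs 0.0 .. 6.0
ys = [math.sin(x) for x in xs]
noise = 1e-6                                   # jitter for numerical stability

K = [[rbf(a, b) + (noise if i == j else 0.0)
      for j, b in enumerate(xs)] for i, a in enumerate(xs)]
alpha = solve(K, ys)                           # alpha = K^{-1} y

def predict(x):
    """GP posterior mean: k(x, X) @ K^{-1} y."""
    return sum(rbf(x, xi) * ai for xi, ai in zip(xs, alpha))

print(round(predict(1.75), 3), round(math.sin(1.75), 3))
```

In practice a library such as scikit-learn's GaussianProcessRegressor handles the linear algebra and hyperparameter optimization; the point of the sketch is that the prediction is a kernel-weighted combination of training targets, which is what makes the model amenable to interpretation.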

  17. Anisotropic rock physics models for interpreting pore structures in carbonate reservoirs

    Institute of Scientific and Technical Information of China (English)

    Li Sheng-Jie; Shao Yu; Chen Xu-Qiang

    2016-01-01

We developed an anisotropic effective theoretical model for modeling the elastic behavior of anisotropic carbonate reservoirs by combining the anisotropic self-consistent approximation and differential effective medium models. By analyzing the measured data from carbonate samples in the TL area, a carbonate pore-structure model for estimating the elastic parameters of carbonate rocks is proposed, which is a prerequisite in the analysis of carbonate reservoirs. A workflow for determining elastic properties of carbonate reservoirs is established in terms of the anisotropic effective theoretical model and the pore-structure model. We performed numerical experiments and compared the theoretical predictions with measured data. The comparison suggests that the proposed anisotropic effective theoretical model can account for the relation between velocity and porosity in carbonate reservoirs. The model forms the basis for developing new tools for predicting and evaluating the properties of carbonate reservoirs.

  18. A New Interpretation of Spontaneous Sway Measures Based on a Simple Model of Human Postural Control

    National Research Council Canada - National Science Library

    Maurer, Christoph; Peterka, Robert J

    ...) traces that closely resemble physiologically measured COP functions can be produced by an appropriate selection of model parameters in a simple feedback model of the human postural control system...

  19. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    2016-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's e...

  20. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks

    Science.gov (United States)

    Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.

    2017-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359

  1. Interpretation of cloud-climate feedback as produced by 14 atmospheric general circulation models

    Science.gov (United States)

    Cess, R. D.; Potter, G. L.; Ghan, S. J.; Blanchet, J. P.; Boer, G. J.

    1989-01-01

    Understanding the cause of differences among general circulation model projections of carbon dioxide-induced climatic change is a necessary step toward improving the models. An intercomparison of 14 atmospheric general circulation models, for which sea surface temperature perturbations were used as a surrogate climate change, showed that there was a roughly threefold variation in global climate sensitivity. Most of this variation is attributable to differences in the models' depictions of cloud-climate feedback, a result that emphasizes the need for improvements in the treatment of clouds in these models if they are ultimately to be used as climatic predictors.

  2. Animal-Assisted Therapy for persons with disabilities based on canine tail language interpretation via fuzzy emotional behavior model.

    Science.gov (United States)

    Phanwanich, Warangkhana; Kumdee, Orrawan; Ritthipravat, Panrasee; Wongsawat, Yodchanan

    2011-01-01

Animal-Assisted Therapy (AAT) is the science that employs the merits of human-animal interaction to alleviate the mental and physical problems of persons with disabilities. However, to achieve the goal of AAT for persons with severe disabilities (e.g., spinal cord injury and amyotrophic lateral sclerosis), real-time animal language interpretation is needed. Since canine behaviors can be visually distinguished from the tail, this paper proposes automatic real-time interpretation of canine tail language for human-canine interaction in the case of persons with severe disabilities. Canine tail language is captured via two 3-axis accelerometers. Direction and frequency are selected as the features of interest. New fuzzy rules and a center-of-gravity (COG)-based defuzzification method are proposed in order to interpret the features as three canine emotional behaviors, i.e., agitated, happy, and scared, as well as their blends. The emotional behavior model was first evaluated in a simulated dog; with a real dog, the average recognition rate is 93.75%.
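A center-of-gravity defuzzification step of the kind the abstract mentions can be sketched as follows (the membership functions, universe, and activation levels here are invented for illustration, not the paper's rules):

```python
def triangle(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical output sets on a 0-10 "emotional state" axis.
sets = {"scared": (0, 2, 4), "agitated": (3, 5, 7), "happy": (6, 8, 10)}
# Rule activations (in the real system these come from fuzzified tail
# direction/frequency features).
activation = {"scared": 0.1, "agitated": 0.2, "happy": 0.7}

# Aggregate the clipped output sets, then take the centre of gravity.
xs = [i / 100 for i in range(0, 1001)]
mu = [max(min(activation[k], triangle(x, *sets[k])) for k in sets) for x in xs]
cog = sum(x * m for x, m in zip(xs, mu)) / sum(mu)
print(round(cog, 2))   # a crisp value pulled toward "happy"
```

Because the output sets overlap and all three activations contribute, the crisp COG value naturally encodes blended emotional behaviors rather than a hard winner-take-all label.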

  3. Should hydraulic tomography data be interpreted using geostatistical inverse modeling? A laboratory sandbox investigation

    Science.gov (United States)

    Illman, Walter A.; Berg, Steven J.; Zhao, Zhanfeng

    2015-05-01

The robust performance of hydraulic tomography (HT) based on geostatistics has been demonstrated through numerous synthetic, laboratory, and field studies. While geostatistical inverse methods offer many advantages, one key disadvantage is their highly parameterized nature, which renders them computationally intensive for large-scale problems. Another issue is that geostatistics-based HT may produce overly smooth images of subsurface heterogeneity when there are few monitoring interval data. Therefore, some may question the utility of the geostatistical inversion approach in certain situations and seek alternative approaches. To investigate these issues, we simultaneously calibrated different groundwater models with varying subsurface conceptualizations and parameter resolutions using a laboratory sandbox aquifer. The compared models included: (1) isotropic and anisotropic effective parameter models; (2) a heterogeneous model that faithfully represents the geological features; and (3) a heterogeneous model based on geostatistical inverse modeling. The performance of these models was assessed by quantitatively examining the results from model calibration and validation. Calibration data consisted of steady state drawdown data from eight pumping tests and validation data consisted of data from 16 separate pumping tests not used in the calibration effort. Results revealed that the geostatistical inversion approach performed the best among the approaches compared, although the geological model that faithfully represented stratigraphy came a close second. In addition, when the number of pumping tests available for inverse modeling was small, the geological modeling approach yielded more robust validation results. This suggests that better knowledge of stratigraphy obtained via geophysics or other means may contribute to improved results for HT.

  4. Objective interpretation as conforming interpretation

    Directory of Open Access Journals (Sweden)

    Lidka Rodak

    2011-12-01

Full Text Available Practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in practical discourse, focusing on the understanding offered by judicature. The thesis of the article is that objective interpretation, as identified with the textualists' position, is not possible to uphold and should rather be linked with conforming interpretation. What this actually implies is that it is not the virtues of certainty and predictability, which are usually associated with objectivity, but coherence that forms the foundation of the applicability of objectivity in law. What can be observed from the analyses is that both conforming interpretation and objective interpretation play the role of arguments in interpretive discourse: arguments that provide justification that an interpretation is not arbitrary or subjective. With regard to an important part of the ideology of legal application, namely the conviction that decisions should be taken on the basis of law in order to exclude arbitrariness, objective interpretation can be read as the question of what kind of authority "supports" a certain interpretation, which is almost never free of judicial creativity and judicial activism. One can say that objective and conforming interpretation are simply further arguments used in legal discourse.

  5. Interpreting the effects of altered brain anatomical connectivity on fMRI functional connectivity: a role for computational neural modeling.

    Science.gov (United States)

    Horwitz, Barry; Hwang, Chuhern; Alstott, Jeff

    2013-01-01

    Recently, there have been a large number of studies using resting state fMRI to characterize abnormal brain connectivity in patients with a variety of neurological, psychiatric, and developmental disorders. However, interpreting what the differences in resting state fMRI functional connectivity (rsfMRI-FC) actually reflect in terms of the underlying neural pathology has proved to be elusive because of the complexity of brain anatomical connectivity. The same is the case for task-based fMRI studies. In the last few years, several groups have used large-scale neural modeling to help provide some insight into the relationship between brain anatomical connectivity and the corresponding patterns of fMRI-FC. In this paper we review several efforts at using large-scale neural modeling to investigate the relationship between structural connectivity and functional/effective connectivity to determine how alterations in structural connectivity are manifested in altered patterns of functional/effective connectivity. Because the alterations made in the anatomical connectivity between specific brain regions in the model are known in detail, one can use the results of these simulations to determine the corresponding alterations in rsfMRI-FC. Many of these simulation studies found that structural connectivity changes do not necessarily result in matching changes in functional/effective connectivity in the areas of structural modification. Often, it was observed that increases in functional/effective connectivity in the altered brain did not necessarily correspond to increases in the strength of the anatomical connection weights. Note that increases in rsfMRI-FC in patients have been interpreted in some cases as resulting from neural plasticity. These results suggest that this interpretation can be mistaken. The relevance of these simulation findings to the use of functional/effective fMRI connectivity as biomarkers for brain disorders is also discussed.
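The reviewed point that functional connectivity need not mirror structural connectivity can be illustrated with a toy linear network (three noise-driven nodes, nothing like the scale of the neural models reviewed): two nodes with no direct anatomical link still show correlated activity because they share a common driver.

```python
import math
import random

random.seed(2)
# Structural matrix A: node 0 drives nodes 1 and 2;
# nodes 1 and 2 have NO direct structural connection.
A = [[0.5, 0.0, 0.0],
     [0.4, 0.5, 0.0],
     [0.4, 0.0, 0.5]]

def simulate(A, steps=20000):
    """Linear autoregressive network driven by independent Gaussian noise."""
    x = [0.0, 0.0, 0.0]
    series = [[], [], []]
    for _ in range(steps):
        x = [sum(A[i][j] * x[j] for j in range(3)) + random.gauss(0, 1)
             for i in range(3)]
        for i, xi in enumerate(x):
            series[i].append(xi)
    return series

def corr(a, b):
    """Pearson correlation, the usual 'functional connectivity' measure."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

s = simulate(A)
fc12 = corr(s[1], s[2])   # functional connectivity despite zero structural link
print("FC(1,2) =", round(fc12, 2))
```

Raising or cutting the 0 → 1 and 0 → 2 weights changes FC(1,2) even though the 1-2 anatomical weight is untouched, which is the kind of mismatch the simulation studies above report.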

  6. Impact of the interfaces for wind and wave modeling - interpretation using COAWST, SAR and point measurements

    DEFF Research Database (Denmark)

Air and sea interact: winds generate waves, and waves affect the winds. This topic is ever relevant for offshore activities such as shipping, port routines, and wind farm operation and maintenance. In a coupled modeling system, the atmospheric modeling and the wave modeling interact with each other through an interface. In most modeling systems the interface is described through the roughness length, which is parameterized following the basic idea of the Charnock formulation, with coefficients that could be functions of simply the wind speed, or of wave parameters. More advanced interfaces use a stress approach, here applied through a wave boundary layer model in SWAN within the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The roughness length has been calculated using seven schemes (Charnock, Fan, Oost, Drennen, Liu, Andreas, Taylor-Yelland). The experiments are applied to a case where a Synthetic Aperture Radar (SAR) image…

  7. Interpreting the variability of CO2 columns over North America using a chemistry transport model: application to SCIAMACHY data

    Directory of Open Access Journals (Sweden)

    P. S. Monks

    2008-04-01

Full Text Available We use the GEOS-Chem chemistry transport model to interpret the variability of CO2 columns and associated column-averaged volume mixing ratios (CVMRs) observed by the SCIAMACHY satellite instrument during the 2003 North American growing season, accounting for the instrument averaging kernel. Model and observed columns, largely determined by surface topography, averaged on a 2°×2.5° grid, are in excellent agreement (model bias = 3%, r > 0.9), as expected. Model and observed CVMRs, determined by scaling column CO2 by surface pressure data, are on average within 3% but are only weakly correlated, reflecting a large positive model bias (10–15 ppmv) at 50–70° N during midsummer at the peak of biospheric uptake. GEOS-Chem generally reproduces the magnitude and seasonal cycle of observed CO2 surface VMRs across North America. During midsummer we find that model CVMRs and surface VMRs converge, reflecting the instrument's vertical sensitivity and the strong influence of the land biosphere on lower-tropospheric CO2 columns. We use model tagged tracers to show that local fluxes largely determine CVMR variability over North America, with the largest individual CVMR contribution (1.1%) coming from the land biosphere. Fuel sources are relatively constant, while biomass burning makes a significant contribution only during midsummer. We also show that non-local sources contribute significantly to total CVMRs over North America, with the boreal Asian land biosphere contributing close to 1% in midsummer at high latitudes. We use the monthly-mean Jacobian matrix for North America to illustrate that: (1) North American CVMRs represent a superposition of many weak flux signatures, but differences in flux distributions should permit independent flux estimation; and (2) the atmospheric e-folding lifetimes for many of these flux signatures are 3–4 months, beyond which time they are too well mixed to interpret.
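The column-averaging arithmetic behind a CVMR, and the effect of an instrument averaging kernel, can be sketched with made-up numbers (the layer pressures, CO2 profile, and kernel below are illustrative, not SCIAMACHY values):

```python
# Pressure-layer thicknesses (hPa) from surface to top, a CO2 profile (ppmv),
# and a hypothetical averaging kernel weighting the lower troposphere most.
dp     = [200, 200, 200, 200, 100, 80, 20]       # sums to 1000 hPa
co2    = [385, 383, 381, 380, 379, 378, 377]     # mixing ratio per layer
kernel = [1.2, 1.1, 1.0, 0.9, 0.7, 0.5, 0.3]     # instrument sensitivity

# Plain pressure-weighted column average (no instrument effects):
cvmr_true = sum(c * w for c, w in zip(co2, dp)) / sum(dp)

# Kernel-weighted retrieval: layers the instrument "sees" best count more.
wts = [k * w for k, w in zip(kernel, dp)]
cvmr_retrieved = sum(c * w for c, w in zip(co2, wts)) / sum(wts)

print(round(cvmr_true, 2), round(cvmr_retrieved, 2))
```

With a bottom-heavy kernel and a CO2 profile enhanced near the surface, the retrieved CVMR exceeds the plain pressure-weighted value, which is why model columns must be convolved with the averaging kernel before comparison.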

  8. "On Clocks and Clouds:" Confirming and Interpreting Climate Models as Scientific Hypotheses (Invited)

    Science.gov (United States)

    Donner, L.

    2009-12-01

    The certainty of climate change projected under various scenarios of emissions using general circulation models is an issue of vast societal importance. Unlike numerical weather prediction, a problem to which general circulation models are also applied, projected climate changes usually lie outside of the range of external forcings for which the models generating these changes have been directly evaluated. This presentation views climate models as complex scientific hypotheses and thereby frames these models within a well-defined process of both advancing scientific knowledge and recognizing its limitations. Karl Popper's Logik der Forschung (The Logic of Scientific Discovery, 1934) and 1965 essay “On Clocks and Clouds” capture well the methodologies and challenges associated with constructing climate models. Indeed, the process of a problem situation generating tentative theories, refined by error elimination, characterizes aptly the routine of general circulation model development. Limitations on certainty arise from the distinction Popper perceived in types of natural processes, which he exemplified by clocks, capable of exact measurement, and clouds, subject only to statistical approximation. Remarkably, the representation of clouds in general circulation models remains the key uncertainty in understanding atmospheric aspects of climate change. The asymmetry of hypothesis falsification by negation and much vaguer development of confidence in hypotheses consistent with some of their implications is an important practical challenge to confirming climate models. The presentation will discuss the ways in which predictions made by climate models for observable aspects of the present and past climate can be regarded as falsifiable hypotheses. The presentation will also include reasons why “passing” these tests does not provide complete confidence in predictions about the future by climate models. Finally, I will suggest that a “reductionist” view, in

  9. Polychronous Interpretation of Synoptic, a Domain Specific Modeling Language for Embedded Flight-Software

    CERN Document Server

Besnard, L.; Ouy, J.; Talpin, J.-P.; Bodeveix, J.-P.; Cortier, A.; Pantel, M.; Strecker, M.; Garcia, G.; Rugina, A.; Buisson, J.; Dagnat, F.

    2010-01-01

    The SPaCIFY project, which aims at bringing advances in MDE to the satellite flight software industry, advocates a top-down approach built on a domain-specific modeling language named Synoptic. In line with previous approaches to real-time modeling such as Statecharts and Simulink, Synoptic features hierarchical decomposition of application and control modules in synchronous block diagrams and state machines. Its semantics is described in the polychronous model of computation, which is that of the synchronous language Signal.

  10. Author's reply to discussion of using artificial neural nets to identify the well-test interpretation model

    Energy Technology Data Exchange (ETDEWEB)

    Al-Kaabi, A.U.; Lee, W.J. (Texas A and M Univ., College Station, TX (United States))

    1994-09-01

    The authors thank Yeung et al. for their discussion about their original paper. They agree with Yeung et al. that their proposed scaling method, when applied to patterns with distinct subparts such as the one shown, represents an improvement on the method they proposed. This is particularly true because Yeung et al.'s method eliminates the need to train the artificial neural networks (ANN's) on different sizes (scales) of the same pattern of a specific interpretation model. This paper presents the following comments for discussion and suggestions for further improvement.

  11. Challenges for biological interpretation of environmental proteomics data in non-model organisms.

    Science.gov (United States)

    Dowd, W Wesley

    2012-11-01

    Environmental physiology, toxicology, and ecology and evolution stand to benefit substantially from the relatively recent surge of "omics" technologies into these fields. These approaches, and proteomics in particular, promise to elucidate novel and integrative functional responses of organisms to diverse environmental challenges, over a variety of time scales and at different levels of organization. However, application of proteomics to environmental questions suffers from several challenges--some unique to high-throughput technologies and some relevant to many related fields--that may confound downstream biological interpretation of the data. I explore three of these challenges in environmental proteomics, emphasizing the dependence of biological conclusions on (1) the specific experimental context, (2) the choice of statistical analytical methods, and (3) the degree of proteome coverage and protein identification rates, both of which tend to be much less than 100% (i.e., analytical incompleteness). I use both a review of recent publications and data generated from my previous and ongoing proteomics studies of coastal marine animals to examine the causes and consequences of these challenges, in one case analyzing the same multivariate proteomics data set using 29 different combinations of statistical techniques common in the literature. Although some of the identified issues await further critical assessment and debate, when possible I offer suggestions for meeting these three challenges.

  12. Teaching Real Data Interpretation with Models (TRIM): Analysis of Student Dialogue in a Large-Enrollment Cell and Developmental Biology Course.

    Science.gov (United States)

    Zagallo, Patricia; Meddleton, Shanice; Bolger, Molly S

    2016-01-01

    We present our design for a cell biology course to integrate content with scientific practices, specifically data interpretation and model-based reasoning. A 2-yr research project within this course allowed us to understand how students interpret authentic biological data in this setting. Through analysis of written work, we measured the extent to which students' data interpretations were valid and/or generative. By analyzing small-group audio recordings during in-class activities, we demonstrated how students used instructor-provided models to build and refine data interpretations. Often, students used models to broaden the scope of data interpretations, tying conclusions to a biological significance. Coding analysis revealed several strategies and challenges that were common among students in this collaborative setting. Spontaneous argumentation was present in 82% of transcripts, suggesting that data interpretation using models may be a way to elicit this important disciplinary practice. Argumentation dialogue included frequent co-construction of claims backed by evidence from data. Other common strategies included collaborative decoding of data representations and noticing data patterns before making interpretive claims. Focusing on irrelevant data patterns was the most common challenge. Our findings provide evidence to support the feasibility of supporting students' data-interpretation skills within a large lecture course.

  13. Teaching Real Data Interpretation with Models (TRIM): Analysis of Student Dialogue in a Large-Enrollment Cell and Developmental Biology Course

    Science.gov (United States)

    Zagallo, Patricia; Meddleton, Shanice; Bolger, Molly S.

    2016-01-01

    We present our design for a cell biology course to integrate content with scientific practices, specifically data interpretation and model-based reasoning. A 2-year research project within this course allowed us to understand how students interpret authentic biological data in this setting. Through analysis of written work, we measured the extent…

  15. Some Cautionary Notes on the Specification and Interpretation of LISREL-type Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice

    LISREL-type structural equation modeling is a powerful statistical technique that seems appropriate for social science variables which are complex and difficult to measure. The literature on the specification, estimation, and testing of such models is voluminous. The greatest proportion of this literature, however, focuses on the technical aspects…

  16. The Effects of Leadership Training and Experience: A Contingency Model Interpretation

    Science.gov (United States)

    Fiedler, Fred E.

    1972-01-01

    Summarizes recent studies based on the contingency model of leadership effectiveness, which suggest why research has failed to show that leadership training and experience increase organizational performance. The contingency model postulated that group performance depends on the match between situational favorableness, i.e., the leader's control…

  17. Long-term development of how students interpret a model; Complementarity of contexts and mathematics.

    NARCIS (Netherlands)

    Vos, Francisca; Roorda, Gerrit

    2016-01-01

    When students engage in rich mathematical modelling tasks, they have to handle real-world contexts and mathematics simultaneously. This is not easy. In this chapter, contexts and mathematics are perceived as complementary, which means they can be integrated. Based on four types of approaches to modelling…

  18. Effects of waveform model systematics on the interpretation of GW150914

    CERN Document Server

    Abbott, B P; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Ananyeva, A; Anderson, S B; Anderson, W G; Appert, S; Arai, K; Araya, M C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Avila-Alvarez, A; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Beer, C; Bejger, M; Belahcene, I; Belgin, M; Bell, A S; Berger, B K; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Billman, C R; Birch, J; Birney, R; Birnholtz, O; Biscans, S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blackman, J; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Boer, M; Bogaert, G; Bohe, A; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Broida, J E; Brooks, A F; Brown, D A; Brown, D D; Brown, N M; Brunett, S; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cabero, M; Cadonati, L; Cagnoli, G; Cahillane, C; Bustillo, J Calder'on; Callister, T A; Calloni, E; Camp, J B; Cannon, K C; Cao, H; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Diaz, J Casanueva; Casentini, C; Caudill, S; Cavagli`a, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Baiardi, L Cerboni; Cerretani, G; Cesarini, E; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; Chassande-Mottin, E; Cheeseboro, B D; Chen, H 
Y; Chen, Y; Cheng, H -P; Chincarini, A; Chiummo, A; Chmiel, T; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, A J K; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Cocchieri, C; Coccia, E; Cohadon, P -F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conti, L; Cooper, S J; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, M W; Coughlin, S B; Coulon, J -P; Countryman, S T; Couvares, P; Covas, P B; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Creighton, J D E; Creighton, T D; Cripe, J; Crowder, S G; Cullen, T J; Cumming, A; Cunningham, L; Cuoco, E; Canton, T Dal; Danilishin, S L; D'Antonio, S; Danzmann, K; Dasgupta, A; Costa, C F Da Silva; Dattilo, V; Dave, I; Davier, M; Davies, G S; Davis, D; Daw, E J; Day, B; Day, R; De, S; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Del'eglise, S; Del Pozzo, W; Denker, T; Dent, T; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Devenson, J; Devine, R C; Dhurandhar, S; D'iaz, M C; Di Fiore, L; Di Giovanni, M; Di Girolamo, T; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Doctor, Z; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Dorrington, I; Douglas, R; 'Alvarez, M Dovale; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H -B; Ehrens, P; Eichholz, J; Eikenberry, S S; Eisenstein, R A; Essick, R C; Etienne, Z; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Farinon, S; Farr, B; Farr, W M; Fauchon-Jones, E J; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Galiana, A Fern'andez; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fong, H; Forsyth, S S; Fournier, J -D; Frasca, S; Frasconi, F; Frei, Z; Freise, A; Frey, R; Frey, V; Fries, E M; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H; Gadre, B U; Gaebel, S M; Gair, J R; Gammaitoni, L; 
Gaonkar, S G; Garufi, F; Gaur, G; Gayathri, V; Gehrels, N; Gemme, G; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghonge, S; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, A; Goetz, E; Goetz, R; Gondan, L; Gonz'alez, G; Castro, J M Gonzalez; Gopakumar, A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Grado, A; Graef, C; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C -J; Haughian, K; Healy, J; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Henry, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hofman, D; Holt, K; Holz, D E; Hopkins, P; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J -M; Isi, M; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jani, K; Jaranowski, P; Jawahar, S; Jim'enez-Forteza, F; Johnson, W W; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Junker, J; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Karvinen, K S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; K'ef'elian, F; Keitel, D; Kelley, D B; Kennedy, R; Key, J S; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, Chunglee; Kim, J C; Kim, Whansun; Kim, W; Kim, Y -M; Kimbrell, S J; King, E J; King, P J; Kirchhoff, R; Kissel, J S; Klein, B; Kleybolte, L; Klimenko, S; Koch, P; Koehlenbeck, S M; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Kr"amer, C; Kringel, V; Krishnan, B; Kr'olak, A; Kuehn, G; Kumar, P; Kumar, R; Kuo, L; 
Kutynia, A; Lackey, B D; Landry, M; Lang, R N; Lange, J; Lantz, B; Lanza, R K; Lartaux-Vollard, A; Lasky, P D; Laxen, M; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, K; Lehmann, J; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Li, T G F; Libson, A; Littenberg, T B; Liu, J; Lockerbie, N A; Lombardi, A L; London, L T; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lovelace, G; L"uck, H; Lundgren, A P; Lynch, R; Ma, Y; Macfoy, S; Machenschalk, B; MacInnis, M; Macleod, D M; Magana-Sandoval, F; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; M'arka, S; M'arka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martynov, D V; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Mastrogiovanni, S; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; McCarthy, R; McClelland, D E; McCormick, S; McGrath, C; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McRae, T; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E L; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Metzdorff, R; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, A L; Miller, A; Miller, B B; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mours, B; Mow-Lowry, C M; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Muniz, E A M; Murray, P G; Mytidis, A; Napier, K; Nardecchia, I; Naticchioni, L; Nelemans, G; Nelson, T J N; Neri, M; Nery, M; Neunzert, A; Newport, J M; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; Nitz, A; Noack, A; Nocera, F; 
Nolting, D; Normandin, M E N; Nuttall, L K; Oberling, J; Ochsner, E; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; O'Shaughnessy, R; Ottaway, D J; Overmier, H; Owen, B J; Pace, A E; Page, J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perez, C J; Perreca, A; Perri, L M; Pfeiffer, H P; Phelps, M; Piccinni, O J; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poe, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Pratt, J W W; Predoi, V; Prestegard, T; Prijatelj, M; Principe, M; Privitera, S; Prodi, G A; Prokhorov, L G; Puncken, O; Punturo, M; Puppo, P; P"urrer, M; Qi, H; Qin, J; Qiu, S; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rajan, C; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Rhoades, E; Ricci, F; Riles, K; Rizzo, M; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, J D; Romano, R; Romie, J H; Rosi'nska, D; Rowan, S; R"udiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Sakellariadou, M; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sampson, L M; Sanchez, E J; Sandberg, V; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Scheuer, J; Schmidt, E; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Sch"onbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Schwalbe, S G; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Setyawati, Y; Shaddock, D A; Shaffer, T J; Shahriar, M S; 
Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sieniawska, M; Sigg, D; Silva, A D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; Smith, B; Smith, J R; Smith, R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Spencer, A P; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stevenson, S P; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strigin, S E; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sunil, S; Sutton, P J; Swinkels, B L; Szczepa'nczyk, M J; Tacca, M; Talukder, D; Tanner, D B; T'apai, M; Taracchini, A; Taylor, R; Theeg, T; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thrane, E; Tippens, T; Tiwari, S; Tiwari, V; Tokmakov, K V; Toland, K; Tomlinson, C; Tonelli, M; Tornasi, Z; Torrie, C I; T"oyr"a, D; Travasso, F; Traylor, G; Trifir`o, D; Trinastic, J; Tringali, M C; Trozzo, L; Tse, M; Tso, R; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; van Bakel, N; van Beuzekom, M; Brand, J F J van den; Broeck, C Van Den; Vander-Hyde, D C; van der Schaaf, L; van Heijningen, J V; van Veggel, A A; Vardaro, M; Varma, V; Vass, S; Vas'uth, M; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Venugopalan, G; Verkindt, D; Vetrano, F; Vicer'e, A; Viets, A D; Vinciguerra, S; Vine, D J; Vinet, J -Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D V; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, Y; Ward, R L; Warner, J; Was, M; Watchi, J; Weaver, B; Wei, L -W; Weinert, M; Weinstein, A J; Weiss, R; Wen, L; Wessels, P; Westphal, T; Wette, K; Whelan, J T; Whiting, B F; Whittle, C; Williams, D; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Woehler, J; Worden, J; Wright, J L; Wu, D S; Wu, G; Yam, W; 
Yamamoto, H; Yancey, C C; Yap, M J; Yu, Hang; Yu, Haocun; Yvert, M; Zadrożny, A; Zangrando, L; Zanolin, M; Zendri, J -P; Zevin, M; Zhang, L; Zhang, M; Zhang, T; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, S J; Zhu, X J; Zucker, M E; Zweizig, J; Boyle, M; Chu, T; Hemberger, D; Hinder, I; Kidder, L E; Ossokine, S; Scheel, M; Szilagyi, B; Teukolsky, S; Vano-Vinuales, A

    2016-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analyses on mock signals from numerical simulations of a serie...
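    The systematics test described in this abstract (inject signals whose physics the templates only approximate, then re-run parameter estimation and compare to the injected truth) can be illustrated with a deliberately crude stand-in. Everything below, including the phase-drift term, the grid-search "estimator", and all parameter values, is invented for illustration; the actual study uses Bayesian inference over full inspiral-merger-ringdown waveform families.

```python
import math

# "True" signal: a sinusoid with a slow phase drift that the recovery
# model deliberately omits (a crude stand-in for neglected waveform
# physics such as higher harmonics or orbital eccentricity).
def true_signal(t, f):
    return math.sin(2 * math.pi * f * (t + 0.005 * t * t))

def model_signal(t, f):
    # Simplified template family: constant-frequency sinusoid only.
    return math.sin(2 * math.pi * f * t)

ts = [i * 0.01 for i in range(400)]   # 4 s of samples at 100 Hz
f_true = 2.0
data = [true_signal(t, f_true) for t in ts]

def mismatch(f):
    """Sum-of-squares residual between data and template at frequency f."""
    return sum((d - model_signal(t, f)) ** 2 for t, d in zip(ts, data))

# Grid-search "parameter estimation" with the incomplete template family.
grid = [1.9 + 0.001 * i for i in range(201)]
f_best = min(grid, key=mismatch)
bias = f_best - f_true
print(f"recovered f = {f_best:.3f} Hz, systematic offset = {bias:+.3f} Hz")
```

Because the injected value is known, any offset in the recovered parameter is a systematic (model-induced) error rather than a statistical one, which is the same logic the paper applies with numerical-relativity injections.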

  19. Accelerated search for biomolecular network models to interpret high-throughput experimental data

    Directory of Open Access Journals (Sweden)

    Sokhansanj Bahrad A

    2007-07-01

    Background: The functions of human cells are carried out by biomolecular networks, which include proteins, genes, and regulatory sites within DNA that encode and control protein expression. Models of biomolecular network structure and dynamics can be inferred from high-throughput measurements of gene and protein expression. We build on our previously developed fuzzy logic method for bridging quantitative and qualitative biological data to address the challenges of noisy, low-resolution high-throughput measurements, e.g., from gene expression microarrays. We employ an evolutionary search algorithm to accelerate the search for hypothetical fuzzy biomolecular network models consistent with a biological data set. We also develop a method to estimate the probability of a potential network model fitting a set of data by chance. The resulting metric provides an estimate of both model quality and dataset quality, identifying data that are too noisy to identify meaningful correlations between the measured variables. Results: Optimal parameters for the evolutionary search were identified based on artificial data, and the algorithm showed scalable and consistent performance for as many as 150 variables. The method was tested on previously published human cell cycle gene expression microarray data sets. The evolutionary search method was found to converge to the results of exhaustive search. The randomized evolutionary search was able to converge on a set of similar best-fitting network models on different training data sets after 30 generations running 30 models per generation. Consistent results were found regardless of which of the published data sets were used to train or verify the quantitative predictions of the best-fitting models for cell cycle gene dynamics. Conclusion: Our results demonstrate the capability of scalable evolutionary search for fuzzy network models to address the problem of inferring models based on complex, noisy biomolecular…
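    As a toy illustration of the evolutionary-search idea in this abstract, the sketch below evolves small interaction networks against synthetic time-series "expression" data. The update rule, the {-1, 0, 1} link weights, and the fitness function are invented for illustration and are much simpler than the authors' fuzzy-logic formulation; only the 30-models-per-generation, 30-generation schedule echoes the abstract.

```python
import random

random.seed(0)
N, T = 4, 20                      # network size (genes) and time points

def simulate(W, x0, steps):
    """Iterate a crude network update: each node moves toward the weighted
    influence of its regulators, with expression clipped to [0, 1]."""
    xs, x = [list(x0)], list(x0)
    for _ in range(steps):
        x = [min(1.0, max(0.0, x[i] + 0.1 * sum(W[i][j] * x[j] for j in range(N))))
             for i in range(N)]
        xs.append(list(x))
    return xs

def error(W, data):
    """Sum-of-squares misfit between simulated and 'measured' trajectories."""
    sim = simulate(W, data[0], len(data) - 1)
    return sum((a - b) ** 2 for rs, rd in zip(sim, data) for a, b in zip(rs, rd))

def mutate(W):
    """Rewire one regulatory link to a random sign in {-1, 0, 1}."""
    W2 = [row[:] for row in W]
    W2[random.randrange(N)][random.randrange(N)] = random.choice([-1, 0, 1])
    return W2

# Synthetic "microarray" data generated from a hidden true network.
true_W = [[0, 1, 0, 0], [0, 0, -1, 0], [1, 0, 0, 0], [0, 0, 1, 0]]
data = simulate(true_W, [0.5, 0.2, 0.8, 0.4], T)

# Elitist (mu + lambda) evolutionary search over candidate networks.
pop = [[[random.choice([-1, 0, 1]) for _ in range(N)] for _ in range(N)]
       for _ in range(30)]
init_err = min(error(W, data) for W in pop)
for _ in range(30):
    pop = sorted(pop + [mutate(W) for W in pop], key=lambda W: error(W, data))[:30]
best = pop[0]
print(f"misfit: {init_err:.4f} (initial) -> {error(best, data):.4f} (evolved)")
```

Because selection is elitist, the best misfit is non-increasing across generations; estimating how often such a fit arises by chance (the paper's second contribution) would require repeating the search on randomized data, which this sketch omits.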

  20. Interpreting lateral dynamic weight shifts using a simple inverted pendulum model.

    Science.gov (United States)

    Kennedy, Michael W; Bretl, Timothy; Schmiedeler, James P

    2014-01-01

    Seventy-five young, healthy adults completed a lateral weight-shifting activity in which each shifted his/her center of pressure (CoP) to visually displayed target locations with the aid of visual CoP feedback. Each subject's CoP data were modeled using a single-link inverted pendulum system with a spring-damper at the joint. This extends the simple inverted pendulum model of static balance in the sagittal plane to lateral weight-shifting balance. The model controlled pendulum angle using PD control and a ramp setpoint trajectory, and weight-shifting was characterized by both shift speed and a non-minimum phase (NMP) behavior metric. This NMP behavior metric examines the force magnitude at shift initiation and provides weight-shifting balance performance information that parallels the examination of peak ground reaction forces in gait analysis. Control parameters were optimized on a subject-by-subject basis to match balance metrics for modeled results to metric values calculated from experimental data. Overall, the model matches experimental data well (average percent error of 0.35% for shifting speed and 0.05% for NMP behavior). These results suggest that the single-link inverted pendulum model can be used effectively to capture lateral weight-shifting balance, as it has been shown to model static balance. Copyright © 2014 Elsevier B.V. All rights reserved.
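    The model class described here can be sketched as a linearized single-link inverted pendulum with a passive spring-damper at the joint, driven by PD control toward a ramp setpoint. All numbers below (inertia, stiffness, damping, gains, ramp rate) are placeholders, not the subject-fitted parameters from the study.

```python
# Illustrative parameters (not the paper's subject-fitted values)
m, l, g = 70.0, 1.0, 9.81       # body mass (kg), pendulum length (m), gravity
I = m * l * l                    # inertia of a point mass at the tip
k, c = 900.0, 50.0               # passive joint spring stiffness and damping
Kp, Kd = 2500.0, 400.0           # PD controller gains
dt, T_end = 0.001, 3.0

theta, omega = 0.0, 0.0          # lateral lean angle (rad) and angular velocity
theta_final, rate = 0.05, 0.05   # ramp setpoint: 0 -> 0.05 rad at 0.05 rad/s

t = 0.0
while t < T_end:
    ref = min(rate * t, theta_final)          # ramp setpoint trajectory
    tau = Kp * (ref - theta) - Kd * omega     # PD control torque
    # Linearized dynamics: gravity destabilizes, spring-damper resists.
    alpha = (m * g * l * theta - k * theta - c * omega + tau) / I
    omega += alpha * dt
    theta += omega * dt
    t += dt

print(f"final lean angle {theta:.4f} rad (ramp target {theta_final} rad)")
```

Note the small steady-state offset from the target: with proportional control alone, gravity and the passive spring bias the equilibrium. This sketch tracks only the angle; the paper's non-minimum-phase metric additionally examines the ground reaction force at shift initiation, which is not modeled here.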

  1. Using models to interpret the impact of roadside barriers on near-road air quality

    Science.gov (United States)

    Amini, Seyedmorteza; Ahangar, Faraz Enayati; Schulte, Nico; Venkatram, Akula

    2016-08-01

    The question this paper addresses is whether semi-empirical dispersion models based on data from controlled wind tunnel and tracer experiments can describe data collected downwind of a sound barrier next to a real-world urban highway. Both models are based on the mixed wake model described in Schulte et al. (2014). The first neglects the effects of stability on dispersion, and the second accounts for reduced entrainment into the wake of the barrier under unstable conditions. The models were evaluated with data collected downwind of a kilometer-long barrier next to the I-215 freeway, which runs along the University of California campus in Riverside. The data included measurements of 1) ultrafine particle (UFP) concentrations at several distances from the barrier, 2) micrometeorological variables upwind and downwind of the barrier, and 3) traffic flow separated by automobiles and trucks. Because the emission factor for UFP is highly uncertain, we treated it as a model parameter whose value is obtained by fitting model estimates to observations of UFP concentrations measured at distances where the barrier impact is not dominant. Both models adequately describe the magnitude and the spatial variation of observed concentrations. The good performance of the models reinforces the conclusion from Schulte et al. (2014) that the presence of the barrier is equivalent to shifting the line sources on the road upwind by a distance of about HU/u∗, where H is the barrier height, U is the wind velocity at half of the barrier height, and u∗ is the friction velocity. The models predict that a 4 m barrier results in a 35% reduction in average concentration within 40 m (10 times the barrier height) of the barrier, relative to the no-barrier site. This concentration reduction is 55% if the barrier height is doubled.
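    The HU/u∗ equivalent-shift rule quoted in this abstract is straightforward to evaluate. The wind speed and friction velocity below are invented for illustration; in reality U is measured at half the barrier height and so itself changes with H.

```python
def upwind_shift(H, U, u_star):
    """Equivalent upwind displacement of the roadway line source for a
    barrier of height H (m), wind speed U (m/s) at height H/2, and
    friction velocity u_star (m/s), per Schulte et al. (2014)."""
    return H * U / u_star

# Illustrative values, not measurements from the I-215 study:
for H in (4.0, 8.0):
    U, u_star = 2.0, 0.25
    print(f"H = {H:.0f} m -> effective upwind shift = "
          f"{upwind_shift(H, U, u_star):.0f} m")
```

Using the same U and u∗ for both heights is a simplification made only to show how the shift scales with H.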

  2. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than  ˜0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  4. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...
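
The mechanism the abstract describes (wage and price inflation each correcting toward conflicting income claims, yielding a finite-slope Phillips curve in steady state) can be sketched numerically. All parameter values below are invented for illustration and are not the paper's calibration:

```python
# Toy error-correction wage/price dynamics. Workers' claimed wage share
# rises with employment; firms defend their own implied share. Wage and
# price inflation each "error-correct" toward the respective claim, and
# the wage share s converges to where the two corrections balance.

def simulate(employment, s0=0.50, periods=200,
             gamma_w=0.3, gamma_p=0.2, phi=0.5):
    """Return (steady-state wage share, steady-state inflation proxy)."""
    target_w = 0.55 + phi * (employment - 0.95)  # workers' claimed share
    target_p = 0.60                              # firms' implied wage-share claim
    s = s0
    for _ in range(periods):
        dw = gamma_w * (target_w - s)  # wage inflation corrects toward workers' claim
        dp = gamma_p * (s - target_p)  # price inflation corrects toward firms' claim
        s += dw - dp                   # wage share moves with the inflation gap
    inflation = gamma_w * (target_w - s) + gamma_p * (s - target_p)
    return s, inflation
```

In the steady state the wage share settles where the two corrections balance, so steady-state inflation varies with employment: an old-fashioned Phillips curve with finite slope, exactly as the abstract argues error-correction systems permit.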

  5. Link or sink: a modelling interpretation of the open Baltic biogeochemistry

    Directory of Open Access Journals (Sweden)

    M. Vichi

    2004-01-01

    Full Text Available A 1-D model system, consisting of the 1-D version of the Princeton Ocean Model (POM) coupled with the European Regional Seas Ecosystem Model (ERSEM), has been applied to a sub-basin of the Baltic Proper, the Bornholm basin. The model has been forced with 3-h meteorological data for the period 1979-1990, producing a 12-year hindcast validated with datasets from the Baltic Environmental Database for the same period. The results show that the model hindcasts the time evolution of the physical structure very well, confirming the view of the open Baltic water column as a three-layer system of surface, intermediate and bottom waters. Comparative analyses of modelled hydrochemical components with respect to the independent data have shown that the long-term system behaviour of the model is within the observed ranges. Primary production processes, deduced from oxygen (over)saturation, are also hindcast correctly over the entire period, and the annual net primary production is within the observed range. The largest mismatch with observations is found in simulating the biogeochemistry of the Baltic intermediate waters. Modifications in the structure of the model (addition of fast-sinking detritus and polysaccharide dynamics) have shown that the nutrient dynamics are linked to the quality and dimensions of the organic matter produced in the euphotic zone, highlighting the importance of the residence time of the organic matter within the microbial foodweb in the intermediate waters. Experiments with different scenarios of riverine nutrient loads, assessed within the limits of a 1-D setup, have shown that the external input of organic matter makes the open Baltic model more heterotrophic. The characteristics of the inputs also drive the dynamics of nitrogen in the bottom layers, leading either to nitrate accumulation (when the external sources are inorganic) or to coupled nitrification-denitrification (under strong organic inputs). The model indicates the

  6. Link or sink: a modelling interpretation of the open Baltic biogeochemistry

    Directory of Open Access Journals (Sweden)

    J. W. Baretta

    2004-08-01

    Full Text Available A 1-D model system, consisting of the 1-D version of the Princeton Ocean Model (POM) coupled with the European Regional Seas Ecosystem Model (ERSEM), has been applied to a sub-basin of the Baltic Proper, the Bornholm basin. The model has been forced with 3-h meteorological data for the period 1979-1990, producing a 12-year hindcast validated with datasets from the Baltic Environmental Database for the same period. The results show that the model hindcasts the time evolution of the physical structure very well, confirming the view of the open Baltic water column as a three-layer system of surface, intermediate and bottom waters. Comparative analyses of modelled hydrochemical components with respect to the independent data have shown that the long-term system behaviour of the model is within the observed ranges. Primary production processes, deduced from oxygen (over)saturation, are also hindcast correctly over the entire period, and the annual net primary production is within the observed range. The largest mismatch with observations is found in simulating the biogeochemistry of the Baltic intermediate waters. Modifications in the structure of the model (addition of fast-sinking detritus and polysaccharide dynamics) have shown that the nutrient dynamics are linked to the quality and dimensions of the organic matter produced in the euphotic zone, highlighting the importance of the residence time of the organic matter within the microbial foodweb in the intermediate waters. Experiments with different scenarios of riverine nutrient loads, assessed within the limits of a 1-D setup, have shown that the external input of organic matter makes the open Baltic model more heterotrophic. The characteristics of the inputs also drive the dynamics of nitrogen in the bottom layers, leading either to nitrate accumulation (when the external sources are inorganic) or to coupled nitrification-denitrification (under strong organic inputs). The model indicates the

  7. A Dynamical Interpretation of Connes' Unimodularity Condition in Standard Model and Majorana Neutrino

    CERN Document Server

    Morita, K; Morita, Katsusada; Okumura, Yoshitaka

    2004-01-01

    The standard model is minimally extended using the unitary group $G'=U(3)\times SU(2)\times U(1)$ of Connes' color-flavor algebra. In place of Connes' unimodularity condition, an extra Higgs is assumed to spontaneously break $G'$ down to the standard model gauge group. It is shown that the theory becomes anomaly-free only if a right-handed neutrino is present in each generation. It is also shown that the extra Higgs gives rise to a large Majorana mass for the right-handed neutrino and that the model contains a new vectorial neutral current.

  8. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.
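
As a concrete illustration of the actor and partner paths themselves (not of the CFA/measurement-invariance procedure the article develops), the following sketch simulates dyads with known effects and recovers them by pooled least squares; all coefficients are invented:

```python
import random

# APIM toy: each member's outcome depends on their own predictor (actor
# effect a) and their partner's predictor (partner effect p). We stack both
# members of every dyad and solve the 2x2 normal equations by hand.

random.seed(0)
n = 4000
a_true, p_true = 0.6, 0.3

rows = []  # (actor predictor, partner predictor, outcome), both members stacked
for _ in range(n):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y1 = a_true * x1 + p_true * x2 + random.gauss(0, 0.1)
    y2 = a_true * x2 + p_true * x1 + random.gauss(0, 0.1)
    rows.append((x1, x2, y1))
    rows.append((x2, x1, y2))

# Ordinary least squares for two coefficients via the normal equations.
Saa = sum(r[0] * r[0] for r in rows)
Spp = sum(r[1] * r[1] for r in rows)
Sap = sum(r[0] * r[1] for r in rows)
Say = sum(r[0] * r[2] for r in rows)
Spy = sum(r[1] * r[2] for r in rows)
det = Saa * Spp - Sap * Sap
a_hat = (Spp * Say - Sap * Spy) / det  # estimated actor effect
p_hat = (Saa * Spy - Sap * Say) / det  # estimated partner effect
```

With observed scores and ample data this works; the article's point is that when the predictors are themselves noisy indicators of latent constructs, such observed-score estimates are attenuated, which is what the CFA/ME-I step addresses.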

  9. Interpretation of laser/multi-sensor data for short range terrain modeling and hazard detection

    Science.gov (United States)

    Messing, B. S.

    1980-01-01

    A terrain modeling algorithm is described that reconstructs the sensed ground images formed by the triangulation scheme and classifies as unsafe any terrain feature that would pose a hazard to a roving vehicle. This modeler greatly reduces quantization errors inherent in a laser/sensing system through the use of a thinning algorithm. Dual filters are employed to separate terrain steps from the general landscape, simplifying the analysis of terrain features. A crosspath analysis is utilized to detect and avoid obstacles that would adversely affect the roll of the vehicle. Computer simulations of the rover on various terrains examine the performance of the modeler.

  10. Polychronous Interpretation of Synoptic, a Domain Specific Modeling Language for Embedded Flight-Software

    Directory of Open Access Journals (Sweden)

    Loïc Besnard

    2010-03-01

    Full Text Available The SPaCIFY project, which aims at bringing advances in MDE to the satellite flight software industry, advocates a top-down approach built on a domain-specific modeling language named Synoptic. In line with previous approaches to real-time modeling such as Statecharts and Simulink, Synoptic features hierarchical decomposition of application and control modules in synchronous block diagrams and state machines. Its semantics is described in the polychronous model of computation, which is that of the synchronous language SIGNAL.

  11. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Data.gov (United States)

    National Aeronautics and Space Administration — This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman...
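
For orientation, here is a minimal one-dimensional Kalman filter of the kind such model-based prognostics methods build on: the belief about the health state is a Gaussian (mean, variance), and the variance is the explicit uncertainty representation. The drift, noise levels and initial state are invented for illustration:

```python
import random

# 1-D Kalman filter tracking a linearly degrading health state.
# decay: known per-step drift; q: process noise variance; r: measurement
# noise variance. The filter carries (mean, var) as its belief.

random.seed(1)
decay, q, r = -0.5, 0.01, 0.25

x_true = 100.0        # simulated "true" health state
mean, var = 100.0, 1.0
for _ in range(50):
    x_true += decay + random.gauss(0, q ** 0.5)
    z = x_true + random.gauss(0, r ** 0.5)  # noisy measurement
    # Predict: apply the known drift and inflate the variance.
    mean, var = mean + decay, var + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = var / (var + r)
    mean, var = mean + k * (z - mean), (1 - k) * var
```

The filtered variance settles near 0.045, well below the raw measurement variance of 0.25; propagating that variance forward to a failure threshold is the uncertainty-management step such prognostics methodologies are concerned with.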

  12. Implementation of empirical-mathematical modelling in upper secondary physics: Teachers’ interpretations and considerations

    Directory of Open Access Journals (Sweden)

    Carl Angell

    2008-11-01

    Full Text Available This paper reports on the implementation of an upper secondary physics curriculum with an empirical-mathematical modelling approach. In project PHYS 21, we used the notion of multiple representations of physical phenomena as a framework for developing modelling activities for students. Interviews with project teachers indicate that implementation of empirical-mathematical modelling varied widely among classes. The new curriculum ideas were adapted to teachers' ways of doing and reflecting on teaching and learning rather than radically changing these. Modelling was taken up as a method for reaching the traditional content goals of physics teaching, whereas goals related to process skills and the nature of science were given a lower priority by the teachers. Our results indicate that more attention needs to be focused on teachers' and students' meta-understanding of physics and physics learning.

  13. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Science.gov (United States)

    Service-oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...

  14. Model Interpretation of Climate Signals: Application to the Asian Monsoon Climate

    Science.gov (United States)

    Lau, William K. M.

    2002-01-01

    This is an invited review paper intended to be published as a chapter in the book "The Global Climate System: Patterns, Processes and Teleconnections" (Cambridge University Press). The author begins with an introduction, followed by a primer on climate models, including a description of various modeling strategies and methodologies used for climate diagnostics and predictability studies. Results from the CLIVAR Monsoon Model Intercomparison Project (MMIP) are used to illustrate the application of these strategies to modeling the Asian monsoon. It is shown that state-of-the-art atmospheric GCMs have reasonable capability in simulating the seasonal mean large-scale monsoon circulation and its response to El Nino. However, most models fail to capture the climatological as well as interannual anomalies of regional-scale features of the Asian monsoon. In general, they over-estimate the intensity and/or misplace the locations of the monsoon convection over the Bay of Bengal and of the zones of heavy rainfall near the steep topography of the Indian subcontinent, Indonesia, Indo-China and the Philippines. The intensity of convection in the equatorial Indian Ocean is generally weaker in models than in observations. Most importantly, an endemic problem in all models is the weakness and lack of definition of the Mei-yu rainbelt of East Asia; in particular, the part of the Mei-yu rainbelt over the East China Sea and southern Japan is under-represented. All models possess a certain amount of intraseasonal variability, but the monsoon transitions, such as the onset and breaks, are less well defined than observed. Evidence is provided that a better simulation of the annual cycle and intraseasonal variability is a prerequisite for better simulation and prediction of interannual anomalies.

  15. Fluid flow model of the Cerro Prieto Geothermal Field based on well log interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Halfman, S.E.; Lippmann, M.J.; Zelwe, R.; Howard, J.H.

    1982-08-10

    The subsurface geology of the Cerro Prieto geothermal field was analyzed using geophysical and lithologic logs. The distribution of permeable and relatively impermeable units and the location of faults are shown in a geologic model of the system. By incorporating well completion data and downhole temperature profiles into the geologic model, it was possible to determine the direction of geothermal fluid flow and the role of subsurface geologic features that control this movement.

  16. AMICON: A multi-model interpretative code for two phase flow instrumentation with uncertainty analysis

    Science.gov (United States)

    Teague, J. W., II

    1981-08-01

    The code was designed to calculate mass fluxes and mass flux standard deviations, as well as certain other fluid physical properties. Several models are used to compute mass fluxes and uncertainties, since some models provide more reliable results than others under certain flow situations. The program was specifically prepared to compute these variables using data gathered from spoolpiece instrumentation on the Thermal-Hydraulic Test Facility (THTF) and to write them to an Engineering Units (EU) data set.

  17. Quantum Structures of a Model-Universe: An Inconsistency with Everett Interpretation of Quantum Mechanics

    OpenAIRE

    2011-01-01

    We observe a Quantum Brownian Motion (QBM) Model Universe in conjunction with recently established Entanglement Relativity and Parallel Occurrence of Decoherence. The Parallel Occurrence of Decoherence establishes the simultaneous occurrence of decoherence for two mutually irreducible structures (decomposition into subsystems) of the total QBM model universe. First we find that Everett world branching for one structure excludes branching for the alternate structure and in order to reconcile t...

  18. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Full Text Available Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: The hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. Model-fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  19. Ignoring imperfect detection in biological surveys is dangerous: a response to 'fitting and interpreting occupancy models'.

    Directory of Open Access Journals (Sweden)

    Gurutzeta Guillera-Arroita

    Full Text Available In a recent paper, Welsh, Lindenmayer and Donnelly (WLD) question the usefulness of models that estimate species occupancy while accounting for detectability. WLD claim that these models are difficult to fit and argue that disregarding detectability can be better than trying to adjust for it. We think that this conclusion and subsequent recommendations are not well founded and may negatively impact the quality of statistical inference in ecology and related management decisions. Here we respond to WLD's claims, evaluating their arguments in detail and using simulations and/or theory to support our points. In particular, WLD argue that both disregarding and accounting for imperfect detection lead to the same estimator performance regardless of sample size when detectability is a function of abundance. We show that this, the key result of their paper, only holds for cases of extreme heterogeneity like the single scenario they considered. Our results illustrate the dangers of disregarding imperfect detection. When ignored, occupancy and detection are confounded: the same naïve occupancy estimates can be obtained for very different true levels of occupancy, so the size of the bias is unknowable. Hierarchical occupancy models separate occupancy and detection, and imprecise estimates simply indicate that more data are required for robust inference about the system in question. As for any statistical method, when the underlying assumptions of simple hierarchical models are violated, their reliability is reduced. Resorting to the naïve occupancy estimator in those instances where hierarchical occupancy models do not perform well does not provide a satisfactory solution. The aim should instead be to achieve better estimation by minimizing the effect of these issues during design, data collection and analysis, ensuring that the right amount of data is collected and model assumptions are met, and considering model extensions where appropriate.
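
The confounding of occupancy and detection is easy to make concrete: without a detection model, true occupancy psi and per-visit detection probability p enter the expected naive estimate only through psi * (1 - (1 - p)**K), so different (psi, p) pairs are observationally identical. A small sketch with invented values:

```python
def naive_occupancy(psi, p, k):
    """Expected fraction of sites with at least one detection in k visits,
    when true occupancy is psi and per-visit detection probability is p."""
    return psi * (1.0 - (1.0 - p) ** k)

K = 3
a = naive_occupancy(0.6, 0.8, K)           # low occupancy, good detection
p_b = 1.0 - (1.0 - a / 0.9) ** (1.0 / K)   # per-visit p that matches it
b = naive_occupancy(0.9, p_b, K)           # high occupancy, poor detection
```

Both parameter pairs produce the same expected naive estimate (about 0.595), so presence/absence summaries alone cannot distinguish a common, hard-to-detect species from a rarer, easy-to-detect one; only a model that separates occupancy from detection can.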

  20. Viral epidemics in a cell culture: novel high resolution data and their interpretation by a percolation theory based model.

    Directory of Open Access Journals (Sweden)

    Balázs Gönci

    Full Text Available Because of its relevance to everyday life, the spreading of viral infections has been of central interest to a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large-scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available from observations of infection spreading among a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extremely high-resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model is capable of reproducing the basic features of our observations, i.e., the observed behaviour is likely to be applicable to many different kinds of systems. Statistical-physics-inspired approaches to our data, such as the fractal dimension of the infected clusters as well as their size distribution, seem to fit into a percolation-theory-based interpretation. We suggest that our observations may be used to model epidemics in more complex systems, which are difficult to study in isolation.
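
A sketch of the percolation picture (a generic lattice model with invented parameters, not the authors' specific model): each cell is permissive with probability p, infection spreads from a central index cell to permissive 4-neighbours, and the resulting cluster statistics are what a percolation-based interpretation examines:

```python
import random
from collections import deque

def infected_cluster(n, p, seed=0):
    """Size of the infected cluster grown from the centre of an n x n grid
    in which each cell is permissive to infection with probability p."""
    rng = random.Random(seed)
    permissive = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    permissive[start[0]][start[1]] = True  # the index cell is always infected
    seen = {start}
    queue = deque([start])
    while queue:          # breadth-first spread to permissive neighbours
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and (ni, nj) not in seen \
                    and permissive[ni][nj]:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return len(seen)
```

Sweeping p and recording cluster sizes (and their distribution near the site-percolation threshold, about 0.593 on the square lattice) reproduces the kind of statistics, such as fractal dimension and size distribution, that the authors compare against their cell-culture data.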

  1. Formal models in animal-metacognition research: the problem of interpreting animals' behavior.

    Science.gov (United States)

    Smith, J David; Zakrzewski, Alexandria C; Church, Barbara A

    2016-10-01

    Ongoing research explores whether animals have precursors to metacognition-that is, the capacity to monitor mental states or cognitive processes. Comparative psychologists have tested apes, monkeys, rats, pigeons, and a dolphin using perceptual, memory, foraging, and information-seeking paradigms. The consensus is that some species have a functional analog to human metacognition. Recently, though, associative modelers have used formal-mathematical models hoping to describe animals' "metacognitive" performances in associative-behaviorist ways. We evaluate these attempts to reify formal models as proof of particular explanations of animal cognition. These attempts misunderstand the content and proper application of models. They embody mistakes of scientific reasoning. They blur fundamental distinctions in understanding animal cognition. They impede theoretical development. In contrast, an energetic empirical enterprise is achieving strong success in describing the psychology underlying animals' metacognitive performances. We argue that this careful empirical work is the clear path to useful theoretical development. The issues raised here about formal modeling-in the domain of animal metacognition-potentially extend to biobehavioral research more broadly.

  2. Interpreting Recent Global-Mean Temperature Changes in the Lower Stratosphere Simulated by Climate Models

    Science.gov (United States)

    Geller, M. A.; Zhou, T.; Martin, W. G. K.; Song, H.; Wang, S.; Nazarenko, L.; Lo, K. W. K.

    2014-12-01

    It has been suggested that state-of-the-art climate models, both with and without interactive chemistry (CCMVal-2 and CMIP5), do not reproduce the lower-stratosphere temperature anomalies observed by satellite microwave sounding instruments. We find that making two changes in the analysis can eliminate this disagreement. One is a change in the definition of the temperature anomalies as being zero for the 4-year mean (1979-1982) at the beginning of the data and modeling analysis period. Such a definition of the zero temperature anomaly does not take into proper account that observations over a relatively short period represent a single realization of several possible climate states, and thus this zero-anomaly definition can be misleading when comparing anomalies from observations and models. The other change is to take into account all CMIP5 and CCMVal-2 model runs that ran realistic scenarios for the period 1979-2005. With these two changes in the analysis, we conclude that temperature changes from both CMIP5 and CCMVal-2 models agree well with MSU-4 observations over the period 1979-2005.
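
The baseline point can be checked with synthetic data: anchoring anomalies to a 4-year mean of a single noisy realization adds a random offset with variance sigma^2/4, versus sigma^2/N for a full-period baseline. The series below are white noise with an invented variance, standing in for detrended interannual variability:

```python
import random

# Compare the spread of baseline offsets for a short (4-year) versus a
# full-period anomaly baseline, across many synthetic realizations.

random.seed(2)
years, trials, sigma = 27, 2000, 0.5   # 1979-2005 span, noise sd invented

def offsets(baseline_len):
    """Baseline-mean offsets across many synthetic noise realizations."""
    out = []
    for _ in range(trials):
        series = [random.gauss(0.0, sigma) for _ in range(years)]
        out.append(sum(series[:baseline_len]) / baseline_len)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

var_short = variance(offsets(4))      # 4-year (1979-1982-style) baseline
var_full = variance(offsets(years))   # full-period baseline
```

The short baseline's offset variance is several times larger, which is why zeroing anomalies on 1979-1982 can manufacture an apparent model-observation disagreement from a single observed realization.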

  3. Rodent models of obsessive compulsive disorder: Evaluating validity to interpret emerging neurobiology.

    Science.gov (United States)

    Zike, Isaac; Xu, Tim; Hong, Natalie; Veenstra-VanderWeele, Jeremy

    2017-03-14

    Obsessive Compulsive Disorder (OCD) is a common neuropsychiatric disorder with unknown molecular underpinnings. Identification of genetic and non-genetic risk factors has largely been elusive, primarily because of a lack of statistical power. In contrast, neuroimaging has consistently implicated the cortico-striatal-thalamo-cortical circuits in OCD. Pharmacological treatment studies also show specificity, with consistent response of OCD symptoms to chronic treatment with serotonin reuptake inhibitors, although most patients are left with residual impairment. In theory, animal models could provide a bridge from the neuroimaging and pharmacology data to an understanding of pathophysiology at the cellular and molecular level. Several mouse models have been proposed using genetic, immunological, pharmacological, and optogenetic tools. These experimental model systems allow testing of hypotheses about the origins of compulsive behavior. Several models have generated behavior that appears compulsive-like, particularly excessive grooming, and some have demonstrated response to chronic serotonin reuptake inhibitors, establishing both face validity and predictive validity. Construct validity is more difficult to establish in the context of a limited understanding of OCD risk factors. Our current models may help us to dissect the circuits and molecular pathways that can elicit OCD-relevant behavior in rodents. We can hope that this growing understanding, coupled with developing technology, will prepare us for the time when robust OCD risk factors are better understood.

  4. On the interpretation of recharge estimates from steady-state model calibrations.

    Science.gov (United States)

    Anderson, William P; Evans, David G

    2007-01-01

    Ground water recharge is often estimated through the calibration of ground water flow models. We examine the nature of calibration errors by considering some simple mathematical and numerical calculations. From these calculations, we conclude that calibrating a steady-state ground water flow model to water level extremes yields estimates of recharge that have the same value as the time-varying recharge at the time the water levels are measured. These recharge values, however, are a subdued version of the actual transient recharge signal. In addition, calibrating a steady-state ground water flow model to data collected during periods of rising water levels will produce recharge values that underestimate the actual transient recharge. Similarly, calibrating during periods of falling water levels will overestimate the actual transient recharge. We also demonstrate that average water levels can be used to estimate the actual average recharge rate provided that water level data have been collected for a sufficient amount of time.
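
The "subdued version of the actual transient recharge signal" can be reproduced with a toy linear-reservoir aquifer (all constants invented): storage obeys S dh/dt = R(t) - C h, and a steady-state calibration to a head snapshot at time t implies the recharge estimate R_est = C h(t):

```python
import math

# Drive a linear-reservoir aquifer with sinusoidal recharge and compare
# the true recharge signal to the recharge a steady-state calibration
# would infer from the simultaneous head (forward Euler integration).

S, C, dt = 1.0, 0.05, 0.1          # storage, conductance, time step (days)
R0, amp, period = 1.0, 0.5, 100.0  # mean recharge, amplitude, period

h = R0 / C                         # start at the mean steady-state head
t, true_r, est_r = 0.0, [], []
while t < 10 * period:             # ten full recharge cycles
    R = R0 + amp * math.sin(2 * math.pi * t / period)
    h += dt * (R - C * h) / S
    t += dt
    true_r.append(R)
    est_r.append(C * h)            # recharge implied by steady-state calibration

def amplitude(xs):
    return (max(xs) - min(xs)) / 2

mean_true = sum(true_r) / len(true_r)
mean_est = sum(est_r) / len(est_r)
```

The implied recharge series has nearly the right mean but a damped amplitude and a phase lag: calibrating to rising-water periods underestimates, falling-water periods overestimate, and only averages are recovered faithfully, matching the abstract's conclusions.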

  5. Interpreting the von Bertalanffy model of somatic growth in fishes: the cost of reproduction.

    Science.gov (United States)

    Lester, N P; Shuter, B J; Abrams, P A

    2004-08-07

    We develop a model for somatic growth in fishes that explicitly allows for the energy demand imposed by reproduction. We show that the von Bertalanffy (VB) equation provides a good description of somatic growth after maturity, but not before. We show that the parameters of the VB equation are simple functions of age at maturity and reproductive investment. We use this model to show how the energy demands for both growth and reproduction trade off to determine optimal life-history traits. Assuming that both age at maturity and reproductive investment adapt to variations in adult mortality to maximize lifetime offspring production, our model predicts that: (i) the optimal age of maturity is inversely related to adult mortality rate; (ii) the optimal reproductive effort is approximately equal to adult mortality rate. These predictions are consistent with observed variations in the life-history traits of a large sample of iteroparous freshwater fishes. Copyright 2004 The Royal Society
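
A sketch of the growth model: post-maturity length follows the standard VB curve, and the VB parameters follow from the pre-maturity growth rate h (length per year) and the reproductive investment g (per year) via the relations l_inf = 3h/g and k = ln(1 + g/3) reported by Lester et al.; the numeric values below are invented:

```python
import math

def vb_length(t, l_inf, k, t0):
    """Standard von Bertalanffy length-at-age:
    L(t) = l_inf * (1 - exp(-k * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

def vb_from_life_history(h, g):
    """Map pre-maturity growth rate h and reproductive investment g
    to the VB parameters (l_inf, k)."""
    return 3.0 * h / g, math.log(1.0 + g / 3.0)

l_inf, k = vb_from_life_history(h=10.0, g=0.5)  # e.g. 10 cm/yr, effort 0.5/yr
lengths = [vb_length(t, l_inf, k, t0=0.0) for t in range(0, 30)]
```

The mapping makes the paper's trade-off explicit: raising g buys more reproduction but lowers the asymptotic length 3h/g, which is why the optimal effort tracks adult mortality (prediction (ii) in the abstract).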

  7. A numerical analysis model for the interpretation of in vivo platelet consumption data.

    Directory of Open Access Journals (Sweden)

    Ted S Strom

    Full Text Available Unlike anemias, most thrombocytopenias cannot be separated into those due to impaired production and those due to accelerated consumption. While rapid clearance of labeled platelets from the bloodstream can be followed in thrombocytopenic individuals, no model exists for quantitatively inferring from autologous or allogeneic platelet consumption data what changes in random consumption, lifespan dependent consumption, and platelet production rate may have caused the thrombocytopenia. Here we describe a numerical analysis model which resolves these issues. The model applies three parameter values (a random consumption rate constant, a lognormally distributed platelet lifespan, and the standard deviation of the latter) to a matrix comprising a series of platelet cohorts which are sequentially produced and fractionally consumed in a series of time intervals. The cohort platelet counts achieved after equilibration of production and consumption both enumerate the population age distribution and sum to the population platelet count. Continued platelet consumption after production is halted then serves to model in vivo platelet consumption data, with consumption rate in the first such interval defining the equilibrium platelet production rate. We use a least squares fitting procedure to find parameter values which best fit observed platelet consumption data obtained in WT and thrombocytopenic WASP(-) mice. Equilibrium platelet age distributions are then 'grafted' into the matrix to allow modeling of the consumption of WT platelets in WASP(-) recipients, and vice versa. The optimal parameter values obtained indicate that random WT platelet consumption accounts for a larger fraction of platelet turnover than was previously suspected. Platelet WASP deficiency accelerates random consumption, and a trans effect of recipient WASP deficiency contributes to this. Application of the model to clinical data will allow distinctions to be made between thrombocytopenias ...
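
A minimal sketch of the cohort bookkeeping described above (all parameter values hypothetical, and the lognormal lifespan handled through its CDF rather than the paper's full matrix formulation):

```python
import math

def lognorm_cdf(x, mu, sigma):
    """CDF of a lognormal lifespan distribution."""
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def equilibrium_count(k_random, median_life=5.0, sigma=0.3,
                      production=1000.0, dt=1.0, horizon=60.0):
    """Equilibrium platelet count when cohorts produced at a constant rate are
    consumed both randomly (rate k_random) and by lognormal lifespan expiry."""
    mu = math.log(median_life)

    def survival(t):   # fraction of a cohort still circulating at age t
        return math.exp(-k_random * t) * (1.0 - lognorm_cdf(t, mu, sigma))

    return sum(production * survival(a * dt) * dt
               for a in range(int(horizon / dt)))

# Accelerated random consumption (as reported for WASP(-) platelets) lowers the
# equilibrium count even at an unchanged production rate.
count_wt, count_fast = equilibrium_count(0.05), equilibrium_count(0.30)
assert count_fast < count_wt
```

This illustrates why a fit of the consumption-rate parameters can separate production defects from accelerated random clearance.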

  8. Fundamentals of PV Efficiency Interpreted by a Two-Level Model

    CERN Document Server

    Alam, Muhammad A

    2012-01-01

    Elementary physics of photovoltaic energy conversion in a two-level atomic PV cell is considered. We explain the conditions under which the Carnot efficiency is reached and how it can be exceeded! The loss mechanisms - thermalization, angle entropy, and below-bandgap transmission - explain the gap between the Carnot efficiency and the Shockley-Queisser limit. A wide variety of techniques developed to reduce these losses (e.g., solar concentrators, solar-thermal conversion, tandem cells, etc.) are reinterpreted using the two-level model. Remarkably, the simple model appears to capture the essence of PV operation and reproduce the key results and important insights that are known to experts through complex derivations.

  9. A Model for the Recording and Physical Interpretation of Three-Dimensional Colour Images

    Science.gov (United States)

    Baribeau, Rejean

    The model proposed in this thesis relies on a three-dimensional vision sensor, coupled to an RGB laser, to capture the surface coordinates of an object together with the associated colours. Calibration of the sensor and exploitation of the 3D data allow the spectral reflectance factors of the surface elements to be recovered. The performance of the system is shown to be limited by speckle, and a quantitative analysis of this phenomenon is proposed. Various reflection models are then established and applied to the extraction of the intrinsic physical parameters of objects in 3D colour scenes. Such physical modelling eases the interpretation task and allows greater flexibility in manipulating the objects as virtual realities.

  10. Interpreting the 750 GeV diphoton excess within topflavor seesaw model

    Science.gov (United States)

    Cao, Junjie; Shang, Liangliang; Su, Wei; Wang, Fei; Zhang, Yang

    2016-10-01

    We propose that the extension of the Standard Model by typical vector-like SU(2)_L doublet fermions and a non-singlet scalar field can account for the observed 750 GeV diphoton excess in the experimentally allowed parameter space. Such an idea can be realized in a typical topflavor seesaw model where the new resonance X is identified as a CP-even or CP-odd scalar emerging from a certain bi-doublet Higgs field, and it can couple rather strongly to photons and gluons through mediators such as the vector-like fermions, scalars and gauge bosons predicted by the model. Numerical analysis indicates that the model can predict the central value of the diphoton excess without contradicting any constraints from the 8 TeV LHC. Among all the constraints, the tightest one comes from the Zγ channel with σ(Zγ) at 8 TeV ≲ 3.6 fb, which requires σ(γγ) at 13 TeV ≲ 6 fb in most of the favored parameter space. Theoretical issues such as vacuum stability and the Landau pole are also addressed.

  11. Energetic protons at Mars: interpretation of SLED/Phobos-2 observations by a kinetic model

    Directory of Open Access Journals (Sweden)

    E. Kallio

    2012-11-01

    Full Text Available Mars has neither a significant global intrinsic magnetic field nor a dense atmosphere. Therefore, solar energetic particles (SEPs) from the Sun can penetrate close to the planet (under some circumstances reaching the surface). On 13 March 1989 the SLED instrument aboard the Phobos-2 spacecraft recorded the presence of SEPs near Mars while traversing a circular orbit (at 2.8 R_M). In the present study the response of the Martian plasma environment to SEP impingement on 13 March was simulated using a kinetic model. The electric and magnetic fields were derived using a 3-D self-consistent hybrid model (HYB-Mars) where ions are modelled as particles while electrons form a massless charge-neutralizing fluid. The case study shows that the model successfully reproduced several features of the in situ observations: (1) a flux enhancement near the inbound bow shock, (2) the formation of a magnetic shadow where the energetic particle flux was decreased relative to its solar wind values, (3) the energy dependency of the flux enhancement near the bow shock and (4) how the size of the magnetic shadow depends on the incident particle energy. Overall, it is demonstrated that the Martian magnetic field environment resulting from the Mars–solar wind interaction significantly modulated the Martian energetic particle environment.

  12. Communication: Consistent interpretation of molecular simulation kinetics using Markov state models biased with external information

    Science.gov (United States)

    Rudzinski, Joseph F.; Kremer, Kurt; Bereau, Tristan

    2016-02-01

    Molecular simulations can provide microscopic insight into the physical and chemical driving forces of complex molecular processes. Despite continued advancement of simulation methodology, model errors may lead to inconsistencies between simulated and reference (e.g., from experiments or higher-level simulations) observables. To bound the microscopic information generated by computer simulations within reference measurements, we propose a method that reweights the microscopic transitions of the system to improve consistency with a set of coarse kinetic observables. The method employs the well-developed Markov state modeling framework to efficiently link microscopic dynamics with long-time scale constraints, thereby consistently addressing a wide range of time scales. To emphasize the robustness of the method, we consider two distinct coarse-grained models with significant kinetic inconsistencies. When applied to the simulated conformational dynamics of small peptides, the reweighting procedure systematically improves the time scale separation of the slowest processes. Additionally, constraining the forward and backward rates between metastable states leads to slight improvement of their relative stabilities and, thus, refined equilibrium properties of the resulting model. Finally, we find that difficulties in simultaneously describing both the simulated data and the provided constraints can help identify specific limitations of the underlying simulation approach.
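
The Markov-state-model machinery the method builds on can be sketched as follows: estimate a row-stochastic transition matrix from a discretized trajectory, then read the slowest relaxation time off its second eigenvalue. The two-state trajectory and its switching probabilities below are synthetic, not from the paper:

```python
import numpy as np

def msm_from_traj(dtraj, n_states, lag=1):
    """Row-stochastic transition matrix estimated from a discrete trajectory."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def implied_timescale(T, lag=1):
    """Slowest relaxation time, -lag / ln(lambda_2)."""
    evals = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -lag / np.log(evals[1])

# Synthetic two-state trajectory with known switching probabilities.
rng = np.random.default_rng(0)
p01, p10 = 0.05, 0.02
state, traj = 0, []
for _ in range(200_000):
    traj.append(state)
    if rng.random() < (p01 if state == 0 else p10):
        state = 1 - state

T = msm_from_traj(traj, 2)
ts_est = implied_timescale(T)
# For a two-state chain, lambda_2 = 1 - p01 - p10 exactly.
ts_true = -1.0 / np.log(1.0 - p01 - p10)
assert abs(ts_est - ts_true) < 5.0
```

Reweighting, in the paper's sense, would then adjust the entries of T toward externally supplied kinetic constraints; the estimation and timescale steps shown here are the common substrate.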

  13. Molecular interpretation of nonclassical gas dynamics of dense vapors under the van der Waals model

    NARCIS (Netherlands)

    Colonna, P.; Guardone, A.

    2006-01-01

    The van der Waals polytropic gas model is used to investigate the role of attractive and repulsive intermolecular forces and the influence of molecular complexity on the possible nonclassical gas dynamic behavior of vapors near the liquid-vapor saturation curve. The decrease of the sound speed upon ...

  14. A spatial interpretation of the density dependence model in industrial demography

    NARCIS (Netherlands)

    van Wissen, L

    2004-01-01

    In this paper the density dependence model, which was developed in organizational ecology, is compared to the economic-geographical notion of agglomeration economies. There is a basic resemblance: both involve some form of positive feedback between size of the population and growth. The paper explores ...

  15. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for interpretation ...

  16. Household Labour Supply in Britain and Denmark: Some Interpretations Using a Model of Pareto Optimal Behaviour

    DEFF Research Database (Denmark)

    Barmby, Tim; Smith, Nina

    1996-01-01

    This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...

  18. A new mixed-mode model for interpreting and predicting protein elution during isoelectric chromatofocusing.

    Science.gov (United States)

    Choy, Derek Y C; Creagh, A Louise; von Lieres, Eric; Haynes, Charles

    2014-05-01

    Experimental data are combined with classic theories describing electrolytes in solution and at surfaces to define the primary mechanisms influencing protein retention and elution during isoelectric chromatofocusing (ICF) of proteins and protein mixtures. Those fundamental findings are used to derive a new model to understand and predict elution times of proteins during ICF. The model uses a modified form of the steric mass action (SMA) isotherm to account for both ion exchange and isoelectric focusing contributions to protein partitioning. The dependence of partitioning on pH is accounted for through the characteristic charge parameter m of the SMA isotherm and the application of Gouy-Chapman theory to define the dependence of the equilibrium binding constant K_bi on both m and ionic strength. Finally, the effects of changes in matrix surface pH on protein retention are quantified through a Donnan equilibrium type model. By accounting for isoelectric focusing, ion binding and exchange, and surface pH contributions to protein retention and elution, the model is shown to accurately capture the dependence of protein elution times on column operating conditions. © 2014 Wiley Periodicals, Inc.

  1. Interpretation and modeling of a subsurface injection test, 200 East Area, Hanford, Washington

    Energy Technology Data Exchange (ETDEWEB)

    Smoot, J.L. [Pacific Northwest Lab., Richland, WA (United States); Lu, A.H. [Westinghouse Hanford Co., Richland, WA (United States)

    1994-11-01

    A tracer experiment was conducted in 1980 and 1981 in the unsaturated zone in the southeast portion of the Hanford 200 East Area near the Plutonium-Uranium Extraction (PUREX) facility. The field design consisted of a central injection well with 32 monitoring wells within an 8-m radius. Water containing radioactive and other tracers was injected weekly during the experiment. The unique features of the experiment were the documented control of the inputs, the experiment's three-dimensional nature, the in-situ measurement of radioactive tracers, and the use of multiple injections. The spacing of the test wells provided reasonable lag distribution for spatial correlation analysis. Preliminary analyses indicated spatial correlation on the order of 400 to 500 cm in the vertical direction. Previous researchers found that two-dimensional axisymmetric modeling of moisture content generally underpredicts lateral spreading and overpredicts vertical movement of the injected water. Incorporation of anisotropic hydraulic properties resulted in the best model predictions. Three-dimensional modeling incorporated the geologic heterogeneity of discontinuous layers and lenses of sediment apparent in the site geology. Model results were compared statistically with measured experimental data and indicate reasonably good agreement with vertical and lateral field moisture distributions.

  3. Interpretation of heavy metal speciation in sequential extraction using geochemical modelling

    NARCIS (Netherlands)

    Cui, Yanshan; Weng, Liping

    2015-01-01

    Environmental context Heavy metal pollution is a worldwide environmental concern, and the risk depends not only on the total concentration of the metals, but also on their chemical speciation. Based on state-of-the-art geochemical modelling, we pinpoint the heavy metal pools approached by the widely used sequential extraction ...

  4. Interpretation of electrochemical impedance spectroscopy(EIS) circuit model for soils

    Institute of Scientific and Technical Information of China (English)

    韩鹏举; 张亚芬; 陈幼佳; 白晓红

    2015-01-01

    Based on three different kinds of conductive paths in the microstructure of soil and the theory of electrochemical impedance spectroscopy (EIS), an integrated equivalent circuit model and impedance formula for soils were proposed, which contain 6 meaningful resistance and reactance parameters. Considering the conductive properties of soils and dispersion effects, mathematical equations for impedance under various circuit models were deduced and studied. The mathematical expression presents two semicircles in the theoretical EIS Nyquist spectrum, in which the center of one semicircle is degraded to simplify the equivalent model. Based on the measured parameters of the EIS Nyquist spectrum, meaningful soil parameters can easily be determined. Additionally, EIS was used to investigate the soil properties with different water contents along with the mathematical relationships and mechanism between the physical parameters and water content. The magnitude of the impedance decreases with increasing testing frequency and water content in the Bode graphs. The proposed model would help us to better understand the soil microstructure and properties and offer more reasonable explanations for EIS spectra.
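
The two-semicircle Nyquist behaviour can be illustrated with a generic equivalent circuit, a solution resistance in series with two parallel RC branches; the element values below are hypothetical and this is a simplification of the paper's six-parameter soil model:

```python
# Hypothetical element values for a series chain Rs + (R1 || C1) + (R2 || C2):
Rs, R1, C1, R2, C2 = 10.0, 100.0, 1e-6, 500.0, 1e-3

def impedance(omega):
    """Impedance of two parallel-RC branches in series with resistance Rs."""
    z1 = R1 / (1.0 + 1j * omega * R1 * C1)
    z2 = R2 / (1.0 + 1j * omega * R2 * C2)
    return Rs + z1 + z2

# Well-separated time constants (R1*C1 = 1e-4 s, R2*C2 = 0.5 s) produce two
# semicircles in the Nyquist plot, bounded by these limiting real parts:
assert abs(impedance(1e-6).real - (Rs + R1 + R2)) < 1.0   # low-frequency limit
assert abs(impedance(1e9).real - Rs) < 1.0                # high-frequency limit
```

Fitting measured spectra to such a formula is how the resistance and reactance parameters are read off the two arcs.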

  5. Quantum interpretations

    Energy Technology Data Exchange (ETDEWEB)

    Goernitz, T.; Weizsaecker, C.F.V.

    1987-10-01

    Four interpretations of quantum theory are compared: the Copenhagen interpretation (C.I.) with the additional assumption that the quantum description also applies to the mental states of the observer, and three recent ones, by Kochen, Deutsch, and Cramer. Since they interpret the same mathematical structure with the same empirical predictions, it is assumed that they formulate only different linguistic expressions of one identical theory. C.I. as a theory on human knowledge rests on a phenomenological description of time. It can be reconstructed from simple assumptions on predictions. Kochen shows that mathematically every composite system can be split into an object and an observer. Deutsch, with the same decomposition, describes futuric possibilities under the Everett term worlds. Cramer, using four-dimensional action at a distance (Wheeler-Feynman), describes all future events like past facts. All three can be described in the C.I. frame. The role of abstract nonlocality is discussed.

  6. About new dynamical interpretations of entropic model of correspondence matrix calculation and Nash-Wardrop's equilibrium in Beckmann's traffic flow distribution model

    CERN Document Server

    Nagapetyan, Tigran

    2011-01-01

    In this work we extend the statistical physics (stochastic chemical kinetics) approach to the investigation of macrosystems arising in economics, sociology and traffic flow theory. The main idea is to define the equilibrium of a macrosystem as the most probable macrostate of the invariant measure of the Markov dynamics corresponding to the macrosystem. We demonstrate new dynamical interpretations for the well-known static model of correspondence matrix calculation. Based on this model we propose a best response dynamics for Beckmann's traffic flow distribution model. We prove that this "natural" dynamic converges, under quite general conditions, to the Nash-Wardrop equilibrium. We then consider two illustrative examples.
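
The convergence claim can be illustrated on a toy two-route instance (latency functions hypothetical, not from the paper): a smoothed best-response dynamic settles at the Wardrop condition that all used routes have equal latency.

```python
# Two parallel routes with linear latencies t_i(x) = a_i + b_i * x and unit demand.
a = [1.0, 2.0]          # free-flow travel times (hypothetical)
b = [2.0, 1.0]          # congestion slopes (hypothetical)
demand = 1.0

x = 0.5                 # flow assigned to route 0
for _ in range(2000):
    t0 = a[0] + b[0] * x
    t1 = a[1] + b[1] * (demand - x)
    target = 1.0 if t0 < t1 else 0.0   # best response: shift flow to faster route
    x += 0.01 * (target - x)           # smoothed (small-step) best-response update

# Wardrop condition: both used routes end up with (nearly) equal latency.
t0 = a[0] + b[0] * x
t1 = a[1] + b[1] * (demand - x)
assert abs(t0 - t1) < 0.05
```

For these latencies the equilibrium split is x = 2/3, where both routes cost 7/3; the dynamic oscillates within a step-size band of that point.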

  7. Interpreting Physics

    CERN Document Server

    MacKinnon, Edward

    2012-01-01

    This book is the first to offer a systematic account of the role of language in the development and interpretation of physics. An historical-conceptual analysis of the co-evolution of mathematical and physical concepts leads to the classical/quantum interface. Bohrian orthodoxy stresses the indispensability of classical concepts and the functional role of mathematics. This book analyses ways of extending, and then going beyond, this orthodoxy. Finally, the book analyzes how a revised interpretation of physics impacts on basic philosophical issues: conceptual revolutions, realism, and r ...

  8. The interpretation of remotely sensed cloud properties from a model parameterization perspective

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    The goals of ISCCP and FIRE are, broadly speaking, to provide methods for the retrieval of cloud properties from satellites, and to improve cloud radiation models and the parameterization of clouds in GCMs. This study suggests a direction for GCM cloud parameterizations based on analysis of Landsat and ISCCP satellite data. For low-level single-layer clouds it is found that the mean retrieved liquid water path in cloudy pixels is essentially invariant to the cloud fraction, at least in the range 0.2 - 0.8. This result is very important since it allows the cloud fraction to be estimated if the mean liquid water path of cloud in a general circulation model gridcell is known. 3 figs.

  9. Modelling as a tool when interpreting biodegradation of micro pollutants in activated sludge systems

    DEFF Research Database (Denmark)

    Press-Kristensen, Kåre; Lindblom, Erik Ulfson; Henze, Mogens

    2007-01-01

    The aims of the present work were to improve the biodegradation of the endocrine disrupting micro pollutant, bisphenol A (BPA), used as model compound in an activated sludge system and to underline the importance of modelling the system. Previous results have shown that BPA mainly is degraded under...... aerobic conditions. Therefore the aerobic phase time in the BioDenitro process of the activated sludge system was increased from 50% to 70%. The hypothesis was that this would improve the biodegradation of BPA. Both the influent and the effluent concentrations of BPA in the experiment dropped...... probably was caused by either a larger specific biomass to influent BPA ratio, improved biodegradation related to the increased aerobic phase time, or a combination of the two. Thereby it was not possible to determine if the increase in aerobic phase time improved the biodegradation of BPA. The work...

  10. Determination of disk diffusion susceptibility testing interpretive criteria using model-based analysis: development and implementation.

    Science.gov (United States)

    DePalma, Glen; Turnidge, John; Craig, Bruce A

    2017-02-01

    The determination of diffusion test breakpoints has become a challenging issue due to the increasing resistance of microorganisms to antibiotics. Currently, the most commonly used method for determining these breakpoints is the modified error-rate bounded method. Its use has remained widespread despite the introduction of several model-based methods that have been shown superior in terms of precision and accuracy. However, the computational complexities associated with these new approaches have been a significant barrier for clinicians. To remedy this, we developed and examine the utility of a free online software package designed for the determination of diffusion test breakpoints: dBETS (diffusion Breakpoint Estimation Testing Software). This software package allows clinicians to easily analyze data from susceptibility experiments through visualization, error-rate bounded, and model-based approaches. We analyze four publicly available data sets from the Clinical and Laboratory Standards Institute using dBETS.

  11. Acoustic emission noise from sodium vapour bubble collapsing: detection, interpretation, modelling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dentico, G.; Pacilio, V.; Papalia, B.; Taglienti, S.; Tosi, V.

    1982-01-01

    Sodium vapour bubble collapsing is detected by means of piezoelectric accelerometers coupled to the test section via short waveguides. The output analog signal is processed by transforming it into a time series of pulses through the setting of an amplitude threshold and the shaping of a standard pulse (denominated an 'event') every time the signal crosses that threshold. The number of events is counted in adjacent samples of equal duration, and the waiting-time distribution between contiguous events is measured. So far, six kinetic properties have been found for the time series in question. They help in setting up a stochastic model in which the supply of energy to a liquid sodium medium induces the formation of vapour bubbles whose consequent collapse delivers acoustic pulses. Finally, a simulation procedure is carried out: a Polya urn model is adopted for simulating event sequences with a priori established properties.

  12. $B\\to K^*\\ell^+\\ell^-$ in the Standard Model: Elaborations and Interpretations

    CERN Document Server

    Ciuchini, Marco; Franco, Enrico; Mishima, Satoshi; Paul, Ayan; Silvestrini, Luca; Valli, Mauro

    2016-01-01

    Disentangling New Physics effects from the Standard Model requires a good understanding of all pieces that stem from the latter, especially the uncertainties that might plague the theoretical estimations within the Standard Model. In the light of recent measurements made in the decay of $B\\to K^*\\ell^+\\ell^-$, and accompanying possibilities of New Physics effects, we re-examine the hadronic uncertainties that come about in this exclusive $b \\to s$ transition. We show that it is not trivial to distinguish New Physics effects from these hadronic uncertainties and we attempt to quantify the latter in its magnitude and kinematic shape from the recent LHCb measurements of the angular observables in this decay mode. We also update our fit with the more recent calculations of the form factors combined with the ones computed with Lattice QCD.

  13. Performances of effective medium model in interpreting optical properties of polyvinylcarbazole:ZnSe nanocomposites

    Science.gov (United States)

    Benchaabane, Aida; Ben Hamed, Zied; Kouki, Fayçal; Abderrahmane Sanhoury, Mohamed; Zellama, Kacem; Zeinert, Andreas; Bouchriha, Habib

    2014-04-01

    The effective medium model is applied to investigate the optical properties of hybrid nanocomposite layers of polyvinylcarbazole (PVK) and nanoparticles of zinc selenide (ZnSe). Thin films of PVK:ZnSe nanocomposites show a porous microstructure with pore diameters of 500 nm. Numerical calculations led to the determination of optical constants such as the refractive index n, the extinction coefficient k, the dielectric permittivity ε, and the absorption coefficient α. Using common theoretical models, we have determined the Cauchy parameters of the refractive index, namely, the static (ε_s) and lattice (ε_∞) dielectric constants as well as the plasma frequency ω_p, the carrier density to effective mass ratio N/m_e*, and the optical conductivity σ_oc. We show that the optical band gap energy E_g of the nanocomposite structure decreases slightly upon the increase of the nanoparticle volume fraction and is in good agreement with Vegard's law.
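
Extracting Cauchy parameters is a linear least-squares problem in the variable 1/λ², as the following sketch illustrates on synthetic index data (the numbers are hypothetical, not the PVK:ZnSe measurements):

```python
# Synthetic refractive-index data following Cauchy dispersion n(lam) = A + B/lam^2
# (A_true, B_true are hypothetical values chosen for illustration):
lams = [400e-9, 500e-9, 600e-9, 700e-9]        # wavelengths [m]
A_true, B_true = 1.60, 1.2e-14                  # B in [m^2]
n_data = [A_true + B_true / lam**2 for lam in lams]

# Cauchy fitting is ordinary linear least squares in x = 1/lam^2.
xs = [1.0 / lam**2 for lam in lams]
m = len(xs)
sx, sy = sum(xs), sum(n_data)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, n_data))
B_fit = (m * sxy - sx * sy) / (m * sxx - sx * sx)
A_fit = (sy - B_fit * sx) / m
assert abs(A_fit - A_true) < 1e-6
assert abs(B_fit - B_true) / B_true < 1e-6
```

With noise-free data the closed-form slope/intercept solution recovers A and B exactly up to floating-point error.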

  14. Intercomparison and interpretation of surface energy fluxes in atmospheric general circulation models

    Science.gov (United States)

    Randall, D. A.; Cess, R. D.; Blanchet, J. P.; Boer, G. J.; Dazlich, D. A.; Del Genio, A. D.; Deque, M.; Dymnikov, V.; Galin, V.; Ghan, S. J.

    1992-01-01

    Responses of the surface energy budgets and hydrologic cycles of 19 atmospheric general circulation models to an imposed, globally uniform sea surface temperature perturbation of 4 K were analyzed. The responses of the simulated surface energy budgets are extremely diverse and are closely linked to the responses of the simulated hydrologic cycles. The response of the net surface energy flux is not controlled by cloud effects; instead, it is determined primarily by the response of the latent heat flux. The prescribed warming of the oceans leads to major increases in the atmospheric water vapor content and the rates of evaporation and precipitation. The increased water vapor amount drastically increases the downwelling IR radiation at the earth's surface, but the amount of the change varies dramatically from one model to another.

  15. A combinatorial interpretation of the free-fermion condition of the six-vertex model

    Energy Technology Data Exchange (ETDEWEB)

    Brak, R.; Owczarek, A. [Department of Mathematics and Statistics, University of Melbourne, Parkville, VIC (Australia)

    1999-05-14

    The free-fermion condition of the six-vertex model provides a five-parameter sub-manifold on which the Bethe Ansatz equations for the wavenumbers that enter into the eigenfunctions of the transfer matrices of the model decouple, hence allowing explicit solutions. Such conditions arose originally in early field-theoretic S-matrix approaches. Here we provide a combinatorial explanation for the condition in terms of a generalized Gessel-Viennot involution. By doing so we extend the use of the Gessel-Viennot theorem, originally devised for non-intersecting walks only, to a special weighted type of intersecting walk, and hence express the partition function of N such walks starting and finishing at fixed endpoints in terms of the single-walk partition functions. (author)
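
The Gessel-Viennot mechanism can be checked directly on a small instance: the determinant of the path-count matrix equals the number of vertex-disjoint path pairs, verified here by brute force for two monotone lattice paths (the endpoints are chosen for illustration, not taken from the paper):

```python
from math import comb

def paths(a, b):
    """All monotone (right/up) lattice paths from a to b, as vertex tuples."""
    (x0, y0), (x1, y1) = a, b
    if (x0, y0) == (x1, y1):
        return [((x0, y0),)]
    out = []
    if x0 < x1:
        out += [((x0, y0),) + p for p in paths((x0 + 1, y0), b)]
    if y0 < y1:
        out += [((x0, y0),) + p for p in paths((x0, y0 + 1), b)]
    return out

def n_paths(a, b):                       # binomial count of monotone paths
    dx, dy = b[0] - a[0], b[1] - a[1]
    return comb(dx + dy, dx)

A = [(0, 1), (1, 0)]                     # start points
B = [(2, 3), (3, 2)]                     # end points

# Gessel-Viennot: the determinant of the path-count matrix counts families of
# vertex-disjoint paths A[i] -> B[i] (the crossed connection forces a shared vertex).
det = (n_paths(A[0], B[0]) * n_paths(A[1], B[1])
       - n_paths(A[0], B[1]) * n_paths(A[1], B[0]))

brute = sum(1 for p in paths(A[0], B[0]) for q in paths(A[1], B[1])
            if not set(p) & set(q))
assert brute == det == 20
```

The paper's contribution extends this involution-based cancellation from non-intersecting walks to a weighted class of intersecting walks.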

  17. Uncertainty Representation and Interpretation in Model-based Prognostics Algorithms based on Kalman Filter Estimation

    Science.gov (United States)

    2012-09-01

different decisions as compared to an unmanned aerial vehicle (UAV) mission reconfiguration based on prognostics indication of power train failures. [Recoverable index terms from the garbled record: degradation modeling, training trajectories, test trajectory, parameter estimation, state-space representation, prognostics, dynamic system realization, health management.]

  18. A numerical analysis model for interpretation of flow cytometric studies of ex vivo phagocytosis.

    Directory of Open Access Journals (Sweden)

    Ted S Strom

Full Text Available The study of ex vivo phagocytosis via flow cytometry requires that one distinguish experimentally between uptake and adsorption of fluorescently labeled targets by phagocytes. Removal of the latter quantity from the analysis is the most common means of analyzing such data. Because the probability of phagocytosis is a function of the probability of adsorption, and because partially quenched fluorescence after uptake often overlaps with that of negative controls, this approach is suboptimal at best. Here, we describe a numerical analysis model which overcomes these limitations. We posit that the random adsorption of targets to macrophages, and subsequent phagocytosis, is a function of three parameters: the ratio of targets to macrophages (m), the mean fluorescence intensity imparted to the phagocyte by the internalized target (alpha), and the probability of phagocytosis per adsorbed target (p). The potential values of these parameters define a parameter space and their values at any point in parameter space can be used to predict the fraction of adsorption(+) and [adsorption(-), phagocytosis(+)] cells that might be observed experimentally. By systematically evaluating the points in parameter space for the latter two values and comparing them to experimental data, the model arrives at sets of parameter values that optimally predict such data. Using activated THP-1 cells as macrophages and platelets as targets, we validate the model by demonstrating that it can distinguish between the effects of experimental changes in m, alpha, and p. Finally, we use the model to demonstrate that platelets from a congenitally thrombocytopenic WAS patient show an increased probability of ex vivo phagocytosis. This finding correlates with other evidence that rapid in vivo platelet consumption contributes significantly to the thrombocytopenia of WAS. Our numerical analysis method represents a useful and innovative approach to multivariate analysis.
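If the number of targets adsorbed per macrophage is taken as Poisson with mean m and each adsorbed target is internalized independently with probability p, Poisson thinning gives the two observable fractions in closed form, and the parameter-space scan reduces to a grid search. A minimal sketch with synthetic "observed" fractions (alpha is omitted, since it only scales fluorescence):

```python
from math import exp

def predicted_fractions(m, p):
    """Poisson thinning: still-adsorbed count ~ Poisson(m*(1-p)),
    internalized count ~ Poisson(m*p), independent of each other."""
    frac_adsorbed = 1.0 - exp(-m * (1.0 - p))                        # adsorption(+)
    frac_engulfed_only = exp(-m * (1.0 - p)) * (1.0 - exp(-m * p))   # adsorption(-), phagocytosis(+)
    return frac_adsorbed, frac_engulfed_only

def fit(observed, m_grid, p_grid):
    """Scan the (m, p) parameter space for the squared-error minimizer."""
    return min(
        ((m, p) for m in m_grid for p in p_grid),
        key=lambda mp: sum((o - f) ** 2 for o, f in zip(observed, predicted_fractions(*mp))),
    )

m_grid = [i / 20 for i in range(1, 61)]   # m in 0.05 .. 3.0
p_grid = [i / 50 for i in range(1, 50)]   # p in 0.02 .. 0.98
observed = predicted_fractions(1.2, 0.4)  # synthetic "experimental" fractions
print(fit(observed, m_grid, p_grid))
```

Because the map (m, p) → (m(1−p), mp) is one-to-one, the synthetic optimum is unique and the scan recovers the generating parameters exactly.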

  19. Lifetime measurements in 71Ge and a new interacting boson-fermion model interpretation

    Science.gov (United States)

    Ivaşcu, M.; Mărginean, N.; Bucurescu, D.; Căta-Danil, I.; Ur, C. A.; Lobach, Yu. N.

    1999-08-01

    The lifetimes of twelve low spin excited states have been measured in 71Ge using the Doppler shift attenuation method in the 71Ga(p,nγ) reaction at 3.0 and 3.5 MeV incident energy. New interacting boson-fermion model calculations for this nucleus account well for the properties of all its levels known up to about 1.5 MeV excitation.

  20. Equations and their physical interpretation in numerical modeling of heavy metals in fluvial rivers

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on the previous work on the transport-transformation of heavy metal pollutants in fluvial rivers, this paper presented the formulation of a two-dimensional model to describe heavy metal transport-transformation in fluvial rivers by considering basic principles of environmental chemistry, hydraulics, mechanics of sediment transport and recent developments along with three very simplified test cases. The model consists of water flow governing equations, sediment transport governing equations, transport-transformation equation of heavy metal pollutants, and convection-diffusion equations of adsorption-desorption kinetics of particulate heavy metal concentrations on suspended load, bed load and bed sediment. The heavy metal transport-transformation equation is basically a mass balance equation, which demonstrates how sediment transport affects transport-transformation of heavy metals in fluvial rivers. The convection-diffusion equations of adsorption-desorption kinetics of heavy metals, being an extension of batch reactor experimental results and a major advancement of the previous work, take both physical transport, i.e. convection and diffusion and chemical reactions, i.e. adsorption-desorption into account. Effects of sediment transport on heavy metal transport-transformation were clarified through three examples. Specifically, the transport-transformation of heavy metals in a steady, uniform and equilibrium sediment-laden flow was calculated by applying this model, and results were shown to be rational. Both theoretical analysis and numerical simulation indicated that the transport-transformation of heavy metals in sediment-laden flows with clay-enriched riverbed possesses not only the generality of common tracer pollutants, but also characteristics of transport-transformation induced by sediment motion. Future work will be conducted to present validation/application of the model with available data.
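The coupling described above, convection and diffusion of the dissolved phase plus first-order adsorption-desorption exchange with a particulate phase, can be sketched in one dimension with an explicit finite-difference scheme. All coefficients and the grid are hypothetical, chosen only to satisfy the explicit stability limit:

```python
def step(c_dis, c_par, u, D, k_ads, k_des, dx, dt):
    """One explicit step: upwind convection + diffusion of the dissolved
    phase, with first-order exchange against the particulate phase."""
    n = len(c_dis)
    new = c_dis[:]                     # boundary cells held at zero
    for i in range(1, n - 1):
        adv = -u * (c_dis[i] - c_dis[i - 1]) / dx
        dif = D * (c_dis[i + 1] - 2 * c_dis[i] + c_dis[i - 1]) / dx ** 2
        new[i] = c_dis[i] + dt * (adv + dif - k_ads * c_dis[i] + k_des * c_par[i])
    new_par = [cp + dt * (k_ads * cd - k_des * cp) for cd, cp in zip(c_dis, c_par)]
    return new, new_par

n, dx, dt = 50, 1.0, 0.1
c_dis = [0.0] * n
c_dis[5] = 1.0                          # dissolved heavy-metal pulse
c_par = [0.0] * n                       # particulate (sorbed) phase
for _ in range(100):
    c_dis, c_par = step(c_dis, c_par, u=0.5, D=0.5, k_ads=0.1, k_des=0.05, dx=dx, dt=dt)
total = sum(c_dis) + sum(c_par)
print(round(total, 6))
```

The exchange terms cancel in the total, so mass is conserved up to the small flux leaking through the open boundaries while the pulse advects downstream.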

  1. Cyclostratigraphy for Chinese red clay sequences: Implications to changing previous age models and paleoclimate interpretations

    Science.gov (United States)

    Anwar, T.; Kravchinsky, V. A.; Zhang, R.

    2015-12-01

The Chinese Loess Plateau contains red clay sequences with a continuous alternation of sedimentary cycles and recurrent paleoclimatic fluctuations. The absence of abundant fossils and the inapplicability of radiometric dating have made magnetostratigraphy the leading method for building age models for the red clay. Here the magnetostratigraphic age model for a red clay sequence is tested using cyclostratigraphy, since the orbital parameters of Earth are known. Milankovitch periodicities recorded in magnetic susceptibility and grain size in the Shilou red clay section are investigated, and the previously reported age of 11 Ma for this section is re-evaluated. Magnetostratigraphic dating based only on visual correlation could potentially lead to an erroneous age model. In this study the correlation is executed through an iteration procedure until it is supported by cyclostratigraphy, i.e. until the Milankovitch cycles are resolved in the best possible manner. Our new approach provides an age of 5.2 Ma for the Shilou profile. Wavelet analysis reveals that a 400 kyr eccentricity cycle is well preserved, and the existence of a 100 kyr eccentricity cycle in the red clay sequence on the eastern Chinese Loess Plateau suggests that eccentricity plays a vital role in Pliocene climate evolution. Paleomonsoon evolution is reconstructed and divided into three intervals (5.2-4.5 Ma, 4.5-3.6 Ma and 3.6-2.58 Ma). The earliest stage indicates that summer and winter monsoon cycles may have alternated rapidly, whereas the middle stage reflects an intensification of the winter monsoon and aridification in Asia, and the youngest stage is characterized by a relatively intensified summer monsoon. This study demonstrates that cyclostratigraphy can greatly assist magnetostratigraphy in dating the red clay sequences, and implies that many published age models for the red clay sequences should be re-assessed where possible. An evaluation of the monsoon system and climate change in eastern Asia might benefit prominently from this approach.
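The spectral step of such an analysis can be sketched with a plain discrete Fourier transform: a synthetic susceptibility series carrying a 400 kyr eccentricity cycle should place its dominant power at the matching frequency bin (synthetic data, not the Shilou record):

```python
import cmath
from math import cos, pi

dt_kyr = 5.0                        # sample spacing in kyr
n = 400                             # 2000 kyr of synthetic record
series = [cos(2 * pi * (i * dt_kyr) / 400.0) for i in range(n)]  # 400 kyr cycle

def dft_power(x):
    """Naive O(n^2) DFT power spectrum; adequate for a short series."""
    n = len(x)
    return [
        abs(sum(x[t] * cmath.exp(-2j * pi * k * t / n) for t in range(n))) ** 2
        for k in range(n // 2)
    ]

power = dft_power(series)
k_peak = max(range(1, len(power)), key=power.__getitem__)   # skip the DC bin
period_kyr = n * dt_kyr / k_peak
print(period_kyr)
```

With the record length an exact multiple of the cycle, the peak falls exactly in bin k = 5 and the recovered period is 400 kyr; real records need tapering and, as in the paper, wavelet methods to handle non-stationarity.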

  2. Interpreting the cosmic far-infrared background anisotropies using a gas regulator model

    CERN Document Server

    Wu, Hao-Yi; Teyssier, Romain

    2016-01-01

Cosmic far-infrared background (CFIRB) is a powerful probe of the history of star formation rate and the connection between baryons and dark matter. In this work, we explore to what extent the CFIRB anisotropies can be reproduced by a simple physical framework for galaxy evolution, the gas regulator (bathtub) model. The model is based on continuity equations for gas, stars, and metals, taking into account cosmic gas accretion, star formation, and gas ejection. Our model not only provides a good fit to the CFIRB power spectra measured by Planck, but also agrees well with the correlation between CFIRB and gravitational lensing, far-infrared galaxy number counts, and bolometric infrared luminosity functions. The strong clustering of CFIRB indicates a large galaxy bias, which corresponds to haloes of mass 10^12.5 Msun at z=2; thus, CFIRB favors strong infrared emission in massive haloes, which is higher than the expectation from the star formation rate. We provide constraints and fitting functions for the cosmic...

  3. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    Directory of Open Access Journals (Sweden)

    M. Bolot

    2013-08-01

Full Text Available The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener–Bergeron–Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.
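The distillation that these in-cloud processes buffer or enhance is classically described by Rayleigh fractionation, where the isotope ratio of the remaining vapour follows R = R0 · f^(α−1) as a fraction f of the vapour remains. A minimal sketch with an illustrative equilibrium fractionation factor, not the paper's kinetic treatment:

```python
def rayleigh_delta(delta0_permil, f, alpha):
    """delta value (permil) of the vapour remaining after Rayleigh
    distillation to a remaining fraction f, given fractionation factor alpha."""
    return (delta0_permil + 1000.0) * f ** (alpha - 1.0) - 1000.0

alpha_eq = 1.0094   # illustrative liquid-vapour 18O fractionation factor near 20 C
profile = [round(rayleigh_delta(-12.0, f / 10, alpha_eq), 2) for f in range(10, 0, -1)]
print(profile)
```

The vapour grows progressively more depleted as f shrinks; re-equilibration with supercooled droplets, as the abstract notes, damps exactly this distillation trend.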

  4. A computer-human interaction model to improve the diagnostic accuracy and clinical decision-making during 12-lead electrocardiogram interpretation.

    Science.gov (United States)

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Breen, Cathal; Guldenring, Daniel; Gaffney, Robert; Gallagher, Anthony G; Peace, Aaron J; Henn, Pat

    2016-12-01

The 12-lead Electrocardiogram (ECG) presents a plethora of information and demands extensive knowledge and a high cognitive workload to interpret. Whilst the ECG is an important clinical tool, it is frequently incorrectly interpreted. Even expert clinicians are known to impulsively provide a diagnosis based on their first impression and often miss co-abnormalities. Given it is widely reported that there is a lack of competency in ECG interpretation, it is imperative to optimise the interpretation process. Predominantly the ECG interpretation process remains a paper-based approach, and whilst computer algorithms are used to assist interpreters by providing printed computerised diagnoses, there is a lack of interactive human-computer interfaces to guide and assist the interpreter. An interactive computing system was developed to guide the decision making process of a clinician when interpreting the ECG. The system decomposes the interpretation process into a series of interactive sub-tasks and encourages the clinician to systematically interpret the ECG. We have named this model 'Interactive Progressive based Interpretation' (IPI) as the user cannot 'progress' unless they complete each sub-task. Using this model, the ECG is segmented into five parts and presented over five user interfaces (1: Rhythm interpretation, 2: Interpretation of the P-wave morphology, 3: Limb lead interpretation, 4: QRS morphology interpretation with chest lead and rhythm strip presentation and 5: Final review of 12-lead ECG). The IPI model was implemented using emerging web technologies (i.e. HTML5, CSS3, AJAX, PHP and MySQL). It was hypothesised that this system would reduce the number of interpretation errors and increase diagnostic accuracy in ECG interpreters. To test this, we compared the diagnostic accuracy of clinicians when they used the standard approach (control cohort) with clinicians who interpreted the same ECGs using the IPI approach (IPI cohort). For the control cohort, the
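The "cannot progress until each sub-task completes" rule of the IPI model can be sketched as a small gated workflow; this is an illustrative structure, not the authors' web implementation:

```python
class ProgressiveInterpretation:
    """Gated checklist: stage i+1 unlocks only once stage i is recorded."""
    STAGES = [
        "rhythm",
        "p_wave_morphology",
        "limb_leads",
        "qrs_morphology",
        "final_review",
    ]

    def __init__(self):
        self.findings = {}          # ordered by insertion: completed stages

    @property
    def current_stage(self):
        done = len(self.findings)
        return self.STAGES[done] if done < len(self.STAGES) else None

    def complete(self, stage, finding):
        if stage != self.current_stage:
            raise ValueError(f"cannot progress: finish {self.current_stage!r} first")
        self.findings[stage] = finding

ecg = ProgressiveInterpretation()
ecg.complete("rhythm", "sinus rhythm, 72 bpm")
try:
    ecg.complete("final_review", "normal ECG")   # skipping ahead is blocked
except ValueError as e:
    print(e)
```

Forcing the interpreter through every stage in order is exactly what is hypothesised to reduce first-impression diagnoses and missed co-abnormalities.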

  5. Fracture propagation in Indiana Limestone interpreted via linear softening cohesive fracture model

    Science.gov (United States)

    Rinehart, Alex J.; Bishop, Joseph E.; Dewers, Thomas

    2015-04-01

    We examine the use of a linear softening cohesive fracture model (LCFM) to predict single-trace fracture growth in short-rod (SR) and notched 3-point-bend (N3PB) test configurations in Indiana Limestone. The broad goal of this work is to (a) understand the underlying assumptions of LCFM and (b) use experimental similarities and deviations from the LCFM to understand the role of loading paths of tensile fracture propagation. Cohesive fracture models are being applied in prediction of structural and subsurface fracture propagation in geomaterials. They lump the inelastic processes occurring during fracture propagation into a thin zone between elastic subdomains. LCFM assumes that the cohesive zone initially deforms elastically to a maximum tensile stress (σmax) and then softens linearly from the crack opening width at σmax to zero stress at a critical crack opening width w1. Using commercial finite element software, we developed LCFMs for the SR and N3PB configurations. After fixing σmax with results from cylinder splitting tests and finding an initial Young's modulus (E) with unconfined compressive strength tests, we manually calibrate E and w1 in the SR model against an envelope of experimental data. We apply the calibrated LCFM parameters in the N3PB geometry and compare the model against an envelope of N3PB experiments. For accurate simulation of fracture propagation, simulated off-crack stresses are high enough to require inclusion of damage. Different elastic moduli are needed in tension and compression. We hypothesize that the timing and location of shear versus extensional micromechanical failures control the qualitative macroscopic force-versus-displacement response in different tests. For accurate prediction, the LCFM requires a constant style of failure, which the SR configuration maintains until very late in deformation. The N3PB configuration does not maintain this constancy. To be broadly applicable between geometries and failure styles, the LCFM
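Integrating the linear softening law from zero opening to the critical width w1 gives the fracture energy in closed form, G_f = σmax·w1/2. A minimal sketch of the traction-separation curve with hypothetical parameter values (not calibrated Indiana Limestone numbers):

```python
def traction(w, sigma_max, w1):
    """Linear softening cohesive law: full strength at w=0, zero at w>=w1."""
    if w < 0:
        raise ValueError("opening width must be non-negative")
    return sigma_max * (1.0 - w / w1) if w < w1 else 0.0

def fracture_energy(sigma_max, w1, n=10000):
    """Midpoint-rule area under the traction-separation curve."""
    dw = w1 / n
    return sum(traction((i + 0.5) * dw, sigma_max, w1) for i in range(n)) * dw

sigma_max = 4.0e6   # Pa, hypothetical tensile strength
w1 = 50e-6          # m, hypothetical critical opening width
print(fracture_energy(sigma_max, w1), 0.5 * sigma_max * w1)
```

The midpoint rule is exact for a linear law, so the numerical area matches the closed-form G_f; calibrating σmax, w1 and E against the SR envelope is the manual step the paper describes.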

  6. Enabling innovative healthcare delivery through the use of focussed factory model: case of spine clinic of the future

    NARCIS (Netherlands)

    Wickramasinghe, N.; Bloemendal, J.W.; de Bruin, A.K.; Krabbendam, Johannes Jacobus

    2005-01-01

    Abstract: This paper discusses the concept of the focused factory model. We highlight that the focused factory model combines one of the key generic strategies identified by Michael Porter (1985) and the ideas and concepts from manufacturing. The genesis of this model has its roots in trying to

  7. Enabling innovative healthcare delivery through the use of focussed factory model: case of spine clinic of the future

    NARCIS (Netherlands)

    Wickramasinghe, N.; Bloemendal, J.W.; de Bruin, A.K.; Krabbendam, Johannes Jacobus

    2005-01-01

    Abstract: This paper discusses the concept of the focused factory model. We highlight that the focused factory model combines one of the key generic strategies identified by Michael Porter (1985) and the ideas and concepts from manufacturing. The genesis of this model has its roots in trying to rest

  8. Single and Double ITCZ in Aqua-Planet Models with Globally Uniform Sea Surface Temperature and Solar Insolation: An Interpretation

    Science.gov (United States)

    Chao, Winston C.; Chen, Baode; Einaudi, Franco (Technical Monitor)

    2001-01-01

It has been known for more than a decade that an aqua-planet model with globally uniform sea surface temperature and solar insolation angle can generate an ITCZ (intertropical convergence zone). Previous studies have shown that the ITCZ under such model settings can be changed between a single ITCZ over the equator and a double ITCZ straddling the equator through one of several measures. These measures include switching to a different cumulus parameterization scheme, changes within the cumulus parameterization scheme, and changes in other aspects of the model design such as horizontal resolution. In this paper an interpretation for these findings is offered. The ITCZ sits at the latitude where two types of attraction on it, both due to the earth's rotation, balance. The first type is equator-ward and is directly related to the earth's rotation and thus not sensitive to model design changes. The second type is poleward and is related to the convective circulation and thus is sensitive to model design changes. Due to the shape of the attractors, the balance of the two types of attractions is reached either at the equator or more than 10 degrees away from the equator. The former case results in a single ITCZ over the equator and the latter case a double ITCZ straddling the equator.

  9. Exploring the Gross Schoenebeck (Germany) geothermal site using a statistical joint interpretation of magnetotelluric and seismic tomography models

    Energy Technology Data Exchange (ETDEWEB)

    Munoz, Gerard; Bauer, Klaus; Moeck, Inga; Schulze, Albrecht; Ritter, Oliver [Deutsches GeoForschungsZentrum (GFZ), Telegrafenberg, 14473 Potsdam (Germany)

    2010-03-15

    Exploration for geothermal resources is often challenging because there are no geophysical techniques that provide direct images of the parameters of interest, such as porosity, permeability and fluid content. Magnetotelluric (MT) and seismic tomography methods yield information about subsurface distribution of resistivity and seismic velocity on similar scales and resolution. The lack of a fundamental law linking the two parameters, however, has limited joint interpretation to a qualitative analysis. By using a statistical approach in which the resistivity and velocity models are investigated in the joint parameter space, we are able to identify regions of high correlation and map these classes (or structures) back onto the spatial domain. This technique, applied to a seismic tomography-MT profile in the area of the Gross Schoenebeck geothermal site, allows us to identify a number of classes in accordance with the local geology. In particular, a high-velocity, low-resistivity class is interpreted as related to areas with thinner layers of evaporites; regions where these sedimentary layers are highly fractured may be of higher permeability. (author)
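The joint-parameter-space classification can be sketched as clustering co-located (resistivity, velocity) pairs and mapping the class labels back onto the model cells. A toy two-class k-means on synthetic values, not the Gross Schoenebeck models:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance in the joint parameter space."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, init, iters=20):
    """Two-class Lloyd's k-means over (log-resistivity, velocity) pairs."""
    centers = list(init)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [
            0 if dist2(pt, centers[0]) <= dist2(pt, centers[1]) else 1
            for pt in points
        ]
        for j in (0, 1):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels, centers

# synthetic model cells: a low-resistivity/fast class and a high-resistivity/slow class
rng = random.Random(1)
cells = [(1.0 + rng.gauss(0, 0.1), 5.5 + rng.gauss(0, 0.1)) for _ in range(30)]
cells += [(3.0 + rng.gauss(0, 0.1), 4.0 + rng.gauss(0, 0.1)) for _ in range(30)]
labels, centers = kmeans2(cells, init=[cells[0], cells[-1]])
print([round(c[0], 1) for c in centers], labels.count(0), labels.count(1))
```

Each label can then be painted back at its cell's spatial position, which is the "map the classes back onto the spatial domain" step of the statistical interpretation.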

  10. Using the pseudophase kinetic model to interpret chemical reactivity in ionic emulsions: determining antioxidant partition constants and interfacial rate constants.

    Science.gov (United States)

    Gu, Qing; Bravo-Díaz, Carlos; Romsted, Laurence S

    2013-06-15

Kinetic results obtained in cationic and anionic emulsions show for the first time that pseudophase kinetic models give reasonable estimates of the partition constants of reactants, here t-butylhydroquinone (TBHQ) between the oil and interfacial region, P(O)(I), and the water and interfacial region, P(W)(I), and of the interfacial rate constant, k(I), for the reaction with an arenediazonium ion in emulsions containing a 1:1 volume ratio of a medium chain length triglyceride, MCT, and aqueous acid or buffer. The results provide: (a) an explanation for the large difference in pH, >4 pH units, required to run the reaction in CTAB (pH 1.54, added HBr) and SDS (pH 5.71, acetate buffer) emulsions; (b) reasonable estimates of P(O)(I) and k(I) in the CTAB emulsions; (c) a sensible interpretation of added counterion effects based on ion exchange in SDS emulsions (Na(+)/H3O(+) ion exchange in the interfacial region) and Donnan equilibrium in CTAB emulsions (Br(-) increasing the interfacial H3O(+)); and (d) the significance of the effect of the much greater solubility of TBHQ in MCT versus octane, 1000/1, as the oil. These results should aid in interpreting the effects of ionic surfactants on chemical reactivity in emulsions in general and in selecting the most efficient antioxidant for particular food applications.
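In pseudophase treatments, the fraction of antioxidant residing in the interfacial region follows from the two partition constants and the pseudophase volume fractions, and the observed rate scales with that interfacial fraction. A hedged sketch of this bookkeeping with illustrative numbers, not fitted values from the paper:

```python
def interfacial_fraction(P_oi, P_wi, phi_o, phi_i, phi_w):
    """Fraction of antioxidant in the interfacial region, given the
    partition constants P_oi = [AO]_I/[AO]_O and P_wi = [AO]_I/[AO]_W
    and the oil/interface/water volume fractions."""
    # relative amounts, taking the interfacial concentration as reference
    n_i = phi_i
    n_o = phi_o / P_oi
    n_w = phi_w / P_wi
    return n_i / (n_i + n_o + n_w)

def observed_rate_constant(k_i, P_oi, P_wi, phi_o, phi_i, phi_w):
    """Emulsion-average rate constant, proportional to the interfacial fraction."""
    return k_i * interfacial_fraction(P_oi, P_wi, phi_o, phi_i, phi_w)

# illustrative values: 1:1 oil/water emulsion with a small interfacial volume
f = interfacial_fraction(P_oi=10.0, P_wi=1000.0, phi_o=0.495, phi_i=0.01, phi_w=0.495)
print(round(f, 3))
```

Even a thin interfacial region can hold a sizeable share of the antioxidant when both partition constants favour it, which is why interfacial concentrations, not stoichiometric ones, control the measured kinetics.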

  11. Interpreting Evidence.

    Science.gov (United States)

    Munsart, Craig A.

    1993-01-01

    Presents an activity that allows students to experience the type of discovery process that paleontologists necessarily followed during the early dinosaur explorations. Students are read parts of a story taken from the "American Journal of Science" and interpret the evidence leading to the discovery of Triceratops and Stegosaurus. (PR)

  12. Hysteresis model and statistical interpretation of energy losses in non-oriented steels

    Energy Technology Data Exchange (ETDEWEB)

    Mănescu, Veronica, E-mail: veronica.paltanea@upb.ro; Păltânea, Gheorghe; Gavrilă, Horia

    2016-04-01

In this paper the hysteresis energy losses in two non-oriented industrial steels (M400-65A and M800-65A) were determined, by means of an efficient classical Preisach model, which is based on the Pescetti–Biorci method for the identification of the Preisach density. The excess and the total energy losses were also determined, using a statistical framework, based on magnetic object theory. The hysteresis energy losses, in a non-oriented steel alloy, depend on the peak magnetic polarization and they can be computed using a Preisach model, due to the fact that in these materials there is a direct link between the elementary rectangular loops and the discontinuous character of the magnetization process (Barkhausen jumps). To determine the Preisach density it was necessary to measure the normal magnetization curve and the saturation hysteresis cycle. A system of equations was deduced and the Preisach density was calculated for a magnetic polarization of 1.5 T; then the hysteresis cycle was reconstructed. Using the same pattern for the Preisach distribution, the hysteresis cycle for 1 T was computed. The classical losses were calculated using a well-known formula and the excess energy losses were determined by means of the magnetic object theory. The total energy losses were mathematically reconstructed and compared with those measured experimentally.
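A classical Preisach model superposes rectangular hysterons that switch up at a field α and down at β (with α ≥ β). Identifying the density from measured curves is the paper's Pescetti-Biorci step; the mechanics of the operator itself can be sketched with a uniform density:

```python
def preisach_magnetization(history, grid_n=40, h_max=1.0):
    """Apply a field history to a uniform-density triangular grid of
    hysterons (alpha >= beta) and return the normalized magnetization."""
    hysterons = [
        (a * h_max / grid_n, b * h_max / grid_n)
        for a in range(-grid_n, grid_n + 1)
        for b in range(-grid_n, a + 1)          # enforce beta <= alpha
    ]
    state = {h: -1 for h in hysterons}          # start at negative saturation
    for field in history:
        for (alpha, beta) in hysterons:
            if field >= alpha:
                state[(alpha, beta)] = 1        # hysteron switches up
            elif field <= beta:
                state[(alpha, beta)] = -1       # hysteron switches down
    return sum(state.values()) / len(state)

up = preisach_magnetization([0.5])
down_after_up = preisach_magnetization([0.5, 0.0])
down_only = preisach_magnetization([0.0])
print(up, down_after_up, down_only)
```

Reaching field 0 after an excursion to 0.5 leaves a higher magnetization than reaching 0 directly: that history dependence is the hysteresis the elementary rectangular loops encode.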

  13. Interpretation by numerical modeling of data monitored at a cover for a nuclear waste repository

    Science.gov (United States)

    Gran, M.; Carrera, J.; Saaltink, M. W.

    2012-04-01

Two pilot covers have been set up at the Spanish facility for disposal of low and intermediate-level radioactive waste located at El Cabril (southern Spain). Their objective is to test the effectiveness in reducing or preventing surface erosion and runoff, infiltration and biointrusion. They consist of multilayer systems that profit from capillary barrier concepts. A complete monitoring system involving more than 200 sensors has been installed. At the same time, a complete meteorological station records meteorological data. This information is used to define initial and boundary conditions of a numerical model and also to test its validity. Here we discuss results of a preliminary 1D non-isothermal multiphase flow model with an atmospheric boundary (whose fluxes depend on meteorological data) at the top. Furthermore a sink-source term has been developed to simulate the effect of lateral flow caused by the steep slope (40%) of the cover. Joint analysis of numerical simulation results together with field data allows us to study the behaviour of the liquid, gas and energy fluxes in a layered slope and to study the effects of different hydraulic properties, capillary pressures and degrees of saturation of the materials on the magnitude and direction of these flows.

  14. Lake-level variations of Lago Fagnano, Tierra del Fuego: observations, modelling and interpretation

    Directory of Open Access Journals (Sweden)

    Luciano MENDOZA

    2010-02-01

Full Text Available The lake-level variations of Lago Fagnano, the largest lake in Tierra del Fuego, southernmost South America, on time scales from a few minutes to three years are investigated using a geodetic approach and applying the tools of time series analysis. Based on pressure tide gauge records at three locations in the lake, precise lake-level time series are derived. The analysis of the observed variations in space, time and frequency domain leads to the separation of the principal force-response mechanisms. We show that the lake-level variations in Lago Fagnano can be described essentially as a combination of lake-level shift and tilt and of surface seiches. Regarding the lake-level response to air-pressure forcing, a significant departure from the inverse barometer model is found. Surface seiche dynamics are particularly intense in Lago Fagnano, pointing towards exceptionally low dissipative friction. An undisturbed series of seiches lasting eleven days is presented, and at least eleven longitudinal modes are identified. Based on the characterisation of the main contributions in space and time as well as their relation to the driving forces, a model for the transfer of the lake-level variations at a reference point to an arbitrary location in the lake with an accuracy of 1 cm is developed.
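Longitudinal seiche periods in an enclosed basin follow Merian's formula, T_n = 2L / (n·sqrt(g·h)). A sketch with hypothetical basin dimensions, not the surveyed geometry of Lago Fagnano:

```python
from math import sqrt

def merian_period_minutes(length_m, depth_m, mode):
    """Merian's formula for the n-th longitudinal surface seiche
    of a closed rectangular basin of uniform depth."""
    g = 9.81
    return 2.0 * length_m / (mode * sqrt(g * depth_m)) / 60.0

L, h = 100e3, 100.0   # hypothetical: 100 km long, 100 m mean depth
periods = [round(merian_period_minutes(L, h, n), 1) for n in range(1, 12)]
print(periods)
```

The eleven modes scale as T_1/n, so identifying eleven longitudinal modes, as the abstract reports, amounts to resolving a harmonic sequence of periods in the spectra.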

  15. Seasonality of Oxygen isotope composition in cow (Bos taurus) hair and its model interpretation

    Science.gov (United States)

    Chen, Guo; Schnyder, Hans; Auerswald, Karl

    2017-04-01

Oxygen isotopes in animal and human tissues are expected to be good recorders of geographical origin and migration histories based on the isotopic relationship between hair oxygen and annual precipitation and the well-known spatial pattern of oxygen isotope composition in meteoric water. However, seasonal variation of oxygen isotope composition may diminish the origin information in the tissues. Here the seasonality of oxygen isotope composition in tail hair was investigated in a domestic suckler cow (Bos taurus) that underwent different ambient conditions, physiological states, and keeping and feeding strategies during five years. A detailed mechanistic model involving ambient conditions, soil properties and animal physiology was built to explain this variation. The measured oxygen isotope composition in hair was significantly related (pwater in a regression analysis. Modelling suggested that this relation was only partly derived from the direct influence of feed moisture. Ambient conditions (temperature, moisture) did not only influence the isotopic signal of precipitation but also affected the animal itself (drinking water demand, transcutaneous vapor etc.). The clear temporal variation thus resulted from complex interactions with multiple influences. The twofold influence of ambient conditions via the feed and via the animal itself is advantageous for tracing the geographic origin because the oxygen isotope composition is then less influenced by variations in moisture uptake; however, it is unfavorable for indicating the production system, e.g. to distinguish between milk produced from fresh grass or from silage.

  16. Hysteresis model and statistical interpretation of energy losses in non-oriented steels

    Science.gov (United States)

    Mănescu (Păltânea), Veronica; Păltânea, Gheorghe; Gavrilă, Horia

    2016-04-01

In this paper the hysteresis energy losses in two non-oriented industrial steels (M400-65A and M800-65A) were determined, by means of an efficient classical Preisach model, which is based on the Pescetti-Biorci method for the identification of the Preisach density. The excess and the total energy losses were also determined, using a statistical framework, based on magnetic object theory. The hysteresis energy losses, in a non-oriented steel alloy, depend on the peak magnetic polarization and they can be computed using a Preisach model, due to the fact that in these materials there is a direct link between the elementary rectangular loops and the discontinuous character of the magnetization process (Barkhausen jumps). To determine the Preisach density it was necessary to measure the normal magnetization curve and the saturation hysteresis cycle. A system of equations was deduced and the Preisach density was calculated for a magnetic polarization of 1.5 T; then the hysteresis cycle was reconstructed. Using the same pattern for the Preisach distribution, the hysteresis cycle for 1 T was computed. The classical losses were calculated using a well-known formula and the excess energy losses were determined by means of the magnetic object theory. The total energy losses were mathematically reconstructed and compared with those measured experimentally.

  17. Statistical Model for the Interpretation of Evidence for Bio-Signatures Simulated in virtual Mars Samples.

    Science.gov (United States)

    Mani, Peter; Heuer, Markus; Hofmann, Beda A.; Milliken, Kitty L.; West, Julia M.

This paper evaluates a mathematical model of bio-signature search processes on Mars samples returned to Earth and studied inside a Mars Sample Return Facility (MSRF). A simple porosity model for a returned Mars sample, based on initial observations on Mars meteorites, has been stochastically simulated and the data analysed in a computer study. The resulting false positive, true negative and false negative values - as a typical output of the simulations - were statistically analysed. The results were used in Bayes' statistics to correct the a priori probability of the presence of a bio-signature, and the resulting posterior probability was used in turn to improve the initial assumption of the value of extra-terrestrial presence of life forms in Mars material. Such an iterative algorithm can lead to a better estimate of the positive predictive value for life on Mars and therefore, together with Poisson statistics for a null result, it should be possible to bound the probability for the presence of extra-terrestrial bio-signatures to an upper level.
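The iterative Bayes update described above can be sketched directly: the posterior from one round of detection statistics becomes the prior for the next. The sensitivity and false-positive rate below are hypothetical placeholders for the simulation-derived error rates:

```python
def bayes_update(prior, sensitivity, false_positive_rate, detected):
    """Posterior probability of a true bio-signature after one test outcome."""
    if detected:
        num = sensitivity * prior
        den = sensitivity * prior + false_positive_rate * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = (1.0 - sensitivity) * prior + (1.0 - false_positive_rate) * (1.0 - prior)
    return num / den

p = 0.01    # sceptical a priori probability of a bio-signature in the sample
for outcome in (True, True, False):   # hypothetical sequence of test outcomes
    p = bayes_update(p, sensitivity=0.8, false_positive_rate=0.05, detected=outcome)
print(round(p, 4))
```

Each pass sharpens the estimate of the positive predictive value; a long run of null results drives the posterior toward the upper bound the abstract mentions via Poisson statistics.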

  18. Mechanistic QSAR models for interpreting degradation rates of sulfonamides in UV-photocatalysis systems.

    Science.gov (United States)

    Huang, Xiangfeng; Feng, Yi; Hu, Cui; Xiao, Xiaoyu; Yu, Daliang; Zou, Xiaoming

    2015-11-01

Photocatalysis is one of the most effective methods for treating antibiotic wastewater. Thus, it is of great significance to determine the relationship between degradation rates and structural characteristics of antibiotics in photocatalysis processes. In the present study, the photocatalytic degradation characteristics of 10 sulfonamides (SAs) were studied using two photocatalytic systems composed of nanophase titanium dioxide (nTiO2) plus ultraviolet (UV) and nTiO2/activated carbon fiber (ACF) plus UV. The results indicated that the largest apparent SA degradation rate constant (Kapp) is approximately 5 times as large as the smallest one. Based on the degradation mechanism and the partial least squares regression (PLS) method, optimum Quantitative Structure Activity Relationship (QSAR) models were developed for the two systems. The mechanistic models indicated that the degradation behaviour of SAs in the TiO2 system strongly relates to their highest occupied molecular orbital energy (Ehomo), the maximum values of nucleophilic attack (f(+)x), and the minimum values of the most negative partial charge on a main-chain atom (q(C)min), whereas the maximum values of OH radical attack (f(0)x) and the apparent adsorption rate constant values (kad) are the key factors affecting the degradation behaviour of SAs in the TiO2/ACF system.

  19. AMMI Model for Interpreting Clone-Environment Interaction in Starch Yield of Cassava

    Directory of Open Access Journals (Sweden)

    SHOLIHIN

    2011-03-01

    Full Text Available The aim of the study was to analyze the interaction between clone and environment for starch yield in six-month-old plants of cassava clones, based on the additive main effects and multiplicative interaction (AMMI) model. The experiments were conducted on mineral soil in four different locations: Lumajang (inceptisol), Kediri (entisol), Pati (alfisol), and Tulangbawang (ultisol). The experiments were carried out during 2004-2005, using a split-plot design with three replications. The main plots were the simple and the improved technology. Fifteen clones were used. The parameter recorded was starch yield (kg/ha) of the 6-month-old plants. The data were analyzed using the AMMI model. Based on the AMMI analysis, the environmental factors important in determining the stability of the starch yield were the soil density of the subsoil, the pH of the topsoil, and the maximum air humidity four months after planting. The clones CMM97001-87, CMM97002-183, CMM97011-191, CMM97006-44, and Adhira 4 were identified as stable clones for starch yield in 6-month-old plants. CMM97007-235 was adapted to maximum relative humidity 4 months after planting and to lower topsoil pH, whereas MLG 10311 was adapted to lower bulk density. The mean starch yield of MLG 10311 was the highest six months after planting.
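    The AMMI analysis used above amounts to a two-way main-effects fit followed by a singular value decomposition of the interaction residual; the clone-by-location yield table below is synthetic, and "stability" is read from near-zero first interaction-PC (IPCA1) scores.

    ```python
    # Sketch of the AMMI decomposition on a synthetic 15-clone x 4-location
    # yield table (values hypothetical): remove additive main effects, then
    # decompose the double-centered interaction residual by SVD. Clones with
    # IPCA1 scores near zero are the "stable" ones.
    import numpy as np

    rng = np.random.default_rng(2)
    Y = rng.normal(loc=20.0, scale=2.0, size=(15, 4))  # starch yields (synthetic)

    grand = Y.mean()
    clone_eff = Y.mean(axis=1, keepdims=True) - grand  # clone main effects
    env_eff = Y.mean(axis=0, keepdims=True) - grand    # environment main effects
    interaction = Y - grand - clone_eff - env_eff      # double-centered residual

    U, s, Vt = np.linalg.svd(interaction, full_matrices=False)
    ipca1_clone = U[:, 0] * np.sqrt(s[0])              # clone IPCA1 scores
    stable = np.argsort(np.abs(ipca1_clone))[:5]       # 5 most stable clones
    print("most stable clone indices:", stable)
    ```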

  20. Modeling and Interpreting CHAMP Magnetic Anomaly Field over China Continent Using Spherical Cap Harmonic Analysis

    Institute of Scientific and Technical Information of China (English)

    Fu Yuanyuan; Liu Qingsheng; Yang Tao

    2004-01-01

    Based on the CHAMP Magsat data set, spherical cap harmonic analysis was used to model the magnetic fields over the China continent. The data set used in the analysis includes the 15′×15′ gridded values of the CHAMP anomaly fields (latitude φ=25°N to 50°N and longitude λ=78°E to 135°E). The pole of the cap is located at φ=35°N and λ=110°E with a half-angle of 30°. The maximum index (Kmax) of the model is 30 and the total number of model coefficients is 961, which corresponds to a minimum wavelength at the earth's surface of about 400 km. The root mean square (RMS) deviations between the calculated and observed values are ~4 nT for ΔX, ~3 nT for ΔY and ~3.5 nT for ΔZ, respectively. Results show that positive anomalies are found mainly at the Tarim basin with ~6-8 nT, the Yangtze platform and North China platform with ~4 nT, and the Songliao basin with ~4-6 nT. In contrast, negative anomalies are mainly located in the Tibet orogenic belt with amplitudes of ~(-6)-(-8) nT. Upward continuation of magnetic anomalies was used to semi-quantitatively separate the magnetic anomalies at different depths of the crust. The magnetic anomalies at the earth's surface range from -6 to 10 nT for the upper crust, from -27 to 42 nT for the middle crust, and from -12 to 18 nT for the lower crust. The strikes of the magnetic anomalies for the upper crust are consistent with those for the middle crust, but not with those for the lower crust. The high positive magnetic anomalies mainly result from old continental nuclei and diastrophic blocks (e.g. the middle Sichuan continental nucleus, the middle Tarim basin continental nucleus, the Junggar diastrophic block and the Qaidam diastrophic block). The amplitudes of the magnetic anomalies of the old continental nuclei and diastrophic blocks are related to the evolution of the deep crust. These results improve our understanding of the crustal structure of the China continent.

  1. Deep geothermal systems interpreted by coupled thermo-hydraulic-mechanical-chemical numerical modeling

    Science.gov (United States)

    Peters, Max; Lesueur, Martin; Held, Sebastian; Poulet, Thomas; Veveakis, Manolis; Regenauer-Lieb, Klaus; Kohl, Thomas

    2017-04-01

    The dynamic response of the geothermal reservoirs of Soultz-sous-Forêts (NE France) and of a new site in Iceland is studied theoretically under fluid injection and production. Since the Soultz case can be considered the most comprehensive project in the area of enhanced geothermal systems (EGS), it is well suited for testing forward modeling techniques that aim at characterizing fluid dynamics and mechanical properties in any deeply seated, fractured crystalline reservoir [e.g. Held et al., 2014]. We present multi-physics finite element models using the recently developed framework MOOSE (mooseframework.org) that implicitly consider fully coupled feedback mechanisms of fluid-rock interaction at the depths where EGS are located (> 5 km), i.e. the effects of dissipative strain softening on chemical reactions and reactive transport [Poulet et al., 2016]. In a first suite of numerical experiments, we show that accurate simulation of propagation fronts allows us to study coupled fluid and heat transport along preferred pathways, and yields a transport time of the geothermal fluid between injection and production wells that is in good agreement with tracer experiments performed inside the natural reservoir. In a second series of simulations, motivated by induced-seismicity experiments and related damage along boreholes, we address borehole instabilities resulting from pore pressure variations and (a)seismic creep. To this end, we account for volumetric and deviatoric components, following the approach of Veveakis et al. (2016), and discuss the mechanisms triggering slow earthquakes in the stimulated reservoirs. Our study will allow concepts of unconventional geomechanics, previously reviewed on a theoretical basis [Regenauer-Lieb et al., 2015], to be applied to substantial engineering problems of deep geothermal reservoirs in the future. REFERENCES Held, S., Genter, A., Kohl, T., Kölbel, T., Sausse, J. and Schoenball, M. (2014). Economic evaluation of

  2. Analytic modeling, simulation and interpretation of broadband beam coupling impedance bench measurements

    Energy Technology Data Exchange (ETDEWEB)

    Niedermayer, U., E-mail: niedermayer@temf.tu-darmstadt.de [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Eidam, L. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); Boine-Frankenheim, O. [Institut für Theorie Elektromagnetischer Felder (TEMF), Technische Universität Darmstadt, Schloßgartenstraße 8, 64289 Darmstadt (Germany); GSI Helmholzzentrum für Schwerionenforschung, Planckstraße 1, 64291 Darmstadt (Germany)

    2015-03-11

    First, a generalized theoretical approach to beam coupling impedances and stretched-wire measurements is introduced. Applied to a circularly symmetric setup, this approach allows beam and wire impedances to be compared. The conversion formulas from measured TEM scattering parameters to impedances are thoroughly analyzed and compared to the analytical beam impedance solution. A proof of validity for the distributed impedance formula is given. The interaction of the beam or the TEM wave with dispersive materials such as ferrite is discussed. The dependence of the obtained beam impedance on the relativistic velocity β is investigated and found to depend on the material properties. Second, numerical simulations of wakefields and scattering parameters are compared. The applicability of the scattering parameter conversion formulas for finite device length is investigated. Laboratory measurement results for a circularly symmetric test setup, i.e. a ferrite ring, are shown and compared to analytic and numeric models. The optimization of the measurement process and error reduction strategies are discussed.

  3. Modeling flow in nanoporous, membrane reservoirs and interpretation of coupled fluxes

    Science.gov (United States)

    Geren, Filiz

    The average pore size in unconventional, tight-oil reservoirs is estimated to be less than 100 nm. At this pore size, Darcy flow is no longer the dominant flow mechanism and a combination of diffusive flows determines the flow characteristics. Concentration-driven self-diffusion is well known and is included in flow and transport models of porous media. However, when the sizes of the pores and pore throats decrease to the size of the hydrocarbon molecules, the porous medium acts like a semi-permeable membrane, and the size of the pore openings dictates the direction of transport between adjacent pores. Accordingly, characterization of flow and transport in tight unconventional plays requires an understanding of their membrane properties. This Master of Science thesis first highlights the membrane properties of nanoporous, unconventional reservoirs and then discusses how filtration effects can be incorporated into models of transport in nanoporous media within the coupled flux concept. The effect of filtration on fluid composition and its impact on black-oil fluid properties such as bubble point pressure is also demonstrated. To define filtration and filtration pressure in unconventional, tight-oil reservoirs, an analogy to chemical osmosis is applied to two pore systems connected by a pore throat that exhibits membrane properties. Because the selectivity of the pore throat permits the passage of fluid molecules according to their sizes, given a filtration pressure difference between the two pore systems, the concentration difference between the systems is determined by flash calculations. The results are expressed in the form of a filtration (membrane) efficiency, which is an essential parameter for defining coupled fluxes in porous media flow.

  4. Tumor size interpretation for predicting cervical lymph node metastasis using a differentiated thyroid cancer risk model

    Science.gov (United States)

    Shi, Rong-liang; Qu, Ning; Yang, Shu-wen; Ma, Ben; Lu, Zhong-wu; Wen, Duo; Sun, Guo-hua; Wang, Yu; Ji, Qing-hai

    2016-01-01

    Lymph node metastasis (LNM) is common in differentiated thyroid cancer (DTC), but management of clinically negative DTC is controversial. This study evaluated primary tumor size as a predictor of LNM. Multivariate logistic regression analysis was used for DTC patients who were treated with surgery between 2002 and 2012 in the Surveillance, Epidemiology, and End Results (SEER) database, to determine the association of tumor size at 10 mm increments with LNM. A predictive model was then developed to estimate the risk of LNM in DTC, using tumor size and other clinicopathological characteristics identified from the multivariate analysis. We identified 80,565 eligible patients with DTC in the SEER database. Final histology confirmed 9,896 (12.3%) cases with N1a disease and 8,194 (10.2%) cases with N1b disease. After the patients were classified into subgroups by tumor size, we found that the percentages of male sex, white race, follicular histology, gross extrathyroidal extension, lateral lymph node metastasis, and distant metastasis gradually increased with size. In multivariate analysis, tumor size was a significant independent prognostic factor for LNM; in particular, the odds ratio for lateral lymph node metastasis continued to increase with size relative to a 1–10 mm baseline. The coefficient for tumor size in the LNM predictive model was ~0.20, indicating that the log(odds ratio) for LNM increases by about 0.2 per unit increment in size relative to baseline. In conclusion, larger tumors are likely to have aggressive features and metastasize to a cervical compartment. Multistratification by size could provide more precise estimates of the likelihood of LNM before surgery. PMID:27574443
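    A quick sanity check of the reported coefficient (the value ~0.20 is taken from the abstract; reading it as a per-category log-odds slope is our interpretation): exponentiating the coefficient gives the multiplicative odds ratio per 10 mm size category.

    ```python
    # Back-of-envelope interpretation of a logistic-regression coefficient:
    # a log-odds slope of ~0.20 per 10 mm size category implies an odds
    # ratio of exp(0.20) per category, compounding across categories.
    import math

    coef_per_increment = 0.20           # from the abstract, per 10 mm category
    for k in range(1, 5):
        odds_ratio = math.exp(coef_per_increment * k)
        print(f"{k} increments above the 1-10 mm baseline: OR = {odds_ratio:.2f}")
    ```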

  5. Interpretability in Linear Brain Decoding

    OpenAIRE

    Kia, Seyed Mostafa; Passerini, Andrea

    2016-01-01

    Improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of brain decoding models. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, we present a simple definition for interpretability of linear brain decoding models. Then, we propose to combine the...

  6. How to Develop and Interpret a Credibility Assessment of Numerical Models for Human Research: NASA-STD-7009 Demystified

    Science.gov (United States)

    Nelson, Emily S.; Mulugeta, Lealem; Walton, Marlei; Myers, Jerry G.

    2014-01-01

    In the wake of the Columbia accident, the NASA-STD-7009 [1] credibility assessment was developed as a unifying platform to describe model credibility and the uncertainties in modeling predictions. This standard is now being adapted by NASA's Human Research Program to cover a wide range of numerical models for human research. When used properly, the standard can improve the process of code development by encouraging the use of best practices. It can also give management more insight for making informed decisions through a better understanding of a model's capabilities and limitations. To a newcomer, the abstractions presented in NASA-STD-7009 and the sheer volume of information that must be absorbed can be overwhelming. This talk is aimed at describing the credibility assessment, which is the heart of the standard, in plain terms. It will outline how to develop a credibility assessment under the standard. It will also show how to quickly interpret the graphs and tables that result from the assessment and how to drill down from the top-level view to the foundation of the assessment. Finally, it will highlight some of the resources that are available for further study.

  7. Development of the HT-BP neural network system for the identification of a well-test interpretation model

    Energy Technology Data Exchange (ETDEWEB)

    Sung, W.; Yoo, I.; Ra, S. [Hanyang Univ., Seoul (Korea, Republic of). Mineral and Petroleum Engineering Dept.; Park, H.

    1996-08-01

    The back propagation (BP) neural network approach has been the subject of recent focus because it can identify models from incomplete or distorted data without performing data preparation procedures. However, this approach uses only partial sets of data to reduce computing time and memory, and it may miss the points representing the characteristics of the curve shape. The resulting model may therefore be incorrect, forcing one to use sequential neural nets to find the correct model. The authors present the Hough transform (HT) method combined with the BP neural network to address this problem. With the aid of the HT, one can extract a single simple pattern from the full data set, even when it includes noisy and extraneous points. A number of exercises have also been conducted on published well-test data with the artificial intelligence neural network identification system (ANNIS) the authors developed. The results show that ANNIS is quite reliable, especially for incomplete or distorted data. They also demonstrate that the modified Levenberg-Marquardt interpretation model, also developed in this work, successfully estimates reservoir parameters.

  8. TimerQuant: a modelling approach to tandem fluorescent timer design and data interpretation for measuring protein turnover in embryos.

    Science.gov (United States)

    Barry, Joseph D; Donà, Erika; Gilmour, Darren; Huber, Wolfgang

    2016-01-01

    Studies on signalling dynamics in living embryos have been limited by a scarcity of in vivo reporters. Tandem fluorescent protein timers provide a generic method for detecting changes in protein population age and thus provide readouts for signalling events that lead to changes in protein stability or location. When imaged with quantitative dual-colour fluorescence microscopy, tandem timers offer detailed 'snapshot' readouts of signalling activity from subcellular to organismal scales, and therefore have the potential to revolutionise studies in developing embryos. Here we use computer modelling and embryo experiments to explore the behaviour of tandem timers in developing systems. We present a mathematical model of timer kinetics and provide software tools that will allow experimentalists to select the most appropriate timer designs for their biological question, and guide interpretation of the obtained readouts. Through the generation of a series of novel zebrafish reporter lines, we confirm experimentally that our quantitative model can accurately predict different timer responses in developing embryos and explain some less expected findings. For example, increasing the FRET efficiency of a tandem timer actually increases the ability of the timer to detect differences in protein half-life. Finally, while previous studies have used timers to monitor changes in protein turnover, our model shows that timers can also be used to facilitate the monitoring of gene expression kinetics in vivo. © 2016. Published by The Company of Biologists Ltd.
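    The timer behaviour described above can be caricatured with a simple steady-state kinetic model; the first-order maturation/degradation scheme and all rate values below are our own illustrative assumptions, not the paper's fitted model.

    ```python
    # Toy steady-state model of a tandem fluorescent timer (assumptions, not
    # the paper's equations): each fluorophore matures with first-order rate
    # m, and the tagged protein is degraded with rate k. At steady state the
    # matured fraction is m / (m + k), so the slow/fast intensity ratio
    # rises monotonically with protein half-life.
    import math

    def matured_fraction(m, k):
        return m / (m + k)

    # Maturation half-lives of 0.1 h (fast) and 5 h (slow); hypothetical.
    m_fast = math.log(2) / 0.1
    m_slow = math.log(2) / 5.0

    for half_life in (0.5, 2.0, 8.0):            # protein half-lives in hours
        k = math.log(2) / half_life
        ratio = matured_fraction(m_slow, k) / matured_fraction(m_fast, k)
        print(f"protein t1/2 = {half_life} h -> slow/fast ratio = {ratio:.3f}")
    ```

    The monotonic ratio-versus-half-life curve is what makes the dual-colour readout usable as a "timer": a higher slow/fast ratio signals an older, more stable protein population.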

  9. Interpretation of NO(x)/NO(y) observations from AASE-2 using a model of chemistry along trajectories

    Science.gov (United States)

    Kawa, S. R.; Fahey, D. W.; Wilson, J. C.; Schoeberl, M. R.; Douglass, A. R.; Stolarski, R. S.; Woodbridge, E. L.; Jonsson, H.; Lait, L. R.; Newman, P. A.

    1993-01-01

    In situ measurements of NO and NO(y) are used to derive the ratio NO(x)/NO(y) along the flight track of the NASA ER-2 aircraft. Data are presented for two flights at midlatitudes in October 1991 during the Airborne Arctic Stratospheric Expedition-2 (AASE-2). Aerosol particle surface area was concurrently measured. The observations are compared with a photochemical model integrated along back trajectories from the aircraft flight track. Comparison of observations with the model run along trajectories and at a fixed position clearly and quantitatively demonstrates the importance of an air parcel's dynamic history in interpretation of local chemical observations. Comparison of the data with model runs under different assumptions regarding heterogeneous chemistry further reinforces the case for occurrence of the reaction of N2O5 + H2O on sulfate aerosol surfaces in the atmosphere. Finally, comparisons for which relative changes in the model and the data are not consistent caution that our ability to resolve all the observations is not yet complete.

  10. Mountains on Io: High-resolution Galileo observations, initial interpretations, and formation models

    Science.gov (United States)

    Turtle, E.P.; Jaeger, W.L.; Keszthelyi, L.P.; McEwen, A.S.; Milazzo, M.; Moore, J.; Phillips, C.B.; Radebaugh, J.; Simonelli, D.; Chuang, F.; Schuster, P.; Alexander, D.D.A.; Capraro, K.; Chang, S.-H.; Chen, A.C.; Clark, J.; Conner, D.L.; Culver, A.; Handley, T.H.; Jensen, D.N.; Knight, D.D.; LaVoie, S.K.; McAuley, M.; Mego, V.; Montoya, O.; Mortensen, H.B.; Noland, S.J.; Patel, R.R.; Pauro, T.M.; Stanley, C.L.; Steinwand, D.J.; Thaller, T.F.; Woncik, P.J.; Yagi, G.M.; Yoshimizu, J.R.; Alvarez Del Castillo, E.M.; Beyer, R.; Branston, D.; Fishburn, M.B.; Muller, Birgit; Ragan, R.; Samarasinha, N.; Anger, C.D.; Cunningham, C.; Little, B.; Arriola, S.; Carr, M.H.; Asphaug, E.; Morrison, D.; Rages, K.; Banfield, D.; Bell, M.; Burns, J.A.; Carcich, B.; Clark, B.; Currier, N.; Dauber, I.; Gierasch, P.J.; Helfenstein, P.; Mann, M.; Othman, O.; Rossier, L.; Solomon, N.; Sullivan, R.; Thomas, P.C.; Veverka, J.; Becker, T.; Edwards, K.; Gaddis, L.; Kirk, R.; Lee, E.; Rosanova, T.; Sucharski, R.M.; Beebe, R.F.; Simon, A.; Belton, M.J.S.; Bender, K.; Fagents, S.; Figueredo, P.; Greeley, R.; Homan, K.; Kadel, S.; Kerr, J.; Klemaszewski, J.; Lo, E.; Schwarz, W.; Williams, D.; Williams, K.; Bierhaus, B.; Brooks, S.; Chapman, C.R.; Merline, B.; Keller, J.; Tamblyn, P.; Bouchez, A.; Dyundian, U.; Ingersoll, A.P.; Showman, A.; Spitale, J.; Stewart, S.; Vasavada, A.; Breneman, H.H.; Cunningham, W.F.; Johnson, T.V.; Jones, T.J.; Kaufman, J.M.; Klaasen, K.P.; Levanas, G.; Magee, K.P.; Meredith, M.K.; Orton, G.S.; Senske, D.A.; West, A.; Winther, D.; Collins, G.; Fripp, W.J.; Head, J. W.; Pappalardo, R.; Pratt, S.; Prockter, L.; Spaun, N.; Colvin, T.; Davies, M.; DeJong, E.M.; Hall, J.; Suzuki, S.; Gorjian, Z.; Denk, T.; Giese, B.; Koehler, U.; Neukum, G.; Oberst, J.; Roatsch, T.; Tost, W.; Wagner, R.; Dieter, N.; Durda, D.; Geissler, P.; Greenberg, R.J.; Hoppa, G.; Plassman, J.; Tufts, R.; Fanale, F.P.; Granahan, J.C.

    2001-01-01

    During three close flybys in late 1999 and early 2000 the Galileo spacecraft acquired new observations of the mountains that tower above Io's surface. These images have revealed surprising variety in the mountains' morphologies. They range from jagged peaks several kilometers high to lower, rounded structures. Some are very smooth, others are covered by numerous parallel ridges. Many mountains have margins that are collapsing outward in large landslides or series of slump blocks, but a few have steep, scalloped scarps. From these observations we can gain insight into the structure and material properties of Io's crust as well as into the erosional processes acting on Io. We have also investigated formation mechanisms proposed for these structures using finite-element analysis. Mountain formation might be initiated by global compression due to the high rate of global subsidence associated with Io's high resurfacing rate; however, our models demonstrate that this hypothesis lacks a mechanism for isolating the mountains. The large fraction (~40%) of mountains that are associated with paterae suggests that in some cases these features are tectonically related. Therefore we have also simulated the stresses induced in Io's crust by a combination of a thermal upwelling in the mantle with global lithospheric compression and have shown that this can focus compressional stresses. If this mechanism is responsible for some of Io's mountains, it could also explain the common association of mountains with paterae. Copyright 2001 by the American Geophysical Union.

  11. Interpreting H2O isotope variations in high-altitude ice cores using a cyclone model

    Science.gov (United States)

    Holdsworth, Gerald

    2008-04-01

    Vertical profiles of isotope (δ18O or δD) values versus altitude (z) from sea level to high altitude provide a link to cyclones, which impact most ice core sites. Cyclonic structure variations cause anomalous variations in ice core δ time series which may obscure the basic temperature signal. Only one site (Mount Logan, Yukon) provides a complete δ versus z profile generated solely from data. At other sites, such a profile has to be constructed by supplementing field data. This requires using the so-called isotopic or δ thermometer, which relates δ to a reference temperature (T). The construction of gapped sections of δ versus z curves requires assuming a typical atmospheric lapse rate (dT/dz), where T is air temperature, and using the slope (dδ/dT) of a site-derived δ thermometer to calculate dδ/dz. Using a three-layer model of a cyclone, examples are given to show geometrically how changes in the thickness of the middle, mixed layer lead to the appearance of anomalous δ values in time series (producing decalibration of the δ thermometer there). The results indicate that restrictions apply to the use of the δ thermometer in ice core paleothermometry, according to site altitude, regional meteorology, and climate state.
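    The profile-construction step described above is just the chain rule dδ/dz = (dδ/dT)(dT/dz); here is a numerical illustration with a hypothetical site calibration and a typical tropospheric lapse rate.

    ```python
    # Numerical illustration of constructing a gapped delta-vs-z section
    # (all values hypothetical): an assumed isotope-thermometer slope
    # d(delta)/dT combined with a typical lapse rate dT/dz gives the
    # vertical isotope gradient by the chain rule.
    ddelta_dT = 0.6      # per mil per degC (hypothetical site calibration)
    dT_dz = -6.5         # degC per km (typical tropospheric lapse rate)
    ddelta_dz = ddelta_dT * dT_dz
    print(f"d(delta18O)/dz = {ddelta_dz:.2f} per mil per km")

    # Fill a gapped section of the curve between two altitudes:
    z0, delta0 = 1.0, -12.0          # km, per mil (hypothetical anchor point)
    for z in (2.0, 3.0, 4.0):
        print(f"z = {z} km: delta ~ {delta0 + ddelta_dz * (z - z0):.1f} per mil")
    ```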

  12. Use of modeling and simulation in the planning, analysis and interpretation of ultrasonic testing; Einsatz von Modellierung und Simulation bei der Planung, Analyse und Interpretation von Ultraschallpruefungen

    Energy Technology Data Exchange (ETDEWEB)

    Algernon, Daniel [SVTI Schweizerischer Verein fuer technische Inspektionen, Wallisellen (Switzerland). ZfP-Labor; Grosse, Christian U. [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Zerstoerungsfreie Pruefung

    2016-05-01

    Acoustic testing methods such as ultrasound and impact-echo are important tools in building diagnostics. The range of applications includes thickness measurements, imaging of the internal component geometry, and the detection of voids (gravel pockets), delaminations or, possibly, grouting faults inside the metallic cladding tubes of tendon ducts. Fundamentally, acoustic methods for non-destructive testing (NDT) are based on the excitation of elastic waves that interact with the target object (e.g. a discontinuity to be detected in the component) at acoustic interfaces. This interaction must be detected in the signal received at the component surface and interpreted to draw conclusions about the presence of the target object and, optionally, to determine its approximate size and position. Although the underlying physical principles of applying elastic waves in NDT are known, their application can be complicated by restricted access, complex component geometries, or the type and form of the reflectors. Even estimating the chances of success of a test is often not trivial. These circumstances highlight the value of simulations, which provide a theoretically sound basis for testing and allow test systems to be optimized easily. The available simulation methods are varied; common choices are the finite element method, the elastodynamic finite integration technique, and semi-analytical calculation methods.

  13. Two-band model interpretation of the p- to n-transition in ternary tetradymite topological insulators

    Energy Technology Data Exchange (ETDEWEB)

    Chasapis, T. C., E-mail: t-chasapis@northwestern.edu, E-mail: m-kanatzidis@northwestern.edu; Calta, N. P.; Kanatzidis, M. G., E-mail: t-chasapis@northwestern.edu, E-mail: m-kanatzidis@northwestern.edu [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Koumoulis, D.; Leung, B. [Department of Chemistry and Biochemistry, University of California – Los Angeles, 607 Charles E. Young Drive East, Los Angeles, California 90095 (United States); Lo, S.-H.; Dravid, V. P. [Department of Materials Science and Engineering, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Bouchard, L.-S. [Department of Chemistry and Biochemistry, University of California – Los Angeles, 607 Charles E. Young Drive East, Los Angeles, California 90095 (United States); California NanoSystems Institute – UCLA, 570 Westwood Plaza, Los Angeles, California 90095 (United States)

    2015-08-01

    The requirement for large bulk resistivity in topological insulators has led to the design of complex ternary and quaternary phases with balanced donor and acceptor levels. A common feature of the optimized phases is that they lie close to the p- to n-transition. The tetradymite Bi2Te3-xSex system exhibits minimum bulk conductance at the ordered composition Bi2Te2Se. By combining local and integral measurements of the density of states, we find that the point of minimum electrical conductivity at x = 1.0, where carriers change from hole-like to electron-like, is characterized by conductivity of the mixed type. Our experimental findings, which are interpreted within the framework of a two-band model for the different carrier types, indicate that the mixed state originates from different types of native defects that strongly compensate at the crossover point.

  14. Viral epidemics in a cell culture: novel high resolution data and their interpretation by a percolation theory based model

    CERN Document Server

    Gönci, Balázs; Balogh, Emeric; Szabó, Bálint; Dénes, Ádám; Környei, Zsuzsanna; Vicsek, Tamás

    2010-01-01

    Because of its relevance to everyday life, the spreading of viral infections has been of central interest in a variety of scientific communities involved in fighting, preventing and theoretically interpreting epidemic processes. Recent large scale observations have resulted in major discoveries concerning the overall features of the spreading process in systems with highly mobile susceptible units, but virtually no data are available about observations of infection spreading for a very large number of immobile units. Here we present the first detailed quantitative documentation of percolation-type viral epidemics in a highly reproducible in vitro system consisting of tens of thousands of virtually motionless cells. We use a confluent astroglial monolayer in a Petri dish and induce productive infection in a limited number of cells with a genetically modified herpesvirus strain. This approach allows extreme high resolution tracking of the spatio-temporal development of the epidemic. We show that a simple model ...

  15. Structural Analysis of the Factors Influencing the Financing of Forestry Enterprises Based on Interpretive Structural Modeling (ISM)

    Institute of Scientific and Technical Information of China (English)

    Zhen WANG; Weiping LIU; Xiaomin JIANG

    2015-01-01

    Through a review of the related literature, we identify six major factors influencing the financing of China's forestry enterprises: insufficient national support; regulatory and institutional environmental factors; narrow financing channels; an inappropriate existing mortgage-backed approach; forestry production characteristics; and the enterprises' own defects. We then use interpretive structural modeling (ISM) from systems engineering to analyze the structure of the six factors and build a ladder-type hierarchy. Three factors (forestry production characteristics, the enterprises' shortcomings, and regulatory and institutional environmental factors) are classified as basic factors, and the other three as important factors. From the perspectives of the government and the enterprises, we put forward suggestions based on the basic and important factors to ease the financing difficulties of forestry enterprises.
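    The ISM/MICMAC computation described above (also used in record 1) boils down to a boolean transitive closure of a direct-influence matrix, from which each factor's driving power and dependence are read off; the adjacency matrix below is hypothetical, not the authors' survey data.

    ```python
    # ISM sketch: from a binary direct-influence (adjacency) matrix over the
    # six factors, compute the reachability matrix by Warshall's transitive
    # closure, then read off driving power (row sums) and dependence
    # (column sums) as in a MICMAC analysis. Matrix values are hypothetical.
    import numpy as np

    factors = ["national support", "regulation/institutions", "financing channels",
               "mortgage approach", "production characteristics", "enterprise defects"]
    A = np.array([
        [1, 0, 1, 1, 0, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [0, 0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1, 1],
        [0, 0, 1, 0, 0, 1],
    ], dtype=bool)                   # A[i, j]: factor i directly influences j

    R = A.copy()                     # Warshall's algorithm: R[i,j] becomes
    for k in range(len(factors)):    # true if i reaches j through k
        R |= np.outer(R[:, k], R[k, :])

    driving = R.sum(axis=1)          # MICMAC driving power
    dependence = R.sum(axis=0)       # MICMAC dependence
    for name, d, p in zip(factors, driving, dependence):
        print(f"{name:28s} driving={d} dependence={p}")
    ```

    Factors with high driving power and low dependence land at the bottom of the ladder-type hierarchy (the "basic" factors); high-dependence factors sit at the top.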

  16. Backward transfer entropy: Informational measure for detecting hidden Markov models and its interpretations in thermodynamics, gambling and causality

    Science.gov (United States)

    Ito, Sosuke

    2016-11-01

    The transfer entropy is a well-established measure of information flow, which quantifies directed influence between two stochastic time series and has been shown to be useful in a variety of fields of science. Here we introduce the transfer entropy of the backward (time-reversed) time series, called the backward transfer entropy, and show that the backward transfer entropy quantifies how far the dynamics are from a hidden Markov model. Furthermore, we discuss physical interpretations of the backward transfer entropy in two quite different settings: the thermodynamics of information processing and gambling with side information. In both settings, the backward transfer entropy characterizes a possible loss of some benefit, where the conventional transfer entropy characterizes a possible benefit. Our results imply a deep connection between thermodynamics and gambling in the presence of information flow, and suggest that the backward transfer entropy could be useful as a novel measure of information flow in nonequilibrium thermodynamics, biochemical sciences, economics and statistics.
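    A minimal plug-in estimator makes the definitions concrete: the transfer entropy from y to x with one-step histories, and its backward variant obtained by evaluating the same estimator on the time-reversed series. The coupled binary process below is synthetic.

    ```python
    # Plug-in sketch of (backward) transfer entropy for binary series:
    # TE_{y->x} = sum p(x', x, y) * log[ p(x' | x, y) / p(x' | x) ],
    # estimated from empirical counts. The backward variant is the same
    # quantity computed on the time-reversed series.
    import math
    import random
    from collections import Counter

    def transfer_entropy(x, y):
        """Plug-in TE from y to x (one-step histories), in nats."""
        triples = Counter(zip(x[1:], x[:-1], y[:-1]))
        pairs_xy = Counter(zip(x[:-1], y[:-1]))
        pairs_xx = Counter(zip(x[1:], x[:-1]))
        singles_x = Counter(x[:-1])
        n = len(x) - 1
        te = 0.0
        for (x1, x0, y0), c in triples.items():
            p_joint = c / n
            p_cond_xy = c / pairs_xy[(x0, y0)]
            p_cond_x = pairs_xx[(x1, x0)] / singles_x[x0]
            te += p_joint * math.log(p_cond_xy / p_cond_x)
        return te

    def backward_transfer_entropy(x, y):
        return transfer_entropy(x[::-1], y[::-1])

    random.seed(0)
    y = [random.randint(0, 1) for _ in range(5000)]
    x = [0] + [y[t] if random.random() < 0.8 else random.randint(0, 1)
               for t in range(4999)]             # x copies y with prob 0.8
    print(f"forward  TE(y->x) = {transfer_entropy(x, y):.3f} nats")
    print(f"backward TE(y->x) = {backward_transfer_entropy(x, y):.3f} nats")
    ```

    Because the plug-in estimator is a conditional mutual information of the empirical distribution, both quantities are nonnegative; with the strong y-to-x coupling above, the forward value is well away from zero.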

  17. A new metric enabling an exact hypergraph model for the communication volume in distributed-memory parallel applications

    NARCIS (Netherlands)

    Fortmeier, O.; Bücker, H.M.; Fagginger Auer, B.O.; Bisseling, R.H.

    2013-01-01

    A hypergraph model for mapping applications with an all-neighbor communication pattern to distributed-memory computers is proposed, which originated in finite element triangulations. Rather than approximating the communication volume for linear algebra operations, this new model represents the communication volume exactly.
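    For context, the communication volume such hypergraph models capture is conventionally the (λ − 1) metric: each hyperedge (net) contributes the number of distinct parts it spans minus one. A minimal sketch, with a hypothetical partition:

```python
def comm_volume(nets, part):
    """(lambda - 1) metric: for each net, distinct parts spanned minus one."""
    return sum(len({part[v] for v in net}) - 1 for net in nets)

# Hypothetical example: 4 vertices on 2 processors, 3 nets.
nets = [(0, 1), (1, 2, 3), (0, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(comm_volume(nets, part))  # 2: nets (1,2,3) and (0,3) each cross the cut once
```

    A partitioner minimizes this quantity subject to a load-balance constraint; the paper's contribution is a metric under which the modeled volume matches the application's true communication exactly.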

  18. New Mouse Model for Chronic Infections by Gram-Negative Bacteria Enabling the Study of Anti-Infective Efficacy and Host-Microbe Interactions

    Science.gov (United States)

    Pletzer, Daniel; Mansour, Sarah C.; Wuerth, Kelli; Rahanjam, Negin

    2017-01-01

    ABSTRACT Only a few, relatively cumbersome animal models enable long-term Gram-negative bacterial infections that mimic human situations, where untreated infections can last for weeks. Here, we describe a simple murine cutaneous abscess model that enables chronic or progressive infections, depending on the subcutaneously injected bacterial strain. In this model, Pseudomonas aeruginosa cystic fibrosis epidemic isolate LESB58 caused localized high-density skin and soft tissue infections and necrotic skin lesions for up to 10 days but did not disseminate in either CD-1 or C57BL/6 mice. The model was adapted for use with four major Gram-negative nosocomial pathogens, Acinetobacter baumannii, Klebsiella pneumoniae, Enterobacter cloacae, and Escherichia coli. This model enabled noninvasive imaging and tracking of lux-tagged bacteria, the influx of activated neutrophils, and production of reactive oxygen-nitrogen species at the infection site. Screening antimicrobials against high-density infections showed that local but not intravenous administration of gentamicin, ciprofloxacin, and meropenem significantly but incompletely reduced bacterial counts and superficial tissue dermonecrosis. Bacterial RNA isolated from the abscess tissue revealed that Pseudomonas genes involved in iron uptake, toxin production, surface lipopolysaccharide regulation, adherence, and lipase production were highly upregulated whereas phenazine production and expression of global activator gacA were downregulated. The model was validated for studying virulence using mutants of more-virulent P. aeruginosa strain PA14. Thus, mutants defective in flagella or motility, type III secretion, or siderophore biosynthesis were noninvasive and suppressed dermal necrosis in mice, while a strain with a mutation in the bfiS gene encoding a sensor kinase showed enhanced invasiveness and mortality in mice compared to controls infected with wild-type P. aeruginosa PA14. PMID:28246361

  19. Mathematical Language / Scientific Interpretation / Theological Interpretation

    Directory of Open Access Journals (Sweden)

    Bodea Marcel Smilihon

    2015-05-01

    Full Text Available The specific languages referred to in this presentation are: scientific language, mathematical language, theological language and philosophical language. Cosmological, scientific or theological models understood as distinct interpretations of a common symbolic language do not ensure, by such a common basis, a possible or legitimate correspondence of certain units of meaning. Mathematics understood as a symbolic language used in scientific and theological interpretation does not bridge science and theology; it only allows the assertion of a rational-mathematical unity of expression. In this perspective, theology is no less rational than science. The activity of interpretation has an interdisciplinary character; it is a necessary condition of dialogue. We cannot speak of dialogue without communication between various fields, without passing from one specialized language to another. The present paper aims to bring out this aspect.

  20. Modelling, Simulation & Analysis (MS&A): Potent Enabling Tools for Planning and Executing Complex Major National Events

    Science.gov (United States)

    2011-10-01

    called flexible cartography, this approach models relevant results for the processing of interdependencies (DRDC CSS TM 2011-20). Geographic Information Systems (GIS) to Enhance Academic Capability of Philippine Higher Education Institutions, Asian Journal of Business and Governance.

  1. Interpreting the nonlinear dielectric response of glass-formers in terms of the coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Ngai, K. L. [CNR-IPCF, Largo Bruno Pontecorvo 3, I-56127 Pisa, Italy and Dipartimento di Fisica, Università di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa (Italy)

    2015-03-21

    Nonlinear dielectric measurements at high electric fields of glass-forming glycerol and propylene carbonate were initially carried out to elucidate the dynamically heterogeneous nature of the structural α-relaxation. Recently, the measurements were extended to sufficiently high frequencies to investigate the nonlinear dielectric response of faster processes, including the so-called excess wing (EW), which appears as a second power law at high frequencies in the loss spectra of many glass formers without a resolved secondary relaxation. While a strong increase of dielectric constant and loss is found in the nonlinear dielectric response of the α-relaxation, there is no significant change in the EW. This difference in the nonlinear dielectric properties of the EW and the α-relaxation, a surprise to the experimentalists who found it, is explained in the framework of the coupling model by identifying the EW investigated with the nearly constant loss (NCL) of caged molecules, originating from the anharmonicity of the intermolecular potential. The NCL is terminated at longer times (lower frequencies) by the onset of the primitive relaxation, which is followed sequentially by relaxation processes involving increasing numbers of molecules until the terminal Kohlrausch α-relaxation is reached. These intermediate faster relaxations, combined to form the so-called Johari-Goldstein (JG) β-relaxation, are spatially and dynamically heterogeneous and hence exhibit nonlinear dielectric effects, as found in glycerol and propylene carbonate, where the JG β-relaxation is not resolved, and in D-sorbitol, where it is resolved. Like the linear susceptibility, χ₁(f), the frequency dispersion of the third-order dielectric susceptibility, χ₃(f), was found to depend primarily on the α-relaxation time and to be otherwise independent of temperature T and pressure P. I show this property of the frequency dispersions of χ₁(f) and χ₃(f) is the characteristic of the many

  2. Evolution of the quaternary magmatic system, Mineral Mountains, Utah: Interpretations from chemical and experimental modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nash, W.P.; Crecraft, H.R.

    1982-09-01

    The evolution of silicic magmas in the upper crust is characterized by the establishment of chemical and thermal gradients in the upper portion of magma chambers. The chemical changes observed in rhyolite magmas erupted over a period of 300,000 years in the Mineral Mountains are similar to those recorded at Twin Peaks, Utah, and in the spatially zoned Bishop Tuff from Long Valley, California. Chemical and fluid dynamic models indicate that cooling of a silicic magma body from the top and sides can result in the formation of a roof zone above a convecting region which is chemically and thermally stratified, as well as highly fractionated and water rich. Crystallization experiments have been performed with sodium carbonate solutions as an analog to crystallization in magmatic systems. Top and side cooling of a homogeneous sodium carbonate solution results in crystallization along the top and sides and upward convection of sodium carbonate-depleted fluid. A stably stratified roof zone, which is increasingly water rich and cooler upwards, develops over a thermally and chemically homogeneous convecting region. Crystallization at the top ultimately ceases, and continued upward convection of water-rich fluid causes a slight undersaturation adjacent to the roof despite cooler temperatures. By analogy, crystallization at the margins of a magma chamber and buoyant rise of the fractionated boundary layer into the roof zone can account for the chemical evolution of the magma system at the Mineral Mountains. To produce compositionally stratified silicic magmas requires thermal input to a silicic system via mafic magmas. The small volume, phenocryst-poor rhyolite magma which persisted for at least 300,000 years in the Mineral Mountains requires the presence of a continued thermal input from a mafic magma source. The presence of silicic lavas signifies that there is a substantial thermal anomaly both in the crust and upper mantle. The production of silicic lavas requires (1) the

  3. Genome-scale modeling using flux ratio constraints to enable metabolic engineering of clostridial metabolism in silico

    Directory of Open Access Journals (Sweden)

    McAnulty Michael J

    2012-05-01

    Full Text Available Abstract Background Genome-scale metabolic networks and flux models are an effective platform for linking an organism's genotype to its phenotype. However, few modeling approaches offer predictive capabilities to evaluate potential metabolic engineering strategies in silico. Results A new method called “flux balance analysis with flux ratios (FBrAtio)” was developed in this research and applied to a new genome-scale model of Clostridium acetobutylicum ATCC 824 (iCAC490) that contains 707 metabolites and 794 reactions. FBrAtio was used to model wild-type metabolism and metabolically engineered strains of C. acetobutylicum where only flux ratio constraints and thermodynamic reversibility of reactions were required. The FBrAtio approach allowed solutions to be found through standard linear programming. Five flux ratio constraints were required to achieve a qualitative picture of wild-type metabolism for C. acetobutylicum for the production of: (i) acetate, (ii) lactate, (iii) butyrate, (iv) acetone, (v) butanol, (vi) ethanol, (vii) CO2 and (viii) H2. Results of this simulation study coincide with published experimental results and show that knockdown of the acetoacetyl-CoA transferase increases butanol-to-acetone selectivity, while simultaneous over-expression of the aldehyde/alcohol dehydrogenase greatly increases ethanol production. Conclusions FBrAtio is a promising new method for constraining genome-scale models using internal flux ratios. The method was effective for modeling wild-type and engineered strains of C. acetobutylicum.
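    The central trick of flux ratio constraints is that they stay linear, so standard linear programming still applies. A toy branch point illustrates the rewriting (the network and the ratio value are hypothetical, not taken from iCAC490): the ratio v_a/(v_a + v_b) = r becomes (1 − r)·v_a − r·v_b = 0, an ordinary linear equality that can be appended to the stoichiometric system.

```python
# Toy branch point: uptake flux v_in splits into v_a and v_b.
# Steady state:                 v_a + v_b = v_in
# Flux ratio r = v_a/(v_a+v_b) rewritten: (1 - r)*v_a - r*v_b = 0
r, v_in = 0.7, 10.0                        # hypothetical values

# Solve the 2x2 linear system by Cramer's rule (no external libraries).
det = 1.0 * (-r) - 1.0 * (1.0 - r)         # = -1
v_a = (v_in * (-r) - 1.0 * 0.0) / det
v_b = (1.0 * 0.0 - (1.0 - r) * v_in) / det
print(round(v_a, 6), round(v_b, 6))        # 7.0 3.0
```

    In a genome-scale setting the same rewritten rows are added to the constraint matrix of the flux balance LP, which is why FBrAtio needs no nonlinear solver.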

  4. A New Approach and Analysis of Modeling the Human Body in RFID-Enabled Body-Centric Wireless Systems

    Directory of Open Access Journals (Sweden)

    Karoliina Koski

    2014-01-01

    Full Text Available Body-centric wireless systems demand wearable sensor and tag antennas that have robust impedance matching and provide enough gain for a reliable wireless communication link. In this paper, we discuss a novel and practical technique for the modeling of the human body in UHF RFID body-centric wireless systems. What makes this technique different is that we base the human model on the measured far-field response from a reference tag attached to the human body. In this way, the human body model accounts for the encountered human body effects on the tag performance. The on-body measurements are fast, which allows establishing a catalog of human body models for different tag locations and human subjects. Such a catalog would provide a ready simulation model for a wide range of wireless body-centric applications in order to initiate a functional design. Our results demonstrate that the suggested modeling technique can be used in the design and optimization of wearable antennas for different real-case body-centric scenarios.
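    As a rough sketch of why the body model matters for the communication link, the forward-link read range of a passive UHF tag can be estimated Friis-style; in practice the tag gain and power-transfer coefficient would come from a measured on-body model of the kind described. All parameter values below are hypothetical:

```python
import math

def read_range_m(freq_hz, eirp_w, tag_gain_dbi, tau, tag_sensitivity_w):
    """Forward-link read range: r = (lambda/4pi) * sqrt(EIRP * G_tag * tau / P_th)."""
    lam = 299_792_458 / freq_hz
    g_tag = 10 ** (tag_gain_dbi / 10)
    return (lam / (4 * math.pi)) * math.sqrt(eirp_w * g_tag * tau / tag_sensitivity_w)

# Hypothetical EU UHF RFID scenario: 866 MHz, 3.28 W EIRP, -18 dBm chip sensitivity.
p_th = 10 ** (-18 / 10) / 1000          # -18 dBm in watts
free_space = read_range_m(866e6, 3.28, 2.0, 0.5, p_th)
on_body = read_range_m(866e6, 3.28, -4.0, 0.5, p_th)   # body-degraded tag gain
print(round(free_space, 1), round(on_body, 1))
```

    The 6 dB gain penalty assumed for the on-body case roughly halves the read range, which is the kind of effect a measurement-based body model lets a designer anticipate before fabrication.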

  5. The Causal Meaning of Genomic Predictors and How It Affects Construction and Comparison of Genome-Enabled Selection Models

    Science.gov (United States)

    Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.

    2015-01-01

    The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
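    The distinction between predictive ability and causal effect can be made concrete with a toy mediation simulation (entirely illustrative; the model and coefficients are not from the paper): conditioning on a mediator trait improves fit, yet shrinks the genomic coefficient from the total causal effect a breeder needs toward the direct effect, which is zero here.

```python
import random
import statistics

random.seed(1)
n = 20000
g = [random.gauss(0, 1) for _ in range(n)]               # genomic score
m = [0.8 * gi + random.gauss(0, 1) for gi in g]          # mediator trait
y = [0.5 * mi + random.gauss(0, 1) for mi in m]          # phenotype; total effect of g = 0.4

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# Marginal regression y ~ g recovers the TOTAL causal effect (~0.4).
b_marg = cov(g, y) / cov(g, g)

# Regression y ~ g + m via normal equations for two centered predictors.
sgg, smm, sgm = cov(g, g), cov(m, m), cov(g, m)
sgy, smy = cov(g, y), cov(m, y)
det = sgg * smm - sgm ** 2
b_g = (sgy * smm - smy * sgm) / det    # ~0.0: only the direct effect remains
b_m = (smy * sgg - sgy * sgm) / det    # ~0.5
print(round(b_marg, 2), round(b_g, 2), round(b_m, 2))
```

    The two-predictor model predicts y better, but its genomic coefficient would wrongly suggest selection on g is useless, echoing the abstract's point that cross-validation performance alone can misrank models for selection purposes.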

  6. Using simulation to interpret a discrete time survival model in a complex biological system: fertility and lameness in dairy cows.

    Science.gov (United States)

    Hudson, Christopher D; Huxley, Jonathan N; Green, Martin J

    2014-01-01

    The ever-growing volume of data routinely collected and stored in everyday life presents researchers with a number of opportunities to gain insight and make predictions. This study aimed to demonstrate the usefulness, in a specific clinical context, of a simulation-based technique called probabilistic sensitivity analysis (PSA) in interpreting the results of a discrete time survival model based on a large dataset of routinely collected dairy herd management data. Data from 12,515 dairy cows (from 39 herds) were used to construct a multilevel discrete time survival model in which the outcome was the probability of a cow becoming pregnant during a given two-day period of risk, with the presence or absence of a recorded lameness event during various time frames relative to the risk period amongst the potential explanatory variables. A separate simulation model was then constructed to evaluate the wider clinical implications of the model results (i.e. the potential for a herd's incidence rate of lameness to influence its overall reproductive performance) using PSA. Although the discrete time survival analysis revealed some relatively large associations between lameness events and risk of pregnancy (for example, occurrence of a lameness case within 14 days of a risk period was associated with a 25% reduction in the risk of the cow becoming pregnant during that risk period), PSA revealed that, when viewed in the context of a realistic clinical situation, a herd's lameness incidence rate is highly unlikely to influence its overall reproductive performance to a meaningful extent in the vast majority of situations. Construction of a simulation model within a PSA framework proved to be a very useful additional step to aid contextualisation of the results from a discrete time survival model, especially where the research is designed to guide on-farm management.
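    The PSA idea, propagating parameter uncertainty through a herd-level simulation, can be sketched as follows. All rates, herd sizes and distributions below are hypothetical placeholders, not the study's fitted values:

```python
import random

def herd_pregnancy_rate(lameness_incidence, risk_ratio, rng,
                        base_risk=0.04, n_cows=200, n_periods=60):
    """Fraction of cows pregnant within n_periods two-day risk periods."""
    pregnant = 0
    for _ in range(n_cows):
        # Per-period pregnancy risk, reduced for lame cows.
        p = base_risk * (risk_ratio if rng.random() < lameness_incidence else 1.0)
        if any(rng.random() < p for _ in range(n_periods)):
            pregnant += 1
    return pregnant / n_cows

# PSA loop: draw the lameness effect from its uncertainty distribution
# each iteration, then compare a low- and a high-lameness herd.
rng = random.Random(42)
rates_low, rates_high = [], []
for _ in range(500):
    rr = min(1.0, rng.gauss(0.75, 0.05))                   # hypothetical 25% risk reduction
    rates_low.append(herd_pregnancy_rate(0.05, rr, rng))   # low-lameness herd
    rates_high.append(herd_pregnancy_rate(0.45, rr, rng))  # high-lameness herd
mean_low = sum(rates_low) / len(rates_low)
mean_high = sum(rates_high) / len(rates_high)
print(round(mean_low, 3), round(mean_high, 3))
```

    Even with a sizeable per-period risk reduction, the simulated herd-level difference is modest, which mirrors the study's conclusion that a large within-cow association need not translate into a meaningful herd-level effect.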