WorldWideScience

Sample records for invalid lnt model

  1. Radiation, Ecology and the Invalid LNT Model: The Evolutionary Imperative

    OpenAIRE

    Parsons, Peter A.

    2006-01-01

    Metabolic and energetic efficiency, and hence fitness of organisms to survive, should be maximal in their habitats. This tenet of evolutionary biology invalidates the linear-no-threshold (LNT) model for the risk consequences of environmental agents. Hormesis in response to selection for maximum metabolic and energetic efficiency, or minimum metabolic imbalance, to adapt to a stressed world dominated by oxidative stress should therefore be universal. Radiation hormetic zones extending substanti...

  2. Radiation, ecology and the invalid LNT model: the evolutionary imperative.

    Science.gov (United States)

    Parsons, Peter A

    2006-09-27

    Metabolic and energetic efficiency, and hence fitness of organisms to survive, should be maximal in their habitats. This tenet of evolutionary biology invalidates the linear no-threshold (LNT) model for the risk consequences of environmental agents. Hormesis in response to selection for maximum metabolic and energetic efficiency, or minimum metabolic imbalance, to adapt to a stressed world dominated by oxidative stress should therefore be universal. Radiation hormetic zones extending substantially beyond common background levels can be explained by metabolic interactions among multiple abiotic stresses. Demographic and experimental data are mainly in accord with this expectation. Therefore, non-linearity becomes the primary model for assessing risks from low-dose ionizing radiation. This is the evolutionary imperative upon which risk assessment for radiation should be based.

  3. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2017-01-01

    This paper reveals that nearly 25 years after the National Academy of Sciences (NAS) Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  4. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu [Department of Environmental Health Sciences, School of Public Health and Health Sciences, Morrill I, N344, University of Massachusetts, Amherst, MA 01003 (United States)

    2017-04-15

    This paper reveals that nearly 25 years after the National Academy of Sciences (NAS) Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  5. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT.

    Science.gov (United States)

    Calabrese, Edward J

    2017-04-01

    This paper reveals that nearly 25 years after the National Academy of Sciences (NAS), Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, which was a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Linear non-threshold (LNT) radiation hazards model and its evaluation

    International Nuclear Information System (INIS)

    Min Rui

    2011-01-01

    To introduce the linear non-threshold (LNT) model used in studies of the dose effects of radiation hazards and to evaluate its application, a comprehensive literature analysis was carried out. The results show that the LNT model describes biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable (RCR) model of cell radiation effects can account well for cell survival curves across the high, medium and low absorbed dose ranges. Many uncertainties remain in assessment models of effective dose from internal radiation that are based on the LNT assumption and on individual mean organ equivalent dose, and it is necessary to establish gender-specific voxel human models that take gender differences into account. In short, the advantages and disadvantages of the various models coexist; until new theory and new models are established, retaining the LNT model remains the most scientifically defensible position. (author)
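
    The abstract contrasts the LNT model with the repairable-conditionally repairable (RCR) survival model but does not reproduce either equation. As a rough illustration, the functional forms below are the commonly published ones, assumed here rather than quoted from the paper:

      % Commonly published forms, assumed for illustration only.
      % LNT: excess risk grows in proportion to dose D.
      \[ R_{\mathrm{LNT}}(D) = \alpha D \]
      % RCR cell survival: an undamaged fraction plus a conditionally
      % repairable fraction, which yields non-linear low-dose behaviour.
      \[ S_{\mathrm{RCR}}(D) = e^{-aD} + bD\,e^{-cD} \]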

  7. Model Uncertainty via the Integration of Hormesis and LNT as the Default in Cancer Risk Assessment.

    Science.gov (United States)

    Calabrese, Edward J

    2015-01-01

    On June 23, 2015, the US Nuclear Regulatory Commission (NRC) issued a formal notice in the Federal Register that it would consider whether "it should amend its 'Standards for Protection Against Radiation' regulations from the linear non-threshold (LNT) model of radiation protection to the hormesis model." The present commentary supports this recommendation based on (1) the flawed and deceptive history of the adoption of LNT by the US National Academy of Sciences (NAS) in 1956; (2) the documented capacity of hormesis to make more accurate predictions of biological responses for diverse biological end points in the low-dose zone; (3) the occurrence of extensive hormetic data from the peer-reviewed biomedical literature that revealed hormetic responses are highly generalizable, being independent of biological model, end point measured, inducing agent, level of biological organization, and mechanism; and (4) the integration of hormesis and LNT models via a model uncertainty methodology that optimizes public health responses at 10⁻⁴. Thus, both LNT and hormesis can be integratively used for risk assessment purposes, and this integration defines the so-called "regulatory sweet spot."

  8. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification is misleading, at best. These terms should be abandoned by the ground-water community.

  9. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 1. The Russell-Muller debate

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu

    2017-04-15

    This paper assesses the discovery of the dose-rate effect in radiation genetics and how it challenged fundamental tenets of the linear non-threshold (LNT) dose response model, including the assumptions that all mutational damage is cumulative and irreversible and that the dose-response is linear at low doses. Newly uncovered historical information also describes how a key 1964 report by the International Commission for Radiological Protection (ICRP) addressed the effects of dose rate in the assessment of genetic risk. This unique story involves assessments by two leading radiation geneticists, Hermann J. Muller and William L. Russell, who independently argued that the report's Genetic Summary Section on dose rate was incorrect while simultaneously offering vastly different views as to what the report's summary should have contained. This paper reveals occurrences of scientific disagreements, how conflicts were resolved, which view(s) prevailed and why. During this process the Nobel Laureate, Muller, provided incorrect information to the ICRP in what appears to have been an attempt to manipulate the decision-making process and to prevent the dose-rate concept from being adopted into risk assessment practices. - Highlights: • The discovery of the radiation dose-rate effect challenged the scientific basis of LNT. • The dose-rate effect occurred in both males and females. • The dose-rate concept supported a threshold dose-response for radiation.

  10. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 1. The Russell-Muller debate

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2017-01-01

    This paper assesses the discovery of the dose-rate effect in radiation genetics and how it challenged fundamental tenets of the linear non-threshold (LNT) dose response model, including the assumptions that all mutational damage is cumulative and irreversible and that the dose-response is linear at low doses. Newly uncovered historical information also describes how a key 1964 report by the International Commission for Radiological Protection (ICRP) addressed the effects of dose rate in the assessment of genetic risk. This unique story involves assessments by two leading radiation geneticists, Hermann J. Muller and William L. Russell, who independently argued that the report's Genetic Summary Section on dose rate was incorrect while simultaneously offering vastly different views as to what the report's summary should have contained. This paper reveals occurrences of scientific disagreements, how conflicts were resolved, which view(s) prevailed and why. During this process the Nobel Laureate, Muller, provided incorrect information to the ICRP in what appears to have been an attempt to manipulate the decision-making process and to prevent the dose-rate concept from being adopted into risk assessment practices. - Highlights: • The discovery of the radiation dose-rate effect challenged the scientific basis of LNT. • The dose-rate effect occurred in both males and females. • The dose-rate concept supported a threshold dose-response for radiation.

  11. The linear nonthreshold (LNT) model as used in radiation protection: an NCRP update.

    Science.gov (United States)

    Boice, John D

    2017-10-01

    The linear nonthreshold (LNT) model has been used in radiation protection for over 40 years and has been hotly debated. It relies heavily on human epidemiology, with support from radiobiology. The scientific underpinnings include NCRP Report No. 136 ('Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation'), UNSCEAR 2000, ICRP Publication 99 (2004) and the National Academies BEIR VII Report (2006). NCRP Scientific Committee 1-25 is reviewing recent epidemiologic studies focusing on dose-response models, including threshold, and the relevance to radiation protection. Recent studies after the BEIR VII Report are being critically reviewed and include atomic-bomb survivors, Mayak workers, atomic veterans, populations on the Techa River, U.S. radiological technologists, the U.S. Million Person Study, international workers (INWORKS), Chernobyl cleanup workers, children given computerized tomography scans, and tuberculosis-fluoroscopy patients. Methodologic limitations, dose uncertainties and statistical approaches (and modeling assumptions) are being systematically evaluated. The review of studies continues and will be published as an NCRP commentary in 2017. Most studies reviewed to date are consistent with a straight-line dose response, but there are a few exceptions. In the past, the scientific consensus process has worked in providing practical and prudent guidance, so pragmatic judgment is anticipated. The evaluations are ongoing and the extensive NCRP review process has just begun, so no decisions or recommendations are set in stone. The march of science requires a constant assessment of emerging evidence to provide an optimum, though not necessarily perfect, approach to radiation protection. Alternatives to the LNT model may be forthcoming, e.g. an approach that couples the best epidemiology with biologically-based models of carcinogenesis, focusing on chronic (not acute) exposure circumstances. Currently for the practical purposes of …

  12. Whither LNT?

    International Nuclear Information System (INIS)

    Higson, D.J.

    2015-01-01

    UNSCEAR and ICRP have reported that no health effects have been attributed to radiation exposure at Fukushima. As at Chernobyl, however, fear that there is no safe dose of radiation has caused enormous psychological damage to health; and evacuation to protect the public from exposure to radiation appears to have done more harm than good. UNSCEAR and ICRP both stress that collective doses, aggregated from the exposure of large numbers of individuals to very low doses, should not be used to estimate numbers of radiation-induced health effects. This is incompatible with the LNT assumption recommended by the ICRP. (author)

  13. Whither LNT?

    International Nuclear Information System (INIS)

    Higson, D.J.

    2014-01-01

    UNSCEAR and ICRP have reported that no health effects have been attributed to radiation exposure at Fukushima. As at Chernobyl, however, fear that there is no safe dose of radiation has caused enormous psychological damage to health; and evacuation to protect the public from exposure to radiation appears to have done more harm than good. UNSCEAR and ICRP both stress that collective doses, aggregated from the exposure of large numbers of individuals to very low doses, should not be used to estimate numbers of radiation-induced health effects. This is incompatible with the LNT assumption recommended by the ICRP. (author)

  14. Whither LNT?

    Energy Technology Data Exchange (ETDEWEB)

    Higson, D.J.

    2015-03-15

    UNSCEAR and ICRP have reported that no health effects have been attributed to radiation exposure at Fukushima. As at Chernobyl, however, fear that there is no safe dose of radiation has caused enormous psychological damage to health; and evacuation to protect the public from exposure to radiation appears to have done more harm than good. UNSCEAR and ICRP both stress that collective doses, aggregated from the exposure of large numbers of individuals to very low doses, should not be used to estimate numbers of radiation-induced health effects. This is incompatible with the LNT assumption recommended by the ICRP. (author)

  15. Whither LNT?

    Energy Technology Data Exchange (ETDEWEB)

    Higson, D.J. [Australian Nuclear Association, Paddington, NSW (Australia)

    2014-07-01

    UNSCEAR and ICRP have reported that no health effects have been attributed to radiation exposure at Fukushima. As at Chernobyl, however, fear that there is no safe dose of radiation has caused enormous psychological damage to health; and evacuation to protect the public from exposure to radiation appears to have done more harm than good. UNSCEAR and ICRP both stress that collective doses, aggregated from the exposure of large numbers of individuals to very low doses, should not be used to estimate numbers of radiation-induced health effects. This is incompatible with the LNT assumption recommended by the ICRP. (author)

  16. Univariate time series modeling and an application to future claims amount in SOCSO's invalidity pension scheme

    Science.gov (United States)

    Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd

    2012-09-01

    The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports for the years 1972-2010. These claims comprise the amounts from the seven benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Future claims under the Invalidity Pension Scheme are forecast using univariate forecasting models for the workforce in Malaysia.
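
    The abstract names only "univariate forecasting models" without specifying which; as a hedged illustration of one such model, the sketch below implements Holt's linear-trend (double exponential) smoothing in Python. The claims figures and smoothing constants are invented placeholders; the study's SOCSO data are not reproduced in this record.

      # Minimal sketch of one univariate forecasting model (Holt's linear trend).
      # The annual claims figures below are hypothetical, for illustration only.
      def holt_linear(y, alpha=0.5, beta=0.3, horizon=5):
          """Fit Holt's linear-trend smoothing to series y and forecast `horizon` steps ahead."""
          level, trend = y[0], y[1] - y[0]
          for obs in y[1:]:
              prev_level = level
              level = alpha * obs + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + (h + 1) * trend for h in range(horizon)]

      claims = [12.1, 13.4, 15.0, 16.2, 18.1, 19.9, 22.3]  # hypothetical yearly totals
      print(holt_linear(claims, horizon=3))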

  17. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/
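
    ADMIT itself is MATLAB-based and certifies invalidity through convex relaxations of a constraint satisfaction problem; the Python fragment below is only a conceptual sketch of the underlying idea, not ADMIT code: a model hypothesis is invalidated when no parameter value in its admissible set can reproduce all interval-valued measurements. The model y = k*x and all numbers are illustrative.

      # Conceptual sketch, not ADMIT: invalidate y = k * x when no admissible k
      # reproduces every interval measurement (x, [y_lo, y_hi]).
      def consistent_k_range(data, k_lo, k_hi):
          """Intersect the prior range for k with the range implied by each measurement."""
          for x, y_lo, y_hi in data:
              k_lo = max(k_lo, y_lo / x)   # assumes x > 0
              k_hi = min(k_hi, y_hi / x)
          return k_lo, k_hi

      measurements = [(1.0, 0.9, 1.1), (2.0, 1.7, 2.2), (4.0, 5.0, 5.6)]
      lo, hi = consistent_k_range(measurements, 0.5, 2.0)
      if lo > hi:
          print("model invalidated: no parameter is consistent with all data")
      else:
          print(f"consistent parameter set (outer estimate): [{lo:.3f}, {hi:.3f}]")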

  18. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  19. Combining Generated Data Models with Formal Invalidation for Insider Threat Analysis

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2014-01-01

    In this paper we revisit the advances made on invalidation policies to explore attack possibilities in organizational models. One aspect that has so far eluded systematic analysis of insider threat is the integration of data into attack scenarios and its exploitation for analyzing the models. We draw from recent insights into the generation of insider data to complement a logic-based mechanical approach. We show how insider analysis can be traced back to the early days of security verification and the Lowe attack on NSPK. The invalidation of policies allows model checking organizational structures to detect insider attacks. Integration of higher order logic specification techniques allows the use of data refinement to explore attack possibilities beyond the initial system specification. We illustrate this combined invalidation technique on the classical example of the naughty lottery fairy.

  20. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    Science.gov (United States)

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows one to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
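
    As a hedged illustration of the set-based idea (not the authors' implementation, which relies on exact relaxations of the polynomial/rational constraints), the sketch below bisects a two-parameter box for a rational rate law v(s) = Vmax*s/(Km + s), discarding sub-boxes that provably cannot match the interval measurements; the surviving boxes form an outer estimate of the consistent parameter set, and an empty result would constitute a certificate of model invalidity. All numerical values are illustrative.

      # Set-based outer estimation sketch for v(s) = Vmax * s / (Km + s).
      def rate_bounds(box, s):
          (vmax_lo, vmax_hi), (km_lo, km_hi) = box
          # v is increasing in Vmax and decreasing in Km for s > 0
          return vmax_lo * s / (km_hi + s), vmax_hi * s / (km_lo + s)

      def provably_inconsistent(box, data):
          return any(hi < y_lo or lo > y_hi
                     for s, y_lo, y_hi in data
                     for lo, hi in [rate_bounds(box, s)])

      def outer_estimate(box, data, depth=6):
          """Return sub-boxes that may still contain data-consistent parameters."""
          if provably_inconsistent(box, data):
              return []                     # certificate: this box is ruled out
          if depth == 0:
              return [box]
          (vl, vh), (kl, kh) = box
          vm, km = (vl + vh) / 2, (kl + kh) / 2
          kept = []
          for sub in (((vl, vm), (kl, km)), ((vl, vm), (km, kh)),
                      ((vm, vh), (kl, km)), ((vm, vh), (km, kh))):
              kept += outer_estimate(sub, data, depth - 1)
          return kept

      data = [(0.5, 0.28, 0.36), (2.0, 0.75, 0.90), (8.0, 1.15, 1.35)]  # (s, v_lo, v_hi)
      boxes = outer_estimate(((0.0, 5.0), (0.0, 5.0)), data)
      print(len(boxes), "candidate boxes remain (an empty list would invalidate the model)")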

  1. Invalidating Policies using Structural Information

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2013-01-01

    …by invalidating policies using structural information of the organisational model. Based on this structural information and a description of the organisation's policies, our approach invalidates the policies and identifies exemplary sequences of actions that lead to a violation of the policy in question. Based on these examples, the organisation can identify real attack vectors that might result in an insider attack. This information can be used to refine access control systems or policies.

  2. Invalidating Policies using Structural Information

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2014-01-01

    …by invalidating policies using structural information of the organisational model. Based on this structural information and a description of the organisation’s policies, our approach invalidates the policies and identifies exemplary sequences of actions that lead to a violation of the policy in question. Based on these examples, the organisation can identify real attack vectors that might result in an insider attack. This information can be used to refine access control systems or policies. We provide case studies showing how mechanical verification tools, i.e. model checking with MCMAS and interactive theorem proving…

  3. Observations on the Chernobyl Disaster and LNT

    Science.gov (United States)

    Jaworowski, Zbigniew

    2010-01-01

    The Chernobyl accident was probably the worst possible catastrophe of a nuclear power station. It was the only such catastrophe since the advent of nuclear power 55 years ago. It resulted in a total meltdown of the reactor core, a vast emission of radionuclides, and early deaths of only 31 persons. Its enormous political, economic, social and psychological impact was mainly due to deeply rooted fear of radiation induced by the linear non-threshold (LNT) assumption. It was a historic event that provided invaluable lessons for the nuclear industry and risk philosophy. One of them is the demonstration that, counted per unit of electricity produced, early Chernobyl fatalities amounted to 0.86 deaths/GWe-year, about 47 times lower than for hydroelectric stations (∼40 deaths/GWe-year). The accident demonstrated that using the LNT assumption as a basis for protection measures and radiation dose limitations was counterproductive, and led to suffering and pauperization of millions of inhabitants of contaminated areas. The projections of thousands of late cancer deaths based on LNT are in conflict with observations that, in comparison with the general population of Russia, a 15% to 30% deficit of solid cancer mortality was found among the Russian emergency workers, and a 5% deficit of solid cancer incidence among the population of the most contaminated areas. PMID:20585443

  4. Observations on the Chernobyl Disaster and LNT.

    Science.gov (United States)

    Jaworowski, Zbigniew

    2010-01-28

    The Chernobyl accident was probably the worst possible catastrophe of a nuclear power station. It was the only such catastrophe since the advent of nuclear power 55 years ago. It resulted in a total meltdown of the reactor core, a vast emission of radionuclides, and early deaths of only 31 persons. Its enormous political, economic, social and psychological impact was mainly due to deeply rooted fear of radiation induced by the linear non-threshold (LNT) assumption. It was a historic event that provided invaluable lessons for the nuclear industry and risk philosophy. One of them is the demonstration that, counted per unit of electricity produced, early Chernobyl fatalities amounted to 0.86 deaths/GWe-year, about 47 times lower than for hydroelectric stations (approximately 40 deaths/GWe-year). The accident demonstrated that using the LNT assumption as a basis for protection measures and radiation dose limitations was counterproductive, and led to suffering and pauperization of millions of inhabitants of contaminated areas. The projections of thousands of late cancer deaths based on LNT are in conflict with observations that, in comparison with the general population of Russia, a 15% to 30% deficit of solid cancer mortality was found among the Russian emergency workers, and a 5% deficit of solid cancer incidence among the population of the most contaminated areas.

  5. In modelling effects of global warming, invalid assumptions lead to unrealistic projections.

    Science.gov (United States)

    Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E

    2018-02-01

    In their recent Opinion, Pauly and Cheung provide new projections of future maximum fish weight (W∞). Based on criticism by Lefevre et al. (2017) they changed the scaling exponent for anabolism, d_G. Here we find that changing both d_G and the scaling exponent for catabolism, b, leads to the projection that fish may even become 98% smaller with a 1°C increase in temperature. This unrealistic outcome indicates that the current W∞ is unlikely to be explained by the Gill-Oxygen Limitation Theory (GOLT) and, therefore, GOLT cannot be used as a mechanistic basis for model projections about fish size in a warmer world. © 2017 John Wiley & Sons Ltd.
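
    The exponents d_G and b enter through the growth model that underlies these projections; the record does not reproduce the algebra, but under the standard generalised von Bertalanffy form assumed here for illustration the sensitivity is easy to see:

      % Generalised von Bertalanffy growth (assumed form, for illustration):
      % anabolism scales as W^{d_G}, catabolism as W^{b}.
      \[ \frac{dW}{dt} = H\,W^{d_G} - k\,W^{b}, \qquad W_\infty = \left(\frac{H}{k}\right)^{1/(b - d_G)} \]
      % Because W_infinity depends on the exponents through 1/(b - d_G), small
      % changes to d_G or b can change the projected asymptotic weight by very
      % large factors, which is how a 1 degree C warming (acting on k) can be made
      % to imply an implausibly large shrinkage.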

  6. Consequences and detection of invalid exogeneity conditions

    NARCIS (Netherlands)

    Niemczyk, J.

    2009-01-01

    Estimators for econometric relationships require observations on at least as many exogenous variables as the model has unknown coefficients. This thesis examines techniques to classify variables as being either exogenous or endogenous, and investigates the consequences of invalid classifications.

  7. Invalid Permutation Tests

    Directory of Open Access Journals (Sweden)

    Mikel Aickin

    2010-01-01

    Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a “permutation principle”, which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested and appear valid in a very modest simulation. In instances where simulation software is readily available, investigating the validity of a specific permutation test can be done easily, requiring only a minimum understanding of statistical technicalities.
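
    For readers unfamiliar with the procedure under discussion, the sketch below is a generic Fisher-Pitman-style permutation test for a difference in two means (not code from the article); it relabels the pooled observations at random, which is precisely the step whose blanket validity the article disputes. The data are invented.

      # Generic two-sample permutation test for a difference in means.
      import random

      def permutation_test(x, y, n_perm=10000, seed=1):
          random.seed(seed)
          observed = abs(sum(x) / len(x) - sum(y) / len(y))
          pooled = list(x) + list(y)
          extreme = 0
          for _ in range(n_perm):
              random.shuffle(pooled)
              xs, ys = pooled[:len(x)], pooled[len(x):]
              if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
                  extreme += 1
          return (extreme + 1) / (n_perm + 1)   # permutation p-value

      print(permutation_test([4.1, 5.0, 6.2, 5.5], [3.0, 3.4, 4.0, 2.8]))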

  8. Racial identity invalidation with multiracial individuals: An instrument development study.

    Science.gov (United States)

    Franco, Marisa G; O'Brien, Karen M

    2018-01-01

    Racial identity invalidation, others' denial of an individual's racial identity, is a salient racial stressor with harmful effects on the mental health and well-being of Multiracial individuals. The purpose of this study was to create a psychometrically sound measure to assess racial identity invalidation for use with Multiracial individuals (N = 497). The present sample was mostly female (75%) with a mean age of 26.52 years (SD = 9.60). The most common racial backgrounds represented were Asian/White (33.4%) and Black/White (23.7%). Participants completed several online measures via Qualtrics. Exploratory factor analyses revealed 3 racial identity invalidation factors: behavior invalidation, phenotype invalidation, and identity incongruent discrimination. A confirmatory factor analysis provided support for the initial factor structure. Alternative model testing indicated that the bifactor model was superior to the 3-factor model. Thus, a total score and/or 3 subscale scores can be used when administering this instrument. Support was found for the reliability and validity of the total scale and subscales. In line with the minority stress theory, challenges with racial identity mediated relationships between racial identity invalidation and mental health and well-being outcomes. The findings highlight the different dimensions of racial identity invalidation and indicate their negative associations with connectedness and psychological well-being. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. A Capacity-Restraint Transit Assignment Model When a Predetermination Method Indicates the Invalidity of Time Independence

    Directory of Open Access Journals (Sweden)

    Haoyang Ding

    2015-01-01

    The statistical independence of the travel times of every two adjacent bus links plays a crucial role in deciding the feasibility of using many mathematical models to analyze urban transit networks. Traditional research generally ignores this time independence even though it underpins the models: the assumption is usually made that time independence of every two adjacent links holds. This assumption is, however, actually groundless and probably leads to problematic conclusions from the corresponding models. Many transit assignment models, such as multinomial probit-based models, lose their validity when time independence does not hold. In this paper, a simple method to predetermine time independence is proposed. Based on the predetermination method, a modified capacity-restraint transit assignment method aimed at engineering practice is put forward and tested on a small contrived network and in a case study in Nanjing city, China. It is found that the slope of the regression equation between the mean and the standard deviation of the (assumed normal) link travel times also acts as an indicator of time independence. In addition, the modified assignment method performs better than the traditional one, producing more reasonable results while remaining simple.
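
    The indicator described in the abstract is the slope of a least-squares regression of the standard deviation of link travel time on its mean; a minimal sketch of that computation follows. The travel-time samples are invented, and how the slope is mapped to a judgement of "independent enough" is the paper's contribution and is not reproduced here.

      # Slope of std vs. mean of link travel times, used as an indicator of
      # time independence. Sample values are hypothetical.
      import statistics

      def regression_slope(xs, ys):
          mx, my = statistics.fmean(xs), statistics.fmean(ys)
          num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          den = sum((x - mx) ** 2 for x in xs)
          return num / den

      link_samples = [
          [4.2, 4.8, 5.1, 4.5, 4.9],       # minutes on link 1
          [7.9, 8.6, 9.4, 8.1, 8.8],       # link 2
          [12.0, 13.5, 12.8, 14.1, 13.0],  # link 3
      ]
      means = [statistics.fmean(s) for s in link_samples]
      stds = [statistics.stdev(s) for s in link_samples]
      print("slope of std vs mean:", round(regression_slope(means, stds), 3))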

  10. LNT-an apparent rather than a real controversy?

    Energy Technology Data Exchange (ETDEWEB)

    Charles, M W [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

    2006-09-15

    Can the carcinogenic risks of radiation that are observed at high doses be extrapolated to low doses? This question has been debated through the whole professional life of the author, now nearing four decades. In its extreme form the question relates to a particular hypothesis (LNT) used widely by the international community for radiological protection applications. The linear no-threshold (LNT) hypothesis propounds that the extrapolation is linear and that it extends down to zero dose. The debate on the validity of LNT has increased dramatically in recent years. This is in no small part due to concern that exaggerated risks at low doses lead to undue amounts of societal resources being used to reduce man-made human exposure, and because of the related growing public aversion to diagnostic and therapeutic medical exposures. The debate appears to be entering a new phase. There is a growing realisation of the limitations of fundamental data and the scientific approach to address this question at low doses. There also appears to be an increasing awareness that the assumptions necessary for a workable and acceptable system of radiological protection at low doses must necessarily be based on considerable pragmatism. Recent developments are reviewed and a historical perspective is given on the general nature of controversies in radiation protection over the years. All the protagonists in the debate will at the end of the day probably be able to claim that they were right! (opinion)

  11. Response to, "On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith.".

    Science.gov (United States)

    Beyea, Jan

    2016-07-01

    It is not true that successive groups of researchers from academia and research institutions-scientists who served on panels of the US National Academy of Sciences (NAS)-were duped into supporting a linear no-threshold model (LNT) by the opinions expressed in the genetic panel section of the 1956 "BEAR I" report. Successor reports had their own views of the LNT model, relying on mouse and human data, not fruit fly data. Nor was the 1956 report biased and corrupted, as has been charged in an article by Edward J. Calabrese in this journal. With or without BEAR I, the LNT model would likely have been accepted in the US for radiation protection purposes in the 1950's. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Attack Tree Generation by Policy Invalidation

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2015-01-01

    Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.

  13. The Use of Lexical Neighborhood Test (LNT) in the Assessment of Speech Recognition Performance of Cochlear Implantees with Normal and Malformed Cochlea.

    Science.gov (United States)

    Kant, Anjali R; Banik, Arun A

    2017-09-01

    The present study aims to use the model-based Lexical Neighborhood Test (LNT) to assess speech recognition performance in early- and late-implanted hearing impaired children with normal and malformed cochleae. The LNT was administered to 46 children with congenital (prelingual) bilateral severe-profound sensorineural hearing loss, using the Nucleus 24 cochlear implant. The children were grouped into Group 1 (early implantees with normal cochlea, EI): n = 15, 3½-6½ years of age, mean age at implantation 3½ years; Group 2 (late implantees with normal cochlea, LI): n = 15, 6-12 years of age, mean age at implantation 5 years; Group 3 (early implantees with malformed cochlea, EIMC): n = 9, 4.9-10.6 years of age, mean age at implantation 3.10 years; and Group 4 (late implantees with malformed cochlea, LIMC): n = 7, 7-12.6 years of age, mean age at implantation 6.3 years. The malformations were: dysplastic cochlea, common cavity, Mondini's, incomplete partition-1 and 2 (IP-1 and 2), and enlarged IAC. The children were instructed to repeat the words on hearing them. Means of the word and phoneme scores were computed. The LNT can also be used to assess the speech recognition performance of hearing impaired children with malformed cochleae. When both the easy and hard lists of the LNT are considered, although late implantees (with or without normal cochleae) achieved higher word scores than early implantees, the differences are not statistically significant. Using the LNT for assessing speech recognition enables a quantitative as well as descriptive report of the phonological processes used by the children.

  14. Invalidity of contract: legislative regulation and types

    Directory of Open Access Journals (Sweden)

    Василь Іванович Крат

    2017-09-01

    The invalidity of contracts has always attracted researchers' attention. Nevertheless, in modern conditions there remains an enormous layer of problems related to contract invalidity that requires doctrinal and practical comprehension. The article is devoted to research on the invalidity of contracts. It analyses problems of legislative regulation and the types of invalid contracts through the prism of judicial practice. In the Civil Code of Ukraine, the voidable contract is set as the common rule. Voidability of a contract is embodied in so-called «virtual» invalidity, where only the most typical grounds are enumerated. However, even such an approach does not cover all possible cases that arise in practice. One such situation concerns the possibility of voiding contracts concluded for the purpose of preventing a creditor's claim from reaching the debtor's property. Therefore, general rules should be established in relation to voidable contracts of the debtor. Nullity of a contract takes place only where the law directly qualifies a given contract as null; nullity of contract in the Civil Code of Ukraine is thus constructed by means of «textual» invalidity. Attempts in judicial practice to use the construction of «virtual» nullity in the absence of such a direct statutory qualification are impermissible. It is also methodologically incorrect to identify the invalidity of a contract with that of an obligation for the purpose of applying norms that differ in substance.

  15. Putting aside the LNT dilemma in the controllable dose concept

    International Nuclear Information System (INIS)

    Koblinger, Laszlo

    2000-01-01

    Recently, Professor R. Clarke, ICRP Chairman, has published his proposal for a renewal of the basic radiation protection concept. The two main points of his proposed system are: (1) the term Controllable Dose is introduced, and (2) the protection philosophy is based on the individual. For practical use, terms like 'Action Level' and 'Investigation Level' are introduced. The outline of the new system promises a genuinely less complex framework: no distinction between practices and interventions, and unified treatment of occupational, medical and public exposures. There is, however, an inconsistency within the new system: though linearity is not assumed, the relations between the definitions of the new terms of the system of protection and the doses assigned to them are still based on the LNT hypothesis. To avoid this discrepancy, a new definition of the Action Level is recommended as a conservative estimate of the lowest dose at which harmful effects have ever been demonstrated. Other levels should be defined from the Action Level and safety factors applied to the doses. (author)

  16. Association among self-compassion, childhood invalidation, and borderline personality disorder symptomatology in a Singaporean sample.

    Science.gov (United States)

    Keng, Shian-Ling; Wong, Yun Yi

    2017-01-01

    Linehan's biosocial theory posits that parental invalidation during childhood plays a role in the development of borderline personality disorder symptoms later in life. However, little research has examined components of the biosocial model in an Asian context, and variables that may influence the relationship between childhood invalidation and borderline symptoms. Self-compassion is increasingly regarded as an adaptive way to regulate one's emotions and to relate to oneself, and may serve to moderate the association between invalidation and borderline symptoms. The present study investigated the association among childhood invalidation, self-compassion, and borderline personality disorder symptoms in a sample of Singaporean undergraduate students. Two hundred and ninety undergraduate students from a large Singaporean university were recruited and completed measures assessing childhood invalidation, self-compassion, and borderline personality disorder symptoms. Analyses using multiple regression indicated that both childhood invalidation and self-compassion significantly predicted borderline personality disorder symptomatology. Results from moderation analyses indicated that relationship between childhood invalidation and borderline personality disorder symptomatology did not vary as a function of self-compassion. This study provides evidence in support of aspects of the biosocial model in an Asian context, and demonstrates a strong association between self-compassion and borderline personality disorder symptoms, independent of one's history of parental invalidation during childhood.

  17. Parental Invalidation and the Development of Narcissism.

    Science.gov (United States)

    Huxley, Elizabeth; Bizumic, Boris

    2017-02-17

    Parenting behaviors and childhood experiences have played a central role in theoretical approaches to the etiology of narcissism. Research has suggested an association between parenting and narcissism; however, it has been limited in its examination of different narcissism subtypes and individual differences in parenting behaviors. This study investigates the influence of perceptions of parental invalidation, an important aspect of parenting behavior theoretically associated with narcissism. Correlational and hierarchical regression analyses were conducted using a sample of 442 Australian participants to examine the relationship between invalidating behavior from mothers and fathers, and grandiose and vulnerable narcissism. Results indicate that stronger recollections of invalidating behavior from either mothers or fathers are associated with higher levels of grandiose and vulnerable narcissism when controlling for age, gender, and the related parenting behaviors of rejection, coldness, and overprotection. The lowest levels of narcissism were found in individuals who reported low levels of invalidation in both parents. These findings support the idea that parental invalidation is associated with narcissism.

  18. Growth of non-toxigenic Clostridium botulinum mutant LNT01 in cooked beef: One-step kinetic analysis and comparison with C. sporogenes and C. perfringens.

    Science.gov (United States)

    Huang, Lihan

    2018-05-01

    The objective of this study was to investigate the growth kinetics of Clostridium botulinum LNT01, a non-toxigenic mutant of C. botulinum 62A, in cooked ground beef. The spores of C. botulinum LNT01 were inoculated into ground beef and incubated anaerobically under different temperature conditions to observe growth and develop growth curves. A one-step kinetic analysis method was used to analyze the growth curves simultaneously to minimize the global residual error. The data analysis was performed using the USDA IPMP-Global Fit, with the Huang model as the primary model and the cardinal parameters model as the secondary model. The results of data analysis showed that the minimum, optimum, and maximum growth temperatures of this mutant are 11.5, 36.4, and 44.3 °C, and the estimated optimum specific growth rate is 0.633 ln CFU/g per h, or 0.275 log CFU/g per h. The maximum cell density is 7.84 log CFU/g. The models and kinetic parameters were validated using additional isothermal and dynamic growth curves. The resulting residual errors of validation followed a Laplace distribution, with about 60% of the residual errors within ±0.5 log CFU/g of experimental observations, suggesting that the models could predict the growth of C. botulinum LNT01 in ground beef with reasonable accuracy. Compared with C. perfringens, C. botulinum LNT01 grows at much slower rates and with much longer lag times. Its growth kinetics are also very similar to those of C. sporogenes in ground beef. The results of computer simulation using the kinetic models showed that, while prolific growth of C. perfringens may occur in ground beef during cooling, no growth of C. botulinum LNT01 or C. sporogenes would occur under the same cooling conditions. The models developed in this study may be used for prediction of the growth and risk assessments of proteolytic C. botulinum in cooked meats. Published by Elsevier Ltd.
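
    The abstract reports the cardinal values and optimum rate but not the secondary-model equation itself; the sketch below evaluates a Rosso-type cardinal parameters model (the functional form is an assumption, not quoted from the paper) with the reported estimates Tmin = 11.5 °C, Topt = 36.4 °C, Tmax = 44.3 °C and mu_opt = 0.633 ln CFU/g per h.

      # Cardinal parameters (Rosso-type) secondary model, evaluated with the
      # estimates reported in the abstract; the functional form is assumed.
      def cardinal_growth_rate(T, Tmin=11.5, Topt=36.4, Tmax=44.3, mu_opt=0.633):
          """Specific growth rate (ln CFU/g per h) as a function of temperature (deg C)."""
          if T <= Tmin or T >= Tmax:
              return 0.0
          num = (T - Tmax) * (T - Tmin) ** 2
          den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                                 - (Topt - Tmax) * (Topt + Tmin - 2 * T))
          return mu_opt * num / den

      for T in (15, 25, 36.4, 42):
          print(T, round(cardinal_growth_rate(T), 3), "ln CFU/g per h")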

  19. Association among self-compassion, childhood invalidation, and borderline personality disorder symptomatology in a Singaporean sample

    OpenAIRE

    Keng, Shian-Ling; Wong, Yun Yi

    2017-01-01

    Background: Linehan’s biosocial theory posits that parental invalidation during childhood plays a role in the development of borderline personality disorder symptoms later in life. However, little research has examined components of the biosocial model in an Asian context, and variables that may influence the relationship between childhood invalidation and borderline symptoms. Self-compassion is increasingly regarded as an adaptive way to regulate one’s emotions and to relate to oneself, and m...

  20. Development of Optimal Catalyst Designs and Operating Strategies for Lean NOx Reduction in Coupled LNT-SCR Systems

    Energy Technology Data Exchange (ETDEWEB)

    Harold, Michael [Univ. of Houston, TX (United States); Crocker, Mark [Univ. of Kentucky, Lexington, KY (United States); Balakotaiah, Vemuri [Univ. of Houston, TX (United States); Luss, Dan [Univ. of Houston, TX (United States); Choi, Jae-Soon [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dearth, Mark [Ford Motor Company, Dearborn, MI (United States); McCabe, Bob [Ford Motor Company, Dearborn, MI (United States); Theis, Joe [Ford Motor Company, Dearborn, MI (United States)

    2013-09-30

    Oxides of nitrogen in the form of nitric oxide (NO) and nitrogen dioxide (NO2), commonly referred to as NOx, are one of the two chemical precursors that lead to ground-level ozone, a ubiquitous air pollutant in urban areas. A major share of NOx is generated by equipment and vehicles powered by diesel engines, whose combustion exhaust contains NOx in the presence of excess O2. Catalytic abatement measures that are effective for gasoline-fueled engines, such as the precious-metal-containing three-way catalytic converter (TWC), cannot be used to treat O2-laden exhaust containing NOx. Two catalytic technologies that have emerged as effective for NOx abatement are NOx storage and reduction (NSR) and selective catalytic reduction (SCR). NSR is similar to TWC but requires much larger quantities of expensive precious metals and sophisticated periodic switching operation, while SCR requires an on-board source of ammonia which serves as the chemical reductant of the NOx. The fact that NSR produces ammonia as a byproduct while SCR requires ammonia to work has led to interest in combining the two to avoid the need for a cumbersome ammonia generation system. In this project a comprehensive study was carried out of the fundamental aspects and application feasibility of combined NSR/SCR. The project team, which included university, industry, and national lab researchers, investigated the kinetics and mechanistic features of the underlying chemistry in the lean NOx trap (LNT) wherein NSR was carried out, with particular focus on identifying the operating conditions, such as temperature, and the catalytic properties which lead to the production of ammonia in the LNT. The performance features of SCR on both model and commercial catalysts focused on the synergy between the LNT and SCR converters in terms of utilizing the upstream-generated ammonia and …

  1. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2015-01-01

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  2. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu

    2015-10-15

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  3. The Perceived Invalidation of Emotion Scale (PIES): Development and psychometric properties of a novel measure of current emotion invalidation.

    Science.gov (United States)

    Zielinski, Melissa J; Veilleux, Jennifer C

    2018-05-24

    Emotion invalidation is theoretically and empirically associated with mental and physical health problems. However, existing measures of invalidation focus on past (e.g., childhood) invalidation and/or do not specifically emphasize invalidation of emotion. In this article, the authors articulate a clarified operational definition of emotion invalidation and use that definition as the foundation for development of a new measure of current perceived emotion invalidation across a series of five studies. Study 1 was a qualitative investigation of people's experiences with emotional invalidation from which we generated items. An initial item pool was vetted by expert reviewers in Study 2 and examined via exploratory factor analysis in Study 3 within both college student and online samples. The scale was reduced to 10 items via confirmatory factor analysis in Study 4, resulting in a brief but psychometrically promising measure, the Perceived Invalidation of Emotion Scale (PIES). A short-term longitudinal investigation (Study 5) revealed that PIES scores had strong test-retest reliability, and that greater perceived emotion invalidation was associated with greater emotion dysregulation, borderline features and symptoms of emotional distress. In addition, the PIES predicted changes in relational health and psychological health over a 1-month period. The current set of studies thus presents a psychometrically promising and practical measure of perceived emotion invalidation that can provide a foundation for future research in this burgeoning area. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. The (unclear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel eGressmann

    2016-03-01

    Full Text Available Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: One study suggested that costs only occur for larger set sizes, and another study suggested that inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations may be taken with some caution at present.

  5. Automatically repairing invalid polygons with a constrained triangulation

    NARCIS (Netherlands)

    Ledoux, H.; Arroyo Ohori, K.; Meijers, M.

    2012-01-01

    Although the validation of single polygons has received considerable attention, the automatic repair of invalid polygons has not. Automated repair methods can be considered as interpreting ambiguous or ill-defined polygons and giving a coherent and clearly defined output. At this moment, automatic

  6. LNTgate: How scientific misconduct by the U.S. NAS led to governments adopting LNT for cancer risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu

    2016-07-15

    This paper provides a detailed rebuttal to the letter of Beyea (2016) which offered a series of alternative interpretations to those offered in my article in Environmental Research (Calabrese, 2015a) concerning the role of the U.S. National Academy of Sciences (NAS) Biological Effects of Atomic Radiation (BEAR) I Committee Genetics Panel in the adoption of the linear dose response model for cancer risk assessment. Significant newly uncovered evidence is presented which supports and extends the findings of Calabrese (2015a), reaffirming the conclusion that the Genetics Panel should be evaluated for scientific misconduct for deliberate misrepresentation of the research record in order to enhance an ideological agenda. This critique documents numerous factual errors along with extensive and deliberate filtering of information in the Beyea letter (2016) that leads to consistently incorrect conclusions and an invalid general perspective.

  7. LNTgate: How scientific misconduct by the U.S. NAS led to governments adopting LNT for cancer risk assessment

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2016-01-01

    This paper provides a detailed rebuttal to the letter of Beyea (2016) which offered a series of alternative interpretations to those offered in my article in Environmental Research (Calabrese, 2015a) concerning the role of the U.S. National Academy of Sciences (NAS) Biological Effects of Atomic Radiation (BEAR) I Committee Genetics Panel in the adoption of the linear dose response model for cancer risk assessment. Significant newly uncovered evidence is presented which supports and extends the findings of Calabrese (2015a), reaffirming the conclusion that the Genetics Panel should be evaluated for scientific misconduct for deliberate misrepresentation of the research record in order to enhance an ideological agenda. This critique documents numerous factual errors along with extensive and deliberate filtering of information in the Beyea letter (2016) that leads to consistently incorrect conclusions and an invalid general perspective.

  8. Intriguing legacy of Einstein, Fermi, Jordan, and others: The possible invalidation of quark conjectures

    International Nuclear Information System (INIS)

    Santilli, R.M.

    1981-01-01

    The objective of this paper is to present an outline of a number of criticisms of the quark models of hadron structure which have been present in the community of basic research for some time. The hope is that quark supporters will consider these criticisms and present possible counterarguments for a scientifically effective resolution of the issues. In particular, it is submitted that the problem of whether quarks exist as physical particles necessarily calls for the prior theoretical and experimental resolution of the question of the validity or invalidity, for hadronic structure, of the relativity and quantum mechanical laws established for atomic structure. The current theoretical studies leading to the conclusion that they are invalid are considered, together with the experimental situation. We also recall the doubts by Einstein, Fermi, Jordan, and others on the final character of contemporary physical knowledge. Most of all, this paper is an appeal to young minds of all ages. The possible invalidity for the strong interactions of the physical laws of the electromagnetic interactions, rather than constituting a scientific drawback, represents instead an invaluable impetus toward the search for covering laws specifically conceived for hadronic structure and strong interactions in general, a program which has already been initiated by a number of researchers. In turn, this situation appears to have all the ingredients for a new scientific renaissance, perhaps comparable to that of the early part of this century

  9. Intriguing legacy of Einstein, Fermi, Jordan, and others: the possible invalidation of quark conjectures

    International Nuclear Information System (INIS)

    Santilli, R.M.

    1981-01-01

    The objective of this paper is to present an outline of a number of criticisms of the quark models of hadron structure which have been present in the community of basic research for some time. The hope is that quark supporters will consider these criticisms and present possible counterarguments for a scintifically effective resolution of the issues. In particular, it is submitted that the problem of whether quarks exist as physical particles necessarily calls for the prior theoretical and experimental resolution of the question of the validity or invalidity, for hadronic structure, of the relativity and quantum mechanical laws established for atomic structure. The current theoretical studies leading to the conclusion that they are invalid are considered, together with the experimental situation. We also recall the doubts by Einstein, Fermi, Jordan, and others on the final character of contemporary physical knowledge. Most of all, this paper is an appeal to young minds of all ages. The possible invalidity for the strong interactions of the physical laws of the electromagnetic interactions, rather than constituting a scientific drawback, represents instead an invaluable impetus toward the search for covering laws specifically conceived for hadronic structure and strong interactions in general, a program which has already been initiated by a number of researchers. In turn, this situation appears to have all the ingredients for a new scientific renaissance, perhaps comparable to that of the early part of this century

  10. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    Science.gov (United States)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For the Cooperative Adaptive Cruise Control (CACC) Algorithm, existing research studies mainly focus on how inter-vehicle communication can be used to develop CACC controller, the influence of the communication delays and lags of the actuators to the string stability. However, whether the string stability can be guaranteed when inter-vehicle communication is invalid partially has hardly been considered. This paper presents an improved CACC algorithm based on the sliding mode control theory and analyses the range of CACC controller parameters to maintain string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under improved CACC are analyzed. Unlike the traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, this paper establishes a platoon of five vehicles to simulate the improved CACC algorithm in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon through adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain collision accidents under real-world circumstances. This research proposes an improved CACC algorithm, which can guarantee the string stability when inter-vehicle communication is invalid.

  11. [Assessment of invalidity as a result of infectious diseases].

    Science.gov (United States)

    Čeledová, L; Čevela, R; Bosák, M

    2016-01-01

    The article features the new medical assessment paradigm for invalidity as a result of infectious disease which is applied as of 1 January 2010. The invalidity assessment criteria are regulated specifically by Regulation No. 359/2009. Chapter I of the Annexe to the invalidity assessment regulation addresses the area of infectious diseases with respect to functional impairment and its impact on the quality of life. Since 2010, the invalidity has also been newly categorized into three groups. The new assessment approach makes it possible to evaluate a persons functional capacity, type of disability, and eligibility for compensation for reduced capacity for work. In 2010, a total of 170 375 invalidity cases were assessed, and in 2014, 147 121 invalidity assessments were made. Invalidity as a result of infectious disease was assessed in 177 persons in 2010, and 128 invalidity assessments were made in 2014. The most common causes of invalidity as a result of infectious disease are chronic viral hepatitis, other spirochetal infections, tuberculosis of the respiratory tract, tick-borne viral encephalitis, and HIV/AIDS. The number of assessments of invalidity as a result of infectious disease showed a declining trend between 2010 and 2014, similarly to the total of invalidity assessments. In spite of this fact, the cases of invalidity as a result of infectious disease account for approximately half percent of all invalidity assessments made in the above-mentioned period of time.

  12. Does overprotection cause cardiac invalidism after acute myocardial infarction?

    Science.gov (United States)

    Riegel, B J; Dracup, K A

    1992-01-01

    To determine if overprotection on the part of the patient's family and friends contributes to the development of cardiac invalidism after acute myocardial infarction. Longitudinal survey. Nine hospitals in the southwestern United States. One hundred eleven patients who had experienced a first acute myocardial infarction. Subjects were predominantly male, older-aged, married, caucasian, and in functional class I. Eighty-one patients characterized themselves as being overprotected (i.e., receiving more social support from family and friends than desired), and 28 reported receiving inadequate support. Only two patients reported receiving as much support as they desired. Self-esteem, emotional distress, health perceptions, interpersonal dependency, return to work. Overprotected patients experienced less anxiety, depression, anger, confusion, more vigor, and higher self-esteem than inadequately supported patients 1 month after myocardial infarction (p Overprotection on the part of family and friends may facilitate psychosocial adjustment in the early months after an acute myocardial infarction rather than lead to cardiac invalidism.

  13. A Novel Cache Invalidation Scheme for Mobile Networks

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, we propose a strategy of maintaining cache consistency in wireless mobile environments, which adds a validation server (VS) into the GPRS network, utilizes the location information of mobile terminal in SGSN located at GPRS backbone, just sends invalidation information to mobile terminal which is online in accordance with the cached data, and reduces the information amount in asynchronous transmission. This strategy enables mobile terminal to access cached data with very little computing amount, little delay and arbitrary disconnection intervals, and excels the synchronous IR and asynchronous state (AS) in the total performances.

  14. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the general good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues in which the only potential target locations was uncued. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations with no perceptual templates at the irrelevant locations. We did not find statistical significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cues conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggests that the selection process is not drawn automatically to the cue but can be under the observers' voluntary control.

  15. Has the prevalence of invalidating musculoskeletal pain changed over the last 15 years (1993-2006)? A Spanish population-based survey.

    Science.gov (United States)

    Jiménez-Sánchez, Silvia; Jiménez-García, Rodrigo; Hernández-Barrera, Valentín; Villanueva-Martínez, Manuel; Ríos-Luna, Antonio; Fernández-de-las-Peñas, César

    2010-07-01

    The aim of the current study was to estimate the prevalence and time trend of invalidating musculoskeletal pain in the Spanish population and its association with socio-demographic factors, lifestyle habits, self-reported health status, and comorbidity with other diseases analyzing data from 1993-2006 Spanish National Health Surveys (SNHS). We analyzed individualized data taken from the SNHS conducted in 1993 (n = 20,707), 2001 (n = 21,058), 2003 (n = 21,650) and 2006 (n = 29,478). Invalidating musculoskeletal pain was defined as pain suffered from the preceding 2 weeks that decreased main working activity or free-time activity by at least half a day. We analyzed socio-demographic characteristics, self-perceived health status, lifestyle habits, and comorbid conditions using multivariate logistic regression models. Overall, the prevalence of invalidating musculoskeletal pain in Spanish adults was 6.1% (95% CI, 5.7-6.4) in 1993, 7.3% (95% CI, 6.9-7.7) in 2001, 5.5% (95% CI, 5.1-5.9) in 2003 and 6.4% (95% CI 6-6.8) in 2006. The prevalence of invalidating musculoskeletal pain among women was almost twice that of men in every year (P postural hygiene, physical exercise, and how to prevent obesity and sedentary lifestyle habits should be provided by Public Health Services. This population-based study indicates that invalidating musculoskeletal pain that reduces main working activity is a public health problem in Spain. The prevalence of invalidating musculoskeletal pain was higher in women than in men and associated to lower income, poor sleeping, worse self-reported health status, and other comorbid conditions. Further, the prevalence of invalidating musculoskeletal pain increased from 1993 to 2001, but remained stable from the last years (2001 to 2006).

  16. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    Full Text Available The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counter example for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  17. 30 CFR 253.50 - How can MMS refuse or invalidate my OSFR evidence?

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false How can MMS refuse or invalidate my OSFR... can MMS refuse or invalidate my OSFR evidence? (a) If MMS determines that any OSFR evidence you submit... acceptable evidence without being subject to civil penalty under § 253.51. (b) MMS may immediately and...

  18. Implications of invalidity of Data Retention Directive to telecom operators

    Directory of Open Access Journals (Sweden)

    Darja LONČAR DUŠANOVIĆ

    2014-12-01

    Full Text Available Obligation for telecom operators to retain traffic and location data for combating crime purposes had been controversial ever since the adoption of the Data Retention Directive in 2006 because of its inherent negative impact on the fundamental right to privacy and personal data protection. However, the awaited judgment of the CJEU in April this year, which declared the Directive invalid, did not so far resolve the ambiguity of the issue. Namely, having in mind that half a year later, some countries did not amend their national data retention legislations (yet to comply with the aforementioned CJEU judgment, telecom operators as addresses of this obligation are in uncertain legal situation which could be called “lose-lose” situation. Also, the emphasis from the question of proportionality between data privacy and public security is shifted to the question of existence of valid legal basis for data processing (retaining data and providing them to authorities in the new legal environment in which national and EU law are still not in compliance. In this paper the author examines the implications of the CJEU judgment to national EU legislation, telecom operators and data subjects, providing comparative analysis of national data retention legislation status in EU member states. The existence of valid legal basis for data processing is examined within EU law sources, including within proposed EU General Data Protection Regulation and opinions of the relevant data protection bodies (e.g. Article 29 Working Party.

  19. Self-compassion and emotional invalidation mediate the effects of parental indifference on psychopathology.

    Science.gov (United States)

    Westphal, Maren; Leahy, Robert L; Pala, Andrea Norcini; Wupperman, Peggilee

    2016-08-30

    This study investigated whether self-compassion and emotional invalidation (perceiving others as indifferent to one's emotions) may explain the relationship of childhood exposure to adverse parenting and adult psychopathology in psychiatric outpatients (N=326). Path analysis was used to investigate associations between exposure to adverse parenting (abuse and indifference), self-compassion, emotional invalidation, and mental health when controlling for gender and age. Self-compassion was strongly inversely associated with emotional invalidation, suggesting that a schema that others will be unsympathetic or indifferent toward one's emotions may affect self-compassion and vice versa. Both self-compassion and emotional invalidation mediated the relationship between parental indifference and mental health outcomes. These preliminary findings suggest the potential utility of self-compassion and emotional schemas as transdiagnostic treatment targets. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. An abuse of risk assessment: how regulatory agencies improperly adopted LNT for cancer risk assessment.

    Science.gov (United States)

    Calabrese, Edward J

    2015-04-01

    The Genetics Panel of the National Academy of Sciences' Committee on Biological Effects of Atomic Radiation (BEAR) recommended the adoption of the linear dose-response model in 1956, abandoning the threshold dose-response for genetic risk assessments. This recommendation was quickly generalized to include somatic cells for cancer risk assessment and later was instrumental in the adoption of linearity for carcinogen risk assessment by the Environmental Protection Agency. The Genetics Panel failed to provide any scientific assessment to support this recommendation and refused to do so when later challenged by other leading scientists. Thus, the linearity model used in cancer risk assessment was based on ideology rather than science and originated with the recommendation of the NAS BEAR Committee Genetics Panel. Historical documentation in support of these conclusions is provided in the transcripts of the Panel meetings and in previously unexamined correspondence among Panel members.

  1. Invalid-point removal based on epipolar constraint in the structured-light method

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.

  2. [Structure of childhood and adolescent invalidity in persons with chronic somatic diseases].

    Science.gov (United States)

    Korenev, N M; Bogmat, L F; Tolmacheva, S R; Timofeeva, O N

    2002-01-01

    Based on the analysis of statistical data, prevalence is estimated of disorders with invalidism patterns outlined among those children and young adults under 40 years of age presenting with chronic somatic disorders in Kharkov. Both in children (52.4%) and in young adults (43.9%) diseases of the nervous system held the prominent place. Invalidity due to formed somatic disorders was identified in 10.9% of children and 24.3% of those persons less than 40 years old. There prevailed diseases of the circulation organs. The necessity is substantiated for the rehabilitation to be carried out of children with somatic disorders to prevent their disability.

  3. Penis invalidating cicatricial outcomes in an enlargement phalloplasty case with polyacrylamide gel (Formacryl).

    Science.gov (United States)

    Parodi, P C; Dominici, M; Moro, U

    2006-01-01

    The present article reports the case of a patient subjected to polyacrylamide polymers-composed gel cutaneous infiltration in the penis for cosmetic purposes, resulting in severe invalidating outcomes. A significant tissue reaction to the subcutaneous injection of polyacrylamide gel for the penis enlargement purpose resulted in permanent and invalidating scars both on the esthetic and functional levels. Such a result must be simply taken into account both singly and in the light of the international literature to exclude this method as standard uro-andrologic activity.

  4. Sociodemographic characteristics and diabetes predict invalid self-reported non-smoking in a population-based study of U.S. adults

    Directory of Open Access Journals (Sweden)

    Shelton Brent J

    2007-03-01

    Full Text Available Abstract Background Nearly all studies reporting smoking status collect self-reported data. The objective of this study was to assess sociodemographic characteristics and selected, common smoking-related diseases as predictors of invalid reporting of non-smoking. Valid self-reported smoking may be related to the degree to which smoking is a behavior that is not tolerated by the smoker's social group. Methods True smoking was defined as having serum cotinine of 15+ng/ml. 1483 "true" smokers 45+ years of age with self-reported smoking and serum cotinine data from the Mobile Examination Center were identified in the third National Health and Nutrition Examination Survey. Invalid non-smoking was defined as "true" smokers self-reporting non-smoking. To assess predictors of invalid self-reported non-smoking, odds ratios (OR and 95% confidence intervals (CI were calculated for age, race/ethnicity-gender categories, education, income, diabetes, hypertension, and myocardial infarction. Multiple logistic regression modeling took into account the complex survey design and sample weights. Results Among smokers with diabetes, invalid non-smoking status was 15%, ranging from 0% for Mexican-American (MA males to 22%–25% for Non-Hispanic White (NHW males and Non-Hispanic Black (NHB females. Among smokers without diabetes, invalid non-smoking status was 5%, ranging from 3% for MA females to 10% for NHB females. After simultaneously taking into account diabetes, education, race/ethnicity and gender, smokers with diabetes (ORAdj = 3.15; 95% CI: 1.35–7.34, who did not graduate from high school (ORAdj = 2.05; 95% CI: 1.30–3.22 and who were NHB females (ORAdj = 5.12; 95% CI: 1.41–18.58 were more likely to self-report as non-smokers than smokers without diabetes, who were high school graduates, and MA females, respectively. Having a history of myocardial infarction or hypertension did not predict invalid reporting of non-smoking. Conclusion Validity of self

  5. Validity of the linear no-threshold (LNT) hypothesis in setting radiation protection regulations for the inhabitants in high level natural radiation areas of Ramsar, Iran

    International Nuclear Information System (INIS)

    Mortazavi, S.M.J.; Atefi, M.; Razi, Z.; Mortazavi Gh

    2010-01-01

    Some areas in Ramsar, a city in northern Iran, have long been known as inhabited areas with the highest levels of natural radiation. Despite the fact that the health effects of high doses of ionizing radiation are well documented, biological effects of above the background levels of natural radiation are still controversial and the validity of the LNT hypothesis in this area, has been criticized by many investigators around the world. The study of the health effects of high levels of natural radiation in areas such as Ramsar, help scientists to investigate the biological effects without the need for extrapolating the observations either from high doses of radiation to low dose region or from laboratory animals to humans. Considering the importance of these studies, National Radiation Protection Department (NRPD) of the Iranian Nuclear Regulatory Authority has started an integrative research project on the health effects of long-term exposure to high levels of natural radiation. This paper reviews findings of the studies conducted on the plants and humans living or laboratory animals kept in high level natural radiation areas of Ramsar. In human studies, different end points such as DNA damage, chromosome aberrations, blood cells and immunological alterations are discussed. This review comes to the conclusion that no reproducible detrimental health effect has been reported so far. In this paper the validity of LNT hypothesis in the assessment of the health effects of high levels of natural radiation is discussed. (author)

  6. The Role of Maternal Emotional Validation and Invalidation on Children's Emotional Awareness

    Science.gov (United States)

    Lambie, John A.; Lindberg, Anja

    2016-01-01

    Emotional awareness--that is, accurate emotional self-report--has been linked to positive well-being and mental health. However, it is still unclear how emotional awareness is socialized in young children. This observational study examined how a particular parenting communicative style--emotional validation versus emotional invalidation--was…

  7. 20 CFR 655.1132 - When will the Department suspend or invalidate an approved Attestation?

    Science.gov (United States)

    2010-04-01

    ... Requirements Must a Facility Meet to Employ H-1C Nonimmigrant Workers as Registered Nurses? § 655.1132 When... Administrator, where that penalty or remedy assessment has become the final agency action. If an Attestation is... is suspended, invalidated or expired, as long as any H-1C nurse is at the facility, unless the...

  8. Validation of Measures of Biosocial Precursors to Borderline Personality Disorder: Childhood Emotional Vulnerability and Environmental Invalidation

    Science.gov (United States)

    Sauer, Shannon E.; Baer, Ruth A.

    2010-01-01

    Linehan's biosocial theory suggests that borderline personality disorder (BPD) results from a transaction of two childhood precursors: emotional vulnerability and an invalidating environment. Until recently, few empirical studies have explored relationships between these theoretical precursors and symptoms of the disorder. Psychometrically sound…

  9. [Physicians as Experts of the Integration of war invalids of WWI and WWII].

    Science.gov (United States)

    Wolters, Christine

    2015-12-01

    After the First World War the large number of war invalids posed a medical as well as a socio-political problem. This needed to be addressed, at least to some extent, through healthcare providers (Versorgungsbehörden) and reintegration into the labour market. Due to the demilitarization of Germany, this task was taken on by the civil administration, which was dissolved during the time of National Socialism. In 1950, the Federal Republic of Germany enacted the Federal War Victims Relief Act (Bundesversorgungsgesetz), which created a privileged group of civil and military war invalids, whereas other disabled people and victims of national socialist persecution were initially excluded. This article examines the continuities and discontinuities of the institutions following the First World War. A particular focus lies on the groups of doctors which structured this field. How did doctors become experts and what was their expertise?

  10. [The loss of work fitness and the course of invalidism in patients with limb vessel lesions].

    Science.gov (United States)

    Chernenko, V F; Goncharenko, A G; Shuvalov, A Iu; Chernenko, V V; Tarasov, I V

    2005-01-01

    The growth of the sick rate of limb peripheral vessels associated with a severe outcome (trophic ulcers, amputation) exerts an appreciable effect on the lowering of quality of life in patients. This manifests by the prolonged loss of work fitness, change of the habitual occupation and disability establishment. Objective analytical information on this problem will be of help in the delineation of the tendencies in this direction and potential approaches to the prevention of social losses. The present work is based on an analysis of 2115 statements of medicosocial expert evaluation (MSEE) of invalids suffering from diseases of limb vessels, performed over recent 8 years in the Altai region. The decisions made by the MSEE were based on the results of the clinical examination of patients using the current diagnostic modalities (ultrasonography, duplex scanning, angiography, etc). It has been established that among persons who had undergone MSEE, over the half (64.1%) were under 60 years, i.e. in the age of work fitness. It is noteworthy that the overwhelming number of invalids were men (83%) and workers (84.2%). As for special vascular pathologies, the majority of patients presented with obliterative arterial diseases (OAD) of the lower limbs, accounting for 76.3% whereas patients with venous pathology ranked second in number (15.9%). The highest severity of invalidism (groups I and II) was also recorded in OAD (77.5%), especially in atherosclerosis obliterans (AO) which accounted for 84%. Of note, these diseases were marked by no tendency toward reduction of their incidence. The time of temporary disability (from 3 to 9 months) was also most frequently recorded in OAD of the limbs. In OAD, the temporary or persistent loss of work fitness were caused by critical ischemia and amputations whereas in venous pathology, namely in varicosity and post-thrombophlebotic syndrome, the cause was progressing CVI complicated by trophic ulcers. On the whole, the lack of changes in

  11. On the invalidity of Bragg's rule in stopping cross sections of molecules for swift Li ions

    International Nuclear Information System (INIS)

    Neuwirth, W.; Pietsch, W.; Richter, K.; Hauser, U.

    1975-01-01

    We discuss the invalidity of Bragg's rule for stopping cross sections of molecules for Li ions in the velocity range 1.5 x 10 8 cm/sec to 4.8 x 10 8 cm/sec. Here the influence of the chemical bonding in a molecule normally leads to strong deviations from Bragg's additivity rule. In our boron compounds the measured cross section of the molecule is smaller than the sum of the stopping cross sections of the single constituents. This can be explained in a first order description by the transfer of electrons in the bonding. With this description it is possible to determine from the measured molecular stopping cross sections the charge transfer in certain compounds. (orig.) [de

  12. Avoidance of Affect Mediates the Effect of Invalidating Childhood Environments on Borderline Personality Symptomatology in a Non-Clinical Sample

    Science.gov (United States)

    Sturrock, Bonnie A.; Francis, Andrew; Carr, Steven

    2009-01-01

    The aim of this study was to test the Linehan (1993) proposal regarding associations between invalidating childhood environments, distress tolerance (e.g., avoidance of affect), and borderline personality disorder (BPD) symptoms. The sample consisted of 141 non-clinical participants (51 men, 89 women, one gender unknown), ranging in age from 18 to…

  13. Sequential and base rate analysis of emotional validation and invalidation in chronic pain couples: patient gender matters.

    Science.gov (United States)

    Leong, Laura E M; Cano, Annmarie; Johansen, Ayna B

    2011-11-01

    The purpose of this study was to examine the extent to which communication patterns that foster or hinder intimacy and emotion regulation in couples were related to pain, marital satisfaction, and depression in 78 chronic pain couples attempting to problem-solve an area of disagreement in their marriage. Sequences and base rates of validation and invalidation communication patterns were almost uniformly unrelated to adjustment variables unless patient gender was taken into account. Male patient couples' reciprocal invalidation was related to worse pain, but this was not found in female patient couples. In addition, spouses' validation was related to poorer patient pain and marital satisfaction, but only in couples with a male patient. It was not only the presence or absence of invalidation and validation that mattered (base rates), but the context and timing of these events (sequences) that affected patients' adjustment. This research demonstrates that sequences of interaction behaviors that foster and hinder emotion regulation should be attended to when assessing and treating pain patients and their spouses. This article presents analyses of both sequences and base rates of chronic pain couples' communication patterns, focusing on validation and invalidation. These results may potentially improve psychosocial treatments for these couples, by addressing sequential interactions of intimacy and empathy. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  14. 22 CFR 51.63 - Passports invalid for travel into or through restricted areas; prohibition on passports valid...

    Science.gov (United States)

    2010-04-01

    ... restricted areas; prohibition on passports valid only for travel to Israel. 51.63 Section 51.63 Foreign... Passports § 51.63 Passports invalid for travel into or through restricted areas; prohibition on passports... use in a country or area which the Secretary has determined is: (1) A country with which the United...

  15. An experimental pilot study of response to invalidation in young women with features of borderline personality disorder.

    Science.gov (United States)

    Woodberry, Kristen A; Gallo, Kaitlin P; Nock, Matthew K

    2008-01-15

    One of the leading biosocial theories of borderline personality disorder (BPD) suggests that individuals with BPD have biologically based abnormalities in emotion regulation contributing to more intense and rapid responses to emotional stimuli, in particular, invalidation [Linehan, M.M., 1993. Cognitive-Behavioral Treatment of Borderline Personality Disorder. Guilford, New York.]. This study used a 2 by 2 experimental design to test whether young women with features of BPD actually show increased physiological arousal in response to invalidation. Twenty-three women ages 18 to 29 who endorsed high levels of BPD symptoms and 18 healthy controls were randomly assigned to hear either a validating or invalidating comment during a frustrating task. Although we found preliminary support for differential response to these stimuli in self-report of valence, we found neither self-report nor physiological evidence of hyperarousal in the BPD features group, either at baseline or in response to invalidation. Interestingly, the BPD features group reported significantly lower comfort with emotion, and comfort was significantly associated with affective valence but not arousal. We discuss implications for understanding and responding to the affective intensity of this population.

  16. Brain transcriptional stability upon prion protein-encoding gene invalidation in zygotic or adult mouse

    Directory of Open Access Journals (Sweden)

    Béringue Vincent

    2010-07-01

    Full Text Available Abstract Background The physiological function of the prion protein remains largely elusive while its key role in prion infection has been expansively documented. To potentially assess this conundrum, we performed a comparative transcriptomic analysis of the brain of wild-type mice with that of transgenic mice invalidated at this locus either at the zygotic or at the adult stages. Results Only subtle transcriptomic differences resulting from the Prnp knockout could be evidenced, beside Prnp itself, in the analyzed adult brains following microarray analysis of 24 109 mouse genes and QPCR assessment of some of the putatively marginally modulated loci. When performed at the adult stage, neuronal Prnp disruption appeared to sequentially induce a response to an oxidative stress and a remodeling of the nervous system. However, these events involved only a limited number of genes, expression levels of which were only slightly modified and not always confirmed by RT-qPCR. If not, the qPCR obtained data suggested even less pronounced differences. Conclusions These results suggest that the physiological function of PrP is redundant at the adult stage or important for only a small subset of the brain cell population under classical breeding conditions. Following its early reported embryonic developmental regulation, this lack of response could also imply that PrP has a more detrimental role during mouse embryogenesis and that potential transient compensatory mechanisms have to be searched for at the time this locus becomes transcriptionally activated.

  17. The unwanted heroes: war invalids in Poland after World War I.

    Science.gov (United States)

    Magowska, Anita

    2014-04-01

    This article focuses on the unique and hitherto unknown history of disabled ex-servicemen and civilians in interwar Poland. In 1914, thousands of Poles were conscripted into the Austrian, Prussian, and Russian armies and forced to fight against each other. When the war ended and Poland regained independence after more than one hundred years of partition, the fledgling government was unable to provide support for the more than three hundred thousand disabled war victims, not to mention the many civilians left injured or orphaned by the war. The vast majority of these victims were ex-servicemen of foreign armies, and were deprived of any war compensation. Neither the Polish government nor the impoverished society could meet the disabled ex-servicemen's medical and material needs; therefore, these men had to take responsibility for themselves and started cooperatives and war-invalids-owned enterprises. A social collaboration between Poland and America, rare in Europe at that time, was initiated by the Polish community in the United States to help blind ex-servicemen in Poland.

  18. Televīzijas loma neapmierinātībā ar politiku: LTV1, LNT, TV3 nedēļas analītisko raidījumu satura, to veidotāju un ekspertu vērtējumu analīze (2008.gada oktobris-2009.gada marts)

    OpenAIRE

    Novodvorskis, Vladimirs

    2009-01-01

    Maģistra darbu „Televīzijas loma neapmierinātībā ar politiku: LTV1, LNT, TV3 nedēļas analītisko raidījumu satura, to veidotāju un ekspertu vērtējumu analīze (2008. gada oktobris – 2009. gada marts)” izstrādāja Latvijas Universitātes Komunikācijas studiju nodaļas students Vladimirs Novodvorskis. Darbs veltīts auditorijas negatīvas attieksmes veidošanas problēmas izpētei televīzijas informatīvi analītiskajos raidījumos Panorāma, De facto (LTV1), LNT Top 10 (LNT), Nekā personīga (TV3) pret pol...

  19. Motivated reflection on attitude-inconsistent information: an exploration of the role of fear of invalidity in self-persuasion.

    Science.gov (United States)

    Clarkson, Joshua J; Valente, Matthew J; Leone, Christopher; Tormala, Zakary L

    2013-12-01

    The mere thought effect is defined in part by the tendency of self-reflective thought to heighten the generation of and reflection on attitude-consistent thoughts. By focusing on individuals' fears of invalidity, we explored the possibility that the mere opportunity for thought sometimes motivates reflection on attitude-inconsistent thoughts. Across three experiments, dispositional and situational fear of invalidity was shown to heighten reflection on attitude-inconsistent thoughts. This heightened reflection, in turn, interacted with individuals' thought confidence to determine whether attitude-inconsistent thoughts were assimilated or refuted and consequently whether individuals' attitudes and behavioral intentions depolarized or polarized following a sufficient opportunity for thought, respectively. These findings emphasize the impact of motivational influences on thought reflection and generation, the importance of thought confidence in the assimilation and refutation of self-generated thought, and the dynamic means by which the mere thought bias can impact self-persuasion.

  20. Express attentional re-engagement but delayed entry into consciousness following invalid spatial cues in visual search.

    Directory of Open Access Journals (Sweden)

    Benoit Brisson

    Full Text Available BACKGROUND: In predictive spatial cueing studies, reaction times (RT are shorter for targets appearing at cued locations (valid trials than at other locations (invalid trials. An increase in the amplitude of early P1 and/or N1 event-related potential (ERP components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN, to evaluate whether visual selection (as indexed by the N2pc and visual-short term memory processes (as indexed by the SPCN are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to the target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated to the RT effect. CONCLUSIONS/SIGNIFICANCE: Results show that latency differences that could explain the RT cueing effects must occur after visual selection processes giving rise to the N2pc, but at or before transfer in visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN was previously associated to conscious report, these results further show that entry into consciousness is delayed following invalid cues.

  1. Mechanisms of Contextual Risk for Adolescent Self-Injury: Invalidation and Conflict Escalation in Mother-Child Interactions

    Science.gov (United States)

    Crowell, Sheila E.; Baucom, Brian R.; McCauley, Elizabeth; Potapova, Natalia V.; Fitelson, Martha; Barth, Heather; Smith, Cindy J.; Beauchaine, Theodore P.

    2013-01-01

    OBJECTIVE According to developmental theories of self-injury, both child characteristics and environmental contexts shape and maintain problematic behaviors. Although progress has been made toward identifying biological vulnerabilities to self-injury, mechanisms underlying psychosocial risk have received less attention. METHOD In the present study, we compared self-injuring adolescents (n=17) with typical controls (n=20) during a mother-child conflict discussion. Dyadic interactions were coded using both global and microanalytic systems, allowing for a highly detailed characterization of mother-child interactions. We also assessed resting state psychophysiological regulation, as indexed by respiratory sinus arrhythmia (RSA). RESULTS Global coding revealed that maternal invalidation was associated with adolescent anger. Furthermore, maternal invalidation and coerciveness were both related to adolescent opposition/defiance. Results from the microanalytic system indicated that self-injuring dyads were more likely to escalate conflict, suggesting a potential mechanism through which emotion dysregulation is shaped and maintained over time. Finally, mother and teen aversiveness interacted to predict adolescent resting RSA. Low-aversive teens with highly aversive mothers had the highest RSA, whereas teens in high-high dyads showed the lowest RSA. CONCLUSIONS These findings are consistent with theories that emotion invalidation and conflict escalation are possible contextual risk factors for self-injury. PMID:23581508

  2. Educating Jurors about Forensic Evidence: Using an Expert Witness and Judicial Instructions to Mitigate the Impact of Invalid Forensic Science Testimony.

    Science.gov (United States)

    Eastwood, Joseph; Caldwell, Jiana

    2015-11-01

    Invalid expert witness testimony that overstated the precision and accuracy of forensic science procedures has been highlighted as a common factor in many wrongful conviction cases. This study assessed the ability of an opposing expert witness and judicial instructions to mitigate the impact of invalid forensic science testimony. Participants (N = 155) acted as mock jurors in a sexual assault trial that contained both invalid forensic testimony regarding hair comparison evidence, and countering testimony from either a defense expert witness or judicial instructions. Results showed that the defense expert witness was successful in educating jurors regarding limitations in the initial expert's conclusions, leading to a greater number of not-guilty verdicts. The judicial instructions were shown to have no impact on verdict decisions. These findings suggest that providing opposing expert witnesses may be an effective safeguard against invalid forensic testimony in criminal trials. © 2015 American Academy of Forensic Sciences.

  3. PROCEDURAL REASONS FOR INVALIDITY OF DECISIONS MADE BY THE ASSEMBLY IN LIMITED LIABILITY COMPANY - de lege lata vs. de lege ferenda

    Directory of Open Access Journals (Sweden)

    Lidija Šimunović

    2017-01-01

    Full Text Available Procedural reasons, unlike other reasons for invalidity of decisions made by the Assembly in a Limited liability company (hereinafter:Ltd in the judicial and business practice open up the highest number of legal questions. These are “mistakes in the steps” that lead to invalidity of decisions made by the Assembly in Ltd. about which in the domestic legal literature has not been systematically discussed. The starting point for the elaboration of this issue is based on the circumstance that in the provision of article 448 of the Companies Act is stipulated that to the invalidity of decisions made by the Assembly in Ltd. appropriately apply the provisions on the invalidity of decisions made by the General Assembly in PLC (Public Limited Company. Procedural differences in working of the General Assembly in PLC and Assembly in Ltd. is one of the fundamental differences between these two types of capital companies and this kind of positive legal regulation leads to legal uncertainty and misinterpretations. The first part of this paper gives a chronological overview of the domestic law with regard to invalidity of decisions made by the Assembly in Ltd. Then are doctrinally deferred invalid decisions from the other decisions with defect. Then, each provision on the invalidity of decisions made by the General Assembly in PLC is tested and then explicitly formulated provision which is valid only within the context of Ltd. Apart from domestic law are analyzed also solutions from comparative law (especially German because domestic law largely overlaps with the solutions from comparative law. In conclusion after completion of analysis, the obtained findings are used as guidelines for more practical de lege ferenda regulation in the Companies Act regarding the invalidity of decisions made by the Assembly in Ltd.

  4. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    International Nuclear Information System (INIS)

    Little, M.P.

    2010-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported statistical associations for cardiovascular disease is unreliable but cannot be excluded. Inflammatory processes are the most likely mechanism by which radiation could modify the atherosclerotic disease process. If there is to be modification by low doses of ionizing radiation of cardiovascular disease through this mechanism, a role for non-DNA-targeted effects cannot be excluded.

  5. An exploration of the impact of invalid MMPI-2 protocols on collateral self-report measure scores.

    Science.gov (United States)

    Forbey, Johnathan D; Lee, Tayla T C

    2011-11-01

    Although a number of studies have examined the impact of invalid MMPI-2 (Butcher et al., 2001) response styles on MMPI-2 scale scores, limited research has specifically explored the effects that such response styles might have on conjointly administered collateral self-report measures. This study explored the potential impact of 2 invalidating response styles detected by the Validity scales of the MMPI-2, overreporting and underreporting, on scores of collateral self-report measures administered conjointly with the MMPI-2. The final group of participants included in analyses was 1,112 college students from a Midwestern university who completed all measures as part of a larger study. Results of t-test analyses suggested that if either over- or underreporting was indicated by the MMPI-2 Validity scales, the scores of most conjointly administered collateral measures were also significantly impacted. Overall, it appeared that test-takers who were identified as either over- or underreporting relied on such a response style across measures. Limitations and suggestions for future study are discussed.

  6. Evaluating the accuracy of the Wechsler Memory Scale-Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test performance.

    Science.gov (United States)

    Soble, Jason R; Bain, Kathleen M; Bailey, K Chase; Kirton, Joshua W; Marceaux, Janice C; Critchfield, Edan A; McCoy, Karin J M; O'Rourke, Justin J F

    2018-01-08

    Embedded performance validity tests (PVTs) allow for continuous assessment of invalid performance throughout neuropsychological test batteries. This study evaluated the utility of the Wechsler Memory Scale-Fourth Edition (WMS-IV) Logical Memory (LM) Recognition score as an embedded PVT using the Advanced Clinical Solutions (ACS) for WAIS-IV/WMS-IV Effort System. This mixed clinical sample was comprised of 97 total participants, 71 of whom were classified as valid and 26 as invalid based on three well-validated, freestanding criterion PVTs. Overall, the LM embedded PVT demonstrated poor concordance with the criterion PVTs and unacceptable psychometric properties using ACS validity base rates (42% sensitivity/79% specificity). Moreover, 15-39% of participants obtained an invalid ACS base rate despite having a normatively-intact age-corrected LM Recognition total score. Receiving operating characteristic curve analysis revealed a Recognition total score cutoff of < 61% correct improved specificity (92%) while sensitivity remained weak (31%). Thus, results indicated the LM Recognition embedded PVT is not appropriate for use from an evidence-based perspective, and that clinicians may be faced with reconciling how a normatively intact cognitive performance on the Recognition subtest could simultaneously reflect invalid performance validity.

  7. A proposed strategy for the validation of ground-water flow and solute transport models

    International Nuclear Information System (INIS)

    Davis, P.A.; Goodrich, M.T.

    1991-01-01

    Ground-water flow and transport models can be thought of as a combination of conceptual and mathematical models and the data that characterize a given system. The judgment of the validity or invalidity of a model depends both on the adequacy of the data and the model structure (i.e., the conceptual and mathematical model). This report proposes a validation strategy for testing both components independently. The strategy is based on the philosophy that a model cannot be proven valid, only invalid or not invalid. In addition, the authors believe that a model should not be judged in absence of its intended purpose. Hence, a flow and transport model may be invalid for one purpose but not invalid for another. 9 refs

  8. On the (In)Validity of Tests of Simple Mediation: Threats and Solutions

    OpenAIRE

    Pek, Jolynn; Hoyle, Rick H.

    2016-01-01

    Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stems directly from choices regarding these three facets of mediation analysis.

  9. Prevalence of Invalid Performance on Baseline Testing for Sport-Related Concussion by Age and Validity Indicator.

    Science.gov (United States)

    Abeare, Christopher A; Messa, Isabelle; Zuccato, Brandon G; Merker, Bradley; Erdodi, Laszlo

    2018-03-12

    Estimated base rates of invalid performance on baseline testing (base rates of failure) for the management of sport-related concussion range from 6.1% to 40.0%, depending on the validity indicator used. The instability of this key measure represents a challenge in the clinical interpretation of test results that could undermine the utility of baseline testing. To determine the prevalence of invalid performance on baseline testing and to assess whether the prevalence varies as a function of age and validity indicator. This retrospective, cross-sectional study included data collected between January 1, 2012, and December 31, 2016, from a clinical referral center in the Midwestern United States. Participants included 7897 consecutively tested, equivalently proportioned male and female athletes aged 10 to 21 years, who completed baseline neurocognitive testing for the purpose of concussion management. Baseline assessment was conducted with the Immediate Postconcussion Assessment and Cognitive Testing (ImPACT), a computerized neurocognitive test designed for assessment of concussion. Base rates of failure on published ImPACT validity indicators were compared within and across age groups. Hypotheses were developed after data collection but prior to analyses. Of the 7897 study participants, 4086 (51.7%) were male, mean (SD) age was 14.71 (1.78) years, 7820 (99.0%) were primarily English speaking, and the mean (SD) educational level was 8.79 (1.68) years. The base rate of failure ranged from 6.4% to 47.6% across individual indicators. Most of the sample (55.7%) failed at least 1 of 4 validity indicators. The base rate of failure varied considerably across age groups (117 of 140 [83.6%] for those aged 10 years to 14 of 48 [29.2%] for those aged 21 years), representing a risk ratio of 2.86 (95% CI, 2.60-3.16; P < .001). The base rate of failure varied as a function of both the validity indicator and the age of the examinee. The strong age association, with 3 of 4 participants aged 10 to 12 years failing validity indicators, suggests that the published validity cutoffs may be inappropriate for younger examinees.

  10. On the (In)Validity of Tests of Simple Mediation: Threats and Solutions

    Science.gov (United States)

    Pek, Jolynn; Hoyle, Rick H.

    2015-01-01

    Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stems directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlight the limits of simple mediation, and make recommendations for better practice. PMID:26985234
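
    The a, b and c′ paths that simple mediation rests on are usually estimated with two regressions; the minimal sketch below does this on simulated data to make the indirect effect a*b concrete. The data, coefficients and variable names are illustrative assumptions, and, as the article stresses, the causal reading of these paths depends on design, not on the arithmetic.

        import numpy as np

        # Simple mediation X -> M -> Y: estimate a, b, c' and the indirect effect a*b.
        # Simulated data; equivalent models fit these data equally well.
        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)             # path a = 0.5
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # path b = 0.4, direct effect c' = 0.2

        def ols(design, response):
            """Return OLS coefficients, with an intercept prepended to the design."""
            X = np.column_stack([np.ones(len(response))] + list(design))
            beta, *_ = np.linalg.lstsq(X, response, rcond=None)
            return beta

        a = ols([x], m)[1]                  # M ~ X
        b, c_prime = ols([m, x], y)[1:3]    # Y ~ M + X
        print("a =", a, "b =", b, "c' =", c_prime, "indirect a*b =", a * b)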

  11. Estimation of lower-bound KJc on pressure vessel steels from invalid data

    International Nuclear Information System (INIS)

    McCabe, D.E.; Merkle, J.G.

    1996-01-01

    Statistical methods are currently being introduced into the transition temperature characterization of ferritic steels. The objective is to replace imprecise correlations between empirical impact test methods and universal K_Ic or K_Ia lower-bound curves with direct use of material-specific fracture mechanics data. This paper introduces a computational procedure that couples order statistics, weakest-link statistical theory, and a constraint model to arrive at estimates of lower-bound K_Jc values. All of the above concepts have been used before to meet various objectives. In the present case, the scheme is to make a best estimate of lower-bound fracture toughness when the available K_Jc data are too few for conventional statistical analyses. The procedure is of greatest value in the middle-to-high toughness part of the transition range, where specimen constraint loss and elevated lower-bound toughness interfere with conventional statistical analysis methods.

  12. Trpm4 gene invalidation leads to cardiac hypertrophy and electrophysiological alterations.

    Directory of Open Access Journals (Sweden)

    Marie Demion

    Full Text Available RATIONALE: TRPM4 is a non-selective Ca2+-activated cation channel expressed in the heart, particularly in the atria or conduction tissue. Mutations in the Trpm4 gene were recently associated with several human conduction disorders such as Brugada syndrome. The TRPM4 channel has also been implicated at the ventricular level, in inotropism or in arrhythmia genesis due to stresses such as ß-adrenergic stimulation, ischemia-reperfusion, and hypoxia re-oxygenation. However, the physiological role of the TRPM4 channel in the healthy heart remains unclear. OBJECTIVES: We aimed to investigate the role of the TRPM4 channel in whole cardiac function with a Trpm4 gene knock-out mouse (Trpm4-/-) model. METHODS AND RESULTS: Morpho-functional analysis revealed left ventricular (LV) eccentric hypertrophy in Trpm4-/- mice, with an increase in both wall thickness and chamber size in the adult mouse (aged 32 weeks) when compared to Trpm4+/+ littermate controls. Immunofluorescence on frozen heart cryosections and qPCR analysis showed no fibrosis or cellular hypertrophy. Instead, cardiomyocytes in Trpm4-/- mice were smaller than in Trpm4+/+ mice, with a higher density. Immunofluorescent labeling for phospho-histone H3, a mitosis marker, showed that the number of mitotic myocytes was increased 3-fold at the Trpm4-/- neonatal stage, suggesting hyperplasia. Adult Trpm4-/- mice presented multilevel conduction blocks, as attested by PR and QRS lengthening in surface ECGs and confirmed by intracardiac exploration. Trpm4-/- mice also exhibited Luciani-Wenckebach atrioventricular blocks, which were reduced following atropine infusion, suggesting paroxysmal parasympathetic overdrive. In addition, Trpm4-/- mice exhibited shorter action potentials in atrial cells. This shortening was unrelated to modifications of the voltage-gated Ca2+ or K+ currents involved in the repolarizing phase. CONCLUSIONS: TRPM4 has pleiotropic roles in the heart, including the regulation of conduction and cellular

  13. Physical examination tests and imaging studies based on arthroscopic assessment of the long head of biceps tendon are invalid.

    Science.gov (United States)

    Jordan, Robert W; Saithna, Adnan

    2017-10-01

    The aim of this study was to evaluate whether glenohumeral arthroscopy is an appropriate gold standard for the diagnosis of long head of biceps (LHB) tendon pathology. The objectives were to evaluate whether the length of tendon that can be seen at arthroscopy allows visualisation of areas of predilection of pathology and also to determine the rates of missed diagnoses at arthroscopy when compared to an open approach. A systematic review of cadaveric and clinical studies was performed. The search strategy was applied to MEDLINE, PubMed and Google Scholar databases. All relevant articles were included. Critical appraisal of clinical studies was performed using a validated quality assessment scale. Five articles were identified for inclusion in the review. This included both clinical and cadaveric studies. The overall population comprised 18 cadaveric specimens and 575 patients. Out of the five included studies, three reported the length of LHB tendon visualised during arthroscopy and four reported the rate of missed LHB diagnosis. Cadaveric studies showed that the use of a hook probe allowed arthroscopic visualisation of between 34 and 48 % of the overall length of the LHB. In the clinical series, the rate of missed diagnoses at arthroscopy when compared to open exploration ranged between 33 and 49 %. Arthroscopy allows visualisation of only a small part of the extra-articular LHB tendon. This leads to a high rate of missed pathology in the distal part of the tendon. Published figures for sensitivities and specificities of common physical examination and imaging tests for LHB pathology that are based on arthroscopy as the gold standard are therefore invalid. In clinical practice, it is important to note that a "negative" arthroscopic assessment does not exclude a lesion of the LHB tendon as this technique does not allow visualisation of common sites of distal pathology. IV.

  14. Invalidity of the Fermi liquid theory and magnetic phase transition in quasi-1D dopant-induced armchair-edged graphene nanoribbons

    Science.gov (United States)

    Hoi, Bui Dinh; Davoudiniya, Masoumeh; Yarmohammadi, Mohsen

    2018-04-01

    Based on tight-binding calculations considering nearest neighbors and the Green's function technique, we show that a magnetic phase transition in both semiconducting and metallic armchair graphene nanoribbons, with widths ranging from 9.83 Å to 69.3 Å, would be observed when electrons are injected by doping. This transition is explained by the temperature-dependent static charge susceptibility through calculation of the correlation function of charge density operators. This work shows that the charge concentration of dopants in such a system plays a crucial role in determining the magnetic phase. A variety of multicritical points, such as transition temperatures and maximum susceptibilities, are compared in the undoped and doped cases. Our findings show that there exist two different transition temperatures and maximum susceptibilities, depending on the ribbon width, in doped structures. Another remarkable point is the invalidity (validity) of the Fermi liquid theory in nanoribbon-based systems at weak (strong) dopant concentrations. These results on the magnetic phase transition create new potential for magnetic graphene-nanoribbon-based devices.

  15. Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach

    Directory of Open Access Journals (Sweden)

    Rankovic N.

    2013-09-01

    Full Text Available Lean NOx Trap (LNT) catalysts, a promising solution for reducing the noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of an LNT to form thermodynamically favored sulfate species, consequently leading to the blockage of NOx sorption sites and altering the catalyst operation. Molecular and kinetic modeling represent a valuable tool for predicting system behavior and evaluating catalytic performances. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach was SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps and kinks, and bulk) for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.

  16. Differential-difference model for textile engineering

    International Nuclear Information System (INIS)

    Wu Guocheng; Zhao Ling; He Jihuan

    2009-01-01

    Woven fabric is manifestly not a continuum; therefore Darcy's law, its modifications, and other differential models are theoretically invalid. A differential-difference model for air transport in discontinuous media is introduced using conservation of mass, conservation of energy, and the equation of state in discrete space and continuous time; capillary pressure is obtained by dimensional analysis.

  17. Droplet-model electric dipole moments

    International Nuclear Information System (INIS)

    Myers, W.D.; Swiatecki, W.J.

    1991-01-01

    Denisov's recent criticism of the droplet-model formula for the dipole moment of a deformed nucleus, as derived by Dorso et al., is shown to be invalid. This helps to clarify the relation of theory to the measured dipole moments, as discussed in the review article by Aberg et al. (orig.)

  18. Gene-Environment Interplay in Twin Models

    OpenAIRE

    Verhulst, Brad; Hatemi, Peter K.

    2013-01-01

    In this article, we respond to Shultziner’s critique that argues that identical twins are more alike not because of genetic similarity, but because they select into more similar environments and respond to stimuli in comparable ways, and that these effects bias twin model estimates to such an extent that they are invalid. The essay further argues that the theory and methods that undergird twin models, as well as the empirical studies which rely upon them, are unaware of these potential biases...

  19. An examination of adaptive cellular protective mechanisms using a multi-stage carcinogenesis model

    International Nuclear Information System (INIS)

    Schollnberger, H.; Stewart, R. D.; Mitchel, R. E. J.; Hofmann, W.

    2004-01-01

    A multi-stage cancer model that describes the putative rate-limiting steps in carcinogenesis was developed and used to investigate the potential impact on lung cancer incidence of the hormesis mechanisms suggested by Feinendegen and Pollycove. In this deterministic cancer model, radiation and endogenous processes damage the DNA of target cells in the lung. Some fraction of the misrepaired or unrepaired DNA damage induces genomic instability and, ultimately, leads to the accumulation of malignant cells. The model accounts for cell birth and death processes. It also includes a rate of malignant transformation and a lag period for tumour formation. Cellular defence mechanisms are incorporated into the model by postulating dose and dose rate dependent radical scavenging. The accuracy of DNA damage repair also depends on dose and dose rate. Sensitivity studies were conducted to identify critical model inputs and to help define the shapes of the cumulative lung cancer incidence curves that may arise when dose and dose rate dependent cellular defence mechanisms are incorporated into a multi-stage cancer model. For lung cancer, both linear no-threshold (LNT) and non-LNT shaped responses can be obtained. The reported studies clearly show that it is critical to know whether or not and to what extent multiply damaged DNA sites are formed by endogenous processes. Model inputs that give rise to U-shaped responses are consistent with an effective cumulative lung cancer incidence threshold that may be as high as 300 mGy (4 mGy per year for 75 years). (Author) 11 refs
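
    To make the structure of such a model concrete, the sketch below integrates a toy deterministic two-stage version in which normal cells are initiated at a dose-rate-dependent rate, initiated cells die or transform, and malignant cells accumulate. All rate constants, the scavenging term and the dose rates are illustrative assumptions and do not reproduce the authors' parameter values.

        import numpy as np

        # Toy deterministic two-stage model: N (normal) -> I (initiated) -> M (malignant).
        # Rates are purely illustrative; the published model also includes repair fidelity,
        # radical scavenging and a tumour lag period with fitted parameter values.

        def initiation_rate(dose_rate, scavenging=True):
            base_damage = 1e-4                      # endogenous initiation rate per cell-year (assumed)
            radiogenic = 5e-4 * dose_rate           # initiation proportional to dose rate (Gy/yr, assumed)
            if scavenging:                          # crude stand-in for dose-rate-dependent protection
                base_damage *= 1.0 / (1.0 + 2.0 * dose_rate)
            return base_damage + radiogenic

        def simulate(dose_rate, years=75, dt=0.1):
            N, I, M = 1e6, 0.0, 0.0                 # cell pools
            mu_transform, death = 1e-3, 5e-2        # transformation and death rates of initiated cells
            for _ in range(int(years / dt)):
                init = initiation_rate(dose_rate) * N * dt
                N -= init
                I += init - (mu_transform + death) * I * dt
                M += mu_transform * I * dt
            return M

        for dr in (0.0, 0.004, 0.05, 0.3):          # Gy per year
            print(f"dose rate {dr:5.3f} Gy/yr -> malignant cells {simulate(dr):.2f}")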

  20. Attack Tree Generation by Policy Invalidation

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian; Naeem Akram, R.; Jajodia, S.

    2015-01-01

    Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified

  1. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  2. 我國智慧財產訴訟中專利權無效抗辯趨勢報導 The Defense of Patent Invalidity in the Intellectual Property Litigation Special Report

    Directory of Open Access Journals (Sweden)

    陳群顯 Chun-Hsien Chen

    2007-06-01

    Full Text Available In Taiwan's civil intellectual property (IP) litigation, the defendant was formerly constrained by the dual system of public- and private-law proceedings: even if the defendant believed the plaintiff's asserted IP right to be invalid, that claim could only be pursued through administrative remedies and could not be raised as an invalidity defense directly in the civil action, delaying the civil proceedings. Taiwan planned to establish an Intellectual Property Court in 2007, and its creation will have a large and direct impact on IP litigation. The two bills critical to the court's success, the Intellectual Property Court Organization Act and the Intellectual Property Case Adjudication Act, were sent to the Legislative Yuan for review; the Adjudication Act passed its third reading on January 9, 2007, and the Organization Act on March 5, 2007. A landmark change in the Adjudication Act is Article 16, paragraph 1: when a party claims or defends that an IP right should be cancelled or revoked, the court shall itself decide whether that claim or defense is well founded. In other words, this provision directly changes the existing dual system and has a major impact on the parties to patent litigation. Whether the Act can actually achieve the legislature's purpose, and whether complementary mechanisms are needed, remain open questions. This article reviews the evolution of the patent invalidity defense in Taiwan's IP litigation, offers analysis, and provides comparative observations and suggestions based on the design of patent litigation systems in other countries.

  3. Stability of the thermodynamic equilibrium - A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    Science.gov (United States)

    Faghihi, Mustafa; Scheffel, Jan; Spies, Guenther O.

    1988-05-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure.

  4. Stability of the thermodynamic equilibrium: A test of the validity of dynamic models as applied to gyroviscous perpendicular magnetohydrodynamics

    International Nuclear Information System (INIS)

    Faghihi, M.; Scheffel, J.; Spies, G.O.

    1988-01-01

    Stability of the thermodynamic equilibrium is put forward as a simple test of the validity of dynamic equations, and is applied to perpendicular gyroviscous magnetohydrodynamics (i.e., perpendicular magnetohydrodynamics with gyroviscosity added). This model turns out to be invalid because it predicts exponentially growing Alfven waves in a spatially homogeneous static equilibrium with scalar pressure

  5. Distribution of shortest path lengths in a class of node duplication network models

    Science.gov (United States)

    Steinbock, Chanania; Biham, Ofer; Katzav, Eytan

    2017-09-01

    We present analytical results for the distribution of shortest path lengths (DSPL) in a network growth model which evolves by node duplication (ND). The model captures essential properties of the structure and growth dynamics of social networks, acquaintance networks, and scientific citation networks, where duplication mechanisms play a major role. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network, forming a link to the mother node, and with probability p to each one of its neighbors. The degree distribution of the resulting network turns out to follow a power-law distribution, thus the ND network is a scale-free network. To calculate the DSPL we derive a master equation for the time evolution of the probability P_t(L = ℓ), ℓ = 1, 2, ..., where L is the distance between a pair of nodes and t is the time. Finding an exact analytical solution of the master equation, we obtain a closed form expression for P_t(L = ℓ). The mean distance ⟨L⟩_t and the diameter Δ_t are found to scale like ln t, namely, the ND network is a small-world network. The variance of the DSPL is also found to scale like ln t. Interestingly, the mean distance and the diameter exhibit properties of a small-world network, rather than the ultrasmall-world network behavior observed in other scale-free networks, in which ⟨L⟩_t ~ ln ln t.
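
    The growth rule described above is easy to simulate directly; the sketch below grows an ND network by linking each daughter to its mother and, with probability p, to each of the mother's neighbours, and then checks that the mean shortest path length grows slowly (roughly like ln t). The network sizes, the value of p and the use of networkx are illustrative choices, not part of the paper.

        import random
        import networkx as nx

        # Node-duplication (ND) growth: at each step a random mother node is duplicated;
        # the daughter links to the mother and, with probability p, to each of her neighbours.
        def grow_nd_network(n_final, p, seed=0):
            random.seed(seed)
            G = nx.complete_graph(3)                       # small seed network
            for new in range(3, n_final):
                mother = random.choice(list(G.nodes))
                neighbours = list(G.neighbors(mother))
                G.add_edge(new, mother)
                for v in neighbours:
                    if random.random() < p:
                        G.add_edge(new, v)
            return G

        # Mean shortest path length should grow slowly with network size (small-world behaviour).
        for n in (100, 1000, 4000):
            G = grow_nd_network(n, p=0.3)
            print(n, nx.average_shortest_path_length(G))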

  6. Microkinetic Modeling of Lean NOx Trap Sulfation and Desulfation

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Richard S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-08-01

    A microkinetic reaction sub-mechanism designed to account for the sulfation and desulfation of a commercial lean NOx trap (LNT) is presented. This set of reactions is appended to a previously developed mechanism for the normal storage and regeneration processes in an LNT in order to provide a comprehensive modeling tool. The reactions describing the storage, release, and reduction of sulfur oxides are patterned after those involving NOx, but the number of reactions is kept to the minimum necessary to give an adequate simulation of the experimental observations. Values for the kinetic constants are estimated by fitting semi-quantitatively the somewhat limited experimental data, using a transient plug flow reactor code to model the processes occurring in a single monolith channel. Rigorous thermodynamic constraints are imposed in order to ensure that the overall mechanism is consistent both internally and with the known properties of all gas-phase species. The final mechanism is shown to be capable of reproducing the principal aspects of sulfation/desulfation behavior, most notably (a) the essentially complete trapping of SO2 during normal cycling; (b) the preferential sulfation of NOx storage sites over oxygen storage sites and the consequent plug-like and diffuse sulfation profiles; (c) the degradation of NOx storage and reduction (NSR) capability with increasing sulfation level; and (d) the mix of H2S and SO2 evolved during desulfation by temperature-programmed reduction.

  7. Item bias detection in the Hospital Anxiety and Depression Scale using structural equation modeling: comparison with other item bias detection methods

    NARCIS (Netherlands)

    Verdam, M.G.E.; Oort, F.J.; Sprangers, M.A.G.

    Purpose Comparison of patient-reported outcomes may be invalidated by the occurrence of item bias, also known as differential item functioning. We show two ways of using structural equation modeling (SEM) to detect item bias: (1) multigroup SEM, which enables the detection of both uniform and

  8. Non-Fermi-liquid theory of a compactified Anderson single-impurity model

    International Nuclear Information System (INIS)

    Zhang, G.; Hewson, A.C.

    1996-01-01

    We consider a version of the symmetric Anderson impurity model (compactified) which has a non-Fermi-liquid weak-coupling regime. We find that in the Majorana fermion representation the perturbation theory can be conveniently developed in terms of Pfaffian determinants and we use this formalism to calculate the impurity free energy, self-energies, and vertex functions. We derive expressions for the impurity and the local conduction-electron charge and spin-dynamical susceptibilities in terms of the impurity self-energies and vertex functions. In the second-order perturbation theory, a linear temperature dependence of the electrical resistivity is obtained, and the leading corrections to the impurity specific heat are found to behave as T ln T. The impurity static susceptibilities have terms in ln T to zero, first, and second order, and corrections of ln²T to second order as well. The conduction-electron static susceptibilities, and the singlet superconducting paired static susceptibility at the impurity site, have second-order corrections in ln T, which indicate that a singlet conduction-electron pairing resonance forms at the Fermi level (the chemical potential). When the perturbation theory is extended to third order, logarithmic divergences are found in the only vertex function Γ_{0,1,2,3}(0,0,0,0) which is nonvanishing in the zero-frequency limit. We use the multiplicative renormalization-group (RG) method to sum all the leading-order logarithmic contributions. This gives a weak-coupling low-temperature energy scale T_c = Δ exp[-(1/9)(πΔ/U)²], which is a combination of the two independent coupling parameters. The RG scaling equation is derived and shows that the dimensionless coupling constant Ū = U/πΔ is increased as the high-energy scale Δ is reduced, so our perturbational results can be justified in the regime T ≳ T_c.

  9. Evidence for beneficial low level radiation effects and radiation hormesis

    International Nuclear Information System (INIS)

    Feinendegen, L.E.

    2005-01-01

    Low doses in the mGy range cause a dual effect on cellular DNA. One effect concerns a relatively low probability of DNA damage per energy deposition event, which increases proportionally with dose, with possible bystander effects operating. This damage at background radiation exposure is orders of magnitude lower than that from endogenous sources, such as ROS. The other effect, at comparable doses, brings an easily observable adaptive protection against DNA damage from any, mainly endogenous, sources, depending on cell type, species, and metabolism. Protective responses express adaptive responses to metabolic perturbations and also mimic oxygen stress responses. Adaptive protection operates in terms of DNA damage prevention and repair, and of immune stimulation. It develops with a delay of hours, may last for days to months, and increasingly disappears at doses beyond about 100 to 200 mGy. Radiation-induced apoptosis and terminal cell differentiation also occur at higher doses and add to protection by reducing genomic instability and the number of mutated cells in tissues. At low doses, damage reduction by adaptive protection against damage from endogenous sources predictably outweighs radiogenic damage induction. The analysis of the consequences of this particular low-dose scenario shows that the linear-no-threshold (LNT) hypothesis for cancer risk is scientifically unfounded and appears to be invalid in favor of a threshold or hormesis. This is consistent with data both from animal studies and human epidemiological observations on low-dose induced cancer. The LNT hypothesis should be abandoned and replaced by a hypothesis that is scientifically justified. The appropriate model should include terms for both linear and non-linear response probabilities. Maintaining the LNT hypothesis as the basis for radiation protection causes unreasonable fear and expense. (author)

  10. Simulation of deposition and activity distribution of radionuclides in human airways

    International Nuclear Information System (INIS)

    Farkas, A.; Balashazy, I.; Szoke, I.; Hofmann, W.; Golser, R.

    2002-01-01

    The aim of our research is to model the biological processes related to the development of lung cancer in the large central airways, as observed in uranium miners and caused by the inhalation of radionuclides (especially alpha-emitting radon decay products). Statistical data show that in uranium miners lung cancer developed mainly in the third to fifth airway generations, and especially in the right upper lobe. It is therefore important to study the physical and biological effects in this section of the human airways to find relations between radiation dose and adverse health effects. These results may provide useful information about the validity or invalidity of the currently used LNT (linear no-threshold) dose-effect hypothesis at low doses.

  11. Leukemia and ionizing radiation revisited

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)

    2016-03-15

    A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors, in the combined exposed populations of Hiroshima and Nagasaki, invalidates the use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)

  12. Modeling aerodynamic discontinuities and onset of chaos in flight dynamical systems

    Science.gov (United States)

    Tobak, M.; Chapman, G. T.; Unal, A.

    1987-01-01

    Various representations of the aerodynamic contribution to the aircraft's equation of motion are shown to be compatible within the common assumption of their Frechet differentiability. Three forms of invalidating Frechet differentiability are identified, and the mathematical model is amended to accommodate their occurrence. Some of the ways in which chaotic behavior may emerge are discussed, first at the level of the aerodynamic contribution to the equations of motion, and then at the level of the equations of motion themselves.

  13. Modeling aerodynamic discontinuities and the onset of chaos in flight dynamical systems

    Science.gov (United States)

    Tobak, M.; Chapman, G. T.; Uenal, A.

    1986-01-01

    Various representations of the aerodynamic contribution to the aircraft's equation of motion are shown to be compatible within the common assumption of their Frechet differentiability. Three forms of invalidating Frechet differentiability are identified, and the mathematical model is amended to accommodate their occurrence. Some of the ways in which chaotic behavior may emerge are discussed, first at the level of the aerodynamic contribution to the equation of motion, and then at the level of the equations of motion themselves.

  14. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting for AV events N1 may be more sensitive to the relationship of AV timing than form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  15. Galilean invariance in the exponential model of atomic collisions

    International Nuclear Information System (INIS)

    del Pozo, A.; Riera, A.; Yáñez, M.

    1986-01-01

    Using the X^(n+)(1s²) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital, due to interaction with the X^((n+1)+) ion in the exit channel, yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  16. Mechanistic Investigation of the Reduction of NOx over Pt- and Rh-Based LNT Catalysts

    Directory of Open Access Journals (Sweden)

    Lukasz Kubiak

    2016-03-01

    Full Text Available The influence of the noble metals (Pt vs. Rh on the NOx storage reduction performances of lean NOx trap catalysts is here investigated by transient micro-reactor flow experiments. The study indicates a different behavior during the storage in that the Rh-based catalyst showed higher storage capacity at high temperature as compared to the Pt-containing sample, while the opposite is seen at low temperatures. It is suggested that the higher storage capacity of the Rh-containing sample at high temperature is related to the higher dispersion of Rh as compared to Pt, while the lower storage capacity of Rh-Ba/Al2O3 at low temperature is related to its poor oxidizing properties. The noble metals also affect the catalyst behavior upon reduction of the stored NOx, by decreasing the threshold temperature for the reduction of the stored NOx. The Pt-based catalyst promotes the reduction of the adsorbed NOx at lower temperatures if compared to the Rh-containing sample, due to its superior reducibility. However, Rh-based material shows higher reactivity in the NH3 decomposition significantly enhancing N2 selectivity. Moreover, formation of small amounts of N2O is observed on both Pt- and Rh-based catalyst samples only during the reduction of highly reactive NOx stored at 150 °C, where NOx is likely in the form of nitrites.

  17. THE HIGH BACKGROUND RADIATION AREA IN RAMSAR IRAN: GEOLOGY, NORM, BIOLOGY, LNT, AND POSSIBLE REGULATORY FUN

    Energy Technology Data Exchange (ETDEWEB)

    Karam, P. A.

    2002-02-25

    The city of Ramsar, Iran, hosts some of the highest natural radiation levels on earth, and over 2000 people are exposed to radiation doses ranging from 1 to 26 rem per year. Curiously, inhabitants of this region seem to have no greater incidence of cancer than those in neighboring areas of normal background radiation levels, and preliminary studies suggest their blood cells experience fewer induced chromosomal abnormalities when exposed to 150 rem "challenge" doses of radiation than do the blood cells of their neighbors. This paper will briefly describe the unique geology that gives Ramsar its extraordinarily high background radiation levels. It will then summarize the studies performed to date and will conclude by suggesting ways to incorporate these findings (if they are borne out by further testing) into future radiation protection standards.

  18. Luminescence emission in NaCl:Cu X-irradiated at LNT and RT

    International Nuclear Information System (INIS)

    Herreros, J.M.; Jaque, F.

    1979-01-01

    The thermoluminescence (TL) and photostimulated thermoluminescence (PTL) of NaCl:Cu in the temperature range 77-400 K have been studied. For the five peaks found, the order of the recombination kinetics, the pre-exponential factor, the activation energy, and the role of F and F' centres have been analyzed. (Auth.)

  19. Non-targeted effects of radiation: applications for radiation protection and contribution to LNT discussion

    International Nuclear Information System (INIS)

    Belyakov, O.V.; Folkard, M.; Prise, K.M.; Michael, B.D.; Mothersill, C.

    2002-01-01

    According to the target theory of radiation induced effects (Lea, 1946), which forms a central core of radiation biology, DNA damage occurs during or very shortly after irradiation of the nuclei in targeted cells and the potential for biological consequences can be expressed within one or two cell generations. A range of evidence has now emerged that challenges this classical picture of effects arising solely from targeted damage to DNA. These effects have also been termed non-(DNA)-targeted (Ward, 1999) and include radiation-induced bystander effects (Iyer and Lehnert, 2000a), genomic instability (Wright, 2000), adaptive response (Wolff, 1998), low dose hyper-radiosensitivity (HRS) (Joiner, et al., 2001), delayed reproductive death (Seymour, et al., 1986) and induction of genes by radiation (Hickman, et al., 1994). An essential feature of non-targeted effects is that they do not require a direct nuclear exposure by irradiation to be expressed and they are particularly significant at low doses. This new evidence suggests a new paradigm for radiation biology that challenges the universality of target theory. In this paper we will concentrate on radiation-induced bystander effects because of their particular importance for radiation protection

  20. Understanding lack of understanding : Invalidation in rheumatic diseases

    NARCIS (Netherlands)

    Kool, M.B.

    2012-01-01

    The quality of life of patients with chronic rheumatic diseases is negatively influenced by symptoms such as pain, fatigue, and stiffness, and secondary symptoms such as physical limitations and depressive mood. On top of this burden, some patients experience negative responses from others, such as

  1. Invalid Cookery, Nursing and Domestic Medicine in Ireland, c. 1900.

    Science.gov (United States)

    Adelman, Juliana

    2018-04-01

    This article uses a 1903 text by the Irish cookery instructress Kathleen Ferguson to examine the intersections between food, medicine and domestic work. Sick Room Cookery, and numerous texts like it, drew on traditions of domestic medicine and Anglo-Irish gastronomy while also seeking to establish female expertise informed by modern science and medicine. Placing the text in its broader cultural context, the article examines how it fit into the tradition of domestic medicine and the emerging profession of domestic science. Giving equal weight to the history of food and of medicine, and seeing each as shaped by historical context, help us to see the practice of feeding the sick in a different way.

  2. Leaf arrangements are invalid in the taxonomy of orchid species

    Directory of Open Access Journals (Sweden)

    Anna Jakubska-Busse

    2017-07-01

    Full Text Available The selection and validation of proper distinguishing characters are of crucial importance in taxonomic revisions. The modern classifications of orchids utilize the molecular tools, but still the selection and identification of the material used in these studies is for the most part related to general species morphology. One of the vegetative characters quoted in orchid manuals is leaf arrangement. However, phyllotactic diversity and ontogenetic changeability have not been analysed in detail in reference to particular taxonomic groups. Therefore, we evaluated the usefulness of leaf arrangements in the taxonomy of the genus Epipactis Zinn, 1757. Typical leaf arrangements in shoots of this genus are described as distichous or spiral. However, in the course of field research and screening of herbarium materials, we indisputably disproved the presence of distichous phyllotaxis in the species Epipactis purpurata Sm. and confirmed the spiral Fibonacci pattern as the dominant leaf arrangement. In addition, detailed analyses revealed the presence of atypical decussate phyllotaxis in this species, as well as demonstrated the ontogenetic formation of pseudowhorls. These findings confirm ontogenetic variability and plasticity in E. purpurata. Our results are discussed in the context of their significance in delimitations of complex taxa within the genus Epipactis.

  3. OPERA and MINOS Experimental Result Prove Big Bang Theory Invalid

    Science.gov (United States)

    Pressler, David E.

    2012-03-01

    The greatest error in the history of science is the misinterpretation of the Michelson-Morley Experiment. The speed of light was measured to travel at the same speed in all three directions (x, y, z axis) in ones own inertial reference system; however, c will always be measured as having an absolute different speed in all other inertial frames at different energy levels. Time slows down due to motion or a gravity field. Time is the rate of physical process. Speed = Distance/Time. If the time changes the distance must change. Therefore, BOTH mirrors must move towards the center of the interferometer and space must contract in all-three-directions; C-Space. Gravity is a C-Space condition, and is the cause of redshift in our universe-not motion. The universe is not expanding. OPERA results are directly indicated; at the surface of earth, the strength of the gravity field is at maximum-below the earth's surface, time and space is less distorted, C-Space; therefore, c is faster. Newtonian mechanics dictate that a spherical shell of matter at greater radii, with uniform density, produces no net force on an observer located centrally. An observer located on the sphere's surface, like our Earth's or a large sphere, like one located in a remote galaxy, will construct a picture centered on himself to be identical to the one centered inside the spherical shell of mass. Both observers will view the incoming radiation, emitted by the other observer, as redshifted, because they lay on each others radial line. The Universe is static and very old.

  4. Simplified Model of Nonlinear Landau Damping

    International Nuclear Information System (INIS)

    Yampolsky, N.A.; Fisch, N.J.

    2009-01-01

    The nonlinear interaction of a plasma wave with resonant electrons results in a plateau in the electron distribution function close to the phase velocity of the plasma wave. As a result, Landau damping of the plasma wave vanishes and the resonant frequency of the plasma wave downshifts. However, this simple picture is invalid when the external driving force changes the plasma wave fast enough so that the plateau cannot be fully developed. A new model to describe amplification of the plasma wave including the saturation of Landau damping and the nonlinear frequency shift is proposed. The proposed model takes into account the change of the plasma wave amplitude and describes saturation of the Landau damping rate in terms of a single fluid equation, which simplifies the description of the inherently kinetic nature of Landau damping. A proposed fluid model, incorporating these simplifications, is verified numerically using a kinetic Vlasov code.

  5. Stability of the electroweak ground state in the Standard Model and its extensions

    International Nuclear Information System (INIS)

    Di Luzio, Luca; Isidori, Gino; Ridolfi, Giovanni

    2016-01-01

    We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  6. Stability of the electroweak ground state in the Standard Model and its extensions

    Energy Technology Data Exchange (ETDEWEB)

    Di Luzio, Luca, E-mail: diluzio@ge.infn.it [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Isidori, Gino [Department of Physics, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy)

    2016-02-10

    We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  7. Stability of the electroweak ground state in the Standard Model and its extensions

    Directory of Open Access Journals (Sweden)

    Luca Di Luzio

    2016-02-01

    Full Text Available We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  8. Illusory inferences from a disjunction of conditionals: a new mental models account.

    Science.gov (United States)

    Barrouillet, P; Lecas, J F

    2000-08-14

    Johnson-Laird and Savary (1999. Illusory inferences: a novel class of erroneous deductions. Cognition, 71, 191-229) have recently presented a mental models account, based on the so-called principle of truth, for the occurrence of inferences that are compelling but invalid. This article presents an alternative account of the illusory inferences resulting from a disjunction of conditionals. In accordance with our modified theory of mental models of the conditional, we show that the way individuals represent conditionals leads them to misinterpret the locus of the disjunction and prevents them from drawing conclusions from a false conditional, thus accounting for the compelling character of the illusory inference.

  9. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated.

  10. Statistical challenges in modelling the health consequences of social mobility: the need for diagonal reference models.

    Science.gov (United States)

    van der Waal, Jeroen; Daenekindt, Stijn; de Koster, Willem

    2017-12-01

    Various studies on the health consequences of socio-economic position address social mobility. They aim to uncover whether health outcomes are affected by (1) social mobility, in addition to (2) social origin and (3) social destination. Conventional methods do not, however, estimate these three effects separately, which may produce invalid conclusions. We highlight that diagonal reference models (DRMs) overcome this problem, which we illustrate by focusing on overweight/obesity (OWOB). Using conventional methods (logistic-regression analyses with dummy variables) and DRMs, we examine the effects of intergenerational educational mobility on OWOB (BMI ≥ 25 kg/m²) using survey data representative of the Dutch population aged 18-45 (1569 males, 1771 females). Conventional methods suggest that mobility effects on OWOB are present. Analyses with DRMs, however, indicate that no such effects exist. Conventional analyses of the health consequences of social mobility may produce invalid results. We, therefore, recommend the use of DRMs. DRMs also validly estimate the health consequences of other types of social mobility (e.g. intra- and intergenerational occupational and income mobility) and status inconsistency (e.g. in educational or occupational attainment between partners).
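
    The core of a diagonal reference model is that the expected outcome for a mobile person is a weighted mixture of the origin-class and destination-class means, so origin, destination and mobility terms are no longer entangled as they are with dummy variables. The minimal sketch below fits that mixture to simulated data by least squares; the class means, weight and sample are illustrative assumptions, not the study's estimates.

        import numpy as np
        from scipy.optimize import minimize

        # Diagonal reference model (DRM): E[Y | origin i, destination j] = w*mu_i + (1-w)*mu_j.
        # Simulated data; real applications typically add mobility or covariate terms.
        rng = np.random.default_rng(1)
        n_class, n_obs = 3, 2000
        true_mu = np.array([0.2, 0.4, 0.6])     # class-specific means (illustrative)
        true_w = 0.7                            # weight of the origin class (illustrative)
        origin = rng.integers(0, n_class, n_obs)
        dest = rng.integers(0, n_class, n_obs)
        y = true_w * true_mu[origin] + (1 - true_w) * true_mu[dest] + rng.normal(0, 0.1, n_obs)

        def loss(params):
            mu, w = params[:n_class], params[n_class]
            pred = w * mu[origin] + (1 - w) * mu[dest]
            return np.mean((y - pred) ** 2)

        start = np.array([y.mean()] * n_class + [0.5])
        fit = minimize(loss, start, bounds=[(None, None)] * n_class + [(0.0, 1.0)])
        print("estimated class means:", fit.x[:n_class], "origin weight:", fit.x[n_class])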

  11. Numerical modelling of local deposition patterns, activity distributions and cellular hit probabilities of inhaled radon progenies in human airways

    International Nuclear Information System (INIS)

    Farkas, A.; Balashazy, I.; Szoeke, I.

    2003-01-01

    The general objective of our research is modelling the biophysical processes of the effects of inhaled radon progenies. This effort is related to the rejection or support of the linear no threshold (LNT) dose-effect hypothesis, which seems to be one of the most challenging tasks of current radiation protection. Our approximation and results may also serve as a useful tool for lung cancer models. In this study, deposition patterns, activity distributions and alpha-hit probabilities of inhaled radon progenies in the large airways of the human tracheobronchial tree are computed. The airflow fields and related particle deposition patterns strongly depend on the shape of airway geometry and breathing pattern. Computed deposition patterns of attached and unattached radon progenies are strongly inhomogeneous, creating hot spots at the carinal regions and downstream of the inner sides of the daughter airways. The results suggest that in the vicinity of the carinal regions the multiple hit probabilities are quite high even at low average doses and increase exponentially in the low-dose range. Thus, even the so-called low doses may present high doses for large clusters of cells. The cell transformation probabilities are much higher in these regions and this phenomenon cannot be modeled with average burdens. (authors)

  12. [Consolidating the medical model of disability: on poliomyelitis and constitution of orthopedic surgery and orthopaedics as a speciality in Spain (1930-1950)].

    Science.gov (United States)

    Martínez-Pérez, José

    2009-01-01

    At the beginning of the 1930s, various factors made it necessary to transform one of the institutions which was renowned for its work regarding the social reinsertion of the disabled, that is, the Instituto de Reeducación Profesional de Inválidos del Trabajo (Institute for Occupational Retraining of Invalids of Work). The economic crisis of 1929 and the legislative reform aimed at regulating occupational accidents highlighted the failings of this institution to fulfill its objectives. After a time of uncertainty, the centre was renamed the Instituto Nacional de Reeducación de Inválidos (National Institute for Retraining of Invalids). This was done to take advantage of its work in championing the recovery of all people with disabilities.This work aims to study the role played in this process by the poliomyelitis epidemics in Spain at this time. It aims to highlight how this disease justified the need to continue the work of a group of professionals and how it helped to reorient the previous programme to re-educate the "invalids." Thus we shall see the way in which, from 1930 to 1950, a specific medical technology helped to consolidate an "individual model" of disability and how a certain cultural stereotype of those affected developed as a result. Lastly, this work discusses the way in which all this took place in the midst of a process of professional development of orthopaedic surgeons.

  13. An elastic-plastic contact model for line contact structures

    Science.gov (United States)

    Zhu, Haibin; Zhao, Yingtao; He, Zhifeng; Zhang, Ruinan; Ma, Shaopeng

    2018-06-01

    Although numerical simulation tools are now very powerful, the development of analytical models is very important for the prediction of the mechanical behaviour of line contact structures for deeply understanding contact problems and engineering applications. For the line contact structures widely used in the engineering field, few analytical models are available for predicting the mechanical behaviour when the structures deform plastically, as the classic Hertz's theory would be invalid. Thus, the present study proposed an elastic-plastic model for line contact structures based on the understanding of the yield mechanism. A mathematical expression describing the global relationship between load history and contact width evolution of line contact structures was obtained. The proposed model was verified through an actual line contact test and a corresponding numerical simulation. The results confirmed that this model can be used to accurately predict the elastic-plastic mechanical behaviour of a line contact structure.
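
    As background, the elastic (Hertzian) limit that such an elastic-plastic model must reduce to at low loads can be evaluated directly from the load per unit length, the effective radius and the effective contact modulus. The sketch below does this for illustrative steel-on-steel values; it is only the elastic baseline and does not reproduce the authors' elastic-plastic relationship between load history and contact width.

        import math

        # Elastic (Hertz) line contact between a cylinder (radius R) and a flat:
        # half-width b = sqrt(4*w*R / (pi*E_star)), peak pressure p_max = 2*w / (pi*b),
        # with w the load per unit length and E_star the effective contact modulus.
        # Material and geometry values below are illustrative (roughly steel on steel).

        def effective_modulus(E1, nu1, E2, nu2):
            return 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

        def hertz_line_contact(w, R, E_star):
            b = math.sqrt(4.0 * w * R / (math.pi * E_star))
            p_max = 2.0 * w / (math.pi * b)
            return b, p_max

        E_star = effective_modulus(210e9, 0.3, 210e9, 0.3)              # Pa
        b, p_max = hertz_line_contact(w=1.0e5, R=0.01, E_star=E_star)   # 100 kN/m, 10 mm radius
        print(f"half-width b = {b*1e6:.1f} um, peak pressure = {p_max/1e9:.2f} GPa")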

  14. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
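
    For orientation, the sketch below carries out the familiar comparison the article criticizes: it fits a Poisson and a zero-inflated Poisson model to simulated counts and computes the classical Vuong statistic. The data and parameter values are simulated assumptions, and the snippet illustrates the standard Vuong comparison, not the authors' proposed alternative test.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import poisson, norm

        # Simulate counts with excess zeros, then compare Poisson vs zero-inflated Poisson (ZIP).
        rng = np.random.default_rng(42)
        n = 1000
        y = rng.poisson(2.0, n)
        y[rng.random(n) < 0.15] = 0        # inject structural zeros

        # Poisson MLE for an intercept-only model is just the sample mean.
        lam_pois = y.mean()
        ll_pois = poisson.logpmf(y, lam_pois)

        # ZIP log-likelihood: P(0) = pi + (1-pi)e^{-lam}, P(k>0) = (1-pi)*Pois(k; lam).
        def zip_negll(params):
            pi, lam = params
            p0 = pi + (1 - pi) * np.exp(-lam)
            ll = np.where(y == 0, np.log(p0), np.log(1 - pi) + poisson.logpmf(y, lam))
            return -ll.sum()

        fit = minimize(zip_negll, x0=[0.1, y.mean()],
                       bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
        pi_hat, lam_hat = fit.x
        p0 = pi_hat + (1 - pi_hat) * np.exp(-lam_hat)
        ll_zip = np.where(y == 0, np.log(p0), np.log(1 - pi_hat) + poisson.logpmf(y, lam_hat))

        # Classical Vuong statistic from per-observation log-likelihood differences.
        m = ll_zip - ll_pois
        vuong = np.sqrt(n) * m.mean() / m.std(ddof=1)
        print(f"Vuong z = {vuong:.2f}, one-sided p = {1 - norm.cdf(vuong):.4f}")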

  15. Cooling as a method of finding topological dislocations in lattice models

    International Nuclear Information System (INIS)

    Gomberoff, K.

    1989-01-01

    It is well known that the two-dimensional O(3) model has configurations with topological charge Q=1 and action S_min = 6.69. Since the exponent characterizing the renormalization-group behavior of this model is 4π, such configurations invalidate the standard scaling behavior of the topological susceptibility. The analogous exponent for the four-dimensional lattice SU(2) gauge model is 10.77. If configurations with Q=1 and S<10.77 existed in this model, they would invalidate the standard scaling behavior of its topological susceptibility. Kremer et al. have calculated the action of different configurations during cooling runs and report that they do not find any configuration with S<12.7 and Q=1. I show that in the two-dimensional O(3) model cooling runs fail to uncover the well-known configurations with S<8. We conclude that the cooling method is not effective at uncovering the smallest-action configurations in the Q=1 sector.

  16. Linear, no threshold response at low doses of ionizing radiation: ideology, prejudice and science

    International Nuclear Information System (INIS)

    Kesavan, P.C.

    2014-01-01

    The linear, no threshold (LNT) response model assumes that there is no threshold dose for the radiation-induced genetic effects (heritable mutations and cancer), and it forms the current basis for radiation protection standards for radiation workers and the general public. The LNT model is, however, based more on ideology than valid radiobiological data. Further, phenomena such as 'radiation hormesis', 'radioadaptive response', 'bystander effects' and 'genomic instability' are now demonstrated to be radioprotective and beneficial. More importantly, the 'differential gene expression' reveals that qualitatively different proteins are induced by low and high doses. This finding negates the LNT model which assumes that qualitatively similar proteins are formed at all doses. Thus, all available scientific data challenge the LNT hypothesis. (author)

  17. Conformal Invariance in the Long-Range Ising Model

    CERN Document Server

    Paulos, Miguel F; van Rees, Balt C; Zan, Bernardo

    2016-01-01

    We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  18. Conformal invariance in the long-range Ising model

    Directory of Open Access Journals (Sweden)

    Miguel F. Paulos

    2016-01-01

    Full Text Available We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  19. Conformal invariance in the long-range Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Paulos, Miguel F. [CERN, Theory Group, Geneva (Switzerland); Rychkov, Slava, E-mail: slava.rychkov@lpt.ens.fr [CERN, Theory Group, Geneva (Switzerland); Laboratoire de Physique Théorique de l' École Normale Supérieure (LPTENS), Paris (France); Faculté de Physique, Université Pierre et Marie Curie (UPMC), Paris (France); Rees, Balt C. van [CERN, Theory Group, Geneva (Switzerland); Zan, Bernardo [Institute of Physics, Universiteit van Amsterdam, Amsterdam (Netherlands)

    2016-01-15

    We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  20. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged-particle sources are presented together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.

  1. Paradigm lost, paradigm found: The re-emergence of hormesis as a fundamental dose response model in the toxicological sciences

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J. [Environmental Health Sciences, School of Public Health, Morrill I, N344, University of Massachusetts, Amherst, MA 01003 (United States)]. E-mail: edwardc@schoolph.umass.edu

    2005-12-15

    This paper provides an assessment of the toxicological basis of the hormetic dose-response relationship including issues relating to its reproducibility, frequency, and generalizability across biological models, endpoints measured and chemical class/physical stressors and implications for risk assessment. The quantitative features of the hormetic dose response are described and placed within toxicological context that considers study design, temporal assessment, mechanism, and experimental model/population heterogeneity. Particular emphasis is placed on an historical evaluation of why the field of toxicology rejected hormesis in favor of dose response models such as the threshold model for assessing non-carcinogens and linear no threshold (LNT) models for assessing carcinogens. The paper argues that such decisions were principally based on complex historical factors that emerged from the intense and protracted conflict between what is now called traditional medicine and homeopathy and the overly dominating influence of regulatory agencies on the toxicological intellectual agenda. Such regulatory agency influence emphasized hazard/risk assessment goals such as the derivation of no observed adverse effect levels (NOAELs) and the lowest observed adverse effect levels (LOAELs) which were derived principally from high dose studies using few doses, a feature which restricted perceptions and distorted judgments of several generations of toxicologists concerning the nature of the dose-response continuum. Such historical and technical blind spots lead the field of toxicology to not only reject an established dose-response model (hormesis), but also the model that was more common and fundamental than those that the field accepted. - The quantitative features of the hormetic dose/response are described and placed within the context of toxicology.

  2. Paradigm lost, paradigm found: The re-emergence of hormesis as a fundamental dose response model in the toxicological sciences

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2005-01-01

    This paper provides an assessment of the toxicological basis of the hormetic dose-response relationship including issues relating to its reproducibility, frequency, and generalizability across biological models, endpoints measured and chemical class/physical stressors and implications for risk assessment. The quantitative features of the hormetic dose response are described and placed within toxicological context that considers study design, temporal assessment, mechanism, and experimental model/population heterogeneity. Particular emphasis is placed on an historical evaluation of why the field of toxicology rejected hormesis in favor of dose response models such as the threshold model for assessing non-carcinogens and linear no threshold (LNT) models for assessing carcinogens. The paper argues that such decisions were principally based on complex historical factors that emerged from the intense and protracted conflict between what is now called traditional medicine and homeopathy and the overly dominating influence of regulatory agencies on the toxicological intellectual agenda. Such regulatory agency influence emphasized hazard/risk assessment goals such as the derivation of no observed adverse effect levels (NOAELs) and the lowest observed adverse effect levels (LOAELs) which were derived principally from high dose studies using few doses, a feature which restricted perceptions and distorted judgments of several generations of toxicologists concerning the nature of the dose-response continuum. Such historical and technical blind spots lead the field of toxicology to not only reject an established dose-response model (hormesis), but also the model that was more common and fundamental than those that the field accepted. - The quantitative features of the hormetic dose/response are described and placed within the context of toxicology

  3. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is used to fit the mixture model. Bayesian methods are widely used because their asymptotic properties yield remarkable results and because of their consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion; identifying the number of components correctly is important because a wrong choice may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber prices and stock market prices for all selected countries.
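    The paper's fit is fully Bayesian; as a compact frequentist illustration of the component-selection step via the Bayesian Information Criterion, the sketch below uses scikit-learn Gaussian mixtures on a univariate series (the data shape and the range of k are assumptions).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k_by_bic(x, k_max=5, seed=0):
    """Pick the number of mixture components with the smallest BIC."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)   # univariate series
    bics = {}
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(x)
        bics[k] = gm.bic(x)
    return min(bics, key=bics.get), bics

# hypothetical series with two regimes
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(choose_k_by_bic(x))
```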

  4. Possible roles of Peccei-Quinn symmetry in an effective low energy model

    Science.gov (United States)

    Suematsu, Daijiro

    2017-12-01

    The strong CP problem is known to be solved by imposing Peccei-Quinn (PQ) symmetry. However, the domain wall problem caused by the spontaneous breaking of its remnant discrete subgroup could invalidate such models in many cases. We propose a model in which the PQ charge is assigned to quarks so as to escape this problem without introducing any extra colored fermions. In the low-energy effective model resulting after the PQ symmetry breaking, both the quark mass hierarchy and the CKM mixing can be explained through the Froggatt-Nielsen mechanism. If the model is combined with a lepton sector supplemented by an inert doublet scalar and right-handed neutrinos, the effective model reduces to the scotogenic neutrino mass model, in which the origin of neutrino masses and dark matter are closely related. The strong CP problem could thus be related to the quark mass hierarchy, neutrino masses, and dark matter through the PQ symmetry.

  5. Development and demonstration of a validation methodology for vehicle lateral dynamics simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Kutluay, Emir

    2013-02-01

    In this thesis a validation methodology to be used in the assessment of vehicle dynamics simulation models is presented. Simulation of vehicle dynamics is used to estimate the dynamic responses of existing or proposed vehicles and has a wide array of applications in the development of vehicle technologies. Although simulation environments, measurement tools and mathematical theories on vehicle dynamics are well established, the methodical link between the experimental test data and validity analysis of the simulation model is still lacking. The developed validation paradigm has a top-down approach to the problem. It is ascertained that vehicle dynamics simulation models can only be validated using test maneuvers, although they are aimed at real-world maneuvers. Test maneuvers are determined according to the requirements of the real event at the start of the model development project, and data handling techniques, validation metrics and criteria are declared for each of the selected maneuvers. If the simulation results satisfy these criteria, then the simulation is deemed "not invalid". If the simulation model fails to meet the criteria, the model is deemed invalid, and model iteration should be performed. The results are analyzed to determine whether they indicate a modeling error or a modeling inadequacy, and whether a conditional validity in terms of system variables can be defined. Three test cases are used to demonstrate the application of the methodology. The developed methodology successfully identified the shortcomings of the tested simulation model, and defined the limits of application. The tested simulation model is found to be acceptable but valid only in a certain dynamical range. Several insights for the deficiencies of the model are reported in the analysis but the iteration step of the methodology is not demonstrated. Utilizing the proposed methodology will help to achieve more time and cost efficient simulation projects with

  6. Galilean invariance in the exponential model of atomic collisions

    Energy Technology Data Exchange (ETDEWEB)

    del Pozo, A.; Riera, A.; Yáñez, M.

    1986-11-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  7. Theoretical model of granular compaction

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Naim, E. [Los Alamos National Lab., NM (United States); Knight, J.B. [Princeton Univ., NJ (United States). Dept. of Physics; Nowak, E.R. [Univ. of Illinois, Urbana, IL (United States). Dept. of Physics]|[Univ. of Chicago, IL (United States). James Franck Inst.; Jaeger, H.M.; Nagel, S.R. [Univ. of Chicago, IL (United States). James Franck Inst.

    1997-11-01

    Experimental studies show that the density of a vibrated granular material evolves from a low-density initial state into a higher-density final steady state. The relaxation towards the final density follows an inverse logarithmic law. As the system approaches its final state, a growing number of beads have to be rearranged to enable a local density increase. A free-volume argument shows that this number grows as N = ρ/(1 − ρ). The time scale associated with such events increases exponentially, as e^(−N), and as a result a logarithmically slow approach to the final state is found: ρ_∞ − ρ(t) ≈ 1/ln t.
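    A minimal sketch (Python, synthetic data) of fitting the asymptotic inverse-logarithmic relaxation law quoted above; the two-parameter form and the numbers are illustrative, not the paper's full free-volume model.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_log_law(t, rho_inf, b):
    # asymptotic relaxation: rho(t) = rho_inf - b / ln(t), for large t
    return rho_inf - b / np.log(t)

# synthetic "tap number vs packing fraction" data, for illustration only
t = np.logspace(1, 5, 40)
rho = inverse_log_law(t, 0.64, 0.10) + np.random.default_rng(1).normal(0, 1e-3, t.size)

params, cov = curve_fit(inverse_log_law, t, rho, p0=[0.6, 0.1])
print("rho_inf, b =", params)
```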

  8. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  9. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs is a #P-hard problem. The reliability calculation of WSNs on the multicast model provides an even worse combinatorial explosion of node states with respect to the calculation of WSNs on the unicast model; many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our research on OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that the OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing’s OBDD- (ordered binary decision diagram- based algorithm.

  10. A sequential threshold cure model for genetic analysis of time-to-event data

    DEFF Research Database (Denmark)

    Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.

    2011-01-01

    In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study, survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection
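    For intuition about what a cure fraction adds to a survival likelihood, the sketch below writes a bare-bones mixture cure negative log-likelihood (susceptible fraction p, exponential hazard for susceptibles). It is an illustrative stand-in, not the sequential-binary threshold specification used in the study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def cure_model_nll(params, t, event):
    """Mixture cure model: cured individuals never experience the event."""
    logit_p, log_lam = params
    p = expit(logit_p)                 # susceptible fraction
    lam = np.exp(log_lam)              # hazard among susceptibles
    surv = np.exp(-lam * t)
    ll = np.where(event == 1,
                  np.log(p) + np.log(lam) - lam * t,   # susceptible, died at t
                  np.log((1 - p) + p * surv))          # cured, or susceptible and censored
    return -np.sum(ll)

# hypothetical challenge-test data: days to death, 30 = censored survivors
t = np.array([3., 5., 8., 30., 30., 2., 30., 7.])
event = np.array([1, 1, 1, 0, 0, 1, 0, 1])
fit = minimize(cure_model_nll, x0=[0.0, np.log(0.1)], args=(t, event))
print(fit.x)
```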

  11. Comparisons of patch-use models for wintering American tree sparrows

    Science.gov (United States)

    Tome, M.W.

    1990-01-01

    Optimal foraging theory has stimulated numerous theoretical and empirical studies of foraging behavior for >20 years. These models provide a valuable tool for studying the foraging behavior of an organism. As with any other tool, the models are most effective when properly used. For example, to obtain a robust test of a foraging model, Stephens and Krebs (1986) recommend experimental designs in which four questions are answered in the affirmative. First, do the foragers play the same "game" as the model? Second, are the assumptions of the model met? Third, does the test rule out alternative possibilities? Finally, are the appropriate variables measured? Negative answers to any of these questions could invalidate the model and lead to confusion over the usefulness of foraging theory in conducting ecological studies. Gaines (1989) attempted to determine whether American Tree Sparrows (Spizella arborea) foraged by a time (Krebs 1973) or number expectation rule (Gibb 1962), or in a manner consistent with the predictions of Charnov's (1976) marginal value theorem (MVT). Gaines (1989: 118) noted appropriately that field tests of foraging models frequently involve uncontrollable circumstances; thus, it is often difficult to meet the assumptions of the models. Gaines also states (1989: 118) that "violations of the assumptions are also informative but do not constitute robust tests of predicted hypotheses," and that "the problem can be avoided by experimental analyses which concurrently test mutually exclusive hypotheses so that alternative predictions will be eliminated if falsified." There is a problem with this approach because, when major assumptions of models are not satisfied, it is not justifiable to compare a predator's foraging behavior with the model's predictions. I submit that failing to follow the advice offered by Stephens and Krebs (1986) can invalidate tests of foraging models.

  12. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
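    For readers who want to reproduce power figures of this kind, a standard computation (in the spirit of MacCallum-type RMSEA power analysis; not necessarily the authors' exact procedure) evaluates noncentral chi-square tail probabilities. The null and alternative RMSEA values below are conventional choices, not numbers taken from the article.

```python
from scipy.stats import ncx2

def rmsea_power(df, n, alpha=0.05, eps0=0.05, eps_a=0.08):
    """Power of the RMSEA test of close fit for a model with df degrees
    of freedom and sample size n."""
    ncp0 = (n - 1) * df * eps0 ** 2        # noncentrality under H0 (close fit)
    ncp_a = (n - 1) * df * eps_a ** 2      # noncentrality under Ha (mediocre fit)
    crit = ncx2.ppf(1 - alpha, df, ncp0)   # critical value from the H0 distribution
    return ncx2.sf(crit, df, ncp_a)        # P(reject H0 | Ha)

print(rmsea_power(df=40, n=200))
```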

  13. Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows

    Science.gov (United States)

    Rolland, Joran

    2018-02-01

    This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted to these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and domains initially containing their steady turbulent fraction; (ii) the puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing variance noise pushes the state from the laminar base flow to the most probable germ of turbulence which in turn develops into a full blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, is determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of mean first passage time T before collapse is separated in two. One finds that ln(T) = A_p r − B_p, with A_p and B_p positive. Moreover, A_p and B_p are affine in the spatial integral of turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated. The author finds that T ≍ exp[L(A r − B)], with A and B positive. The length and Reynolds number dependence of T are then discussed in view of the large deviations theoretical approaches of the study of mean first passage times and multistability.

  14. Cancer and low dose responses In Vivo: implications for radiation protection

    Energy Technology Data Exchange (ETDEWEB)

    Mitchel, R.E.J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2006-12-15

    This paper discusses the linear no-threshold (LNT) hypothesis, risk prediction and radiation protection. The summary implications for the radiation protection system are that at low doses the conceptual basis of the present system appears to be incorrect. The belief that the current system embodies the precautionary principle and that the LNT assumption is cautious appears incorrect. The concept of dose additivity appears incorrect. Effective dose (Sievert) and the weighting factors on which it is based appear to be invalid. There may be no constant and appropriate value of DDREF for radiological protection dosimetry. The use of dose as a predictor of risk needs to be re-examined. The use of dose limits as a means of limiting risk needs to be re-evaluated.

  15. Multilevel Models: Conceptual Framework and Applicability

    Directory of Open Access Journals (Sweden)

    Roxana-Otilia-Sonia Hrițcu

    2015-10-01

    Full Text Available Individuals and the social or organizational groups they belong to can be viewed as a hierarchical system situated on different levels. Individuals are situated on the first level of the hierarchy and they are nested together on the higher levels. Individuals interact with the social groups they belong to and are influenced by these groups. Traditional methods that study the relationships between data, like simple regression, do not take into account the hierarchical structure of the data and the effects of a group membership and, hence, results may be invalidated. Unlike standard regression modelling, the multilevel approach takes into account the individuals as well as the groups to which they belong. To take advantage of the multilevel analysis it is important that we recognize the multilevel characteristics of the data. In this article we introduce the outlines of multilevel data and we describe the models that work with such data. We introduce the basic multilevel model, the two-level model: students can be nested into classes, individuals into countries and the general two-level model can be extended very easily to several levels. Multilevel analysis has begun to be extensively used in many research areas. We present the most frequent study areas where multilevel models are used, such as sociological studies, education, psychological research, health studies, demography, epidemiology, biology, environmental studies and entrepreneurship. We support the idea that since hierarchies exist everywhere, multilevel data should be recognized and analyzed properly by using multilevel modelling.

  16. Adaptive Modeling of the International Space Station Electrical Power System

    Science.gov (United States)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  17. Dynamic Computation of Change Operations in Version Management of Business Process Models

    Science.gov (United States)

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  18. Principles and interest of GOF tests for multistate capture-recapture models

    Directory of Open Access Journals (Sweden)

    Pradel, R.

    2005-12-01

    Full Text Available Optimal goodness-of-fit procedures for multistate models are new. Drawing a parallel with the corresponding single-state procedures, we present their singularities and show how the overall test can be decomposed into interpretable components. All theoretical developments are illustrated with an application to the now classical study of movements of Canada geese between wintering sites. Through this application, we exemplify how the interpretable components give insight into the data, leading eventually to the choice of an appropriate general model but also sometimes to the invalidation of the multistate models as a whole. The method for computing a corrective overdispersion factor is then mentioned. We also take the opportunity to try to demystify some statistical notions like that of Minimal Sufficient Statistics by introducing them intuitively. We conclude that these tests should be considered an important part of the analysis itself, contributing in ways that the parametric modelling cannot always do to the understanding of the data.

  19. The cooperative effect of p53 and Rb in local nanotherapy in a rabbit VX2 model of hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    Dong S

    2013-10-01

    Full Text Available Shengli Dong,1 Qibin Tang,2 Miaoyun Long,3 Jian Guan,4 Lu Ye,5 Gaopeng Li6 1Department of General Surgery, The Second Hospital of Shanxi Medical University, Shanxi Medical University, Taiyuan, Shanxi Province, 2Department of Hepatobiliopancreatic Surgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 3Department of Thyroid and Vascular Surgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 4Department of Radiology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 5Infection Department, Guangzhou No 8 Hospital, Guangzhou, Guangdong Province, 6Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, People's Republic of China Background/aim: A local nanotherapy (LNT) combining the therapeutic efficacy of trans-arterial embolization, nanoparticles, and p53 gene therapy has been previously presented. The study presented here aimed to further improve the incomplete tumor eradication and limited survival enhancement and to elucidate the molecular mechanism of the LNT. Methods: In a tumor-targeting manner, recombinant expressing plasmids harboring wild-type p53 and Rb were either co-transferred or transferred separately to rabbit hepatic VX2 tumors in a poly-L-lysine-modified hydroxyapatite nanoparticle nanoplex and Lipiodol® (Guerbet, Villepinte, France) emulsion via the hepatic artery. Subsequent co-expression of p53 and Rb proteins within the treated tumors was investigated by Western blotting and in situ analysis by laser-scanning confocal microscopy. The therapeutic effect was evaluated by the tumor growth velocity, apoptosis and necrosis rates, their sensitivity to Adriamycin® (ADM), mitomycin C, and fluorouracil, the microvessel density of tumor tissue, and the survival time of animals. Eventually, real-time polymerase chain reaction and enhanced chemiluminescence Western blotting

  20. Sulfated lentinan induced mitochondrial dysfunction leads to programmed cell death of tobacco BY-2 cells.

    Science.gov (United States)

    Wang, Jie; Wang, Yaofeng; Shen, Lili; Qian, Yumei; Yang, Jinguang; Wang, Fenglong

    2017-04-01

    Sulphated lentinan (sLNT) is known to act as a resistance inducer by causing programmed cell death (PCD) in tobacco suspension cells. However, the underlying mechanism of this effect is largely unknown. Using the tobacco BY-2 cell model, morphological and biochemical studies revealed that mitochondrial reactive oxygen species (ROS) production and mitochondrial dysfunction contribute to sLNT-induced PCD. Cell viability, HO/PI fluorescence imaging and TUNEL assays confirmed a typical cell death process caused by sLNT. Acetylsalicylic acid (an ROS scavenger), diphenylene iodonium (an inhibitor of NADPH oxidases) and carbonyl cyanide p-trifluoromethoxyphenyl hydrazone (a protonophore and an uncoupler of mitochondrial oxidative phosphorylation) inhibited sLNT-induced H2O2 generation and cell death, suggesting that ROS generation is linked, at least partly, to mitochondrial dysfunction and caspase-like activation. This conclusion was further confirmed by double-staining cells with the mitochondria-specific marker MitoTracker Red CMXRos and the ROS probe H2DCFDA. Moreover, the sLNT-induced PCD of BY-2 cells required cellular metabolism, as up-regulation of AOX family gene transcripts and induction of SA biosynthesis, TCA cycle, and miETC related genes were observed. It is concluded that mitochondria play an essential role in the signaling pathway of sLNT-induced ROS generation, which possibly provides new insight into the sLNT-mediated antiviral response, including PCD. Copyright © 2016. Published by Elsevier Inc.

  1. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    Science.gov (United States)

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
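    A minimal sketch of the left-censored (Tobit-type) normal likelihood that underlies such a model, stripped of the spatio-temporal regression structure; the simulated data and optimizer settings are placeholders.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_normal_nll(params, y, censor_point=0.0):
    """Negative log-likelihood of a left-censored normal distribution:
    zeros contribute P(latent <= censor_point), wet days contribute the density."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    censored = y <= censor_point
    ll = np.where(censored,
                  norm.logcdf((censor_point - mu) / sigma),
                  norm.logpdf(y, loc=mu, scale=sigma))
    return -np.sum(ll)

# hypothetical daily precipitation (zeros represent dry days)
y = np.maximum(np.random.default_rng(0).normal(0.5, 2.0, 500), 0.0)
fit = minimize(censored_normal_nll, x0=[0.0, 0.0], args=(y,))
print(fit.x)
```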

  2. Application of dynamic slip wall modeling to a turbine nozzle guide vane

    Science.gov (United States)

    Bose, Sanjeeb; Talnikar, Chaitanya; Blonigan, Patrick; Wang, Qiqi

    2015-11-01

    Resolving near-wall turbulent structures is computationally prohibitive, necessitating wall-modeled large-eddy simulation approaches. Standard wall models are often based on assumptions of equilibrium boundary layers, which do not necessarily account for the dissimilarity of the momentum and thermal boundary layers. We investigate the use of the dynamic slip wall boundary condition (Bose and Moin, 2014) for the prediction of surface heat transfer on a turbine nozzle guide vane (Arts and de Rouvroit, 1992). The heat transfer coefficient is well predicted by the slip wall model, including capturing the transition to turbulence. The sensitivity of the heat transfer coefficient to the incident turbulence intensity will additionally be discussed. Lastly, the behavior of the thermal and momentum slip lengths will be contrasted between regions where the strong Reynolds analogy is invalid (near transition on the suction side) and an isothermal, zero pressure gradient flat plate boundary layer (Wu and Moin, 2010).

  3. Towards product design automation based on parameterized standard model with diversiform knowledge

    Science.gov (United States)

    Liu, Wei; Zhang, Xiaobing

    2017-04-01

    Product standardization based on CAD software is an effective way to improve design efficiency. In the past, research and development on standardization mainly focused on the component level, and standardization of the entire product as a whole was rarely taken into consideration. In this paper, the size and structure of 3D product models are both driven by Excel datasheets, on the basis of which a parameterized model library is established. Diversiform knowledge, including associated parameters and default properties, is embedded into the templates in advance to simplify their reuse. Through simple operations, the correct product can be obtained as finished 3D models comprising single parts or complex assemblies. Two examples are presented to validate the idea, which greatly improves design efficiency.

  4. Use of nonlinear dose-effect models to predict consequences

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has been used by analogy to assess the risk of chemical carcinogens also. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer in atomic bomb survivors. Even so, its proponents vigorously defend it, often with arguments that are not purely scientific but a mix of scientific, societal, and often political considerations. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: first, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models.
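    A small numeric illustration of the "convenient properties exclusive to linearity" mentioned above: under a linear model the expected number of cases depends only on the collective dose, whereas under a hypothetical nonlinear (here linear-quadratic) alternative the same collective dose yields different totals depending on how it is distributed. The coefficients are arbitrary.

```python
import numpy as np

def linear_risk(d, alpha=5e-2):
    return alpha * d

def linear_quadratic_risk(d, alpha=5e-2, beta=5e-2):
    # one of many possible nonlinear forms; parameters chosen for illustration
    return alpha * d + beta * d ** 2

uniform = np.full(1000, 0.01)            # 1000 people at 0.01 Sv each
concentrated = np.zeros(1000)
concentrated[:10] = 1.0                  # same collective dose (10 Sv) in 10 people

for doses in (uniform, concentrated):
    print(doses.sum(), linear_risk(doses).sum(), linear_quadratic_risk(doses).sum())
```

    With the linear model both dose distributions give the same expected number of cases; the nonlinear form does not, which is exactly why total population dose alone stops being a sufficient predictor of population risk.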

  5. Model Reduction in Biomechanics

    Science.gov (United States)

    Feng, Yan

    mechanical parameters from experimental results. However, in the real biological world, these homogeneous and isotropic assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of the results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to the cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. A data-driven model reduction approach to the problem of microtubule mechanics is presented as an application: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of αβ-tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arises from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior, but directly from the data obtained by molecular scale simulation. Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element, and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.

  6. Is the Bifactor Model a Better Model or Is It Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale.

    Science.gov (United States)

    Reise, Steven P; Kim, Dale S; Mansolf, Maxwell; Widaman, Keith F

    2016-01-01

    Although the structure of the Rosenberg Self-Esteem Scale (RSES) has been exhaustively evaluated, questions regarding dimensionality and direction of wording effects continue to be debated. To shed new light on these issues, we ask (a) for what percentage of individuals is a unidimensional model adequate, (b) what additional percentage of individuals can be modeled with multidimensional specifications, and (c) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS) to examine the structure of the RSES in a large, publicly available data set. A distance measure, d_s, reflecting a distance between a response pattern and an estimated model, was used for case weighting. We found that a bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. However, on the basis of d_r values, a distance measure based on individual residuals, we concluded that approximately 86% of cases were adequately modeled through a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged as "unmodelable" due to their significant residuals in all models considered. Finally, analysis of d_s revealed that some, but not all, of the superior fit of the bifactor model is owed to that model's ability to better accommodate implausible and possibly invalid response patterns, and not necessarily because it better accounts for the effects of direction of wording.
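    The case-weighting idea (downweight response patterns that sit far from the fitted model) can be illustrated with a generic IRLS loop. The sketch below uses Huber-type weights on regression residuals as a stand-in; it is not the d_s-based factor-model procedure applied to the RSES.

```python
import numpy as np

def irls(X, y, c=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber-type case weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)        # distant cases get small weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta, w

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
y[:5] += 15.0                                   # a few implausible response patterns
beta, weights = irls(X, y)
print(beta, weights[:5])
```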

  7. Gene-Environment Interplay in Twin Models

    Science.gov (United States)

    Hatemi, Peter K.

    2013-01-01

    In this article, we respond to Shultziner’s critique that argues that identical twins are more alike not because of genetic similarity, but because they select into more similar environments and respond to stimuli in comparable ways, and that these effects bias twin model estimates to such an extent that they are invalid. The essay further argues that the theory and methods that undergird twin models, as well as the empirical studies which rely upon them, are unaware of these potential biases. We correct this and other misunderstandings in the essay and find that gene-environment (GE) interplay is a well-articulated concept in behavior genetics and political science, operationalized as gene-environment correlation and gene-environment interaction. Both are incorporated into interpretations of the classical twin design (CTD) and estimated in numerous empirical studies through extensions of the CTD. We then conduct simulations to quantify the influence of GE interplay on estimates from the CTD. Due to the criticism’s mischaracterization of the CTD and GE interplay, combined with the absence of any empirical evidence to counter what is presented in the extant literature and this article, we conclude that the critique does not enhance our understanding of the processes that drive political traits, genetic or otherwise. PMID:24808718

  8. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. Leveraging the BPEL Event Model to Support QoS-aware Process Execution

    Science.gov (United States)

    Zaid, Farid; Berbner, Rainer; Steinmetz, Ralf

    Business processes executed using compositions of distributed Web Services are susceptible to different fault types. The Web Services Business Process Execution Language (BPEL) is widely used to execute such processes. While BPEL provides fault handling mechanisms to handle functional faults like invalid message types, it still lacks a flexible native mechanism to handle non-functional exceptions associated with violations of QoS levels that are typically specified in a governing Service Level Agreement (SLA). In this paper, we present an approach to complement BPEL's fault handling, where expected QoS levels and necessary recovery actions are specified declaratively in the form of Event-Condition-Action (ECA) rules. Our main contribution is leveraging BPEL's standard event model, which we use as an event space for the created ECA rules. We validate our approach by an extension to an open source BPEL engine.

  10. Scenario and parameter studies on global deposition of radioactivity using the computer model GLODEP2

    International Nuclear Information System (INIS)

    Shapiro, C.S.

    1984-08-01

    The GLODEP2 computer code was utilized to determine biological impact to humans on a global scale using up-to-date estimates of biological risk. These risk factors use varied biological damage models for assessing effects. All the doses reported are the unsheltered, unweathered, smooth terrain, external gamma dose. We assume the unperturbed atmosphere in determining injection and deposition. Effects due to "nuclear winter" may invalidate this assumption. The calculations also include scenarios that attempt to assess the impact of the changing nature of the nuclear stockpile. In particular, the shift from larger to smaller yield nuclear devices significantly changes the injection pattern into the atmosphere, and hence significantly affects the radiation doses that ensue. We have also looked at injections into the equatorial atmosphere. In total, we report here the results for 8 scenarios. 10 refs., 6 figs., 11 tabs

  11. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
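    As a rough illustration of the within-cluster resampling procedure evaluated here (not the authors' implementation), the sketch below repeatedly draws one observation per cluster, fits an ordinary logistic GLM, and averages the coefficients across resamples. The column names and the binomial outcome are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def within_cluster_resampling(df, cluster_col, x_cols, y_col, n_resample=500, seed=0):
    """Average GLM coefficients over resamples of one observation per cluster."""
    rng = np.random.default_rng(seed)
    groups = [g for _, g in df.groupby(cluster_col)]
    coefs = []
    for _ in range(n_resample):
        sample = pd.concat([g.sample(1, random_state=int(rng.integers(1 << 31)))
                            for g in groups])
        X = sm.add_constant(sample[x_cols])
        fit = sm.GLM(sample[y_col], X, family=sm.families.Binomial()).fit()
        coefs.append(fit.params.to_numpy())
    return np.mean(coefs, axis=0)
```

    Because only one observation per cluster enters each fit, informative cluster sizes cannot distort a cluster-specific covariate effect, which is the property contrasted with joint modeling in the abstract.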

  12. Estimation of sexual behavior in the 18-to-24-years-old Iranian youth based on a crosswise model study.

    Science.gov (United States)

    Vakilian, Katayon; Mousavi, Seyed Abbas; Keramat, Afsaneh

    2014-01-13

    In many countries, negative social attitudes towards sensitive issues such as sexual behavior have resulted in false and invalid data concerning this issue. This is an analytical cross-sectional study, in which a total of 1500 single students from universities of Shahroud City were sampled using a multi-stage technique. The students were assured that the information they disclosed to the researchers would be treated as private and confidential. The results were analyzed using the crosswise model, crosswise regression, t-tests and chi-square tests. The estimated prevalence of sexual behavior among Iranian youth is 41% (CI = 36-53). The findings show that the estimated prevalence of sexual relationships among single Iranian youth is high. Thus, devising training models according to Islamic-Iranian culture is necessary in order to prevent risky sexual behavior.
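    For context, the crosswise-model point estimate follows from the known prevalence p of an innocuous pairing question: if λ is the proportion answering "both yes or both no", then λ = πp + (1 − π)(1 − p), so π̂ = (λ̂ + p − 1)/(2p − 1). A minimal sketch with made-up counts (not the study's data):

```python
import numpy as np

def crosswise_estimate(n_same, n_total, p):
    """Crosswise-model prevalence estimate and its standard error.
    p is the known prevalence of the innocuous item (must differ from 0.5)."""
    lam = n_same / n_total
    pi_hat = (lam + p - 1) / (2 * p - 1)
    se = np.sqrt(lam * (1 - lam) / (n_total * (2 * p - 1) ** 2))
    return pi_hat, se

# illustrative numbers only
print(crosswise_estimate(n_same=820, n_total=1500, p=0.25))
```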

  13. A hybrid hydrostatic and non-hydrostatic numerical model for shallow flow simulations

    Science.gov (United States)

    Zhang, Jingxin; Liang, Dongfang; Liu, Hua

    2018-05-01

    The hydrodynamics of geophysical flows in oceanic shelves, estuaries, and rivers are often studied by solving shallow water model equations. Although hydrostatic models are accurate and cost-efficient for many natural flows, there are situations where the hydrostatic assumption is invalid, and a fully hydrodynamic model is necessary to increase simulation accuracy. There is growing interest in reducing the computational cost of non-hydrostatic pressure models in order to broaden their applicability to large-scale flows with complex geometries. This study describes a hybrid hydrostatic and non-hydrostatic model to increase the efficiency of simulating shallow water flows. The basic numerical model is a three-dimensional hydrostatic model solved by the finite volume method (FVM) applied to unstructured grids. Herein, a second-order total variation diminishing (TVD) scheme is adopted. Using a predictor-corrector method to calculate the non-hydrostatic pressure, we extended the hydrostatic model to a fully hydrodynamic model. By localising the computational domain in the corrector step for non-hydrostatic pressure calculations, a hybrid model was developed. No special treatment of mode switching is required, and the developed numerical codes are highly efficient and robust. The hybrid model is applicable to the simulation of shallow flows when non-hydrostatic pressure is predominant only in a local part of the domain; beyond the non-hydrostatic domain, the hydrostatic model remains accurate. The applicability of the hybrid method was validated using several study cases.

  14. modeling workflow management in a distributed computing system

    African Journals Online (AJOL)

    Dr Obe

    Keywords: Distributed computing system; Petri nets; Workflow management.

  15. Thermodynamical aspects of modeling the mechanical response of granular materials

    International Nuclear Information System (INIS)

    Elata, D.

    1995-01-01

    In many applications in rock physics, the material is treated as a continuum. By supplementing the related conservation laws with constitutive equations such as stress-strain relations, a well-posed problem can be formulated and solved. The stress-strain relations may be based on a combination of experimental data and a phenomenological or micromechanical model. If the model is physically sound and its parameters have a physical meaning, it can serve to predict the stress response of the material to unmeasured deformations, predict the stress response of other materials, and perhaps predict other categories of the mechanical response such as failure, permeability, and conductivity. However, it is essential that the model be consistent with all conservation laws and consistent with the second law of thermodynamics. Specifically, some models of the mechanical response of granular materials proposed in the literature are based on intergranular contact force-displacement laws that violate the second law of thermodynamics by permitting energy generation at no cost. This diminishes the usefulness of these models as it invalidates their predictive capabilities. [This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48.]

  16. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    Science.gov (United States)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to the overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters that are held unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to decrease in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and the GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them
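
    As a rough illustration of the ANOVA-based segregation of uncertainty described above, the sketch below partitions the spread of a small synthetic projection ensemble into main effects of GCM, emission scenario and land-use scenario plus a remainder; the ensemble, factor levels and numbers are hypothetical and not taken from the study.

        import numpy as np

        # Hypothetical ensemble: streamflow change (%) indexed by
        # [GCM, emission scenario, land-use scenario].
        rng = np.random.default_rng(0)
        flow = rng.normal(loc=-5.0, scale=3.0, size=(4, 3, 2))

        grand = flow.mean()
        ss_total = ((flow - grand) ** 2).sum()

        def main_effect_ss(data, axis):
            """Classical ANOVA sum of squares for the main effect of one factor."""
            other = tuple(i for i in range(data.ndim) if i != axis)
            level_means = data.mean(axis=other)          # mean per level of the factor
            n_per_level = data.size / data.shape[axis]   # cells averaged per level
            return (n_per_level * (level_means - grand) ** 2).sum()

        ss = {name: main_effect_ss(flow, ax)
              for ax, name in enumerate(["GCM", "emission scenario", "land use"])}
        ss["interactions + residual"] = ss_total - sum(ss.values())

        for source, value in ss.items():
            print(f"{source:>24s}: {100 * value / ss_total:5.1f}% of total variance")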

  17. Systematic validation of non-equilibrium thermochemical models using Bayesian inference

    KAUST Repository

    Miki, Kenji

    2015-10-01

    © 2015 Elsevier Inc. The validation process proposed by Babuška et al. [1] is applied to thermochemical models describing post-shock flow conditions. In this validation approach, experimental data is involved only in the calibration of the models, and the decision process is based on quantities of interest (QoIs) predicted on scenarios that are not necessarily amenable experimentally. Moreover, uncertainties present in the experimental data, as well as those resulting from an incomplete physical model description, are propagated to the QoIs. We investigate four commonly used thermochemical models: a one-temperature model (which assumes thermal equilibrium among all inner modes), and two-temperature models developed by Macheret et al. [2], Marrone and Treanor [3], and Park [4]. Up to 16 uncertain parameters are estimated using Bayesian updating based on the latest absolute volumetric radiance data collected at the Electric Arc Shock Tube (EAST) installed inside the NASA Ames Research Center. Following the solution of the inverse problems, the forward problems are solved in order to predict the radiative heat flux, QoI, and examine the validity of these models. Our results show that all four models are invalid, but for different reasons: the one-temperature model simply fails to reproduce the data while the two-temperature models exhibit unacceptably large uncertainties in the QoI predictions.
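
    The calibrate-then-predict structure described above can be mimicked with a toy example: a single rate parameter is calibrated against synthetic "radiance" data by random-walk Metropolis sampling, and the posterior is then propagated to a prediction scenario. The forward model, noise level and data below are illustrative stand-ins, not the EAST measurements or the thermochemical models of the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        def forward(k, x):
            return 1.0 - np.exp(-k * x)          # toy forward model

        x_obs = np.linspace(0.1, 2.0, 20)
        y_obs = forward(1.3, x_obs) + rng.normal(scale=0.05, size=x_obs.size)

        def log_post(k):
            if k <= 0:
                return -np.inf                   # flat prior on k > 0
            resid = y_obs - forward(k, x_obs)
            return -0.5 * np.sum((resid / 0.05) ** 2)

        # Random-walk Metropolis for the inverse problem
        k, samples = 1.0, []
        lp = log_post(k)
        for _ in range(20000):
            k_new = k + rng.normal(scale=0.05)
            lp_new = log_post(k_new)
            if np.log(rng.random()) < lp_new - lp:
                k, lp = k_new, lp_new
            samples.append(k)
        samples = np.array(samples[5000:])       # discard burn-in

        # Forward problem: propagate the posterior to an unobserved scenario (QoI)
        qoi = forward(samples, 3.0)
        print("posterior mean k:", samples.mean())
        print("QoI 95% interval:", np.percentile(qoi, [2.5, 97.5]))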

  18. Mathematical modelling of complex contagion on clustered networks

    Science.gov (United States)

    O'Sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

  19. Mathematical modelling of complex contagion on clustered networks

    Directory of Open Access Journals (Sweden)

    David J. P. O'Sullivan

    2015-09-01

    Full Text Available The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the complex contagion effects of social reinforcement are important in such diffusion, in contrast to simple contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

  20. Patient choice modelling: how do patients choose their hospitals?

    Science.gov (United States)

    Smith, Honora; Currie, Christine; Chaiwuttisak, Pornpimol; Kyprianou, Andreas

    2018-06-01

    As an aid to predicting future hospital admissions, we compare use of the Multinomial Logit and the Utility Maximising Nested Logit models to describe how patients choose their hospitals. The models are fitted to real data from Derbyshire, United Kingdom, which lists the postcodes of more than 200,000 admissions to six different local hospitals. Both elective and emergency admissions are analysed for this mixed urban/rural area. For characteristics that may affect a patient's choice of hospital, we consider the distance of the patient from the hospital, the number of beds at the hospital and the number of car parking spaces available at the hospital, as well as several statistics publicly available on National Health Service (NHS) websites: an average waiting time, the patient survey score for ward cleanliness, the patient safety score and the inpatient survey score for overall care. The Multinomial Logit model is successfully fitted to the data. Results obtained with the Utility Maximising Nested Logit model show that nesting according to city or town may be invalid for these data; in other words, the choice of hospital does not appear to be preceded by choice of city. In all of the analysis carried out, distance appears to be one of the main influences on a patient's choice of hospital rather than statistics available on the Internet.
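
    A minimal sketch of fitting a multinomial logit choice model by maximum likelihood is given below; the data are simulated (500 admissions, 6 hospitals, distance as the only attribute), so the numbers and the -0.12 "true" coefficient are hypothetical, not the Derbyshire results.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_patients, n_hosp = 500, 6
        dist = rng.uniform(1.0, 40.0, size=(n_patients, n_hosp))   # km to each hospital

        # Simulate choices from a known distance effect so the fit can be checked.
        true_util = -0.12 * dist
        p_choice = np.exp(true_util) / np.exp(true_util).sum(axis=1, keepdims=True)
        chosen = np.array([rng.choice(n_hosp, p=row) for row in p_choice])

        def neg_log_lik(theta):
            utility = theta[0] * dist
            utility -= utility.max(axis=1, keepdims=True)          # numerical stability
            log_p = utility - np.log(np.exp(utility).sum(axis=1, keepdims=True))
            return -log_p[np.arange(n_patients), chosen].sum()

        fit = minimize(neg_log_lik, x0=np.array([0.0]), method="BFGS")
        print("estimated distance coefficient:", fit.x[0])         # close to -0.12

    Further attributes (beds, parking spaces, waiting time, survey scores) would enter as additional columns of the utility in the same way.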

  1. ON THE LAMPPOST MODEL OF ACCRETING BLACK HOLES

    Energy Technology Data Exchange (ETDEWEB)

    Niedźwiecki, Andrzej; Szanecki, Michał [Łódź University, Department of Physics, Pomorska 149/153, 90-236 Łódź (Poland); Zdziarski, Andrzej A. [Centrum Astronomiczne im. M. Kopernika, Bartycka 18, 00-716 Warszawa (Poland)

    2016-04-10

    We study the lamppost model, in which the X-ray source in accreting black hole (BH) systems is located on the rotation axis close to the horizon. We point out a number of inconsistencies in the widely used lamppost model relxilllp, e.g., neglecting the redshift of the photons emitted by the lamppost that are directly observed. They appear to invalidate those model fitting results for which the source distances from the horizon are within several gravitational radii. Furthermore, if those results were correct, most of the photons produced in the lamppost would be trapped by the BH, and the luminosity generated in the source as measured at infinity would be much larger than that observed. This appears to be in conflict with the observed smooth state transitions between the hard and soft states of X-ray binaries. The required increase of the accretion rate and the associated efficiency reduction also present a problem for active galactic nuclei. Then, those models imply the luminosity measured in the local frame is much higher than that produced in the source and measured at infinity, due to the additional effects of time dilation and redshift, and the electron temperature is significantly higher than that observed. We show that these conditions imply that the fitted sources would be out of the e± pair equilibrium. On the other hand, the above issues pose relatively minor problems for sources at large distances from the BH, where relxilllp can still be used.

  2. Stochastic population oscillations in spatial predator-prey models

    International Nuclear Information System (INIS)

    Taeuber, Uwe C

    2011-01-01

    It is well-established that including spatial structure and stochastic noise in models for predator-prey interactions invalidates the classical deterministic Lotka-Volterra picture of neutral population cycles. In contrast, stochastic models yield long-lived, but ultimately decaying erratic population oscillations, which can be understood through a resonant amplification mechanism for density fluctuations. In Monte Carlo simulations of spatial stochastic predator-prey systems, one observes striking complex spatio-temporal structures. These spreading activity fronts induce persistent correlations between predators and prey. In the presence of local particle density restrictions (finite prey carrying capacity), there exists an extinction threshold for the predator population. The accompanying continuous non-equilibrium phase transition is governed by the directed-percolation universality class. We employ field-theoretic methods based on the Doi-Peliti representation of the master equation for stochastic particle interaction models to (i) map the ensuing action in the vicinity of the absorbing state phase transition to Reggeon field theory, and (ii) to quantitatively address fluctuation-induced renormalizations of the population oscillation frequency, damping, and diffusion coefficients in the species coexistence phase.
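
    The decaying, erratic oscillations referred to above are easy to reproduce in a non-spatial stochastic Lotka-Volterra simulation using the Gillespie algorithm; the sketch below is a simplification (no lattice, no carrying capacity) with illustrative rates.

        import numpy as np

        # Reactions: prey birth A -> 2A, predation A + B -> 2B, predator death B -> 0
        rng = np.random.default_rng(2)
        birth, predation, death = 1.0, 0.005, 0.5
        A, B, t, t_end = 200, 100, 0.0, 50.0
        history = [(t, A, B)]

        while t < t_end and A + B > 0:
            rates = np.array([birth * A, predation * A * B, death * B])
            total = rates.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)        # waiting time to the next event
            event = rng.choice(3, p=rates / total)
            if event == 0:
                A += 1
            elif event == 1:
                A -= 1
                B += 1
            else:
                B -= 1
            history.append((t, A, B))

        print("final (time, prey, predators):", history[-1])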

  3. The separatrix response of diverted TCV plasmas compared to the CREATE-L model

    International Nuclear Information System (INIS)

    Vyas, P.; Lister, J.B.; Villone, F.; Albanese, R.

    1997-11-01

    The response of Ohmic, single-null diverted, non-centred plasmas in TCV to poloidal field coil stimulation has been compared to the linear CREATE-L MHD equilibrium response model. The closed loop responses of directly measured quantities, reconstructed parameters, and the reconstructed plasma contour were all examined. Provided that the plasma position and shape perturbation were small enough for the linearity assumption to hold, the model-experiment agreement was good. For some stimulations the open loop vertical position instability growth rate changed significantly, illustrating the limitations of a linear model. A different model was developed with the assumption that the flux at the plasma boundary is frozen and was also compared with experimental results. It proved not to be as reliable as the CREATE-L model for some simulation parameters showing that the experiments were able to discriminate between different plasma response models. The closed loop response was also found to be sensitive to changes in the modelled plasma shape. It was not possible to invalidate the CREATE-L model despite the extensive range of responses excited by the experiments. (author) figs., tabs., 5 refs

  4. Precise generation of systems biology models from KEGG pathways.

    Science.gov (United States)

    Wrzodek, Clemens; Büchel, Finja; Ruff, Manuel; Dräger, Andreas; Zell, Andreas

    2013-02-21

    The KEGG PATHWAY database provides a plethora of pathways for a diversity of organisms. All pathway components are directly linked to other KEGG databases, such as KEGG COMPOUND or KEGG REACTION. Therefore, the pathways can be extended with an enormous amount of information and provide a foundation for initial structural modeling approaches. As a drawback, KGML-formatted KEGG pathways are primarily designed for visualization purposes and often omit important details for the sake of a clear arrangement of their entries. Thus, a direct conversion into systems biology models would produce incomplete and erroneous models. Here, we present a precise method for processing and converting KEGG pathways into initial metabolic and signaling models encoded in the standardized community pathway formats SBML (Levels 2 and 3) and BioPAX (Levels 2 and 3). This method involves correcting invalid or incomplete KGML content, creating complete and valid stoichiometric reactions, translating relations to signaling models and augmenting the pathway content with various information, such as cross-references to Entrez Gene, OMIM, UniProt, ChEBI, and many more. Finally, we compare several existing conversion tools for KEGG pathways and show that the conversion from KEGG to BioPAX does not involve a loss of information, whilst lossless translations to SBML can only be performed using SBML Level 3, including its recently proposed qualitative models and groups extension packages. Building correct BioPAX and SBML signaling models from the KEGG database is a unique characteristic of the proposed method. Further, there is no other approach that is able to appropriately construct metabolic models from KEGG pathways, including correct reactions with stoichiometry. The resulting initial models, which contain valid and comprehensive SBML or BioPAX code and a multitude of cross-references, lay the foundation to facilitate further modeling steps.

  5. Invalidity of the spectral Fokker-Planck equation for Cauchy noise driven Langevin equation

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2004-01-01

    For so-called alpha-stable noise (or Levy noise), the Fokker-Planck equation no longer exists as a partial differential equation for the probability density because the property of finite variance is lost. Instead it has been attempted to formulate an equation for the characteristic function (the Fourier transform

  6. 21 CFR 314.52 - Notice of certification of invalidity or noninfringement of a patent.

    Science.gov (United States)

    2010-04-01

    ..., agent, or other authorized official. The name and address of the application holder or its attorney, agent, or authorized official may be obtained from the Orange Book Staff, Office of Generic Drugs, 7500... of an agent in the United States authorized to accept service of process for the applicant. (d...

  7. 21 CFR 314.95 - Notice of certification of invalidity or noninfringement of a patent.

    Science.gov (United States)

    2010-04-01

    ... of business within the United States, the application holder's attorney, agent, or other authorized official. The name and address of the application holder or its attorney, agent, or authorized official may be obtained from the Orange Book Staff, Office of Generic Drugs, 7500 Standish Pl., Rockville, MD...

  8. Evaluating a Novel Eye Tracking Tool to Detect Invalid Responding in Neurocognitive Assessment

    Science.gov (United States)

    2014-05-07

    traumatique [the psychological exam in cases of traumatic encephalopathy]. Archives de Psychologie 28:286-340. 214. Rey A. 1964. L’examen clinique en psychologie [the clinical exam in psychology]. Paris, France: Presses Universitaires de France. 215. Reynolds CR, Horton AM. 2012. Detection of malingering

  9. 32 CFR 538.5 - Conversion of invalidated military payment certificates.

    Science.gov (United States)

    2010-07-01

    ... be forwarded by the summary court officer to the U.S. Army Finance and Accounting Center for decision... up in the accounts of the finance and accounting officer. Such certificates will be held in... claimant. In the event these certificates are again received by the finance and accounting officer as...

  10. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    Science.gov (United States)

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that consider the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.

  11. 20 CFR 656.30 - Validity of and invalidation of labor certifications.

    Science.gov (United States)

    2010-04-01

    ... Immigration Officer. (2) The filing date, established under § 656.17(c), of an approved labor certification... particular job opportunity, the alien named on the original application (unless a substitution was approved... the written request of a Consular or Immigration Officer. The Certifying Officer shall issue such...

  12. MOTOR REHABILITATION OF INVALIDS WITH INFRINGEMENT OF LOCOMOTOR FUNCTION DUE TO RESIDUAL PHENOMENA OF STROKE

    Directory of Open Access Journals (Sweden)

    G. M. Tsirkin

    2013-01-01

    Full Text Available This paper demonstrates the clinical efficacy of multiparametric biofeedback in patients aged 45 to 60 years with residual phenomena 1 to 5 years after stroke. Comparison was made according to the international scale. Patients in the control group and the main group were selected at random. It was shown that the use of multiparametric biofeedback makes it possible to reduce spasticity, restore body image, improve hemodynamics, increase the adaptive capacity of the body, and improve coordination. At the same time, when compared with medical therapy of spasticity, this technology is an order of magnitude more cost-effective.

  13. Rectification of invalidly published new names for plants from the late Eocene of North Bohemia

    Directory of Open Access Journals (Sweden)

    Kvaček Zlatko

    2015-12-01

    Full Text Available Valid publication of new names of fossil plant taxa published since 1 January 1996 requires a diagnosis or description in English, besides other requirements included in the International Code of Nomenclature for algae, fungi, and plants (Melbourne Code), adopted by the Eighteenth International Botanical Congress, Melbourne, Australia, July 2011 (McNeill et al. 2012). In order to validate names published from the late Eocene flora of the Staré Sedlo Formation, North Bohemia, diagnosed only in German (Knobloch et al. 1996), English translations are provided, including references to the type material and further relevant information.

  14. A New Global Policy Regime Founded on Invalid Statistics? Hanushek, Woessmann, PISA, and Economic Growth

    Science.gov (United States)

    Komatsu, Hikaru; Rappleye, Jeremy

    2017-01-01

    Several recent, highly influential comparative studies have made strong statistical claims that improvements on global learning assessments such as PISA will lead to higher GDP growth rates. These claims have provided the primary source of legitimation for policy reforms championed by leading international organisations, most notably the World…

  15. Most people do not ignore salient invalid cues in memory-based decisions.

    Science.gov (United States)

    Platzer, Christine; Bröder, Arndt

    2012-08-01

    Former experimental studies have shown that decisions from memory tend to rely only on a few cues, following simple noncompensatory heuristics like "take the best." However, it has also repeatedly been demonstrated that a pictorial, as opposed to a verbal, representation of cue information fosters the inclusion of more cues in compensatory strategies, suggesting a facilitated retrieval of cue patterns. These studies did not properly control for visual salience of cues, however. In the experiment reported here, the cue salience hierarchy established in a pilot study was either congruent or incongruent with the validity order of the cues. Only the latter condition increased compensatory decision making, suggesting that the apparent representational format effect is, rather, a salience effect: Participants automatically retrieve and incorporate salient cues irrespective of their validity. Results are discussed with respect to reaction time data.

  16. Genetic invalidation of Lp-PLA2 as a therapeutic target

    DEFF Research Database (Denmark)

    Gregson, John M; Freitag, Daniel F; Surendran, Praveen

    2017-01-01

    AIMS: Darapladib, a potent inhibitor of lipoprotein-associated phospholipase A2 (Lp-PLA2), has not reduced risk of cardiovascular disease outcomes in recent randomized trials. We aimed to test whether Lp-PLA2 enzyme activity is causally relevant to coronary heart disease. METHODS: In 72...... (Val379Ala (rs1051931)) in PLA2G7, the gene encoding Lp-PLA2. We supplemented de-novo genotyping with information on a further 45,823 coronary heart disease patients and 88,680 controls in publicly available databases and other previous studies. We conducted a systematic review of randomized trials...... to compare effects of darapladib treatment on soluble Lp-PLA2 activity, conventional cardiovascular risk factors, and coronary heart disease risk with corresponding effects of Lp-PLA2-lowering alleles. RESULTS: Lp-PLA2 activity was decreased by 64% (p = 2.4 × 10(-25)) with carriage of any of the four loss...

  17. Modeling time-series count data: the unique challenges facing political communication studies.

    Science.gov (United States)

    Fogarty, Brian J; Monogan, James E

    2014-05-01

    This paper demonstrates the importance of proper model specification when analyzing time-series count data in political communication studies. It is common for scholars of media and politics to investigate counts of coverage of an issue as it evolves over time. Many scholars rightly consider the issues of time dependence and dynamic causality to be the most important when crafting a model. However, to ignore the count features of the outcome variable overlooks an important feature of the data. This is particularly the case when modeling data with a low number of counts. In this paper, we argue that the Poisson autoregressive model (Brandt and Williams, 2001) accurately meets the needs of many media studies. We replicate the analyses of Flemming et al. (1997), Peake and Eshbaugh-Soha (2008), and Ura (2009) and demonstrate that models missing some of the assumptions of the Poisson autoregressive model often yield invalid inferences. We also demonstrate that the effect of any of these models can be illustrated dynamically with estimates of uncertainty through a simulation procedure. The paper concludes with implications of these findings for the practical researcher. Copyright © 2013 Elsevier Inc. All rights reserved.
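
    The Poisson autoregressive model advocated above is more elaborate than a plain GLM, but its core idea, count-valued outcomes whose conditional mean depends on past counts, can be sketched with a Poisson regression on lagged counts; the weekly story counts below are simulated, not data from the replicated studies.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 200
        counts = np.empty(n, dtype=int)
        counts[0] = 3
        for t in range(1, n):
            lam = np.exp(0.5 + 0.15 * np.log1p(counts[t - 1]))   # persistence in the mean
            counts[t] = rng.poisson(lam)

        y = counts[1:]
        X = sm.add_constant(np.log1p(counts[:-1]))               # lagged (log) count regressor
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(fit.summary())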

  18. CRADA Final Report for CRADA Number ORNL00-0605: Advanced Engine/Aftertreatment System R&D

    Energy Technology Data Exchange (ETDEWEB)

    Pihl, Josh A [ORNL; West, Brian H [ORNL; Toops, Todd J [ORNL; Adelman, Brad [Navistar; Derybowski, Edward [Navistar

    2011-10-01

    compound experiments confirmed the previous results regarding hydrocarbon reactivity: 1-pentene was the most efficient LNT reductant, followed by toluene. Injection location had minimal impact on the reactivity of these two compounds. Iso-octane was an ineffective LNT reductant, requiring high doses (resulting in high HC emissions) to achieve reasonable NOx conversions. Diesel fuel reactivity was sensitive to injection location, with the best performance achieved through fuel injection downstream of the DOC. This configuration generated large LNT temperature excursions, which probably improved the efficiency of the NOx storage/reduction process, but also resulted in very high HC emissions. The ORNL team demonstrated an LNT desulfation under 'road load' conditions using throttling, EGR, and in-pipe injection of diesel fuel. Flow reactor characterization of core samples cut from the front and rear of the engine-aged LNT revealed complex spatially dependent degradation mechanisms. The front of the catalyst contained residual sulfates, which impacted NOx storage and conversion efficiencies at high temperatures. The rear of the catalyst showed significant sintering of the washcoat and precious metal particles, resulting in lower NOx conversion efficiencies at low temperatures. Further flow reactor characterization of engine-aged LNT core samples established that low temperature performance was limited by slow release and reduction of stored NOx during regeneration. Carbon monoxide was only effective at regenerating the LNT at temperatures above 200 °C; propene was unreactive even at 250 °C. Low temperature operation also resulted in unselective NOx reduction, resulting in high emissions of both N2O and NH3. During the latter years of the CRADA, the focus was shifted from LNTs to other aftertreatment devices. Two years of the CRADA were spent developing detailed ammonia SCR device models with sufficient accuracy and computational efficiency to be used in

  19. Dose and Dose-Rate Effectiveness Factor (DDREF); Der Dosis- und Dosisleistungs-Effektivitaetsfaktor (DDREF)

    Energy Technology Data Exchange (ETDEWEB)

    Breckow, Joachim [Fachhochschule Giessen-Friedberg, Giessen (Germany). Inst. fuer Medizinische Physik und Strahlenschutz

    2016-08-01

    For practical radiation protection purposes it is assumed that stochastic radiation effects are determined by a proportional (linear no-threshold, LNT) dose relation. Radiobiological and radiation epidemiological studies have indicated that in the low dose range a dependence on dose rate might exist. This would lead to an overestimation of radiation risks based on the LNT model. The ICRP had recommended a concept to combine all such effects in a single factor, the DDREF (dose and dose-rate effectiveness factor). There is still too little information on the cellular mechanisms of low dose irradiation, including possible repair and other processes. The Strahlenschutzkommission cannot identify a sufficient scientific justification for the DDREF and recommends an adaptation to the current state of science.

  20. Photo- and electro-luminescence of rare earth doped ZnO electroluminors at liquid nitrogen temperature

    International Nuclear Information System (INIS)

    Bhushan, S.; Kaza, B.R.; Pandey, A.N.

    1981-01-01

    Photoluminescence (PL) and electroluminescence (EL) spectra of some rare earth (La, Gd, Er or Dy) doped ZnO electroluminors have been investigated at liquid nitrogen temperature (LNT) and compared with their corresponding results at room temperature (RT). In addition to three bands observed at RT, one more band on the higher wavelength side appears in EL spectra. Spectral shift with the exciting intensity at LNT supports the donor-acceptor (DA) model in which the rare earths form the donor levels. From the temperature dependent studies of PL and EL brightness, the EL phenomenon is found to be more susceptible to traps. (author)

  1. Validation of regression models for nitrate concentrations in the upper groundwater in sandy soils

    International Nuclear Information System (INIS)

    Sonneveld, M.P.W.; Brus, D.J.; Roelsma, J.

    2010-01-01

    For Dutch sandy regions, linear regression models have been developed that predict nitrate concentrations in the upper groundwater on the basis of residual nitrate contents in the soil in autumn. The objective of our study was to validate these regression models for one particular sandy region dominated by dairy farming. No data from this area were used for calibrating the regression models. The model was validated by additional probability sampling. This sample was used to estimate errors in 1) the predicted areal fractions where the EU standard of 50 mg l⁻¹ is exceeded for farms with low N surpluses (ALT) and farms with higher N surpluses (REF); 2) predicted cumulative frequency distributions of nitrate concentration for both groups of farms. Both the errors in the predicted areal fractions as well as the errors in the predicted cumulative frequency distributions indicate that the regression models are invalid for the sandy soils of this study area. - This study indicates that linear regression models that predict nitrate concentrations in the upper groundwater using residual soil N contents should be applied with care.

  2. Lipoproteins of slow-growing Mycobacteria carry three fatty acids and are N-acylated by apolipoprotein N-acyltransferase BCG_2070c.

    Science.gov (United States)

    Brülle, Juliane K; Tschumi, Andreas; Sander, Peter

    2013-10-05

    Lipoproteins are virulence factors of Mycobacterium tuberculosis. Bacterial lipoproteins are modified by the consecutive action of preprolipoprotein diacylglyceryl transferase (Lgt), prolipoprotein signal peptidase (LspA) and apolipoprotein N-acyltransferase (Lnt) leading to the formation of mature triacylated lipoproteins. Lnt homologues are found in Gram-negative and high GC-rich Gram-positive, but not in low GC-rich Gram-positive bacteria, although N-acylation is observed. In fast-growing Mycobacterium smegmatis, the molecular structure of the lipid modification of lipoproteins was resolved recently as a diacylglyceryl residue carrying ester-bound palmitic acid and ester-bound tuberculostearic acid and an additional amide-bound palmitic acid. We exploit the vaccine strain Mycobacterium bovis BCG as model organism to investigate lipoprotein modifications in slow-growing mycobacteria. Using Escherichia coli Lnt as a query in BLASTp search, we identified BCG_2070c and BCG_2279c as putative lnt genes in M. bovis BCG. Lipoproteins LprF, LpqH, LpqL and LppX were expressed in M. bovis BCG and BCG_2070c lnt knock-out mutant and lipid modifications were analyzed at molecular level by matrix-assisted laser desorption ionization time-of-flight/time-of-flight analysis. Lipoprotein N-acylation was observed in wildtype but not in BCG_2070c mutants. Lipoprotein N-acylation with palmitoyl and tuberculostearyl residues was observed. Lipoproteins are triacylated in slow-growing mycobacteria. BCG_2070c encodes a functional Lnt in M. bovis BCG. We identified mycobacteria-specific tuberculostearic acid as further substrate for N-acylation in slow-growing mycobacteria.

  3. An interface tracking model for droplet electrocoalescence.

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.

  4. Combinatorial DNA Damage Pairing Model Based on X-Ray-Induced Foci Predicts the Dose and LET Dependence of Cell Death in Human Breast Cells

    Energy Technology Data Exchange (ETDEWEB)

    Vadhavkar, Nikhil [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Pham, Christopher [University of Texas, Houston, TX (United States). MD Anderson Cancer Center; Georgescu, Walter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Deschamps, Thomas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Heuskin, Anne-Catherine [Univ. of Namur (Belgium). Namur Research inst. for Life Sciences (NARILIS), Research Center for the Physics of Matter and Radiation (PMR); Tang, Jonathan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Costes, Sylvain V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.

    2014-09-01

    are based on experimental RIF and are three times larger than the hypothetical LEM voxel used to fit survival curves. Our model is therefore an alternative to previous approaches that provides a testable biological mechanism (i.e., RIF). In addition, we propose that DSB pairing will help develop more accurate alternatives to the linear cancer risk model (LNT) currently used for regulating exposure to very low levels of ionizing radiation.

  5. Modeling of the Earth's gravity field using the New Global Earth Model (NEWGEM)

    Science.gov (United States)

    Kim, Yeong E.; Braswell, W. Danny

    1989-01-01

    Traditionally, the global gravity field was described by representations based on the spherical harmonics (SH) expansion of the geopotential. The SH expansion coefficients were determined by fitting the Earth's gravity data as measured by many different methods including the use of artificial satellites. As gravity data have accumulated with increasingly better accuracies, more of the higher order SH expansion coefficients were determined. The SH representation is useful for describing the gravity field exterior to the Earth but is theoretically invalid on the Earth's surface and in the Earth's interior. A new global Earth model (NEWGEM) (KIM, 1987 and 1988a) was recently proposed to provide a unified description of the Earth's gravity field inside, on, and outside the Earth's surface using the Earth's mass density profile as deduced from seismic studies, elevation and bathymetric information, and local and global gravity data. Using NEWGEM, it is possible to determine the constraints on the mass distribution of the Earth imposed by gravity, topography, and seismic data. NEWGEM is useful in investigating a variety of geophysical phenomena. It is currently being utilized to develop a geophysical interpretation of Kaula's rule. The zeroth order NEWGEM is being used to numerically integrate spherical harmonic expansion coefficients and simultaneously determine the contribution of each layer in the model to a given coefficient. The numerically determined SH expansion coefficients are also being used to test the validity of SH expansions at the surface of the Earth by comparing the resulting SH expansion gravity model with exact calculations of the gravity at the Earth's surface.

  6. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    Full Text Available This paper assesses various critiques that have been levelled over the years against Thirlwall’s Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law is largely capturing an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and by implication other advanced countries) could not have been balance-of-payments constrained in the early postwar period. Next, Krugman’s interpretation of the law (or what he terms the “45-degree rule”), which is at variance with the usual demand-oriented explanation, is examined. The paper next assesses attempts to reconcile the demand and supply side of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.
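
    For reference, one common statement of Thirlwall's Law writes the balance-of-payments constrained growth rate as

        \[
          y_{B} \;=\; \frac{x}{\pi} \;=\; \frac{\varepsilon\, z}{\pi},
        \]

    where $x$ is the growth rate of export volumes, $z$ the growth rate of world income, $\varepsilon$ the world income elasticity of demand for the country's exports, and $\pi$ the domestic income elasticity of demand for imports; the notation here is generic and may differ from that used in the paper.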

  7. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Full Text Available Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning. It is therefore not very accessible. Using modeling to generate source code or represent applications’ workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application’s workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.

  8. Analysis of correlations between sites in models of protein sequences

    International Nuclear Information System (INIS)

    Giraud, B.G.; Lapedes, A.; Liu, L.C.

    1998-01-01

    A criterion based on conditional probabilities, related to the concept of algorithmic distance, is used to detect correlated mutations at noncontiguous sites on sequences. We apply this criterion to the problem of analyzing correlations between sites in protein sequences; however, the analysis applies generally to networks of interacting sites with discrete states at each site. Elementary models, where explicit results can be derived easily, are introduced. The number of states per site considered ranges from 2, illustrating the relation to familiar classical spin systems, to 20 states, suitable for representing amino acids. Numerical simulations show that the criterion remains valid even when the genetic history of the data samples (e.g., protein sequences), as represented by a phylogenetic tree, introduces nonindependence between samples. Statistical fluctuations due to finite sampling are also investigated and do not invalidate the criterion. A subsidiary result is found: The more homogeneous a population, the more easily its average properties can drift from the properties of its ancestor. copyright 1998 The American Physical Society
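
    A simpler stand-in for the conditional-probability criterion described above is the mutual information between two sites, which is likewise zero for independent sites and positive for correlated ones; the toy two-state "alignment" below is simulated and the 0.8 coupling is arbitrary.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(8)
        n_seq = 400
        col_a = rng.integers(0, 2, size=n_seq)
        col_b = np.where(rng.random(n_seq) < 0.8, col_a, 1 - col_a)  # correlated site
        col_c = rng.integers(0, 2, size=n_seq)                       # independent site

        def mutual_information(x, y):
            joint = Counter(zip(x.tolist(), y.tolist()))
            n = len(x)
            mi = 0.0
            for (a, b), c in joint.items():
                p_ab = c / n
                p_a = np.mean(x == a)
                p_b = np.mean(y == b)
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
            return mi

        print("MI(a, b) =", mutual_information(col_a, col_b))       # clearly positive
        print("MI(a, c) =", mutual_information(col_a, col_c))       # near zero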

  9. A model for electron currents near a field null

    International Nuclear Information System (INIS)

    Stark, R.A.; Miley, G.H.

    1987-01-01

    The fluid approximation is invalid near a field null, since the local electron orbit size and the magnetic scale length are comparable. To model the electron currents in this region we propose a single equation of motion describing the bulk electron dynamics. The equation applies to the plasma within one thermal orbit size of the null. The region is treated as unmagnetized; electrons are accelerated by the inductive electric field and drag on ions; damping is provided by viscosity due to electrons and collisions with ions. Through variational calculations and a particle tracking code for electrons, the size of the terms in the equation of motion have been estimated. The resulting equation of motion combines with Faraday's Law to produce a governing equation which implicitly contains the self inductive field of the electrons. This governing equation predicts that viscosity prevents complete cancellation of the ion current density by the electrons in the null region. Thus electron dynamics near the field null should not prevent the formation and deepening of field reversal using neutral-beam injection

  10. Why we need new approaches to low-dose risk modeling

    International Nuclear Information System (INIS)

    Alvarez, J.L.; Seiler, F.A.

    1996-01-01

    The linear no-threshold model for radiation effects was introduced as a conservative model for the design of radiation protection programs. The model has persisted not only as the basis for such programs, but has come to be treated as a dogma and is often confused with scientific fact. In this examination a number of serious problems with the linear no-threshold model of radiation carcinogenesis were demonstrated, many of them invalidating the hypothesis. It was shown that the relative risk formalism did not approach 1 as the dose approaches zero. When mortality ratios were used instead, the data in the region below 0.3 Sv were systematically below the predictions of the linear model. It was also shown that the data above 0.3 Sv were of little use in formulating a model at low doses. In addition, these data are valid only for doses accumulated at high dose rates, and there is no scientific justification for using the model in low-dose, low-dose-rate extrapolations for purposes of radiation protection. Further examination of model fits to the Japanese survivor data was attempted. Several such models were fit to the data including an unconstrained linear, linear-square root, and Weibull, all of which fit the data better than the relative risk, linear no-threshold model. These fits were used to demonstrate that the linear model systematically overestimates the risk at low doses in the Japanese survivor data set. It is recommended here that an unbiased re-analysis of the data be undertaken and the results used to construct a new model, based on all pertinent data. This model could then form the basis for managing radiation risks in the appropriate regions of dose and dose rate.

  11. A hybrid model for computing nonthermal ion distributions in a long mean-free-path plasma

    Science.gov (United States)

    Tang, Xianzhu; McDevitt, Chris; Guo, Zehua; Berk, Herb

    2014-10-01

    Non-thermal ions, especially the suprathermal ones, are known to make a dominant contribution to a number of important physical quantities such as the fusion reactivity in controlled fusion, the ion heat flux, and in the case of a tokamak, the ion bootstrap current. Evaluating the deviation from a local Maxwellian distribution of these non-thermal ions can be a challenging task in the context of a global plasma fluid model that evolves the plasma density, flow, and temperature. Here we describe a hybrid model for coupling such a constrained kinetic calculation to global plasma fluid models. The key ingredient is a non-perturbative treatment of the tail ions where the ion Knudsen number approaches or surpasses order unity. This can be sharply contrasted with the standard Chapman-Enskog approach which relies on a perturbative treatment that is frequently invalidated. The accuracy of our coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space. Although our specific application examples will be drawn from laboratory controlled fusion experiments, the general approach is applicable to space and astrophysical plasmas as well. Work supported by DOE.

  12. A numerical cloud model to interpret the isotope content of hailstones

    International Nuclear Information System (INIS)

    Jouzel, J.; Brichet, N.; Thalmann, B.; Federer, B.

    1980-07-01

    Measurements of the isotope content of hailstones are frequently used to deduce their trajectories and updraft speeds within severe storms. The interpretation was made in the past on the basis of an adiabatic equilibrium model in which the stones grew exclusively by interaction with droplets and vapor. Using the 1D steady-state model of Hirsch with parametrized cloud physics, these unrealistic assumptions were dropped and the effects of interactions between droplets, drops, ice crystals and graupel on the concentrations of stable isotopes in hydrometeors were taken into account. The construction of the model is briefly discussed. The resulting height profiles of D and 18O in hailstones deviate substantially from the equilibrium case, rendering most earlier trajectory calculations invalid. It is also seen that in the lower cloud layers the ice of the stones is richer due to relaxation effects, but at higher cloud layers (T(a) < 0 °C) the ice is much poorer in isotopes. This yields a broader spread of the isotope values in the interval 0 °C > T(a) > -35 °C or, alternatively, it means that hailstones with a very large range of measured isotope concentrations grow in a smaller and therefore more realistic temperature interval. The use of the model in practice will be demonstrated.

  13. Statistical mechanics of normal grain growth in one dimension: A partial integro-differential equation model

    International Nuclear Information System (INIS)

    Ng, Felix S.L.

    2016-01-01

    We develop a statistical-mechanical model of one-dimensional normal grain growth that does not require any drift-velocity parameterization for grain size, such as used in the continuity equation of traditional mean-field theories. The model tracks the population by considering grain sizes in neighbour pairs; the probability of a pair having neighbours of certain sizes is determined by the size-frequency distribution of all pairs. Accordingly, the evolution obeys a partial integro-differential equation (PIDE) over ‘grain size versus neighbour grain size’ space, so that the grain-size distribution is a projection of the PIDE's solution. This model, which is applicable before as well as after statistically self-similar grain growth has been reached, shows that the traditional continuity equation is invalid outside this state. During statistically self-similar growth, the PIDE correctly predicts the coarsening rate, invariant grain-size distribution and spatial grain size correlations observed in direct simulations. The PIDE is then reducible to the standard continuity equation, and we derive an explicit expression for the drift velocity. It should be possible to formulate similar parameterization-free models of normal grain growth in two and three dimensions.

  14. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    Science.gov (United States)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D − 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 with those of the pure Ising ferromagnet at D − 2 dimensions, and we provide a verification of the Rushbrooke equality at all studied dimensions.

  15. Vortex ring state by full-field actuator disc model

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, J.N.; Shen, W.Z.; Munduate, X. [DTU, Dept. of Energy Engineering, Lyngby (Denmark)

    1997-08-01

    One-dimensional momentum theory provides a simple analytical tool for analysing the gross flow behavior of lifting propellers and rotors. Combined with a blade-element strip-theory approach, it has for many years been the most popular model for load and performance predictions of wind turbines. The model works well at moderate and high wind velocities, but is not reliable at small wind velocities, where the expansion of the wake is large and the flow field behind the rotor dominated by turbulent mixing. This is normally referred to as the turbulent wake state or the vortex ring state. In the vortex ring state, momentum theory predicts a decrease of thrust whereas the opposite is found from experiments. The reason for the disagreement is that recirculation takes place behind the rotor with the consequence that the stream tubes past the rotor become effectively choked. This represents a condition at which streamlines no longer carry fluid elements from far upstream to far downstream, hence one-dimensional momentum theory is invalid and empirical corrections have to be introduced. More sophisticated analytical or semi-analytical rotor models have been used to describe stationary flow fields for heavily loaded propellers. In recent years generalized actuator disc models have been developed, but up to now no detailed computations of the turbulent wake state or the vortex ring state have been performed. In the present work the phenomenon is simulated by direct simulation of the Navier-Stokes equations, where the influence of the rotor on the flow field is modelled simply by replacing the blades by an actuator disc with a constant normal load. (EG) 13 refs.

  16. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall quality tracks, our model provided location estimates within 4.0, 5.5 and 12.0 km of true location 50% of the time, and within 9, 10.5 and 20.0 km 90% of the time, for above, equal or below average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to lack of sea-surface temperature gradients. In addition we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
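
    The bootstrapping idea can be caricatured as follows: many random walks are launched between consecutive fixes, biased toward the next fix, and only those consistent with the reported location error are kept; their mean gives the track estimate. The geometry, step model and error radius below are invented for illustration and omit the forward-particle weighting and environmental data assimilation of the actual method.

        import numpy as np

        rng = np.random.default_rng(4)
        fix_start, fix_end = np.array([0.0, 0.0]), np.array([30.0, 10.0])  # km, local grid
        error_end_km = 5.0           # reported accuracy of the second fix
        n_steps, n_walks = 10, 5000
        step_sd = 4.0                # per-step displacement scale (km)

        drift = (fix_end - fix_start) / n_steps       # forward bias toward the next fix
        steps = drift + rng.normal(scale=step_sd, size=(n_walks, n_steps, 2))
        walks = np.zeros((n_walks, n_steps + 1, 2))
        walks[:, 0, :] = fix_start
        walks[:, 1:, :] = fix_start + np.cumsum(steps, axis=1)

        # Keep only walks whose end point respects the error circle of the second fix.
        end_miss = np.linalg.norm(walks[:, -1, :] - fix_end, axis=1)
        kept = walks[end_miss <= error_end_km]
        track_estimate = kept.mean(axis=0)            # bootstrap mean path
        print(f"{len(kept)} of {n_walks} walks accepted")
        print("estimated mid-track position:", track_estimate[n_steps // 2])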

  17. Validity of two-phase polymer electrolyte membrane fuel cell models with respect to the gas diffusion layer

    Science.gov (United States)

    Ziegler, C.; Gerteisen, D.

    A dynamic two-phase model of a proton exchange membrane fuel cell with respect to the gas diffusion layer (GDL) is presented and compared with chronoamperometric experiments. Very good agreement between experiment and simulation is achieved for potential step voltammetry (PSV) and sine wave testing (SWT). Homogenized two-phase models can be categorized into unsaturated flow theory (UFT) and multiphase mixture (M²) models. Both model approaches use the continuum hypothesis as fundamental assumption. Cyclic voltammetry experiments show that there is a deterministic and a stochastic liquid transport mode depending on the fraction of hydrophilic pores of the GDL. ESEM imaging is used to investigate the morphology of the liquid water accumulation in the pores of two different media (unteflonated Toray-TGP-H-090 and hydrophobic Freudenberg H2315 I3). The morphology of the liquid water accumulation is related to the cell behavior. The results show that UFT and M² two-phase models are a valid approach for diffusion media with a large fraction of hydrophilic pores such as unteflonated Toray-TGP-H paper. However, the use of the homogenized UFT and M² models appears to be invalid for GDLs with a large fraction of hydrophobic pores, which corresponds to a high average contact angle of the GDL.

  18. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution for seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
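
    As a minimal illustration of this kind of information-theoretic model selection, the sketch below fits Poisson and negative binomial models to a vector of zero-heavy counts and compares their AIC values; the statsmodels calls, the intercept-only design and the fabricated counts are assumptions for illustration, not the analysis actually carried out in the study.

```python
import numpy as np
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial

# Hypothetical zero-heavy insect counts (illustrative data, not from the study)
counts = np.array([0, 0, 0, 3, 0, 7, 0, 1, 0, 0, 12, 0, 2, 0, 0, 5, 0, 0, 1, 0])
X = np.ones((len(counts), 1))  # intercept-only design matrix

# Fit candidate count models and compare by Akaike's information criterion;
# with such a small illustrative sample, convergence warnings are possible.
for name, Model in [("Poisson", Poisson), ("Negative binomial", NegativeBinomial)]:
    res = Model(counts, X).fit(disp=False)
    print(f"{name:17s}  AIC = {res.aic:.2f}")

# Zero-inflated variants (e.g. statsmodels' ZeroInflatedPoisson and
# ZeroInflatedNegativeBinomialP) could be added to the comparison in the same way.
```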

  19. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    Science.gov (United States)

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
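
    To make the latent class machinery concrete, the following sketch fits a two-class latent class model to several binary classifiers by EM under the usual conditional-independence assumption; the simulated data, starting values and number of iterations are illustrative, and the sketch deliberately omits the random effects and covariates that the record above discusses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 3 imperfect binary tests on 1000 subjects (true prevalence 0.30);
# the sensitivities and specificities below are arbitrary illustrative values.
n, prev = 1000, 0.30
truth = rng.random(n) < prev
sens = np.array([0.85, 0.80, 0.90])
spec = np.array([0.95, 0.90, 0.85])
Y = np.where(truth[:, None], rng.random((n, 3)) < sens, rng.random((n, 3)) >= spec).astype(int)

# EM for a 2-class latent class model assuming conditional independence
pi = 0.5                              # prevalence of latent class 1 ("diseased")
p = np.array([[0.6, 0.6, 0.6],        # P(test positive | class 1)
              [0.2, 0.2, 0.2]])       # P(test positive | class 0)
for _ in range(200):
    # E-step: posterior probability that each subject belongs to class 1
    l1 = pi * np.prod(p[0] ** Y * (1 - p[0]) ** (1 - Y), axis=1)
    l0 = (1 - pi) * np.prod(p[1] ** Y * (1 - p[1]) ** (1 - Y), axis=1)
    w = l1 / (l1 + l0)
    # M-step: update prevalence and conditional response probabilities
    pi = w.mean()
    p[0] = (w[:, None] * Y).sum(axis=0) / w.sum()
    p[1] = ((1 - w)[:, None] * Y).sum(axis=0) / (1 - w).sum()

print("estimated prevalence:    ", round(float(pi), 3))
print("estimated sensitivities: ", p[0].round(3))
print("estimated specificities: ", (1 - p[1]).round(3))
```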

  20. TU-C-18A-01: Models of Risk From Low-Dose Radiation Exposures: What Does the Evidence Say?

    International Nuclear Information System (INIS)

    Bushberg, J; Boreham, D; Ulsh, B

    2014-01-01

    At dose levels of (approximately) 500 mSv or more, increased cancer incidence and mortality have been clearly demonstrated. However, at the low doses of radiation used in medical imaging, the relationship between dose and cancer risk is not well established. As such, assumptions about the shape of the dose-response curve are made. These assumptions, or risk models, are used to estimate potential long term effects. Common models include 1) the linear non-threshold (LNT) model, 2) threshold models with either a linear or curvilinear dose response above the threshold, and 3) a hormetic model, where the risk is initially decreased below background levels before increasing. The choice of model used when making radiation risk or protection calculations and decisions can have significant implications on public policy and health care decisions. However, the ongoing debate about which risk model best describes the dose-response relationship at low doses of radiation makes informed decision making difficult. This symposium will review the two fundamental approaches to determining the risk associated with low doses of ionizing radiation, namely radiation epidemiology and radiation biology. The strengths and limitations of each approach will be reviewed, the results of recent studies presented, and the appropriateness of different risk models for various real world scenarios discussed. Examples of well-designed and poorly-designed studies will be provided to assist medical physicists in 1) critically evaluating publications in the field and 2) communicating accurate information to medical professionals, patients, and members of the general public. Equipped with the best information that radiation epidemiology and radiation biology can currently provide, and an understanding of the limitations of such information, individuals and organizations will be able to make more informed decisions regarding questions such as 1) how much shielding to install at medical facilities, 2) at

  1. TU-C-18A-01: Models of Risk From Low-Dose Radiation Exposures: What Does the Evidence Say?

    Energy Technology Data Exchange (ETDEWEB)

    Bushberg, J [UC Davis Medical Center, Sacramento, CA (United States); Boreham, D [McMaster University, Ontario, CA (Canada); Ulsh, B

    2014-06-15

    At dose levels of (approximately) 500 mSv or more, increased cancer incidence and mortality have been clearly demonstrated. However, at the low doses of radiation used in medical imaging, the relationship between dose and cancer risk is not well established. As such, assumptions about the shape of the dose-response curve are made. These assumptions, or risk models, are used to estimate potential long term effects. Common models include 1) the linear non-threshold (LNT) model, 2) threshold models with either a linear or curvilinear dose response above the threshold, and 3) a hormetic model, where the risk is initially decreased below background levels before increasing. The choice of model used when making radiation risk or protection calculations and decisions can have significant implications on public policy and health care decisions. However, the ongoing debate about which risk model best describes the dose-response relationship at low doses of radiation makes informed decision making difficult. This symposium will review the two fundamental approaches to determining the risk associated with low doses of ionizing radiation, namely radiation epidemiology and radiation biology. The strengths and limitations of each approach will be reviewed, the results of recent studies presented, and the appropriateness of different risk models for various real world scenarios discussed. Examples of well-designed and poorly-designed studies will be provided to assist medical physicists in 1) critically evaluating publications in the field and 2) communicating accurate information to medical professionals, patients, and members of the general public. Equipped with the best information that radiation epidemiology and radiation biology can currently provide, and an understanding of the limitations of such information, individuals and organizations will be able to make more informed decisions regarding questions such as 1) how much shielding to install at medical facilities, 2) at

  2. Biological responses to low dose rate gamma radiation

    International Nuclear Information System (INIS)

    Magae, Junji; Ogata, Hiromitsu

    2003-01-01

    Linear non-threshold (LNT) theory is a basic theory for radioprotection. While LNT does not consider irradiation time or dose rate, biological responses to radiation are complex processes dependent on irradiation time as well as total dose. Moreover, experimental and epidemiological studies that can evaluate LNT at low doses and low dose rates have not been sufficiently accumulated. Here we analyzed the quantitative relationship among dose, dose rate and irradiation time, using chromosomal breakage and proliferation inhibition of human cells as indicators of biological responses. We also acquired quantitative data at low doses that allow the applicability of LNT to be evaluated with statistically sufficient accuracy. Our results demonstrate that biological responses at low dose rates are remarkably affected by exposure time, and that they are dependent on dose rate rather than total dose in long-term irradiation. We also found that the change of biological responses at low doses was not linearly correlated with dose. These results suggest that a new model is needed, one that sufficiently includes dose-rate effects and correctly fits actual experimental and epidemiological results, in order to evaluate the risk of radiation at low doses and low dose rates. (author)

  3. Investigations on low temperature thermoluminescence centres in quartz

    International Nuclear Information System (INIS)

    Bernhardt, H.

    1984-01-01

    The present paper helps to clarify the often investigated and highly complex process of thermoluminescence in quartz. Many traps exist in quartz crystals, and these compete with each other for the trapping of charge carriers during X-ray treatment. As a consequence, a variety of processes take place after X-irradiation of quartz at liquid nitrogen temperature (LNT), which complicates the phenomenology of low temperature thermoluminescence. This competition in the trapping process leads to the so-called 'sensibilization' or 'desensibilization' effects of thermoluminescence, which are described in this paper for the first time. The effect refers to the dependence of the LNT thermoluminescence intensity on a pre-irradiation dose applied at room temperature (RT). The influence of this pre-irradiation is understood by assuming saturation of the competing traps. This favours enhanced trapping of charge carriers at shallow (LNT) traps instead of the preferential trapping at deep traps that occurs when the as-grown crystal is X-irradiated at LNT. To arrive at the aforementioned model we take into account not only thermoluminescence but also coloration, IR- and VUV-absorption measurements. (author)

  4. Energy Efficient Thermal Management for Natural Gas Engine Aftertreatment via Active Flow Control

    Energy Technology Data Exchange (ETDEWEB)

    David K. Irick; Ke Nguyen; Vitacheslav Naoumov; Doug Ferguson

    2006-04-01

    The project is focused on the development of an energy efficient aftertreatment system capable of reducing NOx and methane by 90% from lean-burn natural gas engines by applying active exhaust flow control. Compared to conventional passive flow-through reactors, the proposed scheme cuts supplemental energy by 50%-70%. The system consists of a Lean NOx Trap (LNT) system and an oxidation catalyst. Through alternating flow control, a major amount of engine exhaust flows through a large portion of the LNT system in the absorption mode, while a small amount of exhaust goes through a small portion of the LNT system in the regeneration or desulfurization mode. By periodically reversing the exhaust gas flow through the oxidation catalyst, a higher temperature profile is maintained in the catalyst bed resulting in greater efficiency of the oxidation catalyst at lower exhaust temperatures. The project involves conceptual design, theoretical analysis, computer simulation, prototype fabrication, and empirical studies. This report details the progress during the first twelve months of the project. The primary activities have been to develop the bench flow reactor system, develop the computer simulation and modeling of the reverse-flow oxidation catalyst, install the engine into the test cell, and begin design of the LNT system.

  5. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains

    International Nuclear Information System (INIS)

    Kraus, N.N.; Slovic, P.

    1988-01-01

    Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions.

  6. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations).

    Science.gov (United States)

    Beyea, Jan

    2017-04-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically, he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up, primarily because of a lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on a misreading of text. Attention to ethics charges, early on, may help seed a counter-narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Spatial Interpretation of Tower, Chamber and Modelled Terrestrial Fluxes in a Tropical Forest Plantation

    Science.gov (United States)

    Whidden, E.; Roulet, N.

    2003-04-01

    Interpretation of a site-average terrestrial flux may be complicated by the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurements may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site-average flux are investigated for an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site-average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiological controls. A diffusion model is used to compare tower, chamber, and model data by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint-averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Agreement in flux estimates between methods improves with spatial interpretation, showing its importance for estimating a site-average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to the overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.

  8. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    Science.gov (United States)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that the performance of neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times lower than with data parallel methods. The issues involved in programming with the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. The comparative performance of NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM SP-1, etc. is presented.

  9. The conceptualization model problem—surprise

    Science.gov (United States)

    Bredehoeft, John

    2005-03-01

    The foundation of model analysis is the conceptual model. Surprise is defined as new data that renders the prevailing conceptual model invalid; as defined here it represents a paradigm shift. Limited empirical data indicate that surprises occur in 20-30% of model analyses. These data suggest that groundwater analysts have difficulty selecting the appropriate conceptual model. There is no ready remedy to the conceptual model problem other than (1) to collect as much data as is feasible, using all applicable methods—a complementary data collection methodology can lead to new information that changes the prevailing conceptual model, and (2) for the analyst to remain open to the fact that the conceptual model can change dramatically as more information is collected. In the final analysis, the hydrogeologist makes a subjective decision on the appropriate conceptual model. The conceptualization problem does not render models unusable. The problem introduces an uncertainty that often is not widely recognized. Conceptual model uncertainty is exacerbated in making long-term predictions of system performance.

  10. Revisiting the Gram-negative lipoprotein paradigm.

    Science.gov (United States)

    LoVullo, Eric D; Wright, Lori F; Isabella, Vincent; Huntley, Jason F; Pavelka, Martin S

    2015-05-01

    The processing of lipoproteins (Lpps) in Gram-negative bacteria is generally considered an essential pathway. Mature lipoproteins in these bacteria are triacylated, with the final fatty acid addition performed by Lnt, an apolipoprotein N-acyltransferase. The mature lipoproteins are then sorted by the Lol system, with most Lpps inserted into the outer membrane (OM). We demonstrate here that the lnt gene is not essential to the Gram-negative pathogen Francisella tularensis subsp. tularensis strain Schu or to the live vaccine strain LVS. An LVS Δlnt mutant has a small-colony phenotype on sucrose medium and increased susceptibility to globomycin and rifampin. We provide data indicating that the OM lipoprotein Tul4A (LpnA) is diacylated but that it, and its paralog Tul4B (LpnB), still sort to the OM in the Δlnt mutant. We present a model in which the Lol sorting pathway of Francisella has a modified ABC transporter system that is capable of recognizing and sorting both triacylated and diacylated lipoproteins, and we show that this modified system is present in many other Gram-negative bacteria. We examined this model using Neisseria gonorrhoeae, which has the same Lol architecture as that of Francisella, and found that the lnt gene is not essential in this organism. This work suggests that Gram-negative bacteria fall into two groups, one in which full lipoprotein processing is essential and one in which the final acylation step is not essential, potentially due to the ability of the Lol sorting pathway in these bacteria to sort immature apolipoproteins to the OM. This paper describes the novel finding that the final stage in lipoprotein processing (normally considered an essential process) is not required by Francisella tularensis or Neisseria gonorrhoeae. The paper provides a potential reason for this and shows that it may be widespread in other Gram-negative bacteria. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  11. Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows.

    Science.gov (United States)

    Rolland, Joran

    2018-02-01

    multistability, where ln(T) in the limit of small-variance noise is studied. Two points of view, local noise of small variance and large length, can be used to discuss the exponential dependence of T on L. In particular, it is shown how a scaling T ≍ exp[L(A'R - B')] can be derived in a conceptual two-degrees-of-freedom model of a transitional wall flow proposed by Dauchot and Manneville. This is done by identifying a quasipotential in the low-variance-noise, large-length limit. This pinpoints the physical effects controlling collapse and build-up trajectories and the corresponding passage times, with an emphasis on the saddle points between laminar and turbulent states. This analytical analysis also shows that these effects lead to the asymmetric probability density function of the kinetic energy of turbulence.

  12. The Application of Cyber Physical System for Thermal Power Plants: Data-Driven Modeling

    Directory of Open Access Journals (Sweden)

    Yongping Yang

    2018-03-01

    Optimal operation of energy systems plays an important role in enhancing their lifetime security and efficiency. The determination of optimal operating strategies requires intelligent utilization of the massive data accumulated during operation or prediction. Investigating these data alone, without combining them with physical models, runs the risk that the established relationships between inputs and outputs (the models which reproduce the behavior of the considered system/component over a wide range of boundary conditions) are invalid for boundary conditions that never occur in the database employed. Therefore, combining big data with physical models via cyber physical systems (CPS) is of great importance for deriving highly reliable and accurate models, and is becoming more and more popular in practical applications. In this paper, we focus on the description of a systematic method for applying CPS to the performance analysis and decision making of thermal power plants. We propose a general CPS procedure with both offline and online phases for its application to thermal power plants and discuss the corresponding methods employed to support each sub-procedure. As an example, a data-driven model of the turbine island of an existing air-cooled thermal power plant is established with the proposed procedure and demonstrates its practicality, validity and flexibility. To establish such a model, the historical operating data are employed in the cyber layer for modeling and linking each physical component. The decision-making procedure for the optimal frequency of the air-cooling condenser is also illustrated to show the approach's applicability to online use. It is concluded that a cyber physical system with data mining techniques is effective and promising for facilitating the real-time analysis and control of thermal power plants.

  13. Microkinetic Modeling of Lean NOx Trap Storage and Regeneration

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Richard S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chakravarthy, V. Kalyana [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pihl, Josh A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daw, C. Stuart [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-12-01

    A microkinetic chemical reaction mechanism capable of describing both the storage and regeneration processes in a fully formulated lean NOx trap (LNT) is presented. The mechanism includes steps occurring on the precious metal, barium oxide (NOx storage), and cerium oxide (oxygen storage) sites of the catalyst. The complete reaction set is used in conjunction with a transient plug flow reactor code (including boundary layer mass transfer) to simulate not only a set of long storage/regeneration cycles with a CO/H2 reductant, but also a series of steady flow temperature sweep experiments that were previously analyzed with just a precious metal mechanism and a steady state code neglecting mass transfer. The results show that, while mass transfer effects are generally minor, NOx storage is not negligible during some of the temperature ramps, necessitating a re-evaluation of the precious metal kinetic parameters. The parameters for the entire mechanism are inferred by finding the best overall fit to the complete set of experiments. Rigorous thermodynamic consistency is enforced for parallel reaction pathways and with respect to known data for all of the gas phase species involved. It is found that, with a few minor exceptions, all of the basic experimental observations can be reproduced with the transient simulations. In addition to accounting for normal cycling behavior, the final mechanism should provide a starting point for the description of further LNT phenomena such as desulfation and the role of alternative reductants.

  14. Crisis Decision Making Through a Shared Integrative Negotiation Mental Model

    NARCIS (Netherlands)

    Van Santen, W.; Jonker, C.M.; Wijngaards, N.

    2009-01-01

    Decision making during crises takes place in (multi-agency) teams, in a bureaucratic political context. As a result, the common notion that during crises decision making should be done in line with a Command & Control structure is invalid. This paper shows that the best way for crisis decision

  15. Capturing Assumptions while Designing a Verification Model for Embedded Systems

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    A formal proof of a system correctness typically holds under a number of assumptions. Leaving them implicit raises the chance of using the system in a context that violates some assumptions, which in return may invalidate the correctness proof. The goal of this paper is to show how combining

  16. A novel mouse model of creatine transporter deficiency [v1; ref status: indexed, http://f1000r.es/4f8

    Directory of Open Access Journals (Sweden)

    Laura Baroncelli

    2014-09-01

    Mutations in the creatine (Cr) transporter (CrT) gene lead to cerebral creatine deficiency syndrome-1 (CCDS1), an X-linked metabolic disorder characterized by cerebral Cr deficiency causing intellectual disability, seizures, movement and behavioral disturbances, and language and speech impairment (OMIM #300352). CCDS1 is still an untreatable pathology that can be very invalidating for patients and caregivers. Only two murine models of CCDS1, one of which is a ubiquitous knockout mouse, are currently available to study the possible mechanisms underlying the pathologic phenotype of CCDS1 and to develop therapeutic strategies. Given the importance of validating phenotypes and the efficacy of promising treatments in more than one mouse model, we have generated a new murine model of CCDS1 obtained by ubiquitous deletion of exons 5-7 of the Slc6a8 gene. We showed a remarkable Cr depletion in the murine brain tissues and cognitive defects, thus resembling the key features of human CCDS1. These results confirm that CCDS1 can be well modeled in mice. This CrT−/y murine model will provide a new tool for increasing the relevance of preclinical studies to the human disease.

  17. A novel mouse model of creatine transporter deficiency [v2; ref status: indexed, http://f1000r.es/4zb

    Directory of Open Access Journals (Sweden)

    Laura Baroncelli

    2015-01-01

    Mutations in the creatine (Cr) transporter (CrT) gene lead to cerebral creatine deficiency syndrome-1 (CCDS1), an X-linked metabolic disorder characterized by cerebral Cr deficiency causing intellectual disability, seizures, movement and behavioral disturbances, and language and speech impairment (OMIM #300352). CCDS1 is still an untreatable pathology that can be very invalidating for patients and caregivers. Only two murine models of CCDS1, one of which is a ubiquitous knockout mouse, are currently available to study the possible mechanisms underlying the pathologic phenotype of CCDS1 and to develop therapeutic strategies. Given the importance of validating phenotypes and the efficacy of promising treatments in more than one mouse model, we have generated a new murine model of CCDS1 obtained by ubiquitous deletion of exons 5-7 of the Slc6a8 gene. We showed a remarkable Cr depletion in the murine brain tissues and cognitive defects, thus resembling the key features of human CCDS1. These results confirm that CCDS1 can be well modeled in mice. This CrT−/y murine model will provide a new tool for increasing the relevance of preclinical studies to the human disease.

  18. KWIK Smoke Obscuration Model: User’s Guide.

    Science.gov (United States)

    1982-09-01


  19. Comments on the Dutton-Puls model: Temperature and yield stress dependences of crack growth rate in zirconium alloys

    International Nuclear Information System (INIS)

    Kim, Young S.

    2010-01-01

    Research highlights: → This study shows first that the temperature and yield stress dependences of the crack growth rate in zirconium alloys can be analytically understood not by the Dutton-Puls model but by Kim's new DHC model. → It is demonstrated that the driving force for DHC is ΔC, not the stress gradient, which is the core of Kim's DHC model. → The Dutton-Puls model reveals the invalidity of Puls' claim that the crack tip solubility would increase to the cooling solvus. - Abstract: This work was prompted by the publication of Puls's recent papers claiming that the Dutton-Puls model is valid enough to explain the stress and temperature dependences of the crack growth rate (CGR) in zirconium alloys. The first version of the Dutton-Puls model shows that the CGR has positive dependences on the concentration difference ΔC, the hydrogen diffusivity D_H, and the yield strength, and a negative dependence on the applied stress intensity factor K_I, which is one of its critical defects. Thus, the Dutton-Puls model, claiming that the temperature dependence of CGR is determined by D_H·C_H, turns out to be incorrect. Given that ΔC is independent of the stress, it is evident that the driving force for DHC is ΔC, not the stress gradient, corroborating the validity of Kim's model. Furthermore, the predicted activation energy for CGR in a cold-worked Zr-2.5Nb tube disagrees with the one measured for the Zr-2.5Nb tube, showing that the Dutton-Puls model is too defective to explain the temperature dependence of CGR. It is demonstrated that the revised Dutton-Puls model also cannot explain the yield stress dependence of CGR.

  20. A model-based design and validation approach with OMEGA-UML and the IF toolset

    Science.gov (United States)

    Ben-hafaiedh, Imene; Constant, Olivier; Graf, Susanne; Robbana, Riadh

    2009-03-01

    Intelligent, embedded systems such as autonomous robots and other industrial systems are becoming increasingly heterogeneous with respect to the platforms on which they are implemented, and thus their software architecture is more complex to design and analyse. In this context, it is important to have well-defined design methodologies which should be supported by (1) high-level design concepts allowing designers to master the design complexity, (2) concepts for the expression of non-functional requirements and (3) analysis tools allowing one to verify or invalidate that the system under development will be able to conform to its requirements. We illustrate such an approach for the design of complex embedded systems by means of a small case study used as a running example for illustration purposes. We briefly present the important concepts of the OMEGA-RT UML profile, show how we use this profile in a modelling approach, and explain how these concepts are used in the IFx verification toolbox to integrate validation into the design flow and make scalable verification possible.

  1. Magazines as wilderness information sources: assessing users' general wilderness knowledge and specific leave no trace knowledge

    Science.gov (United States)

    John J. Confer; Andrew J. Mowen; Alan K. Graefe; James D. Absher

    2000-01-01

    The Leave No Trace (LNT) educational program has the potential to provide wilderness users with useful minimum impact information. For LNT to be effective, managers need to understand who is most/least aware of minimum impact practices and how to expose users to LNT messages. This study examined LNT knowledge among various user groups at an Eastern wilderness area and...

  2. Simultaneous fitting of real-time PCR data with efficiency of amplification modeled as Gaussian function of target fluorescence

    Directory of Open Access Journals (Sweden)

    Lazar Andreas

    2008-02-01

    Background: In real-time PCR, it is necessary to consider the efficiency of amplification (EA) of amplicons in order to determine initial target levels properly. EAs can be deduced from standard curves, but these involve extra effort and cost and may yield invalid EAs. Alternatively, EA can be extracted from individual fluorescence curves. Unfortunately, this is not reliable enough. Results: Here we introduce simultaneous non-linear fitting to determine – without standard curves – an optimal common EA for all samples of a group. In order to adjust EA as a function of target fluorescence, and still describe fluorescence as a function of cycle number, we use an iterative algorithm that increases fluorescence cycle by cycle and thus simulates the PCR process. A Gauss peak function is used to model the decrease of EA with increasing amplicon accumulation. Our approach was validated experimentally with hydrolysis probe or SYBR green detection on dilution series of 5 different targets. It performed distinctly better in terms of accuracy than the standard curve, DART-PCR, and LinRegPCR approaches. Based on reliable EAs, it was possible to detect that for some amplicons extraordinary fluorescence (EA > 2.00) was generated with locked nucleic acid hydrolysis probes, but not with SYBR green. Conclusion: In comparison to previously reported approaches that are based on the separate analysis of each curve and on modelling EA as a function of cycle number, our approach yields more accurate and precise estimates of relative initial target levels.
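
    A toy version of the cycle-by-cycle simulation idea, with the efficiency of amplification modelled as a Gaussian-shaped function of accumulated fluorescence, is sketched below; the particular parameterisation, parameter values and least-squares fit are illustrative assumptions rather than the authors' actual algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def simulate_pcr(cycles, f0, e_max, width):
    """Iteratively grow fluorescence F; the efficiency of amplification (EA)
    decays from e_max toward 1 as a Gaussian-shaped function of accumulated F."""
    f = np.empty(cycles)
    f[0] = f0
    for c in range(1, cycles):
        ea = 1.0 + (e_max - 1.0) * np.exp(-0.5 * (f[c - 1] / width) ** 2)
        f[c] = f[c - 1] * ea
    return f

def model(cycle_idx, log10_f0, e_max, width):
    curve = simulate_pcr(len(cycle_idx), 10.0 ** log10_f0, e_max, width)
    return curve[cycle_idx.astype(int)]

# Hypothetical noisy fluorescence readings for one amplification curve
cycles = 40
rng = np.random.default_rng(2)
observed = simulate_pcr(cycles, 1e-6, 1.9, 0.2) + rng.normal(0.0, 1e-3, cycles)

popt, _ = curve_fit(model, np.arange(cycles, dtype=float), observed,
                    p0=[-5.0, 1.8, 0.3], maxfev=20000)
print("fitted initial fluorescence (relative initial target level): %.2e" % 10.0 ** popt[0])
print("fitted maximal EA: %.2f" % popt[1])
```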

  3. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential for predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters through a probabilistic inversion. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for constraining C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by NEE or biometric data alone was probably attributable to either the lack of long-term C dynamic data or measurement errors. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also

  4. Special physical examination tests for superior labrum anterior posterior shoulder tears are clinically limited and invalid: a diagnostic systematic review.

    Science.gov (United States)

    Calvert, Eric; Chambers, Gordon Keith; Regan, William; Hawkins, Robert H; Leith, Jordan M

    2009-05-01

    The diagnosis of a superior labrum anterior posterior (SLAP) lesion through physical examination has been widely reported in the literature. Most of these studies report high sensitivities and specificities, and claim to be accurate, valid, and reliable. The purpose of this study was to critically evaluate these studies to determine if there was sufficient evidence to support the use of the SLAP physical examination tests as valid and reliable diagnostic test procedures. Strict epidemiologic methodology was used to obtain and collate all relevant articles. Sackett's guidelines were applied to all articles. Confidence intervals and likelihood ratios were determined. Fifteen of 29 relevant studies met the criteria for inclusion. Only one article met all of Sackett's critical appraisal criteria. Confidence intervals for both the positive and negative likelihood ratios contained the value 1. The current literature being used as a resource for teaching in medical schools and continuing education lacks the validity necessary to be useful. There are no good physical examination tests that exist for effectively diagnosing a SLAP lesion.

  5. Invalidating stagnation theory for family owned businesses : comparing family-to-family and third party ownership transfers

    NARCIS (Netherlands)

    Alija Ibrahimovic; Lex van Teeffelen; Roger Heaver

    2015-01-01

    Miller, Le Breton-Miller and Scholnick (2008) summarize and discuss two major perspectives constructed from the literature on family owned businesses (FOBs): stewardship and stagnation theory. In this paper the stagnation theory is being put to the test on Dutch small/medium enterprises (SMEs).

  6. Catalase-Aminotriazole Assay, an Invalid Method for Measurement of Hydrogen Peroxide Production by Wood Decay Fungi

    OpenAIRE

    Highley, Terry L.

    1981-01-01

    The catalase-aminotriazole assay for determination of hydrogen peroxide apparently cannot be used for measuring hydrogen peroxide production in crude preparations from wood decay fungi because of materials in the crude preparations that interfere with the test.

  7. MCQ testing in higher education: Yes, there are bad items and invalid scores—A case study identifying solutions

    OpenAIRE

    Brown, Gavin

    2017-01-01

    This is a lecture given at Umea University, Sweden in September 2017. It is based on the published study: Brown, G. T. L., & Abdulnabi, H. (2017). Evaluating the quality of higher education instructor-constructed multiple-choice tests: Impact on student grades. Frontiers in Education: Assessment, Testing, & Applied Measurement, 2(24). doi:10.3389/feduc.2017.00024

  8. Working with invalid boundary conditions: lessons from the field for communicating about climate change with public audiences

    Science.gov (United States)

    Gunther, A.

    2015-12-01

    There is an ongoing need to communicate with public audiences about climate science, current and projected impacts, the importance of reducing greenhouse gas emissions, and the requirement to prepare for changes that are likely unavoidable. It is essential that scientists are engaged and active in this effort. Scientists can be more effective communicators about climate change to non-scientific audiences if we recognize that some of the normal "boundary conditions" under which we operate do not need to apply. From how we are trained to how we think about our audience, there are some specific skills and practices that allow us to be more effective communicators. The author will review concepts for making our communication more effective based upon his experience from over 60 presentations about climate change to public audiences. These include expressing how your knowledge makes you feel, anticipating (and accepting) questions unconstrained by physics, respecting beliefs and values while separating them from evidence, and using the history of climate science to provide a compelling narrative. Proper attention to presentation structure (particularly an opening statement), speaking techniques for audience engagement, and effective use of presentation software are also important.

  9. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...

  10. 3D finite element model of the diabetic neuropathic foot: a gait analysis driven approach.

    Science.gov (United States)

    Guiotto, Annamaria; Sawacha, Zimi; Guarneri, Gabriella; Avogaro, Angelo; Cobelli, Claudio

    2014-09-22

    Diabetic foot is an invalidating complication of diabetes that can lead to foot ulcers. Three-dimensional (3D) finite element analysis (FEA) allows characterizing the loads developed in the different anatomical structures of the foot under dynamic conditions. The aim of this study was to develop subject-specific 3D foot FE models (FEMs) of a diabetic neuropathic (DNS) and a healthy (HS) subject, with subject specificity given in terms of foot geometry and boundary conditions. Kinematics, kinetics and plantar pressure (PP) data were extracted from the gait analysis trials of the two subjects for this purpose. The FEMs were developed by segmenting bones, cartilage and skin from MRI and adding a horizontal plate as ground support. Material properties were adopted from the previous literature. FE simulations were run with the kinematics and kinetics data of four different phases of the stance phase of gait (heel strike, loading response, midstance and push off). The FEMs were then driven by group gait data from 10 neuropathic and 10 healthy subjects. Model validation focused on the agreement between FEM-simulated and experimental PP. The peak values and the total distribution of the pressures were compared for this purpose. Results showed that the models were less robust when driven by group data and underestimated the PP in each foot subarea. In particular, for the neuropathic subject's model the mean errors between experimental and simulated data were around 20% of the peak values. This knowledge is crucial in understanding the aetiology of the diabetic foot. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Experimental Research Regarding New Models of Organizational Communication in the Romanian Tourism

    Directory of Open Access Journals (Sweden)

    Cristina STATE

    2015-12-01

    Of interest to the most varied sciences (cybernetics, economics, ethnology, philosophy, history, psycho-sociology, etc.), the complex communication process has prompted many opinions, not all of them complementary and some raised to the level of passions generating contradictions. The result was the conceptualization of the content and of the functions of communication in different forms, called models by their creators. In time, as they evolved, communication models have come to include, besides the basic elements (sender, message, means of communication, receiver and effect), a range of further elements essential to streamlining the process itself: the noise source, the codec and feedback, the interaction of the fields of experience of the transmitter and receiver, the organizational context of communication, and communication skills, including how these are produced and interpreted. Finally, any model's functions are either heuristic (to explain), organizational (to order) or predictive (to make assumptions). A model holds only with a certain degree of probability, remains valid only so long as it is not invalidated by practice, and is one way of describing reality rather than reality itself. This is the context in which our work, the first of its kind in Romania, proposes, with a view to improving organizational management, two new models of communication at both the micro- and macro-economic levels, models through which, using crowdsourcing, the units in the tourism, hospitality and leisure industry (THLI) will be able to communicate more effectively, based not on their own insights and/or perceptions but, first of all, on the views of management and experts in the field and especially on customer feedback.

  12. Analytic treatment of nuclear spin-lattice relaxation for diffusion in a cone model

    Science.gov (United States)

    Sitnitsky, A. E.

    2011-12-01

    We consider the nuclear spin-lattice relaxation rate resulting from a diffusion equation for rotational wobbling in a cone. We show that the widespread point of view that there are no analytical expressions for the correlation functions in the wobbling-in-a-cone model is invalid, and prove that nuclear spin-lattice relaxation in this model is exactly tractable and amenable to a full analytical description. The mechanism of relaxation is assumed to be the dipole-dipole interaction of nuclear spins and is treated within the framework of the standard Bloembergen-Purcell-Pound (BPP)-Solomon scheme. We consider the general case of arbitrary orientation of the cone axis relative to the magnetic field. The BPP-Solomon scheme is shown to remain valid for systems with a distribution of cone axes that depends only on the tilt relative to the magnetic field but is otherwise isotropic. We consider the case of random isotropic orientation of the cone axes relative to the magnetic field, as occurs in powders. We also consider the cases of their predominant orientation along or opposite to the magnetic field and of their predominant orientation transverse to the magnetic field, which may be relevant for, e.g., liquid crystals. Besides, we treat in detail the model case of the cone axis directed along the magnetic field. The latter provides a direct comparison of the limiting case of our formulas with the textbook formulas for free isotropic rotational diffusion. The dependence of the spin-lattice relaxation rate on the cone half-width yields results similar to those predicted by the model-free approach.
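
    For orientation, the textbook BPP-type expression to which such cone-model results reduce in the free isotropic limit can be evaluated as in the sketch below; the overall dipolar prefactor K is left symbolic because it depends on the spin pair, and the Lorentzian spectral density is the standard free-diffusion form, not the cone-model correlation functions derived in the paper.

```python
import numpy as np

def spectral_density(omega, tau_c):
    """Lorentzian spectral density J(omega) for free isotropic rotational diffusion."""
    return tau_c / (1.0 + (omega * tau_c) ** 2)

def r1_bpp(omega0, tau_c, K=1.0):
    """Spin-lattice relaxation rate 1/T1 in the BPP form K [J(w0) + 4 J(2 w0)].

    K lumps the dipolar coupling constants (gyromagnetic ratios, internuclear
    distance) and is kept symbolic here, so the output is a relative rate.
    """
    return K * (spectral_density(omega0, tau_c) + 4.0 * spectral_density(2.0 * omega0, tau_c))

# The rate is largest where omega0 * tau_c is of order one (the "T1 minimum")
omega0 = 2.0 * np.pi * 400e6          # e.g. a 400 MHz proton Larmor frequency
for tau_c in (1e-11, 4e-10, 1e-9, 1e-7):
    print(f"tau_c = {tau_c:.0e} s  ->  relative R1 = {r1_bpp(omega0, tau_c):.3e}")
```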

  13. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Science.gov (United States)

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes: deregulation of the commodity markets and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger.

  14. Chronic 5-HT4 receptor agonist treatment restores learning and memory deficits in a neuroendocrine mouse model of anxiety/depression.

    Science.gov (United States)

    Darcet, Flavie; Gardier, Alain M; David, Denis J; Guilloux, Jean-Philippe

    2016-03-11

    Cognitive disturbances are often reported as serious invalidating symptoms in patients suffering from major depressive disorder (MDD) and are not fully corrected by classical monoaminergic antidepressant drugs. While the role of 5-HT4 receptor agonists as cognitive enhancers is well established in naïve animals or in animal models of cognitive impairment, their cognitive effects in the context of stress still need to be examined. Using a mouse model of anxiety/depression (the CORT model), we report that chronic treatment with a 5-HT4 agonist (RS67333, 1.5 mg/kg/day) restored chronic corticosterone-induced cognitive deficits, including episodic-like, associative and spatial learning and memory impairments. In contrast, chronic treatment with the monoaminergic antidepressant fluoxetine (18 mg/kg/day) only partially restored spatial learning and memory deficits and had no effect in the associative/contextual task. These results suggest differential mechanisms underlying the cognitive effects of these drugs. Finally, the present study highlights 5-HT4 receptor stimulation as a promising therapeutic mechanism to alleviate cognitive symptoms related to MDD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  16. Models and role models.

    Science.gov (United States)

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in the present-day society triggered us to expand our knowledge in the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study separate processes which together may lead to dental caries. Also products and novel agents could be evaluated that interfere with either of the processes. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models but also role models are of the utmost importance in bringing and guiding research and researchers. 2015 S. Karger AG, Basel

  17. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  18. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  19. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    Science.gov (United States)

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
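
    The two dose-response forms named in this record are standard in the quantitative microbial risk assessment literature and can be written down directly: the exponential model, P(d) = 1 - exp(-r*d), and the conventional approximation to the beta-Poisson model, P(d) = 1 - (1 + d/beta)**(-alpha), whose validity conditions are exactly what the paper proposes criteria for. The sketch below is a generic Python illustration with invented parameter values, not the authors' OpenBUGS/MCMC code.

    ```python
    import numpy as np

    def p_exponential(dose, r):
        """Exponential (single-hit) dose-response: P = 1 - exp(-r * dose)."""
        return 1.0 - np.exp(-r * np.asarray(dose, dtype=float))

    def p_beta_poisson_approx(dose, alpha, beta):
        """Conventional approximation to the beta-Poisson model:
        P = 1 - (1 + dose/beta)**(-alpha).  Only trustworthy when beta is large
        relative to alpha and to 1, the regime the paper's criteria address."""
        return 1.0 - (1.0 + np.asarray(dose, dtype=float) / beta) ** (-alpha)

    if __name__ == "__main__":
        doses = np.logspace(0, 6, 7)                 # 1 to 1e6 organisms
        print(p_exponential(doses, r=1e-4).round(4))
        print(p_beta_poisson_approx(doses, alpha=0.25, beta=40.0).round(4))
    ```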

  20. Predictions for heat transfer characteristics in a natural draft reactor cooling system using a second moment closure turbulence model

    International Nuclear Information System (INIS)

    Nishimura, M.; Maekawa, I.

    2004-01-01

    A numerical study is performed on the natural draft reactor cavity cooling system (RCCS). In this cooling system, the buoyancy-driven heated upward flow can be in the mixed convection regime, which is accompanied by heat transfer impairment. In addition, the heating wall condition is asymmetric with respect to the channel cross section. This flow regime and these thermal boundary conditions may invalidate the use of design correlations. To precisely simulate the flow and thermal fields within the RCCS, a second moment closure turbulence model is applied. Two types of RCCS channel geometry are selected for comparison: an annular duct with fins on the outer surface of the inner circular wall, and a multi-rectangular duct. The prediction shows that the local heat transfer coefficient in the RCCS with the finned annular duct is less than 1/6 of that estimated with the Dittus-Boelter correlation. A large portion of the natural draft airflow does not contribute to cooling at all because the mainstream escapes through the narrow gaps between the fins. This result, and thus the finned annulus design, is unacceptable from the viewpoint of the structural integrity of the RCCS wall boundary. The performance of the multi-rectangular duct design is acceptable: the RCCS maximum temperature remains below 400 degrees centigrade even when the flow rate is halved from the design condition. (author)
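
    The record benchmarks the RCCS channels against the Dittus-Boelter correlation, Nu = 0.023 Re^0.8 Pr^n (n = 0.4 when the fluid is heated). A minimal sketch of that reference calculation is given below; the Reynolds number, Prandtl number, air conductivity and hydraulic diameter are illustrative stand-ins, not values from the paper.

    ```python
    def dittus_boelter_h(re, pr, k_fluid, d_hydraulic, heating=True):
        """Reference heat transfer coefficient from the Dittus-Boelter correlation,
        Nu = 0.023 * Re**0.8 * Pr**n (n = 0.4 for heating, 0.3 for cooling),
        h = Nu * k / D_h.  Valid for fully developed turbulent forced convection;
        the paper's point is that mixed-convection RCCS channels can fall far below it."""
        n = 0.4 if heating else 0.3
        nusselt = 0.023 * re ** 0.8 * pr ** n
        return nusselt * k_fluid / d_hydraulic

    # Illustrative values only (roughly air at a few hundred kelvin in a 0.2 m duct).
    h_ref = dittus_boelter_h(re=5.0e4, pr=0.7, k_fluid=0.033, d_hydraulic=0.2)
    print(f"Dittus-Boelter reference h = {h_ref:.1f} W/(m^2 K)")
    ```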

  1. Inter-rater reliability of healthcare professional skills' portfolio assessments: The Andalusian Agency for Healthcare Quality model

    Directory of Open Access Journals (Sweden)

    Antonio Almuedo-Paz

    2014-07-01

    This study aims to determine the reliability of the assessment criteria used for a portfolio at the Andalusian Agency for Healthcare Quality (ACSA). Data: all competence certification processes, regardless of discipline. Period: 2010-2011. Three types of tests are used: 368 certificates, 17,895 reports and 22,642 clinical practice reports (N = 3,010 candidates). The tests were evaluated in pairs by the ACSA team of raters using two categories: valid and invalid. Results: The percentage agreement in assessments of certificates was 89.9%, while for reports it was 85.1% and for clinical practice reports 81.7%. The inter-rater agreement coefficients (kappa) ranged from 0.468 to 0.711. Discussion: The results of this study show that the inter-rater reliability of the assessments varies from fair to good. Compared with other similar studies, the results put the reliability of the model in a comfortable position. Among the improvements incorporated, the progressive automation of evaluations must be highlighted.

  2. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    Since sudden environmental shocks and toxicants can affect the population dynamics of fish species, a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy needs to be developed. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe sudden climate change. Owing to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. Using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expected sustainable yield are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.

  3. Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.

    Science.gov (United States)

    Yang, Min; Zhang, Wen; Wu, Xiaojun; Wei, Dongtao; Zhao, Yixin; Zhao, Gang; Han, Xu; Zhang, Shunli

    2016-01-01

    To analyse interior geometry and property changes of a large-sized analog model in a non-destructive way during loading or injection of another medium (water or oil), a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomographic imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, alternately switching between the positive and negative directions as they translate, until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this method solves the problem of winding of the high-voltage cables and oil cooling service pipes during rotation, and it also simplifies the installation of the high-voltage generator and cooling system. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram that matches the rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data which do not contribute to the image reconstruction. Finally, on the basis of the geometrical symmetry of fan-beam CT scanning, a whole sinogram covering the full 360° range is produced and a standard filtered back-projection (FBP) algorithm is applied to reconstruct artifact-free images.

  4. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    Science.gov (United States)

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
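
    The two information measures named in the abstract (Shannon entropy for within-sample methylation stochasticity, Jensen-Shannon distance for test-versus-reference comparisons) are easy to compute once a probability distribution over methylation levels has been estimated. The sketch below is a generic Python illustration of those two quantities, not the authors' genome-wide Ising-model implementation; the example distributions are invented.

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Shannon entropy (bits) of a discrete methylation-level distribution."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def jensen_shannon_distance(p, q):
        """Jensen-Shannon distance (square root of the base-2 JS divergence)
        between two distributions on the same support; bounded by [0, 1]."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        m = 0.5 * (p + q)
        jsd = shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))
        return float(np.sqrt(max(jsd, 0.0)))

    # Invented distributions over methylation levels {0, 0.25, 0.5, 0.75, 1}.
    test = [0.70, 0.10, 0.05, 0.05, 0.10]
    ref  = [0.10, 0.10, 0.10, 0.20, 0.50]
    print("entropy(test) =", round(shannon_entropy(test), 3), "bits")
    print("JS distance   =", round(jensen_shannon_distance(test, ref), 3))
    ```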

  5. Attention and executive functions in a rat model of chronic epilepsy.

    Science.gov (United States)

    Faure, Jean-Baptiste; Marques-Carneiro, José E; Akimana, Gladys; Cosquer, Brigitte; Ferrandon, Arielle; Herbeaux, Karine; Koning, Estelle; Barbelivien, Alexandra; Nehlig, Astrid; Cassel, Jean-Christophe

    2014-05-01

    Temporal lobe epilepsy is a relatively frequent, invalidating, and often refractory neurologic disorder. It is associated with cognitive impairments that affect memory and executive functions. In the rat lithium-pilocarpine temporal lobe epilepsy model, memory impairment and anxiety disorder are classically reported. Here we evaluated sustained visual attention in this model of epilepsy, a function not frequently explored. Thirty-five Sprague-Dawley rats were subjected to lithium-pilocarpine status epilepticus. Twenty of them received a carisbamate treatment for 7 days, starting 1 h after status epilepticus onset. Twelve controls received lithium and saline. Five months later, attention was assessed in the five-choice serial reaction time task, a task that tests visual attention and inhibitory control (impulsivity/compulsivity). Neuronal counting was performed in brain regions of interest to the functions studied (hippocampus, prefrontal cortex, nucleus basalis magnocellularis, and pedunculopontine tegmental nucleus). Lithium-pilocarpine rats developed motor seizures. When they were able to learn the task, they exhibited attention impairment and a tendency toward impulsivity and compulsivity. These disturbances occurred in the absence of neuronal loss in structures classically related to attentional performance, although they seemed to better correlate with neuronal loss in hippocampus. Globally, rats that received carisbamate and developed motor seizures were as impaired as untreated rats, whereas those that did not develop overt motor seizures performed like controls, despite evidence for hippocampal damage. This study shows that attention deficits reported by patients with temporal lobe epilepsy can be observed in the lithium-pilocarpine model. Carisbamate prevents the occurrence of motor seizures, attention impairment, impulsivity, and compulsivity in a subpopulation of neuroprotected rats. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  6. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
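
    For readers who want to reproduce the kind of check the authors recommend, the sketch below pairs the standard PBR formula from the marine mammal literature (PBR = 0.5 * R_max * F_R * N_min, Wade 1998) with a small Leslie matrix projection in which the PBR quota is removed each year. The life-history values are hypothetical seabird-like numbers chosen for illustration, not parameters from the paper.

    ```python
    import numpy as np

    def pbr(n_min, r_max, f_r=0.5):
        """Potential Biological Removal as usually defined in the marine mammal
        literature: PBR = 0.5 * R_max * F_R * N_min."""
        return 0.5 * r_max * f_r * n_min

    def project_with_removals(leslie, n0, removals, years=50):
        """Project a Leslie matrix population, removing a fixed number of adults
        each year -- an explicit way to test whether a PBR-style quota is
        sustainable for a given life history and population trajectory."""
        n = np.asarray(n0, dtype=float)
        for _ in range(years):
            n = leslie @ n
            n[-1] = max(n[-1] - removals, 0.0)
        return n

    # Hypothetical seabird-like life history: three age classes, low fecundity,
    # high adult survival; growth rate roughly +5 %/yr in the absence of removals.
    L = np.array([[0.0, 0.0, 0.20],
                  [0.8, 0.0, 0.00],
                  [0.0, 0.9, 0.92]])
    n0 = [200.0, 150.0, 1000.0]
    quota = pbr(n_min=sum(n0), r_max=0.06)
    print("PBR quota per year:", round(quota, 1))
    print("Age structure after 50 yr with quota removed:",
          project_with_removals(L, n0, quota).round(1))
    ```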

  7. CFD modelling of convective heat transfer from a window with adjacent venetian blinds

    Energy Technology Data Exchange (ETDEWEB)

    Marjanovic, L. [Belgrade Univ., Belgrade (Yugoslavia). Faculty of Mechanical Engineering]|[DeMontfort Univ. (United Kingdom). Inst. of Energy and Sustainable Development; Cook, M; Hanby, V.; Rees, S. [DeMontfort Univ. (United Kingdom). Inst. of Energy and Sustainable Development

    2005-07-01

    There is a limited amount of 3-dimensional modelling information on the performance of glazing systems with blinds. Two-dimensional flow modelling has indicated that assuming 1-dimensional heat transfer can lead to invalid results where 2- and 3-dimensional effects are present. In this study, a 3-dimensional numerical solution was obtained for the effect of a venetian blind on the conjugate heat transfer from an indoor window glazing system. The solution was obtained for the coupled laminar free convection and radiation heat transfer problem, including conduction along the blind slats. Continuity, momentum and energy equations for buoyant flow were solved using Computational Fluid Dynamics (CFD) software. Grey diffuse radiation exchange between the window, blind and air was considered using the Monte Carlo method. All thermophysical properties of air were assumed to be constant with the exception of density, which was modelled using the Boussinesq approximation. Both winter and summer conditions were considered. In the computational domain, the window was represented as an isothermal boundary condition with no slip. The height of the domain was extended beyond the blinds to allow for inflow and outflow regions. Fluid was allowed to entrain into the domain at ambient temperature in a direction perpendicular to the window. The results indicated that heat transfer between the window and indoor air is influenced both quantitatively and qualitatively by the presence of an aluminium venetian blind, and that the cellular flow between the blind slats can have a significant effect on the convective heat transfer from the window surface, an effect that is more fully captured in a 3-dimensional analysis. refs., 2 tabs., 13 figs.

  8. Sub-discretized surface model with application to contact mechanics in multi-body simulation

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, S; Williams, J

    2008-02-28

    The mechanics of contact between rough and imperfectly spherical adhesive powder grains are often complicated by a variety of factors, several of which vary over sub-grain length scales. These include traction factors that vary spatially over the surface of the individual grains: high-energy electron and acceptor sites (electrostatic), hydrophobic and hydrophilic sites (electrostatic and capillary), surface energy (general adhesion), geometry (van der Waals and mechanical), and elasto-plastic deformation (mechanical). For mechanical deformation and reaction, coupled motions such as twisting with bending and sliding, as well as surface roughness, add an asymmetry to the contact force which invalidates the assumptions of popular contact models, such as the Hertzian model and its derivatives for the non-adhesive case, and the JKR and DMT models for adhesive contacts. Though several contact laws have been offered to ameliorate these drawbacks, they are often constrained to particular loading paths (most often normal loading) and are relatively complicated to implement computationally. This paper offers a simple and general computational method for augmenting contact law predictions in multi-body simulations through characterization of the contact surfaces using a hierarchically defined surface sub-discretization. For the case of adhesive contact between powder grains in low stress regimes, this technique allows a variety of existing contact laws to be resolved across scales, so that moments and torques about the contact area, as well as normal and tangential tractions, can be resolved. This is especially useful for multi-body simulation applications where the modeler desires statistical distributions and calibration of parameters in contact laws commonly used for resolving near-surface contact mechanics. The approach is verified against analytical results for the case of rough, elastic spheres.

  9. Consolidating the medical model of disability: on poliomyelitis and the constitution of orthopedic surgery and orthopaedics as a speciality in Spain (1930-1950

    Directory of Open Access Journals (Sweden)

    Martínez-Pérez, José

    2009-06-01

    At the beginning of the 1930s, various factors made it necessary to transform one of the institutions which was renowned for its work regarding the social reinsertion of the disabled, that is, the Instituto de Reeducación Profesional de Inválidos del Trabajo (Institute for Occupational Retraining of Invalids of Work). The economic crisis of 1929 and the legislative reform aimed at regulating occupational accidents highlighted the failings of this institution to fulfil its objectives. After a time of uncertainty, the centre was renamed the Instituto Nacional de Reeducación de Inválidos (National Institute for Retraining of Invalids). This was done to take advantage of its work in championing the recovery of all people with disabilities.

    This work aims to study the role played in this process by the poliomyelitis epidemics in Spain at this time. It aims to highlight how this disease justified the need to continue the work of a group of professionals and how it helped to reorient the previous programme to re-educate the «invalids». Thus we shall see the way in which, from 1930 to 1950, a specific medical technology helped to consolidate an «individual model» of disability and how a certain cultural stereotype of those affected developed as a result. Lastly, this work discusses the way in which all this took place in the midst of a process of professional development of orthopaedic surgeons.


  10. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages has thus become the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract, reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.

  11. A comparison of zero-order, first-order, and Monod biotransformation models

    International Nuclear Information System (INIS)

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when substrate concentration, S, is much less than the half-saturation constant, K_S, this assumption is often made without verification of this condition. The authors present a formal error analysis showing that the relative error in the first-order approximation is S/K_S and in the zero-order approximation the error is K_S/S. They then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than K_S, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of K_S for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, the authors apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set
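
    The error bounds quoted in the abstract follow directly from the rate expressions: the first-order rate k_max*S/K_S overestimates the Monod rate k_max*S/(K_S + S) by a factor (K_S + S)/K_S, i.e. a relative error of S/K_S, and the zero-order rate k_max overestimates it by K_S/S. A hedged numerical illustration (Python, arbitrary parameter values, not the authors' field data) is given below.

    ```python
    def monod_rate(s, k_max, k_s):
        """Monod biotransformation rate: -dS/dt = k_max * S / (K_S + S)."""
        return k_max * s / (k_s + s)

    def first_order_rate(s, k_max, k_s):
        """First-order approximation (valid for S << K_S; relative error ~ S/K_S)."""
        return (k_max / k_s) * s

    def zero_order_rate(s, k_max, k_s):
        """Zero-order approximation (valid for S >> K_S; relative error ~ K_S/S)."""
        return k_max

    k_max, k_s = 1.0, 2.0          # arbitrary units, e.g. mg/(L*day) and mg/L
    for s in (0.02, 0.2, 2.0, 20.0):
        print(f"S = {s:6.2f}   Monod = {monod_rate(s, k_max, k_s):.3f}   "
              f"first-order = {first_order_rate(s, k_max, k_s):.3f}   "
              f"zero-order = {zero_order_rate(s, k_max, k_s):.3f}")
    ```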

  12. Interdependent demands, regulatory constraint, and peak-load pricing. [Assessment of Bailey's model

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D T; Macgregor-Reid, G J

    1977-06-01

    A model of a regulated firm which includes an analysis of peak-load pricing has been formulated by E. E. Bailey in which three alternative modes of regulation on a profit-maximizing firm are considered. The main conclusion reached is that under a regulation limiting the rate of return on capital investment, price reductions are received solely by peak-users and that when regulation limiting the profit per unit of output or the return on costs is imposed, there are price reductions for all users. Bailey has expressly assumed that the demands in different periods are interdependent but has somehow failed to derive the correct price and welfare implications of this empirically highly relevant assumption. Her conclusions would have been perfectly correct for marginal revenues but are quite incorrect for prices, even if her assumption that price exceeds marginal revenues in every period holds. This present paper derives fully and rigorously the implications of regulation for prices, outputs, capacity, and social welfare for a profit-maximizing firm with interdependent demands. In section II, Bailey's model is reproduced and the optimal conditions are given. In section III, it is demonstrated that under the conditions of interdependent demands assumed by Bailey herself, her often-quoted conclusion concerning the effects of the return-on-investment regulation on the off-peak price is invalid. In section IV, the effects of the return-on-investment regulation on the optimal prices, outputs, capacity, and social welfare both for the case in which the demands in different periods are substitutes and for the case in which they are complements are examined. In section V, the pricing and welfare implications of the return-on-investment regulation are compared with the two other modes of regulation considered by Bailey. Section VI is a summary of all sections. (MCW)

  13. P2X7 Receptors Drive Spine Synapse Plasticity in the Learned Helplessness Model of Depression.

    Science.gov (United States)

    Otrokocsi, Lilla; Kittel, Ágnes; Sperlágh, Beáta

    2017-10-01

    Major depressive disorder is characterized by structural and functional abnormalities of cortical and limbic brain areas, including a decrease in spine synapse number in the dentate gyrus of the hippocampus. Recent studies have highlighted that both genetic and pharmacological invalidation of the purinergic P2X7 receptor (P2rx7) leads to an antidepressant-like phenotype in animal experiments; however, the impact of P2rx7 on depression-related structural changes in the hippocampus has not yet been clarified. The effects of genetic deletion of P2rx7 on depressive-like behavior and spine synapse density in the dentate gyrus were investigated using the learned helplessness mouse model of depression. We demonstrate that in wild-type animals, inescapable footshocks lead to learned helplessness behavior, reflected in increased latency and number of escape failures in response to subsequent escapable footshocks. This behavior is accompanied by downregulation of mRNA encoding P2rx7 and a decrease in spine synapse density in the dentate gyrus, as determined by electron microscopic stereology. In addition, a decrease in synaptopodin, but not in PSD95 or NR2B/GluN2B, protein level was observed under these conditions. Although the absence of P2rx7 was characterized by an escape deficit, no learned helplessness behavior was observed in these animals. Likewise, no decrease in spine synapse number or synaptopodin protein level was detected in response to inescapable footshocks in P2rx7-deficient animals. Our findings suggest endogenous activation of P2rx7 in the learned helplessness model of depression, and the decreased plasticity of spine synapses in P2rx7-deficient mice might explain the resistance of these animals to repeated stressful stimuli. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  14. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. [Figure caption: Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (SIBYLL 2.1), open circles (QGSJET II-03) and open triangles (EPOS 1.99). Among these models, EPOS 1.99 shows the best overall agreement with the LHCf data.] LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known ...

  15. Modelling, simulation, and optimisation of a downflow entrained flow reactor for black liquor gasification

    Energy Technology Data Exchange (ETDEWEB)

    Marklund, Magnus [ETC Energitekniskt Centrum, Piteaa (Sweden)

    2003-12-01

    of heat flux to the reactor wall. By using a model based on coal combustion it was concluded that the gas flow field is relatively insensitive to the burner spray angle. Partial verification of an advanced PBLG model for a simplified case showed very good agreement. By studying the influence from uncertainties in some model parameter inputs, it was found that all the studied main effects are relatively small and that the uncertainties in the examined model parameters would not invalidate the results from a design optimisation with the developed PBLG reactor model.

  16. Analyzing subsurface drain network performance in an agricultural monitoring site with a three-dimensional hydrological model

    Science.gov (United States)

    Nousiainen, Riikka; Warsta, Lassi; Turunen, Mika; Huitu, Hanna; Koivusalo, Harri; Pesonen, Liisa

    2015-10-01

    Effectiveness of a subsurface drainage system decreases with time, leading to a need to restore the drainage efficiency by installing new drain pipes in problem areas. The drainage performance of the resulting system varies spatially and complicates runoff and nutrient load generation within the fields. We presented a method to estimate the drainage performance of a heterogeneous subsurface drainage system by simulating the area with the three-dimensional hydrological FLUSH model. A GIS analysis was used to delineate the surface runoff contributing area in the field. We applied the method to reproduce the water balance and to investigate the effectiveness of a subsurface drainage network of a clayey field located in southern Finland. The subsurface drainage system was originally installed in the area in 1971 and the drainage efficiency was improved in 1995 and 2005 by installing new drains. FLUSH was calibrated against total runoff and drain discharge data from 2010 to 2011 and validated against total runoff in 2012. The model supported quantification of runoff fractions via the three installed drainage networks. Model realisations were produced to investigate the extent of the runoff contributing areas and the effect of the drainage parameters on subsurface drain discharge. The analysis showed that better model performance was achieved when the efficiency of the oldest drainage network (installed in 1971) was decreased. Our analysis method can reveal the drainage system performance but not the reason for the deterioration of the drainage performance. Tillage layer runoff from the field was originally computed by subtracting drain discharge from the total runoff. The drains installed in 1995 bypass the measurement system, which renders the tillage layer runoff calculation procedure invalid after 1995. Therefore, this article suggests use of a local correction coefficient based on the simulations for further research utilizing data from the study area.

  17. A More Flexible Lipoprotein Sorting Pathway

    Science.gov (United States)

    Chahales, Peter

    2015-01-01

    Lipoprotein biogenesis in Gram-negative bacteria occurs by a conserved pathway, each step of which is considered essential. In contrast to this model, LoVullo and colleagues demonstrate that the N-acyl transferase Lnt is not required in Francisella tularensis or Neisseria gonorrhoeae. This suggests the existence of a more flexible lipoprotein pathway, likely due to a modified Lol transporter complex, and raises the possibility that pathogens may regulate lipoprotein processing to modulate interactions with the host. PMID:25755190

  18. Targeted and non-targeted effects of ionizing radiation

    OpenAIRE

    Omar Desouky; Nan Ding; Guangming Zhou

    2015-01-01

    For a long time it was generally accepted that effects of ionizing radiation such as cell death, chromosomal aberrations, DNA damage, mutagenesis, and carcinogenesis result from direct ionization of cell structures, particularly DNA, or from indirect damage through reactive oxygen species produced by radiolysis of water, and these biological effects were attributed to irreparable or misrepaired DNA damage in cells directly hit by radiation. Using linear non-threshold model (LNT), possible ris...

  19. Non-Linear Adaptive Phenomena Which Decrease The Risk of Infection After Pre-Exposure to Radiofrequency Radiation

    OpenAIRE

    Mortazavi, S.M.J.; Motamedifar, M.; Namdari, G.; Taheri, M.; Mortazavi, A.R.; Shokrpour, N.

    2013-01-01

    Substantial evidence indicates that adaptive response induced by low doses of ionizing radiation can result in resistance to the damage caused by a subsequently high-dose radiation or cause cross-resistance to other non-radiation stressors. Adaptive response contradicts the linear-non-threshold (LNT) dose-response model for ionizing radiation. We have previously reported that exposure of laboratory animals to radiofrequency radiation can induce a survival adaptive response. Furthermore, we ha...

  20. The potential for bias in Cohen's ecological analysis of lung cancer and residential radon

    International Nuclear Information System (INIS)

    Lubin, Jay H.

    2002-01-01

    Cohen's ecological analysis of US lung cancer mortality rates and mean county radon concentration shows decreasing mortality rates with increasing radon concentration (Cohen 1995 Health Phys. 68 157-74). The results prompted his rejection of the linear-no-threshold (LNT) model for radon and lung cancer. Although several authors have demonstrated that risk patterns in ecological analyses provide no inferential value for assessment of risk to individuals, Cohen advances two arguments in a recent response to Darby and Doll (2000 J. Radiol. Prot. 20 221-2) who suggest Cohen's results are and will always be burdened by the ecological fallacy. Cohen asserts that the ecological fallacy does not apply when testing the LNT model, for which average exposure determines average risk, and that the influence of confounding factors is obviated by the use of large numbers of stratification variables. These assertions are erroneous. Average dose determines average risk only for models which are linear in all covariates, in which case ecological analyses are valid. However, lung cancer risk and radon exposure, while linear in the relative risk, are not linearly related to the scale of absolute risk, and thus Cohen's rejection of the LNT model is based on a false premise of linearity. In addition, it is demonstrated that the deleterious association for radon and lung cancer observed in residential and miner studies is consistent with negative trends from ecological studies, of the type described by Cohen. (author)
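
    Lubin's central point, that averaging a relative-risk model over heterogeneous counties can reverse the apparent dose-response, is easy to demonstrate numerically. The simulation below is purely illustrative (invented county distributions of radon and of a confounder standing in for smoking prevalence, not data from Cohen or Lubin): the individual-level effect of radon is set to be positive, yet the county-level (ecological) regression slope comes out negative once the confounder is negatively correlated with radon across counties.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_counties = 500

    # Invented county-level distributions: mean radon (pCi/L) and a confounder
    # (smoking prevalence) that decreases with radon across counties.
    radon = rng.gamma(shape=4.0, scale=0.4, size=n_counties)
    smoking = np.clip(0.45 - 0.08 * radon + rng.normal(0.0, 0.03, n_counties), 0.05, 0.8)

    # Individual-level (true) model: mortality is linear in the *relative* risk,
    # with a positive radon effect, and the baseline is dominated by smoking.
    baseline = 10.0 + 600.0 * smoking          # deaths per 100,000 (illustrative)
    rr_per_unit = 0.08                         # +8% relative risk per pCi/L
    county_rate = baseline * (1.0 + rr_per_unit * radon)

    # Ecological regression of county rate on county mean radon.
    slope = np.polyfit(radon, county_rate, 1)[0]
    print("True individual-level radon effect: positive (+8% RR per pCi/L)")
    print(f"Ecological (county-level) slope: {slope:.1f} deaths/100k per pCi/L")
    ```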

  1. Identification and location tasks rely on different mental processes: a diffusion model account of validity effects in spatial cueing paradigms with emotional stimuli.

    Science.gov (United States)

    Imhoff, Roland; Lange, Jens; Germar, Markus

    2018-02-22

    Spatial cueing paradigms are popular tools to assess human attention to emotional stimuli, but different variants of these paradigms differ in what participants' primary task is. In one variant, participants indicate the location of the target (location task), whereas in the other they indicate the shape of the target (identification task). In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies (total N = 397; two in the supplement) using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas the same effect rests on delay due to attention to the cue in identification tasks. Based on this, we predict and empirically support that a symmetrical distribution of valid and invalid cues would reduce cue validity effects in location tasks to a greater extent than in identification tasks. Across all variants of the task, we fail to replicate the effect of greater cue validity effects for arousing (vs. neutral) stimuli. We discuss the implications of these findings for best practice in spatial cueing research.
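
    The distinction the authors draw can be made concrete with a toy drift-diffusion simulation: a cue that shifts the starting point (response bias) and a cue whose invalid trials simply add a processing delay both produce faster valid than invalid responses, even though the underlying mechanisms differ. The sketch below is a generic Euler-Maruyama simulation with made-up parameter values, not the fitted models from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ddm_rt(drift, start, threshold=1.0, dt=0.001, noise=1.0, t0=0.30, max_t=3.0):
        """First-passage time of a two-boundary Wiener diffusion (boundaries at 0
        and `threshold`, starting point `start`), plus non-decision time t0 (s)."""
        x, t = start, 0.0
        while 0.0 < x < threshold and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return t + t0

    def mean_rt(n=1000, **kw):
        return float(np.mean([ddm_rt(**kw) for _ in range(n)]))

    # Account 1 (location task): the cue biases the starting point.
    print("bias account   valid:", round(mean_rt(drift=2.0, start=0.65), 3),
          " invalid:", round(mean_rt(drift=2.0, start=0.35), 3))

    # Account 2 (identification task): attending to the cue delays processing on
    # invalid trials (longer non-decision time); drift and start are unchanged.
    print("delay account  valid:", round(mean_rt(drift=2.0, start=0.50, t0=0.30), 3),
          " invalid:", round(mean_rt(drift=2.0, start=0.50, t0=0.38), 3))
    ```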

  2. Traumatic stress causes distinctive effects on fear circuit catecholamines and the fear extinction profile in a rodent model of posttraumatic stress disorder.

    Science.gov (United States)

    Lin, Chen-Cheng; Tung, Che-Se; Lin, Pin-Hsuan; Huang, Chuen-Lin; Liu, Yia-Ping

    2016-09-01

    Central catecholamines regulate fear memory across the medial prefrontal cortex (mPFC), amygdala (AMYG), and hippocampus (HPC). However, there is inadequate evidence on the relationships among these fear circuit areas in terms of the fear symptoms of posttraumatic stress disorder (PTSD). By examining the behavioral profile in a Pavlovian fear conditioning paradigm together with tissue/efflux levels of dopamine (DA) and norepinephrine (NE) and their reuptake abilities across the fear circuit areas in rats that experienced single prolonged stress (SPS, a rodent model of PTSD), we demonstrated that SPS-impaired extinction retrieval was concomitant with changes in central DA/NE in a dissociable manner. For tissue levels, diminished DA and increased NE were both observed in the mPFC and AMYG. DA efflux and synaptosomal DA transporter were consistently reduced in the AMYG/vHPC, whereas SPS reduced NE efflux in the infralimbic cortex and synaptosomal NE transporter in the mPFC. Furthermore, a lower expression of synaptosomal VMAT2 was observed in the mPFC, AMYG, and vHPC after SPS. Finally, negative correlations were observed between retrieval freezing and DA in the mPFC/AMYG; however, these correlations were no longer present after SPS. Our results suggest that central catecholamines are crucially involved in the retrieval of fear extinction, with DA and NE playing distinctive roles across the fear circuit areas. Copyright © 2016 Elsevier B.V. and ECNP. All rights reserved.

  3. Molecular alterations in childhood thyroid cancer after Chernobyl accident and low-dose radiation risk

    International Nuclear Information System (INIS)

    Suzuki, Keiji; Mitsutake, Norisato; Yamashita, Shunichi

    2012-01-01

    The linear no-threshold (LNT) model of radiation carcinogenesis has been used for evaluating the risk from radiation exposure. While epidemiological studies have supported the LNT model at doses above 100 mGy, considerable uncertainty remains in the LNT model at low doses below 100 mGy. Thus, there is an urgent need to clarify the molecular mechanisms underlying radiation carcinogenesis. After the Chernobyl accident in 1986, a significant number of childhood thyroid cancers emerged among children living in the contaminated area. As the incidence of sporadic childhood thyroid cancer is very low, it is quite evident that those cancer cases were induced by radiation exposure caused mainly by the intake of contaminated foods, such as milk. Because genetic alterations in childhood thyroid cancers have been studied extensively, they provide a unique opportunity to understand the molecular mechanisms of radiation carcinogenesis. In the current review, molecular signatures obtained from molecular studies of childhood thyroid cancer after the Chernobyl accident are summarized, and new roles of radiation exposure in thyroid carcinogenesis are discussed. (author)

  4. Health Physics Society Comments to U.S. Environmental Protection Agency Regulatory Reform Task Force.

    Science.gov (United States)

    Ring, Joseph; Tupin, Edward; Elder, Deirdre; Hiatt, Jerry; Sheetz, Michael; Kirner, Nancy; Little, Craig

    2018-05-01

    The Health Physics Society (HPS) provided comment to the U.S. Environmental Protection Agency (EPA) on options to consider when developing an action plan for President Trump's Executive Order to evaluate regulations for repeal, replacement, or modification. The HPS recommended that the EPA reconsider their adherence to the linear no-threshold (LNT) model for radiation risk calculations and improve several documents by better addressing uncertainties in low-dose, low dose-rate (LDDR) radiation exposure environments. The authors point out that use of the LNT model near background levels cannot provide reliable risk projections, use of the LNT model and collective-dose calculations in some EPA documents is inconsistent with the recommendations of international organizations, and some EPA documents have not been exposed to the public comment rule-making process. To assist in establishing a better scientific basis for the risks of low dose rate and low dose radiation exposure, the EPA should continue to support the "Million Worker Study," led by the National Council on Radiation Protection and Measurement.

  5. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
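
    The nesting idea transfers to any parametric model, not just spatial point processes. The sketch below applies it to a deliberately simple example (a Poisson goodness-of-fit test with a dispersion statistic) to show the structure: the naive plug-in Monte Carlo p-value is computed for the data and then re-ranked against plug-in p-values of datasets simulated from the fitted model. This is a generic illustration of the double Monte Carlo construction, not the authors' spatial-point-process code.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def statistic(x, lam_hat):
        """Pearson-type dispersion statistic against the fitted Poisson mean."""
        return np.sum((x - lam_hat) ** 2 / lam_hat)

    def plug_in_p(x, n_sim=99):
        """Naive plug-in Monte Carlo p-value (the biased procedure: the same data
        estimate the parameter and generate the reference distribution)."""
        lam_hat = x.mean()
        t_obs = statistic(x, lam_hat)
        t_sim = [statistic(y, y.mean())
                 for y in (rng.poisson(lam_hat, size=x.size) for _ in range(n_sim))]
        return (1 + np.sum(np.array(t_sim) >= t_obs)) / (n_sim + 1)

    def nested_p(x, n_outer=99, n_inner=99):
        """Nested (double) Monte Carlo adjustment: rank the data's plug-in p-value
        against plug-in p-values of datasets simulated from the fitted model."""
        p_obs = plug_in_p(x, n_inner)
        lam_hat = x.mean()
        p_sim = [plug_in_p(rng.poisson(lam_hat, size=x.size), n_inner)
                 for _ in range(n_outer)]
        return (1 + np.sum(np.array(p_sim) <= p_obs)) / (n_outer + 1)

    counts = rng.poisson(3.0, size=50)       # data actually drawn from the null model
    print("plug-in p:", plug_in_p(counts), " nested p:", nested_p(counts))
    ```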

  6. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy; Jonkman, Jason

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).
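
    The breakdown criterion quoted in the abstract is a simple geometric check once the wavelength is known; for first-order waves in deep water the Airy dispersion relation gives lambda = g*T^2/(2*pi). The few lines below compute the diameter-to-wavelength ratio for an illustrative monopile (the 8 m diameter and the wave periods are assumptions for the example, not values from the study).

    ```python
    import math

    G = 9.81  # m/s^2

    def deepwater_wavelength(period_s):
        """First-order (Airy) deep-water wavelength: lambda = g * T**2 / (2 * pi)."""
        return G * period_s ** 2 / (2.0 * math.pi)

    def strip_theory_check(diameter_m, period_s, ratio_limit=0.2):
        """Rule-of-thumb applicability check for strip theory / Morison loading:
        the member diameter should be small compared with the wavelength
        (roughly D / lambda < 0.2, the ratio cited in the abstract)."""
        ratio = diameter_m / deepwater_wavelength(period_s)
        return ratio, ratio < ratio_limit

    for period in (4.0, 10.0):
        ratio, ok = strip_theory_check(diameter_m=8.0, period_s=period)
        print(f"T = {period:4.1f} s   D/lambda = {ratio:.3f}   strip theory OK: {ok}")
    ```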

  7. Fokker-Planck modeling of current penetration during electron cyclotron current drive

    International Nuclear Information System (INIS)

    Merkulov, A.; Westerhof, E.; Schueller, F. C.

    2007-01-01

    The current penetration during electron cyclotron current drive (ECCD) on the resistive time scale is studied with a Fokker-Planck simulation, which includes a model for the magnetic diffusion that determines the parallel electric field evolution. The existence of the synergy between the inductive electric field and EC driven current complicates the process of the current penetration and invalidates the standard method of calculation in which Ohm's law is simply approximated by j - j_cd = σE. Here it is proposed to obtain at every time step a self-consistent approximation to the plasma resistivity from the Fokker-Planck code, which is then used in a concurrent calculation of the magnetic diffusion equation in order to obtain the inductive electric field at the next time step. A series of Fokker-Planck calculations including a self-consistent evolution of the inductive electric field has been performed. Both the ECCD power and the electron density have been varied, thus varying the well-known nonlinearity parameter for ECCD, P_rf [MW m^-3]/n_e^2 [10^19 m^-3] [R. W. Harvey et al., Phys. Rev. Lett. 62, 426 (1989)]. This parameter turns out also to be a good predictor of the synergetic effects. The results are then compared with the standard method of calculations of the current penetration using a transport code. At low values of the Harvey parameter, the standard method is in quantitative agreement with Fokker-Planck calculations. However, at high values of the Harvey parameter, synergy between ECCD and E_parallel is found. In the case of cocurrent drive, this synergy leads to the generation of large amounts of nonthermal electrons and a concomitant increase of the electrical conductivity and current penetration time. In the case of countercurrent drive, the ECCD efficiency is suppressed by the synergy with E_parallel while only a small amount of nonthermal electrons is produced

  8. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods and identifies tools, which can provide support for this modelling activity. The model adopted for global manufacturing is that of an extended enterprise s...

  9. Document Models

    Directory of Open Access Journals (Sweden)

    A.A. Malykh

    2017-08-01

    In this paper, the concept of locally simple models is considered. Locally simple models are arbitrarily complex models built from relatively simple components. A lot of practically important domains of discourse can be described as locally simple models, for example, business models of enterprises and companies. Up to now, research in human reasoning automation has been mainly concentrated on the most intellectually intensive activities, such as automated theorem proving. On the other hand, the retailer business model is formed from "jobs", and each "job" can be modelled and automated more or less easily. At the same time, the whole retailer model as an integrated system is extremely complex. In this paper, we offer a variant of the mathematical definition of a locally simple model. This definition is intended for modelling a wide range of domains. Therefore, we also must take into account perceptual and psychological issues. Logic is elitist, and if we want to attract as many people as possible to our models, we need to hide this elitism behind some metaphor to which 'ordinary' people are accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming. This allows us to achieve another important goal: to make the document models executable. Executable models are models that can act as practical information systems in the described domain of discourse. Thus, if our model is executable, then programming becomes redundant. The direct use of a model, instead of coding it in a programming language, brings important advantages, for example, a drastic cost reduction for development and maintenance. Moreover, since the model is well-formed and sound, and not dissolved within programming modules, we can directly apply AI tools, in particular machine learning. This significantly expands the possibilities for automation and

  10. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  11. Hidden Markov event sequence models: toward unsupervised functional MRI brain mapping.

    Science.gov (United States)

    Faisan, Sylvain; Thoraval, Laurent; Armspach, Jean-Paul; Foucher, Jack R; Metz-Lutz, Marie-Noëlle; Heitz, Fabrice

    2005-01-01

    Most methods used in functional MRI (fMRI) brain mapping require restrictive assumptions about the shape and timing of the fMRI signal in activated voxels. Consequently, fMRI data may be partially and misleadingly characterized, leading to suboptimal or invalid inference. To limit these assumptions and to capture the broad range of possible activation patterns, a novel statistical fMRI brain mapping method is proposed. It relies on hidden semi-Markov event sequence models (HSMESMs), a special class of hidden Markov models (HMMs) dedicated to the modeling and analysis of event-based random processes. Activation detection is formulated in terms of time coupling between (1) the observed sequence of hemodynamic response onset (HRO) events detected in the voxel's fMRI signal and (2) the "hidden" sequence of task-induced neural activation onset (NAO) events underlying the HROs. Both event sequences are modeled within a single HSMESM. The resulting brain activation model is trained to automatically detect neural activity embedded in the input fMRI data set under analysis. The data sets considered in this article are threefold: synthetic epoch-related, real epoch-related (auditory lexical processing task), and real event-related (oddball detection task) fMRI data sets. Synthetic data: Activation detection results demonstrate the superiority of the HSMESM mapping method with respect to a standard implementation of the statistical parametric mapping (SPM) approach. They are also very close, sometimes equivalent, to those obtained with an "ideal" implementation of SPM in which the activation patterns synthesized are reused for analysis. The HSMESM method appears clearly insensitive to timing variations of the hemodynamic response and exhibits low sensitivity to fluctuations of its shape (unsustained activation during task). Real epoch-related data: HSMESM activation detection results compete with those obtained with SPM, without requiring any prior definition of the expected

  12. Exercise training attenuates experimental autoimmune encephalomyelitis by peripheral immunomodulation rather than direct neuroprotection.

    Science.gov (United States)

    Einstein, Ofira; Fainstein, Nina; Touloumi, Olga; Lagoudaki, Roza; Hanya, Ester; Grigoriadis, Nikolaos; Katz, Abram; Ben-Hur, Tamir

    2018-01-01

    Conflicting results exist on the effects of exercise training (ET) on experimental autoimmune encephalomyelitis (EAE), and it is not known how exercise impacts disease progression. We examined whether ET ameliorates the development of EAE by modulating the systemic immune system or by exerting direct neuroprotective effects on the CNS. Healthy mice were subjected to 6 weeks of motorized treadmill running. The proteolipid protein (PLP)-induced transfer EAE model in mice was utilized. To assess the effects of ET on systemic autoimmunity, lymph-node (LN) T cells from trained vs. sedentary donor mice were transferred to naïve recipients. To assess direct neuroprotective effects of ET, PLP-reactive LN T cells were transferred into recipient mice that were trained prior to EAE transfer or to sedentary mice. EAE severity was assessed in vivo, and the characteristics of encephalitogenic LN T cells derived from PLP-immunized mice were evaluated in vitro. LN T cells obtained from trained mice induced attenuated clinical and pathological EAE in recipient mice vs. cells derived from sedentary animals. Training inhibited the activation, proliferation and cytokine gene expression of PLP-reactive T cells in response to the CNS-derived autoantigen, but strongly enhanced their proliferation in response to Concanavalin A, a non-specific stimulus. However, there was no difference in EAE severity when autoreactive encephalitogenic T cells were transferred to trained vs. sedentary recipient mice. ET inhibits immune system responses to an autoantigen to attenuate EAE, rather than generally suppressing the immune system, but does not exert a direct neuroprotective effect against EAE. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics. Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
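
    As a hedged illustration of the calibration idea described above (adjusting recharge until simulated water levels agree with measurements), the following sketch uses a deliberately toy one-cell model; the head equation, conductance value, and observations are invented for illustration only.

      import numpy as np

      def simulated_head(recharge, conductance=0.02):
          # Toy steady-state "model": head above a datum rises linearly with recharge.
          # A real watershed or groundwater-flow model would replace this function.
          return recharge / conductance

      observed_heads = np.array([4.8, 5.1, 5.0, 4.9])        # hypothetical measurements (m)
      trial_recharge = np.linspace(0.01, 0.5, 500)           # trial values (m/yr)

      # Least-squares calibration: keep the recharge value whose simulated head
      # agrees most closely with the measured water levels (the "best fit").
      misfit = [np.sum((simulated_head(r) - observed_heads) ** 2) for r in trial_recharge]
      best = trial_recharge[int(np.argmin(misfit))]
      print(f"model-generated recharge estimate: {best:.3f} m/yr")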

  14. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  15. “Protective Bystander Effects Simulated with the State-Vector Model”—HeLa x Skin Exposure to 137Cs Not Protective Bystander Response But Mammogram and Diagnostic X-Rays Are

    Science.gov (United States)

    Leonard, Bobby E.

    2008-01-01

    The recent Dose Response journal article “Protective Bystander Effects Simulated with the State-Vector Model” (Schollnberger and Eckl 2007) identified the suppressive (below the naturally occurring, zero primer dose, spontaneous level) dose response for HeLa x skin exposure to 137Cs gamma rays (Redpath et al 2001) as a protective Bystander Effect (BE) behavior. I had previously analyzed the Redpath et al (2001) data with a Microdose Model and conclusively showed that the suppressive response was from Adaptive Response (AR) radio-protection (Leonard 2005, 2007a). The significance of my microdose analysis has been that low LET radiation induced single (i.e. only one) charged particle traversals through a cell can initiate a Poisson distributed activation of AR radio-protection. The purpose of this correspondence is to clarify the distinctions relative to the BE and the AR behaviors for the Redpath group's 137Cs data, to show, conversely, that the Redpath group data for mammography (Ko et al 2004) and diagnostic (Redpath et al 2003) X-rays do conclusively reflect protective bystander behavior, and also to emphasize the need for radio-biologists to apply microdosimetry in planning and analyzing their experiments for BE and AR. Whether we are adamantly pro-LNT, adamantly anti-LNT or, like most of us, just simple scientists searching for the truth in radio-biology, it is important that we accurately identify our results, especially when related to the LNT hypothesis controversy. PMID:18846260

  16. Pros and cons of the revolution in radiation protection

    International Nuclear Information System (INIS)

    Latek, Stanislav

    2001-01-01

    In 1959, the International Commission on Radiological Protection (ICRP) chose the LNT (Linear No-Threshold) model as an assumption to form the basis for regulating radiation protection. During the 1999 UNSCEAR session, held in April in Vienna, the linear no-threshold (LNT) hypothesis was discussed. Among other LNT-related subjects, the Committee discussed the problem of collective dose and dose commitment. These concepts were introduced in the early 1960s, as the offspring of the linear no-threshold assumption. At the time they reflected a deep concern about the induction of hereditary effects by nuclear test fallout. Almost four decades later, collective dose and dose commitment are still widely used, although by now both the concepts and the concern should have faded into oblivion. It seems that the principles and concepts of radiation protection have gone astray and have led to exceedingly prohibitive standards and impractical recommendations. Revision of these principles and concepts is now being proposed by an increasing number of scientists and several organisations

  17. Model-model Perencanaan Strategik

    OpenAIRE

    Amirin, Tatang M

    2005-01-01

    The process of strategic planning, formerly called long-term planning, consists of several components, including strategic analysis, setting strategic direction (covering mission, vision, and values), and action planning. Many writers develop models representing the steps of the strategic planning process, i.e. the basic planning model, the problem-based planning model, the scenario model, and the organic or self-organizing model.

  18. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics.We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  19. Modelling survival

    DEFF Research Database (Denmark)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test...

  20. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...... the degrees of freedom, dependent and independent variables and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given....

  1. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...

  2. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely explicitly presented in peer-reviewed literature. We believe that devoting

  3. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. The validity and limitations of three distinct types of modelling codes, which will be contrasted, include discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs

  4. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    International Nuclear Information System (INIS)

    Beyea, Jan

    2017-01-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  5. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    Energy Technology Data Exchange (ETDEWEB)

    Beyea, Jan, E-mail: jbeyea@cipi.com

    2017-04-15

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  6. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designer's challenge is to combine strategy and

  7. Eclipse models

    International Nuclear Information System (INIS)

    Michel, F.C.

    1989-01-01

    Three existing eclipse models for the PSR 1957 + 20 pulsar are discussed in terms of their requirements and the information they yield about the pulsar wind: the interacting wind from a companion model, the magnetosphere model, and the occulting disk model. It is pointed out that the wind model requires an MHD wind from the pulsar, with enough particles that the Poynting flux of the wind can be thermalized; in this model, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model requires an EM wind, which is Poynting flux dominated; the advantage of this model over the wind model is that the plasma density inside the magnetosphere can be orders of magnitude larger than in a magnetospheric tail blown back by wind interaction. The occulting disk model also requires an EM wind so that the interaction would be pushed down onto the companion surface, minimizing direct interaction of the wind with the orbiting macroscopic particles

  8. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future

  9. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...

  10. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  11. Model : making

    OpenAIRE

    Bottle, Neil

    2013-01-01

    The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...

  12. Model building

    International Nuclear Information System (INIS)

    Frampton, Paul H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA

  13. Model building

    International Nuclear Information System (INIS)

    Frampton, P.H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA. copyright 1998 American Institute of Physics

  14. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Full Text Available Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical mode in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways for modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear method that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.
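
    A generic, hypothetical stand-in for the second stage described above (a document reduced to a low-dimensional vector by a linear map over word vectors) might look like the following; the vocabulary, embedding matrix, and projection are random placeholders, not the authors' learned components.

      import numpy as np

      rng = np.random.default_rng(0)
      vocab = {"storm": 0, "flood": 1, "river": 2, "election": 3, "vote": 4}
      word_vectors = rng.normal(size=(len(vocab), 50))   # placeholder for learned word semantics
      projection = rng.normal(size=(50, 5))              # placeholder for a learned linear map

      def document_vector(tokens):
          """Average the word vectors, then project linearly to a low-dimensional space."""
          idx = [vocab[t] for t in tokens if t in vocab]
          return word_vectors[idx].mean(axis=0) @ projection

      print(document_vector(["storm", "flood", "river"]).round(2))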

  15. Animal models

    DEFF Research Database (Denmark)

    Gøtze, Jens Peter; Krentz, Andrew

    2014-01-01

    In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...

  16. Battery Modeling

    NARCIS (Netherlands)

    Jongerden, M.R.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,

  17. Didactical modelling

    DEFF Research Database (Denmark)

    Højgaard, Tomas; Hansen, Rune

    The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful...

  18. Design modelling

    NARCIS (Netherlands)

    Kempen, van A.; Kok, H.; Wagter, H.

    1992-01-01

    In Computer Aided Drafting three groups of three-dimensional geometric modelling can be recognized: wire frame, surface and solid modelling. One of the methods to describe a solid is by using a boundary based representation. The topology of the surface of a solid is the adjacency information between

  19. Education models

    NARCIS (Netherlands)

    Poortman, Sybilla; Sloep, Peter

    2006-01-01

    Educational models describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in

  20. Is BAMM Flawed? Theoretical and Practical Concerns in the Analysis of Multi-Rate Diversification Models.

    Science.gov (United States)

    Rabosky, Daniel L; Mitchell, Jonathan S; Chang, Jonathan

    2017-07-01

    Bayesian analysis of macroevolutionary mixtures (BAMM) is a statistical framework that uses reversible jump Markov chain Monte Carlo to infer complex macroevolutionary dynamics of diversification and phenotypic evolution on phylogenetic trees. A recent article by Moore et al. (MEA) reported a number of theoretical and practical concerns with BAMM. Major claims from MEA are that (i) BAMM's likelihood function is incorrect, because it does not account for unobserved rate shifts; (ii) the posterior distribution on the number of rate shifts is overly sensitive to the prior; and (iii) diversification rate estimates from BAMM are unreliable. Here, we show that these and other conclusions from MEA are generally incorrect or unjustified. We first demonstrate that MEA's numerical assessment of the BAMM likelihood is compromised by their use of an invalid likelihood function. We then show that "unobserved rate shifts" appear to be irrelevant for biologically plausible parameterizations of the diversification process. We find that the purportedly extreme prior sensitivity reported by MEA cannot be replicated with standard usage of BAMM v2.5, or with any other version when conventional Bayesian model selection is performed. Finally, we demonstrate that BAMM performs very well at estimating diversification rate variation across the ~20% of simulated trees in MEA's data set for which it is theoretically possible to infer rate shifts with confidence. Due to ascertainment bias, the remaining 80% of their purportedly variable-rate phylogenies are statistically indistinguishable from those produced by a constant-rate birth-death process and were thus poorly suited for the summary statistics used in their performance assessment. We demonstrate that inferences about diversification rates have been accurate and consistent across all major previous releases of the BAMM software. We recognize an acute need to address the theoretical foundations of rate-shift models for
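
    The point about trees that are statistically indistinguishable from a constant-rate birth-death process can be pictured with a minimal simulation of lineage counts under such a process; the rates, time horizon, and Gillespie-style bookkeeping below are illustrative assumptions, not the BAMM or MEA machinery.

      import numpy as np

      def birth_death_lineages(birth=1.0, death=0.5, t_max=5.0, seed=1):
          """Number of lineages after t_max under a constant-rate birth-death process."""
          rng = np.random.default_rng(seed)
          t, n = 0.0, 1
          while n > 0:
              t += rng.exponential(1.0 / (n * (birth + death)))   # waiting time to next event
              if t > t_max:
                  break
              n += 1 if rng.random() < birth / (birth + death) else -1
          return n

      print([birth_death_lineages(seed=s) for s in range(5)])     # replicate simulations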

  1. VENTILATION MODEL

    International Nuclear Information System (INIS)

    V. Chipman

    2002-01-01

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their postclosure analyses
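
    The wall heat fraction defined above is simply one minus the fraction of decay heat carried away by the ventilation air. A small bookkeeping sketch, with made-up heat loads, is:

      # Hypothetical time series: decay heat generated and heat removed by ventilation air (kW).
      decay_heat = [11.8, 11.5, 11.1, 10.8]
      vent_removal = [9.2, 9.0, 8.5, 8.1]

      for q_decay, q_vent in zip(decay_heat, vent_removal):
          removal_fraction = q_vent / q_decay           # fraction carried away by the air
          wall_heat_fraction = 1.0 - removal_fraction   # remainder conducted into the rock mass
          print(f"removed {removal_fraction:.2f}, wall heat fraction {wall_heat_fraction:.2f}")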

  2. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most...

  3. STEREOMETRIC MODELLING

    Directory of Open Access Journals (Sweden)

    P. Grimaldi

    2012-07-01

    Full Text Available Stereometric modelling means modelling achieved with: – the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); – the visualization of the shot in two distinct windows; – the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is often inaccurately applied to the simple perspective of an object, the word stereo is added so that "3D stereo vision" stands for a three-dimensional view that allows the width, height and depth of the surveyed image to be measured. A stereometric model, either real or virtual, is developed through the "materialization", either real or virtual, of the optical-stereometric model made visible with a stereoscope. A continuous online updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  4. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  5. Graphical Rasch models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models

  6. Supernova models

    International Nuclear Information System (INIS)

    Woosley, S.E.; California, University, Livermore, CA); Weaver, T.A.

    1981-01-01

    Recent progress in understanding the observed properties of type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the Ni-56 produced therein is reviewed. The expected nucleosynthesis and gamma-line spectra for this model of type I explosions and a model for type II explosions are presented. Finally, a qualitatively new approach to the problem of massive star death and type II supernovae based upon a combination of rotation and thermonuclear burning is discussed. While the theoretical results of existing models are predicated upon the assumption of a successful core bounce calculation and the neglect of such two-dimensional effects as rotation and magnetic fields the new model suggests an entirely different scenario in which a considerable portion of the energy carried by an equatorially ejected blob is deposited in the red giant envelope overlying the mantle of the star

  7. Model theory

    CERN Document Server

    Hodges, Wilfrid

    1993-01-01

    An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.

  8. Markov model

    Indian Academy of Sciences (India)

    School of Water Resources, Indian Institute of Technology, Kharagpur ... the most accepted method for modelling LULCC using current ... We used the UTM coordinate system with zone 45 ... need to develop criteria for making decisions about.

  9. Paleoclimate Modeling

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...

  10. Energy Models

    Science.gov (United States)

    Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...

  11. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  12. Ventilation models

    Science.gov (United States)

    Skaaret, Eimund

    Calculation procedures, used in the design of ventilating systems, which are especially suited for displacement ventilation in addition to linking it to mixing ventilation, are addressed. The two zone flow model is considered and the steady state and transient solutions are addressed. Different methods of supplying air are discussed, and different types of air flow are considered: piston flow, plane flow and radial flow. An evaluation model for ventilation systems is presented.

  13. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising
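
    One common way of treating model uncertainty in terms of probabilities is Bayesian model averaging; the sketch below uses invented priors, likelihoods, and predictions purely to show the arithmetic, and is not necessarily the approach the author advocates.

      import numpy as np

      priors = np.array([0.5, 0.3, 0.2])           # assumed prior probabilities of three models
      likelihoods = np.array([0.02, 0.10, 0.05])   # assumed likelihood of the data under each model
      predictions = np.array([1.2, 2.0, 3.1])      # each model's prediction of the quantity of interest

      posterior = priors * likelihoods
      posterior /= posterior.sum()                 # Bayes' rule: renormalize
      print("posterior model probabilities:", posterior.round(3))
      print("model-averaged prediction:", float(posterior @ predictions))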

  14. Thermocouple modeling

    International Nuclear Information System (INIS)

    Fryer, M.O.

    1984-01-01

    The temperature measurements provided by thermocouples (TCs) are important for the operation of pressurized water reactors. During severe inadequate core cooling incidents, extreme temperatures may cause type K thermocouples (TCs) used for core exit temperature monitoring to perform poorly. A model of TC electrical behavior has been developed to determine how TCs react under extreme temperatures. The model predicts the voltage output of the TC and its impedance. A series of experiments were conducted on a length of type K thermocouple to validate the model. Impedance was measured at several temperatures between 22°C and 1100°C and at frequencies between dc and 10 MHz. The model was able to accurately predict impedance over this wide range of conditions. The average percentage difference between experimental data and the model was less than 6.5%. Experimental accuracy was ±2.5%. There is a striking difference between impedance versus frequency plots at 300°C and at higher temperatures. This may be useful in validating TC data during accident conditions

  15. Photoionization Modeling

    Science.gov (United States)

    Kallman, T.

    2010-01-01

    Warm absorber spectra are characterized by the many lines from partially ionized intermediate-Z elements, and iron, detected with the grating instruments on Chandra and XMM-Newton. If these ions are formed in a gas which is in photoionization equilibrium, they correspond to a broad range of ionization parameters, although there is evidence for certain preferred values. A test for any dynamical model for these outflows is to reproduce these properties, at some level of detail. In this paper we present a statistical analysis of the ionization distribution which can be applied both the observed spectra and to theoretical models. As an example, we apply it to our dynamical models for warm absorber outflows, based on evaporation from the molecular torus.

  16. Reflectance Modeling

    Science.gov (United States)

    Smith, J. A.; Cooper, K.; Randolph, M.

    1984-01-01

    A classical description of the one dimensional radiative transfer treatment of vegetation canopies was completed and the results were tested against measured prairie (blue grama) and agricultural canopies (soybean). Phase functions are calculated in terms of directly measurable biophysical characteristics of the canopy medium. While the phase functions tend to exhibit backscattering anisotropy, their exact behavior is somewhat more complex and wavelength dependent. A Monte Carlo model was developed that treats soil surfaces with large periodic variations in three dimensions. A photon-ray tracing technology is used. Currently, the rough soil surface is described by analytic functions and appropriate geometric calculations performed. A bidirectional reflectance distribution function is calculated and, hence, available for other atmospheric or canopy reflectance models as a lower boundary condition. This technique is used together with an adding model to calculate several cases where Lambertian leaves possessing anisotropic leaf angle distributions yield non-Lambertian reflectance; similar behavior is exhibited for simulated soil surfaces.

  17. Mathematical modeling

    CERN Document Server

    Eck, Christof; Knabner, Peter

    2017-01-01

    Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book readers will learn to derive mathematical models which help to understand real world phenomena. At the same time a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees are given. An essential feature of this book is that mathematical structures are used as an ordering principle and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.

  18. Modelling language

    CERN Document Server

    Cardey, Sylviane

    2013-01-01

    In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int

  19. Molecular modeling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-01-01

    Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease the harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.

  20. Supernova models

    International Nuclear Information System (INIS)

    Woosley, S.E.; Weaver, T.A.

    1980-01-01

    Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the 56 Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed

  1. Isolating lattice from electronic contributions in thermal transport measurements of metals and alloys above ambient temperature and an adiabatic model

    Science.gov (United States)

    Criss, Everett M.; Hofmeister, Anne M.

    2017-06-01

    From femtosecond spectroscopy (fs-spectroscopy) of metals, electrons and phonons reequilibrate nearly independently, which contrasts with models of heat transfer at ordinary temperatures (T > 100 K). These electronic transfer models only agree with thermal conductivity (k) data at a single temperature, but do not agree with thermal diffusivity (D) data. To address the discrepancies, which are important to problems in solid state physics, we separately measured electronic (ele) and phononic (lat) components of D in many metals and alloys over ~290-1100 K by varying measurement duration and sample length in laser-flash experiments. These mechanisms produce distinct diffusive responses in temperature versus time acquisitions because carrier speeds (u) and heat capacities (C) differ greatly. Electronic transport of heat only operates for a brief time after heat is applied because u is high. High D_ele is associated with moderate T, long lengths, low electrical resistivity, and loss of ferromagnetism. Relationships of D_ele and D_lat with physical properties support our assignments. Although k_ele reaches ~20 × k_lat near 470 K, it is transient. Combining previous data on u with each D provides mean free paths and lifetimes that are consistent with ~298 K fs-spectroscopy, and new values at high T. Our findings are consistent with nearly-free electrons absorbing and transmitting a small fraction of the incoming heat, whereas phonons absorb and transmit the majority. We model time-dependent, parallel heat transfer under adiabatic conditions, which is one-dimensional in solids, as required by thermodynamic law. For noninteracting mechanisms, k ≅ Σ(C_i k_i) Σ(C_i) / Σ(C_i²). For metals, this reduces to k = k_lat above ~20 K, consistent with our measurements, and shows that Meissner's equation (k ≅ k_lat + k_ele) is invalid above ~20 K. For one mechanism with multiple, interacting carriers, k ≅ Σ(C_i k_i) / Σ(C_i). Thus, certain dynamic behaviors of electrons and phonons in metals have been
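
    Taking the two mixing rules quoted in the abstract at face value (the grouping of the sums is an interpretation of the plain-text formulas), a quick numerical check that the noninteracting rule collapses to k ≈ k_lat when the lattice heat capacity dominates could look like this; the heat-capacity and conductivity values are illustrative, not taken from the paper.

      import numpy as np

      # Illustrative values only: lattice vs. electronic heat capacity and conductivity
      # for a metal above ~20 K, where the lattice heat capacity dominates.
      C = np.array([2.5e6, 2.0e4])    # J m^-3 K^-1  (lattice, electronic)
      k = np.array([50.0, 5.0])       # W m^-1 K^-1  (lattice, electronic)

      k_noninteracting = (C * k).sum() * C.sum() / (C ** 2).sum()   # k ≅ Σ(C_i k_i) Σ(C_i) / Σ(C_i²)
      k_interacting = (C * k).sum() / C.sum()                       # k ≅ Σ(C_i k_i) / Σ(C_i)
      print(k_noninteracting, k_interacting)                        # both ≈ k_lat = 50 here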

  2. Painting models

    Science.gov (United States)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real world problems. The cases that are used as examples include forensic reconstructions, dredging optimization, and barrier design. The system can be fed using any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method used is suitable for different time and spatial scales. High resolution numerical models become interactive paintings by exchanging their velocity fields with a high resolution (>=1M cells) image based flow visualization that runs in an HTML5-compatible web browser. The image based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed in the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source software. By using different sources for a drawing one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
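
    The heart of such image-based flow visualization is repeatedly advecting an image with the model's velocity field. Below is a rough CPU analogue in NumPy of one backward (semi-Lagrangian) advection step; the nearest-neighbour sampling, boundary clipping, and toy flow field are simplifying assumptions, not the WebGL implementation described.

      import numpy as np

      def advect(image, u, v, dt=1.0):
          """One backward (semi-Lagrangian) advection step: sample the image upstream along (u, v)."""
          h, w = image.shape
          ys, xs = np.mgrid[0:h, 0:w]
          src_x = np.clip(np.rint(xs - dt * u), 0, w - 1).astype(int)   # where each pixel came from
          src_y = np.clip(np.rint(ys - dt * v), 0, h - 1).astype(int)
          return image[src_y, src_x]

      drawing = np.zeros((64, 64)); drawing[28:36, 28:36] = 1.0   # a square "drawing"
      u = np.full((64, 64), 2.0)                                  # toy uniform rightward flow
      v = np.zeros((64, 64))
      print(advect(drawing, u, v).sum())                          # the square simply shifts right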

  3. Entrepreneurship Models.

    Science.gov (United States)

    Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.

    This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…

  4. Lens Model

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabil...

  5. Eclipse models

    International Nuclear Information System (INIS)

    Michel, F.C.

    1989-01-01

    This paper addresses the question of, if one overlooks their idiosyncratic difficulties, what could be learned from the various models about the pulsar wind? The wind model requires an MHD wind from the pulsar, namely, one with enough particles that the Poynting flux of the wind can be thermalized. Otherwise, there is no shock and the pulsar wind simply reflects like a flashlight beam. Additionally, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model probably requires an EM wind, which is Poynting flux dominated. Reflection in this case would arguably minimize the intimate interaction between the two flows that leads to tail formation and thereby permit a weakly magnetized tail. The occulting disk model also would point to an EM wind so that the interaction would be pushed down onto the companion surface (to form the neutral fountain) and so as to also minimize direct interaction of the wind with the orbiting macroscopic particles

  6. (SSE) model

    African Journals Online (AJOL)

    Simple analytic polynomials have been proposed for estimating solar radiation in the traditional Northern, Central and Southern regions of Malawi. There is a strong agreement between the polynomials and the SSE model with R2 values of 0.988, 0.989 and 0.989 and root mean square errors of 0.061, 0.057 and 0.062 ...

  7. Successful modeling?

    Science.gov (United States)

    Lomnitz, Cinna

    Tichelaar and Ruff [1989] propose to “estimate model variance in complicated geophysical problems,” including the determination of focal depth in earthquakes, by means of unconventional statistical methods such as bootstrapping. They are successful insofar as they are able to duplicate the results from more conventional procedures.

  8. Defect modelling

    International Nuclear Information System (INIS)

    Norgett, M.J.

    1980-01-01

    Calculations, drawing principally on developments at AERE Harwell, of the relaxation about lattice defects are reviewed with emphasis on the techniques required for such calculations. The principles of defect modelling are outlined and various programs developed for defect simulations are discussed. Particular calculations for metals, ionic crystals and oxides, are considered. (UK)

  9. Cadastral Modeling

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders...

  10. The Model

    DEFF Research Database (Denmark)

    About the reconstruction of Palle Nielsen's (b. 1942) work The Model from 1968: a gigantic playground for children in the museum, where they can freely romp about, climb in ropes, crawl on wooden structures, work with tools, jump in foam rubber, paint with finger paints and dress up in costumes....

  11. Biotran model

    International Nuclear Information System (INIS)

    Wenzel, W.J.; Gallegos, A.F.; Rodgers, J.C.

    1985-01-01

    The BIOTRAN model was developed at Los Alamos to help predict short- and long-term consequences to man from releases of radionuclides into the environment. It is a dynamic model that simulates on a daily and yearly basis the flux of biomass, water, and radionuclides through terrestrial and aquatic ecosystems. Biomass, water, and radionuclides are driven within the ecosystems by climate variables stochastically generated by BIOTRAN each simulation day. The climate variables influence soil hydraulics, plant growth, evapotranspiration, and particle suspension and deposition. BIOTRAN has 22 different plant growth strategies for simulating various grasses, shrubs, trees, and crops. Ruminants and humans are also dynamically simulated by using the simulated crops and forage as intake for user-specified diets. BIOTRAN has been used at Los Alamos for long-term prediction of health effects to populations following potential accidental releases of uranium and plutonium. Newly developed subroutines are described: a human dynamic physiological and metabolic model; a soil hydrology and irrigation model; limnetic nutrient and radionuclide cycling in fresh-water lakes. 7 references

  12. Turbulence Model

    DEFF Research Database (Denmark)

    Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens

    2011-01-01

    term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence...

  13. Hydroballistics Modeling

    Science.gov (United States)

    1975-01-01

    ... fiducial marks should be constant and the edges sharply defined. A model surface that is neither hydrophobic nor hydrophilic is better for routine model testing. Before each launching in

  14. Molecular Modeling

    Indian Academy of Sciences (India)

    Molecular Modeling: A Powerful Tool for Drug Design and Molecular Docking. Rama Rao Nadendla. General Article, Resonance – Journal of Science Education, Volume 9, Issue 5, May 2004, pp. 51-60.

  15. Review: Bilirubin pKa studies; new models and theories indicate high pKa values in water, dimethylformamide and DMSO

    Directory of Open Access Journals (Sweden)

    Ostrow J

    2010-03-01

    Full Text Available Background: Correct aqueous pKa values of unconjugated bilirubin (UCB), a poorly-soluble, unstable substance, are essential for understanding its functions. Our prior solvent partition studies, of unlabeled and [14C] UCB, indicated pKa values above 8.0. These high values were attributed to effects of internal H-bonding in UCB. Many earlier and subsequent studies have reported lower pKa values, some even below 5.0, which are often used to describe the behavior of UCB. We here review 18 published studies that assessed aqueous pKa values of UCB, critically evaluating their methodologies in relation to essential preconditions for valid pKa measurements (short-duration experiments with purified UCB below saturation and accounting for self-association of UCB). Results: These re-assessments identified major deficiencies that invalidate the results of all but our partition studies. New theoretical modeling of UCB titrations shows remarkable, unexpected effects of self-association, yielding falsely low pKa estimates, and provides some rationalization of the titration anomalies. The titration behavior reported for a soluble thioether conjugate of UCB at high aqueous concentrations is shown to be highly anomalous. Theoretical re-interpretations of data in DMSO and dimethylformamide show that those indirectly-derived aqueous pKa values are unacceptable, and indicate new, high average pKa values for UCB in non-aqueous media (>11 in DMSO and, probably, >10 in dimethylformamide). Conclusions: No reliable aqueous pKa values of UCB are available for comparison with our partition-derived results. A companion paper shows that only the high pKa values can explain the pH-dependence of UCB binding to phospholipids, cyclodextrins, and alkyl-glycoside and bile salt micelles.
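
    Why the pKa matters for pH-dependent behaviour can be seen from the ordinary Henderson-Hasselbalch relation for a monoprotic acid (a textbook relation used here only for illustration; bilirubin itself is a diacid, and the pKa values below are placeholders spanning the low literature estimates and the high partition-derived values):

      def fraction_ionized(pH, pKa):
          """Fraction of a monoprotic acid in the ionized form at a given pH (Henderson-Hasselbalch)."""
          return 1.0 / (1.0 + 10.0 ** (pKa - pH))

      for pKa in (5.0, 8.1):    # a low literature-style estimate vs. a high partition-style value
          print(pKa, round(fraction_ionized(7.4, pKa), 3))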

  16. Criticality Model

    International Nuclear Information System (INIS)

    Alsaed, A.

    2004-01-01

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality
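
    The lower-bound tolerance limit (LBTL) described here is, in essence, a one-sided statistical tolerance bound computed from applicable benchmark experiments. As a hedged illustration only, the minimal normal-theory sketch below uses made-up benchmark k_eff values and a 95%/95% coverage/confidence choice; the report's actual data, distributional treatment and tolerance factors are not reproduced here.

```python
import numpy as np
from scipy import stats

def lbtl_95_95(keff_benchmarks):
    """One-sided 95%/95% lower tolerance limit for benchmark k_eff values.

    Assumes the benchmark results are normally distributed; the tolerance
    factor comes from the noncentral t distribution.
    """
    x = np.asarray(keff_benchmarks, dtype=float)
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)
    # Tolerance factor k for 95% coverage with 95% confidence
    z_p = stats.norm.ppf(0.95)
    k = stats.nct.ppf(0.95, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return mean - k * s

# Hypothetical benchmark results (illustrative values only)
benchmarks = [0.9975, 1.0012, 0.9968, 1.0001, 0.9990, 0.9983, 1.0005, 0.9971]
print(f"LBTL = {lbtl_95_95(benchmarks):.4f}")
```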

  17. Using Logistic Regression for Validating or Invalidating Initial Statewide Cut-Off Scores on Basic Skills Placement Tests at the Community College Level

    Science.gov (United States)

    Secolsky, Charles; Krishnan, Sathasivam; Judd, Thomas P.

    2013-01-01

    The community colleges in the state of New Jersey went through a process of establishing statewide cut-off scores for English and mathematics placement tests. The colleges wanted to communicate to secondary schools a consistent preparation that would be necessary for enrolling in Freshman Composition and College Algebra at the community college…

  18. Sub-cellular localisation studies may spuriously detect the Yes-associated protein, YAP, in nucleoli leading to potentially invalid conclusions of its function.

    Science.gov (United States)

    Finch, Megan L; Passman, Adam M; Strauss, Robyn P; Yeoh, George C; Callus, Bernard A

    2015-01-01

    The Yes-associated protein (YAP) is a potent transcriptional co-activator that functions as a nuclear effector of the Hippo signaling pathway. YAP is oncogenic and its activity is linked to its cellular abundance and nuclear localisation. Activation of the Hippo pathway restricts YAP nuclear entry via its phosphorylation by Lats kinases and consequent cytoplasmic retention bound to 14-3-3 proteins. We examined YAP expression in liver progenitor cells (LPCs) and surprisingly found that transformed LPCs did not show an increase in YAP abundance compared to the non-transformed LPCs from which they were derived. We then sought to ascertain whether nuclear YAP was more abundant in transformed LPCs. We used an antibody that we confirmed was specific for YAP by immunoblotting to determine YAP's sub-cellular localisation by immunofluorescence. This antibody showed diffuse staining for YAP within the cytosol and nuclei, but, noticeably, it showed intense staining of the nucleoli of LPCs. This staining was non-specific, as shRNA treatment of cells abolished YAP expression to undetectable levels by Western blot yet the nucleolar staining remained. Similar spurious YAP nucleolar staining was also seen in mouse embryonic fibroblasts and mouse liver tissue, indicating that this antibody is unsuitable for immunological applications to determine YAP sub-cellular localisation in mouse cells or tissues. Interestingly nucleolar staining was not evident in D645 cells suggesting the antibody may be suitable for use in human cells. Given the large body of published work on YAP in recent years, many of which utilise this antibody, this study raises concerns regarding its use for determining sub-cellular localisation. From a broader perspective, it serves as a timely reminder of the need to perform appropriate controls to ensure the validity of published data.

  19. The detection of content-based invalid responding: a meta-analysis of the MMPI-2-Restructured Form's (MMPI-2-RF) over-reporting validity scales.

    Science.gov (United States)

    Ingram, Paul B; Ternes, Michael S

    2016-05-01

    This study synthesized research evaluating the effectiveness of the over-reporting validity scales of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) for detecting intentionally feigned over-endorsement of symptoms using a moderated meta-analysis. After identifying experimental and quasi-experimental studies for inclusion (k = 25) in which the validity scales of the MMPI-2-RF were compared between groups of respondents, moderated meta-analyses were conducted for each of its five over-reporting scales. These meta-analyses explored the general effectiveness of each scale across studies, as well as the impact that several moderators had on scale performance, including comparison group, study type (i.e. real versus simulation), age, education, sex, and diagnosis. The over-reporting scales of the MMPI-2-RF act as effective general measures for the detection of malingering and over-endorsement of symptoms, with individual scales ranging in effectiveness from an effect size of 1.08 (Symptom Validity; FBS-r) to 1.43 (Infrequent Pathology; Fp-r), each with different patterns of moderating influence. The MMPI-2-RF validity scales effectively discriminate between groups of respondents presenting in either an honest manner or with patterned exaggeration and over-endorsement of symptoms. The magnitude of difference observed between honest and malingering groups was substantially narrower than might be expected using traditional cut-scores for the validity scales, making interpretation within the evaluation context particularly important. While all over-reporting scales are effective, the FBS-r and RBS scales are those least influenced by common and context-specific moderating influences, such as respondent or comparison grouping.

  20. THE UNEMPLOYMENT PENSION-AGE AND INVALIDITY OF THE LAW OF THE IMSS, A PRACTICAL THEORETICAL ANALYSIS IN WORKERS OF SMES

    Directory of Open Access Journals (Sweden)

    Manuel Ildefonso Ruiz-Medina

    2016-01-01

    Full Text Available The present study analyzes the impact of the base contribution salary on the amount of the old-age severance and invalidity pensions covered by the current Social Security Law in each case; it also examines the causes of workers' lack of knowledge of Social Security benefits. A mixed methodological approach was used, supported by the qualitative case-study tradition, which aims at particularization rather than generalization; this made it possible to link the data obtained with theory and to describe, analyze and explain the results found for the object of study. The results emerged from a survey of 22 items with closed questions structured on a Likert scale, answered by 40 workers at two companies classified as SMEs in the city of Culiacan, Sinaloa, Mexico, during March 2014. The analysis of the collected data shows a severe deterioration of pensions due to low wages, a lack of jobs, and declining resources under the new pension system, as well as an almost total ignorance among workers of the benefits the law provides, owing to a lack of dissemination by the IMSS and a lack of training by the enterprises.

  1. Dose Response Model of Biological Reaction to Low Dose Rate Gamma Radiation

    International Nuclear Information System (INIS)

    Magae, J.; Furikawa, C.; Hoshi, Y.; Kawakami, Y.; Ogata, H.

    2004-01-01

    It is necessary to use reproducible and stable indicators to evaluate biological responses to long-term irradiation at low dose rate. They should be simple and quantitative enough to produce statistically accurate results, because we have to analyze subtle changes in biological responses around the background level at low dose. For these purposes we chose micronucleus formation in U2OS, a human osteosarcoma cell line, as the indicator of biological response. Cells were exposed to gamma rays in an irradiation room bearing 50,000 Ci of 60Co. After irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, and cytoplasm and nucleus were stained with DAPI and propidium iodide, respectively. The number of binuclear cells bearing micronuclei was counted under a fluorescence microscope. Dose rate in the irradiation room was measured with PLD. The dose response of PLD is linear between 1 mGy and 10 Gy, and the standard deviation of triplicate counts was several percent of the mean value. We statistically fitted dose-response curves to the data, and they were plotted on coordinates of linearly scaled response versus dose. The results followed a straight line passing through the origin of the coordinate axes between 0.1-5 Gy, and the dose and dose-rate effectiveness factor (DDREF) was less than 2 when cells were irradiated for 1-10 min. The difference in the percentage of binuclear cells bearing micronuclei between irradiated cells and control cells was not statistically significant at doses above 0.1 Gy when 5,000 binuclear cells were analyzed. In contrast, dose-response curves never followed LNT when cells were irradiated for 7 to 124 days. The difference in the percentage of binuclear cells bearing micronuclei between irradiated cells and control cells was not statistically significant at doses below 6 Gy when cells were continuously irradiated for 124 days. These results suggest that the dose-response curve of biological reaction is remarkably affected by exposure
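
    As a hedged illustration of the zero-intercept fitting described above, the sketch below fits a straight line through the origin to made-up micronucleus data for acute and protracted exposures and forms a DDREF-like slope ratio; the numbers are invented and do not come from the study.

```python
import numpy as np

def fit_through_origin(dose_gy, response):
    """Least-squares slope of a zero-intercept line, response = b * dose."""
    dose = np.asarray(dose_gy, dtype=float)
    resp = np.asarray(response, dtype=float)
    return float(dose @ resp / (dose @ dose))

# Hypothetical micronucleus frequencies (percent binuclear cells with
# micronuclei, background subtracted) -- illustrative numbers only.
dose = np.array([0.1, 0.5, 1.0, 2.0, 5.0])          # Gy
acute = np.array([0.4, 2.1, 4.0, 8.3, 19.8])        # short (1-10 min) exposure
chronic = np.array([0.1, 0.6, 1.3, 2.4, 6.1])       # protracted exposure

b_acute = fit_through_origin(dose, acute)
b_chronic = fit_through_origin(dose, chronic)
print(f"acute slope   = {b_acute:.2f} %/Gy")
print(f"chronic slope = {b_chronic:.2f} %/Gy")
print(f"slope ratio (DDREF-like) = {b_acute / b_chronic:.2f}")
```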

  2. Building Models and Building Modelling

    DEFF Research Database (Denmark)

    Jørgensen, Kaj; Skauge, Jørn

    2008-01-01

    The report's introductory chapter describes the primary concepts relating to building models and sets out some fundamental conditions for computer-based modelling. In addition, the difference between drawing programs and building-modelling programs is described. Important aspects of comp...

  3. Persistent Modelling

    DEFF Research Database (Denmark)

    2012-01-01

    The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns – an evident....... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience....... on this subject, this book makes essential reading for anyone considering new ways of thinking about architecture. In drawing upon both historical and contemporary perspectives this book provides evidence of the ways in which relations between representation and the represented continue to be reconsidered...

  4. Persistent Modelling

    DEFF Research Database (Denmark)

    The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns – an evident....... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience....... on this subject, this book makes essential reading for anyone considering new ways of thinking about architecture. In drawing upon both historical and contemporary perspectives this book provides evidence of the ways in which relations between representation and the represented continue to be reconsidered...

  5. Acyclic models

    CERN Document Server

    Barr, Michael

    2002-01-01

    Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.

  6. Molecular Modelling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-12-01

    Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. Recent advances in computing power and the exponential growth of knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase potency. This article provides a detailed description of the techniques employed in molecular modelling. Molecular modelling is a rapidly developing discipline, and has been supported by dramatic improvements in computer hardware and software in recent years.

  7. RNICE Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin; Stritch, Justin Michael

    2018-01-01

    Replication studies relate to the scientific principle of replicability and serve the significant purpose of providing supporting (or contradicting) evidence regarding the existence of a phenomenon. However, replication has never been an integral part of public administration and management...... research. Recently, scholars have issued calls for more replication, but academic reflections on when replication adds substantive value to public administration and management research are needed. This concise article presents a conceptual model, RNICE, for assessing when and how a replication study...... contributes knowledge about a social phenomenon and advances knowledge in the public administration and management literatures. The RNICE model provides a vehicle for researchers who seek to evaluate or demonstrate the value of a replication study systematically. We illustrate the practical application...

  8. Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi

    2016-01-01

    Recent advancements in set theory and readily available software have enabled social science researchers to bridge the variable-centered quantitative and case-based qualitative methodological paradigms in order to analyze multi-dimensional associations beyond the linearity assumptions, aggregate...... effects, unicausal reduction, and case specificity. Based on the developments in set theoretical thinking in social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper, we propose...... and demonstrate a new approach to maturity models in the domain of Information Systems. This position paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work....

  9. Modelling Defiguration

    DEFF Research Database (Denmark)

    Bork Petersen, Franziska

    2013-01-01

    advantageous manner. Stepping on the catwalk’s sloping, moving surfaces decelerates the models’ walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression that they perform on most contemporary......For the presentation of his autumn/winter 2012 collection in Paris and subsequently in Copenhagen, Danish designer Henrik Vibskov installed a mobile catwalk. The article investigates the choreographic impact of this scenography on those who move through it. Drawing on Dance Studies, the analytical...... focus centres on how the catwalk scenography evokes a ‘defiguration’ of the walking models and to what effect. Vibskov’s mobile catwalk draws attention to the walk, which is a key element of models’ performance but which usually functions in fashion shows merely to present clothes in the most...

  10. Cheating models

    DEFF Research Database (Denmark)

    Arnoldi, Jakob

    The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics among other things claim can lead to market manipulation. Drawing....... The article analyses these challenges and argues that we witness a new post-social form of human-technology interaction that will lead to a reconfiguration of professional codes for financial trading....

  11. Biomimetic modelling.

    OpenAIRE

    Vincent, Julian F V

    2003-01-01

    Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more compl...

  12. Ozone modeling

    Energy Technology Data Exchange (ETDEWEB)

    McIllvaine, C M

    1994-07-01

    Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO{sub 2}), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically, nitrogen oxides (NO{sub x}) and non-methane organic compounds (NMOC) in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane). Additionally, ozone formation is a function of the ratio of NMOC concentrations to NO{sub x} concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the ratio of NMOC and NO{sub x} coordinates of the point, known as the NMOC/NO{sub x} ratio. Results obtained by the described model are presented.

  13. Ozone modeling

    International Nuclear Information System (INIS)

    McIllvaine, C.M.

    1994-01-01

    Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO 2 ), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically, nitrogen oxides (NO x ) and non-methane organic compounds (NMOC) in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane). Additionally, ozone formation is a function of the ratio of NMOC concentrations to NO x concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the ratio of NMOC and NO x coordinates of the point, known as the NMOC/NO x ratio. Results obtained by the described model are presented

  14. Animal models.

    Science.gov (United States)

    Walker, Ellen A

    2010-01-01

    As clinical studies reveal that chemotherapeutic agents may impair several different cognitive domains in humans, the development of preclinical animal models is critical to assess the degree of chemotherapy-induced learning and memory deficits and to understand the underlying neural mechanisms. In this chapter, the effects of various cancer chemotherapeutic agents in rodents on sensory processing, conditioned taste aversion, conditioned emotional response, passive avoidance, spatial learning, cued memory, discrimination learning, delayed-matching-to-sample, novel-object recognition, electrophysiological recordings and autoshaping are reviewed. It appears at first glance that the effects of the cancer chemotherapy agents in these many different models are inconsistent. However, a literature is emerging that reveals subtle or unique changes in sensory processing, acquisition, consolidation and retrieval that are dose- and time-dependent. As more studies examine cancer chemotherapeutic agents alone and in combination during repeated treatment regimens, the animal models will become more predictive tools for the assessment of these impairments and the underlying neural mechanisms. The eventual goal is to collect enough data to enable physicians to make informed choices about therapeutic regimens for their patients and discover new avenues of alternative or complementary therapies that reduce or eliminate chemotherapy-induced cognitive deficits.

  15. Modeling biomembranes.

    Energy Technology Data Exchange (ETDEWEB)

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.

  16. A critical review of anaesthetised animal models and alternatives for military research, testing and training, with a focus on blast damage, haemorrhage and resuscitation.

    Science.gov (United States)

    Combes, Robert D

    2013-11-01

    Military research, testing, and surgical and resuscitation training, are aimed at mitigating the consequences of warfare and terrorism to armed forces and civilians. Traumatisation and tissue damage due to explosions, and acute loss of blood due to haemorrhage, remain crucial, potentially preventable, causes of battlefield casualties and mortalities. There is also the additional threat from inhalation of chemical and aerosolised biological weapons. The use of anaesthetised animal models, and their respective replacement alternatives, for military purposes -- particularly for blast injury, haemorrhaging and resuscitation training -- is critically reviewed. Scientific problems with the animal models include the use of crude, uncontrolled and non-standardised methods for traumatisation, an inability to model all key trauma mechanisms, and complex modulating effects of general anaesthesia on target organ physiology. Such effects depend on the anaesthetic and influence the cardiovascular system, respiration, breathing, cerebral haemodynamics, neuroprotection, and the integrity of the blood-brain barrier. Some anaesthetics also bind to the NMDA brain receptor with possible differential consequences in control and anaesthetised animals. There is also some evidence for gender-specific effects. Despite the fact that these issues are widely known, there is little published information on their potential, at best, to complicate data interpretation and, at worst, to invalidate animal models. There is also a paucity of detail on the anaesthesiology used in studies, and this can hinder correct data evaluation. Welfare issues relate mainly to the possibility of acute pain as a side-effect of traumatisation in recovered animals. Moreover, there is the increased potential for animals to suffer when anaesthesia is temporary, and the procedures invasive. These dilemmas can be addressed, however, as a diverse range of replacement approaches exist, including computer and mathematical

  17. Model visionary

    Energy Technology Data Exchange (ETDEWEB)

    Chandler, Graham

    2011-03-15

    Ken Dedeluk is the president and CEO of Computer Modeling Group (CMG). Dedeluk started his career with Gulf Oil in 1972, worked in computer-assisted design; then joined Imperial Esso and Shell, where he became VP of international operations; and finally joined CMG in 1998. CMG made a decision that turned out to be the company's turning point: it decided to provide intensive support and service to help customers make better use of its technology. Thanks to this service, customer satisfaction grew, as did revenues.

  18. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  19. Intrusion-Related Gold Deposits: New insights from gravity and hydrothermal integrated 3D modeling applied to the Tighza gold mineralization (Central Morocco)

    Science.gov (United States)

    Eldursi, Khalifa; Branquet, Yannick; Guillou-Frottier, Laurent; Martelet, Guillaume; Calcagno, Philippe

    2018-04-01

    The Tighza (or Jebel Aouam) district is one of the most important polymetallic districts in Morocco. It belongs to the Variscan Belt of Central Meseta, and includes W-Au, Pb-Zn-Ag, and Sb-Ba mineralization types that are spatially related to late-Carboniferous granitic stocks. One of the proposed hypotheses suggests that these granitic stocks are connected to a large intrusive body lying beneath them and that W-Au mineralization is directly related to this magmatism during a 287-285 Ma time span. A more recent model argues for a disconnection between the older barren outcropping magmatic stocks and a younger hidden magmatic complex responsible for the W-Au mineralization. Independently of the magmatic scenario, the W-Au mineralization is consensually recognized as of intrusion-related gold deposit (IRGD) type, W-rich. In addition to discrepancies between magmatic scenarios, the IRGD model does not account for a published older age corresponding to a high-temperature hydrothermal event at ca. 291 Ma. Our study is based on gravity data inversion and hydrothermal modeling, and aims to test this IRGD model and its related magmatic geometries, with respect to subsurface geometries, favorable physical conditions for deposition and the time record of hydrothermal processes. Combined inversion of geology and gravity data suggests that an intrusive body is rooted mainly at the Tighza fault in the north and that it spreads horizontally toward the south during a trans-tensional event (D2). Based on the numerical results, two types of mineralization can be distinguished: 1) the "Pre-Main" type appears during the emplacement of the magmatic body, and 2) the "Main" type appears during magma crystallization and the cooling phase. The time-lag between the two mineralization types depends on the cooling rate of magma. Although our numerical model of thermally-driven fluid flow around the Tighza pluton is simplified, as it does not take into account the chemical and deformation

  20. Flow regime transition and heat transfer model at low mass flux condition in a post-dryout region

    International Nuclear Information System (INIS)

    Jeong, Hae Yong

    1996-02-01

    The post-dryout flow regime transition criterion from inverted annular flow (IAF) to agitated inverted annular flow (AIAF) is suggested based on the hyperbolicity-breaking concept. Hyperbolicity breaking represents a bifurcation point where a sudden flow transition occurs. The hyperbolicity-breaking concept is applied to describe the flow regime transition from IAF to AIAF by the growth of disturbances on the liquid core surface. The resultant correlation has a form similar to Takenaka's empirical one. To validate the proposed model, it is applied to predict Takenaka's experimental results using R-113 refrigerant with four different tube diameters of 3, 5, 7 and 10 mm. The proposed model gives accurate predictions for the tube diameters of 7 and 10 mm. As the tube diameter decreases, the differences between the predictions and the experimental results slightly increase. The flow regime transition from AIAF to dispersed flow (DF) is described by the drift flux model. It is shown that the transition criterion can be well predicted if the droplet sizes in dispersed flow are evaluated appropriately. Existing mechanistic post-dryout models result in fairly good predictions when the mass flux is high or when film dryout occurs. However, the predictions by these models become poor at low mass flux, at which the flow regime before dryout is believed to be churn-turbulent. This is because the constitutive relations and/or the imposed assumptions used in the models become erroneous at low mass flux. The droplet size predicted by the correlation used in the model becomes unrealistically large. In addition, the single-phase vapor heat transfer correlation becomes invalid at low mass flux conditions. To develop a mechanistic post-dryout model that is applicable at low mass flux conditions, the entrainment mechanisms and the entrained droplet sizes in relation to the flow regimes are investigated. Through the analysis of many experimental post-dryout data, it is shown that

  1. ALEPH model

    CERN Multimedia

    1989-01-01

    A wooden model of the ALEPH experiment and its cavern. ALEPH was one of 4 experiments at CERN's 27km Large Electron Positron collider (LEP) that ran from 1989 to 2000. During 11 years of research, LEP's experiments provided a detailed study of the electroweak interaction. Measurements performed at LEP also proved that there are three – and only three – generations of particles of matter. LEP was closed down on 2 November 2000 to make way for the construction of the Large Hadron Collider in the same tunnel. The cavern and detector are in separate locations - the cavern is stored at CERN and the detector is temporarily on display in Glasgow physics department. Both are available for loan.

  2. modelling distances

    Directory of Open Access Journals (Sweden)

    Robert F. Love

    2001-01-01

    Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria that are most often employed in practice. In the literature some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
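
    The abstract describes a family of distance predicting functions (the ℓkpθ-norm) fitted under the AD, SD or NAD goodness-of-fit criteria. As a hedged illustration only, the sketch below implements one common reading of this setup: a scaled lp-norm applied to rotated planar coordinates, scored with the three criteria. The parameter values, point pairs and observed distances are made up, and the paper's exact functional form may differ.

```python
import numpy as np

def ell_kp_theta(p1, p2, k=1.2, p=1.7, theta=0.0):
    """Scaled lp-norm distance after rotating coordinates by theta (radians).
    One common parameterisation of the weighted-lp family of distance
    predicting functions; the exact form used in the paper may differ."""
    d = np.asarray(p1, float) - np.asarray(p2, float)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return k * np.linalg.norm(rot @ d, ord=p)

def criteria(predicted, actual):
    """AD, SD and NAD goodness-of-fit criteria for distance predictions."""
    predicted = np.asarray(predicted, float)
    actual = np.asarray(actual, float)
    err = predicted - actual
    return {"AD": np.abs(err).sum(),
            "SD": (err ** 2).sum(),
            "NAD": (np.abs(err) / actual).sum()}

# Hypothetical point pairs and observed road distances (illustrative only)
pairs = [((0, 0), (3, 4)), ((1, 2), (6, 2)), ((0, 0), (2, 7))]
actual = [6.1, 5.6, 8.9]
pred = [ell_kp_theta(a, b) for a, b in pairs]
print(criteria(pred, actual))
```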

  3. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking Inorganic Chemistry and working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  4. Comparison: Binomial model and Black Scholes model

    Directory of Open Access Journals (Sweden)

    Amir Ahmad Dar

    2018-03-01

    Full Text Available The Binomial model and the Black-Scholes model are popular methods used to solve option pricing problems. The Binomial model is a simple statistical method, whereas the Black-Scholes model requires the solution of a stochastic differential equation. Pricing European call and put options is a central and difficult task for actuaries. The main goal of this study is to compare the Binomial model and the Black-Scholes model using two statistical tests - the t-test and Tukey's test - at one period. Finally, the results showed that there is no significant difference between the means of the European option prices obtained with the two models.
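
    Since the abstract compares two standard option-pricing methods, a compact sketch may help make the comparison concrete. This is not the authors' implementation; it is a generic Cox-Ross-Rubinstein binomial tree alongside the Black-Scholes closed form for a European call, with illustrative parameter values.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def binomial_call(S, K, T, r, sigma, n=500):
    """Cox-Ross-Rubinstein binomial-tree price of a European call (n steps)."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))      # up factor
    d = 1.0 / u                          # down factor
    q = (np.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = np.exp(-r * dt)
    # stock prices and option payoffs at maturity, from most ups to most downs
    prices = S * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    values = np.maximum(prices - K, 0.0)
    # backward induction through the tree
    for _ in range(n):
        values = disc * (q * values[:-1] + (1 - q) * values[1:])
    return float(values[0])

# Illustrative parameters (not taken from the paper)
S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
print(f"Black-Scholes: {black_scholes_call(S, K, T, r, sigma):.4f}")  # ~10.45
print(f"Binomial:      {binomial_call(S, K, T, r, sigma):.4f}")       # converges to the same value
```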

  5. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    ... cell walls and are the source of biofuels and biomaterials. Our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers. Computational Modeling: NREL uses computational modeling to increase the ...

  6. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  7. Commentary on inhaled {sup 239}PuO{sub 2} in dogs - a prophylaxis against lung cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M., E-mail: jerrycuttler@rogers.com [Cuttler and Associates, Vaughan, ON (Canada); Feinendegen, L. [Brookhaven National Laboratories, Upton, NY (United States)

    2015-07-01

    Several studies on the effect of inhaled plutonium-dioxide particulates and the incidence of lung tumors in dogs reveal beneficial effects when the cumulative alpha-radiation dose is low. There is a threshold at an exposure level of about 100 cGy for excess tumor incidence and reduced lifespan. The observations conform to the expectations of the radiation hormesis dose-response model and contradict the predictions of the Linear No-Threshold (LNT) hypothesis. These studies suggest investigating the possibility of employing low-dose alpha-radiation, such as from {sup 239}PuO{sub 2} inhalation, as a prophylaxis against lung cancer. (author)

  8. Commentary on inhaled {sup 239}PuO{sub 2} in dogs - a prophylaxis against lung cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler and Assoc., Vaughan, Ontario (Canada); Feinendegen, L. [Brookhaven National Laboratories, Upton, New York (United States)

    2015-06-15

    Several studies on the effect of inhaled plutonium-dioxide particulates and the incidence of lung tumors in dogs reveal beneficial effects when the cumulative alpha-radiation dose is low. There is a threshold at an exposure level of about 100 cGy for excess tumor incidence and reduced lifespan. The observations conform to the expectations of the radiation hormesis dose-response model and contradict the predictions of the Linear No-Threshold (LNT) hypothesis. These studies suggest investigating the possibility of employing low-dose alpha-radiation, such as from {sup 239}PuO {sub 2} inhalation, as a prophylaxis against lung cancer. (author)

  9. Water in the Gas Phase.

    Science.gov (United States)

    1998-06-01

    Only fragments of the proceedings contents survive in this record, including « THE TCPE MANY-BODY MODEL FOR WATER » by M. Masella and J-P. Flament (Laboratoire de Physique Moleculaire et Applications, CNRS, Universite Pierre et Marie Curie, Paris, France) and contributions by A.D. Bykov, N.N. Lavrentieva, V.N. Saveliev and L.N. Sinitsa.

  10. Some environmental challenges which the uranium production industry faces in the 21st century

    International Nuclear Information System (INIS)

    Zhang Lisheng

    2004-01-01

    Some of the environmental challenges which the uranium production industry faces in the 21st century have been discussed in the paper. They are: the use of the linear non-threshold (LNT) model for radiation protection, the concept of 'controllable dose' as an alternative to the current International Commission on Radiological Protection (ICRP) system of dose limitation, the future of collective dose and the ALARA (As low As Reasonably Achievable) principle and the application of a risk-based framework for managing hazards. The author proposes that, the risk assessment/risk management framework could be used for managing the environmental, safety and decommissioning issues associated with the uranium fuel cycle. (author)

  11. A more flexible lipoprotein sorting pathway.

    Science.gov (United States)

    Chahales, Peter; Thanassi, David G

    2015-05-01

    Lipoprotein biogenesis in Gram-negative bacteria occurs by a conserved pathway, each step of which is considered essential. In contrast to this model, LoVullo and colleagues demonstrate that the N-acyl transferase Lnt is not required in Francisella tularensis or Neisseria gonorrhoeae. This suggests the existence of a more flexible lipoprotein pathway, likely due to a modified Lol transporter complex, and raises the possibility that pathogens may regulate lipoprotein processing to modulate interactions with the host. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  12. Hormesis: Fact or fiction?

    International Nuclear Information System (INIS)

    Holzman, D.

    1995-01-01

    Bernard Cohen had not intended to foment revolution. To be sure, he had hoped that the linear, no-threshold (LNT) model of ionizing radiation's effects on humans would prove to be an exaggeration of reality at the low levels of radiation that one can measure in humans throughout the United States. His surprising conclusion, however, was that within the low dose ranges of radiation one receives in the home, the higher the dose, the less chance one had of contracting lung cancer. 1 fig., 1 tab

  13. Similarity of the leading contributions to the self-energy and the thermodynamics in two- and three-dimensional Fermi Liquids

    International Nuclear Information System (INIS)

    Coffey, D.; Bedell, K.S.

    1993-01-01

    We compare the self-energy and entropy of two- and three-dimensional Fermi liquids (FLs) using a model with a contact interaction between fermions. For a two-dimensional (2D) FL we find that there are T² contributions to the entropy from interactions separate from those due to the collective modes. These T² contributions arise from nonanalytic corrections to the real part of the self-energy and are analogous to the T³ ln T contributions present in the entropy of a three-dimensional (3D) FL. The difference between the 2D and 3D results arises solely from the different phase space factors
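
    For readers unfamiliar with the notation, the leading low-temperature entropy corrections mentioned in the abstract can be written schematically as follows; the coefficients are interaction-dependent and are not taken from the paper.

```latex
% Schematic low-temperature entropy expansions for Fermi liquids
\begin{align}
  S_{\mathrm{3D}}(T) &\simeq \gamma_{\mathrm{3D}}\, T + a_{\mathrm{3D}}\, T^{3}\ln T + \dots \\
  S_{\mathrm{2D}}(T) &\simeq \gamma_{\mathrm{2D}}\, T + a_{\mathrm{2D}}\, T^{2} + \dots
\end{align}
```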

  14. Nuclear disaster in Fukushima. Based on the WHO data between 22,000 and 66,000 carcinoma deaths are expected in Japan; Atomkatastrophe in Fukushima. Auf der Grundlage der WHO-Daten sind in Japan zwischen 22.000 und 66.000 Krebserkrankungen zu erwarten

    Energy Technology Data Exchange (ETDEWEB)

    Paulitz, Henrik; Eisenberg, Winfrid; Thiel, Reinhold

    2013-03-14

    The authors show that, based on the data and assumptions of the WHO, about 22,000 cancer deaths are to be expected in Japan as a consequence of the nuclear disaster in Fukushima in March 2011. The following inputs are used: the radiation exposure of the Japanese public in the first year after the nuclear catastrophe, the linear no-threshold (LNT) model, and the mortality risk factor (EAR, excess absolute risk). When the factor used to determine the lifetime dose is based on the experience of Chernobyl (UNSCEAR calculations) and the most recent scientific research, the number of expected cancer cases rises to 66,000.
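
    The arithmetic behind an LNT projection of this kind is simply collective dose multiplied by a risk coefficient. The sketch below uses placeholder numbers chosen for illustration, not the WHO or IPPNW inputs behind the 22,000-66,000 range.

```python
# Minimal sketch of an LNT-style projection: expected excess cancer deaths
# = collective dose (person-Sv) x lifetime mortality risk coefficient.
# All numbers below are placeholders, not the report's actual inputs.

population = 1.0e8             # exposed population (persons), assumed
mean_lifetime_dose_sv = 0.004  # assumed mean lifetime dose per person (Sv)
risk_per_person_sv = 0.05      # assumed excess deaths per person-Sv (LNT slope)

collective_dose = population * mean_lifetime_dose_sv   # person-Sv
excess_deaths = collective_dose * risk_per_person_sv
print(f"collective dose        = {collective_dose:,.0f} person-Sv")
print(f"projected excess deaths = {excess_deaths:,.0f}")
```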

  15. Vector models and generalized SYK models

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)

    2017-05-23

    We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  16. Neurite density imaging versus imaging of microscopic anisotropy in diffusion MRI: A model comparison using spherical tensor encoding.

    Science.gov (United States)

    Lampinen, Björn; Szczepankiewicz, Filip; Mårtensson, Johan; van Westen, Danielle; Sundgren, Pia C; Nilsson, Markus

    2017-02-15

    In diffusion MRI (dMRI), microscopic diffusion anisotropy can be obscured by orientation dispersion. Separation of these properties is of high importance, since it could allow dMRI to non-invasively probe elongated structures such as neurites (axons and dendrites). However, conventional dMRI, based on single diffusion encoding (SDE), entangles microscopic anisotropy and orientation dispersion with intra-voxel variance in isotropic diffusivity. SDE-based methods for estimating microscopic anisotropy, such as the neurite orientation dispersion and density imaging (NODDI) method, must thus rely on model assumptions to disentangle these features. An alternative approach is to directly quantify microscopic anisotropy by the use of variable shape of the b-tensor. Along those lines, we here present the 'constrained diffusional variance decomposition' (CODIVIDE) method, which jointly analyzes data acquired with diffusion encoding applied in a single direction at a time (linear tensor encoding, LTE) and in all directions (spherical tensor encoding, STE). We then contrast the two approaches by comparing neurite density estimated using NODDI with microscopic anisotropy estimated using CODIVIDE. Data were acquired in healthy volunteers and in glioma patients. NODDI and CODIVIDE differed the most in gray matter and in gliomas, where NODDI detected a neurite fraction higher than expected from the level of microscopic diffusion anisotropy found with CODIVIDE. The discrepancies could be explained by the NODDI tortuosity assumption, which enforces a connection between the neurite density and the mean diffusivity of tissue. Our results suggest that this assumption is invalid, which leads to a NODDI neurite density that is inconsistent between LTE and STE data. Using simulations, we demonstrate that the NODDI assumptions result in parameter bias that precludes the use of NODDI to map neurite density. With CODIVIDE, we found high levels of microscopic anisotropy in white matter
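
    To make the criticized coupling concrete, the sketch below evaluates the tortuosity constraint commonly attributed to NODDI-style models, in which the extra-neurite perpendicular diffusivity is tied to the neurite fraction, so that tissue mean diffusivity and neurite density cannot vary independently. The diffusivity values and compartment simplifications are assumptions for illustration, not the paper's fitted parameters.

```python
import numpy as np

def mean_diffusivity_tortuosity(v_ic, d_par=1.7e-3, v_iso=0.0, d_iso=3.0e-3):
    """Tissue mean diffusivity implied by a NODDI-like tortuosity constraint.

    Assumes sticks (intra-neurite) with diffusivity d_par along the neurite
    and zero across it, and an extra-neurite compartment whose perpendicular
    diffusivity is tied to the neurite fraction via d_perp = d_par * (1 - v_ic).
    Units: mm^2/s. A simplified sketch, not the full NODDI signal model.
    """
    d_perp_ec = d_par * (1.0 - v_ic)                 # tortuosity constraint
    md_ic = d_par / 3.0                              # stick-compartment MD
    md_ec = (d_par + 2.0 * d_perp_ec) / 3.0          # extra-neurite MD
    md_tissue = v_ic * md_ic + (1.0 - v_ic) * md_ec
    return (1.0 - v_iso) * md_tissue + v_iso * d_iso

for v in (0.2, 0.4, 0.6, 0.8):
    print(f"v_ic = {v:.1f} -> MD = {mean_diffusivity_tortuosity(v):.2e} mm^2/s")
```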

  17. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  18. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, giving an updated description of IMACLIM, a model that supports the design of a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)

  19. From Product Models to Product State Models

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...

  20. Exogenous calcium alleviates low night temperature stress on the photosynthetic apparatus of tomato leaves.

    Directory of Open Access Journals (Sweden)

    Guoxian Zhang

    Full Text Available The effect of exogenous CaCl2 on photosystem I and II (PSI and PSII) activities, cyclic electron flow (CEF), and proton motive force of tomato leaves under low night temperature (LNT) was investigated. LNT stress decreased the net photosynthetic rate (Pn), effective quantum yield of PSII [Y(II)], and photochemical quenching (qP), whereas CaCl2 pretreatment improved Pn, Y(II), and qP under LNT stress. LNT stress significantly increased the non-regulatory quantum yield of energy dissipation [Y(NO)], whereas CaCl2 alleviated this increase. Exogenous Ca2+ enhanced the stimulation of CEF by LNT stress. Inhibition of oxidized PQ pools caused by LNT stress was alleviated by CaCl2 pretreatment. LNT stress reduced zeaxanthin formation and ATPase activity, but CaCl2 pretreatment reversed both of these effects. LNT stress caused excess formation of a proton gradient across the thylakoid membrane, whereas CaCl2 pretreatment decreased this gradient under LNT. Thus, our results showed that photoinhibition of LNT-stressed plants could be alleviated by CaCl2 pretreatment. Our findings further revealed that this alleviation was mediated in part by improvements in carbon fixation capacity, PQ pools, linear and cyclic electron transport, xanthophyll cycles, and ATPase activity.

  1. Modelling live forensic acquisition

    CSIR Research Space (South Africa)

    Grobler, MM

    2009-06-01

    Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  2. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    Science.gov (United States)

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from the database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to the word frequency and neighborhood density in the database based on the theory of Neighborhood Activation Model (NAM). Ninety-six NH children (age ranged between 4.0 and 7.0 years) were divided into three different age groups of 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. The inter-list performance was found to be equivalent and inter-rater reliability was high with 92.5-95% consistency. Results of word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words. "Easy" words scored higher than "hard" words. The word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency appeared to have increasingly more contributions to Chinese word recognition. The results of the present study indicated that performances of Chinese word recognition were influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of NAM in the Chinese language. The development of Standard-Chinese version of LNT and the establishment of a database of children of 4-6 years old can provide a reliable means for spoken-word recognition test in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  3. Linear versus non-linear: a perspective from health physics and radiobiology

    International Nuclear Information System (INIS)

    Gentner, N.E.; Osborne, R.V.

    1998-01-01

    There is a vigorous debate about whether or not there may be a 'threshold' for radiation-induced adverse health effects. A linear-no threshold (LNT) model allows radiation protection practitioners to manage putative risk consistently, because different types of exposure, exposures at different times, and exposures to different organs may be summed. If we are to argue to regulators and the public that low doses are less dangerous than we presently assume, it is incumbent on us to prove this. The question is, therefore, whether any consonant body of evidence exists that the risk of low doses has been over-estimated. From the perspectives of both health physics and radiobiology, we conclude that the evidence for linearity at high doses (and arguably of fairly small total doses if delivered at high dose rate) is strong. For low doses (or in fact, even for fairly high doses) delivered at low dose rate, the evidence is much less compelling. Since statistical limitations at low doses are almost always going to prevent a definitive answer, one way or the other, from human data, we need a way out of this epistemological dilemma of 'LNT or not LNT, that is the question'. To our minds, the path forward is to exploit (1) radiobiological studies which address directly the question of what the dose and dose rate effectiveness factor is in actual human bodies exposed to low-level radiation, in concert with (2) epidemiological studies of human populations exposed to fairly high doses (to obtain statistical power) but where exposure was protracted over some years. (author)

  4. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they rely now more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth, they are used daily by any practitioner in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  5. Rotating universe models

    International Nuclear Information System (INIS)

    Tozini, A.V.

    1984-01-01

    A review is made of some properties of rotating Universe models. Godel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models have Godel's model as a particular case. Non-stationary cosmological models are found which generalize Godel's metric in a way analogous to that in which Friedmann's model generalizes Einstein's model. (L.C.) [pt

  6. Final Report for Subcontract B541028,Pore-Scale Modeling to Support 'Pore Connectivity' Research Work

    International Nuclear Information System (INIS)

    Ewing, R.P.

    2008-01-01

    A central concept for the geological barrier at the proposed Yucca Mountain radioactive waste repository is diffusive retardation: solute moving through a fracture diffuses into and out of the rock matrix. This diffusive exchange retards overall solute movement, and retardation both dilutes waste being released, and allows additional decay. The original concept of diffusive retardation required knowledge only of the fracture conductivity and the matrix diffusion. But that simple concept is unavoidably complicated by other issues and processes: contaminants may sorb to the rock matrix, fracture flow may be episodic, a given fracture may or may not flow depending on the volume of flow and the fracture's connection to the overall fracture network, the matrix imbibes water during flow episodes and dries between episodes, and so on. Some of these issues have been examined by other projects. This particular project is motivated by a simple fact: Yucca Mountain tuff has low pore connectivity. This fact is not widely recognized, nor are its implications widely appreciated. Because low pore connectivity affects many processes, it may invalidate many assumptions that are basic (though perhaps not stated) to other investigations. The overall project's objective statement (from the proposal) was: This proposal aims to improve our understanding of diffusive retardation of radionuclides due to fracture/matrix interactions. Results from this combined experimental/modeling work will (1) determine whether the current understanding and model representation of matrix diffusion is valid, (2) provide insights into the upscaling of laboratory-scale diffusion experiments, and (3) evaluate the impact on diffusive retardation of episodic fracture flow and pore connectivity in Yucca Mountain tuffs. An obvious data gap addressed by the project was that there were only a few limited measurements of the diffusion coefficient of the rock at the repository level. That is, at the time we wrote

  7. Biological effect and tumor risk of diagnostic x-rays. The ''war of the theories''; Biologische Wirkung und Tumorrisiko diagnostischer Roentgenstrahlen. Der ''Krieg der Modelle''

    Energy Technology Data Exchange (ETDEWEB)

    Selzer, E.; Hebar, A. [Medizinische Universitaet Wien, Abteilung fuer Strahlenbiologie, Klinik fuer Strahlentherapie, Wien (Austria)

    2012-10-15

    Since the introduction of ionizing radiation as a treatment and diagnostic tool in humans, scientists have been trying to estimate its side effects and potential health risks. There is now ample evidence for the principal existence of a direct relationship between higher doses and the risks of side effects. Most of the uncertainties lie in the field of low-dose effects, especially with respect to the risk of cancer induction. Low-dose effects are usually of relevance in diagnostic medicine, while high-dose radiation effects are typically observed after radiotherapeutic treatment for cancer or after nuclear accidents. The current state of the ''war of theories'' may be summarized as follows: one group of scientists and health regulatory officials favors the hypothesis that there is no threshold dose of radiation that can be regarded as safe, i.e. the linear-no-threshold (LNT) hypothesis. On the contrary, the critics of this hypothesis suggest that the risks of doses below 50 mSv are not measurable or even of clinical relevance and are not adequately described by a linear dose-response relationship. The aim of this article is to summarize the major unresolved issues in this field. Arguments are presented why the validity of the LNT model in the low-dose range should be regarded as at least inconsistent and is thus questionable. (orig.)

  8. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models. We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from...

  9. Model-to-model interface for multiscale materials modeling

    Energy Technology Data Exchange (ETDEWEB)

    Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)

    2017-12-17

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
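
    The abstract describes a coupling layer built from a standard set of export and import functions. As a hedged sketch of that idea (not the report's actual interface; all names are invented), two independently written models can be linked through a shared exchange schema like this:

      # Minimal model-to-model interface sketch: each model registers an exporter and
      # an importer against a shared record schema, so neither model needs changing.
      from dataclasses import dataclass
      from typing import Callable, Dict

      @dataclass
      class ExchangeRecord:
          fields: Dict[str, float]  # the agreed export/import schema (illustrative)

      class ModelInterface:
          def __init__(self) -> None:
              self._exporters: Dict[str, Callable[[], Dict[str, float]]] = {}
              self._importers: Dict[str, Callable[[ExchangeRecord], None]] = {}

          def register(self, name, exporter, importer):
              self._exporters[name] = exporter
              self._importers[name] = importer

          def exchange(self, source: str, target: str) -> None:
              record = ExchangeRecord(fields=self._exporters[source]())
              self._importers[target](record)

      # Usage: an atomistic model exports a wall stress that a continuum fluid model
      # imports as a boundary condition (placeholder values).
      interface = ModelInterface()
      interface.register("md", exporter=lambda: {"wall_stress": 0.12}, importer=lambda r: None)
      interface.register("cfd", exporter=lambda: {}, importer=lambda r: print("BC:", r.fields))
      interface.exchange("md", "cfd")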

  10. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition; (2) What is a model. Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  11. Characterization and Modeling of High Power Microwave Effects in CMOS Microelectronics

    Science.gov (United States)

    2010-01-01

    Noise margin measurement: Any voltage above the line marked VIH is considered a valid logic high on the input of the gate. VIH and VIL are defined ... The gate can handle any voltage noise level at the input up to VIL without changing state. The region in between VIL and VIH is considered an invalid logic region ... Table 2.2 (Intrinsic device characteristics derived from SPECTRE simulations) reports VIH (V), VIL (V), high noise margin (V), and low noise margin (V).

  12. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support... The VM* languages proposed in the thesis allow end-user modelers to specify model queries, constraints, and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...

  13. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  14. Topological massive sigma models

    International Nuclear Information System (INIS)

    Lambert, N.D.

    1995-01-01

    In this paper we construct topological sigma models which include a potential and are related to twisted massive supersymmetric sigma models. Contrary to a previous construction these models have no central charge and do not require the manifold to admit a Killing vector. We use the topological massive sigma model constructed here to simplify the calculation of the observables. Lastly it is noted that this model can be viewed as interpolating between topological massless sigma models and topological Landau-Ginzburg models. ((orig.))

  15. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  16. [Bone remodeling and modeling/mini-modeling].

    Science.gov (United States)

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone of the fetal and developmental stages, while bone remodeling, the replacement of old bone by new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can show mini-modeling based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts are formed over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, will be introduced.

  17. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of Trusted Measurement supporting behavior measurement based on the trusted connection architecture (TCA) with three entities and three levels is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, including trusted identity, trusted status and trusted behavior, satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  18. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
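
    As a brief illustration of the book's central technique, fitting a linear logistic model to grouped binomial dose-response data and deriving an effective dose, the sketch below uses invented numbers and the statsmodels library; it is not taken from the book.

      # Linear logistic (logit) model for grouped binomial data, with an ED50 estimate.
      import numpy as np
      import statsmodels.api as sm

      dose = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # stimulus levels (illustrative)
      n = np.array([40, 40, 40, 40, 40])            # subjects per dose
      responded = np.array([4, 9, 18, 29, 36])      # observed successes

      X = sm.add_constant(np.log(dose))             # linear logistic model in log-dose
      y = np.column_stack([responded, n - responded])
      fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
      print(fit.params)                             # intercept and slope on the logit scale

      # ED50: the dose at which the fitted logit equals zero.
      ed50 = np.exp(-fit.params[0] / fit.params[1])
      print("estimated ED50:", round(float(ed50), 2))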

  19. Modelling freight transport

    NARCIS (Netherlands)

    Tavasszy, L.A.; Jong, G. de

    2014-01-01

    Freight Transport Modelling is a unique new reference book that provides insight into the state-of-the-art of freight modelling. Focusing on models used to support public transport policy analysis, Freight Transport Modelling systematically introduces the latest freight transport modelling

  20. Semantic Business Process Modeling

    OpenAIRE

    Markovic, Ivan

    2010-01-01

    This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. Quality of the proposed modeling framework is evaluated based on the modeling content of SAP Solution Composer and several real-world business scenarios.

  1. Modelling of Hydraulic Robot

    DEFF Research Database (Denmark)

    Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik

    1997-01-01

    This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...

  2. Model-Independent Diffs

    DEFF Research Database (Denmark)

    Könemann, Patrick

    Text files just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...

  3. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  4. Environmental Satellite Models for a Macroeconomic Model

    International Nuclear Information System (INIS)

    Moeller, F.; Grinderslev, D.; Werner, M.

    2003-01-01

    To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries

  5. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  6. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  7. Lapse rate modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    2010-01-01

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  8. Lapse Rate Modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  9. Multivariate GARCH models

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...
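
    For readers unfamiliar with the model class, a hedged sketch of one of the simpler specifications such reviews cover, a constant-conditional-correlation (CCC) GARCH(1,1) process, is simulated below; the parameter values are arbitrary and chosen only for illustration.

      # Simulate a bivariate CCC-GARCH(1,1): each series has its own GARCH(1,1)
      # variance, and the standardized shocks share a constant correlation rho.
      import numpy as np

      rng = np.random.default_rng(1)
      T = 1000
      omega = np.array([0.05, 0.10])
      alpha = np.array([0.08, 0.10])
      beta = np.array([0.90, 0.85])
      rho = 0.4
      chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

      h = omega / (1.0 - alpha - beta)       # start at the unconditional variances
      returns = np.zeros((T, 2))
      for t in range(T):
          z = chol @ rng.standard_normal(2)  # correlated standard normal shocks
          eps = np.sqrt(h) * z               # scale by conditional standard deviations
          returns[t] = eps
          h = omega + alpha * eps**2 + beta * h

      print("sample correlation of returns:", round(np.corrcoef(returns.T)[0, 1], 3))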

  10. Collaborative networks: Reference modeling

    NARCIS (Netherlands)

    Camarinha-Matos, L.M.; Afsarmanesh, H.

    2008-01-01

    Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of

  11. Models in Action

    DEFF Research Database (Denmark)

    Juhl, Joakim

    This thesis is about mathematical modelling and technology development. While mathematical modelling has become widely deployed within a broad range of scientific practices, it has also gained a central position within technology development. The intersection of mathematical modelling and technol...

  12. Business Model Canvas

    NARCIS (Netherlands)

    D'Souza, Austin

    2013-01-01

    Presentation given on 13 May 2013 at the meeting "Business Model Canvas Challenge Assen".
    The Business Model Canvas was designed by Alex Osterwalder. The model is very clear to work with and consists of nine building blocks.

  13. Energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-01-01

    The construction industry has turned to energy modelling in order to assist it in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...

  14. Wildfire Risk Main Model

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...

  15. Mathematical Modeling Using MATLAB

    National Research Council Canada - National Science Library

    Phillips, Donovan

    1998-01-01

    Mathematical Modeling Using MATLAB acts as a companion resource to A First Course in Mathematical Modeling, with the goal of guiding the reader to a fuller understanding of the modeling process...

  16. Analytic Modeling of Insurgencies

    Science.gov (United States)

    2014-08-01

    Keywords: Counterinsurgency, Situational Awareness, Civilians, Lanchester. 1. Introduction: Combat modeling is one of the oldest areas of operations research, dating ... Army. The ground-breaking work of Lanchester in 1916 [1] marks the beginning of formal models of conflicts, where mathematical formulas and, later ... Warfare model [3], which is a Lanchester-based mathematical model (see more details about this model later on), and McCormick's Magic Diamond model [4].

  17. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  18. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  19. Multilevel modeling using R

    CERN Document Server

    Finch, W Holmes; Kelley, Ken

    2014-01-01

    A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
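
    The book itself works in R; since the code sketches in this document use Python, the snippet below shows an analogous random-intercept multilevel model fitted with statsmodels. The data are synthetic and the variable names are invented.

      # Random-intercept (two-level) model: observations nested within groups.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      groups = np.repeat(np.arange(20), 15)        # 20 clusters, 15 observations each
      u = rng.normal(0.0, 2.0, 20)[groups]         # cluster-level random intercepts
      x = rng.normal(size=groups.size)
      y = 1.0 + 0.5 * x + u + rng.normal(size=groups.size)

      df = pd.DataFrame({"y": y, "x": x, "group": groups})
      fit = smf.mixedlm("y ~ x", df, groups=df["group"]).fit()
      print(fit.summary())   # fixed effect of x plus the estimated between-group variance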

  20. Cosmological models without singularities

    International Nuclear Information System (INIS)

    Petry, W.

    1981-01-01

    A previously studied theory of gravitation in flat space-time is applied to homogeneous and isotropic cosmological models. There exist two different classes of models without singularities: (i) ever-expanding models, (ii) oscillating models. The first class contains models with a hot big bang. For these models there exist at the beginning of the universe (in contrast to Einstein's theory) very high but finite densities of matter and radiation, with a big bang of very short duration. After a short time these models pass into the homogeneous and isotropic models of Einstein's theory with spatial curvature equal to zero and cosmological constant ALPHA ≥ 0. (author)

  1. TRACKING CLIMATE MODELS

    Data.gov (United States)

    National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...

  2. First Principles Modeling of Phonon Heat Conduction in Nanoscale Crystalline Structures

    International Nuclear Information System (INIS)

    Mazumder, Sandip; Li, Ju

    2010-01-01

    The inability to remove heat efficiently is currently one of the stumbling blocks toward further miniaturization and advancement of electronic, optoelectronic, and micro-electro-mechanical devices. In order to formulate better heat removal strategies and designs, it is first necessary to understand the fundamental mechanisms of heat transport in semiconductor thin films. Modeling techniques, based on first principles, can play the crucial role of filling gaps in our understanding by revealing information that experiments are incapable of. Heat conduction in crystalline semiconductor films occurs by lattice vibrations that result in the propagation of quanta of energy called phonons. If the mean free path of the traveling phonons is larger than the film thickness, thermodynamic equilibrium ceases to exist, and thus, the Fourier law of heat conduction is invalid. In this scenario, bulk thermal conductivity values, which are experimentally determined by inversion of the Fourier law itself, cannot be used for analysis. The Boltzmann Transport Equation (BTE) is a powerful tool to treat non-equilibrium heat transport in thin films. The BTE describes the evolution of the number density (or energy) distribution for phonons as a result of transport (or drift) and inter-phonon collisions. Drift causes the phonon energy distribution to deviate from equilibrium, while collisions tend to restore equilibrium. Prior to solution of the BTE, it is necessary to compute the lifetimes (or scattering rates) for phonons of all wave-vector and polarization. The lifetime of a phonon is the net result of its collisions with other phonons, which in turn is governed by the conservation of energy and momentum during the underlying collision processes. This research project contributed to the state-of-the-art in two ways: (1) by developing and demonstrating a calibration-free simple methodology to compute intrinsic phonon scattering (Normal and Umklapp processes) time scales with the inclusion
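
    For orientation, the phonon BTE the abstract refers to is commonly written, under the single-mode relaxation-time approximation, in the following standard textbook form (this is not an equation quoted from the report):

      \frac{\partial f_{\mathbf{k},p}}{\partial t} + \mathbf{v}_{g}(\mathbf{k},p) \cdot \nabla_{\mathbf{r}} f_{\mathbf{k},p} = \frac{f^{0}_{\mathbf{k},p} - f_{\mathbf{k},p}}{\tau(\mathbf{k},p)}

    where f is the distribution function for phonons of wave vector k and polarization p, v_g is the group velocity (the drift term), f^0 is the equilibrium Bose-Einstein distribution, and tau is the phonon lifetime obtained from the Normal and Umklapp scattering rates the project set out to compute.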

  3. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1, (CRWMS MandO 1999c). The Interim Change Notice (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS MandO 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4

  4. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  5. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is argued that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is proposed that allows management decisions to be made in production logistics.

  6. The biomechanics study of rabbit osteoporosis models treated by 99Tcm-MDP combined with GuKangLing

    International Nuclear Information System (INIS)

    Gao Kejia; Zhao Guoding; Ye Zhiwei; Mei Xiaogang; Tian Yingmin; Yan Chushun; Wang Wei; Li Wei; Cai Zhengyu; Song Haiping

    2011-01-01

    Objective: To study the bone biomechanics of the rabbit osteoporosis models induced by dexamethasone sodium phosphate injection (DX) using a combined treatment modality of 99 Tc-MDP and GuKangLing. Methods: Rabbits were intramuscularly injected with DX (2 mg/kg) twice a week for 6 weeks. The animal osteoporosis model group (Group C) and normal group (Group A) were compared to confirm the model was available. Another control group (Group B), the osteoporosis control group (Group D) were set for the comparison at the end of the experiment. The 99 Tc-MDP therapy group (Group E), GuKangLing therapy group (Group F) and 99 Tc-MDP plus GuKangLing therapy group (Group G) were included in the study. The treatment lasted for 16 weeks. The bone biomechanics, cytopathology bone histomorphology, bone mineral density (BMD), X-ray, CT, bone scintigraphy and serum bone alkaline phosphatase (BALP) and P (bone gla protein) were chosen as the markers or methods to evaluate the treatment results (excellent, effective and invalid). The analysis of variance (ANOVA) and t-test were used for group comparison analysis. Results: Cytopathology result indicated that there was no bone trabecular destruction in Group A. However, there was distinct bone destruction in Group C. The bone biomechanics (left femur head (265.914 ±52.773) N, L 4 (369.671 ±94.919) N), BMD (left femur (0.238 ±0.016) g/cm 2 , L 4 (0.236 ±0.016) g/cm 2 ) and bone histomorphology ((66.230 ± 10.848)%) in Group C reduced clearly as compared with Group A ((405.343±55.410) N, (750.870±53.718) N, (0.294±0.017) g/cm 2 , (0.302±0.023) g/cm 2 , (131.500 ± 21.846)%) (t ≥4.550, all P<0.01). Radionuclide bone scan also showed that the uptake of tracers was higher by the main arthrosis in Group C than that in Group A. Vertebra was not clearly visualized on bone scan image. There were significant differences between Group A and Group C in serum BALP and P ((45.000±7.303) vs (12.485 ±1.512) U/L, (0.168±0.018) vs (0.115

  7. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. Despite numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition towards machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that all prediction accuracy in the testing sample improves when the additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
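
    To make the comparison concrete, the hedged sketch below contrasts a classical logistic regression with a random forest on synthetic, imbalanced "bankruptcy" data; the features, labels, and scores are invented and stand in for real company data.

      # Compare a traditional statistical classifier with an ensemble learner.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                                 weights=[0.8, 0.2], random_state=0)  # ~20% "bankrupt"
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "logistic regression": LogisticRegression(max_iter=1000),
          "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
      }
      for name, clf in models.items():
          clf.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
          print(f"{name}: test AUC = {auc:.3f}")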

  8. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  9. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  10. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  11. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  12. Biosphere Model Report

    International Nuclear Information System (INIS)

    D. W. Wu

    2003-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  13. AIDS Epidemiological models

    Science.gov (United States)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by Isham [6].
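
    As a minimal sketch of the kind of compartmental transmission model such work builds on (a generic susceptible-infected system, not one of Isham's specific HIV/AIDS models; all parameter values are invented):

      # Susceptible-infected dynamics with recruitment and removal, solved as an ODE.
      from scipy.integrate import solve_ivp

      def si_model(t, state, beta, mu, recruitment):
          S, I = state
          N = S + I
          dS = recruitment - beta * S * I / N - mu * S   # inflow, infection, exit
          dI = beta * S * I / N - mu * I                 # new infections minus removal
          return [dS, dI]

      sol = solve_ivp(si_model, [0.0, 50.0], [990.0, 10.0], args=(0.4, 0.05, 50.0))
      print("infected at t = 50:", round(sol.y[1, -1], 1))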

  14. A Model for Conversation

    DEFF Research Database (Denmark)

    Ayres, Phil

    2012-01-01

    This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...

  15. HRM: HII Region Models

    Science.gov (United States)

    Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.

    2017-07-01

    HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.

  16. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  17. The Moody Mask Model

    DEFF Research Database (Denmark)

    Larsen, Bjarke Alexander; Andkjær, Kasper Ingdahl; Schoenau-Fog, Henrik

    2015-01-01

    This paper proposes a new relation model, called "The Moody Mask Model", for Interactive Digital Storytelling (IDS), based on Francesco Osborne's "Mask Model" from 2011. This, mixed with some elements from Chris Crawford's Personality Models, is a system designed for dynamic interaction between ch...

  18. Efficient polarimetric BRDF model.

    Science.gov (United States)

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF-functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This simplifies considerably the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linear polarized radiation; and in similarity with e.g. the facet model depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics inspired model has some ad hoc features, the predictive power of the model is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy and also on metallic bead blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  19. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid

  20. Composite hadron models

    International Nuclear Information System (INIS)

    Ogava, S.; Savada, S.; Nakagava, M.

    1983-01-01

    Composite models of hadrons are considered. The main attention is paid to the Sakata (S) model. In the framework of the model it is presupposed that the proton, neutron and Λ particle are the fundamental particles. Theoretical studies of unknown fundamental constituents of matter have led to the creation of the quark model. In the framework of the quark model, using the theory of SU(6) symmetry, the classification of mesons and baryons is considered. Using the quark model, relations between hadron masses, their spins and electromagnetic properties are explained. The problem of the three-colour model with many flavours is briefly presented.

  1. Modeller af komplicerede systemer

    DEFF Research Database (Denmark)

    Mortensen, J.

    This thesis, "Modeller af komplicerede systemer" (models of complicated systems), represents part of the requirements for the Danish Ph.D. degree. Assisting professor John Nørgaard-Nielsen, M.Sc.E.E., Ph.D., has been principal supervisor and professor Morten Lind, M.Sc.E.E., Ph.D., has been assisting supervisor. The thesis is concerned with conceptual modeling in relation to process control. Its purpose is to present, classify and exemplify the use of a set of qualitative model types. Such model types are useful in the early phase of modeling, where no structured methods are at hand. Although the models are general in character, this thesis emphasizes their use in relation to technical systems. All the presented models, with the exception of the types presented in chapter 2, are non-theoretical, non-formal conceptual network models. Two new model types are presented: 1) the System-Environment model, which describes the environment's interaction...

  2. The Hospitable Meal Model

    DEFF Research Database (Denmark)

    Justesen, Lise; Overgaard, Svend Skafte

    2017-01-01

    This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored...

  3. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood Tools Introduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selection Score and Wald tests Classical goodness of fit Model selection bias General Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problems Simulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  4. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  5. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  6. Business Models and Business Model Innovation

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Saebi, Tina

    2018-01-01

    While research on business models and business model innovation continue to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...

  7. Wake modelling combining mesoscale and microscale models

    DEFF Research Database (Denmark)

    Badger, Jake; Volker, Patrick; Prospathospoulos, J.

    2013-01-01

    In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake paramet...

  8. Introduction to Adjoint Models

    Science.gov (United States)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
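
    As a hedged illustration of the tangent-linear/adjoint construction described above (not taken from the lecture itself), the toy two-variable model below propagates a perturbation forward with its Jacobian and a sensitivity backward with the transposed Jacobian:

        import numpy as np

        # Toy nonlinear model: one step of a two-variable map.
        def model(x):
            return np.array([x[0] * x[1], x[0] + np.sin(x[1])])

        def jacobian(x):
            # Tangent linear operator: Jacobian of the model at state x.
            return np.array([[x[1], x[0]],
                             [1.0,  np.cos(x[1])]])

        x0 = np.array([1.0, 2.0])
        dx = np.array([1e-3, -2e-3])      # input perturbation
        M = jacobian(x0)

        dy_tlm = M @ dx                   # tangent linear model: forward-propagated perturbation
        dJdy = np.array([1.0, 0.0])       # sensitivity of some scalar J to the output
        dJdx = M.T @ dJdy                 # adjoint model: sensitivity of J to the input

        # Adjoint identity <M dx, dJ/dy> == <dx, M^T dJ/dy> holds to rounding error.
        print(dy_tlm @ dJdy, dx @ dJdx)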

  9. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    A business model describes the company’s most important activities, the value it proposes, and the compensation received for that value. Business model visualization makes it possible to capture and describe the most important components of the business model simply and systematically, while standardization of the concept allows comparison between companies. There are several possibilities for how to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  10. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    ... parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.
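
    A rough sketch in the spirit of the model described above (not the authors' implementation): fit one factor analyzer per class as the class-conditional density and classify by the largest class-conditional log-likelihood plus log prior. The dataset and the number of latent factors are placeholders.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.decomposition import FactorAnalysis

        X, y = load_iris(return_X_y=True)
        classes = np.unique(y)

        # One factor analyzer per class approximates p(x | class).
        fas, log_priors = {}, {}
        for c in classes:
            fas[c] = FactorAnalysis(n_components=2).fit(X[y == c])
            log_priors[c] = np.log(np.mean(y == c))

        # Classify by maximum log p(x | class) + log p(class).
        scores = np.column_stack([fas[c].score_samples(X) + log_priors[c] for c in classes])
        y_hat = classes[np.argmax(scores, axis=1)]
        print("training accuracy:", np.mean(y_hat == y))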

  11. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  12. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the (ΔL) and (ΔF) models were almost adequate, with an error of 20 % that was associated with the other environmental factors, which were not considered at the beginning of the research.
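
    A minimal sketch of a quadratic response-surface fit of the kind used above; the time/temperature conditions and responses below are invented placeholders, not the study's measurements.

        import numpy as np

        # Hypothetical aging conditions (time in h, temperature in deg C) and response (VM).
        t  = np.array([24, 24, 48, 48, 72, 72, 96, 96, 120])
        T  = np.array([60, 80, 60, 80, 60, 80, 60, 80, 70])
        vm = np.array([0.9, 1.4, 1.3, 2.0, 1.6, 2.6, 1.9, 3.1, 2.2])

        # Second-order response surface: vm ~ b0 + b1*t + b2*T + b3*t*T + b4*t^2 + b5*T^2
        A = np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])
        beta, *_ = np.linalg.lstsq(A, vm, rcond=None)

        def predict(t_new, T_new):
            x = np.array([1.0, t_new, T_new, t_new * T_new, t_new**2, T_new**2])
            return x @ beta

        print("predicted VM at 60 h, 75 deg C:", predict(60, 75))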

  13. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard four-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular to follow both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
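
    As a hedged sketch only (not the authors' exact formulation), a mixture model with an algebraic drift closure of the kind described above can be written in generic drift-flux notation (LaTeX form):

        \rho_m = \alpha_1\rho_1 + \alpha_2\rho_2, \qquad
        \rho_m\mathbf{u}_m = \alpha_1\rho_1\mathbf{u}_1 + \alpha_2\rho_2\mathbf{u}_2, \qquad
        \mathbf{u}_r = \mathbf{u}_1 - \mathbf{u}_2

        \partial_t\rho_m + \nabla\cdot(\rho_m\mathbf{u}_m) = 0

        \partial_t(\alpha_1\rho_1) + \nabla\cdot(\alpha_1\rho_1\mathbf{u}_m)
          = -\nabla\cdot\Big(\tfrac{\alpha_1\rho_1\,\alpha_2\rho_2}{\rho_m}\,\mathbf{u}_r\Big)

        \partial_t(\rho_m\mathbf{u}_m) + \nabla\cdot(\rho_m\mathbf{u}_m\otimes\mathbf{u}_m + p\,\mathbb{I})
          = \rho_m\mathbf{g}
            - \nabla\cdot\Big(\tfrac{\alpha_1\rho_1\,\alpha_2\rho_2}{\rho_m}\,\mathbf{u}_r\otimes\mathbf{u}_r\Big)

    with the relative velocity \mathbf{u}_r prescribed by an algebraic drift (drag-balance) relation rather than by its own conservation law.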

  14. Implementation of a Web-Based Organ Donation Educational Intervention: Development and Use of a Refined Process Evaluation Model

    Science.gov (United States)

    Harker, Laura; Bamps, Yvan; Flemming, Shauna St. Clair; Perryman, Jennie P; Thompson, Nancy J; Patzer, Rachel E; Williams, Nancy S DeSousa; Arriola, Kimberly R Jacob

    2017-01-01

    Background The lack of available organs is often considered to be the single greatest problem in transplantation today. Internet use is at an all-time high, creating an opportunity to increase public commitment to organ donation through the broad reach of Web-based behavioral interventions. Implementing Internet interventions, however, presents challenges including preventing fraudulent respondents and ensuring intervention uptake. Although Web-based organ donation interventions have increased in recent years, process evaluation models appropriate for Web-based interventions are lacking. Objective The aim of this study was to describe a refined process evaluation model adapted for Web-based settings and used to assess the implementation of a Web-based intervention aimed to increase organ donation among African Americans. Methods We used a randomized pretest-posttest control design to assess the effectiveness of the intervention website that addressed barriers to organ donation through corresponding videos. Eligible participants were African American adult residents of Georgia who were not registered on the state donor registry. Drawing from previously developed process evaluation constructs, we adapted reach (the extent to which individuals were found eligible, and participated in the study), recruitment (online recruitment mechanism), dose received (intervention uptake), and context (how the Web-based setting influenced study implementation) for Internet settings and used the adapted model to assess the implementation of our Web-based intervention. Results With regard to reach, 1415 individuals completed the eligibility screener; 948 (67.00%) were determined eligible, of whom 918 (96.8%) completed the study. After eliminating duplicate entries (n=17), those who did not initiate the posttest (n=21) and those with an invalid ZIP code (n=108), 772 valid entries remained. Per the Internet protocol (IP) address analysis, only 23 of the 772 valid entries (3.0%) were

  15. Implementation of a Web-Based Organ Donation Educational Intervention: Development and Use of a Refined Process Evaluation Model.

    Science.gov (United States)

    Redmond, Nakeva; Harker, Laura; Bamps, Yvan; Flemming, Shauna St Clair; Perryman, Jennie P; Thompson, Nancy J; Patzer, Rachel E; Williams, Nancy S DeSousa; Arriola, Kimberly R Jacob

    2017-11-30

    The lack of available organs is often considered to be the single greatest problem in transplantation today. Internet use is at an all-time high, creating an opportunity to increase public commitment to organ donation through the broad reach of Web-based behavioral interventions. Implementing Internet interventions, however, presents challenges including preventing fraudulent respondents and ensuring intervention uptake. Although Web-based organ donation interventions have increased in recent years, process evaluation models appropriate for Web-based interventions are lacking. The aim of this study was to describe a refined process evaluation model adapted for Web-based settings and used to assess the implementation of a Web-based intervention aimed to increase organ donation among African Americans. We used a randomized pretest-posttest control design to assess the effectiveness of the intervention website that addressed barriers to organ donation through corresponding videos. Eligible participants were African American adult residents of Georgia who were not registered on the state donor registry. Drawing from previously developed process evaluation constructs, we adapted reach (the extent to which individuals were found eligible, and participated in the study), recruitment (online recruitment mechanism), dose received (intervention uptake), and context (how the Web-based setting influenced study implementation) for Internet settings and used the adapted model to assess the implementation of our Web-based intervention. With regard to reach, 1415 individuals completed the eligibility screener; 948 (67.00%) were determined eligible, of whom 918 (96.8%) completed the study. After eliminating duplicate entries (n=17), those who did not initiate the posttest (n=21) and those with an invalid ZIP code (n=108), 772 valid entries remained. Per the Internet protocol (IP) address analysis, only 23 of the 772 valid entries (3.0%) were within Georgia, and only 17 of those

  16. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  17. Modeling for Battery Prognostics

    Science.gov (United States)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model type has its advantages and disadvantages. The former has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in the developed model and, as a result of such approximations, cannot represent aging well. The latter has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence are even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures effects of aging, and is computationally efficient
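
    A hedged sketch of the "electrical circuit equivalent" end of the spectrum described above: a first-order Thevenin model discharged at constant current, used to predict end of discharge. All parameter values are invented placeholders, not fitted to a real cell.

        # First-order Thevenin model: v_term = OCV(soc) - I*R0 - v_rc, with one RC branch.
        dt, I = 1.0, 2.0                     # time step [s], discharge current [A]
        Q = 2.0 * 3600.0                     # capacity [A s] (2 Ah), placeholder
        R0, R1, C1 = 0.05, 0.02, 2000.0      # ohmic and RC parameters, placeholders

        def ocv(soc):
            # Crude open-circuit-voltage curve; a real model would use a fitted table.
            return 3.0 + 1.2 * soc

        soc, v_rc, t = 1.0, 0.0, 0.0
        while soc > 0.05:
            v_rc += dt * (I / C1 - v_rc / (R1 * C1))   # RC branch dynamics
            soc  -= dt * I / Q                         # coulomb counting
            v_term = ocv(soc) - I * R0 - v_rc          # terminal voltage
            t += dt
            if v_term < 3.0:                           # end-of-discharge threshold
                break

        print("predicted time to EOD: %.0f s, terminal voltage %.2f V" % (t, v_term))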

  18. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice, followed by a series of case studies drawn from a variety of application areas, ranging from biotechnology to food, polymer and human health applications. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners.

  19. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria ... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.
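
    One concrete criterion of the kind surveyed above is cross-validated prediction error as a function of the number of components. A minimal sketch with principal component regression; the data are synthetic placeholders, not the examples from the paper.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 10))
        y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.normal(size=60)

        # Cross-validated MSE versus the number of components; pick the dimension
        # where the error stops improving appreciably.
        for k in range(1, 8):
            pcr = make_pipeline(PCA(n_components=k), LinearRegression())
            mse = -cross_val_score(pcr, X, y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
            print(k, round(mse, 3))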

  20. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  1. Modeling volatility using state space models.

    Science.gov (United States)

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
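
    A hedged sketch of the kind of state space treatment described above: an AR(1) hidden log-volatility observed with additive noise and filtered with a scalar Kalman filter. The series and parameters are synthetic, not the foreign exchange or index data used in the paper; the relaxation time of shocks corresponds roughly to -1/ln(phi) steps.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hidden log-volatility: h_t = mu + phi*(h_{t-1} - mu) + dynamic noise.
        # Observation: y_t = h_t + observational noise (e.g. de-meaned log squared returns).
        T, mu, phi, q, r = 500, -1.0, 0.98, 0.05, 0.5
        h = np.empty(T); h[0] = mu
        for t in range(1, T):
            h[t] = mu + phi * (h[t-1] - mu) + np.sqrt(q) * rng.normal()
        y = h + np.sqrt(r) * rng.normal(size=T)

        # Scalar Kalman filter for the AR(1)-plus-noise state space model.
        m, P = mu, 1.0
        filtered = np.empty(T)
        for t in range(T):
            m_pred = mu + phi * (m - mu)          # predict
            P_pred = phi**2 * P + q
            K = P_pred / (P_pred + r)             # update
            m = m_pred + K * (y[t] - m_pred)
            P = (1 - K) * P_pred
            filtered[t] = m

        print("correlation(filtered, true):", np.corrcoef(filtered, h)[0, 1])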

  2. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  3. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by the NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  4. [Inhibition and resource capacity during normal aging: a confrontation of the dorsal-ventral and frontal models in a modified version of negative priming].

    Science.gov (United States)

    Martin, S; Brouillet, D; Guerdoux, E; Tarrago, R

    2006-01-01

    ... to-be-ignored properties' responsiveness. In contrast, information matching subjects' goal is enhanced through an automatic excitatory imbalance. The accurate functioning of the Match/Mismatch field requires efficient executive functioning responsible for the upholding of goals and correct responses. In the case of negative priming, manipulating the efficiency of working memory is of interest as it should affect the triggering of slowing, i.e., an indirect inhibitory deficit, when the task is resource demanding [Conway et al. (6)]. Moreover, if inhibition, as reflected by negative priming, is mediated by individual resource capacity, then NP should disappear during aging only when individuals are engaged in a resource-demanding task. To address this issue, we examine whether cognitive control load in a gender decision task contributed to the presence or absence of NP during aging. According to the dorsal-ventral model, task complexity should not have any impact on performance, since the gender decision task relies on a conceptual analysis of information. In turn, the frontal model predicts that performance profiles will differ with age only when individual resource capacity is overloaded. Sixty-four participants (32 young and 32 older adults) performed a gender categorisation task across two experiments. Trials involved two stimuli presented successively at the same location: a word served as a prime and a word as a target. Both prime and target could be male or female. When prime and target matched on gender, we speak of VALID (or compatible) pairs. When prime and target mismatched on the manipulated features, we speak of INVALID (or incompatible) pairs. Participants' task was to identify the gender of the target. They were explicitly instructed not to respond to primes but to read them silently. Our interest was in response latencies for valid versus invalid pairs. We manipulated task complexity by the absence (experiment 1) or presence (experiment 2) of a distractor during

  5. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor ... be applied to formulate, analyse and solve these dynamic problems and how, in the case of the fuel cell problem, the model consists of coupled meso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment.
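
    As a hedged, much-simplified stand-in for the controlled blending tank mentioned above (not the chapter's actual model), a perfectly mixed tank responding to a step in inlet concentration can be integrated directly:

        from scipy.integrate import solve_ivp

        V, q = 2.0, 0.5                    # tank volume [m^3], flow rate [m^3/min]; placeholders

        def c_in(t):
            return 1.0 if t < 10 else 2.0  # step change in inlet concentration

        def rhs(t, c):
            # Perfectly mixed tank: V dc/dt = q * (c_in - c)
            return q * (c_in(t) - c) / V

        sol = solve_ivp(rhs, (0.0, 40.0), [0.0], max_step=0.1)
        print("outlet concentration at t = 40 min:", sol.y[0, -1])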

  6. Holographic twin Higgs model.

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-15

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.

  7. Models of light nuclei

    International Nuclear Information System (INIS)

    Harvey, M.; Khanna, F.C.

    1975-01-01

    The general problem of what constitutes a physical model and what is known about the free nucleon-nucleon interaction are considered. A time independent formulation of the basic equations is chosen. Construction of the average field in which particles move in a general independent particle model is developed, concentrating on problems of defining the average spherical single particle field for any given nucleus, and methods for construction of effective residual interactions and other physical operators. Deformed shell models and both spherical and deformed harmonic oscillator models are discussed in detail, and connections between spherical and deformed shell models are analyzed. A section on cluster models is included. 11 tables, 21 figures

  8. Holographic Twin Higgs Model

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-01

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at mKK , naturally allowing for mKK beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.

  9. Five models of capitalism

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Bresser-Pereira

    2012-03-01

    Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classifications that take a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.

  10. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice...

  11. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further...

  12. Microsoft tabular modeling cookbook

    CERN Document Server

    Braak, Paul te

    2013-01-01

    This book follows a cookbook style, with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users, as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.

  13. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  14. Biosphere Model Report

    International Nuclear Information System (INIS)

    D.W. Wu; A.J. Smith

    2004-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  15. Modelling of Innovation Diffusion

    Directory of Open Access Journals (Sweden)

    Arkadiusz Kijek

    2010-01-01

    Since the publication of the Bass model in 1969, research on the modelling of the diffusion of innovation has resulted in a vast body of scientific literature consisting of articles, books, and studies of real-world applications of this model. The main objective of a diffusion model is to describe the pattern of spread of an innovation among potential adopters as a mathematical function of time. This paper assesses the state of the art in mathematical models of innovation diffusion and procedures for estimating their parameters. Moreover, theoretical issues related to the models presented are supplemented with empirical research. The purpose of the research is to explore the extent to which the diffusion of broadband Internet users in 29 OECD countries can be adequately described by three diffusion models, i.e. the Bass model, the logistic model and the dynamic model. The results of this research are ambiguous and do not indicate which model best describes the diffusion pattern of broadband Internet users; in terms of the results presented, however, the dynamic model is in most cases inappropriate for describing the diffusion pattern. Issues related to the further development of innovation diffusion models are discussed and some recommendations are given. (original abstract)
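
    The Bass model referred to above can be written as a hazard of adoption f(t)/(1 - F(t)) = p + q F(t), which gives the closed-form cumulative adoption F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}). A minimal sketch fitting it to a synthetic penetration series; the numbers are invented, not the OECD estimates from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def bass_cdf(t, p, q, m):
            # m = market potential, p = innovation coefficient, q = imitation coefficient.
            e = np.exp(-(p + q) * t)
            return m * (1.0 - e) / (1.0 + (q / p) * e)

        # Synthetic yearly broadband penetration (subscribers per 100 inhabitants); invented.
        t = np.arange(1, 13)
        y = bass_cdf(t, 0.03, 0.4, 30.0) + np.random.default_rng(2).normal(0, 0.4, t.size)

        (p_hat, q_hat, m_hat), _ = curve_fit(bass_cdf, t, y, p0=(0.01, 0.3, 25.0))
        print("p = %.3f, q = %.3f, m = %.1f" % (p_hat, q_hat, m_hat))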

  16. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate and robust for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
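
    A hedged one-dimensional sketch of the assembly idea described above: local first-order Taylor expansions at a few sampling states, blended with normalized radial-basis-function weights. The test function is arbitrary, not the airfoil problem from the paper.

        import numpy as np

        def f(x):               # stand-in for the full nonlinear system
            return np.sin(x) + 0.1 * x**2

        def df(x):              # its derivative, giving the first-order Taylor terms
            return np.cos(x) + 0.2 * x

        centers = np.linspace(-3, 3, 7)    # sampling states
        width = 1.0

        def surrogate(x):
            # Gaussian RBF weights, normalized to sum to one.
            w = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
            w = w / w.sum()
            # Assemble the piecewise linear (local Taylor) models.
            local = f(centers) + df(centers) * (x - centers)
            return np.sum(w * local)

        xs = np.linspace(-3, 3, 61)
        print("max abs error of assembled model:", max(abs(surrogate(x) - f(x)) for x in xs))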

  17. Integrated Medical Model – Chest Injury Model

    Data.gov (United States)

    National Aeronautics and Space Administration — The Exploration Medical Capability (ExMC) Element of NASA's Human Research Program (HRP) developed the Integrated Medical Model (IMM) to forecast the resources...

  18. Traffic & safety statewide model and GIS modeling.

    Science.gov (United States)

    2012-07-01

    Several steps have been taken over the past two years to advance the Utah Department of Transportation (UDOT) safety initiative. Previous research projects began the development of a hierarchical Bayesian model to analyze crashes on Utah roadways. De...
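
    A hedged sketch of one ingredient commonly used alongside such hierarchical Bayesian crash models: the empirical Bayes (Poisson-gamma) combination of a model-predicted crash frequency with the observed count for a site. The counts and parameters below are invented placeholders, not UDOT results.

        # Empirical Bayes estimate for a road segment:
        #   EB = w * predicted + (1 - w) * observed,  with  w = 1 / (1 + predicted / k),
        # where "predicted" is the model's expected crash count over the study period
        # and k is the negative binomial (inverse) dispersion parameter.
        mu_per_year, k = 1.8, 2.5      # predicted crashes/year and dispersion; placeholders
        observed, years = 9, 3         # observed crashes over the study period; placeholders

        predicted = mu_per_year * years
        w = 1.0 / (1.0 + predicted / k)
        eb = w * predicted + (1.0 - w) * observed
        print("EB expected crashes over %d years: %.2f (model weight %.2f)" % (years, eb, w))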

  19. A large enhancement of photoinduced second harmonic generation in CdI2–Cu layered nanocrystals.

    Science.gov (United States)

    Miah, M Idrish

    2009-02-12

    Photoinduced second harmonic generation (PISHG) in undoped as well as in various Cu-doped (0.05-1.2% Cu) CdI2 nanocrystals was measured at liquid nitrogen temperature (LNT). It was found that the PISHG increases with increasing Cu doping up to approximately 0.6% and then decreases, almost to the value for undoped CdI2, for doping higher than approximately 1%. The values of the second-order susceptibility ranged from 0.50 to 0.67 pm V(-1) for the Cu-doped nanocrystals with a thickness of 0.5 nm. The parabolic shape of the Cu-doping dependence suggests a crucial role of the Cu agglomerates in the observed effects. The PISHG in crystals of various nanosizes was also measured at LNT. The size dependence demonstrated a quantum-confinement effect, with a maximum PISHG at 0.5 nm and a clear increase in the PISHG with decreasing nanocrystal thickness. Raman scattering spectra at different pumping powers were taken for thin nanocrystals, and phonon modes originating from interlayer phonons were observed in the spectra. The results were discussed within a model of photoinduced electron-phonon anharmonicity.

  20. Inefficient Metabolism of the Human Milk Oligosaccharides Lacto-N-tetraose and Lacto-N-neotetraose Shifts Bifidobacterium longum subsp. infantis Physiology

    Directory of Open Access Journals (Sweden)

    Ezgi Özcan

    2018-05-01

    Human milk contains a high concentration of indigestible oligosaccharides, which likely mediated the coevolution of the nursing infant with its gut microbiome. Specifically, Bifidobacterium longum subsp. infantis (B. infantis) often colonizes the infant gut and utilizes these human milk oligosaccharides (HMOs) to enrich its abundance. In this study, the physiology and mechanisms underlying B. infantis utilization of two HMO isomers, lacto-N-tetraose (LNT) and lacto-N-neotetraose (LNnT), were investigated, in addition to their carbohydrate constituents. Both LNT and LNnT utilization induced a significant shift in the ratio of secreted acetate to lactate (1.7–2.0), in contrast to the catabolism of their component carbohydrates (~1.5). Inefficient metabolism of LNnT prompts B. infantis to shunt carbon toward formic acid and ethanol secretion. The global transcriptome presents genomic features differentially expressed to catabolize these two HMO species that vary by a single glycosidic linkage. Furthermore, a measure of strain-level variation exists between B. infantis isolates. Regardless of strain, inefficient HMO metabolism induces the metabolic shift toward formic acid and ethanol production. Furthermore, bifidobacterial metabolites reduced LPS-induced inflammation in a cell culture model. Thus, differential metabolism of milk glycans potentially drives the emergent physiology of host-microbial interactions to impact infant health.