WorldWideScience

Sample records for invalid lnt model

  1. Radiation, Ecology and the Invalid LNT Model: The Evolutionary Imperative

    OpenAIRE

    Parsons, Peter A.

    2006-01-01

    Metabolic and energetic efficiency, and hence fitness of organisms to survive, should be maximal in their habitats. This tenet of evolutionary biology invalidates the linear-no-threshold (LNT) model for the risk consequences of environmental agents. Hormesis in response to selection for maximum metabolic and energetic efficiency, or minimum metabolic imbalance, to adapt to a stressed world dominated by oxidative stress should therefore be universal. Radiation hormetic zones extending substanti...

  2. Radiation, ecology and the invalid LNT model: the evolutionary imperative.

    Science.gov (United States)

    Parsons, Peter A

    2006-09-27

    Metabolic and energetic efficiency, and hence fitness of organisms to survive, should be maximal in their habitats. This tenet of evolutionary biology invalidates the linear-no-threshold (LNT) model for the risk consequences of environmental agents. Hormesis in response to selection for maximum metabolic and energetic efficiency, or minimum metabolic imbalance, to adapt to a stressed world dominated by oxidative stress should therefore be universal. Radiation hormetic zones extending substantially beyond common background levels can be explained by metabolic interactions among multiple abiotic stresses. Demographic and experimental data are mainly in accord with this expectation. Therefore, non-linearity becomes the primary model for assessing risks from low-dose ionizing radiation. This is the evolutionary imperative upon which risk assessment for radiation should be based.

  3. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT.

    Science.gov (United States)

    Calabrese, Edward J

    2017-04-01

    This paper reveals that nearly 25 years after the National Academy of Sciences (NAS), Biological Effects of Ionizing Radiation (BEIR) I Committee (1972) used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, which was a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. From AADL Model to LNT Specification

    OpenAIRE

    Mkaouar, Hana; Zalila, Bechir; Hugues, Jérôme; Jmaiel, Mohamed

    2015-01-01

    The verification of distributed real-time systems designed by architectural languages such as AADL (Architecture Analysis and Design Language) is a research challenge. These systems are often used in safety-critical domains where one mistake can result in physical damages and even life loss. In such domains, formal methods are a suitable solution for rigorous analysis. This paper studies the formal verification of distributed real-time systems modelled with AADL. We transform AADL model to a...

  5. Does scientific evidence support a change from the LNT model for low-dose radiation risk extrapolation?

    Science.gov (United States)

    Averbeck, Dietrich

    2009-11-01

    The linear no-threshold (LNT) model has been widely used to establish international rules and standards in radiation protection. It is based on the notion that the physical energy deposition of ionizing radiation (IR) increases carcinogenic risk linearly with increasing dose (i.e., the carcinogenic effectiveness remains constant irrespective of dose) and, within a factor of two, also with dose-rate. However, recent findings have strongly put into question the LNT concept and its scientific validity, especially for very low doses and dose-rates. Low-dose effects are more difficult to ascertain than high-dose effects. Epidemiological studies usually lack sufficient statistical power to determine health risks from very low-dose exposures. In this situation, studies of the fundamental mechanisms involved help to understand and assess short- and long-term effects of low-dose IR and to evaluate low-dose radiation risks. Several lines of evidence demonstrate that low-dose and low dose-rate effects are generally lower than expected from high-dose exposures. DNA damage signaling, cell cycle checkpoint activation, DNA repair, gene and protein expression, apoptosis, and cell transformation differ qualitatively and quantitatively at high- and low-dose IR exposures, and most animal and epidemiological data support this conclusion. Thus, LNT appears to be scientifically invalid in the low-dose range.

  6. The LNT model provides the best approach for practical implementation of radiation protection.

    Science.gov (United States)

    Martin, C J

    2005-01-01

    This contribution argues the case that, at the present time, the linear-no-threshold (LNT) model provides the only rational framework on which practical radiation protection can be organized. Political, practical and healthcare difficulties with attempting to introduce an alternative approach, e.g. a threshold model, are discussed.

  7. Model Uncertainty via the Integration of Hormesis and LNT as the Default in Cancer Risk Assessment.

    Science.gov (United States)

    Calabrese, Edward J

    2015-01-01

    On June 23, 2015, the US Nuclear Regulatory Commission (NRC) issued a formal notice in the Federal Register that it would consider whether "it should amend its 'Standards for Protection Against Radiation' regulations from the linear non-threshold (LNT) model of radiation protection to the hormesis model." The present commentary supports this recommendation based on the (1) flawed and deceptive history of the adoption of LNT by the US National Academy of Sciences (NAS) in 1956; (2) the documented capacity of hormesis to make more accurate predictions of biological responses for diverse biological end points in the low-dose zone; (3) the occurrence of extensive hormetic data from the peer-reviewed biomedical literature that revealed hormetic responses are highly generalizable, being independent of biological model, end point measured, inducing agent, level of biological organization, and mechanism; and (4) the integration of hormesis and LNT models via a model uncertainty methodology that optimizes public health responses at 10⁻⁴. Thus, both LNT and hormesis can be integratively used for risk assessment purposes, and this integration defines the so-called "regulatory sweet spot."
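
    The model-uncertainty integration described above can be illustrated with a small numerical sketch. The linear and J-shaped curves, their parameters, and the equal model weights below are hypothetical choices made only to show the mechanics, not values from the commentary:

      import numpy as np

      # Illustrative excess-risk models as a function of dose (mSv); parameters are invented.
      def lnt_risk(dose, slope=1e-5):
          """Linear no-threshold: excess risk strictly proportional to dose."""
          return slope * dose

      def hormetic_risk(dose, slope=1e-5, depth=5e-4, scale=50.0):
          """J-shaped curve: a protective dip at low doses, linear behaviour at high doses."""
          return slope * dose - depth * (dose / scale) * np.exp(1.0 - dose / scale)

      doses = np.linspace(0.0, 500.0, 501)

      # Model-uncertainty step: average the candidate models (equal weights assumed here).
      weights = {"lnt": 0.5, "hormesis": 0.5}
      combined = weights["lnt"] * lnt_risk(doses) + weights["hormesis"] * hormetic_risk(doses)

      # Dose range over which the averaged excess risk stays at or below 1e-4,
      # the risk level named in the abstract.
      acceptable = doses[combined <= 1e-4]
      print(f"Averaged excess risk <= 1e-4 up to about {acceptable.max():.0f} mSv")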

  8. Model Uncertainty via the Integration of Hormesis and LNT as the Default in Cancer Risk Assessment

    Directory of Open Access Journals (Sweden)

    Edward J. Calabrese

    2015-12-01

    Full Text Available On June 23, 2015, the US Nuclear Regulatory Commission (NRC) issued a formal notice in the Federal Register that it would consider whether “it should amend its ‘Standards for Protection Against Radiation’ regulations from the linear non-threshold (LNT) model of radiation protection to the hormesis model.” The present commentary supports this recommendation based on the (1) flawed and deceptive history of the adoption of LNT by the US National Academy of Sciences (NAS) in 1956; (2) the documented capacity of hormesis to make more accurate predictions of biological responses for diverse biological end points in the low-dose zone; (3) the occurrence of extensive hormetic data from the peer-reviewed biomedical literature that revealed hormetic responses are highly generalizable, being independent of biological model, end point measured, inducing agent, level of biological organization, and mechanism; and (4) the integration of hormesis and LNT models via a model uncertainty methodology that optimizes public health responses at 10⁻⁴. Thus, both LNT and hormesis can be integratively used for risk assessment purposes, and this integration defines the so-called “regulatory sweet spot.”

  9. The linear nonthreshold (LNT) model as used in radiation protection: an NCRP update.

    Science.gov (United States)

    Boice, John D

    2017-10-01

    The linear nonthreshold (LNT) model has been used in radiation protection for over 40 years and has been hotly debated. It relies heavily on human epidemiology, with support from radiobiology. The scientific underpinnings include NCRP Report No. 136 ('Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation'), UNSCEAR 2000, ICRP Publication 99 (2004) and the National Academies BEIR VII Report (2006). NCRP Scientific Committee 1-25 is reviewing recent epidemiologic studies focusing on dose-response models, including threshold, and the relevance to radiation protection. Recent studies after the BEIR VII Report are being critically reviewed and include atomic-bomb survivors, Mayak workers, atomic veterans, populations on the Techa River, U.S. radiological technologists, the U.S. Million Person Study, international workers (INWORKS), Chernobyl cleanup workers, children given computerized tomography scans, and tuberculosis-fluoroscopy patients. Methodologic limitations, dose uncertainties and statistical approaches (and modeling assumptions) are being systematically evaluated. The review of studies continues and will be published as an NCRP commentary in 2017. Most studies reviewed to date are consistent with a straight-line dose response but there are a few exceptions. In the past, the scientific consensus process has worked in providing practical and prudent guidance. So pragmatic judgment is anticipated. The evaluations are ongoing and the extensive NCRP review process has just begun, so no decisions or recommendations are in stone. The march of science requires a constant assessment of emerging evidence to provide an optimum, though not necessarily perfect, approach to radiation protection. Alternatives to the LNT model may be forthcoming, e.g. an approach that couples the best epidemiology with biologically-based models of carcinogenesis, focusing on chronic (not acute) exposure circumstances. Currently for the practical purposes of

  10. The LNT model is the best we can do--today.

    Science.gov (United States)

    Preston, R Julian

    2003-09-01

    The form of the dose-response curve for radiation-induced cancers, particularly at low doses, is the subject of an ongoing and spirited debate. The present review describes the current database and basis for establishing a low dose, linear no threshold (LNT) model. The requirement for a dose-response model to be used for risk assessment purposes is that it fits the great majority of data derived from epidemiological and experimental tumour studies. Such is the case for the LNT model as opposed to other nonlinear models. This view is supported by data developed for radiation-induced mutations and chromosome aberrations. Potential modifiers of low dose cellular responses to radiation (such as adaptive response, bystander effects and genomic instability) have not been shown to be associated with tumour development. Such modifiers tend to influence the slope of the dose-response curve for cellular responses at low doses and not the shape--thereby resulting in a quantitative modification rather than a qualitative one. Additional data pertinent to addressing the shape of the tumour dose-response relationship at low doses are needed.

  11. Regulatory-Science: Biphasic Cancer Models or the LNT-Not Just a Matter of Biology!

    Science.gov (United States)

    Ricci, Paolo F; Sammis, Ian R

    2012-01-01

    There is no doubt that prudence and risk aversion must guide public decisions when the associated adverse outcomes are either serious or irreversible. With any carcinogen, the levels of risk and needed protection before and after an event occurs are determined by dose-response models. Regulatory law should not crowd out the actual beneficial effects from low dose exposures (when demonstrable) that are inevitably lost when it adopts the linear non-threshold (LNT) as its causal model. Because regulating exposures requires planning and developing protective measures for future acute and chronic exposures, public management decisions should be based on minimizing costs and harmful exposures. We address the direct and indirect effects of causation when the danger consists of exposure to very low levels of carcinogens and toxicants. The societal consequences of a policy can be deleterious when that policy is based on a risk assumed by the LNT, in cases where low exposures are actually beneficial. Our work develops the science and the law of causal risk modeling: both are interwoven. We suggest how their relevant characteristics differ, but do not attempt to keep them separated; as we demonstrate, this union, however unsatisfactory, cannot be severed.

  12. Univariate time series modeling and an application to future claims amount in SOCSO's invalidity pension scheme

    Science.gov (United States)

    Chek, Mohd Zaki Awang; Ahmad, Abu Bakar; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md.; Jamal, Nur Faezah; Ismail, Isma Liana; Zulkifli, Faiz; Noor, Syamsul Ikram Mohd

    2012-09-01

    The main objective of this study is to forecast the future claims amount of the Invalidity Pension Scheme (IPS). All data were derived from SOCSO annual reports for the years 1972-2010. These claims consist of the claim amounts from the 7 benefits offered by SOCSO: Invalidity Pension, Invalidity Grant, Survivors Pension, Constant Attendance Allowance, Rehabilitation, Funeral and Education. Prediction of future claims of the Invalidity Pension Scheme will be made using univariate forecasting models to predict the future claims among the workforce in Malaysia.
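
    As a concrete illustration of a univariate forecasting model of the kind the study describes, the sketch below fits an ARIMA model to an annual claims series and projects it forward. The series is synthetic and the ARIMA(1,1,1) order is an assumption, since the abstract does not state which models were compared:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      # Synthetic annual claims amounts for 1972-2010 (illustration only).
      years = pd.period_range("1972", "2010", freq="Y")
      rng = np.random.default_rng(0)
      claims = pd.Series(50 + 5.0 * np.arange(len(years)) + rng.normal(0, 8, len(years)),
                         index=years, name="claims")

      # Fit a simple ARIMA(1,1,1); order selection (e.g. by AIC) is omitted in this sketch.
      fitted = ARIMA(claims, order=(1, 1, 1)).fit()

      # Project the next five years of claims amounts.
      print(fitted.forecast(steps=5))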

  13. Whither LNT?

    Energy Technology Data Exchange (ETDEWEB)

    Higson, D.J.

    2015-03-15

    UNSCEAR and ICRP have reported that no health effects have been attributed to radiation exposure at Fukushima. As at Chernobyl, however, fear that there is no safe dose of radiation has caused enormous psychological damage to health, and evacuation to protect the public from exposure to radiation appears to have done more harm than good. UNSCEAR and ICRP both stress that collective doses, aggregated from the exposure of large numbers of individuals to very low doses, should not be used to estimate numbers of radiation-induced health effects. This is incompatible with the LNT assumption recommended by the ICRP. (author)
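
    The incompatibility noted above is easiest to see as arithmetic. A worked example, assuming a nominal fatal-cancer risk coefficient of roughly 5 x 10^-2 per sievert (the order of the ICRP nominal value) and a purely hypothetical exposed population:

      \[
      S \;=\; \sum_i d_i \;=\; 10^{7}\ \text{people} \times 10\ \mu\text{Sv} \;=\; 100\ \text{person-Sv},
      \qquad
      N_{\mathrm{LNT}} \;=\; 5\times 10^{-2}\ \text{Sv}^{-1} \times S \;=\; 5\ \text{predicted deaths},
      \]

    even though no individual receives more than a trivial fraction of annual natural background. It is precisely this multiplication that UNSCEAR and ICRP advise against, while the LNT assumption formally licenses it.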

  14. The bell tolls for LNT

    International Nuclear Information System (INIS)

    Higson, D.J.

    2003-01-01

    The linear no-threshold (LNT) model has been a convenient tool in the practice of radiation protection but it is not supported by scientific data at doses less than about 100 millisievert or at chronic dose rates up to at least 200 millisievert per year. Radiation protection practices based on the LNT model yield no demonstrable benefits to health when applied at lower annual doses. The assumption that such exposures are harmful may not even be conservative and has helped to foster an unwarranted fear of low-level radiation. For its new recommendations, to be issued probably in 2005, the ICRP has said that it expects to continue the application of the LNT model 'above a few millisievert per year'. National societies for radiation protection may wish to consider the need to lobby the ICRP, through the auspices of IRPA, to further relax adherence to the LNT assumption - up to 'a few tens of millisievert per year'. Copyright (2003) Australasian Radiation Protection Society Inc

  15. The bell tolls for LNT.

    Science.gov (United States)

    Higson, Don J

    2004-11-01

    The linear no-threshold (LNT) model has been a convenient tool in the practice of radiation protection, but it is not supported by scientific data at doses less than about 100 millisievert or at chronic dose rates up to at least 200 millisievert per year. Radiation protection practices based on the LNT model yield no demonstrable benefits to health when applied at lower annual doses. The assumption that such exposures are harmful may not even be conservative and has helped to foster an unwarranted fear of low-level radiation. For its new recommendations, to be issued probably in 2005, the International Commission on Radiological Protection (ICRP) has said that it expects to continue the application of the LNT model "above a few millisievert per year." National societies for radiation protection may wish to consider the need to lobby the ICRP, through the auspices of International Radiation Protection Association, to further relax adherence to the LNT assumption--up to "a few tens of millisievert per year."

  16. Combining Generated Data Models with Formal Invalidation for Insider Threat Analysis

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2014-01-01

    We draw from recent insights into the generation of insider data to complement a logic-based mechanical approach. We show how insider analysis can be traced back to the early days of security verification and the Lowe attack on NSPK. The invalidation of policies allows model checking organisational structures to detect insider attacks. Integration of higher-order logic specification techniques allows the use of data refinement to explore attack possibilities beyond the initial system specification. We illustrate this combined invalidation technique on the classical example of the naughty lottery fairy. Data generation supports the approach...

  17. A model invalidation-based approach for elucidating biological signalling pathways, applied to the chemotaxis pathway in R. sphaeroides.

    Science.gov (United States)

    Roberts, Mark A J; August, Elias; Hamadeh, Abdullah; Maini, Philip K; McSharry, Patrick E; Armitage, Judith P; Papachristodoulou, Antonis

    2009-10-31

    Developing methods for understanding the connectivity of signalling pathways is a major challenge in biological research. For this purpose, mathematical models are routinely developed based on experimental observations, which also allow the prediction of the system behaviour under different experimental conditions. Often, however, the same experimental data can be represented by several competing network models. In this paper, we developed a novel mathematical model/experiment design cycle to help determine the probable network connectivity by iteratively invalidating models corresponding to competing signalling pathways. To do this, we systematically design experiments in silico that discriminate best between models of the competing signalling pathways. The method determines the inputs and parameter perturbations that will differentiate best between model outputs, corresponding to what can be measured/observed experimentally. We applied our method to the unknown connectivities in the chemotaxis pathway of the bacterium Rhodobacter sphaeroides. We first developed several models of R. sphaeroides chemotaxis corresponding to different signalling networks, all of which are biologically plausible. Parameters in these models were fitted so that they all represented wild type data equally well. The models were then compared to current mutant data and some were invalidated. To discriminate between the remaining models we used ideas from control systems theory to determine efficiently in silico an input profile that would result in the biggest difference in model outputs. However, when we applied this input to the models, we found it to be insufficient for discrimination in silico. Thus, to achieve better discrimination, we determined the best change in initial conditions (total protein concentrations) as well as the best change in the input profile. The designed experiments were then performed on live cells and the resulting data used to invalidate all but one of the
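
    A stripped-down sketch of the discrimination step: two hypothetical candidate models (two-state ODE systems differing only in one feedback connection) are simulated for several candidate step inputs, and the input producing the largest difference between their measurable outputs is selected. The structures, parameters, and inputs below are invented for illustration and are not the R. sphaeroides models:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Candidate topology A has no feedback from x2 to x1; candidate topology B does.
      def model_a(t, x, u):
          x1, x2 = x
          return [u - x1, x1 - 0.5 * x2]

      def model_b(t, x, u):
          x1, x2 = x
          return [u - x1 - 0.3 * x2, x1 - 0.5 * x2]

      def output(model, u, t_end=20.0):
          """Simulate a step input of height u and return the measured output x2(t)."""
          sol = solve_ivp(model, (0.0, t_end), [0.0, 0.0], args=(u,),
                          t_eval=np.linspace(0.0, t_end, 200))
          return sol.y[1]

      # In-silico experiment design: choose the candidate input that best separates the models.
      candidate_inputs = [0.1, 0.5, 1.0, 2.0, 5.0]
      separation = {u: np.max(np.abs(output(model_a, u) - output(model_b, u)))
                    for u in candidate_inputs}
      best_u = max(separation, key=separation.get)
      print(f"Most discriminating step input: {best_u} (output separation {separation[best_u]:.3f})")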

  18. A model invalidation-based approach for elucidating biological signalling pathways, applied to the chemotaxis pathway in R. sphaeroides

    Directory of Open Access Journals (Sweden)

    Hamadeh Abdullah

    2009-10-01

    Full Text Available. Background: Developing methods for understanding the connectivity of signalling pathways is a major challenge in biological research. For this purpose, mathematical models are routinely developed based on experimental observations, which also allow the prediction of the system behaviour under different experimental conditions. Often, however, the same experimental data can be represented by several competing network models. Results: In this paper, we developed a novel mathematical model/experiment design cycle to help determine the probable network connectivity by iteratively invalidating models corresponding to competing signalling pathways. To do this, we systematically design experiments in silico that discriminate best between models of the competing signalling pathways. The method determines the inputs and parameter perturbations that will differentiate best between model outputs, corresponding to what can be measured/observed experimentally. We applied our method to the unknown connectivities in the chemotaxis pathway of the bacterium Rhodobacter sphaeroides. We first developed several models of R. sphaeroides chemotaxis corresponding to different signalling networks, all of which are biologically plausible. Parameters in these models were fitted so that they all represented wild type data equally well. The models were then compared to current mutant data and some were invalidated. To discriminate between the remaining models we used ideas from control systems theory to determine efficiently in silico an input profile that would result in the biggest difference in model outputs. However, when we applied this input to the models, we found it to be insufficient for discrimination in silico. Thus, to achieve better discrimination, we determined the best change in initial conditions (total protein concentrations) as well as the best change in the input profile. The designed experiments were then performed on live cells and the resulting

  19. The Genetics Panel of the NAS BEAR I Committee (1956): epistolary evidence suggests self-interest may have prompted an exaggeration of radiation risks that led to the adoption of the LNT cancer risk assessment model.

    Science.gov (United States)

    Calabrese, Edward J

    2014-09-01

    This paper extends a series of historical papers which demonstrated that the linear-no-threshold (LNT) model for cancer risk assessment was founded on ideological-based scientific deceptions by key radiation genetics leaders. Based on an assessment of recently uncovered personal correspondence, it is shown that some members of the United States (US) National Academy of Sciences (NAS) Biological Effects of Atomic Radiation I (BEAR I) Genetics Panel were motivated by self-interest to exaggerate risks to promote their science and personal/professional agenda. Such activities have profound implications for public policy and may have had a significant impact on the adoption of the LNT model for cancer risk assessment.

  20. The impact of pediatric neuropsychological consultation in mild traumatic brain injury: a model for providing feedback after invalid performance.

    Science.gov (United States)

    Connery, Amy K; Peterson, Robin L; Baker, David A; Kirkwood, Michael W

    2016-05-01

    In recent years, pediatric practitioners have increasingly recognized the importance of objectively measuring performance validity during clinical assessments. Yet, no studies have examined the impact of neuropsychological consultation when invalid performance has been identified in pediatric populations and little published guidance exists for clinical management. Here we provide a conceptual model for providing feedback after noncredible performance has been detected. In a pilot study, we examine caregiver satisfaction and postconcussive symptoms following provision of this feedback for patients seen through our concussion program. Participants (N = 70) were 8-17-year-olds with a history of mild traumatic brain injury who underwent an abbreviated neuropsychological evaluation between 2 and 12 months post-injury. We examined postconcussive symptom reduction and caregiver satisfaction after neuropsychological evaluation between groups of patients who were determined to have provided noncredible effort (n = 9) and those for whom no validity concerns were present (n = 61). We found similarly high levels of caregiver satisfaction between groups and greater reduction in self-reported symptoms after feedback was provided using the model with children with noncredible presentations compared to those with credible presentations. The current study lends preliminary support to the idea that the identification and communication of invalid performance can be a beneficial clinical intervention that promotes high levels of caregiver satisfaction and a reduction in self-reported and caregiver-reported symptoms.

  1. Invalidating Policies using Structural Information

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2014-01-01

    We present a step towards detecting the risk of insider attacks by invalidating policies using structural information of the organisational model. Based on this structural information and a description of the organisation's policies, our approach invalidates the policies and identifies exemplary sequences of actions that lead to a violation of the policy in question. Based on these examples, the organisation can identify real attack vectors that might result in an insider attack. This information can be used to refine access control systems or policies. We provide case studies showing how mechanical verification tools, i.e. model checking with MCMAS and interactive theorem proving...

  2. Commentary on Using LNT for Radiation Protection and Risk Assessment.

    Science.gov (United States)

    Cuttler, Jerry M

    2010-02-04

    An article by Jerome Puskin attempts to justify the continued use of the linear no-threshold (LNT) assumption in radiation protection and risk assessment. In view of the substantial and increasing amount of data that contradicts this assumption, it is difficult to understand the reason for endorsing this unscientific behavior, which severely constrains nuclear energy projects and the use of CT scans in medicine. Many Japanese studies over the past 25 years have shown that low doses and low dose rates of radiation improve health in living organisms including humans. Recent studies on fruit flies have demonstrated that the original basis for the LNT notion is invalid. The Puskin article omits any mention of important reports from UNSCEAR, the NCRP and the French Academies of Science and Medicine, while citing an assessment of the Canadian breast cancer study that manipulated the data to obscure evidence of reduced breast cancer mortality following a low total dose. This commentary provides dose limits that are based on real human data, for both single and chronic radiation exposures.

  3. The Mistaken Birth and Adoption of LNT: An Abridged Version.

    Science.gov (United States)

    Calabrese, Edward J

    2017-01-01

    The historical foundations of cancer risk assessment were based on the discovery of X-ray-induced gene mutations by Hermann J. Muller, its transformation into the linear nonthreshold (LNT) single-hit theory, the recommendation of the model by the US National Academy of Sciences, Biological Effects of Atomic Radiation I, Genetics Panel in 1956, and subsequent widespread adoption by regulatory agencies worldwide. This article summarizes substantial recent historical revelations of this history, which profoundly challenge the standard and widely acceptable history of cancer risk assessment, showing multiple significant scientific errors and incorrect interpretations, mixed with deliberate misrepresentation of the scientific record by leading ideologically motivated radiation geneticists. These novel historical findings demonstrate that the scientific foundations of the LNT single-hit model were seriously flawed and should not have been adopted for cancer risk assessment.

  4. In modelling effects of global warming, invalid assumptions lead to unrealistic projections.

    Science.gov (United States)

    Lefevre, Sjannie; McKenzie, David J; Nilsson, Göran E

    2018-02-01

    In their recent Opinion, Pauly and Cheung provide new projections of future maximum fish weight (W∞). Based on criticism by Lefevre et al. (2017) they changed the scaling exponent for anabolism, d_G. Here we find that changing both d_G and the scaling exponent for catabolism, b, leads to the projection that fish may even become 98% smaller with a 1°C increase in temperature. This unrealistic outcome indicates that the current W∞ is unlikely to be explained by the Gill-Oxygen Limitation Theory (GOLT) and, therefore, GOLT cannot be used as a mechanistic basis for model projections about fish size in a warmer world. © 2017 John Wiley & Sons Ltd.
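
    The sensitivity at issue can be seen from the growth balance on which this class of models rests (a generalized von Bertalanffy form, written here as a sketch using the abstract's symbols rather than the authors' exact equations):

      \[
      \frac{dW}{dt} \;=\; H\,W^{\,d_G} \;-\; k\,W^{\,b},
      \qquad\Rightarrow\qquad
      W_{\infty} \;=\; \left(\frac{H}{k}\right)^{\!\frac{1}{\,b - d_G\,}},
      \]

    so the projected asymptotic weight depends on the reciprocal of (b - d_G). Small changes in either scaling exponent, or in the temperature dependence assigned to H and k, are therefore amplified into very large changes in W∞, which is how re-fitted exponents can yield projections as extreme as the 98% shrinkage quoted above.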

  5. Invalidating Policies using Structural Information

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2013-01-01

    Insider threats are a major threat to many organisations. Even worse, insider attacks are usually hard to detect, especially if an attack is based on actions that the attacker has the right to perform. In this paper we present a step towards detecting the risk of this kind of attack by invalidating policies using structural information of the organisational model. Based on this structural information and a description of the organisation's policies, our approach invalidates the policies and identifies exemplary sequences of actions that lead to a violation of the policy in question. Based on these examples, the organisation can identify real attack vectors that might result in an insider attack. This information can be used to refine access control systems or policies.

  6. The Integration of LNT and Hormesis for Cancer Risk Assessment Optimizes Public Health Protection.

    Science.gov (United States)

    Calabrese, Edward J; Shamoun, Dima Yazji; Hanekamp, Jaap C

    2016-03-01

    This paper proposes a new cancer risk assessment strategy and methodology that optimizes population-based responses by yielding the lowest disease/tumor incidence across the entire dose continuum. The authors argue that the optimization can be achieved by integrating two seemingly conflicting models; i.e., the linear no-threshold (LNT) and hormetic dose-response models. The integration would yield the optimized response at a risk of 10⁻⁴ with the LNT model. The integrative functionality of the LNT and hormetic dose response models provides an improved estimation of tumor incidence through model uncertainty analysis and major reductions in cancer incidence via hormetic model estimates. This novel approach to cancer risk assessment offers significant improvements over current risk assessment approaches by revealing a regulatory sweet spot that maximizes public health benefits while incorporating practical approaches for model validation.

  7. Observations on the Chernobyl Disaster and LNT

    Science.gov (United States)

    Jaworowski, Zbigniew

    2010-01-01

    The Chernobyl accident was probably the worst possible catastrophe of a nuclear power station. It was the only such catastrophe since the advent of nuclear power 55 years ago. It resulted in a total meltdown of the reactor core, a vast emission of radionuclides, and early deaths of only 31 persons. Its enormous political, economic, social and psychological impact was mainly due to deeply rooted fear of radiation induced by the linear non-threshold hypothesis (LNT) assumption. It was a historic event that provided invaluable lessons for nuclear industry and risk philosophy. One of them is the demonstration that, counted per unit of electricity produced, early Chernobyl fatalities amounted to 0.86 deaths/GWe-year, and they were 47 times lower than from hydroelectric stations (∼40 deaths/GWe-year). The accident demonstrated that using the LNT assumption as a basis for protection measures and radiation dose limitations was counterproductive, and led to suffering and pauperization of millions of inhabitants of contaminated areas. The projections of thousands of late cancer deaths based on LNT are in conflict with observations that, in comparison with the general population of Russia, a 15% to 30% deficit of solid cancer mortality was found among the Russian emergency workers, and a 5% deficit of solid cancer incidence among the population of most contaminated areas. PMID:20585443

  8. Observations on the Chernobyl Disaster and LNT.

    Science.gov (United States)

    Jaworowski, Zbigniew

    2010-01-28

    The Chernobyl accident was probably the worst possible catastrophe of a nuclear power station. It was the only such catastrophe since the advent of nuclear power 55 years ago. It resulted in a total meltdown of the reactor core, a vast emission of radionuclides, and early deaths of only 31 persons. Its enormous political, economic, social and psychological impact was mainly due to deeply rooted fear of radiation induced by the linear non-threshold hypothesis (LNT) assumption. It was a historic event that provided invaluable lessons for nuclear industry and risk philosophy. One of them is the demonstration that, counted per unit of electricity produced, early Chernobyl fatalities amounted to 0.86 deaths/GWe-year, and they were 47 times lower than from hydroelectric stations (approximately 40 deaths/GWe-year). The accident demonstrated that using the LNT assumption as a basis for protection measures and radiation dose limitations was counterproductive, and led to suffering and pauperization of millions of inhabitants of contaminated areas. The projections of thousands of late cancer deaths based on LNT are in conflict with observations that, in comparison with the general population of Russia, a 15% to 30% deficit of solid cancer mortality was found among the Russian emergency workers, and a 5% deficit of solid cancer incidence among the population of most contaminated areas.

  9. Origin of the linearity no threshold (LNT) dose-response concept.

    Science.gov (United States)

    Calabrese, Edward J

    2013-09-01

    This paper identifies the origin of the linearity at low-dose concept [i.e., linear no threshold (LNT)] for ionizing radiation-induced mutation. After the discovery of X-ray-induced mutations, Olson and Lewis (Nature 121(3052):673-674, 1928) proposed that cosmic/terrestrial radiation-induced mutations provide the principal mechanism for the induction of heritable traits, providing the driving force for evolution. For this concept to be general, a LNT dose relationship was assumed, with genetic damage proportional to the energy absorbed. Subsequent studies suggested a linear dose response for ionizing radiation-induced mutations (Hanson and Heys in Am Nat 63(686):201-213, 1929; Oliver in Science 71:44-46, 1930), supporting the evolutionary hypothesis. Based on an evaluation of spontaneous and ionizing radiation-induced mutation with Drosophila, Muller argued that background radiation had a negligible impact on spontaneous mutation, discrediting the ionizing radiation-based evolutionary hypothesis. Nonetheless, an expanded set of mutation dose-response observations provided a basis for collaboration between theoretical physicists (Max Delbruck and Gunter Zimmer) and the radiation geneticist Nicolai Timoféeff-Ressovsky. They developed interrelated physical science-based genetics perspectives including a biophysical model of the gene, a radiation-induced gene mutation target theory and the single-hit hypothesis of radiation-induced mutation, which, when integrated, provided the theoretical mechanism and mathematical basis for the LNT model. The LNT concept became accepted by radiation geneticists and recommended by national/international advisory committees for risk assessment of ionizing radiation-induced mutational damage/cancer from the mid-1950s to the present. The LNT concept was later generalized to chemical carcinogen risk assessment and used by public health and regulatory agencies worldwide.
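
    The step from target theory to linearity mentioned above can be written in one line (the standard single-hit form; the notation is generic rather than taken from the original papers):

      \[
      P(\text{mutation}) \;=\; 1 - e^{-\sigma D} \;\approx\; \sigma D
      \qquad \text{for } \sigma D \ll 1,
      \]

    where D is the absorbed dose and σ an effective cross-section of the gene "target". If a single ionizing hit suffices to produce the mutation, the expected response is proportional to dose at low doses and has no threshold, which is exactly the LNT form later carried into risk assessment.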

  10. Racial identity invalidation with multiracial individuals: An instrument development study.

    Science.gov (United States)

    Franco, Marisa G; O'Brien, Karen M

    2018-01-01

    Racial identity invalidation, others' denial of an individual's racial identity, is a salient racial stressor with harmful effects on the mental health and well-being of Multiracial individuals. The purpose of this study was to create a psychometrically sound measure to assess racial identity invalidation for use with Multiracial individuals (N = 497). The present sample was mostly female (75%) with a mean age of 26.52 years (SD = 9.60). The most common racial backgrounds represented were Asian/White (33.4%) and Black/White (23.7%). Participants completed several online measures via Qualtrics. Exploratory factor analyses revealed 3 racial identity invalidation factors: behavior invalidation, phenotype invalidation, and identity incongruent discrimination. A confirmatory factor analysis provided support for the initial factor structure. Alternative model testing indicated that the bifactor model was superior to the 3-factor model. Thus, a total score and/or 3 subscale scores can be used when administering this instrument. Support was found for the reliability and validity of the total scale and subscales. In line with the minority stress theory, challenges with racial identity mediated relationships between racial identity invalidation and mental health and well-being outcomes. The findings highlight the different dimensions of racial identity invalidation and indicate their negative associations with connectedness and psychological well-being. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. A Capacity-Restraint Transit Assignment Model When a Predetermination Method Indicates the Invalidity of Time Independence

    Directory of Open Access Journals (Sweden)

    Haoyang Ding

    2015-01-01

    Full Text Available The statistical independence of the travel times of every two adjacent bus links plays a crucial role in deciding the feasibility of using many mathematical models to analyze urban transit networks. Traditional research generally ignores this time independence, which acts as the foundation of such models: the assumption is usually made that the time independence of every two adjacent links is sound. This is, however, often groundless and can lead to problematic conclusions from the corresponding models. Many transit assignment models, such as multinomial probit-based models, lose their effectiveness when the time independence is not valid. In this paper, a simple method to predetermine the time independence is proposed. Based on the predetermination method, a modified capacity-restraint transit assignment method aimed at engineering practice is put forward and tested through a small contrived network and a case study in Nanjing city, China, respectively. It is found that the slope of the regression equation between the mean and standard deviation of the normal distribution also acts as an indicator of time independence. Besides, our modified assignment method performs better than the traditional one, giving more reasonable results while keeping the property of simplicity.
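
    The idea of predetermining whether adjacent link times can be treated as independent can be illustrated directly from data: under independence the variance of the sum of two link times equals the sum of their variances, so the covariance term measures how badly the assumption fails. The check below uses simulated travel times and illustrates only this principle, not the regression-slope indicator developed in the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical travel times (minutes) on two adjacent bus links that share congestion.
      shared = rng.normal(0.0, 1.0, 1000)
      t1 = 5.0 + rng.normal(0.0, 0.5, 1000) + shared
      t2 = 7.0 + rng.normal(0.0, 0.8, 1000) + shared

      var_sum = np.var(t1 + t2, ddof=1)
      var_if_independent = np.var(t1, ddof=1) + np.var(t2, ddof=1)
      excess = var_sum - var_if_independent          # approximately 2 * Cov(t1, t2)

      print(f"Var(t1+t2) = {var_sum:.2f}, prediction under independence = {var_if_independent:.2f}")
      print(f"Relative excess = {excess / var_if_independent:.1%}; a large value argues against independence")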

  12. Identification of Apolipoprotein N-Acyltransferase (Lnt) in Mycobacteria*

    Science.gov (United States)

    Tschumi, Andreas; Nai, Corrado; Auchli, Yolanda; Hunziker, Peter; Gehrig, Peter; Keller, Peter; Grau, Thomas; Sander, Peter

    2009-01-01

    Lipoproteins of Gram-negative and Gram-positive bacteria carry a thioether-bound diacylglycerol but differ by a fatty acid amide bound to the α-amino group of the universally conserved cysteine. In Escherichia coli the N-terminal acylation is catalyzed by the N-acyltransferase Lnt. Using E. coli Lnt as a query in a BLASTp search, we identified putative lnt genes also in Gram-positive mycobacteria. The Mycobacterium tuberculosis lipoprotein LppX, heterologously expressed in Mycobacterium smegmatis, was N-acylated at the N-terminal cysteine, whereas LppX expressed in a M. smegmatis lnt::aph knock-out mutant was accessible for N-terminal sequencing. Western blot analyses of a truncated and tagged form of LppX indicated a smaller size of about 0.3 kDa in the lnt::aph mutant compared with the parental strain. Matrix-assisted laser desorption ionization time-of-flight/time-of-flight analyses of a trypsin digest of LppX proved the presence of the diacylglycerol modification in both strains, the parental strain and lnt::aph mutant. N-Acylation was found exclusively in the M. smegmatis parental strain. Complementation of the lnt::aph mutant with M. tuberculosis ppm1 restored N-acylation. The substrate for N-acylation is a C16 fatty acid, whereas the two fatty acids of the diacylglycerol residue were identified as C16 and C19:0 fatty acid, the latter most likely tuberculostearic acid. We demonstrate that mycobacterial lipoproteins are triacylated. For the first time to our knowledge, we identify Lnt activity in Gram-positive bacteria and assigned the responsible genes. In M. smegmatis and M. tuberculosis the open reading frames are annotated as MSMEG_3860 and M. tuberculosis ppm1, respectively. PMID:19661058

  13. Identification of apolipoprotein N-acyltransferase (Lnt) in mycobacteria.

    Science.gov (United States)

    Tschumi, Andreas; Nai, Corrado; Auchli, Yolanda; Hunziker, Peter; Gehrig, Peter; Keller, Peter; Grau, Thomas; Sander, Peter

    2009-10-02

    Lipoproteins of Gram-negative and Gram-positive bacteria carry a thioether-bound diacylglycerol but differ by a fatty acid amide bound to the alpha-amino group of the universally conserved cysteine. In Escherichia coli the N-terminal acylation is catalyzed by the N-acyltransferase Lnt. Using E. coli Lnt as a query in a BLASTp search, we identified putative lnt genes also in Gram-positive mycobacteria. The Mycobacterium tuberculosis lipoprotein LppX, heterologously expressed in Mycobacterium smegmatis, was N-acylated at the N-terminal cysteine, whereas LppX expressed in a M. smegmatis lnt::aph knock-out mutant was accessible for N-terminal sequencing. Western blot analyses of a truncated and tagged form of LppX indicated a smaller size of about 0.3 kDa in the lnt::aph mutant compared with the parental strain. Matrix-assisted laser desorption ionization time-of-flight/time-of-flight analyses of a trypsin digest of LppX proved the presence of the diacylglycerol modification in both strains, the parental strain and lnt::aph mutant. N-Acylation was found exclusively in the M. smegmatis parental strain. Complementation of the lnt::aph mutant with M. tuberculosis ppm1 restored N-acylation. The substrate for N-acylation is a C16 fatty acid, whereas the two fatty acids of the diacylglycerol residue were identified as C16 and C19:0 fatty acid, the latter most likely tuberculostearic acid. We demonstrate that mycobacterial lipoproteins are triacylated. For the first time to our knowledge, we identify Lnt activity in Gram-positive bacteria and assigned the responsible genes. In M. smegmatis and M. tuberculosis the open reading frames are annotated as MSMEG_3860 and M. tuberculosis ppm1, respectively.

  14. A review of dosimetry used in epidemiological studies considered to evaluate the linear no-threshold (LNT) dose-response model for radiation protection.

    Science.gov (United States)

    Till, John E; Beck, Harold L; Grogan, Helen A; Caffrey, Emily A

    2017-10-01

    Accurate dosimetry is key to deriving the dose response from radiation exposure in an epidemiological study. It becomes increasingly important to estimate dose as accurately as possible when evaluating low dose and low dose rate, as the calculation of excess relative risk per Gray (ERR/Gy) is very sensitive to the number of excess cancers observed, and this can lead to significant errors if the dosimetry is of poor quality. By including an analysis of the dosimetry, we gain a far better appreciation of the robustness of the work from the standpoint of its value in supporting the shape of the dose response curve at low doses and low dose rates. This article summarizes a review of dosimetry supporting epidemiological studies currently being considered for a re-evaluation of the linear no-threshold assumption as a basis for radiation protection. The dosimetry for each study was evaluated based on important attributes from a dosimetry perspective. Our dosimetry review consisted of dosimetry supporting epidemiological studies published in the literature during the past 15 years. Based on our review, it is clear there is wide variation in the quality of the dosimetry underlying each study. Every study has strengths and weaknesses. The article describes the results of our review, explaining which studies clearly stand out for their strengths as well as common weaknesses among all investigations. The aim is to summarize the review of dosimetry used in epidemiological studies being considered by the National Council on Radiation Protection and Measurements (NCRP) in an evaluation of the linear no-threshold dose-response model that underpins the current framework of radiation protection. The authors evaluated each study using criteria considered important from a dosimetry perspective. The dosimetry analysis was divided into the following categories: (1) general study characteristics, (2) dose assignment, (3) uncertainty, (4) dose confounders, (5) dose validation, and (6) strengths and

  15. Immunohistochemical analysis of LNT, NeuAc2→3LNT, and Lex carbohydrate antigens in human tumors and normal tissues.

    OpenAIRE

    Garin-Chesa, P.; Rettig, W. J.

    1989-01-01

    Monoclonal antibodies (MAbs) K21 and K4 define two carbohydrate determinants, lacto-N-tetraose (LNT) and sialylated LNT (NeuAc2→3LNT), respectively, which are expressed on the surface of cultured human teratocarcinoma cells, but not on a wide range of other epithelial and nonepithelial cells in culture. The present study used immunohistochemical methods to examine LNT and NeuAc2→3LNT expression in normal human tissues, 29 germ cell tumors, and over 200 tumors of other histologic types. ...

  16. LNT IS THE BEST WE CAN DO - TO-DAY

    Science.gov (United States)

    The form of the dose-response curve for radiation-induced cancers, particularly at low doses, is the subject of an ongoing and spirited debate. The present review describes the current database and basis for establishing a low dose, linear no threshold (LNT) mode...

  17. Attack Tree Generation by Policy Invalidation

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2015-01-01

    Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.
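
    A toy sketch of generating an attack by policy invalidation: enumerate short sequences of actions in a small system model and return any sequence whose final state violates the stated policy. The actors, actions, and policy below are invented and far simpler than the socio-technical models the paper targets:

      from itertools import product

      # Minimal system model: each action is a state update available to the insider.
      ACTIONS = {
          "badge_into_server_room": lambda s: {**s, "location": "server_room"},
          "borrow_admin_badge":     lambda s: {**s, "has_badge": True},
          "copy_database":          lambda s: {**s, "data_copied": s["location"] == "server_room"},
      }

      INITIAL = {"location": "lobby", "has_badge": False, "data_copied": False}

      def policy_holds(state):
          # Policy: confidential data must never be copied by the insider.
          return not state["data_copied"]

      def invalidate(max_len=3):
          """Search action sequences up to max_len; return one that violates the policy."""
          for length in range(1, max_len + 1):
              for seq in product(ACTIONS, repeat=length):
                  state = dict(INITIAL)
                  for name in seq:
                      state = ACTIONS[name](state)
                  if not policy_holds(state):
                      return seq   # a concrete attack vector invalidating the policy
          return None

      print(invalidate())   # e.g. ('badge_into_server_room', 'copy_database')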

  18. Response to, "On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith.".

    Science.gov (United States)

    Beyea, Jan

    2016-07-01

    It is not true that successive groups of researchers from academia and research institutions-scientists who served on panels of the US National Academy of Sciences (NAS)-were duped into supporting a linear no-threshold model (LNT) by the opinions expressed in the genetic panel section of the 1956 "BEAR I" report. Successor reports had their own views of the LNT model, relying on mouse and human data, not fruit fly data. Nor was the 1956 report biased and corrupted, as has been charged in an article by Edward J. Calabrese in this journal. With or without BEAR I, the LNT model would likely have been accepted in the US for radiation protection purposes in the 1950's. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. The LNT Debate in Radiation Protection: Science vs. Policy.

    Science.gov (United States)

    Mossman, Kenneth L

    2012-01-01

    There is considerable interest in revisiting LNT theory as the basis for the system of radiation protection in the US and worldwide. Arguing the scientific merits of policy options is not likely to be fruitful because the science is not robust enough to support one theory to the exclusion of others. Current science cannot determine the existence of a dose threshold, a key piece to resolving the matter scientifically. The nature of the scientific evidence is such that risk assessment at small effective doses (below about 100 mSv) remains highly uncertain. This paper therefore treats the LNT debate as a policy question and analyzes the problem from a social and economic perspective. In other words, risk assessment and a strictly scientific perspective are insufficiently broad to resolve the issue completely. A wider perspective encompassing social and economic impacts in a risk management context is necessary, but moving the debate to the policy and risk management arena necessarily marginalizes the role of scientists.

  20. LNT-an apparent rather than a real controversy?

    Energy Technology Data Exchange (ETDEWEB)

    Charles, M W [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

    2006-09-15

    Can the carcinogenic risks of radiation that are observed at high doses be extrapolated to low doses? This question has been debated through the whole professional life of the author, now nearing four decades. In its extreme form the question relates to a particular hypothesis (LNT) used widely by the international community for radiological protection applications. The linear no-threshold (LNT) hypothesis propounds that the extrapolation is linear and that it extends down to zero dose. The debate on the validity of LNT has increased dramatically in recent years. This is in no small part due to concern that exaggerated risks at low doses lead to undue amounts of societal resources being used to reduce man-made human exposure and because of the related growing public aversion to diagnostic and therapeutic medical exposures. The debate appears to be entering a new phase. There is a growing realisation of the limitations of fundamental data and the scientific approach to address this question at low doses. There also appears to be an increasing awareness that the assumptions necessary for a workable and acceptable system of radiological protection at low doses must necessarily be based on considerable pragmatism. Recent developments are reviewed and a historical perspective is given on the general nature of controversies in radiation protection over the years. All the protagonists in the debate will at the end of the day probably be able to claim that they were right! (opinion)

  1. LNT--an apparent rather than a real controversy?

    Science.gov (United States)

    Charles, M W

    2006-09-01

    Can the carcinogenic risks of radiation that are observed at high doses be extrapolated to low doses? This question has been debated through the whole professional life of the author--now nearing four decades. In its extreme form the question relates to a particular hypothesis (LNT) used widely by the international community for radiological protection applications. The linear no-threshold (LNT) hypothesis propounds that the extrapolation is linear and that it extends down to zero dose. The debate on the validity of LNT has increased dramatically in recent years. This is in no small part due to concern that exaggerated risks at low doses lead to undue amounts of societal resources being used to reduce man-made human exposure and because of the related growing public aversion to diagnostic and therapeutic medical exposures. The debate appears to be entering a new phase. There is a growing realisation of the limitations of fundamental data and the scientific approach to address this question at low doses. There also appears to be an increasing awareness that the assumptions necessary for a workable and acceptable system of radiological protection at low doses must necessarily be based on considerable pragmatism. Recent developments are reviewed and a historical perspective is given on the general nature of controversies in radiation protection over the years. All the protagonists in the debate will at the end of the day probably be able to claim that they were right!

  2. Instrumental variables and Mendelian randomization with invalid instruments

    Science.gov (United States)

    Kang, Hyunseung

    inferential results that are robust to mis-specifications of the covariate-outcome model. We also provide a sensitivity analysis should the instrument turn out to be invalid, specifically violating (A3). Fourth, in application work, we study the causal effect of malaria on stunting among children in Ghana. Previous studies on the effect of malaria on stunting were observational and contained various unobserved confounders, most notably nutritional deficiencies. To infer causality, we use the sickle cell genotype, a trait that confers some protection against malaria and was randomly assigned at birth, as an IV and apply our nonparametric IV method. We find that the risk of stunting increases by 0.22 (95% CI: 0.044, 1) for every malaria episode and is sensitive to unmeasured confounders.
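
    A minimal two-stage (Wald/2SLS) sketch of the instrumental-variable idea described here, with the sickle cell genotype as instrument Z, malaria episodes as exposure X and stunting severity as outcome Y. The data are simulated and the linear just-identified estimator shown is the textbook version, not the nonparametric method or the sensitivity analysis developed in the thesis:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 50_000

      # Unobserved confounder (e.g. nutrition) affects both exposure and outcome;
      # the instrument (genotype) is assigned independently of it.
      confounder = rng.normal(size=n)
      z = rng.binomial(1, 0.2, size=n)                            # instrument
      x = rng.poisson(np.exp(0.5 - 0.8 * z + 0.3 * confounder))   # malaria episodes
      y = 0.22 * x + 0.5 * confounder + rng.normal(size=n)        # assumed true causal effect 0.22

      beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)           # confounded comparison
      beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]           # just-identified IV (Wald) estimate

      print(f"OLS estimate: {beta_ols:.2f} (inflated by the confounder)")
      print(f"IV estimate:  {beta_iv:.2f} (near the assumed true effect of 0.22)")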

  3. Radiation hormesis: challenging LNT theory via ecological and evolutionary considerations.

    Science.gov (United States)

    Parsons, Peter A

    2002-04-01

    Ecological and evolutionary considerations suggest that radiation hormesis is made up of two underlying components. The first (a) is background radiation hormesis based upon the background exposure to which all organisms are subjected throughout evolutionary time. The second and much larger component (b) is stress-derived radiation hormesis arising as a protective mechanism derived from metabolic adaptation to environmental stresses throughout evolutionary time especially from climate-based extremes. Since (b) >> (a), hormesis for ionizing radiation becomes an evolutionary expectation at exposures substantially exceeding background. This biological model renders linear no-threshold theory invalid. Accumulating evidence from experimental organisms ranging from protozoa to rodents, and from demographic studies on humans, is consistent with this interpretation. Although hormesis is not universally accepted, the model presented can be subjected to hypothesis-based empirical investigations in a range of organisms. At this stage, however, two consequences follow from this evolutionary model: (1) hormesis does not connote a value judgement usually expressed as a benefit; and (2) there is an emerging and increasingly convincing case for reviewing and relaxing some recommended radiation protection exposure levels in the low range.

  4. Parental Invalidation and the Development of Narcissism.

    Science.gov (United States)

    Huxley, Elizabeth; Bizumic, Boris

    2017-02-17

    Parenting behaviors and childhood experiences have played a central role in theoretical approaches to the etiology of narcissism. Research has suggested an association between parenting and narcissism; however, it has been limited in its examination of different narcissism subtypes and individual differences in parenting behaviors. This study investigates the influence of perceptions of parental invalidation, an important aspect of parenting behavior theoretically associated with narcissism. Correlational and hierarchical regression analyses were conducted using a sample of 442 Australian participants to examine the relationship between invalidating behavior from mothers and fathers, and grandiose and vulnerable narcissism. Results indicate that stronger recollections of invalidating behavior from either mothers or fathers are associated with higher levels of grandiose and vulnerable narcissism when controlling for age, gender, and the related parenting behaviors of rejection, coldness, and overprotection. The lowest levels of narcissism were found in individuals who reported low levels of invalidation in both parents. These findings support the idea that parental invalidation is associated with narcissism.

  5. The LNT-controversy and the concept of "controllable dose".

    Science.gov (United States)

    Kellerer, A M; Nekolla, E A

    2000-10-01

    There is no firm scientific information on the potential health effects, such as increased cancer rates, due to low doses of ionizing radiation. In view of this uncertainty ICRP has adopted as a prudent default option the linear no-threshold (LNT) assumption and has used it to derive nominal risk coefficients. Subsequent steps, such as the comparison of putative fatality rates in radiation workers with observed accident rates in other professions, have given the risk estimates a false appearance of scientific fact. This has encouraged meaningless computations of radiation-induced fatalities in large populations and has caused a trend to measure dose limits for the public not against the magnitude of the natural radiation exposure and its geographic variations, but against the numerical risk estimates. In reaction to this development, opposing claims are being made of a threshold in dose for deleterious health effects in humans. In view of the growing polarization, ICRP is now exploring a new concept, "controllable dose", that aims to abandon the quantity collective dose, emphasizing instead individual dose and, in particular, the control of the maximum individual dose from single sources. Essential features of the new proposal are examined here, and it is concluded that the control of individual dose will still have to be accompanied by the avoidance of unnecessary exposures of large populations, even if their magnitude lies below that acceptable to the individual. If a reasonable cut-off at trivial doses is made, the collective dose can remain useful. Misapplications of collective dose are not the deeper cause of the current controversy; the actual root is the misrepresentation of the LNT-assumption as a scientific fact and the amplification of this confusion by loose terminology. If over-interpretation and distortion are avoided, the current system of radiation protection is workable and essentially sound, and there is no need for a fruitless LNT-controversy.
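    To make the cut-off idea concrete, here is a minimal sketch of collective dose computed with and without a cut-off at trivial individual doses; the dose values and the cut-off level are illustrative assumptions, not figures from the paper.

        # Illustrative only: collective dose with and without a cut-off at trivial doses.
        individual_doses_mSv = [0.001, 0.002, 0.005, 0.3, 1.2, 4.0]   # assumed per-person doses
        trivial_cutoff_mSv = 0.01                                     # assumed cut-off level

        collective_dose = sum(individual_doses_mSv)
        collective_dose_with_cutoff = sum(d for d in individual_doses_mSv
                                          if d >= trivial_cutoff_mSv)

        print(f"Collective dose, all exposures:    {collective_dose:.3f} person-mSv")
        print(f"Collective dose above the cut-off: {collective_dose_with_cutoff:.3f} person-mSv")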

  6. The Use of Lexical Neighborhood Test (LNT) in the Assessment of Speech Recognition Performance of Cochlear Implantees with Normal and Malformed Cochlea.

    Science.gov (United States)

    Kant, Anjali R; Banik, Arun A

    2017-09-01

    The present study aims to use the model-based Lexical Neighborhood Test (LNT) to assess speech recognition performance in early- and late-implanted hearing-impaired children with normal and malformed cochleae. The LNT was administered to 46 children with congenital (prelingual) bilateral severe-profound sensorineural hearing loss using the Nucleus 24 cochlear implant. The children were grouped into Group 1 (early implantees with normal cochlea, EI): n = 15, 3½-6½ years of age, mean age at implantation 3½ years; Group 2 (late implantees with normal cochlea, LI): n = 15, 6-12 years of age, mean age at implantation 5 years; Group 3 (early implantees with malformed cochlea, EIMC): n = 9, 4.9-10.6 years of age, mean age at implantation 3.10 years; and Group 4 (late implantees with malformed cochlea, LIMC): n = 7, 7-12.6 years of age, mean age at implantation 6.3 years. The malformations were: dysplastic cochlea, common cavity, Mondini's malformation, incomplete partition types 1 and 2 (IP-1 and 2), and enlarged IAC. The children were instructed to repeat the words on hearing them, and means of the word and phoneme scores were computed. The LNT can also be used to assess the speech recognition performance of hearing-impaired children with malformed cochleae. When both the easy and hard lists of the LNT are considered, late implantees (with or without normal cochlea) achieved higher word scores than early implantees, but the differences are not statistically significant. Using the LNT to assess speech recognition enables a quantitative as well as descriptive report of the phonological processes used by the children.

  7. Low-night temperature (LNT) induced changes of photosynthesis in grapevine (Vitis vinifera L.) plants.

    Science.gov (United States)

    Bertamini, M; Muthuchelian, K; Rubinigg, M; Zorer, R; Nedunchezhian, N

    2005-07-01

    Changes in leaf pigments, ribulose-1,5-bisphosphate carboxylase (RuBPC) and photosynthetic efficiency were examined in grapevine (Vitis vinifera L.) plants grown under ambient irradiation (maximum daily PAR = 1500 micromol m^-2 s^-1) and exposed for 7 days to a low night temperature (LNT) of 5 degrees C (daily from 18:00 to 06:00). The contents of chlorophyll (Chl) and carotenoids (Car) per fresh mass were lower in LNT leaves than in control leaves. The contents of alpha + beta carotene and lutein-5,6-epoxide remained unaffected, but the de-epoxidation state involving the components of the xanthophyll cycle increased. RuBPC activity and soluble proteins were also significantly reduced in LNT leaves. In isolated thylakoids, a marked inhibition of whole-chain (PS I + PS II) and PS II activity was observed in LNT leaves, whereas only a smaller inhibition of PS I activity was observed. The artificial exogenous electron donors MnCl2, DPC and NH2OH did not restore the loss of PS II activity in LNT leaves. The same results were obtained when F(v)/F(m) was evaluated by Chl fluorescence measurements. The marked loss of PS II activity in LNT leaves was due to a marked loss of D1 protein, as determined by immunological studies.

  8. Growth of non-toxigenic Clostridium botulinum mutant LNT01 in cooked beef: One-step kinetic analysis and comparison with C. sporogenes and C. perfringens.

    Science.gov (United States)

    Huang, Lihan

    2018-05-01

    The objective of this study was to investigate the growth kinetics of Clostridium botulinum LNT01, a non-toxigenic mutant of C. botulinum 62A, in cooked ground beef. The spores of C. botulinum LNT01 were inoculated into ground beef and incubated anaerobically under different temperature conditions to observe growth and develop growth curves. A one-step kinetic analysis method was used to analyze the growth curves simultaneously to minimize the global residual error. The data analysis was performed using the USDA IPMP-Global Fit, with the Huang model as the primary model and the cardinal parameters model as the secondary model. The results of the data analysis showed that the minimum, optimum, and maximum growth temperatures of this mutant are 11.5, 36.4, and 44.3 °C, and the estimated optimum specific growth rate is 0.633 ln CFU/g per h, or 0.275 log CFU/g per h. The maximum cell density is 7.84 log CFU/g. The models and kinetic parameters were validated using additional isothermal and dynamic growth curves. The resulting residual errors of validation followed a Laplace distribution, with about 60% of the residual errors within ±0.5 log CFU/g of experimental observations, suggesting that the models could predict the growth of C. botulinum LNT01 in ground beef with reasonable accuracy. Compared with C. perfringens, C. botulinum LNT01 grows at much slower rates and with much longer lag times. Its growth kinetics are also very similar to those of C. sporogenes in ground beef. The results of computer simulation using the kinetic models showed that, while prolific growth of C. perfringens may occur in ground beef during cooling, no growth of C. botulinum LNT01 or C. sporogenes would occur under the same cooling conditions. The models developed in this study may be used for growth prediction and risk assessment of proteolytic C. botulinum in cooked meats. Published by Elsevier Ltd.
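    As a rough illustration of the secondary model described above, the sketch below evaluates a Rosso-type cardinal temperature equation with the fitted values reported in the abstract (Tmin = 11.5 °C, Topt = 36.4 °C, Tmax = 44.3 °C, optimum rate 0.633 ln CFU/g per h); the exact functional form used in IPMP-Global Fit is an assumption here, not taken from the paper.

        def mu_cardinal(T, T_min=11.5, T_opt=36.4, T_max=44.3, mu_opt=0.633):
            """Cardinal temperature model (Rosso-type) for the specific growth rate.

            Parameter values are those reported in the abstract; the functional
            form is assumed, not quoted from the paper.
            """
            if T <= T_min or T >= T_max:
                return 0.0
            num = (T - T_max) * (T - T_min) ** 2
            den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                                     - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
            return mu_opt * num / den

        for T in (15.0, 25.0, 36.4, 42.0):
            print(f"T = {T:4.1f} C  ->  mu = {mu_cardinal(T):.3f} ln CFU/g per h")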

  9. Regulatory implications of a linear non-threshold (LNT) dose-based risks.

    Science.gov (United States)

    Aleta, C R

    2009-01-01

    Current radiation protection regulatory limits are based on the linear non-threshold (LNT) theory, using health data from atomic bombing survivors. Studies in recent years have sparked debate on the validity of the theory, especially at low doses. The present LNT overestimates radiation risks, since the dosimetry included only acute gammas and neutrons; the role of other bomb-caused factors, e.g. fallout, induced radioactivity, thermal radiation (UVR), electromagnetic pulse (EMP), and blast, was excluded. Studies are proposed to improve the dose-response relationship.

  10. The LNT method for assessing liquid metal embrittlement in the hot-dip galvanizing of steels

    OpenAIRE

    Nygren, Ville

    2015-01-01

    This thesis is part of the Tekes-funded FIMECC BSA project. The project partners were Metropolia, Boliden Kokkola Oy, SSAB Europe Oy, Aurajoki Oy, VTT, Lappeenranta University of Technology, Kuormaväline Oy, and Majava Group Oy. The LNT tests involved Metropolia, Boliden Kokkola Oy, SSAB Europe Oy, and Aurajoki Oy, together with RWTH Aachen and Feldmann + Weynand GmbH. In the experimental part of the thesis, four different SSAB steel products were tested by performing experiments with the LNT method...

  11. 16 CFR 460.24 - Stayed or invalid parts.

    Science.gov (United States)

    2010-01-01

    16 CFR Commercial Practices, Federal Trade Commission Trade Regulation Rules, Labeling and Advertising of Home Insulation, § 460.24 Stayed or invalid parts. If any part of this regulation is stayed or held...

  12. Development of Optimal Catalyst Designs and Operating Strategies for Lean NOx Reduction in Coupled LNT-SCR Systems

    Energy Technology Data Exchange (ETDEWEB)

    Harold, Michael [Univ. of Houston, TX (United States); Crocker, Mark [Univ. of Kentucky, Lexington, KY (United States); Balakotaiah, Vemuri [Univ. of Houston, TX (United States); Luss, Dan [Univ. of Houston, TX (United States); Choi, Jae-Soon [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dearth, Mark [Ford Motor Company, Dearborn, MI (United States); McCabe, Bob [Ford Motor Company, Dearborn, MI (United States); Theis, Joe [Ford Motor Company, Dearborn, MI (United States)

    2013-09-30

    Oxides of nitrogen in the form of nitric oxide (NO) and nitrogen dioxide (NO2), commonly referred to as NOx, are one of the two chemical precursors that lead to ground-level ozone, a ubiquitous air pollutant in urban areas. A major source of NOx is equipment and vehicles powered by diesel engines, which have a combustion exhaust that contains NOx in the presence of excess O2. Catalytic abatement measures that are effective for gasoline-fueled engines, such as the precious-metal-containing three-way catalytic converter (TWC), cannot be used to treat O2-laden exhaust containing NOx. Two catalytic technologies that have emerged as effective for NOx abatement are NOx storage and reduction (NSR) and selective catalytic reduction (SCR). NSR is similar to TWC but requires much larger quantities of expensive precious metals and a sophisticated periodic switching operation, while SCR requires an on-board source of ammonia which serves as the chemical reductant of the NOx. The fact that NSR produces ammonia as a byproduct while SCR requires ammonia to work has led to interest in combining the two to avoid the need for a cumbersome ammonia generation system. In this project a comprehensive study was carried out of the fundamental aspects and application feasibility of combined NSR/SCR. The project team, which included university, industry, and national lab researchers, investigated the kinetics and mechanistic features of the underlying chemistry in the lean NOx trap (LNT) wherein NSR was carried out, with particular focus on identifying the operating conditions, such as temperature, and the catalytic properties which lead to the production of ammonia in the LNT. The performance features of SCR on both model and commercial catalysts focused on the synergy between the LNT and SCR converters in terms of utilizing the upstream-generated ammonia and

  13. On the origins of the linear no-threshold (LNT) dogma by means of untruths, artful dodges and blind faith

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2015-01-01

    This paper is an historical assessment of how prominent radiation geneticists in the United States during the 1940s and 1950s successfully worked to build acceptance for the linear no-threshold (LNT) dose–response model in risk assessment, significantly impacting environmental, occupational and medical exposure standards and practices to the present time. Detailed documentation indicates that actions taken in support of this policy revolution were ideologically driven and deliberately and deceptively misleading; that scientific records were artfully misrepresented; and that people and organizations in positions of public trust failed to perform the duties expected of them. Key activities are described and the roles of specific individuals are documented. These actions culminated in a 1956 report by a Genetics Panel of the U.S. National Academy of Sciences (NAS) on Biological Effects of Atomic Radiation (BEAR). In this report the Genetics Panel recommended that a linear dose response model be adopted for the purpose of risk assessment, a recommendation that was rapidly and widely promulgated. The paper argues that current international cancer risk assessment policies are based on fraudulent actions of the U.S. NAS BEAR I Committee, Genetics Panel and on the uncritical, unquestioning and blind-faith acceptance by regulatory agencies and the scientific community. - Highlights: • The 1956 recommendation of the US NAS to use the LNT for risk assessment was adopted worldwide. • This recommendation is based on a falsification of the research record and represents scientific misconduct. • The record misrepresented the magnitude of panelist disagreement of genetic risk from radiation. • These actions enhanced public acceptance of their risk assessment policy recommendations.

  14. Activation of the NRF2-ARE signalling pathway by the Lentinula edodes polysaccharose LNT alleviates ROS-mediated cisplatin nephrotoxicity.

    Science.gov (United States)

    Chen, Qian; Peng, Huixia; Dong, Lei; Chen, Lijuan; Ma, Xiaobin; Peng, Yuping; Dai, Shejiao; Liu, Qiang

    2016-07-01

    The nephrotoxicity of cisplatin (cis-DDP) limits its general clinical applications. Lentinan (LNT), a dextran extracted from the mushroom Lentinula edodes, has been shown to have multiple pharmacological activities. The primary objective of the current study was to determine whether and how LNT alleviates cis-DDP-induced cytotoxicity in HK-2 cells and nephrotoxicity in mice. LNT did not interfere with cisplatin's anti-tumour efficacy in vitro and functioned cooperatively with cis-DDP to inhibit activity in HeLa and A549 tumour cells. LNT alleviated the cis-DDP-induced decrease in HK-2 cell viability, caspase-3 activation and cleavage of the DNA repair enzyme PARP, decreased HK-2 cell apoptosis and inhibited reactive oxygen species (ROS) accumulation in HK-2 cells. The ROS inhibitor N-acetyl-L-cysteine (NAC) also decreased the apoptosis of HK-2 cells. In addition, LNT significantly prevented cis-DDP-induced kidney injury in vivo. LNT itself could not eliminate ROS in vitro. Further studies demonstrated that LNT induced NF-E2 p45-related factor 2 (Nrf2) protein and mRNA expression in a time- and dose-dependent manner. LNT promoted Nrf2 translocation to the nucleus and binding to the antioxidant-response element (ARE) sequence and induced the transcription and translation of heme oxygenase 1 (HO-1), aldo-keto reductases 1C1 and 1C2 (AKR1C), and NAD(P)H:quinone oxidoreductase 1 (NQO1). Finally, we used hNrf2 siRNA and an Nrf2 agonist (tBHQ) to inhibit or enhance Nrf2 expression. The results demonstrated that the LNT-mediated alleviation of cis-DDP-induced nephrotoxicity was achieved by preventing the accumulation of ROS in a manner that depended on the activation of the Nrf2-ARE signalling pathway. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. The essential Escherichia coli apolipoprotein N-acyltransferase (Lnt) exists as an extracytoplasmic thioester acyl-enzyme intermediate.

    Science.gov (United States)

    Buddelmeijer, Nienke; Young, Ry

    2010-01-19

    Escherichia coli apolipoprotein N-acyltransferase (Lnt) transfers an acyl group from sn-1-glycerophospholipid to the free alpha-amino group of the N-terminal cysteine of apolipoproteins, resulting in mature triacylated lipoprotein. Here we report that the Lnt reaction proceeds through an acyl-enzyme intermediate in which a palmitoyl group forms a thioester bond with the thiol of the active site residue C387 that was cleaved by neutral hydroxylamine. Lnt(C387S) also formed a fatty acyl intermediate that was resistant to neutral hydroxylamine treatment, consistent with formation of an oxygen-ester linkage. Lnt(C387A) did not form an acyl-enzyme intermediate and, like Lnt(C387S), did not have any detectable Lnt activity, indicating that acylation cannot occur at other positions in the catalytic domain. The existence of this thioacyl-enzyme intermediate allowed us to determine whether essential residues in the catalytic domain of Lnt affect the first step of the reaction, the formation of the acyl-enzyme intermediate, or the second step in which the acyl chain is transferred to the apolipoprotein substrate. In the catalytic triad, E267 is required for the formation of the acyl-enzyme intermediate, indicating its role in enhancing the nucleophilicity of C387. E343 is also involved in the first step but is not in close proximity to the active site. W237, Y388, and E389 play a role in the second step of the reaction since acyl-Lnt is formed but N-acylation does not occur. The data presented allow discrimination between the functions of essential Lnt residues in catalytic activity and substrate recognition.

  16. THE ESSENTIAL E. COLI APOLIPOPROTEIN N-ACYLTRANSFERASE (LNT) EXISTS AS AN EXTRACYTOPLASMIC THIOESTER ACYL-ENZYME INTERMEDIATE‡

    Science.gov (United States)

    Buddelmeijer, Nienke; Young, Ry

    2011-01-01

    Escherichia coli apolipoprotein N-acyltransferase (Lnt) transfers an acyl group from sn-1-glycerolphospholipid to the free α-amino group of the N-terminal cysteine of apolipoproteins, resulting in mature triacylated lipoprotein. Here we report that the Lnt reaction proceeds through an acyl enzyme intermediate in which a palmitoyl group forms a thioester bond with the thiol of active site residue C387 that was cleaved by neutral hydroxylamine. Lnt(C387S) also formed a fatty acyl intermediate that was resistant to neutral hydroxylamine treatment, consistent with formation of an oxygen-ester linkage. Lnt(C387A) did not form an acyl enzyme intermediate and, like Lnt(C387S), did not have any detectable Lnt activity, indicating that acylation can not occur at other positions in the catalytic domain. The existence of this thioacyl-enzyme intermediate allowed us to determine whether essential residues in the catalytic domain of Lnt affect the first step of the reaction, the formation of the acyl enzyme intermediate, or the second step in which the acyl chain is transferred to apolipoprotein substrate. In the catalytic triad, E267 is required for the formation of the acyl-enzyme intermediate, indicating its role in enhancing the nucleophilicity of C387. E343 is also involved in the first step but is not in close proximity to the active site. W237, Y388 and E389 play a role in the second step of the reaction since acyl-Lnt is formed but N-acylation does not occur. The data presented allow discrimination between the functions of essential Lnt residues in catalytic activity and substrate recognition. PMID:20000742

  17. THE ESSENTIAL E. COLI APOLIPOPROTEIN N-ACYLTRANSFERASE (LNT) EXISTS AS AN EXTRACYTOPLASMIC THIOESTER ACYL-ENZYME INTERMEDIATE‡

    OpenAIRE

    Buddelmeijer, Nienke; Young, Ry

    2010-01-01

    Escherichia coli apolipoprotein N-acyltransferase (Lnt) transfers an acyl group from sn-1-glycerolphospholipid to the free α-amino group of the N-terminal cysteine of apolipoproteins, resulting in mature triacylated lipoprotein. Here we report that the Lnt reaction proceeds through an acyl enzyme intermediate in which a palmitoyl group forms a thioester bond with the thiol of active site residue C387 that was cleaved by neutral hydroxylamine. Lnt(C387S) also formed a fatty acyl intermediate t...

  18. The (un)clear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel Gressmann

    2016-03-01

    Full Text Available Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that the inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations should be treated with some caution at present.

  19. Perspective on the use of LNT for radiation protection and risk assessment by the U.S. Environmental Protection Agency.

    Science.gov (United States)

    Puskin, Jerome S

    2009-08-21

    The U.S. Environmental Protection Agency (EPA) bases its risk assessments, regulatory limits, and nonregulatory guidelines for population exposures to low level ionizing radiation on the linear no-threshold (LNT) hypothesis, which assumes that the risk of cancer due to a low dose exposure is proportional to dose, with no threshold. The use of LNT for radiation protection purposes has been repeatedly endorsed by authoritative scientific advisory bodies, including the National Academy of Sciences' BEIR Committees, whose recommendations form a primary basis of EPA's risk assessment methodology. Although recent radiobiological findings indicate novel damage and repair processes at low doses, LNT is supported by data from both epidemiology and radiobiology. Given the current state of the science, the consensus positions of key scientific and governmental bodies, as well as the conservatism and calculational convenience of the LNT assumption, it is unlikely that EPA will modify this approach in the near future.
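    The LNT assumption described above, and the threshold alternative debated elsewhere in these records, can be contrasted in a two-line sketch; the risk coefficient and the threshold dose used below are arbitrary placeholders for illustration, not EPA values.

        def excess_risk_lnt(dose_mSv, risk_per_mSv=1e-5):
            # Linear no-threshold: excess risk proportional to dose, with no threshold.
            return risk_per_mSv * dose_mSv

        def excess_risk_threshold(dose_mSv, threshold_mSv=100.0, risk_per_mSv=1e-5):
            # Threshold alternative: no excess risk below the threshold dose.
            return risk_per_mSv * max(0.0, dose_mSv - threshold_mSv)

        for d in (1, 10, 100, 1000):
            print(d, excess_risk_lnt(d), excess_risk_threshold(d))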

  20. Molecular biology, epidemiology, and the demise of the linear no-threshold (LNT) hypothesis.

    Science.gov (United States)

    Pollycove, M; Feinendegen, L E

    1999-01-01

    The prime concern of radiation protection policy since 1959 has been protecting DNA from damage. The 1995 NCRP Report 121 on collective dose states that since no human data provides direct support for the linear no threshold hypothesis (LNT), and some studies provide quantitative data that, with statistical significance, contradict LNT, ultimately, confidence in LNT is based on the biophysical concept that the passage of a single charged particle could cause damage to DNA that would result in cancer. Current understanding of the basic molecular biologic mechanisms involved and recent data are examined before presenting several statistically significant epidemiologic studies that contradict the LNT hypothesis. Over eons of time a complex biosystem evolved to control the DNA alterations (oxidative adducts) produced by about 10^10 free radicals/cell/d derived from 2-3% of all metabolized oxygen. Antioxidant prevention, enzymatic repair of DNA damage, and removal of persistent DNA alterations by apoptosis, differentiation, necrosis, and the immune system, sequentially reduce DNA damage from about 10^6 DNA alterations/cell/d to about 1 mutation/cell/d. These mutations accumulate in stem cells during a lifetime with progressive DNA damage-control impairment associated with aging and malignant growth. A comparatively negligible number of mutations, an average of about 10^-7 mutations/cell/d, is produced by low LET radiation background of 0.1 cGy/y. The remarkable efficiency of this biosystem is increased by the adaptive responses to low-dose ionizing radiation. Each of the sequential functions that prevent, repair, and remove DNA damage are adaptively stimulated by low-dose ionizing radiation in contrast to their impairment by high-dose radiation. The biologic effect of radiation is not determined by the number of mutations it creates, but by its effect on the biosystem that controls the relentless enormous burden of oxidative DNA damage. At low doses, radiation
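    The quantitative comparison above reduces to a single ratio; the sketch below simply restates the abstract's figures (about 1 surviving mutation/cell/d from endogenous oxidative damage versus about 10^-7 mutations/cell/d from a 0.1 cGy/y low-LET background) and divides one by the other.

        endogenous_mutations_per_cell_day = 1.0     # after repair and removal, per the abstract
        background_mutations_per_cell_day = 1e-7    # from ~0.1 cGy/y low-LET background, per the abstract

        ratio = endogenous_mutations_per_cell_day / background_mutations_per_cell_day
        print(f"Endogenous burden is ~{ratio:.0e} times the background-radiation contribution")
        # -> ~1e+07, i.e. roughly ten million to one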

  1. Invalidation in patients with rheumatic diseases: Clinical and psychological framework

    NARCIS (Netherlands)

    Santiago, M.G.; Marques, A.; Kool, M.B.; Geenen, R.; Da Silva, J.A.P.

    2017-01-01

    Objective. The term “invalidation” refers to the patients’ perception that their medical condition is not recognized by the social environment. Invalidation can be a major issue in patients’ lives, adding a significant burden to symptoms and limitations while increasing the risk of physical and

  2. Intriguing legacy of Einstein, Fermi, Jordan, and others: the possible invalidation of quark conjectures

    International Nuclear Information System (INIS)

    Santilli, R.M.

    1981-01-01

    The objective of this paper is to present an outline of a number of criticisms of the quark models of hadron structure which have been present in the community of basic research for some time. The hope is that quark supporters will consider these criticisms and present possible counterarguments for a scientifically effective resolution of the issues. In particular, it is submitted that the problem of whether quarks exist as physical particles necessarily calls for the prior theoretical and experimental resolution of the question of the validity or invalidity, for hadronic structure, of the relativity and quantum mechanical laws established for atomic structure. The current theoretical studies leading to the conclusion that they are invalid are considered, together with the experimental situation. We also recall the doubts by Einstein, Fermi, Jordan, and others on the final character of contemporary physical knowledge. Most of all, this paper is an appeal to young minds of all ages. The possible invalidity for the strong interactions of the physical laws of the electromagnetic interactions, rather than constituting a scientific drawback, represents instead an invaluable impetus toward the search for covering laws specifically conceived for hadronic structure and strong interactions in general, a program which has already been initiated by a number of researchers. In turn, this situation appears to have all the ingredients for a new scientific renaissance, perhaps comparable to that of the early part of this century

  3. [Assessment of invalidity as a result of infectious diseases].

    Science.gov (United States)

    Čeledová, L; Čevela, R; Bosák, M

    2016-01-01

    The article features the new medical assessment paradigm for invalidity as a result of infectious disease, which has been applied since 1 January 2010. The invalidity assessment criteria are regulated specifically by Regulation No. 359/2009. Chapter I of the Annexe to the invalidity assessment regulation addresses the area of infectious diseases with respect to functional impairment and its impact on the quality of life. Since 2010, invalidity has also been newly categorized into three groups. The new assessment approach makes it possible to evaluate a person's functional capacity, type of disability, and eligibility for compensation for reduced capacity for work. In 2010, a total of 170 375 invalidity cases were assessed, and in 2014, 147 121 invalidity assessments were made. Invalidity as a result of infectious disease was assessed in 177 persons in 2010, and 128 such assessments were made in 2014. The most common causes of invalidity as a result of infectious disease are chronic viral hepatitis, other spirochetal infections, tuberculosis of the respiratory tract, tick-borne viral encephalitis, and HIV/AIDS. The number of assessments of invalidity as a result of infectious disease showed a declining trend between 2010 and 2014, similarly to the total number of invalidity assessments. In spite of this fact, cases of invalidity as a result of infectious disease account for approximately half a percent of all invalidity assessments made in the above-mentioned period.

  4. Does overprotection cause cardiac invalidism after acute myocardial infarction?

    Science.gov (United States)

    Riegel, B J; Dracup, K A

    1992-01-01

    To determine if overprotection on the part of the patient's family and friends contributes to the development of cardiac invalidism after acute myocardial infarction. Longitudinal survey. Nine hospitals in the southwestern United States. One hundred eleven patients who had experienced a first acute myocardial infarction. Subjects were predominantly male, older, married, Caucasian, and in functional class I. Eighty-one patients characterized themselves as being overprotected (i.e., receiving more social support from family and friends than desired), and 28 reported receiving inadequate support. Only two patients reported receiving as much support as they desired. Outcome measures were self-esteem, emotional distress, health perceptions, interpersonal dependency, and return to work. Overprotected patients experienced less anxiety, depression, anger, and confusion, more vigor, and higher self-esteem than inadequately supported patients 1 month after myocardial infarction. Overprotection on the part of family and friends may facilitate psychosocial adjustment in the early months after an acute myocardial infarction rather than lead to cardiac invalidism.

  5. Rapid synthesis of Eu3+-doped LNT (Li-Nb-Ti-O) phosphor by millimeter-wave heating

    Science.gov (United States)

    Nakano, Hiromi; Ozono, Keita; Saji, Tasaburo; Miyake, Syoji; Hayashi, Hiroyuki

    2013-09-01

    We have investigated a new phosphor based on LNT (Li1+x-yNb1-x-3yTix+4yO3, 0.11 ⩽ x ⩽ 0.33, 0 ⩽ y ⩽ 0.09) with a superstructure as the host material. Previously, Eu3+-doped LNT had been prepared by sintering at 1373 K for 24 h in a conventional electric furnace. A faster synthesis technique that uses less energy is required for practical application of these materials. In the present study, the Eu3+-doped LNT phosphor has been successfully synthesized by millimeter-wave (MM) heating at 1173 K for 1 h. A bright red emission was observed at an excitation wavelength of 398 nm. The maximum emission peak, observed at around 625 nm, is associated with the intra-4f-shell 5D0-7F2 transition in Eu3+ ions. The photoluminescence (PL) intensity of the specimen prepared by MM heating was nearly equivalent to that of the specimen obtained by electric furnace heating. MM radiation can therefore be considered a highly efficient, energy-saving method for the formation of Eu3+-doped LNT phosphors.

  6. Has the prevalence of invalidating musculoskeletal pain changed over the last 15 years (1993-2006)? A Spanish population-based survey.

    Science.gov (United States)

    Jiménez-Sánchez, Silvia; Jiménez-García, Rodrigo; Hernández-Barrera, Valentín; Villanueva-Martínez, Manuel; Ríos-Luna, Antonio; Fernández-de-las-Peñas, César

    2010-07-01

    The aim of the current study was to estimate the prevalence and time trend of invalidating musculoskeletal pain in the Spanish population and its association with socio-demographic factors, lifestyle habits, self-reported health status, and comorbidity with other diseases, analyzing data from the 1993-2006 Spanish National Health Surveys (SNHS). We analyzed individualized data taken from the SNHS conducted in 1993 (n = 20,707), 2001 (n = 21,058), 2003 (n = 21,650) and 2006 (n = 29,478). Invalidating musculoskeletal pain was defined as pain suffered during the preceding 2 weeks that decreased main working activity or free-time activity by at least half a day. We analyzed socio-demographic characteristics, self-perceived health status, lifestyle habits, and comorbid conditions using multivariate logistic regression models. Overall, the prevalence of invalidating musculoskeletal pain in Spanish adults was 6.1% (95% CI, 5.7-6.4) in 1993, 7.3% (95% CI, 6.9-7.7) in 2001, 5.5% (95% CI, 5.1-5.9) in 2003 and 6.4% (95% CI, 6-6.8) in 2006. The prevalence of invalidating musculoskeletal pain among women was almost twice that of men in every year. Education on postural hygiene, physical exercise, and the prevention of obesity and sedentary lifestyle habits should be provided by the Public Health Services. This population-based study indicates that invalidating musculoskeletal pain that reduces main working activity is a public health problem in Spain. The prevalence of invalidating musculoskeletal pain was higher in women than in men and was associated with lower income, poor sleep, worse self-reported health status, and other comorbid conditions. Further, the prevalence of invalidating musculoskeletal pain increased from 1993 to 2001, but remained stable over the later years (2001 to 2006).
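    To illustrate where confidence intervals such as "6.1% (95% CI, 5.7-6.4)" come from, the sketch below computes a simple normal-approximation interval from the 1993 sample size; the study itself used a complex survey design and weighting, so this unweighted calculation is only an approximation of that procedure.

        import math

        def prevalence_ci(p_hat, n, z=1.96):
            # Normal-approximation 95% CI for a proportion (ignores the survey design).
            se = math.sqrt(p_hat * (1.0 - p_hat) / n)
            return p_hat - z * se, p_hat + z * se

        low, high = prevalence_ci(0.061, 20707)   # 1993 SNHS: prevalence 6.1%, n = 20,707
        print(f"6.1% (95% CI, {100 * low:.1f}-{100 * high:.1f})")   # about 5.8-6.4, close to the reported 5.7-6.4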

  7. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    Full Text Available The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counter example for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  9. Can Invalid Bioactives Undermine Natural Product-Based Drug Discovery?

    Science.gov (United States)

    2015-01-01

    High-throughput biology has contributed a wealth of data on chemicals, including natural products (NPs). Recently, attention was drawn to certain, predominantly synthetic, compounds that are responsible for disproportionate percentages of hits but are false actives. Spurious bioassay interference led to their designation as pan-assay interference compounds (PAINS). NPs lack comparable scrutiny, which this study aims to rectify. Systematic mining of 80+ years of the phytochemistry and biology literature, using the NAPRALERT database, revealed that only 39 compounds represent the NPs most reported by occurrence, activity, and distinct activity. Over 50% are not explained by phenomena known for synthetic libraries, and all had manifold ascribed bioactivities, designating them as invalid metabolic panaceas (IMPs). Cumulative distributions of ∼200,000 NPs uncovered that NP research follows power-law characteristics typical for behavioral phenomena. Projection into occurrence–bioactivity–effort space produces the hyperbolic black hole of NPs, where IMPs populate the high-effort base. PMID:26505758

  10. A-LNT: A Wireless Sensor Network Platform for Low-Power Real-Time Voice Communications

    Directory of Open Access Journals (Sweden)

    Yong Fu

    2014-01-01

    Full Text Available Combining wireless sensor networks and voice communication into a multidata hybrid wireless network suggests possible applications in numerous fields. However, voice communication and sensor data transmission differ significantly, and high-speed, massive, real-time voice data processing poses challenges for hardware design, protocol design, and especially power management. In this paper, we present A-LNT, a wireless audio sensor network platform, and discuss the key elements of its systematic design and implementation: node hardware design, low-power voice codec and processing, wireless network topology, superframe-based hybrid MAC protocol design, radio channel allocation, and clock synchronization. Furthermore, we discuss energy management methods such as address filtering and efficient power management in detail. The experimental and simulation results show that A-LNT is a lightweight, low-power, low-speed, and high-performance wireless sensor network platform for multichannel real-time voice communications.

  11. Can we put aside the LNT dilemma by the introduction of the controllable dose?

    Science.gov (United States)

    Koblinger, L

    2000-03-01

    Recently, Professor R Clarke, ICRP Chairman, published his proposal for a renewal of the basic radiation protection concept (1999 J. Radiol. Prot. 19 107-15). The two main points of his proposed system are: (a) the term 'controllable dose' is introduced and (b) the protection philosophy is based on the individual. For practical use, terms like 'action level', 'investigation level', etc. are introduced. The outline of the new system promises a much less complex framework: no distinction between practices and interventions, and unified treatment of occupational, medical and public exposures. There is, however, an inconsistency within the new system: though linearity is not assumed, the relations between the definitions of the new terms of the system of protection and the doses assigned to them are still based on the LNT hypothesis. To avoid this inconsistency, a new definition of the action level is recommended, as a conservative estimate of the lowest dose at which harmful effects have ever been demonstrated. Other levels should then be defined from the action level and safety factors applied to the doses.

  12. Perceived family and peer invalidation as predictors of adolescent suicidal behaviors and self-mutilation.

    Science.gov (United States)

    Yen, Shirley; Kuehn, Kevin; Tezanos, Katherine; Weinstock, Lauren M; Solomon, Joel; Spirito, Anthony

    2015-03-01

    The present study investigates the longitudinal relationship between perceived family and peer invalidation and adolescent suicidal events (SE) and self-mutilation (SM) in a 6 month follow-up (f/u) study of adolescents admitted to an inpatient psychiatric unit for suicide risk. Adolescents (n=119) and their parent(s) were administered interviews and self-report assessments at baseline and at a 6 month f/u, with 99 (83%) completing both assessments. The Adolescent Longitudinal Interval Follow-Up Evaluation (A-LIFE) was modified to provide weekly ratings (baseline and each week of f/u) for perceived family and peer invalidation. Regression analyses examined whether: 1) Prospectively rated perceived family and peer invalidation at baseline predicted SE and SM during f/u; and 2) chronicity of perceived invalidation operationalized as proportion of weeks at moderate to high invalidation during f/u was associated with SE and SM during f/u. Multiple regression analyses, controlling for previously identified covariates, revealed that perceived family invalidation predicted SE over f/u for boys only and perceived peer invalidation predicted SM over f/u in the overall sample. This was the case for both baseline and f/u ratings of perceived invalidation. Our results demonstrate the adverse impact of perceived family and peer invalidation. Specifically, boys who experienced high perceived family invalidation were more likely to have an SE over f/u. Both boys and girls who experienced high perceived peer invalidation were more likely to engage in SM over f/u.

  13. 30 CFR 253.50 - How can MMS refuse or invalidate my OSFR evidence?

    Science.gov (United States)

    2010-07-01

    30 CFR Mineral Resources, § 253.50 How can MMS refuse or invalidate my OSFR evidence? (a) If MMS determines that any OSFR evidence you submit... acceptable evidence without being subject to civil penalty under § 253.51. (b) MMS may immediately and...

  14. 20 CFR 656.30 - Validity of and invalidation of labor certifications.

    Science.gov (United States)

    2010-04-01

    20 CFR Employees' Benefits, Labor Certification Process, § 656.30 Validity of and invalidation of labor certifications. (a) Priority date. (1) The... of Homeland Security within 180 calendar days of July 16, 2007. (c) Scope of validity. For...

  15. Implications of invalidity of Data Retention Directive to telecom operators

    Directory of Open Access Journals (Sweden)

    Darja LONČAR DUŠANOVIĆ

    2014-12-01

    Full Text Available The obligation for telecom operators to retain traffic and location data for crime-fighting purposes has been controversial ever since the adoption of the Data Retention Directive in 2006, because of its inherent negative impact on the fundamental rights to privacy and personal data protection. However, the awaited judgment of the CJEU in April this year, which declared the Directive invalid, has so far not resolved the ambiguity of the issue. Namely, given that half a year later some countries have not yet amended their national data retention legislation to comply with the aforementioned CJEU judgment, telecom operators, as the addressees of this obligation, are in an uncertain legal situation which could be called a "lose-lose" situation. Also, the emphasis has shifted from the question of proportionality between data privacy and public security to the question of the existence of a valid legal basis for data processing (retaining data and providing them to authorities) in the new legal environment in which national and EU law are still not in compliance. In this paper the author examines the implications of the CJEU judgment for national EU legislation, telecom operators and data subjects, providing a comparative analysis of the status of national data retention legislation in EU member states. The existence of a valid legal basis for data processing is examined within EU law sources, including the proposed EU General Data Protection Regulation and opinions of the relevant data protection bodies (e.g. the Article 29 Working Party).

  16. Effect of exhaust gases of Exhaust Gas Recirculation (EGR) coupling lean-burn gasoline engine on NOx purification of Lean NOx trap (LNT)

    Science.gov (United States)

    Liu, Lei; Li, Zhijun; Liu, Shiyu; Shen, Boxi

    2017-03-01

    Based on previous experimental research on the application of Exhaust Gas Recirculation (EGR) and the Lean NOx Trap (LNT), their effects on NOx emission control, and secondary development of the CHEMKIN software, an integrated NOx purification chemical kinetics mechanism covering the NOx adsorption, NOx desorption and NOx reduction processes of the LNT was created based on the actual exhaust gases of a lean-burn gasoline engine. The effect of exhaust gases on the NOx purification deterioration of the LNT was investigated by modifying H2, O2 and the overlap phase in the mechanism of the NOx desorption and NOx reduction processes. The research found that an LNT inlet temperature of around 300 °C gives the best NOx adsorption performance compared with 200 °C and 400 °C. Pt plays an important role in the processes of NOx adsorption and NOx reduction. The order of reductive capability of the complexes formed between Pt and H2, CO and HC is Pt-H2 > Pt-CO > Pt-C3H6. Both CO2 and H2O(g) can deteriorate the NOx purification of the LNT. The deterioration caused by H2O(g) is not as significant as that caused by CO2, but is harder to regenerate from. O2 can be beneficial to the NOx adsorption process, but it can also weaken the reductive atmosphere during the NOx desorption and NOx reduction processes.

  17. Lymph Node Transplantation Decreases Swelling and Restores Immune Responses in a Transgenic Model of Lymphedema.

    Directory of Open Access Journals (Sweden)

    Jung-Ju Huang

    Full Text Available Secondary lymphedema is a common complication of cancer treatment and recent studies have demonstrated that lymph node transplantation (LNT can decrease swelling, as well as the incidence of infections. However, although these results are exciting, the mechanisms by which LNT improves these pathologic findings of lymphedema remain unknown. Using a transgenic mouse model of lymphedema, this study sought to analyze the effect of LNT on lymphatic regeneration and T cell-mediated immune responses.We used a mouse model in which the expression of the human diphtheria toxin receptor is driven by the FLT4 promoter to enable the local ablation of the lymphatic system through subdermal hindlimb diphtheria toxin injections. Popliteal lymph node dissection was subsequently performed after a two-week recovery period, followed by either orthotopic LNT or sham surgery after an additional two weeks. Hindlimb swelling, lymphatic vessel regeneration, immune cell trafficking, and T cell-mediated immune responses were analyzed 10 weeks later.LNT resulted in a marked decrease in hindlimb swelling, fibroadipose tissue deposition, and decreased accumulation of perilymphatic inflammatory cells, as compared to controls. In addition, LNT induced a marked lymphangiogenic response in both capillary and collecting lymphatic vessels. Interestingly, the resultant regenerated lymphatics were abnormal in appearance on lymphangiography, but LNT also led to a notable increase in dendritic cell trafficking from the periphery to the inguinal lymph nodes and improved adaptive immune responses.LNT decreases pathological changes of lymphedema and was shown to potently induce lymphangiogenesis. Lymphatic vessels induced by LNT were abnormal in appearance, but were functional and able to transport antigen-presenting cells. Animals treated with LNT have an increased ability to mount T cell-mediated immune responses when sensitized to antigens in the affected hindlimb.

  18. Lymph Node Transplantation Decreases Swelling and Restores Immune Responses in a Transgenic Model of Lymphedema.

    Science.gov (United States)

    Huang, Jung-Ju; Gardenier, Jason C; Hespe, Geoffrey E; García Nores, Gabriela D; Kataru, Raghu P; Ly, Catherine L; Martínez-Corral, Inés; Ortega, Sagrario; Mehrara, Babak J

    2016-01-01

    Secondary lymphedema is a common complication of cancer treatment and recent studies have demonstrated that lymph node transplantation (LNT) can decrease swelling, as well as the incidence of infections. However, although these results are exciting, the mechanisms by which LNT improves these pathologic findings of lymphedema remain unknown. Using a transgenic mouse model of lymphedema, this study sought to analyze the effect of LNT on lymphatic regeneration and T cell-mediated immune responses. We used a mouse model in which the expression of the human diphtheria toxin receptor is driven by the FLT4 promoter to enable the local ablation of the lymphatic system through subdermal hindlimb diphtheria toxin injections. Popliteal lymph node dissection was subsequently performed after a two-week recovery period, followed by either orthotopic LNT or sham surgery after an additional two weeks. Hindlimb swelling, lymphatic vessel regeneration, immune cell trafficking, and T cell-mediated immune responses were analyzed 10 weeks later. LNT resulted in a marked decrease in hindlimb swelling, fibroadipose tissue deposition, and decreased accumulation of perilymphatic inflammatory cells, as compared to controls. In addition, LNT induced a marked lymphangiogenic response in both capillary and collecting lymphatic vessels. Interestingly, the resultant regenerated lymphatics were abnormal in appearance on lymphangiography, but LNT also led to a notable increase in dendritic cell trafficking from the periphery to the inguinal lymph nodes and improved adaptive immune responses. LNT decreases pathological changes of lymphedema and was shown to potently induce lymphangiogenesis. Lymphatic vessels induced by LNT were abnormal in appearance, but were functional and able to transport antigen-presenting cells. Animals treated with LNT have an increased ability to mount T cell-mediated immune responses when sensitized to antigens in the affected hindlimb.

  19. A Study on the Optimal Injection Conditions for an HC-LNT Catalyst System with a 12-Hole Type Injector

    Science.gov (United States)

    Oh, Jungmo; Lee, Kihyung; Lee, Jinha

    NOx catalytic converter systems periodically require rich or stoichiometric operating conditions to reduce NOx. The HC (hydrocarbon) concentration in a diesel engine is typically so low that the HC is not sufficient for NOx conversion. It was proposed that a rich air-fuel ratio in a diesel engine could be realized via post fuel injection or supplemental fuel injection into the exhaust gas. A new method that optimizes the control of an external HC injection into the diesel exhaust pipe for HC-type LNT (Lean NOx Trap) catalyst systems has been developed. The external injection has several benefits: it can be controlled independently without disturbing engine control, it can be adapted to various exhaust system layouts, and it avoids oil dilution problems, among others. In this study, the concentration and amount of HC were controlled via control of the external injection. This research investigated the spray behavior of hydrocarbons injected into a transparent exhaust pipe. From this experiment, we obtained useful information about the optimal injection conditions for the HC-LNT catalyst system with an MPI injector.

  20. Age and test setting affect the prevalence of invalid baseline scores on neurocognitive tests.

    Science.gov (United States)

    Lichtenstein, Jonathan D; Moser, Rosemarie Scolaro; Schatz, Philip

    2014-02-01

    Prevalence rates of invalid baseline scores on computerized neurocognitive assessments for high school, collegiate, and professional athletes have been published in the literature. At present, there is limited research on the prevalence of invalid baseline scores in pre-high school athletes. Pre-high school athletes assessed with baseline neurocognitive tests would show higher prevalence rates of invalidity than older youth athletes, and those athletes, regardless of age, who were tested in a large group setting would show a higher prevalence rate of invalidity than athletes tested in a small group setting. Cross-sectional study; Level of evidence, 3. A total of 502 athletes between the ages of 10 and 18 years completed preseason baseline neurocognitive tests in "large" or "small" groups. All athletes completed the online version of ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing). Baseline test results that were "flagged" by the computer software as being of suspect validity and labeled with a "++" symbol were identified for analysis. Participants were retrospectively assigned to 2 independent groups: large group or small group. Test administration of the large group occurred off-site in groups of approximately 10 athletes, and test administration of the small group took place at a private-practice neuropsychology center with only 1 to 3 athletes present. Chi-square analyses identified a significantly greater proportion of participants obtaining invalid baseline results on the basis of age; younger athletes produced significantly more invalid baseline scores (7.0%, 17/244) than older athletes (2.7%, 7/258) (χ2 (1) = 4.99; P = .021). Log-linear analysis revealed a significant age (10-12 years, 13-18 years) × size (small, large) interaction effect (χ2 (4) = 66.1; P < .001) on the prevalence of invalidity, whereby younger athletes tested in larger groups were significantly more likely to provide invalid results (11.9%) than younger athletes
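    The chi-square result quoted above (17/244 invalid baselines among younger athletes versus 7/258 among older athletes, chi-square(1) = 4.99, P = .021) can be approximately reproduced from the 2x2 table; the sketch below assumes a Pearson chi-square without Yates' continuity correction, which matches the reported statistic.

        from scipy.stats import chi2_contingency

        # Rows: younger (10-12 y) vs older (13-18 y); columns: invalid vs valid baselines.
        table = [[17, 244 - 17],
                 [7, 258 - 7]]

        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}")
        # chi2 comes out at ~4.99, matching the report; the exact P depends on the correction and software used.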

  1. Health Effects of High Radon Environments in Central Europe: Another Test for the LNT Hypothesis?

    Science.gov (United States)

    Becker, Klaus

    2003-01-01

    For patients with Bechterew's disease (ankylosing spondylitis), radon treatments are beneficial, with the positive effect lasting until at least 6 months after the normally 3-week treatment by inhalation or baths. Studies on the mechanism of these effects are progressing. In other cases of extensive use of radon treatment for a wide spectrum of diseases, for example in the former Soviet Union, the positive results are not so well established. However, according to a century of radon treatment experience (after millennia of unrecognized radon therapy), in particular in Germany and Austria, the positive medical effects for some diseases far exceed any potential detrimental health effects. The total amount of available data in this field is too large to be covered in a brief review. Therefore, less well known - in particular recent - work from Central Europe has been analyzed in an attempt to summarize new developments and trends. This includes cost/benefit aspects of radon reduction programs. As a test case for the LNT (linear non-threshold) hypothesis and possible biopositive effects of low radiation exposures, the data support a nonlinear human response to low and medium-level radon exposures.

  2. The Defencelessness and Inmotivation as Causes of Invalidity of the Arbitration Award in the Venezuelan Law

    Directory of Open Access Journals (Sweden)

    María Candelaria Domínguez Guillén

    2016-12-01

    Full Text Available This article addresses defencelessness as a cause of invalidity of the arbitration award in Venezuelan law and in other legal systems. The lack of reasoning ("immotivation") is also analyzed as a cause of invalidity of the arbitration award. Although Venezuelan law seems to qualify this requirement through the intervention of party autonomy, it is concluded that the motivation of the arbitration award is a matter of public policy, including for decisions in equity, as a manifestation of due process and of the right to defense, and for the sake of social peace.

  3. Truncated forms of the second-rank orthorhombic Hamiltonians used in magnetism and electron magnetic resonance (EMR) studies are invalid-Why it went unnoticed for so long?

    International Nuclear Information System (INIS)

    Rudowicz, Czeslaw

    2009-01-01

    This paper deals with the truncated forms of the second-rank orthorhombic Hamiltonians employed in magnetism and electron magnetic resonance (EMR) studies. Consideration of the intrinsic features of orthorhombic Hamiltonians reveals that the truncations, which consist in omission of one of three interdependent orthorhombic terms, are fundamentally invalid. Implications of the invalid truncations are: loss of generality of quantized spin models, misinterpretation of physical properties of systems studied (e.g. maximum rhombicity ratio and relative parameter values), and inconsistent notations for Hamiltonian parameters that hamper direct comparison of data from various sources. Truncated Hamiltonian forms identified in our survey are categorized and systematically reviewed. Examples are taken from studies of various magnetic systems, especially those involving transition ions, as well as model magnetic systems. The pertinent studies include magnetic ordering in three- and lower dimensions, e.g. [(CH3)4N]MnCl3 (TMMC), canted ferromagnets, Haldane gap antiferromagnets, single molecule magnets exhibiting macroscopic quantum tunneling, e.g. Mn12 complexes with spin S=10. Our study provides better insight into magnetic and spectroscopic properties of pertinent magnetic systems, which calls for reconsideration of the experimental and theoretical results based on invalid truncated Hamiltonians. The physical nature of Hamiltonians used in magnetism and EMR studies and other types of inappropriate terminology occurring, especially in model magnetism studies, require separate discussion.

  4. Sociodemographic characteristics and diabetes predict invalid self-reported non-smoking in a population-based study of U.S. adults

    Directory of Open Access Journals (Sweden)

    Shelton Brent J

    2007-03-01

    Full Text Available Abstract Background Nearly all studies reporting smoking status collect self-reported data. The objective of this study was to assess sociodemographic characteristics and selected, common smoking-related diseases as predictors of invalid reporting of non-smoking. Valid self-reported smoking may be related to the degree to which smoking is a behavior that is not tolerated by the smoker's social group. Methods True smoking was defined as having serum cotinine of 15+ ng/ml. 1483 "true" smokers 45+ years of age with self-reported smoking and serum cotinine data from the Mobile Examination Center were identified in the third National Health and Nutrition Examination Survey. Invalid non-smoking was defined as "true" smokers self-reporting non-smoking. To assess predictors of invalid self-reported non-smoking, odds ratios (OR) and 95% confidence intervals (CI) were calculated for age, race/ethnicity-gender categories, education, income, diabetes, hypertension, and myocardial infarction. Multiple logistic regression modeling took into account the complex survey design and sample weights. Results Among smokers with diabetes, invalid non-smoking status was 15%, ranging from 0% for Mexican-American (MA) males to 22%–25% for Non-Hispanic White (NHW) males and Non-Hispanic Black (NHB) females. Among smokers without diabetes, invalid non-smoking status was 5%, ranging from 3% for MA females to 10% for NHB females. After simultaneously taking into account diabetes, education, race/ethnicity and gender, smokers with diabetes (ORAdj = 3.15; 95% CI: 1.35–7.34), who did not graduate from high school (ORAdj = 2.05; 95% CI: 1.30–3.22) and who were NHB females (ORAdj = 5.12; 95% CI: 1.41–18.58) were more likely to self-report as non-smokers than smokers without diabetes, who were high school graduates, and MA females, respectively. Having a history of myocardial infarction or hypertension did not predict invalid reporting of non-smoking. Conclusion Validity of self
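
    The adjusted odds ratios quoted above come from multiple logistic regression. A minimal Python sketch of how such ORs and 95% CIs are obtained is given below; the data and predictor names are simulated/hypothetical, and the complex survey design and sample weights used in the actual NHANES III analysis are deliberately omitted.

        # Minimal sketch (not the NHANES III analysis): logistic regression of
        # invalid self-reported non-smoking on two hypothetical 0/1 predictors,
        # reporting odds ratios with 95% confidence intervals. Survey weights and
        # design-based variance estimation are omitted.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1500
        df = pd.DataFrame({
            "diabetes": rng.integers(0, 2, n),       # hypothetical predictor
            "no_hs_diploma": rng.integers(0, 2, n),  # hypothetical predictor
        })
        # Hypothetical outcome: 1 = "true" smoker who self-reports non-smoking
        logit_p = -3.0 + 1.1 * df["diabetes"] + 0.7 * df["no_hs_diploma"]
        df["invalid_nonsmoking"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

        X = sm.add_constant(df[["diabetes", "no_hs_diploma"]].astype(float))
        fit = sm.Logit(df["invalid_nonsmoking"], X).fit(disp=0)

        odds_ratios = np.exp(fit.params)   # adjusted ORs
        ci = np.exp(fit.conf_int())        # 95% CIs on the OR scale
        print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))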

  5. The Role of Maternal Emotional Validation and Invalidation on Children's Emotional Awareness

    Science.gov (United States)

    Lambie, John A.; Lindberg, Anja

    2016-01-01

    Emotional awareness--that is, accurate emotional self-report--has been linked to positive well-being and mental health. However, it is still unclear how emotional awareness is socialized in young children. This observational study examined how a particular parenting communicative style--emotional validation versus emotional invalidation--was…

  6. An Overlooked Population in Community College: International Students' (In)Validation Experiences With Academic Advising

    Science.gov (United States)

    Zhang, Yi

    2016-01-01

    Objective: Guided by validation theory, this study aims to better understand the role that academic advising plays in international community college students' adjustment. More specifically, this study investigated how academic advising validates or invalidates their academic and social experiences in a community college context. Method: This…

  7. Cardiac and electrophysiological responses to valid and invalid feedback in a time-estimation task

    NARCIS (Netherlands)

    Mies, G.W.; van der Veen, F.M.; Tulen, J.H.; Hengeveld, M.W.; van der Molen, M.W.

    2011-01-01

    This study investigated the cardiac and electrophysiological responses to feedback in a time-estimation task in which feedback-validity was manipulated. Participants across a wide age range had to produce 1 s intervals followed by positive and negative feedback that was valid or invalid (i.e.,

  8. Validation of Measures of Biosocial Precursors to Borderline Personality Disorder: Childhood Emotional Vulnerability and Environmental Invalidation

    Science.gov (United States)

    Sauer, Shannon E.; Baer, Ruth A.

    2010-01-01

    Linehan's biosocial theory suggests that borderline personality disorder (BPD) results from a transaction of two childhood precursors: emotional vulnerability and an invalidating environment. Until recently, few empirical studies have explored relationships between these theoretical precursors and symptoms of the disorder. Psychometrically sound…

  9. LNT un TV5 programmu transformācijas pēc īpašnieku maiņas 2012. gadā [Transformations of LNT and TV5 programming after the 2012 change of ownership]

    OpenAIRE

    Sušinska, Oksana

    2013-01-01

    The aim of the bachelor's thesis "Transformations of LNT and TV5 programming after the 2012 change of ownership" is to establish how the LNT and TV5 programming changed after the change of ownership in 2012, how the content of the programmes changed, and how the change of ownership affected the content of LNT and TV5 programming. The theoretical part of the thesis explains the theoretical understanding of the concept of television in the social sciences, the positioning of television in the media environment and its transformation processes, and its development in the age of digitalization. The resear...

  10. [Physicians as Experts of the Integration of war invalids of WWI and WWII].

    Science.gov (United States)

    Wolters, Christine

    2015-12-01

    After the First World War the large number of war invalids posed a medical as well as a socio-political problem. This needed to be addressed, at least to some extent, through healthcare providers (Versorgungsbehörden) and reintegration into the labour market. Due to the demilitarization of Germany, this task was taken on by the civil administration, which was dissolved during the time of National Socialism. In 1950, the Federal Republic of Germany enacted the Federal War Victims Relief Act (Bundesversorgungsgesetz), which created a privileged group of civil and military war invalids, whereas other disabled people and victims of national socialist persecution were initially excluded. This article examines the continuities and discontinuities of the institutions following the First World War. A particular focus lies on the groups of doctors which structured this field. How did doctors become experts and what was their expertise?

  11. An abuse of risk assessment: how regulatory agencies improperly adopted LNT for cancer risk assessment.

    Science.gov (United States)

    Calabrese, Edward J

    2015-04-01

    The Genetics Panel of the National Academy of Sciences' Committee on Biological Effects of Atomic Radiation (BEAR) recommended the adoption of the linear dose-response model in 1956, abandoning the threshold dose-response for genetic risk assessments. This recommendation was quickly generalized to include somatic cells for cancer risk assessment and later was instrumental in the adoption of linearity for carcinogen risk assessment by the Environmental Protection Agency. The Genetics Panel failed to provide any scientific assessment to support this recommendation and refused to do so when later challenged by other leading scientists. Thus, the linearity model used in cancer risk assessment was based on ideology rather than science and originated with the recommendation of the NAS BEAR Committee Genetics Panel. Historical documentation in support of these conclusions is provided in the transcripts of the Panel meetings and in previously unexamined correspondence among Panel members.

  12. MODUS PONENS AND MODUS TOLLENS: THEIR VALIDITY/INVALIDITY IN NATURAL LANGUAGE ARGUMENTS

    Directory of Open Access Journals (Sweden)

    Ri Yong-Sok

    2017-06-01

    Full Text Available Previous studies on the validity of Modus ponens and Modus tollens have mostly dealt with the major type of conditional, in which the conditional clause is a sufficient condition for the main clause. In natural language arguments, however, we sometimes find other types of conditionals, in which the conditional clause is a necessary, or a necessary and sufficient, condition for the main clause. In this paper I reappraise, on the basis of new definitions of Modus ponens and Modus tollens, their validity/invalidity in natural language arguments, taking all types of conditionals into consideration.

  13. Study of N2O Formation over Rh- and Pt-Based LNT Catalysts

    Directory of Open Access Journals (Sweden)

    Lukasz Kubiak

    2016-03-01

    Full Text Available In this paper, mechanistic aspects involved in the formation of N2O over Pt-BaO/Al2O3 and Rh-BaO/Al2O3 model NOx Storage-Reduction (NSR) catalysts are discussed. The reactivity of both gas-phase NO and stored nitrates was investigated by using H2 and NH3 as reductants. It was found that N2O formation involves the presence of gas-phase NO, since no N2O is observed upon the reduction of nitrates stored over both Pt- and Rh-BaO/Al2O3 catalyst samples. In particular, N2O formation involves the coupling of undissociated NO molecules with N-adspecies formed upon NO dissociation onto reduced Platinum-Group-Metal (PGM) sites. Accordingly, N2O formation is observed at low temperatures, when PGM sites start to be reduced, and disappears at high temperatures, where PGM sites are fully reduced and complete NO dissociation takes place. Moreover, N2O formation is observed at lower temperatures with H2 than with NH3, in view of the higher reactivity of hydrogen in the reduction of the PGM sites, and at lower temperatures over the Pt-containing catalyst than over the Rh-containing one, owing to the higher reducibility of Pt vs. Rh.

  14. LNT un LTV1 ziņu skatīšanās ietekme uz cilvēka dienaskārtību [The influence of watching LNT and LTV1 news on a person's agenda]

    OpenAIRE

    Linde, Signija

    2013-01-01

    The research problem of the bachelor's thesis "The influence of watching LNT and LTV1 news on a person's agenda" is to establish whether watching television news influences a person's agenda. The theoretical part reviews the literature on agenda-setting and the formation of news. The empirical part applies content analysis to LNT and LTV1 news, creating news categories against which the news items were analysed; the empirical part also uses the in-depth interview method to reveal whether watching the news influ...

  15. Valid but Invalid: Suboptimal ImPACT© Baseline Performance in University Athletes.

    Science.gov (United States)

    Walton, Samuel R; Broshek, Donna K; Freeman, Jason; Cullum, C Munro; Resch, Jacob E

    2018-02-27

    To investigate the frequency of valid yet suboptimal Immediate Postconcussion Assessment and Cognitive Test© (ImPACT) performance in university athletes and to explore the benefit of subsequent ImPACT administrations. This descriptive laboratory study involved baseline administration of ImPACT to 769 university athletes per the institution's concussion management protocol. Testing was proctored in groups of ≤ 2 participants. Participants who scored below the 16th percentile according to ImPACT normative data were re-administered ImPACT up to two additional times as these scores were thought to be potentially indicative of suboptimal effort or poor understanding of instructions. Descriptive analyses were used to examine validity indicators and individual Verbal and Visual Memory, Visual Motor Speed and Reaction Time ImPACT composite scores in initial and subsequent administrations. Based on ImPACT's validity criteria, 1% (9/769) of administrations were invalid and 14.6% (112/769) had one or more composite scores below the 16th percentile. Clinicians must be aware of suboptimal ImPACT performance as it limits the clinical utility of the baseline assessment. Further research is needed to address factors leading to 'valid' but invalid baseline performance.

  16. The role of apolipoprotein N-acyl transferase, Lnt, in the lipidation of factor H binding protein of Neisseria meningitidis strain MC58 and its potential as a drug target.

    Science.gov (United States)

    da Silva, R A G; Churchward, C P; Karlyshev, A V; Eleftheriadou, O; Snabaitis, A K; Longman, M R; Ryan, A; Griffin, R

    2017-07-01

    The level of cell surface expression of the meningococcal vaccine antigen, Factor H binding protein (FHbp) varies between and within strains and this limits the breadth of strains that can be targeted by FHbp-based vaccines. The molecular pathway controlling expression of FHbp at the cell surface, including its lipidation, sorting to the outer membrane and export, and the potential regulation of this pathway have not been investigated until now. This knowledge will aid our evaluation of FHbp vaccines. A meningococcal transposon library was screened by whole cell immuno-dot blotting using an anti-FHbp antibody to identify a mutant with reduced binding and the disrupted gene was determined. In a mutant with markedly reduced binding, the transposon was located in the lnt gene which encodes apolipoprotein N-acyl transferase, Lnt, responsible for the addition of the third fatty acid to apolipoproteins prior to their sorting to the outer membrane. We provide data indicating that in the Lnt mutant, FHbp is diacylated and its expression within the cell is reduced 10 fold, partly due to inhibition of transcription. Furthermore the Lnt mutant showed 64 fold and 16 fold increase in susceptibility to rifampicin and ciprofloxacin respectively. We speculate that the inefficient sorting of diacylated FHbp in the meningococcus results in its accumulation in the periplasm inducing an envelope stress response to down-regulate its expression. We propose Lnt as a potential novel drug target for combination therapy with antibiotics. This article is part of a themed section on Drug Metabolism and Antibiotic Resistance in Micro-organisms. To view the other articles in this section visit http://onlinelibrary.wiley.com/doi/10.1111/bph.v174.14/issuetoc. © 2016 The British Pharmacological Society.

  17. Sequential and base rate analysis of emotional validation and invalidation in chronic pain couples: patient gender matters.

    Science.gov (United States)

    Leong, Laura E M; Cano, Annmarie; Johansen, Ayna B

    2011-11-01

    The purpose of this study was to examine the extent to which communication patterns that foster or hinder intimacy and emotion regulation in couples were related to pain, marital satisfaction, and depression in 78 chronic pain couples attempting to problem-solve an area of disagreement in their marriage. Sequences and base rates of validation and invalidation communication patterns were almost uniformly unrelated to adjustment variables unless patient gender was taken into account. Male patient couples' reciprocal invalidation was related to worse pain, but this was not found in female patient couples. In addition, spouses' validation was related to poorer patient pain and marital satisfaction, but only in couples with a male patient. It was not only the presence or absence of invalidation and validation that mattered (base rates), but the context and timing of these events (sequences) that affected patients' adjustment. This research demonstrates that sequences of interaction behaviors that foster and hinder emotion regulation should be attended to when assessing and treating pain patients and their spouses. This article presents analyses of both sequences and base rates of chronic pain couples' communication patterns, focusing on validation and invalidation. These results may potentially improve psychosocial treatments for these couples, by addressing sequential interactions of intimacy and empathy. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  18. [Tremendous Human, Social, and Economic Losses Caused by Obstinate Application of the Failed Linear No-threshold Model].

    Science.gov (United States)

    Sutou, Shizuyo

    2015-01-01

    The linear no-threshold model (LNT) was recommended in 1956, with abandonment of the traditional threshold dose-response for genetic risk assessment. Adoption of LNT by the International Commission on Radiological Protection (ICRP) became the standard for radiation regulation worldwide. The ICRP recommends a dose limit of 1 mSv/year for the public, which is too low and which terrorizes innocent people. Indeed, LNT arose mainly from the lifespan survivor study (LSS) of atomic bomb survivors. The LSS, which asserts linear dose-response and no threshold, is challenged mainly on three points. 1) Radiation doses were underestimated by half because of disregard for major residual radiation, resulting in cancer risk overestimation. 2) The dose and dose-rate effectiveness factor (DDREF) of 2 is used, but the actual DDREF is estimated as 16, resulting in cancer risk overestimation by several times. 3) Adaptive response (hormesis) is observed in leukemia and solid cancer cases, consistently contradicting the linearity of LNT. Drastic reduction of cancer risk moves the dose-response curve close to the control line, allowing the setting of a threshold. Living organisms have been evolving for 3.8 billion years under radiation exposure, naturally acquiring various defense mechanisms such as DNA repair mechanisms, apoptosis, and immune response. The failure of LNT lies in the neglect of carcinogenesis and these biological mechanisms. Obstinate application of LNT continues to cause tremendous human, social, and economic losses. The 60-year-old LNT must be rejected to establish a new scientific knowledge-based system.

  19. The unwanted heroes: war invalids in Poland after World War I.

    Science.gov (United States)

    Magowska, Anita

    2014-04-01

    This article focuses on the unique and hitherto unknown history of disabled ex-servicemen and civilians in interwar Poland. In 1914, thousands of Poles were conscripted into the Austrian, Prussian, and Russian armies and forced to fight against each other. When the war ended and Poland regained independence after more than one hundred years of partition, the fledgling government was unable to provide support for the more than three hundred thousand disabled war victims, not to mention the many civilians left injured or orphaned by the war. The vast majority of these victims were ex-servicemen of foreign armies, and were deprived of any war compensation. Neither the Polish government nor the impoverished society could meet the disabled ex-servicemen's medical and material needs; therefore, these men had to take responsibility for themselves and started cooperatives and war-invalids-owned enterprises. A social collaboration between Poland and America, rare in Europe at that time, was initiated by the Polish community in the United States to help blind ex-servicemen in Poland.

  20. Brain transcriptional stability upon prion protein-encoding gene invalidation in zygotic or adult mouse

    Directory of Open Access Journals (Sweden)

    Béringue Vincent

    2010-07-01

    Full Text Available Abstract Background The physiological function of the prion protein remains largely elusive, while its key role in prion infection has been extensively documented. To address this conundrum, we performed a comparative transcriptomic analysis of the brains of wild-type mice and of transgenic mice in which this locus was invalidated either at the zygotic or at the adult stage. Results Only subtle transcriptomic differences resulting from the Prnp knockout could be evidenced, besides Prnp itself, in the analyzed adult brains following microarray analysis of 24,109 mouse genes and qPCR assessment of some of the putatively marginally modulated loci. When performed at the adult stage, neuronal Prnp disruption appeared to sequentially induce a response to oxidative stress and a remodeling of the nervous system. However, these events involved only a limited number of genes, whose expression levels were only slightly modified and not always confirmed by RT-qPCR; where they were not confirmed, the qPCR data suggested even less pronounced differences. Conclusions These results suggest that the physiological function of PrP is redundant at the adult stage or important for only a small subset of the brain cell population under classical breeding conditions. Given its previously reported embryonic developmental regulation, this lack of response could also imply that PrP has a more detrimental role during mouse embryogenesis and that potential transient compensatory mechanisms have to be searched for at the time this locus becomes transcriptionally activated.

  1. Mendelian randomization with invalid instruments: effect estimation and bias detection through Egger regression.

    Science.gov (United States)

    Bowden, Jack; Davey Smith, George; Burgess, Stephen

    2015-04-01

    The number of Mendelian randomization analyses including large numbers of genetic variants is rapidly increasing. This is due to the proliferation of genome-wide association studies, and the desire to obtain more precise estimates of causal effects. However, some genetic variants may not be valid instrumental variables, in particular due to them having more than one proximal phenotypic correlate (pleiotropy). We view Mendelian randomization with multiple instruments as a meta-analysis, and show that bias caused by pleiotropy can be regarded as analogous to small study bias. Causal estimates using each instrument can be displayed visually by a funnel plot to assess potential asymmetry. Egger regression, a tool to detect small study bias in meta-analysis, can be adapted to test for bias from pleiotropy, and the slope coefficient from Egger regression provides an estimate of the causal effect. Under the assumption that the association of each genetic variant with the exposure is independent of the pleiotropic effect of the variant (not via the exposure), Egger's test gives a valid test of the null causal hypothesis and a consistent causal effect estimate even when all the genetic variants are invalid instrumental variables. We illustrate the use of this approach by re-analysing two published Mendelian randomization studies of the causal effect of height on lung function, and the causal effect of blood pressure on coronary artery disease risk. The conservative nature of this approach is illustrated with these examples. An adaption of Egger regression (which we call MR-Egger) can detect some violations of the standard instrumental variable assumptions, and provide an effect estimate which is not subject to these violations. The approach provides a sensitivity analysis for the robustness of the findings from a Mendelian randomization investigation. © The Author 2015; Published by Oxford University Press on behalf of the International Epidemiological Association.
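
    A minimal sketch of the MR-Egger regression described above, assuming per-variant summary statistics and that each variant has been oriented so that its association with the exposure is positive; the numbers are illustrative and are not from the cited height or blood-pressure analyses.

        # MR-Egger sketch: regress variant-outcome associations on variant-exposure
        # associations with an intercept, weighting by the inverse variance of the
        # outcome associations. The intercept indexes directional pleiotropy; the
        # slope is the causal effect estimate.
        import numpy as np
        import statsmodels.api as sm

        beta_exposure = np.array([0.12, 0.08, 0.15, 0.05, 0.10, 0.07])  # SNP-exposure
        beta_outcome  = np.array([0.06, 0.05, 0.09, 0.02, 0.05, 0.04])  # SNP-outcome
        se_outcome    = np.array([0.02, 0.03, 0.02, 0.02, 0.03, 0.02])

        X = sm.add_constant(beta_exposure)
        weights = 1.0 / se_outcome**2
        egger = sm.WLS(beta_outcome, X, weights=weights).fit()

        print("intercept (pleiotropy test):", egger.params[0], "p =", egger.pvalues[0])
        print("slope (causal estimate):", egger.params[1])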

  2. Illness invalidation from spouse and family is associated with depression in diabetic patients with first superficial diabetic foot ulcers.

    Science.gov (United States)

    Sehlo, Mohammad G; Alzahrani, Owiss H; Alzahrani, Hasan A

    (1) To assess the prevalence of depressive disorders in a sample of diabetic patients with their first superficial diabetic foot ulcer. (2) To evaluate the association between illness invalidation from spouse and family and depressive disorders in those patients. Depressive disorders and severity were diagnosed by the Structured Clinical Interview for DSM-IV Axis I disorders, clinical version, and the spouse and family scales of the Illness Invalidation Inventory (3*I), respectively. Physical functioning was also assessed using the Physical Component of the Short Form 36 item health-related quality of life questionnaire. The prevalence of depressive disorders was 27.50% (22/80). There was a significant decrease in the physical health component summary mean score and a significant increase in ulcer size, Center for Epidemiologic Studies-Depression Scale, spouse discounting, spouse lack of understanding, and family discounting mean scores in the depressed group compared to the non-depressed group. Higher levels of spouse discounting, spouse lack of understanding, and family discounting were significant predictors of diagnosis of depressive disorders and were strongly associated with increased severity of depressive symptoms in diabetic patients with first superficial diabetic foot ulcers. Poor physical functioning was associated with increased depressive symptom severity. This study demonstrated that illness invalidation from spouse and family is associated with diagnosis of depressive disorders and increased severity of depressive symptoms in diabetic patients with first superficial diabetic foot ulcers. © The Author(s) 2015.

  3. Pain patients' experiences of validation and invalidation from physicians before and after multimodal pain rehabilitation: Associations with pain, negative affectivity, and treatment outcome.

    Science.gov (United States)

    Edlund, Sara M; Wurm, Matilda; Holländare, Fredrik; Linton, Steven J; Fruzzetti, Alan E; Tillfors, Maria

    2017-10-01

    Validating and invalidating responses play an important role in communication with pain patients, for example regarding emotion regulation and adherence to treatment. However, it is unclear how patients' perceptions of validation and invalidation relate to patient characteristics and treatment outcome. The aim of this study was to investigate the occurrence of subgroups based on pain patients' perceptions of validation and invalidation from their physicians. The stability of these perceptions and differences between subgroups regarding pain, pain interference, negative affectivity and treatment outcome were also explored. A total of 108 pain patients answered questionnaires regarding perceived validation and invalidation, pain severity, pain interference, and negative affectivity before and after pain rehabilitation treatment. Two cluster analyses using perceived validation and invalidation were performed, one on pre-scores and one on post-scores. The stability of patient perceptions from pre- to post-treatment was investigated, and clusters were compared on pain severity, pain interference, and negative affectivity. Finally, the connection between perceived validation and invalidation and treatment outcome was explored. Three clusters emerged both before and after treatment: (1) low validation and heightened invalidation, (2) moderate validation and invalidation, and (3) high validation and low invalidation. Perceptions of validation and invalidation were generally stable over time, although there were individuals whose perceptions changed. When compared to the other two clusters, the low validation/heightened invalidation cluster displayed significantly higher levels of pain interference and negative affectivity post-treatment but not pre-treatment. The whole sample significantly improved on pain interference and depression, but treatment outcome was independent of cluster. Unexpectedly, differences between clusters on pain interference and negative affectivity

  4. Educating Jurors about Forensic Evidence: Using an Expert Witness and Judicial Instructions to Mitigate the Impact of Invalid Forensic Science Testimony.

    Science.gov (United States)

    Eastwood, Joseph; Caldwell, Jiana

    2015-11-01

    Invalid expert witness testimony that overstated the precision and accuracy of forensic science procedures has been highlighted as a common factor in many wrongful conviction cases. This study assessed the ability of an opposing expert witness and judicial instructions to mitigate the impact of invalid forensic science testimony. Participants (N = 155) acted as mock jurors in a sexual assault trial that contained both invalid forensic testimony regarding hair comparison evidence, and countering testimony from either a defense expert witness or judicial instructions. Results showed that the defense expert witness was successful in educating jurors regarding limitations in the initial expert's conclusions, leading to a greater number of not-guilty verdicts. The judicial instructions were shown to have no impact on verdict decisions. These findings suggest that providing opposing expert witnesses may be an effective safeguard against invalid forensic testimony in criminal trials. © 2015 American Academy of Forensic Sciences.

  5. Systems Cancer Biology and the Controlling Mechanisms for the J-Shaped Cancer Dose Response: Towards Relaxing the LNT Hypothesis.

    Science.gov (United States)

    Lou, In Chio; Zhao, Yuchao; Wu, Yingjie; Ricci, Paolo F

    2012-01-01

    Hormesis, or a J-shaped dose response, has been accepted as a common phenomenon regardless of the biological model involved, the endpoint measured, and the chemical class or physical stressor. This paper first introduces a mathematical dose-response model based on a systems biology approach. It links molecular-level cell cycle checkpoint control information to a clonal growth cancer model to predict the possible shapes of the dose-response curves for ionizing radiation (IR)-induced tumor transformation frequency. J-shaped dose-response curves were captured when cell cycle checkpoint control mechanisms were taken into account. The simulation results indicate that the shape of the dose-response curve relates to the behavior of the saddle-node points of the model in the bifurcation diagram. A simplified version of the model from the authors' previous work was used to analyze mathematically the behavior of the saddle-node points for the J-shaped dose-response curve. It indicates that low linear energy transfer (LET) radiation is more likely to produce a J-shaped dose-response curve. This result emphasizes the significance of the systems biology approach, which encourages collaboration among biologists, toxicologists, and mathematicians, to illustrate complex cancer-related events and to confirm the biphasic dose response at low doses.

  6. A proposed strategy for the validation of ground-water flow and solute transport models

    International Nuclear Information System (INIS)

    Davis, P.A.; Goodrich, M.T.

    1991-01-01

    Ground-water flow and transport models can be thought of as a combination of conceptual and mathematical models and the data that characterize a given system. The judgment of the validity or invalidity of a model depends on both the adequacy of the data and the model structure (i.e., the conceptual and mathematical model). This report proposes a validation strategy for testing both components independently. The strategy is based on the philosophy that a model cannot be proven valid, only invalid or not invalid. In addition, the authors believe that a model should not be judged in the absence of its intended purpose. Hence, a flow and transport model may be invalid for one purpose but not invalid for another. 9 refs

  7. Likert or Not, Survey (In)Validation Requires Explicit Theories and True Grit

    Science.gov (United States)

    McGrane, Joshua A.; Nowland, Trisha

    2017-01-01

    From the time of Likert (1932) on, attitudes of expediency regarding both theory and methodology became apparent with reference to survey construction and validation practices. In place of theory and more theoretically minded methods, such as those found in the early work of Thurstone (1928) and Coombs (1964), statistical models and…

  8. Evaluating the accuracy of the Wechsler Memory Scale-Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test performance.

    Science.gov (United States)

    Soble, Jason R; Bain, Kathleen M; Bailey, K Chase; Kirton, Joshua W; Marceaux, Janice C; Critchfield, Edan A; McCoy, Karin J M; O'Rourke, Justin J F

    2018-01-08

    Embedded performance validity tests (PVTs) allow for continuous assessment of invalid performance throughout neuropsychological test batteries. This study evaluated the utility of the Wechsler Memory Scale-Fourth Edition (WMS-IV) Logical Memory (LM) Recognition score as an embedded PVT using the Advanced Clinical Solutions (ACS) for WAIS-IV/WMS-IV Effort System. This mixed clinical sample comprised 97 participants, 71 of whom were classified as valid and 26 as invalid based on three well-validated, freestanding criterion PVTs. Overall, the LM embedded PVT demonstrated poor concordance with the criterion PVTs and unacceptable psychometric properties using ACS validity base rates (42% sensitivity/79% specificity). Moreover, 15-39% of participants obtained an invalid ACS base rate despite having a normatively-intact age-corrected LM Recognition total score. Receiver operating characteristic curve analysis revealed that a Recognition total score cutoff of < 61% correct improved specificity (92%) while sensitivity remained weak (31%). Thus, results indicated the LM Recognition embedded PVT is not appropriate for use from an evidence-based perspective, and that clinicians may be faced with reconciling how a normatively intact cognitive performance on the Recognition subtest could simultaneously reflect invalid performance validity.
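
    A minimal sketch of the receiver operating characteristic step described above, i.e. screening for a Recognition-score cutoff that holds specificity at or above 0.90 while maximizing sensitivity; the scores are simulated, and only the group sizes (71 valid, 26 invalid) are taken from the abstract.

        # ROC-based cutoff screening with simulated scores (not WMS-IV data).
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(1)
        labels = np.concatenate([np.zeros(71, dtype=int), np.ones(26, dtype=int)])  # 1 = invalid
        scores = np.concatenate([rng.normal(85, 10, 71), rng.normal(65, 15, 26)])   # percent correct

        # roc_curve expects larger values to indicate the positive class, so negate
        fpr, tpr, thresholds = roc_curve(labels, -scores)
        specificity = 1 - fpr
        ok = specificity >= 0.90
        best = np.argmax(tpr[ok])
        print("cutoff: score <", round(-thresholds[ok][best], 1),
              "sensitivity =", round(tpr[ok][best], 2),
              "specificity =", round(specificity[ok][best], 2))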

  9. Collegiate Student Athletes With History of ADHD or Academic Difficulties Are More Likely to Produce an Invalid Protocol on Baseline ImPACT Testing.

    Science.gov (United States)

    Manderino, Lisa; Gunstad, John

    2018-03-01

    Attention deficit hyperactivity disorder (ADHD) and other academically-relevant diagnoses have been suggested as modifiers of neurocognitive testing in sport-related concussion, such as Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT). These preexisting conditions may suppress ImPACT scores to the extent that they are indistinguishable from low scores because of poor effort. The present study hypothesized that student athletes with a history of ADHD or academic difficulties produce lower ImPACT composite scores and are more likely to produce invalid protocols than those without such conditions. Cross-sectional study. Midsized public university. Nine hundred forty-nine National Collegiate Athletic Association athletes (average age = 19.2 years; 6.8% ADHD, 5.6% Academic Difficulties, 2.0% comorbid ADHD/Academic Difficulties). Three seasons of baseline ImPACT protocols were analyzed. Student athletes were grouped using self-reported histories of ADHD or academic difficulties taken from ImPACT demographic questions. ImPACT composite scores and protocol validity. Student athletes in the academic difficulties and comorbid groups performed worse on ImPACT composite scores (Pillai's Trace = 0.05), though this pattern did not emerge for those with ADHD. Student athletes with comorbid history were more likely to produce an invalid baseline (10.5% invalid) (χ² = 11.08, P = 0.004). Those with ADHD were also more likely to produce an invalid protocol (7.7% invalid, compared with 2.6% in student athletes with no history) (χ² = 10.70, P = 0.005). These findings suggest that student athletes reporting comorbid histories or histories of academic difficulties alone produce lower ImPACT composite scores, and that those with comorbid histories or histories of ADHD alone produce invalid protocol warnings more frequently than student athletes without such histories. Future studies should further examine invalid score thresholds on the ImPACT, especially in student

  10. Caregiver Mentalization and Emotional Invalidation as Contextual Risk Factors for Adolescent Borderline Personality Disorder

    OpenAIRE

    CLAIR BENNETT

    2018-01-01

    This thesis examined the role of interpersonal caregiving factors in borderline personality disorder (BPD) during the adolescent developmental period. Based on developmental models of BPD, emotional responding and social cognition in caregivers of adolescents with BPD were explored. Results identified relative impairments in caregivers’ capacity to understand, respond to and regulate adolescent mental states. Impairments in these domains were directly and indirectly related to adolescent bord...

  11. Multi-impulsive eating disorders: the role of distress tolerance and invalidating environments.

    OpenAIRE

    Evans, J.

    2006-01-01

    Dialectical behaviour therapy (DBT) was developed by Linehan (1993) for the treatment of borderline personality disorder (BPD). The model proposes that BPD is primarily a dysfunction of the emotion regulation system, and that impulsive behaviours (e.g., self-harm, compulsive spending, risky sexual behaviour, alcohol abuse) serve the function of regulating emotion. It has been proposed that binge eating may also serve a similar function (e.g., Root & Fallon, 1989). For this reason t...

  12. Prevalence of Invalid Performance on Baseline Testing for Sport-Related Concussion by Age and Validity Indicator.

    Science.gov (United States)

    Abeare, Christopher A; Messa, Isabelle; Zuccato, Brandon G; Merker, Bradley; Erdodi, Laszlo

    2018-03-12

    Estimated base rates of invalid performance on baseline testing (base rates of failure) for the management of sport-related concussion range from 6.1% to 40.0%, depending on the validity indicator used. The instability of this key measure represents a challenge in the clinical interpretation of test results that could undermine the utility of baseline testing. To determine the prevalence of invalid performance on baseline testing and to assess whether the prevalence varies as a function of age and validity indicator. This retrospective, cross-sectional study included data collected between January 1, 2012, and December 31, 2016, from a clinical referral center in the Midwestern United States. Participants included 7897 consecutively tested, equivalently proportioned male and female athletes aged 10 to 21 years, who completed baseline neurocognitive testing for the purpose of concussion management. Baseline assessment was conducted with the Immediate Postconcussion Assessment and Cognitive Testing (ImPACT), a computerized neurocognitive test designed for assessment of concussion. Base rates of failure on published ImPACT validity indicators were compared within and across age groups. Hypotheses were developed after data collection but prior to analyses. Of the 7897 study participants, 4086 (51.7%) were male, mean (SD) age was 14.71 (1.78) years, 7820 (99.0%) were primarily English speaking, and the mean (SD) educational level was 8.79 (1.68) years. The base rate of failure ranged from 6.4% to 47.6% across individual indicators. Most of the sample (55.7%) failed at least 1 of 4 validity indicators. The base rate of failure varied considerably across age groups (117 of 140 [83.6%] for those aged 10 years to 14 of 48 [29.2%] for those aged 21 years), representing a risk ratio of 2.86 (95% CI, 2.60-3.16; P < .001). The base rate of failure varied as a function of the validity indicator and the age of the examinee. The strong age association, with 3 of 4 participants aged 10 to 12 years failing validity indicators, suggests that
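
    The risk ratio quoted above can be reproduced directly from the two group counts given in the abstract. The short sketch below does so with a simple two-sample log-normal confidence interval; it matches the point estimate but not necessarily the published interval, which was presumably derived from the full model across all age groups.

        # Risk ratio of failing at least one validity indicator: age 10 vs age 21.
        import math

        fail_young, n_young = 117, 140   # aged 10 years
        fail_old,   n_old   = 14, 48     # aged 21 years

        rr = (fail_young / n_young) / (fail_old / n_old)
        se_log_rr = math.sqrt(1/fail_young - 1/n_young + 1/fail_old - 1/n_old)
        lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
        hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
        print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 2.86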

  13. Estimation of lower-bound KJc on pressure vessel steels from invalid data

    Energy Technology Data Exchange (ETDEWEB)

    McCabe, D.E.; Merkle, J.G.

    1996-10-01

    Statistical methods are currently being introduced into the transition temperature characterization of ferritic steels. The objective is to replace imprecise correlations between empirical impact test methods and universal KIc or KIa lower-bound curves with direct use of material-specific fracture mechanics data. This paper introduces a computational procedure that couples order statistics, weakest-link statistical theory, and a constraint model to arrive at estimates of lower-bound KJc values. All of the above concepts have been used before to meet various objectives. In the present case, the scheme is to make a best estimate of lower-bound fracture toughness when the available KJc data are too few for conventional statistical analyses. The procedure is of greatest value in the middle-to-high toughness part of the transition range, where specimen constraint loss and elevated lower-bound toughness interfere with conventional statistical analysis methods.
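
    A minimal sketch of the weakest-link Weibull form on which such lower-bound KJc procedures build, assuming the conventional fixed shape parameter of 4 and threshold Kmin = 20 MPa·m^1/2 from Master Curve practice; this is not the authors' order-statistics/constraint procedure, and the toughness values are hypothetical.

        # Three-parameter Weibull with fixed shape b = 4 and threshold Kmin:
        # estimate the scale parameter K0 by maximum likelihood and read off a
        # lower-bound percentile of the fitted distribution.
        import numpy as np

        k_jc = np.array([95.0, 120.0, 140.0, 160.0])   # hypothetical KJc data, MPa*m^0.5
        b, k_min = 4.0, 20.0

        k0 = k_min + np.mean((k_jc - k_min) ** b) ** (1.0 / b)   # MLE of scale (b fixed)

        p = 0.05   # 5th-percentile lower bound
        k_lower = k_min + (k0 - k_min) * (-np.log(1.0 - p)) ** (1.0 / b)
        print(f"K0 = {k0:.1f}, 5th-percentile lower bound = {k_lower:.1f} MPa*m^0.5")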

  14. Implicações da declaração de invalidade da Diretiva 2006/24 na conservação de dados (“metadados”) nos Estados-Membros da UE: uma leitura jusfundamental / The Directive 2006/24 declaration of invalidity and the consequences of metadata retention in the EU Member States: A Fundamental Rights Standards Approach

    Directory of Open Access Journals (Sweden)

    Alessandra Silveira

    2017-04-01

    Full Text Available Purpose – The text deals with the recent case law of the European Court of Justice (ECJ) on the directive on the retention of data (metadata) by providers of electronic communications services for the purposes of investigation, detection and prosecution of serious crimes. The authors seek to clarify the implications of the declaration of invalidity of this European directive for the EU Member States, with a view to protecting the legal equality of European citizens. Methodology/approach/design – The text was drafted while the ECJ's response was still pending to the questions referred by two national courts (one Swedish and one British) on the effects of that invalidity decision on the domestic legislation that transposed the directive. Thus, the authors sought to anticipate the Court's decision in the light of its settled case law and the reaction of the Member States' authorities after the declaration of invalidity of the directive. Findings – In the light of the particularities of the protection of fundamental rights in the EU and the legal model of integration, the authors draw some guidelines as to the procedure to be followed in future cases in order to safeguard the effectiveness of Union law, namely when it comes to the legal equality of European citizens.

  15. Trpm4 gene invalidation leads to cardiac hypertrophy and electrophysiological alterations.

    Directory of Open Access Journals (Sweden)

    Marie Demion

    Full Text Available RATIONALE: TRPM4 is a non-selective Ca2+-activated cation channel expressed in the heart, particularly in the atria or conduction tissue. Mutations in the Trpm4 gene were recently associated with several human conduction disorders such as Brugada syndrome. TRPM4 channel has also been implicated at the ventricular level, in inotropism or in arrhythmia genesis due to stresses such as ß-adrenergic stimulation, ischemia-reperfusion, and hypoxia re-oxygenation. However, the physiological role of the TRPM4 channel in the healthy heart remains unclear. OBJECTIVES: We aimed to investigate the role of the TRPM4 channel on whole cardiac function with a Trpm4 gene knock-out mouse (Trpm4-/-) model. METHODS AND RESULTS: Morpho-functional analysis revealed left ventricular (LV) eccentric hypertrophy in Trpm4-/- mice, with an increase in both wall thickness and chamber size in the adult mouse (aged 32 weeks) when compared to Trpm4+/+ littermate controls. Immunofluorescence on frozen heart cryosections and qPCR analysis showed no fibrosis or cellular hypertrophy. Instead, cardiomyocytes in Trpm4-/- mice were smaller than Trpm4+/+, with a higher density. Immunofluorescent labeling for phospho-histone H3, a mitosis marker, showed that the number of mitotic myocytes was increased 3-fold at the Trpm4-/- neonatal stage, suggesting hyperplasia. Adult Trpm4-/- mice presented multilevel conduction blocks, as attested by PR and QRS lengthening in surface ECGs and confirmed by intracardiac exploration. Trpm4-/- mice also exhibited Luciani-Wenckebach atrioventricular blocks, which were reduced following atropine infusion, suggesting paroxysmal parasympathetic overdrive. In addition, Trpm4-/- mice exhibited shorter action potentials in atrial cells. This shortening was unrelated to modifications of the voltage-gated Ca2+ or K+ currents involved in the repolarizing phase. CONCLUSIONS: TRPM4 has pleiotropic roles in the heart, including the regulation of conduction and cellular

  16. Lean NOx Trap Modeling in Vehicle Systems Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhiming [ORNL]; Chakravarthy, Veerathu K [ORNL]; Daw, C Stuart [ORNL]; Conklin, Jim [ORNL]

    2010-09-01

    A one-dimensional model for simulating lean NOx trap (LNT) performance is developed and validated using both steady state cycling data and transient data from FTP testing cycles. The model consists of the conservation equations for chemical species and energy in the bulk flow, energy of the solid walls, O2 storage and NOx storage (in the form of nitrites and nitrates). Nitrites and nitrates are formed by diffusion of NO and NO2, respectively, into sorbent particles (assumed to be hemi-spherical in shape) along with O2, and their formation rates are controlled by chemical kinetics as well as solid-phase diffusion rates of NOx species. The model also accounts for thermal aging and sulfation of LNTs. Empirical correlations are developed on the basis of published experimental data to capture these effects. These empirical correlations depend on the total mileage for which the LNT has been in use, the mileage accumulated since the last desulfation event, and the freshly degreened catalyst characteristics. The model has been used in studies of vehicle systems (integration, performance, etc.) including hybrid powertrain configurations. Since the engines in hybrid vehicles turn on and off multiple times during single drive cycles, the exhaust systems may encounter multiple cold start transients. Accurate modeling of catalyst warm-up and cooling is, therefore, very important to simulate LNT performance in such vehicles. For this purpose, the convective heat loss from the LNT to the ambient is modeled using a Nusselt number correlation that includes effects of both forced convection and natural convection (with the latter being important when the vehicle is stationary). Using the model, the fuel penalty associated with operating LNTs on a small diesel-engine-powered car during FTP drive cycles is estimated.
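
    For orientation, a generic textbook-style bulk-gas species balance and a blended convective-loss correlation of the kind referred to above can be written as follows; the symbols and the blending exponent n are illustrative assumptions, not the ORNL model's actual formulation:

        \frac{\partial c_i}{\partial t} + u\,\frac{\partial c_i}{\partial z}
          = -k_{m,i}\, a \left( c_i - c_{s,i} \right),
        \qquad
        h_{\mathrm{ext}} = \frac{\mathrm{Nu}\, k_{\mathrm{air}}}{D_{\mathrm{ext}}},
        \quad
        \mathrm{Nu} = \left( \mathrm{Nu}_{\mathrm{forced}}^{\,n} + \mathrm{Nu}_{\mathrm{natural}}^{\,n} \right)^{1/n},

    where c_i and c_{s,i} are the bulk and wall-surface concentrations, k_{m,i} is a mass-transfer coefficient, a is the geometric surface area per unit volume, and h_ext governs the ambient heat-loss term h_ext (T_wall - T_ambient).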

  17. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    International Nuclear Information System (INIS)

    Little, M.P.

    2010-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported statistical associations for cardiovascular disease is unreliable but cannot be excluded. Inflammatory processes are the most likely mechanism by which radiation could modify the atherosclerotic disease process. If there is to be modification by low doses of ionizing radiation of cardiovascular disease through this mechanism, a role for non-DNA-targeted effects cannot be excluded.

  18. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?☆

    Science.gov (United States)

    Little, M.P.

    2011-01-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose–response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported statistical associations for cardiovascular disease is unreliable but cannot be excluded. Inflammatory processes are the most likely mechanism by which radiation could modify the atherosclerotic disease process. If there is to be modification by low doses of ionizing radiation of cardiovascular disease through this mechanism, a role for non-DNA-targeted effects cannot be excluded. PMID:20105434

  19. Do non-targeted effects increase or decrease low dose risk in relation to the linear-non-threshold (LNT) model?

    Science.gov (United States)

    Little, M P

    2010-05-01

    In this paper we review the evidence for departure from linearity for malignant and non-malignant disease and in the light of this assess likely mechanisms, and in particular the potential role for non-targeted effects. Excess cancer risks observed in the Japanese atomic bomb survivors and in many medically and occupationally exposed groups exposed at low or moderate doses are generally statistically compatible. For most cancer sites the dose-response in these groups is compatible with linearity over the range observed. The available data on biological mechanisms do not provide general support for the idea of a low dose threshold or hormesis. This large body of evidence does not suggest, indeed is not statistically compatible with, any very large threshold in dose for cancer, or with possible hormetic effects, and there is little evidence of the sorts of non-linearity in response implied by non-DNA-targeted effects. There are also excess risks of various types of non-malignant disease in the Japanese atomic bomb survivors and in other groups. In particular, elevated risks of cardiovascular disease, respiratory disease and digestive disease are observed in the A-bomb data. In contrast with cancer, there is much less consistency in the patterns of risk between the various exposed groups; for example, radiation-associated respiratory and digestive diseases have not been seen in these other (non-A-bomb) groups. Cardiovascular risks have been seen in many exposed populations, particularly in medically exposed groups, but in contrast with cancer there is much less consistency in risk between studies: risks per unit dose in epidemiological studies vary over at least two orders of magnitude, possibly a result of confounding and effect modification by well known (but unobserved) risk factors. In the absence of a convincing mechanistic explanation of epidemiological evidence that is, at present, less than persuasive, a cause-and-effect interpretation of the reported statistical associations for cardiovascular disease is unreliable but cannot be excluded. Inflammatory processes are the most likely mechanism by which radiation could modify the atherosclerotic disease process. If there is to be modification by low doses of ionizing radiation of cardiovascular disease through this mechanism, a role for non-DNA-targeted effects cannot be excluded. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Linear No-Threshold Model VS. Radiation Hormesis

    Science.gov (United States)

    Doss, Mohan

    2013-01-01

    The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226

  1. Ziņu vērtības un dienaskārtība Latvijas televīziju ziņās: raidījumu „Panorāma” un „LNT Ziņas” salīdzinošā analīze [News values and agenda in Latvian television news: a comparative analysis of the programmes „Panorāma” and „LNT Ziņas”]

    OpenAIRE

    Faušteina, Signe

    2012-01-01

    The topic of the bachelor's thesis is "News values and agenda in Latvian television news: a comparative analysis of the programmes 'Panorāma' and 'LNT Ziņas'". The thesis addresses two research problems: 1. What is the agenda of the Latvian television news programmes "Panorāma" and "LNT Ziņas"? 2. Which news selection criteria determine how the agenda of the Latvian television news programmes "Panorāma" and "LNT Ziņas" is formed? The thesis is based on theories of agenda-setting and news selection criteria, and also examines the concept of news...

  2. Cancer risk assessment foundation unraveling: new historical evidence reveals that the US National Academy of Sciences (US NAS), Biological Effects of Atomic Radiation (BEAR) Committee Genetics Panel falsified the research record to promote acceptance of the LNT.

    Science.gov (United States)

    Calabrese, Edward J

    2015-04-01

    The NAS Genetics Panel (1956) recommended a switch from a threshold to a linear dose response for radiation risk assessment. To support this recommendation, geneticists on the panel provided individual estimates of the number of children in subsequent generations (one to ten) that would be adversely affected due to transgenerational reproductive cell mutations. It was hoped that there would be close agreement among the individual risk estimates. However, extremely large ranges of variability and uncertainty characterized the wildly divergent expert estimates. The panel members believed that sharing these estimates with the scientific community and general public would strongly undercut their linearity recommendation, as it would have only highlighted their own substantial uncertainties. Essentially, their technical report in the journal Science omitted and misrepresented key adverse reproductive findings in an effort to ensure support for their linearity recommendation. These omissions and misrepresentations not only belie the notion of an impartial and independent appraisal by the NAS Panel, but also amount to falsification and fabrication of the research record at the highest possible level, leading ultimately to the adoption of LNT by governments worldwide. Based on previously unexamined correspondence among panel members and Genetics Panel meeting transcripts, this paper provides the first documentation of these historical developments.

  3. Invalidity of the Fermi liquid theory and magnetic phase transition in quasi-1D dopant-induced armchair-edged graphene nanoribbons

    Science.gov (United States)

    Hoi, Bui Dinh; Davoudiniya, Masoumeh; Yarmohammadi, Mohsen

    2018-04-01

    Based on theoretical tight-binding calculations considering nearest neighbors and the Green's function technique, we show that a magnetic phase transition in both semiconducting and metallic armchair graphene nanoribbons, with widths ranging from 9.83 Å to 69.3 Å, would be observed in the presence of electrons injected by doping. This transition is explained by the temperature-dependent static charge susceptibility, computed through the correlation function of charge density operators. This work shows that the charge concentration of dopants in such systems plays a crucial role in determining the magnetic phase. A variety of multicritical points, such as transition temperatures and maximum susceptibilities, are compared in undoped and doped cases. Our findings show that there exist two different transition temperatures and maximum susceptibilities depending on the ribbon width in doped structures. Another remarkable point is the invalidity (validity) of the Fermi liquid theory in nanoribbon-based systems at weak (strong) concentrations of dopants. The obtained results on the magnetic phase transition in such systems create new potential for magnetic graphene nanoribbon-based devices.

  4. Dose-responses from multi-model inference for the non-cancer disease mortality of atomic bomb survivors.

    Science.gov (United States)

    Schöllnberger, H; Kaiser, J C; Jacob, P; Walsh, L

    2012-05-01

    The non-cancer mortality data for cerebrovascular disease (CVD) and cardiovascular diseases from Report 13 on the atomic bomb survivors published by the Radiation Effects Research Foundation were analysed to investigate the dose-response for the influence of radiation on these detrimental health effects. Various parametric and categorical models (such as linear-no-threshold (LNT) and a number of threshold and step models) were analysed with a statistical selection protocol that rated the model description of the data. Instead of applying the usual approach of identifying one preferred model for each data set, a set of plausible models was applied, and a sub-set of non-nested models was identified that all fitted the data about equally well. Subsequently, this sub-set of non-nested models was used to perform multi-model inference (MMI), an innovative method of mathematically combining different models to allow risk estimates to be based on several plausible dose-response models rather than just relying on a single model of choice. This procedure thereby produces more reliable risk estimates based on a more comprehensive appraisal of model uncertainties. For CVD, MMI yielded a weak dose-response (with a risk estimate of about one-third of the LNT model) below a step at 0.6 Gy and a stronger dose-response at higher doses. The calculated risk estimates are consistent with zero risk below this threshold-dose. For mortalities related to cardiovascular diseases, an LNT-type dose-response was found with risk estimates consistent with zero risk below 2.2 Gy based on 90% confidence intervals. The MMI approach described here resolves a dilemma in practical radiation protection when one is forced to select between models with profoundly different dose-responses for risk estimates.
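    The core of the MMI step is a weighted average of risk estimates over the plausible models, with weights reflecting each model's support from the data. The following sketch illustrates the general idea with information-criterion (Akaike) weights; the model names, AIC values, and per-model risk estimates are hypothetical placeholders, and the actual study used its own selection protocol and weighting.

```python
import numpy as np

# Hypothetical per-model results: name, AIC from the fit, and the excess
# risk predicted at some reference dose.  Values are illustrative only.
models = [
    ("linear-no-threshold", 2412.3, 0.30),
    ("step at 0.6 Gy",      2410.8, 0.11),
    ("threshold at 0.5 Gy", 2411.5, 0.09),
]

aic = np.array([m[1] for m in models])
risk = np.array([m[2] for m in models])

# Akaike weights: each model's relative support given the data.
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

# Multi-model inference: risk estimate averaged over the plausible models.
mmi_risk = float(np.dot(weights, risk))

for (name, _, r), w in zip(models, weights):
    print(f"{name:22s} weight = {w:.2f}  risk = {r:.2f}")
print(f"MMI risk estimate: {mmi_risk:.3f}")
```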

  5. Dose-responses from multi-model inference for the non-cancer disease mortality of atomic bomb survivors

    Energy Technology Data Exchange (ETDEWEB)

    Schoellnberger, H.; Kaiser, J.C.; Jacob, P. [Institute of Radiation Protection, Helmholtz Zentrum Muenchen, Department of Radiation Sciences, Neuherberg (Germany); Walsh, L. [BfS-Federal Office for Radiation Protection, Neuherberg (Germany)

    2012-05-15

    The non-cancer mortality data for cerebrovascular disease (CVD) and cardiovascular diseases from Report 13 on the atomic bomb survivors published by the Radiation Effects Research Foundation were analysed to investigate the dose-response for the influence of radiation on these detrimental health effects. Various parametric and categorical models (such as linear-no-threshold (LNT) and a number of threshold and step models) were analysed with a statistical selection protocol that rated the model description of the data. Instead of applying the usual approach of identifying one preferred model for each data set, a set of plausible models was applied, and a sub-set of non-nested models was identified that all fitted the data about equally well. Subsequently, this sub-set of non-nested models was used to perform multi-model inference (MMI), an innovative method of mathematically combining different models to allow risk estimates to be based on several plausible dose-response models rather than just relying on a single model of choice. This procedure thereby produces more reliable risk estimates based on a more comprehensive appraisal of model uncertainties. For CVD, MMI yielded a weak dose-response (with a risk estimate of about one-third of the LNT model) below a step at 0.6 Gy and a stronger dose-response at higher doses. The calculated risk estimates are consistent with zero risk below this threshold-dose. For mortalities related to cardiovascular diseases, an LNT-type dose-response was found with risk estimates consistent with zero risk below 2.2 Gy based on 90% confidence intervals. The MMI approach described here resolves a dilemma in practical radiation protection when one is forced to select between models with profoundly different dose-responses for risk estimates. (orig.)

  6. Sulfur Deactivation of NOx Storage Catalysts: A Multiscale Modeling Approach

    Directory of Open Access Journals (Sweden)

    Rankovic N.

    2013-09-01

    Full Text Available Lean NOx Trap (LNT) catalysts, a promising solution for reducing noxious nitrogen oxide emissions from lean-burn and Diesel engines, are technologically limited by the presence of sulfur in the exhaust gas stream. Sulfur stemming from both fuels and lubricating oils is oxidized during the combustion event and mainly exists as SOx (SO2 and SO3) in the exhaust. Sulfur oxides interact strongly with the NOx trapping material of an LNT to form thermodynamically favored sulfate species, consequently blocking NOx sorption sites and altering the catalyst operation. Molecular and kinetic modeling represent valuable tools for predicting system behavior and evaluating catalytic performance. The present paper demonstrates how fundamental ab initio calculations can be used as a valuable source for designing kinetic models developed in the IFP Exhaust library, intended for vehicle simulations. The concrete example we chose to illustrate our approach was SO3 adsorption on the model NOx storage material, BaO. SO3 adsorption was described for various sites (terraces, surface steps and kinks) and the bulk, for a closer description of a real storage material. Additional rate and sensitivity analyses provided a deeper understanding of the poisoning phenomena.

  7. Implicações da declaração de invalidade da Diretiva 2006/24 na conservação de dados (“metadados” nos Estados-Membros da UE: uma leitura jusfundamental / The Directive 2006/24 declaration of invalidity and the consequences of metadata retention in the EU Member States: A Fundamental Rights Standards Approach

    Directory of Open Access Journals (Sweden)

    Alessandra Silveira

    2017-04-01

    while the ECJ's response to the questions referred by two national courts (one Swedish and one British) on the effects of that invalidity decision on the domestic legislation that transposed it was still pending. Thus, the authors sought to anticipate the Court's decision in the light of its settled case law and the reaction of the Member States' authorities after the declaration of invalidity of the referred directive. Findings – In the light of the particularities of the protection of fundamental rights in the EU and the legal model of integration, the authors draw some guidelines as to the procedure to be followed in future cases in order to safeguard the effectiveness of Union law, namely when it comes to the legal equality of European citizens.

  8. Regulatory-Science: Biphasic Cancer Models or the LNT—Not Just a Matter of Biology!

    Science.gov (United States)

    Ricci, Paolo F.; Sammis, Ian R.

    2012-01-01

    There is no doubt that prudence and risk aversion must guide public decisions when the associated adverse outcomes are either serious or irreversible. With any carcinogen, the levels of risk and needed protection before and after an event occurs, are determined by dose-response models. Regulatory law should not crowd out the actual beneficial effects from low dose exposures—when demonstrable—that are inevitably lost when it adopts the linear non-threshold (LNT) as its causal model. Because regulating exposures requires planning and developing protective measures for future acute and chronic exposures, public management decisions should be based on minimizing costs and harmful exposures. We address the direct and indirect effects of causation when the danger consists of exposure to very low levels of carcinogens and toxicants. The societal consequences of a policy can be deleterious when that policy is based on a risk assumed by the LNT, in cases where low exposures are actually beneficial. Our work develops the science and the law of causal risk modeling: both are interwoven. We suggest how their relevant characteristics differ, but do not attempt to keep them separated; as we demonstrate, this union, however unsatisfactory, cannot be severed. PMID:22740778

  9. Sieve bootstrapping in the Lee-Carter model

    NARCIS (Netherlands)

    Heinemann, A.

    2013-01-01

    This paper studies an alternative approach to construct confidence intervals for parameter estimates of the Lee-Carter model. First, the procedure of obtaining confidence intervals using regular nonparametric i.i.d. bootstrap is specified. Empirical evidence seems to invalidate this approach as it

  10. Conformal invariance in the long-range Ising model

    NARCIS (Netherlands)

    Paulos, M.F.; Rychkov, S.; van Rees, B.C.; Zan, B.

    We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to

  11. Autonomous classification models in ubiquitous environments

    OpenAIRE

    Abad Arranz, Miguel Angel

    2016-01-01

    The stream-mining approach is defined as a set of cutting-edge techniques designed to process streams of data in real time in order to extract knowledge. In the particular case of classification, stream mining has to adapt its behaviour to the volatile underlying data distributions, a phenomenon known as concept drift. Moreover, it is important to note that concept drift may lead to situations where predictive models become invalid and therefore have to be updated to represent the actual concepts...

  12. Attack Tree Generation by Policy Invalidation

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian; Naeem Akram, R.; Jajodia, S.

    2015-01-01

    Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified

  13. 我國智慧財產訴訟中專利權無效抗辯趨勢報導 The Defense of Patent Invalidity in the Intellectual Property Litigation Special Report

    Directory of Open Access Journals (Sweden)

    陳群顯 Chun-Hsien Chen

    2007-06-01

    Full Text Available In civil intellectual property litigation in Taiwan, constrained by the dual system of public- and private-law litigation, a defendant who considered the plaintiff's asserted intellectual property right invalid could previously only pursue that claim through administrative remedies and could not raise an invalidity defense directly in the civil proceeding, which delayed the civil litigation and caused other inconveniences. Taiwan planned to establish an Intellectual Property Court in 2007, and its establishment will have a large and direct impact on intellectual property litigation. The two bills on which the court's success hinges, the Intellectual Property Court Organization Act and the Intellectual Property Case Adjudication Act, had already been sent to the Legislative Yuan for review; the Adjudication Act passed its third reading on 9 January 2007 and the Organization Act on 5 March 2007. A landmark change in the Adjudication Act is Article 16, Paragraph 1, which provides that when a party claims or pleads that an intellectual property right should be revoked or cancelled, the court shall itself rule on the merits of that claim or defense. In other words, this provision directly changes the current dual public/private litigation system and will have major consequences for the parties to patent litigation. Will the act, as drafted, actually achieve the legislature's purpose, and are complementary measures needed? This article reviews the evolution of the patent invalidity defense in Taiwan's intellectual property litigation, offers analytical opinions, and provides analysis and suggestions on the design of patent litigation systems in other jurisdictions. In the past, the defendant in intellectual property (IP) litigation could not raise the defense of patent invalidity in the civil litigation; the defendant could only file an invalidity action against the IP at issue. Such a judicial system design delays the proceedings of the civil litigation over the IP infringement. The IP Court is proposed to be established in 2007. The establishment of the IP Court will change the current court proceeding of the intellectual

  14. An examination of adaptive cellular protective mechanisms using a multi-stage carcinogenesis model

    International Nuclear Information System (INIS)

    Schollnberger, H.; Stewart, R. D.; Mitchel, R. E. J.; Hofmann, W.

    2004-01-01

    A multi-stage cancer model that describes the putative rate-limiting steps in carcinogenesis was developed and used to investigate the potential impact on lung cancer incidence of the hormesis mechanisms suggested by Feinendegen and Pollycove. In this deterministic cancer model, radiation and endogenous processes damage the DNA of target cells in the lung. Some fraction of the misrepaired or unrepaired DNA damage induces genomic instability and, ultimately, leads to the accumulation of malignant cells. The model accounts for cell birth and death processes. It also includes a rate of malignant transformation and a lag period for tumour formation. Cellular defence mechanisms are incorporated into the model by postulating dose and dose-rate dependent radical scavenging. The accuracy of DNA damage repair also depends on dose and dose rate. Sensitivity studies were conducted to identify critical model inputs and to help define the shapes of the cumulative lung cancer incidence curves that may arise when dose and dose-rate dependent cellular defence mechanisms are incorporated into a multi-stage cancer model. For lung cancer, both linear no-threshold (LNT) and non-LNT shaped responses can be obtained. The reported studies clearly show that it is critical to know whether, and to what extent, multiply damaged DNA sites are formed by endogenous processes. Model inputs that give rise to U-shaped responses are consistent with an effective cumulative lung cancer incidence threshold that may be as high as 300 mGy (4 mGy per year for 75 years). (Author) 11 refs
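    To make the dependence on defence mechanisms concrete, the sketch below numerically evaluates a toy deterministic multi-stage model in which an adaptive-protection factor, standing in for dose-rate dependent radical scavenging and repair fidelity, modulates the rate of passing each rate-limiting step. All functional forms and parameter values are illustrative assumptions, not those of the published model; the point is only to show how LNT-like or U-shaped cumulative incidence curves can both emerge.

```python
import numpy as np

def protection(dose_rate, max_protect=0.3, d0=5.0):
    """Adaptive protection triggered by low dose rates: no effect at zero,
    maximal (factor 1 - max_protect) near d0 mGy/yr, fading away at high
    dose rates.  Purely illustrative."""
    x = dose_rate / d0
    return 1.0 - max_protect * x * np.exp(1.0 - x)

def lifetime_incidence(dose_rate, years=75.0,
                       endogenous=1.0e-2, per_mGy=2.0e-4, stages=3):
    """Toy multi-stage model: the yearly rate of passing each rate-limiting
    step combines endogenous and radiogenic damage, both modulated by the
    same protection factor; cumulative incidence is Armitage-Doll-like,
    proportional to (rate * t)**stages."""
    step_rate = (endogenous + per_mGy * dose_rate) * protection(dose_rate)
    return (step_rate * years) ** stages

for d in [0.0, 2.0, 10.0, 50.0, 200.0]:   # dose rate in mGy per year
    rel = lifetime_incidence(d) / lifetime_incidence(0.0)
    print(f"dose rate {d:6.1f} mGy/yr -> relative incidence {rel:.3f}")
```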

  15. Generative models for chemical structures.

    Science.gov (United States)

    White, David; Wilson, Richard C

    2010-07-26

    We apply recently developed techniques for pattern recognition to construct a generative model for chemical structure. This approach can be viewed as ligand-based de novo design. We construct a statistical model describing the structural variations present in a set of molecules which may be sampled to generate new structurally similar examples. We prevent the possibility of generating chemically invalid molecules, according to our implicit hydrogen model, by projecting samples onto the nearest chemically valid molecule. By populating the input set with molecules that are active against a target, we show how new molecules may be generated that will likely also be active against the target.

  16. Item bias detection in the Hospital Anxiety and Depression Scale using structural equation modeling: comparison with other item bias detection methods

    NARCIS (Netherlands)

    Verdam, M.G.E.; Oort, F.J.; Sprangers, M.A.G.

    Purpose Comparison of patient-reported outcomes may be invalidated by the occurrence of item bias, also known as differential item functioning. We show two ways of using structural equation modeling (SEM) to detect item bias: (1) multigroup SEM, which enables the detection of both uniform and

  17. A classical simulation of nonlinear Jaynes-Cummings and Rabi models in photonic lattices: comment.

    Science.gov (United States)

    Lo, C F

    2014-01-27

    Recently Rodriguez-Lara et al. [Opt. Express 21(10), 12888 (2013)] proposed a classical simulation of the dynamics of the nonlinear Rabi model by propagating classical light fields in a set of two photonic lattices. However, the nonlinear Rabi model has already been rigorously proven to be undefined by Lo [Quantum Semiclass. Opt. 10, L57 (1998)]. Hence, the proposed classical simulation is actually not applicable to the nonlinear Rabi model and the simulation results are completely invalid.

  18. Microkinetic Modeling of Lean NOx Trap Sulfation and Desulfation

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Richard S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-08-01

    A microkinetic reaction sub-mechanism designed to account for the sulfation and desulfation of a commercial lean NOx trap (LNT) is presented. This set of reactions is appended to a previously developed mechanism for the normal storage and regeneration processes in an LNT in order to provide a comprehensive modeling tool. The reactions describing the storage, release, and reduction of sulfur oxides are patterned after those involving NOx, but the number of reactions is kept to the minimum necessary to give an adequate simulation of the experimental observations. Values for the kinetic constants are estimated by fitting semi-quantitatively the somewhat limited experimental data, using a transient plug flow reactor code to model the processes occurring in a single monolith channel. Rigorous thermodynamic constraints are imposed in order to ensure that the overall mechanism is consistent both internally and with the known properties of all gas-phase species. The final mechanism is shown to be capable of reproducing the principal aspects of sulfation/desulfation behavior, most notably (a) the essentially complete trapping of SO2 during normal cycling; (b) the preferential sulfation of NOx storage sites over oxygen storage sites and the consequent plug-like and diffuse sulfation profiles; (c) the degradation of NOx storage and reduction (NSR) capability with increasing sulfation level; and (d) the mix of H2S and SO2 evolved during desulfation by temperature-programmed reduction.

  19. Distribution of shortest path lengths in a class of node duplication network models

    Science.gov (United States)

    Steinbock, Chanania; Biham, Ofer; Katzav, Eytan

    2017-09-01

    We present analytical results for the distribution of shortest path lengths (DSPL) in a network growth model which evolves by node duplication (ND). The model captures essential properties of the structure and growth dynamics of social networks, acquaintance networks, and scientific citation networks, where duplication mechanisms play a major role. Starting from an initial seed network, at each time step a random node, referred to as a mother node, is selected for duplication. Its daughter node is added to the network, forming a link to the mother node, and with probability p to each one of its neighbors. The degree distribution of the resulting network turns out to follow a power-law distribution, thus the ND network is a scale-free network. To calculate the DSPL we derive a master equation for the time evolution of the probability P_t(L = ℓ), ℓ = 1, 2, …, where L is the distance between a pair of nodes and t is the time. Finding an exact analytical solution of the master equation, we obtain a closed form expression for P_t(L = ℓ). The mean distance ⟨L⟩_t and the diameter Δ_t are found to scale like ln t, namely, the ND network is a small-world network. The variance of the DSPL is also found to scale like ln t. Interestingly, the mean distance and the diameter exhibit properties of a small-world network, rather than the ultrasmall-world network behavior observed in other scale-free networks, in which ⟨L⟩_t ~ ln ln t.
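    The growth rule is simple enough that the reported ln t scaling can be checked numerically. Below is a hedged simulation sketch; the seed network, the value of p, and the use of the networkx library are our own choices for illustration and are not taken from the paper.

```python
import random
import networkx as nx

def grow_nd_network(n_nodes, p, seed_size=4, seed=0):
    """Node-duplication growth: each new (daughter) node links to a randomly
    chosen mother node and, with probability p, to each of its neighbours."""
    rng = random.Random(seed)
    g = nx.complete_graph(seed_size)
    for new in range(seed_size, n_nodes):
        mother = rng.randrange(new)             # pick an existing node
        neighbours = list(g.neighbors(mother))
        g.add_edge(new, mother)
        for nb in neighbours:
            if rng.random() < p:
                g.add_edge(new, nb)
    return g

g = grow_nd_network(2000, p=0.3)
# Mean shortest path length; expected to grow roughly like ln t (ln of the
# network size) rather than ln ln t.
giant = g.subgraph(max(nx.connected_components(g), key=len))
print("nodes:", g.number_of_nodes(),
      "mean shortest path:", round(nx.average_shortest_path_length(giant), 3))
```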

  20. Setting standards for radiation protection: A time for change

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, H.W.; Hickman, D.P.

    1996-01-01

    In 1950, the International Commission on Radiological Protection (ICRP) recommended that "certain radiation effects are irreversible and cumulative." Furthermore, the ICRP "strongly recommended that every effort be made to reduce exposures to all types of ionizing radiations to the lowest possible level." Then in 1954, the ICRP published its assumption that human response to ionizing radiation was linear with dose, together with the recommendation that exposures be kept as low as practicable. These concepts are still the foundation of radiation protection policy today, even though, as Evans has stated, "The linear non-threshold (LNT) model was adopted specifically on a basis of mathematical simplicity, not from radio-biological data...." Groups responsible for setting standards for radiation protection should be abreast of new developments and new data as they are published; however, this does not seem to be the case. For example, there have been many reports in scientific, peer-reviewed, and other publications during the last three decades that have shown the LNT model and the policy of As Low As Reasonably Achievable (ALARA) to be invalid. However, none of these reports has been refuted or even discussed by standard-setting groups. We believe this mandates a change in the standard-setting process.

  1. Simulation of deposition and activity distribution of radionuclides in human airways

    International Nuclear Information System (INIS)

    Farkas, A.; Balashazy, I.; Szoke, I.; Hofmann, W.; Golser, R.

    2002-01-01

    The aim of our research activities is to model the biological processes related to the development of lung cancer in the large central airways, observed in uranium miners and caused by the inhalation of radionuclides (especially alpha-emitting radon decay products). Statistical data show that in uranium miners lung cancer has developed mainly in the 3rd to 5th airway generations, and especially in the right upper lobe. It is therefore important to study the physical and biological effects in this section of the human airways to find relations between the radiation dose and the adverse health effects. These results may provide useful information about the validity or invalidity of the currently used LNT (Linear-No-Threshold) dose-effect hypothesis at low doses

  2. Evidence for beneficial low level radiation effects and radiation hormesis

    International Nuclear Information System (INIS)

    Feinendegen, L.E.

    2005-01-01

    Low doses in the mGy range cause a dual effect on cellular DNA. One effect concerns a relatively low probability of DNA damage per energy deposition event; it increases in proportion to dose, with possible bystander effects operating. This damage at background radiation exposure is orders of magnitude lower than that from endogenous sources, such as ROS. The other effect, at comparable doses, is an easily observable adaptive protection against DNA damage from any source, mainly endogenous, depending on cell type, species, and metabolism. Protective responses express adaptive responses to metabolic perturbations and also mimic oxygen stress responses. Adaptive protection operates in terms of DNA damage prevention and repair, and of immune stimulation. It develops with a delay of hours, may last for days to months, and increasingly disappears at doses beyond about 100 to 200 mGy. Radiation-induced apoptosis and terminal cell differentiation also occur at higher doses and add to protection by reducing genomic instability and the number of mutated cells in tissues. At low doses, damage reduction by adaptive protection against damage from endogenous sources predictably outweighs radiogenic damage induction. The analysis of the consequences of this particular low-dose scenario shows that the linear-no-threshold (LNT) hypothesis for cancer risk is scientifically unfounded and appears to be invalid in favor of a threshold or hormesis. This is consistent with data from both animal studies and human epidemiological observations on low-dose induced cancer. The LNT hypothesis should be abandoned and replaced by a hypothesis that is scientifically justified. The appropriate model should include terms for both linear and non-linear response probabilities. Maintaining the LNT hypothesis as the basis for radiation protection causes unreasonable fear and expense. (author)

  3. Leukemia and ionizing radiation revisited

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler & Associates Inc., Vaughan, Ontario (Canada); Welsh, J.S. [Loyola University-Chicago, Dept. or Radiation Oncology, Stritch School of Medicine, Maywood, Illinois (United States)

    2016-03-15

    A world-wide radiation health scare was created in the late 1950s to stop the testing of atomic bombs and block the development of nuclear energy. In spite of the large amount of evidence that contradicts the cancer predictions, this fear continues. It impairs the use of low radiation doses in medical diagnostic imaging and radiation therapy. This brief article revisits the second of two key studies, which revolutionized radiation protection, and identifies a serious error that was missed. This error in analyzing the leukemia incidence among the 195,000 survivors in the combined exposed populations of Hiroshima and Nagasaki invalidates use of the LNT model for assessing the risk of cancer from ionizing radiation. The threshold acute dose for radiation-induced leukemia, based on about 96,800 humans, is identified to be about 50 rem, or 0.5 Sv. It is reasonable to expect that the thresholds for other cancer types are higher than this level. No predictions or hints of excess cancer risk (or any other health risk) should be made for an acute exposure below this value until there is scientific evidence to support the LNT hypothesis. (author)

  4. Internal Models Support Specific Gaits in Orthotic Devices

    DEFF Research Database (Denmark)

    Matthias Braun, Jan; Wörgötter, Florentin; Manoonpong, Poramate

    2014-01-01

    such limitations is to supply the patient—via the orthosis—with situation-dependent gait models. To achieve this, we present a method for gait recognition using model invalidation. We show that these models are capable to predict the individual patient's movements and supply the correct gait. We investigate...... the system's accuracy and robustness on a Knee-Ankle-Foot-Orthosis, introducing behaviour changes depending on the patient's current walking situation. We conclude that the here presented model-based support of different gaits has the power to enhance the patient's mobility....

  5. Correction to numerical advection of moments of the particle size distribution in Eulerian models

    Science.gov (United States)

    Chang, L.; Schwartz, S.; McGraw, B.; Lewis, E.

    2007-12-01

    The quadrature method of moments (QMOM) offers an alternative that is more efficient than sectional methods and more accurate than modal methods. If QMOM is incorporated into an Eulerian model, invalid moment sets are produced by nonlinear transport methods when valid moments are transported as separate tracers. A non-negative least squares (NNLS) solution eliminates the problem without requiring modification of the transport algorithm. NNLS was evaluated for two representative advection schemes in one dimension over 10^4 test cases.
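    The NNLS repair step can be illustrated generically: given a transported (possibly non-realizable) moment vector, find non-negative quadrature weights on a fixed grid of abscissas whose moments best match it; the reconstructed moments are then valid by construction. The grid, moment orders, and numbers below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.optimize import nnls

# Moment orders tracked as separate tracers (k = 0..5) and a moment vector
# that nonlinear advection has rendered non-realizable (illustrative values;
# note m2**2 > m1*m3, which a distribution on positive support cannot give).
orders = np.arange(6)
m_bad = np.array([1.00, 0.95, 1.10, 0.80, 1.90, 1.20])

# Fixed grid of candidate abscissas (particle sizes); any reasonable grid
# spanning the expected size range will do for this sketch.
x = np.linspace(0.1, 3.0, 30)
A = np.vstack([x ** k for k in orders])          # A[k, j] = x_j**k

# Non-negative weights that reproduce the transported moments as closely as
# possible; A @ w is then a valid (realizable) moment set by construction.
w, residual = nnls(A, m_bad)
m_fixed = A @ w

print("residual:", round(residual, 4))
print("repaired moments:", np.round(m_fixed, 3))
```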

  6. Classical Antiferromagnetism in Kinetically Frustrated Electronic Models

    Science.gov (United States)

    Sposetti, C. N.; Bravo, B.; Trumper, A. E.; Gazza, C. J.; Manuel, L. O.

    2014-05-01

    We study, by means of the density matrix renormalization group, the infinite U Hubbard model—with one hole doped away from half filling—in triangular and square lattices with frustrated hoppings, which invalidate Nagaoka's theorem. We find that these kinetically frustrated models have antiferromagnetic ground states with classical local magnetization in the thermodynamic limit. We identify the mechanism of this kinetic antiferromagnetism with the release of the kinetic energy frustration, as the hole moves in the established antiferromagnetic background. This release can occur in two different ways: by a nontrivial spin Berry phase acquired by the hole, or by the effective vanishing of the hopping amplitude along the frustrating loops.

  7. HOW RADIATION EXPOSURE HISTORIES INFLUENCE PHYSICIAN IMAGING DECISIONS: A MULTICENTER RADIOLOGIST SURVEY STUDY

    Science.gov (United States)

    Pandharipande, Pari V.; Eisenberg, Jonathan D.; Avery, Laura L.; Gunn, Martin L.; Kang, Stella K.; Megibow, Alec J.; Turan, Ekin A.; Harvey, H. Benjamin; Kong, Chung Yin; Dowling, Emily C.; Halpern, Elkan F.; Donelan, Karen; Gazelle, G. Scott

    2014-01-01

    Purpose To evaluate the influence of patient-level radiation exposure histories on radiologists’ imaging decisions. Materials and Methods We conducted an IRB exempt, HIPAA compliant, physician survey study in three academic medical centers. Radiologists were asked to make a prospective imaging recommendation for a hypothetical patient with a history of multiple CT scans. We queried radiologists’ decision-making, evaluating whether they: incorporated cancer risks from previous imaging; reported acceptance (or rejection) of the linear no-threshold (LNT) model; and understood LNT model implications in this setting. Consistency between radiologists’ decisions and their LNT model beliefs was evaluated – those acting in accordance with the LNT model were expected to disregard previously incurred cancer risks. Fisher’s exact test was used to verify the generalizability of results across institutions and training levels (residents, fellows, and attendings). Results Fifty-six percent (322/578) of radiologists completed the survey. Most (92% (295/322)) incorporated risks from the patient’s exposure history during decision-making. Most (61% (196/322) also reported acceptance of the LNT model. Fewer (25% (79/322)) rejected the LNT model, and 15% (47/322) could not judge. Among radiologists reporting LNT model acceptance or rejection, the minority (36% (98/275)) made decisions in a manner consistent with their LNT model beliefs. This finding was not statistically different across institutions (p=0.070) or training levels (p=0.183). Few radiologists (4% (13/322)) demonstrated an accurate understanding of LNT model implications. Conclusion Most radiologists, when faced with patient exposure histories, make decisions that contradict their self-reported acceptance of the LNT model and the LNT model itself. These findings underscore a need for related educational initiatives. PMID:23701064

  8. A new model integrating short- and long-term aging of copper added to soils.

    Directory of Open Access Journals (Sweden)

    Saiqi Zeng

    Full Text Available Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models had been developed to separately predict short-term and long-term aging of Cu added to soils, with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t^1/2), and in the long-term model it was linearly related to the natural logarithm of incubation time (ln t). Each model could predict the short-term or long-term aging process separately, but neither could describe both within one model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was also incorporated in this model. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content, and temperature.
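    A hedged numerical sketch of such a model follows. The erfc-of-sqrt(t) form for the diffusion term is the one named in the abstract, but the specific equation, the Arrhenius-style temperature factor, the pH term, and every coefficient below are our own illustrative assumptions rather than the fitted parameters of the published model.

```python
import numpy as np
from scipy.special import erfc

def labile_fraction(t_days, temp_K, pH=6.5,
                    k0=0.05, Ea=30e3, R=8.314, pH_coef=0.02):
    """Toy aging model: the labile (isotopically exchangeable) Cu fraction
    declines as diffusion into soil micropores proceeds.  The diffusion term
    uses the complementary error function of sqrt(t), the form the abstract
    reports as adequate for both short- and long-term data."""
    k = k0 * np.exp(-Ea / R * (1.0 / temp_K - 1.0 / 298.15))  # Arrhenius-like
    k *= 1.0 + pH_coef * (pH - 6.5)                           # mild pH effect
    return 0.4 + 0.6 * erfc(k * np.sqrt(t_days))              # 0.4 = residual pool

for t in [1, 7, 30, 180, 720]:
    print(f"t = {t:4d} d  labile fraction ~ {labile_fraction(t, 298.15):.2f}")
```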

  9. Discriminating between rival biochemical network models: three approaches to optimal experiment design.

    Science.gov (United States)

    Mélykúti, Bence; August, Elias; Papachristodoulou, Antonis; El-Samad, Hana

    2010-04-01

    The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments, and ultimately unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain available data. In order to rule out the incorrect mechanisms, one needs to invalidate incorrect models. At this point, new experiments maximizing the difference between the measured values of alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures. In this paper we develop methodologies for the optimal design of experiments with the aim of discriminating between different mathematical models of the same biological system. The first approach determines the 'best' initial condition that maximizes the L2 (energy) distance between the outputs of the rival models. In the second approach, we maximize the L2-distance of the outputs by designing the optimal external stimulus (input) profile of unit L2-norm. Our third method uses optimized structural changes (corresponding, for example, to parameter value changes reflecting gene knock-outs) to achieve the same goal. The numerical implementation of each method is considered in an example, signal processing in starving Dictyostelium amoebae. Model-based design of experiments improves both the reliability and the efficiency of biochemical network model discrimination. This opens the way to model invalidation, which can be used to perfect our understanding of biochemical networks. Our general problem formulation together with the three proposed experiment design methods give the practitioner new tools for a systems biology approach to experiment design.
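    The first design criterion, choosing the initial condition that maximizes the L2 (energy) distance between the rival models' outputs, can be sketched with two toy candidate models; the model equations, candidate grid, and integration horizon below are illustrative assumptions, not the Dictyostelium signalling models used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two rival toy models of the same one-state system (e.g. a signalling species):
def model_a(t, y, k=1.0):          # first-order decay
    return [-k * y[0]]

def model_b(t, y, k=1.0, K=2.0):   # saturating (Michaelis-Menten-like) decay
    return [-k * y[0] / (K + y[0])]

t_eval = np.linspace(0.0, 5.0, 200)
dt = t_eval[1] - t_eval[0]

def l2_distance(y0):
    """L2 (energy) distance between the two models' outputs started from y0."""
    ya = solve_ivp(model_a, (0.0, 5.0), [y0], t_eval=t_eval).y[0]
    yb = solve_ivp(model_b, (0.0, 5.0), [y0], t_eval=t_eval).y[0]
    return np.sqrt(np.sum((ya - yb) ** 2) * dt)

# Pick the candidate initial condition that best separates the rival models.
candidates = np.linspace(0.5, 10.0, 20)
best = max(candidates, key=l2_distance)
print(f"most discriminating initial condition: y0 = {best:.2f}")
```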

  10. Discriminating between rival biochemical network models: three approaches to optimal experiment design

    Directory of Open Access Journals (Sweden)

    August Elias

    2010-04-01

    Full Text Available Abstract Background The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments, and ultimately unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain available data. In order to rule out the incorrect mechanisms, one needs to invalidate incorrect models. At this point, new experiments maximizing the difference between the measured values of alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures. Results In this paper we develop methodologies for the optimal design of experiments with the aim of discriminating between different mathematical models of the same biological system. The first approach determines the 'best' initial condition that maximizes the L2 (energy) distance between the outputs of the rival models. In the second approach, we maximize the L2-distance of the outputs by designing the optimal external stimulus (input) profile of unit L2-norm. Our third method uses optimized structural changes (corresponding, for example, to parameter value changes reflecting gene knock-outs) to achieve the same goal. The numerical implementation of each method is considered in an example, signal processing in starving Dictyostelium amoebae. Conclusions Model-based design of experiments improves both the reliability and the efficiency of biochemical network model discrimination. This opens the way to model invalidation, which can be used to perfect our understanding of biochemical networks. Our general problem formulation together with the three proposed experiment design methods give the practitioner new tools for a systems

  11. Diurnal cloud cycle biases in climate models.

    Science.gov (United States)

    Yin, Jun; Porporato, Amilcare

    2017-12-22

    Clouds' efficiency at reflecting solar radiation and trapping the terrestrial radiation is strongly modulated by the diurnal cycle of clouds (DCC). Much attention has been paid to mean cloud properties due to their critical role in climate projections; however, less research has been devoted to the DCC. Here we quantify the mean, amplitude, and phase of the DCC in climate models and compare them with satellite observations and reanalysis data. While the mean appears to be reliable, the amplitude and phase of the DCC show marked inconsistencies, inducing overestimation of radiation in most climate models. In some models, DCC appears slightly shifted over the ocean, likely as a result of tuning and fortuitously compensating the large DCC errors over the land. While this model tuning does not seem to invalidate climate projections because of the limited DCC response to global warming, it may potentially increase the uncertainty of climate predictions.

  12. Stability of the electroweak ground state in the Standard Model and its extensions

    International Nuclear Information System (INIS)

    Di Luzio, Luca; Isidori, Gino; Ridolfi, Giovanni

    2016-01-01

    We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  13. Stability of the electroweak ground state in the Standard Model and its extensions

    Energy Technology Data Exchange (ETDEWEB)

    Di Luzio, Luca, E-mail: diluzio@ge.infn.it [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Isidori, Gino [Department of Physics, University of Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy)

    2016-02-10

    We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  14. Stability of the electroweak ground state in the Standard Model and its extensions

    Directory of Open Access Journals (Sweden)

    Luca Di Luzio

    2016-02-01

    Full Text Available We review the formalism by which the tunnelling probability of an unstable ground state can be computed in quantum field theory, with special reference to the Standard Model of electroweak interactions. We describe in some detail the approximations implicitly adopted in such calculation. Particular attention is devoted to the role of scale invariance, and to the different implications of scale-invariance violations due to quantum effects and possible new degrees of freedom. We show that new interactions characterized by a new energy scale, close to the Planck mass, do not invalidate the main conclusions about the stability of the Standard Model ground state derived in absence of such terms.

  15. Non-white noise in fMRI: does modelling have an impact?

    DEFF Research Database (Denmark)

    Lund, Torben E; Madsen, Kristoffer H; Sidaros, Karam

    2006-01-01

    not accounted for by rigid body registration. These contributions give rise to temporal autocorrelation in the residuals of the fMRI signal and invalidate the statistical analysis as the errors are no longer independent. The low-frequency drift is often removed by high-pass filtering, and other effects...... are typically modelled as an autoregressive (AR) process. In this paper, we propose an alternative approach: Nuisance Variable Regression (NVR). By inclusion of confounding effects in a general linear model (GLM), we first confirm that the spatial distribution of the various fMRI noise sources is similar...
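    As a hedged illustration of the NVR idea (not the authors' implementation), the sketch below augments a GLM design matrix with nuisance regressors, here synthetic motion parameters and a physiological oscillation, fits by ordinary least squares, and checks that the residual autocorrelation drops; all signals and regressor choices are synthetic and chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # number of fMRI volumes

# Task regressor (boxcar) plus an intercept.
task = (np.arange(n) // 20) % 2
X_task = np.column_stack([np.ones(n), task])

# Nuisance regressors: 6 synthetic motion parameters and one physiological
# (e.g. cardiac-aliased) oscillation.
motion = rng.normal(scale=0.1, size=(n, 6)).cumsum(axis=0)
physio = np.sin(2 * np.pi * 0.08 * np.arange(n))[:, None]
X_nvr = np.hstack([X_task, motion, physio])

# Synthetic voxel time series: task effect + structured noise + white noise.
y = 1.0 + 0.8 * task + 0.5 * motion[:, 0] + 0.7 * physio[:, 0] \
    + rng.normal(scale=0.5, size=n)

for name, X in [("task-only GLM", X_task), ("GLM + nuisance (NVR)", X_nvr)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Lag-1 autocorrelation of the residuals: NVR should reduce it.
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    print(f"{name:22s} task beta = {beta[1]:.2f}, residual lag-1 AC = {r1:.2f}")
```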

  16. Understanding lack of understanding : Invalidation in rheumatic diseases

    NARCIS (Netherlands)

    Kool, M.B.

    2012-01-01

    The quality of life of patients with chronic rheumatic diseases is negatively influenced by symptoms such as pain, fatigue, and stiffness, and secondary symptoms such as physical limitations and depressive mood. On top of this burden, some patients experience negative responses from others, such as

  17. Leaf arrangements are invalid in the taxonomy of orchid species

    Directory of Open Access Journals (Sweden)

    Anna Jakubska-Busse

    2017-07-01

    Full Text Available The selection and validation of proper distinguishing characters are of crucial importance in taxonomic revisions. The modern classifications of orchids utilize the molecular tools, but still the selection and identification of the material used in these studies is for the most part related to general species morphology. One of the vegetative characters quoted in orchid manuals is leaf arrangement. However, phyllotactic diversity and ontogenetic changeability have not been analysed in detail in reference to particular taxonomic groups. Therefore, we evaluated the usefulness of leaf arrangements in the taxonomy of the genus Epipactis Zinn, 1757. Typical leaf arrangements in shoots of this genus are described as distichous or spiral. However, in the course of field research and screening of herbarium materials, we indisputably disproved the presence of distichous phyllotaxis in the species Epipactis purpurata Sm. and confirmed the spiral Fibonacci pattern as the dominant leaf arrangement. In addition, detailed analyses revealed the presence of atypical decussate phyllotaxis in this species, as well as demonstrated the ontogenetic formation of pseudowhorls. These findings confirm ontogenetic variability and plasticity in E. purpurata. Our results are discussed in the context of their significance in delimitations of complex taxa within the genus Epipactis.

  18. 25 CFR 11.604 - Declaration of invalidity.

    Science.gov (United States)

    2010-04-01

    ...) A party lacked capacity to consent to the marriage, either because of mental incapacity or infirmity... the marriage by sexual intercourse and at the time the marriage was entered into, the other party did... lacked capacity to consent. ...

  19. Invalid Cookery, Nursing and Domestic Medicine in Ireland, c. 1900.

    Science.gov (United States)

    Adelman, Juliana

    2018-04-01

    This article uses a 1903 text by the Irish cookery instructress Kathleen Ferguson to examine the intersections between food, medicine and domestic work. Sick Room Cookery, and numerous texts like it, drew on traditions of domestic medicine and Anglo-Irish gastronomy while also seeking to establish female expertise informed by modern science and medicine. Placing the text in its broader cultural context, the article examines how it fit into the tradition of domestic medicine and the emerging profession of domestic science. Giving equal weight to the history of food and of medicine, and seeing each as shaped by historical context, help us to see the practice of feeding the sick in a different way.

  20. OPERA and MINOS Experimental Result Prove Big Bang Theory Invalid

    Science.gov (United States)

    Pressler, David E.

    2012-03-01

    The greatest error in the history of science is the misinterpretation of the Michelson-Morley experiment. The speed of light was measured to travel at the same speed in all three directions (x, y, z axes) in one's own inertial reference system; however, c will always be measured as having a different speed in all other inertial frames at different energy levels. Time slows down due to motion or a gravity field. Time is the rate of physical process. Speed = Distance/Time. If the time changes, the distance must change. Therefore, BOTH mirrors must move towards the center of the interferometer and space must contract in all three directions; C-Space. Gravity is a C-Space condition, and is the cause of redshift in our universe, not motion. The universe is not expanding. OPERA results are directly indicated; at the surface of the earth, the strength of the gravity field is at a maximum; below the earth's surface, time and space are less distorted (C-Space); therefore, c is faster. Newtonian mechanics dictates that a spherical shell of matter at greater radii, with uniform density, produces no net force on an observer located centrally. An observer located on the sphere's surface, like our Earth's, or on a large sphere, like one located in a remote galaxy, will construct a picture centered on himself identical to the one centered inside the spherical shell of mass. Both observers will view the incoming radiation, emitted by the other observer, as redshifted, because they lie on each other's radial line. The Universe is static and very old.

  1. Statistical challenges in modelling the health consequences of social mobility: the need for diagonal reference models.

    Science.gov (United States)

    van der Waal, Jeroen; Daenekindt, Stijn; de Koster, Willem

    2017-12-01

    Various studies on the health consequences of socio-economic position address social mobility. They aim to uncover whether health outcomes are affected by: (1) social mobility, besides, (2) social origin, and (3) social destination. Conventional methods do not, however, estimate these three effects separately, which may produce invalid conclusions. We highlight that diagonal reference models (DRMs) overcome this problem, which we illustrate by focusing on overweight/obesity (OWOB). Using conventional methods (logistic-regression analyses with dummy variables) and DRMs, we examine the effects of intergenerational educational mobility on OWOB (BMI ≥ 25 kg/m 2 ) using survey data representative of the Dutch population aged 18-45 (1569 males, 1771 females). Conventional methods suggest that mobility effects on OWOB are present. Analyses with DRMs, however, indicate that no such effects exist. Conventional analyses of the health consequences of social mobility may produce invalid results. We, therefore, recommend the use of DRMs. DRMs also validly estimate the health consequences of other types of social mobility (e.g. intra- and intergenerational occupational and income mobility) and status inconsistency (e.g. in educational or occupational attainment between partners).
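    A hedged sketch of the DRM idea: the expected outcome for a mobile person is a weighted mixture of the mean for their class of origin and the mean for their class of destination, so origin, destination, and (via additional terms, omitted here) mobility effects can be estimated separately. The synthetic data, class risks, and maximum-likelihood fit below are illustrative assumptions, not the study's data or software.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: origin and destination education class (0, 1, 2) and a
# binary overweight/obesity (OWOB) outcome.
rng = np.random.default_rng(1)
n = 3000
origin = rng.integers(0, 3, n)
dest = rng.integers(0, 3, n)
true_mu = np.array([0.55, 0.45, 0.35])          # class-specific OWOB risk
p_true = 0.6                                     # weight of the origin class
y = rng.binomial(1, p_true * true_mu[origin] + (1 - p_true) * true_mu[dest])

def neg_loglik(theta):
    """Diagonal reference model: E[y] = p*mu[origin] + (1-p)*mu[dest]."""
    p = 1 / (1 + np.exp(-theta[0]))              # keep weight in (0, 1)
    mu = 1 / (1 + np.exp(-theta[1:]))            # keep class risks in (0, 1)
    pr = np.clip(p * mu[origin] + (1 - p) * mu[dest], 1e-6, 1 - 1e-6)
    return -np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr))

fit = minimize(neg_loglik, x0=np.zeros(4), method="Nelder-Mead")
p_hat = 1 / (1 + np.exp(-fit.x[0]))
mu_hat = 1 / (1 + np.exp(-fit.x[1:]))
print(f"estimated origin weight p = {p_hat:.2f}, class risks = {np.round(mu_hat, 2)}")
```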

  2. [Consolidating the medical model of disability: on poliomyelitis and constitution of orthopedic surgery and orthopaedics as a speciality in Spain (1930-1950)].

    Science.gov (United States)

    Martínez-Pérez, José

    2009-01-01

    At the beginning of the 1930s, various factors made it necessary to transform one of the institutions which was renowned for its work regarding the social reinsertion of the disabled, that is, the Instituto de Reeducación Profesional de Inválidos del Trabajo (Institute for Occupational Retraining of Invalids of Work). The economic crisis of 1929 and the legislative reform aimed at regulating occupational accidents highlighted the failings of this institution to fulfill its objectives. After a time of uncertainty, the centre was renamed the Instituto Nacional de Reeducación de Inválidos (National Institute for Retraining of Invalids). This was done to take advantage of its work in championing the recovery of all people with disabilities.This work aims to study the role played in this process by the poliomyelitis epidemics in Spain at this time. It aims to highlight how this disease justified the need to continue the work of a group of professionals and how it helped to reorient the previous programme to re-educate the "invalids." Thus we shall see the way in which, from 1930 to 1950, a specific medical technology helped to consolidate an "individual model" of disability and how a certain cultural stereotype of those affected developed as a result. Lastly, this work discusses the way in which all this took place in the midst of a process of professional development of orthopaedic surgeons.

  3. Effect of Flux Adjustments on Temperature Variability in Climate Models

    International Nuclear Information System (INIS)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-01-01

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability

  4. Biologically-based mechanistic models of radiation-related carcinogenesis applied to epidemiological data.

    Science.gov (United States)

    Rühm, Werner; Eidemüller, Markus; Kaiser, Jan Christian

    2017-10-01

    Biologically-based mechanistic models that are used in combining current understanding of human carcinogenesis with epidemiological studies were reviewed. Assessment was made of how well they fit the data, whether they account for non-linear radiobiological low-dose effects, and whether they suggest any implications for the dose response at low doses and dose rates. However, the present paper does not make an attempt to provide a complete review of the existing literature on biologically-based models and their application to epidemiological data. In most studies the two-stage clonal expansion (TSCE) model of carcinogenesis was used. The model provided robust estimates of identifiable parameters and radiation risk. While relatively simple, it is flexible, so that more stages can easily be added, and tests made of various types of radiation action. In general, the model performed similarly or better than descriptive excess absolute and excess relative risk models, in terms of quality of fit and number of parameters. Only very rarely the shape of dose-response predicted by the models was investigated. For some tumors, when more detailed biological information was known, additional pathways were included in the model. The future development of these models will benefit from growing knowledge on carcinogenesis processes, and in particular from use of biobank tissue samples and advances in omics technologies. Their use appears a promising approach to investigate the radiation risk at low doses and low dose rates. However, the uncertainties involved are still considerable, and the models provide only a simplified description of the underlying complexity of carcinogenesis. Current assumptions in radiation protection including the linear-non-threshold (LNT) model are not in contradiction to what is presently known on the process of cancer development.
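    For readers unfamiliar with the TSCE structure, the following hedged Monte Carlo sketch simulates its rate-limiting events (initiation of normal cells, clonal expansion or death of intermediate cells, and malignant transformation) with a simple Gillespie-style algorithm; all rate constants are hypothetical and chosen only so the simulation runs quickly, not fitted to any cohort.

```python
import numpy as np

def tsce_first_malignancy(nu_X=0.1, alpha=0.3, beta=0.25, mu=2e-3,
                          t_max=70.0, rng=None):
    """Two-stage clonal expansion (TSCE) as a Gillespie-style simulation:
    normal cells are initiated at total rate nu_X, intermediate cells divide
    (alpha), die or differentiate (beta), or transform malignantly (mu).
    Returns the age at the first malignant cell, or None if none by t_max."""
    rng = rng or np.random.default_rng()
    t, n_int = 0.0, 0
    while True:
        total = nu_X + (alpha + beta + mu) * n_int
        t += rng.exponential(1.0 / total)
        if t >= t_max:
            return None
        u = rng.random() * total
        if u < nu_X or u < nu_X + alpha * n_int:
            n_int += 1                      # initiation or clonal division
        elif u < nu_X + (alpha + beta) * n_int:
            n_int -= 1                      # death / terminal differentiation
        else:
            return t                        # malignant transformation

rng = np.random.default_rng(2)
ages = [tsce_first_malignancy(rng=rng) for _ in range(200)]
hits = [a for a in ages if a is not None]
print(f"simulated incidence by age 70: {len(hits) / len(ages):.2f}; "
      f"mean age at first malignant cell: {np.mean(hits):.1f}")
```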

  5. Standardization of a multivariate calibration model applied to the determination of chromium in tanning sewage.

    Science.gov (United States)

    Sales, F; Rius, A; Callao, M P; Rius, F X

    2000-06-21

    A multivariate standardization procedure was used to extend the lifetime of a multivariate partial least squares (PLS) calibration model for determining chromium in tanning sewage. The Kennard/Stone algorithm was used to select the transfer samples and the F-test was used to decide whether slope/bias correction (SBC) or piecewise direct standardization (PDS) had to be applied. Special attention was paid to the transfer samples since the process can be invalidated if samples are selected which behave anomalously. The results of the F-test were extremely sensitive to heterogeneity in the transfer set. In these cases, it should be taken as an interpretation tool.
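    Two of the ingredients named above, Kennard/Stone selection of transfer samples and slope/bias correction, can be sketched generically as below; the synthetic responses, the number of transfer samples, and the application of Kennard/Stone to a one-dimensional response (rather than to full spectra) are illustrative simplifications, and the F-test that decides between SBC and PDS is not reproduced.

```python
import numpy as np

def kennard_stone(X, k):
    """Select k transfer samples spanning the X-space: start from the pair
    farthest apart, then repeatedly add the sample farthest from the set."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    while len(selected) < k:
        remaining = [i for i in range(len(X)) if i not in selected]
        next_i = max(remaining, key=lambda i: d[i, selected].min())
        selected.append(next_i)
    return selected

rng = np.random.default_rng(3)
# Synthetic "master" predictions vs. reference values on the changed instrument:
y_ref = rng.uniform(1.0, 10.0, 40)                            # e.g. Cr concentration
y_master_pred = 1.05 * y_ref + 0.3 + rng.normal(0, 0.1, 40)   # drifted response

# Pick transfer samples with Kennard/Stone (here on the 1-D response itself).
idx = kennard_stone(y_ref[:, None], k=5)

# Slope/bias correction: regress reference values on master predictions for
# the transfer samples, then apply the correction to all predictions.
slope, bias = np.polyfit(y_master_pred[idx], y_ref[idx], 1)
y_corrected = slope * y_master_pred + bias
rmse = np.sqrt(np.mean((y_corrected - y_ref) ** 2))
print(f"slope = {slope:.3f}, bias = {bias:.3f}, RMSE after SBC = {rmse:.3f}")
```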

  6. Non-targeted effects of radiation: applications for radiation protection and contribution to LNT discussion

    International Nuclear Information System (INIS)

    Belyakov, O.V.; Folkard, M.; Prise, K.M.; Michael, B.D.; Mothersill, C.

    2002-01-01

    According to the target theory of radiation induced effects (Lea, 1946), which forms a central core of radiation biology, DNA damage occurs during or very shortly after irradiation of the nuclei in targeted cells and the potential for biological consequences can be expressed within one or two cell generations. A range of evidence has now emerged that challenges the classical effects resulting from targeted damage to DNA. These effects have also been termed non-(DNA)-targeted (Ward, 1999) and include radiation-induced bystander effects (Iyer and Lehnert, 2000a), genomic instability (Wright, 2000), adaptive response (Wolff, 1998), low dose hyper-radiosensitivity (HRS) (Joiner, et al., 2001), delayed reproductive death (Seymour, et al., 1986) and induction of genes by radiation (Hickman, et al., 1994). An essential feature of non-targeted effects is that they do not require a direct nuclear exposure by irradiation to be expressed and they are particularly significant at low doses. This new evidence suggests a new paradigm for radiation biology that challenges the universality of target theory. In this paper we will concentrate on the radiation-induced bystander effects because of its particular importance for radiation protection

  7. THE HIGH BACKGROUND RADIATION AREA IN RAMSAR IRAN: GEOLOGY, NORM, BIOLOGY, LNT, AND POSSIBLE REGULATORY FUN

    Energy Technology Data Exchange (ETDEWEB)

    Karam, P. A.

    2002-02-25

    The city of Ramsar Iran hosts some of the highest natural radiation levels on earth, and over 2000 people are exposed to radiation doses ranging from 1 to 26 rem per year. Curiously, inhabitants of this region seem to have no greater incidence of cancer than those in neighboring areas of normal background radiation levels, and preliminary studies suggest their blood cells experience fewer induced chromosomal abnormalities when exposed to 150 rem "challenge" doses of radiation than do the blood cells of their neighbors. This paper will briefly describe the unique geology that gives Ramsar its extraordinarily high background radiation levels. It will then summarize the studies performed to date and will conclude by suggesting ways to incorporate these findings (if they are borne out by further testing) into future radiation protection standards.

  8. Anomalous dielectric nonlinearity and dielectric relaxation in xBST-(1- x) (LMT-LNT) ceramics

    Science.gov (United States)

    Liu, Cheng; Liu, Peng

    2011-11-01

    xwt%Ba0.6Sr0.4TiO3-(1-x)wt%[0.4La(Mg0.5Ti0.5)O3-0.6(La0.5Na0.5)TiO3] (x = 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 0.95) ceramics were prepared via a traditional solid-state reaction route. Interesting anomalous dielectric nonlinearity (ADN), in which permittivity increased with the dc bias electric field (E-field), and low-temperature dielectric relaxation (LTDR) behaviors were observed within an x range of 0.30-0.70 for the first time. Based on our experimental facts, it was suggested that the LTDR originated from a charge-related process involving electron-oxygen vacancy pairs during thermal stimulation, while the ADN was related to a metastable state of polarized nano-regions (PNRs).

  9. Mechanistic Investigation of the Reduction of NOx over Pt- and Rh-Based LNT Catalysts

    Directory of Open Access Journals (Sweden)

    Lukasz Kubiak

    2016-03-01

    Full Text Available The influence of the noble metals (Pt vs. Rh on the NOx storage reduction performances of lean NOx trap catalysts is here investigated by transient micro-reactor flow experiments. The study indicates a different behavior during the storage in that the Rh-based catalyst showed higher storage capacity at high temperature as compared to the Pt-containing sample, while the opposite is seen at low temperatures. It is suggested that the higher storage capacity of the Rh-containing sample at high temperature is related to the higher dispersion of Rh as compared to Pt, while the lower storage capacity of Rh-Ba/Al2O3 at low temperature is related to its poor oxidizing properties. The noble metals also affect the catalyst behavior upon reduction of the stored NOx, by decreasing the threshold temperature for the reduction of the stored NOx. The Pt-based catalyst promotes the reduction of the adsorbed NOx at lower temperatures if compared to the Rh-containing sample, due to its superior reducibility. However, Rh-based material shows higher reactivity in the NH3 decomposition significantly enhancing N2 selectivity. Moreover, formation of small amounts of N2O is observed on both Pt- and Rh-based catalyst samples only during the reduction of highly reactive NOx stored at 150 °C, where NOx is likely in the form of nitrites.

  10. Conformal invariance in the long-range Ising model

    Directory of Open Access Journals (Sweden)

    Miguel F. Paulos

    2016-01-01

    Full Text Available We consider the question of conformal invariance of the long-range Ising model at the critical point. The continuum description is given in terms of a nonlocal field theory, and the absence of a stress tensor invalidates all of the standard arguments for the enhancement of scale invariance to conformal invariance. We however show that several correlation functions, computed to second order in the epsilon expansion, are nontrivially consistent with conformal invariance. We proceed to give a proof of conformal invariance to all orders in the epsilon expansion, based on the description of the long-range Ising model as a defect theory in an auxiliary higher-dimensional space. A detailed review of conformal invariance in the d-dimensional short-range Ising model is also included and may be of independent interest.

  11. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to a specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the computing times necessary for different problems. Different types of charged-particle sources are shown together with suitable models describing the underlying physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H^- sources), together with some remarks on beam transport.

  12. Possible roles of Peccei-Quinn symmetry in an effective low energy model

    Science.gov (United States)

    Suematsu, Daijiro

    2017-12-01

    The strong CP problem is known to be solved by imposing Peccei-Quinn (PQ) symmetry. However, the domain wall problem caused by the spontaneous breaking of its remnant discrete subgroup could make models invalid in many cases. We propose a model in which the PQ charge is assigned to the quarks so as to escape this problem without introducing any extra colored fermions. In the low energy effective model resulting after the PQ symmetry breaking, both the quark mass hierarchy and the CKM mixing could be explained through the Froggatt-Nielsen mechanism. If the model is combined with a lepton sector supplemented by an inert doublet scalar and right-handed neutrinos, the effective model reduces to the scotogenic neutrino mass model, in which both the origin of neutrino masses and dark matter are closely related. The strong CP problem could thus be related to the quark mass hierarchy, neutrino masses, and dark matter through the PQ symmetry.

  13. Galilean invariance in the exponential model of atomic collisions

    Energy Technology Data Exchange (ETDEWEB)

    del Pozo, A.; Riera, A.; Yáñez, M.

    1986-11-01

    Using the X^(n+)(1s^2) + He^(2+) colliding systems as specific examples, we study the origin dependence of results in the application of the two-state exponential model, and we show the relevance of polarization effects in that study. Our analysis shows that polarization effects of the He^+(1s) orbital due to interaction with the X^((n+1)+) ion in the exit channel yield a very small contribution to the energy difference and render the dynamical coupling so strongly origin dependent that it invalidates the basic premises of the model. Further study, incorporating translation factors in the formalism, is needed.

  14. Atmospheric radionuclide transport model with radon postprocessor and SBG module. Model description version 2.8.0; ARTM. Atmosphaerisches Radionuklid-Transport-Modell mit Radon Postprozessor und SBG-Modul. Modellbeschreibung zu Version 2.8.0

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Cornelia; Sogalla, Martin; Thielen, Harald; Martens, Reinhard

    2015-04-20

    The study on the atmospheric radionuclide transport model with radon postprocessor and SBG module (model description version 2.8.0) covers the following issues: determination of emissions, radioactive decay, atmospheric dispersion calculation for radioactive gases, atmospheric dispersion calculation for radioactive dusts, determination of the gamma cloud radiation (gamma submersion), terrain roughness, effective source height, calculation area and model points, geographic reference systems and coordinate transformations, meteorological data, use of invalid meteorological data sets, consideration of statistical uncertainties, consideration of housings, consideration of bumpiness, consideration of terrain roughness, use of frequency distributions of the hourly dispersion situation, consideration of the vegetation period (summer), the radon post processor radon.exe, the SBG module, modeling of wind fields, shading settings.

  15. An Improved Coupling of Numerical and Physical Models for Simulating Wave Propagation

    DEFF Research Database (Denmark)

    Yang, Zhiwen; Liu, Shu-xue; Li, Jin-xuan

    2014-01-01

    An improved coupling of numerical and physical models for simulating 2D wave propagation is developed in this paper. In the proposed model, an unstructured finite element model (FEM) based on the Boussinesq equations is applied for the numerical wave simulation, and a 2D piston-type wavemaker is used ... for the physical wave generation. An innovative scheme combining fourth-order Lagrange interpolation and a Runge-Kutta scheme is described for solving the coupling equation. A transfer function modulation method is presented to minimize the errors induced by the hydrodynamic invalidity of the coupling model and ... /or the mechanical capability of the wavemaker in areas where nonlinearities or dispersion predominate. The overall performance and applicability of the coupling model has been experimentally validated by accounting for both regular and irregular waves and varying bathymetry. Experimental results show...

  16. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model involves an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, the OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.
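
    To make the quantity being evaluated concrete, the sketch below computes source-to-all-destinations (multicast) reliability of a small network by brute-force enumeration of link states. It is a baseline illustration of the definition only, not the OBDD_Multicast algorithm of the paper, and the example topology and link reliability are invented.

        from itertools import product

        def multicast_reliability(nodes, links, p, source, destinations):
            """Probability that every destination is reachable from the source,
            when each undirected link independently works with probability p."""
            total = 0.0
            for states in product([True, False], repeat=len(links)):
                prob = 1.0
                adj = {n: [] for n in nodes}
                for (u, v), up in zip(links, states):
                    prob *= p if up else (1.0 - p)
                    if up:
                        adj[u].append(v)
                        adj[v].append(u)
                # graph traversal from the source over working links
                seen, stack = {source}, [source]
                while stack:
                    for w in adj[stack.pop()]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
                if all(d in seen for d in destinations):
                    total += prob
            return total

        nodes = ["s", "a", "b", "d1", "d2"]
        links = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "d1"), ("b", "d2"), ("d1", "d2")]
        print(multicast_reliability(nodes, links, p=0.9, source="s", destinations=["d1", "d2"]))

    The enumeration grows as 2 to the number of links, which is exactly the combinatorial explosion that symbolic OBDD-based methods are designed to tame.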

  17. Comparisons of patch-use models for wintering American tree sparrows

    Science.gov (United States)

    Tome, M.W.

    1990-01-01

    Optimal foraging theory has stimulated numerous theoretical and empirical studies of foraging behavior for >20 years. These models provide a valuable tool for studying the foraging behavior of an organism. As with any other tool, the models are most effective when properly used. For example, to obtain a robust test of a foraging model, Stephens and Krebs (1986) recommend experimental designs in which four questions are answered in the affirmative. First, do the foragers play the same "game" as the model? Second, are the assumptions of the model met? Third, does the test rule out alternative possibilities? Finally, are the appropriate variables measured? Negative answers to any of these questions could invalidate the model and lead to confusion over the usefulness of foraging theory in conducting ecological studies. Gaines (1989) attempted to determine whether American Tree Sparrows (Spizella arborea) foraged by a time (Krebs 1973) or number expectation rule (Gibb 1962), or in a manner consistent with the predictions of Charnov's (1976) marginal value theorem (MVT). Gaines (1989: 118) noted appropriately that field tests of foraging models frequently involve uncontrollable circumstances; thus, it is often difficult to meet the assumptions of the models. Gaines also states (1989: 118) that "violations of the assumptions are also informative but do not constitute robust tests of predicted hypotheses," and that "the problem can be avoided by experimental analyses which concurrently test mutually exclusive hypotheses so that alternative predictions will be eliminated if falsified." There is a problem with this approach because, when major assumptions of models are not satisfied, it is not justifiable to compare a predator's foraging behavior with the model's predictions. I submit that failing to follow the advice offered by Stephens and Krebs (1986) can invalidate tests of foraging models.

  18. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.
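
    For readers who want to see what an RMSEA-based power computation involves, here is a minimal sketch in the spirit of MacCallum-type procedures, using the noncentral chi-square distribution. The null and alternative RMSEA values, alpha, sample size and degrees of freedom are purely illustrative assumptions, not values from the reviewed studies.

        from scipy.stats import ncx2

        def rmsea_power(n, df, rmsea0=0.05, rmsea_a=0.08, alpha=0.05):
            """Power of the test of close fit (H0: RMSEA = rmsea0) when the true RMSEA is rmsea_a."""
            lam0 = (n - 1) * df * rmsea0**2          # noncentrality under H0
            lam_a = (n - 1) * df * rmsea_a**2        # noncentrality under the alternative
            crit = ncx2.ppf(1 - alpha, df, lam0)     # rejection threshold of the chi-square fit statistic
            return 1 - ncx2.cdf(crit, df, lam_a)     # probability of exceeding it under the alternative

        print(round(rmsea_power(n=200, df=50), 3))

    Running the sketch for a range of sample sizes makes the article's point directly: with modest n and df, power against even a clearly misspecified model can be very low.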

  19. Linear, no threshold response at low doses of ionizing radiation: ideology, prejudice and science

    International Nuclear Information System (INIS)

    Kesavan, P.C.

    2014-01-01

    The linear, no threshold (LNT) response model assumes that there is no threshold dose for the radiation-induced genetic effects (heritable mutations and cancer), and it forms the current basis for radiation protection standards for radiation workers and the general public. The LNT model is, however, based more on ideology than valid radiobiological data. Further, phenomena such as 'radiation hormesis', 'radioadaptive response', 'bystander effects' and 'genomic instability' are now demonstrated to be radioprotective and beneficial. More importantly, the 'differential gene expression' reveals that qualitatively different proteins are induced by low and high doses. This finding negates the LNT model which assumes that qualitatively similar proteins are formed at all doses. Thus, all available scientific data challenge the LNT hypothesis. (author)

  20. Paradigm lost, paradigm found: The re-emergence of hormesis as a fundamental dose response model in the toxicological sciences

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J. [Environmental Health Sciences, School of Public Health, Morrill I, N344, University of Massachusetts, Amherst, MA 01003 (United States)]. E-mail: edwardc@schoolph.umass.edu

    2005-12-15

    This paper provides an assessment of the toxicological basis of the hormetic dose-response relationship, including issues relating to its reproducibility, frequency, and generalizability across biological models, endpoints measured, and chemical class/physical stressors, and implications for risk assessment. The quantitative features of the hormetic dose response are described and placed within a toxicological context that considers study design, temporal assessment, mechanism, and experimental model/population heterogeneity. Particular emphasis is placed on an historical evaluation of why the field of toxicology rejected hormesis in favor of dose response models such as the threshold model for assessing non-carcinogens and linear no threshold (LNT) models for assessing carcinogens. The paper argues that such decisions were principally based on complex historical factors that emerged from the intense and protracted conflict between what is now called traditional medicine and homeopathy and the overly dominating influence of regulatory agencies on the toxicological intellectual agenda. Such regulatory agency influence emphasized hazard/risk assessment goals such as the derivation of no observed adverse effect levels (NOAELs) and the lowest observed adverse effect levels (LOAELs), which were derived principally from high dose studies using few doses, a feature which restricted perceptions and distorted judgments of several generations of toxicologists concerning the nature of the dose-response continuum. Such historical and technical blind spots led the field of toxicology to reject not only an established dose-response model (hormesis), but also the model that was more common and fundamental than those that the field accepted. - The quantitative features of the hormetic dose/response are described and placed within the context of toxicology.

  1. Paradigm lost, paradigm found: The re-emergence of hormesis as a fundamental dose response model in the toxicological sciences

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2005-01-01

    This paper provides an assessment of the toxicological basis of the hormetic dose-response relationship, including issues relating to its reproducibility, frequency, and generalizability across biological models, endpoints measured, and chemical class/physical stressors, and implications for risk assessment. The quantitative features of the hormetic dose response are described and placed within a toxicological context that considers study design, temporal assessment, mechanism, and experimental model/population heterogeneity. Particular emphasis is placed on an historical evaluation of why the field of toxicology rejected hormesis in favor of dose response models such as the threshold model for assessing non-carcinogens and linear no threshold (LNT) models for assessing carcinogens. The paper argues that such decisions were principally based on complex historical factors that emerged from the intense and protracted conflict between what is now called traditional medicine and homeopathy and the overly dominating influence of regulatory agencies on the toxicological intellectual agenda. Such regulatory agency influence emphasized hazard/risk assessment goals such as the derivation of no observed adverse effect levels (NOAELs) and the lowest observed adverse effect levels (LOAELs), which were derived principally from high dose studies using few doses, a feature which restricted perceptions and distorted judgments of several generations of toxicologists concerning the nature of the dose-response continuum. Such historical and technical blind spots led the field of toxicology to reject not only an established dose-response model (hormesis), but also the model that was more common and fundamental than those that the field accepted. - The quantitative features of the hormetic dose/response are described and placed within the context of toxicology.

  2. Principles and interest of GOF tests for multistate capture-recapture models

    Directory of Open Access Journals (Sweden)

    Pradel, R.

    2005-12-01

    Full Text Available Optimal goodness–of–fit procedures for multistate models are new. Drawing a parallel with the corresponding single–state procedures, we present their singularities and show how the overall test can be decomposed into interpretable components. All theoretical developments are illustrated with an application to the now classical study of movements of Canada geese between wintering sites. Through this application, we exemplify how the interpretable components give insight into the data, leading eventually to the choice of an appropriate general model but also sometimes to the invalidation of the multistate models as a whole. The method for computing a corrective overdispersion factor is then mentioned. We also take the opportunity to try to demystify some statistical notions like that of Minimal Sufficient Statistics by introducing them intuitively. We conclude that these tests should be considered an important part of the analysis itself, contributing in ways that the parametric modelling cannot always do to the understanding of the data.

  3. CLEERS Aftertreatment Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Mark L.; Zelenyuk, Alla; Gao, Feng; Muntean, George G.; Peden, Charles HF; Rappe, Kenneth G.; Szanyi, Janos; Howden, Ken

    2014-12-09

    CLEERS is an R&D focus project of the Diesel Cross-Cut Team. The overall objective is to promote the development of improved computational tools for simulating realistic full-system performance of lean-burn engines and the associated emissions control systems. Three fundamental research projects are sponsored at PNNL through CLEERS: DPF, SCR, and LNT. Resources are shared between the three efforts in order to actively respond to current industrial needs. This report documents recent results obtained during FY14.

  4. Composite-Nanoparticles Thermal History Sensors

    Science.gov (United States)

    2014-05-01

    Figure 3-3. PL blue-shift of the 1st QDs: (a) ln(ΔE) vs. 1/T; (b) ln(ΔE) vs. ln t. The band-gap shift ln(ΔE) varies with the logarithm of the time, ln t, with n as the slope of the plot. Figure 3-8. Band-gap shift ln(ΔE) with time ln t for the 2nd kind of CdSe/ZnS quantum dots; the QDs were heated at (a) 400 °C and (b) 500 °C in air.

  5. Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows

    Science.gov (United States)

    Rolland, Joran

    2018-02-01

    This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted to these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and of domains initially containing their steady turbulent fraction; (ii) puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing-variance noise pushes the state from the laminar base flow to the most probable germ of turbulence, which in turn develops into a full-blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, are determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of the mean first passage time T before collapse is separated in two: one finds that ln(T) = A_p r - B_p, with A_p and B_p positive. Moreover, A_p and B_p are affine in the spatial integral of the turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated: the author finds that T ≍ exp[L(A r - B)] with A and B positive. The length and Reynolds number dependence of T are then discussed in view of the large-deviations theoretical approaches to the study of mean first passage times and multistability.

  6. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    Science.gov (United States)

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
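
    The key ingredient of the model described above is a left-censored normal likelihood that treats zero-precipitation days as censored observations. The sketch below fits such a likelihood with constant parameters by maximum likelihood; the censoring point at zero, the synthetic data, and the constant mean are simplifying assumptions made here, whereas the actual model in the paper uses spatio-temporal additive predictors.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_left_censored_normal(y, threshold=0.0):
            """ML fit of a normal latent variable that is observed only above `threshold` (zeros are censored)."""
            cens = y <= threshold
            def neg_loglik(theta):
                mu, log_sigma = theta
                sigma = np.exp(log_sigma)
                ll = norm.logcdf(threshold, mu, sigma) * cens.sum()    # censored contribution
                ll += norm.logpdf(y[~cens], mu, sigma).sum()           # uncensored contribution
                return -ll
            res = minimize(neg_loglik, x0=[y.mean(), np.log(y.std() + 1e-6)])
            return res.x[0], np.exp(res.x[1])   # (mu_hat, sigma_hat)

        rng = np.random.default_rng(0)
        latent = rng.normal(loc=0.5, scale=2.0, size=5000)   # latent "precipitation potential"
        obs = np.maximum(latent, 0.0)                        # zeros wherever the latent value is negative
        print(fit_left_censored_normal(obs))

    The recovered (mu, sigma) should be close to the latent (0.5, 2.0), which is exactly why the censored formulation avoids the bias that a plain normal fit to zero-inflated daily data would incur.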

  7. Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei

    Directory of Open Access Journals (Sweden)

    Ogawa Tatsuhiko

    2016-01-01

    Full Text Available The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of the nuclei in their ground state was improved and the algorithm used to reject invalid events was modified. In-medium corrections to nucleon-nucleon cross sections were also considered. To clarify the effect of this improvement on the fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections for Ag and Au targets, and the results were compared with data from earlier measurements. It is shown that the revised version can predict cross sections more accurately.

  8. Towards product design automation based on parameterized standard model with diversiform knowledge

    Science.gov (United States)

    Liu, Wei; Zhang, Xiaobing

    2017-04-01

    Product standardization based on CAD software is an effective way to improve design efficiency. In the past, research and development on standardization mainly focused on the level of the individual component, and standardization of the entire product as a whole was rarely taken into consideration. In this paper, the size and structure of 3D product models are both driven by Excel datasheets, based on which a parameterized model library is established. Diversiform knowledge, including associated parameters and default properties, is embedded into the templates in advance to simplify their reuse. Through simple operations, we can obtain the correct product with the finished 3D models, including single parts or complex assemblies. Two examples are illustrated later to validate the idea, which will greatly improve design efficiency.

  9. Lentinan diminishes apoptotic bodies in the ileal crypts associated with S-1 administration.

    Science.gov (United States)

    Suga, Yasuyo; Takehana, Kenji

    2017-09-01

    S-1 is an oral agent containing tegafur (a prodrug of 5-fluorouracil) that is used to treat various cancers, but adverse effects are frequent. Two pilot clinical studies have suggested that lentinan (LNT; β-1,3-glucan) may reduce the incidence of adverse effects caused by S-1 therapy. In this study, we established a murine model for assessment of gastrointestinal toxicity associated with S-1 and studied the effect of LNT. S-1 was administered orally to BALB/c mice at the effective dose (8.3 mg/kg, as tegafur equivalent) once daily (5 days per week) for 3 weeks. Stool consistency and intestinal specimens were examined. We investigated the effect of combined intravenous administration of LNT at 0.1 mg, which is an effective dose in murine tumor models. We also investigated the effect of a single administration of S-1. During long-term administration of S-1, some mice had loose stools and an increase in apoptotic bodies was observed in the ileal crypts. An increase in apoptotic bodies was also noted after a single administration of S-1 (15 mg/kg). Prior or concomitant administration of LNT inhibited the increase in apoptotic bodies in both settings. Administration of LNT also increased the accumulation of CD11b+ TIM-4+ cells in the ileum, while depletion of these cells by liposomal clodronate diminished the inhibitory effect of LNT on S-1 toxicity. Combined administration of LNT with S-1 led to a decrease in apoptotic bodies in the ileal crypts, possibly because LNT promoted phagocytosis of damaged cells by CD11b+ TIM-4+ cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Towards a transport model for epistemic UQ in RANS closures

    Science.gov (United States)

    Edeling, Wouter; Iaccarino, Gianluca

    2016-11-01

    Due to their computational efficiency, Reynolds-averaged Navier-Stokes (RANS) turbulence models remain a vital tool for modeling turbulent flows. However, it is well known that RANS predictions are locally corrupted by epistemic model-form uncertainty. Whereas some Uncertainty Quantification (UQ) approaches attempt to quantify this uncertainty by considering the model coefficients as random variables, we directly perturb the Reynolds-stress tensor at locations in the flow domain where the modeling assumptions are likely to be invalid. Inferring the perturbations on a point-by-point basis would lead to a high-dimensional problem. To reduce the dimensionality, we propose separate model equations based on the transport of linear invariants of the anisotropy tensor. This provides us with a low-dimensional UQ framework where the invariant transport model decides on the magnitude and direction of the perturbations. Where the perturbations are small, the RANS result is recovered. Using traditional turbulence modeling practices we derive weak realizability constraints, and we rely on Bayesian inference to calibrate the model on high-fidelity data. We demonstrate our framework on a number of canonical flow problems where RANS models are prone to failure.
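
    As background to the phrase "linear invariants of the anisotropy tensor", the sketch below computes the anisotropy tensor from a given Reynolds-stress tensor and maps its eigenvalues to the barycentric coordinates commonly used to characterize turbulence states. The example stress tensor is invented, and this is standard post-processing rather than the transport model proposed in the abstract.

        import numpy as np

        def anisotropy_barycentric(reynolds_stress):
            """Return the anisotropy-tensor eigenvalues and barycentric coordinates (C1, C2, C3)."""
            k = 0.5 * np.trace(reynolds_stress)                  # turbulent kinetic energy
            a = reynolds_stress / (2.0 * k) - np.eye(3) / 3.0    # anisotropy tensor (traceless)
            lam = np.sort(np.linalg.eigvalsh(a))[::-1]           # eigenvalues, descending
            c1 = lam[0] - lam[1]              # one-component limit
            c2 = 2.0 * (lam[1] - lam[2])      # two-component (axisymmetric) limit
            c3 = 3.0 * lam[2] + 1.0           # isotropic limit; c1 + c2 + c3 = 1
            return lam, (c1, c2, c3)

        R = np.array([[2.0, 0.3, 0.0],
                      [0.3, 1.0, 0.1],
                      [0.0, 0.1, 0.5]])       # hypothetical Reynolds-stress tensor
        print(anisotropy_barycentric(R))

    Realizability in this picture simply means that the perturbed coordinates stay inside the barycentric triangle, which is the kind of weak constraint the abstract refers to.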

  11. The cooperative effect of p53 and Rb in local nanotherapy in a rabbit VX2 model of hepatocellular carcinoma

    Directory of Open Access Journals (Sweden)

    Dong S

    2013-10-01

    Full Text Available Shengli Dong,1 Qibin Tang,2 Miaoyun Long,3 Jian Guan,4 Lu Ye,5 Gaopeng Li6 1Department of General Surgery, The Second Hospital of Shanxi Medical University, Shanxi Medical University, Taiyuan, Shanxi Province, 2Department of Hepatobiliopancreatic Surgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 3Department of Thyroid and Vascular Surgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 4Department of Radiology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, 5Infection Department, Guangzhou No 8 Hospital, Guangzhou, Guangdong Province, 6Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, People's Republic of China Background/aim: A local nanotherapy (LNT) combining the therapeutic efficacy of trans-arterial embolization, nanoparticles, and p53 gene therapy has been previously presented. The study presented here aimed to further improve the incomplete tumor eradication and limited survival enhancement and to elucidate the molecular mechanism of the LNT. Methods: In a tumor-targeting manner, recombinant expressing plasmids harboring wild-type p53 and Rb were either co-transferred or transferred separately to rabbit hepatic VX2 tumors in a poly-L-lysine-modified hydroxyapatite nanoparticle nanoplex and Lipiodol® (Guerbet, Villepinte, France) emulsion via the hepatic artery. Subsequent co-expression of p53 and Rb proteins within the treated tumors was investigated by Western blotting and in situ analysis by laser-scanning confocal microscopy. The therapeutic effect was evaluated by the tumor growth velocity, apoptosis and necrosis rates, their sensitivity to Adriamycin® (ADM), mitomycin C, and fluorouracil, the microvessel density of tumor tissue, and the survival time of animals. Eventually, real-time polymerase chain reaction and enhanced chemiluminescence Western blotting

  12. A fractal model for nuclear organization: current evidence and biological implications

    Science.gov (United States)

    Bancaud, Aurélien; Lavelle, Christophe; Huet, Sébastien; Ellenberg, Jan

    2012-01-01

    Chromatin is a multiscale structure on which transcription, replication, recombination and repair of the genome occur. To fully understand any of these processes at the molecular level under physiological conditions, a clear picture of the polymorphic and dynamic organization of chromatin in the eukaryotic nucleus is required. Recent studies indicate that a fractal model of chromatin architecture is consistent with both the reaction-diffusion properties of chromatin-interacting proteins and with structural data on chromatin interminglement. In this study, we provide a critical overview of the experimental evidence that supports a fractal organization of chromatin. On this basis, we discuss the functional implications of a fractal chromatin model for biological processes and propose future experiments to probe chromatin organization further, which should make it possible to strongly support or invalidate the fractal hypothesis. PMID:22790985

  13. Use of nonlinear dose-effect models to predict consequences

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has been used by analogy to assess the risk of chemical carcinogens also. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer in atomic bomb survivors. Even so, its proponents vigorously defend it, often using arguments that are not so much scientific as a mix of scientific, societal, and often political arguments. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: First, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models

  14. Model Reduction in Biomechanics

    Science.gov (United States)

    Feng, Yan

    ...mechanical parameters from experimental results. However, in the real biological world, these homogeneity and isotropy assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of the results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to the cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. A data-driven model reduction approach to the problem of microtubule mechanics is presented as an application: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of alpha-beta-tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arise from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior, but directly from the data obtained by molecular-scale simulation. Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element, and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.

  15. Is the Bifactor Model a Better Model or Is It Just Better at Modeling Implausible Responses? Application of Iteratively Reweighted Least Squares to the Rosenberg Self-Esteem Scale.

    Science.gov (United States)

    Reise, Steven P; Kim, Dale S; Mansolf, Maxwell; Widaman, Keith F

    2016-01-01

    Although the structure of the Rosenberg Self-Esteem Scale (RSES) has been exhaustively evaluated, questions regarding dimensionality and direction of wording effects continue to be debated. To shed new light on these issues, we ask (a) for what percentage of individuals is a unidimensional model adequate, (b) what additional percentage of individuals can be modeled with multidimensional specifications, and (c) what percentage of individuals respond so inconsistently that they cannot be well modeled? To estimate these percentages, we applied iteratively reweighted least squares (IRLS) to examine the structure of the RSES in a large, publicly available data set. A distance measure, d_s, reflecting a distance between a response pattern and an estimated model, was used for case weighting. We found that a bifactor model provided the best overall model fit, with one general factor and two wording-related group factors. However, on the basis of d_r values, a distance measure based on individual residuals, we concluded that approximately 86% of cases were adequately modeled through a unidimensional structure, and only an additional 3% required a bifactor model. Roughly 11% of cases were judged as "unmodelable" due to their significant residuals in all models considered. Finally, analysis of d_s revealed that some, but not all, of the superior fit of the bifactor model is owed to that model's ability to better accommodate implausible and possibly invalid response patterns, and not necessarily because it better accounts for the effects of direction of wording.

  16. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how it can be used: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Scenario and parameter studies on global deposition of radioactivity using the computer model GLODEP2

    International Nuclear Information System (INIS)

    Shapiro, C.S.

    1984-08-01

    The GLODEP2 computer code was utilized to determine biological impact to humans on a global scale using up-to-date estimates of biological risk. These risk factors use varied biological damage models for assessing effects. All the doses reported are the unsheltered, unweathered, smooth-terrain, external gamma dose. We assume the unperturbed atmosphere in determining injection and deposition. Effects due to "nuclear winter" may invalidate this assumption. The calculations also include scenarios that attempt to assess the impact of the changing nature of the nuclear stockpile. In particular, the shift from larger to smaller yield nuclear devices significantly changes the injection pattern into the atmosphere, and hence significantly affects the radiation doses that ensue. We have also looked at injections into the equatorial atmosphere. In total, we report here the results for 8 scenarios. 10 refs., 6 figs., 11 tabs

  18. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
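
    To make the within-cluster resampling idea concrete, here is a minimal sketch: one observation is drawn at random from each cluster, a simple model is fitted to the resulting independent sample, and the estimates are averaged over many resamples. The linear model, the fake data, and the number of resamples are illustrative assumptions; the study itself works with generalized linear mixed-effects models.

        import numpy as np

        def within_cluster_resampling(cluster_ids, X, y, n_resamples=500, seed=0):
            """Average OLS estimates over resamples that keep one random observation per cluster."""
            rng = np.random.default_rng(seed)
            clusters = np.unique(cluster_ids)
            estimates = []
            for _ in range(n_resamples):
                idx = [rng.choice(np.flatnonzero(cluster_ids == c)) for c in clusters]
                beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
                estimates.append(beta)
            return np.mean(estimates, axis=0)

        # Fake clustered data: cluster size depends on a cluster-level covariate (informative sizes).
        rng = np.random.default_rng(1)
        rows, ids = [], []
        for c in range(100):
            x_c = rng.normal()
            size = rng.integers(2, 6) + (2 if x_c > 0 else 0)
            for _ in range(size):
                rows.append((1.0, x_c, 0.5 * x_c + rng.normal()))
                ids.append(c)
        data = np.array(rows)
        X, y, cluster_ids = data[:, :2], data[:, 2], np.array(ids)
        print(within_cluster_resampling(cluster_ids, X, y))

    Because each resample contains exactly one observation per cluster, the cluster size cannot bias the cluster-specific covariate effect, which is the intuition behind the validity result reported for cluster-level covariates.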

  19. Thermodynamical aspects of modeling the mechanical response of granular materials

    International Nuclear Information System (INIS)

    Elata, D.

    1995-01-01

    In many applications in rock physics, the material is treated as a continuum. By supplementing the related conservation laws with constitutive equations such as stress-strain relations, a well-posed problem can be formulated and solved. The stress-strain relations may be based on a combination of experimental data and a phenomenological or micromechanical model. If the model is physically sound and its parameters have a physical meaning, it can serve to predict the stress response of the material to unmeasured deformations, predict the stress response of other materials, and perhaps predict other categories of the mechanical response such as failure, permeability, and conductivity. However, it is essential that the model be consistent with all conservation laws and consistent with the second law of thermodynamics. Specifically, some models of the mechanical response of granular materials proposed in the literature are based on intergranular contact force-displacement laws that violate the second law of thermodynamics by permitting energy generation at no cost. This diminishes the usefulness of these models as it invalidates their predictive capabilities. [This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48.]

  20. A critical comparison of discrete-state and continuous models of recognition memory: implications for recognition and beyond.

    Science.gov (United States)

    Pazzaglia, Angela M; Dube, Chad; Rotello, Caren M

    2013-11-01

    Multinomial processing tree (MPT) models such as the single high-threshold, double high-threshold, and low-threshold models are discrete-state decision models that map internal cognitive events onto overt responses. The apparent benefit of these models is that they provide independent measures of accuracy and response bias, a claim that has motivated their frequent application in many areas of psychological science including perception, item and source memory, social cognition, reasoning, educational testing, eyewitness testimony, and psychopathology. Before appropriate conclusions about a given analysis can be drawn, however, one must first confirm that the model's assumptions about the underlying structure of the data are valid. The current review outlines the assumptions of several popular MPT models and assesses their validity using multiple sources of evidence, including receiver operating characteristics, direct model fits, and experimental tests of qualitative predictions. We argue that the majority of the evidence is inconsistent with these models and that, instead, the evidence supports continuous models such as those based on signal detection theory (SDT). Hybrid models that incorporate both SDT and MPT processes are also explored, and we conclude that these models retain the limitations associated with their threshold model predecessors. The potentially severe consequences associated with using an invalid model to interpret data are discussed, and a simple tutorial and model-fitting tool is provided to allow implementation of the empirically supported SDT model. © 2013 American Psychological Association
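
    As a pointer to what the recommended signal detection theory (SDT) analysis looks like in practice, here is a minimal sketch computing the equal-variance Gaussian SDT parameters (sensitivity d' and criterion c) from hit and false-alarm counts. The example counts and the simple log-linear correction for extreme proportions are illustrative assumptions; the paper's own model-fitting tool is not reproduced here.

        from scipy.stats import norm

        def sdt_parameters(hits, misses, false_alarms, correct_rejections):
            """Equal-variance Gaussian SDT: return (d_prime, criterion c) with a log-linear correction."""
            h = (hits + 0.5) / (hits + misses + 1.0)                              # corrected hit rate
            f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # corrected false-alarm rate
            z_h, z_f = norm.ppf(h), norm.ppf(f)
            d_prime = z_h - z_f
            criterion = -0.5 * (z_h + z_f)
            return d_prime, criterion

        print(sdt_parameters(hits=75, misses=25, false_alarms=30, correct_rejections=70))

    Unlike threshold-model "corrected" accuracy scores, d' and c separate sensitivity from response bias under the continuous-evidence assumptions that the review argues are better supported by the data.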

  1. Partitioning uncertainty in streamflow projections under nonstationary model conditions

    Science.gov (United States)

    Chawla, Ila; Mujumdar, P. P.

    2018-02-01

    Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most previous studies have considered climate models and scenarios as the major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contributions from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) the stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to the overall uncertainty in streamflow projections using an analysis of variance (ANOVA) approach. Generally, most impact assessment studies are carried out with hydrologic model parameters that are kept unchanged in the future. It is, however, necessary to address the nonstationarity in model parameters under changing land use and climate. In this paper, a regression-based methodology is presented to obtain the hydrologic model parameters under changing land use and climate scenarios in the future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set up over the basin under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in the UGB under the nonstationary model condition is found to decrease in the future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that the model stationarity assumption and the GCMs, along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine the stationarity assumption of models before considering them
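
    A toy version of the ANOVA partitioning described above is sketched below for a two-factor case (GCMs x emission scenarios) without replication; the projection matrix is invented, and the full study additionally treats land use scenarios, the model-stationarity assumption and internal variability as factors.

        import numpy as np

        def two_way_anova_partition(q):
            """Partition the total variance of projections q[i, j] into GCM, scenario and interaction shares."""
            grand = q.mean()
            gcm_means = q.mean(axis=1)          # mean over scenarios for each GCM
            scen_means = q.mean(axis=0)         # mean over GCMs for each scenario
            n_gcm, n_scen = q.shape
            ss_gcm = n_scen * np.sum((gcm_means - grand) ** 2)
            ss_scen = n_gcm * np.sum((scen_means - grand) ** 2)
            ss_total = np.sum((q - grand) ** 2)
            ss_inter = ss_total - ss_gcm - ss_scen      # residual/interaction term
            return {k: v / ss_total for k, v in
                    [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction", ss_inter)]}

        # Hypothetical mean-annual-streamflow projections (rows: 4 GCMs, columns: 3 scenarios).
        q = np.array([[310., 295., 280.],
                      [340., 320., 300.],
                      [305., 290., 270.],
                      [360., 345., 330.]])
        print(two_way_anova_partition(q))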

  2. Systematic validation of non-equilibrium thermochemical models using Bayesian inference

    KAUST Repository

    Miki, Kenji

    2015-10-01

    © 2015 Elsevier Inc. The validation process proposed by Babuška et al. [1] is applied to thermochemical models describing post-shock flow conditions. In this validation approach, experimental data are involved only in the calibration of the models, and the decision process is based on quantities of interest (QoIs) predicted for scenarios that are not necessarily amenable to experiment. Moreover, uncertainties present in the experimental data, as well as those resulting from an incomplete physical model description, are propagated to the QoIs. We investigate four commonly used thermochemical models: a one-temperature model (which assumes thermal equilibrium among all inner modes), and the two-temperature models developed by Macheret et al. [2], Marrone and Treanor [3], and Park [4]. Up to 16 uncertain parameters are estimated using Bayesian updating based on the latest absolute volumetric radiance data collected at the Electric Arc Shock Tube (EAST) installed at the NASA Ames Research Center. Following the solution of the inverse problems, the forward problems are solved in order to predict the radiative heat flux (the QoI) and examine the validity of these models. Our results show that all four models are invalid, but for different reasons: the one-temperature model simply fails to reproduce the data, while the two-temperature models exhibit unacceptably large uncertainties in the QoI predictions.

  3. Mathematical modelling of complex contagion on clustered networks

    Science.gov (United States)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease spread, which predict that epidemics would grow more efficiently on random networks than on clustered networks. Accurately modeling complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
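
    The contrast between simple and complex contagion described above can be reproduced with a few lines of simulation: adoption requires at least two adopting neighbours, and spread is compared on a clustered ring lattice versus a fully rewired (random) counterpart. The graph sizes, threshold and seeding choices are illustrative assumptions, and the sketch is a plain simulation rather than the analytical approximation developed in the paper.

        import random
        import networkx as nx

        def complex_contagion(G, threshold=2, n_steps=200, seed=0):
            """Fraction of nodes that adopt when adoption needs >= `threshold` adopting neighbours."""
            random.seed(seed)
            start = random.choice(list(G.nodes))
            adopted = set([start]) | set(G.neighbors(start))   # seed a node plus its neighbourhood
            for _ in range(n_steps):
                new = {n for n in G.nodes if n not in adopted
                       and sum(1 for m in G.neighbors(n) if m in adopted) >= threshold}
                if not new:
                    break
                adopted |= new
            return len(adopted) / G.number_of_nodes()

        n, k = 500, 6
        clustered = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)   # clustered ring lattice
        randomised = nx.watts_strogatz_graph(n, k, p=1.0, seed=1)  # fully rewired counterpart
        print("clustered:", complex_contagion(clustered))
        print("random:   ", complex_contagion(randomised))

    On the clustered lattice the adopted region keeps supplying pairs of reinforcing neighbours at its boundary, so adoption sweeps the network, while on the random graph the cascade typically stalls near the seed, mirroring Centola's experimental finding.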

  4. ON THE LAMPPOST MODEL OF ACCRETING BLACK HOLES

    Energy Technology Data Exchange (ETDEWEB)

    Niedźwiecki, Andrzej; Szanecki, Michał [Łódź University, Department of Physics, Pomorska 149/153, 90-236 Łódź (Poland); Zdziarski, Andrzej A. [Centrum Astronomiczne im. M. Kopernika, Bartycka 18, 00-716 Warszawa (Poland)

    2016-04-10

    We study the lamppost model, in which the X-ray source in accreting black hole (BH) systems is located on the rotation axis close to the horizon. We point out a number of inconsistencies in the widely used lamppost model relxilllp, e.g., neglecting the redshift of the photons emitted by the lamppost that are directly observed. They appear to invalidate those model fitting results for which the source distances from the horizon are within several gravitational radii. Furthermore, if those results were correct, most of the photons produced in the lamppost would be trapped by the BH, and the luminosity generated in the source as measured at infinity would be much larger than that observed. This appears to be in conflict with the observed smooth state transitions between the hard and soft states of X-ray binaries. The required increase of the accretion rate and the associated efficiency reduction also present a problem for active galactic nuclei. Then, those models imply the luminosity measured in the local frame is much higher than that produced in the source and measured at infinity, due to the additional effects of time dilation and redshift, and the electron temperature is significantly higher than that observed. We show that these conditions imply that the fitted sources would be out of e± pair equilibrium. On the other hand, the above issues pose relatively minor problems for sources at large distances from the BH, where relxilllp can still be used.
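
    The redshift effect said to be neglected for directly observed lamppost photons can be quantified with a one-line formula: for a static source on the polar axis of a Kerr black hole at Boyer-Lindquist radius h (in gravitational radii, with G = c = M = 1), the ratio of observed to emitted photon energy at infinity is g = sqrt(1 - 2h/(h^2 + a^2)). The sketch below evaluates it; the chosen heights and spin are illustrative assumptions only, not values fitted in the paper.

        import math

        def lamppost_redshift(h, a=0.998):
            """Energy ratio (observed at infinity / emitted) for a static on-axis source at radius h
            above a Kerr black hole of dimensionless spin a, in units G = c = M = 1."""
            return math.sqrt(1.0 - 2.0 * h / (h**2 + a**2))

        for h in (2.0, 3.0, 5.0, 10.0):
            print(f"h = {h:5.1f} r_g  ->  g = {lamppost_redshift(h):.3f}")

    For source heights of a few gravitational radii the factor is well below unity, which is why neglecting it can noticeably bias fitted luminosities and temperatures.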

  5. Mind the Noise When Identifying Computational Models of Cognition from Brain Activity.

    Science.gov (United States)

    Kolossa, Antonio; Kopp, Bruno

    2016-01-01

    The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.

  6. Consequences of PPARα Invalidation on Glutathione Synthesis: Interactions with Dietary Fatty Acids

    Directory of Open Access Journals (Sweden)

    Najoua Guelzim

    2011-01-01

    Full Text Available Glutathione (GSH) derives from cysteine and plays a key role in redox status. GSH synthesis is determined mainly by cysteine availability and γ-glutamate cysteine ligase (γGCL) activity. Because PPARα activation is known to control the metabolism of certain amino acids, GSH synthesis from cysteine and related metabolisms were explored in wild-type (WT) and PPARα-null (KO) mice, fed diets containing either saturated (COCO diet) or 18:3 n-3 (LIN diet) fatty acids. In mice fed the COCO diet, but not in those fed the LIN diet, PPARα deficiency enhanced hepatic GSH content and γGCL activity, superoxide dismutase 2 mRNA levels, and plasma uric acid concentration, suggesting an oxidative stress. In addition, in WT mice, the LIN diet increased the hepatic GSH pool, without effect on γGCL activity or change in target gene expression, which rules out a direct effect of PPARα. This suggests that dietary 18:3 n-3 may regulate GSH metabolism and thus mitigate the deleterious effects of PPARα deficiency on redox status, without direct PPARα activation.

  7. OPERA, MINOS Experimental Result Prove Special and General Relativity Theories; the Principle of Lorentz Invariance Invalid

    Science.gov (United States)

    Pressler, David E.

    2012-03-01

    A great discrepancy exists - the speed of light and the neutrino speed must be identical, as indicated by supernova 1987A; yet, OPERA predicts faster-than-light neutrinos. Einstein's theories are based on the invariance of the speed of light, and no privileged Galilean frame of reference exists. Both of these hypotheses are in error and must be reconciled in order to solve the dilemma. The Michelson-Morley Experiment was misinterpreted - my Neoclassical Theory postulates that BOTH mirrors of the interferometer physically and absolutely move towards its center. The result is a three-directional-Contraction, (x, y, z axis), an actual distortion of space itself; a C-Space condition. "PRESSLER'S LAW OF C-SPACE: The speed of light, c, will always be measured the same speed in all three directions (~300,000 km/sec), in one's own inertial reference system, and will always be measured as having a different speed in all other inertial frames which are at a different kinetic energy level or at a location with a different strength gravity field." Thus, the faster you go, motion, or the stronger the gravity field the smaller you get in all three directions. OPERA results are explained; at the surface of Earth, the strength of gravity field is at maximum -- below the earth's surface, time and space is less distorted; therefore, time is absolutely faster accordingly. Reference OPERA's preprint: Neutrino's faster time-effect due to altitude difference; (10-13ns) x c (299792458m) = 2.9 x 10-5 m/ns x distance (730085m) + 21.8m.) This is consistent with the OPERA result.

  8. (In)validation in the Minority: The Experiences of Latino Students Enrolled in an HBCU

    Science.gov (United States)

    Allen, Taryn Ozuna

    2016-01-01

    This qualitative, phenomenological study examined the academic and interpersonal validation experiences of four female and four male Latino students who were enrolled in their second- to fifth-year at an HBCU in Texas. Using interviews, campus observations, a questionnaire, and analytic memos, this study sought to understand the role of in- and…

  9. The legislation of the senior citizens: the carried out and the invalidated rights

    Directory of Open Access Journals (Sweden)

    Elcha Britto de Oliveira Gomes

    2013-10-01

    Full Text Available Considering population aging, a series of laws and policies have been enacted to attend to the demands of senior citizens, while the old person started to be considered as a special group of human rights. The present work analyzed whether the human rights of the old person and the specific policies provided for in law have been carried out, or not, in a city with more than two hundred thousand inhabitants in the Central-West of Brazil. The present research used the hermeneutic-dialectic methodology.

  10. Invalidity of the spectral Fokker-Planck equation for Cauchy noise driven Langevin equation

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2004-01-01

    For so-called alpha-stable noise (or Levy noise), the Fokker-Planck equation no longer exists as a partial differential equation for the probability density because the property of finite variance is lost. Instead it has been attempted to formulate an equation for the characteristic function (the Fourier transform...

  11. Rectification of invalidly published new names for plants from the late Eocene of North Bohemia

    Directory of Open Access Journals (Sweden)

    Kvaček Zlatko

    2015-12-01

    Full Text Available Valid publication of new names of fossil plant taxa published since 1 January 1996 requires a diagnosis or description in English, besides other requirements included in the International Code of Nomenclature for algae, fungi, and plants (Melbourne Code) adopted by the Eighteenth International Botanical Congress, Melbourne, Australia, July 2011 (McNeill et al. 2012). In order to validate names published from the late Eocene flora of the Staré Sedlo Formation, North Bohemia, diagnosed only in German (Knobloch et al. 1996), English translations are provided, including references to the type material and further relevant information.

  12. Defining subspecies, invalid taxonomic tools, and the fate of the woodland caribou

    Directory of Open Access Journals (Sweden)

    Valerius Geist

    2007-04-01

    Full Text Available If my argument is valid, then true woodland caribou are only the very few, dark, small-maned caribou scattered across the south of caribou distribution. They need the most urgent attention.

  13. Evaluating a Novel Eye Tracking Tool to Detect Invalid Responding in Neurocognitive Assessment

    Science.gov (United States)

    2014-05-07

    speed. Subjects viewed computerized measures on a 15” Asus VW193 flat-screen monitor set to 1440 x 900 pixel resolution. Examiners used the...responding script asked participants to exaggerate cognitive problems in order to get money from an insurance company. Furthermore, the biased...though you feel normal today, you know that the amount of money you will receive from your insurance company depends on how badly you were injured. You

  14. Genetic invalidation of Lp-PLA2 as a therapeutic target

    DEFF Research Database (Denmark)

    Gregson, John M; Freitag, Daniel F; Surendran, Praveen

    2017-01-01

    AIMS: Darapladib, a potent inhibitor of lipoprotein-associated phospholipase A2 (Lp-PLA2), has not reduced risk of cardiovascular disease outcomes in recent randomized trials. We aimed to test whether Lp-PLA2 enzyme activity is causally relevant to coronary heart disease. METHODS: In 72,657 patie...

  15. An interface tracking model for droplet electrocoalescence.

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, Lindsay Crowl

    2013-09-01

    This report describes an Early Career Laboratory Directed Research and Development (LDRD) project to develop an interface tracking model for droplet electrocoalescence. Many fluid-based technologies rely on electrical fields to control the motion of droplets, e.g. microfluidic devices for high-speed droplet sorting, solution separation for chemical detectors, and purification of biodiesel fuel. Precise control over droplets is crucial to these applications. However, electric fields can induce complex and unpredictable fluid dynamics. Recent experiments (Ristenpart et al. 2009) have demonstrated that oppositely charged droplets bounce rather than coalesce in the presence of strong electric fields. A transient aqueous bridge forms between approaching drops prior to pinch-off. This observation applies to many types of fluids, but neither theory nor experiments have been able to offer a satisfactory explanation. Analytic hydrodynamic approximations for interfaces become invalid near coalescence, and therefore detailed numerical simulations are necessary. This is a computationally challenging problem that involves tracking a moving interface and solving complex multi-physics and multi-scale dynamics, which are beyond the capabilities of most state-of-the-art simulations. An interface-tracking model for electro-coalescence can provide a new perspective to a variety of applications in which interfacial physics are coupled with electrodynamics, including electro-osmosis, fabrication of microelectronics, fuel atomization, oil dehydration, nuclear waste reprocessing and solution separation for chemical detectors. We present a conformal decomposition finite element (CDFEM) interface-tracking method for the electrohydrodynamics of two-phase flow to demonstrate electro-coalescence. CDFEM is a sharp interface method that decomposes elements along fluid-fluid boundaries and uses a level set function to represent the interface.

  16. Modeling Field Line Resonances in the Inner Plasmasphere with the Field Line Interhemispheric Plasma Model

    Science.gov (United States)

    McCarthy, N. M.; Jorgensen, A. M.; Stone, W. D.; Zesta, E.

    2010-12-01

    Equatorial plasma mass density in the Inner Magnetosphere of the Earth has been traditionally derived from measurements of Field Line Resonances from pairs of ground magnetometers closely spaced in latitude. The full plasma mass density along the flux tube can be determined using such measurements in an inversion of the Field Line Resonance Equation. Cummings et al [1969] developed the Field Line Resonance equation and numerically solved for the Field Line Resonances by assuming a power law distribution that varied with the geocentric distance from the equatorial crossing point of the field lines and a dipole model for the Earth's magnetic field. So far all numerical solutions of the Field Line Resonance Equation use some form of a power law distribution of the mass density along the field line, that depends on the magnetic field model, typically assumed to be a dipole, with only one recent work exploring deviations from a dipole magnetic field. Another fundamental assumption in the solution of the Field Line Resonance Equation is that of perfectly conducting, flat ionospheres as the two boundaries of the field line. While this assumption is considered valid for L values greater than 2, recent works have found it to be invalid for L values of 3 or less. In the present paper we solve the Field Line Resonance Equation for L values less than 3.5 using a three dimensional ionosphere, and without assuming a power law for the mass density distribution along the field line. Instead we use plasma mass density data from the Field Line Interhemispheric Plasma (FLIP) model to numerically solve the Field Line Resonance Equation for the eigenfrequencies. We also examine how the resonance frequencies vary as a function of the driving parameters. Finally we examine two events in which we compare the derived frequencies with measurements from the SAMBA magnetometer array.
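
    As a toy illustration of the numerical approach described above (and not the FLIP-based calculation of the authors), the following Python sketch discretizes a simple standing Alfvén-wave equation along a field line with perfectly conducting (node) boundaries and extracts the lowest eigenfrequencies; the field-line length, magnetic field strength and mass-density profile are arbitrary placeholders rather than model output.

        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        # Toy problem: E'' + (omega^2 / vA(s)^2) E = 0 with E = 0 at both ends.
        # Finite differences turn it into a symmetric tridiagonal eigenproblem
        # whose eigenvalues are omega^2 for the resonant harmonics.
        n = 400
        L = 1.0e8                                   # field-line length [m], illustrative
        s = np.linspace(0.0, L, n + 2)[1:-1]        # interior grid points
        ds = s[1] - s[0]
        rho = 1.0e-20 * (1.0 + 4.0 * np.exp(-((s - L / 2) / (0.2 * L)) ** 2))  # assumed density
        B = 5.0e-8                                  # assumed field strength [T]
        vA = B / np.sqrt(4.0e-7 * np.pi * rho)      # Alfven speed along the line

        # Symmetrized operator -diag(vA) D2 diag(vA); its eigenvalues are omega^2
        diag = 2.0 / ds**2 * vA**2
        off = -1.0 / ds**2 * vA[:-1] * vA[1:]
        w2, _ = eigh_tridiagonal(diag, off)
        print(np.sqrt(w2[:3]) / (2 * np.pi) * 1e3)  # first three eigenfrequencies [mHz]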

  17. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    Science.gov (United States)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D - 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D - 2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  18. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Full Text Available Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add on top of that time constraints, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning. It is therefore not very accessible. Using modeling to generate source code or represent applications’ workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application’s workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
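
    The paper's kernel-tracing tooling is not reproduced here, but the core idea of checking a model-based constraint against a trace can be shown in a few lines. The Python sketch below (with invented event names, timestamps and a 5 ms deadline) scans a list of trace events and reports jobs whose start-to-end latency violates the deadline specified by the model.

        # Minimal sketch of a model-based timing constraint checked against a trace.
        # Event names, timestamps and the deadline are illustrative only.
        trace = [
            (0.000, "job_start", 1), (0.004, "job_end", 1),
            (0.010, "job_start", 2), (0.019, "job_end", 2),   # misses the deadline
        ]
        DEADLINE = 0.005  # seconds, taken from the (hypothetical) model constraint

        starts = {}
        for ts, event, job in trace:
            if event == "job_start":
                starts[job] = ts
            elif event == "job_end":
                latency = ts - starts.pop(job)
                if latency > DEADLINE:
                    print(f"constraint violated by job {job}: {latency * 1e3:.1f} ms")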

  19. Multiscale Modeling of Segregation in Granular Flows

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Jin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    force networks. This algorithm provides a possible route to constructing a continuum model with microstructural information supplied from it. Microstructures in gas fluidized beds are also analyzed using a hybrid method, which couples the discrete element method (DEM) for particle dynamics with the averaged two-fluid (TF) equations for the gas phase. Multi-particle contacts are found in defluidized regions away from bubbles in fluidized beds. The multi-particle contacts invalidate the binary-collision assumption made in the kinetic theory of granular flows for the defluidized regions. Large ratios of contact forces to drag forces are found in the same regions, which confirms the relative importance of contact forces in determining particle dynamics in the defluidized regions.

  20. Hydrophobic hydration processes. General thermodynamic model by thermal equivalent dilution determinations.

    Science.gov (United States)

    Fisicaro, E; Compari, C; Braibanti, A

    2010-10-01

    The "hydrophobic hydration processes" can be satisfactorily interpreted on the basis of a common molecular model for water, consisting of two types of clusters, namely W(I) and W(II) accompanied by free molecules W(III). The principle of thermal equivalent dilution (TED) is the potent tool (Ergodic Hypothesis) employed to monitor the water equilibrium and to determine the number xi(w) of water molecules W(III) involved in each process. The hydrophobic hydration processes can be subdivided into two Classes: Class A includes those processes for which the transformation A(-xi(w)W(I)-->xi(w)W(II)+xi(w)W(III)+cavity) takes place with the formation of a cavity, by expulsion of xi(w) water molecules W(III) whereas Class B includes those processes for which the opposite transformation B(-xi(w)W(II)-xi(w)W(III)-->xi(w)W(I)-cavity) takes place with reduction of the cavity, by condensation of xi(w) water molecules W(III). The number xi(w) depends on the size of the reactants and measures the extent of the change in volume of the cavity. Disaggregating the thermodynamic functions DeltaH(app) and DeltaS(app) as the functions of T (or lnT) and xi(w) has enabled the separation of the thermodynamic functions into work and thermal components. The work functions DeltaG(Work), DeltaH(Work) and DeltaS(Work) only refer specifically to the hydrophobic effects of cavity formation or cavity reduction, respectively. The constant self-consistent unitary (xi(w)=1) work functions obtained from both large and small molecules indicate that the same unitary reaction is taking place, independent from the reactant size. The thermal functions DeltaH(Th) and DeltaS(Th) refer exclusively to the passage of state of water W(III). Essential mathematical algorithms are presented in the appendices. 2010 Elsevier B.V. All rights reserved.

  1. Dose and Dose-Rate Effectiveness Factor (DDREF); Der Dosis- und Dosisleistungs-Effektivitaetsfaktor (DDREF)

    Energy Technology Data Exchange (ETDEWEB)

    Breckow, Joachim [Fachhochschule Giessen-Friedberg, Giessen (Germany). Inst. fuer Medizinische Physik und Strahlenschutz

    2016-08-01

    For practical radiation protection purposes it is supposed that stochastic radiation effects are determined by a proportional dose relation (LNT). Radiobiological and radiation epidemiological studies indicated that in the low dose range a dependence on dose rates might exist. This would trigger an overestimation of radiation risks based on the LNT model. ICRP had recommended a concept to combine all effects in a single factor, the DDREF (dose and dose-rate effectiveness factor). There is still too little information on the cellular mechanisms of low dose irradiation, including possible repair and other processes. The Strahlenschutzkommission cannot identify a sufficient scientific justification for the DDREF and recommends an adaptation to the actual state of science.
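
    For readers unfamiliar with how the DDREF enters a risk estimate: in an LNT-type calculation, the risk coefficient derived at high doses and dose rates is simply divided by the DDREF when the exposure is delivered at low doses or low dose rates. The Python sketch below illustrates the arithmetic only; the slope and the DDREF value are illustrative placeholders, not regulatory figures.

        # How a dose and dose-rate effectiveness factor (DDREF) rescales an
        # LNT-type risk coefficient for low-dose-rate exposures (illustrative values).
        RISK_PER_SV_HIGH_DOSE_RATE = 0.10   # assumed slope fitted at high doses/dose rates
        DDREF = 2.0                         # assumed reduction factor

        def excess_risk(dose_sv, low_dose_rate=True):
            slope = RISK_PER_SV_HIGH_DOSE_RATE / (DDREF if low_dose_rate else 1.0)
            return slope * dose_sv          # LNT: risk strictly proportional to dose

        print(excess_risk(0.01))                          # 10 mSv delivered at low dose rate
        print(excess_risk(0.01, low_dose_rate=False))     # same dose at high dose rate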

  2. CRADA Final Report for CRADA Number ORNL00-0605: Advanced Engine/Aftertreatment System R&D

    Energy Technology Data Exchange (ETDEWEB)

    Pihl, Josh A [ORNL; West, Brian H [ORNL; Toops, Todd J [ORNL; Adelman, Brad [Navistar; Derybowski, Edward [Navistar

    2011-10-01

    compound experiments confirmed the previous results regarding hydrocarbon reactivity: 1-pentene was the most efficient LNT reductant, followed by toluene. Injection location had minimal impact on the reactivity of these two compounds. Iso-octane was an ineffective LNT reductant, requiring high doses (resulting in high HC emissions) to achieve reasonable NOx conversions. Diesel fuel reactivity was sensitive to injection location, with the best performance achieved through fuel injection downstream of the DOC. This configuration generated large LNT temperature excursions, which probably improved the efficiency of the NOx storage/reduction process, but also resulted in very high HC emissions. The ORNL team demonstrated an LNT desulfation under 'road load' conditions using throttling, EGR, and in-pipe injection of diesel fuel. Flow reactor characterization of core samples cut from the front and rear of the engine-aged LNT revealed complex spatially dependent degradation mechanisms. The front of the catalyst contained residual sulfates, which impacted NOx storage and conversion efficiencies at high temperatures. The rear of the catalyst showed significant sintering of the washcoat and precious metal particles, resulting in lower NOx conversion efficiencies at low temperatures. Further flow reactor characterization of engine-aged LNT core samples established that low temperature performance was limited by slow release and reduction of stored NOx during regeneration. Carbon monoxide was only effective at regenerating the LNT at temperatures above 200 °C; propene was unreactive even at 250 °C. Low temperature operation also resulted in unselective NOx reduction, resulting in high emissions of both N₂O and NH₃. During the latter years of the CRADA, the focus was shifted from LNTs to other aftertreatment devices. Two years of the CRADA were spent developing detailed ammonia SCR device models with sufficient accuracy and computational efficiency to be used in

  3. A numerical cloud model to interpret the isotope content of hailstones

    International Nuclear Information System (INIS)

    Jouzel, J.; Brichet, N.; Thalmann, B.; Federer, B.

    1980-07-01

    Measurements of the isotope content of hailstones are frequently used to deduce their trajectories and updraft speeds within severe storms. The interpretation was made in the past on the basis of an adiabatic equilibrium model in which the stones grew exclusively by interaction with droplets and vapor. Using the 1D steady-state model of Hirsch with parametrized cloud physics these unrealistic assumptions were dropped and the effects of interactions between droplets, drops, ice crystals and graupel on the concentrations of stable isotopes in hydrometeors were taken into account. The construction of the model is briefly discussed. The resulting height profiles of D and ¹⁸O in hailstones deviate substantially from the equilibrium case, rendering most earlier trajectory calculations invalid. It is also seen that in the lower cloud layers the ice of the stones is richer due to relaxation effects, but at higher cloud layers (T(a) < 0 °C) the ice is much poorer in isotopes. This yields a broader spread of the isotope values in the interval 0 °C > T(a) > -35 °C or, alternatively, it means that hailstones with a very large range of measured isotope concentrations grow in a smaller and therefore more realistic temperature interval. The use of the model in practice will be demonstrated.

  4. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
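
    The abstract does not give the full SPEC algorithm, so the sketch below only illustrates the simpler idea it is benchmarked against: build a calibration model on the master data, estimate the systematic prediction error from a handful of standardization samples measured under the new conditions, and subtract it from subsequent predictions. Data, drift and model settings are simulated placeholders.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        X_master = rng.normal(size=(60, 200))              # master-condition spectra
        y = X_master[:, :5].sum(axis=1)                    # reference analyte values
        X_new = X_master + 0.8                             # simulated instrument/condition drift
        X_std, y_std = X_new[:5], y[:5]                    # a few standardization samples

        model = PLSRegression(n_components=5).fit(X_master, y)

        # Bias-only correction of the systematic prediction error (simplified variant)
        bias = np.mean(model.predict(X_std).ravel() - y_std)
        y_corrected = model.predict(X_new).ravel() - bias
        print(np.sqrt(np.mean((y_corrected - y) ** 2)))    # RMSEP after correction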

  5. Combinatorial DNA Damage Pairing Model Based on X-Ray-Induced Foci Predicts the Dose and LET Dependence of Cell Death in Human Breast Cells

    Energy Technology Data Exchange (ETDEWEB)

    Vadhavkar, Nikhil [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Pham, Christopher [University of Texas, Houston, TX (United States). MD Anderson Cancer Center; Georgescu, Walter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Deschamps, Thomas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Heuskin, Anne-Catherine [Univ. of Namur (Belgium). Namur Research inst. for Life Sciences (NARILIS), Research Center for the Physics of Matter and Radiation (PMR); Tang, Jonathan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.; Costes, Sylvain V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Life Sciences Div.

    2014-09-01

    are based on experimental RIF and are three times larger than the hypothetical LEM voxel used to fit survival curves. Our model is therefore an alternative to previous approaches that provides a testable biological mechanism (i.e., RIF). In addition, we propose that DSB pairing will help develop more accurate alternatives to the linear cancer risk model (LNT) currently used for regulating exposure to very low levels of ionizing radiation.

  6. A parsimonious approach to modeling animal movement data.

    Directory of Open Access Journals (Sweden)

    Yann Tremblay

    Full Text Available Animal tracking is a growing field in ecology and previous work has shown that simple speed filtering of tracking data is not sufficient and that improvement of tracking location estimates is possible. To date, this has required methods that are complicated and often time-consuming (state-space models), resulting in limited application of this technique and the potential for analysis errors due to poor understanding of the fundamental framework behind the approach. We describe and test an alternative and intuitive approach consisting of bootstrapping random walks biased by forward particles. The model uses recorded data accuracy estimates, and can assimilate other sources of data such as sea-surface temperature, bathymetry and/or physical boundaries. We tested our model using ARGOS and geolocation tracks of elephant seals that also carried GPS tags in addition to PTTs, enabling true validation. Among pinnipeds, elephant seals are extreme divers that spend little time at the surface, which considerably impacts the quality of both ARGOS and light-based geolocation tracks. Despite such low overall quality tracks, our model provided location estimates within 4.0, 5.5 and 12.0 km of true location 50% of the time, and within 9, 10.5 and 20.0 km 90% of the time, for above, equal or below average elephant seal ARGOS track qualities, respectively. With geolocation data, 50% of errors were less than 104.8 km (<0.94 degrees), and 90% were less than 199.8 km (<1.80 degrees). Larger errors were due to lack of sea-surface temperature gradients. In addition we show that our model is flexible enough to solve the obstacle avoidance problem by assimilating high resolution coastline data. This reduced the number of invalid on-land locations by almost an order of magnitude. The method is intuitive, flexible and efficient, promising extensive utilization in future research.
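
    The full model assimilates sea-surface temperature, bathymetry and coastlines, which is not reproduced here. As a minimal sketch of the underlying idea only, the Python snippet below draws start and end points for a track segment from the reported ARGOS location errors, builds a cloud of random walks biased toward the drawn end point, and averages the cloud to obtain per-step location estimates; all coordinates and error values are invented.

        import numpy as np

        rng = np.random.default_rng(2)

        # Two consecutive ARGOS fixes on a local plane (km) with reported error SDs (made up)
        p0, sd0 = np.array([0.0, 0.0]), 4.0
        p1, sd1 = np.array([30.0, 10.0]), 12.0
        n_walks, n_steps = 500, 10

        starts = p0 + rng.normal(0.0, sd0, size=(n_walks, 2))
        ends = p1 + rng.normal(0.0, sd1, size=(n_walks, 2))
        paths = np.zeros((n_walks, n_steps + 1, 2))
        paths[:, 0] = starts
        for k in range(1, n_steps + 1):
            drift = (ends - paths[:, k - 1]) / (n_steps - k + 1)   # bias toward the end point
            paths[:, k] = paths[:, k - 1] + drift + rng.normal(0.0, 1.0, size=(n_walks, 2))

        estimate = paths.mean(axis=0)     # per-step location estimate from the bootstrap cloud
        print(estimate[n_steps // 2])     # mid-segment position estimate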

  7. Vortex ring state by full-field actuator disc model

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, J.N.; Shen, W.Z.; Munduate, X. [DTU, Dept. of Energy Engineering, Lyngby (Denmark)

    1997-08-01

    One-dimensional momentum theory provides a simple analytical tool for analysing the gross flow behavior of lifting propellers and rotors. Combined with a blade-element strip-theory approach, it has for many years been the most popular model for load and performance predictions of wind turbines. The model works well at moderate and high wind velocities, but is not reliable at small wind velocities, where the expansion of the wake is large and the flow field behind the rotor dominated by turbulent mixing. This is normally referred to as the turbulent wake state or the vortex ring state. In the vortex ring state, momentum theory predicts a decrease of thrust whereas the opposite is found from experiments. The reason for the disagreement is that recirculation takes place behind the rotor with the consequence that the stream tubes past the rotor become effectively choked. This represents a condition at which streamlines no longer carry fluid elements from far upstream to far downstream, hence one-dimensional momentum theory is invalid and empirical corrections have to be introduced. More sophisticated analytical or semi-analytical rotor models have been used to describe stationary flow fields for heavily loaded propellers. In recent years generalized actuator disc models have been developed, but up to now no detailed computations of the turbulent wake state or the vortex ring state have been performed. In the present work the phenomenon is simulated by direct simulation of the Navier-Stokes equations, where the influence of the rotor on the flow field is modelled simply by replacing the blades by an actuator disc with a constant normal load. (EG) 13 refs.

  8. Validity of two-phase polymer electrolyte membrane fuel cell models with respect to the gas diffusion layer

    Science.gov (United States)

    Ziegler, C.; Gerteisen, D.

    A dynamic two-phase model of a proton exchange membrane fuel cell with respect to the gas diffusion layer (GDL) is presented and compared with chronoamperometric experiments. Very good agreement between experiment and simulation is achieved for potential step voltammetry (PSV) and sine wave testing (SWT). Homogenized two-phase models can be categorized into unsaturated flow theory (UFT) and multiphase mixture (M²) models. Both model approaches use the continuum hypothesis as fundamental assumption. Cyclic voltammetry experiments show that there is a deterministic and a stochastic liquid transport mode depending on the fraction of hydrophilic pores of the GDL. ESEM imaging is used to investigate the morphology of the liquid water accumulation in the pores of two different media (unteflonated Toray-TGP-H-090 and hydrophobic Freudenberg H2315 I3). The morphology of the liquid water accumulation is related to the cell behavior. The results show that UFT and M² two-phase models are a valid approach for diffusion media with a large fraction of hydrophilic pores such as unteflonated Toray-TGP-H paper. However, the use of the homogenized UFT and M² models appears to be invalid for GDLs with a large fraction of hydrophobic pores that corresponds to a high average contact angle of the GDL.

  9. Lipoproteins of slow-growing Mycobacteria carry three fatty acids and are N-acylated by apolipoprotein N-acyltransferase BCG_2070c.

    Science.gov (United States)

    Brülle, Juliane K; Tschumi, Andreas; Sander, Peter

    2013-10-05

    Lipoproteins are virulence factors of Mycobacterium tuberculosis. Bacterial lipoproteins are modified by the consecutive action of preprolipoprotein diacylglyceryl transferase (Lgt), prolipoprotein signal peptidase (LspA) and apolipoprotein N-acyltransferase (Lnt) leading to the formation of mature triacylated lipoproteins. Lnt homologues are found in Gram-negative and high GC-rich Gram-positive, but not in low GC-rich Gram-positive bacteria, although N-acylation is observed. In fast-growing Mycobacterium smegmatis, the molecular structure of the lipid modification of lipoproteins was resolved recently as a diacylglyceryl residue carrying ester-bound palmitic acid and ester-bound tuberculostearic acid and an additional amide-bound palmitic acid. We exploit the vaccine strain Mycobacterium bovis BCG as model organism to investigate lipoprotein modifications in slow-growing mycobacteria. Using Escherichia coli Lnt as a query in BLASTp search, we identified BCG_2070c and BCG_2279c as putative lnt genes in M. bovis BCG. Lipoproteins LprF, LpqH, LpqL and LppX were expressed in M. bovis BCG and BCG_2070c lnt knock-out mutant and lipid modifications were analyzed at molecular level by matrix-assisted laser desorption ionization time-of-flight/time-of-flight analysis. Lipoprotein N-acylation was observed in wildtype but not in BCG_2070c mutants. Lipoprotein N-acylation with palmitoyl and tuberculostearyl residues was observed. Lipoproteins are triacylated in slow-growing mycobacteria. BCG_2070c encodes a functional Lnt in M. bovis BCG. We identified mycobacteria-specific tuberculostearic acid as further substrate for N-acylation in slow-growing mycobacteria.

  10. A statistical framework for modeling HLA-dependent T cell response data.

    Directory of Open Access Journals (Sweden)

    Jennifer Listgarten

    2007-10-01

    Full Text Available The identification of T cell epitopes and their HLA (human leukocyte antigen) restrictions is important for applications such as the design of cellular vaccines for HIV. Traditional methods for such identification are costly and time-consuming. Recently, a more expeditious laboratory technique using ELISpot assays has been developed that allows for rapid screening of specific responses. However, this assay does not directly provide information concerning the HLA restriction of a response, a critical piece of information for vaccine design. Thus, we introduce, apply, and validate a statistical model for identifying HLA-restricted epitopes from ELISpot data. By looking at patterns across a broad range of donors, in conjunction with our statistical model, we can determine (probabilistically) which of the HLA alleles are likely to be responsible for the observed reactivities. Additionally, we can provide a good estimate of the number of false positives generated by our analysis (i.e., the false discovery rate). This model allows us to learn about new HLA-restricted epitopes from ELISpot data in an efficient, cost-effective, and high-throughput manner. We applied our approach to data from donors infected with HIV and identified many potential new HLA restrictions. Among 134 such predictions, six were confirmed in the lab and the remainder could not be ruled as invalid. These results shed light on the extent of HLA class I promiscuity, which has significant implications for the understanding of HLA class I antigen presentation and vaccine development.

  11. Mitigating Bias in Generalized Linear Mixed Models: The Case for Bayesian Nonparametrics.

    Science.gov (United States)

    Antonelli, Joseph; Trippa, Lorenzo; Haneuse, Sebastien

    2016-02-01

    Generalized linear mixed models are a common statistical tool for the analysis of clustered or longitudinal data where correlation is accounted for through cluster-specific random effects. In practice, the distribution of the random effects is typically taken to be a Normal distribution, although if this does not hold then the model is misspecified and standard estimation/inference may be invalid. An alternative is to perform a so-called nonparametric Bayesian analysis in which one assigns a Dirichlet process (DP) prior to the unknown distribution of the random effects. In this paper we examine operating characteristics for estimation of fixed effects and random effects based on such an analysis under a range of "true" random effects distributions. As part of this we investigate various approaches for selection of the precision parameter of the DP prior. In addition, we illustrate the use of the methods with an analysis of post-operative complications among n = 18,643 female Medicare beneficiaries who underwent a hysterectomy procedure at N = 503 hospitals in the US. Overall, we conclude that using the DP prior in modeling the random effect distribution results in large reductions of bias with little loss of efficiency. While no single choice for the precision parameter will be optimal in all settings, certain strategies such as importance sampling or empirical Bayes can be used to obtain reasonable results in a broad range of data scenarios.
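
    To make the Dirichlet process alternative concrete, the sketch below draws one random-effects distribution from a (truncated) stick-breaking representation of a DP prior and then samples cluster-specific intercepts from it; this is only the prior-simulation step, not the full posterior analysis of the paper, and the precision parameter and base measure are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(3)

        def dp_stick_breaking(alpha, base_draw, n_atoms=100):
            """Truncated stick-breaking draw from a Dirichlet process prior."""
            betas = rng.beta(1.0, alpha, size=n_atoms)
            weights = betas * np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
            return base_draw(n_atoms), weights / weights.sum()

        # DP(alpha, G0) with a Normal(0, 1) base measure for the random-intercept distribution
        atoms, weights = dp_stick_breaking(2.0, lambda n: rng.normal(0.0, 1.0, n))

        # Cluster-specific random intercepts for 503 hypothetical hospitals
        random_effects = rng.choice(atoms, size=503, p=weights)
        print(random_effects[:5])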

  12. Mechanism for the non-Fermi-liquid behavior in CeCu6-xAux

    DEFF Research Database (Denmark)

    Rosch, A.; Schröder, A.; Stockert, O.

    1997-01-01

    We propose an explanation for the recently observed non-Fermi-liquid behavior of metallic alloys CeCu6-xAux: Near x = 0.1, the specific heat C is proportional to T ln(T_0/T), and the resistivity increases linearly with temperature T over a wide range of T. These features follow from a model

  13. IUE ultraviolet spectra and chromospheric models of HR 1099 and UX Arietis

    Science.gov (United States)

    Simon, T.; Linsky, J. L.

    1980-01-01

    IUE spectra in the region 1150-3200 A of the RS CVn-type variables HR 1099 and UX Arietis are presented and analyzed in terms of chromospheric models. Measurements of Mg h and k lines and Ca II H-K and H alpha spectra are indicated which are found not to be correlated with orbital phase or radio flares and which suggest that the strong emission arises in the K star rather than the G star in these systems. Under the assumption that the UV emission lines are associated with the K star, surface gravities of log g = 3.6 and 3.4 and effective temperatures of 4850 and 5000 K are adopted for HR 1099 and UX Ari, respectively, along with solar metal abundances for each. Model calculations of the chromospheric structure necessary to account for observed C(+), Mg(+), Si(+) and Si(+2) line fluxes are presented which indicate that the transition region pressure lies in the range 0.18-1.0 dynes/sq cm, implying transition regions that are more extended than that of the sun and are not conductively heated. It is noted that pressure scaling laws and the use of Mg II and C II lines as pressure diagnostics may be invalid, possibly due to atmospheric inhomogeneities or gas flows.

  14. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains

    International Nuclear Information System (INIS)

    Kraus, N.N.; Slovic, P.

    1988-01-01

    Previous studies of risk perception have typically focused on the mean judgments of a group of people regarding the riskiness (or safety) of a diverse set of hazardous activities, substances, and technologies. This paper reports the results of two studies that take a different path. Study 1 investigated whether models within a single technological domain were similar to previous models based on group means and diverse hazards. Study 2 created a group taxonomy of perceived risk for only one technological domain, railroads, and examined whether the structure of that taxonomy corresponded with taxonomies derived from prior studies of diverse hazards. Results from Study 1 indicated that the importance of various risk characteristics in determining perceived risk differed across individuals and across hazards, but not so much as to invalidate the results of earlier studies based on group means and diverse hazards. In Study 2, the detailed analysis of railroad hazards produced a structure that had both important similarities to, and dissimilarities from, the structure obtained in prior research with diverse hazard domains. The data also indicated that railroad hazards are really quite diverse, with some approaching nuclear reactors in their perceived seriousness. These results suggest that information about the diversity of perceptions within a single domain of hazards could provide valuable input to risk-management decisions

  15. Global and local cancer risks after the Fukushima Nuclear Power Plant accident as seen from Chernobyl: a modeling study for radiocaesium (¹³⁴Cs & ¹³⁷Cs).

    Science.gov (United States)

    Evangeliou, Nikolaos; Balkanski, Yves; Cozic, Anne; Møller, Anders Pape

    2014-03-01

    The accident at the Fukushima Daiichi Nuclear Power Plant (NPP) in Japan resulted in the release of a large number of fission products that were transported worldwide. We study the effects of two of the most dangerous radionuclides emitted, ¹³⁷Cs (half-life: 30.2 years) and ¹³⁴Cs (half-life: 2.06 years), which were transported across the world constituting the global fallout (together with iodine isotopes and noble gases) after nuclear releases. The main purpose is to provide preliminary cancer risk estimates after the Fukushima NPP accident, in terms of excess lifetime incident and death risks, prior to epidemiology, and compare them with those that occurred after the Chernobyl accident. Moreover, cancer risks are presented for the local population in the form of high-resolution risk maps for 3 population classes and for both sexes. The atmospheric transport model LMDZORINCA was used to simulate the global dispersion of radiocaesium after the accident. Air and ground activity concentrations have been incorporated with monitoring data as input to the LNT-model (Linear Non-Threshold) frequently used in risk assessments of all solid cancers. Cancer risks were estimated to be small for the global population in regions outside Japan. Women are more sensitive to radiation than men, although the largest risks were recorded for infants; the risk is not dependent on the sex at the age-at-exposure. Radiation risks from Fukushima were more enhanced near the plant, while the evacuation measures were crucial for its reduction. According to our estimations, 730-1700 excess cancer incidents are expected, of which around 65% may be fatal, which is very close to what has been already published (see references therein). Finally, we applied the same calculations using the DDREF (Dose and Dose Rate Effectiveness Factor), which is recommended by the ICRP, UNSCEAR and EPA as an alternative reduction factor instead of using a threshold value (which is still unknown). Excess lifetime cancer
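
    As a schematic illustration of the kind of LNT-based collective-risk arithmetic described above (with or without a DDREF divisor), the Python sketch below sums dose-proportional individual risks over a hypothetical exposed population; the dose distribution, risk coefficient and DDREF are placeholders, not the study's values.

        import numpy as np

        rng = np.random.default_rng(4)
        doses_sv = rng.lognormal(mean=np.log(2e-4), sigma=1.0, size=100_000)  # hypothetical doses
        RISK_COEFF_PER_SV = 0.055     # assumed lifetime incidence slope
        DDREF = 2.0                   # optional reduction factor, as discussed in the abstract

        excess_incidents_lnt = (RISK_COEFF_PER_SV * doses_sv).sum()      # pure LNT
        print(excess_incidents_lnt, excess_incidents_lnt / DDREF)        # with DDREF applied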

  16. The conceptualization model problem—surprise

    Science.gov (United States)

    Bredehoeft, John

    2005-03-01

    The foundation of model analysis is the conceptual model. Surprise is defined as new data that renders the prevailing conceptual model invalid; as defined here it represents a paradigm shift. Limited empirical data indicate that surprises occur in 20-30% of model analyses. These data suggest that groundwater analysts have difficulty selecting the appropriate conceptual model. There is no ready remedy to the conceptual model problem other than (1) to collect as much data as is feasible, using all applicable methods—a complementary data collection methodology can lead to new information that changes the prevailing conceptual model, and (2) for the analyst to remain open to the fact that the conceptual model can change dramatically as more information is collected. In the final analysis, the hydrogeologist makes a subjective decision on the appropriate conceptual model. The conceptualization problem does not render models unusable. The problem introduces an uncertainty that often is not widely recognized. Conceptual model uncertainty is exacerbated in making long-term predictions of system performance.

  17. The Application of Cyber Physical System for Thermal Power Plants: Data-Driven Modeling

    Directory of Open Access Journals (Sweden)

    Yongping Yang

    2018-03-01

    Full Text Available Optimal operation of energy systems plays an important role in enhancing their lifetime security and efficiency. The determination of optimal operating strategies requires intelligent utilization of massive data accumulated during operation or prediction. The investigation of these data solely, without combining physical models, may run the risk that the established relationships between inputs and outputs, the models which reproduce the behavior of the considered system/component in a wide range of boundary conditions, are invalid for certain boundary conditions which never occur in the database employed. Therefore, combining big data with physical models via cyber physical systems (CPS) is of great importance to derive highly reliable and accurate models, and is becoming more and more popular in practical applications. In this paper, we focus on the description of a systematic method to apply CPS to the performance analysis and decision making of thermal power plants. We proposed a general procedure of CPS with both offline and online phases for its application to thermal power plants and discussed the corresponding methods employed to support each sub-procedure. As an example, a data-driven model of the turbine island of an existing air-cooling based thermal power plant is established with the proposed procedure and demonstrates its practicality, validity and flexibility. To establish such a model, the historical operating data are employed in the cyber layer for modeling and linking each physical component. The decision-making procedure for the optimal frequency of the air-cooling condenser is also illustrated to show its applicability for online use. It is concluded that the cyber physical system with the data mining technique is effective and promising to facilitate the real-time analysis and control of thermal power plants.

  18. Internal velocity and mass distributions in simulated clusters of galaxies for a variety of cosmogonic models

    Science.gov (United States)

    Cen, Renyue

    1994-01-01

    The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Ω_0 = 1 and two Ω_0 = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Ω_0 and P(k) (λ ≤ 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P(k), and density profiles of massive clusters; more power tends to point to the direction of a stronger correlation between α and M(r < 1.5/h Mpc); i.e., massive clusters being relatively extended and small mass clusters being relatively concentrated. A lower Ω_0 universe tends to produce relatively concentrated massive clusters and relatively extended small mass clusters compared to their counterparts in a higher Ω_0 model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce more extended mass distributions than the isothermal distribution for most of the mass clusters. But the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Ω_0 and/or the (initial) power of the density fluctuations on scales λ ≤ 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.

  19. TU-C-18A-01: Models of Risk From Low-Dose Radiation Exposures: What Does the Evidence Say?

    International Nuclear Information System (INIS)

    Bushberg, J; Boreham, D; Ulsh, B

    2014-01-01

    At dose levels of (approximately) 500 mSv or more, increased cancer incidence and mortality have been clearly demonstrated. However, at the low doses of radiation used in medical imaging, the relationship between dose and cancer risk is not well established. As such, assumptions about the shape of the dose-response curve are made. These assumptions, or risk models, are used to estimate potential long term effects. Common models include 1) the linear non-threshold (LNT) model, 2) threshold models with either a linear or curvilinear dose response above the threshold, and 3) a hormetic model, where the risk is initially decreased below background levels before increasing. The choice of model used when making radiation risk or protection calculations and decisions can have significant implications on public policy and health care decisions. However, the ongoing debate about which risk model best describes the dose-response relationship at low doses of radiation makes informed decision making difficult. This symposium will review the two fundamental approaches to determining the risk associated with low doses of ionizing radiation, namely radiation epidemiology and radiation biology. The strengths and limitations of each approach will be reviewed, the results of recent studies presented, and the appropriateness of different risk models for various real world scenarios discussed. Examples of well-designed and poorly-designed studies will be provided to assist medical physicists in 1) critically evaluating publications in the field and 2) communicating accurate information to medical professionals, patients, and members of the general public. Equipped with the best information that radiation epidemiology and radiation biology can currently provide, and an understanding of the limitations of such information, individuals and organizations will be able to make more informed decisions regarding questions such as 1) how much shielding to install at medical facilities, 2) at

  20. TU-C-18A-01: Models of Risk From Low-Dose Radiation Exposures: What Does the Evidence Say?

    Energy Technology Data Exchange (ETDEWEB)

    Bushberg, J [UC Davis Medical Center, Sacramento, CA (United States); Boreham, D [McMaster University, Ontario, CA (Canada); Ulsh, B

    2014-06-15

    At dose levels of (approximately) 500 mSv or more, increased cancer incidence and mortality have been clearly demonstrated. However, at the low doses of radiation used in medical imaging, the relationship between dose and cancer risk is not well established. As such, assumptions about the shape of the dose-response curve are made. These assumptions, or risk models, are used to estimate potential long term effects. Common models include 1) the linear non-threshold (LNT) model, 2) threshold models with either a linear or curvilinear dose response above the threshold, and 3) a hormetic model, where the risk is initially decreased below background levels before increasing. The choice of model used when making radiation risk or protection calculations and decisions can have significant implications on public policy and health care decisions. However, the ongoing debate about which risk model best describes the dose-response relationship at low doses of radiation makes informed decision making difficult. This symposium will review the two fundamental approaches to determining the risk associated with low doses of ionizing radiation, namely radiation epidemiology and radiation biology. The strengths and limitations of each approach will be reviewed, the results of recent studies presented, and the appropriateness of different risk models for various real world scenarios discussed. Examples of well-designed and poorly-designed studies will be provided to assist medical physicists in 1) critically evaluating publications in the field and 2) communicating accurate information to medical professionals, patients, and members of the general public. Equipped with the best information that radiation epidemiology and radiation biology can currently provide, and an understanding of the limitations of such information, individuals and organizations will be able to make more informed decisions regarding questions such as 1) how much shielding to install at medical facilities, 2) at
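
    To make the three dose-response shapes named in these two records concrete, the sketch below encodes an LNT, a threshold and a hormetic curve as simple parametric functions and tabulates them over a small dose range; the functional forms and parameter values are illustrative assumptions, not fitted models.

        import numpy as np

        dose = np.linspace(0.0, 0.5, 6)                     # Sv

        def lnt(d, slope=0.1):
            return slope * d                                # linear no-threshold

        def threshold(d, d0=0.1, slope=0.1):
            return np.where(d > d0, slope * (d - d0), 0.0)  # no excess risk below d0

        def hormetic(d, slope=0.1, dip=0.02, scale=0.05):
            # small initial decrease below baseline, then a rise with dose
            return slope * d - dip * np.exp(-d / scale) * (d > 0)

        for name, f in (("LNT", lnt), ("threshold", threshold), ("hormetic", hormetic)):
            print(name, np.round(f(dose), 4))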

  1. Crisis Decision Making Through a Shared Integrative Negotiation Mental Model

    NARCIS (Netherlands)

    Van Santen, W.; Jonker, C.M.; Wijngaards, N.

    2009-01-01

    Decision making during crises takes place in (multi-agency) teams, in a bureaucratic political context. As a result, the common notion that during crises decision making should be done in line with a Command & Control structure is invalid. This paper shows that the best way for crisis decision

  2. A novel mouse model of creatine transporter deficiency [v2; ref status: indexed, http://f1000r.es/4zb]

    Directory of Open Access Journals (Sweden)

    Laura Baroncelli

    2015-01-01

    Full Text Available Mutations in the creatine (Cr) transporter (CrT) gene lead to cerebral creatine deficiency syndrome-1 (CCDS1), an X-linked metabolic disorder characterized by cerebral Cr deficiency causing intellectual disability, seizures, movement and behavioral disturbances, and language and speech impairment (OMIM #300352). CCDS1 is still an untreatable pathology that can be very invalidating for patients and caregivers. Only two murine models of CCDS1, one of which is a ubiquitous knockout mouse, are currently available to study the possible mechanisms underlying the pathologic phenotype of CCDS1 and to develop therapeutic strategies. Given the importance of validating phenotypes and efficacy of promising treatments in more than one mouse model, we have generated a new murine model of CCDS1 obtained by ubiquitous deletion of exons 5-7 in the Slc6a8 gene. We showed a remarkable Cr depletion in the murine brain tissues and cognitive defects, thus resembling the key features of human CCDS1. These results confirm that CCDS1 can be well modeled in mice. This CrT−/y murine model will provide a new tool for increasing the relevance of preclinical studies to the human disease.

  3. A novel mouse model of creatine transporter deficiency [v1; ref status: indexed, http://f1000r.es/4f8]

    Directory of Open Access Journals (Sweden)

    Laura Baroncelli

    2014-09-01

    Full Text Available Mutations in the creatine (Cr) transporter (CrT) gene lead to cerebral creatine deficiency syndrome-1 (CCDS1), an X-linked metabolic disorder characterized by cerebral Cr deficiency causing intellectual disability, seizures, movement and behavioral disturbances, and language and speech impairment (OMIM #300352). CCDS1 is still an untreatable pathology that can be very invalidating for patients and caregivers. Only two murine models of CCDS1, one of which is a ubiquitous knockout mouse, are currently available to study the possible mechanisms underlying the pathologic phenotype of CCDS1 and to develop therapeutic strategies. Given the importance of validating phenotypes and efficacy of promising treatments in more than one mouse model, we have generated a new murine model of CCDS1 obtained by ubiquitous deletion of exons 5-7 in the Slc6a8 gene. We showed a remarkable Cr depletion in the murine brain tissues and cognitive defects, thus resembling the key features of human CCDS1. These results confirm that CCDS1 can be well modeled in mice. This CrT−/y murine model will provide a new tool for increasing the relevance of preclinical studies to the human disease.

  4. Selective experimental review of the Standard Model

    International Nuclear Information System (INIS)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_qed, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses, M_e, M_μ, M_τ; the quark masses, M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles, θ_1, θ_2, θ_3, and the CP violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self interaction of gluons. The search for the three gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q anti-q states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures

  5. Modelling regional variability of irrigation requirements due to climate change in Northern Germany.

    Science.gov (United States)

    Riediger, Jan; Breckling, Broder; Svoboda, Nikolai; Schröder, Winfried

    2016-01-15

    The question whether global climate change invalidates the efficiency of established land use practice cannot be answered without systemic considerations on a region specific basis. In this context plant water availability and irrigation requirements, respectively, were investigated in Northern Germany. The regions under investigation--Diepholz, Uelzen, Fläming and Oder-Spree--represent a climatic gradient with increasing continentality from West to East. Besides regional climatic variation and climate change, soil conditions and crop management differ on the regional scale. In the model regions, temporal seasonal droughts influence crop success already today, but on different levels of intensity depending mainly on climate conditions. By linking soil water holding capacities, crop management data and calculations of evapotranspiration and precipitation from the climate change scenario RCP 8.5 irrigation requirements for maintaining crop productivity were estimated for the years 1991 to 2070. Results suggest that water requirement for crop irrigation is likely to increase with considerable regional variation. For some of the regions, irrigation requirements might increase to such an extent that the established regional agricultural practice might be hard to retain. Where water availability is limited, agricultural practice, like management and cultivated crop spectrum, has to be changed to deal with the new challenges. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. A quadrature-based kinetic model for a dilute non-isothermal granular gas

    Science.gov (United States)

    Passalacqua, Alberto; Galvin, Janine; Vedula, Prakash; Hrenya, Christine; Fox, Rodney

    2009-11-01

    A dilute non-isothermal inelastic granular gas between two stationary Maxwellian walls is studied by means of numerical simulations of the Boltzmann kinetic equation with hard-sphere collisions. The behavior of a granular gas in these conditions is influenced by the thickness of the wall Knudsen layer: if its thickness is not negligible, the traditional description based on the Navier-Stokes-Fourier equations is invalid, and it is necessary to account for the presence of rarefaction effects using high-order solutions of the Boltzmann equation. The system is described by solving the full Boltzmann equation using a quadrature-based moment method (QMOM), with different orders of accuracy in terms of the moments of the distribution function, considering moments up to the seventh order. Four different inelastic collision models (BGK, ES-BGK, Maxwell hard-sphere, Boltzmann hard-sphere) are employed. QMOM results are compared with the predictions of molecular dynamics (MD) simulations of a nearly equivalent system with finite-size particles, showing the agreement of constitutive quantities such as heat flux and stress tensor.

  7. Energy Efficient Thermal Management for Natural Gas Engine Aftertreatment via Active Flow Control

    Energy Technology Data Exchange (ETDEWEB)

    David K. Irick; Ke Nguyen; Vitacheslav Naoumov; Doug Ferguson

    2006-04-01

    The project is focused on the development of an energy efficient aftertreatment system capable of reducing NOx and methane by 90% from lean-burn natural gas engines by applying active exhaust flow control. Compared to conventional passive flow-through reactors, the proposed scheme cuts supplemental energy by 50%-70%. The system consists of a Lean NOx Trap (LNT) system and an oxidation catalyst. Through alternating flow control, a major amount of engine exhaust flows through a large portion of the LNT system in the absorption mode, while a small amount of exhaust goes through a small portion of the LNT system in the regeneration or desulfurization mode. By periodically reversing the exhaust gas flow through the oxidation catalyst, a higher temperature profile is maintained in the catalyst bed resulting in greater efficiency of the oxidation catalyst at lower exhaust temperatures. The project involves conceptual design, theoretical analysis, computer simulation, prototype fabrication, and empirical studies. This report details the progress during the first twelve months of the project. The primary activities have been to develop the bench flow reactor system, develop the computer simulation and modeling of the reverse-flow oxidation catalyst, install the engine into the test cell, and begin design of the LNT system.

  8. Reversal of il-1β-mediated human embryonic pulmonary fibroblast transdifferentiation by targeting the ERK signaling pathway

    Directory of Open Access Journals (Sweden)

    Jin Long-Teng

    2014-01-01

    Full Text Available The aim of the present study was to determine whether interleukin (IL)-1β-mediated human embryonic pulmonary fibroblast transdifferentiation could be reversed by targeting the ERK signaling pathway. The human embryonic pulmonary fibroblast MRC-5 cell line was used as a model to observe IL-1β-mediated transdifferentiation as well as the inhibitory effects of lentinan (LNT). Cell proliferation was examined by a CCK-8 assay. ERK signaling activity was detected using immunoblotting with a phospho-ERK antibody. The expression levels of fibronectin (FN), Col I and α-smooth muscle actin (α-SMA) were assessed by either reverse transcription PCR or the SABC assay. IL-1β-induced ERK signaling activation in MRC-5 cells was inhibited by pretreatment with LNT or the ERK inhibitor U0126. IL-1β-enhanced cell proliferation and expression of FN, Col I and α-SMA were also attenuated by treatment with LNT. Our study revealed that activation of ERK signaling is involved in IL-1β-mediated human embryonic pulmonary fibroblast proliferation, phenotypic switching and collagen secretion. These transdifferentiation events in MRC-5 cells could be reversed with LNT treatment by targeting the ERK signaling pathway.

  9. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data used to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
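
    The essence of such a probabilistic inversion is that independent data streams simply add their log-likelihoods, so each stream constrains the parameters it is informative about. The sketch below is a deliberately tiny, invented example (one parameter, two synthetic data streams, a Metropolis sampler); it is not the authors' terrestrial carbon model or their inversion code.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "flux-like" and "biometric-like" observations of one parameter.
        true_k = 0.3
        flux_obs = true_k * 10.0 + rng.normal(0, 0.5, size=50)
        stock_obs = true_k * 40.0 + rng.normal(0, 4.0, size=10)

        def log_lik(k):
            # Independent Gaussian likelihoods simply add on the log scale.
            ll_flux = -0.5 * np.sum((flux_obs - k * 10.0) ** 2 / 0.5 ** 2)
            ll_stock = -0.5 * np.sum((stock_obs - k * 40.0) ** 2 / 4.0 ** 2)
            return ll_flux + ll_stock

        # Plain Metropolis sampler with a flat prior and a symmetric proposal.
        k, samples = 0.5, []
        for _ in range(5000):
            prop = k + rng.normal(0, 0.02)
            if np.log(rng.uniform()) < log_lik(prop) - log_lik(k):
                k = prop
            samples.append(k)

        print(np.mean(samples[1000:]), np.std(samples[1000:]))  # posterior summary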

  10. Maintaining the predictive abilities of multivariate calibration models by spectral space transformation.

    Science.gov (United States)

    Du, Wen; Chen, Zeng-Ping; Zhong, Li-Jing; Wang, Shu-Xia; Yu, Ru-Qin; Nordon, Alison; Littlejohn, David; Holden, Megan

    2011-03-25

    In quantitative on-line/in-line monitoring of chemical and bio-chemical processes using spectroscopic instruments, multivariate calibration models are indispensable for the extraction of chemical information from complex spectroscopic measurements. The development of reliable multivariate calibration models is generally time-consuming and costly. Therefore, once a reliable multivariate calibration model is established, it is expected to be used for an extended period. However, any change in the instrumental response or variations in the measurement conditions can render a multivariate calibration model invalid. In this contribution, a new method, spectral space transformation (SST), has been developed to maintain the predictive abilities of multivariate calibration models when the spectrometer or measurement conditions are altered. SST tries to eliminate the spectral differences induced by the changes in instruments or measurement conditions through the transformation between two spectral spaces spanned by the corresponding spectra of a subset of standardization samples measured on two instruments or under two sets of experimental conditions. The performance of the method has been tested on two data sets comprising NIR and MIR spectra. The experimental results show that SST can achieve satisfactory analyte predictions from spectroscopic measurements subject to spectrometer/probe alteration, when only a few standardization samples are used. Compared with the existing popular methods designed for the same purpose, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS), SST has the advantages of implementation simplicity, wider applicability and better performance in terms of predictive accuracy. Copyright © 2011 Elsevier B.V. All rights reserved.
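
    The core idea behind standardization-sample-based transfer is that spectra of the same samples measured under both conditions define a linear map between the two spectral spaces, which can then be applied to new measurements before reusing the original calibration model. The sketch below shows that generic idea with synthetic data and an ordinary least-squares transform; it is a rough analogue for illustration only, not the SST algorithm as published.

        import numpy as np

        rng = np.random.default_rng(0)
        n_std, n_wl = 5, 20                       # standardization samples, wavelengths

        # Synthetic "master" spectra and a shifted/scaled "slave" response.
        S_master = rng.normal(size=(n_std, n_wl))
        S_slave = 0.9 * S_master + 0.05 + 0.01 * rng.normal(size=(n_std, n_wl))

        # Least-squares transform mapping slave measurements into the master space.
        F, *_ = np.linalg.lstsq(S_slave, S_master, rcond=None)

        new_measurement = S_slave[:1]             # a spectrum taken on the altered instrument
        corrected = new_measurement @ F           # now usable with the master calibration model
        print(np.abs(corrected - S_master[:1]).max())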

  11. Microkinetic Modeling of Lean NOx Trap Storage and Regeneration

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Richard S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chakravarthy, V. Kalyana [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pihl, Josh A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Daw, C. Stuart [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-12-01

    A microkinetic chemical reaction mechanism capable of describing both the storage and regeneration processes in a fully formulated lean NOx trap (LNT) is presented. The mechanism includes steps occurring on the precious metal, barium oxide (NOx storage), and cerium oxide (oxygen storage) sites of the catalyst. The complete reaction set is used in conjunction with a transient plug flow reactor code (including boundary layer mass transfer) to simulate not only a set of long storage/regeneration cycles with a CO/H2 reductant, but also a series of steady flow temperature sweep experiments that were previously analyzed with just a precious metal mechanism and a steady state code neglecting mass transfer. The results show that, while mass transfer effects are generally minor, NOx storage is not negligible during some of the temperature ramps, necessitating a re-evaluation of the precious metal kinetic parameters. The parameters for the entire mechanism are inferred by finding the best overall fit to the complete set of experiments. Rigorous thermodynamic consistency is enforced for parallel reaction pathways and with respect to known data for all of the gas phase species involved. It is found that, with a few minor exceptions, all of the basic experimental observations can be reproduced with the transient simulations. In addition to accounting for normal cycling behavior, the final mechanism should provide a starting point for the description of further LNT phenomena such as desulfation and the role of alternative reductants.

  12. Revisiting the Gram-negative lipoprotein paradigm.

    Science.gov (United States)

    LoVullo, Eric D; Wright, Lori F; Isabella, Vincent; Huntley, Jason F; Pavelka, Martin S

    2015-05-01

    The processing of lipoproteins (Lpps) in Gram-negative bacteria is generally considered an essential pathway. Mature lipoproteins in these bacteria are triacylated, with the final fatty acid addition performed by Lnt, an apolipoprotein N-acyltransferase. The mature lipoproteins are then sorted by the Lol system, with most Lpps inserted into the outer membrane (OM). We demonstrate here that the lnt gene is not essential to the Gram-negative pathogen Francisella tularensis subsp. tularensis strain Schu or to the live vaccine strain LVS. An LVS Δlnt mutant has a small-colony phenotype on sucrose medium and increased susceptibility to globomycin and rifampin. We provide data indicating that the OM lipoprotein Tul4A (LpnA) is diacylated but that it, and its paralog Tul4B (LpnB), still sort to the OM in the Δlnt mutant. We present a model in which the Lol sorting pathway of Francisella has a modified ABC transporter system that is capable of recognizing and sorting both triacylated and diacylated lipoproteins, and we show that this modified system is present in many other Gram-negative bacteria. We examined this model using Neisseria gonorrhoeae, which has the same Lol architecture as that of Francisella, and found that the lnt gene is not essential in this organism. This work suggests that Gram-negative bacteria fall into two groups, one in which full lipoprotein processing is essential and one in which the final acylation step is not essential, potentially due to the ability of the Lol sorting pathway in these bacteria to sort immature apolipoproteins to the OM. This paper describes the novel finding that the final stage in lipoprotein processing (normally considered an essential process) is not required by Francisella tularensis or Neisseria gonorrhoeae. The paper provides a potential reason for this and shows that it may be widespread in other Gram-negative bacteria. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  13. Geochemical modelling of groundwater chemistry in the Tono area

    International Nuclear Information System (INIS)

    Arthur, R.C.

    2003-03-01

    This report summarizes the research results on geochemical modelling of groundwater during the H14 financial year. JNC-TGC has built geochemical conceptual models of groundwater by using the chemical data of the groundwater of the Mizunami group and Toki granite. Although these models are extremely useful as interpretive tools, they lack the quantitative basis necessary to evaluate coupled processes of fluid flow and water-rock interaction driving the chemical evolution of groundwater systems. In this research, the following three items have been considered for the purpose of constructing a geochemical model which can represent the chemical reactions in groundwater correctly: evaluation of the quality of the previous analytical data in the Tono region; characterization of the chemical character of the groundwaters; and consideration of the interdependence of the Eh, pH and CO2(g) parameters and of how a change in one affects the others. Evaluation of the quality of the previous analytical data is important because deficiencies in sampling technique, sample-preservation procedures or analytical method may adversely affect the overall quality of groundwater chemical and isotopic analyses. In addition, the effects of borehole drilling and logging, hydraulic testing, inappropriate sampling strategies or inadequate sampling tools may perturb groundwater compositions to such an extent that they are unrepresentative of in-situ conditions. The quality of a water analysis is indicated by its charge balance. The charge balance of many of the analyses lies within the strictly acceptable range of 0±5%, but charge imbalances exceeding these limits are calculated for many other samples. The reasons are examined in the following: analytical errors (e.g., of alkalinity); errors arising from the use of IC values that are unrepresentative because CO2 was gained by or lost from the sample during storage and (or) analysis; errors arising from the invalid assumption that non-carbonate contributions to the
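
    The charge-balance check mentioned above has a standard form: the percentage difference between total cation and total anion charge (in meq/L) relative to their sum, with roughly 0 ± 5% taken as acceptable. The sketch below implements that formula; the ion values are hypothetical and not from the Tono data.

        def charge_balance_error(cations_meq, anions_meq):
            """Percent charge-balance error of a water analysis.

            cations_meq / anions_meq: iterables of concentrations in meq/L
            (anion charges entered as positive magnitudes).
            A common acceptance criterion, as in the report, is roughly 0 +/- 5 %.
            """
            sum_cat = sum(cations_meq)
            sum_an = sum(anions_meq)
            return 100.0 * (sum_cat - sum_an) / (sum_cat + sum_an)

        # Hypothetical analysis: Na+, Ca2+, Mg2+, K+ vs. HCO3-, Cl-, SO4^2-
        print(round(charge_balance_error([4.2, 1.1, 0.5, 0.1], [4.6, 0.8, 0.4]), 2))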

  14. Experimental Research Regarding New Models of Organizational Communication in the Romanian Tourism

    Directory of Open Access Journals (Sweden)

    Cristina STATE

    2015-12-01

    Full Text Available Of interest to the most varied sciences (cybernetics, economics, ethnology, philosophy, history, psycho-sociology, etc.), the complex communication process has incited and triggered a lot of opinions, many of them not complementary at all and even raised to the level of passions generating contradictions. The result was the conceptualization of the content and of the communication functions in different forms, called models by their creators. In time, with their evolution, the communication models have included, besides some basic elements (sender, message, means of communication, receiver and effect), also a range of detail elements essential to streamline the process itself: the noise source, codec and feedback, the interaction of the fields of experience specific to the transmitter and receiver, the organizational context of communication and communication skills, including how to produce and interpret these. Finally, any model's functions are either heuristic (to explain), organizational (to order) or predictive (making assumptions). Models are worth only their degree of probability: they remain valid only as long as they are not invalidated by practice, and they are one way of describing reality, not reality itself. This is the context in which our work, the first of its kind in Romania, proposes, with a view to improving organizational management, two new models of communication at both the micro- and macroeconomic levels, models through which, using crowdsourcing, the units in the tourism, hospitality and leisure industry (THLI) will be able to communicate more effectively, based not on their own insights and/or perceptions but, firstly, on the views of management and experts in the field and especially on customer feedback.

  15. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Science.gov (United States)

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes: deregulation of the commodity markets, and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger.

  16. Working with invalid boundary conditions: lessons from the field for communicating about climate change with public audiences

    Science.gov (United States)

    Gunther, A.

    2015-12-01

    There is an ongoing need to communicate with public audiences about climate science, current and projected impacts, the importance of reducing greenhouse gas emissions, and the requirement to prepare for changes that are likely unavoidable. It is essential that scientists are engaged and active in this effort. Scientists can be more effective communicators about climate change to non-scientific audiences if we recognize that some of the normal "boundary conditions" under which we operate do not need to apply. From how we are trained to how we think about our audience, there are some specific skills and practices that allow us to be more effective communicators. The author will review concepts for making our communication more effective based upon his experience from over 60 presentations about climate change to public audiences. These include expressing how your knowledge makes you feel, anticipating (and accepting) questions unconstrained by physics, respecting beliefs and values while separating them from evidence, and using the history of climate science to provide a compelling narrative. Proper attention to presentation structure (particularly an opening statement), speaking techniques for audience engagement, and effective use of presentation software are also important.

  17. Health economics of interdisciplinary rehabilitation for chronic pain: does it support or invalidate the outcomes research of these programs?

    Science.gov (United States)

    Becker, Annette

    2012-04-01

    Interdisciplinary rehabilitation has been shown to be effective for treatment of patients suffering from chronic nonmalignant pain with respect to activity level, pain intensity, function, or days of sick leave. However, effects in clinical outcome do not necessarily imply a superiority of the intervention from an economic point of view. Despite an increasing number of cost-utility and cost-effectiveness studies, systematic reviews outline the methodological heterogeneity of studies, which makes it impossible to perform meta-analyses and to draw conclusions from the studies. Recent publications add interesting information to the current discussion; these studies cover the long-term development of sickness absence post-intervention and the cost effectiveness of workplace interventions, as well as a collaborative intervention in primary care. Much research has been done, and tendencies of effectiveness are visible, but there is still a long way to go to understand the economic implications of interdisciplinary rehabilitation from the perspectives of society, the health insurers, and the patients.

  18. Catalase-Aminotriazole Assay, an Invalid Method for Measurement of Hydrogen Peroxide Production by Wood Decay Fungi

    OpenAIRE

    Highley, Terry L.

    1981-01-01

    The catalase-aminotriazole assay for determination of hydrogen peroxide apparently cannot be used for measuring hydrogen peroxide production in crude preparations from wood decay fungi because of materials in the crude preparations that interfere with the test.

  19. MCQ testing in higher education: Yes, there are bad items and invalid scores—A case study identifying solutions

    OpenAIRE

    Brown, Gavin

    2017-01-01

    This is a lecture given at Umea University, Sweden in September 2017. It is based on the published study: Brown, G. T. L., & Abdulnabi, H. (2017). Evaluating the quality of higher education instructor-constructed multiple-choice tests: Impact on student grades. Frontiers in Education: Assessment, Testing, & Applied Measurement, 2(24). doi:10.3389/feduc.2017.00024

  20. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  1. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  2. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.

    Science.gov (United States)

    Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John

    2018-03-07

    DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
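
    For readers unfamiliar with the distance used to compare methylation probability distributions, the Jensen-Shannon distance is the square root of the Jensen-Shannon divergence, itself a symmetrized, bounded form of the Kullback-Leibler divergence. The sketch below computes it for two small invented distributions; it is the generic definition, not code from the paper.

        import numpy as np

        def jensen_shannon_distance(p, q, base=2):
            """Jensen-Shannon distance between two discrete distributions,
            i.e. the square root of the Jensen-Shannon divergence."""
            p = np.asarray(p, dtype=float); p = p / p.sum()
            q = np.asarray(q, dtype=float); q = q / q.sum()
            m = 0.5 * (p + q)

            def kl(a, b):
                mask = a > 0
                return np.sum(a[mask] * np.log(a[mask] / b[mask])) / np.log(base)

            jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
            return np.sqrt(jsd)

        # Two hypothetical methylation-level distributions over 4 states.
        print(jensen_shannon_distance([0.7, 0.2, 0.05, 0.05], [0.4, 0.3, 0.2, 0.1]))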

  3. Inter-rater reliability of healthcare professional skills' portfolio assessments: The Andalusian Agency for Healthcare Quality model

    Directory of Open Access Journals (Sweden)

    Antonio Almuedo-Paz

    2014-07-01

    Full Text Available This study aims to determine the reliability of the assessment criteria used for a portfolio at the Andalusian Agency for Healthcare Quality (ACSA). Data: all competences certification processes, regardless of their discipline. Period: 2010-2011. Three types of tests are used: 368 certificates, 17,895 reports and 22,642 clinical practice reports (N = 3,010 candidates). The tests were evaluated in pairs by the ACSA team of raters using two categories: valid and invalid. Results: The percentage agreement in assessments of certificates was 89.9%, for reports it was 85.1%, and for clinical practice reports 81.7%. The inter-rater agreement coefficients (kappa) ranged from 0.468 to 0.711. Discussion: The results of this study show that the inter-rater reliability of the assessments varies from fair to good. Compared with other similar studies, the results put the reliability of the model in a comfortable position. Among the improvements incorporated, the progressive automation of evaluations must be highlighted.
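
    The two figures reported above, percentage agreement and Cohen's kappa, are straightforward to compute from paired ratings. The sketch below does so for a small invented sample with the study's two categories (valid/invalid); it is illustrative only and unrelated to the ACSA data.

        def percent_agreement_and_kappa(a, b):
            """Observed agreement and Cohen's kappa for two raters using
            'valid'/'invalid' labels."""
            assert len(a) == len(b)
            n = len(a)
            p_obs = sum(x == y for x, y in zip(a, b)) / n

            # Chance agreement from the raters' marginal proportions.
            p_chance = sum(
                (a.count(c) / n) * (b.count(c) / n) for c in {"valid", "invalid"}
            )
            kappa = (p_obs - p_chance) / (1.0 - p_chance)
            return p_obs, kappa

        # Small hypothetical sample of paired ratings.
        r1 = ["valid"] * 8 + ["invalid"] * 2
        r2 = ["valid"] * 7 + ["invalid"] * 3
        print(percent_agreement_and_kappa(r1, r2))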

  4. Predictions for heat transfer characteristics in a natural draft reactor cooling system using a second moment closure turbulence model

    International Nuclear Information System (INIS)

    Nishimura, M.; Maekawa, I.

    2004-01-01

    A numerical study is performed on the natural draft reactor cavity cooling system (RCCS). In the cooling system, the buoyancy-driven heated upward flow could be in the mixed convection regime, which is accompanied by heat transfer impairment. Also, the heating wall condition is asymmetric with regard to the channel cross section. This flow regime and these thermal boundary conditions may invalidate the use of design correlations. To precisely simulate the flow and thermal fields within the RCCS, a second moment closure turbulence model is applied. Two types of RCCS channel geometry are selected for comparison: an annular duct with fins on the outer surface of the inner circular wall, and a multi-rectangular duct. The prediction shows that the local heat transfer coefficient in the RCCS with the finned annular duct is less than 1/6 of that estimated with the Dittus-Boelter correlation. A large portion of the natural draft airflow does not contribute to cooling at all because the mainstream escapes through the narrow gaps between the fins. This result, and thus the finned annulus design, is unacceptable from the viewpoint of the structural integrity of the RCCS wall boundary. The performance of the multi-rectangular duct design is acceptable in that the RCCS maximum temperature is less than 400 °C even when the flow rate is halved from the design condition. (author)
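
    The Dittus-Boelter correlation used above as the reference estimate is the classical forced-convection expression Nu = 0.023 Re^0.8 Pr^n (n = 0.4 for heating). The sketch below evaluates it for an invented air-duct case; the point of the paper is precisely that such a correlation over-predicts RCCS heat transfer in the mixed convection regime.

        def dittus_boelter_h(re, pr, k_fluid, d_hydraulic, heating=True):
            """Heat transfer coefficient from the Dittus-Boelter correlation,
            Nu = 0.023 Re^0.8 Pr^n (n = 0.4 for heating, 0.3 for cooling).
            Valid only for fully developed turbulent forced convection."""
            n = 0.4 if heating else 0.3
            nu = 0.023 * re**0.8 * pr**n
            return nu * k_fluid / d_hydraulic

        # Hypothetical air flow: Re = 2.0e4, Pr = 0.7, k = 0.03 W/m-K, D_h = 0.1 m
        print(round(dittus_boelter_h(2.0e4, 0.7, 0.03, 0.1), 1))   # W/m^2-K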

  5. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    As is well known, sudden environmental shocks and toxicants can affect the population dynamics of fish species, so a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy requires development. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe sudden climate change. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. By using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expectation of sustainable yields are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.

  6. Empirical modeling of whistler-mode chorus and hiss in the inner magnetosphere using measurements of the Van Allen Probes

    Science.gov (United States)

    Santolik, Ondrej; Hospodarsky, George B.; Kurth, William S.; Kletzing, Craig A.

    2017-04-01

    Recent studies of the dynamics of energetic particles in the Earth's radiation belts show that whistler-mode chorus and hiss play an important role. This especially concerns the slot region and the outer Van Allen radiation belt, where empirical wave models are used as a component of existing approaches. We analyze these whistler-mode waves using a database of survey measurements of the Waves instruments of the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) onboard the Van Allen Probes. We use multicomponent data to estimate wave polarization and propagation parameters and to assess the variability of wave amplitudes as a function of position and geomagnetic activity. Four years of Van Allen Probes EMFISIS Waves survey data give good orbital coverage in L, latitude, and MLT. Average amplitudes increase with geomagnetic activity, but the observed amplitude variations are still much larger than this effect. Statistics of planarity and wave vector directions are strongly linked to wave amplitudes. The planarity of the magnetic field polarization is high for strong chorus and its wave normal directions are well defined. We obtain low planarities of the magnetic field polarization for a substantial fraction of plasmaspheric hiss. This invalidates the assumption of a single plane wave for these whistler-mode emissions.

  7. Consecutive Short-Scan CT for Geological Structure Analog Models with Large Size on In-Situ Stage.

    Directory of Open Access Journals (Sweden)

    Min Yang

    Full Text Available For the analysis of interior geometry and property changes of a large-sized analog model during a loading or medium (water or oil) injection process in a non-destructive way, a consecutive X-ray computed tomography (XCT) short-scan method is developed to realize in-situ tomography imaging. With this method, the X-ray tube and detector rotate 270° around the center of the guide rail synchronously, switching between positive and negative directions alternately during translation, until all the needed cross-sectional slices are obtained. Compared with traditional industrial XCTs, this method solves the winding problems of high-voltage cables and oil cooling service pipes during rotation, and also makes the installation of the high-voltage generator and cooling system more convenient. Furthermore, hardware costs are significantly decreased. This kind of scanner has higher spatial resolution and penetrating ability than medical XCTs. To obtain an effective sinogram which matches the rotation angles accurately, a structural-similarity-based method is applied to eliminate invalid projection data which do not contribute to the image reconstruction. Finally, on the basis of the geometrical symmetry property of fan-beam CT scanning, a whole sinogram filling a full 360° range is produced and a standard filtered back-projection (FBP) algorithm is performed to reconstruct artifact-free images.

  8. Magazines as wilderness information sources: assessing users' general wilderness knowledge and specific leave no trace knowledge

    Science.gov (United States)

    John J. Confer; Andrew J. Mowen; Alan K. Graefe; James D. Absher

    2000-01-01

    The Leave No Trace (LNT) educational program has the potential to provide wilderness users with useful minimum impact information. For LNT to be effective, managers need to understand who is most/least aware of minimum impact practices and how to expose users to LNT messages. This study examined LNT knowledge among various user groups at an Eastern wilderness area and...

  9. ORF Alignment: NC_000963 [GENIUS II[Archive

    Lifescience Database Archive (English)

    Full Text Available NC_000963 gi|15604233 >1uf5A 4 265 215 450 3e-21 ... ref|NP_220749.1| APOLIPOPROTEIN N-ACYLTRANSFERASE (lnt... ... N-ACYLTRANSFERASE (lnt) [Rickettsia prowazekii] ... pir||G71693 apolipoprotein n-acyltransferase (lnt...) RP366 ... - Rickettsia prowazekii sp|Q9ZDG3|LNT_RICPR ...

  10. Sub-discretized surface model with application to contact mechanics in multi-body simulation

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, S; Williams, J

    2008-02-28

    The mechanics of contact between rough and imperfectly spherical adhesive powder grains are often complicated by a variety of factors, including several which vary over sub-grain length scales. These include several traction factors that vary spatially over the surface of the individual grains, including high energy electron and acceptor sites (electrostatic), hydrophobic and hydrophilic sites (electrostatic and capillary), surface energy (general adhesion), geometry (van der Waals and mechanical), and elasto-plastic deformation (mechanical). For mechanical deformation and reaction, coupled motions, such as twisting with bending and sliding, as well as surface roughness add an asymmetry to the contact force which invalidates assumptions for popular models of contact, such as the Hertzian and its derivatives, for the non-adhesive case, and the JKR and DMT models for adhesive contacts. Though several contact laws have been offered to ameliorate these drawbacks, they are often constrained to particular loading paths (most often normal loading) and are relatively complicated for computational implementation. This paper offers a simple and general computational method for augmenting contact law predictions in multi-body simulations through characterization of the contact surfaces using a hierarchically-defined surface sub-discretization. For the case of adhesive contact between powder grains in low stress regimes, this technique can allow a variety of existing contact laws to be resolved across scales, allowing for moments and torques about the contact area as well as normal and tangential tractions to be resolved. This is especially useful for multi-body simulation applications where the modeler desires statistical distributions and calibration for parameters in contact laws commonly used for resolving near-surface contact mechanics. The approach is verified against analytical results for the case of rough, elastic spheres.
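
    As a point of reference for the contact laws named above, the classical Hertz model gives a smooth, history-free normal force F = (4/3) E* sqrt(R) delta^(3/2) for a sphere pressed into a half-space, which is exactly the symmetric baseline that sub-grain surface variation breaks. The short sketch below evaluates that baseline for invented grain properties; it is not the paper's sub-discretized method.

        def hertz_normal_force(delta, radius, e_star):
            """Classical Hertz contact force between a sphere and a half-space:
            F = (4/3) * E* * sqrt(R) * delta^(3/2),
            with E* the effective contact modulus and delta the normal overlap."""
            if delta <= 0.0:
                return 0.0
            return (4.0 / 3.0) * e_star * radius**0.5 * delta**1.5

        # Hypothetical glass-like grain: R = 50 um, E* = 35 GPa, overlap = 10 nm.
        print(hertz_normal_force(delta=10e-9, radius=50e-6, e_star=35e9))   # force in N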

  11. Attention and executive functions in a rat model of chronic epilepsy.

    Science.gov (United States)

    Faure, Jean-Baptiste; Marques-Carneiro, José E; Akimana, Gladys; Cosquer, Brigitte; Ferrandon, Arielle; Herbeaux, Karine; Koning, Estelle; Barbelivien, Alexandra; Nehlig, Astrid; Cassel, Jean-Christophe

    2014-05-01

    Temporal lobe epilepsy is a relatively frequent, invalidating, and often refractory neurologic disorder. It is associated with cognitive impairments that affect memory and executive functions. In the rat lithium-pilocarpine temporal lobe epilepsy model, memory impairment and anxiety disorder are classically reported. Here we evaluated sustained visual attention in this model of epilepsy, a function not frequently explored. Thirty-five Sprague-Dawley rats were subjected to lithium-pilocarpine status epilepticus. Twenty of them received a carisbamate treatment for 7 days, starting 1 h after status epilepticus onset. Twelve controls received lithium and saline. Five months later, attention was assessed in the five-choice serial reaction time task, a task that tests visual attention and inhibitory control (impulsivity/compulsivity). Neuronal counting was performed in brain regions of interest to the functions studied (hippocampus, prefrontal cortex, nucleus basalis magnocellularis, and pedunculopontine tegmental nucleus). Lithium-pilocarpine rats developed motor seizures. When they were able to learn the task, they exhibited attention impairment and a tendency toward impulsivity and compulsivity. These disturbances occurred in the absence of neuronal loss in structures classically related to attentional performance, although they seemed to better correlate with neuronal loss in hippocampus. Globally, rats that received carisbamate and developed motor seizures were as impaired as untreated rats, whereas those that did not develop overt motor seizures performed like controls, despite evidence for hippocampal damage. This study shows that attention deficits reported by patients with temporal lobe epilepsy can be observed in the lithium-pilocarpine model. Carisbamate prevents the occurrence of motor seizures, attention impairment, impulsivity, and compulsivity in a subpopulation of neuroprotected rats. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  12. Consolidating the medical model of disability: on poliomyelitis and the constitution of orthopedic surgery and orthopaedics as a speciality in Spain (1930-1950

    Directory of Open Access Journals (Sweden)

    Martínez-Pérez, José

    2009-06-01

    Full Text Available At the beginning of the 1930s, various factors made it necessary to transform one of the institutions which was renowned for its work regarding the social reinsertion of the disabled, that is, the Instituto de Reeducación Profesional de Inválidos del Trabajo (Institute for Occupational Retraining of Invalids of Work). The economic crisis of 1929 and the legislative reform aimed at regulating occupational accidents highlighted the failings of this institution to fulfil its objectives. After a time of uncertainty, the centre was renamed the Instituto Nacional de Reeducación de Inválidos (National Institute for Retraining of Invalids). This was done to take advantage of its work in championing the recovery of all people with disabilities.

    This work aims to study the role played in this process by the poliomyelitis epidemics in Spain at this time. It aims to highlight how this disease justified the need to continue the work of a group of professionals and how it helped to reorient the previous programme to re-educate the «invalids». Thus we shall see the way in which, from 1930 to 1950, a specific medical technology helped to consolidate an «individual model» of disability and how a certain cultural stereotype of those affected developed as a result. Lastly, this work discusses the way in which all this took place in the midst of a process of professional development of orthopaedic surgeons.

    At the beginning of the 1930s, a series of factors made it necessary to transform one of the institutions that had most distinguished itself in Spain in the work of achieving the social reintegration of people with disabilities: the Instituto de Reeducación de Inválidos del Trabajo. The economic crisis of 1929 and the legislative reforms intended to regulate occupational accidents highlighted, among other factors, the limitations of that institution in fulfilling its objectives. After a period of some uncertainty, the

  13. Cognitive modeling

    OpenAIRE

    Zandbelt, Bram

    2017-01-01

    Introductory presentation on cognitive modeling for the course ‘Cognitive control’ of the MSc program Cognitive Neuroscience at Radboud University. It addresses basic questions, such as 'What is a model?', 'Why use models?', and 'How to use models?'

  14. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...

  15. A fundamental critique of the fascial distortion model and its application in clinical practice.

    Science.gov (United States)

    Thalhamer, Christoph

    2018-01-01

    The therapeutic techniques used in the fascial distortion model (FDM) have become increasingly popular among manual therapists and physical therapists. The reasons for this trend remain to be empirically explored. Therefore this paper pursues two goals: first, to investigate the historical and theoretical background of FDM, and second, to discuss seven problems associated with the theory and practice of FDM. The objectives of this paper are based on a review of the literature. The research mainly focuses on clinical proofs of concept for FDM treatment techniques in musculoskeletal medicine. FDM as a treatment method was founded and developed in the early 1990s by Stephen Typaldos. It is based on the concept that all musculoskeletal complaints can be traced back to three-dimensional deformations or distortions of the fasciae. The concept is that these distortions can be undone through direct application of certain manual techniques. A literature review found no clinical trials or basic research studies to support the empirical foundations of the FDM contentions. Based on the absence of proof of concept for FDM treatment techniques along with certain theoretical considerations, seven problems emerge, the most striking of which include (1) diagnostic criteria for FDM, (2) the biological implausibility of the model, (3) the reduction of all such disorders to a single common denominator: the fasciae, (4) the role of FDM research, and (5) potentially harmful consequences related to FDM treatment. The above problems can only be invalidated through high-quality clinical trials. Allegations that clinical experience is sufficient to validate therapeutic results have been abundantly refuted in the literature. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    Directory of Open Access Journals (Sweden)

    J. Liebert

    2012-09-01

    Full Text Available Despite considerable progress in recent years, the output of both global and regional circulation models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem, bias correction (BC, i.e. the correction of model output towards observations in a post-processing step) has now become a standard procedure in climate change impact studies. In this paper we argue that BC is currently often used in an invalid way: it is added to the GCM/RCM model chain without sufficient proof that the consistency of the latter (i.e. the agreement between model dynamics/model output and our judgement) as well as the generality of its applicability increases. BC methods often impair the advantages of circulation models by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Currently used BC methods largely neglect feedback mechanisms, and it is unclear whether they are time-invariant under climate change conditions. Applying BC increases the agreement of climate model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this hides rather than reduces uncertainty, which may lead to avoidable prejudgement by end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of circulation models in the short term and to reduce the bias in the long term. The most promising strategy for improved future global and regional circulation model simulations is an increase in model resolution to the convection-permitting scale in combination with
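
    One of the standard bias-correction methods this opinion piece has in view is empirical quantile mapping, in which each model value is mapped through its quantile in the historical model distribution onto the observed distribution. The sketch below shows that method in its simplest form with synthetic data; real implementations deal with tails, seasonality and precipitation wet-day frequencies, and none of this code is from the paper.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Empirical quantile mapping (simplest form): map each future model
            value through its historical quantile onto the observed distribution."""
            quantiles = np.linspace(0.0, 1.0, 101)
            model_q = np.quantile(model_hist, quantiles)
            obs_q = np.quantile(obs_hist, quantiles)
            return np.interp(model_future, model_q, obs_q)

        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 2.0, size=1000)      # hypothetical observed climate variable
        mod = obs * 1.3 + 0.5                     # a biased model version of the same period
        future = mod + 1.0                        # model projection with an added shift
        print(quantile_map(mod, obs, future)[:3])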

  17. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...... years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies are also given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very...

  18. Analyzing subsurface drain network performance in an agricultural monitoring site with a three-dimensional hydrological model

    Science.gov (United States)

    Nousiainen, Riikka; Warsta, Lassi; Turunen, Mika; Huitu, Hanna; Koivusalo, Harri; Pesonen, Liisa

    2015-10-01

    Effectiveness of a subsurface drainage system decreases with time, leading to a need to restore the drainage efficiency by installing new drain pipes in problem areas. The drainage performance of the resulting system varies spatially and complicates runoff and nutrient load generation within the fields. We presented a method to estimate the drainage performance of a heterogeneous subsurface drainage system by simulating the area with the three-dimensional hydrological FLUSH model. A GIS analysis was used to delineate the surface runoff contributing area in the field. We applied the method to reproduce the water balance and to investigate the effectiveness of a subsurface drainage network of a clayey field located in southern Finland. The subsurface drainage system was originally installed in the area in 1971 and the drainage efficiency was improved in 1995 and 2005 by installing new drains. FLUSH was calibrated against total runoff and drain discharge data from 2010 to 2011 and validated against total runoff in 2012. The model supported quantification of runoff fractions via the three installed drainage networks. Model realisations were produced to investigate the extent of the runoff contributing areas and the effect of the drainage parameters on subsurface drain discharge. The analysis showed that better model performance was achieved when the efficiency of the oldest drainage network (installed in 1971) was decreased. Our analysis method can reveal the drainage system performance but not the reason for the deterioration of the drainage performance. Tillage layer runoff from the field was originally computed by subtracting drain discharge from the total runoff. The drains installed in 1995 bypass the measurement system, which renders the tillage layer runoff calculation procedure invalid after 1995. Therefore, this article suggests use of a local correction coefficient based on the simulations for further research utilizing data from the study area.

  19. Why Do Lie-Catchers Fail? A Lens Model Meta-Analysis of Human Lie Judgments

    Science.gov (United States)

    Hartwig, Maria; Bond, Charles F., Jr.

    2011-01-01

    Decades of research have shown that people are poor at detecting lies. Two explanations for this finding have been proposed. First, it has been suggested that lie detection is inaccurate because people rely on invalid cues when judging deception. Second, it has been suggested that a lack of valid cues to deception limits accuracy. A series of 4…

  20. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate into the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
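
    The plug-in Monte Carlo GOF test criticized in the abstract has a simple structure: estimate the parameter, simulate replicate datasets from the fitted model, and compare the observed statistic with the simulated ones. The sketch below shows that naive version for a homogeneous Poisson model of quadrat counts; the paper's contribution, the nested inner layer of simulation that corrects the empirical level, is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)

        def plug_in_mc_gof(counts, n_sim=999):
            """Naive plug-in Monte Carlo GOF test for a homogeneous Poisson
            model of quadrat counts, using a chi-square-type statistic.
            The parameter is re-estimated for every simulated dataset."""
            def stat(x):
                lam = x.mean()
                return np.sum((x - lam) ** 2 / lam)

            t_obs = stat(counts)
            lam_hat = counts.mean()
            t_sim = np.array([stat(rng.poisson(lam_hat, size=counts.size))
                              for _ in range(n_sim)])
            return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)   # Monte Carlo p-value

        print(plug_in_mc_gof(rng.poisson(3.0, size=50)))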

  1. Interactions of unconjugated bilirubin with vesicles, cyclodextrins and micelles: New modeling and the role of high pKa values

    Directory of Open Access Journals (Sweden)

    Ostrow J Donald

    2010-03-01

    Full Text Available Abstract Background Unconjugated bilirubin (UCB) is an unstable substance with very low aqueous solubility. Its aqueous pKa values affect many of its interactions, particularly their pH-dependence. A companion paper shows that only our prior solvent partition studies, leading to pKa values of 8.12 and 8.44, met all essential requirements for valid pKa determinations. Other published values, generally lower, some below 5.0, were shown to be invalid. The present work was designed to derive suitable models for interpreting published data on the pH-dependent binding of UCB with four agents, mentioned below, chosen because they are not, themselves, sensitive to changes in the pH range 4-10, and the data, mainly spectrometric, were of reasonable quality. Results These analyses indicated that the high pKa values, dianion dimerization constant and solubilities of UCB at various pH values, derived from our partition studies, along with literature-derived pH- and time-dependent supersaturation effects, were essential for constructing useful models that showed good qualitative, and sometimes quantitative, fits with the data. In contrast, published pKa values below 5.0 were highly incompatible with the data for all systems considered. The primary species of bound UCB in our models were: undissociated diacid for phosphatidylcholine, dianion for dodecyl maltoside micelles and cyclodextrins, and both monoanions and dianion for sodium taurocholate. The resulting binding versus pH profiles differed strikingly from each other. Conclusions The insights derived from these analyses should be helpful to explore and interpret UCB binding to more complex, pH-sensitive, physiological moieties, such as proteins or membranes, in order to understand its functions.

  2. Fokker-Planck modeling of current penetration during electron cyclotron current drive

    International Nuclear Information System (INIS)

    Merkulov, A.; Westerhof, E.; Schueller, F. C.

    2007-01-01

    The current penetration during electron cyclotron current drive (ECCD) on the resistive time scale is studied with a Fokker-Planck simulation, which includes a model for the magnetic diffusion that determines the parallel electric field evolution. The existence of the synergy between the inductive electric field and EC driven current complicates the process of the current penetration and invalidates the standard method of calculation in which Ohm's law is simply approximated by j - j_cd = σE. Here it is proposed to obtain at every time step a self-consistent approximation to the plasma resistivity from the Fokker-Planck code, which is then used in a concurrent calculation of the magnetic diffusion equation in order to obtain the inductive electric field at the next time step. A series of Fokker-Planck calculations including a self-consistent evolution of the inductive electric field has been performed. Both the ECCD power and the electron density have been varied, thus varying the well known nonlinearity parameter for ECCD, P_rf [MW m^-3]/n_e^2 [10^19 m^-3] [R. W. Harvey et al., Phys. Rev. Lett. 62, 426 (1989)]. This parameter turns out also to be a good predictor of the synergetic effects. The results are then compared with the standard method of calculations of the current penetration using a transport code. At low values of the Harvey parameter, the standard method is in quantitative agreement with Fokker-Planck calculations. However, at high values of the Harvey parameter, synergy between ECCD and E_∥ is found. In the case of cocurrent drive, this synergy leads to the generation of large amounts of nonthermal electrons and a concomitant increase of the electrical conductivity and current penetration time. In the case of countercurrent drive, the ECCD efficiency is suppressed by the synergy with E_∥ while only a small amount of nonthermal electrons is produced

  3. Health Benefits of Exposure to Low-dose Radiation.

    Science.gov (United States)

    Rithidech, Kanokporn Noy

    2016-03-01

    Although there is no doubt that exposure to high doses of radiation (delivered at a high dose-rate) induces harmful effects, the health risks and benefits of exposure to low levels (delivered at a low dose-rate) of toxic agents is still a challenging public health issue. There has been a considerable amount of published data against the linear no-threshold (LNT) model for assessing risk of cancers induced by radiation. The LNT model for risk assessment creates "radiophobia," which is a serious public health issue. It is now time to move forward to a paradigm shift in health risk assessment of low-dose exposure by taking the differences between responses to low and high doses into consideration. Moreover, future research directed toward the identification of mechanisms associated with responses to low-dose radiation is critically needed to fully understand their beneficial effects.

  4. STRUCTURAL MODELLING

    Directory of Open Access Journals (Sweden)

    Tea Ya. Danelyan

    2014-01-01

    The article states the general principles of structural modeling from the standpoint of systems theory and relates structural modeling to other types of modeling, aligning them with the main directions of modeling. Mathematical methods of structural modeling, in particular the method of expert evaluations, are considered.

  5. (HEV) Model

    African Journals Online (AJOL)

    Moatez Billah HARIDA

    The use of the simulator “Hybrid Electrical Vehicle Model Balances Fidelity and Speed (HEVMBFS)” and the global control strategy make it possible to achieve encouraging results. Key words: Series parallel hybrid vehicle - nonlinear model - linear model - Diesel engine - Engine modelling - HEV simulator - Predictive ...

  6. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature dependent behaviour of physical properties are introduced, as well as equation of state models (EOS) such as the SRK EOS. Modelling of liquid phase activity coefficients are also ...

  7. A dynamic model for estimating adult female mortality from ovarian dissection data for the tsetse fly Glossina pallidipes Austen sampled in Zimbabwe.

    Directory of Open Access Journals (Sweden)

    Sarah F Ackley

    2017-08-01

    Human and animal trypanosomiasis, spread by tsetse flies (Glossina spp.), is a major public health concern in much of sub-Saharan Africa. The basic reproduction number of vector-borne diseases, such as trypanosomiasis, is a function of vector mortality rate. Robust methods for estimating tsetse mortality are thus of interest for understanding population and disease dynamics and for optimal control. Existing methods for estimating mortality in adult tsetse, from ovarian dissection data, often use invalid assumptions of the existence of a stable age distribution, and age-invariant mortality and capture probability. We develop a dynamic model to estimate tsetse mortality from ovarian dissection data in populations where the age distribution is not necessarily stable. The models correspond to several hypotheses about how temperature affects mortality: no temperature dependence (model 1), identical temperature dependence for mature adults and immature stages, i.e., pupae and newly emerged adults (model 2), and differential temperature dependence for mature adults and immature stages (model 3). We fit our models to ovarian dissection data for G. pallidipes collected at Rekomitjie Research Station in the Zambezi Valley in Zimbabwe. We compare model fits to determine the most probable model, given the data, by calculating the Akaike Information Criterion (AIC) for each model. The model that allows for a differential dependence of temperature on mortality for immature stages and mature adults (model 3) performs significantly better than models 1 and 2. All models produce mortality estimates, for mature adults, of approximately 3% per day for mean daily temperatures below 25°C, consistent with those of mark-recapture studies performed in other settings. For temperatures greater than 25°C, mortality among immature classes of tsetse increases substantially, whereas mortality remains roughly constant for mature adults. As a sensitivity analysis, model 3 was
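
    The model comparison described above uses the Akaike Information Criterion, AIC = 2k - 2 ln L. The sketch below shows that calculation and the resulting ranking; the log-likelihoods and parameter counts are invented placeholders, not values from the paper.

    # Sketch: ranking candidate mortality models by AIC (lower is better).
    def aic(log_likelihood: float, n_params: int) -> float:
        return 2 * n_params - 2 * log_likelihood

    candidates = {                                  # (ln L, k) placeholders only
        "model 1 (no temperature effect)":       (-1520.4, 3),
        "model 2 (shared temperature effect)":   (-1494.7, 4),
        "model 3 (stage-specific temperature)":  (-1461.2, 5),
    }

    scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
    best = min(scores, key=scores.get)
    for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{name}: AIC = {score:.1f}  (delta = {score - scores[best]:.1f})")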

  8. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  9. Hidden Markov event sequence models: toward unsupervised functional MRI brain mapping.

    Science.gov (United States)

    Faisan, Sylvain; Thoraval, Laurent; Armspach, Jean-Paul; Foucher, Jack R; Metz-Lutz, Marie-Noëlle; Heitz, Fabrice

    2005-01-01

    Most methods used in functional MRI (fMRI) brain mapping require restrictive assumptions about the shape and timing of the fMRI signal in activated voxels. Consequently, fMRI data may be partially and misleadingly characterized, leading to suboptimal or invalid inference. To limit these assumptions and to capture the broad range of possible activation patterns, a novel statistical fMRI brain mapping method is proposed. It relies on hidden semi-Markov event sequence models (HSMESMs), a special class of hidden Markov models (HMMs) dedicated to the modeling and analysis of event-based random processes. Activation detection is formulated in terms of time coupling between (1) the observed sequence of hemodynamic response onset (HRO) events detected in the voxel's fMRI signal and (2) the "hidden" sequence of task-induced neural activation onset (NAO) events underlying the HROs. Both event sequences are modeled within a single HSMESM. The resulting brain activation model is trained to automatically detect neural activity embedded in the input fMRI data set under analysis. The data sets considered in this article are threefold: synthetic epoch-related, real epoch-related (auditory lexical processing task), and real event-related (oddball detection task) fMRI data sets. Synthetic data: Activation detection results demonstrate the superiority of the HSMESM mapping method with respect to a standard implementation of the statistical parametric mapping (SPM) approach. They are also very close, sometimes equivalent, to those obtained with an "ideal" implementation of SPM in which the activation patterns synthesized are reused for analysis. The HSMESM method appears clearly insensitive to timing variations of the hemodynamic response and exhibits low sensitivity to fluctuations of its shape (unsustained activation during task). Real epoch-related data: HSMESM activation detection results compete with those obtained with SPM, without requiring any prior definition of the expected

  10. A More Flexible Lipoprotein Sorting Pathway

    Science.gov (United States)

    Chahales, Peter

    2015-01-01

    Lipoprotein biogenesis in Gram-negative bacteria occurs by a conserved pathway, each step of which is considered essential. In contrast to this model, LoVullo and colleagues demonstrate that the N-acyl transferase Lnt is not required in Francisella tularensis or Neisseria gonorrhoeae. This suggests the existence of a more flexible lipoprotein pathway, likely due to a modified Lol transporter complex, and raises the possibility that pathogens may regulate lipoprotein processing to modulate interactions with the host. PMID:25755190

  11. The potential for bias in Cohen's ecological analysis of lung cancer and residential radon

    International Nuclear Information System (INIS)

    Lubin, Jay H.

    2002-01-01

    Cohen's ecological analysis of US lung cancer mortality rates and mean county radon concentration shows decreasing mortality rates with increasing radon concentration (Cohen 1995 Health Phys. 68 157-74). The results prompted his rejection of the linear-no-threshold (LNT) model for radon and lung cancer. Although several authors have demonstrated that risk patterns in ecological analyses provide no inferential value for assessment of risk to individuals, Cohen advances two arguments in a recent response to Darby and Doll (2000 J. Radiol. Prot. 20 221-2) who suggest Cohen's results are and will always be burdened by the ecological fallacy. Cohen asserts that the ecological fallacy does not apply when testing the LNT model, for which average exposure determines average risk, and that the influence of confounding factors is obviated by the use of large numbers of stratification variables. These assertions are erroneous. Average dose determines average risk only for models which are linear in all covariates, in which case ecological analyses are valid. However, lung cancer risk and radon exposure, while linear in the relative risk, are not linearly related to the scale of absolute risk, and thus Cohen's rejection of the LNT model is based on a false premise of linearity. In addition, it is demonstrated that the deleterious association for radon and lung cancer observed in residential and miner studies is consistent with negative trends from ecological studies, of the type described by Cohen. (author)
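
    The argument above - that a model which is linear in the relative risk need not be linear on the scale of absolute risk, so county-level averages can reverse the individual-level trend - can be illustrated with synthetic numbers. In the sketch below the true individual radon effect is positive, but county-mean smoking is (by construction) negatively correlated with county-mean radon, and the ecological regression slope comes out negative. All values are invented for illustration.

    # Synthetic illustration of the ecological fallacy discussed above.
    import numpy as np

    rng = np.random.default_rng(0)
    n_counties = 500

    radon = rng.gamma(shape=2.0, scale=50.0, size=n_counties)      # county-mean radon, Bq/m^3
    smoking = np.clip(0.45 - 0.002 * radon + rng.normal(0.0, 0.03, n_counties), 0.05, 0.8)

    beta = 0.001                      # true (positive) radon effect on the relative risk
    baseline = 20 + 400 * smoking     # baseline lung-cancer rate per 100,000, driven by smoking
    rate = baseline * (1 + beta * radon)

    slope_individual = beta * np.mean(baseline)        # expected individual-level slope (> 0)
    slope_ecological = np.polyfit(radon, rate, 1)[0]   # fitted county-level slope
    print(f"individual-level expected slope: {slope_individual:+.3f} per Bq/m^3")
    print(f"ecological (county-level) slope: {slope_ecological:+.3f} per Bq/m^3")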

  12. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  13. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface.

  14. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted to the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction is additionally involving a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, for being rarely explicitly presented in peer-reviewed literature. We believe that devoting

  15. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. The validity and limitations of three distinct types of modelling codes, which will be contrasted, include discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs

  16. Mechanics of lipid bilayer junctions affecting the size of a connecting lipid nanotube

    Directory of Open Access Journals (Sweden)

    Voinova Marina

    2011-01-01

    In this study we report a physical analysis of the membrane mechanics affecting the size of the highly curved region of a lipid nanotube (LNT) that is either connected between a lipid bilayer vesicle and the tip of a glass microinjection pipette (tube-only) or between a lipid bilayer vesicle and a vesicle that is attached to the tip of a glass microinjection pipette (two-vesicle). For the tube-only configuration (TOC), a micropipette is used to pull a LNT into the interior of a surface-immobilized vesicle, where the length of the tube L is determined by the distance of the micropipette to the vesicle wall. For the two-vesicle configuration (TVC), a small vesicle is inflated at the tip of the micropipette and the length of the tube L is in this case determined by the distance between the two interconnected vesicles. An electrochemical method monitoring diffusion of electroactive molecules through the nanotube has been used to determine the radius of the nanotube R as a function of nanotube length L for the two configurations. The data show that the LNT connected in the TVC constricts to a smaller radius in comparison to the tube-only mode and that the tube radius shrinks at shorter tube lengths. To explain these electrochemical data, we developed a theoretical model taking into account the free energy of the membrane regions of the vesicles, the LNT and the high curvature junctions. In particular, this model allows us to estimate the surface tension coefficients from R(L) measurements.
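
    A common back-of-the-envelope relation behind such analyses (a textbook result for membrane tethers, not the specific junction free-energy model developed in the paper) is that the equilibrium nanotube radius scales as R = sqrt(kappa/(2*sigma)), with kappa the bending rigidity and sigma the lateral membrane tension. The sketch below uses literature-scale, illustrative parameter values.

    # Sketch: equilibrium membrane-tether radius from bending rigidity and tension.
    import math

    KBT = 4.1e-21                 # thermal energy at room temperature, J

    def tether_radius(kappa_j: float, sigma_n_per_m: float) -> float:
        """Equilibrium tether radius in metres, R = sqrt(kappa / (2 sigma))."""
        return math.sqrt(kappa_j / (2.0 * sigma_n_per_m))

    kappa = 20 * KBT              # ~20 kBT bending rigidity (illustrative)
    for sigma in (1e-6, 1e-5, 1e-4):        # membrane tension, N/m
        r = tether_radius(kappa, sigma)
        print(f"sigma = {sigma:.0e} N/m  ->  R = {r * 1e9:.0f} nm")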

  17. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designers challenge is to combine strategy and

  18. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future

  19. Turbulence modelling

    International Nuclear Information System (INIS)

    Laurence, D.

    1997-01-01

    This paper is an introduction course in modelling turbulent thermohydraulics, aimed at computational fluid dynamics users. No specific knowledge other than the Navier-Stokes equations is required beforehand. Chapter I (which those who are not beginners can skip) provides basic ideas on turbulence physics and is taken up in a textbook prepared by the teaching team of the ENPC (Benque, Viollet). Chapter II describes turbulent viscosity type modelling and the two-equation k-ε model. It provides details of the channel flow case and the boundary conditions. Chapter III describes the 'standard' Reynolds stress transport (Rij-ε) model and introduces more recent models called 'feasible'. A second paper deals with heat transfer and the effects of gravity, and returns to the Reynolds stress transport model. (author)
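
    At the core of the two-equation k-epsilon model mentioned in Chapter II is the eddy-viscosity relation nu_t = C_mu k^2 / epsilon, with the standard constant C_mu = 0.09. The snippet below is only a minimal illustration of that closure; the lecture notes cover the full transport equations and boundary conditions.

    # Sketch: eddy viscosity from the standard k-epsilon closure.
    C_MU = 0.09

    def eddy_viscosity(k: float, eps: float) -> float:
        """Turbulent kinematic viscosity [m^2/s] from k [m^2/s^2] and eps [m^2/s^3]."""
        return C_MU * k * k / eps

    k, eps = 0.05, 0.8            # illustrative channel-flow-like values
    print(f"nu_t = {eddy_viscosity(k, eps):.3e} m^2/s")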

  20. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  1. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods, and identifies tools which can provide support for this modelling activity. The model adopted for global manufacturing is that of an extended enterprise sharing many of the characteristics of a virtual enterprise. This extended enterprise will have the following characteristics: the extended enterprise is focused on satisfying the current customer requirement, so that it has a limited life expectancy, but should be capable of being recreated to deal ... One or more units from beyond the network may complement the extended enterprise. The common reference model for this extended enterprise will utilise GERAM (Generalised Enterprise Reference Architecture and Methodology) to provide an architectural framework for the modelling carried out within ...

  2. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models are normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics into the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical ...

  3. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can be characterized by their occurrence times and the participating books and borrowers. When we characterize events as information objects we focus on concepts like information structures. When viewed as change agents events are phenomena that trigger change. For example, when a borrow event occurs books are moved ...

  4. Model : making

    OpenAIRE

    Bottle, Neil

    2013-01-01

    The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...

  5. Spherical models

    CERN Document Server

    Wenninger, Magnus J

    2012-01-01

    Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.

  6. Predictive Modeling in Plasma Reactor and Process Design

    Science.gov (United States)

    Hash, D. B.; Bose, D.; Govindan, T. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1997-01-01

    Research continues toward the improvement and increased understanding of high-density plasma tools. Such reactor systems are lauded for their independent control of ion flux and energy enabling high etch rates with low ion damage and for their improved ion velocity anisotropy resulting from thin collisionless sheaths and low neutral pressures. Still, with the transition to 300 mm processing, achieving etch uniformity and high etch rates concurrently may be a formidable task for such large diameter wafers for which computational modeling can play an important role in successful reactor and process design. The inductively coupled plasma (ICP) reactor is the focus of the present investigation. The present work attempts to understand the fundamental physical phenomena of such systems through computational modeling. Simulations will be presented using both computational fluid dynamics (CFD) techniques and the direct simulation Monte Carlo (DSMC) method for argon and chlorine discharges. ICP reactors generally operate at pressures on the order of 1 to 10 mTorr. At such low pressures, rarefaction can be significant to the degree that the constitutive relations used in typical CFD techniques become invalid and a particle simulation must be employed. This work will assess the extent to which CFD can be applied and evaluate the degree to which accuracy is lost in prediction of the phenomenon of interest; i.e., etch rate. If the CFD approach is found reasonably accurate and benchmarked against DSMC and experimental results, it has the potential to serve as a design tool owing to its short run time relative to DSMC. The continuum CFD simulation solves the governing equations for plasma flow using a finite difference technique with an implicit Gauss-Seidel Line Relaxation method for time marching toward a converged solution. The equation set consists of mass conservation for each species, separate energy equations for the electrons and heavy species, and momentum equations for the gas
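
    The abstract does not quote a rarefaction criterion explicitly, but the standard way to judge whether continuum (CFD) constitutive relations still hold at 1-10 mTorr is the Knudsen number, Kn = lambda/L, with the hard-sphere mean free path lambda = k_B T / (sqrt(2) pi d^2 p). The sketch below uses assumed values for the gas diameter, temperature and reactor length scale.

    # Sketch: Knudsen-number estimate of rarefaction in a low-pressure ICP reactor.
    import math

    KB = 1.380649e-23             # Boltzmann constant, J/K

    def mean_free_path(p_pa: float, t_k: float, d_m: float) -> float:
        return KB * t_k / (math.sqrt(2.0) * math.pi * d_m ** 2 * p_pa)

    def knudsen(p_mtorr: float, t_k: float = 400.0, d_m: float = 3.7e-10, length_m: float = 0.30) -> float:
        p_pa = p_mtorr * 0.13332  # 1 mTorr = 0.13332 Pa
        return mean_free_path(p_pa, t_k, d_m) / length_m

    for p in (1.0, 5.0, 10.0):
        kn = knudsen(p)
        regime = "continuum" if kn < 0.01 else "slip/transitional: continuum assumptions questionable"
        print(f"{p:4.1f} mTorr: Kn = {kn:.3f}  -> {regime}")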

  7. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical mode in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning is to learn the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear mode that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.
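
    The abstract describes the dimensionality-reduction stage only as a linear operation on learned word representations, without giving the mapping itself. As a stand-in illustration of the simplest such linear reduction (not the authors' Event Model), the sketch below averages word embedding vectors to obtain a document vector; the embeddings are random placeholders.

    # Illustrative only: averaging word vectors as a linear document representation.
    import numpy as np

    rng = np.random.default_rng(42)
    DIM = 8
    vocab = ["storm", "flood", "river", "election", "vote", "policy"]
    embeddings = {w: rng.normal(size=DIM) for w in vocab}     # stand-in for learned vectors

    def document_vector(tokens):
        vecs = [embeddings[t] for t in tokens if t in embeddings]
        return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

    doc_a = ["storm", "flood", "river"]
    doc_b = ["election", "vote", "policy"]
    va, vb = document_vector(doc_a), document_vector(doc_b)
    cosine = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
    print(f"cosine similarity between the two document vectors: {cosine:+.3f}")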

  8. Disease-dependent local IL-10 production ameliorates collagen induced arthritis in mice.

    Directory of Open Access Journals (Sweden)

    Louise Henningsson

    Rheumatoid arthritis (RA) is a chronic destructive autoimmune disease characterised by periods of flare and remission. Today's treatment is based on continuous immunosuppression irrespective of the patient's inflammatory status. When the disease is in remission the therapy is withdrawn, but withdrawal attempts often result in inflammatory flares, and the therapy is re-started when the inflammation is again prominent, which leads both to suffering and to an increased risk of tissue destruction. An attractive alternative treatment would provide a disease-regulated therapy that offers increased anti-inflammatory effect during flares and is inactive during periods of remission. To explore this concept we expressed the immunoregulatory cytokine interleukin (IL)-10 gene under the control of an inflammation dependent promoter in a mouse model of RA - collagen type II (CII) induced arthritis (CIA). Haematopoietic stem cells (HSCs) were transduced with lentiviral particles encoding the IL-10 gene (LNT-IL-10), or a green fluorescent protein (GFP) as control gene (LNT-GFP), driven by the inflammation-dependent IL-1/IL-6 promoter. Twelve weeks after transplantation of transduced HSCs into DBA/1 mice, CIA was induced. We found that LNT-IL-10 mice developed a reduced severity of arthritis compared to controls. The LNT-IL-10 mice exhibited both increased mRNA expression levels of IL-10 as well as an increased amount of IL-10 produced by B cells and non-B APCs locally in the lymph nodes compared to controls. These findings were accompanied by increased mRNA expression of the IL-10-induced suppressor of cytokine signalling 1 (SOCS1) in lymph nodes and a decrease in the serum protein levels of IL-6. We also found a decrease in both frequency and number of B cells and serum levels of anti-CII antibodies. Thus, inflammation-dependent IL-10 therapy suppresses experimental autoimmune arthritis and is a promising candidate in the development of novel treatments for RA.

  9. Didactical modelling

    DEFF Research Database (Denmark)

    Højgaard, Tomas; Hansen, Rune

    The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to construct this approach in mathematics education research.

  10. Virtual modeling

    NARCIS (Netherlands)

    Flores, J.; Kiss, S.; Cano, P.; Nijholt, Antinus; Zwiers, Jakob

    2003-01-01

    We concentrate our efforts on building virtual modelling environments where the content creator uses controls (widgets) as an interactive adjustment modality for the properties of the edited objects. Besides the advantage of being an on-line modelling approach (visualised just like any other on-line

  11. Animal models

    DEFF Research Database (Denmark)

    Gøtze, Jens Peter; Krentz, Andrew

    2014-01-01

    In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...

  12. Education models

    NARCIS (Netherlands)

    Poortman, Sybilla; Sloep, Peter

    2006-01-01

    Educational models describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in

  13. Modeling Sunspots

    Science.gov (United States)

    Oh, Phil Seok; Oh, Sung Jin

    2013-01-01

    Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…

  14. Battery Modeling

    NARCIS (Netherlands)

    Jongerden, M.R.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,

  15. Pros and cons of the revolution in radiation protection

    International Nuclear Information System (INIS)

    Latek, Stanislav

    2001-01-01

    In 1959, the International Commission on Radiological Protection (ICRP) chose the LNT (Linear No-Threshold) model as an assumption to form the basis for regulating radiation protection. During the 1999 UNSCEAR session, held in April in Vienna, the linear no-threshold (LNT) hypothesis was discussed. Among other LNT-related subjects, the Committee discussed the problem of collective dose and dose commitment. These concepts were introduced in the early 1960s as the offspring of the linear no-threshold assumption. At the time they reflected a deep concern about the induction of hereditary effects by fallout from nuclear tests. Almost four decades later, collective dose and dose commitment are still widely used, although by now both the concepts and the concern should have faded into oblivion. It seems that the principles and concepts of radiation protection have gone astray and have led to exceedingly prohibitive standards and impractical recommendations. Revision of these principles and concepts is now being proposed by an increasing number of scientists and several organisations

  16. VENTILATION MODEL

    International Nuclear Information System (INIS)

    V. Chipman

    2002-01-01

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their postclosure analyses

  17. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most ...

  18. Model predicting survival/exitus after traumatic brain injury: biomarker S100B 24h.

    Science.gov (United States)

    González-Mao, M C; Repáraz-Andrade, A; Del Campo-Pérez, V; Alvarez-García, E; Vara-Perez, C; Andrade-Olivié, M A

    2011-01-01

    The enigma of Traumatic Brain Injury (TBI), reflected in recent scientific literature, is its uncertain consequences, variability of the final prognosis with apparently similar TBI, necessity for peripheral biomarkers, and more specific predictive models. To study the relationship between serum S100B and survival in TBI patients in various serious situations; the S100B level in patients without traumatic pathology or associated tumour, subjected to stressful situations such as neurological intensive care unit (NICU) stay; the possible overestimation caused by extracerebral liberation in TBI patients and associated polytraumatism; the predictive cutoffs to determine the most sensitive and specific chronology; and achieve a predictive prognostic model. Patients admitted to the NICU within 6 hours after TBI were selected. We measured: a) clinical: exitus yes/no; age and gender, traumatic mechanism, polytraumatism yes/no, GCS score, unconsciousness duration, amnesia duration, neurological focality, and surgical interventions; b) radiological: CT scan for radiological lesions; c) biochemical: serum SB100B at 6, 24, 48 and 72 hours after TBI and drug abuse detected in the urine; d) GOS on hospital discharge. N: 149 TBI patients, independent of polytraumatism, mean serum S100B at 6, 24, 48, and 72 hours: 2.1, 1.3, 1.2, and 0.6 microg/L, respectively; N: 124 without associated polytraumatism, S100B at 6, 24, 48, and 72 hours: 2.0, 1.4, 1.3, and 0.6 microg/L; N: 50 control I S100B 24 hours: 0.17 microg/L (0.04 - 0.56) and 25 healthy subjects S100B 0.057 microg/L (0.02-0.094). Significantly higher S100B levels are observed on exitus, with excellent TBI prognosis and evolution performance. Hospital stay in the NICU produces significant increases in S100B compared to healthy subjects, without invalidating it as a biomarker. Polytraumatism associated to TBI does not significantly alter S100B levels. S100B at 24 hours > or = 0.90 microg/L appears to predict unfavourable TBI

  19. OSPREY Model

    Energy Technology Data Exchange (ETDEWEB)

    Veronica J. Rutledge

    2013-01-01

    The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
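
    One routine use of the breakthrough data mentioned above is estimating bed capacity from the area above the breakthrough curve, q = (Q C0 / m) * integral of (1 - C/C0) dt. The sketch below applies that relation to an invented S-shaped breakthrough curve; the flow rate, inlet concentration and sorbent mass are placeholders, not OSPREY inputs or outputs.

    # Sketch: bed capacity from a (synthetic) breakthrough curve.
    import numpy as np

    def trapezoid(y, x):
        y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
        return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

    t = np.linspace(0.0, 10.0, 201)                          # time, h
    c_over_c0 = 1.0 / (1.0 + np.exp(-(t - 6.0) / 0.5))       # synthetic breakthrough curve

    Q = 0.5          # gas flow rate, m^3/h         (placeholder)
    C0 = 2.0e-3      # inlet concentration, mol/m^3 (placeholder)
    m_sorbent = 1.2  # sorbent mass in the bed, kg  (placeholder)

    loaded = Q * C0 * trapezoid(1.0 - c_over_c0, t)          # mol captured before saturation
    print(f"estimated bed capacity = {1000.0 * loaded / m_sorbent:.2f} mmol per kg of sorbent")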

  20. Modeling Applications.

    Science.gov (United States)

    McMEEKIN, Thomas A; Ross, Thomas

    1996-12-01

    The concept of predictive microbiology has developed rapidly through the initial phases of experimental design and model development and the subsequent phase of model validation. A fully validated model represents a general rule which may be brought to bear on particular cases. For some microorganism/food combinations, sufficient confidence now exists to indicate substantial benefits to the food industry from use of predictive models. Several types of devices are available to monitor and record environmental conditions (particularly temperature). These "environmental histories" can be interpreted, using predictive models, in terms of microbial proliferation. The current challenge is to provide systems for the collection and interpretation of environmental information which combine ease of use, reliability, and security, providing the industrial user with the ability to make informed and precise decisions regarding the quality and safety of foods. Many specific applications for predictive modeling can be developed from a basis of understanding the inherent qualities of a fully validated model. These include increased precision and confidence in predictions based on accumulation of quantitative data, objective and rapid assessment of the effect of environmental conditions on microbial proliferation, and flexibility in monitoring the relative contribution of component parts of processing, distribution, and storage systems for assurance of shelf life and safety.
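
    A concrete (and deliberately generic) example of interpreting a logged temperature history with a predictive model: the sketch below integrates a Ratkowsky-type square-root secondary model, sqrt(rate) = b (T - Tmin), over a recorded cold-chain temperature trace to estimate the increase in log10 numbers. The coefficients and the temperature log are illustrative placeholders, not a validated model for any particular microorganism/food combination.

    # Sketch: growth predicted from a temperature history via a square-root model.
    import numpy as np

    B = 0.023        # placeholder coefficient, (log10 CFU/h)^0.5 per degC
    T_MIN = 2.0      # notional minimum growth temperature, degC

    def growth_rate(temp_c):
        """Specific growth rate in log10 CFU per hour (zero at or below T_MIN)."""
        r = B * (np.asarray(temp_c, dtype=float) - T_MIN)
        return np.where(r > 0.0, r * r, 0.0)

    def trapezoid(y, x):
        y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
        return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

    hours = np.arange(0, 25, dtype=float)                       # 24 h logged history
    temps = np.where((hours >= 8) & (hours <= 12), 12.0, 4.0)   # warm excursion mid-way

    print(f"predicted growth over 24 h = {trapezoid(growth_rate(temps), hours):.2f} log10 CFU")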

  1. A Model for Math Modeling

    Science.gov (United States)

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  2. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
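
    The accuracy measure used throughout the abstract is the root-mean-square deviation over C-alpha atoms. A minimal version is sketched below; it assumes the two coordinate sets are already optimally superimposed (the usual Kabsch alignment step is omitted) and that residues are paired one-to-one.

    # Sketch: C-alpha RMSD between two pre-superimposed coordinate sets of shape (N, 3).
    import numpy as np

    def ca_rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
        """RMSD in the units of the inputs (e.g. Angstrom)."""
        diff = coords_a - coords_b
        return float(np.sqrt((diff * diff).sum(axis=1).mean()))

    # Toy example: a 4-residue C-alpha trace and a slightly perturbed copy.
    a = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0], [11.4, 0.0, 0.0]])
    b = a + np.array([[0.5, 0.0, 0.0], [-0.5, 0.5, 0.0], [0.0, -0.5, 0.0], [0.3, 0.2, 0.0]])
    print(f"C-alpha RMSD = {ca_rmsd(a, b):.2f} Angstrom")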

  3. RNICE Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin; Stritch, Justin Michael

    2018-01-01

    Recently, scholars have issued calls for more replication, but academic reflections on when replication adds substantive value to public administration and management research are needed. This concise article presents a conceptual model, RNICE, for assessing when and how a replication study contributes knowledge about a social phenomenon and advances knowledge in the public administration and management literatures. The RNICE model provides a vehicle for researchers who seek to evaluate or demonstrate the value of a replication study systematically. We illustrate the practical application of the model using two previously published replication studies as examples.

  4. Correct Models

    OpenAIRE

    Blacher, René

    2010-01-01

    This report completes the two previous reports and gives a simpler explanation of the earlier results, namely a proof that the sequences obtained are random. In previous reports, we have shown how to transform a text $y_n$ into a random sequence by using Fibonacci functions $T_q$. Now, in this report, we obtain a clearer result by proving that $T_q(y_n)$ has the IID model as correct model. But it is necessary to define correctly what a correct model is. Then, we also study this pro...

  5. Lessons to be learned from a contentious challenge to mainstream radiobiological science (the linear no-threshold theory of genetic mutations)

    International Nuclear Information System (INIS)

    Beyea, Jan

    2017-01-01

    There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. • Key genetic studies from the 1940s, challenged by Calabrese, were

  6. Paleoclimate Modeling

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...

  7. Anchor Modeling

    Science.gov (United States)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error prone, and time consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.
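
    The key idea above - extension instead of modification - can be mimicked outside a database. In the sketch below, plain Python dictionaries stand in for anchor and attribute tables: each attribute lives in its own structure keyed by the anchor identity, so a later schema change only adds a new structure and never touches existing ones. The table and attribute names are invented for illustration, and this is not the authors' formalism at the database level.

    # Sketch: non-destructive extensibility in the spirit of anchor modeling.
    from collections import defaultdict

    anchors = {"customer": set()}            # anchor "table": identities only
    attributes = defaultdict(dict)           # one "table" per (anchor, attribute) pair

    def add_customer(cid: int) -> None:
        anchors["customer"].add(cid)

    def set_attribute(name: str, cid: int, value) -> None:
        attributes[("customer", name)][cid] = value   # extension, never modification

    add_customer(1)
    set_attribute("name", 1, "Ada")
    # Later evolution: a brand-new attribute is a brand-new structure, so nothing
    # that already exists (or already reads the old structures) has to change.
    set_attribute("loyalty_tier", 1, "gold")

    print(anchors)
    print(dict(attributes))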

  8. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  9. Environmental Modeling

    Science.gov (United States)

    EPA's modeling community is working to gain insights into certain parts of a physical, biological, economic, or social system by conducting environmental assessments for Agency decision making to complex environmental issues.

  10. Quark models

    International Nuclear Information System (INIS)

    Rosner, J.L.

    1981-01-01

    This paper invites experimenters to consider the wide variety of tests suggested by the new aspects of quark models since the discovery of charm and beauty, and nonrelativistic models. Colors and flavours are counted and combined into hadrons. The current quark zoo is summarized. Models and theoretical background are studied under: qualitative QCD: strings and bags, potential models, relativistic effects, electromagnetic transitions, gluon emissions, and single quark transition descriptions. Hadrons containing quarks known before 1974 (i.e. that can be made of ''light'' quarks u, d, and s) are treated in Section III, while those containing charmed quarks and beauty (b) quarks are discussed in Section IV. Unfolding the properties of the sixth quark from information on its hadrons is seen as a future application of the methods used in this study

  11. Model theory

    CERN Document Server

    Hodges, Wilfrid

    1993-01-01

    An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.

  12. Numerical models

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.; Manoj, N.T.

    developed most of the above models. This is a good approximation to simulate horizontal distribution of active and passive variables. The future challenge lies in developing capability to simulate the distribution in the vertical....

  13. Composite models

    International Nuclear Information System (INIS)

    Peccei, R.D.

    If quarks and leptons are composite, it should be possible eventually to calculate their mass spectrum and understand the reasons for the observed family replications, questions which lie beyond the standard model. Alas, all experimental evidence to date points towards quark and lepton elementarity, with the typical momentum scale Λ_comp, beyond which effects of inner structure may be seen, probably being greater than 1 TeV. One supersymmetric preon model explained here provides a new dynamical alternative for obtaining light fermions, which is that these states are quasi Goldstone fermions. This, and similar models, are discussed. Although quasi Goldstone fermions provide an answer to the zeroth-order question of composite models, the questions of how masses and families are generated remain unanswered. (U.K.)

  14. Ventilation models

    Science.gov (United States)

    Skaaret, Eimund

    Calculation procedures, used in the design of ventilating systems, which are especially suited for displacement ventilation in addition to linking it to mixing ventilation, are addressed. The two zone flow model is considered and the steady state and transient solutions are addressed. Different methods of supplying air are discussed, and different types of air flow are considered: piston flow, plane flow and radial flow. An evaluation model for ventilation systems is presented.

  15. Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi

    2016-01-01

    ... effects, unicausal reduction, and case specificity. Based on the developments in set theoretical thinking in social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper we propose and demonstrate a new approach to maturity models in the domain of Information Systems. This position paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work.

  16. META 2f: Probabilistic, Compositional, Multi-dimension Model-Based Verification (PROMISE)

    Science.gov (United States)

    2011-10-01

    ... this situation is to use triple-redundant flow sensors with a validity bit and mid-value select voting in software. This does not require any mitigation ... Invalid (SA/DA – Ethernet Src/Dest Address, Length/Type, Msg, SN – Sequence Number, Frame Check Sequence) and (vi) Inconsistent ... appropriate port). Source Address (SA): Bits 5, 6, and 7 of the 48-bit address indicate the channel/redundant path the frame was transmitted on

  17. Accelerated life models modeling and statistical analysis

    CERN Document Server

    Bagdonavicius, Vilijandas

    2001-01-01

    Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia...

  18. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising

  19. Mechanistic models

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, S.B.

    1990-09-01

    Several models and theories are reviewed that incorporate the idea of radiation-induced lesions (repairable and/or irreparable) that can be related to molecular lesions in the DNA molecule. Usually the DNA double-strand or chromatin break is suggested as the critical lesion. In the models, the shoulder on the low-LET survival curve is hypothesized as being due to one (or more) of the following three mechanisms: (1) "interaction" of lesions produced by statistically independent particle tracks; (2) nonlinear (i.e., linear-quadratic) increase in the yield of initial lesions, and (3) saturation of repair processes at high dose. Comparisons are made between the various approaches. Several significant advances in model development are discussed; in particular, a description of the matrix formulation of the Markov versions of the RMR and LPL models is given. The more advanced theories have incorporated statistical fluctuations in various aspects of the energy-loss and lesion-formation process. An important direction is the inclusion of physical and chemical processes into the formulations by incorporating relevant track structure theory (Monte Carlo track simulations) and chemical reactions of radiation-induced radicals. At the biological end, identification of repair genes and how they operate as well as a better understanding of how DNA misjoinings lead to lethal chromosome aberrations are needed for appropriate inclusion into the theories. More effort is necessary to model the complex end point of radiation-induced carcinogenesis.

  1. Reflectance Modeling

    Science.gov (United States)

    Smith, J. A.; Cooper, K.; Randolph, M.

    1984-01-01

    A classical description of the one dimensional radiative transfer treatment of vegetation canopies was completed and the results were tested against measured prairie (blue grama) and agricultural canopies (soybean). Phase functions are calculated in terms of directly measurable biophysical characteristics of the canopy medium. While the phase functions tend to exhibit backscattering anisotropy, their exact behavior is somewhat more complex and wavelength dependent. A Monte Carlo model was developed that treats soil surfaces with large periodic variations in three dimensions. A photon-ray tracing technology is used. Currently, the rough soil surface is described by analytic functions and appropriate geometric calculations performed. A bidirectional reflectance distribution function is calculated and, hence, available for other atmospheric or canopy reflectance models as a lower boundary condition. This technique is used together with an adding model to calculate several cases where Lambertian leaves possessing anisotropic leaf angle distributions yield non-Lambertian reflectance; similar behavior is exhibited for simulated soil surfaces.
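
    For reference, the bidirectional reflectance distribution function (BRDF) produced by the Monte Carlo soil model can be written in its usual general form; the Lambertian special case below is the assumption the record contrasts with the non-Lambertian canopy result. These are standard definitions, not expressions taken from the article itself.

```latex
% General BRDF definition: reflected radiance per unit incident irradiance,
% as a function of incoming (i) and reflected (r) directions.
f_r(\theta_i,\phi_i;\theta_r,\phi_r)
  = \frac{\mathrm{d}L_r(\theta_r,\phi_r)}{L_i(\theta_i,\phi_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
% Lambertian (perfectly diffuse) special case with albedo \rho:
f_r = \frac{\rho}{\pi}
```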

  2. Mathematical modeling

    CERN Document Server

    Eck, Christof; Knabner, Peter

    2017-01-01

    Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book, readers will learn to derive mathematical models that help to understand real-world phenomena. At the same time, a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees is given. An essential feature of this book is that mathematical structures are used as an ordering principle and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields of electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.

  3. Modelling language

    CERN Document Server

    Cardey, Sylviane

    2013-01-01

    In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int

  4. Molecular modeling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-01-01

    Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease the harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.

  5. Supernova models

    International Nuclear Information System (INIS)

    Woosley, S.E.; Weaver, T.A.

    1980-01-01

    Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the ⁵⁶Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.

  6. Cadastral Modeling

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders...... to land. The paper advances the position that cadastral modeling has to include not only the physical objects, agents, and information sets of the domain, but also the objectives or requirements of cadastral systems....

  7. Isolating lattice from electronic contributions in thermal transport measurements of metals and alloys above ambient temperature and an adiabatic model

    Science.gov (United States)

    Criss, Everett M.; Hofmeister, Anne M.

    2017-06-01

    From femtosecond spectroscopy (fs-spectroscopy) of metals, electrons and phonons reequilibrate nearly independently, which contrasts with models of heat transfer at ordinary temperatures (T > 100 K). These electronic transfer models only agree with thermal conductivity (k) data at a single temperature, but do not agree with thermal diffusivity (D) data. To address the discrepancies, which are important to problems in solid state physics, we separately measured electronic (ele) and phononic (lat) components of D in many metals and alloys over ~290-1100 K by varying measurement duration and sample length in laser-flash experiments. These mechanisms produce distinct diffusive responses in temperature versus time acquisitions because carrier speeds (u) and heat capacities (C) differ greatly. Electronic transport of heat only operates for a brief time after heat is applied because u is high. High D_ele is associated with moderate T, long lengths, low electrical resistivity, and loss of ferromagnetism. Relationships of D_ele and D_lat with physical properties support our assignments. Although k_ele reaches ~20 × k_lat near 470 K, it is transient. Combining previous data on u with each D provides mean free paths and lifetimes that are consistent with ~298 K fs-spectroscopy, and new values at high T. Our findings are consistent with nearly-free electrons absorbing and transmitting a small fraction of the incoming heat, whereas phonons absorb and transmit the majority. We model time-dependent, parallel heat transfer under adiabatic conditions which is one-dimensional in solids, as required by thermodynamic law. For noninteracting mechanisms, k ≅ (ΣC_i k_i)(ΣC_i)/(ΣC_i²). For metals, this reduces to k = k_lat above ~20 K, consistent with our measurements, and shows that Meissner's equation (k ≅ k_lat + k_ele) is invalid above ~20 K. For one mechanism with multiple, interacting carriers, k ≅ ΣC_i k_i/(ΣC_i). Thus, certain dynamic behaviors of electrons and phonons in metals have been

  8. (SSE) model

    African Journals Online (AJOL)

    Simple analytic polynomials have been proposed for estimating solar radiation in the traditional Northern, Central and Southern regions of Malawi. There is a strong agreement between the polynomials and the SSE model with R² values of 0.988, 0.989 and 0.989 and root mean square errors of 0.061, 0.057 and 0.062 ...
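
    As an illustration of the kind of fit quality reported above (R² near 0.99 and RMSE near 0.06), a simple polynomial can be fitted and scored as in the sketch below. The data values, polynomial degree and variable names are hypothetical; this is not the SSE model or the authors' code.

```python
# Hypothetical sketch: fit a simple polynomial to monthly solar-radiation
# values and report the two agreement measures quoted in the record (R^2, RMSE).
import numpy as np

month = np.arange(1, 13)                                   # predictor (assumed)
radiation = 20 + 5 * np.sin((month - 3) * 2 * np.pi / 12)  # made-up MJ/m^2/day values

coeffs = np.polyfit(month, radiation, deg=3)               # "simple analytic polynomial"
predicted = np.polyval(coeffs, month)

rmse = np.sqrt(np.mean((radiation - predicted) ** 2))
r_squared = 1 - np.sum((radiation - predicted) ** 2) / np.sum((radiation - radiation.mean()) ** 2)
print(f"R^2 = {r_squared:.3f}, RMSE = {rmse:.3f}")
```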

  9. Lens Model

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory...

  10. Markov model

    Indian Academy of Sciences (India)

    pattern of the watershed LULC, leading to an accretive linear growth of agricultural and settlement areas. The annual rate of ... thereby advocates for better agricultural practices with additional energy subsidy to arrest further forest loss and LULC ... automaton model and GIS: Long-term urban growth prediction for San ...

  11. Cheating models

    DEFF Research Database (Denmark)

    Arnoldi, Jakob

    The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics among other things claim can lead to market manipulation. Drawing ...

  12. Entrepreneurship Models.

    Science.gov (United States)

    Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.

    This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…

  13. The Model

    DEFF Research Database (Denmark)

    About the reconstruction of Palle Nielsen's (b. 1942) work The Model from 1968: a gigantic playground for children in the museum, where they can freely romp about, climb in ropes, crawl on wooden structures, work with tools, jump in foam rubber, paint with finger paints and dress up in costumes.

  14. Model Checking

    Indian Academy of Sciences (India)

    Model Checking - Automated Verification of Computational Systems. Madhavan Mukund. General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681.

  15. Successful modeling?

    Science.gov (United States)

    Lomnitz, Cinna

    Tichelaar and Ruff [1989] propose to “estimate model variance in complicated geophysical problems,” including the determination of focal depth in earthquakes, by means of unconventional statistical methods such as bootstrapping. They are successful insofar as they are able to duplicate the results from more conventional procedures.

  16. Molecular Modeling

    Indian Academy of Sciences (India)

    Molecular Modeling: A Powerful Tool for Drug Design and Molecular Docking. Rama Rao Nadendla. General Article, Resonance – Journal of Science Education, Volume 9, Issue 5, May 2004, pp. 51-60.

  17. Turbulence Model

    DEFF Research Database (Denmark)

    Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens

    2011-01-01

    term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence...

  18. Review: Bilirubin pKa studies; new models and theories indicate high pKa values in water, dimethylformamide and DMSO

    Directory of Open Access Journals (Sweden)

    Ostrow J

    2010-03-01

    Background: Correct aqueous pKa values of unconjugated bilirubin (UCB), a poorly-soluble, unstable substance, are essential for understanding its functions. Our prior solvent partition studies, of unlabeled and [14C] UCB, indicated pKa values above 8.0. These high values were attributed to effects of internal H-bonding in UCB. Many earlier and subsequent studies have reported lower pKa values, some even below 5.0, which are often used to describe the behavior of UCB. We here review 18 published studies that assessed aqueous pKa values of UCB, critically evaluating their methodologies in relation to essential preconditions for valid pKa measurements (short-duration experiments with purified UCB below saturation, and accounting for self-association of UCB). Results: These re-assessments identified major deficiencies that invalidate the results of all but our partition studies. New theoretical modeling of UCB titrations shows remarkable, unexpected effects of self-association, yielding falsely low pKa estimates, and provides some rationalization of the titration anomalies. The titration behavior reported for a soluble thioether conjugate of UCB at high aqueous concentrations is shown to be highly anomalous. Theoretical re-interpretations of data in DMSO and dimethylformamide show that those indirectly-derived aqueous pKa values are unacceptable, and indicate new, high average pKa values for UCB in non-aqueous media (>11 in DMSO and, probably, >10 in dimethylformamide). Conclusions: No reliable aqueous pKa values of UCB are available for comparison with our partition-derived results. A companion paper shows that only the high pKa values can explain the pH-dependence of UCB binding to phospholipids, cyclodextrins, and alkyl-glycoside and bile salt micelles.

  19. A Comparison of Invalidating Family Environment Characteristics between University Students Engaging in Self-Injurious Thoughts & Actions and Non-Self-Injuring University Students

    Science.gov (United States)

    Martin, Jodi; Bureau, Jean-Francois; Cloutier, Paula; Lafontaine, Marie-France

    2011-01-01

    Individuals experiencing non-suicidal self-injurious (NSSI) thoughts only are greatly overlooked by current research. This investigation aimed at determining how three groups of university students differed in their reported quality of childhood relationships with parents, and histories of physical and sexual abuses. These groups included students…

  20. Using Logistic Regression for Validating or Invalidating Initial Statewide Cut-Off Scores on Basic Skills Placement Tests at the Community College Level

    Science.gov (United States)

    Secolsky, Charles; Krishnan, Sathasivam; Judd, Thomas P.

    2013-01-01

    The community colleges in the state of New Jersey went through a process of establishing statewide cut-off scores for English and mathematics placement tests. The colleges wanted to communicate to secondary schools a consistent preparation that would be necessary for enrolling in Freshman Composition and College Algebra at the community college…

  1. THE UNEMPLOYMENT PENSION-AGE AND INVALIDITY OF THE LAW OF THE IMSS, A PRACTICAL THEORETICAL ANALYSIS IN WORKERS OF SMES

    Directory of Open Access Journals (Sweden)

    Manuel Ildefonso Ruiz-Medina

    2016-01-01

    The present study analyzes and discusses the impact of the base contribution salary on the amount of the severance-at-old-age and disability pensions covered by the current Social Security Law in each case; it also examines the causes of workers' lack of knowledge of Social Security benefits. A mixed methodological approach was used, supported by the qualitative tradition of the case study aimed at particularization rather than generalization, which made it possible to link the data obtained with theory and to describe, analyze and explain the results found for the object of study. The results emerged from a survey of 22 closed items structured on a Likert scale, answered by 40 workers at two companies classified as SMEs in the city of Culiacán, Sinaloa, Mexico, during March 2014. The analysis of the collected data shows a severe deterioration of pensions due to low wages, a lack of jobs and declining resources under the new pension system, as well as an almost total ignorance among workers of the benefits provided by the law, attributable to the lack of dissemination by the IMSS and the lack of workplace training.

  2. Tobacco industry argues domestic trademark laws and international treaties preclude cigarette health warning labels, despite consistent legal advice that the argument is invalid.

    Science.gov (United States)

    Crosbie, Eric; Glantz, Stanton A

    2014-05-01

    To analyse the tobacco industry's use of international trade agreements to oppose policies to strengthen health warning labels (HWLs). A review of tobacco industry documents, tobacco control legislation and international treaties. During the early 1990s, the tobacco industry became increasingly alarmed about the advancement of HWLs on cigarette packages. In response, it requested legal opinions from British American Tobacco's law firms in Australia and England, Britain's Department of Trade and Industry and the World Intellectual Property Organisation on the legality of restricting and prohibiting the use of their trademarks, as embodied in cigarette packages. The consistent legal advice, privately submitted to the companies, was that international treaties do not shield trademark owners from government limitations (including prohibition) on the use of their trademarks. Despite receiving this legal advice, the companies publicly argued that requiring large HWLs compromised their trademark rights under international treaties. The companies used these arguments as part of their successful effort to deter the Canadian and Australian governments from enacting laws requiring the plain packaging of cigarettes, which helped delay large graphic HWLs, including 'plain' packaging, for over a decade. Governments should not be intimidated by tobacco company threats and unsubstantiated claims, and should carefully craft HWL laws to withstand the inevitable tobacco industry lawsuits with the knowledge that the companies' own lawyers as well as authoritative bodies have told the companies that the rights they claim do not exist.

  3. A Faculty Woman of Color and Micro-Invalidations at a White Research Institution: A Case of Intersectionality and Institutional Betrayal

    Science.gov (United States)

    Carroll, Doris

    2017-01-01

    Faculty Women of Color should be able to thrive and grow at our best research and teaching institutions. Assuring their academic and professional success requires that an institution's academic culture shift from a White, male-dominated, meritocratic environment to a global enrichment campus, one that values the richness and diversity of talent…

  4. An Investigation of Pre-Service Middle School Mathematics Teachers' Ability to Conduct Valid Proofs, Methods Used, and Reasons for Invalid Arguments

    Science.gov (United States)

    Demiray, Esra; Isiksal Bostan, Mine

    2017-01-01

    The purposes of this study are to investigate Turkish pre-service middle school mathematics teachers' ability in conducting valid proofs for statements regarding numbers and algebra in terms of their year of enrollment in a teacher education program, to determine the proof methods used in their valid proofs, and to examine the reasons for their…

  5. Lentinan with S-1 and paclitaxel for gastric cancer chemotherapy improve patient quality of life.

    Science.gov (United States)

    Kataoka, Hiromi; Shimura, Takaya; Mizoshita, Tsutomu; Kubota, Eiji; Mori, Yoshinori; Mizushima, Takashi; Wada, Tsuneya; Ogasawara, Naotaka; Tanida, Satoshi; Sasaki, Makoto; Togawa, Shozo; Sano, Hitoshi; Hirata, Yoshikazu; Ikai, Masahiro; Mochizuki, Hisato; Seno, Kyoji; Itoh, Sachie; Kawai, Takashi; Joh, Takashi

    2009-01-01

    Lentinan (LNT), a purified beta-glucan, is a biological and immunological modifier and has been used as an anticancer drug in combination with 5-fluorouracil for gastric cancer in Japan. In this prospective randomized study, we evaluated the effects of LNT combination with regard to quality of life (QOL) and the LNT binding ratio in monocytes. Twenty patients were evaluated for 12 weeks. One cycle was 3 weeks, with S-1 administered on days 1-14 and paclitaxel on days 1 and 8. LNT was given once a week (days 1, 8 and 15): for all 12 weeks in the LNT 12-wk group and only for the last 6 weeks in the LNT 6-wk group. QOL was evaluated weekly by QOL-ACD, and binding of LNT to monocytes was measured by flow cytometry. There were individual variations in the binding ratio of LNT to monocytes, from 0.16% to 11.95%. Toxicity with chemotherapy was not improved in the LNT 12-wk group; however, the total QOL score was significantly elevated in the LNT 12-wk group (p = 0.018) but not in the LNT 6-wk group. LNT combination from the beginning of the chemotherapy may be an important factor for the improvement of patient QOL.

  6. Model-reduced inverse modeling

    NARCIS (Netherlands)

    Vermeulen, P.T.M.

    2006-01-01

    Although faster computers have been developed in recent years, they tend to be used to solve even more detailed problems. In many cases this will yield enormous models that can not be solved within acceptable time constraints. Therefore, there is a need for alternative methods that simulate such

  7. Building Models and Building Modelling

    DEFF Research Database (Denmark)

    Jørgensen, Kaj; Skauge, Jørn

    2008-01-01

    The introductory chapter of the report describes the primary concepts relating to building models and sets out some fundamental aspects of computer-based modelling. In addition, the difference between drawing programs and building-modelling programs is described. Important aspects of comp...

  8. Molecular Modelling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-12-01

    The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modelling. Molecular modelling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.

  9. Acyclic models

    CERN Document Server

    Barr, Michael

    2002-01-01

    Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.

  10. RNICE Model

    DEFF Research Database (Denmark)

    Pedersen, Mogens Jin; Stritch, Justin Michael

    2018-01-01

    Replication studies relate to the scientific principle of replicability and serve the significant purpose of providing supporting (or contradicting) evidence regarding the existence of a phenomenon. However, replication has never been an integral part of public administration and management...... research. Recently, scholars have issued calls for more replication, but academic reflections on when replication adds substantive value to public administration and management research are needed. This concise article presents a conceptual model, RNICE, for assessing when and how a replication study...... contributes knowledge about a social phenomenon and advances knowledge in the public administration and management literatures. The RNICE model provides a vehicle for researchers who seek to evaluate or demonstrate the value of a replication study systematically. We illustrate the practical application...

  11. Persistent Modelling

    DEFF Research Database (Denmark)

    2012-01-01

    on this subject, this book makes essential reading for anyone considering new ways of thinking about architecture. In drawing upon both historical and contemporary perspectives, this book provides evidence of the ways in which relations between representation and the represented continue to be reconsidered......The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns – an evident...... characteristic of architectural practice. But the persistence in persistent modelling can also be understood to apply in other ways, reflecting and anticipating extended roles for representation. This book identifies three principal areas in which these extensions are becoming apparent within contemporary...

  13. Modeling Minds

    DEFF Research Database (Denmark)

    Michael, John

    others' minds. Then (2), in order to bring to light some possible justifications, as well as hazards and criticisms of the methodology of looking time tests, I will take a closer look at the concept of folk psychology and will focus on the idea that folk psychology involves using oneself as a model...... of other people in order to predict and understand their behavior. Finally (3), I will discuss the historical location and significance of the emergence of looking time tests...

  14. Hydroballistics Modeling

    Science.gov (United States)

    1975-01-01

    detailed rendered visible in his photographs by streams of photographs of spheres entering the water small bubbles from electrolysis. So far as is...of the cavity is opaque or, brined while the sphere was still in the oil. At if translucent, the contrast between the jet and about the time the...and brass, for example) should be so model velocity scale according to Equation 1.18, selected that electrolysis is not a problem. the addition of

  15. Biomimetic modelling.

    OpenAIRE

    Vincent, Julian F V

    2003-01-01

    Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more compl...

  16. Modelling Behaviour

    DEFF Research Database (Denmark)

    This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while....... The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015....

  17. Combustor Modelling

    Science.gov (United States)

    1980-02-01

    Figure annotations only (comparison of model predictions with experiments, gray-medium approximation). Reference: "Possibilità di valutazione dello scambio termico in focolai di caldaie per riscaldamento", Atti e Rassegna Tecnica, Società Ingegneri e Architetti in Torino.

  18. Persistent Modelling

    DEFF Research Database (Denmark)

    practice: the duration of active influence that representation can hold in relation to the represented; the means, methods and media through which representations are constructed and used; and what it is that is being represented. Featuring contributions from some of the world’s most advanced thinkers....... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience....

  19. Ozone modeling

    International Nuclear Information System (INIS)

    McIllvaine, C.M.

    1994-01-01

    Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO2), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically nitrogen oxides (NOx) and non-methane organic compounds (NMOC), in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane.) Additionally, ozone formation is a function of the ratio of NMOC concentrations to NOx concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the ratio of the NMOC and NOx coordinates of the point, known as the NMOC/NOx ratio. Results obtained by the described model are presented.

  20. Modeling biomembranes.

    Energy Technology Data Exchange (ETDEWEB)

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or "lipid rafts" are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.

  1. A critical review of anaesthetised animal models and alternatives for military research, testing and training, with a focus on blast damage, haemorrhage and resuscitation.

    Science.gov (United States)

    Combes, Robert D

    2013-11-01

    Military research, testing, and surgical and resuscitation training, are aimed at mitigating the consequences of warfare and terrorism to armed forces and civilians. Traumatisation and tissue damage due to explosions, and acute loss of blood due to haemorrhage, remain crucial, potentially preventable, causes of battlefield casualties and mortalities. There is also the additional threat from inhalation of chemical and aerosolised biological weapons. The use of anaesthetised animal models, and their respective replacement alternatives, for military purposes -- particularly for blast injury, haemorrhaging and resuscitation training -- is critically reviewed. Scientific problems with the animal models include the use of crude, uncontrolled and non-standardised methods for traumatisation, an inability to model all key trauma mechanisms, and complex modulating effects of general anaesthesia on target organ physiology. Such effects depend on the anaesthetic and influence the cardiovascular system, respiration, breathing, cerebral haemodynamics, neuroprotection, and the integrity of the blood-brain barrier. Some anaesthetics also bind to the NMDA brain receptor with possible differential consequences in control and anaesthetised animals. There is also some evidence for gender-specific effects. Despite the fact that these issues are widely known, there is little published information on their potential, at best, to complicate data interpretation and, at worst, to invalidate animal models. There is also a paucity of detail on the anaesthesiology used in studies, and this can hinder correct data evaluation. Welfare issues relate mainly to the possibility of acute pain as a side-effect of traumatisation in recovered animals. Moreover, there is the increased potential for animals to suffer when anaesthesia is temporary, and the procedures invasive. These dilemmas can be addressed, however, as a diverse range of replacement approaches exist, including computer and mathematical

  2. Object Modeling and Building Information Modeling

    OpenAIRE

    Auråen, Hege; Gjemdal, Hanne

    2016-01-01

    The main part of this thesis is an online course (Small Private Online Course) entitled "Introduction to Object Modeling and Building Information Modeling". This supplementary report clarifies the choices made in the process of developing the course. The course examines the basic concepts of object modeling, modeling techniques and a modeling language ​​(UML). Further, building information modeling (BIM) is presented as a modeling process, and the object modeling concepts in the BIM softw...

  3. DTN Modeling in OPNET Modeler

    Directory of Open Access Journals (Sweden)

    PAPAJ Jan

    2014-05-01

    Traditional wireless networks use the concept of point-to-point forwarding inherited from reliable wired networks, which is not ideal for the wireless environment. New emerging applications and networks operate mostly disconnected. So-called Delay-Tolerant Networks (DTNs) are receiving increasing attention from both academia and industry. DTNs introduced a store-carry-and-forward concept to solve the problem of intermittent connectivity. The behavior of such networks is verified with real models, computer simulation, or a combination of both approaches. Computer simulation has become the primary and cost-effective tool for evaluating the performance of DTNs. OPNET Modeler is our target simulation tool, and we wanted to extend OPNET's simulation capabilities towards DTNs. We implemented the bundle protocol in OPNET Modeler, allowing the simulation of cases based on the bundle concept, such as epidemic forwarding, which relies on flooding the network with messages, and a forwarding algorithm based on the history of past encounters (PRoPHET). The implementation details are provided in the article.
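
    The two forwarding schemes named in the record can be sketched compactly. The PRoPHET update rules below follow the commonly published form (direct encounter, aging, transitivity; cf. RFC 6693) with made-up parameter values; this is an illustrative sketch, not the article's OPNET implementation of the bundle protocol.

```python
# Illustrative sketch of PRoPHET delivery predictabilities and epidemic exchange.
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # typical published constants (assumed here)

def on_encounter(P, a, b):
    """Direct update of P(a,b) when node a meets node b."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1.0 - old) * P_INIT

def age(P, pair, elapsed_units):
    """Decay a predictability for time units without contact."""
    P[pair] = P.get(pair, 0.0) * (GAMMA ** elapsed_units)

def transitive(P, a, b, c):
    """When a meets b, and b already has a predictability towards c."""
    old = P.get((a, c), 0.0)
    P[(a, c)] = old + (1.0 - old) * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA

def epidemic_exchange(buffer_a, buffer_b):
    """Epidemic forwarding: both nodes end up holding the union of their bundles."""
    union = buffer_a | buffer_b
    buffer_a |= union
    buffer_b |= union

P = {}
on_encounter(P, "A", "B")
on_encounter(P, "B", "C")
transitive(P, "A", "B", "C")
print(P)  # e.g. {('A','B'): 0.75, ('B','C'): 0.75, ('A','C'): ~0.14}

a_buf, b_buf = {"bundle-1"}, {"bundle-2"}
epidemic_exchange(a_buf, b_buf)
print(a_buf, b_buf)  # both buffers now hold both bundles
```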

  4. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  5. Model Checking Algorithms for Markov Reward Models

    NARCIS (Netherlands)

    Cloth, Lucia; Cloth, L.

    2006-01-01

    Model checking Markov reward models unites two different approaches of model-based system validation. On the one hand, Markov reward models have a long tradition in model-based performance and dependability evaluation. On the other hand, a formal method like model checking allows for the precise

  6. Modelling Defiguration

    DEFF Research Database (Denmark)

    Bork Petersen, Franziska

    2013-01-01

    focus centres on how the catwalk scenography evokes a ‘defiguration’ of the walking models and to what effect. Vibskov’s mobile catwalk draws attention to the walk, which is a key element of models’ performance but which usually functions in fashion shows merely to present clothes in the most...... catwalks. Vibskov’s catwalk induces what the dance scholar Gabriele Brandstetter has labelled a ‘defigurative choreography’: a straying from definitions, which exist in ballet as in other movement-based genres, of how a figure should move and appear (1998). The catwalk scenography in this instance...

  7. Students' Models of Curve Fitting: A Models and Modeling Perspective

    Science.gov (United States)

    Gupta, Shweta

    2010-01-01

    The Models and Modeling Perspectives (MMP) has evolved out of research that began 26 years ago. MMP researchers use Model Eliciting Activities (MEAs) to elicit students' mental models. In this study MMP was used as the conceptual framework to investigate the nature of students' models of curve fitting in a problem-solving environment consisting of…

  8. Sensitivity of mineral dissolution rates to physical weathering : A modeling approach

    Science.gov (United States)

    Opolot, Emmanuel; Finke, Peter

    2015-04-01

    There is continued interest in the accurate estimation of natural weathering rates owing to their importance in soil formation, nutrient cycling, estimation of acidification in soils, rivers and lakes, and in understanding the role of silicate weathering in carbon sequestration. At the same time, a challenge exists in reconciling discrepancies between laboratory-determined weathering rates and natural weathering rates. Studies have consistently reported laboratory rates to be orders of magnitude faster than natural weathering rates (White, 2009). These discrepancies have mainly been attributed to (i) changes in fluid composition, (ii) changes in primary mineral surfaces (reactive sites) and (iii) the formation of secondary phases that could slow natural weathering rates. It is indeed difficult to measure, in laboratory experiments, the interactive effect of the intrinsic factors (e.g. mineral composition, surface area) and extrinsic factors (e.g. solution composition, climate, bioturbation) occurring in the natural setting. A modeling approach could be useful in this case. A number of geochemical models (e.g. PHREEQC, EQ3/EQ6) already exist and are capable of estimating mineral dissolution / precipitation rates as a function of time and mineral mass. However, most of these approaches assume a constant surface area in a given volume of water (White, 2009). This assumption may become invalid, especially at long time scales. One of the widely used weathering models is the PROFILE model (Sverdrup and Warfvinge, 1993). The PROFILE model takes into account the mineral composition, solution composition and surface area in determining dissolution / precipitation rates. However, there is little coupling with other processes (e.g. physical weathering, clay migration, bioturbation) which could directly or indirectly influence dissolution / precipitation rates. We propose in this study a coupling between chemical weathering mechanism (defined as a function of reactive area
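
    Geochemical codes of the kind cited above (e.g. PHREEQC) typically express mineral dissolution and precipitation with a transition-state-theory type rate law in which the reactive surface area enters explicitly. The generic form below is given only to make that dependence concrete; it is not necessarily the exact PROFILE formulation.

```latex
% Generic TST-style net rate law for mineral dissolution/precipitation:
%   r      net rate (mol/s), positive for dissolution
%   k      kinetic rate constant (mol m^-2 s^-1)
%   A      reactive surface area (m^2), the quantity often assumed constant
%   \Omega saturation ratio IAP/K_eq (the rate vanishes as \Omega -> 1)
r = k \, A \left(1 - \Omega\right)
```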

  9. Intrusion-Related Gold Deposits: New insights from gravity and hydrothermal integrated 3D modeling applied to the Tighza gold mineralization (Central Morocco)

    Science.gov (United States)

    Eldursi, Khalifa; Branquet, Yannick; Guillou-Frottier, Laurent; Martelet, Guillaume; Calcagno, Philippe

    2018-04-01

    The Tighza (or Jebel Aouam) district is one of the most important polymetallic districts in Morocco. It belongs to the Variscan Belt of Central Meseta, and includes W-Au, Pb-Zn-Ag, and Sb-Ba mineralization types that are spatially related to late-Carboniferous granitic stocks. One of the proposed hypotheses suggests that these granitic stocks are connected to a large intrusive body lying beneath them and that W-Au mineralization is directly related to this magmatism during a 287-285 Ma time span. A more recent model argues for a disconnection between the older barren outcropping magmatic stocks and a younger hidden magmatic complex responsible for the W-Au mineralization. Independently of the magmatic scenario, the W-Au mineralization is consensually recognized as a W-rich intrusion-related gold deposit (IRGD). In addition to the discrepancies between magmatic scenarios, the IRGD model does not account for the published older age corresponding to a high-temperature hydrothermal event at ca. 291 Ma. Our study is based on gravity data inversion and hydrothermal modeling, and aims to test this IRGD model and its related magmatic geometries, with respect to subsurface geometries, favorable physical conditions for deposition and the time record of hydrothermal processes. Combined inversion of geology and gravity data suggests that an intrusive body is rooted mainly at the Tighza fault in the north and that it spreads horizontally toward the south during a trans-tensional event (D2). Based on the numerical results, two types of mineralization can be distinguished: 1) the "Pre-Main" type appears during the emplacement of the magmatic body, and 2) the "Main" type appears during magma crystallization and the cooling phase. The time-lag between the two mineralization types depends on the cooling rate of the magma. Although our numerical model of thermally-driven fluid flow around the Tighza pluton is simplified, as it does not take into account the chemical and deformation

  10. ALEPH model

    CERN Multimedia

    1989-01-01

    A wooden model of the ALEPH experiment and its cavern. ALEPH was one of 4 experiments at CERN's 27km Large Electron Positron collider (LEP) that ran from 1989 to 2000. During 11 years of research, LEP's experiments provided a detailed study of the electroweak interaction. Measurements performed at LEP also proved that there are three – and only three – generations of particles of matter. LEP was closed down on 2 November 2000 to make way for the construction of the Large Hadron Collider in the same tunnel. The cavern and detector are in separate locations - the cavern is stored at CERN and the detector is temporarily on display in Glasgow physics department. Both are available for loan.

  11. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry who worked with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  12. Slope diffusion models and digitally-acquired morphometric parameters yield age constraints on cinder cones, examples from the Spencer High Point and Craters of the Moon National Monument, Snake River Plain, Idaho

    Science.gov (United States)

    Blaser, A. P.; Holman, R. J.; Brown, D. E.; Willis, J. B.

    2011-12-01

    An analytical solution to a diffusion equation for cinder cones and a new digital method for collecting and comparing morphometric data on cinder cones are developed and used to constrain relative KT ages of undated cinder cones from the Spencer High Point (SHP) basalt plateau, southeastern Idaho. We assume that the interior slope of cinder cone craters diffuse at a steady state and that a range of diffusion constants (K=5-15 m^2/ky) derived in other areas of the Intermountain west are applicable in SE Idaho. Previous workers developed diffusion equations that model degradation of the outer flanks of cinder cones over time. The outer flanks of several SHP cones are heavily eroded by landsliding, a non-diffusive process, which invalidates diffusion modeling. However, our observations of the morphology of cinder cones throughout SE Idaho and comparisons with cones in other regions suggest that the interior slopes of cinder cone craters erode diffusively even when the outer flanks of the cones do not. We model and compare KT ages using morphometric measures from both the exterior flanks and the crater interiors; we conclude that the ages based on interior slopes are more valid than those based on exterior slopes. The topographic profiles, used to derive the necessary morphometric parameters (e.g. slope, slope inflection, cone and crater height/width ratios, and crater radius), are generated in a geographic information system (GIS) from readily available 10-m resolution digital elevation models (DEMs) rather than from topographic maps used by previous workers. We analytically solve diffusion equations for cinder cone degradation and compare the consistency of resulting relative KT ages. The one-dimensional equation models how a single topographic profile degrades through time and depends on a diffusion constant K (m^2/ky) that describes the erosion rate. We assume an initial slope of 33° and allow the model to degrade to the slope of the inflection point in the crater
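
    The one-dimensional degradation equation referred to above is, in its usual hillslope-diffusion form, the linear diffusion equation sketched below, with K in m²/ky as quoted in the record. The exact analytical solution used by the authors is not reproduced here.

```latex
% Linear hillslope diffusion of a topographic profile h(x,t):
% slope-dependent downhill transport smooths the crater interior through time.
\frac{\partial h}{\partial t} = K \, \frac{\partial^{2} h}{\partial x^{2}},
\qquad K \approx 5\text{--}15\ \mathrm{m^{2}/ky}
```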

  13. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, offering an updated description of IMACLIM, a model designed as a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are presented and detailed. (A.L.B.)

  14. Building Mental Models by Dissecting Physical Models

    Science.gov (United States)

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to…

  15. Neurite density imaging versus imaging of microscopic anisotropy in diffusion MRI: A model comparison using spherical tensor encoding.

    Science.gov (United States)

    Lampinen, Björn; Szczepankiewicz, Filip; Mårtensson, Johan; van Westen, Danielle; Sundgren, Pia C; Nilsson, Markus

    2017-02-15

    In diffusion MRI (dMRI), microscopic diffusion anisotropy can be obscured by orientation dispersion. Separation of these properties is of high importance, since it could allow dMRI to non-invasively probe elongated structures such as neurites (axons and dendrites). However, conventional dMRI, based on single diffusion encoding (SDE), entangles microscopic anisotropy and orientation dispersion with intra-voxel variance in isotropic diffusivity. SDE-based methods for estimating microscopic anisotropy, such as the neurite orientation dispersion and density imaging (NODDI) method, must thus rely on model assumptions to disentangle these features. An alternative approach is to directly quantify microscopic anisotropy by the use of variable shape of the b-tensor. Along those lines, we here present the 'constrained diffusional variance decomposition' (CODIVIDE) method, which jointly analyzes data acquired with diffusion encoding applied in a single direction at a time (linear tensor encoding, LTE) and in all directions (spherical tensor encoding, STE). We then contrast the two approaches by comparing neurite density estimated using NODDI with microscopic anisotropy estimated using CODIVIDE. Data were acquired in healthy volunteers and in glioma patients. NODDI and CODIVIDE differed the most in gray matter and in gliomas, where NODDI detected a neurite fraction higher than expected from the level of microscopic diffusion anisotropy found with CODIVIDE. The discrepancies could be explained by the NODDI tortuosity assumption, which enforces a connection between the neurite density and the mean diffusivity of tissue. Our results suggest that this assumption is invalid, which leads to a NODDI neurite density that is inconsistent between LTE and STE data. Using simulations, we demonstrate that the NODDI assumptions result in parameter bias that precludes the use of NODDI to map neurite density. With CODIVIDE, we found high levels of microscopic anisotropy in white matter

  16. Atmospheric Models/Global Atmospheric Modeling

    Science.gov (United States)

    1998-09-30

    Atmospheric Models/Global Atmospheric Modeling. Timothy F. Hogan, Naval Research Laboratory, Monterey, CA 93943-5502. Improvements (initialization of increments, improved cloud prediction, and improved surface fluxes) have been transitioned to 6.4 (Global Atmospheric Models, PE 0603207N, X-0513).

  17. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they rely now more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth, they are used daily by any practitioner in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  18. Rotating universe models

    International Nuclear Information System (INIS)

    Tozini, A.V.

    1984-01-01

    A review is made of some properties of rotating Universe models. Gödel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models have Gödel's model as a particular case. Non-stationary cosmological models are found which generalize Gödel's metric in a way analogous to that in which Friedmann's model generalizes Einstein's. (L.C.) [pt

  19. Wake Expansion Models

    DEFF Research Database (Denmark)

    Branlard, Emmanuel Simon Pierre

    2017-01-01

    Different models of wake expansion are presented in this chapter: the 1D momentum theory model, the cylinder analog model and Theodorsen's model. Far-wake models, such as the ones from Frandsen or Rathmann, are only briefly mentioned. The different models are compared to each other. Results from...
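
    For the first of the models listed, the textbook 1D momentum (actuator-disc) relations give the axial induction factor from the thrust coefficient and the far-wake expansion from mass conservation. These standard expressions are quoted below only as an illustration of that approach, not as the chapter's derivation.

```latex
% 1D momentum theory: induction factor a from thrust coefficient C_T,
% and far-wake radius R_w from mass conservation across an actuator disc
% of radius R.
a = \tfrac{1}{2}\left(1 - \sqrt{1 - C_T}\right), \qquad
\frac{R_w}{R} = \sqrt{\frac{1 - a}{1 - 2a}}
```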

  20. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often...... of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers......, and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...

  1. Model-to-model interface for multiscale materials modeling

    Energy Technology Data Exchange (ETDEWEB)

    Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)

    2017-12-17

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
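
    The "standard set of functions with export and import schemas" can be pictured with a small coupling sketch like the one below. All class names, schema fields and exchanged quantities are invented for illustration; this is not the interface or the LAMMPS coupling described in the report.

```python
# Hypothetical sketch of a low-level model-to-model interface: each model
# exposes export/import functions against an agreed schema, so two codes can
# exchange state without modifying each other's internals.
from typing import Any, Dict


class ModelInterface:
    """Minimal export/import contract shared by coupled models."""

    def export_state(self) -> Dict[str, Any]:
        raise NotImplementedError

    def import_state(self, state: Dict[str, Any]) -> None:
        raise NotImplementedError


class AtomisticModel(ModelInterface):
    def __init__(self) -> None:
        self.wall_velocity = 0.0          # quantity sampled near the solid wall

    def export_state(self) -> Dict[str, Any]:
        return {"wall_velocity": self.wall_velocity}

    def import_state(self, state: Dict[str, Any]) -> None:
        self.shear_stress = state["shear_stress"]


class ContinuumModel(ModelInterface):
    def __init__(self) -> None:
        self.shear_stress = 1.0e-3        # quantity computed by the flow solver

    def export_state(self) -> Dict[str, Any]:
        return {"shear_stress": self.shear_stress}

    def import_state(self, state: Dict[str, Any]) -> None:
        self.boundary_velocity = state["wall_velocity"]


def couple(a: ModelInterface, b: ModelInterface) -> None:
    """One exchange step: each model imports the other's exported state."""
    b.import_state(a.export_state())
    a.import_state(b.export_state())


md, cfd = AtomisticModel(), ContinuumModel()
couple(md, cfd)
print(cfd.boundary_velocity, md.shear_stress)
```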

  2. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal...... account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models...

  3. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.

  4. Characterization and Modeling of High Power Microwave Effects in CMOS Microelectronics

    Science.gov (United States)

    2010-01-01

    Any voltage above the line marked VIH is considered a valid logic high on the input of the gate, and the gate can handle any voltage noise at the input up to VIL without changing state; the region between VIL and VIH is considered an invalid logic level. Table 2.2 of the report lists intrinsic device characteristics derived from SPECTRE simulations: VIH (V), VIL (V), high noise margin (V) and low noise margin (V).
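
    For reference, the standard noise-margin definitions behind these quantities (textbook relations, not values from the report) are

        NM_H = V_{OH} - V_{IH}, \qquad NM_L = V_{IL} - V_{OL},

    so a noise excursion smaller than the relevant margin leaves the logic level valid.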

  5. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  6. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  7. Wake modelling combining mesoscale and microscale models

    DEFF Research Database (Denmark)

    Badger, Jake; Volker, Patrick; Prospathospoulos, J.

    2013-01-01

    parameterizations are demonstrated in the Weather Research and Forecasting mesoscale model (WRF) in an idealized atmospheric flow. The model framework is the Horns Rev I wind farm experiencing a 7.97 m/s wind from 269.4°. Three of the four parameterizations use thrust output from the CRESflow-NS microscale model......In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake....... The characteristics of the mesoscale wake that developed from the four parameterizations are examined. In addition the mesoscale model wakes are compared to measurement data from Horns Rev I. Overall it is seen as an advantage to incorporate microscale model data in mesoscale model wake parameterizations....

  8. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of Trusted Measurement supporting behavior measurement based on the trusted connection architecture (TCA) with three entities and three levels is proposed, and a frame to illustrate the model is given. The model synthesizes three trusted measurement dimensions, including trusted identity, trusted status and trusted behavior; it satisfies the essential requirements of trusted measurement and unifies the TCA with three entities and three levels.

  9. Molecular Models: Construction of Models with Magnets

    Directory of Open Access Journals (Sweden)

    Kalinovčić P.

    2015-07-01

    Full Text Available Molecular models are indispensable tools in teaching chemistry. Besides their high price, commercially available models are generally too small for classroom demonstration. This paper suggests how to make space-filling (calotte) models from Styrofoam, with magnetic balls as connectors and disc magnets for showing molecular polarity.

  10. Similarity of the leading contributions to the self-energy and the thermodynamics in two- and three-dimensional Fermi Liquids

    International Nuclear Information System (INIS)

    Coffey, D.; Bedell, K.S.

    1993-01-01

    We compare the self-energy and entropy of two- and three-dimensional Fermi liquids (FLs) using a model with a contact interaction between fermions. For a two-dimensional (2D) FL we find that there are T^2 contributions to the entropy from interactions separate from those due to the collective modes. These T^2 contributions arise from nonanalytic corrections to the real part of the self-energy and are analogous to the T^3 ln T contributions present in the entropy of a three-dimensional (3D) FL. The difference between the 2D and 3D results arises solely from the different phase space factors.
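
    Schematically (a paraphrase of the abstract rather than the paper's exact expressions), the low-temperature entropies then take the forms

        S_{2D}(T) \simeq \gamma_2 T + a\,T^2, \qquad S_{3D}(T) \simeq \gamma_3 T + b\,T^3 \ln T,

    with the T^2 (2D) and T^3 ln T (3D) terms coming from the nonanalytic self-energy corrections.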

  11. A more flexible lipoprotein sorting pathway.

    Science.gov (United States)

    Chahales, Peter; Thanassi, David G

    2015-05-01

    Lipoprotein biogenesis in Gram-negative bacteria occurs by a conserved pathway, each step of which is considered essential. In contrast to this model, LoVullo and colleagues demonstrate that the N-acyl transferase Lnt is not required in Francisella tularensis or Neisseria gonorrhoeae. This suggests the existence of a more flexible lipoprotein pathway, likely due to a modified Lol transporter complex, and raises the possibility that pathogens may regulate lipoprotein processing to modulate interactions with the host. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  12. Commentary on inhaled {sup 239}PuO{sub 2} in dogs - a prophylaxis against lung cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M., E-mail: jerrycuttler@rogers.com [Cuttler and Associates, Vaughan, ON (Canada); Feinendegen, L. [Brookhaven National Laboratories, Upton, NY (United States)

    2015-07-01

    Several studies on the effect of inhaled plutonium-dioxide particulates and the incidence of lung tumors in dogs reveal beneficial effects when the cumulative alpha-radiation dose is low. There is a threshold at an exposure level of about 100 cGy for excess tumor incidence and reduced lifespan. The observations conform to the expectations of the radiation hormesis dose-response model and contradict the predictions of the Linear No-Threshold (LNT) hypothesis. These studies suggest investigating the possibility of employing low-dose alpha-radiation, such as from {sup 239}PuO{sub 2} inhalation, as a prophylaxis against lung cancer. (author)

  13. Commentary on inhaled {sup 239}PuO{sub 2} in dogs - a prophylaxis against lung cancer?

    Energy Technology Data Exchange (ETDEWEB)

    Cuttler, J.M. [Cuttler and Assoc., Vaughan, Ontario (Canada); Feinendegen, L. [Brookhaven National Laboratories, Upton, New York (United States)

    2015-06-15

    Several studies on the effect of inhaled plutonium-dioxide particulates and the incidence of lung tumors in dogs reveal beneficial effects when the cumulative alpha-radiation dose is low. There is a threshold at an exposure level of about 100 cGy for excess tumor incidence and reduced lifespan. The observations conform to the expectations of the radiation hormesis dose-response model and contradict the predictions of the Linear No-Threshold (LNT) hypothesis. These studies suggest investigating the possibility of employing low-dose alpha-radiation, such as from {sup 239}PuO{sub 2} inhalation, as a prophylaxis against lung cancer. (author)

  14. Some environmental challenges which the uranium production industry faces in the 21st century

    International Nuclear Information System (INIS)

    Zhang Lisheng

    2004-01-01

    Some of the environmental challenges which the uranium production industry faces in the 21st century have been discussed in the paper. They are: the use of the linear non-threshold (LNT) model for radiation protection, the concept of 'controllable dose' as an alternative to the current International Commission on Radiological Protection (ICRP) system of dose limitation, the future of collective dose and the ALARA (As low As Reasonably Achievable) principle and the application of a risk-based framework for managing hazards. The author proposes that, the risk assessment/risk management framework could be used for managing the environmental, safety and decommissioning issues associated with the uranium fuel cycle. (author)

  15. Hormesis: Fact or fiction?

    International Nuclear Information System (INIS)

    Holzman, D.

    1995-01-01

    Bernard Cohen had not intended to foment revolution. To be sure, he had hoped that the linear, no-threshold (LNT) model of ionizing radiation's effects on humans would prove to be an exaggeration of reality at the low levels of radiation that one can measure in humans throughout the United States. His surprising conclusion, however, was that within the low dose ranges of radiation one receives in the home, the higher the dose, the less chance one had of contracting lung cancer. 1 fig., 1 tab

  16. Target Scattering Metrics: Model-Model and Model Data comparisons

    Science.gov (United States)

    2017-12-13

    The investigated metrics are intended to be suitable for input to classification schemes and are then applied to model-data comparisons. The targets include a stainless steel replica of an artillery shell; Table 7 of the report lists the targets used in the TIER simulations for the metrics study. Four potential metrics were investigated; one metric, based on 2D cross-correlation, is typically used in classification algorithms, and model-model comparisons are also made.

  17. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Re...
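
    The linear logistic model that recurs throughout this outline can be written, in standard notation not quoted from the book, as

        y_i \sim \mathrm{Binomial}(n_i, p_i), \qquad \operatorname{logit}(p_i) = \log\frac{p_i}{1-p_i} = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik}.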

  18. Modelling of Hydraulic Robot

    DEFF Research Database (Denmark)

    Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik

    1997-01-01

    This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...

  19. Automated data model evaluation

    International Nuclear Information System (INIS)

    Kazi, Zoltan; Kazi, Ljubica; Radulovic, Biljana

    2012-01-01

    The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding automation of the process of model correctness analysis, and its relations with ontology tools, have been presented. Key words: Database modeling, Data model correctness, Evaluation

  20. Elastic Appearance Models

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Fagertun, Jens; Larsen, Rasmus

    2011-01-01

    This paper presents a fusion of the active appearance model (AAM) and the Riemannian elasticity framework which yields a non-linear shape model and a linear texture model – the active elastic appearance model (EAM). The non-linear elasticity shape model is more flexible than the usual linear subs...

  1. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  2. "Bohr's Atomic Model."

    Science.gov (United States)

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  3. From Numeric Models to Granular System Modeling

    Directory of Open Access Journals (Sweden)

    Witold Pedrycz

    2015-03-01

    To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules, including fuzzy sets, rough sets, and interval calculus. Key architectures of models dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling, giving rise to granular models. In this regard, an important category of rule-based models along with their granular enrichments is studied in detail.

  4. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  5. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  6. Mathematical Modeling Using MATLAB

    National Research Council Canada - National Science Library

    Phillips, Donovan

    1998-01-01

    .... Mathematical Modeling Using MATLAB acts as a companion resource to A First Course in Mathematical Modeling with the goal of guiding the reader to a fuller understanding of the modeling process...

  7. Energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-01-01

    Full Text Available The construction industry has turned to energy modelling in order to assist it in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...

  8. Multivariate GARCH models

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...
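
    As background for the multivariate case (a standard textbook form, not taken from the article), the univariate GARCH(1,1) building block is

        \varepsilon_t = \sigma_t z_t, \quad z_t \sim \mathrm{iid}(0,1), \qquad \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,

    which multivariate GARCH models generalize by modelling the full conditional covariance matrix H_t.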

  9. N-Gram models

    NARCIS (Netherlands)

    Hiemstra, Djoerd; Liu, Ling; Tamer Özsu, M.

    2017-01-01

    In language modeling, n-gram models are probabilistic models of text that use some limited amount of history, or word dependencies, where n refers to the number of words that participate in the dependence relation.
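
    A minimal bigram (n = 2) illustration in Python of the maximum-likelihood estimate P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1}); the toy corpus is invented for the example:

        from collections import Counter

        def bigram_probabilities(tokens):
            """Maximum-likelihood bigram model: P(w2 | w1) = c(w1, w2) / c(w1)."""
            unigrams = Counter(tokens[:-1])
            bigrams = Counter(zip(tokens[:-1], tokens[1:]))
            return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

        corpus = "the cat sat on the mat the cat slept".split()
        probs = bigram_probabilities(corpus)
        print(probs[("the", "cat")])  # 2 counts of "the cat" / 3 counts of "the" = 0.666...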

  10. Business Model Canvas

    OpenAIRE

    Souza, D', Austin

    2013-01-01

    Presentation given on 13 May 2013 at the meeting "Business Model Canvas Challenge Assen". The Business Model Canvas was designed by Alex Osterwalder. The model is very clearly organized and consists of nine building blocks.

  11. Wildfire Risk Main Model

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...

  12. Lapse Rate Modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  13. Lapse rate modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    2010-01-01

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  14. Quintessence Model Building

    OpenAIRE

    Brax, P.; Martin, J.; Riazuelo, A.

    2001-01-01

    A short review of some of the aspects of quintessence model building is presented. We emphasize the role of tracking models and their possible supersymmetric origin.

  15. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  16. Overuse Injury Assessment Model

    National Research Council Canada - National Science Library

    Stuhmiller, James H; Shen, Weixin; Sih, Bryant

    2005-01-01

    .... Previously, we developed a preliminary model that predicted the stress fracture rate and used biomechanical modeling, nonlinear optimization for muscle force, and bone structural analysis to estimate...

  17. Multilevel modeling using R

    CERN Document Server

    Finch, W Holmes; Kelley, Ken

    2014-01-01

    A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
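
    The basic two-level (random-intercept) model around which such treatments are organised can be written, in generic notation rather than the book's, as

        y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + \varepsilon_{ij}, \qquad u_j \sim N(0, \tau^2), \quad \varepsilon_{ij} \sim N(0, \sigma^2),

    for observation i nested in group j.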

  18. Cosmological models without singularities

    International Nuclear Information System (INIS)

    Petry, W.

    1981-01-01

    A previously studied theory of gravitation in flat space-time is applied to homogeneous and isotropic cosmological models. There exist two different classes of models without singularities: (i) ever-expanding models, (ii) oscillating models. The first class contains models with a hot big bang. For these models there exist at the beginning of the universe - in contrast to Einstein's theory - very high but finite densities of matter and radiation, with a big bang of very short duration. After a short time these models pass into the homogeneous and isotropic models of Einstein's theory with spatial curvature equal to zero and cosmological constant Λ ≥ 0. (author)

  19. Validated dynamic flow model

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2011-01-01

    model structure suggested by the University of Lund, the WP4 leader. This particular model structure has the advantage that it fits better into the control design framework used by WP3-4 compared to the model structures previously developed in WP2. The different model structures are first summarised....... Then issues dealing with optimal experimental design are considered. Finally the parameters are estimated in the chosen static and dynamic models and a validation is performed. Two of the static models, one of them the additive model, explain the data well. In case of dynamic models the suggested additive...

  20. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  1. TRACKING CLIMATE MODELS

    Data.gov (United States)

    National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...

  2. Exogenous calcium alleviates low night temperature stress on the photosynthetic apparatus of tomato leaves.

    Directory of Open Access Journals (Sweden)

    Guoxian Zhang

    Full Text Available The effect of exogenous CaCl2 on photosystem I and II (PSI and PSII) activities, cyclic electron flow (CEF), and proton motive force of tomato leaves under low night temperature (LNT) was investigated. LNT stress decreased the net photosynthetic rate (Pn), effective quantum yield of PSII [Y(II)], and photochemical quenching (qP), whereas CaCl2 pretreatment improved Pn, Y(II), and qP under LNT stress. LNT stress significantly increased the non-regulatory quantum yield of energy dissipation [Y(NO)], whereas CaCl2 alleviated this increase. Exogenous Ca2+ enhanced stimulation of CEF by LNT stress. Inhibition of oxidized PQ pools caused by LNT stress was alleviated by CaCl2 pretreatment. LNT stress reduced zeaxanthin formation and ATPase activity, but CaCl2 pretreatment reversed both of these effects. LNT stress caused excess formation of a proton gradient across the thylakoid membrane, whereas CaCl2 pretreatment decreased the said factor under LNT. Thus, our results showed that photoinhibition of LNT-stressed plants could be alleviated by CaCl2 pretreatment. Our findings further revealed that this alleviation was mediated in part by improvements in carbon fixation capacity, PQ pools, linear and cyclic electron transports, xanthophyll cycles, and ATPase activity.

  3. Exogenous calcium alleviates low night temperature stress on the photosynthetic apparatus of tomato leaves.

    Science.gov (United States)

    Zhang, Guoxian; Liu, Yufeng; Ni, Yang; Meng, Zhaojuan; Lu, Tao; Li, Tianlai

    2014-01-01

    The effect of exogenous CaCl2 on photosystem I and II (PSI and PSII) activities, cyclic electron flow (CEF), and proton motive force of tomato leaves under low night temperature (LNT) was investigated. LNT stress decreased the net photosynthetic rate (Pn), effective quantum yield of PSII [Y(II)], and photochemical quenching (qP), whereas CaCl2 pretreatment improved Pn, Y(II), and qP under LNT stress. LNT stress significantly increased the non-regulatory quantum yield of energy dissipation [Y(NO)], whereas CaCl2 alleviated this increase. Exogenous Ca2+ enhanced stimulation of CEF by LNT stress. Inhibition of oxidized PQ pools caused by LNT stress was alleviated by CaCl2 pretreatment. LNT stress reduced zeaxanthin formation and ATPase activity, but CaCl2 pretreatment reversed both of these effects. LNT stress caused excess formation of a proton gradient across the thylakoid membrane, whereas CaCl2 pretreatment decreased the said factor under LNT. Thus, our results showed that photoinhibition of LNT-stressed plants could be alleviated by CaCl2 pretreatment. Our findings further revealed that this alleviation was mediated in part by improvements in carbon fixation capacity, PQ pools, linear and cyclic electron transports, xanthophyll cycles, and ATPase activity.

  4. Regularized Structural Equation Modeling

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
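
    In outline (a paraphrase; the paper's exact notation may differ), RegSEM minimises a penalised fit function

        F_{\mathrm{RegSEM}}(\theta) = F_{\mathrm{ML}}(\theta) + \lambda\, P(\theta),

    where F_ML is the usual maximum-likelihood SEM discrepancy, P(\theta) is a lasso (sum of |\theta_p|) or ridge (sum of \theta_p^2) penalty on the selected parameters, and \lambda controls the amount of shrinkage.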

  5. In vivo regulation of biochemical processes by reactive species

    International Nuclear Information System (INIS)

    Spasic, M.B.; Zunic, Z.; Vujin, S.

    1998-01-01

    For the regulation of exposure to low-level radiation, the so-called linear-no-threshold (LNT) model is usually employed. The premise of LNT is that there is no safe level of exposure. It is well established that ionizing radiation induces in an organism the appearance of free radicals and other reactive species, such as hydrated electrons and ions, due to ionization of the aqueous medium. Among its direct damaging effects on biomacromolecules, ionizing radiation induces free radical chain reactions which lead to the appearance of non-functional derivatized molecules and thus to disturbed physiological functions. Overwhelming the capacity for preventing the formation of such damaged molecules and for their elimination results in pathological changes and a fatal outcome, and this represents the molecular basis for the LNT interpretation of ionizing radiation effects. Redox reactions in aerobes are usually connected to molecular oxygen reduction in the process of oxidative phosphorylation and during xenobiotic detoxification (mixed-function microsomal oxidases, cytochrome P450). This fact, together with the chemical characteristics of these reactions (production of reactive and free radical intermediates) and their strict cellular compartmentalization, can explain the limited knowledge of their possible role in signal transduction. Recognition of the physiological role of the free radical nitric oxide (NO) as a regulator of soluble guanylyl cyclase (sGC) activity, and thus of the production of cGMP (a second messenger), led to intensified studies on the role of redox processes in signal transduction. In this paper we summarize evidence for a direct regulatory role of reactive oxygen species arising from low-dose ionising radiation, through their interaction with regulatory molecules such as sGC, tyrosine kinases or nuclear factor-kB, and discuss the possible consequences of such interactions for an organism. (author)

  6. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  7. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  8. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forest) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that overall prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  9. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  10. Biosphere Model Report

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  11. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  12. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  13. Lumped Thermal Household Model

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob

    2013-01-01

    a lumped model approach as an alternative to the individual models. In the lumped model, the portfolio is seen as baseline consumption superimposed with an ideal storage of limited power and energy capacity. The benefit of such a lumped model is that the computational effort of flexibility optimization...
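
    The "baseline plus ideal storage" view can be stated compactly (generic notation, not the paper's): with baseline consumption P^{base}_t and storage power p_t,

        P_t = P^{\mathrm{base}}_t + p_t, \qquad |p_t| \le \bar{p}, \qquad 0 \le E_t \le \bar{E}, \qquad E_{t+1} = E_t + p_t\,\Delta t,

    so the portfolio's flexibility is summarised by the power limit \bar{p} and energy capacity \bar{E} of the equivalent storage.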

  14. The Moody Mask Model

    DEFF Research Database (Denmark)

    Larsen, Bjarke Alexander; Andkjær, Kasper Ingdahl; Schoenau-Fog, Henrik

    2015-01-01

    This paper proposes a new relation model, called "The Moody Mask model", for Interactive Digital Storytelling (IDS), based on Francesco Osborne's "Mask Model" from 2011. This, mixed with some elements from Chris Crawford's Personality Models, is a system designed for dynamic interaction between ch...

  15. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS...

  16. AIDS Epidemiological models

    Science.gov (United States)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by ISHAM [6].
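
    As a generic illustration of the kind of transmission model referred to (not the specific models of the paper), a minimal susceptible-infected system reads

        \frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad \frac{dI}{dt} = \beta \frac{S I}{N} - \mu I,

    with transmission rate \beta, removal rate \mu and population size N.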

  17. Numerical Modelling of Streams

    DEFF Research Database (Denmark)

    Vestergaard, Kristian

    In recent years there has been a sharp increase in the use of numerical water quality models. Numeric water quality modeling can be divided into three steps: Hydrodynamic modeling for the determination of stream flow and water levels. Modelling of transport and dispersion of a conservative...

  18. A Model for Conversation

    DEFF Research Database (Denmark)

    Ayres, Phil

    2012-01-01

    This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...

  19. Generic Market Models

    NARCIS (Netherlands)

    R. Pietersz (Raoul); M. van Regenmortel

    2005-01-01

    textabstractCurrently, there are two market models for valuation and risk management of interest rate derivatives, the LIBOR and swap market models. In this paper, we introduce arbitrage-free constant maturity swap (CMS) market models and generic market models featuring forward rates that span

  20. Modeling the Accidental Deaths

    Directory of Open Access Journals (Sweden)

    Mariyam Hafeez

    2008-01-01

    Full Text Available The model for accidental deaths in the city of Lahore has been developed using a class of Generalized Linear Models. Various link functions have been used in developing the model. Diagnostic checks have been carried out to assess the validity of the fitted model.
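
    A typical member of this class (illustrative; the link actually chosen for the Lahore data is not stated here) is the Poisson log-linear model

        Y_i \sim \mathrm{Poisson}(\mu_i), \qquad g(\mu_i) = \log \mu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta},

    with alternative link functions g compared as part of model development.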

  1. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant...geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This...system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  2. Modelling Railway Interlocking Systems

    DEFF Research Database (Denmark)

    Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth

    2000-01-01

    In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...

  3. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  4. Comparing Active Vision Models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  5. Comparing active vision models

    NARCIS (Netherlands)

    Croon, G.C.H.E. de; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.

    2009-01-01

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different

  6. White Paper on Modelling

    NARCIS (Netherlands)

    Van Bloemendaal, Karen; Dijkema, Gerard P.J.; Woerdman, Edwin; Jong, Mattheus

    2015-01-01

    This White Paper provides an overview of the modelling approaches adopted by the project partners in the EDGaR project 'Understanding Gas Sector Intra- and Inter- Market interactions' (UGSIIMI). The paper addresses three types of models: complementarity modelling, agent-based modelling and property

  7. A Model for Conversation

    DEFF Research Database (Denmark)

    Ayres, Phil

    2012-01-01

    This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...... of design. Three distinctions are drawn through which to develop this discussion of models in an architectural context. An examination of these distinctions serves to nuance particular characteristics and roles of models, the modelling activity itself and those engaged in it....

  8. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...

  9. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...

  10. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples Introduction Examples of data sets Basic Model Fitting Introduction Maximum-likelihood estimation for a geometric model Maximum-likelihood for the beta-geometric model Modelling polyspermy Which model? What is a model for? Mechanistic models Function Optimisation Introduction MATLAB: graphs and finite differences Deterministic search methods Stochastic search methods Accuracy and a hybrid approach Basic Likelihood ToolsIntroduction Estimating standard errors and correlations Looking at surfaces: profile log-likelihoods Confidence regions from profiles Hypothesis testing in model selectionScore and Wald tests Classical goodness of fit Model selection biasGeneral Principles Introduction Parameterisation Parameter redundancy Boundary estimates Regression and influence The EM algorithm Alternative methods of model fitting Non-regular problemsSimulation Techniques Introduction Simulating random variables Integral estimation Verification Monte Carlo inference Estimating sampling distributi...

  11. The Hospitable Meal Model

    DEFF Research Database (Denmark)

    Justesen, Lise; Overgaard, Svend Skafte

    2017-01-01

    -ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored......This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model...... that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open...

  12. Calibrated Properties Model

    Energy Technology Data Exchange (ETDEWEB)

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  13. Calibrated Properties Model

    Energy Technology Data Exchange (ETDEWEB)

    C.F. Ahlers, H.H. Liu

    2001-12-18

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  14. The Hospitable Meal Model

    DEFF Research Database (Denmark)

    Justesen, Lise; Overgaard, Svend Skafte

    2017-01-01

    This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored...

  15. MulensModel: Microlensing light curves modeling

    Science.gov (United States)

    Poleski, Radoslaw; Yee, Jennifer

    2018-03-01

    MulensModel calculates light curves of microlensing events. Both single and binary lens events are modeled and various higher-order effects can be included: extended source (with limb-darkening), annual microlensing parallax, and satellite microlensing parallax. The code is object-oriented and written in Python3, and requires AstroPy (ascl:1304.002).
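
    As a rough standalone illustration of the quantity such a package computes (this sketch does not use the MulensModel API; the event parameters t_0, u_0 and t_E are arbitrary assumptions), the point-source point-lens magnification follows the standard Paczynski form:

        import numpy as np

        # Assumed example parameters: time of peak, impact parameter, Einstein timescale (days)
        t0, u0, tE = 2457500.0, 0.1, 25.0
        t = np.linspace(t0 - 3 * tE, t0 + 3 * tE, 1001)

        # Lens-source separation in Einstein radii and the resulting magnification A(u)
        u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
        A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))

        print(A.max())   # peak magnification, roughly 1/u0 for small u0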

  16. Business Models and Business Model Innovation

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Saebi, Tina

    2018-01-01

    While research on business models and business model innovation continues to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...

  17. EMC Simulation and Modeling

    Science.gov (United States)

    Takahashi, Takehiro; Schibuya, Noboru

    EMC simulation is now widely used in the design stage of electronic equipment to reduce electromagnetic noise. As the electromagnetic behaviors calculated by the EMC simulator depend on the input EMC model of the equipment, the modeling technique is important for obtaining effective results. In this paper, a simple outline of the EMC simulator and the EMC model is given. Some modeling techniques for EMC simulation are also described using an example EMC model, a shielded box with an aperture.

  18. Phenomenology of inflationary models

    Science.gov (United States)

    Olyaei, Abbas

    2018-01-01

    There are many inflationary models compatible with observational data. One can investigate inflationary models by looking at their general features, which are common to most of the models. Here we have investigated some of the single-field models, without considering their origin, in order to find their phenomenology. We have shown how to adjust the simple harmonic oscillator model so that it is in good agreement with observational data.

  19. Multilevel statistical models

    CERN Document Server

    Goldstein, Harvey

    2011-01-01

    This book provides a clear introduction to this important area of statistics. The author provides wide coverage of different kinds of multilevel models, and of how to interpret different statistical methodologies and algorithms applied to such models. This 4th edition reflects the growth and interest in this area and is updated to include new chapters on multilevel models with mixed response types, smoothing and multilevel data, models with correlated random effects and modeling with variance.

  20. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    ...parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  1. Geochemistry Model Validation Report: External Accumulation Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Zarrabi

    2001-09-27

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  2. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  3. Crop rotation modelling - A European model intercomparison

    DEFF Research Database (Denmark)

    Kollas, Chris; Kersebaum, Kurt C; Nendel, Claas

    2015-01-01

    crop growth simulation models to predict yields in crop rotations at five sites across Europe under minimal calibration. Crop rotations encompassed 301 seasons of ten crop types common to European agriculture and a diverse set of treatments (irrigation, fertilisation, CO2 concentration, soil types...... accurately than main crops (cereals). The majority of models performed better for the treatments of increased CO2 and nitrogen fertilisation than for irrigation and soil-related treatments. The yield simulation of the multi-model ensemble reduced the error compared to single-model simulations. The low degree...... representation of crop rotations, further research is required to synthesise existing knowledge of the physiology of intermediate crops and of carry-over effects from the preceding to the following crop, and to implement/improve the modelling of processes that condition these effects....

  4. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the six-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard four-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture Model (HEM) that we present deals with both mixture and relative quantities, allowing in particular the tracking of both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
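
    For orientation only (these are generic mixture definitions consistent with the abstract, not equations quoted from the report; the volume fractions \alpha_k are an assumed notation), the mixture and relative quantities such a model follows can be written in LaTeX as

        \rho = \alpha_1 \rho_1 + \alpha_2 \rho_2, \qquad
        \rho\, u = \alpha_1 \rho_1 u_1 + \alpha_2 \rho_2 u_2, \qquad
        u_r = u_1 - u_2 ,

    where the relative velocity u_r is not advanced by its own conservation law but is supplied by an algebraic drift closure, u_r = f(drag, mixture state), reflecting the small-but-finite drag between the phases.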

  5. Model Reduction in Groundwater Modeling and Management

    Science.gov (United States)

    Siade, A. J.; Kendall, D. R.; Putti, M.; Yeh, W. W.

    2008-12-01

    Groundwater management requires the development and implementation of mathematical models that, through simulation, evaluate the effects of anthropogenic impacts on an aquifer system. To obtain high levels of accuracy, one must incorporate high levels of complexity, resulting in computationally demanding models. This study provides a methodology for solving groundwater management problems with reduced computational effort by replacing the large, complex numerical model with a significantly smaller, simpler approximation. This is achieved via Proper Orthogonal Decomposition (POD), where the goal is to project the larger model solution space onto a smaller or reduced subspace in which the management problem will be solved, achieving reductions in computation time of up to three orders of magnitude. Once the solution is obtained in the reduced space with acceptable accuracy, it is then projected back to the full model space. A major challenge when using this method is the definition of the reduced solution subspace. In POD, this subspace is defined based on samples or snapshots taken at specific times from the solution of the full model. In this work we determine when snapshots should be taken on the basis of the exponential behavior of the governing partial differential equation. This selection strategy is then generalized for any groundwater model by obtaining and using the optimal snapshot selection for a simplified, dimensionless model. Protocols are developed to allow the snapshot selection results of the simplified, dimensionless model to be transferred to that of a complex, heterogeneous model with any geometry. The proposed methodology is finally applied to a basin in the Oristano Plain located in the Sardinia Island, Italy.
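
    A generic sketch of snapshot-based POD reduction (not the authors' code; the 1D diffusion stand-in, grid size, snapshot spacing and number of retained modes are all assumptions) illustrates the projection step described above:

        import numpy as np

        # Full model: implicit Euler for a 1D diffusion problem (stand-in for a groundwater model)
        n, dt, nsteps = 200, 1e-3, 400
        x = np.linspace(0.0, 1.0, n)
        lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2
        M = np.eye(n) - dt * lap                      # (I - dt*A) h_{k+1} = h_k

        h0 = np.exp(-100 * (x - 0.3) ** 2)            # initial head perturbation
        h, snapshots = h0.copy(), []
        for k in range(nsteps):
            h = np.linalg.solve(M, h)
            if k % 10 == 0:                           # snapshot selection (here: uniform in time)
                snapshots.append(h.copy())

        # POD basis from the SVD of the snapshot matrix; keep r modes
        U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
        Phi = U[:, :10]

        # Reduced model: Galerkin projection of the same implicit step onto the POD subspace
        Mr = Phi.T @ M @ Phi                          # 10 x 10 instead of 200 x 200
        a = Phi.T @ h0
        for k in range(nsteps):
            a = np.linalg.solve(Mr, a)

        h_reduced = Phi @ a                           # project back to the full space
        print(np.linalg.norm(h - h_reduced) / np.linalg.norm(h))   # relative error of reduced model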

  6. Specific Activation of the Alternative Cardiac Promoter of Cacna1c by the Mineralocorticoid Receptor.

    Science.gov (United States)

    Mesquita, Thassio R; Auguste, Gaelle; Falcón, Debora; Ruiz-Hurtado, Gema; Salazar-Enciso, Rogelio; Sabourin, Jessica; Lefebvre, Florence; Viengchareun, Say; Kobeissy, Hussein; Lechêne, Patrick; Nicolas, Valerie; Fernández-Celis, Amaya; Gomez, Susana; Lauton-Santos, Sandra; Morel, Eric; Rueda, Angelica; López-Andrés, Natalia; Gomez, Ana M; Lombes, Marc; Benitah, Jean-Pierre

    2018-02-21

    Rationale: The mineralocorticoid receptor (MR) antagonists belong to the current therapeutic armamentarium for the management of cardiovascular diseases, but the mechanisms conferring their beneficial effects are poorly understood. Part of the cardiovascular effects of MR is due to the regulation of L-type Cav1.2 Ca2+ channel expression, which is generated by tissue-specific alternative promoters as a long 'cardiac' (Cav1.2-LNT) or a short 'vascular' (Cav1.2-SNT) N-terminal transcript. Objective: To analyze the molecular mechanisms by which aldosterone, through MR, modulates Cav1.2 expression and function in a tissue-specific manner. Methods and Results: In primary cultures of neonatal rat ventricular myocytes, aldosterone exposure for 24 hours increased Cav1.2-LNT expression in a concentration-dependent manner at both mRNA and protein levels, correlating with enhanced concentration-, time- and MR-dependent P1-promoter activity. In silico analysis and mutagenesis identified MR interaction with both specific activating and repressing DNA binding elements on the P1-promoter. The relevance of this regulation is confirmed both ex vivo and in vivo in transgenic mice harboring the luciferase reporter gene under the control of the 'cardiac' P1-promoter. Moreover, we show that this cis-regulatory mechanism is not limited to the heart. Indeed, in smooth muscle cells from different vascular beds, in which the Cav1.2-SNT is normally the major isoform, we found that MR signaling activates 'cardiac' Cav1.2-LNT expression through P1-promoter activation, leading to vascular contractile dysfunction. These results were further corroborated in hypertensive aldosterone-salt rodent models, showing notably a positive correlation between blood pressure and 'cardiac' P1-promoter activity in aorta. This new vascular Cav1.2-LNT molecular signature reduced sensitivity to the Ca2+ channel blocker nifedipine in aldosterone-treated vessels. Conclusions: Our results reveal that

  7. Implementation of a Web-Based Organ Donation Educational Intervention: Development and Use of a Refined Process Evaluation Model.

    Science.gov (United States)

    Redmond, Nakeva; Harker, Laura; Bamps, Yvan; Flemming, Shauna St Clair; Perryman, Jennie P; Thompson, Nancy J; Patzer, Rachel E; Williams, Nancy S DeSousa; Arriola, Kimberly R Jacob

    2017-11-30

    The lack of available organs is often considered to be the single greatest problem in transplantation today. Internet use is at an all-time high, creating an opportunity to increase public commitment to organ donation through the broad reach of Web-based behavioral interventions. Implementing Internet interventions, however, presents challenges including preventing fraudulent respondents and ensuring intervention uptake. Although Web-based organ donation interventions have increased in recent years, process evaluation models appropriate for Web-based interventions are lacking. The aim of this study was to describe a refined process evaluation model adapted for Web-based settings and used to assess the implementation of a Web-based intervention aimed to increase organ donation among African Americans. We used a randomized pretest-posttest control design to assess the effectiveness of the intervention website that addressed barriers to organ donation through corresponding videos. Eligible participants were African American adult residents of Georgia who were not registered on the state donor registry. Drawing from previously developed process evaluation constructs, we adapted reach (the extent to which individuals were found eligible, and participated in the study), recruitment (online recruitment mechanism), dose received (intervention uptake), and context (how the Web-based setting influenced study implementation) for Internet settings and used the adapted model to assess the implementation of our Web-based intervention. With regard to reach, 1415 individuals completed the eligibility screener; 948 (67.00%) were determined eligible, of whom 918 (96.8%) completed the study. After eliminating duplicate entries (n=17), those who did not initiate the posttest (n=21) and those with an invalid ZIP code (n=108), 772 valid entries remained. Per the Internet protocol (IP) address analysis, only 23 of the 772 valid entries (3.0%) were within Georgia, and only 17 of those

  8. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  9. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  10. Modeling for Battery Prognostics

    Science.gov (United States)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness, due to the approximations used in the developed model, and as a result of such approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, and is computationally efficient
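
    As a minimal sketch of the empirical (equivalent-circuit) end of this spectrum (all parameter values, the OCV curve and the 3.0 V cutoff are illustrative assumptions, not values from the paper), end of discharge can be predicted by integrating a one-RC Thevenin model:

        import numpy as np

        # Illustrative one-RC Thevenin equivalent-circuit parameters (assumed, not measured)
        Q      = 2.2 * 3600        # capacity [A*s]
        R0     = 0.05              # ohmic resistance [ohm]
        R1, C1 = 0.02, 2000.0      # RC pair modelling slow polarisation dynamics
        I      = 2.0               # constant discharge current [A]
        dt     = 1.0               # time step [s]

        def ocv(soc):              # crude assumed open-circuit-voltage curve
            return 3.0 + 1.2 * soc

        soc, v1, t = 1.0, 0.0, 0.0
        while True:
            soc -= I * dt / Q                         # Coulomb counting
            v1  += dt * (I / C1 - v1 / (R1 * C1))     # RC branch voltage
            v    = ocv(soc) - I * R0 - v1             # terminal voltage
            t   += dt
            if v <= 3.0 or soc <= 0.0:                # end-of-discharge threshold (assumed)
                break
        print("predicted time to EOD: %.0f s" % t)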

  11. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria...... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples....
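
    One common criterion for this task (offered here only as an illustrative sketch; the abstract does not name its eight criteria, and the data, component range and scoring choice below are assumptions) is cross-validated prediction error as a function of the number of components:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        T = rng.normal(size=(100, 3))                       # latent scores: true dimension is 3
        X = T @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(100, 20))
        y = T @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

        scores = []
        for k in range(1, 11):                              # candidate model dimensions
            model = make_pipeline(PCA(n_components=k), LinearRegression())
            scores.append(cross_val_score(model, X, y, cv=5).mean())

        print("chosen dimension:", int(np.argmax(scores)) + 1)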

  12. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights to important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  13. Modeling volatility using state space models.

    Science.gov (United States)

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
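
    A minimal sketch of the kind of model described (a linear Gaussian state space with an AR(1) hidden log-volatility plus observational noise, filtered with a Kalman filter; all parameter values and the simulated data are assumptions, not the paper's estimates):

        import numpy as np

        rng = np.random.default_rng(0)
        a, q, r, T = 0.98, 0.02, 0.5, 2000   # AR(1) coefficient, dynamic and observational noise variances

        # Simulate hidden log-volatility x and noisy observations y (e.g. log squared returns)
        x, y = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
            y[t] = x[t] + np.sqrt(r) * rng.normal()

        # Kalman filter separating dynamic from observational noise
        xf, P, est = 0.0, 1.0, np.zeros(T)
        for t in range(T):
            xp, Pp = a * xf, a * a * P + q                   # predict
            K = Pp / (Pp + r)                                # Kalman gain
            xf, P = xp + K * (y[t] - xp), (1 - K) * Pp       # update
            est[t] = xf

        print("relaxation time ~ %.1f steps" % (-1.0 / np.log(a)))   # shock decay implied by a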

  14. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  15. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  16. Models of light nuclei

    International Nuclear Information System (INIS)

    Harvey, M.; Khanna, F.C.

    1975-01-01

    The general problem of what constitutes a physical model and what is known about the free nucleon-nucleon interaction are considered. A time independent formulation of the basic equations is chosen. Construction of the average field in which particles move in a general independent particle model is developed, concentrating on problems of defining the average spherical single particle field for any given nucleus, and methods for construction of effective residual interactions and other physical operators. Deformed shell models and both spherical and deformed harmonic oscillator models are discussed in detail, and connections between spherical and deformed shell models are analyzed. A section on cluster models is included. 11 tables, 21 figures

  17. Building Thermal Models

    Science.gov (United States)

    Peabody, Hume L.

    2017-01-01

    This presentation is meant to be an overview of the model building process. It is based on typical techniques (Monte Carlo Ray Tracing for radiation exchange, Lumped Parameter, Finite Difference for thermal solution) used by the aerospace industry. This is not intended to be a "How to Use ThermalDesktop" course. It is intended to be a "How to Build Thermal Models" course, and the techniques will be demonstrated using the capabilities of ThermalDesktop (TD). Other codes may or may not have similar capabilities. The General Model Building Process can be broken into four top-level steps: 1. Build Model; 2. Check Model; 3. Execute Model; 4. Verify Results.

  18. Microsoft tabular modeling cookbook

    CERN Document Server

    Braak, Paul te

    2013-01-01

    This book follows a cookbook style with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling

  19. Five models of capitalism

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Bresser-Pereira

    2012-03-01

    Full Text Available Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classification that has a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.

  20. Holographic twin Higgs model.

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-15

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.