Directory of Open Access Journals (Sweden)
Tengiz Mdzinarishvili
2009-12-01
Full Text Available A simple, computationally efficient procedure for analyzing the time period and birth cohort effects on the distribution of the age-specific incidence rates of cancers is proposed. Assuming that cohort effects for neighboring cohorts are almost equal and using the Log-Linear Age-Period-Cohort Model, this procedure allows one to evaluate temporal trends and birth cohort variations of any type of cancer without prior knowledge of the hazard function. This procedure was used to estimate the influence of time period and birth cohort effects on the distribution of the age-specific incidence rates of first primary, microscopically confirmed lung cancer (LC) cases from the SEER9 database. It was shown that since 1975, the time period effect coefficients for men increase up to 1980 and then decrease until 2004. For women, these coefficients increase from 1975 up to 1990 and then remain nearly constant. The LC birth cohort effect coefficients for men and women increase from the cohort of 1890–94 until the cohort of 1925–29, then decrease until the cohort of 1950–54 and then remain almost unchanged. Overall, LC incidence rates, adjusted by period and cohort effects, increase up to the age of about 72–75, turn over, and then fall after the age of 75–78. The peak of the adjusted rates in men is around the age of 77–78, while in women, it is around the age of 72–73. Therefore, these results suggest that the age distribution of the incidence rates in men and women falls at old ages.
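The adjustment described above removes multiplicative period and cohort effects from observed age-specific rates under a log-linear APC model. A minimal sketch, with made-up coefficient values (the function name and numbers are illustrative, not from the paper):

```python
import math

# Under a log-linear APC model, log(rate) = age + period + cohort effects,
# so the age component is recovered by dividing out the period and cohort
# multipliers. All numbers below are hypothetical.

def adjust_rate(observed_rate, period_coef, cohort_coef):
    """Remove multiplicative period and cohort effects (given on the log scale)."""
    return observed_rate / (math.exp(period_coef) * math.exp(cohort_coef))

# E.g. a rate of 50 per 100,000 observed under a period effect of +0.2
# and a cohort effect of -0.1 (log scale):
adjusted = adjust_rate(50.0, 0.2, -0.1)  # 50 / exp(0.1), slightly below 50
```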
Bayesian Age-Period-Cohort Modeling and Prediction - BAMP
Directory of Open Access Journals (Sweden)
Volker J. Schmid
2007-10-01
Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period, and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC, and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
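The second-order random walk (RW2) prior mentioned above penalizes second differences of the effect vector, which corresponds to a Gaussian prior with structure matrix K = DᵀD, where D is the second-difference matrix. A pure-Python sketch of that structure matrix (the function name is ours, not BAMP's):

```python
# RW2 prior: x[t] - 2*x[t-1] + x[t-2] ~ Normal(0, sigma^2), i.e. a
# Gaussian prior with precision proportional to K = D'D, where D holds
# [1, -2, 1] second-difference stencils.

def rw2_penalty(n):
    """Return the n x n RW2 structure matrix K = D'D as nested lists."""
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    # K = D^T D, computed entry by entry
    K = [[sum(D[r][i] * D[r][j] for r in range(n - 2)) for j in range(n)]
         for i in range(n)]
    return K

K = rw2_penalty(6)
# Corner diagonal entries are 1, interior diagonal entries are 6,
# and every row sums to zero (the prior is improper: level and slope are free).
```

The zero row sums are what make intercept and linear trend unidentified under this prior, mirroring the classic APC identification issue.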
Bayesian Age-Period-Cohort Model of Lung Cancer Mortality
Directory of Open Access Journals (Sweden)
Bhikhari P. Tharu
2015-09-01
Full Text Available Background The objective of this study was to analyze the time trend for lung cancer mortality in the population of the USA in 5-year intervals, based on the most recent available data, namely up to 2010. Knowledge of temporal trends in mortality rates is necessary to understand the cancer burden. Methods A Bayesian Age-Period-Cohort model was fitted using Poisson regression with a histogram smoothing prior to decompose mortality rates based on age at death, period at death, and birth cohort. Results Mortality rates from lung cancer increased more rapidly from age 52 years, reaching about 325 deaths annually at age 82 on average. The mortality of younger cohorts was lower than that of older cohorts. The risk of lung cancer declined from the 1993 period onward. Conclusions The fitted Bayesian Age-Period-Cohort model with a histogram smoothing prior is capable of explaining the mortality rate of lung cancer. The reduction in carcinogens in cigarettes and the increase in smoking cessation from around 1960 might have led to the decreasing trend of lung cancer mortality after calendar period 1993.
Decomposable log-linear models
DEFF Research Database (Denmark)
Eriksen, Poul Svante
The present paper considers discrete probability models with exact computational properties. In relation to contingency tables this means closed-form expressions of the maximum likelihood estimate and its distribution. The model class includes what is known as decomposable graphical models, which can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non-hierarchical models, models with structural zeroes, models described by quasi independence, and models for level merging. Also, they have a very natural interpretation, as they may be formulated by a structured set of conditional independencies between two events given some other event.
Mixed models, linear dependency, and identification in age-period-cohort models.
O'Brien, Robert M
2017-07-20
This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
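The linear dependency behind the identification problem is easy to demonstrate: with linear coding, birth cohort equals period minus age, so the three regressors are exactly collinear. A tiny sketch with made-up data:

```python
# The APC identification problem in one line of arithmetic:
# cohort (birth year) = period (calendar year) - age, so the three
# linearly coded regressors satisfy period - age - cohort == 0 exactly,
# and a fixed-effects design matrix containing all three is rank deficient.

ages = [40, 50, 60, 40, 50, 60]
periods = [1990, 1990, 1990, 2000, 2000, 2000]
cohorts = [p - a for p, a in zip(periods, ages)]  # e.g. 1990 - 40 = 1950

# The exact linear dependency among the columns:
dependency = [p - a - c for p, a, c in zip(periods, ages, cohorts)]  # all zeros
```

Any single just-identifying constraint (e.g. equating two coefficients) breaks this dependency, but different constraints yield different effect estimates, which is the paper's point.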
Maternal mortality in Mexico, beyond millennial development objectives: An age-period-cohort model.
Rodríguez-Aguilar, Román
2018-01-01
The maternal mortality situation is analyzed in Mexico as an indicator that reflects the social development level of the country and was one of the millennial development objectives. The effect of a maternal death on the related social group is multiplied, since it involves family dislocation, economic impact, and disruption of the orphans' normal social development. The causes of maternal mortality were analyzed from two perspectives: on the one hand, their relationship with social determinants and, on the other, factors directly related to the health system. Evidence shows that when populations are compared based on a group of selected variables according to social conditions and health care access, statistically significant differences prevail according to education and marginalization levels and access to medical care. In addition, the fitted Age-Period-Cohort model shows significant progress in terms of a downward trend in maternal mortality at the generational level. Women born before 1980 had a greater probability of maternal death relative to recent generations, which reflects the improvement in social determinants and in the Health System. The age effect shows a problem of maternal mortality in women under 15 years old, so teen pregnancy is a health priority and must be addressed in the short term. There is no clear evidence of a period effect.
Patterns of lung cancer mortality in 23 countries: Application of the Age-Period-Cohort model
Directory of Open Access Journals (Sweden)
Huang Yi-Chia
2005-03-01
Full Text Available Abstract Background Smoking habits do not seem to be the main explanation of the epidemiological characteristics of female lung cancer mortality in Asian countries. However, Asian countries are often excluded from studies of geographical differences in trends for lung cancer mortality. We thus examined lung cancer trends from 1971 to 1995 among men and women for 23 countries, including four in Asia. Methods International and national data were used to analyze lung cancer mortality from 1971 to 1995 in both sexes. Age-standardized mortality rates (ASMR) were analyzed in five consecutive five-year periods and for each five-year age group in the age range 30 to 79. The age-period-cohort (APC) model was used to estimate the period effect (adjusted for age and cohort effects) for mortality from lung cancer. Results The sex ratio of the ASMR for lung cancer was lower in Asian countries, while the sex ratio of smoking prevalence was higher in Asian countries. The mean values of the sex ratio of the ASMR from lung cancer in Taiwan, Hong Kong, Singapore, and Japan over the five 5-year periods were 2.10, 2.39, 3.07, and 3.55, respectively. These values not only remained quite constant over each five-year period, but were also lower than those seen in the western countries. The period effect for lung cancer mortality, as derived for the 23 countries from the APC model, could be classified into seven patterns. Conclusion Period effects for both men and women in 23 countries, as derived using the APC model, could be classified into seven patterns. Four Asian countries have a relatively low sex ratio in lung cancer mortality and a relatively high sex ratio in smoking prevalence. Factors other than smoking might be important, especially for women in Asian countries.
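The ASMRs compared across countries above come from direct age standardization: age-specific rates weighted by a fixed standard population. A minimal sketch with illustrative numbers (not from the paper):

```python
# Direct age standardisation: an ASMR is the weighted average of
# age-specific mortality rates, with weights from a fixed standard
# population, making rates comparable across countries and periods.

def asmr(age_specific_rates, standard_weights):
    """Age-standardised mortality rate: weighted mean of age-specific rates."""
    total = sum(standard_weights)
    return sum(r * w for r, w in zip(age_specific_rates, standard_weights)) / total

rates = [5.0, 20.0, 80.0]    # deaths per 100,000 in three age bands (made up)
weights = [0.5, 0.3, 0.2]    # standard population shares
standardised = asmr(rates, weights)  # 0.5*5 + 0.3*20 + 0.2*80 = 24.5
```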
Age-period-cohort modelling of breast cancer incidence in the Nordic countries
DEFF Research Database (Denmark)
Rostgaard, K; Vaeth, M; Holst, H
2001-01-01
… into account. Assuming the age dependency of the incidence pattern in old age to be common for the Nordic countries, an internal comparison of the cohort effects and the period effects could be made among the four countries. The study indicated that the period effects have been of importance for the increase in breast cancer incidence seen in the Nordic countries. The widespread practice of neglecting the period effects in age-period-cohort analysis of time trends in breast cancer incidence therefore probably needs reconsideration. A key finding was that Danish women born in the 20th century seem to have been exposed to an increasing load of cohort-borne breast cancer risk factors not experienced to the same extent by Norwegian women, whereas they were seemingly subjected to the same period effects.
Rousselière, Damien; Rousselière, Samira
2017-08-01
The study of European attitudes toward biotechnologies reveals a rather contrasted situation across Europe. However, as different time effects can influence social attitudes (a life-cycle effect, a generational effect, and an exogenous temporal effect potentially affecting the entire population), an appropriate methodology should be used. To this end, age-period-cohort-country models have been estimated based on Eurobarometer data from 1991 onward. Applied to different data subsets, these models give similar results, underlining the importance of life-cycle effects as well as the heterogeneity of the link between political affiliation and attitudes toward biotechnologies across the European countries.
Modelling regional variation of first-time births in Denmark 1980-1994 by an age-period-cohort model
DEFF Research Database (Denmark)
Thygesen, Lau Caspar; Knudsen, Lisbeth B.; Keiding, Niels
2005-01-01
Despite the small size of Denmark, there have traditionally been rather consistent regional differences in fertility rates. We apply the statistical age-period-cohort model to include the effect of these three time-related factors, thereby concisely illuminating the regional differences of first-time births in Denmark. From the Fertility of Women and Couples Dataset we obtain data on the number of births by nulliparous women by year (1980-1994), age (15-45), and county of residence. We show that the APC model describes the fertility rates of nulliparous women satisfactorily. To catch the regional variation, an interaction parameter between age and county is necessary, which provides a surprisingly good description, suggesting that the county-specific age distributions of first-time fertility rates differ. Our results are in general agreement with the 'moral geography' concepts of Tonboe (2001).
Modelling regional variation of first-time births in Denmark 1980-1994 by an age-period-cohort model
Directory of Open Access Journals (Sweden)
Lisbeth B. Knudsen
2005-12-01
Full Text Available Despite the small size of Denmark, there have traditionally been rather consistent regional differences in fertility rates. We apply the statistical age-period-cohort model to include the effect of these three time-related factors, thereby concisely illuminating the regional differences of first-time births in Denmark. From the Fertility of Women and Couples Dataset we obtain data on the number of births by nulliparous women by year (1980-1994), age (15-45), and county of residence. We show that the APC model describes the fertility rates of nulliparous women satisfactorily. To catch the regional variation, an interaction parameter between age and county is necessary, which provides a surprisingly good description, suggesting that the county-specific age distributions of first-time fertility rates differ. Our results are in general agreement with the 'moral geography' concepts of Tonboe (2001).
Ordinal Log-Linear Models for Contingency Tables
Directory of Open Access Journals (Sweden)
Brzezińska Justyna
2016-12-01
Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the association among categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: the linear-by-linear association model, the row effect model, the column effect model, and Goodman's RC model. The algorithm, its advantages, and its disadvantages are discussed in the paper. An empirical analysis is conducted with the use of R.
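The linear-by-linear association model named above has a clean structure: expected counts are mu[i][j] = exp(lambda + row_i + col_j + beta * u_i * v_j), and with unit-spaced scores every local (adjacent 2x2) odds ratio equals exp(beta). A sketch with made-up parameter values:

```python
import math

# Linear-by-linear association model: a single parameter beta captures
# the ordinal association, and with unit-spaced row scores u and column
# scores v, every local odds ratio equals exp(beta). Parameters below
# are illustrative, not fitted values.

def expected_counts(row, col, u, v, beta, lam=0.0):
    """Expected cell counts mu[i][j] = exp(lam + row[i] + col[j] + beta*u[i]*v[j])."""
    return [[math.exp(lam + r + c + beta * ui * vj) for c, vj in zip(col, v)]
            for r, ui in zip(row, u)]

mu = expected_counts(row=[0.1, 0.4, 0.2], col=[0.3, 0.0],
                     u=[1, 2, 3], v=[1, 2], beta=0.5)

# Local odds ratio for the top-left 2x2 block:
local_or = (mu[0][0] * mu[1][1]) / (mu[0][1] * mu[1][0])  # equals exp(0.5)
```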
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Description of cervical cancer mortality in Belgium using Bayesian age-period-cohort models
2009-01-01
Objective To correct cervical cancer mortality rates for death cause certification problems in Belgium and to describe the corrected trends (1954-1997) using Bayesian models. Method Cervical cancer (cervix uteri (CVX), corpus uteri (CRP), not otherwise specified (NOS) uterus cancer, and other very rare uterus cancer (OTH)) mortality data were extracted from the WHO mortality database together with population data for Belgium and the Netherlands. Different ICD (International Classification of Diseases) revisions were used over time for death cause certification. In the Netherlands, the proportion of not-otherwise-specified uterine cancer deaths was small over large periods, and therefore internal reallocation could be used to estimate the corrected rates of cervical cancer mortality. In Belgium, the proportion of improperly defined uterus deaths was high. Therefore, the age-specific proportions of uterus cancer deaths that are probably of cervical origin for the Netherlands were applied to Belgian uterus cancer deaths to estimate the corrected number of cervix cancer deaths (corCVX). A Bayesian log-linear Poisson regression model was fitted to disentangle the separate effects of age, period, and cohort. Results The corrected age-standardized mortality rate (ASMR) decreased regularly from 9.2/100,000 in the mid 1950s to 2.5/100,000 in the late 1990s. Inclusion of age, period, and cohort in the models was required to obtain an adequate fit. Cervical cancer mortality increased with age, declined over calendar period, and varied irregularly by cohort. Conclusion Mortality increased with ageing and declined over time in most age groups, but varied irregularly by birth cohort. Overall, with some discrete exceptions, mortality decreased for successive generations up to the cohorts born in the 1930s. This decline stopped for cohorts born in the 1940s and thereafter. For the youngest cohorts, even a tendency of increasing risk of dying from cervical cancer could be observed, reflecting
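The correction step described above is a simple reallocation: a share of the NOS uterus deaths, estimated externally from the Dutch data, is added to the certified cervix deaths. A sketch with hypothetical counts and proportion (not the paper's figures):

```python
# Reallocation correction: certified cervix (CVX) deaths plus the share
# of "not otherwise specified" (NOS) uterus deaths assumed to be of
# cervical origin. The 60% proportion below is a made-up example; in the
# paper, age-specific proportions from the Netherlands were used.

def corrected_cvx(cvx_deaths, nos_deaths, p_cervix):
    """Corrected cervical-cancer death count for one age group."""
    return cvx_deaths + p_cervix * nos_deaths

cor = corrected_cvx(120, 80, 0.60)  # 120 + 0.6 * 80 = 168
```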
Age-period-cohort analysis of suicides among Japanese 1950-2003: a Bayesian cohort model analysis.
Ooe, Yosuke; Ohno, Yuko; Nakamura, Takashi
2009-07-01
The suicide rate in Japan is one of the highest in the world and presents us with a considerable challenge. Demographic statistics show that the number of suicides is on the rise, and roughly 30,000 people per year have committed suicide since 1998. Suicide trends are related not only to economic boom and bust but also to certain generations and age groups. During the 1950s, there was a remarkably high suicide rate among people in their 20s, and this cohort was identical to the middle-aged generation of the 1980s. It is important to separately understand the trends in suicide rates and numbers in order to determine the different factors that influence suicide: age, time period, cohort, the interaction between age and time period, and changes in population composition. We performed an age-period-cohort analysis of annual trends of suicide rates by age group in Japan using a Bayesian cohort model. With the help of the Nakamura method, we were able to break down the effects of age, time period, cohort, and the age-by-period interaction. The cohort of people born in the 1930s demonstrated a relatively high suicide rate. Men currently in their 50s also belong to a high-suicide-rate cohort. Regarding the period and age-by-period interaction effects, it became apparent that the high suicide rate among young adults in their early 20s around 1960 was slowing, especially among men. Instead, there was an obvious recent trend for men in their late 50s to have the highest suicide rate. This study confirmed that age-period-cohort analysis can describe these trends in suicide mortality of the Japanese.
Understanding trends in Australian alcohol consumption-an age-period-cohort model.
Livingston, Michael; Raninen, Jonas; Slade, Tim; Swift, Wendy; Lloyd, Belinda; Dietze, Paul
2016-09-01
To decompose Australian trends in alcohol consumption into their age, period (survey year) and cohort (birth year/generation) components. In particular, we aimed to test whether recent declines in overall consumption have been influenced by reductions in drinking among recently born cohorts. Seven cross-sectional waves of the Australian National Drug Strategy Household Survey (1995-2013). Age, period and cohort effects were estimated using linear and logistic cross-classified random-effects models (CCREMs). Australia. A total of 124 440 Australians (69 193 females and 55 257 males), aged 14-79 years. Whether or not respondents consumed alcohol in the 12 months prior to the survey and, for those who did, the estimated volume of pure alcohol consumed, derived using standard quantity-frequency survey questions. Controlling for age and period effects, there was significant variation in drinking participation and drinking volume by birth cohort. In particular, male cohorts born between 1965 and 1974 and female cohorts born between 1955 and 1974 reported higher rates of drinking participation and drinking volume (P < 0.01). Recent birth cohorts (born between 1995 and 1999) in Australia report significantly lower rates of both drinking participation and drinking volume than previous cohorts, controlling for their age distribution and overall changes in population drinking. These findings suggest that the recent decline in alcohol consumption in Australia has been driven by declines in drinking among these recently born cohorts. These trends are consistent with international shifts in youth drinking. © 2016 Society for the Study of Addiction.
TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS
Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.
2017-01-01
Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
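The tensor view used above is concrete even in the simplest case: under a single latent class (i.e., independence), the joint probability mass function of the table is the outer product of the marginals, a nonnegative rank-1 tensor. A minimal sketch (function name is ours):

```python
# Latent-structure view of a contingency table: with one latent class the
# joint pmf factorises as p(i, j) = p(i) * q(j), i.e. a nonnegative
# rank-1 tensor; more classes add more rank-1 terms (PARAFAC-style).

def rank1_pmf(p, q):
    """Joint pmf of two independent categorical variables as a 2-D table."""
    return [[pi * qj for qj in q] for pi in p]

table = rank1_pmf([0.2, 0.8], [0.5, 0.3, 0.2])
total = sum(sum(row) for row in table)  # a valid pmf sums to 1
```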
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Directory of Open Access Journals (Sweden)
Apostolos Bozikas
2018-04-01
Full Text Available During the last decades, life expectancy has risen significantly in the most developed countries all over the world. Greece is a case in point; consequently, higher governmental financial responsibilities occur as well as serious concerns are raised owing to population ageing. To address this issue, an efficient forecasting method is required. Therefore, the most important stochastic models were comparatively applied to Greek data for the first time. An analysis of their fitting behaviour by gender was conducted and the corresponding forecasting results were evaluated. In particular, we incorporated the Greek population data into seven stochastic mortality models under a common age-period-cohort framework. The fitting performance of each model was thoroughly evaluated based on information criteria values as well as the likelihood ratio test and their robustness to period changes was investigated. In addition, parameter risk in forecasts was assessed by employing bootstrapping techniques. For completeness, projection results for both genders were also illustrated in pricing insurance-related products.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select fish behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm
Directory of Open Access Journals (Sweden)
Zhehuang Huang
2015-01-01
Full Text Available Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select fish behaviors is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
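A log-linear behavior selection rule of the kind described above amounts to a softmax over behavior scores: selection probabilities are proportional to exp(score). The abstract does not give the exact scoring function, so the scores and behavior labels below are hypothetical:

```python
import math

# Log-linear (softmax) behavior selection: each candidate behavior gets
# a real-valued score, and the probability of selecting it is
# exp(score) normalised over all behaviors. Scores below are made up.

def softmax(scores):
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical scores for, e.g., preying, swarming, following, random move:
probs = softmax([2.0, 1.0, 0.5, 0.0])
best = probs.index(max(probs))             # highest-scoring behavior wins most often
```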
TENVERGERT, E; GILLESPIE, M; KINGMA, J
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Directory of Open Access Journals (Sweden)
Rosenbaum Peter L
2006-10-01
Full Text Available Abstract Background In this paper we compare the results in an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the Structural Equation Modeling (SEM) results by the Log-linear Modeling (LLM) provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.
Directory of Open Access Journals (Sweden)
Farooq Ahmad
2006-01-01
Full Text Available This is a cross-sectional study based on 304 households (couples with wives aged less than 48 years) chosen from an urban locality (the city of Lahore). Fourteen religious, demographic, and socio-economic factors of categorical nature, such as husband's education, wife's education, husband's monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religious decision-making, were taken to understand acceptance of family planning. Multivariate log-linear analysis was applied to identify the association pattern and interrelationships among factors. The logit model was applied to explore the relationship between predictor factors and the dependent factor, and to identify the factors upon which acceptance of family planning most strongly depends. Log-linear analysis demonstrates that preference for contraceptive use was consistently associated with husband-wife discussion, desire for more children, number of children, political view, and duration of married life, while husband's monthly income, occupation of husband, age of wife at marriage, and offering of prayers provided no statistical explanation of the adoption of family planning methods.
Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu
2018-05-10
Ethiopia is among the countries with a low contraceptive usage prevalence rate, resulting in a high total fertility rate and unwanted pregnancies, which in turn affect maternal and child health status. This study aimed to investigate the major factors that affect the number of modern contraceptive users at service delivery points in Ethiopia. The Performance Monitoring and Accountability2020/Ethiopia data, collected between March and April 2016 at round-4 from 461 eligible service delivery points, were used in this study. A weighted log-linear negative binomial model was applied to analyze the service delivery point data. Half of the service delivery points in Ethiopia served at least 61 modern contraceptive users (interquartile range 0.62). The expected log number of modern contraceptive users in rural areas was 1.05 (95% Wald CI: -1.42 to -0.68) lower than that in urban areas. In addition, the expected log count of modern contraceptive users at other facility types was 0.58 lower than that at health centers. The number of nurses/midwives also affected the number of modern contraceptive users: the incidence rate of modern contraceptive users increased with each additional nurse at the delivery point. Among the different factors considered in this study, residence, region, facility type, the number of days per week family planning is offered, the number of nurses/midwives, and the number of medical assistants were found to be associated with the number of modern contraceptive users. Thus, the Government of Ethiopia should take immediate steps to address the factors affecting the number of modern contraceptive users in Ethiopia.
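In a log-linear negative binomial model like the one above, a coefficient b on the log scale translates into an incidence rate ratio exp(b). Applied to the reported rural-vs-urban coefficient of -1.05 (interpretation is ours, as an illustration of the transformation, not a claim from the paper):

```python
import math

# A log-linear coefficient b implies a multiplicative effect exp(b) on
# the expected count (the incidence rate ratio, IRR).

def irr(coef):
    """Incidence rate ratio implied by a log-scale regression coefficient."""
    return math.exp(coef)

# Rural vs urban coefficient of -1.05 reported above:
rural_vs_urban = irr(-1.05)  # about 0.35, i.e. roughly 65% fewer users
```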
Time trend and age-period-cohort effect on kidney cancer mortality in Europe, 1981–2000
Directory of Open Access Journals (Sweden)
López-Abente Gonzalo
2006-05-01
Full Text Available Abstract Background The incorporation of diagnostic and therapeutic improvements, as well as the different smoking patterns, may have had an influence on the observed variability in renal cancer mortality across Europe. This study examined time trends in kidney cancer mortality in fourteen European countries during the last two decades of the 20th century. Methods Kidney cancer deaths and population estimates for each country during the period 1981–2000 were drawn from the World Health Organization Mortality Database. Age- and period-adjusted mortality rates, as well as annual percentage changes in age-adjusted mortality rates, were calculated for each country and geographical region. Log-linear Poisson models were also fitted to study the effect of age, death period, and birth cohort on kidney cancer mortality rates within each country. Results For men, the overall standardized kidney cancer mortality rates in the eastern, western, and northern European countries were 20, 25, and 53% higher than those for the southern European countries, respectively. However, age-adjusted mortality rates showed a significant annual decrease of -0.7% in the north of Europe, a moderate rise of 0.7% in the west, and substantial increases of 1.4% in the south and 2.0% in the east. This trend was similar among women, but with lower mortality rates. Age-period-cohort models showed three different birth-cohort patterns for both men and women: a decrease in mortality trend for those generations born after 1920 in the Nordic countries, a similar but lagged decline for cohorts born after 1930 in western and southern European countries, and a continuous increase throughout all birth cohorts in eastern Europe. Similar but more heterogeneous regional patterns were observed for period effects. Conclusion Kidney cancer mortality trends in Europe showed a clear north-south pattern, with high rates on a downward trend in the north, intermediate rates on a more marked rising
The marriage boom and marriage bust in the United States: An age-period-cohort analysis.
Schellekens, Jona
2017-03-01
In the 1950s and 1960s there was an unprecedented marriage boom in the United States. This was followed in the 1970s by a marriage bust. Some argue that both phenomena are cohort effects, while others argue that they are period effects. The study reported here tested the major period and cohort theories of the marriage boom and bust, by estimating an age-period-cohort model of first marriage for the years 1925-79 using census microdata. The results of the analysis indicate that the marriage boom was mostly a period effect, although there were also cohort influences. More specifically, the hypothesis that the marriage boom was mostly a response to rising wages is shown to be consistent with the data. However, much of the marriage bust can be accounted for by unidentified cohort influences, at least until 1980.
Li, Mengmeng; Wan, Xia; Wang, Yanhong; Sun, Yuanyuan; Yang, Gonghuan; Wang, Li
2017-01-01
Esophageal and gastric cancers share some risk factors. This study aimed to compare the long-term trends in mortality rates of esophageal and gastric cancers in China to provide evidence for cancer prevention and control. Mortality data were derived from 103 continuous points of the Disease Surveillance Points system during 1991-2009, stratified by gender and urban-rural location. Age-period-cohort models were used to disentangle the time trends of esophageal and gastric cancer mortality. Th...
Trends in ischemic heart disease mortality in Korea, 1985-2009: an age-period-cohort analysis.
Lee, Hye Ah; Park, Hyesook
2012-09-01
Economic growth and the development of medical technology help to improve average life expectancy, but the western diet and rapid shifts to poor lifestyles lead to an increasing risk of major chronic diseases. Coronary heart disease mortality in Korea has been increasing, while showing a steady decline in other industrialized countries. An age-period-cohort analysis can help in understanding trends in mortality and in predicting the near future. We analyzed the time trends of ischemic heart disease mortality, which is on the increase, from 1985 to 2009 using an age-period-cohort model to characterize the effects of age, period and cohort on changes in the mortality rate over time. All three effects on total ischemic heart disease mortality were statistically significant. Regarding the period effect, the mortality rate decreased slightly in 2000 to 2004, after having increased continuously since the late 1980s; this trend was similar in both sexes. The expected age effect was noticeable from the mid-60s onward, and the age effect in women was more marked than that in men. Women born from the early 1900s to 1925 showed an increase in ischemic heart disease mortality; this cohort effect was significant only in women. The cohort effect may have a lasting impact on the risk of ischemic heart disease in women as the elderly population increases, and a national prevention policy is needed to manage high-risk groups in light of the age-period-cohort effects.
Seoane-Mato, Daniel; Aragonés, Nuria; Ferreras, Eva; García-Pérez, Javier; Cervantes-Amat, Marta; Fernández-Navarro, Pablo; Pastor-Barriuso, Roberto; López-Abente, Gonzalo
2014-04-11
Although oral cavity, pharyngeal, oesophageal and gastric cancers share some risk factors, no comparative analysis of mortality rate trends in these illnesses has been undertaken in Spain. This study aimed to evaluate the independent effects of age, death period and birth cohort on the mortality rates of these tumours. Specific and age-adjusted mortality rates by tumour and sex were analysed. Age-period-cohort log-linear models were fitted separately for each tumour and sex, and segmented regression models were used to detect changes in period- and cohort-effect curvatures. Among men, the period-effect curvatures for oral cavity/pharyngeal and oesophageal cancers displayed a mortality trend that rose until 1995 and then declined. Among women, oral cavity/pharyngeal cancer mortality increased throughout the study period whereas oesophageal cancer mortality decreased after 1970. Stomach cancer mortality decreased in both sexes from 1965 onwards. Lastly, the cohort-effect curvature showed a certain degree of similarity for all three tumours in both sexes, which was greater among oral cavity, pharyngeal and oesophageal cancers, with a change point in evidence, after which risk of death increased in cohorts born from the 1910-1920s onwards and decreased among the 1950-1960 cohorts and successive generations. This latter feature was likewise observed for stomach cancer. While the similarities of the cohort effects in oral cavity/pharyngeal, oesophageal and gastric tumours support the implication of shared risk factors, the more marked changes in cohort-effect curvature for oral cavity/pharyngeal and oesophageal cancer could be due to the greater influence of some risk factors in their aetiology, such as smoking and alcohol consumption. The increase in oral cavity/pharyngeal cancer mortality in women deserves further study.
An international contrast of rates of placental abruption: an age-period-cohort analysis.
Directory of Open Access Journals (Sweden)
Cande V Ananth
Full Text Available Although rare, placental abruption is implicated in disproportionately high rates of perinatal morbidity and mortality. Understanding geographic and temporal variations may provide insights into possibly amenable factors of abruption. We examined abruption frequencies by maternal age, delivery year, and maternal birth cohort over three decades across seven countries. Women that delivered in the US (n = 863,879; 1979-10), Canada (4 provinces, n = 5,407,463; 1982-11), Sweden (n = 3,266,742; 1978-10), Denmark (n = 1,773,895; 1978-08), Norway (n = 1,780,271; 1978-09), Finland (n = 1,411,867; 1987-10), and Spain (n = 6,151,508; 1999-12) were analyzed. Abruption diagnosis was based on ICD coding. Rates were modeled using Poisson regression within the framework of an age-period-cohort analysis, with multi-level models used to examine the contribution of smoking in four countries. Abruption rates varied across the seven countries (3-10 per 1000). Maternal age showed a consistent J-shaped pattern, with increased rates at the extremes of the age distribution. In comparison with births in 2000, births after 2000 in European countries had lower abruption rates; in the US there was an increase in rate up to 2000 and a plateau thereafter. No birth cohort effects were evident. Changes in smoking prevalence partially explained the period effect in the US (P = 0.01) and Sweden (P < 0.01). There is a strong maternal age effect on abruption. While the abruption rate has plateaued since 2000 in the US, all other countries show declining rates. These findings suggest considerable variation in abruption frequencies across countries; differences in the distribution of risk factors, especially smoking, may help guide policy to reduce abruption rates.
An international contrast of rates of placental abruption: an age-period-cohort analysis.
Ananth, Cande V; Keyes, Katherine M; Hamilton, Ava; Gissler, Mika; Wu, Chunsen; Liu, Shiliang; Luque-Fernandez, Miguel Angel; Skjærven, Rolv; Williams, Michelle A; Tikkanen, Minna; Cnattingius, Sven
2015-01-01
Although rare, placental abruption is implicated in disproportionately high rates of perinatal morbidity and mortality. Understanding geographic and temporal variations may provide insights into possibly amenable factors of abruption. We examined abruption frequencies by maternal age, delivery year, and maternal birth cohorts over three decades across seven countries. Women that delivered in the US (n = 863,879; 1979-10), Canada (4 provinces, n = 5,407,463; 1982-11), Sweden (n = 3,266,742; 1978-10), Denmark (n = 1,773,895; 1978-08), Norway (n = 1,780,271; 1978-09), Finland (n = 1,411,867; 1987-10), and Spain (n = 6,151,508; 1999-12) were analyzed. Abruption diagnosis was based on ICD coding. Rates were modeled using Poisson regression within the framework of an age-period-cohort analysis, and multi-level models were used to examine the contribution of smoking in four countries. Abruption rates varied across the seven countries (3-10 per 1000). Maternal age showed a consistent J-shaped pattern with increased rates at the extremes of the age distribution. In comparison to births in 2000, births after 2000 in European countries had lower abruption rates; in the US there was an increase in rate up to 2000 and a plateau thereafter. No birth cohort effects were evident. Changes in smoking prevalence partially explained the period effect in the US (P = 0.01) and Sweden (P < 0.01). While the abruption rate has plateaued since 2000 in the US, all other countries show declining rates. These findings suggest considerable variation in abruption frequencies across countries; differences in the distribution of risk factors, especially smoking, may help guide policy to reduce abruption rates.
Age-period-cohort analysis of tuberculosis notifications in Hong Kong from 1961 to 2005.
Wu, P; Cowling, B J; Schooling, C M; Wong, I O L; Johnston, J M; Leung, C-C; Tam, C-M; Leung, G M
2008-04-01
Despite its wealth, excellent vital indices and robust health care infrastructure, Hong Kong has a relatively high incidence of tuberculosis (TB) (85.4 per 100 000). Hong Kong residents have also experienced a very rapid and recent epidemiological transition; the population largely originated from migration by southern Chinese in the mid 20th century. Given the potentially long latency period of TB infection, an investigation was undertaken to determine the extent to which TB incidence rates reflect the population history and the impact of public health interventions. An age-period-cohort model was used to break down the Hong Kong TB notification rates from 1961 to 2005 into the effects of age, calendar period and birth cohort. Analysis by age showed a consistent pattern across all the cohorts by year of birth, with a peak in the relative risk of TB at 20-24 years of age. Analysis by year of birth showed an increase in the relative risk of TB from 1880 to 1900, stable risk until 1910, then a linear rate of decline from 1910 with an inflection point at 1990 for a steeper rate of decline. Period effects yielded only one inflection during the calendar years 1971-5. Economic development, social change and the World Health Organisation's short-course directly observed therapy (DOTS) strategy have contributed to TB control in Hong Kong. The linear cohort effect until 1990 suggests that a relatively high, but slowly falling, incidence of TB in Hong Kong will continue into the next few decades.
Bangdiwala, S I; Anzola-Pérez, E
1990-03-01
Injuries and accidents are acknowledged as leading causes of morbidity and mortality among children and adolescents in the developing countries of the world. The Pan American Health Organization sponsored a collaborative study in four selected countries in Latin America to study the extent of the problem as well as to examine the potential risk factors associated with selected non-fatal injuries in those countries. The study subjects were injured children and adolescents (0-19 years of age) presenting at the study hospitals in chosen urban centres, as well as injured persons surveyed in households in the catchment areas of the hospitals. Study methods and descriptive frequency results were presented earlier. In this paper, log-linear multivariate regression models are used to examine the potentiating effects, within each country, of several measured variables on specific types of injuries. The significance of risk factors varied between countries; however, some general patterns emerged. Falls were more likely in younger children, and occurred at home. The main risk factor for home accidents was the age of the child. The education of the head of the household was an important risk factor for the type of injury suffered. The likelihood of traffic accident injury varied with time of day and day of the week, but was also more likely in more highly educated households. The results are consistent with those found in other studies in the developed world and suggest specific areas of concern for health planners to address.
Aggregation of log-linear risks
DEFF Research Database (Denmark)
Embrechts, Paul; Hashorva, Enkeleijd; Mikosch, Thomas Valentin
2014-01-01
In this paper we work in the framework of a k-dimensional vector of log-linear risks. Under weak conditions on the marginal tails and the dependence structure of a vector of positive risks, we derive the asymptotic tail behaviour of the aggregated risk and present an application concerning log...
Delaruelle, Katrijn; Buffel, Veerle; Bracke, Piet
2015-11-01
Researchers have recently been investigating the temporal variation in the educational gradient in health. While there is abundant literature concerning age trajectories, theoretical knowledge about cohort differences is relatively limited. Therefore, in analogy with the life course perspective, we introduce two contrasting cohort-specific hypotheses. The diminishing health returns hypothesis predicts a decrease in educational disparities in health across cohorts. By contrast, the cohort accretion hypothesis suggests that the education-health gap will be more pronounced among younger cohorts. To shed light on this, we perform a hierarchical age-period-cohort analysis (HAPC), using data from a subsample of individuals between 25 and 85 years of age (N = 232,573) from 32 countries in the European Social Survey (six waves: 2002-2012). The analysis leads to three important conclusions. First, we observe a widening health gap between different educational levels over the life course. Second, we find that these educational differences in the age trajectories of health seem to strengthen with each successive birth cohort. However, the two age-related effects disappear when we control for employment status, household income, and family characteristics. Last, when adjusting for these mediators, we reveal evidence to support the diminishing health returns hypothesis, implying that it is primarily the direct association between education and health that decreases across cohorts. This finding raises concerns about potential barriers to education being a vehicle for empowerment and the promotion of health. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Vasilis Panagiotis Valdramidis
2005-01-01
Full Text Available A mathematical approach incorporating the shoulder effect in the quantification of microbial heat inactivation is developed, based on »the number of log cycles of reduction« concept. Hereto, the heat resistance of Escherichia coli K12 in BHI broth has been quantitatively determined in a generic and accurate way by defining the time t for x log reductions in the microbial population, i.e. txD, as a function of the treatment temperature T. Survival data of the examined microorganism were collected at temperatures between 52 and 60.6 °C. Shoulder length Sl and specific inactivation rate kmax are derived from a mathematical expression that describes non-log-linear behaviour. The temperature dependencies of Sl and kmax are used to structure the txD(T) function. Estimation of the txD(T) parameters through a global identification procedure permits reliable prediction of the time to achieve a pre-decided microbial reduction. One of the parameters of the txD(T) function is proposed as »the reference minimum temperature for inactivation«. For the case study considered, a value of 51.80 °C (with a standard error, SE, of 3.47) was identified. Finally, the times to achieve commercial sterilization and pasteurization for the product at hand, i.e. BHI broth, were found to be 11.70 s (SE = 5.22) and 5.10 min (SE = 1.22), respectively. Accounting for uncertainty (based on the 90 % confidence intervals, CI), a fail-safe treatment for these two processes takes 20.36 s and 7.12 min, respectively.
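Under a shoulder-plus-log-linear inactivation model of this kind, the time to x log reductions can be approximated as the shoulder length plus the time the log-linear phase needs to achieve the reduction. The sketch below uses this standard relationship with made-up parameter values, not the paper's fitted estimates.

```python
import math

def t_xd(x_log_reductions: float, shoulder_sl: float, kmax: float) -> float:
    """Approximate time to achieve x log10 reductions under a shoulder +
    log-linear inactivation model: the population only starts declining
    after the shoulder Sl, then falls at specific rate kmax."""
    return shoulder_sl + x_log_reductions * math.log(10) / kmax

# Example with illustrative parameters: a 2-minute shoulder and
# kmax = 1.5 /min give the time for a 5-log reduction.
t = t_xd(5, shoulder_sl=2.0, kmax=1.5)
print(round(t, 2))  # prints 9.68 (minutes)
```

The factor ln(10)/kmax converts the natural-log inactivation rate into the time per decimal reduction, which is why txD grows linearly in x once the shoulder is passed.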
Mortality of breast cancer in Taiwan, 1971-2010: temporal changes and an age-period-cohort analysis.
Ho, M-L; Hsiao, Y-H; Su, S-Y; Chou, M-C; Liaw, Y-P
2015-01-01
The current paper describes the age, period and cohort effects on breast cancer mortality in Taiwan. Female breast cancer mortality data were collected from the Taiwan death registries for 1971-2010. Annual percentage changes, age-standardised mortality rates (ASMR) and an age-period-cohort model were calculated. Mortality rates increased with advancing age group when the period was fixed. The percentage change in the breast cancer mortality rate (between 1971-75 and 2006-10) increased from 54.79% in those aged 20-44 years to 149.78% in those aged 45-64 years. Mortality rates in the 45-64 age group increased steadily between 1971-75 and 2006-10. The 1951 birth cohorts (actual birth cohort: 1947-55) showed peak mortality in both the 50-54 and 45-49 age groups. We found that the 1951 birth cohorts had the greatest mortality risk from breast cancer. This might be attributed to the DDT that was used in large amounts to prevent deaths from malaria in Taiwan. However, future research requires DDT data to evaluate the association between breast cancer and DDT use.
Directory of Open Access Journals (Sweden)
Anne Karin da Mota Borges
2017-08-01
Full Text Available Purpose: The incidence of thyroid cancer (TC) has increased substantially worldwide. However, there is a lack of knowledge about age-period-cohort (APC) effects on incidence rates in South American countries. This study describes the TC incidence trends and analyzes APC effects in Cali, Colombia; Costa Rica; Goiânia, Brazil; and Quito, Ecuador. Materials and Methods: Data were obtained from the Cancer Incidence in Five Continents series, and the crude and age-standardized incidence rates were calculated. Trends were assessed using the estimated annual percentage change, and APC models were estimated using Poisson regression for individuals between 20 and 79 years of age. Results: An increasing trend in age-standardized incidence rates was observed among women from Goiânia (9.2%), Costa Rica (5.7%), Quito (4.0%), and Cali (3.4%), and in men from Goiânia (10.0%) and Costa Rica (3.4%). The APC modeling showed a period effect in all regions and for both sexes. Increasing rate ratios were observed among women over the periods. The best-fitting model was the APC model in women from all regions and in men from Quito, whereas the age-cohort model fitted better in men from Cali and Costa Rica, and the age-drift model fitted better among men from Goiânia. Conclusion: These findings suggest that overdiagnosis is a possible explanation for the observed increasing pattern of TC incidence. However, some environmental exposures may also have contributed to the observed increase.
Spatiotemporal Scan and Age-Period-Cohort Analysis of Hepatitis C Virus in Henan, China: 2005-2012.
Directory of Open Access Journals (Sweden)
Fangfang Chen
Full Text Available Studies have shown that hepatitis C virus (HCV) infection increased during the past decades in China. However, little evidence is available on when, where, and who were infected with HCV, and there are gaps in knowledge on the epidemiological burden and evolution of the HCV epidemic in China. Data on HCV cases were collected by the disease surveillance system from 2005 to 2012 to explore the epidemic in Henan province. Spatiotemporal scan statistics and an age-period-cohort (APC) model were used to examine the effects of age, period, birth cohort, and spatiotemporal clustering. 177,171 HCV cases were reported in Henan province between 2005 and 2012. APC modelling showed that HCV reported rates significantly increased in people aged over 50 years, and a moderate increase was observed in females aged about 25 years. HCV reported rates increased over the study period. Infection rates were greatest among people born between 1960 and 1980, and people born around 1970 had the highest relative risk of HCV infection. Women born between 1960 and 1980 had a five-fold increase in HCV infection rates compared to men of the same birth cohort. Spatiotemporal mapping showed major clustering of cases in northern Henan, which probably evolved much earlier than in other areas of the province. Spatiotemporal mapping and APC methods are useful in delineating the evolution of the HCV epidemic. Birth cohort should be part of the criteria of screening programmes for HCV in order to identify those at highest risk of infection and unaware of their status. As Henan is unique in the transmission route for HCV, these methods should be used in other high-burden provinces to help identify subpopulations at risk.
Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.
2014-01-01
Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.
Braga, Sonia Faria Mendes; de Souza, Mirian Carvalho; Cherchiglia, Mariangela Leal
2017-10-01
In the 1980s, an increase in mortality rates for prostate cancer was observed in North America and developed European countries. In the 1990s, however, mortality rates decreased in these countries, an outcome related to early detection of the disease. Conversely, an upward trend in mortality rates was observed in Brazil. This study describes the trends in prostate cancer mortality in Brazil and its geographic regions (North, Northeast, South, Southeast, and Central-West) from 1980 to 2014 and analyzes the influence of age, period, and cohort effects on mortality rates. This time-series study used data from the Mortality Information System (SIM) and population data from the Brazilian Institute for Geography and Statistics (IBGE). The effects on mortality rates were examined using age-period-cohort (APC) models. Crude and standardized mortality rates showed an upward trend for Brazil and its regions, more than doubling over the last 30 years. Age effects showed an increased risk of death in all regions. Period effects showed a higher risk of death in the final periods for the North and Northeast. Cohort effects showed that the risk of death was higher for younger than for older generations in Brazil and its regions, mainly the Northeast (RR adjusted = 3.12, 95% CI 1.29-1.41; RR adjusted = 0.28, 95% CI 0.26-0.30, respectively). The increase in prostate cancer mortality rates in Brazil and its regions was mainly due to population aging. The differences in mortality rates and APC effects between regions are related to demographic differences and access to health services across the country. Copyright © 2017 Elsevier Ltd. All rights reserved.
Znaor, Ariana; Laversanne, Mathieu; Bray, Freddie
2017-09-01
The increasing rates of kidney cancer incidence, reported in many populations globally, have been attributed both to increasing exposures to environmental risk factors, as well as increasing levels of incidental diagnosis due to widespread use of imaging. To better understand these trends, we examine long-term cancer registry data worldwide, focusing on the roles of birth cohort and calendar period, proxies for changes in risk factor prevalence and detection practice respectively. We used an augmented version of the Cancer Incidence in Five Continents series to analyze kidney cancer incidence rates 1978-2007 in 16 geographically representative populations worldwide by sex for ages 30-74, using age-period-cohort (APC) analysis. The full APC model provided the best fit to the data in most studied populations. While kidney cancer incidence rates have been increasing in successive generations born from the early twentieth century in most countries, equivalent period-specific rises were observed from the late-1970s, although these have subsequently stabilized in certain European countries (the Czech Republic, Lithuania, Finland, Spain) as well as Japan from the mid-1990s, and from the mid-2000s, in Colombia, Costa Rica and Australia. Our results indicate that the effects of both birth cohort and calendar period contribute to the international kidney cancer incidence trends. While cohort-specific increases may partly reflect the rising trends in obesity prevalence and the need for more effective primary prevention policies, the attenuations in period-specific increases (observed in 8 of the 16 populations) highlight a possible change in imaging practices that could lead to mitigation of overdiagnosis and overtreatment. © 2017 UICC.
Age-period-cohort analysis of infectious disease mortality in urban-rural China, 1990-2010.
Li, Zhi; Wang, Peigang; Gao, Ge; Xu, Chunling; Chen, Xinguang
2016-03-31
Although a number of studies on infectious disease trends in China exist, these studies have not distinguished the age, period, and cohort effects simultaneously. Here, we analyze infectious disease mortality trends among urban and rural residents in China and distinguish the age, period, and cohort effects simultaneously. Infectious disease mortality rates (1990-2010) of urban and rural residents (5-84 years old) were obtained from the China Health Statistical Yearbook and analyzed with an age-period-cohort (APC) model based on Intrinsic Estimator (IE). Infectious disease mortality is relatively high at age group 5-9, reaches a minimum in adolescence (age group 10-19), then rises with age, with the growth rate gradually slowing down from approximately age 75. From 1990 to 2010, except for a slight rise among urban residents from 2000 to 2005, the mortality of Chinese residents experienced a substantial decline, though at a slower pace from 2005 to 2010. In contrast to the urban residents, rural residents experienced a rapid decline in mortality during 2000 to 2005. The mortality gap between urban and rural residents substantially narrowed during this period. Overall, later birth cohorts experienced lower infectious disease mortality risk. From the 1906-1910 to the 1941-1945 birth cohorts, the decrease of mortality among urban residents was significantly faster than that of subsequent birth cohorts and rural counterparts. With the rapid aging of the Chinese population, the prevention and control of infectious disease in elderly people will present greater challenges. From 1990 to 2010, the infectious disease mortality of Chinese residents and the urban-rural disparity have experienced substantial declines. However, the re-emergence of previously prevalent diseases and the emergence of new infectious diseases created new challenges. It is necessary to further strengthen screening, immunization, and treatment for the elderly and for older cohorts at high risk.
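The Intrinsic Estimator used in this study addresses the exact collinearity among age, period and cohort (cohort = period - age) by taking the minimum-norm solution of the rank-deficient design, which corresponds to the Moore-Penrose pseudoinverse. The sketch below is a deliberately simplified illustration of that idea with synthetic data; real IE implementations operate on centered effect-coded designs for log-rates.

```python
# Simplified illustration of the rank deficiency in APC designs and the
# minimum-norm (pseudoinverse) solution underlying the Intrinsic Estimator.
import numpy as np
import pandas as pd

# Toy Lexis table with known age and period effects on the log-rate.
ages, periods = np.arange(5), np.arange(4)
rows = [(a, p, p - a) for a in ages for p in periods]   # cohort = period - age
df = pd.DataFrame(rows, columns=["age", "period", "cohort"])

# Dummy-code all three factors; the linear dependency among them makes
# the design matrix rank-deficient by exactly one.
X = pd.get_dummies(df.astype(str), drop_first=True).to_numpy(float)
X = np.column_stack([np.ones(len(X)), X])
y = 0.1 * df["age"] + 0.05 * df["period"]               # synthetic log-rates

# lstsq returns the minimum-norm least-squares solution, i.e. the
# pseudoinverse solution that the Intrinsic Estimator is built on.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[:3])
```

Because ordinary inversion fails on a rank-deficient design, the choice of the minimum-norm solution is what makes the IE a specific, reproducible member of the infinite family of APC solutions.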
Life satisfaction and age : Dealing with underidentification in age-period-cohort models
de Ree, Joppe; Alessie, Rob
Recent literature typically finds a U-shaped relationship between life satisfaction and age. Age profiles, however, are not identified without imposing arbitrary restrictions on the cohort and/or time profiles. In this paper we report what can be identified about the relationship between life
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2018-02-01
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed by applying the regression equations to published Cmax data. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference); the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single-time-point strategy using Cmax (i.e. at the end of the 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
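A log-log regression of AUC on Cmax of the kind compared in this study can be sketched as follows. The data are synthetic (the paper's 21 subject pairs are not reproduced here), and the coefficients are illustrative only.

```python
# Illustrative log-log regression predicting AUCinf from Cmax
# (synthetic data standing in for 21 subject pairs).
import numpy as np

rng = np.random.default_rng(2)
cmax = rng.uniform(200, 400, 21)                       # end-of-infusion conc.
auc = 30 * cmax ** 1.05 * rng.lognormal(0, 0.05, 21)   # power-model "truth"

# Fit log(AUC) = a + b*log(Cmax) by least squares: the "log-log" model.
b, a = np.polyfit(np.log(cmax), np.log(auc), 1)

def predict_auc(c):
    """Back-transform the log-log fit to the original scale."""
    return np.exp(a + b * np.log(c))

fold = auc / predict_auc(cmax)   # observed/predicted fold difference
print(fold.min(), fold.max())
```

The observed/predicted quotient computed here mirrors the fold-difference criterion the authors used to judge whether a single Cmax measurement confines the AUC prediction tightly enough.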
Su, Shih-Yung; Huang, Jing-Yang; Jian, Zhi-Hong; Ho, Chien-Chang; Lung, Chia-Chi; Liaw, Yung-Po
2012-12-01
Colorectal cancer (CRC) is the second most common cause of cancer death in developed countries among men (after lung cancer) and the third most common among women. This study thus examines the long-term trends of CRC mortality in Taiwan. CRC cases were collected for patients aged 30 years or older and younger than 85 years from the Taiwan death registries during 1971-2010. Standard descriptive techniques such as age-standardized mortality rates (ASMR), annual percent change, and age-period-cohort analyses were used. The increase in percentage change in each age group was higher in men than in women. The ASMR of CRC increased 2-fold for men and almost 1.5-fold for women between the periods 1971-1975 and 2006-2010. In the age-period-cohort analysis, the estimated mortality rate increased steadily with age in both sexes, plateauing at 175.29 per 100,000 for men and 128.14 per 100,000 for women in the 80- to 84-year-old group. Period effects were weak in both sexes; cohort effects were strong. Between 30 and 59 years of age, the sex ratio showed that the female CRC mortality rate was higher than that of male counterparts; conversely, the mortality risk of CRC in men was higher than that in women between 60 and 84 years of age. The current findings showed a consistent increase in mortality from CRC over the years. Changes in the sex ratio indicate an important etiological role of sex hormones, especially in women aged 60 years or younger.
Materialism across the life span: An age-period-cohort analysis.
Jaspers, Esther D T; Pieters, Rik G M
2016-09-01
This research examined the development of materialism across the life span. Two initial studies revealed that (a) lay beliefs were that materialism declines with age and (b) previous research findings also implied a modest, negative relationship between age and materialism. Yet, previous research has considered age only as a linear control variable, thereby precluding the possibility of more intricate relationships between age and materialism. Moreover, prior studies have relied on cross-sectional data and thus confound age and cohort effects. To improve on this, the main study used longitudinal data from 8 waves spanning 9 years of over 4,200 individuals (16 to 90 years) to examine age effects on materialism while controlling for cohort and period effects. Using a multivariate multilevel latent growth model, it found that materialism followed a curvilinear trajectory across the life span, with the lowest levels at middle age and higher levels before and after that. Thus, in contrast to lay beliefs, materialism increased in older age. Moreover, age effects on materialism differed markedly between 3 core themes of materialism: acquisition centrality, possession-defined success, and acquisition as the pursuit of happiness. In particular, acquisition centrality and possession-defined success were higher at younger and older age. Independent of these age effects, older birth cohorts were oriented more toward possession-defined success, whereas younger birth cohorts were oriented more toward acquisition centrality. The economic downturn since 2008 led to a decrease in acquisition as the pursuit of happiness and in desires for personal growth, but to an increase in desires for achievement. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Trends in mouth cancer incidence in Mumbai, India (1995-2009): An age-period-cohort analysis.
Shridhar, Krithiga; Rajaraman, Preetha; Koyande, Shravani; Parikh, Purvish M; Chaturvedi, Pankaj; Dhillon, Preet K; Dikshit, Rajesh P
2016-06-01
Despite tobacco control and health promotion efforts, the incidence rates of mouth cancer are increasing across most regions in India. Analysing the influence of age, time period and birth cohort on these secular trends can point towards underlying factors and help identify high-risk populations for improved cancer control programmes. We evaluated secular changes in mouth cancer incidence among men and women aged 25-74 years in Mumbai between 1995 and 2009 by calculating age-specific and age-standardized incidence rates (ASR). We estimated the age-adjusted linear trend for the estimated annual percent change (EAPC) using the drift parameter, and conducted an age-period-cohort (APC) analysis to quantify recent time trends and to evaluate the significance of birth cohort and calendar period effects. Over the 15-year period, age-standardized incidence rates of mouth cancer in men in Mumbai increased by 2.7% annually (95% CI: 1.9 to 3.4). Data from the Mumbai cancer registry indicate a significant linear increase in mouth cancer incidence from 1995 to 2009 in men, driven by younger men aged 25-49 years, and a non-significant upward trend in similarly aged younger women. Health promotion efforts should more effectively target younger cohorts. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Bell, Andrew
2014-11-01
There is ongoing debate regarding the shape of life-course trajectories in mental health. Many argue the relationship is U-shaped, with mental health declining with age to mid-life, then improving. However, I argue that these models are beset by the age-period-cohort (APC) identification problem, whereby age, cohort and year of measurement are exactly collinear and their effects cannot be meaningfully separated. This means an apparent life-course effect could be explained by cohorts. This paper critiques two sets of literature: the substantive literature regarding life-course trajectories in mental health, and the methodological literature that claims erroneously to have 'solved' the APC identification problem statistically (e.g. using Yang and Land's Hierarchical APC (HAPC) model). I then use a variant of the HAPC model, making strong but justified assumptions that allow the modelling of life-course trajectories in mental health (measured by the General Health Questionnaire) net of any cohort effects, using data from the British Household Panel Survey, 1991-2008. The model additionally employs a complex multilevel structure that allows the relative importance of spatial (households, local authority districts) and temporal (periods, cohorts) levels to be assessed. Mental ill-health is found to increase throughout the life-course; this slows at mid-life before worsening again into old age, but there is no evidence of a U-shape; I argue that such findings result from confounding with cohort processes (whereby more recent cohorts have generally worse mental health). Other covariates were also evaluated; income, smoking, education, social class, urbanity, ethnicity, gender and marriage were all related to mental health, with the latter two in particular affecting life-course and cohort trajectories. The paper shows the importance of understanding APC in life-course research generally, and mental health research in particular. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sala, Carole; Ru, Giuseppe
2009-09-18
The Age-Period-Cohort (APC) analysis is routinely used for time-trend analysis of cancer incidence or mortality rates, but in veterinary epidemiology there are still only a few examples of this application. APC models were recently used to model the French BSE epidemic, assuming that its time trend was mainly due to a cohort effect related to the control measures that may have modified the BSE exposure of cohorts over time. We used a categorical APC analysis, which does not require any functional form for the effect of the variables, and examined second differences to estimate the variation of the BSE trend. We also reanalysed the French epidemic and performed a simultaneous analysis of Italian data using more appropriate birth-cohort categories for comparison. We used data from the exhaustive surveillance carried out in France and Italy between 2001 and 2007, and comparatively described the trend of the epidemic in both countries. Finally, the shape and irregularities of the trends are discussed in light of the main control measures adopted against the disease. In Italy, a decrease in the epidemic became apparent from 1996, following the application of rendering standards for the processing of specific risk material (SRM). For the French epidemic, the pattern of second differences in the birth cohorts confirmed the beginning of the decrease from 1995, just after the implementation of the meat-and-bone-meal (MBM) ban for all ruminants (1994). The APC analysis proved to be highly suitable for the study of trends in BSE epidemics and was helpful in understanding the effects of management and control of the disease. Such an approach may also help in implementing changes in BSE regulations.
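The reliance on second differences above exploits an estimable feature of APC models: adding an arbitrary linear trend to the cohort effects (absorbed by the age and period terms) leaves the fitted rates unchanged, but cancels out of second differences. A minimal numerical check, with made-up cohort effects:

```python
import numpy as np

# In an APC model, adding any affine term c0 + c1*k to the cohort effects
# (offset in the age/period terms) leaves the fit unchanged; second
# differences of the cohort effects are invariant to that re-parameterisation.
cohort_idx = np.arange(10)
effects = np.sin(cohort_idx / 2.0)            # hypothetical cohort effects
tilted = effects + 0.3 * cohort_idx + 1.0     # same fit, different "trend"

d2 = np.diff(effects, n=2)
d2_tilted = np.diff(tilted, n=2)
print(np.allclose(d2, d2_tilted))             # -> True
```

This invariance is why the study can interpret second differences (local accelerations of the trend) even though the linear components of age, period and cohort effects are not separately identified.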
The Log-Linear Return Approximation, Bubbles, and Predictability
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
2012-01-01
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...
Ilic, Milena; Ilic, Irena
2016-06-22
For both men and women worldwide, colorectal cancer is among the leading causes of cancer-related death. This study aimed to assess the mortality trends of colorectal cancer in Serbia between 1991 and 2010, prior to the introduction of population-based screening. Joinpoint regression analysis was used to estimate the average annual percent change (AAPC) with the corresponding 95% confidence interval (CI). Furthermore, age-period-cohort analysis was performed to examine the effects of birth cohort and calendar period on the observed temporal trends. We observed a significantly increased trend in colorectal cancer mortality in Serbia during the study period (AAPC = 1.6%, 95% CI 1.3%-1.8%). Colorectal cancer showed an increased mortality trend in both men (AAPC = 2.0%, 95% CI 1.7%-2.2%) and women (AAPC = 1.0%, 95% CI 0.6%-1.4%). The temporal trend of colorectal cancer mortality was significantly affected by birth cohort (P < 0.05), whereas the calendar period did not significantly affect the trend (P = 0.072). Colorectal cancer mortality increased for the first several birth cohorts in Serbia (from 1916 to 1955), followed by a downward turn for people born after the 1960s. According to the comparability test, the overall mortality trends for colon cancer and for rectal and anal cancer were not parallel (the final selected model rejected parallelism, P < 0.05). We found that colorectal cancer mortality in Serbia increased considerably over the past two decades. Mortality increased particularly in men, but the trends differed by age group and subsite. In Serbia, interventions to reduce the colorectal cancer burden are needed, especially the implementation of a national screening program, as well as treatment improvements and measures to encourage the adoption of a healthy lifestyle.
Directory of Open Access Journals (Sweden)
Yu-Kang Tu
2011-04-01
Full Text Available Due to a problem of identification, how to estimate the distinct effects of age, time period and cohort has been a controversial issue in the analysis of trends in health outcomes in epidemiology. In this study, we propose a novel approach, partial least squares (PLS) analysis, to separate the effects of age, period, and cohort. Our example for illustration is taken from the Glasgow Alumni cohort. A total of 15,322 students (11,755 men and 3,567 women) received medical screening at Glasgow University between 1948 and 1968. The aim is to investigate the secular trends in blood pressure across the birth years 1925 to 1950 while taking into account the year of examination and age at examination. We excluded students born before 1925 or aged over 25 years at examination, and those with missing values in confounders, resulting in 12,546 and 12,516 students for the analysis of systolic and diastolic blood pressure, respectively. PLS analysis shows that both systolic and diastolic blood pressure increased with students' age, and students born later had on average lower blood pressure (SBP: -0.17 mmHg per year [95% confidence interval: -0.19 to -0.15] for men and -0.25 [-0.28 to -0.22] for women; DBP: -0.14 [-0.15 to -0.13] for men and -0.09 [-0.11 to -0.07] for women). PLS also shows a decreasing trend in blood pressure over the examination period. As identification is not a problem for PLS, it provides a flexible modelling strategy for age-period-cohort analysis. More emphasis is then required to clarify the substantive and conceptual issues surrounding the definitions and interpretations of age, period and cohort effects.
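A one-component PLS1 fit on simulated data gives the flavour of why identification is not a problem for PLS: the latent score is built from covariances with the outcome, so the exact collinearity cohort = period - age never requires inverting a singular matrix. All variable ranges and coefficients below are hypothetical, not the Glasgow Alumni values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(18, 25, n)
period = rng.uniform(1948, 1968, n)
cohort = period - age                 # exact collinearity: OLS cannot separate these
X = np.column_stack([age, period, cohort])
Xc = X - X.mean(axis=0)
# Hypothetical "SBP": rises with age, falls for later birth cohorts.
y = 100 + 0.8 * age - 0.2 * (cohort - 1930) + rng.normal(0, 5, n)
yc = y - y.mean()

# One-component PLS1 (NIPALS): the weight vector is the X'y covariance
# direction, so perfectly collinear columns are no obstacle to estimation.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                            # latent score
q = (t @ yc) / (t @ t)                # regress y on the score
coefs = q * w                         # coefficients on the original (centred) variables
print(dict(zip(["age", "period", "cohort"], coefs.round(3))))
```

With ordinary least squares, X'X here is singular and the three effects cannot be separated; PLS instead distributes the predictive signal across the collinear columns, which is why, as the abstract notes, the substantive interpretation of the resulting age, period and cohort coefficients still needs care.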
Rothacker, Karen M; Brown, Suzanne J; Hadlow, Narelle C; Wardrop, Robert; Walsh, John P
2016-03-01
The TSH-T4 relationship was thought to be inverse log-linear, but recent cross-sectional studies report a complex, nonlinear relationship; large, intra-individual studies are lacking. Our objective was to analyze the TSH-free T4 relationship within individuals. We analyzed data from 13 379 patients, each with six or more TSH/free T4 measurements and at least a 5-fold difference between individual median TSH and minimum or maximum TSH. Linear and nonlinear regression models of log TSH on free T4 were fitted to data from individuals and goodness of fit compared by likelihood ratio testing. Comparing all models, the linear model achieved best fit in 31% of individuals, followed by quartic (27%), cubic (15%), null (12%), and quadratic (11%) models. After eliminating least favored models (with individuals reassigned to best fitting, available models), the linear model fit best in 42% of participants, quartic in 43%, and null model in 15%. As the number of observations per individual increased, so did the proportion of individuals in whom the linear model achieved best fit, to 66% in those with more than 20 observations. When linear models were applied to all individuals and averaged according to individual median free T4 values, variations in slope and intercept indicated a nonlinear log TSH-free T4 relationship across the population. The log TSH-free T4 relationship appears linear in some individuals and nonlinear in others, but is predominantly linear in those with the largest number of observations. A log-linear relationship within individuals can be reconciled with a non-log-linear relationship in a population.
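The within-individual model comparison can be sketched as nested polynomial fits of log TSH on free T4 compared by likelihood ratio. The data below are simulated for a single hypothetical individual with a truly linear relationship; the ranges and coefficients are invented, not drawn from the study's patients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
ft4 = rng.uniform(8, 25, n)                           # free T4 (hypothetical units)
log_tsh = 3.5 - 0.18 * ft4 + rng.normal(0, 0.15, n)   # truly linear individual

def gauss_loglik(resid):
    """Profile Gaussian log-likelihood given least-squares residuals."""
    m = resid.size
    s2 = resid @ resid / m
    return -0.5 * m * (np.log(2 * np.pi * s2) + 1)

ll = {}
for deg in range(5):                                  # null(0) .. quartic(4)
    coef = np.polyfit(ft4, log_tsh, deg)
    ll[deg] = gauss_loglik(log_tsh - np.polyval(coef, ft4))

# Likelihood-ratio test of linear vs. quadratic (one extra parameter):
lr = 2 * (ll[2] - ll[1])
print(f"LR linear->quadratic = {lr:.2f} (chi^2_1 critical value 3.84)")
```

For nested least-squares fits the log-likelihood never decreases with degree, so the LR statistic is non-negative; the fixed 3.84 cut-off is the chi-square critical value for one extra parameter at the 5% level.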
Directory of Open Access Journals (Sweden)
Irene O L Wong
Full Text Available BACKGROUND: With economic development and population aging, ischaemic heart disease (IHD) is becoming a leading cause of mortality with widening inequalities in China. To anticipate trends in China, we projected IHD trends in its most economically developed region, Hong Kong. METHODS: Based on sex-specific IHD mortality rates from 1976 to 2005, we projected mortality rates by neighborhood-level socio-economic position (i.e., low- or high-income groups) to 2020 in Hong Kong using Poisson age-period-cohort models with autoregressive priors. RESULTS: In the low-income group, age-standardized IHD mortality rates among women declined from 33.3 deaths per 100,000 in 1976-1980 to 19.7 per 100,000 in 2016-2020 (from 55.5 deaths to 34.2 per 100,000 among men). The rates in the high-income group were initially higher in both sexes, particularly among men, but this had reversed by the end of the study period. The rates declined faster for the high-income group than for the low-income group in both sexes and were projected to continue doing so, such that by the end of the projection period the high-income group would have lower IHD mortality rates, particularly among women. Birth cohort effects varied with sex, with a marked upturn in IHD mortality around 1945, i.e., for the first generation of men to grow up in a more economically developed environment. There was no such upturn in women. Birth cohort effects were the main drivers of change in IHD mortality rates. CONCLUSION: IHD mortality rates are declining in Hong Kong and are projected to continue to do so, even taking into account greater vulnerability for the first generation of men born into a more developed environment. At the same time, social disparities in IHD have reversed and are widening, partly as a result of a cohort effect, with corresponding implications for prevention.
Attell, Brandon K
2017-01-01
Several longitudinal studies show that over time the American public has become more approving of euthanasia and suicide for terminally ill persons. Yet, these previous findings are limited because they derive from biased estimates of disaggregated hierarchical data. Using insights from life course sociological theory and cross-classified logistic regression models, I better account for this liberalization process by disentangling the age, period, and cohort effects that contribute to longitudinal changes in these attitudes. The results of the analysis point toward a continued liberalization of both attitudes over time, although the magnitude of change was greater for suicide compared with euthanasia. More fluctuation in the probability of supporting both measures was exhibited for the age and period effects over the cohort effects. In addition, age-based differences in supporting both measures were found between men and women and various religious affiliations.
Rughiniș, Cosima; Humă, Bogdana
2015-12-01
In this paper we argue that quantitative survey-based social research essentializes age, through specific rhetorical tools. We outline the device of 'socio-demographic variables' and we discuss its argumentative functions, looking at scientific survey-based analyses of adult scientific literacy, in the Public Understanding of Science research field. 'Socio-demographics' are virtually omnipresent in survey literature: they are, as a rule, used and discussed as bundles of independent variables, requiring little, if any, theoretical and measurement attention. 'Socio-demographics' are rhetorically effective through their common-sense richness of meaning and inferential power. We identify their main argumentation functions as 'structure building', 'pacification', and 'purification'. Socio-demographics are used to uphold causal vocabularies, supporting the transmutation of the descriptive statistical jargon of 'effects' and 'explained variance' into 'explanatory factors'. Age can also be studied statistically as a main variable of interest, through the age-period-cohort (APC) disambiguation technique. While this approach has generated interesting findings, it did not mitigate the reductionism that appears when treating age as a socio-demographic variable. By working with age as a 'socio-demographic variable', quantitative researchers convert it (inadvertently) into a quasi-biological feature, symmetrical, as regards analytical treatment, with pathogens in epidemiological research. Copyright © 2015 Elsevier Inc. All rights reserved.
Su, Shih-Yung; Lee, Wen-Chung; Chen, Tzu-Ting; Wang, Hao-Chien; Su, Ta-Chen; Jeng, Jiann-Shing; Tu, Yu-Kang; Liao, Shu-Fen; Lu, Tzu-Pin; Chien, Kuo-Liong
2017-01-01
The aim of the 25 by 25 goal is to reduce premature mortality from non-communicable diseases by 25% before 2025. Studies have evaluated the 25 by 25 goal in many countries, but not in Taiwan. The aim of this study was to estimate progress toward the 25 by 25 goal for premature mortality from cardiovascular disease in Taiwan. We applied the age-period-cohort model to project the incidence of premature death from cardiovascular disease from 2015 to 2024 and used the population attributable fraction to estimate the contributions of targeted risk factors. The probability of death was used to estimate the percent change. Under the business-as-usual trend during 2010-2024, the risk of premature cardiovascular mortality among men would fall by only 6% (range 1.7-10.7%). The greatest reduction in mortality risk came from a 30% reduction in the prevalence of smoking; however, this yielded only a 14.5% (10.6-18.3%) decrease in percent change, corresponding to 3706 (range 3543-3868) men prevented from dying. A reduction of more than 25% in the percent change of premature cardiovascular mortality among women was achieved without controlling any risk factor. To reach a 25% reduction in men before 2025, a 70% reduction in the prevalence of smoking would be needed, reducing mortality by 26.2% (22.9-29.3%). Cigarette smoking is the primary target in the prevention of cardiovascular disease. Through stringent control of smoking, the goal of a 25% reduction in premature mortality from cardiovascular disease may be achieved before 2025 in Taiwan.
Directory of Open Access Journals (Sweden)
Carmen Saiz-Sánchez
1999-05-01
Full Text Available OBJECTIVE: To study the evolution of traffic accident mortality in Spain, its possible fit to an age-period-cohort model, and the effect of selected road safety measures. MATERIAL AND METHODS: Road accident mortality rates were obtained, together with rates in five-year age intervals for each sex, allowing the study of age-specific rates by birth cohort. Poisson regression models were fitted to determine the association between the selected road safety measures and mortality. RESULTS: Two waves emerged in the evolution of traffic accident mortality. There was no clear age effect, nor was there a cohort effect for either men or women. As to the road safety measures, the consistency between the selected models and the graphical results is discussed; the compulsory use of helmets and of crossing lights on motorcycles was significantly associated with reduced mortality (RR 0.73).
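A Poisson regression of this kind, with an exposure offset and an indicator for a road-safety measure, can be sketched as follows. The data are simulated with a built-in rate ratio of 0.73 to echo the abstract's helmet-law estimate; the intervention year, baseline rate and exposure are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1980, 2000)
helmet = (years >= 1992).astype(float)     # hypothetical intervention year
exposure = np.full(years.size, 1e6)        # person-years at risk
true_rate = 20e-6 * 0.73 ** helmet         # built-in RR = 0.73 after the measure
deaths = rng.poisson(true_rate * exposure)

# Poisson GLM with log link, fitted by IRLS:
#   log E[deaths] = log(exposure) + b0 + b1 * helmet
X = np.column_stack([np.ones_like(helmet), helmet])
offset = np.log(exposure)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(offset + X @ beta)
    z = X @ beta + (deaths - mu) / mu      # working response (offset removed)
    W = mu                                 # IRLS weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

rr = np.exp(beta[1])
print(f"estimated rate ratio after the measure: {rr:.2f}")
```

The loop is iteratively reweighted least squares (IRLS), the standard fitting algorithm for GLMs; exp(beta[1]) estimates the mortality rate ratio associated with the measure.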
Ananth, Cande V.; Keyes, Katherine M.; Hamilton, Ava; Gissler, Mika; Wu, Chunsen; Liu, Shiliang; Luque-Fernandez, Miguel Angel; Skjaerven, Rolv; Williams, Michelle A.; Tikkanen, Minna; Cnattingius, Sven
2015-01-01
Background. Although rare, placental abruption is implicated in disproportionately high rates of perinatal morbidity and mortality. Understanding geographic and temporal variations may provide insights into possible amenable factors of abruption. We examined abruption frequencies by maternal age, delivery year, and maternal birth cohorts over three decades across seven countries. Methods. Women that delivered in the US (n = 863,879; 1979–2010), Canada (4 provinces, n = 5,407,463; 1982–2011), ...
Directory of Open Access Journals (Sweden)
Masaru Yokoe
2009-03-01
Full Text Available This paper proposes a method to quantitatively measure and evaluate finger tapping movements for the assessment of motor function using log-linearized Gaussian mixture networks (LLGMNs). First, finger tapping movements are measured using magnetic sensors, and eleven indices are computed for evaluation. After standardizing these indices against those of normal subjects, they are input to LLGMNs to assess motor function. Motor ability is then probabilistically discriminated as normal or not using a classifier that combines the outputs of multiple LLGMNs based on bagging and entropy. This paper reports on evaluation and discrimination experiments performed on finger tapping movements in 33 Parkinson's disease (PD) patients and 32 normal elderly subjects. The results showed that the patients could be classified correctly in terms of their impairment status with a high degree of accuracy (average rate: 93.1 ± 3.69%) using 12 LLGMNs, which was about 5% higher than the results obtained using a single LLGMN.
Cohen, Dale J; Quinlan, Philip T
2018-02-01
The bounded number-line task has been used extensively to assess the numerical competence of both children and adults. One consistent finding has been that young children display a logarithmic response function, whereas older children and adults display a more linear response function. Traditionally, these log-linear functions have been interpreted as providing a transparent window onto the nature of the participants' psychological representations of quantity (termed here a direct response strategy). Here we show that the direct response strategy produces the log-linear response function regardless of whether the psychological representation of quantity is compressive or expansive. Simply put, the log-linear response function results from task constraints rather than from the psychological representation of quantities. We also demonstrate that a proportion/subtraction response strategy produces response patterns that almost perfectly correlate with the psychological representation of quantity. We therefore urge researchers not to interpret the log-linear response pattern in terms of numerical representation.
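The competing response functions at issue can be compared on toy data as follows. The "child" estimates below are simulated from a logarithmic response rule with added noise (all numbers hypothetical); note the abstract's caution that a better-fitting logarithmic function need not reveal a logarithmic psychological representation:

```python
import numpy as np

rng = np.random.default_rng(4)
upper = 100                                    # 0-100 bounded number line
targets = np.array([2, 5, 9, 14, 23, 38, 54, 71, 86, 97], dtype=float)
# Hypothetical "young child" estimates that compress large numbers:
est = upper * np.log(targets) / np.log(upper) + rng.normal(0, 4, targets.size)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Linear response function: est = a + b * target
b, a = np.polyfit(targets, est, 1)
r2_lin = r2(est, a + b * targets)
# Logarithmic response function: est = a + b * ln(target)
bl, al = np.polyfit(np.log(targets), est, 1)
r2_log = r2(est, al + bl * np.log(targets))
print(f"R^2 linear = {r2_lin:.3f}, logarithmic = {r2_log:.3f}")
```

The logarithmic function fits these data better by construction; the paper's point is precisely that such a fit can arise from task constraints or response strategy rather than from the underlying representation of quantity.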
Directory of Open Access Journals (Sweden)
Francisco Franco-Marina
2009-01-01
Full Text Available OBJECTIVE: To assess the age, period and cohort effects on breast cancer (BC) mortality in Mexico. MATERIAL AND METHODS: Age, period and cohort curvature trends for BC mortality were estimated through the Poisson regression model proposed by Holford. RESULTS: Nationally, BC death rates have leveled off since 1995 in most age groups. BC mortality trends in Mexico are mainly determined by birth cohort and age effects. Women born between 1940 and 1955 show the highest rate of increase in BC mortality; women born afterwards still show an increasing trend, but at a much lower rate. Mammography and adjuvant therapy have had a limited impact on mortality. Potential reasons for the observed patterns are discussed, and an increase in BC mortality in Mexico is expected in the following decades. CONCLUSIONS: Mammography screening programs and timely access to effective treatment should be a national priority to reverse the expected increase in BC mortality.
Marginal Models for Categorical Data
Bergsma, W.P.; Rudas, T.
2002-01-01
Statistical models defined by imposing restrictions on marginal distributions of contingency tables have received considerable attention recently. This paper introduces a general definition of marginal log-linear parameters and describes conditions for a marginal log-linear parameter to be a smooth
Braak, ter Cajo J.F.
2017-01-01
Ecologists wish to understand the role of traits of species in determining where each species occurs in the environment. For this, they wish to detect associations between species traits and environmental variables from three data tables, species count data from sites with associated
Recent hip fracture trends in Sweden and Denmark with age-period-cohort effects
DEFF Research Database (Denmark)
Rosengren, B E; Björk, J; Cooper, C
2017-01-01
This study used nationwide hip fracture data from Denmark and Sweden during 1987-2010 to examine effects of (birth) cohort and period. We found that time trends, cohort, and period effects were different in the two countries. Results also indicated that hip fracture rates may increase in the not ...
Age-period-cohort effect of adolescent smoking in Korea: from 2006-2016
Directory of Open Access Journals (Sweden)
Heewon Kang
2018-03-01
Efforts to reduce tobacco use among adolescents appear to be playing a substantial role in reducing the prevalence of current smoking and ever smoking. Ongoing surveillance of trends in adolescent cigarette smoking is essential to implement effective tobacco control programs.
Modelling BSE trend over time in Europe, a risk assessment perspective
Ducrot, C.; Sala, C.; Ru, G.; Koeijer, de A.A.; Sheridan, H.; Saegerman, C.; Selhorst, T.; Arnold, M.; Polak, M.P.; Calavas, D.
2010-01-01
BSE is a zoonotic disease that caused the emergence of variant Creuzfeldt-Jakob disease in the mid 1990s. The trend of the BSE epidemic in seven European countries was assessed and compared, using Age-Period-Cohort and Reproduction Ratio modelling applied to surveillance data 2001-2007. A strong
Basic concepts and principles of constructing age-period-cohort models
Czech Academy of Sciences Publication Activity Database
Reissigová, Jindra; Rychtaříková, J.
2015-01-01
Vol. 57, No. 1 (2015), pp. 21-39. ISSN 0011-8265. Grant - others: GA ČR(CZ) GAP404/12/0883. Institutional support: RVO:67985807. Keywords: Lexis diagram * age-period-cohort * identification problem * generalised linear model * prediction * male mortality * Czech Republic. Subject RIV: BB - Applied Statistics, Operational Research
Directory of Open Access Journals (Sweden)
Chun-Hsiao Chu
2017-01-01
Full Text Available Externality is an important issue in formulating regulation policy for a taxi market. However, this issue is rarely taken into account in current policy-making, and it has not been adequately explored in prior research. This study extends the model proposed by Chang and Chu in 2009 with the aim of exploring the effect of externality on the optimization of regulation policy for a cruising taxi market. A closed-form solution for optimizing the fare, vacancy rate, and subsidy of the market is derived. The results show that when the externality of taxi trips is taken into consideration, the optimal vacancy rate should be lower, and the subsidy higher, than under current conditions where externality is not considered. The sensitivity analysis of the occupied and vacant distances indicates that the marginal external cost is more sensitive to the vacant distance than to the occupied distance. The sensitivity analysis of the subsidy shows a negative relationship between the marginal external cost and the optimal subsidy.
On Combining Language Models: Oracle Approach
National Research Council Canada - National Science Library
Hacioglu, Kadri; Ward, Wayne
2001-01-01
In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
Directory of Open Access Journals (Sweden)
Jing Chen
Full Text Available BACKGROUND: Chronic obstructive pulmonary disease (COPD is a leading cause of death, particularly in developing countries. Little is known about the effects of economic development on COPD mortality, although economic development may potentially have positive and negative influences over the life course on COPD. We took advantage of a unique population whose rapid and recent economic development is marked by changes at clearly delineated and identifiable time points, and where few women smoke, to examine the effect of macro-level events on COPD mortality. METHODS: We used Poisson regression to decompose sex-specific COPD mortality rates in Hong Kong from 1981 to 2005 into the effects of age, period and cohort. RESULTS: COPD mortality declined strongly over generations for people born from the early to mid 20th century, which was particularly evident for the first generation to grow up in a more economically developed environment for both sexes. Population wide COPD mortality decreased when air quality improved and increased with increasing air pollution. COPD mortality increased with age, particularly after menopause among women. CONCLUSIONS: Economic development may reduce vulnerability to COPD by reducing long-lasting insults to the respiratory system, such as infections, poor nutrition and indoor air pollution. However, some of these gains may be offset if economic development results in increasing air pollution or increasing smoking.
An R package for fitting age, period and cohort models
Directory of Open Access Journals (Sweden)
Adriano Decarli
2014-11-01
Full Text Available In this paper we present the R implementation of a GLIM macro which fits the age-period-cohort model following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.
In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data
Yi, Yeon-Sook
2017-01-01
This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…
DETERMINING THE FUNCTIONAL FORM OF AN EMPIRICAL MODEL: A CASE STUDY OF THE DEMAND FOR NEW FOUR-WHEELED VEHICLES
Directory of Open Access Journals (Sweden)
Andryan Setyadharma
2012-01-01
Full Text Available In many cases, the determination of the form of the regression function of an empirical model, between the linear model and the log-linear model, is neglected when someone starts research. Researchers often conclude which model is best simply by comparing the R2 values of the respective functional forms and choosing the form with the highest R2. This is clearly wrong. This study attempted to find the best regression function model by using two kinds of tests: the MacKinnon, White and Davidson test (MWD test) and the Bera and McAleer test (B-M test). This study showed that both forms of the empirical function model, the linear and the log-linear functions, could be used to estimate the demand for new four-wheeled vehicles in Indonesia. Furthermore, checking the classical assumptions, we found that the log-linear function model is the best model to estimate the demand for new four-wheeled vehicles in Indonesia. Keywords: empirical model, linear model, log-linear model
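A rough sketch of the MWD idea in pure Python. The data and all names are hypothetical, and a real application would also compute a t-statistic for the auxiliary coefficient rather than just its point estimate:

```python
import math

def ols(X, y):
    """Least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            v[r] -= f * v[i]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (v[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical data generated by an exactly linear process y = 2 + 3x.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2 + 3 * xi for xi in x]

a, b_lin = ols([[1.0, xi] for xi in x], y)                  # linear fit
a_ll, b_ll = ols([[1.0, math.log(xi)] for xi in x],
                 [math.log(yi) for yi in y])                # log-linear fit

# MWD auxiliary variable: Z1 = ln(linear fitted values) - log-linear fit.
z1 = [math.log(a + b_lin * xi) - (a_ll + b_ll * math.log(xi)) for xi in x]

# Augment the linear model with Z1; a significant Z1 coefficient would
# count against the linear form in favour of the log-linear one.
_, _, c_z1 = ols([[1.0, xi, zi] for xi, zi in zip(x, z1)], y)
```

Because y here is generated by an exactly linear process, the coefficient on Z1 in the augmented regression comes out essentially zero, i.e. the linear form is not rejected.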
Wong, IOL; Schooling, CM; Cowling, BJ; Leung, GM
2013-01-01
Background: With economic development and population aging, ischaemic heart disease (IHD) is becoming a leading cause of mortality with widening inequalities in China. To forewarn of the trends in China, we projected IHD trends in the most economically developed part of China, i.e., Hong Kong. Methods: Based on sex-specific IHD mortality rates from 1976 to 2005, we projected mortality rates by neighborhood-level socio-economic position (i.e., low- or high-income groups) to 2020 in Hong Kong using Po...
Kelderman, Henk
1991-01-01
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
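Iterative proportional fitting, which the abstract modifies to work on minimal sufficient statistics, can be illustrated for the simplest log-linear model, two-way independence, in pure Python. The contingency table below is made up:

```python
# Observed 2 x 3 contingency table (hypothetical counts).
obs = [[10.0, 20.0, 30.0],
       [20.0, 10.0, 10.0]]

def ipf_independence(obs, iters=50):
    """Iterative proportional fitting of the independence model [A][B]:
    alternately rescale rows, then columns, to match observed margins."""
    nr, nc = len(obs), len(obs[0])
    fit = [[1.0] * nc for _ in range(nr)]
    row_m = [sum(r) for r in obs]
    col_m = [sum(r[j] for r in obs) for j in range(nc)]
    for _ in range(iters):
        for i in range(nr):                      # match row margins
            s = sum(fit[i])
            fit[i] = [v * row_m[i] / s for v in fit[i]]
        for j in range(nc):                      # match column margins
            s = sum(r[j] for r in fit)
            for i in range(nr):
                fit[i][j] *= col_m[j] / s
    return fit

fit = ipf_independence(obs)
```

For the independence model the fitted cell (i, j) converges to row_i x col_j / N; IPF earns its keep on higher-way tables with interaction terms, where no closed form exists.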
Economic policy optimization based on both one stochastic model and the parametric control theory
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of mappings specified by the model, and by estimating the degree of coincidence of the effects of internal and external shocks on macroeconomic indicators between the estimated nonlinear model and its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
Directory of Open Access Journals (Sweden)
Edgar F. Vargas
2007-01-01
Full Text Available The deviations observed in the solubility of ibuprofen (IBP) and naproxen (NAP) in propylene glycol (PG) + water (W) cosolvent mixtures with respect to the logarithmic-linear model proposed by Yalkowsky have been analyzed at 25.00 ± 0.05 ºC. Negative deviations were obtained at all cosolvent compositions for both drugs; they were greater for IBP. Another treatment, based on Gibbs free energy relationships, was also employed, showing an apparent chameleonic hydrophobicity effect: at low PG proportions NAP is more hydrophobic, whereas at high PG proportions IBP is more hydrophobic. The results are discussed in terms of solute-solvent and solvent-solvent interactions.
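Yalkowsky's log-linear mixing rule referenced above is simple enough to state in code; the solubility values here are hypothetical, not the paper's data:

```python
import math

# Hypothetical solubilities (mol/L) in pure water and pure propylene glycol.
s_water, s_pg = 1e-4, 1e-2

def yalkowsky_loglinear(f_pg):
    """Log-linear model: log10 S_mix = f * log10 S_pg + (1 - f) * log10 S_w,
    where f is the volume fraction of cosolvent."""
    return 10 ** (f_pg * math.log10(s_pg) + (1 - f_pg) * math.log10(s_water))

def deviation(observed, f_pg):
    """Deviation of an observed solubility from the log-linear prediction;
    a negative value means the mixture dissolves less than predicted."""
    return math.log10(observed) - math.log10(yalkowsky_loglinear(f_pg))
```

The negative deviations reported in the abstract correspond to `deviation(...) < 0` across all cosolvent compositions.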
SOME PROPERTIES OF GEOMETRIC DEA MODELS
Directory of Open Access Journals (Sweden)
Ozren Despić
2013-02-01
Full Text Available Some specific geometric data envelopment analysis (DEA) models are well known to researchers in DEA through the so-called multiplicative or log-linear efficiency models. Valuable properties of these models were noted by several authors, but the models still remain somewhat obscure and rarely used in practice. The purpose of this paper is to show from a mathematical perspective where geometric DEA fits in relation to classical DEA, and to provide a brief overview of some benefits of using geometric DEA in the practice of decision making and/or efficiency measurement.
A Knife-Edge Property of Some Pollution-and-Growth Models
Eriksson, Clas
2008-01-01
In some recent economic growth models there can be decreasing pollution along with increasing per capita income, if the rate of improvement in the environmental technology is sufficiently high. A central function describes how gross pollution and environmental technology interact to determine net pollution, which in previous works has a log-linear form. This letter provides an example in which this function is generalized to a CES type. The result is that the environmental technology factor...
Speaks, Crystal; McGlynn, Katherine A; Cook, Michael B
2012-10-01
The current working model of type II testicular germ cell tumor (TGCT) pathogenesis states that carcinoma in situ arises during embryogenesis, is a necessary precursor, and always progresses to cancer. An implicit condition of this model is that only in utero exposures affect the development of TGCT in later life. In an age-period-cohort analysis, this working model contends an absence of calendar period deviations. We tested this contention using data from the SEER registries of the United States. We assessed age-period-cohort models of TGCTs, seminomas, and nonseminomas for the period 1973-2008. Analyses were restricted to whites diagnosed at ages 15-74 years. We tested whether calendar period deviations were significant in TGCT incidence trends adjusted for age deviations and cohort effects. This analysis included 32,250 TGCTs (18,475 seminomas and 13,775 nonseminomas). Seminoma incidence trends have increased with an average annual percentage change in log-linear rates (net drift) of 1.25%, relative to just 0.14% for nonseminoma. In more recent time periods, TGCT incidence trends have plateaued and then undergone a slight decrease. Calendar period deviations were highly statistically significant in models of TGCT (p = 1.24 × 10^-9) and seminoma (p = 3.99 × 10^-14), after adjustment for age deviations and cohort effects; results for nonseminoma (p = 0.02) indicated that the effects of calendar period were much more muted. Calendar period deviations play a significant role in incidence trends of TGCT, which indicates that postnatal exposures are etiologically relevant.
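The "net drift" quoted above, the average annual percentage change in log-linear rates, can be estimated from a rate series as the least-squares slope of log rate on calendar year; the rates below are invented for illustration:

```python
import math

# Hypothetical age-standardized rates per 100,000 for successive years.
years = [2000, 2001, 2002, 2003, 2004]
rates = [5.00, 5.06, 5.13, 5.19, 5.25]

def net_drift(years, rates):
    """Least-squares slope of log(rate) on year, expressed as an
    estimated annual percentage change: 100 * (exp(slope) - 1)."""
    n = len(years)
    my = sum(years) / n
    ml = sum(math.log(r) for r in rates) / n
    slope = (sum((y - my) * (math.log(r) - ml) for y, r in zip(years, rates))
             / sum((y - my) ** 2 for y in years))
    return 100 * (math.exp(slope) - 1)
```

With the rates above the estimate is about 1.2% per year, the same order as the seminoma net drift reported in the abstract.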
DEFF Research Database (Denmark)
Hereu, A.; Dalgaard, Paw; Garriga, M.
2012-01-01
High pressure (HP) inactivation curves of Listeria monocytogenes CTC1034 (ca. 10^7 CFU/g) on sliced RTE cooked meat products (ham and mortadella) were obtained at pressures from 300 to 800 MPa. A clear tail shape was observed at pressures above 450 MPa and the log-linear with tail primary model...... provided the best fit to the HP-inactivation kinetics. The relationships between the primary kinetic parameters (log kmax and log Nres) and pressure treatments were described by a polynomial secondary model. To estimate HP-inactivation of L. monocytogenes in log (N/N0) over time, a one-step global fitting......
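A minimal sketch of a log-linear-with-tail primary inactivation model of the kind this abstract fits (a Geeraerd-style form; the parameter values below are hypothetical, not the paper's estimates):

```python
import math

def log_linear_tail(t, log_n0, log_nres, kmax):
    """Log-linear-with-tail survival curve:
    N(t) = (N0 - Nres) * exp(-kmax * t) + Nres, returned as log10 N.
    log_n0:   initial population (log10 CFU/g)
    log_nres: residual tail level (log10 CFU/g)
    kmax:     first-order inactivation rate (1/min)"""
    n0, nres = 10 ** log_n0, 10 ** log_nres
    return math.log10((n0 - nres) * math.exp(-kmax * t) + nres)
```

Early on, the curve falls log-linearly at slope -kmax/ln(10); at long times it flattens onto the tail at log_nres, reproducing the tailing reported above 450 MPa.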
Directory of Open Access Journals (Sweden)
Yohei Ban
2008-12-01
Full Text Available For 2 x 2 x K contingency tables, Tomizawa considered a Shannon entropy type measure to represent the degree of departure from a log-linear model of no three-factor interaction (the NOTFI model). This paper proposes a generalization of Tomizawa's measure for 2 x 2 x K tables. The proposed measure is expressed using the Patil-Taillie diversity index or the Cressie-Read power-divergence. A special case of the proposed measure includes Tomizawa's measure. The proposed measure would be useful for comparing the degrees of departure from the NOTFI model in several tables.
Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.
Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth
2017-05-04
Dengue fever is one of the infectious diseases that is still a public health problem in Thailand. This study considers in detail the geographic, seasonal and temporal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015. A cross-sectional study. The data for the study were from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on the transmission of dengue were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model showed that dengue fever is high in the second quarter of every year, from May to August. There was evidence of an increase in the trend of dengue annually from 2003 to 2015. There was a difference in the distribution of dengue fever within and between provinces. The areas of high risk were the central and southern regions of Thailand. The log-linear model provided a simple medium for modeling dengue fever transmission. The results are very important for understanding the geographic distribution of dengue fever patterns.
A note on modeling road accident frequency: a flexible elasticity model.
Couto, António; Ferreira, Sara
2011-11-01
Count data models and their variants have been widely applied in accident modeling. The traditional log-linear function is used to represent the relationship between explanatory variables and the dependent variable (accident frequency). However, this function assumes constant elasticity for the estimation parameters, which is a limitation in the analysis of the effects of explanatory variables on accident risk. Although interaction effects between explanatory variables have been studied in the road safety context (where they are normally assessed by logistic regression), no one has yet examined the possibility of using a flexible function form allowing non-constant elasticity values. This paper seeks to explore the use of the translog function usually used in the economics context to allow the elasticity to vary with the values of other explanatory variables. Therefore, the objective of this study was to evaluate the application of the translog function to accident modeling and to compare the results with those of the traditional log-linear function negative binomial (NB) model. The results show that, in terms of goodness-of-fit statistics and residual analysis, the NB model with the translog function performs better than the traditional NB model. Additional evaluations in terms of predictive performance, hotspot identification and uncertainty associated with the estimated values were taken into account. Although this study is exploratory in nature, it suggests that the translog function has considerable potential for modeling accident observations. It is hoped that this novel accident modeling methodology will open the door to the reliable interpretation and evaluation of the influence of explanatory variables on accident frequency. Copyright © 2011 Elsevier Ltd. All rights reserved.
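The key contrast the abstract draws, constant elasticity under the log-linear form versus covariate-dependent elasticity under the translog form, can be made concrete; the coefficients below are invented:

```python
import math

# Hypothetical translog mean function:
# ln(mu) = b0 + b1*ln(x1) + b2*ln(x2) + b12*ln(x1)*ln(x2)
b0, b1, b2, b12 = 0.1, 0.8, 0.3, -0.05

def elasticity_x1(x2):
    """Elasticity of mu with respect to x1 in the translog form:
    d ln(mu) / d ln(x1) = b1 + b12 * ln(x2).
    It varies with x2; in the plain log-linear form (b12 = 0)
    it would be the constant b1."""
    return b1 + b12 * math.log(x2)

# The elasticity shrinks as the interacting covariate grows.
e_low, e_high = elasticity_x1(2.0), elasticity_x1(8.0)
```

This is exactly the flexibility the translog negative binomial model exploits: the marginal effect of, say, traffic volume on accident frequency is allowed to depend on the levels of other explanatory variables.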
Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.
2012-10-01
Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and surface micro discharge (SMD) technology. In this study, FlatPlaSter2.0 at different time intervals (0.5 to 5 min) is used for microbial inactivation. There is a continuous demand for deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced inactivation of naturally growing surface microorganisms on seeds. The data were assessed with log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance for the data. No shoulder or tail was observed. The models are compared in terms of the number of log-cycle reductions rather than classical D-values, with statistical measures. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the percentage of germination increasing from 60.83% to 89.16% compared to the control. This result suggests the advantage and promising role of CAP in the food industry.
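The Weibull survival-curve model that fitted best in this study has a compact closed form (Mafart parameterization; the parameter values used below are illustrative, not fitted):

```python
def weibull_log_survivors(t, delta, p):
    """Weibull survival curve (Mafart parameterization):
    log10(N/N0) = -(t / delta) ** p
    delta: time for the first log10 reduction; p: shape parameter.
    p < 1 gives concave 'tailing', p > 1 convex 'shouldering',
    and p = 1 recovers the classical log-linear curve."""
    return -((t / delta) ** p)
```

Because the abstract reports neither shoulder nor tail, the fitted shape parameter would be expected to sit close to 1, where the Weibull curve degenerates to the log-linear one.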
The gRbase Package for Graphical Modelling in R
DEFF Research Database (Denmark)
Højsgaard, Søren; Dethlefsen, Claus
We have developed a package, called gRbase, consisting of a number of classes and associated methods to support the analysis of data using graphical models. It is developed for the open source language, R, and is available for several platforms. The package is intended to be widely extendible and flexible so that package developers may implement further types of graphical models using the available methods. gRbase contains methods for representing data, specification of models using a formal language, and is linked to dynamicGraph, an interactive graphical user interface for manipulating graphs. We show how these building blocks can be combined and integrated with inference engines in the special cases of hierarchical log-linear models (undirected models).
Modelling BSE trend over time in Europe, a risk assessment perspective.
Ducrot, Christian; Sala, Carole; Ru, Giuseppe; de Koeijer, Aline; Sheridan, Hazel; Saegerman, Claude; Selhorst, Thomas; Arnold, Mark; Polak, Miroslaw P; Calavas, Didier
2010-06-01
BSE is a zoonotic disease that caused the emergence of variant Creutzfeldt-Jakob disease in the mid 1990s. The trend of the BSE epidemic in seven European countries was assessed and compared, using Age-Period-Cohort and Reproduction Ratio modelling applied to surveillance data from 2001-2007. A strong decline in BSE risk was observed for all countries that applied control measures during the 1990s, starting at different points in time in the different countries. Results were compared with the type and date of the BSE control measures implemented between 1990 and 2001 in each country. Results show that a ban on the feeding of meat and bone meal (MBM) to cattle alone was not sufficient to eliminate BSE. The fading out of the epidemic started shortly after the complementary measures targeted at controlling the risk in MBM. Given the long incubation period, it is still too early to estimate the additional effect of the ban on the feeding of animal protein to all farm animals that started in 2001. These results provide new insights into the risk assessment of BSE for cattle and humans, which will be especially useful in the context of possibly relaxing BSE surveillance and control measures.
Directory of Open Access Journals (Sweden)
Mihaela Simionescu
2014-12-01
Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations models, dynamic models and log-linear models. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero-weight and equal-weight predictions and naïve forecasts.
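The shrinkage combination described above can be caricatured in one line: a weight lam pulls the model forecast toward the experts' prior, and letting lam grow without bound recovers the prior. This functional form is a deliberate simplification for illustration, not the paper's estimator:

```python
def shrink_forecast(model_fc, expert_fc, lam):
    """Precision-style compromise between a model forecast and an
    expert prior; lam is the shrinkage weight on the prior.
    lam = 0 returns the model forecast unchanged;
    lam -> infinity returns the expert prior."""
    return (model_fc + lam * expert_fc) / (1 + lam)
```

The abstract's finding that accuracy improves "when the shrinkage parameter tends to infinity" corresponds to the large-lam limit, where the combined forecast collapses onto the experts' prediction.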
Realized GARCH: A Complete Model of Returns and Realized Measures of Volatility
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Huang, Zhuo (Albert); Shek, Howard Howan
GARCH models have been successful in modeling financial returns. Still, much is to be gained by incorporating a realized measure of volatility in these models. In this paper we introduce a new framework for the joint modeling of returns and realized measures of volatility. The Realized GARCH framework nests most GARCH models as special cases and is, in many ways, a natural extension of standard GARCH models. We pay special attention to linear and log-linear Realized GARCH specifications. This class of models has several attractive features. It retains the simplicity and tractability...... to latent volatility. This equation facilitates a simple modeling of the dependence between returns and future volatility that is commonly referred to as the leverage effect. An empirical application with DJIA stocks and an exchange traded index fund shows that a simple Realized GARCH structure leads......
Sakashita, Tetsuya; Hamada, Nobuyuki; Kawaguchi, Isao; Ouchi, Noriyuki B; Hara, Takamitsu; Kobayashi, Yasuhiko; Saito, Kimiaki
2013-01-01
Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to this linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit performed better in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components in the early (probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
Progressive diseases study using Markov's multiple stage models
Directory of Open Access Journals (Sweden)
René Iral Palomino, Esp estadística
2005-12-01
Full Text Available Risk factors and their degree of association with a progressive disease, such as Alzheimer's disease or liver cancer, can be identified by using epidemiological models; some examples of these models include logistic and Poisson regression, log-linear, linear regression, and mixed models. Using models that take into account not only the different health states that a person could experience between visits but also his/her characteristics (i.e., age, gender, genetic traits, etc.) seems to be reasonable and justified. In this paper we discuss a methodology to estimate the effect of covariates that could be associated with a disease when its progression or regression can be idealized by means of a multi-state model that incorporates the longitudinal nature of the data. This method is based on the Markov property and it is illustrated using simulated data about Alzheimer's disease. Finally, the merits and limitations of this method are discussed.
The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.
von Davier, Matthias
2014-02-01
The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
The end of the decline in cervical cancer mortality in Spain: trends across the period 1981-2012.
Cervantes-Amat, Marta; López-Abente, Gonzalo; Aragonés, Nuria; Pollán, Marina; Pastor-Barriuso, Roberto; Pérez-Gómez, Beatriz
2015-04-15
In Spain, cervical cancer prevention is based on opportunistic screening, due to the disease's traditionally low incidence and mortality rates. Changes in sexual behaviour, tourism and migration have, however, modified the probability of exposure to human papilloma virus among Spaniards. This study thus sought to evaluate recent cervical cancer mortality trends in Spain. We used annual female population figures and individual records of deaths certified as cancer of the cervix, reclassifying deaths recorded as unspecified uterine cancer to correct coding quality problems. Joinpoint models were fitted to estimate change points in trends, as well as the annual (APC) and average annual percentage change. Log-linear Poisson models were also used to study age-period-cohort effects on mortality trends and their change points. 1981 marked the beginning of a decline in cervical cancer mortality (APC 1981-2003: -3.2; 95% CI: -3.4, -3.0) that ended in 2003, with rates reaching a plateau in the last decade (APC 2003-2012: 0.1; 95% CI: -0.9, 1.2). This trend, which was observable among women aged 45-46 years (APC 2003-2012: 1.4; 95% CI: -0.1, 2.9) and over 65 years (APC 2003-2012: -0.1; 95% CI: -1.9, 1.7), was clearest in Spain's Mediterranean and Southern regions. The positive influence of opportunistic screening is not strong enough to further reduce cervical cancer mortality rates in the country. Our results suggest that the Spanish Health Authorities should reform current prevention programmes and surveillance strategies in order to confront the challenges posed by cervical cancer.
Directory of Open Access Journals (Sweden)
Igor Shuryak
2017-12-01
Full Text Available Recent technological advances allow precise radiation delivery to tumor targets. As opposed to more conventional radiotherapy, where multiple small fractions are given, in some cases the preferred course of treatment may involve only a few (or even one) large dose(s) per fraction. Under these conditions, the choice of appropriate radiobiological model complicates the tasks of predicting radiotherapy outcomes and designing new treatment regimens. The most commonly used model for this purpose is the venerable linear-quadratic (LQ) formalism as it applies to cell survival. However, predictions based on the LQ model are frequently at odds with data following very high acute doses. In particular, although the LQ model predicts a continuously bending dose-response relationship for the logarithm of cell survival, empirical evidence over the high-dose region suggests that the survival response is instead log-linear with dose. Here, we show that the distribution of lethal chromosomal lesions among individual human cells (lymphocytes and fibroblasts) exposed to gamma rays and X rays is somewhat overdispersed, compared with the Poisson distribution. Further, we show that such overdispersion affects the predicted dose response for cell survival (the fraction of cells with zero lethal lesions). This causes the dose response to approximate log-linear behavior at high doses, even when the mean number of lethal lesions per cell is well fitted by the continuously curving LQ model. Accounting for overdispersion of lethal lesions provides a novel, mechanistically based explanation for the observed shapes of cell survival dose responses that, in principle, may offer a tractable and clinically useful approach for modeling the effects of high doses per fraction.
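The mechanism proposed in this abstract, overdispersion of lethal lesions flattening the high-dose survival curve relative to the Poisson prediction, can be sketched with the zero-class probabilities of the two distributions. The negative binomial here stands in generically for "somewhat overdispersed", and the dispersion values are illustrative:

```python
import math

def surv_poisson(mean_lesions):
    """Fraction of cells with zero lethal lesions when lesion counts are
    Poisson: S = exp(-mean). With the LQ model, mean = alpha*D + beta*D^2,
    so log S keeps bending downward as dose grows."""
    return math.exp(-mean_lesions)

def surv_negbin(mean_lesions, k):
    """Zero-class probability for an overdispersed (negative binomial)
    lesion distribution with dispersion parameter k:
    S = (1 + mean/k) ** -k. Smaller k means more overdispersion."""
    return (1 + mean_lesions / k) ** (-k)
```

At the same mean lesion count, the overdispersed distribution leaves far more cells lesion-free (e.g. for a mean of 10 and k = 1, about 9% survive versus 0.005% under Poisson), which is why overdispersion pushes the high-dose survival curve toward the shallower, more nearly log-linear behavior seen in the data.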
Directory of Open Access Journals (Sweden)
Andrew F Brouwer
Full Text Available Differences in prognosis between HPV-positive and HPV-negative oral (oropharyngeal and oral cavity) squamous cell carcinomas (OSCCs) and the increasing incidence of HPV-related cancers have spurred interest in demographic and temporal trends in OSCC incidence. We leverage multistage clonal expansion (MSCE) models coupled with age-period-cohort (APC) epidemiological models to analyze OSCC data in the SEER cancer registry (1973-2012). MSCE models are based on the initiation-promotion-malignant conversion paradigm in carcinogenesis and allow for interpretation of trends in terms of biological mechanisms. APC models seek to differentiate between the temporal effects of age, period, and birth cohort on cancer risk. Previous studies have looked at the effect of period and cohort on tumor initiation, and we extend this to compare model fits of period and cohort effects on each of the tumor initiation, promotion, and malignant conversion rates. HPV-related, HPV-unrelated except oral tongue, and HPV-unrelated oral tongue sites are best described by placing period and cohort effects on the initiation rate. HPV-related and non-oral-tongue HPV-unrelated cancers have similar promotion rates, suggesting similar tumorigenesis dynamics once initiated. Estimates of promotion rates at oral tongue sites are lower, corresponding to a longer sojourn time; this finding is consistent with the hypothesis of an etiology distinct from HPV or alcohol and tobacco use. Finally, for the three subsite groups, men have higher initiation rates than women of the same race, and black people have higher promotion rates than white people of the same sex. These differences explain part of the racial and sex differences in OSCC incidence.
Directory of Open Access Journals (Sweden)
Leif E. Peterson
1997-11-01
Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
DEFF Research Database (Denmark)
Kjolby, Mads; Bie, Peter
2008-01-01
...Cl administration increased PV (+6.3-8.9%) and plasma sodium concentration (~2%) and decreased plasma protein concentration (-6.4-8.1%). Plasma ANG II and aldosterone concentrations decreased transiently. Potassium excretion increased substantially. Sodium excretion, arterial blood pressure, glomerular filtration rate, urine flow, plasma potassium, and plasma renin activity did not change. The results indicate that sodium excretion is controlled by neurohumoral mechanisms that are quite resistant to acute changes in plasma volume and colloid osmotic pressure and are not down-regulated within 2 h. With previous......
Hilpert, Markus; Johnson, William P.
2018-01-01
We used a recently developed simple mathematical network model to upscale pore-scale colloid transport information determined under unfavorable attachment conditions. Classical log-linear and nonmonotonic retention profiles, well-reported under favorable and unfavorable attachment conditions, respectively, emerged from our upscaling. The primary attribute of the network is colloid transfer between bulk pore fluid, the near-surface fluid domain (NSFD), and attachment (treated as irreversible). The network model accounts for colloid transfer to the NSFD of downgradient grains and for reentrainment to bulk pore fluid via diffusion or via expulsion at rear flow stagnation zones (RFSZs). The model describes colloid transport by a sequence of random trials in a one-dimensional (1-D) network of Happel cells, which contain a grain and a pore. Using combinatorial analysis that capitalizes on the binomial coefficient, we derived from the pore-scale information the theoretical residence time distribution of colloids in the network. The transition from log-linear to nonmonotonic retention profiles occurs when the conditions underlying classical filtration theory are not fulfilled, i.e., when an NSFD colloid population is maintained. Then, nonmonotonic retention profiles result potentially for both attached and NSFD colloids. The concentration maxima shift downgradient depending on the specific parameter choice. The concentration maxima were also shown to shift downgradient temporally (with continued elution) under conditions where attachment is negligible, explaining experimentally observed downgradient transport of retained concentration maxima of adhesion-deficient bacteria. For the case of zero reentrainment, we derive closed-form analytical expressions for the shape and maximum of the colloid retention profile.
An open-population hierarchical distance sampling model
Sollmann, Rahel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott
2015-01-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data-generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
An open-population hierarchical distance sampling model.
Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott
2015-02-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
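For readers unfamiliar with the distance-sampling machinery both records build on, here is a minimal single-survey sketch: half-normal detection, one transect, all parameter values invented (the paper's hierarchical, multi-survey model is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed setup (not from the paper): one line transect of length L,
# truncation half-width W, half-normal detection g(d) = exp(-d^2 / (2 s^2)).
D_true, L, sigma, W = 50.0, 100.0, 1.0, 4.0

n_avail = rng.poisson(D_true * 2 * W * L)          # animals in the strip
d = rng.uniform(0.0, W, n_avail)                   # perpendicular distances
detected = rng.uniform(size=n_avail) < np.exp(-d ** 2 / (2 * sigma ** 2))
d_obs = d[detected]

# With W >> sigma the observed distances are ~half-normal(sigma), so the
# MLE of sigma^2 is simply the mean squared observed distance.
sigma_hat = np.sqrt(np.mean(d_obs ** 2))
esw = sigma_hat * np.sqrt(np.pi / 2)               # effective strip half-width
D_hat = d_obs.size / (2 * esw * L)                 # detection-corrected density
```

The open-population model in the abstract links a sequence of such surveys through a Markovian abundance process instead of estimating each survey separately.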
Nasari, Masoud M; Szyszkowicz, Mieczysław; Chen, Hong; Crouse, Daniel; Turner, Michelle C; Jerrett, Michael; Pope, C Arden; Hubbell, Bryan; Fann, Neal; Cohen, Aaron; Gapstur, Susan M; Diver, W Ryan; Stieb, David; Forouzanfar, Mohammad H; Kim, Sun-Young; Olives, Casey; Krewski, Daniel; Burnett, Richard T
2016-01-01
The effectiveness of regulatory actions designed to improve air quality is often assessed by predicting changes in public health resulting from their implementation. Risk of premature mortality from long-term exposure to ambient air pollution is the single most important contributor to such assessments and is estimated from observational studies generally assuming a log-linear, no-threshold association between ambient concentrations and death. There has been only limited assessment of this assumption in part because of a lack of methods to estimate the shape of the exposure-response function in very large study populations. In this paper, we propose a new class of variable coefficient risk functions capable of capturing a variety of potentially non-linear associations which are suitable for health impact assessment. We construct the class by defining transformations of concentration as the product of either a linear or log-linear function of concentration multiplied by a logistic weighting function. These risk functions can be estimated using hazard regression survival models with currently available computer software and can accommodate large population-based cohorts which are increasingly being used for this purpose. We illustrate our modeling approach with two large cohort studies of long-term concentrations of ambient air pollution and mortality: the American Cancer Society Cancer Prevention Study II (CPS II) cohort and the Canadian Census Health and Environment Cohort (CanCHEC). We then estimate the number of deaths attributable to changes in fine particulate matter concentrations over the 2000 to 2010 time period in both Canada and the USA using both linear and non-linear hazard function models.
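The paper's risk-function class (a linear or log-linear function of concentration multiplied by a logistic weighting function) is easy to sketch; the parameter values below are illustrative, not fitted:

```python
import numpy as np

def logistic_weight(z, mu, tau):
    """Logistic weighting function from the proposed risk-function class."""
    return 1.0 / (1.0 + np.exp(-(z - mu) / tau))

def transformed_conc(z, mu, tau, log_form=True):
    """A (log-)linear function of concentration times a logistic weight;
    theta * T(z) then enters the hazard regression linearly."""
    f = np.log(z + 1.0) if log_form else z
    return f * logistic_weight(z, mu, tau)

z = np.linspace(0.0, 30.0, 301)               # e.g. PM2.5 in ug/m^3
T = transformed_conc(z, mu=10.0, tau=2.0)     # flat near zero, rising later
hr = np.exp(0.1 * T)                          # hazard ratio vs zero exposure
```

Varying `mu` and `tau` moves and sharpens the "knee" of the curve, which is how the class captures sub-linear, near-threshold, or supra-linear shapes within an ordinary hazard regression.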
Empirical membrane lifetime model for heavy duty fuel cell systems
Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik
2016-12-01
Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
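A hedged sketch of a log-linear Weibull life-stress model of this general shape. The coefficients and stress variables below are invented for illustration, not the paper's fitted values:

```python
import numpy as np

# Invented coefficients for a log-linear Weibull life-stress model. The
# characteristic life eta (hours) is log-linear in the stresses: an
# inverse-power-law stress enters as log(stress); an exponential-law
# stress enters untransformed.
coef = {"b0": 8.0, "log_voltage": -6.0, "rh_cycles": -0.02}
shape = 2.0                                   # assumed Weibull shape

def char_life(voltage_V, rh_cycles_per_h):
    log_eta = (coef["b0"]
               + coef["log_voltage"] * np.log(voltage_V)   # inverse power law
               + coef["rh_cycles"] * rh_cycles_per_h)      # exponential law
    return np.exp(log_eta)

def b10_life(eta):
    """Time by which 10% of membranes have failed (Weibull quantile)."""
    return eta * (-np.log(0.9)) ** (1.0 / shape)

eta_accel = char_life(0.95, 30.0)   # accelerated durability test conditions
eta_use = char_life(0.80, 5.0)      # milder, field-like conditions
```

The ratio `eta_use / eta_accel` is the acceleration factor used to extrapolate from laboratory tests to field operation, which is the model's stated purpose.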
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models, even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
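A minimal sketch of the "robust Poisson" idea on simulated binary data with a true risk ratio of 2: a Poisson GLM with log link is fit by Newton scoring, and the sandwich (robust) variance corrects for the mis-specified Poisson variance of a binary outcome. This is a from-scratch illustration, not the simulation design of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.binomial(1, 0.5, n)              # binary exposure
y = rng.binomial(1, 0.1 * 2.0 ** x)      # common binary outcome, true RR = 2

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):                      # Newton scoring: Poisson GLM, log link
    mu = np.exp(X @ beta)
    beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
rr_hat = np.exp(beta[1])                 # exp(coefficient) estimates the RR

# Sandwich ("robust") variance: A^{-1} B A^{-1} from per-subject scores.
mu = np.exp(X @ beta)
A_inv = np.linalg.inv(X.T @ (mu[:, None] * X))
S = X * (y - mu)[:, None]                # per-subject score contributions
se_log_rr = np.sqrt((A_inv @ S.T @ S @ A_inv)[1, 1])
```

A log-binomial fit would instead use `Binomial` variance with a log link; it is more efficient when it converges but, as the abstract notes, less resistant to contaminated observations.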
Update of predictions of mortality from pleural mesothelioma in the Netherlands
O. Segura; A. Burdorf (Alex); C.W.N. Looman (Caspar)
2003-01-01
AIMS: To predict the expected number of pleural mesothelioma deaths in the Netherlands from 2000 to 2028 and to study the effect of main uncertainties in the modelling technique. METHODS: Through an age-period-cohort modelling technique, age-specific mortality rates …
Meng, Yang; Holmes, John; Hill-McManus, Daniel; Brennan, Alan; Meier, Petra Sylvia
2014-02-01
British alcohol consumption and abstinence rates have increased substantially in the last 3 decades. This study aims to disentangle age, period and birth cohort effects to improve our understanding of these trends and suggest groups for targeted interventions to reduce resultant harms. Age, period, cohort analysis of repeated cross-sectional surveys using separate logistic and negative binomial models for each gender. Great Britain 1984-2009. Annual nationally representative samples of approximately 20 000 adults (16+) within 13 000 households. Age (eight groups: 16-17 to 75+ years), period (six groups: 1980-84 to 2005-09) and birth cohorts (19 groups: 1900-04 to 1990-94). Outcome measures were abstinence and average weekly alcohol consumption. Controls were income, education, ethnicity and country. After accounting for period and cohort trends, 18-24-year-olds have the highest consumption levels (incidence rate ratio = 1.18-1.15) and lower abstention rates (odds ratio = 0.67-0.87). Consumption generally decreases and abstention rates increase in later life. Until recently, successive birth cohorts' consumption levels were also increasing. However, for those born post-1985, abstention rates are increasing and male consumption is falling relative to preceding cohorts. In contrast, female drinking behaviours have polarized over the study period, with increasing abstention rates accompanying increases in drinkers' consumption levels. Rising female consumption of alcohol and progression of higher-consuming birth cohorts through the life course are key drivers of increased per capita alcohol consumption in the United Kingdom. Recent declines in alcohol consumption appear to be attributable to reduced consumption and increased abstinence rates among the most recent birth cohorts, especially males, and general increased rates of abstention across the study period. © 2013 Society for the Study of Addiction.
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
A Local Poisson Graphical Model for inferring networks from sequencing data.
Allen, Genevera I; Liu, Zhandong
2013-09-01
Gaussian graphical models, a class of undirected graphs or Markov Networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1 penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
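One neighbourhood regression of the kind the Local Poisson Graphical Model fits can be sketched with plain proximal gradient descent. Toy data; the authors' actual neighborhood-selection algorithm and tuning are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sequencing-style data: counts for gene y depend on gene z1 but not z2.
n = 500
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
y = rng.poisson(np.exp(0.5 + 0.8 * z1))

X = np.column_stack([np.ones(n), z1, z2])

def l1_poisson(X, y, lam, step=0.05, iters=4000):
    """Proximal gradient descent for an l1-penalised Poisson (log-linear)
    regression, i.e. one neighbourhood regression; the intercept is left
    unpenalised."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (np.exp(X @ beta) - y) / len(y)
        beta = beta - step * grad
        # soft-threshold everything except the intercept
        beta[1:] = np.sign(beta[1:]) * np.maximum(np.abs(beta[1:]) - step * lam, 0.0)
    return beta

beta_hat = l1_poisson(X, y, lam=0.05)
# the edge y--z1 survives the penalty; the edge y--z2 is shrunk toward zero
```

Repeating this regression for every gene against all others, then symmetrizing the selected edges, yields the estimated network; the independence of the regressions is what makes the method parallelizable.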
Modeling of urban atmospheric pollution and impact on health
International Nuclear Information System (INIS)
Myrto, Valari
2009-10-01
The goal of this dissertation is to develop a methodology that provides improved knowledge of the associations between atmospheric contaminant concentrations and health impacts. The propagation of uncertainties from input data to output concentrations through a chemistry transport model was first studied. The influence of the resolution of meteorological parameters and of emissions data was studied separately, and their relative roles were compared. It was found that model results do not improve linearly with the resolution of the emission input: a critical resolution was found beyond which model error becomes higher and the model breaks down. Based on this first investigation of direct downscaling, further research focused on subgrid-scale modeling, and a statistical downscaling approach was adopted for modeling subgrid-scale concentration variability due to heterogeneous surface emissions. Emission fractions released from different types of sources (industry, roads, residential, natural, etc.) were calculated from a high-resolution emission inventory, and emission fluxes were mapped onto surfaces emitting source-specific species. Simulations were run independently over the defined micro-environments, allowing subgrid-scale concentration variability to be modeled. Subgrid-scale concentrations were then combined with demographic and human-activity data to provide exposure estimates, and the spatial distribution of human exposure was parameterized through a Monte Carlo model. The new information on exposure variability was added to an existing epidemiological model to study relative health risks, using a log-linear Poisson regression model. The principal outcome of the investigation was a new functionality added to the regression model that allows the health risk associated with each pollutant (e.g., NO2 and PM2.5) to be dissociated. (author)
Directory of Open Access Journals (Sweden)
Julia Kravchenko
Full Text Available BACKGROUND: Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. METHODOLOGY/PRINCIPAL FINDINGS: Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the number of m-stages was more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophagus, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages, while ACs had fewer than ten. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. CONCLUSIONS/SIGNIFICANCE: A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to large population data on the incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype, being more strongly associated with the AC/SCC histotype than with the organ/site.
Kravchenko, Julia; Akushevich, Igor; Abernethy, Amy P; Lyerly, H Kim
2012-01-01
Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the number of m-stages was more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophagus, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages, while ACs had fewer than ten. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to large population data on the incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype, being more strongly associated with the AC/SCC histotype than with the organ/site.
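The turnover of incidence rates at old ages described in these records (and in the lung-cancer analysis above) falls out of a gamma-frailty model with a Weibull baseline hazard; a sketch with invented parameters:

```python
import numpy as np

# Gamma-frailty model with a Weibull baseline hazard. With m "stages" the
# baseline hazard grows like t**(m-1); the frailty variance sigma2 bends
# the population hazard over at old ages as susceptible individuals are
# depleted. Parameter values are invented for illustration, not the
# SEER-fitted estimates.
def population_hazard(t, lam, m, sigma2):
    return lam * m * t ** (m - 1) / (1.0 + sigma2 * lam * t ** m)

age = np.arange(30, 100, dtype=float)
h = population_hazard(age, lam=1e-18, m=9.8, sigma2=4.0)
peak_age = age[np.argmax(h)]   # analytic peak: ((m-1)/(sigma2*lam))**(1/m)
```

With no frailty (`sigma2 = 0`) the hazard reduces to the pure Weibull `lam * m * t**(m-1)` and never turns over, which is why the frailty term is needed to reproduce the observed decline at old ages.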
A feature-based approach to modeling protein-DNA interactions.
Directory of Open Access Journals (Sweden)
Eilon Sharon
Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position-specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
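A toy log-linear feature motif model makes the contrast with a PSSM concrete: one feature couples two positions, which no PSSM can express. The weights below are invented for illustration, not learned from binding-site data:

```python
import numpy as np
from itertools import product

# Log-linear model over length-3 sites. Single-position features mimic a
# PSSM; the pairwise feature spans positions 0 and 2.
BASES = "ACGT"
w_single = {(0, "A"): 1.0, (2, "T"): 1.0}        # position-specific weights
w_pair = {((0, "A"), (2, "T")): 1.5}             # multi-position feature

def score(seq):
    s = sum(w_single.get((i, b), 0.0) for i, b in enumerate(seq))
    for ((i, bi), (j, bj)), w in w_pair.items():
        if seq[i] == bi and seq[j] == bj:
            s += w
    return s

seqs = ["".join(p) for p in product(BASES, repeat=3)]
logits = np.array([score(s) for s in seqs])
probs = np.exp(logits) / np.exp(logits).sum()    # Gibbs/log-linear distribution
```

Setting `w_pair` empty recovers an ordinary PSSM (independent positions); the pairwise weight adds extra probability mass exactly to sequences carrying both letters together.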
Barnett, William A.; Duzhak, Evgeniya Aleksandrovna
2008-06-01
Grandmont [J.M. Grandmont, On endogenous competitive business cycles, Econometrica 53 (1985) 995-1045] found that the parameter space of the most classical dynamic models is stratified into an infinite number of subsets supporting an infinite number of different kinds of dynamics, from monotonic stability at one extreme to chaos at the other extreme, and with many forms of multiperiodic dynamics in between. The econometric implications of Grandmont’s findings are particularly important, if bifurcation boundaries cross the confidence regions surrounding parameter estimates in policy-relevant models. Stratification of a confidence region into bifurcated subsets seriously damages robustness of dynamical inferences. Recently, interest in policy in some circles has moved to New-Keynesian models. As a result, in this paper we explore bifurcation within the class of New-Keynesian models. We develop the econometric theory needed to locate bifurcation boundaries in log-linearized New-Keynesian models with Taylor policy rules or inflation-targeting policy rules. Central results needed in this research are our theorems on the existence and location of Hopf bifurcation boundaries in each of the cases that we consider.
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues with weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When multiple genetic variants are available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means of testing the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
A state-space model for estimating detailed movements and home range from acoustic receiver data
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Weng, Kevin
2013-01-01
We present a state-space model for acoustic receiver data to estimate detailed movement and home range of individual fish while accounting for spatial bias. An integral part of the approach is the detection function, which models the probability of logging tag transmissions as a function of distance … that the location error scales log-linearly with detection range and movement speed. This result can be used as a guideline for designing network layout when species movement capacity and acoustic environment are known or can be estimated prior to network deployment. Finally, as an example, the state-space model is used to estimate home range and movement of a reef fish in the Pacific Ocean.
Modeling size effects on fatigue life of a zirconium-based bulk metallic glass under bending
International Nuclear Information System (INIS)
Yuan Tao; Wang Gongyao; Feng Qingming; Liaw, Peter K.; Yokoyama, Yoshihiko; Inoue, Akihisa
2013-01-01
A size effect on the fatigue-life cycles of a Zr50Cu30Al10Ni10 (at.%) bulk metallic glass has been observed in four-point-bending fatigue experiments. Under the same bending-stress condition, large samples tend to exhibit longer fatigue lives than small samples. This size effect on the fatigue life cannot be satisfactorily explained by the flaw-based Weibull theories. Based on the experimental results, this study explores possible approaches to modeling the size effect on the bending-fatigue life of bulk metallic glasses, and proposes two fatigue-life models based on the Weibull distribution. The first model assumes, empirically, log-linear effects of the sample thickness on the Weibull parameters. The second model incorporates mechanistic knowledge of the fatigue behavior of metallic glasses, and assumes that the shear-band density, instead of the flaw density, has a significant influence on the bending fatigue-life cycles. Promising predictive results provide evidence of the potential validity of the models and their assumptions.
Pan, Chengbin; Miranda, Enrique; Villena, Marco A.; Xiao, Na; Jing, Xu; Xie, Xiaoming; Wu, Tianru; Hui, Fei; Shi, Yuanyuan; Lanza, Mario
2017-06-01
Despite the enormous interest raised by graphene and related materials, global concern has recently grown about their real usefulness in industry, given the worrying scarcity of electronic devices based on 2D materials in the market. Moreover, analytical tools capable of describing and predicting the behavior of such devices (which are necessary before facing mass production) are very scarce. In this work we synthesize a resistive random access memory (RRAM) using graphene/hexagonal-boron-nitride/graphene (G/h-BN/G) van der Waals structures, and we develop a compact model that accurately describes its functioning. The devices were fabricated using scalable methods (i.e. CVD for material growth and shadow masks for electrode patterning), and they show reproducible resistive switching (RS). The measured characteristics during the forming, set and reset processes were fitted using the model developed. The model is based on the nonlinear Landauer approach for mesoscopic conductors, in this case atomic-sized filaments formed within the 2D materials system. Besides providing excellent overall fitting results (corroborated in log-log, log-linear and linear-linear plots), the model is able to explain the cycle-to-cycle dispersion of the data in terms of the particular features of the filamentary paths, mainly the height of their confinement potential barrier.
A Cytomorphic Chip for Quantitative Modeling of Fundamental Bio-Molecular Circuits.
2015-08-01
We describe a 0.35 μm BiCMOS silicon chip that quantitatively models fundamental molecular circuits via efficient log-domain cytomorphic transistor equivalents. These circuits include those for biochemical binding with automatic representation of non-modular and loading behavior, e.g., in cascade and fan-out topologies; for representing variable Hill-coefficient operation and cooperative binding; for representing inducer, transcription-factor, and DNA binding; for probabilistic gene transcription with analogic representations of log-linear and saturating operation; for gain, degradation, and dynamics of mRNA and protein variables in transcription and translation; and, for faithfully representing biological noise via tunable stochastic transistor circuits. The use of on-chip DACs and ADCs enables multiple chips to interact via incoming and outgoing molecular digital data packets and thus create scalable biochemical reaction networks. The use of off-chip digital processors and on-chip digital memory enables programmable connectivity and parameter storage. We show that published static and dynamic MATLAB models of synthetic biological circuits including repressilators, feed-forward loops, and feedback oscillators are in excellent quantitative agreement with those from transistor circuits on the chip. Computationally intensive stochastic Gillespie simulations of molecular production are also rapidly reproduced by the chip and can be reliably tuned over the range of signal-to-noise ratios observed in biological cells.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
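The sample-level half of the idea can be sketched with a crude moment-based stand-in for the paper's REML fit of the log-linear variance model (toy data; the observational-level 'voom' weights are omitted):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy log-expression matrix: 2000 genes x 6 samples; sample 3 is 3x noisier.
n_genes, n_samples = 2000, 6
true_sd = np.array([1.0, 1.0, 1.0, 3.0, 1.0, 1.0])
y = rng.standard_normal((n_genes, n_samples)) * true_sd

# Log-linear variance model with one factor per sample shared across genes,
# estimated here by method of moments (the real method fits this by REML).
resid = y - y.mean(axis=1, keepdims=True)
log_r2 = np.log(resid ** 2 + 1e-8)
sample_factor = log_r2.mean(axis=0) - log_r2.mean()
weights = np.exp(-sample_factor)        # noisier samples get smaller weights
weights = weights / weights.mean()      # normalise to mean 1
```

Because the variance factors are shared across all genes, even a handful of samples gives them stable estimates, and the degraded sample is down-weighted rather than discarded.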
Predictive modelling of gene expression from transcriptional regulatory elements.
Budden, David M; Hurley, Daniel G; Crampin, Edmund J
2015-07-01
Predictive modelling of gene expression provides a powerful framework for exploring the regulatory logic underpinning transcriptional regulation. Recent studies have demonstrated the utility of such models in identifying dysregulation of gene and miRNA expression associated with abnormal patterns of transcription factor (TF) binding or nucleosomal histone modifications (HMs). Despite the growing popularity of such approaches, a comparative review of the various modelling algorithms and feature extraction methods is lacking. We define and compare three methods of quantifying pairwise gene-TF/HM interactions and discuss their suitability for integrating the heterogeneous chromatin immunoprecipitation (ChIP)-seq binding patterns exhibited by TFs and HMs. We then construct log-linear and ϵ-support vector regression models from various mouse embryonic stem cell (mESC) and human lymphoblastoid (GM12878) data sets, considering both ChIP-seq- and position weight matrix- (PWM)-derived in silico TF-binding. The two algorithms are evaluated both in terms of their modelling prediction accuracy and ability to identify the established regulatory roles of individual TFs and HMs. Our results demonstrate that TF-binding and HMs are highly predictive of gene expression as measured by mRNA transcript abundance, irrespective of algorithm or cell type selection and considering both ChIP-seq and PWM-derived TF-binding. As we encourage other researchers to explore and develop these results, our framework is implemented using open-source software and made available as a preconfigured bootable virtual environment. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C
2015-11-30
Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but are rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model, which often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention has been given to the study of the relationships between a CTMC model and associated covariates under the framework of the transtheoretical model. We developed a Bayesian approach to evaluate the covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and the temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected the transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantially less time in the precontemplation stage and were more likely to move from an unhealthy state to a healthy one, and less likely to move in the opposite direction. Copyright © 2015 John Wiley & Sons, Ltd.
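A log-linear link between a covariate and CTMC transition intensities can be sketched as follows: each off-diagonal intensity is exp(β0 + β1·x), the diagonal makes every row of the generator sum to zero, and transition probabilities over an interval come from the matrix exponential of Q·t. The coefficients below are arbitrary placeholders, and a hand-rolled scaling-and-squaring Taylor exponential is used to stay dependency-free.

```python
import numpy as np

def expm_taylor(A, terms=30, squarings=10):
    """Matrix exponential via scaling-and-squaring with a Taylor series
    (adequate for small, well-scaled generator matrices)."""
    A = A / (2 ** squarings)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def generator(x, beta0, beta1):
    """3-state CTMC generator with log-linear covariate link on the
    off-diagonal intensities: q_ij = exp(beta0_ij + beta1_ij * x)."""
    Q = np.exp(beta0 + beta1 * x)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of a generator sum to zero
    return Q

beta0 = np.full((3, 3), -1.0)   # placeholder coefficients
beta1 = np.full((3, 3), 0.5)
Q = generator(x=1.0, beta0=beta0, beta1=beta1)
P = expm_taylor(Q * 2.0)        # transition probability matrix over t = 2
```

Each row of P is a valid probability distribution over the three states, which is the quantity a likelihood for panel-observed stage data would use.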
McKellar, Robin C; Delaquis, Pascal
2011-11-15
Escherichia coli O157:H7, an occasional contaminant of fresh produce, can present a serious health risk in minimally processed leafy green vegetables. A good predictive model is needed for Quantitative Risk Assessment (QRA) purposes, one which adequately describes the growth or die-off of this pathogen under the variable temperature conditions experienced during processing, storage and shipping. Literature data on the behaviour of this pathogen on fresh-cut lettuce and spinach were taken from published graphs by digitization, from published tables or from personal communications. A three-phase growth function was fitted to the data from 13 studies, and a square root model for growth rate (μ) as a function of temperature was derived: μ = (0.023 × (Temperature − 1.20))². Variability in the published data was incorporated into the growth model through weighted regression and the 95% prediction limits. A log-linear die-off function was fitted to the data from 13 studies, and the resulting rate constants were fitted to a shifted lognormal distribution (mean: 0.013; standard deviation: 0.010; shift: 0.001). The combined growth-death model successfully predicted pathogen behaviour under both isothermal and non-isothermal conditions when compared to newly published data. By incorporating variability, the resulting model is an improvement over existing ones and is suitable for QRA applications. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
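The two fitted components can be written directly from the abstract: the square-root model for growth rate and a log-linear die-off in log10 counts. A small sketch, taking the die-off rate constant as the reported lognormal mean (time units assumed, as the abstract does not state them):

```python
import math

def growth_rate(temp_c):
    """Square-root (Ratkowsky-type) model from the abstract:
    mu = (0.023 * (T - 1.20))**2, valid above the minimum growth temperature."""
    return (0.023 * (temp_c - 1.20)) ** 2

def log_linear_survivors(log_n0, k, t):
    """Log-linear die-off: log10 N(t) = log10 N0 - k * t."""
    return log_n0 - k * t

mu_10 = growth_rate(10.0)                             # growth rate at 10 °C
log_n = log_linear_survivors(3.0, k=0.013, t=24.0)    # mean rate constant
```

In a QRA the rate constant k would be drawn from the fitted shifted lognormal distribution rather than fixed at its mean, which is how the model propagates variability.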
Spädtke, P
2013-01-01
Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources for both fields are sometimes taken into account, such as space charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models describing the underlying physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron cyclotron resonance ion sources, and H⁻ sources), together with some remarks on beam transport.
Modeling road traffic fatalities in India: Smeed's law, time invariance and regional specificity
Directory of Open Access Journals (Sweden)
Raj V. Ponnaluri
2012-07-01
Full Text Available Mathematical formulations linking road traffic fatalities to vehicle ownership, regional population, and economic growth continue to be developed against the backdrop of the Smeed and Andreassen models. Though a few attempts have been made, Smeed's law has not been fully tested in India. Using 1991–2009 panel data from all states, this work (a) developed the generalized Smeed and Andreassen models; (b) evaluated whether traffic fatalities were impacted by structural changes; and (c) examined whether, in relation to the generalized model, the individual (time and regional) models are more relevant for application. Seven models (Smeed: original, generalized, time-variant, state-variant; Andreassen: generalized, time-variant, state-variant) were developed and tested for fit against the actual data. Results showed that the per-vehicle fatality rate closely resembled Smeed's formulation. A Chow test yielded a significant F-statistic, suggesting that the models for four pre-defined time blocks are structurally different from the 19-year generalized model. The counterclockwise rotation of the log-linear form also suggested lower fatality rates. While new government policies, reduced vehicle operating speeds, better healthcare, and improved vehicle technology could be contributing factors, further research is required to understand the reasons for the fatality rate reductions. The intercepts and gradients of the time-series models showed high stability and varied only slightly in comparison to the 19-year generalized models, suggesting that the latter are pragmatic for application. Regional formulations, however, may be more relevant for studying trends and tendencies. This research illustrates the robustness of Smeed's law and provides evidence for time invariance but state specificity.
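For context, the original (non-generalized) Smeed relationship the paper tests has a closed form. This sketch uses the classical 1949 coefficients, not the generalized coefficients fitted to the Indian panel data, and the inputs are made-up round numbers.

```python
def smeed_fatalities(vehicles, population):
    """Classical Smeed (1949) law: D = 0.0003 * (N * P**2) ** (1/3),
    where D is annual road deaths, N registered vehicles, P population.
    Coefficients are the classical ones, not the paper's fitted values."""
    return 0.0003 * (vehicles * population ** 2) ** (1.0 / 3.0)

# illustrative inputs: 10 million vehicles, 100 million people
deaths = smeed_fatalities(vehicles=10_000_000, population=100_000_000)
```

The generalized form replaces the fixed constant and exponent with parameters estimated per time block or per state, which is what the Chow test compares.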
Energy Technology Data Exchange (ETDEWEB)
Otake, M [Hiroshima Univ. (Japan). Faculty of Science
1976-12-01
Various statistical models designed to determine the effects of radiation dose on mortality from specific cancers among atomic bomb survivors in Hiroshima and Nagasaki were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. Considering both the applicability and the fit of the different models, an analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model were demonstrated. Leukemia mortality showed a sharp rise with increasing dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. During the most recent period, from 1965 to 1972, a significant risk was also noted for stomach and breast cancers. Survivors who were 9 years old or younger at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints, i.e., the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high-dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.
Gompertz, Makeham, and Siler models explain Taylor's law in human mortality data
Directory of Open Access Journals (Sweden)
Joel E. Cohen
2018-03-01
Full Text Available Background: Taylor's law (TL) states a linear relationship on logarithmic scales between the variance and the mean of a nonnegative quantity. TL has been observed in spatiotemporal contexts for the population density of hundreds of species including humans. TL also describes temporal variation in human mortality in developed countries, but no explanation has been proposed. Objective: To understand why and to what extent TL describes temporal variation in human mortality, we examine whether the mortality models of Gompertz, Makeham, and Siler are consistent with TL. We also examine how strongly TL differs between observed and modeled mortality, between women and men, and among countries. Methods: We analyze how well each mortality model explains TL fitted to observed occurrence-exposure death rates by comparing three features: the log-log linearity of the temporal variance as a function of the temporal mean, the age profile, and the slope of TL. We support some empirical findings from the Human Mortality Database with mathematical proofs. Results: TL describes modeled mortality better than observed mortality and describes Gompertz mortality best. The age profile of TL is closest between observed and Siler mortality. The slope of TL is closest between observed and Makeham mortality. The Gompertz model predicts TL with a slope of exactly 2 if the modal age at death increases linearly with time and the parameter that specifies the growth rate of mortality with age is constant in time. Observed mortality obeys TL with a slope generally less than 2. An explanation is that, when the parameters of the Gompertz model are estimated from observed mortality year by year, both the modal age at death and the growth rate of mortality with age change over time. Conclusions: TL describes human mortality well in developed countries because their mortality schedules are approximated well by classical mortality models, which we have shown to obey TL.
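The slope-2 result for Gompertz mortality can be checked numerically: if death rates are m(x, t) = a(t)·exp(b·x) with b constant in time, the temporal variance at each age is proportional to the square of the temporal mean, so the log-log regression has slope exactly 2. A self-contained sketch with an arbitrary lognormal process for a(t):

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(30, 91)
b = 0.1                                       # Gompertz rate, constant in time
a = np.exp(rng.normal(-9.0, 0.2, size=40))    # level a(t), varying by year

# death rates m(x, t) = a(t) * exp(b * x); rows are years, columns are ages
m = a[:, None] * np.exp(b * ages[None, :])

temporal_mean = m.mean(axis=0)                # mean over years, per age
temporal_var = m.var(axis=0, ddof=1)          # variance over years, per age
slope, intercept = np.polyfit(np.log(temporal_mean), np.log(temporal_var), 1)
```

Because var(m) = var(a)·exp(2bx) and mean(m) = mean(a)·exp(bx), the relation is exactly linear on log scales with slope 2, matching the paper's analytical result; allowing b to vary by year pushes the slope away from 2, as observed in real data.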
Van Regenmortel, Tina; Janssen, Colin R; De Schamphelaere, Karel A C
2015-07-01
Although it is increasingly recognized that biotic ligand models (BLMs) are valuable in the risk assessment of metals in aquatic systems, the use of 2 differently structured and parameterized BLMs (1 in the United States and another in the European Union) to obtain bioavailability-based chronic water quality criteria for copper is worthy of further investigation. In the present study, the authors evaluated the predictive capacity of these 2 BLMs for a large dataset of chronic copper toxicity data with 2 Daphnia magna clones, termed K6 and ARO. One BLM performed best with clone K6 data, whereas the other performed best with clone ARO data. In addition, there was an important difference between the 2 BLMs in how they predicted the bioavailability of copper as a function of pH. These modeling results suggested that the effect of pH on chronic copper toxicity is different between the 2 clones considered, which was confirmed with additional chronic toxicity experiments. Finally, because fundamental differences in model structure between the 2 BLMs made it impossible to create an average BLM, a generalized bioavailability model (gBAM) was developed. Of the 3 gBAMs developed, the authors recommend the use of model gBAM-C(uni), which combines a log-linear relation between the 21-d median effective concentration (expressed as free Cu(2+) ion activity) and pH, with more conventional BLM-type competition constants for sodium, calcium, and magnesium. This model can be considered a first step in further improving the accuracy of chronic toxicity predictions of copper as a function of water chemistry (for a variety of Daphnia magna clones), even beyond the robustness of the current BLMs used in regulatory applications. © 2015 SETAC.
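The gBAM structure described here replaces a BLM-style proton competition term with a direct log-linear dependence of the free-ion EC50 on pH. A minimal sketch of that relation, with placeholder coefficients rather than the fitted gBAM-C(uni) values:

```python
def log_ec50_free_ion(ph, intercept=-6.0, slope=-0.5):
    """gBAM-style log-linear relation between the chronic EC50
    (expressed as free Cu2+ ion activity, mol/L) and pH:
    log10 EC50 = intercept + slope * pH.
    Both coefficients are hypothetical placeholders."""
    return intercept + slope * ph

# with a negative slope, the free-ion EC50 at pH 6 is 10x that at pH 8,
# i.e. the free ion is predicted to be less toxic in more acidic water
ratio = 10 ** (log_ec50_free_ion(6.0) - log_ec50_free_ion(8.0))
```

Competition constants for Na, Ca, and Mg would then adjust this pH-driven EC50 in the usual BLM fashion.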
An open-access modeled passenger flow matrix for the global air network in 2010.
Huang, Zhuojie; Wu, Xiao; Garcia, Andres J; Fik, Timothy J; Tatem, Andrew J
2013-01-01
The expanding global air network provides rapid and wide-reaching connections accelerating both domestic and international travel. To understand human movement patterns on the network and their socioeconomic, environmental and epidemiological implications, information on passenger flow is required. However, comprehensive data on global passenger flow remain difficult and expensive to obtain, prompting researchers to rely on scheduled flight seat capacity data or simple models of flow. This study describes the construction of an open-access modeled passenger flow matrix for all airports with a host city-population of more than 100,000 and within two transfers of air travel from various publicly available air travel datasets. Data on network characteristics, city population, and local area GDP amongst others are utilized as covariates in a spatial interaction framework to predict the air transportation flows between airports. Training datasets based on information from various transportation organizations in the United States, Canada and the European Union were assembled. A log-linear model controlling the random effects on origin, destination and the airport hierarchy was then built to predict passenger flows on the network, and compared to the results produced using previously published models. Validation analyses showed that the model presented here produced improved predictive power and accuracy compared to previously published models, yielding the highest successful prediction rate at the global scale. Based on this model, passenger flows between 1,491 airports on 644,406 unique routes were estimated in the prediction dataset. The airport node characteristics and estimated passenger flows are freely available as part of the Vector-Borne Disease Airline Importation Risk (VBD-Air) project at: www.vbd-air.com/data.
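The spatial interaction framework described here is, at its core, a log-linear (gravity-type) regression of flows on origin, destination, and route covariates. This sketch fits such a model to synthetic data with ordinary least squares; the covariates, coefficients, and noise level are invented for illustration, and the paper's actual model additionally includes random effects and airport-hierarchy terms.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
pop_o = rng.uniform(1e5, 5e6, n)        # origin city populations
pop_d = rng.uniform(1e5, 5e6, n)        # destination city populations
dist = rng.uniform(100.0, 10_000.0, n)  # route distances (km)

# synthetic flows from a known log-linear (gravity) relationship plus noise
log_flow = (1.0 + 0.8 * np.log(pop_o) + 0.7 * np.log(pop_d)
            - 1.1 * np.log(dist) + rng.normal(0.0, 0.3, n))

# design matrix: intercept, log populations, log distance
X = np.column_stack([np.ones(n), np.log(pop_o), np.log(pop_d), np.log(dist)])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
```

With enough routes the fitted coefficients recover the generating ones, which is the sense in which training data from US, Canadian, and EU flows can calibrate predictions for unobserved routes.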
Lecture Notes in Statistics. 3rd Semester
DEFF Research Database (Denmark)
The lecture note is prepared to meet the requirements for the 3rd semester course in statistics at the Aarhus School of Business. It focuses on multiple regression models, analysis of variance, and log-linear models.
A prediction model for assessing residential radon concentration in Switzerland
International Nuclear Information System (INIS)
Hauri, Dimitri D.; Huss, Anke; Zimmermann, Frank; Kuehni, Claudia E.; Röösli, Martin
2012-01-01
Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike's information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (50th, 50th–90th and 90th percentile) and compared with measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40–111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69–215 Bq/m³) in the medium category, and 219 Bq/m³ (108–427 Bq/m³) in the highest category. Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be
Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games
Jaleel, Hassan; Shamma, Jeff S.
2018-01-01
dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics have the same stochastically stable states, LLL and ML correspond to different behavioral models for decision making. Moreover, we demonstrate through
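Log-linear learning (LLL) is a concrete, well-defined dynamic: at each step a randomly chosen player revises, picking each action with probability proportional to exp(utility/τ). A minimal sketch for a 2x2 coordination game, where the payoff-dominant profile (A, A) is stochastically stable as τ → 0; the payoffs and temperature are illustrative.

```python
import math
import random

# symmetric 2x2 coordination game: coordinating on A pays 2, on B pays 1
payoff = {('A', 'A'): 2.0, ('B', 'B'): 1.0, ('A', 'B'): 0.0, ('B', 'A'): 0.0}

def log_linear_step(actions, tau, rng):
    """One LLL revision: a random player chooses an action with
    probability proportional to exp(payoff / tau)."""
    i = rng.randrange(2)
    other = actions[1 - i]
    weights = []
    for a in ('A', 'B'):
        profile = (a, other) if i == 0 else (other, a)
        weights.append(math.exp(payoff[profile] / tau))
    r = rng.random() * sum(weights)
    actions[i] = 'A' if r < weights[0] else 'B'

rng = random.Random(7)
actions = ['B', 'B']                 # start in the inefficient equilibrium
counts = {'A': 0, 'B': 0}
for _ in range(20000):
    log_linear_step(actions, tau=0.2, rng=rng)
    counts[actions[0]] += 1
share_A = counts['A'] / 20000        # time spent coordinated on A
```

At low temperature the chain escapes (B, B) via rare mistakes and then spends almost all of its time at the stochastically stable profile (A, A); Metropolis learning has the same stationary selection but different transition probabilities, which is the behavioral distinction the abstract draws.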
Van Regenmortel, Tina; Berteloot, Olivier; Janssen, Colin R; De Schamphelaere, Karel A C
2017-10-01
Risk assessment in the European Union implements Zn bioavailability models to derive predicted-no-effect concentrations for Zn. These models are validated within certain boundaries (i.e., pH ≤ 8 and Ca concentrations ≥ 5 mg/L), but a substantial fraction of European surface waters falls outside these boundaries. Therefore, we evaluated whether the chronic Zn biotic ligand model (BLM) for Daphnia magna and the chronic bioavailability model for Pseudokirchneriella subcapitata could be extrapolated to pH > 8 and Ca concentrations < 5 mg/L. The bioavailability model for P. subcapitata can accurately predict Zn toxicity for Ca concentrations down to 0.8 mg/L and pH values up to 8.5. Because the chronic Zn BLM for D. magna could not be extrapolated beyond its validity boundaries for pH, a generalized bioavailability model (gBAM) was developed. Of the 4 gBAMs developed, we recommend the use of gBAM-D, which combines a log-linear relation between the 21-d median effective concentrations (expressed as free Zn²⁺ ion activity) and pH with more conventional BLM-type competition constants for Na, Ca, and Mg. This model is a first step in further improving the accuracy of chronic toxicity predictions of Zn as a function of water chemistry, which can decrease the uncertainty in implementing the bioavailability-based predicted-no-effect concentration in the risk assessment of high-pH and low-Ca regions in Europe. Environ Toxicol Chem 2017;36:2781-2798. © 2017 SETAC.
Patel, Shruti V; Patel, Sarsvatkumar
2015-09-18
Self-microemulsifying drug delivery systems (SMEDDS) are one method of improving the solubility and bioavailability of poorly soluble drugs. Knowledge of the solubility of pharmaceuticals in pure lipidic solvents and solvent mixtures is crucial for designing SMEDDS for poorly soluble drug substances. Since experiments are very time consuming, a model that allows solubility predictions in solvent mixtures from less experimental data is desirable for efficiency. The solvents employed were Labrafil® M1944CS and Labrasol® as lipidic solvents; Capryol-90®, Capryol-PGMC® and Tween®-80 as surfactants; and Transcutol® and PEG-400 as co-solvents. Solubilities of both drugs were determined in single-solvent systems over the temperature (T) range 283-333 K. In the present study, we investigated the applicability of the thermodynamic model to understand the solubility behavior of drugs in the lipidic solvents. Using the Van't Hoff equation and general solubility theory, the thermodynamic functions (Gibbs free energy, enthalpy and entropy of solution, mixing and solvation) for each drug in single and mixed solvents were derived. The thermodynamic parameters were interpreted in the framework of drug-solvent interactions based on chemical similarity and dissimilarity. Clotrimazole and fluconazole were used as the active ingredients; their solubility was measured in single solvents as a function of temperature, and the data obtained were used to derive mathematical models that can predict solubility in multi-component solvent mixtures. Model-dependent parameters for each drug were calculated at each temperature. The experimental solubility data of each solute in the mixed solvent systems were measured and correlated with the values calculated from the exponent model and the log-linear model of Yalkowsky. Good correlation was observed between experimental and predicted solubility. Copyright © 2015 Elsevier B.V. All rights reserved.
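The log-linear model of Yalkowsky used here has a simple form: the log of the mixture solubility interpolates linearly between the log solubilities in the pure solvents, weighted by the cosolvent volume fraction. A sketch with hypothetical solubility values (the paper's measured values are not reproduced in the abstract):

```python
import math

def yalkowsky_log_linear(s_water, s_cosolvent, f_cosolvent):
    """Yalkowsky log-linear mixing rule:
    log10 Sm = (1 - f) * log10 Sw + f * log10 Sc,
    where f is the cosolvent volume fraction. Inputs are illustrative."""
    log_sm = ((1.0 - f_cosolvent) * math.log10(s_water)
              + f_cosolvent * math.log10(s_cosolvent))
    return 10 ** log_sm

# hypothetical solubilities (mg/mL) in pure water and pure cosolvent
s_mix = yalkowsky_log_linear(s_water=0.01, s_cosolvent=100.0, f_cosolvent=0.5)
```

A 50:50 mixture here lands at the geometric mean of the two pure-solvent solubilities, which is exactly what "log-linear" means in this context; deviations from this line are what the exponent model captures.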
International Nuclear Information System (INIS)
Sgouros, George; O'Donoghue, Joseph A.; Larson, Steven M.; Macapinlac, Homer; Larson, Justine J.; Kemeny, Nancy
1998-01-01
Purpose: Due to the cytotoxicity of DNA-bound iodine-125, 5-[125I]iodo-2'-deoxyuridine ([125I]IUdR), an analog of thymidine, has long been recognized as possessing therapeutic potential. In this work, the feasibility and potential effectiveness of hepatic artery infusion of [125I]IUdR are examined. Methods: A mathematical model has been developed that simulates tumor growth and response to [125I]IUdR treatment. The model is used to examine the efficacy and potential toxicity of prolonged infusion therapy. Treatment of kinetically homogeneous tumors with potential doubling times of 4, 5, or 6 days is simulated. Assuming uniformly distributed activity, absorbed dose estimates to the red marrow, liver and whole body are calculated to assess the potential toxicity of treatment. Results: Nine to 10 logs of tumor-cell kill over a 7- to 20-day period are predicted by the various simulations examined. The most slowly proliferating tumor was also the most difficult to eradicate. During the infusion, tumor-cell loss consisted of two components: a plateau phase, beginning at the start of infusion and ending once the infusion time exceeded the potential doubling time of the tumor; and a rapid cell-reduction phase that was close to log-linear. Beyond the plateau phase, treatment efficacy was highly sensitive to the tumor activity concentration. Conclusions: Model predictions suggest that the efficacy of [125I]IUdR therapy will be highly dependent upon the potential doubling time of the tumor. Significant tumor-cell kill will require infusion durations that exceed the longest potential doubling time in the tumor-cell population.
Energy Technology Data Exchange (ETDEWEB)
De Schamphelaere, K.A.C., E-mail: karel.deschamphelaere@ugent.be; Nys, C., E-mail: chnys.nys@ugent.be; Janssen, C.R., E-mail: colin.janssen@ugent.be
2014-10-15
Highlights: • Chronic toxicity of Pb varied 4-fold among three algae species. • The use of an organic P source avoided Pb precipitation in the experiments. • pH and dissolved organic carbon strongly affect Pb toxicity; Ca and Mg do not. • A bioavailability model was developed that accurately predicts toxicity. • Algae may become the most sensitive species to Pb above pH 7.4. - Abstract: Scientifically sound risk assessment and derivation of environmental quality standards for lead (Pb) in the freshwater environment are hampered by insufficient data on chronic toxicity and bioavailability to unicellular green algae. Here, we first performed comparative chronic (72-h) toxicity tests with three algal species in medium at pH 6, containing 4 mg fulvic acid (FA)/L and containing organic phosphorus (P), i.e. glycerol-2-phosphate, instead of PO₄³⁻ to prevent lead-phosphate mineral precipitation. Pseudokirchneriella subcapitata was 4-fold more sensitive to Pb than Chlorella kesslerii, with Chlamydomonas reinhardtii in the middle. The influence of medium physico-chemistry was therefore investigated in detail with P. subcapitata. In synthetic test media, higher concentrations of fulvic acid or lower pH protected against toxicity of (filtered) Pb to P. subcapitata, while the effects of increased Ca or Mg on Pb toxicity were less clear. When toxicity was expressed on a free Pb²⁺ ion activity basis, a log-linear, 260-fold increase of toxicity was observed between pH 6.0 and 7.6. Effects of fulvic acid were calculated to be much more limited (1.9-fold) and were probably even non-existent (depending on the affinity constant for Pb binding to fulvic acid used for calculating speciation). A relatively simple bioavailability model, consisting of a log-linear pH effect on Pb²⁺ ion toxicity linked to the geochemical speciation model Visual Minteq (with the default NICA-Donnan description of metal and proton binding to fulvic acid), provided relatively accurate toxicity predictions.
On the use of dynamic modelling for the design of IRF
International Nuclear Information System (INIS)
Hinds, H.W.
1997-01-01
The multi-purpose high-flux Irradiation Research Facility (IRF) reactor has been proposed by AECL as a replacement for the venerable NRU reactor at Chalk River, and the pre-project design of IRF is currently underway. As part of this design effort, we are currently modelling the dynamic response of the reactor and especially that of the Reactor Regulating System (RRS). The tool chosen for this work is the MATRIXx family of programs, including XMath, SystemBuild and DocumentIt. The SystemBuild tool allows users to specify a complete model by graphically interconnecting a set of modules (SuperBlocks) and/or 'primitives'. Each module, in turn, can be defined graphically by interconnecting a further set of sub-modules and/or 'primitives'. The system supports both continuous (analog) as well as discrete (digital) modules at the same time. Thus, it is possible to accurately model a continuous process coupled to its computer-based control system. The frequency response of the system can be extracted from the same model. The model will be used for control system stability analysis and to choose appropriate design parameters for various controllers and dynamic compensators within both the RRS and other important controllers in the system. The whole system can then be tested using various manoeuvres such as start-ups, shutdowns and step perturbations. It can also be used to verify that the design functions well under extreme conditions such as those which might occur at the beginning or end of the fuel cycle, or when attempting to override a poison-out. The model can also be of practical assistance to other designers in choosing the various parameters involved (e.g., step size of the stepping motor drives for the control absorber rods (CARs), or rundown time of the main primary coolant system pumps). The model currently consists of: point- or 7-node neutron kinetics with temperature and xenon feedback; 1 or 2 sets of log, linear and log rate amplifiers; the RRS (flux
Molva, Celenk; Baysal, Ayse Handan
2014-10-17
Alicyclobacillus acidoterrestris is a spoilage bacterium in fruit juices that leads to high economic losses. The present study evaluated the effect of sporulation medium on the thermal inactivation kinetics of A. acidoterrestris DSM 3922 spores in apple juice (pH 3.82±0.01; 11.3±0.1 °Brix). Bacillus acidocaldarius agar (BAA), Bacillus acidoterrestris agar (BATA), malt extract agar (MEA), potato dextrose agar (PDA) and B. acidoterrestris broth (BATB) were used for sporulation. Inactivation kinetic parameters at 85, 87.5 and 90°C were obtained using the log-linear model. The decimal reduction times at 85°C (D85°C) were 41.7, 57.6, 76.8, 76.8 and 67.2 min; the D87.5°C values were 22.4, 26.7, 32.9, 31.5 and 32.9 min; and the D90°C values were 11.6, 9.9, 14.7, 11.9 and 14.1 min for spores produced on PDA, MEA, BATA, BAA and BATB, respectively. The estimated z-values were 9.05, 6.60, 6.96, 6.15 and 7.46°C, respectively. The present study suggests that the sporulation medium affects the wet-heat resistance of A. acidoterrestris DSM 3922 spores. The dipicolinic acid (DPA) content was found to be highest in heat-resistant spores formed on mineral-containing media. After wet-heat treatment, loss of internal volume due to the release of DPA from the spore core was observed by scanning electron microscopy. Since there is no standardized medium for the sporulation of A. acidoterrestris, the results obtained from this study might be useful for determining and comparing the thermal resistance characteristics of A. acidoterrestris spores in fruit juices. Copyright © 2014 Elsevier B.V. All rights reserved.
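The D- and z-values reported above follow directly from the log-linear model: D is the time for a 1-log10 reduction at a fixed temperature, and z is the temperature increase that reduces D tenfold. A sketch using the PDA D-values from the abstract (a two-point estimate; the paper presumably fits all three temperatures, which is why the result differs slightly from the reported 9.05):

```python
import math

def d_value(time_min, log_reduction):
    """Decimal reduction time from log-linear inactivation:
    D = t / (log10 N0 - log10 N)."""
    return time_min / log_reduction

def z_value(temps, d_values):
    """z-value from the slope of log10(D) versus temperature
    (two-point form; a least-squares fit is used with more points)."""
    slope = ((math.log10(d_values[-1]) - math.log10(d_values[0]))
             / (temps[-1] - temps[0]))
    return -1.0 / slope

# PDA-grown spores from the abstract: D85 = 41.7 min, D90 = 11.6 min
z = z_value([85.0, 90.0], [41.7, 11.6])
```

The two-point estimate comes out near 9°C, consistent with the 9.05°C the study reports for PDA.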
Energy Technology Data Exchange (ETDEWEB)
Kirman, Christopher R., E-mail: ckirman@summittoxicology.com [Summit Toxicology, Orange Village, OH, 44022 (United States); Suh, Mina, E-mail: msuh@toxstrategies.com [ToxStrategies, Inc., Mission Viejo, CA, 92692 (United States); Hays, Sean M., E-mail: shays@summittoxicology.com [Summit Toxicology, Allenspark, CO, 8040 (United States); Gürleyük, Hakan, E-mail: hakan@brooksrand.com [Brooks Applied Labs, Bothell, WA, 98011 (United States); Gerads, Russ, E-mail: russ@brooksrand.com [Brooks Applied Labs, Bothell, WA, 98011 (United States); De Flora, Silvio, E-mail: sdf@unige.it [Department of Health Sciences, University of Genoa, 16132 Genoa (Italy); Parker, William, E-mail: william.parker@duke.edu [Duke University Medical Center, Department of Surgery, Durham, NC, 27710 (United States); Lin, Shu, E-mail: shu.lin@duke.edu [Duke University Medical Center, Department of Surgery, Durham, NC, 27710 (United States); Haws, Laurie C., E-mail: lhaws@toxstrategies.com [ToxStrategies, Inc., Katy, TX, 77494 (United States); Harris, Mark A., E-mail: mharris@toxstrategies.com [ToxStrategies, Inc., Austin, TX, 78751 (United States); Proctor, Deborah M., E-mail: dproctor@toxstrategies.com [ToxStrategies, Inc., Mission Viejo, CA, 92692 (United States)
2016-09-01
To extend previous models of hexavalent chromium [Cr(VI)] reduction by gastric fluid (GF), ex vivo experiments were conducted to address data gaps and limitations identified with respect to (1) GF dilution in the model; (2) reduction of Cr(VI) in fed human GF samples; (3) the number of Cr(VI) reduction pools present in human GF under fed, fasted, and proton pump inhibitor (PPI)-use conditions; and (4) an appropriate form for the pH-dependence of Cr(VI) reduction rate constants. Rates and capacities of Cr(VI) reduction were characterized in gastric contents from fed and fasted volunteers, and from fasted pre-operative patients treated with PPIs. Reduction capacities were first estimated over a 4-h reduction period. Once reduction capacity was established, a dual-spike approach was used in speciated isotope dilution mass spectrometry analyses to characterize the concentration-dependence of the 2nd order reduction rate constants. These data, when combined with previously collected data, were well described by a three-pool model (pool 1 = fast reaction with low capacity; pool 2 = slow reaction with higher capacity; pool 3 = very slow reaction with higher capacity) using pH-dependent rate constants characterized by a piecewise, log-linear relationship. These data indicate that human gastric samples, like those collected from rats and mice, contain multiple pools of reducing agents, and low concentrations of Cr(VI) (< 0.7 mg/L) are reduced more rapidly than high concentrations. The data and revised modeling results herein provide improved characterization of Cr(VI) gastric reduction kinetics, critical for Cr(VI) pharmacokinetic modeling and human health risk assessment. - Highlights: • SIDMS allows for measurement of Cr(VI) reduction rate in gastric fluid ex vivo • Human gastric fluid has three reducing pools • Cr(VI) in drinking water at < 0.7 mg/L is rapidly reduced in human gastric fluid • Reduction rate is concentration- and pH-dependent • A refined PK
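A minimal sketch of a multi-pool, second-order reduction scheme like the three-pool model described above. The rate constants and capacities below are hypothetical, chosen only to show the fast/slow/very-slow structure (they are not the study's fitted values), and 1:1 stoichiometry between Cr(VI) and reducing capacity is assumed.

```python
def simulate_three_pool(c0, pools, dt=0.001, t_end=4.0):
    """Euler integration of second-order Cr(VI) reduction by several reducing pools.

    pools: list of (rate_constant, capacity) pairs; 1:1 stoichiometry assumed.
    Returns the Cr(VI) concentration remaining at t_end (hours).
    """
    c = c0
    caps = [cap for _, cap in pools]  # copy so the input list is reusable
    for _ in range(int(t_end / dt)):
        rates = [k * c * caps[i] for i, (k, _) in enumerate(pools)]
        c = max(c - sum(rates) * dt, 0.0)
        for i, r in enumerate(rates):
            caps[i] -= r * dt
    return c

# Hypothetical parameters (mg/L and L/mg/h): fast/low-capacity, slow/higher-capacity,
# very slow/higher-capacity pools, mirroring the qualitative structure above.
pools = [(2.0, 0.5), (0.1, 5.0), (0.01, 10.0)]
remaining = simulate_three_pool(0.5, pools)
```

Running the same pools at a low versus a high starting concentration reproduces the qualitative finding above: a low Cr(VI) dose is reduced proportionally faster than one that strains the available reducing capacity.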
Pain-related Impairment of Daily Activities After Thoracic Surgery
DEFF Research Database (Denmark)
Ringsted, Thomas K; Wildgaard, Kim; Kreiner, Svend
2013-01-01
-specific questionnaire for assessment of functional impairment due to PTPS. METHODS: Activities were obtained from the literature supplemented by interviews with patients and surgeons. The questionnaire was validated using the Rasch model in order to describe an underlying pain impairment scale. RESULTS: Four of 17...... found. A generalized log-linear Rasch model including local dependence was constructed. Though local dependence influenced reliability, the test-retest reliability estimated under the log-linear Rasch model was high (0.88-0.96). Correlation with items from the Disability of the Arm, Shoulder and Hand...... (quick) questionnaire supported validity (γ=0.46, P...
Support for Marijuana (Cannabis) Legalization: Untangling Age, Period, and Cohort Effects
Directory of Open Access Journals (Sweden)
William Campbell
2017-02-01
Full Text Available In three large, nationally representative surveys of U.S. 12th graders, college students, and adults (N = 9 million) conducted 1968–2015, Americans became significantly more supportive of legal marijuana (cannabis) starting in the mid-1980s. Hierarchical models using age-period-cohort analysis on the adult (General Social Survey) sample showed that the increased support for legalization is primarily a time-period effect rather than a generational or age effect; thus, Americans of all ages became more supportive of legal marijuana. Among 12th graders, support for marijuana legalization was closely linked to perceptions of marijuana safety.
Second-birth rates in Denmark from 1980 to 1994
DEFF Research Database (Denmark)
Strandberg-Larsen, Katrine; Knudsen, Lisbeth B.; Thygesen, Lau Casper
2010-01-01
A statistical age-period-cohort model was used to depict second-time birth rates and the spacing between the first and second child in Denmark, including 524,316 one-child mothers who gave birth to 296,923 second children 1980-1994. The spacing between the first and second child varies according...... to age, as older women had shorter duration from first to second child than younger women. Our results emphasize the importance of including an interaction between age and duration since first birth when analysing second-birth rates....
Castanon, Alejandra; Landy, Rebecca; Pesola, Francesca; Windridge, Peter; Sasieni, Peter
2018-01-01
In the next 25 years, the epidemiology of cervical cancer in England, UK, will change: human papillomavirus (HPV) screening will be the primary test for cervical cancer. Additionally, the proportion of women screened regularly is decreasing, and women who received the HPV vaccine are due to attend screening for the first time. Therefore, we aimed to estimate how vaccination against HPV, changes to the screening test, and falling screening coverage will affect cervical cancer incidence in England up to 2040. We did a data modelling study that combined results from population modelling of incidence trends, observable individual-level data analysed with a generalised linear model, and microsimulation of unobservable disease states. We estimated age-specific absolute risks of cervical cancer in the absence of screening (derived from individual-level data). We used an age-period-cohort model to estimate birth cohort effects. We multiplied the absolute risks by the birth cohort effects to provide absolute risks of cervical cancer for unscreened women in different birth cohorts. We obtained relative risks (RRs) of cervical cancer by screening history (never screened, regularly screened, or lapsed attender) using data from a population-based case-control study for unvaccinated women, and using a microsimulation model for vaccinated women. RRs of primary HPV screening were relative to cytology. We used the proportion of women in each 5-year age group (25-29 years to 75-79 years) and 5-year period (2016-20 to 2036-40) who have a combination of screening and vaccination history, and weighted these to estimate the population incidence. The primary outcome was the number of cases and rates per 100 000 women under four scenarios: no changes to current screening coverage or vaccine uptake and HPV primary testing from 2019 (status quo), changing the year in which HPV primary testing is introduced, introduction of the nine-valent vaccine, and changes to cervical screening coverage
Denham, Bryan E.
2009-01-01
Grounded conceptually in social cognitive theory, this research examines how personal, behavioral, and environmental factors are associated with risk perceptions of anabolic-androgenic steroids. Ordinal logistic regression and logit log-linear models applied to data gathered from high-school seniors (N = 2,160) in the 2005 Monitoring the Future…
Reassessing the Economic Value of Advanced Level Mathematics
Adkins, Michael; Noyes, Andrew
2016-01-01
In the late 1990s, the economic return to Advanced level (A-level) mathematics was examined. The analysis was based upon a series of log-linear models of earnings in the 1958 National Child Development Survey (NCDS) and the National Survey of 1980 Graduates and Diplomates. The core finding was that A-level mathematics had a unique earnings premium…
Interracial and Intraracial Patterns of Mate Selection among America's Diverse Black Populations
Batson, Christie D.; Qian, Zhenchao; Lichter, Daniel T.
2006-01-01
Despite recent immigration from Africa and the Caribbean, Blacks in America are still viewed as a monolith in many previous studies. In this paper, we use newly released 2000 census data to estimate log-linear models that highlight patterns of interracial and intraracial marriage and cohabitation among African Americans, West Indians, Africans,…
Analysis of relationships between qualitative road safety characteristics (conclusion)
Oppe, S.
1980-01-01
The author continues his article in Verkeerskunde 7/80 (see PB 17058) with a description of the essential aspects of the log-linear model and its applicability to traffic safety data. An example is given by the analysis of the BAC (blood-alcohol concentration) of road users.
Genome wide study of maternal and parent-of-origin effects on the etiology of orofacial clefts
DEFF Research Database (Denmark)
Shi, Min; Murray, Jeff; Marazita, Mary L
2012-01-01
We performed a genome wide association analysis of maternally-mediated genetic effects and parent-of-origin (POO) effects on risk of orofacial clefting (OC) using over 2,000 case-parent triads collected through an international cleft consortium. We used log-linear regression models to test indivi...... individual SNPs. For SNPs with a P-value...
Pieters, R.; Baumgartner, H.; Vermunt, J.K.; Bijmolt, T.H.A.
1998-01-01
The citation network of the International Journal of Research in Marketing (IJRM) is examined from 1981 to 1995. We propose a model that contains log-linear and log-multiplicative terms to estimate simultaneously the importance, cohesion, and structural equivalence of journals in the network across
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
ten Cate, Jacob M
2015-01-01
Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that fluoride is perhaps not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of the processes could also be evaluated. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models, but also role models, are of the utmost importance in driving and guiding research and researchers. 2015 S. Karger AG, Basel
DEFF Research Database (Denmark)
Carlson, Kerstin
The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...
ten Cate, J.M.
2015-01-01
Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of
Hoos, Anne B.; Terziotti, Silvia; McMahon, Gerard; Savvas, Katerina; Tighe, Kirsten C.; Alkons-Wolinsky, Ruth
2008-01-01
This report presents and describes the digital datasets that characterize nutrient source inputs, environmental characteristics, and instream nutrient loads for the purpose of calibrating and applying a nutrient water-quality model for the southeastern United States for 2002. The model area includes all of the river basins draining to the south Atlantic and the eastern Gulf of Mexico, as well as the Tennessee River basin (referred to collectively as the SAGT area). The water-quality model SPARROW (SPAtially-Referenced Regression On Watershed attributes), developed by the U.S. Geological Survey, uses a regression equation to describe the relation between watershed attributes (predictors) and measured instream loads (response). Watershed attributes that are considered to describe nutrient input conditions and are tested in the SPARROW model for the SAGT area as source variables include atmospheric deposition, fertilizer application to farmland, manure from livestock production, permitted wastewater discharge, and land cover. Watershed and channel attributes that are considered to affect rates of nutrient transport from land to water and are tested in the SAGT SPARROW model as nutrient-transport variables include characteristics of soil, landform, climate, reach time of travel, and reservoir hydraulic loading. Datasets with estimates of each of these attributes for each individual reach or catchment in the reach-catchment network are presented in this report, along with descriptions of methods used to produce them. Measurements of nutrient water quality at stream monitoring sites from a combination of monitoring programs were used to develop observations of the response variable - mean annual nitrogen or phosphorus load - in the SPARROW regression equation. Instream load of nitrogen and phosphorus was estimated using bias-corrected log-linear regression models using the program Fluxmaster, which provides temporally detrended estimates of long-term mean load well
Modelling SDL, Modelling Languages
Directory of Open Access Journals (Sweden)
Michael Piefel
2007-02-01
Full Text Available Today's software systems are too complex to implement them and model them using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Thus to handle an increasing number of related or integrated languages is the most challenging task in the development of tools. We use object oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. [Figure: average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and red diamonds the SPS experiment UA7 results; the predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data.] LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known ...
Age, period and cohort effects on first-child fertility in Danish men
DEFF Research Database (Denmark)
Kamper-Jørgensen, Mads; Keiding, Niels; Knudsen, Lisbeth B.
Demographic studies of fertility are most often based solely on information about women, leaving out characteristics of men. Thereby valuable information may be lost. The present note intends to explore the potential of the classical age-period-cohort model for describing male first-child fertility...... patterns. The model was fitted to fertility data on Danish men aged 15 to 49 years in the calendar period from 1960 to 1994. We found the classical age-period-cohort model to be an appropriate model for describing male first-child fertility patterns in Denmark. Fluctuations in age-specific male first-child...... fertility rates over period were found, with a nadir in the mid-1980s. Furthermore, age-specific first-child fertility rates were found to be lower in men from younger birth cohorts than in men from older birth cohorts....
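The classical age-period-cohort model used in this note carries a well-known identification problem: because cohort = period − age for every record, the three linear effects are exactly collinear and cannot be separated without an extra constraint. A minimal illustration of that exact linear dependence:

```python
# For any observation, birth cohort is determined by period minus age, so the
# cohort column of an APC design matrix is an exact linear combination of the
# age and period columns (the APC identification problem).
ages = [20, 25, 30, 35]
periods = [1980, 1985, 1990, 1995]

rows = [(a, p, p - a) for a in ages for p in periods]  # (age, period, cohort)

# The cohort column never adds independent information:
assert all(cohort == period - age for age, period, cohort in rows)
```

This is why APC analyses (here and in the abstracts above) must impose a constraint, such as equating neighboring cohort effects, before the three effects can be estimated.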
A Spline-Based Lack-Of-Fit Test for Independent Variable Effect in Poisson Regression.
Li, Chin-Shang; Tu, Wanzhu
2007-05-01
In regression analysis of count data, independent variables are often modeled by their linear effects under the assumption of log-linearity. In reality, the validity of such an assumption is rarely tested, and its use is at times unjustifiable. A lack-of-fit test is proposed for the adequacy of a postulated functional form of an independent variable within the framework of semiparametric Poisson regression models based on penalized splines. It offers added flexibility in accommodating the potentially non-log-linear effect of the independent variable. A likelihood ratio test is constructed for the adequacy of the postulated parametric form, for example log-linearity, of the independent variable effect. Simulations indicate that the proposed test performs well, and that a misspecified parametric model has much reduced power. An example is given.
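The log-linearity assumption this test targets can be made concrete with a small simulation: fit a Poisson regression with a log link by Newton-Raphson and recover a genuinely log-linear covariate effect. This is a hedged sketch in plain Python (the paper's spline-based test itself is not reproduced); the coefficients 0.5 and 0.3 and the sample size are purely illustrative.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def poisson_glm(X, y, iters=25):
    """Fit a Poisson regression with log link (log-linear effect) by Newton-Raphson."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        mu = [math.exp(sum(b * xj for b, xj in zip(beta, row))) for row in X]
        info = [[sum(m * row[i] * row[j] for m, row in zip(mu, X)) for j in range(p)]
                for i in range(p)]
        score = [sum((yi - m) * row[i] for yi, m, row in zip(y, mu, X)) for i in range(p)]
        beta = [b + d for b, d in zip(beta, solve(info, score))]
    return beta

def rpois(lam, rng):
    """Knuth's Poisson sampler (adequate for small rates)."""
    L, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            return k
        k += 1

rng = random.Random(42)
# Simulate counts whose mean really is log-linear in x: mu = exp(0.5 + 0.3*x).
X = [[1.0, rng.uniform(0.0, 3.0)] for _ in range(500)]
y = [rpois(math.exp(0.5 + 0.3 * row[1]), rng) for row in X]
beta = poisson_glm(X, y)   # should land near [0.5, 0.3]
```

When the true effect is not log-linear (e.g. quadratic on the log scale), this same fit is the misspecified parametric model whose shortcomings the proposed lack-of-fit test is designed to detect.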
DEFF Research Database (Denmark)
Larsen, Lars Bjørn; Vesterager, Johan
This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods and identifies tools, which can provide support for this modelling activity.The model adopted for global manufacturing is that of an extended enterprise s...
Directory of Open Access Journals (Sweden)
A.A. Malykh
2017-08-01
Full Text Available In this paper, the concept of locally simple models is considered. Locally simple models are arbitrarily complex models built from relatively simple components. A lot of practically important domains of discourse can be described as locally simple models, for example, business models of enterprises and companies. Up to now, research in human reasoning automation has been mainly concentrated around the most intellectually intensive activities, such as automated theorem proving. On the other hand, the retailer business model is formed from ”jobs”, and each ”job” can be modelled and automated more or less easily. At the same time, the whole retailer model as an integrated system is extremely complex. In this paper, we offer a variant of the mathematical definition of a locally simple model. This definition is intended for modelling a wide range of domains. Therefore, we also must take into account the perceptual and psychological issues. Logic is elitist, and if we want to attract to our models as many people as possible, we need to hide this elitism behind some metaphor, to which ’ordinary’ people are accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming. This allows us to achieve another important goal - to make the documentary models executable. Executable models are models that can act as practical information systems in the described domain of discourse. Thus, if our model is executable, then programming becomes redundant. The direct use of a model, instead of its programming coding, brings important advantages, for example, a drastic cost reduction for development and maintenance. Moreover, since the model is well and sound, and not dissolved within programming modules, we can directly apply AI tools, in particular, machine learning. This significantly expands the possibilities for automation and
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
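As noted above, most hydrological simulation models reduce to some form of water-budget equation. A minimal sketch with hypothetical annual fluxes (all values illustrative, in mm/yr):

```python
def recharge_mm(precip, et, runoff, delta_storage):
    """Simple annual water-budget recharge estimate: R = P - ET - RO - dS (mm/yr)."""
    return precip - et - runoff - delta_storage

# Hypothetical annual fluxes for illustration only
r = recharge_mm(precip=900.0, et=600.0, runoff=150.0, delta_storage=20.0)
```

Physically based models such as Richards-equation solvers refine each term, but calibration against measured heads, as described above, is still how the recharge term is usually constrained.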
International Nuclear Information System (INIS)
Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.
1990-01-01
Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings
conting: an R package for Bayesian analysis of complete and incomplete contingency tables
Overstall, Antony; King, Ruth
2014-01-01
The aim of this paper is to demonstrate the R package conting for the Bayesian analysis of complete and incomplete contingency tables using hierarchical log-linear models. This package allows a user to identify interactions between categorical factors (via complete contingency tables) and to estimate closed population sizes using capture-recapture studies (via incomplete contingency tables). The models are fitted using Markov chain Monte Carlo methods. In particular, implementations of the Me...
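conting itself fits hierarchical log-linear models by MCMC; as a much simpler classical sketch, the expected counts under the log-linear independence model for a two-way table, and the deviance (G²) against them, can be computed directly. The 2×2 table below is hypothetical.

```python
import math

def independence_fit(table):
    """Fitted counts under the log-linear independence model for a two-way table."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    return [[r * c / total for c in col_sums] for r in row_sums]

def g2(observed, fitted):
    """Likelihood-ratio (deviance) statistic: G^2 = 2 * sum O * ln(O/E)."""
    return 2.0 * sum(o * math.log(o / e)
                     for orow, erow in zip(observed, fitted)
                     for o, e in zip(orow, erow) if o > 0)

obs = [[30, 10], [20, 40]]        # hypothetical counts for two categorical factors
fit = independence_fit(obs)        # [[20, 20], [30, 30]]
dev = g2(obs, fit)                 # large G^2 suggests an interaction term is needed
```

A G² that is large relative to a chi-squared reference (1 degree of freedom for a 2×2 table) is the classical analogue of the interaction-detection task that conting handles in a fully Bayesian way.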
Burant, Aniela; Lowry, Gregory V; Karamalidis, Athanasios K
2017-06-20
Carbon capture, utilization, and storage (CCUS), a climate change mitigation strategy, along with unconventional oil and gas extraction, generates enormous volumes of produced water containing high salt concentrations and a litany of organic compounds. Understanding the aqueous solubility of organic compounds related to these operations is important for water treatment and reuse alternatives, as well as for risk assessment purposes. The well-established Setschenow equation can be used to determine the effect of salts on aqueous solubility. However, there is a lack of reported Setschenow constants, especially for polar organic compounds. In this study, the Setschenow constants for selected hydrophilic organic compounds were experimentally determined, and linear free energy models for predicting the Setschenow constant of organic chemicals in concentrated brines were developed. Solid-phase microextraction was employed to measure the salting-out behavior of six selected hydrophilic compounds up to 5 M NaCl and 2 M CaCl2 and in Na-Ca-Cl brines. All compounds, which include phenol, p-cresol, hydroquinone, pyrrole, hexanoic acid, and 9-hydroxyfluorene, exhibited log-linear behavior up to these concentrations, meaning Setschenow constants previously measured at low salt concentrations can be extrapolated up to high salt concentrations for hydrophilic compounds. Setschenow constants measured in NaCl and CaCl2 brines are additive for the compounds measured here, meaning that constants measured in single-salt solutions can be used in multiple-salt solutions. The hydrophilic compounds in this study were selected to elucidate differences in salting-out behavior based on their chemical structure. Using data from this study, as well as literature data, linear free energy relationships (LFERs) for prediction of NaCl, CaCl2, LiCl, and NaBr Setschenow constants were developed and validated. Two LFERs were improved. One LFER uses the Abraham solvation parameters, which include
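A sketch of the Setschenow relationship used throughout this study, including the additivity of single-salt constants in mixed Na-Ca-Cl brines reported above. The constants and baseline solubility below are hypothetical illustration values, not the study's measurements.

```python
def solubility_in_brine(s0, salts):
    """Aqueous solubility under salting-out via the Setschenow equation.

    log10(S0 / S) = sum_i K_i * C_i, with contributions assumed additive per salt.
    salts: list of (setschenow_constant in L/mol, concentration in mol/L).
    """
    exponent = sum(k * c for k, c in salts)
    return s0 / (10 ** exponent)

# Hypothetical constants for a phenol-like solute (not measured values from the study)
K_NACL, K_CACL2 = 0.15, 0.20
s0 = 80.0  # g/L in pure water, illustrative

s_nacl = solubility_in_brine(s0, [(K_NACL, 5.0)])                    # 5 M NaCl brine
s_mixed = solubility_in_brine(s0, [(K_NACL, 2.0), (K_CACL2, 1.0)])   # mixed brine
```

The log-linear form is what lets constants fitted at low ionic strength be extrapolated to concentrated brines, and the additive sum in the exponent is the single-salt-to-mixed-brine step the abstract describes.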
Model-model Perencanaan Strategik (Strategic Planning Models)
Amirin, Tatang M
2005-01-01
The process of strategic planning, formerly called long-term planning, consists of several components, including strategic analysis, setting strategic direction (covering mission, vision, and values), and action planning. Many writers develop models representing the steps of the strategic planning process, i.e. the basic planning model, the problem-based planning model, the scenario model, and the organic or self-organizing model.
DEFF Research Database (Denmark)
Bækgaard, Lars
2001-01-01
The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics.We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...
Projection of future transport energy demand of Thailand
International Nuclear Information System (INIS)
Limanond, Thirayoot; Jomnonkwao, Sajjakaj; Srikaew, Artit
2011-01-01
The objective of this study is to project transport energy consumption in Thailand for the next 20 years. The study develops log-linear regression models and feed-forward neural network models, using as independent variables national gross domestic product, population and the number of registered vehicles. The models are based on 20-year historical data between years 1989 and 2008, and are used to project the trends in future transport energy consumption for years 2010-2030. The final log-linear models include only gross domestic product, since all independent variables are highly correlated. It was found that the projection results of this study were in the range of 54.84-59.05 million tonnes of oil equivalent, 2.5 times the 2008 consumption. The projected demand is only 61-65% of that predicted in a previous study, which used the LEAP model. This major discrepancy in transport energy demand projections suggests that projects related to this key indicator should take into account alternative projections, because these numbers greatly affect plans, policies and budget allocation for national energy management. - Research highlights: → Thailand transport energy consumption would increase to 54.4-59.1 MTOE in Year 2030. → The log-linear models yield a slightly higher projection than the ANN models. → The elasticity of transport energy demand with respect to GDP is 0.995.
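A sketch of the log-linear (log-log) demand model with GDP as the single regressor, as in the study's final models, where the fitted slope is the GDP elasticity of demand. The GDP series below is synthetic, constructed so that the elasticity equals the reported 0.995; it is not Thailand's actual data.

```python
import math

def fit_loglinear(gdp, energy):
    """OLS fit of ln(energy) = a + b*ln(GDP); b is the GDP elasticity of demand."""
    xs = [math.log(g) for g in gdp]
    ys = [math.log(e) for e in energy]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic 20-year series built with elasticity 0.995 (the abstract's value);
# the GDP figures are illustrative, not national accounts data.
gdp = [100.0 * 1.04 ** t for t in range(20)]
energy = [0.5 * g ** 0.995 for g in gdp]

a, elasticity = fit_loglinear(gdp, energy)
# Project demand 22 years past the sample, assuming GDP keeps growing at 4%/yr.
projected = math.exp(a) * (gdp[-1] * 1.04 ** 22) ** elasticity
```

Because the regressors in the study were highly correlated, dropping all but GDP (as done in the final models) leaves exactly this one-elasticity structure.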
Projection of future transport energy demand of Thailand
Energy Technology Data Exchange (ETDEWEB)
Limanond, Thirayoot, E-mail: tlimanond@yahoo.co [School of Transportation Engineering, Institute of Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000 (Thailand); Jomnonkwao, Sajjakaj [School of Transportation Engineering, Institute of Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000 (Thailand); Srikaew, Artit [School of Electrical Engineering, Institute of Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000 (Thailand)
2011-05-15
The objective of this study is to project transport energy consumption in Thailand for the next 20 years. The study develops log-linear regression models and feed-forward neural network models, using as independent variables national gross domestic product, population and the number of registered vehicles. The models are based on 20-year historical data between 1989 and 2008, and are used to project the trends in future transport energy consumption for the years 2010-2030. The final log-linear models include only gross domestic product, since all independent variables are highly correlated. It was found that the projection results of this study were in the range of 54.84-59.05 million tonnes of oil equivalent, 2.5 times the 2008 consumption. The projected demand is only 61-65% of that predicted in a previous study, which used the LEAP model. This major discrepancy in transport energy demand projections suggests that projects related to this key indicator should take into account alternative projections, because these numbers greatly affect plans, policies and budget allocation for national energy management. - Research highlights: → Thailand transport energy consumption would increase to 54.4-59.1 MTOE in Year 2030. → The log-linear models yield a slightly higher projection than the ANN models. → The elasticity of transport energy demand with respect to GDP is 0.995.
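The log-linear approach described in this entry can be sketched in a few lines: regress the log of energy consumption on the log of GDP, so that the fitted slope is the GDP elasticity of demand (reported as 0.995 in the abstract). The numbers below are illustrative stand-ins, not the study's actual Thai data, and the variable names are my own.

```python
import math

# Hypothetical annual observations: GDP (billion USD) and transport
# energy consumption (MTOE). Invented for illustration only.
gdp    = [90, 110, 135, 160, 190, 230]
energy = [14, 17, 20, 24, 28, 34]

# Fit ln(E) = a + b*ln(GDP) by ordinary least squares on the logs.
x = [math.log(g) for g in gdp]
y = [math.log(e) for e in energy]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

# b is the elasticity of energy demand with respect to GDP.
# Project consumption for a hypothetical future GDP level.
gdp_future = 400
projection = math.exp(a + b * math.log(gdp_future))
print(round(b, 3), round(projection, 1))
```

With only GDP retained as a predictor (as in the study, since the candidate predictors were highly correlated), a projection is simply the fitted line exponentiated at a future GDP value.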
DEFF Research Database (Denmark)
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test...
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina
2011-01-01
… covered, illustrating several models such as the Wilson equation and the NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least-squares approach. A full model analysis is applied in each example that discusses the degrees of freedom, dependent and independent variables, and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given.
DEFF Research Database (Denmark)
Ravn, Anders P.; Staunstrup, Jørgen
1994-01-01
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...
Hydrological models are mediating models
Babel, L. V.; Karssenberg, D.
2013-08-01
Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally maintained a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models fulfil the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction deserves closer attention, as it is rarely explicitly presented in the peer-reviewed literature. We believe that devoting
Dikshit, Rajesh P; Yeole, B B; Nagrani, Rajini; Dhillon, P; Badwe, R; Bray, Freddie
2012-08-01
Increasing trends in the incidence of breast cancer have been observed in India, including Mumbai. These have likely stemmed from an increasing adoption of lifestyle factors more akin to those commonly observed in westernized countries. Analyses of breast cancer trends and corresponding estimation of the future burden are necessary to better plan rational cancer control programmes within the country. We used data from the population-based Mumbai Cancer Registry to study time trends in breast cancer incidence rates 1976-2005 and stratified them according to younger (25-49) and older (50-74) age groups. Age-period-cohort models were fitted and the net drift used as a measure of the estimated annual percentage change (EAPC). Age-period-cohort models and population projections were used to predict the age-adjusted rates and number of breast cancer cases circa 2025. Breast cancer incidence increased significantly among older women over three decades (EAPC = 1.6%; 95% CI 1.1-2.0), while a lesser but significant 1% increase in incidence among younger women was observed (EAPC = 1.0; 95% CI 0.2-1.8). Non-linear period and cohort effects were observed; a trends-based model predicted a close-to-doubling of incident cases by 2025, from 1300 mean cases per annum in 2001-2005 to over 2500 cases in 2021-2025. The incidence of breast cancer has increased in Mumbai during the last two to three decades, with increases greater among older women. The number of breast cancer cases is predicted to double to over 2500 cases, the vast majority affecting older women. Copyright © 2012 Elsevier Ltd. All rights reserved.
Birth cohort effects on mortality in Danish women
DEFF Research Database (Denmark)
Jacobsen, Rune; Keiding, Niels; Lynge, Elsebeth
… the mothers of the babyboomers, and the women most heavily hit by the epidemic of sexually transmitted diseases in the mid-1940s. These generations of women furthermore entered the Danish labour market in massive numbers in the 1960s. In the present study we examine the mortality of Danish women and compare it to the mortality of Danish men, Norwegian women and Swedish women. Specifically we aim to answer the questions: 1) Are there comparable birth cohort effects on mortality in Norway and Sweden, and what is the impact of the respective Danish birth cohorts on the life expectancy measure? 2) Are there specific causes … groups. The data was analysed using descriptive techniques, age-period-cohort modelling and age-decomposition of life expectancies. Results: The results showed no similar birth cohort effect for Norway and Sweden when compared to Denmark, and a relatively high impact of the birth cohort effect on life …
DEFF Research Database (Denmark)
Fuglede, Niels; Langballe, Oline; Svendsen, Anne Louise
2006-01-01
The authors report on the incidence rates of breast cancer overall and by histology in a population of unscreened women constituting approximately 80% of the total population of women in Denmark from 1973-2002, utilizing the files of the nationwide Danish Cancer Registry. The age-specific incidence rates of breast cancer increased throughout the period, and further, marked changes in the age-specific incidence pattern were observed, where the plateau and change of slope around the age of 46-48 in 1973-1981 shifted to around age 64-66 years in 1994-2002. Age-period-cohort modeling indicated no disproportionate changes by histology in any age group from 1988-2002. Thus, previous reports of a disproportionate increase in lobular breast cancer could not be confirmed in a non-screened population, whereas important changes over the past decade in the age-specific incidence pattern of breast cancer particular …
International Nuclear Information System (INIS)
Phillips, C.K.
1985-12-01
This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. Three distinct types of modelling codes, whose validity and limitations will be contrasted, are considered: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs
Modelling in Business Model design
Simonse, W.L.
2013-01-01
It appears that business model design might not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designer's challenge is to combine strategy and
International Nuclear Information System (INIS)
Michel, F.C.
1989-01-01
Three existing eclipse models for the PSR 1957 + 20 pulsar are discussed in terms of their requirements and the information they yield about the pulsar wind: the interacting wind from a companion model, the magnetosphere model, and the occulting disk model. It is shown that the wind model requires an MHD wind from the pulsar, with enough particles that the Poynting flux of the wind can be thermalized; in this model, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model requires an EM wind, which is Poynting flux dominated; the advantage of this model over the wind model is that the plasma density inside the magnetosphere can be orders of magnitude larger than in a magnetospheric tail blown back by wind interaction. The occulting disk model also requires an EM wind, so that the interaction would be pushed down onto the companion surface, minimizing direct interaction of the wind with the orbiting macroscopic particles.
International Nuclear Information System (INIS)
Yang, H.
1999-01-01
The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future
Colorectal cancer mortality trends in Córdoba, Argentina.
Pou, Sonia Alejandra; Osella, Alberto Rubén; Eynard, Aldo Renato; Niclis, Camila; Diaz, María del Pilar
2009-12-01
Colorectal cancer is a leading cause of death worldwide for men and women, and one of the most commonly diagnosed in Córdoba, Argentina. The aim of this work was to provide an up-to-date approach to the descriptive epidemiology of colorectal cancer in Córdoba through the estimation of mortality trends in the period 1986-2006, using Joinpoint and age-period-cohort (APC) models. Age-standardized (world population) mortality rates (ASMR), overall and truncated (35-64 years), were calculated and Joinpoint regression performed to compute the estimated annual percentage changes (EAPC). Poisson sequential models were fitted to estimate the effect of age (11 age groups), period (1986-1990, 1991-1995, 1996-2000 or 2001-2006) and cohort (13 ten-year cohorts overlapping each other by five years) on colorectal cancer mortality rates. ASMR showed an overall significant decrease (EAPC -0.9, 95%CI: -1.7, -0.2) for women, more noticeable from 1996 onwards (EAPC -2.1, 95%CI: -4.0, -0.1). The age effect showed an important rise in both sexes, more evident in males. Birth cohort and period effects reflected increasing and decreasing tendencies for men and women, respectively. Differences in mortality rates were found according to sex and could be related to age-period-cohort effects linked to the ageing process, health care and lifestyle. Further research is needed to elucidate the specific age-, period- and cohort-related factors.
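The estimated annual percentage change used in trend analyses such as this one comes from a Poisson log-linear fit of death counts against calendar time with a person-years offset. The sketch below fits such a model by Newton-Raphson; the counts, person-years and period midpoints are invented, and only an intercept plus a linear period term is included, not the full age-period-cohort structure.

```python
import math

# Hypothetical deaths and person-years by calendar-period midpoint;
# all numbers invented for illustration.
year   = [1988, 1993, 1998, 2003]
deaths = [210, 228, 221, 205]
pyears = [1.00e6, 1.05e6, 1.10e6, 1.15e6]

t = [yr - 1995.5 for yr in year]  # centre time to stabilise the fit

# Poisson log-linear model: deaths_i ~ Poisson(pyears_i * exp(a + b*t_i)),
# maximised by Newton-Raphson on (a, b).
a = math.log(sum(deaths) / sum(pyears))
b = 0.0
for _ in range(25):
    mu  = [p * math.exp(a + b * ti) for p, ti in zip(pyears, t)]
    ga  = sum(d - m for d, m in zip(deaths, mu))                # dL/da
    gb  = sum((d - m) * ti for d, m, ti in zip(deaths, mu, t))  # dL/db
    haa = sum(mu)
    hab = sum(m * ti for m, ti in zip(mu, t))
    hbb = sum(m * ti * ti for m, ti in zip(mu, t))
    det = haa * hbb - hab * hab
    a  += ( hbb * ga - hab * gb) / det
    b  += (-hab * ga + haa * gb) / det

eapc = 100.0 * (math.exp(b) - 1.0)  # estimated annual percentage change
print(round(eapc, 2))
```

With these invented counts the fitted drift is a mild decline, of the same character as the female ASMR trend reported above.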
DEFF Research Database (Denmark)
Blomhøj, Morten
2004-01-01
Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical …
2016-01-01
This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.
Bottle, Neil
2013-01-01
The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...
The impact of online user reviews on hotel room sales
Ye, Q.; Law, R.; Gu, B.
2009-01-01
Despite hospitality and tourism researchers’ recent attempts at examining different aspects of online word-of-mouth [WOM], its impact on hotel sales remains largely unknown in the existing literature. To fill this void, we conduct a study to empirically investigate the impact of online consumer-generated reviews on hotel room sales. Utilizing data collected from the largest travel website in China, we develop a fixed effect log-linear regression model to assess the influence of online reviews...
International Nuclear Information System (INIS)
Frampton, Paul H.
1998-01-01
In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA
International Nuclear Information System (INIS)
Frampton, P.H.
1998-01-01
In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA. copyright 1998 American Institute of Physics
Modeling Documents with Event Model
Directory of Open Access Journals (Sweden)
Longhui Wang
2015-08-01
Full Text Available Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning is learning the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear method that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.
DEFF Research Database (Denmark)
Gøtze, Jens Peter; Krentz, Andrew
2014-01-01
In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...
Jongerden, M.R.; Haverkort, Boudewijn R.H.M.
2008-01-01
The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,
DEFF Research Database (Denmark)
Højgaard, Tomas; Hansen, Rune
The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful...
Kempen, van A.; Kok, H.; Wagter, H.
1992-01-01
In Computer Aided Drafting three groups of three-dimensional geometric modelling can be recognized: wire frame, surface and solid modelling. One of the methods to describe a solid is by using a boundary based representation. The topology of the surface of a solid is the adjacency information between
Poortman, Sybilla; Sloep, Peter
2006-01-01
Educational models describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in
International Nuclear Information System (INIS)
V. Chipman
2002-01-01
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their postclosure analyses
DEFF Research Database (Denmark)
Kindler, Ekkart
2009-01-01
There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most …
Familial aggregation of congenital hydrocephalus in a nationwide cohort
DEFF Research Database (Denmark)
Munch, Tina Nørgaard; Rostgaard, Klaus; Rasmussen, Marie-Louise Hee
2012-01-01
The objective of the study was to investigate familial aggregation of primary congenital hydrocephalus in an unselected, nationwide population. Based on the Danish Central Person Register, we identified all children born in Denmark between 1978 and 2008 and their family members (up to third-degree relatives). Information on primary congenital hydrocephalus was obtained from the National Patient Discharge Register. Using binomial log-linear regression, we estimated recurrence risk ratios of congenital hydrocephalus. An alternative log-linear regression model was applied to quantify the genetic effect and the maternal effect. Of 1 928 683 live-born children, 2194 had a diagnosis of idiopathic congenital hydrocephalus (1.1/1000). Of those, 75 (3.4%) had at least one other family member with primary congenital hydrocephalus. Significantly increased recurrence risk ratios of primary congenital hydrocephalus were …
Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R
2007-01-01
Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
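At its core, a likelihood-ratio test of association compares the maximized log-likelihood under the alternative with that under the null and refers twice the difference to a chi-square distribution. The sketch below is a deliberately simplified, TDT-style binomial version with invented transmission counts, not the triad-based log-linear/EM model described in the abstract.

```python
import math

# Toy counts of transmitted vs non-transmitted copies of a candidate
# allele from heterozygous parents (hypothetical numbers).
transmitted, not_transmitted = 62, 38
n = transmitted + not_transmitted

def loglik(p):
    """Binomial log-likelihood with transmission probability p."""
    return transmitted * math.log(p) + not_transmitted * math.log(1.0 - p)

# Null: p = 0.5 (no association); alternative: p = MLE = 62/100.
lrt = 2.0 * (loglik(transmitted / n) - loglik(0.5))

# p-value from the chi-square distribution with 1 df:
# P(chi2_1 > x) = erfc(sqrt(x/2))
p_value = math.erfc(math.sqrt(lrt / 2.0))
print(round(lrt, 2), round(p_value, 4))
```

The Combined_LRT of the abstract generalizes this idea to log-linear mating-type models, with the EM algorithm recovering information from incompletely genotyped families.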
The great sleep recession: changes in sleep duration among US adolescents, 1991-2012.
Keyes, Katherine M; Maslowsky, Julie; Hamilton, Ava; Schulenberg, John
2015-03-01
Average nightly sleep times precipitously decline from childhood through adolescence. There is increasing concern that historical shifts also occur in overall adolescent sleep time. Data were drawn from Monitoring the Future, a yearly, nationally representative cross-sectional survey of adolescents in the United States from 1991 to 2012 (N = 272 077) representing birth cohorts from 1973 to 2000. Adolescents were asked how often they get ≥7 hours of sleep and how often they get less sleep than they should. Age-period-cohort models were estimated. Adolescent sleep generally declined over 20 years; the largest change occurred between 1991-1995 and 1996-2000. Age-period-cohort analyses indicate adolescent sleep is best described across demographic subgroups by an age effect, with sleep decreasing across adolescence, and a period effect, indicating that sleep is consistently decreasing, especially in the late 1990s and early 2000s. There was also a cohort effect among some subgroups, including male subjects, white subjects, and those in urban areas, with the earliest cohorts obtaining more sleep. Girls were less likely to report getting ≥7 hours of sleep compared with boys, as were racial/ethnic minorities, students living in urban areas, and those of low socioeconomic status (SES). However, racial/ethnic minorities and adolescents of low SES were more likely to self-report adequate sleep, compared with white subjects and those of higher SES. Declines in self-reported adolescent sleep across the last 20 years are concerning. Mismatch between perceptions of adequate sleep and actual reported sleep times for racial/ethnic minorities and adolescents of low SES are additionally concerning and suggest that health education and literacy approaches may be warranted. Copyright © 2015 by the American Academy of Pediatrics.
Directory of Open Access Journals (Sweden)
P. Grimaldi
2012-07-01
Full Text Available Stereometric modelling means modelling achieved with: – the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); – the visualization of the shot in two distinct windows; – the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately used for the simple perspective view of an object, the word stereo must be added, so that "3D stereo vision" stands for a true three-dimensional view in which the width, height and depth of the surveyed image can be measured. This is achieved through the development of a stereometric model, either real or virtual, by the "materialization", either real or virtual, of the optical stereometric model made visible with a stereoscope. Continuous online updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available online at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Kreiner, Svend; Christensen, Karl Bang
Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models
International Nuclear Information System (INIS)
Woosley, S.E.; California, University, Livermore, CA); Weaver, T.A.
1981-01-01
Recent progress in understanding the observed properties of type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the Ni-56 produced therein is reviewed. The expected nucleosynthesis and gamma-line spectra for this model of type I explosions and a model for type II explosions are presented. Finally, a qualitatively new approach to the problem of massive star death and type II supernovae, based upon a combination of rotation and thermonuclear burning, is discussed. While the theoretical results of existing models are predicated upon the assumption of a successful core bounce calculation and the neglect of such two-dimensional effects as rotation and magnetic fields, the new model suggests an entirely different scenario in which a considerable portion of the energy carried by an equatorially ejected blob is deposited in the red giant envelope overlying the mantle of the star.
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Indian Academy of Sciences (India)
2School of Water Resources, Indian Institute of Technology, Kharagpur ... the most accepted method for modelling LULCC using current ... We used UTM coordinate system with zone 45 ... need to develop criteria for making decision about.
National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...
Energy models characterize the energy system, its evolution, and its interactions with the broader economy. The energy system consists of primary resources, including both fossil fuels and renewables; power plants, refineries, and other technologies to process and convert these r...
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available, as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
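As a marginal illustration of the matrix-algebra viewpoint such a text assumes (not an excerpt from the book), an ordinary least squares fit of a linear model follows directly from the normal equations; the data below are invented:

```python
import numpy as np

def ols_fit(X, y):
    """Solve the normal equations (X'X) b = X'y for the coefficient
    vector b of the linear model y = X b + e."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    return np.linalg.solve(X.T @ X, X.T @ y)

# Exact line y = 2 + 3x recovered from noiseless data
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])   # intercept + slope design matrix
y = 2.0 + 3.0 * x
b = ols_fit(X, y)
print(b)  # → [2. 3.]
```

In practice one would use a numerically safer decomposition (e.g. QR, as `np.linalg.lstsq` does) rather than forming X'X explicitly.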
Skaaret, Eimund
Calculation procedures used in the design of ventilating systems, especially suited for displacement ventilation and its link to mixing ventilation, are addressed. The two-zone flow model is considered, and the steady-state and transient solutions are addressed. Different methods of supplying air are discussed, and different types of air flow are considered: piston flow, plane flow and radial flow. An evaluation model for ventilation systems is presented.
Model uncertainty: Probabilities for models?
International Nuclear Information System (INIS)
Winkler, R.L.
1994-01-01
Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities; the question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model-uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work underway to address these questions looks very promising.
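One standard concrete way to "treat model uncertainty in terms of probabilities" is Bayesian model averaging; the abstract does not say whether this is the approach the author advocates, so the sketch below, with invented priors, likelihoods and predictions, is purely illustrative:

```python
# Minimal Bayesian model averaging sketch (illustrative numbers only).
# Two candidate models predict a quantity; each has a prior probability
# and a likelihood for the observed data. Posterior model probabilities
# then weight the models' predictions.
priors = {"model_A": 0.5, "model_B": 0.5}
likelihoods = {"model_A": 0.02, "model_B": 0.08}   # p(data | model), assumed
predictions = {"model_A": 10.0, "model_B": 14.0}   # each model's point prediction

evidence = sum(priors[m] * likelihoods[m] for m in priors)
posteriors = {m: priors[m] * likelihoods[m] / evidence for m in priors}
averaged = sum(posteriors[m] * predictions[m] for m in priors)

print({m: round(p, 3) for m, p in posteriors.items()})  # → {'model_A': 0.2, 'model_B': 0.8}
print(round(averaged, 3))                               # → 13.2
```

The interpretation issue the abstract alludes to is visible even here: the posterior weights only make sense if one of the candidate models is taken to be "true", which is rarely the case in practice.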
International Nuclear Information System (INIS)
Fryer, M.O.
1984-01-01
The temperature measurements provided by thermocouples (TCs) are important for the operation of pressurized water reactors. During severe inadequate-core-cooling incidents, extreme temperatures may cause the type K TCs used for core exit temperature monitoring to perform poorly. A model of TC electrical behavior has been developed to determine how TCs react under extreme temperatures. The model predicts the voltage output of the TC and its impedance. A series of experiments was conducted on a length of type K thermocouple to validate the model. Impedance was measured at several temperatures between 22 °C and 1100 °C and at frequencies between dc and 10 MHz. The model was able to accurately predict impedance over this wide range of conditions: the average percentage difference between experimental data and the model was less than 6.5%, with an experimental accuracy of ±2.5%. There is a striking difference between impedance-versus-frequency plots at 300 °C and at higher temperatures, which may be useful in validating TC data during accident conditions
Kallman, T.
2010-01-01
Warm absorber spectra are characterized by the many lines from partially ionized intermediate-Z elements, and from iron, detected with the grating instruments on Chandra and XMM-Newton. If these ions are formed in a gas in photoionization equilibrium, they correspond to a broad range of ionization parameters, although there is evidence for certain preferred values. A test for any dynamical model of these outflows is to reproduce these properties at some level of detail. In this paper we present a statistical analysis of the ionization distribution which can be applied both to observed spectra and to theoretical models. As an example, we apply it to our dynamical models for warm absorber outflows, based on evaporation from the molecular torus.
Smith, J. A.; Cooper, K.; Randolph, M.
1984-01-01
A classical description of the one dimensional radiative transfer treatment of vegetation canopies was completed and the results were tested against measured prairie (blue grama) and agricultural canopies (soybean). Phase functions are calculated in terms of directly measurable biophysical characteristics of the canopy medium. While the phase functions tend to exhibit backscattering anisotropy, their exact behavior is somewhat more complex and wavelength dependent. A Monte Carlo model was developed that treats soil surfaces with large periodic variations in three dimensions. A photon-ray tracing technology is used. Currently, the rough soil surface is described by analytic functions and appropriate geometric calculations performed. A bidirectional reflectance distribution function is calculated and, hence, available for other atmospheric or canopy reflectance models as a lower boundary condition. This technique is used together with an adding model to calculate several cases where Lambertian leaves possessing anisotropic leaf angle distributions yield non-Lambertian reflectance; similar behavior is exhibited for simulated soil surfaces.
Eck, Christof; Knabner, Peter
2017-01-01
Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book readers will learn to derive mathematical models which help to understand real world phenomena. At the same time a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees are given. An essential feature of this book is that mathematical structures are used as an ordering principle and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.
Cardey, Sylviane
2013-01-01
In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language (or languages) in a microscopic manner by means of intra/inter-language norms and divergences, going progressively from languages as systems to linguistic, mathematical and computational models, which, being based on a constructive approach, are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro-systems. The abstract model, which contrary to the current state of the art works in int
Directory of Open Access Journals (Sweden)
Aarti Sharma
2009-01-01
Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease the harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.
International Nuclear Information System (INIS)
Woosley, S.E.; Weaver, T.A.
1980-01-01
Recent progress in understanding the observed properties of Type I supernovae as a consequence of the thermonuclear detonation of white dwarf stars and the ensuing decay of the ⁵⁶Ni produced therein is reviewed. Within the context of this model for Type I explosions and the 1978 model for Type II explosions, the expected nucleosynthesis and gamma-line spectra from both kinds of supernovae are presented. Finally, a qualitatively new approach to the problem of massive star death and Type II supernovae based upon a combination of rotation and thermonuclear burning is discussed.
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models, and show how the transformation of a numerical model into an interactive painting can both provide insight and solve real-world problems. The example cases include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different temporal and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields, including not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
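The advection scheme described above runs in WebGL on the GPU. As a hedged, CPU-side illustration of the same backward-advection idea (nearest-neighbour sampling assumed, all data invented, and without the drawing/mask compositing), a single step might look like:

```python
import numpy as np

def advect(image, u, v, dt=1.0):
    """One semi-Lagrangian advection step: each pixel samples the image
    at the position it came from (backward trace, nearest neighbour)."""
    ny, nx = image.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    src_x = np.clip(np.round(xx - dt * u).astype(int), 0, nx - 1)
    src_y = np.clip(np.round(yy - dt * v).astype(int), 0, ny - 1)
    return image[src_y, src_x]

# Uniform rightward flow shifts a bright pixel one column to the right
img = np.zeros((4, 4)); img[2, 1] = 1.0
out = advect(img, u=np.ones((4, 4)), v=np.zeros((4, 4)))
print(np.argwhere(out == 1.0))  # → [[2 2]]
```

A GPU shader performs exactly this per-pixel backward lookup in parallel, which is why the technique scales to millions of cells at interactive rates.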
Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.
This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…
DEFF Research Database (Denmark)
Nash, Ulrik William
2014-01-01
Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabil...
International Nuclear Information System (INIS)
Michel, F.C.
1989-01-01
This paper addresses the question of what, if one overlooks their idiosyncratic difficulties, could be learned from the various models about the pulsar wind. The wind model requires an MHD wind from the pulsar, namely one with enough particles that the Poynting flux of the wind can be thermalized; otherwise there is no shock and the pulsar wind simply reflects like a flashlight beam. Additionally, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model probably requires an EM wind, which is Poynting-flux dominated. Reflection in this case would arguably minimize the intimate interaction between the two flows that leads to tail formation, and thereby permit a weakly magnetized tail. The occulting disk model also points to an EM wind, so that the interaction is pushed down onto the companion surface (to form the neutral fountain) and direct interaction of the wind with the orbiting macroscopic particles is minimized
African Journals Online (AJOL)
Simple analytic polynomials have been proposed for estimating solar radiation in the traditional Northern, Central and Southern regions of Malawi. There is a strong agreement between the polynomials and the SSE model with R2 values of 0.988, 0.989 and 0.989 and root mean square errors of 0.061, 0.057 and 0.062 ...
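The R² values and root mean square errors quoted above are standard goodness-of-fit statistics. For reference, a generic computation (with invented data, not the Malawi radiation measurements or the SSE model) is:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y, y_hat):
    """Root mean square error of the fit."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

y = np.array([1.0, 2.0, 3.0, 4.0])       # invented observations
y_hat = np.array([1.1, 1.9, 3.1, 3.9])   # invented model predictions
print(round(r_squared(y, y_hat), 3))  # → 0.992
print(round(rmse(y, y_hat), 3))       # → 0.1
```

Note that R² close to 1 does not by itself guarantee a good model; the RMSE puts the error in the physical units of the quantity being predicted.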
Lomnitz, Cinna
Tichelaar and Ruff [1989] propose to “estimate model variance in complicated geophysical problems,” including the determination of focal depth in earthquakes, by means of unconventional statistical methods such as bootstrapping. They are successful insofar as they are able to duplicate the results from more conventional procedures.
International Nuclear Information System (INIS)
Norgett, M.J.
1980-01-01
Calculations, drawing principally on developments at AERE Harwell, of the relaxation about lattice defects are reviewed with emphasis on the techniques required for such calculations. The principles of defect modelling are outlined and various programs developed for defect simulations are discussed. Particular calculations for metals, ionic crystals and oxides, are considered. (UK)
DEFF Research Database (Denmark)
Stubkjær, Erik
2005-01-01
to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders...
DEFF Research Database (Denmark)
About the reconstruction of Palle Nielsen's (b. 1942) work The Model from 1968: a gigantic playground for children in the museum, where they can freely romp about, climb ropes, crawl on wooden structures, work with tools, jump in foam rubber, paint with finger paints and dress up in costumes.
International Nuclear Information System (INIS)
Wenzel, W.J.; Gallegos, A.F.; Rodgers, J.C.
1985-01-01
The BIOTRAN model was developed at Los Alamos to help predict short- and long-term consequences to man from releases of radionuclides into the environment. It is a dynamic model that simulates on a daily and yearly basis the flux of biomass, water, and radionuclides through terrestrial and aquatic ecosystems. Biomass, water, and radionuclides are driven within the ecosystems by climate variables stochastically generated by BIOTRAN each simulation day. The climate variables influence soil hydraulics, plant growth, evapotranspiration, and particle suspension and deposition. BIOTRAN has 22 different plant growth strategies for simulating various grasses, shrubs, trees, and crops. Ruminants and humans are also dynamically simulated by using the simulated crops and forage as intake for user-specified diets. BIOTRAN has been used at Los Alamos for long-term prediction of health effects to populations following potential accidental releases of uranium and plutonium. Newly developed subroutines are described: a human dynamic physiological and metabolic model; a soil hydrology and irrigation model; limnetic nutrient and radionuclide cycling in fresh-water lakes. 7 references
DEFF Research Database (Denmark)
Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens
2011-01-01
term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides, there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence...
1975-01-01
thai h’liathe0in antd is finaull’ %IIIrd alt %tramlit And drohlttle. Mike aplpars Ito inua•,e upward in outler a rei and dowoi. ward it %iunr areli, Oil...fiducial marks should be constant and the edges phobic nor hydrophilic is better for routine sharpl ) defined. model testing. Before each launching in
Indian Academy of Sciences (India)
Molecular Modeling: A Powerful Tool for Drug Design and Molecular Docking. Rama Rao Nadendla. General Article, Resonance – Journal of Science Education, Volume 9, Issue 5, May 2004, pp. 51-60.
International Nuclear Information System (INIS)
Alsaed, A.
2004-01-01
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations, to calculate lower-bound tolerance limit (LBTL) values, and to determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality
Building Models and Building Modelling
DEFF Research Database (Denmark)
Jørgensen, Kaj; Skauge, Jørn
2008-01-01
The report's introductory chapter describes the primary concepts relating to building models and sets out some fundamental issues concerning computer-based modelling. In addition, the difference between drawing programs and building-modelling programs is described. Important aspects of comp...
DEFF Research Database (Denmark)
2012-01-01
The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns. In drawing upon both historical and contemporary perspectives, this book provides evidence of the ways in which relations between representation and the represented continue to be reconsidered. It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience. On this subject, this book makes essential reading for anyone considering new ways of thinking about architecture.
Barr, Michael
2002-01-01
Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.
Directory of Open Access Journals (Sweden)
Aarti Sharma
2009-12-01
Full Text Available
DEFF Research Database (Denmark)
Pedersen, Mogens Jin; Stritch, Justin Michael
2018-01-01
Replication studies relate to the scientific principle of replicability and serve the significant purpose of providing supporting (or contradicting) evidence regarding the existence of a phenomenon. However, replication has never been an integral part of public administration and management research. Recently, scholars have issued calls for more replication, but academic reflections on when replication adds substantive value to public administration and management research are needed. This concise article presents a conceptual model, RNICE, for assessing when and how a replication study contributes knowledge about a social phenomenon and advances knowledge in the public administration and management literatures. The RNICE model provides a vehicle for researchers who seek to evaluate or demonstrate the value of a replication study systematically. We illustrate the practical application ...
DEFF Research Database (Denmark)
Lasrado, Lester Allan; Vatrapu, Ravi
2016-01-01
Recent advancements in set theory and readily available software have enabled social science researchers to bridge the variable-centered quantitative and case-based qualitative methodological paradigms in order to analyze multi-dimensional associations beyond the linearity assumptions, aggregate effects, unicausal reduction, and case specificity. Based on the developments in set-theoretical thinking in the social sciences and employing methods like Qualitative Comparative Analysis (QCA), Necessary Condition Analysis (NCA), and set visualization techniques, in this position paper we propose and demonstrate a new approach to maturity models in the domain of Information Systems. This position paper describes the set-theoretical approach to maturity models, presents current results and outlines future research work.
DEFF Research Database (Denmark)
Bork Petersen, Franziska
2013-01-01
For the presentation of his autumn/winter 2012 collection in Paris and subsequently in Copenhagen, Danish designer Henrik Vibskov installed a mobile catwalk. The article investigates the choreographic impact of this scenography on those who move through it. Drawing on Dance Studies, the analytical focus centres on how the catwalk scenography evokes a 'defiguration' of the walking models and to what effect. Vibskov's mobile catwalk draws attention to the walk, which is a key element of models' performance but which usually functions in fashion shows merely to present clothes in the most advantageous manner. Stepping on the catwalk's sloping, moving surfaces decelerates the models' walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression that they perform on most contemporary ...
DEFF Research Database (Denmark)
Arnoldi, Jakob
The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics, among other things, claim can lead to market manipulation. Drawing ... The article analyses these challenges and argues that we witness a new post-social form of human-technology interaction that will lead to a reconfiguration of professional codes for financial trading.
Vincent, Julian F V
2003-01-01
Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more compl...
Energy Technology Data Exchange (ETDEWEB)
McIllvaine, C M
1994-07-01
Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO₂), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically nitrogen oxides (NOₓ) and non-methane organic compounds (NMOC) in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane.) Additionally, ozone formation is a function of the ratio of NMOC concentrations to NOₓ concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e., background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the ratio of the NMOC and NOₓ coordinates of the point, known as the NMOC/NOₓ ratio. Results obtained by the described model are presented.
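The NMOC/NOₓ ratio that locates a point on the isopleth diagram is a simple quotient of the two coordinates. A trivial sketch with invented concentrations follows; the units (ppmC for NMOC, ppm for NOₓ) follow common EKMA usage and are an assumption here, not taken from the text:

```python
def nmoc_nox_ratio(nmoc_ppmc, nox_ppm):
    """Ratio locating an ozone estimate on an EKMA isopleth diagram.
    Inputs are ambient concentrations; values below are illustrative only."""
    return nmoc_ppmc / nox_ppm

# Hypothetical morning concentrations for an urban airshed
print(round(nmoc_nox_ratio(0.8, 0.1), 3))  # → 8.0
```

Low ratios indicate a NOₓ-rich (VOC-limited) regime and high ratios a NOₓ-limited regime, which is why the same ozone target calls for different control strategies in different regions.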
Walker, Ellen A
2010-01-01
As clinical studies reveal that chemotherapeutic agents may impair several different cognitive domains in humans, the development of preclinical animal models is critical to assess the degree of chemotherapy-induced learning and memory deficits and to understand the underlying neural mechanisms. In this chapter, the effects of various cancer chemotherapeutic agents in rodents on sensory processing, conditioned taste aversion, conditioned emotional response, passive avoidance, spatial learning, cued memory, discrimination learning, delayed-matching-to-sample, novel-object recognition, electrophysiological recordings and autoshaping are reviewed. It appears at first glance that the effects of the cancer chemotherapy agents in these many different models are inconsistent. However, a literature is emerging that reveals subtle or unique changes in sensory processing, acquisition, consolidation and retrieval that are dose- and time-dependent. As more studies examine cancer chemotherapeutic agents alone and in combination during repeated treatment regimens, the animal models will become more predictive tools for the assessment of these impairments and the underlying neural mechanisms. The eventual goal is to collect enough data to enable physicians to make informed choices about therapeutic regimens for their patients and discover new avenues of alternative or complementary therapies that reduce or eliminate chemotherapy-induced cognitive deficits.
Energy Technology Data Exchange (ETDEWEB)
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Energy Technology Data Exchange (ETDEWEB)
Chandler, Graham
2011-03-15
Ken Dedeluk is the president and CEO of Computer Modeling Group (CMG). Dedeluk started his career with Gulf Oil in 1972, working in computer-assisted design; he then joined Imperial Esso and Shell, where he became VP of international operations, and finally joined CMG in 1998. CMG made a decision that turned out to be the company's turning point: to provide intensive support and service to help customers better use its technology. Thanks to this service, customer satisfaction grew, as did revenues.
Model integration and a theory of models
Dolk, Daniel R.; Kottemann, Jeffrey E.
1993-01-01
Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...
1989-01-01
A wooden model of the ALEPH experiment and its cavern. ALEPH was one of 4 experiments at CERN's 27km Large Electron Positron collider (LEP) that ran from 1989 to 2000. During 11 years of research, LEP's experiments provided a detailed study of the electroweak interaction. Measurements performed at LEP also proved that there are three – and only three – generations of particles of matter. LEP was closed down on 2 November 2000 to make way for the construction of the Large Hadron Collider in the same tunnel. The cavern and detector are in separate locations - the cavern is stored at CERN and the detector is temporarily on display in Glasgow physics department. Both are available for loan.
Directory of Open Access Journals (Sweden)
Robert F. Love
2001-01-01
Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most commonly employed in practice. In the literature, some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
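The three goodness-of-fit criteria described above can be sketched in a few lines. The inflation-factor ℓ2 predictor below is a simplified stand-in for the paper's ℓkpθ-norm, and the coordinates and actual distances are hypothetical:

```python
import math

def predicted_distance(a, b, k=1.0, p=2.0):
    """Inflation-factor l_p distance predictor: k times the l_p norm.
    (A simplified stand-in for the l_kp-theta norm used in the paper.)"""
    return k * (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)

def goodness_of_fit(pairs, actual, k=1.0, p=2.0):
    """Return the three criteria (AD, SD, NAD) for point pairs with
    known actual travel distances."""
    ad = sd = nad = 0.0
    for (origin, dest), d_act in zip(pairs, actual):
        err = predicted_distance(origin, dest, k, p) - d_act
        ad += abs(err)            # absolute deviations
        sd += err * err           # squared deviations
        nad += abs(err) / d_act   # normalized absolute deviations
    return ad, sd, nad

# Example: one origin-destination pair; actual road distance 5.5
# versus a Euclidean prediction of 5.0
ad, sd, nad = goodness_of_fit([((0, 0), (3, 4))], [5.5])
```

In practice the parameters k and p would be chosen to minimize one of these criteria over a sample of known distances.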
Comparison: Binomial model and Black Scholes model
Directory of Open Access Journals (Sweden)
Amir Ahmad Dar
2018-03-01
Full Text Available The Binomial model and the Black-Scholes model are popular methods used to solve option pricing problems. The Binomial model is a simple statistical method, while the Black-Scholes model requires the solution of a stochastic differential equation. Pricing European call and put options is a difficult task faced by actuaries. The main goal of this study is to compare the Binomial model and the Black-Scholes model using two statistical tests, the t-test and Tukey's test, at one period. The results showed that there is no significant difference between the means of the European option prices obtained with the two models.
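A minimal sketch of the two pricing methods compared above, using standard textbook formulas (a one-period Cox-Ross-Rubinstein tree and the closed-form Black-Scholes price); the parameter values are illustrative, not taken from the study:

```python
import math

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def binomial_call_1period(S, K, r, sigma, T):
    """One-period Cox-Ross-Rubinstein binomial price of a European call."""
    u = math.exp(sigma * math.sqrt(T))      # up factor
    d = 1.0 / u                             # down factor
    q = (math.exp(r * T) - d) / (u - d)     # risk-neutral up probability
    payoff_up = max(S * u - K, 0.0)
    payoff_dn = max(S * d - K, 0.0)
    return math.exp(-r * T) * (q * payoff_up + (1.0 - q) * payoff_dn)

# Illustrative at-the-money call: S=K=100, r=5%, sigma=20%, T=1 year
bs = bs_call(100, 100, 0.05, 0.2, 1.0)                # about 10.45
bi = binomial_call_1period(100, 100, 0.05, 0.2, 1.0)  # about 12.16
```

At a single period the binomial price can deviate noticeably from the Black-Scholes value; as the number of periods grows, the binomial price converges to the Black-Scholes price.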
Computational Modeling | Bioenergy | NREL
NREL uses computational modeling to study the properties of cell walls, which are the source of biofuels and biomaterials. Quantum mechanical models: NREL studies chemical and electronic properties and processes to reduce barriers.
Essays on model uncertainty in financial models
Li, Jing
2018-01-01
This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the
Vector models and generalized SYK models
Energy Technology Data Exchange (ETDEWEB)
Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)
2017-05-23
We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.
Modeling styles in business process modeling
Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.
2012-01-01
Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording
The IMACLIM model; Le modele IMACLIM
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-07-01
This document provides annexes to the IMACLIM model, which give an updated description of IMACLIM, a model for building a tool to evaluate greenhouse gas reduction policies. The model is described in a version coupled with POLES, a techno-economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)
From Product Models to Product State Models
DEFF Research Database (Denmark)
Larsen, Michael Holm
1999-01-01
A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...
Modelling live forensic acquisition
CSIR Research Space (South Africa)
Grobler, MM
2009-06-01
Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...
Models in architectural design
Pauwels, Pieter
2017-01-01
Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they now rely more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth are used daily by practitioners in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...
International Nuclear Information System (INIS)
Tozini, A.V.
1984-01-01
A review is made of some properties of rotating Universe models. Gödel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models have Gödel's model as a particular case. Non-stationary cosmological models are found which generalize Gödel's metric in a way analogous to how Friedmann's model generalizes Einstein's. (L.C.) [pt
Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes
Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.
2014-01-01
The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications for its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models are a better fit for Chl a—TP relationships than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl a asymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture fields had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a—TP relationship than wetland-rich regions. Interpretation of Chl a—TP relationships depends on regional differences, and theory and management based on a monolithic relationship may be inaccurate.
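The traditional log-linear baseline against which the nonlinear models above are compared can be sketched as an ordinary least-squares fit on log10-transformed data; the TP and Chl a values below are hypothetical, not from the study's 2105 lakes:

```python
import math

def fit_loglinear(tp, chl):
    """OLS fit of the traditional model log10(Chl a) = b0 + b1*log10(TP)."""
    x = [math.log10(v) for v in tp]
    y = [math.log10(v) for v in chl]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx       # slope: rate of Chl a increase with TP
    b0 = my - b1 * mx    # intercept
    return b0, b1

# Hypothetical TP and Chl a values (ug/L) lying exactly on
# log10(Chl) = -0.5 + 1.0*log10(TP); the fit recovers these coefficients.
tp = [10, 20, 50, 100, 200]
chl = [10 ** (-0.5 + math.log10(t)) for t in tp]
b0, b1 = fit_loglinear(tp, chl)
```

The paper's preferred nonlinear (sigmoidal) alternatives add upper and lower Chl a asymptotes that this straight-line form cannot represent, which is precisely why the nonlinear models fit better.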
Statistical method to compare massive parallel sequencing pipelines.
Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P
2017-03-01
Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
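The sensitivity and specificity figures reported above are standard confusion-matrix quantities. A minimal sketch of how one pipeline's calls could be scored against a gold standard; all positions and counts below are hypothetical, not the study's data:

```python
def sensitivity_specificity(gold, called, n_bases):
    """Score a pipeline's variant calls against a gold standard.

    gold, called: sets of variant positions; n_bases: total bases sequenced.
    """
    tp = len(gold & called)        # variants found by both
    fp = len(called - gold)        # called but absent from the gold standard
    fn = len(gold - called)        # gold-standard variants the pipeline missed
    tn = n_bases - tp - fp - fn    # remaining correctly called reference bases
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy example with hypothetical variant positions
gold = {101, 202, 303, 404}
called = {101, 202, 303, 505, 606}
sens, spec = sensitivity_specificity(gold, called, n_bases=1000)
```

The paper goes further than these per-pipeline scores: it fits log-linear models to the pairwise agreement counts, which also works when no gold standard is available.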
Concept Modeling vs. Data modeling in Practice
DEFF Research Database (Denmark)
Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
2015-01-01
This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal...... account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models....... We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from...
Model-to-model interface for multiscale materials modeling
Energy Technology Data Exchange (ETDEWEB)
Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)
2017-12-17
A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
Cognitive models embedded in system simulation models
International Nuclear Information System (INIS)
Siegel, A.I.; Wolf, J.J.
1982-01-01
If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context
Model Manipulation for End-User Modelers
DEFF Research Database (Denmark)
Acretoaie, Vlad
End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support... ...and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality. To simplify model editor...
Air Quality Dispersion Modeling - Alternative Models
Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.
Topological massive sigma models
International Nuclear Information System (INIS)
Lambert, N.D.
1995-01-01
In this paper we construct topological sigma models which include a potential and are related to twisted massive supersymmetric sigma models. Contrary to a previous construction these models have no central charge and do not require the manifold to admit a Killing vector. We use the topological massive sigma model constructed here to simplify the calculation of the observables. Lastly it is noted that this model can be viewed as interpolating between topological massless sigma models and topological Landau-Ginzburg models. ((orig.))
Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher
2014-01-01
The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...
[Bone remodeling and modeling/mini-modeling].
Hasegawa, Tomoka; Amizuka, Norio
Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone during the fetal and developmental stages, while bone remodeling, the replacement of old bone with new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) can induce mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts form over the newly-formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, are introduced.
A Model of Trusted Measurement Model
Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin
2017-01-01
A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a framework to illustrate the model is given. The model synthesizes three trusted measurement dimensions, namely trusted identity, trusted status and trusted behavior; satisfies the essential requirements of trusted measurement; and unifies the TCA with three entities and three levels.
Collett, David
2002-01-01
INTRODUCTION: Some Examples; The Scope of this Book; Use of Statistical Software.
STATISTICAL INFERENCE FOR BINARY DATA: The Binomial Distribution; Inference about the Success Probability; Comparison of Two Proportions; Comparison of Two or More Proportions.
MODELS FOR BINARY AND BINOMIAL DATA: Statistical Modelling; Linear Models; Methods of Estimation; Fitting Linear Models to Binomial Data; Models for Binomial Response Data; The Linear Logistic Model; Fitting the Linear Logistic Model to Binomial Data; Goodness of Fit of a Linear Logistic Model; Comparing Linear Logistic Models; Linear Trend in Proportions; Comparing Stimulus-Response Relationships; Non-Convergence and Overfitting; Some other Goodness of Fit Statistics; Strategy for Model Selection; Predicting a Binary Response Probability.
BIOASSAY AND SOME OTHER APPLICATIONS: The Tolerance Distribution; Estimating an Effective Dose; Relative Potency; Natural Response; Non-Linear Logistic Regression Models; Applications of the Complementary Log-Log Model.
MODEL CHECKING: Definition of Re...
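The "Fitting the Linear Logistic Model to Binomial Data" topic in the contents above can be illustrated with a minimal Newton-Raphson (iteratively reweighted least squares) sketch; the dose-response data below are synthetic, generated from known coefficients, not taken from the book:

```python
import math

def fit_logistic(x, n, y, iters=25):
    """Fit logit(p) = b0 + b1*x to binomial data (y successes in n trials)
    by Newton-Raphson, i.e. iteratively reweighted least squares."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # gradient of the log-likelihood
        h00 = h01 = h11 = 0.0    # entries of the (negated) Hessian
        for xi, ni, yi in zip(x, n, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            r = yi - ni * p               # raw residual
            w = ni * p * (1.0 - p)        # binomial weight
            g0 += r
            g1 += r * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # solve the 2x2 Newton system
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic data: successes out of 1000 trials at five dose levels,
# generated from logit(p) = -2 + 1*x (rounded to whole counts)
x = [0, 1, 2, 3, 4]
n = [1000] * 5
y = [119, 269, 500, 731, 881]
b0, b1 = fit_logistic(x, n, y)   # recovers approximately (-2, 1)
```

The same Newton scheme, with the appropriate link function, underlies the bioassay applications listed above, such as estimating an effective dose from a fitted tolerance distribution.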
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Divergence from factorizable distributions and matroid representations by partitions
Czech Academy of Sciences Publication Activity Database
Matúš, František
2009-01-01
Roč. 55, č. 12 (2009), s. 5375-5381 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100750603; GA ČR GA201/04/0393 Institutional research plan: CEZ:AV0Z10750506 Keywords : Information divergence * relative entropy * Shannon entropy * exponential family * hierarchical model * log-linear model * contingency table * Gibbs distribution * matroid representation * secret sharing scheme * maximum likelihood. Subject RIV: BA - General Mathematics Impact factor: 2.357, year: 2009 http://library.utia.cas.cz/separaty/2009/MTR/matus-divergence from factorizable distributions and matroid representations by partitions.pdf
Tavasszy, L.A.; Jong, G. de
2014-01-01
Freight Transport Modelling is a unique new reference book that provides insight into the state-of-the-art of freight modelling. Focusing on models used to support public transport policy analysis, Freight Transport Modelling systematically introduces the latest freight transport modelling
Semantic Business Process Modeling
Markovic, Ivan
2010-01-01
This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. Quality of the proposed modeling framework is evaluated based on the modeling content of SAP Solution Composer and several real-world business scenarios.
DEFF Research Database (Denmark)
Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik
1997-01-01
This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...
DEFF Research Database (Denmark)
Könemann, Patrick
just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...
Haiganoush Preisler; Alan Ager
2013-01-01
For applied mathematicians, forest fire models refer mainly to non-linear dynamic systems often used to simulate the spread of fire. For forest managers, forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...
Environmental Satellite Models for a Macroeconomic Model
International Nuclear Information System (INIS)
Moeller, F.; Grinderslev, D.; Werner, M.
2003-01-01
To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries
Ethnic differences in the time trend of female breast cancer incidence: Singapore, 1968 – 2002
Directory of Open Access Journals (Sweden)
Tan Chuen-Seng
2006-11-01
Full Text Available Abstract Background From 1968 to 2002, Singapore experienced an almost three-fold increase in breast cancer incidence. This increase appeared to differ across the three main ethnic groups: Chinese, Malays and Indians. This paper used age-period-cohort (APC) modelling to determine the effects of age at diagnosis, calendar period, and birth cohort on breast cancer incidence for each ethnic group. Methods This study included all breast cancer cases (n = 15,269) in the three ethnic groups, reported to the Singapore Cancer Registry from 1968 to 2002 between the ages of 25 and 79. Age-specific fertility rates from the Department of Statistics were used to explore the role of fertility. Results In the 1970s, Indian women had the highest age-standardized breast cancer incidence, but by the mid-1980s the highest rates were seen among the Chinese. Remarkable differences were seen in the age-specific incidence rates by ethnic group. After age 49, the incidence rates for the Chinese and Malays leveled off, whereas they continued to rise in the Indians. While our analyses provided some evidence that an age-drift model described the trend seen in the Indians, the age-period-cohort model and the age-cohort model had the best fit for the Chinese and the Malays aged 25 to 79, respectively. Overall, Chinese and Malay women born in later cohorts were at increased risk of developing breast cancer relative to their counterparts in the earlier cohorts. The three ethnic groups experienced similar changes in their fertility in the 1970s, which likely explained much of the increase in their breast cancer incidence but not the ethnic differences. There was a stronger inverse association between total fertility rate and pre-menopausal breast cancer incidence in the Chinese and Malays than in the Indians. Conclusion The observed dissimilarity among ethnic groups suggests ethnic differences in exposure or response to certain risk factors. It is likely that longer and subtler differences in
Geologic Framework Model Analysis Model Report
Energy Technology Data Exchange (ETDEWEB)
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Geologic Framework Model Analysis Model Report
International Nuclear Information System (INIS)
Clayton, R.
2000-01-01
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and
DEFF Research Database (Denmark)
De Giovanni, Domenico
2010-01-01
prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Teräsvirta, Timo
This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...
Collaborative networks: Reference modeling
Camarinha-Matos, L.M.; Afsarmanesh, H.
2008-01-01
Collaborative Networks: Reference Modeling works to establish a theoretical foundation for Collaborative Networks. Particular emphasis is put on modeling multiple facets of collaborative networks and establishing a comprehensive modeling framework that captures and structures diverse perspectives of
DEFF Research Database (Denmark)
Juhl, Joakim
This thesis is about mathematical modelling and technology development. While mathematical modelling has become widely deployed within a broad range of scientific practices, it has also gained a central position within technology development. The intersection of mathematical modelling and technol...
D'Souza, Austin
2013-01-01
Presentation given on 13 May 2013 at the meeting "Business Model Canvas Challenge Assen".
The Business Model Canvas was designed by Alex Osterwalder. The model is very clearly laid out and consists of nine building blocks.
CSIR Research Space (South Africa)
Osburn, L
2010-01-01
Full Text Available The construction industry has turned to energy modelling to assist in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...
Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...
Mathematical Modeling Using MATLAB
National Research Council Canada - National Science Library
Phillips, Donovan
1998-01-01
.... Mathematical Modeling Using MATLAB acts as a companion resource to A First Course in Mathematical Modeling with the goal of guiding the reader to a fuller understanding of the modeling process...
Analytic Modeling of Insurgencies
2014-08-01
Keywords: Counterinsurgency, Situational Awareness, Civilians, Lanchester. 1. Introduction. Combat modeling is one of the oldest areas of operations research, dating... Army. The ground-breaking work of Lanchester in 1916 [1] marks the beginning of formal models of conflicts, where mathematical formulas and, later... Warfare model [3], which is a Lanchester-based mathematical model (see more details about this model later on), and McCormick's Magic Diamond model [4
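Lanchester's aimed-fire equations mentioned above can be integrated numerically in a few lines; this is a sketch of the classical model only, not of the paper's counterinsurgency extensions, and the force sizes and kill rates are invented:

```python
def lanchester(red, blue, a, b, dt=0.001, t_max=10.0):
    """Euler integration of Lanchester's aimed-fire equations:
    dR/dt = -b * B,   dB/dt = -a * R   (a, b are per-capita kill rates)."""
    t = 0.0
    while t < t_max and red > 0 and blue > 0:
        red, blue = red - b * blue * dt, blue - a * red * dt
        t += dt
    return max(red, 0.0), max(blue, 0.0)

# Lanchester's square law: with equal effectiveness, survivors of the larger
# force approach sqrt(R0^2 - B0^2) = sqrt(100^2 - 50^2) ~ 86.6.
r_left, b_left = lanchester(red=100.0, blue=50.0, a=0.1, b=0.1)
```

The square-law invariant a·R² - b·B² is conserved along trajectories, which is why doubling a force quadruples its effective strength in this model.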
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...
Finch, W Holmes; Kelley, Ken
2014-01-01
A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
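The book demonstrates these models in R; as a rough, language-neutral sketch of the core idea behind a two-level random-intercept model (all data synthetic, variance components estimated by the one-way ANOVA method of moments rather than the likelihood machinery of lme4):

```python
import random

def simulate_two_level(n_groups, n_per_group, sigma_u, sigma_e, seed=7):
    """Simulate y_ij = 10 + u_j + e_ij with group random intercepts u_j ~ N(0, sigma_u^2)."""
    rng = random.Random(seed)
    groups = []
    for _ in range(n_groups):
        u = rng.gauss(0.0, sigma_u)
        groups.append([10.0 + u + rng.gauss(0.0, sigma_e) for _ in range(n_per_group)])
    return groups

def variance_components(groups):
    """Method-of-moments (one-way ANOVA) estimates of the within-group and
    between-group variance components, assuming a balanced design."""
    k, n = len(groups), len(groups[0])
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    msw = sum(sum((y - m) ** 2 for y in g) for g, m in zip(groups, means)) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    return msw, max((msb - msw) / n, 0.0)    # (sigma_e^2, sigma_u^2)

data = simulate_two_level(n_groups=200, n_per_group=20, sigma_u=2.0, sigma_e=1.0)
var_e, var_u = variance_components(data)
icc = var_u / (var_u + var_e)    # intraclass correlation, ~0.8 for these parameters
```

The intraclass correlation computed at the end is the standard diagnostic for whether a multilevel model is needed at all: when it is near zero, ordinary regression suffices.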
Cosmological models without singularities
International Nuclear Information System (INIS)
Petry, W.
1981-01-01
A previously studied theory of gravitation in flat space-time is applied to homogeneous and isotropic cosmological models. There exist two different classes of models without singularities: (i) ever-expanding models, (ii) oscillating models. The first class contains models with a hot big bang. For these models there exist at the beginning of the universe, in contrast to Einstein's theory, very high but finite densities of matter and radiation, with a big bang of very short duration. After a short time these models pass into the homogeneous and isotropic models of Einstein's theory with spatial curvature equal to zero and cosmological constant Λ ≥ 0. (author)
National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
International Nuclear Information System (INIS)
Clinton Lum
2002-01-01
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997); WA-0358, ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M and O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M and O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4
Integrated Site Model Process Model Report
International Nuclear Information System (INIS)
Booth, T.
2000-01-01
The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM
ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL
Климак, М.С.; Войтко, С.В.
2016-01-01
This paper considers theoretical and applied aspects of developing simulation models to predict the optimal development of production systems that create tangible products and services. It is argued that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stocks control is presented that supports management decisions in production logistics.
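A stocks-control simulation of the kind described can be sketched as a periodic-review (s, S) policy under random demand; all parameter values below are hypothetical:

```python
import random

def simulate_sS(s, S, demand_mean, periods, seed=3):
    """Periodic-review (s, S) stocks-control policy: after demand is served,
    reorder up to S whenever the stock level falls to s or below."""
    rng = random.Random(seed)
    stock, stockouts, orders = S, 0, 0
    for _ in range(periods):
        demand = rng.randint(0, 2 * demand_mean)   # uniform demand with mean demand_mean
        if demand > stock:
            stockouts += 1
            stock = 0
        else:
            stock -= demand
        if stock <= s:                             # review point: replenish up to S
            orders += 1
            stock = S
    return stockouts, orders

stockouts, orders = simulate_sS(s=20, S=100, demand_mean=10, periods=1000)
service_level = 1 - stockouts / 1000
```

With the reorder point s set above the maximum single-period demand, this particular configuration never stocks out; lowering s or raising demand variability trades ordering cost against service level, which is exactly the decision such simulation models support.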
On the radiogenic heat production of igneous rocks
Directory of Open Access Journals (Sweden)
D. Hasterok
2017-09-01
Full Text Available Radiogenic heat production is a physical parameter crucial to properly estimating lithospheric temperatures and properly understanding processes related to the thermal evolution of the Earth. Yet heat production is, in general, poorly constrained by direct observation because the key radiogenic elements exist in trace amounts, making them difficult to image geophysically. In this study, we advance our knowledge of heat production throughout the lithosphere by analyzing chemical analyses of 108,103 igneous rocks provided by a number of geochemical databases. We produce global estimates of the average and natural range for igneous rocks using common chemical classification systems. Heat production increases as a function of increasing felsic and alkali content, with similar values for analogous plutonic and volcanic rocks. The logarithm of median heat production is negatively correlated (r² = 0.98) with compositionally-based estimates of seismic velocities between 6.0 and 7.4 km s−1, consistent with the vast majority of igneous rock compositions. Compositional variations for continent-wide models are also well-described by a log-linear correlation between heat production and seismic velocity. However, there are differences between the log-linear models for North America and Australia that are consistent with interpretations from previous studies suggesting above-average heat production across much of Australia. Similar log-linear models also perform well within individual geological provinces with ∼1000 samples. This correlation raises the prospect that this empirical method can be used to estimate average heat production and natural variance both laterally and vertically throughout the lithosphere. This correlative relationship occurs despite the lack of a direct causal relationship between these two parameters and probably arises from the process of differentiation through melting and crystallization.
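The log-linear correlation between heat production and seismic velocity can be sketched as an ordinary least-squares fit on synthetic data; the slope, intercept and noise level below are invented for illustration and are not the paper's estimates:

```python
import random

# Synthetic data: log10 heat production falling linearly with P-wave velocity
# over the 6.0-7.4 km/s range discussed in the study (coefficients assumed).
rng = random.Random(0)
vp = [6.0 + 1.4 * rng.random() for _ in range(500)]
log_hp = [9.0 - 1.3 * v + rng.gauss(0.0, 0.1) for v in vp]

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

a, b = fit_line(vp, log_hp)
pred = [a + b * v for v in vp]
mean_y = sum(log_hp) / len(log_hp)
r2 = 1 - sum((y - p) ** 2 for y, p in zip(log_hp, pred)) / \
        sum((y - mean_y) ** 2 for y in log_hp)
```

Fitting in log space is what makes the relationship linear; predicting heat production at a new velocity then requires exponentiating, 10**(a + b*vp).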
Modelling bankruptcy prediction models in Slovak companies
Directory of Open Access Journals (Sweden)
Kovacova Maria
2017-01-01
Full Text Available Intensive research from academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forest) for predicting bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of eliminating selected variables on their overall prediction ability.
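As a minimal sketch of the statistical side of such models, a single-feature logistic regression can be fitted by batch gradient descent; the data are synthetic and the "solvency ratio" predictor is hypothetical, whereas real bankruptcy models use many financial ratios:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Single-feature logistic regression fitted by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(sigmoid(w * x + b) - y for x, y in zip(xs, ys)) / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Hypothetical predictor: low solvency-ratio values indicate bankruptcy (y = 1).
rng = random.Random(5)
xs = [rng.gauss(-1.0, 1.0) for _ in range(200)] + [rng.gauss(1.0, 1.0) for _ in range(200)]
ys = [1] * 200 + [0] * 200
w, b = fit_logistic(xs, ys)
accuracy = sum((sigmoid(w * x + b) > 0.5) == (y == 1)
               for x, y in zip(xs, ys)) / len(ys)
```

Ensemble methods such as bagging and random forest improve on this baseline mainly by capturing non-linear interactions between ratios, which a single linear score cannot.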
Better models are more effectively connected models
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity
Generalized latent variable modeling multilevel, longitudinal, and structural equation models
Skrondal, Anders; Rabe-Hesketh, Sophia
2004-01-01
This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.
Energy Technology Data Exchange (ETDEWEB)
D. W. Wu
2003-07-16
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
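The ERMYN model's core output is an annual dose; as a purely illustrative sketch of the ingestion-pathway arithmetic (all radionuclide concentrations, intake rates and dose coefficients below are assumed values for illustration, not ERMYN parameters), the calculation has this shape:

```python
# All numbers are assumed for illustration; they are not ERMYN parameter values.
dose_coeff_sv_per_bq = {"I-129": 1.1e-7, "Np-237": 1.1e-7}   # ingestion dose coefficients
conc_bq_per_l = {"I-129": 0.05, "Np-237": 0.02}              # groundwater concentrations
annual_intake_l = 730.0                                      # ~2 L of drinking water per day

def annual_ingestion_dose(conc, intake_l, dcf):
    """Annual dose (Sv) per radionuclide = concentration x intake x dose coefficient."""
    return {nuclide: conc[nuclide] * intake_l * dcf[nuclide] for nuclide in conc}

doses = annual_ingestion_dose(conc_bq_per_l, annual_intake_l, dose_coeff_sv_per_bq)
total_dose_sv = sum(doses.values())
```

The full biosphere model sums many such pathway terms (ingestion of crops and animal products, inhalation, external exposure), each with its own transfer factors and uncertainty distributions, which is why the report implements it in stochastic simulation software.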
Energy Technology Data Exchange (ETDEWEB)
M. A. Wasiolek
2003-10-27
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
International Nuclear Information System (INIS)
D. W. Wu
2003-01-01
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by ISHAM [6].
DEFF Research Database (Denmark)
Ayres, Phil
2012-01-01
This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...
Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.
2017-07-01
HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded in an unbounded soil domain. This technical report presents the steps of establishing a lumped-parameter model. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
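As an illustrative sketch (parameter values assumed, not taken from the report), the simplest lumped-parameter model is a single mass-spring-dashpot whose frequency-dependent behaviour is captured by a complex dynamic stiffness:

```python
import math

def dynamic_stiffness(k, c, m, omega):
    """Complex dynamic stiffness S(omega) = k - m*omega^2 + i*c*omega of a
    single-degree-of-freedom mass-spring-dashpot lumped-parameter model."""
    return complex(k - m * omega * omega, c * omega)

# Hypothetical foundation parameters: static stiffness k, dashpot c, virtual soil mass m.
k, c, m = 1.0e8, 5.0e6, 2.0e5
static = dynamic_stiffness(k, c, m, 0.0)        # reduces to the static stiffness k
omega_0 = math.sqrt(k / m)                      # frequency where the real part vanishes
at_resonance = dynamic_stiffness(k, c, m, omega_0)
```

Advanced lumped-parameter models chain several such spring-dashpot-mass units so that the rational function of frequency they produce can be fitted to the exact impedance of the unbounded soil.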
DEFF Research Database (Denmark)
Larsen, Bjarke Alexander; Andkjær, Kasper Ingdahl; Schoenau-Fog, Henrik
2015-01-01
This paper proposes a new relation model, called "The Moody Mask model", for Interactive Digital Storytelling (IDS), based on Francesco Osborne's "Mask Model" from 2011. This, mixed with some elements from Chris Crawford's Personality Models, is a system designed for dynamic interaction between ch...
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; in similarity with e.g. the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
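The normalization step mentioned above can be illustrated by numerically integrating the simplest case, a Lambertian BRDF, over the hemisphere; the cosine-weighted integral should recover the directional-hemispherical reflectance (a sketch of the general idea, not of the authors' generalized model):

```python
import math

def lambertian_dhr(rho, n_theta=200, n_phi=200):
    """Midpoint-rule integration of a Lambertian BRDF (f = rho/pi) weighted by
    cos(theta) over the hemisphere; the DHR should recover the reflectance rho."""
    f = rho / math.pi
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta
        # Integrand is independent of phi for a Lambertian surface,
        # so the inner loop collapses to a factor of n_phi.
        total += n_phi * f * math.cos(theta) * math.sin(theta) * d_theta * d_phi
    return total

dhr = lambertian_dhr(0.6)   # should be close to 0.6
```

For non-analytic BRDFs, such as the generalized Gaussian/Fresnel component, the same cosine-weighted quadrature is what keeps the model energy-conserving.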
International Nuclear Information System (INIS)
Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.
1994-05-01
The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid
International Nuclear Information System (INIS)
Ogava, S.; Savada, S.; Nakagava, M.
1983-01-01
Composite models of hadrons are considered. The main attention is paid to the Sakata model. In the framework of the model it is presupposed that the proton, neutron and Λ particle are the fundamental particles. Theoretical studies of the unknown fundamental constituents of matter have led to the creation of the quark model. In the framework of the quark model, using the theory of SU(6) symmetry, the classification of mesons and baryons is considered. Using the quark model, relations between hadron masses, their spins and electromagnetic properties are explained. The problem of the three-colour model with many flavours is briefly presented
Modeller af komplicerede systemer
DEFF Research Database (Denmark)
Mortensen, J.
This thesis, "Modeller af komplicerede systemer" (Models of complicated systems), represents part of the requirements for the Danish Ph.D. degree. Assisting professor John Nørgaard-Nielsen, M.Sc.E.E., Ph.D., has been principal supervisor and professor Morten Lind, M.Sc.E.E., Ph.D., has been assisting supervisor. The thesis is concerned with conceptual modeling in relation to process control. Its purpose is to present, classify and exemplify the use of a set of qualitative model types. Such model types are useful in the early phase of modeling, where no structured methods are at hand. Although the models are general in character, this thesis emphasizes their use in relation to technical systems. All the presented models, with the exception of the types presented in chapter 2, are non-theoretical non-formal conceptual network models. Two new model types are presented: 1) The System-Environment model, which describes the environment's interaction...
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
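A minimal sketch of the autoregressive side of this comparison, simulating a scalar AR(1) and recovering its coefficient by least squares (the paper itself treats multivariate and dynamic factor models):

```python
import random

def simulate_ar1(phi, n, seed=11):
    """Simulate the scalar autoregressive model x_t = phi * x_{t-1} + e_t."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def estimate_phi(x):
    """Least-squares estimate of phi from regressing x_t on x_{t-1}."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    return num / den

series = simulate_ar1(phi=0.7, n=5000)
phi_hat = estimate_phi(series)   # should be close to 0.7
```

In the vector case the same lag-regression idea yields the coefficient matrices of a VAR, the model class whose standard and structural forms the article reinterprets as extremes of a hybrid family.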
DEFF Research Database (Denmark)
Justesen, Lise; Overgaard, Svend Skafte
2017-01-01
This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices, as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored...
Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P
2008-01-01
Table of contents: Introduction and Examples (Introduction; Examples of data sets); Basic Model Fitting (Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models); Function Optimisation (Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach); Basic Likelihood Tools (Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias); General Principles (Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems); Simulation Techniques (Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...)
International Nuclear Information System (INIS)
Ahlers, C.F.; Liu, H.H.
2001-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions
International Nuclear Information System (INIS)
Ahlers, C.; Liu, H.
2000-01-01
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00'' (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions
Business Models and Business Model Innovation
DEFF Research Database (Denmark)
Foss, Nicolai J.; Saebi, Tina
2018-01-01
While research on business models and business model innovation continues to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...
Wake modelling combining mesoscale and microscale models
DEFF Research Database (Denmark)
Badger, Jake; Volker, Patrick; Prospathospoulos, J.
2013-01-01
In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake paramet...
Introduction to Adjoint Models
Errico, Ronald M.
2015-01-01
In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
Zagorsek, Branislav
2013-01-01
A business model describes the company’s most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to capture and describe the most important components of the business model simply and systematically, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre
2005-01-01
parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the NB model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....
Geochemistry Model Validation Report: External Accumulation Model
International Nuclear Information System (INIS)
Zarrabi, K.
2001-01-01
The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation
Pavement Aging Model by Response Surface Modeling
Directory of Open Access Journals (Sweden)
Manzano-Ramírez A.
2011-10-01
Full Text Available In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL), and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were only nearly adequate, with an error of 20% that was associated with the other environmental factors, which were not considered at the beginning of the research.
Modelling of a homogeneous equilibrium mixture model
International Nuclear Information System (INIS)
Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Ghidaglia, J.M.
2014-01-01
We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard mixture models with 4 equations (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
Variational approach for spatial point process intensity estimation
DEFF Research Database (Denmark)
Coeurjolly, Jean-Francois; Møller, Jesper
is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings where z is completely or only partially known....
Han PHOUMIN; Shigeru KIMURA
2014-01-01
This study uses time series data from selected ASEAN and East Asia countries to investigate the patterns of price and income elasticity of energy demand. Applying a dynamic log-linear energy demand model, both short-run and long-run price and income elasticities were estimated by country. The study uses three types of dependent variable for “energy demand”: total primary energy consumption (TPES), total final energy consumption (TFEC) and total final oil consumption (TFOC), to regress on its ...
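In a log-linear demand specification of the kind this abstract describes, the fitted coefficients on log price and log income are themselves the price and income elasticities. A minimal sketch with synthetic data (the elasticity values, sample size, and variable names are illustrative assumptions, not figures from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
ln_price = rng.normal(0.0, 0.3, n)
ln_income = rng.normal(0.0, 0.5, n)
# Synthetic demand generated with known elasticities: price -0.4, income 0.9.
ln_demand = 2.0 - 0.4 * ln_price + 0.9 * ln_income + rng.normal(0.0, 0.05, n)

# OLS on the log-linear model: ln(E) = a + b*ln(P) + c*ln(Y).
X = np.column_stack([np.ones(n), ln_price, ln_income])
beta, *_ = np.linalg.lstsq(X, ln_demand, rcond=None)
price_elasticity, income_elasticity = beta[1], beta[2]
```

Because the regression is in logs, `price_elasticity` and `income_elasticity` recover the generating values (-0.4 and 0.9) directly; a dynamic version, as in the study, would add a lagged dependent variable to separate short-run from long-run elasticities.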
Habit, Production, and the Cross-Section of Stock Returns
Chen, Andrew Y.
2014-01-01
Solutions to the equity premium puzzle should inform us about the cross-section of stock returns. An external habit model with heterogeneous firms reproduces numerous stylized facts about both the equity premium and the value premium. The equity premium is large, time-varying, and linked with consumption volatility. The cross-section of expected returns is log-linear in B/M, and the slope matches the data. The explanation for the value premium lies in the interaction between the cross-section...
DEFF Research Database (Denmark)
Chovanec, Josef; Selingerova, Iveta; Greplova, Kristina
2017-01-01
based on single-institution data from 120 EC patients and validated against multicentric data from 379 EC patients. Results: In non-cancer individuals, serum HE4 levels increase log-linearly with reduced glomerular filtration below eGFR = 90 ml/min/1.73 m2. HE4ren, adjusting HE4 serum levels to decreased e...... levels to reduced eGFR that enables quantification of time-dependent changes in HE4 production and elimination irrespective of age and renal function in women. Utilizing HE4ren improves performance of biomarker-based models for prediction of dMI in endometrial cancer patients.
Model Validation Status Review
International Nuclear Information System (INIS)
E.L. Hardin
2001-01-01
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and
Modeling for Battery Prognostics
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicles (be it unmanned aerial vehicles, small passenger aircraft, or assets in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles. Electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in the developed model, and as a result of such approximations cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence are even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures effects of aging, is computationally efficient
DEFF Research Database (Denmark)
Cameron, Ian T.; Gani, Rafiqul
. These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety......This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models...... to biotechnology applications, food, polymer and human health application areas. The book highlights to important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....
DEFF Research Database (Denmark)
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these cri......Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four...... the basic problems in determining the dimension of linear models. Then each of the eight measures are treated. The results are illustrated by examples....
Model Validation Status Review
Energy Technology Data Exchange (ETDEWEB)
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and
Modeling volatility using state space models.
Timmer, J; Weigend, A S
1997-08-01
In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
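The core idea of this abstract, a latent process observed through additional measurement noise, can be sketched with a minimal local-level state space model and a scalar Kalman filter applied to log squared returns. The noise variances, the random-walk log-volatility, and the transformation below are illustrative assumptions for a toy example, not the authors' exact specification.

```python
import numpy as np

def kalman_local_level(y, q, r):
    """Scalar Kalman filter for a local-level state space model:
    x_t = x_{t-1} + w_t   (dynamic noise, variance q)
    y_t = x_t     + v_t   (observational noise, variance r)
    Returns the filtered state estimates."""
    x, P = float(np.mean(y)), 10.0  # diffuse-ish start at the sample mean
    out = []
    for obs in y:
        P = P + q                   # predict: state variance grows by q
        K = P / (P + r)             # Kalman gain
        x = x + K * (obs - x)       # update with the new observation
        P = (1.0 - K) * P
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_logvol = np.cumsum(rng.normal(0, 0.05, 500))      # latent log-volatility random walk
returns = np.exp(true_logvol) * rng.normal(0, 1, 500)  # returns scaled by volatility
y = np.log(returns**2 + 1e-12)                         # noisy observation of 2*logvol + const
# Observational noise of log chi-square(1) has variance pi^2/2; state step variance (2*0.05)^2.
est = kalman_local_level(y, q=(2 * 0.05) ** 2, r=np.pi**2 / 2)
```

The filtered series `est` is far smoother than the raw log squared returns, which is exactly the separation of dynamic from observational noise that the abstract argues plain autoregressive models cannot make.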
Empirical Model Building Data, Models, and Reality
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
Modeling Guru: Knowledge Base for NASA Modelers
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the
Models for Dynamic Applications
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina
2011-01-01
This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor...... be applied to formulate, analyse and solve these dynamic problems and how in the case of the fuel cell problem the model consists of coupled meso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment.
Geller, Michael; Telem, Ofri
2015-05-15
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.
International Nuclear Information System (INIS)
Harvey, M.; Khanna, F.C.
1975-01-01
The general problem of what constitutes a physical model and what is known about the free nucleon-nucleon interaction are considered. A time independent formulation of the basic equations is chosen. Construction of the average field in which particles move in a general independent particle model is developed, concentrating on problems of defining the average spherical single particle field for any given nucleus, and methods for construction of effective residual interactions and other physical operators. Deformed shell models and both spherical and deformed harmonic oscillator models are discussed in detail, and connections between spherical and deformed shell models are analyzed. A section on cluster models is included. 11 tables, 21 figures
Geller, Michael; Telem, Ofri
2015-05-01
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at mKK , naturally allowing for mKK beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.
Directory of Open Access Journals (Sweden)
Luiz Carlos Bresser-Pereira
2012-03-01
Full Text Available Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classifications that take a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2011-01-01
description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice......The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2008-01-01
description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further......The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...
Microsoft tabular modeling cookbook
Braak, Paul te
2013-01-01
This book follows a cookbook style, with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users, as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models, or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.
Energy Technology Data Exchange (ETDEWEB)
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
International Nuclear Information System (INIS)
D.W. Wu; A.J. Smith
2004-01-01
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)
Modelling of Innovation Diffusion
Directory of Open Access Journals (Sweden)
Arkadiusz Kijek
2010-01-01
Full Text Available Since the publication of the Bass model in 1969, research on the modelling of the diffusion of innovation has resulted in a vast body of scientific literature consisting of articles, books, and studies of real-world applications of this model. The main objective of the diffusion model is to describe a pattern of spread of innovation among potential adopters in terms of a mathematical function of time. This paper assesses the state of the art in mathematical models of innovation diffusion and procedures for estimating their parameters. Moreover, theoretical issues related to the models presented are supplemented with empirical research. The purpose of the research is to explore the extent to which the diffusion of broadband Internet users in 29 OECD countries can be adequately described by three diffusion models, i.e. the Bass model, the logistic model and the dynamic model. The results of this research are ambiguous and do not indicate which model best describes the diffusion pattern of broadband Internet users; in terms of the results presented, however, in most cases the dynamic model is inappropriate for describing the diffusion pattern. Issues related to the further development of innovation diffusion models are discussed and some recommendations are given. (original abstract)
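For reference, the Bass model named in this abstract has a closed-form cumulative adoption fraction, F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p) e^{-(p+q)t}), driven by an innovation coefficient p and an imitation coefficient q. A minimal sketch (the values of p, q, and the market potential m below are illustrative, not estimates from the OECD broadband data):

```python
import numpy as np

def bass_adoptions(p, q, m, t):
    """Cumulative Bass-model adoptions m*F(t) at times t.

    p: coefficient of innovation, q: coefficient of imitation,
    m: market potential. All parameter values here are illustrative."""
    e = np.exp(-(p + q) * t)
    f = (1.0 - e) / (1.0 + (q / p) * e)  # closed-form cumulative fraction F(t)
    return m * f

t = np.arange(0, 11)                              # periods 0..10
cum = bass_adoptions(p=0.03, q=0.38, m=100.0, t=t)
new = np.diff(cum)                                # per-period adoptions
```

In practice the parameters p, q, m are what the estimation procedures surveyed in the paper recover from observed adoption series, e.g. by nonlinear least squares on `cum`.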
Nonlinear Modeling by Assembling Piecewise Linear Models
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. Our use of piecewise linear local solutions was inspired by the work of Rewienski and White. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
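The assembly idea can be illustrated with a one-dimensional toy analogue: local first-order Taylor expansions about a set of sampling states, blended with normalized Gaussian radial-basis weights. The RBF form, its width, and the test function are assumptions for illustration, not the aerodynamic formulation of the paper.

```python
import numpy as np

def rbf_blend(x, centers, f, df, width=0.2):
    """Blend first-order Taylor expansions about sampling states
    using normalized Gaussian radial-basis-function weights."""
    w = np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))
    w /= w.sum()                                   # normalized nonlinear weights
    local = f(centers) + df(centers) * (x - centers)  # piecewise linear local models
    return float(np.dot(w, local))

# Toy full-order "system": f(x) = sin(x), with known derivative.
f, df = np.sin, np.cos
centers = np.linspace(0.0, np.pi, 9)   # sampling states
xs = np.linspace(0.0, np.pi, 50)
approx = np.array([rbf_blend(x, centers, f, df) for x in xs])
```

Each local model is exact at its own sampling state, and the RBF weights hand off smoothly between neighbors, which is what lets the assembled surrogate retain nonlinearity that a single linearization would lose.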
Integrated Medical Model – Chest Injury Model
National Aeronautics and Space Administration — The Exploration Medical Capability (ExMC) Element of NASA's Human Research Program (HRP) developed the Integrated Medical Model (IMM) to forecast the resources...
Traffic & safety statewide model and GIS modeling.
2012-07-01
Several steps have been taken over the past two years to advance the Utah Department of Transportation (UDOT) safety initiative. Previous research projects began the development of a hierarchical Bayesian model to analyze crashes on Utah roadways. De...
OPEC model : adjustment or new model
International Nuclear Information System (INIS)
Ayoub, A.
1994-01-01
Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes, and do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil-producing countries. (Author)
Solid Waste Projection Model: Model user's guide
International Nuclear Information System (INIS)
Stiles, D.L.; Crow, V.L.
1990-08-01
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab
Emissions Modeling Clearinghouse
U.S. Environmental Protection Agency — The Emissions Modeling Clearinghouse (EMCH) supports and promotes emissions modeling activities both internal and external to the EPA. Through this site, the EPA...
Radiobiological cell survival models
International Nuclear Information System (INIS)
Zackrisson, B.
1992-01-01
A central issue in clinical radiobiological research is the prediction of responses to different radiation qualities. The choice of cell survival and dose-response model greatly influences the results. In this context the relationship between theory and model is emphasized. Generally, the interpretations of experimental data depend on the model. Cell survival models are systematized with respect to their relations to radiobiological theories of cell kill. The growing knowledge of biological, physical, and chemical mechanisms is reflected in the formulation of new models. The present overview shows that recent modelling has been more oriented towards the stochastic fluctuations connected to radiation energy deposition. This implies that the traditional cell survival models ought to be complemented by models of stochastic energy deposition processes and repair processes at the intracellular level. (orig.)
Pruneau, Diane; Chouinard, Omer; Arsenault, Charline
1998-01-01
Reports on a model of environmental education that aims to encourage greater attachment to the bioregion of Acadia. The model results from cooperation within a village community and addresses the environmental education of people of all ages. (DDR)
National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...
International Nuclear Information System (INIS)
Pulkkinen, U.
2004-04-01
The report describes a simple comparison of two CCF models, the ECLM and the Beta-model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF models in a real PSA context (e.g. the data interpretation, properties of computer tools, the model documentation) are not discussed in the report. Similarly, the qualitative CCF analyses needed in using the models are not discussed in the report. (au)
2014-01-01
This study developed a new snow model and a database which warehouses geometric, weather and traffic data on New Jersey highways. The complexity of the model development lies in considering variable road width, different spreading/plowing pattern...
International Nuclear Information System (INIS)
Rahm, L.; Nyberg, L.; Gidhagen, L.
1990-01-01
A dispersion model to be used off coastal waters has been developed. The model has been applied to describe the migration of radionuclides in the Baltic Sea. A summary of the results is presented here. (K.A.E)
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Laboratory of Biological Modeling
Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to a...
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
Full Text Available In this paper we analyze the complexity of time limits, which appear especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfying results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
Modeling Philosophies and Applications
All models begin with a framework and a set of assumptions and limitations that go along with that framework. In terms of fracing and RA, there are several places where models and parameters must be chosen to complete hazard identification.
Bounding species distribution models
Directory of Open Access Journals (Sweden)
Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE
2011-10-01
Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
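The "clamping" alteration described above — bounding extrapolations to the minimum and maximum of the primary environmental predictors — amounts to a per-column clip. The sketch below is a minimal illustration; the predictor names and values are hypothetical, not data from the study.

```python
import numpy as np

def clamp_predictors(grid, train):
    """Clamp each environmental predictor (column) of a prediction grid
    to the [min, max] range observed in the training data, so the fitted
    model is never evaluated outside the sampled environmental bounds."""
    lo = train.min(axis=0)
    hi = train.max(axis=0)
    return np.clip(grid, lo, hi)

# Hypothetical predictors: temperature (deg C), annual rainfall (mm).
train = np.array([[10.0, 200.0],
                  [25.0, 800.0]])
grid = np.array([[5.0, 900.0],    # colder and wetter than anything sampled
                 [30.0, 100.0]])  # hotter and drier than anything sampled
bounded = clamp_predictors(grid, train)  # -> [[10., 800.], [25., 200.]]
```

The bounded grid is then fed to the fitted CART or Maxent model in place of the raw grid, which is what distinguishes a bounded from an unbounded extrapolation map.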
Bounding Species Distribution Models
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
Modelling of wastewater systems
DEFF Research Database (Denmark)
Bechmann, Henrik
In this thesis, models of pollution fluxes in the inlet to 2 Danish wastewater treatment plants (WWTPs) as well as of suspended solids (SS) concentrations in the aeration tanks of an alternating WWTP and in the effluent from the aeration tanks are developed. The latter model is furthermore used to analyze and quantify the effect of the Aeration Tank Settling (ATS) operating mode, which is used during rain events. Furthermore, the model is used to propose a control algorithm for the phase lengths during ATS operation. The models are mainly formulated as state space models in continuous time ... at modelling the fluxes in terms of the multiple correlation coefficient R2. The model of the SS concentrations in the aeration tanks of an alternating WWTP as well as in the effluent from the aeration tanks is a mass balance model based on measurements of SS in one aeration tank and in the common outlet...
DEFF Research Database (Denmark)
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Modeling EERE deployment programs
Energy Technology Data Exchange (ETDEWEB)
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
Modelling arithmetic operations
Energy Technology Data Exchange (ETDEWEB)
Shabanov-kushnarenk, Yu P
1981-01-01
The possibility of modelling finite alphabetic operators using formal intelligence theory is explored, with the construction of models of a 3-digit adder and a multidigit subtractor as examples. 2 references.
International Nuclear Information System (INIS)
Tashiro, Tohru
2014-01-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
2006-01-01
This is the version 1.1 of the TENCompetence Domain Model (version 1.0 released at 19-6-2006; version 1.1 at 9-11-2008). It contains several files: a) a pdf with the model description, b) three jpg files with class models (also in the pdf), c) a MagicDraw zip file with the model itself, d) a release
Optimization modeling with spreadsheets
Baker, Kenneth R
2015-01-01
An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il
Model Checking Feature Interactions
DEFF Research Database (Denmark)
Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas
2015-01-01
This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.
International Nuclear Information System (INIS)
Cheney, J.A.
1981-01-01
The problem of satisfying similarity between a physical model and the prototype in rock, wherein fissures and cracks play a role in physical behavior, is explored. The need for models of large physical dimensions is explained, but testing of models of the same prototype over a wide range of scales is also needed to ascertain the influence of lack of similitude of particular parameters between prototype and model. A large capacity centrifuge would be useful in that respect.
Dorofeenko, Victor; Lee, Gabriel; Salyer, Kevin; Strobel, Johannes
2016-01-01
Within the context of a financial accelerator model, we model time-varying uncertainty (i.e. risk shocks) through the use of a mixture normal model with time variation in the weights applied to the underlying distributions characterizing entrepreneur productivity. Specifically, we model capital producers (i.e. the entrepreneurs) as either low-risk (relatively small second moment for productivity) or high-risk (relatively large second moment for productivity), and the fraction of both types is...
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
Modelling of corrosion cracking of reinforced concrete structures is complicated, as a great number of uncertain factors are involved. To get a reliable modelling, a physical and mechanical understanding of the process behind corrosion is needed.
GARCH Modelling of Cryptocurrencies
Jeffrey Chu; Stephen Chan; Saralees Nadarajah; Joerg Osterrieder
2017-01-01
With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
GARCH Modelling of Cryptocurrencies
Directory of Open Access Journals (Sweden)
Jeffrey Chu
2017-10-01
Full Text Available With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
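The GARCH(1,1) specification at the core of the twelve models fitted in the paper can be sketched with a short simulation. The parameter values below are illustrative assumptions, not estimates for any cryptocurrency; the sketch only shows the mechanics of conditional variance responding to past squared returns.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate n returns from a GARCH(1,1) process:
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    with standard normal innovations."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    r = np.empty(n)
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

# Illustrative parameters; stationarity requires alpha + beta < 1.
r = simulate_garch11(omega=0.1, alpha=0.1, beta=0.8, n=100_000)
# The sample variance should hover near omega / (1 - alpha - beta) = 1.0,
# and squared returns should be positively autocorrelated (volatility clustering).
```

Fitting such a model to real cryptocurrency returns is usually done by maximizing the conditional likelihood rather than by simulation; the simulation is only a check that the recursion behaves as the theory predicts.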
Artificial neural network modelling
Samarasinghe, Sandhya
2016-01-01
This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.
Differential models in ecology
International Nuclear Information System (INIS)
Barco Gomez, Carlos; Barco Gomez, German
2002-01-01
Mathematical models written as differential equations are used to describe the behavior of animal populations through time. These models can be linear or nonlinear. The differential models for a single species include the exponential model of Malthus and the logistic model of Verhulst. The linear differential models describing the interaction between two species include competition, predation and symbiosis relationships.
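The two single-species models mentioned above are dN/dt = rN (Malthus, exponential) and dN/dt = rN(1 − N/K) (Verhulst, logistic); the Malthusian model is the K → ∞ limit of the logistic one. A minimal Euler integration with illustrative parameters shows the logistic solution saturating at the carrying capacity K.

```python
def logistic_growth(n0, r, K, t_end, dt=0.01):
    """Euler integration of the Verhulst logistic equation dN/dt = r*N*(1 - N/K)."""
    n = n0
    for _ in range(round(t_end / dt)):
        n += dt * r * n * (1.0 - n / K)
    return n

# Illustrative parameters: 10 individuals, growth rate 0.5, capacity 1000.
n_final = logistic_growth(n0=10.0, r=0.5, K=1000.0, t_end=40.0)
# After a long time the population approaches (but does not exceed) K.
```

For small N/K the two models agree (growth is nearly exponential); the nonlinear term only matters as the population approaches K, which is exactly the regime where the Malthusian model fails.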
Competing through business models
Casadesus-Masanell, Ramon; Ricart, Joan E.
2007-01-01
In this article a business model is defined as the firm's choices on policies, assets and governance structure of those policies and assets, together with their consequences, be they flexible or rigid. We also provide a way to represent such business models to highlight the dynamic loops and to facilitate understanding of interaction with other business models. Furthermore, we develop some tests to evaluate the goodness of a business model both in isolation as well as in interaction with other bus...
Petrone, Giovanni; Spagnuolo, Giovanni
2016-01-01
This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.
Model description and evaluation of model performance: DOSDIM model
International Nuclear Information System (INIS)
Lewyckyj, N.; Zeevaert, T.
1996-01-01
DOSDIM was developed to assess the impact to man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, in contrast to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
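First-order compartmental transfer of the kind described above has a generic form that can be sketched directly; the two-compartment chain and the coefficients below are illustrative assumptions, not the DOSDIM compartment structure or parameter set.

```python
def two_compartment(a0, k12, k2out, t_end, dt=0.001):
    """Euler integration of a two-compartment chain with first-order
    transfer coefficients:
        dA/dt = -k12 * A
        dB/dt =  k12 * A - k2out * B
    A leaks into B, and B leaks out of the system."""
    a, b = a0, 0.0
    for _ in range(round(t_end / dt)):
        da = -k12 * a
        db = k12 * a - k2out * b
        a += dt * da
        b += dt * db
    return a, b

# Hypothetical coefficients (per unit time) and unit initial inventory in A.
a, b = two_compartment(a0=1.0, k12=0.3, k2out=0.1, t_end=5.0)
```

Because the equations are linear with constant coefficients, this system also has a closed-form solution (A decays as exp(−k12·t)), which is a useful check on any numerical compartmental implementation.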
Modelling MIZ dynamics in a global model
Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto
2016-04-01
Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.
DEFF Research Database (Denmark)
Andresen, Mette
2007-01-01
...Non-authentic modelling is also linked with the potentials of exploration of ready-made models as a forerunner for more authentic modelling processes. The discussion includes analysis of an episode of students' work in the classroom, which serves to illustrate how concept formation may be linked to explorations of a non...
Crushed Salt Constitutive Model
International Nuclear Information System (INIS)
Callahan, G.D.
1999-01-01
The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well
E. Gregory McPherson; Paula J. Peper
2012-01-01
This paper describes three long-term tree growth studies conducted to evaluate tree performance because repeated measurements of the same trees produce critical data for growth model calibration and validation. Several empirical and process-based approaches to modeling tree growth are reviewed. Modeling is more advanced in the fields of forestry and...
DEFF Research Database (Denmark)
Borlund, Pia
2003-01-01
An alternative approach to evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation ...
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
International Nuclear Information System (INIS)
Zuber, A.
1983-01-01
A review and discussion is given of mathematical models used for the interpretation of tracer experiments in hydrology. For the dispersion model, different initial and boundary conditions are related to different injection and detection modes. Examples of applications of various models are described and commented upon. (author)
Kelderman, Hendrikus
1984-01-01
Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch
International Nuclear Information System (INIS)
Thomas, A.W.
1981-01-01
Recent developments in the bag model, in which the constraints of chiral symmetry are explicitly included are reviewed. The model leads to a new understanding of the Δ-resonance. The connection of the theory with current algebra is clarified and implications of the model for the structure of the nucleon are discussed
Energy Technology Data Exchange (ETDEWEB)
Fortelius, C.; Holopainen, E.; Kaurola, J.; Ruosteenoja, K.; Raeisaenen, J. [Helsinki Univ. (Finland). Dept. of Meteorology
1996-12-31
In recent years, the modelling of interannual climate variability, the atmospheric energy and water cycles, and climate simulations with the ECHAM3 model have been studied. In addition, the climate simulations of several models have been compared, with special emphasis on the area of northern Europe
The nontopological soliton model
International Nuclear Information System (INIS)
Wilets, L.
1988-01-01
The nontopological soliton model introduced by Friedberg and Lee, and variations of it, provide a method for modeling QCD which can effectively include the dynamics of hadronic collisions as well as spectra. Absolute color confinement is effected by the assumed dielectric properties of the medium. A recently proposed version of the model is chirally invariant. 32 refs., 5 figs., 1 tab
International Nuclear Information System (INIS)
Martin Llorente, F.
1990-01-01
Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...
Intermittency in branching models
International Nuclear Information System (INIS)
Chiu, C.B.; Texas Univ., Austin; Hwa, R.C.; Oregon Univ., Eugene
1990-01-01
The intermittency properties of three branching models have been investigated. The factorial moments show power-law behavior as a function of small rapidity width. The slopes and energy dependences reveal different characteristics of the models. The gluon model has the weakest intermittency. (orig.)
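The quantity behind this analysis is the factorial moment F_q = ⟨n(n−1)⋯(n−q+1)⟩/⟨n⟩^q of the bin multiplicity n; intermittency shows up as power-law growth F_q ∝ (δy)^(−φ_q) as the rapidity bin width δy shrinks. A minimal sketch of the moment computation (the branching models themselves are not reproduced here):

```python
def falling_factorial(n, q):
    """n * (n - 1) * ... * (n - q + 1)."""
    prod = 1
    for k in range(q):
        prod *= n - k
    return prod

def factorial_moment(counts, q):
    """Factorial moment F_q = <n(n-1)...(n-q+1)> / <n>**q over bins.

    `counts` holds the particle multiplicity in each rapidity bin
    (averaged over events in a real analysis).
    """
    m = len(counts)
    num = sum(falling_factorial(n, q) for n in counts) / m
    mean = sum(counts) / m
    return num / mean ** q

# For a Poisson-distributed multiplicity F_q = 1 for all q (no
# intermittency); systematic growth of F_q as the bin width shrinks
# signals intermittent fluctuations.
```

For constant multiplicity n = 2 in every bin, F_2 = ⟨n(n−1)⟩/⟨n⟩² = 2/4 = 0.5, which is a convenient hand check.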
DEFF Research Database (Denmark)
Gudiksen, Sune Klok; Poulsen, Søren Bolvig; Buur, Jacob
2014-01-01
Well-established companies are currently struggling to secure profits due to the pressure from new players' business models as they take advantage of communication technology and new business-model configurations. Because of this, the business model research field flourishes currently; however, t...
International Nuclear Information System (INIS)
Sazykina, T.G.; Kryshev, I.I.
1996-01-01
The main purpose of the model is a more detailed description of the radionuclide transfer in food chains, including the dynamics in the early period after accidental release. Detailed modelling of the dynamics of radioactive depositions is beyond the purpose of the model. Standard procedures are used for assessing inhalation and external doses. 3 figs, 2 tabs
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religious,…
Energy Technology Data Exchange (ETDEWEB)
Fortelius, C; Holopainen, E; Kaurola, J; Ruosteenoja, K; Raeisaenen, J [Helsinki Univ. (Finland). Dept. of Meteorology
1997-12-31
In recent years, the modelling of interannual climate variability, the atmospheric energy and water cycles, and climate simulations with the ECHAM3 model have been studied. In addition, the climate simulations of several models have been compared, with special emphasis on the area of northern Europe
DEFF Research Database (Denmark)
Andreasen, Martin Møller; Meldrum, Andrew
This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...
Automated Simulation Model Generation
Huang, Y.
2013-01-01
One of today's challenges in the field of modeling and simulation is to model increasingly larger and more complex systems. Complex models take a long time to develop and incur high costs. With the advances in data collection technologies and more popular use of computer-aided systems, more data has become
Modeling EERE Deployment Programs
Energy Technology Data Exchange (ETDEWEB)
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
Engineering of products and processes is increasingly “model-centric”. Models in their multitudinous forms are ubiquitous, being heavily used for a range of decision making activities across all life cycle phases. This chapter gives an overview of what is a model, the principal activities in the ...
Dynamic Latent Classification Model
DEFF Research Database (Denmark)
Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre
as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...
Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myang, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.
2008-01-01
We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of
DEFF Research Database (Denmark)
Friis, Silje Alberthe Kamille; Gelting, Anne Katrine Gøtzsche
2014-01-01
the approaches and reach a new level of conscious action when designing? Informed by theories of design thinking, knowledge production, and learning, we have developed a model, the 5C model, accompanied by 62 method cards. Examples of how the model has been applied in an educational setting are provided...
A Statistical Algorithm for Estimating Chlorophyll Concentration in the New Caledonian Lagoon
Directory of Open Access Journals (Sweden)
Guillaume Wattelez
2016-01-01
Full Text Available Spatial and temporal dynamics of phytoplankton biomass and water turbidity can provide crucial information about the function, health and vulnerability of lagoon ecosystems (coral reefs, sea grasses, etc.). A statistical algorithm is proposed to estimate chlorophyll-a concentration ([chl-a]) in optically complex waters of the New Caledonian lagoon from MODIS-derived “remote-sensing” reflectance (Rrs). The algorithm is developed via supervised learning on match-ups gathered from 2002 to 2010. The best performance is obtained by combining two models, selected according to the ratio of Rrs in spectral bands centered on 488 and 555 nm: a log-linear model for low [chl-a] (AFLC) and a support vector machine (SVM) model or a classic model (OC3) for high [chl-a]. The log-linear model is developed based on SVM regression analysis. This approach outperforms the classical OC3 approach, especially in shallow waters, with a root mean squared error 30% lower. The proposed algorithm enables more accurate assessments of [chl-a] and its variability in this typical oligo- to meso-trophic tropical lagoon, from shallow coastal waters and nearby reefs to deeper waters and in the open ocean.
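The two-branch structure of such a combined estimator can be sketched as follows. The coefficients and the switching threshold below are placeholders for illustration, not the fitted AFLC/OC3 values from the match-up analysis:

```python
import math

RATIO_THRESHOLD = 1.0  # assumed switch point on Rrs(488)/Rrs(555)

def chl_low(rrs488, rrs555):
    """Log-linear branch for low [chl-a] (placeholder coefficients)."""
    x = math.log10(rrs488 / rrs555)
    return 10 ** (0.3 - 2.0 * x)

def chl_high(rrs488, rrs555):
    """OC3-style polynomial branch for high [chl-a] (placeholder coefficients)."""
    x = math.log10(rrs488 / rrs555)
    return 10 ** (0.28 - 2.5 * x + 1.2 * x ** 2)

def chl_combined(rrs488, rrs555):
    """Route to the low- or high-[chl-a] model on the blue/green band ratio."""
    if rrs488 / rrs555 > RATIO_THRESHOLD:  # high ratio -> clearer water
        return chl_low(rrs488, rrs555)
    return chl_high(rrs488, rrs555)
```

Operational band-ratio algorithms of this family switch (or blend) between branches because a single polynomial cannot cover both oligotrophic and turbid regimes.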
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Nobuhiro; Yamashita, Toshifumi
2003-08-14
This Letter demonstrates that, as in flipped SU(5) models, doublet-triplet splitting is accomplished by a missing partner mechanism in flipped SO(10) models. The gauge group SO(10){sub F}xU(1){sub V'{sub F}} includes SU(2){sub E} gauge symmetry, which plays an important role in solving the supersymmetric (SUSY) flavor problem by introducing non-abelian horizontal gauge symmetry and anomalous U(1){sub A} gauge symmetry. The gauge group can be broken into the standard model gauge group by VEVs of only spinor fields; such models may be easier to derive than E{sub 6} models from superstring theory.
International Nuclear Information System (INIS)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions
International Nuclear Information System (INIS)
Padilla, V.R.
1992-01-01
The analysis of oil exploration models in this paper is developed in four parts. The way in which exploration has been dealt with in oil supply models is first described. Five recent models are then looked at, paying particular attention to the explanatory variables used when modelling exploration activities. This is followed by a discussion of the factors which have been shown by several empirical studies to determine exploration in less developed countries. Finally, the interdependence between institutional factors, oil prices and exploration effort is analysed with a view to drawing conclusions for modelling in the future. (UK)
Modeling Epidemic Network Failures
DEFF Research Database (Denmark)
Ruepp, Sarah Renée; Fagertun, Anna Manolova
2013-01-01
This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate...... the SID model’s behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms, and can due to its flexibility and versatility be used...... to evaluate multiple epidemic scenarios in various network types....
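The SID dynamics can be sketched under an assumed mean-field (well-mixed) formulation; the paper's actual model is a network simulation in OPNET Modeler, and the rate constants below are hypothetical:

```python
def simulate_sid(s0, i0, d0, beta, gamma, dt, steps):
    """Euler integration of a simple SID compartment model.

    dS/dt = -beta*S*I ; dI/dt = beta*S*I - gamma*I ; dD/dt = gamma*I
    (one plausible mean-field formulation; the network model is richer).
    """
    s, i, d = s0, i0, d0
    history = [(s, i, d)]
    for _ in range(steps):
        new_infected = beta * s * i * dt   # susceptible nodes becoming infected
        new_disabled = gamma * i * dt      # infected nodes becoming disabled
        s -= new_infected
        i += new_infected - new_disabled
        d += new_disabled
        history.append((s, i, d))
    return history

# Fractions of network nodes; beta/gamma > 1 drives an epidemic outbreak.
hist = simulate_sid(0.99, 0.01, 0.0, beta=0.5, gamma=0.1, dt=0.1, steps=1000)
```

The conserved total S + I + D and the monotone growth of the disabled fraction are the natural sanity checks when validating such an implementation against analytical solutions.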
DEFF Research Database (Denmark)
Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun
-additive genetic mechanisms. These modeling approaches have proven to be highly useful to determine population genetic parameters as well as prediction of genetic risk or value. We present a series of statistical modelling approaches that use prior biological information for evaluating the collective action......Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non...... regions and gene ontologies) that provide better model fit and increase predictive ability of the statistical model for this trait....
International Nuclear Information System (INIS)
LeBlanc, G.; Corbett, W.J.
1997-01-01
The response matrix, consisting of the closed orbit change at each beam position monitor (BPM) due to corrector magnet excitations, was measured and analyzed in order to calibrate a linear optics model of SPEAR. The model calibration was accomplished by varying model parameters to minimize the chi-square difference between the measured and the model response matrices. The singular value decomposition (SVD) matrix inversion method was used to solve the simultaneous equations. The calibrated model was then used to calculate corrections to the operational lattice. The results of the calibration and correction procedures are presented
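The linearized fitting step can be illustrated with a toy example: the orbit-response residual depends approximately linearly on the model parameters through a sensitivity matrix, and the SVD pseudoinverse yields the chi-square-minimizing parameter correction. All matrices below are synthetic; the real calibration iterates this over the full BPM-corrector response matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_par = 20, 2
# Sensitivity matrix J = d(response)/d(parameter), here random for illustration.
J = rng.standard_normal((n_obs, n_par))
p_true = np.array([0.5, -1.2])                        # hypothetical parameter errors
dR = J @ p_true + 1e-6 * rng.standard_normal(n_obs)   # measured - model residual

# Solve the overdetermined system dR = J @ p in the least-squares sense via SVD.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
# Zero out near-singular directions before inverting (regularizes the fit).
tol = 1e-10 * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
p_fit = Vt.T @ (s_inv * (U.T @ dR))                   # chi-square-minimizing solution
```

Truncating small singular values is what makes the SVD approach robust when some parameter combinations are poorly constrained by the measured orbits.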
International Nuclear Information System (INIS)
Knee, H.E.; Schryver, J.C.
1991-01-01
Models of human behavior and cognition (HB and C) are necessary for understanding the total response of complex systems. Many such models have become available over the past thirty years for various applications. Unfortunately, many potential model users remain skeptical about their practicality, acceptability, and usefulness. Such hesitancy stems in part from disbelief in the ability to model complex cognitive processes, and from a belief that relevant human behavior can be adequately accounted for through the use of commonsense heuristics. This paper highlights several models of HB and C and identifies existing and potential applications in an attempt to dispel such notions. (author)
Long, John
2014-01-01
Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ
Blaha, Michael
2010-01-01
Best-selling author and database expert with more than 25 years of experience modeling application and enterprise data, Dr. Michael Blaha provides tried and tested data model patterns, to help readers avoid common modeling mistakes and unnecessary frustration on their way to building effective data models. Unlike the typical methodology book, "Patterns of Data Modeling" provides advanced techniques for those who have mastered the basics. Recognizing that database representation sets the path for software, determines its flexibility, affects its quality, and influences whether it succ
Directory of Open Access Journals (Sweden)
Paul Walton
2014-09-01
Full Text Available This paper uses an approach drawn from the ideas of computer systems modelling to produce a model for information itself. The model integrates evolutionary, static and dynamic views of information and highlights the relationship between symbolic content and the physical world. The model includes what information technology practitioners call “non-functional” attributes, which, for information, include information quality and information friction. The concepts developed in the model enable a richer understanding of Floridi’s questions “what is information?” and “the informational circle: how can information be assessed?” (which he numbers P1 and P12).
International Nuclear Information System (INIS)
Brown, T.W.
2010-11-01
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Designing Business Model Change
DEFF Research Database (Denmark)
Cavalcante, Sergio Andre
2014-01-01
The aim of this paper is to base organisational change on the firm's business model, an approach that research has only recently started to address. This study adopts a process-based perspective on business models and insights from a variety of theories as the basis for the development of ideas...... on the design of business model change. This paper offers a new, process-based strategic analytical artefact for the design of business model change, consisting of three main phases. Designing business model change as suggested in this paper allows ex ante analysis of alternative scenarios of change...
Energy Technology Data Exchange (ETDEWEB)
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
International Nuclear Information System (INIS)
Iachello, F.; Arima, A.
1987-01-01
The book gives an account of some of the properties of the interacting boson model. The model was introduced in 1974 to describe in a unified way the collective properties of nuclei. The book presents the mathematical techniques used to analyse the structure of the model. The mathematical framework of the model is discussed in detail. The book also contains all the formulae that have been developed throughout the years to account for collective properties of nuclei. These formulae can be used by experimentalists to compare their data with the predictions of the model. (U.K.)
International Nuclear Information System (INIS)
Waterman, T.E.; Takata, A.N.
1983-01-01
The IITRI Urban Fire Spread Model, as well as others of similar vintage, was constrained by computer size and running costs such that many approximations/generalizations were introduced to reduce program complexity and data storage requirements. Simplifications were introduced both in input data and in fire growth and spread calculations. Modern computational capabilities offer the means to introduce greater detail and to examine its practical significance on urban fire predictions. Selected portions of the model are described as presently configured, and potential modifications are discussed. A single tract model is hypothesized which permits the importance of various model details to be assessed, and other model applications are identified
International Nuclear Information System (INIS)
McGraw, M.
2000-01-01
The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use of a process level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) Provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); (3) Provide a basis for development of an abstracted model for use in PA calculations
Mathematical modelling techniques
Aris, Rutherford
1995-01-01
"Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of mode
Intersection carbon monoxide modeling
International Nuclear Information System (INIS)
Zamurs, J.
1990-01-01
In this note the author discusses the need for better air quality mobile source models near roadways and intersections. To develop the improved models, a better understanding of emissions and their relation to ambient concentrations is necessary. The database for the modal model indicates that vehicles do have different emission levels for different engine operating modes. If the modal approach is used information is needed on traffic signal phasing, queue lengths, delay times, acceleration rates, deceleration rates, capacity, etc. Dispersion estimates using current air quality models may be inaccurate because the models do not take into account intersecting traffic streams, multiple buildings of varying setbacks, height, and spacing
Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel
2015-04-01
With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built out of an accurate but slow waveform model with minimal to no loss in accuracy, but may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying model is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternate choices for building surrogate models which allow for a more sparse set of input waveforms. Research supported in part by NSERC.
Collins, Lisa M.; Part, Chérie E.
2013-01-01
Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411
Making ecological models adequate
Getz, Wayne M.; Marshall, Charles R.; Carlson, Colin J.; Giuggioli, Luca; Ryan, Sadie J.; Romañach, Stephanie; Boettiger, Carl; Chamberlain, Samuel D.; Larsen, Laurel; D'Odorico, Paolo; O'Sullivan, David
2018-01-01
Critical evaluation of the adequacy of ecological models is urgently needed to enhance their utility in developing theory and enabling environmental managers and policymakers to make informed decisions. Poorly supported management can have detrimental, costly or irreversible impacts on the environment and society. Here, we examine common issues in ecological modelling and suggest criteria for improving modelling frameworks. An appropriate level of process description is crucial to constructing the best possible model, given the available data and understanding of ecological structures. Model details unsupported by data typically lead to over-parameterisation and poor model performance. Conversely, a lack of mechanistic details may limit a model's ability to predict ecological systems’ responses to management. Ecological studies that employ models should follow a set of model adequacy assessment protocols that include: asking a series of critical questions regarding state and control variable selection, the determinacy of data, and the sensitivity and validity of analyses. We also need to improve model elaboration, refinement and coarse graining procedures to better understand the relevancy and adequacy of our models and the role they play in advancing theory, improving hindcasting and forecasting, and enabling problem solving and management.
International Nuclear Information System (INIS)
Ghezzehej, T.
2004-01-01
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency
Modeling of ultrasound transducers
DEFF Research Database (Denmark)
Bæk, David
This Ph.D. dissertation addresses ultrasound transducer modeling for medical ultrasound imaging and combines the modeling with the ultrasound simulation program Field II. The project firstly presents two new models for spatial impulse responses (SIR)s to a rectangular elevation focused transducer...... (REFT) and to a convex rectangular elevation focused transducer (CREFT). These models are solvable on an analog time scale and give exact smooth solutions to the Rayleigh integral. The REFT model exhibits a root mean square (RMS) error relative to Field II predictions of 0.41 % at 3400 MHz, and 1.37 % at 100 MHz. The CREFT model exhibits a RMS deviation of 0.01 % relative to the exact numerical solution on a CREFT transducer. A convex non-elevation focused, a REFT, and a linear flat transducer are shown to be covered with the CREFT model as well. Pressure pulses calculated with a one...
MATHEMATICAL MODEL MANIPULATOR ROBOTS
Directory of Open Access Journals (Sweden)
O. N. Krakhmalev
2015-12-01
Full Text Available A mathematical model is presented to describe the dynamics of manipulator robots. The model implements a method based on the Lagrange equation and uses transformation matrices of elastic coordinates. It makes it possible to determine the elastic deviations of manipulator robots from programmed motion trajectories caused by elastic deformations in the hinges, which are taken into account in the directions of change of the corresponding generalized coordinates. The model is approximate and makes it possible to determine small elastic quasi-static deviations and elastic vibrations. The results of modeling the dynamics are illustrated with the example of a two-link manipulator system. The considered model can be used in investigations of the mathematical accuracy of manipulator robots.
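The rigid-body core of such a Lagrange-based model can be illustrated with the inertia matrix of a planar two-link manipulator, a standard textbook result; the elastic-coordinate transformations of the paper are omitted, and all parameter values below are illustrative defaults:

```python
import math

def mass_matrix(q2, m1=1.0, m2=1.0, l1=1.0, lc1=0.5, lc2=0.5, I1=0.1, I2=0.1):
    """Inertia matrix M(q) of a planar two-link rigid manipulator.

    q2 is the elbow joint angle; m, l, lc, I are link masses, lengths,
    center-of-mass offsets and inertias (illustrative values only).
    """
    c2 = math.cos(q2)
    m11 = I1 + I2 + m1 * lc1**2 + m2 * (l1**2 + lc2**2 + 2 * l1 * lc2 * c2)
    m12 = I2 + m2 * (lc2**2 + l1 * lc2 * c2)
    m22 = I2 + m2 * lc2**2
    # M(q) is symmetric and positive definite for physical parameters.
    return [[m11, m12], [m12, m22]]
```

In the full elastic model this matrix would be augmented by the elastic generalized coordinates, from which the quasi-static deviations and vibrations are computed.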
DEFF Research Database (Denmark)
Laursen, Jesper
The present thesis considers numerical modeling of activated sludge tanks on municipal wastewater treatment plants. Focus is aimed at integrated modeling where the detailed microbiological model the Activated Sludge Model 3 (ASM3) is combined with a detailed hydrodynamic model based on a numerical...... solution of the Navier-Stokes equations in a multiphase scheme. After a general introduction to the activated sludge tank as a system, the activated sludge tank model is gradually setup in separate stages. The individual sub-processes that are often occurring in activated sludge tanks are initially...... hydrofoil shaped propellers. These two sub-processes deliver the main part of the supplied energy to the activated sludge tank, and for this reason they are important for the mixing conditions in the tank. For other important processes occurring in the activated sludge tank, existing models and measurements...
International Nuclear Information System (INIS)
Anon.
1982-01-01
Testing the applicability of mathematical models with carefully designed experiments is a powerful tool in the investigations of the effects of ionizing radiation on cells. The modeling and cellular studies complement each other, for modeling provides guidance for designing critical experiments which must provide definitive results, while the experiments themselves provide new input to the model. Based on previous experimental results the model for the accumulation of damage in Chlamydomonas reinhardi has been extended to include various multiple two-event combinations. Split dose survival experiments have shown that models tested to date predict most but not all the observed behavior. Stationary-phase mammalian cells, required for tests of other aspects of the model, have been shown to be at different points in the cell cycle depending on how they were forced to stop proliferating. These cultures also demonstrate different capacities for repair of sublethal radiation damage
Energy Technology Data Exchange (ETDEWEB)
Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-13
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
Directory of Open Access Journals (Sweden)
Alexander Fedorov
2011-03-01
Full Text Available The author supposed that media education models can be divided into the following groups:
- educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education;
- educational-ethical models (the study of moral, religious, philosophical problems), relying on the ethic, religious, ideological, ecological, protectionist theories of media education;
- pragmatic models (practical media technology training), based on the uses and gratifications and ‘practical’ theories of media education;
- aesthetical models (aimed above all at the development of artistic taste and enriching the skills of analysis of the best media culture examples), relying on the aesthetical (art) and cultural studies theory;
- socio-cultural models (socio-cultural development of a creative personality as to the perception, imagination, visual memory, interpretation, analysis, autonomic critical thinking), relying on the cultural studies, semiotic, ethic models of media education.
Targeting: Logistic Regression, Special Cases and Extensions
Directory of Open Access Journals (Sweden)
Helmut Schaeben
2014-12-01
Full Text Available Logistic regression is a classical linear model for logit-transformed conditional probabilities of a binary target variable. It recovers the true conditional probabilities if the joint distribution of predictors and the target is of log-linear form. Weights-of-evidence is an ordinary logistic regression with parameters equal to the differences of the weights of evidence if all predictor variables are discrete and conditionally independent given the target variable. The hypothesis of conditional independence can be tested in terms of log-linear models. If the assumption of conditional independence is violated, the application of weights-of-evidence not only corrupts the predicted conditional probabilities, but also their rank transform. Logistic regression models including interaction terms can account for the lack of conditional independence; appropriate interaction terms compensate exactly for violations of conditional independence. Multilayer artificial neural nets may be seen as nested regression-like models with some sigmoidal activation function. Most often, the logistic function is used as the activation function. If the net topology, i.e., its control, is sufficiently versatile to mimic interaction terms, artificial neural nets are able to account for violations of conditional independence and yield very similar results. Weights-of-evidence cannot reasonably include interaction terms; subsequent modifications of the weights, as often suggested, cannot emulate the effect of interaction terms.
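The role of the interaction term can be sketched numerically: when the target probability depends on the product of two binary predictors, conditional independence given the target fails, and a logistic regression that includes the x1*x2 term recovers the generating coefficients, which a main-effects-only model (or weights-of-evidence) cannot. The data and the plain gradient-ascent fitting routine below are synthetic and illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, iters=5000):
    """Maximum-likelihood logistic regression by gradient ascent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)   # average log-likelihood gradient
    return w

rng = np.random.default_rng(1)
n = 2000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
# The target depends on the PRODUCT x1*x2, so x1 and x2 are not
# conditionally independent given the target.
p_true = sigmoid(-1.0 + 3.0 * x1 * x2)
y = (rng.random(n) < p_true).astype(float)

X_int = np.column_stack([np.ones(n), x1, x2, x1 * x2])  # with interaction term
w_int = fit_logistic(X_int, y)
```

The fitted interaction coefficient lands near the generating value of 3, while dropping the fourth column forces the main effects to absorb (and distort) that dependence.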
Energy Technology Data Exchange (ETDEWEB)
Hammerand, Daniel Carl; Scherzinger, William Mark
2007-09-01
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models, including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms, have been implemented in LAME. The structure and testing of LAME are described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models that have already been implemented in LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These range from simple isotropic linear elastic models through a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies, and rubber. A complete description of each model is outside the scope of the current report; rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables; these alias names can be used for outputting desired quantities. The state variable aliases available for results output are listed in this report. Not all models use these aliases; for those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model, and this report lists which of these four functions are employed in each material model. This information is included for analysts' awareness; the analyst can take confidence in the fact that each model has been properly implemented
Geochemical modeling: a review
International Nuclear Information System (INIS)
Jenne, E.A.
1981-06-01
Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. The other family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water, test solubility hypotheses, and then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness in applications such as nuclear-waste isolation. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic databases and systematic validation before they are generally accepted.
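The solubility-hypothesis test in speciation-solubility models reduces to a saturation-index calculation: compare the ion activity product of a dissolution reaction to the solid's solubility product. A minimal sketch, with illustrative values (the activity product and the calcite Ksp here are assumptions, not taken from the review):

```python
import math

def saturation_index(iap, ksp):
    """Saturation index SI = log10(IAP / Ksp).

    SI > 0: supersaturated (the solid may precipitate);
    SI < 0: undersaturated (the solid may dissolve);
    SI ~ 0: water is near equilibrium with the solid phase.
    """
    return math.log10(iap / ksp)

# Hypothetical calcite check: activity product {Ca2+}{CO3^2-} from a
# speciation step, against an assumed Ksp of 10^-8.48 at 25 C.
iap = 3.2e-9
ksp = 10 ** -8.48
si = saturation_index(iap, ksp)  # slightly negative: near equilibrium
```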
International Nuclear Information System (INIS)
Martin, W.E.; McDonald, L.A.
1997-01-01
The eight book chapters demonstrate the link between the physical models of the environment and the policy analysis in support of policy making. Each chapter addresses an environmental policy issue using a quantitative modeling approach. The volume addresses three general areas of environmental policy - non-point source pollution in the agricultural sector, pollution generated in the extractive industries, and transboundary pollutants from burning fossil fuels. The book concludes by discussing the modeling efforts and the use of mathematical models in general. Chapters are entitled: modeling environmental policy: an introduction; modeling nonpoint source pollution in an integrated system (agri-ecological); modeling environmental and trade policy linkages: the case of EU and US agriculture; modeling ecosystem constraints in the Clean Water Act: a case study in Clearwater National Forest (subject to discharge from metal mining waste); costs and benefits of coke oven emission controls; modeling equilibria and risk under global environmental constraints (discussing energy and environmental interrelations); relative contribution of the enhanced greenhouse effect on the coastal changes in Louisiana; and the use of mathematical models in policy evaluations: comments. The paper on coke area emission controls has been abstracted separately for the IEA Coal Research CD-ROM
Directory of Open Access Journals (Sweden)
Chérie E. Part
2013-05-01
Full Text Available The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.
Directory of Open Access Journals (Sweden)
Dan Alexandru Anghel
2012-01-01
Full Text Available In semiconductor laser modeling, a good mathematical model gives results close to reality. Three methods of modeling solutions of the rate equations are presented and analyzed. First, a method based on the rate equations modeled in Simulink is used to describe quantum well lasers; for different input signal types, such as a step function, sawtooth, and sinusoid, the equations respond well. A circuit model derived from one of the rate-equation models is then presented and simulated in SPICE, showing good modeling behavior. Finally, numerical simulation in MathCad gives satisfactory results for the study of transient and dynamic operation at small levels of the injection current. The numerical results show the specific limits of each model, in agreement with theoretical analysis. Based on these results, software can be built that integrates circuit simulation and other modeling methods for quantum well lasers, providing a tool for modeling and analyzing these devices from all points of view.
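The rate-equation approach the abstract describes can be sketched with a normalized single-mode carrier/photon model integrated by explicit Euler. This is a common textbook-style simplification, not the exact equations or parameters of the paper; all values are illustrative:

```python
def simulate(P=2.0, beta=1e-3, dt=0.01, t_end=100.0):
    """Normalized single-mode laser rate equations.

    n: carrier density, s: photon density, P: pump rate (threshold at P=1),
    beta: small spontaneous-emission fraction seeding the photon field.
    """
    n, s = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dn = (P - n - n * s) * dt          # pumping, carrier decay, stimulated emission
        ds = (n * s - s + beta * n) * dt   # gain, photon loss, spontaneous seed
        n, s = n + dn, s + ds
    return n, s

n, s = simulate()
# Above threshold (P > 1) the system settles near n ~ 1, s ~ P - 1,
# after a damped relaxation transient.
```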
Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol
2003-01-01
The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from components - atmosphere, ocean, ice, land, chemistry, solid earth, etc. models - merged through a coupling program that is responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments to evaluate. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese ES theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.
Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan
2015-02-01
In applications we may want to compare different document collections: they could have shared content but also aspects unique to particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions, and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variations in different collections, into the model. To deal with the non-conjugacy between the model prior and the likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC-based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
International Nuclear Information System (INIS)
Lundberg, Jonas; Johansson, Björn JE
2015-01-01
It has been realized that resilience as a concept involves several contradictory definitions, for instance resilience as agile adjustment versus resilience as robust resistance to situations. Our analysis of resilience concepts and models suggests that, beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies
Arnold, Konstantin; Kiefer, Florian; Kopp, Jürgen; Battey, James N D; Podvinec, Michael; Westbrook, John D; Berman, Helen M; Bordoli, Lorenza; Schwede, Torsten
2009-03-01
Structural Genomics has been successful in determining the structures of many unique proteins in a high throughput manner. Still, the number of known protein sequences is much larger than the number of experimentally solved protein structures. Homology (or comparative) modeling methods make use of experimental protein structures to build models for evolutionary related proteins. Thereby, experimental structure determination efforts and homology modeling complement each other in the exploration of the protein structure space. One of the challenges in using model information effectively has been to access all models available for a specific protein in heterogeneous formats at different sites using various incompatible accession code systems. Often, structure models for hundreds of proteins can be derived from a given experimentally determined structure, using a variety of established methods. This has been done by all of the PSI centers, and by various independent modeling groups. The goal of the Protein Model Portal (PMP) is to provide a single portal which gives access to the various models that can be leveraged from PSI targets and other experimental protein structures. A single interface allows all existing pre-computed models across these various sites to be queried simultaneously, and provides links to interactive services for template selection, target-template alignment, model building, and quality assessment. The current release of the portal consists of 7.6 million model structures provided by different partner resources (CSMP, JCSG, MCSG, NESG, NYSGXRC, JCMM, ModBase, SWISS-MODEL Repository). The PMP is available at http://www.proteinmodelportal.org and from the PSI Structural Genomics Knowledgebase.
Models as Relational Categories
Kokkonen, Tommi
2017-11-01
Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes regarding learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of the common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, the ways in which the expert-like knowledge develops and how to best support it is in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done.
Chung, Roger Y; Yip, Benjamin H K; Chan, Sandra S M; Wong, Samuel Y S
2016-06-01
To examine temporal variations of age, period, and cohort effects on the suicide mortality rate in Hong Kong (HK) from 1976 to 2010, and to speculate on the macroenvironmental mechanisms behind the observed trends. Poisson age-period-cohort modeling was used to delineate the effects of age, period, and cohort on suicide mortality. Analysis by sex was also conducted to examine whether a gender difference exists for suicidal behaviours. The age-cohort model provides the best fit to the mortality data, implying that the cohort effect likely explains more of HK's suicide mortality pattern than the period effect. Risk of suicide mortality increases nonlinearly with age and accelerates after age 65-69 for both sexes. Moreover, the cohort effects differ between the sexes: risk of mortality increases continually for men born after 1961, but no change is observed for women since the 1941 cohort. Given the increased risk of suicide mortality in younger cohorts and the age effect of suicide mortality, we may see a future increase in suicide mortality as these younger cohorts age. Further studies are needed to clarify plausible associations between broader sociohistorical changes in the population impacting psychological risk factors and suicidal behaviour, to better inform suicide prevention strategies. © 2015 Wiley Periodicals, Inc.
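The well-known identifiability problem in age-period-cohort models stems from the exact linear dependency cohort = period - age on the Lexis diagram. A small sketch of the index mapping (the grid layout is an assumed example, not the HK data):

```python
def cohort_index(period_index, age_index, n_age_groups):
    """Map (period, age) cells on the Lexis diagram to a birth-cohort index.

    With equal age and period interval widths, cohort = period - age,
    shifted so the oldest age group in the first period is cohort 0.
    """
    return period_index - age_index + (n_age_groups - 1)

# A grid with 3 age groups and 4 periods yields 3 + 4 - 1 = 6 diagonals,
# i.e., 6 birth cohorts.
n_age, n_period = 3, 4
cohorts = {cohort_index(p, a, n_age)
           for p in range(n_period) for a in range(n_age)}
```

Because any one of the three indices is a linear function of the other two, the linear effects of age, period, and cohort cannot all be separately identified without an extra constraint, which is why model comparisons such as age-cohort versus age-period fits are used.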
Baby boomers nearing retirement: the healthiest generation?
Rice, Neil E; Lang, Iain A; Henley, William; Melzer, David
2010-02-01
The baby-boom generation is entering retirement. Having experienced unprecedented prosperity and improved medical technology, they should be the healthiest generation ever. We compared prevalence of disease and risk factors at ages 50-61 years in baby boomers with the preceding generation and attributed differences to period or cohort effects. Data were from the Health Survey for England (HSE) from 1994 to 2007 (n = 48,563). Logistic regression models compared health status between birth cohorts. Age-period-cohort models identified cohort and period effects separately. Compared to the wartime generation, the baby-boomer group was heavier (3.02 kg; 95% confidence interval [CI], 2.42-3.63; p Baby boomers reported fewer heart attacks (OR = 0.61; CI, 0.47-0.79; p baby boomers are moving toward retirement with improved cardiovascular health. However, the baby-boomer cohort has a higher prevalence of mental illness diagnoses and shows no improvement in self-rated health compared to the wartime birth cohort. There remains substantial scope to reduce health risks and future disability.
Modelling cointegration in the vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren
2000-01-01
A survey is given of some results obtained for the cointegrated VAR. The Granger representation theorem is discussed and the notions of cointegration and common trends are defined. The statistical model for cointegrated I(1) variables is defined, and it is shown how hypotheses on the cointegratin...
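Cointegration and common trends, as defined in the abstract, can be illustrated by simulating two I(1) series driven by one shared stochastic trend; the cointegrating combination removes the trend and is stationary. This is a simulation sketch, not the VAR estimation machinery of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# One common stochastic trend (a random walk) drives both series.
trend = np.cumsum(rng.standard_normal(n))
x = trend + rng.standard_normal(n)        # I(1): wanders with the trend
y = 2.0 * trend + rng.standard_normal(n)  # I(1): same trend, scaled

# The cointegrating combination y - 2x eliminates the common trend,
# leaving a stationary spread with small, stable variance.
spread = y - 2.0 * x
```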
Template for Conceptual Model Construction: Model Review and Corps Applications
National Research Council Canada - National Science Library
Henderson, Jim E; O'Neil, L. J
2007-01-01
.... The template will expedite conceptual model construction by providing users with model parameters and potential model components, building on a study team's knowledge and experience, and promoting...
Aeroservoelasticity modeling and control
Tewari, Ashish
2015-01-01
This monograph presents the state of the art in aeroservoelastic (ASE) modeling and analysis and develops a systematic theoretical and computational framework for use by researchers and practicing engineers. It is the first book to focus on the mathematical modeling of structural dynamics, unsteady aerodynamics, and control systems to evolve a generic procedure to be applied for ASE synthesis. Existing robust, nonlinear, and adaptive control methodology is applied and extended to some interesting ASE problems, such as transonic flutter and buffet, post-stall buffet and maneuvers, and flapping flexible wing. The author derives a general aeroservoelastic plant via the finite-element structural dynamic model, unsteady aerodynamic models for various regimes in the frequency domain, and the associated state-space model by rational function approximations. For more advanced models, the full-potential, Euler, and Navier-Stokes methods for treating transonic and separated flows are also briefly addressed. Essential A...
Identification of physical models
DEFF Research Database (Denmark)
Melgaard, Henrik
1994-01-01
The problem of identification of physical models is considered within the frame of stochastic differential equations. Methods for estimation of parameters of these continuous time models based on discrete time measurements are discussed. The important algorithms of a computer program for ML or MAP ... design of experiments, which is for instance the design of an input signal that is optimal according to a criterion based on the information provided by the experiment. Also model validation is discussed. An important verification of a physical model is to compare the physical characteristics ... of the model with the available prior knowledge. The methods for identification of physical models have been applied in two different case studies. One case is the identification of thermal dynamics of building components. The work is related to a CEC research project called PASSYS (Passive Solar Components ...
Developing mathematical modelling competence
DEFF Research Database (Denmark)
Blomhøj, Morten; Jensen, Tomas Højgaard
2003-01-01
In this paper we introduce the concept of mathematical modelling competence, by which we mean being able to carry through a whole mathematical modelling process in a certain context. Analysing the structure of this process, six sub-competences are identified. Mathematical modelling competence cannot be reduced to these six sub-competences, but they are necessary elements in the development of mathematical modelling competence. Experience from the development of a modelling course is used to illustrate how the different nature of the sub-competences can be used as a tool for finding the balance between different kinds of activities in a particular educational setting. Obstacles of social, cognitive and affective nature for the students' development of mathematical modelling competence are reported and discussed in relation to the sub-competences.
International Nuclear Information System (INIS)
Bozoki, E.
1987-01-01
There is burgeoning interest in modeling-based accelerator control. With more and more stringent requirements on performance, the importance of knowing, controlling, and predicting the behavior of the accelerator system is growing. Modeling means two things: (1) the development of programs and data which predict the outcome of a measurement, and (2) devising and performing measurements to find the machine physics parameters and their behavior under different conditions. These two sides should be tied together in an iterative process: with knowledge gained on the real system, the model will be modified, calibrated, and fine-tuned. The model of a system consists of data and the modeling program. In the on-line mode, the Modeling Based Control (MBC) programs should control, optimize, and correct the machine. In the off-line mode, the MBC is used to simulate the machine as well as explore and study its behavior and responses under a wide variety of circumstances. 15 refs., 3 figs
DEFF Research Database (Denmark)
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are ones derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We shall briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.
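One widely used criterion of the kind compared in the abstract is cumulative explained variance from an SVD/PCA decomposition: keep the smallest number of components reaching a chosen threshold. A sketch on synthetic data (the 95% threshold and the data layout are assumptions for illustration, not from the paper):

```python
import numpy as np

def n_components_for_variance(X, threshold=0.95):
    """Smallest number of principal components explaining `threshold`
    of the total variance, via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values
    explained = np.cumsum(s**2) / np.sum(s**2)   # cumulative variance ratio
    return int(np.searchsorted(explained, threshold) + 1)

# Synthetic data: two dominant directions, tiny noise in the rest.
rng = np.random.default_rng(1)
scores = rng.standard_normal((200, 2)) @ np.array([[5.0, 0, 0, 0],
                                                   [0, 3.0, 0, 0]])
X = scores + 0.01 * rng.standard_normal((200, 4))
k = n_components_for_variance(X, 0.95)  # recovers the 2D structure
```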
Essentials of econophysics modelling
Slanina, Frantisek
2014-01-01
This book is a course in methods and models rooted in physics and used in modelling economic and social phenomena. It covers the discipline of econophysics, which creates an interface between physics and economics. Besides the main theme, it touches on the theory of complex networks and simulations of social phenomena in general. After a brief historical introduction, the book starts with a list of basic empirical data and proceeds to thorough investigation of mathematical and computer models. Many of the models are based on hypotheses of the behaviour of simplified agents. These comprise strategic thinking, imitation, herding, and the gem of econophysics, the so-called minority game. At the same time, many other models view the economic processes as interactions of inanimate particles. Here, the methods of physics are especially useful. Examples of systems modelled in such a way include books of stock-market orders, and redistribution of wealth among individuals. Network effects are investigated in the inter...
Macklin, Paul; Cristini, Vittorio
2013-01-01
Simulating cancer behavior across multiple biological scales in space and time, i.e., multiscale cancer modeling, is increasingly being recognized as a powerful tool to refine hypotheses, focus experiments, and enable more accurate predictions. A growing number of examples illustrate the value of this approach in providing quantitative insight on the initiation, progression, and treatment of cancer. In this review, we introduce the most recent and important multiscale cancer modeling works that have successfully established a mechanistic link between different biological scales. Biophysical, biochemical, and biomechanical factors are considered in these models. We also discuss innovative, cutting-edge modeling methods that are moving predictive multiscale cancer modeling toward clinical application. Furthermore, because the development of multiscale cancer models requires a new level of collaboration among scientists from a variety of fields such as biology, medicine, physics, mathematics, engineering, and computer science, an innovative Web-based infrastructure is needed to support this growing community. PMID:21529163
Modelling of biomass pyrolysis
International Nuclear Information System (INIS)
Kazakova, Nadezhda; Petkov, Venko; Mihailov, Emil
2015-01-01
Pyrolysis is an essential preliminary step in a gasifier. The first step in modelling the pyrolysis process of biomass is creating a model for the chemical processes taking place. This model should describe the used fuel, the reactions taking place and the products created in the process. The numerous different polymers present in the organic fraction of the fuel are generally divided in three main groups. So, the multistep kinetic model of biomass pyrolysis is based on conventional multistep devolatilization models of the three main biomass components - cellulose, hemicelluloses, and lignin. Numerical simulations have been conducted in order to estimate the influence of the heating rate and the temperature of pyrolysis on the content of the virgin biomass, active biomass, liquid, solid and gaseous phases at any moment. Keywords: kinetic models, pyrolysis, biomass pyrolysis.
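Each step of a multistep devolatilization scheme like the one described is typically a first-order reaction with an Arrhenius rate constant. A minimal Euler-integration sketch for one component (the kinetic parameters below are illustrative assumptions, not fitted values for cellulose, hemicellulose, or lignin):

```python
import math

def devolatilize(k0, Ea, T, m0, dt, t_end):
    """Single first-order devolatilization step dm/dt = -k(T) m,
    with Arrhenius rate k = k0 * exp(-Ea / (R T)); explicit Euler."""
    R = 8.314  # gas constant, J/(mol K)
    k = k0 * math.exp(-Ea / (R * T))
    m, t = m0, 0.0
    while t < t_end:
        m -= k * m * dt
        t += dt
    return m

# Illustrative (not fitted) parameters; mass remaining after 10 s at 700 K.
m_final = devolatilize(k0=1e10, Ea=1.5e5, T=700.0, m0=1.0, dt=0.01, t_end=10.0)
```

Running the same step at a higher temperature shows the strong sensitivity of conversion to pyrolysis temperature that the numerical simulations in the abstract investigate.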
Hydrological land surface modelling
DEFF Research Database (Denmark)
Ridler, Marc-Etienne Francois
Recent advances in integrated hydrological and soil-vegetation-atmosphere transfer (SVAT) modelling have led to improved water resource management practices, greater crop production, and better flood forecasting systems. However, uncertainty is inherent in all numerical models, ultimately leading ... and disaster management. The objective of this study is to develop and investigate methods to reduce hydrological model uncertainty by using supplementary data sources. The data is used either for model calibration or for model updating using data assimilation. Satellite estimates of soil moisture and surface temperature are explored in a multi-objective calibration experiment to optimize the parameters in a SVAT model in the Sahel. The two satellite derived variables were effective at constraining most land-surface and soil parameters. A data assimilation framework is developed and implemented with an integrated ...
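The model-updating idea behind data assimilation can be shown in its simplest scalar form: blend a model background with an observation, weighted by their error variances. The soil-moisture numbers below are hypothetical, not from the thesis:

```python
def assimilate(background, obs, var_b, var_o):
    """Scalar analysis update (optimal for Gaussian errors):
    weight the model background and the observation by the inverse
    of their error variances."""
    gain = var_b / (var_b + var_o)          # Kalman gain
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b            # analysis variance shrinks
    return analysis, var_a

# Hypothetical update: model says soil moisture 0.30, satellite says 0.20;
# the satellite retrieval is trusted more (smaller error variance).
analysis, var_a = assimilate(background=0.30, obs=0.20, var_b=0.04, var_o=0.01)
```

The analysis lands between the two estimates, closer to the more certain one, and its variance is smaller than either input's background variance, which is exactly the uncertainty-reduction goal stated in the abstract.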
Modeling exogenous moral norms
Directory of Open Access Journals (Sweden)
Ross A. Tippit
2014-11-01
Full Text Available This paper considers the possibility of a robust and general formulation of a model of choice for the representation of a variety of moral norms. It starts by reviewing several recent models of deontological (or rule-based norms that retain the basic elements of the economic model of choice. It briefly examines the achievements and drawbacks of each model, and while no model is identified as the most accurate or robust, the most appealing aspects of each model contribute to the construction of a tout-ensemble utility function proposed in the final section. This representation of preferences aims to incorporate the most common qualities of both consequentialist and deontological moral norms in order to represent decision making under their influence.
DEFF Research Database (Denmark)
Andersen, Kasper Winther
Three main topics are presented in this thesis. The first and largest topic concerns network modelling of functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). In particular nonparametric Bayesian methods are used to model brain networks derived from resting state f... for their ability to reproduce node clustering and predict unseen data. Comparing the models on whole brain networks, BCD and IRM showed better reproducibility and predictability than IDM, suggesting that resting state networks exhibit community structure. This also points to the importance of using models which allow for complex interactions between all pairs of clusters. In addition, it is demonstrated how the IRM can be used for segmenting brain structures into functionally coherent clusters. A new nonparametric Bayesian network model is presented. The model builds upon the IRM and can be used to infer ...
Inverse and Predictive Modeling
Energy Technology Data Exchange (ETDEWEB)
Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple - one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions - to the complex - multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
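A minimal version of the data-driven inverse problem described above is linear least squares: recover model parameters from observations through a forward operator. The two-layer travel-time setup below is an assumed toy example, not one of the team's actual models:

```python
import numpy as np

# Forward model: travel time = sum of (path length x slowness) per layer,
# i.e., d = G m, where m holds the slownesses of two layers (assumed setup).
G = np.array([[10.0,  0.0],   # ray 1: 10 km in layer 1 only
              [ 5.0,  5.0],   # ray 2: 5 km in each layer
              [ 0.0, 10.0]])  # ray 3: 10 km in layer 2 only
m_true = np.array([0.2, 0.25])                    # slownesses, s/km
d = G @ m_true + np.array([0.01, -0.01, 0.005])   # travel times + noise

# Least-squares inversion: recover the slowness model from the data.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The same structure, constrained by more data types and more dimensions, is what moves a model from the "simple" to the "complex" end of the range the abstract describes.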
Lawson, Andrew B
2002-01-01
Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...
DEFF Research Database (Denmark)
Bro Petersen, Peter
Models of Journalism investigates the most fundamental questions of how journalists can best serve the public and what factors enable or obstruct them in doing so. The book evaluates previous scholarly attempts at modeling the function and influencing factors of journalism, and proceeds to develop a range of important new models that take contemporary challenges faced by journalists and journalism into account. Among these new models is the "chronology-of-journalism", which introduces a new set of influencing factors that can affect journalists in the 21st century. These include internal factors – journalistic principles, precedents and practices – and external factors – journalistic production, publication and perception. Another new model, the "journalistic compass", delineates differences and similarities between some of the most important journalistic roles in the media landscape. For each new model...
International Nuclear Information System (INIS)
Post, D.E.; Heifetz, D.; Petravic, M.
1982-07-01
Recent progress in models for poloidal divertors has both helped to explain current divertor experiments and contributed significantly to design efforts for future large tokamak (INTOR, etc.) divertor systems. These models range in sophistication from zero-dimensional treatments and dimensional analysis to two-dimensional models for plasma and neutral particle transport which include a wide variety of atomic and molecular processes as well as detailed treatments of the plasma-wall interaction. This paper presents a brief review of some of these models, describing the physics and approximations involved in each model. We discuss the wide variety of physics necessary for a comprehensive description of poloidal divertors. To illustrate the progress in models for poloidal divertors, we discuss some of our recent work as typical examples of the kinds of calculations being done.
International Nuclear Information System (INIS)
Pleitez, V.
1994-01-01
The search for laws of physics beyond the standard model is discussed in a general way, along with some topics in supersymmetric theories. Recent possibilities arising in the leptonic sector are then addressed. Finally, models with SU(3)_C × SU(2)_L × U(1)_Y symmetry are considered as alternatives for extensions of the standard model of elementary particles. 36 refs., 1 fig., 4 tabs.
FORECASTING MODELS IN MANAGEMENT
Sindelar, Jiri
2008-01-01
This article deals with the problems of forecasting models. The first part of the article is dedicated to the definition of the relevant areas (the vertical and horizontal pillars of the definition), and then the forecasting model itself is defined; as the article presents the theoretical background for further primary research, this definition is crucial. Finally, the position of forecasting models within the management system is identified. The paper is part of the outputs of FEM CULS grant no. 1312/11/3121.
Arnold, Konstantin; Kiefer, Florian; Kopp, Jürgen; Battey, James N. D.; Podvinec, Michael; Westbrook, John D.; Berman, Helen M.; Bordoli, Lorenza; Schwede, Torsten
2008-01-01
Structural Genomics has been successful in determining the structures of many unique proteins in a high throughput manner. Still, the number of known protein sequences is much larger than the number of experimentally solved protein structures. Homology (or comparative) modeling methods make use of experimental protein structures to build models for evolutionary related proteins. Thereby, experimental structure determination efforts and homology modeling complement each other in the exploratio...
Vibroacoustic Skin Diagnostics Modeling
Directory of Open Access Journals (Sweden)
Svetlana М. Yatsun
2013-01-01
Full Text Available The article deals with the mathematical modeling of the diagnostics of a complex heterogeneous biological structure (skin) using a non-destructive control method. A mathematical model describing the interaction of the material with an electrodynamic vibration generator and a sensor system that monitors the propagation of small disturbances was developed. The influence of the material model parameters on the spectrum of the propagating surface disturbance was studied.
Arnaoudova, Kristina; Stanchev, Peter
2015-11-01
Business processes are a key asset for every organization. The design of business process models is a foremost concern and target among an organization's functions. Business processes and their proper management depend heavily on the performance of software applications and technology solutions. The paper attempts to define a new conceptual model of an IT service provider, which can be viewed as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.
Mavromatakis, F.; Franghiadakis, Y.; Vignola, F.
2016-01-01
A robust and reliable model describing the power produced by a photovoltaic system is needed in order to detect module failures, inverter malfunction, shadowing effects and other factors that may result in energy losses. In addition, a reliable model enables an investor to perform accurate estimates of the system's energy production, payback times, etc. The model utilizes the global irradiance reaching the plane of the photovoltaic modules since in almost all Photovoltaic (PV) facilit...
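A PV performance model of the kind the abstract describes can be sketched very simply. The snippet below is an illustrative toy, not the authors' model: the rated power, temperature coefficient, and lumped derate factor are all hypothetical values, and the plane-of-array irradiance is taken as given.

```python
# Toy PV power model: rated power scaled by irradiance, cell
# temperature, and a lumped derate factor.  All parameter values
# here are hypothetical, for illustration only.

def pv_power(g_poa, t_cell, p_rated=5000.0, g_ref=1000.0,
             gamma=-0.004, derate=0.90):
    """Estimate output power (W) from plane-of-array irradiance (W/m^2).

    gamma is a power temperature coefficient (1/degC) referenced to a
    25 degC cell temperature; derate lumps soiling, wiring and
    inverter losses into one factor.
    """
    temp_factor = 1.0 + gamma * (t_cell - 25.0)
    return p_rated * (g_poa / g_ref) * temp_factor * derate

# At reference irradiance and 25 degC, the model returns the derated rating.
print(pv_power(1000.0, 25.0))  # 4500.0
```

Comparing such a predicted power against the measured output is what allows failures or shading to show up as a persistent deficit.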
Xie, Qiong-Tao; Cui, Shuai; Cao, Jun-Peng; Amico, Luigi; Fan, Heng
2014-01-01
We define the anisotropic Rabi model as the generalization of the spin-boson Rabi model: The Hamiltonian system breaks the parity symmetry; the rotating and counterrotating interactions are governed by two different coupling constants; a further parameter introduces a phase factor in the counterrotating terms. The exact energy spectrum and eigenstates of the generalized model are worked out. The solution is obtained as an elaboration of a recently proposed method for the isotropic limit of th...
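The structure described in the abstract (parity breaking, two coupling constants, and a phase on the counterrotating terms) suggests a Hamiltonian of the following form; this is a plausible reading of the abstract, not a quotation of the paper's exact notation:

```latex
H = \omega\, a^{\dagger}a + \frac{\Delta}{2}\,\sigma_z
  + g_1\!\left(a\,\sigma_+ + a^{\dagger}\sigma_-\right)
  + g_2\!\left(e^{i\varphi}\,a\,\sigma_- + e^{-i\varphi}\,a^{\dagger}\sigma_+\right)
```

Here $g_1$ governs the rotating (excitation-conserving) terms, $g_2$ the counterrotating ones, and $\varphi$ is the additional phase; setting $g_1 = g_2$ and $\varphi = 0$ recovers the isotropic Rabi model.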
Energy Technology Data Exchange (ETDEWEB)
Young, Michael F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-07-01
Aerosol particles that deposit on surfaces may be subsequently resuspended by air flowing over the surface. A review of models for this liftoff process is presented and compared to available data. Based on this review, a model that agrees with existing data and is readily computed is presented for incorporation into a system-level code such as MELCOR.
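The "readily computed" requirement typically means a rate-law formulation. The sketch below is a generic illustrative toy, not the model from the report: it assumes a hypothetical threshold friction velocity below which no liftoff occurs and a made-up power-law rate constant above it.

```python
# Toy exponential resuspension model.  The threshold friction
# velocity, the power-law rate constant, and the exponent are all
# hypothetical; the MELCOR-oriented model in the report may differ.

import math

def resuspension_rate(u_star, u_star_threshold=0.2, k=0.5):
    """Resuspension rate constant (1/s): zero below the liftoff
    threshold, growing with the excess friction velocity above it."""
    if u_star <= u_star_threshold:
        return 0.0
    return k * (u_star - u_star_threshold) ** 2

def surface_inventory(m0, u_star, t):
    """Deposited mass remaining after air flows over the surface for t seconds."""
    return m0 * math.exp(-resuspension_rate(u_star) * t)
```

A rate law like this is cheap enough to evaluate inside each time step of a system-level code, which is the stated design constraint.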
Bootstrapping pronunciation models
CSIR Research Space (South Africa)
Davel, M
2006-07-01
Full Text Available ...-scarce language. During the procedure known as 'bootstrapping', a model is improved iteratively via a controlled series of increments, at each stage using the previous model to generate the next. This self-improving circularity distinguishes bootstrapping...-to-phoneme rules (the second representation) can be used to identify possible errors that require re-verification. In contrast, during the bootstrapping of acoustic models for speech recognition, both representations are amenable to automated analysis...
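The iterative loop described in the abstract can be sketched in miniature. The snippet below is a toy illustration only: the "phonemes" (uppercased letters), the oracle standing in for human verification, and the word list are all invented, and a real bootstrapping system would learn context-dependent rules rather than a single phoneme per letter.

```python
# Toy bootstrapping loop for a pronunciation dictionary: train a
# model on the verified entries, predict the next batch, verify the
# predictions, and feed the corrected entries back into training.

from collections import Counter, defaultdict

def oracle(word):
    """Stand-in for human verification: here, uppercasing each letter."""
    return [ch.upper() for ch in word]

def train(verified):
    """Learn the most frequent 'phoneme' per letter from verified entries."""
    votes = defaultdict(Counter)
    for word, phones in verified.items():
        for ch, ph in zip(word, phones):
            votes[ch][ph] += 1
    return {ch: c.most_common(1)[0][0] for ch, c in votes.items()}

def bootstrap(words, seed, batch_size=2):
    verified = dict(seed)                      # small verified seed lexicon
    remaining = [w for w in words if w not in verified]
    while remaining:
        model = train(verified)                # previous model...
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for word in batch:
            guess = [model.get(ch, '?') for ch in word]
            truth = oracle(word)               # ...generates the next batch,
            verified[word] = truth             # which is verified and fed back
    return verified

lexicon = bootstrap(["cab", "act", "tact"], {"cat": ["C", "A", "T"]})
```

Each pass enlarges the verified set, so later predictions are made by a strictly better-informed model, which is the self-improving circularity the abstract refers to.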
Казыдуб, Надежда
2013-01-01
Discourse space is a complex structure that incorporates different levels and dimensions. The paper focuses on developing a multidisciplinary approach that is congruent to the complex character of the modern discourse. Two models of discourse space are proposed here. The Integrated Model reveals the interaction of different categorical mechanisms in the construction of the discourse space. The Evolutionary Model describes the historical roots of the modern discourse. It also reveals historica...
International Nuclear Information System (INIS)
Wilczek, F.
1993-01-01
The standard model of particle physics is highly successful, although it is obviously not a complete or final theory. In this presentation the author argues that the structure of the standard model gives some quite concrete, compelling hints regarding what lies beyond. Essentially, this presentation is a record of the author's own judgement of what the central clues for physics beyond the standard model are, and also it is an attempt at some pedagogy. 14 refs., 6 figs
Modeling multiphase materials processes
Iguchi, Manabu
2010-01-01
""Modeling Multiphase Materials Processes: Gas-Liquid Systems"" describes the methodology and application of physical and mathematical modeling to multi-phase flow phenomena in materials processing. The book focuses on systems involving gas-liquid interaction, the most prevalent in current metallurgical processes. The performance characteristics of these processes are largely dependent on transport phenomena. This volume covers the inherent characteristics that complicate the modeling of transport phenomena in such systems, including complex multiphase structure, intense turbulence, opacity of
Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department
2017-06-22
This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.
Generalized Nonlinear Yule Models
Lansky, Petr; Polito, Federico; Sacerdote, Laura
2016-01-01
With the aim of considering models with persistent memory, we propose a fractional nonlinear modification of the classical Yule model, often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth...
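The classical linear process that the paper generalizes can be simulated in a few lines. The sketch below shows only the baseline linear preferential-attachment growth of in-links (the fractional nonlinear birth process of the paper is beyond this toy); the page count and link count are arbitrary.

```python
# Linear preferential attachment for webpage in-links: each new link
# points to page i with probability proportional to (inlinks[i] + 1),
# i.e. a linear birth rate in the current in-degree.

import random

def grow_inlinks(n_pages=5, n_links=1000, seed=42):
    random.seed(seed)
    inlinks = [0] * n_pages
    for _ in range(n_links):
        weights = [k + 1 for k in inlinks]   # linear birth rates
        target = random.choices(range(n_pages), weights=weights)[0]
        inlinks[target] += 1
    return inlinks

counts = grow_inlinks()
print(sum(counts))  # 1000
```

The rich-get-richer weighting is what produces the heavy-tailed in-degree distributions observed on the Web; the paper's fractional modification additionally introduces memory into the waiting times between births.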
Energy Technology Data Exchange (ETDEWEB)
Bergen, Benjamin Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-07
This is the PDF of a PowerPoint presentation from a teleconference on Los Alamos programming models. It starts by listing the assumptions for the programming models and then details a hierarchical programming model at the system level and the node level. It then details how to map this to their internal nomenclature. Finally, a list is given of what they are currently doing in this regard.