Tengiz Mdzinarishvili
2009-12-01
A simple, computationally efficient procedure for analyzing the time period and birth cohort effects on the distribution of the age-specific incidence rates of cancers is proposed. Assuming that cohort effects for neighboring cohorts are almost equal and using the log-linear age-period-cohort model, this procedure allows one to evaluate temporal trends and birth cohort variations of any type of cancer without prior knowledge of the hazard function. This procedure was used to estimate the influence of time period and birth cohort effects on the distribution of the age-specific incidence rates of first primary, microscopically confirmed lung cancer (LC) cases from the SEER9 database. It was shown that since 1975, the time period effect coefficients for men increase up to 1980 and then decrease until 2004. For women, these coefficients increase from 1975 up to 1990 and then remain nearly constant. The LC birth cohort effect coefficients for men and women increase from the cohort of 1890–94 until the cohort of 1925–29, then decrease until the cohort of 1950–54, and thereafter remain almost unchanged. Overall, LC incidence rates, adjusted by period and cohort effects, increase up to the age of about 72–75, turn over, and then fall after the age of 75–78. The peak of the adjusted rates in men is around the age of 77–78, while in women it is around the age of 72–73. These results therefore suggest that the age distribution of the incidence rates falls at old ages in both men and women.
Tengiz Mdzinarishvili
2010-04-01
An efficient computing procedure for estimating the age-specific hazard functions by the log-linear age-period-cohort (LLAPC) model is proposed. This procedure accounts for the influence of time period and birth cohort effects on the distribution of age-specific cancer incidence rates and estimates the hazard function for populations with different exposures to a given categorical risk factor. For these populations, the ratio of the corresponding age-specific hazard functions is proposed for use as a measure of relative hazard. This procedure was used for estimating the risks of lung cancer (LC) for populations living in different geographical areas. For this purpose, the LC incidence rates in white men and women in three geographical areas (San Francisco-Oakland, Connecticut and Detroit), collected from the SEER 9 database during 1975–2004, were utilized. It was found that in white men the averaged relative hazard (an average of the relative hazards over all ages) of LC in Connecticut vs. San Francisco-Oakland is 1.31 ± 0.02, while in Detroit vs. San Francisco-Oakland this averaged relative hazard is 1.53 ± 0.02. In white women, the analogous relative hazards in Connecticut vs. San Francisco-Oakland and Detroit vs. San Francisco-Oakland are 1.22 ± 0.02 and 1.32 ± 0.02, respectively. The proposed computing procedure can be used for assessing hazard functions for other categorical risk factors, such as gender, race, lifestyle, diet and obesity.
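The relative hazard used above is just the ratio of the two areas' age-specific rates, averaged over age groups. A minimal numpy sketch of that calculation; the rate values below are illustrative placeholders, not SEER data, and the simple mean-with-standard-error summary is an assumption about how the averaging is done:

```python
import numpy as np

# Hypothetical age-specific LC incidence rates (per 100,000) for two areas,
# one value per 5-year age group; numbers are illustrative, not SEER data.
rates_connecticut = np.array([5.0, 12.0, 30.0, 62.0, 110.0, 160.0])
rates_sf_oakland = np.array([4.0, 9.5, 23.0, 48.0, 85.0, 120.0])

# Relative hazard at each age = ratio of the age-specific rates.
relative_hazard = rates_connecticut / rates_sf_oakland

# Averaged relative hazard = mean of the age-specific ratios,
# with its standard error across age groups.
avg_rh = relative_hazard.mean()
se_rh = relative_hazard.std(ddof=1) / np.sqrt(relative_hazard.size)
```

With these made-up rates the averaged relative hazard comes out above 1, i.e. a uniformly higher hazard in the first area across ages.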
Bayesian Age-Period-Cohort Modeling and Prediction - BAMP
Volker J. Schmid
2007-10-01
The software package BAMP provides a method for analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model at the first stage. Random walks of first and second order, with and without an additional unstructured component, are available as smoothing priors for the age, period and cohort parameters. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
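The second-order random walk prior mentioned here penalizes deviations from local linear trends in the age, period or cohort effects. A minimal numpy sketch of the RW2 penalty (structure) matrix only; this is not BAMP's code, and the parameter count is an arbitrary illustrative choice:

```python
import numpy as np

n = 8  # number of age (or period/cohort) parameters; illustrative size

# Second-order difference matrix: each row computes x[i] - 2*x[i+1] + x[i+2].
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

# RW2 prior structure matrix: K = D2' D2. The (improper) prior density
# is proportional to exp(-0.5 * kappa * x' K x).
K = D2.T @ D2

# A linear trend incurs zero penalty: the RW2 prior leaves the level and
# slope of the effects unpenalized (which is why extra constraints are
# needed for identifiability).
linear = np.arange(n, dtype=float)
penalty = linear @ K @ linear
```

The rank deficiency of K (rank n − 2) is exactly the unpenalized level-and-slope null space.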
Decomposable log-linear models
Eriksen, Poul Svante
can be characterized by a structured set of conditional independencies between some variables given some other variables. We term the new model class decomposable log-linear models, which is illustrated to be a much richer class than decomposable graphical models. It covers a wide range of non...
Age-period-cohort modelling of breast cancer incidence in the Nordic countries
Rostgaard, K; Vaeth, M; Holst, H
2001-01-01
The Nordic countries have experienced a steady increase in breast cancer incidence throughout the past 35 years. We analysed the incidence in Denmark, Finland, Norway and Sweden during the period 1958 to 1992 using age-period-cohort models and taking the systematic mammography screening into account. Assuming the age dependency of the incidence pattern in old age to be common for the Nordic countries, an internal comparison could be made among the four countries of the cohort effects and the period effects. The study indicated that the period effects have been of importance for the increase in breast cancer incidence seen in the Nordic countries. The widespread practice of neglecting the period effects in age-period-cohort analysis of time trends in breast cancer incidence therefore probably needs reconsideration. A key finding was that Danish women born in the 20th century seem to have been...
Age-period-cohort models using smoothing splines: a generalized additive model approach.
Jiang, Bei; Carriere, Keumhee C
2014-02-20
Age-period-cohort (APC) models are used to analyze temporal trends in disease or mortality rates, dealing with linear dependency among associated effects of age, period, and cohort. However, the nature of sparseness in such data has severely limited the use of APC models. To deal with these practical limitations and issues, we advocate cubic smoothing splines. We show that the methods of estimable functions proposed in the framework of generalized linear models can still be considered to solve the non-identifiability problem when the model fitting is within the framework of generalized additive models with cubic smoothing splines. Through simulation studies, we evaluate the performance of the cubic smoothing splines in terms of the mean squared errors of estimable functions. Our results support the use of cubic smoothing splines for APC modeling with sparse but unaggregated data from a Lexis diagram.
Patterns of lung cancer mortality in 23 countries: Application of the Age-Period-Cohort model
Huang Yi-Chia
2005-03-01
Background: Smoking habits do not seem to be the main explanation of the epidemiological characteristics of female lung cancer mortality in Asian countries. However, Asian countries are often excluded from studies of geographical differences in trends for lung cancer mortality. We thus examined lung cancer trends from 1971 to 1995 among men and women for 23 countries, including four in Asia. Methods: International and national data were used to analyze lung cancer mortality from 1971 to 1995 in both sexes. Age-standardized mortality rates (ASMR) were analyzed in five consecutive five-year periods and for each five-year age group in the age range 30 to 79. The age-period-cohort (APC) model was used to estimate the period effect (adjusted for age and cohort effects) for mortality from lung cancer. Results: The sex ratio of the ASMR for lung cancer was lower in Asian countries, while the sex ratio of smoking prevalence was higher in Asian countries. The mean values of the sex ratio of the ASMR from lung cancer in Taiwan, Hong Kong, Singapore, and Japan for the five 5-year periods were 2.10, 2.39, 3.07, and 3.55, respectively. These values not only remained quite constant over each five-year period, but were also lower than those seen in Western countries. The period effect for lung cancer mortality, as derived for the 23 countries from the APC model, could be classified into seven patterns. Conclusion: Period effects for both men and women in the 23 countries, as derived using the APC model, could be classified into seven patterns. The four Asian countries have a relatively low sex ratio in lung cancer mortality and a relatively high sex ratio in smoking prevalence. Factors other than smoking might be important, especially for women in Asian countries.
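The age-standardized mortality rates used in this and several other studies above come from direct standardization: each age-specific rate is weighted by a standard population share and the weighted rates are summed. A minimal sketch with hypothetical counts and weights (the weights below stand in for shares such as the WHO world standard population, and every number is illustrative):

```python
import numpy as np

# Hypothetical lung cancer deaths and person-years for one country and
# period, in 5-year age groups 30-34 ... 75-79 (10 groups). Illustrative.
deaths = np.array([3, 6, 14, 30, 60, 105, 160, 210, 240, 250], dtype=float)
person_years = np.array([5.1, 5.0, 4.8, 4.5, 4.1, 3.6, 3.0, 2.4, 1.8, 1.2]) * 1e5

# Standard population weights for the same age groups (sum to 1);
# these stand in for e.g. WHO world standard population shares.
std_weights = np.array([0.16, 0.15, 0.14, 0.12, 0.11, 0.09, 0.08, 0.06, 0.05, 0.04])

age_specific_rates = deaths / person_years * 1e5  # per 100,000
asmr = np.sum(age_specific_rates * std_weights)   # age-standardized mortality rate
```

Because the same weights are applied to every population, ASMRs (and their sex ratios) are comparable across countries and periods despite differing age structures.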
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
Su Shih-Yung
2013-01-01
Background: Cervical cancer is the most common cancer experienced by women worldwide; however, screening techniques are very effective for reducing the risk of death. The national cervical cancer screening program was implemented in Taiwan in 1995. The objective of this study was to examine and provide evidence of the cervical cancer mortality trends for the periods before and after the screening program was implemented. Methods: Data on registered causes of death from 1981 to 2010 were obtained from the Department of Health, Taiwan. Age-standardized mortality rates, age-specific rates, and age-period-cohort models employing the sequential method were used to assess temporal changes between 1981 and 2010, with 1995 used as the separating year. Results: For both time periods, 1981 to 1995 and 1996 to 2010, age and period had significant effects, whereas the birth cohort effects were insignificant. For patients between 80 and 84 years of age, the mortality rates for 1981 to 1995 and 1996 to 2010 were 48.34 and 68.08, respectively. Relative to 1981 to 1995, the cervical cancer mortality rate for 1996 to 2010 was 1.0 for patients between 75 and 79 years of age and 1.4 for patients between 80 and 84 years of age. Regarding the period effect, the mortality trend decreased 2-fold from 1996 to 2010. Conclusions: The results of this study indicate a decline in cervical cancer mortality trends after the screening program involving Papanicolaou tests was implemented in 1995. However, the positive effects of the screening program were not observed in elderly women because of treatment delays during the initial implementation of the screening program.
MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES
ZHAO Lincheng; ZHANG Hong
2003-01-01
In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear models of contingency tables under multinomial sampling and establish the strong consistency of the method under some mild conditions. An exponential bound on the probability of missed detection is also obtained. The selection procedure is modified so that it can be used in practice; simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.
Modelling regional variation of first-time births in Denmark 1980-1994 by an age-period-cohort model
Thygesen, L. C.; Knudsen, Lisbeth B.; Keiding, N.
2005-01-01
-time births in Denmark. From the Fertility of Women and Couples Dataset we obtain data on number of births by nulliparous women by year (1980-1994), age (15-45) and county of residence. We show that the APC-model describes the fertility rates of nulliparous women satisfactorily. To catch the regional...
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Marinaccio, Alessandro; Montanaro, Fabio; Mastrantonio, Marina; Uccelli, Raffaella; Altavista, Pierluigi; Nesti, Massimo; Costantini, Adele Seniori; Gorini, Giuseppe
2005-05-20
Italy was the second main asbestos producer in Europe, after the Soviet Union, until the end of the 1980s, and raw asbestos was imported on a large scale until 1992. The Italian pattern of asbestos consumption lags on average about 10 years behind the United States, Australia, the United Kingdom and the Nordic countries. Measures to reduce exposure were introduced in the mid-1970s in some workplaces. In 1986, limitations were imposed on the use of crocidolite and in 1992 asbestos was definitively banned. We have used primary pleural cancer mortality figures (1970-1999) to predict mortality from mesothelioma among Italian men in the next 30 years by age-cohort-period models and by a model based on asbestos consumption figures. The pleural cancer/mesothelioma ratio and mesothelioma misdiagnosis in the past were taken into account in the analysis. Estimated risks of birth cohorts born after 1945 decrease less quickly in Italy than in other Western countries. The findings predict a peak with about 800 mesothelioma annual deaths in the period 2012-2024. Results estimated using age-period-cohort models were similar to those obtained from the asbestos consumption model.
A log-linear multidimensional Rasch model for capture-recapture.
Pelle, E; Hessen, D J; van der Heijden, P G M
2016-02-20
In this paper, a log-linear multidimensional Rasch model is proposed for capture-recapture analysis of registration data. In the model, heterogeneity of capture probabilities is taken into account, and registrations are viewed as dichotomously scored indicators of one or more latent variables that can account for correlations among registrations. It is shown how the probability of a generic capture profile is expressed under the log-linear multidimensional Rasch model and how the parameters of the traditional log-linear model are derived from those of the log-linear multidimensional Rasch model. Finally, an application of the model to neural tube defects data is presented.
A note on adding and deleting edges in hierarchical log-linear models
Edwards, David
2012-01-01
The operations of edge addition and deletion for hierarchical log-linear models are defined, and polynomial-time algorithms for the operations are given.
Korn, E L
1978-08-01
This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see whether a sampled table belongs to that model will not have the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power, and some specific cases will be examined.
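The claim that independence is preserved by misclassification can be sketched numerically: if rows and columns are misclassified independently, the observed probability table is A P Bᵀ for misclassification matrices A and B, and a rank-1 (independent) table stays rank 1. All matrices below are made-up illustrations of a Bross-style error structure, not values from the thesis:

```python
import numpy as np

# True cell probabilities under independence: a rank-1 outer product.
row_p = np.array([0.2, 0.3, 0.5])
col_p = np.array([0.4, 0.6])
P = np.outer(row_p, col_p)  # 3x2 table, rows x columns

# Row and column misclassification matrices (Bross-style):
# A[i, k] = P(observed row i | true row k); each column sums to 1.
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.10],
              [0.05, 0.05, 0.85]])
B = np.array([[0.95, 0.10],
              [0.05, 0.90]])

# Observed (misclassified) table: still a valid probability table...
P_obs = A @ P @ B.T

# ...and still rank 1, i.e. independence is preserved by misclassification.
rank = np.linalg.matrix_rank(P_obs)
```

The algebra behind the check: P = r cᵀ implies A P Bᵀ = (A r)(B c)ᵀ, again an outer product, so the independence model is closed under this error structure.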
A comparison of statistical selection strategies for univariate and bivariate log-linear models.
Moses, Tim; Holland, Paul W
2010-11-01
In this study, eight statistical selection strategies were evaluated for selecting the parameterizations of log-linear models used to model the distributions of psychometric tests. The selection strategies included significance tests based on four chi-squared statistics (likelihood ratio, Pearson, Freeman-Tukey, and Cressie-Read) and four additional strategies (Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and a measure attributed to Goodman). The strategies were evaluated in simulations for different log-linear models of univariate and bivariate test-score distributions and two sample sizes. Results showed that all eight selection strategies were most accurate for the largest sample size considered. For univariate distributions, the AIC selection strategy was especially accurate for selecting the correct parameterization of a complex log-linear model and the likelihood ratio chi-squared selection strategy was the most accurate strategy for selecting the correct parameterization of a relatively simple log-linear model. For bivariate distributions, the likelihood ratio chi-squared, Freeman-Tukey chi-squared, BIC, and CAIC selection strategies had similarly high selection accuracies.
Exact Hypothesis Tests for Log-linear Models with exactLoglinTest
Brian Caffo
2006-11-01
This manuscript provides an overview of exact testing of goodness of fit for log-linear models using the R package exactLoglinTest. This package evaluates model fit for Poisson log-linear models by conditioning on minimal sufficient statistics to remove nuisance parameters. A Monte Carlo algorithm is proposed to estimate P values from the resulting conditional distribution. In particular, this package implements a sequentially rounded normal approximation and importance sampling to approximate probabilities from the conditional distribution. Usually, this results in a high percentage of valid samples. However, in instances where this is not the case, a Metropolis-Hastings algorithm can be implemented that makes more localized jumps within the reference set. The manuscript details how some conditional tests for binomial logit models can also be viewed as conditional Poisson log-linear models and hence can be performed via exactLoglinTest. A diverse battery of examples is considered to highlight the use, features and extensions of the software. Notably, potential extensions to evaluating disclosure risk are also considered.
Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm
Zhehuang Huang
2015-01-01
The artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
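A log-linear selection rule of the kind described here amounts to a softmax over behavior scores: each behavior's probability is proportional to exp(w · f). A minimal sketch under assumed features and weights; the behavior names, feature meanings and numbers are all hypothetical, not taken from the paper:

```python
import numpy as np

def select_probabilities(features, weights):
    """Log-linear (softmax) selection: P(behavior b) is proportional to
    exp(w . f(b)). `features` has shape (n_behaviors, n_features)."""
    scores = features @ weights
    scores = scores - scores.max()  # subtract max for numerical stability
    exps = np.exp(scores)
    return exps / exps.sum()

# Hypothetical features for three fish behaviors (prey, swarm, follow);
# columns might encode local food density and crowding. Illustrative only.
features = np.array([[1.5, 0.2],
                     [0.4, 1.0],
                     [0.8, 0.8]])
weights = np.array([1.0, 0.5])  # assumed (e.g. learned) feature weights

probs = select_probabilities(features, weights)
```

Sampling a behavior from `probs` (e.g. with `np.random.choice(3, p=probs)`) then gives a stochastic selection that favors, but does not always pick, the highest-scoring behavior.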
Frenk, Steven M; Yang, Yang Claire; Land, Kenneth C
2013-01-01
In recently developed hierarchical age-period-cohort (HAPC) models, inferential questions arise: How can one assess or judge the significance of estimates of individual cohort and period effects in such models? And how does one assess the overall statistical significance of the cohort and/or the period effects? Beyond statistical significance is the question of substantive significance. This paper addresses these questions. In the context of empirical applications of linear and generalized linear mixed-model specifications of HAPC models using data on verbal test scores and voter turnout in U.S. presidential elections, respectively, we describe a two-step approach and a set of guidelines for assessing statistical significance. The guidelines include assessments of patterns of effects and statistical tests both for the effects of individual cohorts and time periods as well as for entire sets of cohorts and periods. The empirical applications show strong evidence that trends in verbal test scores are primarily cohort driven, while voter turnout is primarily a period phenomenon.
Anderson, R W G; Searson, D J
2015-02-01
A novel application of age-period-cohort methods is used to explain changes in vehicle-based crash rates in New South Wales, Australia over the period 2003-2010. Models are developed using vehicle age, crash period and vehicle cohort to explain changes in the rate of single-vehicle driver fatalities and injuries in vehicles less than 13 years of age. Large declines in risk are associated with vehicle cohorts built after about 1996. The decline in risk appears to have accelerated to 12 percent per vehicle cohort year for cohorts since 2004. Within each cohort, the risk of crashing appears to reach a minimum at two years of age and to increase as the vehicle ages beyond this. Period effects (i.e., other road safety measures) between 2003 and 2010 appear to have contributed declines of up to about two percent per annum to the driver-fatality single-vehicle crash rate, and possibly only negligible improvements to the driver-injury single-vehicle crash rate. Vehicle improvements appear to have been responsible for a decline in per-vehicle crash risk of at least three percent per calendar year for both severity levels over the same period. Given the decline in risk associated with more recent vehicle cohorts and the dynamics of fleet turnover, continued declines in per-vehicle crash risk over coming years are almost certain.
SPI drought class prediction using log-linear models applied to wet and dry seasons
Moreira, Elsa E.
2016-08-01
Log-linear modelling for 3-dimensional contingency tables was used with categorical time series of SPI drought class transitions for the prediction of monthly drought severity. Standardized Precipitation Index (SPI) time series at 12- and 6-month time scales were computed for 10 precipitation time series from GPCC datasets with 2.5° spatial resolution located over Portugal, each 112 years long (1902-2014). The aim was to model two-month-step class transitions for the wet and dry seasons of the year and then obtain probability ratios (odds), as well as their respective confidence intervals, to estimate how probable a transition is compared with another. The prediction results produced by the modelling applied to the wet and dry seasons separately, for the 6- and the 12-month SPI time scales, were compared with the results produced by the same modelling without the split, using skill scores computed over the entire time series length. Results point to good prediction performances, ranging from 70 to 80% in the percentage of corrects (PC) and 50-70% in the Heidke skill score (HSS), with the highest scores obtained when the modelling is applied to the SPI12. The wet/dry season split introduced in the modelling brought improvements in the predictions of about 0.9-4% in the PC and 1.3-6.8% in the HSS, with the highest improvements obtained in the SPI6 application.
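The odds mentioned above compare how likely one class transition is relative to another; from a table of transition counts, an odds value and a 95% confidence interval on the log scale can be computed directly. A minimal sketch with entirely hypothetical counts (the class labels and numbers are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical counts of two-month SPI drought class transitions in the dry
# season: rows = class at month t, columns = class at month t+2
# (classes: non-drought, moderate, severe). Counts are illustrative.
transitions = np.array([[80, 15, 5],
                        [20, 40, 10],
                        [4, 12, 24]])

# Odds that "moderate" persists as "moderate" rather than recovering to
# "non-drought": the ratio of the two transition counts.
odds = transitions[1, 1] / transitions[1, 0]

# 95% confidence interval via the log scale (standard error of a log odds
# ratio is sqrt of the summed reciprocal counts).
log_se = np.sqrt(1 / transitions[1, 1] + 1 / transitions[1, 0])
ci_low, ci_high = np.exp(np.log(odds) + np.array([-1.96, 1.96]) * log_se)
```

An odds value above 1 with a confidence interval excluding 1 would indicate that persistence in the moderate class is significantly more likely than recovery.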
TENVERGERT, E; GILLESPIE, M; KINGMA, J
1993-01-01
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total score
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Voorhees, Richard A.
Log-linear modeling was employed to explore the conceptual relationships among community college student persistence and nine variables, including student demographics, purpose for enrolling, intentions to return, frequency of informal interaction with faculty, and satisfaction with the institution in general. The study sample was 369 new and…
Jhun, Hyung-Joon; Kim, Ho; Cho, Sung-Il
2011-05-01
We examined the time trend and age-period-cohort effects on acute myocardial infarction (AMI) mortality in Korean adults from 1988 to 2007. Annual AMI mortality data and population statistics from 1988 to 2007 were obtained from the STATISTICS KOREA website. Age-adjusted mortality for four 5-yr calendar periods (1988-1992 to 2003-2007) was calculated by direct standardization using the Year 2000 WHO world standard population. A log-linear Poisson regression model was used to estimate age, period, and cohort effects on AMI mortality. In both genders, age-adjusted AMI mortality increased from period one (1988-1992) to period three (1998-2002) but decreased in period four (2003-2007). An exponential age effect was noted in both genders. The rate ratio of the cohort effect increased up to the 1943 birth cohort and decreased gradually thereafter, and the rate ratio of the period effect increased up to period three (1998-2002) and decreased thereafter. Our results suggest that AMI mortality in Korean adults has decreased since the period 1998-2002 and that age, period, and cohort effects have influenced AMI mortality.
Rosenbaum Peter L
2006-10-01
Background: In this paper we compare the results of an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods: The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results: The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion: No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the structural equation modeling (SEM) results by the log-linear modeling (LLM) provided some reassurance that the SEM had been adequately specified and that it broadly fitted the data.
Farooq Ahmad
2006-01-01
This is a cross-sectional study based on 304 households (couples with wives aged less than 48 years) chosen from an urban locality (the city of Lahore). Fourteen religious, demographic and socio-economic factors of a categorical nature, such as husband's education, wife's education, husband's monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religious decision-making, were taken to understand acceptance of family planning. Multivariate log-linear analysis was applied to identify association patterns and interrelationships among the factors. The logit model was applied to explore the relationship between the predictor factors and the dependent factor, and to identify the factors on which acceptance of family planning depends most strongly. The log-linear analysis demonstrated that preference for contraceptive use was consistently associated with the factors husband-wife discussion, desire for more children, number of children, political view and duration of married life, while husband's monthly income, occupation of husband, age of wife at marriage and offering of prayers provided no statistical explanation of the adoption of family planning methods.
Age-period-cohort analyses of obesity prevalence in US adults.
An, R; Xiang, X
2016-12-01
Age-period-cohort analysis is a stream of methodologies that decompose the temporal trends in disease risk into three time scales: age, calendar year (period) and year of birth (cohort). This study conducted age-period-cohort analyses of obesity prevalence in US adults using retrospective data analysis. We constructed regression models based on anthropometric data from the 1999-2012 National Health and Nutrition Examination Survey to correct for the self-reported height/weight in the 1984-2014 Behavioral Risk Factor Surveillance System (BRFSS). We estimated fixed-effects age-period-cohort models based on the BRFSS data for the overall adult sample (n = 6,093,293) and by sex and race/ethnicity, adjusting for individual characteristics and the BRFSS survey design. An inverted U-shaped age effect on obesity and a positive period effect, characterized by an over-time increase in obesity risk independent of age and cohort influences, were identified in the overall sample and in subgroups by sex and race/ethnicity. From 1984 to 2014, the adjusted obesity prevalence increased by 21.1 percentage points among US adults, and by 20.9, 21.6, 21.0, 26.4 and 20.1 percentage points in men, women, non-Hispanic whites, African Americans and Hispanics, respectively. In contrast, no consistent evidence was found in support of a cohort effect: the adjusted obesity risk was comparable across birth cohorts after accounting for the age and period effects. Shifts in the age distribution and nationwide secular changes may have fuelled the obesity epidemic in the USA over the past decades. Reversing the obesity epidemic may require understanding the nationwide changes over time that affect weight gain across all population subgroups and promoting universal changes to diet, physical activity and the obesogenic environment.
Empirical Bayes Age-Period-Cohort Analysis of Retrospective Incidence Data
Ogata, Yosihiko; Katsura, Koichi; Keiding, Niels;
2000-01-01
ABIC, age-period-cohort decomposition, anisotropic smoothness prior, B-spline, detection rate, diabetes incidence, integrated likelihood, intensity function, Lexis diagram, random deletion
J.R. González
2002-06-01
Age-period-cohort models are usually used in descriptive epidemiological studies to analyze time trends in incidence or mortality. The exact linear relationship between the three effects of these models makes the parameters of the full model impossible to estimate, which is called non-identifiability. In these notes, two of the most frequently used methods to analyze age-period-cohort models are explained: one is based on penalty functions and the other on estimable functions (drift and curvatures, or deviations from linearity). Both methods are illustrated with two examples in which temporal trends of breast and lung cancer mortality in women from Catalonia (Spain) are studied. These examples show how the methods based on penalty functions tend to attribute the trend exclusively to a cohort effect; consequently, the use of methods based on estimable functions is recommended.
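The non-identifiability discussed above comes from the exact linear relation cohort = period − age. A minimal numeric sketch (toy grid, not the Catalonia data) makes the rank deficiency visible:

```python
import numpy as np

# Toy illustration: in any age-period-cohort table, cohort = period - age,
# so the three linear terms are exactly collinear and the full model with
# linear age, period and cohort effects cannot be estimated.
ages = np.arange(20, 80, 5)          # age groups
periods = np.arange(1985, 2015, 5)   # calendar periods
A, P = np.meshgrid(ages, periods, indexing="ij")
C = P - A                            # birth cohort, by definition

# Design matrix with intercept plus linear age, period and cohort terms
X = np.column_stack([np.ones(A.size), A.ravel(), P.ravel(), C.ravel()])
rank = np.linalg.matrix_rank(X)
print(rank)   # 3, not 4: one linear combination is redundant
```

Estimable functions such as drift and curvatures are exactly those combinations of coefficients that do not depend on which identifying constraint is imposed on this rank-deficient matrix.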
Suicide Mortality in Canada and Quebec, 1926-2008: An Age-Period-Cohort Analysis
Lise Thibodeau
2015-09-01
The rise of suicide rates with age remained consistent for more than 150 years, but over the last 50 years major changes have occurred. We examined age-period-cohort (APC) effects on suicide mortality rates by gender in Canada and in Quebec from 1926 to 2008. Durkheim's theoretical framework is used to interpret our findings. Descriptive analysis and APC models based on the Intrinsic Estimator (IE) were used to assess these effects. The IE model shows a net age effect on suicide for men in Canada and Quebec, with the death rate increasing until 25 years of age before reaching a plateau. For women, it is an inverted U shape peaking at mid-adulthood. While period effects differ, a net cohort effect is found for men born in 1941, and for women from 1981 until the most recent cohorts.
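The Intrinsic Estimator used above resolves the APC identification problem by taking the minimum-norm least-squares solution of the rank-deficient regression. A hypothetical toy sketch (made-up indices, not the Canadian data) of that idea:

```python
import numpy as np

# Sketch of the intrinsic-estimator (IE) idea on a toy design: with age,
# period and cohort entered together the design matrix X is rank deficient
# (cohort = period - age), and the IE is the minimum-norm least-squares
# solution, computable with the Moore-Penrose pseudoinverse.
ages = np.repeat(np.arange(5), 5)        # toy age index
periods = np.tile(np.arange(5), 5)       # toy period index
cohorts = periods - ages                 # cohort index, exactly collinear
X = np.column_stack([np.ones(25), ages, periods, cohorts])

rng = np.random.default_rng(1)
y = 0.3 * ages - 0.1 * periods + rng.normal(0, 0.01, 25)

beta_ie = np.linalg.pinv(X) @ y          # minimum-norm solution

# The null space of X is spanned by v = (0, 1, -1, 1); the IE solution has
# no component along it, which is what pins down a unique answer.
v = np.array([0.0, 1.0, -1.0, 1.0])
print(abs(X @ v).max(), abs(beta_ie @ v))
```

The design choice here is that, instead of an arbitrary equality constraint on two categories, the IE picks the one solution orthogonal to the null direction.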
Interpersonal trust: An age-period-cohort analysis revisited.
Clark, April K; Eisenstein, Marie A
2013-03-01
Building on the previous work of Robinson and Jackson (1), this study addresses the extent to which interpersonal trust in America is changing due to age, period, or cohort effects (APC). The importance of APC in explaining variations in trust stems from the understanding that the specific source of change can have important, albeit different and possibly negative, consequences for society. Moreover, 3 years after the previous study concluded, the country experienced the largest concerted terrorist attacks on US soil. Little is known about how the attacks affected the dynamics of interpersonal trust relative to the processes of birth, aging, and historical change; such an investigation has important implications for our understanding of the sources and consequences of interpersonal trust. Two analysis techniques for disentangling APC effects are used: constrained generalized linear models and intrinsic estimator models. The results show that while period effects are an important contributor to declining trust, the attacks exert little influence over one's decision to trust others. Also, the investigation provides further confirmation that trust in others has fallen dramatically in the US, with the scarcity being led by individuals coming of age in the late 1940s, after which trust falls with each successive cohort. If this trend continues, through the process of cohort replacement, we will become a society of "distrusters".
Choi, Hyung-Seok; Kim, Youngchul; Cho, Kwang-Hyun; Park, Taesung
2013-01-01
Since eukaryotic transcription is regulated by sets of Transcription Factors (TFs) having various transcriptional time delays, identification of temporal combinations of activated TFs is important to reconstruct Transcriptional Regulatory Networks (TRNs). Our methods combine time course microarray data, information on physical binding between the TFs and their targets and the regulatory sequences of genes using a log-linear model to reconstruct dynamic functional TRNs of the yeast cell cycle and human apoptosis. In conclusion, our results suggest that the proposed dynamic motif search method is more effective in reconstructing TRNs than the static motif search method.
Adaptive Lasso for Poisson log-linear regression model
崔静; 郭鹏江; 夏志明
2011-01-01
Aim: To study the adaptive Lasso for the Poisson log-linear regression model. Methods: The methods of mathematical analysis and probability theory are used. Results: Under some conditions, the adaptive Lasso estimator for Poisson log-linear regression has the oracle properties of sparsity and asymptotic normality. Conclusion: The adaptive Lasso can effectively select variables for the Poisson log-linear regression model and simultaneously estimate their coefficients.
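The estimator studied above can be sketched numerically. The following is an illustrative implementation (synthetic data, not the paper's code) of the adaptive Lasso for a Poisson log-linear model, fitted by proximal gradient descent; the penalty weights 1/|pilot estimate| are what drive the sparsity behavior described in the abstract:

```python
import numpy as np

# Synthetic adaptive-Lasso sketch for Poisson log-linear regression.
rng = np.random.default_rng(42)
n, p = 2000, 5
X = rng.uniform(-0.5, 0.5, size=(n, p))
beta_true = np.array([0.8, -0.5, 0.0, 0.0, 0.0])
y = rng.poisson(np.exp(X @ beta_true))

def grad(beta):
    # gradient of the average Poisson negative log-likelihood
    return X.T @ (np.exp(X @ beta) - y) / n

# Pilot fit: plain gradient descent on the unpenalized likelihood
beta_init = np.zeros(p)
for _ in range(3000):
    beta_init -= 1.0 * grad(beta_init)

# Adaptive Lasso: proximal gradient with soft-thresholding,
# weights w_j = 1 / |pilot_j| (small guard against division by zero)
lam, step = 0.005, 1.0
w = 1.0 / (np.abs(beta_init) + 1e-12)
beta = beta_init.copy()
for _ in range(3000):
    z = beta - step * grad(beta)
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)

print(np.round(beta, 2))  # coefficients of the noise variables are driven to zero
```

The heavy weights on the truly-zero coordinates push them exactly to zero while the signal coordinates are only lightly shrunk, which is the oracle-property intuition.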
Irene O L Wong
BACKGROUND: The Hong Kong population experienced drastic changes in its economic development in the 1940s. Taking advantage of Hong Kong's unique demographic and socioeconomic history, characterized by massive, punctuated migration waves from Southern China and a recent, rapid transition from a pre-industrialized society to the first ethnic Chinese community reaching "first world" status over the last 60 years (i.e., in two or three generations), we examined the longitudinal trends in infection-related mortality, including septicemia, compared to trends in non-bacterial pneumonia, to generate hypotheses for further testing in other recently transitioned economies and to provide generalized aetiological insights into how economic transition affects infection-related mortality. METHODS: We used deaths from septicemia and pneumonia not specified as bacterial, and population figures in Hong Kong from 1976-2005. We fitted age-period-cohort models to decompose septicemia and non-bacterial pneumonia mortality rates into age, period and cohort effects. RESULTS: Septicaemia-related deaths increased exponentially with age, with a downturn by period. The birth cohort curves had downward inflections in both sexes in the 1940s, with a steeper deceleration for women. Non-bacterial pneumonia-related deaths also increased exponentially with age, but the birth cohort patterns showed no downturns for those born in the 1940s. CONCLUSION: The observed changes appear to suggest that better early life conditions may enable better development of adaptive immunity, thus enhancing immunity against bacterial infections, with greater benefits for women than men. Given the interaction between the immune system and the gonadotropic axis, these observations are compatible with the hypothesis that upregulation of the gonadotropic axis underlies some of the changes in disease patterns with economic development.
Mortality of breast cancer in Taiwan, 1971-2010: temporal changes and an age-period-cohort analysis.
Ho, M-L; Hsiao, Y-H; Su, S-Y; Chou, M-C; Liaw, Y-P
2015-01-01
The current paper describes the age, period and cohort effects on breast cancer mortality in Taiwan. Female breast cancer mortality data were collected from the Taiwan death registries for 1971-2010. The annual percentage changes, age-standardised mortality rates (ASMR) and an age-period-cohort model were calculated. The mortality rates increased with advancing age group when fixing the period. The percentage change in the breast cancer mortality rate increased from 54.79% at ages 20-44 years to 149.78% in those aged 45-64 years (between 1971-75 and 2006-10). The mortality rates in the 45-64 age group increased steadily from 1971-75 to 2006-10. The 1951 birth cohorts (actual birth cohort: 1947-55) showed peak mortalities in both the 50-54 and 45-49 age groups. We found that the 1951 birth cohorts had the greatest mortality risk from breast cancer. This might be attributed to the DDT that was used in large amounts to prevent deaths from malaria in Taiwan. However, future research requires DDT exposure data to evaluate the association between breast cancer and DDT use.
Aggregation of log-linear risks
Embrechts, Paul; Hashorva, Enkeleijd; Mikosch, Thomas Valentin
2014-01-01
In this paper we work in the framework of a k-dimensional vector of log-linear risks. Under weak conditions on the marginal tails and the dependence structure of a vector of positive risks, we derive the asymptotic tail behaviour of the aggregated risk and present an application concerning log…
Vasilis Panagiotis Valdramidis
2005-01-01
A mathematical approach incorporating the shoulder effect during the quantification of microbial heat inactivation is developed based on the »number of log cycles of reduction« concept. Hereto, the heat resistance of Escherichia coli K12 in BHI broth has been quantitatively determined in a generic and accurate way by defining the time t for x log reductions in the microbial population, i.e. txD, as a function of the treatment temperature T. Survival data of the examined microorganism were collected in a range of temperatures between 52 and 60.6 °C. Shoulder length Sl and specific inactivation rate kmax are derived from a mathematical expression that describes a non-log-linear behaviour. The temperature dependencies of Sl and kmax are used for structuring the txD(T) function. Estimation of the txD(T) parameters through a global identification procedure permits reliable predictions of the time to achieve a pre-decided microbial reduction. One of the parameters of the txD(T) function is proposed as »the reference minimum temperature for inactivation«. For the case study considered, a value of 51.80 °C (with a standard error, SE, of 3.47) was identified. Finally, the time to achieve commercial sterilization and pasteurization for the product at hand, i.e. BHI broth, was found to be 11.70 s (SE = 5.22) and 5.10 min (SE = 1.22), respectively. Accounting for the uncertainty (based on the 90 % confidence intervals, CI), a fail-safe treatment of these two processes takes 20.36 s and 7.12 min, respectively.
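The txD(T) construction above can be sketched with assumed parameter values (the numbers below are illustrative, not the paper's estimates): for a log-linear survival curve preceded by a shoulder, the time for x decimal reductions is the shoulder length plus x·ln(10)/kmax.

```python
import math

# Sketch with assumed parameters: once the shoulder length Sl(T) and the
# specific inactivation rate kmax(T) are known at a treatment temperature T,
# the time to achieve x log reductions follows (approximately) as
#   txD(T) = Sl(T) + x * ln(10) / kmax(T).
def t_xD(x, Sl, kmax):
    """Time for x decimal reductions, given shoulder Sl and rate kmax."""
    return Sl + x * math.log(10) / kmax

# e.g. a 5-log reduction with a 2-min shoulder and kmax = 1.5 per minute
t = t_xD(5, Sl=2.0, kmax=1.5)
print(round(t, 2))  # 9.68 minutes
```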
Trends in Ischemic Heart Disease Mortality in Korea, 1985-2009: An Age-period-cohort Analysis
Lee, Hye Ah; Park, Hyesook
2012-01-01
Objectives: Economic growth and the development of medical technology help to improve average life expectancy, but the western diet and rapid shifts to poor lifestyles lead to an increasing risk of major chronic diseases. Coronary heart disease mortality in Korea has been on the increase, while showing a steady decline in other industrialized countries. An age-period-cohort analysis can help in understanding the trends in mortality and predicting the near future. Methods: We analyzed the time tren…
刘婷
2011-01-01
Accelerated life tests, in which more than one stress is often involved, have become widely used in today's industries. A log-linear accelerated model is proposed to describe the relation between multiple stresses and product lifetime, and Weibull log-linear accelerated models are established. Because the log-likelihood function is highly nonlinear and non-monotonic, a genetic algorithm is adopted to determine the maximum likelihood estimates of the accelerated model parameters, so that reliability assessment and lifetime prediction can be realized under various stresses. Finally, simulation results illustrate the validity of the proposed method.
陈万青; 李媛秋; 郑荣寿
2012-01-01
Objective: To predict the disease burden of kidney cancer and to provide basic information for etiology research and control planning. Methods: We retrieved incidence data for kidney cancer from 18 urban cancer registries of the National Central Cancer Registry over the ten-year period from 1998 to 2007. An age-period-cohort Bayesian model was applied to predict kidney cancer incidence in urban China in 2008-2015. Results: Between 1998 and 2007, the incidence of kidney cancer in urban registration areas kept increasing dramatically. Incidence for males rose from 3.12/100,000 in 1998 to 5.36/100,000 in 2007 (standardized rates), and from 1.66/100,000 to 2.67/100,000 for females. Different models showed that the increase was mainly caused by a cohort effect (P < 0.001). The predicted incidence rate of kidney cancer for the year 2015 is 9.93 per 100,000 in males and 4.54 per 100,000 in females. The number of new cases will rise to 52,259 in 2015, including 36,616 men and 15,643 women. Conclusions: The burden of kidney cancer in urban areas will increase due to the effects of age and cohort. Kidney cancer will become one of the main cancers threatening people's health in urban areas of China. Etiology research and planning of prevention and control for kidney cancer should be enhanced.
The Log-Linear Return Approximation, Bubbles, and Predictability
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
2012-01-01
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional… Finally, we show that a bubble model in which expected returns are constant can explain the predictability of stock returns from the dividend-price ratio that many previous studies have documented.
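The approximation studied above can be checked numerically. A sketch on synthetic data (not the paper's simulations), using the standard Campbell-Shiller linearization r ≈ k + ρ·p' + (1−ρ)·d' − p:

```python
import numpy as np

# Numerical check of the Campbell-Shiller log-linear return approximation
#   r_{t+1} ≈ k + rho*p_{t+1} + (1-rho)*d_{t+1} - p_t,
# where p, d are log price and log dividend, rho = 1/(1 + exp(mean(d - p)))
# and k = -log(rho) - (1 - rho)*log(1/rho - 1).
rng = np.random.default_rng(7)
T = 400
d = np.cumsum(rng.normal(0.0, 0.01, T))          # log dividends (random walk)
dp = np.log(0.04) + rng.normal(0.0, 0.05, T)     # stationary log dividend-price ratio
p = d - dp                                       # log prices

rho = 1.0 / (1.0 + np.exp(np.mean(d - p)))
k = -np.log(rho) - (1.0 - rho) * np.log(1.0 / rho - 1.0)

r_exact = np.log(np.exp(p[1:]) + np.exp(d[1:])) - p[:-1]
r_approx = k + rho * p[1:] + (1.0 - rho) * d[1:] - p[:-1]

err = np.abs(r_exact - r_approx)
print(err.max())  # second-order small when the dp ratio is stable
```

With a stationary dividend-price ratio the error stays second-order in the deviation of d − p from its mean, which is the premise behind the paper's error bound.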
Life satisfaction and age: Dealing with underidentification in age-period-cohort models
de Ree, Joppe; Alessie, Rob
2011-01-01
Recent literature typically finds a U-shaped relationship between life satisfaction and age. Age profiles, however, are not identified without forcing arbitrary restrictions on the cohort and/or time profiles. In this paper we report what can be identified about the relationship between life satisfaction…
Camp, Richard J.; Pratt, Thane K.; Gorresen, P. Marcos; Woodworth, Bethany L.; Jeffrey, John J.
2014-01-01
Freed and Cann (2013) criticized our use of linear models to assess trends in the status of Hawaiian forest birds through time (Camp et al. 2009a, 2009b, 2010) by questioning our sampling scheme, whether we met model assumptions, and whether we ignored short-term changes in the population time series. In the present paper, we address these concerns and reiterate that our results do not support the position of Freed and Cann (2013) that the forest birds in the Hakalau Forest National Wildlife Refuge (NWR) are declining, or that the federally listed endangered birds are showing signs of imminent collapse. On the contrary, our data indicate that the 21-year long-term trends for native birds in Hakalau Forest NWR are stable to increasing, especially in areas that have received active management.
Materialism across the life span: An age-period-cohort analysis.
Jaspers, Esther D T; Pieters, Rik G M
2016-09-01
This research examined the development of materialism across the life span. Two initial studies revealed that (a) lay beliefs were that materialism declines with age and (b) previous research findings also implied a modest, negative relationship between age and materialism. Yet, previous research has considered age only as a linear control variable, thereby precluding the possibility of more intricate relationships between age and materialism. Moreover, prior studies have relied on cross-sectional data and thus confound age and cohort effects. To improve on this, the main study used longitudinal data from 8 waves spanning 9 years of over 4,200 individuals (16 to 90 years) to examine age effects on materialism while controlling for cohort and period effects. Using a multivariate multilevel latent growth model, it found that materialism followed a curvilinear trajectory across the life span, with the lowest levels at middle age and higher levels before and after that. Thus, in contrast to lay beliefs, materialism increased in older age. Moreover, age effects on materialism differed markedly between 3 core themes of materialism: acquisition centrality, possession-defined success, and acquisition as the pursuit of happiness. In particular, acquisition centrality and possession-defined success were higher at younger and older age. Independent of these age effects, older birth cohorts were oriented more toward possession-defined success, whereas younger birth cohorts were oriented more toward acquisition centrality. The economic downturn since 2008 led to a decrease in acquisition as the pursuit of happiness and in desires for personal growth, but to an increase in desires for achievement.
赵宏林; 佟伟军; 林哲; 张永红
2011-01-01
Objective: To explore the effects of cigarette smoking on hypertension in Mongolian people of different ages and genders. Methods: A log-linear model was used to analyze the main effects of the various factors and their interactions. Results: In the young male group, blood pressure was associated only with years of smoking (P < 0.05); in the middle-aged male group, blood pressure was associated with the amount smoked (P < 0.05); in the old male group, blood pressure was not associated with any smoking indicator. In the young female group, blood pressure was associated with years of smoking (P < 0.05); in the middle-aged and old female groups, blood pressure was not associated with any smoking indicator (P > 0.05). Conclusion: In studies of hypertension in the Mongolian population, smoking prevention should focus on the young; smoking research should use both the amount smoked and years of smoking as indicators; and the log-linear model is an ideal method for analyzing interaction effects in high-dimensional contingency tables.
赵宏林; 刘永跃; 佟伟军; 林哲; 张永红
2011-01-01
[Objective] To explore the relation of drinking with hypertension in Mongolian people of different ages and genders. [Methods] A log-linear model was used to analyze the main effects of the various factors and their interactions. [Results] Drinking alcohol was significantly associated with blood pressure in the young and old age groups among males (P < 0.05), but not in the middle-aged male group or in any female group (P > 0.05). The amount of alcohol consumed was significantly associated with blood pressure in all age groups among males and in the middle-aged group among females (P < 0.001), but not in the young and old female groups (P > 0.05). The number of years of alcohol drinking was significantly associated with blood pressure in the young age group among males (P < 0.05), but not in the middle-aged and old male groups or in any female group (P > 0.05). [Conclusion] The amount of alcohol consumed is associated with hypertension in the Mongolian population; the log-linear model is an ideal method for analyzing interactions in high-dimensional contingency tables.
Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally
2017-03-14
1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end-of-intravenous-infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference) with a low RMSE. Using the regression models, a single-time-point strategy based on Cmax (i.e. end of a 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
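The regression strategy above can be sketched on synthetic, hypothetical data (the power-law parameters and concentrations below are made up, not dalbavancin values): fit log(AUC) on log(Cmax) and then predict AUC from a new Cmax alone.

```python
import numpy as np

# Illustration of the log-log (power model) regression: assume
# AUCinf = a * Cmax^b, fit by ordinary least squares on logs.
rng = np.random.default_rng(3)
a_true, b_true = 2.5, 1.1
cmax = rng.uniform(100.0, 400.0, 21)                        # 21 subject pairs
auc = a_true * cmax**b_true * np.exp(rng.normal(0, 0.05, 21))

slope, intercept = np.polyfit(np.log(cmax), np.log(auc), 1)

def predict_auc(new_cmax):
    """Predict AUCinf from an end-of-infusion Cmax via the fitted power model."""
    return np.exp(intercept + slope * np.log(new_cmax))

fold = predict_auc(cmax) / auc          # predicted/observed fold differences
print(round(slope, 2))
```

The fold-difference vector plays the role of the observed/predicted quotient used in the paper's internal evaluation.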
Kramer, Michael R; Valderrama, Amy L; Casper, Michele L
2015-08-15
Against the backdrop of late 20th century declines in heart disease mortality in the United States, race-specific rates diverged because of slower declines among blacks compared with whites. To characterize the temporal dynamics of emerging black-white racial disparities in heart disease mortality, we decomposed race-sex-specific trends in an age-period-cohort (APC) analysis of US mortality data for all diseases of the heart among adults aged ≥35 years from 1973 to 2010. The black-white gap was largest among adults aged 35-59 years (rate ratios ranged from 1.2 to 2.7 for men and from 2.3 to 4.0 for women) and widened with successive birth cohorts, particularly for men. APC model estimates suggested strong independent trends across generations ("cohort effects") but only modest period changes. Among men, cohort-specific black-white racial differences emerged in the 1920-1960 birth cohorts. The apparent strength of the cohort trends raises questions about life-course inequalities in the social and health environments experienced by blacks and whites which could have affected their biomedical and behavioral risk factors for heart disease. The APC results suggest that the genesis of racial disparities is neither static nor restricted to a single time scale such as age or period, and they support the importance of equity in life-course exposures for reducing racial disparities in heart disease.
Ru Giuseppe
2009-09-01
Background: The Age-Period-Cohort (APC) analysis is routinely used for time trend analysis of cancer incidence or mortality rates, but in veterinary epidemiology there are still only a few examples of this application. APC models were recently used to model the French epidemic, assuming that the time trend for BSE was mainly due to a cohort effect related to the control measures that may have modified the BSE exposure of cohorts over time. We used a categorical APC analysis which did not require any functional form for the effect of the variables, and examined second differences to estimate the variation of the BSE trend. We also reanalysed the French epidemic and performed a simultaneous analysis of Italian data using more appropriate birth cohort categories for comparison. Results: We used data from the exhaustive surveillance carried out in France and Italy between 2001 and 2007, and comparatively described the trend of the epidemic in both countries. Finally, the shape and irregularities of the trends are discussed in light of the main control measures adopted to control the disease. In Italy a decrease in the epidemic became apparent from 1996, following the application of rendering standards for the processing of specified risk material (SRM). For the French epidemic, the pattern of second differences in the birth cohorts confirmed the beginning of the decrease from 1995, just after the implementation of the meat and bone meal (MBM) ban for all ruminants (1994). Conclusion: The APC analysis proved to be highly suitable for the study of the trend in BSE epidemics and was helpful in understanding the effects of management and control of the disease. Additionally, such an approach may help in the implementation of changes in BSE regulations.
Li-Fang Zhang; Yan-Hua Li; Shang-Hang Xie; Wei Ling; Sui-Hong Chen; Qing Liu; Qi-Hong Huang; Su-Mei Cao
2015-01-01
Introduction: In the past several decades, declining incidences of nasopharyngeal carcinoma (NPC) have been observed in Chinese populations in Hong Kong, Taiwan, Los Angeles, and Singapore. A previous study indicated that the incidence of NPC in Sihui County, South China remained stable until 2002, but whether age, diagnosis period, and birth cohort affect the incidence of NPC remains unknown. Methods: Age-standardized rates (ASRs) of NPC incidence based on the world standard population were examined in both males and females in Sihui County from 1987 to 2011. Joinpoint regression analysis was conducted to quantify the changes in incidence trends. A Poisson regression age-period-cohort model was used to assess the effects of age, diagnosis period, and birth cohort on the risk of NPC. Results: The ASRs of NPC incidence during the study period were 30.29/100,000 for males and 13.09/100,000 for females. The incidence of NPC remained stable, at a non-significant average annual percent change of 0.2% for males and −1.6% for females, throughout the entire period. A significantly increased estimated annual percent change of 6.8% (95% confidence interval, 0.1%–14.0%) was observed from 2003 to 2009 for males. The relative risk of NPC increased with advancing age up to 50–59 years and decreased at ages >60 years. The period effect curves for NPC were nearly flat for males and females. The birth cohort effect curve for males showed an increase from the 1922 cohort to the 1957 cohort and a decrease thereafter. In females, there was an undulating increase in the relative risk from the 1922 cohort to the 1972 cohort. Conclusions: The incidence trends for NPC remained generally stable in Sihui from 1987 to 2011, with an increase from 2003 to 2009. The relative risks of NPC increased in younger females.
Computing and Visualizing Log-linear Analysis Interactively
Pedro M. Valero-Mora
2002-09-01
The purpose of this paper is to describe a simple program for computing log-linear analysis based on a direct manipulation interface that emphasizes the use of plots for guiding the analysis and evaluating the results obtained. The program described here works as a plugin for ViSta (Young 1997) and receives the name of LoginViSta (for Log-linear analysis in ViSta). ViSta is a statistical package based on Lisp-Stat, a statistical programming environment developed by Luke Tierney (1990) that features an object-oriented approach for statistical computing and one that allows for interactive and dynamic graphs.
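A minimal example of the kind of log-linear analysis such programs automate (the 2×2 table below is made up for illustration): fit the independence model and compute the likelihood-ratio statistic G².

```python
import numpy as np

# Fit the independence log-linear model  log(mu_ij) = lambda + lambda_i^row
# + lambda_j^col  to a 2x2 table; for this model the fitted counts have the
# closed form  row total * column total / grand total.
obs = np.array([[30.0, 10.0],
                [20.0, 40.0]])
n = obs.sum()

expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n

# Likelihood-ratio (deviance) statistic comparing observed to fitted counts
g2 = 2.0 * np.sum(obs * np.log(obs / expected))
print(round(g2, 2))  # G^2 ≈ 17.26 on 1 degree of freedom
```

In an interactive tool like the one described, this deviance and its mosaic-plot counterpart are what guide the choice between competing models.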
郭俊锋; 芮执元; 冯瑞成; 魏兴春
2014-01-01
Based on the log-linear virtual age process, an imperfect preventive maintenance policy for numerical control (NC) machine tools with random maintenance quality is proposed. The proposed model is a combination of the Kijima type virtual age model and the failure intensity adjustment model. Maintenance intervals of the proposed hybrid model are derived when the failure intensity increase factor and the restoration factor are both random variables with uniform distribution. The optimal maintenance policy in infinite time horizon is presented. A numerical example is given when the failures of NC machine tools are described by the log-linear process. Finally, a discussion is presented to show how the optimal results depend on the different cost parameters.
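The virtual-age bookkeeping behind such Kijima-type models can be sketched with assumed values (the recursion and numbers below are illustrative, not the paper's optimization):

```python
# Kijima-type virtual age under imperfect maintenance: after the k-th action,
# v_k = b_k * (v_{k-1} + x_k), where x_k is the operating time since the last
# action and b_k in [0, 1] is the (possibly random) restoration factor:
# b = 0 is "good as new", b = 1 is "bad as old".
def virtual_age(operating_times, restoration_factors):
    v = 0.0
    for x, b in zip(operating_times, restoration_factors):
        v = b * (v + x)
    return v

# Two maintenance cycles of 100 h each with 50% restoration at every action:
v = virtual_age([100.0, 100.0], [0.5, 0.5])
print(v)  # 75.0: the second action acts on the accumulated age 50 + 100
```

In the hybrid model of the abstract, this virtual age feeds the (log-linear) failure intensity, and the restoration factor is drawn from a uniform distribution rather than fixed.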
Estimation of Log-Linear-Binomial Distribution with Applications
Elsayed Ali Habib
2010-01-01
The log-linear-binomial distribution was introduced for describing the behavior of the sum of dependent Bernoulli random variables. The distribution is a generalization of the binomial distribution that allows construction of a broad class of distributions. In this paper, we consider the problem of estimating the two parameters of the log-linear-binomial distribution by moment and maximum likelihood methods. The distribution is used to fit genetic data and to obtain the sampling distribution of the sign test under dependence among trials.
Rughiniș, Cosima; Humă, Bogdana
2015-12-01
In this paper we argue that quantitative survey-based social research essentializes age, through specific rhetorical tools. We outline the device of 'socio-demographic variables' and we discuss its argumentative functions, looking at scientific survey-based analyses of adult scientific literacy, in the Public Understanding of Science research field. 'Socio-demographics' are virtually omnipresent in survey literature: they are, as a rule, used and discussed as bundles of independent variables, requiring little, if any, theoretical and measurement attention. 'Socio-demographics' are rhetorically effective through their common-sense richness of meaning and inferential power. We identify their main argumentation functions as 'structure building', 'pacification', and 'purification'. Socio-demographics are used to uphold causal vocabularies, supporting the transmutation of the descriptive statistical jargon of 'effects' and 'explained variance' into 'explanatory factors'. Age can also be studied statistically as a main variable of interest, through the age-period-cohort (APC) disambiguation technique. While this approach has generated interesting findings, it did not mitigate the reductionism that appears when treating age as a socio-demographic variable. By working with age as a 'socio-demographic variable', quantitative researchers convert it (inadvertently) into a quasi-biological feature, symmetrical, as regards analytical treatment, with pathogens in epidemiological research. Copyright © 2015 Elsevier Inc. All rights reserved.
Milena Ilic; Irena Ilic
2016-01-01
Background: For both men and women worldwide, colorectal cancer is among the leading causes of cancer-related death. This study aimed to assess the mortality trends of colorectal cancer in Serbia between 1991 and 2010, prior to the introduction of population-based screening. Methods: Joinpoint regression analysis was used to estimate the average annual percent change (AAPC) with the corresponding 95% confidence interval (CI). Furthermore, age-period-cohort analysis was performed to examine the effects of birth cohort and calendar period on the observed temporal trends. Results: We observed a significantly increased trend in colorectal cancer mortality in Serbia during the study period (AAPC = 1.6%, 95% CI 1.3%–1.8%). Colorectal cancer showed an increased mortality trend in both men (AAPC = 2.0%, 95% CI 1.7%–2.2%) and women (AAPC = 1.0%, 95% CI 0.6%–1.4%). The temporal trend of colorectal cancer mortality was significantly affected by birth cohort. Conclusions: We found that colorectal cancer mortality in Serbia increased considerably over the past two decades. Mortality increased particularly in men, but the trends were different according to age group and subsite. In Serbia, interventions to reduce the colorectal cancer burden, especially the implementation of a national screening program, as well as treatment improvements and measures to encourage the adoption of a healthy lifestyle, are needed.
彭大松
2012-01-01
Applying a log-linear model to data from the Chinese General Social Survey (CGSS 2006), this paper analyzes how attitudes toward marriage, such as views on divorce and remaining single, are influenced by educational attainment and intergenerational succession. The net effects of education and generation are separated using a log-multiplicative model. The analysis shows that educational attainment poses a major challenge to traditional marriage values: as education increases, people grow increasingly indifferent to them. The analysis of generational succession shows that, as generations advance, the younger generation is more inclined than the older one to oppose traditional views and to embrace new ones, making new marriage values more diverse, normalized, and widespread.
Use of Log-Linear Models in Classification Problems.
1981-12-01
… The populations are H(1): infants with Apgar scores of seven or below, and H(2): infants with normal Apgar scores. The data are in Table 3. … Hypoxic trauma: damage to an infant during or shortly after birth caused by oxygen deficiency. Apgar score: an index of the level of physiological functioning based on symptoms of the infant observed shortly after birth; see Apgar (1953).
System Combination with Log Linear Models (Author’s Manuscript)
2016-05-19
… The problem is to segment the continuous speech into segments, and then classify each (independent) segment in an acoustic code-breaking fashion [9]. … ηk are the corresponding weights; φlm(·) denotes the language features, which provide pronunciation probabilities and word statistics.
El Allaki, Farouk; Christensen, Jette; Vallières, André; Paré, Julie
2014-10-01
The objective of this study was to estimate the population size of Canadian poultry farms in 3 subpopulations (British Columbia, Ontario, and Other) by poultry category. We used data for 2008 to 2011 from the Canadian Notifiable Avian Influenza (NAI) Surveillance System (CanNAISS). Log-linear capture-recapture models were applied to estimate the number of commercial chicken and turkey farms. The estimated size of farm populations was validated by comparing sizes to data provided by the Canadian poultry industry in 2007, which were assumed to be complete and exhaustive. Our results showed that the log-linear modelling approach was an appropriate tool to estimate the population size of Canadian commercial chicken and turkey farms. The 2007 farm population size for each poultry category was included in the 95% confidence intervals of the farm population size estimates. Log-linear capture-recapture modelling might be useful for estimating the number of farms using surveillance data when no comprehensive registry exists.
A programmable log-linear amplifier for wide range nuclear power measuring channels
Khaleeq, M. Tahir; Alam, Mahmood; Ghumman, Iftikhar Ahmad
2002-12-01
A programmable log-linear amplifier has been developed for nuclear channels. The amplifier can be programmed for logarithmic, linear, or log-linear modes of operation. In the log-linear mode, the amplifier operates partially in log mode and automatically switches to linear mode at any preset point. The log-linear mode is used for wide-range operation of nuclear channels, and hence the amplifier will improve the fault-finding capabilities of nuclear channels used in the power range. The amplifier was tested at a nuclear reactor, and the results were found to be in very good agreement with the design specifications. This article presents the design and construction of the amplifier and field test results.
Bie, Peter
2011-01-01
Sodium intake and renin system activity: Effects of metoprolol on the log-linear relationship in conscious rats.
Carmen Saiz-Sánchez
1999-05-01
Full Text Available OBJECTIVE: To study the evolution of traffic accident mortality in Spain and its possible application to an age-period-cohort analysis, as well as the effect of selected road safety measures. MATERIAL AND METHODS: Road accident mortality rates were obtained, along with rates in five-year age intervals for each sex, which allows their study as age-specific rates by birth cohort. To determine the association between the selected road safety measures and mortality, Poisson regression models were fitted. RESULTS: Two waves emerge in the evolution of traffic accident mortality. There was no clear effect with respect to age, nor was there a cohort effect for men or women. As to the road safety measures, we discuss the consistency between the selected models and the graphical results; the compulsory use of helmets and of crossing lights on motorcycles was significantly associated with a reduction in mortality (RR 0.73, p …
Rafael Perez Ribas
2007-06-01
The profile of poverty in Brazil has changed in recent decades, partially due to alterations in the reproduction and mortality patterns of the population. During this same period, the design of social policies, especially those for reducing poverty, has also undergone changes. It must be emphasized that the effectiveness of these policies depends on the type of poverty being dealt with. Destitution can be a permanent or temporary phenomenon, and this transient-chronic (T-C) composition may show a temporal trend. The objective of this paper is to analyze this trend, as well as the temporal evolution of poverty rates in urban areas. The results may make it possible to project future measures of income deprivation. To this end, an Age-Period-Cohort (APC) model was applied to absolute and relative poverty measures, and to the estimated T-C composition, based on data from the PNADs between 1995 and 2003. The results indicate that the cohort effect is more important than the period effect for the recent reduction in poverty rates, especially for the chronic component. In contrast, the transient component showed a tendency to increase over time.
Hirano, Hiroki; Horiuchi, Tetsuya; Hirano, Harutoyo; Kurita, Yuichi; Ukawa, Teiji; Nakamura, Ryuji; Saeki, Noboru; Yoshizumi, Masao; Kawamoto, Masashi; Tsuji, Toshio
2013-01-01
This paper proposes a novel technique to support the monitoring of peripheral vascular conditions using biological signals such as electrocardiograms, arterial pressure values and pulse oximetry plethysmographic waveforms. In this approach, a second-order log-linearized model (referred to here as a log-linearized peripheral arterial viscoelastic model) is used to describe the non-linear viscoelastic relationship between blood pressure waveforms and photo-plethysmographic waveforms. The proposed index enables estimation of peripheral arterial wall stiffness changes induced by sympathetic nerve activity. The validity of the method is discussed here based on the results of peripheral vascular condition monitoring conducted during endoscopic thoracic sympathectomy (ETS). The results of ETS monitoring showed significant changes in stiffness variations between the periods before and during the procedures observed (p < 0.01) as well as during and after them (p < 0.01), so that it was confirmed that sympathetic nerve activity is drastically decreased in the area around the monitoring site after the thoracic sympathetic nerve trunk on the monitoring side is successfully blocked. In addition, no change was observed in the values of the proposed index during the ETS procedure on the side opposite that of the monitoring site. The experimental results obtained clearly show the proposed method can be used to assess changes in sympathetic nerve activity during ETS.
毛学荣; 李晓月
2015-01-01
The aim of this series of articles is to study various stochastic models in finance, with emphasis on Monte Carlo simulations in R. This paper investigates the stochastic log-linear (SLL) model, obtains the mean payoff of European options under it, and discusses how to perform Monte Carlo simulations of the asset price.
Salter, Daniel W.
2003-01-01
Log-linear analysis (LLA) techniques for categorical variables are demonstrated and evaluated using data from the Myers-Briggs Type Indicator. Symmetrical LLA and asymmetrical LLA address questions of association and inference, respectively. Configural frequency analysis is examined as a strategy for whole types research. LLA approaches seem…
Log-linear randomized-response models taking self-protective response behavior into account
Cruyff, M.J.L.F; Hout, Ardo van den; Heijden, P.G.M. van der; Böckenholt, Ulf
2007-01-01
Randomized response (RR) is an interview technique designed to eliminate response bias when sensitive questions are asked. In RR the answer depends partly on the true status of the respondent and partly on the outcome of a randomizing device. Although RR elicits more honest answers than direct questioning…
Predictive Model Assessment for Count Data
2007-09-05
… critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and … (Figure 5: boxplots of various scores for the patent data count regressions. Table 1: four predictive models for larynx cancer counts in Germany, 1998–2002.)
Francisco Franco-Marina
2009-01-01
Full Text Available OBJECTIVE: To assess the age, period and cohort effects on breast cancer (BC) mortality in Mexico. MATERIAL AND METHODS: Age, period and cohort curvature trends for BC mortality were estimated through the Poisson regression model proposed by Holford. RESULTS: Nationally, BC death rates have leveled off since 1995 in most age groups. BC mortality trends are mainly determined by birth cohort and age effects in Mexico. Women born between 1940 and 1955 show the highest rate of increase in BC mortality. Women born afterwards still show an increasing trend, but at a much lower rate. Mammography and adjuvant therapy have had a limited impact on mortality. Potential reasons for the observed patterns are discussed. An increase in BC mortality in Mexico is expected in the following decades. CONCLUSIONS: Mammography screening programs and timely access to effective treatment should be a national priority to reverse the expected increasing BC mortality trend.
Materialism across the lifespan : An age-period-cohort analysis
Jaspers, Esther; Pieters, Rik
This research examined the development of materialism across the lifespan. Two initial studies revealed that: 1) lay beliefs were that materialism declines with age; and 2) previous research findings also implied a modest, negative relationship between age and materialism. Yet, previous research has
Study on the Log-Linear Velocity Profile of Near-Bed Tidal Current in Estuarine and Coastal Waters
SONG Zhi-yao; YAN Yi-xin; HAO Jia-ling; KONG Jun; ZHANG Hong-gui
2006-01-01
Many observed data show that the near-bed tidal velocity profile deviates from the usual logarithmic law. The amount of deviation may not be large, but it results in large errors when the logarithmic velocity profile is used to calculate the bed roughness height and friction velocity (or shear stress). Based on their investigation, Kuo et al. (1996) indicate that the deviation amplitude may exceed 100%. On the basis of fluid dynamic principle, the profile of the near-bed tidal velocity in estuarine and coastal waters is established by introducing Prandtl's mixing length theory and Von Karman self-similarity theory. By the fitting and calculation of the near-bed velocity profile data observed in the west Solent, England, the results are compared with those of the usual logarithmic model, and it is shown that the present near-bed tidal velocity profile model has such advantages as higher fitting precision, and better inner consistency between the roughness height and friction velocity. The calculated roughness height and friction velocity are closer to reality. The conclusions are validated that the logarithmic model underestimates the roughness height and friction velocity during tidal acceleration and overestimates them during tidal deceleration.
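For reference, the classical logarithmic profile that the observed near-bed velocities deviate from can be written down directly. A minimal sketch using the standard von Karman constant; the friction velocity and roughness height below are illustrative, not values from the Solent data:

```python
import math

def log_law_velocity(z, u_star, z0, kappa=0.41):
    """Classical logarithmic velocity profile u(z) = (u*/kappa) * ln(z/z0),
    with friction velocity u_star (m/s) and roughness height z0 (m)."""
    return (u_star / kappa) * math.log(z / z0)

# At height z = e * z0 the profile equals u*/kappa by construction.
u = log_law_velocity(math.e * 0.001, 0.41, 0.001)  # ~ 1.0 m/s for u* = 0.41
```

Fitting this form to measured profiles yields u* and z0; the paper's point is that forcing tidal data through this log law biases both estimates during acceleration and deceleration.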
Kjolby, Mads; Bie, Peter
2008-01-01
Responses to acute sodium loading depend on the load and on the level of chronic sodium intake. To test the hypothesis that an acute step increase in total body sodium (TBS) elicits a natriuretic response, which is dependent on the chronic level of TBS, we measured the effects of a bolus of NaCl during different low-sodium diets spanning a 25-fold change in sodium intake on elements of the renin-angiotensin-aldosterone system (RAAS) and on natriuresis. To custom-made, low-sodium chow (0.003%), NaCl was added to provide four levels of intake, 0.03-0.75 mmol.kg(-1).day(-1) for 7 days. Acute NaCl administration increased PV (+6.3-8.9%) and plasma sodium concentration (~2%) and decreased plasma protein concentration (-6.4-8.1%). Plasma ANG II and aldosterone concentrations decreased transiently. Potassium excretion increased substantially. Sodium excretion, arterial blood pressure, glomerular filtration rate, urine flow, plasma potassium, and plasma renin activity did not change. The results indicate that sodium excretion is controlled by neurohumoral mechanisms that are quite resistant to acute changes in plasma volume and colloid osmotic pressure and are not down-regulated within 2 h. With previous data, we demonstrate that RAAS variables are log-linearly related to sodium intake over a >250-fold range in sodium intake, defining dietary sodium function lines that are simple measures of the sodium sensitivity of the RAAS. The dietary function line for plasma ANG II concentration increases from theoretical zero at a daily sodium intake of 17 mmol Na/kg (intercept) with a slope of 16 pM increase per decade of decrease in dietary sodium intake.
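The dietary sodium function line for plasma ANG II described above (theoretical zero at an intake of 17 mmol Na/kg/day, slope of 16 pM per decade of decrease in intake) reduces to a one-line formula. A minimal sketch of that relationship, valid for intakes at or below the intercept:

```python
import math

def plasma_angii_pm(na_intake, intercept=17.0, slope=16.0):
    """Dietary sodium function line for plasma ANG II (pM): zero at an
    intake of 17 mmol Na/kg/day, increasing by 16 pM for every tenfold
    decrease in dietary sodium intake (mmol/kg/day)."""
    return slope * math.log10(intercept / na_intake)

# One decade below the intercept (1.7 mmol/kg/day) gives 16 pM.
angii = plasma_angii_pm(1.7)
```

The slope of such a line is the proposed scalar measure of the sodium sensitivity of the RAAS.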
Modelling BSE trend over time in Europe, a risk assessment perspective
Ducrot, C.; Sala, C.; Ru, G.; Koeijer, de A.A.; Sheridan, H.; Saegerman, C.; Selhorst, T.; Arnold, M.; Polak, M.P.; Calavas, D.
2010-01-01
BSE is a zoonotic disease that caused the emergence of variant Creuzfeldt-Jakob disease in the mid 1990s. The trend of the BSE epidemic in seven European countries was assessed and compared, using Age-Period-Cohort and Reproduction Ratio modelling applied to surveillance data 2001-2007. A strong dec
Age-period-cohort analysis in the 1870s: Diagrams, stereograms, and the basic differential equation
Keiding, Niels
2011-01-01
The period 1868–1880 saw a dramatic development in analytical and graphical descriptions of mortality, varying with time and age; this took place almost entirely in the German language. This report attempts a survey of these developments with brief notes on other graphical representations...
Chun-Hsiao Chu
2017-01-01
Full Text Available Externality is an important issue for formulating the regulation policy of a taxi market. However, this issue is rarely taken into account in the current policy-making process, and it has not been adequately explored in prior research. This study extends the model proposed by Chang and Chu in 2009 with the aim of exploring the effect of externality on the optimization of the regulation policy of a cruising taxi market. A closed-form solution for optimizing the fare, vacancy rate, and subsidy of the market is derived. The results show that when the externality of taxi trips is taken into consideration, the optimal vacancy rate should be lower and the subsidy should be higher than they are under current conditions where externality is not considered. The results of the sensitivity analysis on the occupied and vacant distance indicate that the relation of the vacant distance to the marginal external cost is more sensitive than the occupied distance. The result of the sensitivity analysis on the subsidy shows the existence of a negative relationship between the marginal external cost and the optimal subsidy.
Jing Chen
Full Text Available BACKGROUND: Chronic obstructive pulmonary disease (COPD) is a leading cause of death, particularly in developing countries. Little is known about the effects of economic development on COPD mortality, although economic development may potentially have positive and negative influences over the life course on COPD. We took advantage of a unique population whose rapid and recent economic development is marked by changes at clearly delineated and identifiable time points, and where few women smoke, to examine the effect of macro-level events on COPD mortality. METHODS: We used Poisson regression to decompose sex-specific COPD mortality rates in Hong Kong from 1981 to 2005 into the effects of age, period and cohort. RESULTS: COPD mortality declined strongly over generations for people born from the early to mid 20th century, which was particularly evident for the first generation of both sexes to grow up in a more economically developed environment. Population-wide COPD mortality decreased when air quality improved and increased with increasing air pollution. COPD mortality increased with age, particularly after menopause among women. CONCLUSIONS: Economic development may reduce vulnerability to COPD by reducing long-lasting insults to the respiratory system, such as infections, poor nutrition and indoor air pollution. However, some of these gains may be offset if economic development results in increasing air pollution or increasing smoking.
An R package for fitting age, period and cohort models
Adriano Decarli
2014-11-01
Full Text Available In this paper we present the R implementation of a GLIM macro which fits the age-period-cohort model following Osmond and Gardner. In addition to the estimates of the corresponding model, owing to the programming capability of R as an object-oriented language, methods for printing, plotting and summarizing the results are provided. Furthermore, the researcher has full access to the output of the main function (apc), which returns all the models fitted within the function. It is thus possible to critically evaluate the goodness of fit of the resulting model.
Simple capture-recapture models permitting unequal catchability and variable sampling effort.
Agresti, A
1994-06-01
We consider two capture-recapture models that imply that the logit of the probability of capture is an additive function of an animal catchability parameter and a parameter reflecting the sampling effort. The models are special cases of the Rasch model, and satisfy the property of quasi-symmetry. One model is log-linear and the other is a latent class model. For the log-linear model, point and interval estimates of the population size are easily obtained using standard software, such as GLIM.
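For the simplest two-sample case, the point estimate of N under an independence log-linear model has a closed form: the unobserved cell of the incomplete 2x2 capture table is projected from the three observed cells. A minimal sketch (the paper's models additionally include catchability and sampling-effort terms, which this toy version omits, and the counts are hypothetical):

```python
def loglinear_two_sample_N(n11, n10, n01):
    """Population-size estimate from two capture occasions under the
    independence log-linear model: the missing cell (caught on neither
    occasion) is estimated as n10*n01/n11, so
    N-hat = n11 + n10 + n01 + n10*n01/n11."""
    if n11 == 0:
        raise ValueError("need at least one animal caught on both occasions")
    return n11 + n10 + n01 + n10 * n01 / n11

# Hypothetical counts: 60 caught twice, 30 only on the first occasion,
# 20 only on the second; the missing cell is 30*20/60 = 10, so N-hat = 120.
n_hat = loglinear_two_sample_N(60, 30, 20)
```

Fitting the same model as a Poisson log-linear regression in standard software additionally yields an interval estimate for N, which is the point of the abstract's remark about GLIM.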
Cosolvency and deviations from log-linear solubilization.
Rubino, J T; Yalkowsky, S H
1987-06-01
The solubilities of three nonpolar drugs, phenytoin, diazepam, and benzocaine, have been measured in 14 cosolvent-water binary mixtures. The observed solubilities were examined for deviations from solubilities calculated by the equation log Sm = f log Sc + (1 - f) log Sw, where Sm is the solubility of the drug in the cosolvent-water mixture, Sc is the solubility of the drug in neat cosolvent, f is the volume fraction of cosolvent, and Sw is the solubility of the drug in water. When presented graphically, the patterns of the deviations were similar for all three drugs in mixtures of amphiprotic cosolvents (glycols, polyols, and alcohols) and water as well as nonpolar, aprotic cosolvents (dioxane, triglyme, dimethyl isosorbide) and water. The deviations were positive for phenytoin and benzocaine but negative for diazepam in mixtures of dipolar, aprotic cosolvents (dimethylsulfoxide, dimethylformamide, and dimethylacetamide) and water. The source of the deviations could not consistently be attributed to physical properties of the cosolvent-water mixtures or to alterations in the solute crystal. Similarities between the results of this study and those of previous investigations suggest that changes in the structure of the solvent play a role in the deviations from the expected solubilities.
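The log-linear expectation against which the deviations are measured follows directly from the equation quoted above. A minimal sketch; the solubility numbers are hypothetical, for illustration only:

```python
import math

def log_linear_solubility(f, s_cosolvent, s_water):
    """Yalkowsky log-linear mixture solubility:
    log Sm = f * log Sc + (1 - f) * log Sw,
    returned on the linear scale (same units as the inputs)."""
    log_sm = f * math.log10(s_cosolvent) + (1 - f) * math.log10(s_water)
    return 10 ** log_sm

# Hypothetical drug: 0.01 mg/mL in water, 10 mg/mL in neat cosolvent.
# At 50% cosolvent the model predicts the geometric mean, ~0.316 mg/mL.
sm = log_linear_solubility(0.5, 10.0, 0.01)
```

Positive or negative deviations in the paper are then simply measured solubility minus this predicted Sm on the log scale.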
A toolkit for analyzing nonlinear dynamic stochastic models easily
Uhlig, H.F.H.V.S.
1995-01-01
Often, researchers wish to analyze nonlinear dynamic discrete-time stochastic models. This paper provides a toolkit for solving such models easily, building on log-linearizing the necessary equations characterizing the equilibrium and solving for the recursive equilibrium law of motion with the method of undetermined coefficients.
Higher-order effects in asset-pricing models with long-run risks
Pohl, W.; Schmedders, K.; Wilms, Ole
2017-01-01
This paper shows that the latest generation of asset pricing models with long-run risk exhibits economically significant nonlinearities, and thus the ubiquitous Campbell--Shiller log-linearization can generate large numerical errors. These errors in turn translate to considerable errors in the model
Khuluse, S
2013-11-01
Full Text Available We compare ordinary and regression kriging models to the Poisson log-linear spatial model (Diggle et al. 1998; Diggle et al. 2007), with and without covariate information, in mapping annual average exceedance frequencies of the South African PM10 air quality…
N.A.H. van Hest; A.D. Grant; F. Smit (Filip); A. Story; J.H. Richardus (Jan Hendrik)
2008-01-01
Capture–recapture analysis has been used to evaluate infectious disease surveillance. Violation of the underlying assumptions can jeopardize the validity of the capture–recapture estimates, and a tool is needed for cross-validation. We re-examined 19 datasets of log-linear model capture–recapture…
The use of mixed logit models to reflect heterogeneity in capture-recapture studies.
Coull, B A; Agresti, A
1999-03-01
We examine issues in estimating population size N with capture-recapture models when there is variable catchability among subjects. We focus on a logistic-normal mixed model, for which the logit of the probability of capture is an additive function of a random subject parameter and a fixed sampling-occasion parameter. When the probability of capture is small or the degree of heterogeneity is large, the log-likelihood surface is relatively flat and it is difficult to obtain much information about N. We also discuss a latent class model and a log-linear model that account for heterogeneity, and show that the log-linear model has greater scope. Models assuming homogeneity provide much narrower intervals for N but are usually overly optimistic, with the actual coverage probability being much lower than the nominal level.
Economic policy optimization based on both one stochastic model and the parametric control theory
Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit
2016-06-01
A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimating the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators under the estimated nonlinear model and under its log-linearization. On the basis of the nonlinear model, parametric control problems for economic growth and for the volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).
Stone, G; Chapman, B; Lovell, D
2009-11-01
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (D(T)), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a D(T)-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes.
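The log-quadratic idea reduces to fitting an ordinary linear model in t and t², which is why confidence intervals come for free from standard least squares. A minimal sketch with synthetic "tailing" data; the survivor counts are fabricated for illustration and lie exactly on a quadratic:

```python
import numpy as np

# Synthetic log10 survivor counts showing "tailing" (slowing inactivation).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # minutes at temperature T
log_n = 8.0 - 1.2 * t + 0.1 * t ** 2           # log10 CFU/mL

# The log-quadratic model log10 N = a + b*t + c*t^2 is linear in its
# coefficients, so ordinary least squares (polyfit) suffices.
c, b, a = np.polyfit(t, log_n, 2)

# A D(T)-like value from the initial slope: at t = 0 the instantaneous
# decimal reduction time is -1/b, even though the curve later flattens.
d_like = -1.0 / b
```

The tailing shows up as c > 0; a classical log-linear fit to the same data would be forced to average the early and late slopes.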
Keklik, Nene M; Demirci, Ali; Puri, Virendra M; Heinemann, Paul H
2012-02-01
Pulsed UV light inactivation of Salmonella Typhimurium on unpackaged and vacuum-packaged chicken breast, Listeria monocytogenes on unpackaged and vacuum-packaged chicken frankfurters, and Salmonella Enteritidis on shell eggs was explained by log-linear and Weibull models using inactivation data from previous studies. This study demonstrated that the survival curves of Salmonella Typhimurium and L. monocytogenes were nonlinear exhibiting concavity. The Weibull model was more successful than the log-linear model in estimating the inactivations for all poultry products evaluated, except for Salmonella Enteritidis on shell eggs, for which the survival curve was sigmoidal rather than concave, and the use of the Weibull model resulted in slightly better fit than the log-linear model. The analyses for the goodness of fit and performance of the Weibull model produced root mean square errors of 0.059 to 0.824, percent root mean square errors of 3.105 to 21.182, determination coefficients of 0.747 to 0.989, slopes of 0.842 to 1.042, bias factor values of 0.505 to 1.309, and accuracy factor values of 1.263 to 6.874. Overall, this study suggests that the survival curves of pathogens on poultry products exposed to pulsed UV light are nonlinear and that the Weibull model may generally be a useful tool to describe the inactivation patterns for pathogenic microorganisms affiliated with poultry products.
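A common parameterization of the Weibull survival curve used in such comparisons is log10(N/N0) = -(t/δ)^p, which recovers the classical log-linear model when p = 1. A minimal sketch; the parameter values are illustrative, not fitted to the study's data:

```python
def weibull_log_reduction(t, delta, p):
    """Weibull inactivation model on the log10 scale:
    log10(N/N0) = -(t/delta)**p.
    p < 1 gives an upward-concave ("tailing") curve; p = 1 recovers
    classical log-linear kinetics with decimal reduction time D = delta."""
    return -((t / delta) ** p)

# With p = 1 and delta = 2 min, a 4-min treatment gives a 2-log reduction.
two_log = weibull_log_reduction(4.0, 2.0, 1.0)   # -2.0
# Regardless of p, treating for t = delta always gives exactly 1 log.
one_log = weibull_log_reduction(2.0, 2.0, 0.5)   # -1.0
```

Goodness-of-fit statistics like those reported in the abstract (RMSE, bias and accuracy factors) are then computed between these predicted log reductions and the observed ones.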
Edgar F. Vargas
2007-01-01
Full Text Available The deviations observed in the solubility of ibuprofen (IBP) and naproxen (NAP) in propylene glycol (PG) + water (W) cosolvent mixtures with respect to the logarithmic-linear model proposed by Yalkowsky have been analyzed at 25.00 ± 0.05 ºC. Negative deviations were obtained at all cosolvent compositions for both drugs; they were greater for IBP. Another treatment, based on Gibbs free energy relationships, was also employed, showing an apparent chameleonic hydrophobicity effect: at low PG proportions NAP is more hydrophobic, whereas at high PG proportions IBP is more hydrophobic. The results are discussed in terms of solute-solvent and solvent-solvent interactions.
Bimodal Log-linear Regression for Fusion of Audio and Visual Features
Rudovic, Ognjen; Petridis, Stavros; Pantic, Maja
2013-01-01
One of the most commonly used audiovisual fusion approaches is feature-level fusion where the audio and visual features are concatenated. Although this approach has been successfully used in several applications, it does not take into account interactions between the features, which can be a problem
Andersen, Torben G.; Bollerslev, Tim; Huang, Xin
Building on realized variance and bi-power variation measures constructed from high-frequency financial prices, we propose a simple reduced-form framework for effectively incorporating intraday data into the modeling of daily return volatility. We decompose the total daily return variability … of an ACH model for the time-varying jump intensities, coupled with a relatively simple log-linear structure for the jump sizes. Lastly, we discuss how the resulting reduced-form model structure for each of the three components may be used in the construction of out-of-sample forecasts for the total return …
Yohei Ban
2008-12-01
Full Text Available For 2 × 2 × K contingency tables, Tomizawa considered a Shannon-entropy-type measure to represent the degree of departure from a log-linear model of no three-factor interaction (the NOTFI model). This paper proposes a generalization of Tomizawa's measure for 2 × 2 × K tables. The proposed measure is expressed using the Patil-Taillie diversity index or the Cressie-Read power divergence, and includes Tomizawa's measure as a special case. The proposed measure would be useful for comparing the degrees of departure from the NOTFI model in several tables.
Estimation of Models in a Rasch Family for Polytomous Items and Multiple Latent Variables
Carolyn J. Anderson
2007-02-01
Full Text Available The Rasch family of models considered in this paper includes models for polytomous items and multiple correlated latent traits, as well as for dichotomous items and a single latent variable. An R package is described that computes estimates of parameters and robust standard errors of a class of log-linear-by-linear association (LLLA models, which are derived from a Rasch family of models. The LLLA models are special cases of log-linear models with bivariate interactions. Maximum likelihood estimation of LLLA models in this form is limited to relatively small problems; however, pseudo-likelihood estimation overcomes this limitation. Maximizing the pseudo-likelihood function is achieved by maximizing the likelihood of a single conditional multinomial logistic regression model. The parameter estimates are asymptotically normal and consistent. Based on our simulation studies, the pseudo-likelihood and maximum likelihood estimates of the parameters of LLLA models are nearly identical and the loss of efficiency is negligible. Recovery of parameters of Rasch models fit to simulated data is excellent.
Age, period and cohort effects on suicide mortality in Russia, 1956-2005.
Jukkala, Tanya; Stickley, Andrew; Mäkinen, Ilkka Henrik; Baburin, Aleksei; Sparén, Pär
2017-03-07
Russian suicide mortality rates changed rapidly over the second half of the twentieth century. This study attempts to differentiate between underlying period and cohort effects in relation to the changes in suicide mortality in Russia between 1956 and 2005. Sex- and age-specific suicide mortality data were analyzed using an age-period-cohort (APC) approach. Descriptive analyses and APC modeling with log-linear Poisson regression were performed. Strong period effects were observed for the years during and after Gorbachev's political reforms (including the anti-alcohol campaign) and for those following the break-up of the Soviet Union. After mutual adjustment, the cohort- and period-specific relative risk estimates for suicide revealed differing underlying processes. While the estimated period effects had an overall positive trend, cohort-specific developments indicated a positive trend for the male cohorts born between 1891 and 1931 and for the female cohorts born between 1891 and 1911, but a negative trend for subsequent cohorts. Our results indicate that the specific life experiences of cohorts may be important for variations in suicide mortality across time, in addition to more immediate effects of changes in the social environment.
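The log-linear Poisson structure underlying APC modeling can be sketched directly. The intercept and effect values below are hypothetical (not estimates from the Russian data) and illustrate only how a period effect on the log scale scales the fitted rate multiplicatively.

```python
import math

def apc_rate(age_eff, period_eff, cohort_eff, intercept=-9.0):
    """Log-linear APC model: log(rate) = mu + age + period + cohort.
    Returns the fitted rate per 100,000 person-years."""
    log_rate = intercept + age_eff + period_eff + cohort_eff
    return math.exp(log_rate) * 100_000

# Hypothetical effect estimates on the log scale (illustration only)
baseline = apc_rate(age_eff=0.0, period_eff=0.0, cohort_eff=0.0)
reform_era = apc_rate(age_eff=0.0, period_eff=-0.4, cohort_eff=0.0)

# A period effect of -0.4 multiplies the rate by exp(-0.4), about 0.67
assert round(reform_era / baseline, 2) == round(math.exp(-0.4), 2)
```

In practice the effects are estimated jointly by Poisson regression on sex- and age-specific death counts with person-years as offset, which is what the APC modeling above refers to.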
Modeling the Geographic Consequence and Pattern of Dengue Fever Transmission in Thailand.
Bekoe, Collins; Pansombut, Tatdow; Riyapan, Pakwan; Kakchapati, Sampurna; Phon-On, Aniruth
2017-05-04
Dengue fever is an infectious disease that remains a public health problem in Thailand. This study considers in detail the geographic, seasonal, and temporal patterns of dengue fever transmission among the 76 provinces of Thailand from 2003 to 2015, in a cross-sectional design. The data for the study came from the Department of Disease Control under the Bureau of Epidemiology, Thailand. The effects of quarter and location on the transmission of dengue were modeled using an alternative additive log-linear model. The model fitted well, as illustrated by the residual plots. The model also showed that dengue fever incidence is high in the second quarter of every year, from May to August. There was evidence of an increasing annual trend in dengue from 2003 to 2015, and the distribution of dengue fever differed within and between provinces. The areas of highest risk were the central and southern regions of Thailand. The log-linear model provided a simple means of modeling dengue fever transmission, and the results are important for understanding the geographic distribution of dengue fever patterns.
Modelling BSE trend over time in Europe, a risk assessment perspective.
Ducrot, Christian; Sala, Carole; Ru, Giuseppe; de Koeijer, Aline; Sheridan, Hazel; Saegerman, Claude; Selhorst, Thomas; Arnold, Mark; Polak, Miroslaw P; Calavas, Didier
2010-06-01
BSE is a zoonotic disease that caused the emergence of variant Creutzfeldt-Jakob disease in the mid-1990s. The trend of the BSE epidemic in seven European countries was assessed and compared using Age-Period-Cohort and Reproduction Ratio modelling applied to surveillance data from 2001-2007. A strong decline in BSE risk was observed for all countries that applied control measures during the 1990s, starting at different points in time in the different countries. Results were compared with the type and date of the BSE control measures implemented between 1990 and 2001 in each country. They show that a ban on the feeding of meat and bone meal (MBM) to cattle alone was not sufficient to eliminate BSE; the fading out of the epidemic started shortly after the complementary measures targeted at controlling the risk in MBM. Given the long incubation period, it is still too early to estimate the additional effect of the ban on the feeding of animal protein to all farm animals that started in 2001. These results provide new insights into the risk assessment of BSE for cattle and humans, which will be especially useful in the context of possibly relaxing BSE surveillance and control measures.
Raimondo, Sandy; Vivian, Deborah N; Barron, Mace G
2009-10-01
Ecotoxicological models generally have large data requirements and are frequently based on existing information from diverse sources. Standardizing data for toxicological models may be necessary to reduce extraneous variation and to ensure models reflect intrinsic relationships. However, the extent to which data standardization is necessary remains unclear, particularly when data transformations are used in model development. An extensive acute toxicity database was compiled for aquatic species to comprehensively assess the variation associated with acute toxicity test type (e.g., flow-through, static), reporting of concentrations as nominal or measured, and organism life stage. Three approaches were used to assess the influence of these factors on log-transformed acute toxicity: toxicity ratios, log-linear models of factor groups, and comparison of interspecies correlation estimation (ICE) models developed using either standardized test types or reported concentration types. Median ratios were generally less than 2.0, the slopes of log-linear models were approximately one for well-represented comparisons, and ICE models developed using data from standardized test types or reported concentrations did not differ substantially. These results indicate that standardizing test data by acute test type, reported concentration type, or life stage may not be critical for developing ecotoxicological models using large datasets of log-transformed values.
Wang, Tianyou
2008-01-01
Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…
Mitra, Anindita; Li, Y.-F.; Shimizu, T.; Klämpfl, Tobias; Zimmermann, J. L.; Morfill, G. E.
2012-10-01
Cold Atmospheric Plasma (CAP) is a fast, low-cost, simple, easy-to-handle technology for biological applications. Our group has developed a number of different CAP devices using microwave technology and the surface micro discharge (SMD) technology. In this study, the FlatPlaSter2.0 device was used at different time intervals (0.5 to 5 min) for microbial inactivation. There is a continuous demand for deactivation of microorganisms associated with raw foods/seeds without losing their properties. This research focuses on the kinetics of CAP-induced inactivation of naturally growing surface microorganisms on seeds. The data were assessed with log-linear and non-log-linear models for survivor curves as a function of time. The Weibull model showed the best fitting performance for the data; no shoulder or tail was observed. The models focus on the number of log-cycle reductions rather than on classical D-values, with statistical measurements. The viability of seeds was not affected for CAP treatment times up to 3 min with our device. The optimum result was observed at 1 min, with the percentage of germination increasing from 60.83% to 89.16% compared with the control. This result suggests the advantage and promising role of CAP in the food industry.
The gRbase package for graphical modelling in R
Højsgaard, Søren; Dethlefsen, Claus
We have developed a package, called gRbase, consisting of a number of classes and associated methods to support the analysis of data using graphical models. It is developed for the open source language R and is available for several platforms. The package is intended to be widely extendible and flexible, so that package developers may implement further types of graphical models using the available methods. gRbase contains methods for representing data and for specifying models using a formal language, and is linked to dynamicGraph, an interactive graphical user interface for manipulating graphs. We show how these building blocks can be combined and integrated with inference engines in the special case of hierarchical log-linear models (undirected models).
Mihaela Simionescu
2014-12-01
Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts' forecasts are utilized as prior information; for Romania, these predictions are provided by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis, and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models such as VAR, Bayesian VAR, simultaneous equations, dynamic, and log-linear models. The Bayesian combinations that used experts' predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, outperforming also zero-weight and equal-weight predictions and naïve forecasts.
RM-structure alignment based statistical machine translation model
Sun Jiadong; Zhao Tiejun
2008-01-01
A novel model based on structure alignments is proposed for statistical machine translation in this paper. Meta-structure and sequence of meta-structure for a parse tree are defined. During the translation process, a parse tree is decomposed to deal with structure divergence, and the alignments can be constructed at different levels of recombination of meta-structure (RM). This method can perform structure mapping across the sub-tree structures between languages. As a result, we get not only the translation for the target language but also the sequence of meta-structure of its parse tree at the same time. Experiments show that the model, in the framework of the log-linear model, has better generative ability and significantly outperforms Pharaoh, a phrase-based system.
Baysal, Ayse Handan; Molva, Celenk; Unluturk, Sevcan
2013-09-16
In the present study, the effect of short wave ultraviolet light (UV-C) on the inactivation of Alicyclobacillus acidoterrestris DSM 3922 spores in commercial pasteurized white grape and apple juices was investigated. The inactivation of A. acidoterrestris spores in juices was examined by evaluating the effects of UV light intensity (1.31, 0.71 and 0.38 mW/cm²) and exposure time (0, 3, 5, 7, 10, 12 and 15 min) at constant depth (0.15 cm). The best reduction (5.5-log) was achieved in grape juice when the UV intensity was 1.31 mW/cm². The maximum inactivation was approximately 2-log CFU/mL in apple juice under the same conditions. The results showed that first-order kinetics were not suitable for the estimation of spore inactivation in grape juice treated with UV-light. Since tailing was observed in the survival curves, the log-linear plus tail and Weibull models were compared. The results showed that the log-linear plus tail model was satisfactorily fitted to estimate the reductions. As a non-thermal technology, UV-C treatment could be an alternative to thermal treatment for grape juices or combined with other preservation methods for the pasteurization of apple juice.
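A minimal sketch of a log-linear-plus-tail survival model of the kind compared above (the Geeraerd form, with a resistant subpopulation N_res) is below. The parameter values are hypothetical, not the fitted estimates from this study; they only illustrate why log reductions plateau at log10(N0/Nres) when tailing is present.

```python
import math

def log_linear_tail(t, n0, n_res, k_max):
    """Geeraerd-style log-linear-plus-tail model: survivors decay
    first-order toward a resistant subpopulation N_res."""
    return (n0 - n_res) * math.exp(-k_max * t) + n_res

# Hypothetical parameters (CFU/mL and 1/min), for illustration only
n0, n_res, k_max = 1e6, 1e1, 1.2
reduction_15min = math.log10(n0 / log_linear_tail(15, n0, n_res, k_max))

# The curve flattens at the tail: reductions cannot exceed log10(N0/Nres)
assert reduction_15min <= math.log10(n0 / n_res)
```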
Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis
2009-01-01
High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages, in an axisymmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum: the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 °C and the pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 °C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetic model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful for evaluating models fitted to new C. botulinum inactivation data at varying pressures and temperatures, as an aid for regulatory filing of the technology as well as in process and equipment design.
Li, Yi; Chen, Yuren
2016-01-01
To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time. PMID:28042851
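A multinomial log-linear model of the kind described above assigns each response category a probability proportional to the exponential of its linear score (the softmax form). The scores and the three-way split of perception-response time below are hypothetical, for illustration only.

```python
import math

def softmax_probs(scores):
    """Multinomial log-linear model: class probabilities are proportional
    to exp(linear score), normalized over the response categories."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear scores for three perception-response-time classes
# (short / medium / long), built from curve and vision covariates
scores = [1.2, 0.4, -0.5]
probs = softmax_probs(scores)
predicted_class = probs.index(max(probs))   # index of most likely class
assert abs(sum(probs) - 1.0) < 1e-12
```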
Settlement prediction model of slurry suspension based on sedimentation rate attenuation
Shuai-jie GUO
2012-03-01
Full Text Available This paper introduces a slurry suspension settlement prediction model for cohesive sediment in a still water environment. With no sediment input and in a still water environment, the control forces between settling particles differ significantly as the sedimentation rate attenuates, and the settlement process comprises three stages according to sedimentation rate attenuation: the free sedimentation stage, the log-linear attenuation stage, and the stable consolidation stage. Settlement equations relating sedimentation height and time were established based on the sedimentation rate attenuation properties of the different sedimentation stages. Finally, a slurry suspension settlement prediction model based on slurry parameters was set up, with the model parameters determined from the basic parameters of the slurry. The results of the settlement prediction model show good agreement with those of the settlement column experiment and reflect the main characteristics of cohesive sediment. The model can be applied to the prediction of cohesive soil settlement in still water environments.
Burant, Aniela; Lowry, Gregory V; Karamalidis, Athanasios K
2016-02-01
Treatment and reuse of brines, produced from energy extraction activities, requires aqueous solubility data for organic compounds in saline solutions. The presence of salts decreases the aqueous solubility of organic compounds (i.e. salting-out effect) and can be modeled using the Setschenow Equation, the validity of which has not been assessed in high salt concentrations. In this study, we used solid-phase microextraction to determine Setschenow constants for selected organic compounds in aqueous solutions up to 2-5 M NaCl, 1.5-2 M CaCl2, and in Na-Ca binary electrolyte solutions to assess additivity of the constants. These compounds exhibited log-linear behavior up to these high NaCl concentrations. Log-linear decreases in solubility with increasing salt concentration were observed up to 1.5-2 M CaCl2 for all compounds, and added to a sparse database of CaCl2 Setschenow constants. Setschenow constants were additive in binary electrolyte mixtures. New models to predict CaCl2 and KCl Setschenow constants from NaCl Setschenow constants were developed, which successfully predicted the solubility of the compounds measured in this study. Overall, data show that the Setschenow Equation is valid for a wide range of salinity conditions typically found in energy-related technologies.
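The Setschenow Equation and the additivity of its constants in binary electrolytes, both discussed above, can be sketched directly. The constants and solubility below are hypothetical, not the measured values from this study.

```python
def setschenow_solubility(s0, k_s, c_salt):
    """Setschenow equation: log10(S0/S) = K_s * C_salt,
    so the salted-out solubility is S = S0 * 10**(-K_s * C_salt)."""
    return s0 * 10 ** (-k_s * c_salt)

# Hypothetical values: Setschenow constants in L/mol, concentrations in mol/L
s0 = 1.0e-3            # aqueous solubility with no salt
k_na, k_ca = 0.25, 0.30

# Additivity in a binary Na-Ca electrolyte: the exponents simply add,
# so applying the two salts jointly or stepwise gives the same solubility
s_mix = s0 * 10 ** (-(k_na * 1.0 + k_ca * 0.5))
s_stepwise = setschenow_solubility(setschenow_solubility(s0, k_na, 1.0), k_ca, 0.5)
assert abs(s_mix - s_stepwise) < 1e-15
```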
The Biplot as a diagnostic tool of local dependence in latent class models. A medical application.
Sepúlveda, R; Vicente-Villardón, J L; Galindo, M P
2008-05-20
Latent class models (LCMs) can be used to assess diagnostic test performance when no reference test (a gold standard) is available, considering two latent classes representing disease or non-disease status. One of the basic assumptions in such models is that of local or conditional independence: all indicator variables (tests) are statistically independent within each latent class. However, in practice this assumption is often violated; hence, the two-class LCM fits the data poorly. In this paper, we propose the use of Biplot methods to identify the conditional dependence between pairs of manifest variables within each latent class. Additionally, we propose incorporating such dependence in the corresponding latent class using the log-linear formulation of the model.
Lopez Alan D
2002-12-01
Full Text Available Abstract Background The Global Burden of Disease 2000 (GBD 2000) study starts from an analysis of the overall mortality envelope in order to ensure that the cause-specific estimates add up to the total all-cause mortality by age and sex. For regions where information on the distribution of cancer deaths is not available, a site-specific survival model was developed to estimate the distribution of cancer deaths by site. Methods An age-period-cohort model of cancer survival was developed based on data from the Surveillance, Epidemiology, and End Results (SEER) program. The model was further adjusted for the level of economic development in each region. Combined with the available incidence data, cancer death distributions were estimated, and the model estimates were validated against vital registration data from regions other than the United States. Results Comparison with the cancer mortality distribution from vital registration confirmed the validity of this approach. The model also yielded a cancer mortality distribution consistent with estimates based on regional cancer registries. There was significant variation in relative interval survival across regions, in particular for cancers of the bladder, breast, melanoma of the skin, prostate, and haematological malignancies. Moderate variations were observed for cancers of the colon, rectum, and uterus. Cancers with very poor prognosis, such as liver, lung, and pancreas cancers, showed very small variations across the regions. Conclusions The survival model presented here offers a new approach to the calculation of the distribution of deaths for areas where mortality data are either scarce or unavailable.
Andrew F Brouwer
Full Text Available Differences in prognosis between HPV-positive and HPV-negative oral (oropharyngeal and oral cavity) squamous cell carcinomas (OSCCs) and the increasing incidence of HPV-related cancers have spurred interest in demographic and temporal trends in OSCC incidence. We leverage multistage clonal expansion (MSCE) models coupled with age-period-cohort (APC) epidemiological models to analyze OSCC data in the SEER cancer registry (1973-2012). MSCE models are based on the initiation-promotion-malignant-conversion paradigm in carcinogenesis and allow for interpretation of trends in terms of biological mechanisms. APC models seek to differentiate between the temporal effects of age, period, and birth cohort on cancer risk. Previous studies have looked at the effect of period and cohort on tumor initiation, and we extend this to compare model fits of period and cohort effects on each of the tumor initiation, promotion, and malignant conversion rates. HPV-related, HPV-unrelated except oral tongue, and HPV-unrelated oral tongue sites are best described by placing period and cohort effects on the initiation rate. HPV-related and non-oral-tongue HPV-unrelated cancers have similar promotion rates, suggesting similar tumorigenesis dynamics once initiated. Estimates of promotion rates at oral tongue sites are lower, corresponding to a longer sojourn time; this finding is consistent with the hypothesis of an etiology distinct from HPV or alcohol and tobacco use. Finally, for the three subsite groups, men have higher initiation rates than women of the same race, and black people have higher promotion rates than white people of the same sex. These differences explain part of the racial and sex differences in OSCC incidence.
Futao Guo; Guangyu Wang; John L Innes; Xiangqing Ma; Long Sun; Haiqing Hu
2015-01-01
The purpose of this study was to determine a suitable model for investigating the effects of climate factors on the area burned by forest fire in the Tahe forest region, Daxing'an Mountains, in northeast China. The response variables were the area burned by lightning-caused fire, by human-caused fire, and in total. The predictor variables were nine climate variables collected from the local weather station. Three regression models were utilized: multiple linear regression, a log-linear model (log-transformation of both response and predictor variables), and a gamma generalized linear model. The goodness-of-fit of the models was compared based on model fitting statistics such as R2, AIC, and RMSE. The results revealed that the gamma generalized linear model was generally superior to both the multiple linear regression model and the log-linear model for fitting the fire data. Further, the best models were selected based on the criterion that the climate variables be statistically significant at α = 0.05. The best gamma models indicated that maximum wind speed, precipitation, and the number of days with rainfall greater than 0.1 mm had significant impacts on the area burned by lightning-caused fire, while mean temperature and minimum relative humidity were the main drivers of the area burned by human-caused fire. Overall, the total area burned by forest fire was significantly influenced by the number of days with rainfall greater than 0.1 mm and by minimum relative humidity, indicating that the moisture condition of forest stands determines the area burned by forest fire.
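The gamma GLM preferred above uses a log link, so each climate coefficient acts multiplicatively on the expected burned area. The coefficients below are hypothetical, purely to illustrate that mean structure; real estimates would come from fitting the GLM to the fire records.

```python
import math

def gamma_glm_mean(coeffs, features):
    """Gamma GLM with a log link: the expected burned area is
    exp(intercept + sum of coefficient * climate covariate)."""
    intercept, betas = coeffs[0], coeffs[1:]
    eta = intercept + sum(b * x for b, x in zip(betas, features))
    return math.exp(eta)

# Hypothetical coefficients: intercept, max wind speed (m/s),
# days with rainfall > 0.1 mm (illustration only)
coeffs = [2.0, 0.15, -0.08]
area_windy = gamma_glm_mean(coeffs, [12.0, 30.0])
area_calm = gamma_glm_mean(coeffs, [4.0, 30.0])

# A positive wind coefficient implies more expected area burned
assert area_windy > area_calm
```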
Antonio Bevilacqua
2015-10-01
Full Text Available Predictive Microbiology (PM) deals with the mathematical modeling of microorganisms in foods for different applications (challenge tests, evaluation of microbiological shelf life, prediction of the microbiological hazards connected with foods, etc.). An interesting and important part of PM focuses on the use of primary functions to fit death-kinetics data for spoilage, pathogenic, and useful microorganisms following thermal or non-conventional treatments; these functions can also be used to model survivors throughout storage. The main topic of this review is the most important death models (negative Gompertz, log-linear, shoulder/tail, Weibull, Weibull+tail, re-parameterized Weibull, biphasic approach, etc.), with the aim of pinpointing the benefits and the limits of each model; in addition, the last section addresses the most important tools for using death kinetics and predictive microbiology in a user-friendly way.
The Strehler-Mildvan correlation from the perspective of a two-process vitality model.
Li, Ting; Anderson, James J
2015-01-01
The Strehler and Mildvan (SM) general theory of ageing and mortality provides a mechanism-based explanation of Gompertz's law and predicts a log-linear relationship between the two Gompertz coefficients, known as the SM correlation. While the SM correlation is supported by data from developed countries before the second half of the twentieth century, the recent breakdown of the correlation pattern in these countries has prompted demographers to conclude that SM theory needs to be reassessed. In this paper we use a newly developed two-process vitality model to explain the SM correlation and its breakdown in terms of asynchronous trends in acute (extrinsic) and chronic (intrinsic) mortality factors. We propose that the mortality change in the first half of the twentieth century is largely determined by the elimination of immediate hazards to death, whereas the mortality change in the second half is primarily driven by the slowdown of the deterioration rate of intrinsic survival capacity.
Ardkaew, Jurairat; Tongkumchum, Phattrawan
2011-09-01
This study aims to identify the incidence patterns of the most common infectious diseases, including acute diarrhea, pyrexia of unknown origin, hemorrhagic conjunctivitis, and pneumonia, in the 7 provinces of northeastern Thailand, based on individual hospital case records of infectious disease routinely reported from 1999 to 2004. Log-linear regression analysis with age group, season, and district as factors was used, with data from all 4 diseases as outcomes combined into 1 model. The results confirmed that the highest incidence of each infectious disease occurred in children under 5 years of age, with particularly high rates for diarrhea. In addition, the burden of pyrexia of unknown origin was found to be lower in districts bordering Laos, and its incidence rates were higher from April to June in 1999-2001 and 2004 and from July to September in 2002-2003. Higher incidence rates also occurred in most rural districts of Loei and Udon Thani provinces.
Shen, Xing-Rong; Feng, Rui; Chai, Jing; Cheng, Jing; Wang, De-Bin
2014-01-01
Large scale secular registry or surveillance systems have been accumulating vast data that allow mathematical modeling of cancer incidence and mortality rates. Most contemporary models in this regard use time series and APC (age-period-cohort) methods and focus primarily on predicting or analyzing cancer epidemiology, with little attention paid to implications for designing cancer registry, surveillance, or evaluation initiatives. This research models age-specific cancer incidence rates using logistic growth equations and explores their performance under different scenarios of data completeness, in the hope of deriving clues for reshaping relevant data collection. The study used the China Cancer Registry Report 2012 as the data source. It employed 3-parameter logistic growth equations and modeled the age-specific incidence rates of all cancers and the top 10 cancers presented in the registry report. The study performed 3 types of modeling, namely full age-span fitting, multiple 5-year-segment fitting, and single-segment fitting. Measurement of model performance adopted an adjusted goodness of fit that combines the sum of squared residuals and relative errors. Both model simulation and performance evaluation utilized self-developed algorithms programmed in the C# language using MS Visual Studio 2008. For models built upon full age-span data, predicted age-specific cancer incidence rates fitted very well with observed values for most (except cervical and breast) cancers, with estimated goodness of fit (R) being over 0.96. For a given cancer, the R value of the logistic growth model derived using observed data from urban residents was greater than or at least equal to that of the same model built on data from rural people. For models based on multiple 5-year-segment data, the Rs remained fairly high (over 0.89) until three-fourths of the data segments were excluded. For models using a fixed-length single segment of observed data, the older the age covered by the corresponding
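A 3-parameter logistic growth equation of the form used above can be sketched as follows. The plateau, growth rate, and midpoint age are hypothetical values, not estimates from the registry data; they illustrate the sigmoidal rise of age-specific incidence toward a plateau K.

```python
import math

def logistic_incidence(age, k, r, age0):
    """Three-parameter logistic growth curve for age-specific incidence:
    the rate rises sigmoidally with age toward the plateau K, reaching
    K/2 at the midpoint age age0."""
    return k / (1 + math.exp(-r * (age - age0)))

# Hypothetical parameters: plateau (per 100,000), growth rate, midpoint age
k, r, age0 = 400.0, 0.12, 60.0
rates = [logistic_incidence(a, k, r, age0) for a in (30, 60, 90)]

assert rates[1] == k / 2          # midpoint age gives half the plateau
assert rates[0] < rates[1] < rates[2]
```

Goodness of fit against observed age-specific rates would then be summarized with a combination of squared residuals and relative errors, as the study describes.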
Tetsuya Sakashita
Full Text Available BACKGROUND: Clonogenicity gives important information about the cellular reproductive potential following ionizing irradiation, but an abortive colony that fails to continue to grow remains poorly characterized. It was recently reported that the fraction of abortive colonies increases with increasing dose. Thus, we set out to investigate the production kinetics of abortive colonies using a model of branching processes. METHODOLOGY/PRINCIPAL FINDINGS: We first plotted the experimentally determined colony size distribution of abortive colonies in irradiated normal human fibroblasts, and found a linear relationship on the log-linear or log-log plot. By applying the simple model of branching processes to the linear relationship, we found persistent reproductive cell death (RCD) over several generations following irradiation. To verify the estimated probability of RCD, the abortive colony size distribution (≤ 15 cells) and the surviving fraction were simulated by a Monte Carlo computational approach for colony expansion. Parameters estimated from the log-log fit demonstrated better performance in both simulations than those from the log-linear fit. Radiation-induced RCD, i.e. excess probability, lasted over 16 generations and mainly consisted of two components, in the early (<3 generations) and late phases. Intriguingly, the survival curve was sensitive to the excess probability over 5 generations, whereas the abortive colony size distribution was robust against it. These results suggest that, whereas short-term RCD is critical to the abortive colony size distribution, long-lasting RCD is important for the dose response of the surviving fraction. CONCLUSIONS/SIGNIFICANCE: Our present model provides a single framework for understanding the behavior of primary cell colonies in culture following irradiation.
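The branching-process idea can be illustrated with a toy Galton-Watson simulation. This is not the authors' model: the death probability, generation count, and size thresholds below are invented for the sketch.

```python
import random

random.seed(42)

def grow_colony(p_death, generations=16, cap=10_000):
    # Galton-Watson sketch: each generation, every cell either dies with
    # probability p_death (reproductive cell death) or divides into two cells
    cells, peak = 1, 1
    for _ in range(generations):
        survivors = sum(1 for _ in range(cells) if random.random() > p_death)
        cells = 2 * survivors
        peak = max(peak, cells)
        if cells == 0 or cells > cap:   # extinct, or clearly clonogenic
            break
    return cells, peak

# Hypothetical per-generation death probability for an irradiated population
results = [grow_colony(p_death=0.30) for _ in range(2000)]
abortive_sizes = [peak for final, peak in results if final == 0]
surviving_fraction = sum(final > 0 for final, _ in results) / len(results)
```

With a constant p_death of 0.30 the extinction probability solves q = 0.3 + 0.7q², giving q ≈ 0.43, so roughly 43% of simulated colonies abort; dose dependence would enter through p_death.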
Update of predictions of mortality from pleural mesothelioma in the Netherlands
O. Segura; A. Burdorf (Alex); C.W.N. Looman (Caspar)
2003-01-01
AIMS: To predict the expected number of pleural mesothelioma deaths in the Netherlands from 2000 to 2028 and to study the effect of the main uncertainties in the modelling technique. METHODS: Through an age-period-cohort modelling technique, age-specific mortality rates and
Leif E. Peterson
1997-11-01
Full Text Available A computer program for multifactor relative risks, confidence limits, and tests of hypotheses using regression coefficients and a variance-covariance matrix obtained from a previous additive or multiplicative regression analysis is described in detail. Data used by the program can be stored and input from an external disk-file or entered via the keyboard. The output contains a list of the input data, point estimates of single or joint effects, confidence intervals and tests of hypotheses based on a minimum modified chi-square statistic. Availability of the program is also discussed.
Gender differences in the trend of colorectal cancer incidence in Singapore, 1968-2002.
Kok, IM de; Wong, C.S.; Chia, K.S.; Sim, X.; Tan, C.S.; Kiemeney, L.A.L.M.; Verkooijen, H.M.
2008-01-01
BACKGROUND AND AIMS: Over the past decades, the incidence of colorectal cancer has increased sharply in Singapore. In this population-based study, we describe changes in colorectal cancer incidence in Singapore and explore the reasons behind these changes through age-period-cohort (APC) modeling.
Second-birth rates in Denmark from 1980 to 1994
Strandberg-Larsen, Katrine; Knudsen, Lisbeth B.; Thygesen, Lau Caspar
2010-01-01
A statistical age-period-cohort model was used to depict second-birth rates and the spacing between the first and second child in Denmark, including 524,316 one-child mothers who gave birth to 296,923 second children in 1980-1994. The spacing between the first and second child varies according...
Kim, Do-Kyun; Kim, Soo-Ji; Kang, Dong-Hyun
2017-01-01
In order to assure the microbial safety of drinking water, UVC-LED treatment has emerged as a possible technology to replace conventional low-pressure (LP) mercury vapor UV lamps. In this investigation, inactivation of Human Enteric Virus (HuEV) surrogates with UVC-LEDs was investigated in a water disinfection system, and kinetic model equations were applied to depict the surviving infectivities of the viruses. MS2, Qβ, and ΦX 174 bacteriophages were inoculated into sterile distilled water (DW) and irradiated with UVC-LED printed circuit boards (PCBs) (266 nm and 279 nm) or conventional LP lamps. Infectivities of bacteriophages were effectively reduced by up to 7 log after 9 mJ/cm² treatment for MS2 and Qβ, and 1 mJ/cm² for ΦX 174. UVC-LEDs showed a superior viral inactivation effect compared to conventional LP lamps at the same dose (1 mJ/cm²). Non-log-linear plot patterns were observed, so Weibull, Biphasic, Log linear-tail, and Weibull-tail model equations were used to fit the virus survival curves. For MS2 and Qβ, the Weibull and Biphasic models fit well, with R² values of approximately 0.97-0.99, and the Weibull-tail equation accurately described survival of ΦX 174. The level of UV susceptibility among coliphages, measured by the inactivation rate constant k, was statistically different (ΦX 174 (ssDNA) > MS2, Qβ (ssRNA)), indicating that sensitivity to UV was attributable to the viral genetic material.
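The advantage of a tailing model over a straight log-linear fit can be sketched numerically. The parameter values below are invented for the illustration, not taken from the study.

```python
def weibull_log_survival(dose, delta, p):
    # Weibull inactivation model: log10(N/N0) = -(dose/delta)**p
    return -((dose / delta) ** p)

doses = [1, 2, 3, 4, 5, 6, 7, 8, 9]
obs = [weibull_log_survival(d, 1.5, 0.6) for d in doses]   # p < 1 gives tailing

# Best log-linear model through the origin: log10(N/N0) = -k * dose
k = -sum(d * y for d, y in zip(doses, obs)) / sum(d * d for d in doses)
sse_linear = sum((-k * d - y) ** 2 for d, y in zip(doses, obs))

# Grid-search Weibull fit over (delta, p)
best, sse_weibull = None, float("inf")
for delta in (x / 10 for x in range(5, 31)):
    for p in (x / 10 for x in range(3, 21)):
        sse = sum((weibull_log_survival(d, delta, p) - y) ** 2
                  for d, y in zip(doses, obs))
        if sse < sse_weibull:
            best, sse_weibull = (delta, p), sse
```

On this concave synthetic curve the Weibull grid fit is exact while the best log-linear slope leaves a clear residual, mirroring the model-comparison logic of the study.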
Syllable language models for Mandarin speech recognition: exploiting character language models.
Liu, Xunying; Hieronymus, James L; Gales, Mark J F; Woodland, Philip C
2013-01-01
Mandarin Chinese is based on characters which are syllabic in nature and morphological in meaning. All spoken languages have syllabiotactic rules which govern the construction of syllables and their allowed sequences. These constraints are not as restrictive as those learned from word sequences, but they can provide additional useful linguistic information. Hence, it is possible to improve speech recognition performance by appropriately combining these two types of constraints. For the Chinese language considered in this paper, character-level language models (LMs) can be used as a first approximation to allowed syllable sequences. To test this idea, word- and character-level n-gram LMs were trained on 2.8 billion words (equivalent to 4.3 billion characters) of texts from a wide collection of text sources. Both hypothesis- and model-based combination techniques were investigated to combine word- and character-level LMs. Significant character error rate reductions of up to 7.3% relative were obtained on a state-of-the-art Mandarin Chinese broadcast audio recognition task using an adapted history-dependent multi-level LM that performs a log-linear combination of character- and word-level LMs. This supports the hypothesis that character or syllable sequence models are useful for improving Mandarin speech recognition performance.
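A log-linear combination of two language models amounts to a weighted geometric interpolation of their next-token distributions, renormalized over the vocabulary. The toy distributions and weights below are invented for the sketch.

```python
import math

def log_linear_combine(dists, weights):
    # Log-linear interpolation: p(w) is proportional to the product of
    # p_i(w) ** lambda_i over the component models, renormalized
    vocab = dists[0].keys()
    scores = {w: math.exp(sum(lam * math.log(d[w])
                              for d, lam in zip(dists, weights)))
              for w in vocab}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# Toy next-token distributions from a word-level and a character-level LM
p_word = {"bei": 0.5, "shang": 0.3, "tian": 0.2}
p_char = {"bei": 0.3, "shang": 0.5, "tian": 0.2}

combined = log_linear_combine([p_word, p_char], [0.7, 0.3])
```

The interpolation weights would normally be tuned on held-out data; with the word-level model weighted 0.7, its ranking dominates the combined distribution.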
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study, a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher-order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination was low. The robust Poisson models are more robust (or less sensitive) to outliers than the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
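A minimal Newton-scoring fit of the log-linear (Poisson) model illustrates why it targets the rate or risk ratio directly. The data are synthetic, and the robust sandwich variance discussed in the study is omitted for brevity.

```python
import math

# Synthetic counts with an approximately log-linear mean, E[y] = exp(0.5 + 0.2*x)
xs = list(range(10))
ys = [2, 2, 2, 3, 4, 4, 5, 7, 8, 10]

# Newton scoring for Poisson regression with a log link (intercept + slope)
b0, b1 = math.log(sum(ys) / len(ys)), 0.0   # start at the null model
for _ in range(50):
    mu = [math.exp(b0 + b1 * x) for x in xs]
    g0 = sum(y - m for y, m in zip(ys, mu))                 # score, intercept
    g1 = sum(x * (y - m) for x, y, m in zip(xs, ys, mu))    # score, slope
    h00 = sum(mu)
    h01 = sum(x * m for x, m in zip(xs, mu))
    h11 = sum(x * x * m for x, m in zip(xs, mu))
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det                       # 2x2 Newton step
    b1 += (h00 * g1 - h01 * g0) / det

rate_ratio = math.exp(b1)   # multiplicative effect of one unit of x
```

Because the link is log, exp(b1) is the fitted rate (or risk) ratio per unit of x; the log-binomial model instead puts the log link on a binomial likelihood, which is one reason it can struggle to converge.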
Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C
2015-11-30
Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but are rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model, which often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention has been given to the study of the relationships between a CTMC model and associated covariates under the framework of the transtheoretical model. We developed a Bayesian approach to evaluate covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and the temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantially less time in the precontemplation stage and were more likely to move from an unhealthy state to a healthy state and less likely to move in the opposite direction.
VEHICLE OWNERSHIP AND TRIP GENERATION MODELLING
Pattarathep SILLAPARCHARN
2007-01-01
Full Text Available This paper aims to address some important issues of rapid growth in vehicle ownership: how different types of vehicles interact and how this growth affects the number of trips generated, especially in a country undergoing fast economic development, such as Thailand. Forecasts of such growth are important for strategic transport decision-making, travel demand forecasting and other policy issues at both regional and national levels. For these reasons, a series of vehicle ownership models, comprising (1) car, (2) motorcycle, (3) truck and heavy goods vehicle and (4) bus and coach models, is proposed together with a trip generation model. These models are built upon limited aggregate data, both time-series and cross-sectional, disaggregated spatially by province, using non-log-linear weighted least squares regression for the motor vehicle ownership models and exponential modelling for the trip generation model. The inputs required are essentially forecast gross provincial product per capita and population level. All proposed models have good statistical properties, i.e. all coefficients statistically significant and high adjusted R-squared values, and very good fits, within 12% of observed values during the base years. The car ownership model also gives reasonable car ownership elasticities with respect to income and a sensible saturation level when compared with other studies. The forecast for a case of increasing income shows that car and truck ownership levels will increase together with the trip generation level. However, as people own more cars, motorcycle and bus ownership levels will decrease as there might be a switch to car ownership.
Catastrophe model of the accident process, safety climate, and anxiety.
Guastello, Stephen J; Lynn, Mark
2014-04-01
This study aimed (a) to address the evidence for situational specificity in the connection between safety climate and occupational accidents, (b) to resolve similar issues between anxiety and accidents, (c) to expand and develop the concept of safety climate to include a wider range of organizational constructs, and (d) to assess a cusp catastrophe model for occupational accidents in which safety climate and anxiety are treated as bifurcation variables, and environmental hazards as asymmetry variables. Bifurcation, or trigger, variables can have a positive or negative effect on outcomes, depending on the levels of asymmetry, or background, variables. The participants were 1262 production employees of two steel manufacturing facilities who completed a survey that measured safety management, anxiety, subjective danger, dysregulation, stressors and hazards. Nonlinear regression analyses showed, for this industry, that the accident process was explained by a cusp catastrophe model in which safety management and anxiety were bifurcation variables, and hazards, age and experience were asymmetry variables. The accuracy of the cusp model (R² = .72) exceeded that of the next best log-linear model (R² = .08) composed from the same survey variables. The results are thought to generalize to any industry where serious injuries could occur, although situationally specific effects should be anticipated as well.
Modeling polychlorinated biphenyl sorption isotherms for soot and coal
Jantunen, A.P.K.; Koelmans, A.A.; Jonker, M.T.O. [University of Utrecht, Utrecht (Netherlands)
2010-08-15
Sorption isotherms (pg-ng/L) were measured for 11 polychlorinated biphenyls (PCBs) of varying molecular planarity from aqueous solution to two carbonaceous geosorbents, anthracite coal and traffic soot. All isotherms were reasonably log-log-linear, but smooth for traffic soot and staircase-shaped for coal, to which sorption was stronger and more nonlinear. The isotherms were modeled using seven sorption models, including Freundlich, (dual) Langmuir, and Polanyi-Dubinin-Manes (PDM). PDM provided the best combination of reliability and mechanistically interpretable parameters. The PDM normalizing factor Z appeared to correlate negatively with sorbate molecular volume, dependent on the degree of molecular planarity. The modeling results supported the hypothesis that maximum adsorption capacities (Qmax) correlate positively with the sorbent's specific surface area. Qmax did not decrease with increasing sorbate molecular size, and adsorption affinities clearly differed between the sorbents. Sorption was consistently stronger but not less linear for planar than for nonplanar PCBs, suggesting surface rather than pore sorption.
Evaluation of a radiation transport modeling method for radioactive bone cement
Kaneko, T S [Department of Radiological Sciences, B170 Med Sci I, University of California, Irvine, CA 92697 (United States); Sehgal, V; Al-Ghazi, M S A L; Ramisinghani, N S [Department of Radiation Oncology, University of California Irvine Medical Center, Orange, CA 92868 (United States); Skinner, H B [St Jude Heritage Medical Group, Fullerton, CA 92835 (United States); Keyak, J H [Departments of Radiological Sciences, Biomedical Engineering, and Mechanical Engineering, University of California, Irvine, CA 92697 (United States)], E-mail: tkaneko@uci.edu
2010-05-07
Spinal metastases are a common and serious manifestation of cancer, and are often treated with vertebroplasty/kyphoplasty followed by external beam radiation therapy (EBRT). As an alternative, we have introduced radioactive bone cement, i.e. bone cement incorporated with a radionuclide. In this study, we present a Monte Carlo radiation transport modeling method to calculate dose distributions within vertebrae containing radioactive cement. Model accuracy was evaluated by comparing model-predicted depth-dose curves to those measured experimentally in eight cadaveric vertebrae using radiochromic film. The high-gradient regions of the depth-dose curves differed by radial distances of 0.3-0.9 mm, an improvement over EBRT dosimetry accuracy. The low-gradient regions differed by 0.033-0.055 Gy/h/mCi, which may be important in situations involving prior spinal cord irradiation. Using a more rigorous evaluation of model accuracy, four models predicted the measured dose distribution within the experimental uncertainty, as represented by the 95% confidence interval of the measured log-linear depth-dose curve. The remaining four models required modification to account for marrow lost from the vertebrae during specimen preparation. However, the accuracy of the modified model results indicated that, when this source of uncertainty is accounted for, this modeling method can be used to predict dose distributions in vertebrae containing radioactive cement.
Scale parameters in stationary and non-stationary GEV modeling of extreme precipitation
Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys
2013-04-01
The generalized extreme value (GEV) distribution is often fitted to environmental time series of extreme values such as annual maxima of daily precipitation. We study two methodological issues here. First we compare methods of selecting the best model among a set of 16 GEV models that allow non-stationary scale and location parameters. Results of simulation studies showed that both the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) correctly detected non-stationarity but the BIC was superior in selecting the correct model more often. The second issue is how best to produce confidence intervals (CIs) for the parameters of the model and other quantities such as the return levels that are usually required for hydrological and climatological time series. Four bootstrap CIs - normal, percentile, basic, and bias corrected and accelerated (BCa) - constructed by random-t resampling, fixed-t resampling and the parametric bootstrap methods were compared. CIs for parameters of the stationary model do not present major differences. CIs for the more extreme quantiles tend to become very wide for all bootstrap methods. For non-stationary GEV models with linear time dependence of location or log-linear time dependence of scale, coverage probabilities of the CIs are reasonably accurate for the parameters. For the extreme percentiles, the BCa method is best overall and the fixed-t method also gives good average coverage probabilities.
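The percentile bootstrap compared in this study can be sketched for the simplest stationary case. The sketch assumes a Gumbel distribution (the shape-zero member of the GEV family) fitted by moment estimators; the location, scale, and sample size are invented for the example.

```python
import math
import random
import statistics

random.seed(7)

# Synthetic annual precipitation maxima from a Gumbel distribution
# (location 50, scale 10, both invented for the example)
data = [50.0 - 10.0 * math.log(-math.log(random.random())) for _ in range(100)]

def gumbel_return_level(sample, T=50):
    # Moment estimators for the Gumbel distribution, then the T-year return level
    scale = statistics.stdev(sample) * math.sqrt(6) / math.pi
    loc = statistics.mean(sample) - 0.5772 * scale   # Euler-Mascheroni constant
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

# Percentile bootstrap: resample with replacement, refit, take empirical quantiles
boots = sorted(gumbel_return_level(random.choices(data, k=len(data)))
               for _ in range(2000))
ci_low, ci_high = boots[49], boots[1949]   # approximate 95% interval
```

The BCa interval preferred in the study adjusts these percentile endpoints for bias and skewness; for a well-behaved stationary case the plain percentile interval is usually close.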
Application of the PJ and NPS evaporation duct models over the South China Sea (SCS) in winter.
Yang, Shaobo; Li, Xingfei; Wu, Chao; He, Xin; Zhong, Ying
2017-01-01
The detection of duct height has a significant effect on marine radar and wireless apparatus applications. This paper presents two models to verify the adaptation of evaporation duct models to the SCS in winter. A meteorological gradient instrument for measuring evaporation ducts was fabricated using hydrological and meteorological sensors at different heights. An experiment on the adaptive characteristics of evaporation duct models was carried out over the SCS. The heights of the evaporation ducts were estimated by means of log-linear fit, Paulus-Jeske (PJ) and Naval Postgraduate School (NPS) models. The results showed that the NPS model offered significant advantages in stability compared with the PJ model. According to the collected data computed by the NPS model, the mean deviation (MD) was -1.7 m, and the standard deviation (STD) of the MD was 0.8 m compared with the true value. The NPS model may be more suitable for estimating the evaporation duct height in the SCS in winter due to its simpler system characteristics compared with meteorological gradient instruments.
A feature-based approach to modeling protein-DNA interactions.
Eilon Sharon
Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position-specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
Syndromic surveillance models using Web data: the case of scarlet fever in the UK.
Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel
2012-03-01
Recent research has shown the potential of Web queries as a source for syndromic surveillance, and existing studies show that these queries can be used as a basis for estimation and prediction of the development of a syndromic disease, such as influenza, using log linear (logit) statistical models. Two alternative models are applied to the relationship between cases and Web queries in this paper. We examine the applicability of using statistical methods to relate search engine queries with scarlet fever cases in the UK, taking advantage of tools to acquire the appropriate data from Google, and using an alternative statistical method based on gamma distributions. The results show that using logit models, the Pearson correlation factor between Web queries and the data obtained from the official agencies must be over 0.90, otherwise the prediction of the peak and the spread of the distributions gives significant deviations. In this paper, we describe the gamma distribution model and show that we can obtain better results in all cases using gamma transformations, and especially in those with a smaller correlation factor.
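The gamma alternative to the logit model can be sketched via method-of-moments fitting. The weekly counts below are invented for the illustration, not UK surveillance data.

```python
import statistics

# Hypothetical weekly scarlet fever case counts: a right-skewed, epidemic-like series
cases = [3, 5, 9, 14, 22, 30, 26, 19, 12, 8, 5, 3, 2, 2, 1]

# Method-of-moments gamma fit: shape k = mean^2/variance, scale theta = variance/mean
m = statistics.mean(cases)
v = statistics.variance(cases)
shape, scale = m * m / v, v / m

# Gamma skewness 2/sqrt(shape) > 0 lets the fitted curve track the asymmetric
# rise and slow decay of an outbreak better than a symmetric model
skewness = 2.0 / shape ** 0.5
```

A full analysis in the spirit of the paper would transform both the query series and the case series and compare predicted peak timing against the official counts.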
Rørbech, Jakob T; Vadenbo, Carl; Hellweg, Stefanie; Astrup, Thomas F
2014-10-07
Resources have received significant attention in recent years resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 individual market inventory data sets covering a wide range of societal activities (ecoinvent database v3.0). Log-linear regression analysis was carried out for all pairwise combinations of the 11 methods for identification of correlations in CFs (resources) and total impacts (inventory data sets) between methods. Significant differences in resource coverage were observed (9-73 resources) revealing a trade-off between resource coverage and model complexity. High correlation in CFs between methods did not necessarily manifest in high correlation in total impacts. This indicates that also resource coverage may be critical for impact assessment results. Although no consistent correlations between methods applying similar assessment models could be observed, all methods showed relatively high correlation regarding the assessment of energy resources. Finally, we classify the existing methods into three groups, according to method focus and modeling approach, to aid method selection within LCA.
Chahine, Teresa; Schultz, Bradley; Zartarian, Valerie; Subramanian, S V; Spengler, John; Hammitt, James; Levy, Jonathan I
2011-01-01
Despite substantial attention toward environmental tobacco smoke (ETS) exposure, previous studies have not provided adequate information to apply broadly within community-scale risk assessments. We aim to estimate residential concentrations of particulate matter (PM) from ETS in sociodemographic and geographic subpopulations in the United States for the purpose of screening-level risk assessment. We developed regression models to characterize smoking using the 2006-7 Current Population Survey--Tobacco Use Supplement, and linked these with air exchange models using the 2007 American Housing Survey. Using repeated logistic and log-linear models (n = 1000), we investigated whether household variables from the 2000 United States census can predict exposure likelihood and ETS-PM concentration in exposed households. We estimated a mean ETS-PM concentration of 16 μg/m(3) among the 17% of homes with non-zero exposure (3 μg/m(3) overall), with substantial variability among homes. The highest exposure likelihood was in the South and Midwest regions, rural populations, and low-income households. Concentrations in exposed households were highest in the South and demonstrated a non-monotonic association with income, related to air exchange rate patterns. We provide estimates of ETS-PM concentration distributions for different subpopulations in the United States, providing a starting point for communities interested in characterizing aggregate and cumulative risks from indoor pollutants.
A haplotype inference method based on a sparsely connected multi-body Ising model
Kato, Masashi; Gao, Qian Ji; Chigira, Hiroshi; Shindo, Hiroyuki; Inoue, Masato
2010-06-01
Statistical haplotype inference is an indispensable technique in the field of medical science. The method usually has two steps: inference of haplotype frequencies and inference of diplotype for each subject. The first step can be done by using the expectation-maximization (EM) algorithm, but it incurs an unreasonably large calculation cost when the number of single-nucleotide polymorphism (SNP) loci of concern is large. In this article, we describe an approximate probabilistic model of haplotype frequencies. The model is constructed by using several distributions of nearby local SNPs. This approximation seems good because SNPs are generally more strongly correlated when they are close to one another on a chromosome. To implement this approach, we use a log linear model, the Walsh-Hadamard transform, and a combinatorial optimization method. Artificial data suggested that the overall haplotype inference of our method is good if there are nine or more local consecutive SNPs. Some minor problems should be dealt with before this method can be applied to real data.
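The Walsh-Hadamard transform the authors mention maps a probability table over binary (±1-coded) SNP alleles to log-linear interaction coefficients and back. A generic radix-2 implementation, not the authors' code, looks like this:

```python
def fwht(values):
    # Fast Walsh-Hadamard transform (unnormalized); length must be a power of two
    a = list(values)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# A point mass spreads uniformly over all sign patterns
spectrum = fwht([1, 0, 0, 0, 0, 0, 0, 0])
# The unnormalized transform applied twice returns the input scaled by n
roundtrip = fwht(fwht([3, 1, 4, 1, 5, 9, 2, 6]))
```

Since the transform matrix H satisfies H·H = nI, dividing by n after a second application inverts the transform, which is what makes it convenient for moving between the haplotype-frequency and log-linear-coefficient representations.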
Pan, Chengbin; Miranda, Enrique; Villena, Marco A.; Xiao, Na; Jing, Xu; Xie, Xiaoming; Wu, Tianru; Hui, Fei; Shi, Yuanyuan; Lanza, Mario
2017-06-01
Despite the enormous interest raised by graphene and related materials, global concern has recently arisen about their real usefulness in industry, as there is a worrying lack of 2D-material-based electronic devices in the market. Moreover, analytical tools capable of describing and predicting the behavior of the devices (which are necessary before facing mass production) are very scarce. In this work we synthesize a resistive random access memory (RRAM) using graphene/hexagonal-boron-nitride/graphene (G/h-BN/G) van der Waals structures, and we develop a compact model that accurately describes its functioning. The devices were fabricated using scalable methods (i.e. CVD for material growth and shadow masks for electrode patterning), and they show reproducible resistive switching (RS). The measured characteristics during the forming, set and reset processes were fitted using the model developed. The model is based on the nonlinear Landauer approach for mesoscopic conductors, in this case atomic-sized filaments formed within the 2D materials system. Besides providing excellent overall fitting results (corroborated in log-log, log-linear and linear-linear plots), the model is able to explain the cycle-to-cycle dispersion of the data in terms of the particular features of the filamentary paths, mainly their confinement potential barrier height.
Davis, Craig Warren; Di Toro, Dominic M
2015-07-07
Procedures for accurately predicting linear partition coefficients onto various sorbents (e.g., organic carbon, soils, clay) are reliable and well established. However, similar procedures for the prediction of sorption parameters of nonlinear isotherm models are not. The purpose of this paper is to present a procedure for predicting nonlinear isotherm parameters, specifically the median Langmuir binding constants, K̃L, obtained utilizing the single-chemical-parameter log-normal Langmuir isotherm developed in the accompanying work. A reduced polyparameter linear free energy relationship (pp-LFER) is able to predict median Langmuir binding constants for graphite, charcoal, and Darco granular activated carbon (GAC) adsorption data. For the larger F400 GAC data set, a single pp-LFER model was insufficient, as a plateau is observed in the median Langmuir binding constants of larger-molecular-volume sorbates. This volumetric cutoff occurs in proximity to the median pore diameter of F400 GAC. A log-linear relationship exists between the aqueous solubility of these large compounds and their median Langmuir binding constants. Using this relationship for the chemicals above the volumetric cutoff and the pp-LFER below the cutoff, the median Langmuir binding constants can be predicted with root-mean-square errors for graphite (n = 13), charcoal (n = 11), Darco GAC (n = 14), and F400 GAC (n = 44) of 0.129, 0.307, 0.407, and 0.424, respectively.
The impact of missing data in a generalized integer-valued autoregression model for count data.
Alosh, Mohamed
2009-11-01
The impact of the missing data mechanism on estimates of model parameters for continuous data has been extensively investigated in the literature. In comparison, minimal research has been carried out for the impact of missing count data. The focus of this article is to investigate the impact of missing data on a transition model, termed the generalized autoregressive model of order 1 for longitudinal count data. The model has several features, including modeling dependence and accounting for overdispersion in the data, that make it appealing for the clinical trial setting. Furthermore, the model can be viewed as a natural extension of the commonly used log-linear model. Following introduction of the model and discussion of its estimation we investigate the impact of different missing data mechanisms on estimates of the model parameters through a simulation experiment. The findings of the simulation experiment show that, as in the case of normally distributed data, estimates under the missing completely at random (MCAR) and missing at random (MAR) mechanisms are close to their analogue for the full dataset and that the missing not at random (MNAR) mechanism has the greatest bias. Furthermore, estimates based on imputing the last observed value carried forward (LOCF) for missing data under the MAR assumption are similar to those of the MAR. This latter finding might be attributed to the Markov property underlying the model and to the high level of dependence among successive observations used in the simulation experiment. Finally, we consider an application of the generalized autoregressive model to a longitudinal epilepsy dataset analyzed in the literature.
Modeling polychlorinated biphenyl sorption isotherms for soot and coal
Jantunen, A.P.K.; Koelmans, A.A.; Jonker, M.T.O.
2010-01-01
Sorption isotherms (pg-ng/L) were measured for 11 polychlorinated biphenyls (PCBs) of varying molecular planarity from aqueous solution to two carbonaceous geosorbents, anthracite coal and traffic soot. All isotherms were reasonably log-log-linear, but smooth for traffic soot and staircase-shaped
Juel-Christiansen, Carsten
2005-01-01
The article highlights visual rotation (images, drawings, models, works) as the privileged medium in the communication of ideas between creative architects...
Support for Marijuana (Cannabis) Legalization: Untangling Age, Period, and Cohort Effects
Campbell, William; Twenge, Jean; Carter, Nathan
2017-01-01
In three large, nationally representative surveys of U.S. 12th graders, college students, and adults (N = 9 million) conducted 1968–2015, Americans became significantly more supportive of legal marijuana (cannabis) starting in the mid-1980s. Hierarchical models using age-period-cohort analysis on the adult (General Social Survey) sample showed that the increased support for legalization is primarily a time period effect rather than a generational or age effect; thus, Americans of all ages beca...
Spädtke, P
2013-01-01
Modeling of technical machines has become a standard technique since computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary, as well as magnetic fields caused by coils or permanent magnets, have to be known. Internal sources of both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged-particle sources are shown, together with suitable models to describe their physics. Electron guns are covered, as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.
Myrto, Valari
2009-10-15
The goal of this dissertation is to develop a methodology that provides improved knowledge of the associations between atmospheric contaminant concentrations and health impacts. The propagation of uncertainties from input data to output concentrations through a chemistry transport model was first studied. The influences of the resolution of the meteorological parameters and of the emissions data were studied separately, and their relative roles were compared. It was found that model results do not improve linearly with the resolution of the emission input. A critical resolution was found, beyond which model error becomes higher and the model breaks down. Based on this first investigation of direct downscaling, further research focused on subgrid-scale modeling. Thus, a statistical downscaling approach was adopted for modeling subgrid-scale concentration variability due to heterogeneous surface emissions. Emission fractions released from different types of sources (industry, roads, residential, natural, etc.) were calculated from a high-resolution emission inventory. Emission fluxes were then mapped onto surfaces emitting source-specific species. Simulations were run independently over the defined micro-environments, allowing the modeling of subgrid-scale concentration variability. Subgrid-scale concentrations were then combined with demographic and human-activity data to provide exposure estimates. The spatial distribution of human exposure was parameterized through a Monte Carlo model. The new information concerning exposure variability was added to an existing epidemiological model to study relative health risks. A log-linear Poisson regression model was used for this purpose. The principal outcome of the investigation was a new functionality added to the regression model which allows the dissociation of the health risk associated with each pollutant (e.g. NO{sub 2} and PM{sub 2.5}). (author)
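The log-linear Poisson regression used in the epidemiological step can be sketched with a small iteratively reweighted least squares (IRLS) fitter; the simulated "pollutant" series and coefficients below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson log-linear model, log E[y] = X @ beta, by IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                        # Poisson working weights
        z = X @ beta + (y - mu) / mu  # working response
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated daily pollutant series and health-outcome counts (illustrative only)
n = 2000
pm25 = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), pm25])
true_beta = np.array([2.0, 0.05])
y = rng.poisson(np.exp(X @ true_beta))

beta_hat = poisson_irls(X, y)
print(np.round(beta_hat, 3))
```

The fitted slope approximates the log relative risk per unit of the simulated pollutant, the quantity such epidemiological models report.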
the neural construction of individual and communal identities in ... occurs, including models based on information processing,1 ... Applying the DSM descriptive approach to dissociation in the ... a personal, narrative path that connects personal to ethnic ..... managed the problem in the context of the community, using a.
Lecture Notes in Statistics. 3rd Semester
The lecture note is prepared to meet the requirements for the 3rd semester course in statistics at the Aarhus School of Business. It focuses on multiple regression models, analysis of variance, and log-linear models.
Sud, Y. C.; Lee, Dongmin
2007-11-01
Microphysics of clouds with the Relaxed Arakawa-Schubert Scheme (McRAS) was upgraded for simulating the Aerosol Indirect Effects (AIE) for water clouds. The AIE package comprises: i) the Fountoukis and Nenes aerosol activation module for obtaining cloud condensation nuclei; ii) the Seifert and Beheng algorithms for precipitation microphysics, but with a modified accretion constant for the coarse vertical resolution typical of a global general circulation model (GCM); and iii) the Khvorostyanov and Curry parameterization for computing the effective radius (re) of cloud drops. The upgraded package, named McRAS-AC, was evaluated using the 3-year ARM-SGP Single Column Model (SCM) data. Invoking only the most dominant sulfate aerosols over the region, McRAS-AC simulated realistic annual means and annual cycles of cloud water, cloud optical thickness, cloud drop number concentration, and re. The follow-on SCM sensitivity simulations showed that accretion of cloud water is sensitive to i) the terminal velocity of hydrometeors produced by autoconversion and ii) cloud height increases due to in-cloud condensation heating. The impact of aerosol mass concentration on the resultant column cloud water and bulk optical properties of clouds was assessed by using 1/8 to 8 times the average monthly aerosol mass concentration estimates of the GOCART aerosol climatology. A log-linear relation between cloud-radiative forcing and aerosol mass concentration emerged in the simulated data.
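The reported log-linear relation between cloud-radiative forcing and aerosol mass concentration can be illustrated by fitting a line to forcing versus log(mass). The mass multipliers mirror the 1/8x-8x range used in the study, but the forcing response values below are synthetic placeholders, not GOCART or McRAS-AC output:

```python
import numpy as np

# Aerosol mass multipliers spanning the 1/8x to 8x climatology range, and
# hypothetical cloud-radiative-forcing responses following a log-linear law.
mass_factor = np.array([1/8, 1/4, 1/2, 1, 2, 4, 8])
crf = -20.0 - 5.0 * np.log(mass_factor)   # synthetic W/m^2 values, illustrative

# Least-squares line in log(mass): recovers intercept -20 and slope -5
b, a = np.polyfit(np.log(mass_factor), crf, 1)
print(round(a, 2), round(b, 2))
```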
Nielsen, Julie Bøjstrup; Kyvsgaard, Julie Nyholm; Sildorf, Stine Møller
2017-01-01
Background: Type 1 Diabetes (T1D) has a negative impact on psychological and overall well-being. Screening for Health-related Quality of Life (HrQoL) and addressing HrQoL issues in the clinic leads to improved well-being and metabolic outcomes. The aim of this study was to translate the generic...... with T1D and were aged between 8 and 18 years old. The Rasch and the graphical log linear Rasch model (GLLRM) were used to determine validity. Monte Carlo methods and Cronbach’s α were used to confirm reliability. Results: The data did not fit a pure Rasch model but did fit a GLLRM when item six...... in the independence scale is excluded. The six subscales measure different aspects of HrQoL indicating that all the subscales are necessary. The questionnaire shows local dependency between items and differential item functioning (DIF). Therefore age, gender, and glycated hemoglobin (HbA1c) levels must be taken...
Batlis, Nick C.; Waters, L. K.
1973-01-01
Expectancy theory predictions of course performance were tested for a sample of 195 undergraduates; significant prediction was attained for the total sample using a log linear expectancy model. (Author)
金应华; 吴耀华
2009-01-01
Suppose that discrete data follow a product-multinomial distribution whose probabilities follow a log-linear model. Under this model, Ref. [Jin Y H, Wu Y H. Minimum φ-divergence estimator and hierarchical testing in log-linear models under product-multinomial sampling. Journal of Statistical Planning and Inference, 2009, 139: 3488-3500] considered hypothesis-testing problems, including hierarchical tests, using φ-divergence test statistics built from the minimum φ-divergence estimator (MφE), which is a generalization of the maximum likelihood estimator. Building on those results, an asymptotic approximation to the power function of one of these tests is given, and the asymptotic distributions of the test statistics under a contiguous sequence of hypotheses are derived. A simulation study was conducted to find out which member of the power-divergence family performs best; compared with the Pearson-based statistic and the likelihood-ratio test statistic, the Cressie-Read test statistic is an attractive alternative in terms of simulated sizes and powers.
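The power-divergence family referred to above, which contains the Pearson, likelihood-ratio and Cressie-Read statistics as special cases of a parameter λ, can be sketched as:

```python
import numpy as np

def power_divergence(observed, expected, lam):
    """Cressie-Read power-divergence statistic.

    lam = 1 gives Pearson's X^2; lam -> 0 gives the likelihood-ratio G^2;
    lam = 2/3 is the Cressie-Read recommendation.
    """
    o = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    if abs(lam) < 1e-12:                      # limiting case: G^2
        return 2.0 * np.sum(o * np.log(o / e))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(o * ((o / e) ** lam - 1.0))

obs = np.array([30, 20, 50])
exp = np.array([25, 25, 50])
print(round(power_divergence(obs, exp, 1.0), 3))   # Pearson X^2
print(round(power_divergence(obs, exp, 2/3), 3))   # Cressie-Read
```

For the counts above, λ = 1 reproduces the familiar Pearson value, and nearby λ give similar but not identical statistics, which is what the simulated size/power comparison exploits.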
Vanroelen, Christophe; Levecque, Katia; Louckx, Fred
2009-02-01
This paper presents an in-depth examination of the demand-control-support (DCS) model. Each hypothesis of the DCS model is tested: the main effects of job demands, job autonomy, task variation and social support; the additive effects of job strain, active learning and iso-strain; and the interactive buffer effects of job autonomy, task variation and support on job demands. Data from a representative cross-sectional sample of 11,099 male and female wage-earners are investigated using log-linear methods. The outcome measures are self-reported persistent fatigue, musculoskeletal complaints and emotional well-being. There is some support for each of the hypotheses. Quantitative job demands and superior support have the strongest effects. The job autonomy and buffer hypotheses are only partially supported. The strong effects of job demands, support, job strain and active learning suggest that a policy aimed at improving psychosocial working conditions should focus on a bearable level of job demands and the quality of social relationships at work.
Izquier, Adriana; Gómez-López, Vicente M
2011-09-01
Pulsed light (PL) is a fast non-thermal method for microbial inactivation. This research studied the kinetics of PL inactivation of microorganisms naturally occurring in some vegetables. Iceberg lettuce, white cabbage and Julienne-style cut carrots were subjected to increasing PL fluences up to 12 J/cm(2) in order to study the effect on aerobic mesophilic bacteria determined by plate count. Sample temperature increase was also determined by infrared thermometry. Survivor curves were fitted to several models. No shoulder was observed, but a tail was. The Weibull model showed good fitting performance. Results for lettuce were: goodness-of-fit parameter RMSE=0.2289, fluence for the first decimal reduction δ=0.98±0.80 J/cm(2) and concavity parameter p=0.33±0.08. Results for cabbage were: RMSE=0.0725, δ=0.81±0.23 J/cm(2) and p=0.30±0.02; and for carrot: RMSE=0.1235, δ=0.39±0.24 J/cm(2) and p=0.23±0.03. For lettuce, a log-linear-with-tail model was also suitable. Validation of the Weibull model produced determination coefficients of 0.88-0.96 and slopes of 0.78-0.99. Heating was too low to contribute to inactivation. A single low-energy pulse was enough to achieve one log reduction, with an ultrafast treatment time of 0.5 ms. While PL efficacy was found to be limited to high residual counts, the achievable inactivation level may be considered useful for shelf-life extension.
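The Weibull survivor model used above, log10(N/N0) = -(F/δ)^p, linearizes under a double-log transform, so its parameters can be recovered with an ordinary least-squares fit. The sketch below rebuilds a noiseless survivor curve from the reported lettuce parameters (δ = 0.98 J/cm(2), p = 0.33); the fluence grid itself is assumed:

```python
import numpy as np

def weibull_log_survivors(fluence, delta, p):
    """Mafart-type Weibull model: log10(N/N0) = -(fluence/delta)**p."""
    return -(fluence / delta) ** p

# Synthetic survivor curve from the reported lettuce parameters
fluence = np.linspace(0.5, 12, 10)
log_red = weibull_log_survivors(fluence, 0.98, 0.33)

# The model linearizes: log(-log10(N/N0)) = p*log(F) - p*log(delta)
slope, intercept = np.polyfit(np.log(fluence), np.log(-log_red), 1)
p_hat = slope
delta_hat = np.exp(-intercept / p_hat)
print(round(delta_hat, 2), round(p_hat, 2))  # recovers 0.98 and 0.33
```

The concavity parameter p < 1 is what produces the tail (decreasing inactivation rate at higher fluence) noted in the abstract.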
Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals
Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys
2016-04-01
The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for items of interest arising from stationary/nonstationary GEV models. We are usually not only interested in parameter estimates but also in quantiles of the GEV distribution and it might be expected that estimates of extreme upper quantiles are far from being normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling and the parametric bootstrap. Each approach was used in combination with the normal approximation method, percentile method, basic bootstrap method and bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
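A minimal sketch of the percentile-method bootstrap for a quantile, one of the interval constructions compared above; the "annual maximum" sample is synthetic and a 90th percentile stands in for a GEV quantile:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05):
    """Percentile-method bootstrap confidence interval for a statistic."""
    boot = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Synthetic "annual maximum precipitation" sample (Gumbel-like, illustrative)
annual_max = 50 + 12 * rng.gumbel(size=60)
q90 = lambda x: np.quantile(x, 0.90)

lo, hi = bootstrap_ci(annual_max, q90)
print(round(lo, 1), round(hi, 1), round(q90(annual_max), 1))
```

Repeating this for more extreme quantiles (e.g. 0.99) shows the intervals widening sharply, the behaviour the study reports for all bootstrap variants.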
Development of statistical models for data analysis
Downham, D.Y.
2000-07-01
Incidents that cause, or could cause, injury to personnel, and that satisfy specific criteria, are reported to the Offshore Safety Division (OSD) of the Health and Safety Executive (HSE). The underlying purpose of this report is to improve ways of quantifying risk, a recommendation in Lord Cullen's report into the Piper Alpha disaster. Records of injuries and hydrocarbon releases from 1 January 1991 to 31 March 1996 are analysed, because the reporting of incidents was standardised after 1990. Models are identified for risk assessment and some are applied. The appropriate analyses of one or two factors (or variables) are tests of uniformity or of independence. Radar graphs are used to represent some temporal variables. Cusums are applied for the analysis of incident frequencies over time, and could be applied for regular monitoring. Log-linear models for Poisson-distributed data are identified as being suitable for identifying 'non-random' combinations of more than two factors. Some questions cannot be addressed with the available data: for example, more data are needed to assess the risk of injury per employee in a time interval. If the questions are considered sufficiently important, resources could be assigned to obtain the data. Some of the main results from the analyses are as follows: the cusum analyses identified a change-point at the end of July 1993, when the reported number of injuries reduced by 40%. Injuries were more likely to occur between 8 am and 12 am or between 2 pm and 5 pm than at other times: between 2 pm and 3 pm the number of injuries was almost twice the average and was more than threefold the smallest. No seasonal effects in the numbers of injuries were identified. Three-day injuries occurred more frequently on the 5th, 6th and 7th days into a tour of duty than on other days. Three-day injuries occurred less frequently on the 13th and 14th days of a tour of duty. An injury classified as 'lifting or craning' was
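The cusum analysis used above to locate a change-point in incident frequencies can be sketched as a cumulative sum of deviations from the overall mean; the counts and the 40% rate drop below are simulated, not the OSD data:

```python
import numpy as np

def cusum(counts):
    """Cumulative sum of deviations from the overall mean; a sustained
    change in rate shows up as a kink, its location as the extreme point."""
    dev = counts - counts.mean()
    return np.cumsum(dev)

rng = np.random.default_rng(1)
# Synthetic monthly injury counts: the rate drops by 40% after month 30
counts = np.concatenate([rng.poisson(10, 30), rng.poisson(6, 30)])

c = cusum(counts)
change_point = int(np.argmax(c)) + 1  # month after which the drop begins
print(change_point)
```

Monitoring the cusum as new months arrive is exactly the "regular monitoring" use suggested in the report.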
On pitfalls in the construction of family-based models of population growth: a note.
Kondo, H
1986-04-01
Recently, several attempts have been made to construct an economic theory of population based on a formal theory of the family of the type developed by Becker in 1981, but there are serious limitations in all such efforts. The typical family's problem may have no solution, even with a well-behaved concave utility function. Moreover, even when the family's maximum problem has a unique solution, the phase diagram for the stock of capital may contain no steady state other than the origin. Finally, even when there exists a nontrivial steady state for the stock of capital, the community nevertheless may be destined for extinction. The first of these pitfalls concerns the internal consistency of the models, while the second and third concern the compatibility of the models with some gross facts of life. The pitfalls can be avoided within the Becker framework by suitably restricting the family's utility and production functions, but the restrictions required are severe. This paper shows that, alternatively, the pitfalls sometimes can be avoided by going slightly outside the Becker framework, specifically, by modifying the typical family's budget constraint to allow explicitly for the cost of raising children. In particular, it is shown that, by this means, the pitfalls can be avoided even when the family's utility function is log-linear, the example adduced by Kemp et al. in 1984 to demonstrate the existence of pitfalls. More precisely, it is shown that the family's maximum problem has a unique solution; that a nontrivial steady state exists; that, even if the steady state is locally unstable, the optimal trajectory tends neither to zero nor to infinity but to a 2-period limit cycle; and that survival is possible with quite general production functions. Thus, the end product is a logically consistent and reasonable model of economic development, with both population growth and capital accumulation firmly rooted in life-cycle family planning.
Modelling complete particle-size distributions from operator estimates of particle-size
Roberson, Sam; Weltje, Gert Jan
2014-05-01
Estimates of particle-size made by operators in the field and laboratory represent a vast and relatively untapped data archive. The wide spatial distribution of particle-size estimates makes them ideal for constructing geological models and soil maps. This study uses a large data set from the Netherlands (n = 4837) containing both operator estimates of particle size and complete particle-size distributions measured by laser granulometry. This study introduces a logit-based constrained-cubic-spline (CCS) algorithm to interpolate complete particle-size distributions from operator estimates. The CCS model is compared to four other models: (i) a linear interpolation; (ii) a log-hyperbolic interpolation; (iii) an empirical logistic function; and (iv) an empirical arctan function. Operator estimates were found to be both inaccurate and imprecise; only 14% of samples were successfully classified using the Dutch classification scheme for fine sediment. Operator estimates of sediment particle-size encompass the same range of values as particle-size distributions measured by laser analysis. However, the distributions measured by laser analysis show that most of the sand percentage values lie between zero and one, so the majority of the variability in the data is lost because operator estimates are made to the nearest 1% at best, and more frequently to the nearest 5%. A method for constructing complete particle-size distributions from operator estimates of sediment texture using a logit constrained cubic spline (CCS) interpolation algorithm is presented. This model and four other previously published methods are compared to establish the best approach to modelling particle-size distributions. The logit-CCS model is the most accurate method, although both logit-linear and log-linear interpolation models provide reasonable alternatives. Models based on empirical distribution functions are less accurate than interpolation algorithms for modelling particle-size distributions in
2013-01-01
We model the mortality behavior of the general population in Mexico using data from 1990 to 2009 and compare it to the mortality assumed in the tables used in Mexico for insured lives. We fit a Lee-Carter model, a Renshaw-Haberman model and an Age-Period-Cohort model. The data used are drawn from the Mexican National Institute of Statistics and Geography (INEGI) and the National Population Council (CONAPO). We also fit a Brass-type relational model to compare gaps between general population mo...
Wadsworth, W Duncan; Argiento, Raffaele; Guindani, Michele; Galloway-Pena, Jessica; Shelbourne, Samuel A; Vannucci, Marina
2017-02-08
The Human Microbiome has been variously associated with the immune-regulatory mechanisms involved in the prevention or development of many non-infectious human diseases such as autoimmunity, allergy and cancer. Integrative approaches which aim at associating the composition of the human microbiome with other available information, such as clinical covariates and environmental predictors, are paramount to develop a more complete understanding of the role of microbiome in disease development. In this manuscript, we propose a Bayesian Dirichlet-Multinomial regression model which uses spike-and-slab priors for the selection of significant associations between a set of available covariates and taxa from a microbiome abundance table. The approach allows straightforward incorporation of the covariates through a log-linear regression parametrization of the parameters of the Dirichlet-Multinomial likelihood. Inference is conducted through a Markov Chain Monte Carlo algorithm, and selection of the significant covariates is based upon the assessment of posterior probabilities of inclusions and the thresholding of the Bayesian false discovery rate. We design a simulation study to evaluate the performance of the proposed method, and then apply our model on a publicly available dataset obtained from the Human Microbiome Project which associates taxa abundances with KEGG orthology pathways. The method is implemented in specifically developed R code, which has been made publicly available. Our method compares favorably in simulations to several recently proposed approaches for similarly structured data, in terms of increased accuracy and reduced false positive as well as false negative rates. In the application to the data from the Human Microbiome Project, a close evaluation of the biological significance of our findings confirms existing associations in the literature.
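The log-linear parametrization of the Dirichlet-Multinomial likelihood mentioned above can be sketched from the generative side: covariates enter through log-linear concentration parameters, from which taxa counts are drawn. This is only the sampling model, not the spike-and-slab selection machinery, and all dimensions and effect values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

def dm_sample(X, B, depth):
    """Draw Dirichlet-Multinomial counts with log-linear concentrations.

    Each row of gamma = exp(X @ B) parameterizes one sample's Dirichlet;
    B holds covariate effects per taxon (illustrative parametrization).
    """
    gamma = np.exp(X @ B)                       # n_samples x n_taxa
    probs = np.array([rng.dirichlet(g) for g in gamma])
    return np.array([rng.multinomial(depth, p) for p in probs])

n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + 1 covariate
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.8, 0.0, 0.0, 0.0]])                   # covariate raises taxon 1

Y = dm_sample(X, B, depth=1000)
print(Y.shape, Y.sum(axis=1)[:3])
```

Inference in the paper then places spike-and-slab priors on the entries of B and selects covariate-taxon associations via posterior inclusion probabilities.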
De Schamphelaere, K.A.C., E-mail: karel.deschamphelaere@ugent.be; Nys, C., E-mail: chnys.nys@ugent.be; Janssen, C.R., E-mail: colin.janssen@ugent.be
2014-10-15
Highlights: • Chronic toxicity of Pb varied 4-fold among three algae species. • The use of an organic P avoided Pb precipitation in the experiments. • pH and Dissolved Organic Carbon strongly affect Pb toxicity, Ca and Mg do not. • A bioavailability model was developed that accurately predicts toxicity. • Algae may become the most sensitive species to Pb above pH 7.4. - Abstract: Scientifically sound risk assessment and derivation of environmental quality standards for lead (Pb) in the freshwater environment are hampered by insufficient data on chronic toxicity and bioavailability to unicellular green algae. Here, we first performed comparative chronic (72-h) toxicity tests with three algal species in medium at pH 6, containing 4 mg fulvic acid (FA)/L and containing organic phosphorous (P), i.e. glycerol-2-phosphate, instead of PO{sub 4}{sup 3−} to prevent lead-phosphate mineral precipitation. Pseudokirchneriella subcapitata was 4-fold more sensitive to Pb than Chlorella kesslerii, with Chlamydomonas reinhardtii in the middle. The influence of medium physico-chemistry was therefore investigated in detail with P. subcapitata. In synthetic test media, higher concentrations of fulvic acid or lower pH protected against toxicity of (filtered) Pb to P. subcapitata, while effects of increased Ca or Mg on Pb toxicity were less clear. When toxicity was expressed on a free Pb{sup 2+} ion activity basis, a log-linear, 260-fold increase of toxicity was observed between pH 6.0 and 7.6. Effects of fulvic acid were calculated to be much more limited (1.9-fold) and were probably even non-existent (depending on the affinity constant for Pb binding to fulvic acid that was used for calculating speciation). A relatively simple bioavailability model, consisting of a log-linear pH effect on Pb{sup 2+} ion toxicity linked to the geochemical speciation model Visual Minteq (with the default NICA-Donnan description of metal and proton binding to fulvic acid), provided relatively
De Schamphelaere, K A C; Nys, C; Janssen, C R
2014-10-01
Scientifically sound risk assessment and derivation of environmental quality standards for lead (Pb) in the freshwater environment are hampered by insufficient data on chronic toxicity and bioavailability to unicellular green algae. Here, we first performed comparative chronic (72-h) toxicity tests with three algal species in medium at pH 6, containing 4 mg fulvic acid (FA)/L and containing organic phosphorous (P), i.e. glycerol-2-phosphate, instead of PO4(3-) to prevent lead-phosphate mineral precipitation. Pseudokirchneriella subcapitata was 4-fold more sensitive to Pb than Chlorella kesslerii, with Chlamydomonas reinhardtii in the middle. The influence of medium physico-chemistry was therefore investigated in detail with P. subcapitata. In synthetic test media, higher concentrations of fulvic acid or lower pH protected against toxicity of (filtered) Pb to P. subcapitata, while effects of increased Ca or Mg on Pb toxicity were less clear. When toxicity was expressed on a free Pb(2+) ion activity basis, a log-linear, 260-fold increase of toxicity was observed between pH 6.0 and 7.6. Effects of fulvic acid were calculated to be much more limited (1.9-fold) and were probably even non-existent (depending on the affinity constant for Pb binding to fulvic acid that was used for calculating speciation). A relatively simple bioavailability model, consisting of a log-linear pH effect on Pb(2+) ion toxicity linked to the geochemical speciation model Visual Minteq (with the default NICA-Donnan description of metal and proton binding to fulvic acid), provided relatively accurate toxicity predictions. While toxicity of (filtered) Pb varied 13.7-fold across 14 different test media (including four Pb-spiked natural waters) with widely varying physico-chemistry (72h-EC50s between 26.6 and 364 μg/L), this bioavailability model displayed mean and maximum prediction errors of only 1.4 and 2.2-fold, respectively, thus indicating the potential usefulness of this bioavailability
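The log-linear pH effect at the core of the bioavailability model can be sketched directly from the reported 260-fold toxicity increase between pH 6.0 and 7.6; the reference EC50 level is an arbitrary placeholder:

```python
import numpy as np

# Log-linear pH effect: toxicity on a free Pb2+ ion activity basis increases
# 260-fold from pH 6.0 to 7.6, i.e. log10(EC50) falls linearly with pH.
slope = -np.log10(260) / (7.6 - 6.0)   # ~ -1.51 log units per pH unit
log_ec50_ph6 = 0.0                     # reference level (illustrative)

def log_ec50(ph):
    """Predicted log10 EC50 (free Pb2+ activity basis) at a given pH."""
    return log_ec50_ph6 + slope * (ph - 6.0)

ratio = 10 ** (log_ec50(6.0) - log_ec50(7.6))
print(round(ratio))   # recovers the 260-fold sensitivity increase
```

In the full model this pH term is linked to Visual Minteq speciation so that predictions can be made from total dissolved Pb in a given water chemistry.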
P. González Diego
2005-12-01
the successive generations of men born in Navarre since 1900. In women, the risk associated with year of birth increases notably in the generations born after 1930-1940. Conclusions. There was a notable increase in cancer incidence in Navarre over the period 1973-1997, in both men and women. The pattern of cancer incidence in Navarre does not yet show signs of stabilization. Background. Population-based registers are one source of information about cancer incidence. Systematic study of its incidence in a specific population is a fundamental tool for estimating the present-day and future magnitude of cancer and provides elements for taking decisions with regard to the allocation of health resources. The aim of this article was to investigate the time trend in the incidence pattern of cancer in Navarre during the period 1973-1997, and to identify the components of age, diagnosis period and birth cohort as determinants of the time trend of cancer incidence. Methods. Descriptive study of cancer incidence through age-period-cohort models. Monitoring of a dynamic cohort over 25 years. Classical incidence summary indicators were analysed. Log-linear Poisson models were developed to quantify cancer risk and the relative annual trend. Age-period-cohort models were adjusted in order to ascertain the effect on the time trend exerted by the respective age, diagnosis period and birth cohort components. Results. The age-standardized incidence rate for all sites (except non-melanoma skin tumours) is maximum in the five-year period 1993-1997: in men, 304.1 new cases per 100,000 person-years, and in women, 190.6 new cases per 100,000 person-years. The average incidence change for each of the 25 years of monitoring of the dataset studied is 1.88% (95% CI 1.69 to 2.07) in men and 1.32% (95% CI 1.09 to 1.54) in women. The cancer increase in women is more pronounced from 35 to 64 years, a fact which should alert health
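The structure of a log-linear age-period-cohort rate model of the kind fitted above can be sketched on a small grid; the baseline rate and effect sizes are illustrative placeholders, not the Navarre estimates:

```python
import numpy as np

# Sketch of a log-linear APC rate surface: log(rate) = mu + age + period effects,
# with cohort implicitly defined as period - age (illustrative values only).
ages = np.arange(5)            # age-group index
periods = np.arange(5)         # five-year diagnosis periods
mu = np.log(50e-5)             # baseline rate per person-year (hypothetical)
age_eff = 0.4 * ages           # risk rising with age
per_eff = 0.019 * periods      # small upward period drift, illustrative

A, P = np.meshgrid(ages, periods, indexing="ij")
log_rate = mu + age_eff[A] + per_eff[P]
rate = np.exp(log_rate) * 1e5       # per 100,000 person-years
print(np.round(rate[:, 0], 1))      # age gradient in the first period
```

Fitting such a model to observed counts (Poisson likelihood with person-years offsets) is what separates the age, period and cohort contributions to the trend.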
Roberts, Steven; Martin, Michael
Most investigations of the adverse health effects of multiple air pollutants analyse the time series involved by simultaneously entering the multiple pollutants into a Poisson log-linear model. Concerns have been raised about this type of analysis, and it has been stated that new methodology or models should be developed for investigating the adverse health effects of multiple air pollutants. In this paper, we introduce the use of the lasso for this purpose and compare its statistical properties to those of ridge regression and the Poisson log-linear model. Ridge regression has been used in time series analyses on the adverse health effects of multiple air pollutants but its properties for this purpose have not been investigated. A series of simulation studies was used to compare the performance of the lasso, ridge regression, and the Poisson log-linear model. In these simulations, realistic mortality time series were generated with known air pollution mortality effects permitting the performance of the three models to be compared. Both the lasso and ridge regression produced more accurate estimates of the adverse health effects of the multiple air pollutants than those produced using the Poisson log-linear model. This increase in accuracy came at the expense of increased bias. Ridge regression produced more accurate estimates than the lasso, but the lasso produced more interpretable models. The lasso and ridge regression offer a flexible way of obtaining more accurate estimation of pollutant effects than that provided by the standard Poisson log-linear model.
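The contrast between ridge regression and unpenalized estimation on correlated pollutant series can be sketched with the closed-form ridge solution; for simplicity this uses a Gaussian response rather than the Poisson log-linear setting studied above, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)

def ridge(X, y, lam):
    """Closed-form ridge estimate; lam = 0 reduces to ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two highly correlated "pollutant" series and a linear health outcome
n = 300
z = rng.normal(size=n)
X = np.column_stack([z + 0.1 * rng.normal(size=n),
                     z + 0.1 * rng.normal(size=n)])
beta_true = np.array([0.5, 0.0])
y = X @ beta_true + rng.normal(0, 1, n)

ols = ridge(X, y, 0.0)
shrunk = ridge(X, y, 50.0)
print(np.round(ols, 2), np.round(shrunk, 2))
```

With collinear pollutants, the unpenalized estimates are unstable while the penalized ones are shrunk toward zero; this stabilization, at the cost of bias, is the trade-off the paper documents for both ridge and the lasso.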
Farakos, S M Santillana; Frank, J F; Schaffner, D W
2013-09-02
Salmonella can survive in low-moisture foods for long periods of time. Reduced microbial inactivation during heating is believed to be due to the interaction of cells and water, and is thought to be related to water activity (a(w)). Little is known about the role of water mobility in influencing the survival of Salmonella in low-moisture foods. The aim of this study was to determine how the physical state of water in low-moisture foods influences the survival of Salmonella and to use this information to develop mathematical models that predict the behavior of Salmonella in these foods. Whey protein powder of differing water mobilities was produced by pH adjustment and heat denaturation, and then equilibrated to a(w) levels between 0.19±0.03 and 0.54±0.02. Water mobility was determined by wide-line proton-NMR. Powders were inoculated with a four-strain cocktail of Salmonella, vacuum-sealed and stored at 21, 36, 50, 60, 70 and 80°C. Survival data were fitted to the log-linear, the Geeraerd-tail, the Weibull, the biphasic-linear and the Baranyi models. The model with the best ability to describe the data over all temperatures, water activities and water mobilities (by f-test) was selected. The influence of water mobility on the survival of Salmonella was evaluated using multiple linear regression. Secondary models were developed and then validated in dry non-fat dairy and grain, and low-fat peanut and cocoa products within the range of the modeled data. Water activity significantly influenced the survival of Salmonella at all temperatures, survival increasing with decreasing a(w). Water mobility did not significantly influence survival independent of a(w). Secondary models were useful in predicting the survival of Salmonella in various low-moisture foods, providing a correlation of R=0.94 and an acceptable prediction performance of 81%. The % bias and % discrepancy results showed that the models were more accurate in predicting survival in non-fat food systems as compared to foods containing low
Dessens, Jos A. G.; Jansen, Wim; Ganzeboom, Harry B. G.; Heijden, Peter G. M. van der
2003-01-01
This paper brings together the virtues of linear regression models for status attainment models formulated by second-generation social mobility researchers and the strengths of log-linear models formulated by third-generation researchers, into fourth-generation social mobility models, by using condi
Mouly, Damien; Joulin, Eric; Rosin, Christophe; Beaudeau, Pascal; Zeghnoun, Abdelkrim; Olszewski-Ortar, Agnès; Munoz, Jean François; Welté, Bénédicte; Joyeux, Michel; Seux, René; Montiel, Antoine; Rodriguez, M J
2010-10-01
Epidemiological studies have demonstrated that chlorination by-products in drinking water may cause some types of cancer in humans. However, due to differences in methodology between the various studies, it is not possible to establish a dose-response relationship. This shortcoming is due primarily to uncertainties about how exposure is measured (made difficult by the great number of compounds present), the exposure routes involved, and the variation in concentrations in water distribution systems. This is especially true for trihalomethanes, for which concentrations can double between the water treatment plant and the consumer tap. The aim of this study is to describe the behaviour of trihalomethanes in three French water distribution systems and develop a mathematical model to predict concentrations in the water distribution system using data collected from treated water at the plant (i.e. the entrance of the distribution system). In 2006 and 2007, samples were taken successively from treated water at the plant and at several points in the water distribution system in three French cities. In addition to the concentrations of the four trihalomethanes (chloroform, dichlorobromomethane, chlorodibromomethane, bromoform), many other parameters involved in their formation that affect their concentration were also measured. The average trihalomethane concentration in the three water distribution systems ranged from 21.6 μg/L to 59.9 μg/L. The increase in trihalomethanes between the treated water at the plant and a given point in the water distribution system varied by a factor of 1.1-5.7 over all of the samples. A log-log linear regression model was constructed to predict THM concentrations in the water distribution system. The five variables used were trihalomethane concentration and free residual chlorine for treated water at the plant, two variables that characterize the reactivity of organic matter (specific UV absorbance (SUVA), an indicator developed for the free
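A log-log linear regression of the kind constructed here can be sketched in a few lines. This is a hedged illustration: the paper's model uses five explanatory variables, only two of which (plant THM and free residual chlorine) are mimicked below, with made-up coefficients and data.

```python
# Ordinary least squares on log10-transformed predictors and response,
# then back-transformation to predict THM in the distribution system.
import numpy as np

# Hypothetical training data (made-up values for illustration only).
rng = np.random.default_rng(0)
thm_plant = rng.uniform(10.0, 50.0, 40)    # THM at the plant, ug/L
chlorine = rng.uniform(0.1, 1.0, 40)       # free residual chlorine, mg/L

# Generate the response from an assumed log-log relationship (noise-free).
log_thm_system = 0.3 + 0.9 * np.log10(thm_plant) - 0.2 * np.log10(chlorine)

X = np.column_stack([np.ones_like(thm_plant),
                     np.log10(thm_plant),
                     np.log10(chlorine)])
beta, *_ = np.linalg.lstsq(X, log_thm_system, rcond=None)

def predict_thm(plant_ug_l, chlorine_mg_l):
    """Back-transform the fitted model to predict system THM in ug/L."""
    log_pred = (beta[0] + beta[1] * np.log10(plant_ug_l)
                + beta[2] * np.log10(chlorine_mg_l))
    return 10.0 ** log_pred
```

On the log10 scale the model is linear, so standard least squares applies; multiplying out the back-transform shows why log-log coefficients act as elasticities (a 1% change in plant THM gives roughly a beta[1]% change in predicted system THM).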
Langford, Oliver; Aronson, Jeffrey K; van Valkenhoef, Gert; Stevens, Richard J
2016-01-01
Standard methods for meta-analysis of dose-response data in epidemiology assume a model with a single scalar parameter, such as log-linear relationships between exposure and outcome; such models are implicitly unbounded. In contrast, in pharmacology, multi-parameter models, such as the widely used E
Age, Period, and Cohort Effects on Mortality From Ischemic Heart Disease in Southern Spain.
Ocaña-Riola, Ricardo; Mayoral-Cortés, José María; Fernández-Ajuria, Alberto; Sánchez-Cantalejo, Carmen; Martín-Olmedo, Piedad; Blanco-Reina, Encarnación
2015-05-01
Ischemic heart disease is the leading cause of death and one of the top 4 causes of burden of disease worldwide. The aim of this study was to evaluate age-period-cohort effects on mortality from ischemic heart disease in Andalusia (southern Spain) and in each of its 8 provinces during the period 1981-2008. A population-based ecological study was conducted. In all, 145 539 deaths from ischemic heart disease were analyzed for individuals aged between 30 and 84 years who died in Andalusia in the study period. A nonlinear regression model was estimated for each sex and geographical area using spline functions. There was an upward trend in male and female mortality rate by age from the age of 30 years. The risk of death for men and women showed a downward trend for cohorts born after 1920, decreasing after 1960 with a steep slope among men. Analysis of the period effect showed that male and female death risk first remained steady from 1981 to 1990 and then increased between 1990 and 2000, only to decrease again until 2008. There were similar age-period-cohort effects on mortality in all the provinces of Andalusia and for Andalusia as a whole. If the observed cohort and period effects persist, male and female mortality from ischemic heart disease will continue to decline. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
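Age-period-cohort analyses like this one must confront a structural identifiability problem: cohort = period - age, so the three linear trends are exactly collinear (one reason spline-based or drift-based formulations are used). A minimal sketch showing the rank deficiency of a naive linear APC design; the age and period groupings below are illustrative, not the study's exact categories.

```python
# Build a design matrix with intercept plus linear age, period and cohort
# terms, and verify that it is rank-deficient (rank 3, not 4).
import numpy as np

ages = np.arange(30, 85, 5)            # 5-year age groups, 30-84
periods = np.arange(1981, 2009, 4)     # hypothetical 4-year period groups

rows = []
for a in ages:
    for p in periods:
        c = p - a                      # birth cohort implied by age and period
        rows.append((a, p, c))
rows = np.array(rows, dtype=float)

X = np.column_stack([np.ones(len(rows)), rows])
rank = np.linalg.matrix_rank(X)
# rank == 3: the linear cohort effect is confounded with age and period,
# so only curvature (nonlinear) components of the three effects are
# identifiable without an extra constraint.
```

Any APC software, spline-based or otherwise, is implicitly choosing a constraint to break this collinearity; the choice affects the reported linear trends but not the curvature.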
Špillar, Václav; Dolejš, David
2015-12-01
Mechanical crystal-melt interactions in magmatic systems by separation or accumulation of crystals or by extraction of interstitial melt are expected to modify the spatial distribution of crystals observed as phenocrysts in igneous rocks. Textural analysis of porphyritic products can thus provide a quantitative means of interpreting the magnitude of crystal accumulation or melt loss and reconstructing the initial crystal percentage at which the process occurred. We present a new three-dimensional numerical model that evaluates the effects of crystal accumulation (or interstitial melt removal) on the spatial distribution of crystals. Both processes lead to increasing apparent crystallinity but also to increasing spatial ordering expressed by the clustering index (R). The trend of progressive crystal packing deviates from a random texture trend, produced by static crystal nucleation and growth, and it is universal for any texture with straight log-linear crystal size distribution. For sparse crystal suspensions (5 vol.% crystals, R = 1.03), up to 97% melt can be extracted, corresponding to a new crystallinity of 65 vol.% and R = 1.32, when the rheological threshold of crystal interlocking is reached. For initially crystal-rich suspensions, the compaction path is shorter because the initial crystal population is more aggregated and reaches the limit of interlocking sooner. Crystal suspensions with ~35 vol.% crystals cannot be compacted without mechanical failure. These results illustrate that the onset of the rheological threshold of magma immobility strongly depends on the spatial configuration of crystals in the mush: the primary rigid percolation threshold (~35 vol.% crystals) corresponds to a touching or interlocking crystal framework produced by in situ closed-system crystallization, whereas the secondary rigid percolation threshold (~35 to ~75 vol.% crystals) can be reached by compaction, which is particularly spatially efficient when acting on
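The clustering index R referred to above compares the observed mean nearest-neighbour distance with its expectation for a random (Poisson) pattern of the same density: R near 1 indicates a random texture and larger R indicates spatial ordering. A minimal Clark-Evans-style sketch in three dimensions, ignoring the edge corrections a production implementation would need:

```python
# Compute a 3-D clustering index R for a point pattern in a known volume.
import numpy as np
from math import gamma, pi

def clustering_index_3d(points, volume):
    n = len(points)
    # All pairwise distances; exclude self-distances before taking minima.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    r_obs = d.min(axis=1).mean()           # observed mean NN distance
    lam = n / volume                       # number density
    # Expected mean NN distance for a 3-D Poisson process of intensity lam.
    r_exp = gamma(4.0 / 3.0) / (4.0 / 3.0 * pi * lam) ** (1.0 / 3.0)
    return r_obs / r_exp

rng = np.random.default_rng(42)
pts = rng.random((1000, 3))                # random "crystals" in a unit cube
R = clustering_index_3d(pts, 1.0)          # expect R close to 1
```

For a random pattern R stays near 1 (slightly above, because edge effects inflate nearest-neighbour distances in a bounded cube); compaction-style removal of interstitial space drives R upward, as in the trends described above.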
Zwick, Rebecca; Lenaburg, Lubella
2009-01-01
In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…
Genome wide study of maternal and parent-of-origin effects on the etiology of orofacial clefts
Shi, Min; Murray, Jeff; Marazita, Mary L
2012-01-01
We performed a genome wide association analysis of maternally-mediated genetic effects and parent-of-origin (POO) effects on risk of orofacial clefting (OC) using over 2,000 case-parent triads collected through an international cleft consortium. We used log-linear regression models to test...
Interracial and Intraracial Patterns of Mate Selection among America's Diverse Black Populations
Batson, Christie D.; Qian, Zhenchao; Lichter, Daniel T.
2006-01-01
Despite recent immigration from Africa and the Caribbean, Blacks in America are still viewed as a monolith in many previous studies. In this paper, we use newly released 2000 census data to estimate log-linear models that highlight patterns of interracial and intraracial marriage and cohabitation among African Americans, West Indians, Africans,…
Journal of Agriculture and Social Research (JASR) Vol. 12, No. 1 ...
Zelda
GDP, trade openness and human capital while sustaining its inflation at the level to which it ... market-seeking, efficiency-seeking or strategic-asset-seeking (Ajayi, 2006). ... pricing by multinationals (Ogunkoya and Jerome, 2006). .... of unit root test then a log-linear form of multiple regression model is applied to determine.
Pieters, R.; Baumgartner, H.; Vermunt, J.K.; Bijmolt, T.H.A.
1998-01-01
The citation network of the International Journal of Research in Marketing (IJRM) is examined from 1981 to 1995. We propose a model that contains log-linear and log-multiplicative terms to estimate simultaneously the importance, cohesion, and structural equivalence of journals in the network across t
Reassessing the Economic Value of Advanced Level Mathematics
Adkins, Michael; Noyes, Andrew
2016-01-01
In the late 1990s, the economic return to Advanced level (A-level) mathematics was examined. The analysis was based upon a series of log-linear models of earnings in the 1958 National Child Development Survey (NCDS) and the National Survey of 1980 Graduates and Diplomates. The core finding was that A-level mathematics had a unique earnings premium…
Black Grade 9 learners in historically white suburban schools and ...
Erna Kinsey
transition, creating a new responsiveness to alternative ways of thinking and behaving. .... In addition, the log-linear model was used to do more in-depth analysis of ... and assessment. ..... white learners to fit in with their group: "It's like they're ...
Hoos, Anne B.; Terziotti, Silvia; McMahon, Gerard; Savvas, Katerina; Tighe, Kirsten C.; Alkons-Wolinsky, Ruth
2008-01-01
This report presents and describes the digital datasets that characterize nutrient source inputs, environmental characteristics, and instream nutrient loads for the purpose of calibrating and applying a nutrient water-quality model for the southeastern United States for 2002. The model area includes all of the river basins draining to the south Atlantic and the eastern Gulf of Mexico, as well as the Tennessee River basin (referred to collectively as the SAGT area). The water-quality model SPARROW (SPAtially-Referenced Regression On Watershed attributes), developed by the U.S. Geological Survey, uses a regression equation to describe the relation between watershed attributes (predictors) and measured instream loads (response). Watershed attributes that are considered to describe nutrient input conditions and are tested in the SPARROW model for the SAGT area as source variables include atmospheric deposition, fertilizer application to farmland, manure from livestock production, permitted wastewater discharge, and land cover. Watershed and channel attributes that are considered to affect rates of nutrient transport from land to water and are tested in the SAGT SPARROW model as nutrient-transport variables include characteristics of soil, landform, climate, reach time of travel, and reservoir hydraulic loading. Datasets with estimates of each of these attributes for each individual reach or catchment in the reach-catchment network are presented in this report, along with descriptions of methods used to produce them. Measurements of nutrient water quality at stream monitoring sites from a combination of monitoring programs were used to develop observations of the response variable - mean annual nitrogen or phosphorus load - in the SPARROW regression equation. Instream load of nitrogen and phosphorus was estimated using bias-corrected log-linear regression models using the program Fluxmaster, which provides temporally detrended estimates of long-term mean load well
Freeman, Thomas J.
This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…
Model Transformations? Transformation Models!
Bézivin, J.; Büttner, F.; Gogolla, M.; Jouault, F.; Kurtev, I.; Lindow, A.
2006-01-01
Much of the current work on model transformations seems essentially operational and executable in nature. Executable descriptions are necessary from the point of view of implementation. But from a conceptual point of view, transformations can also be viewed as descriptive models by stating only the
Simonse, W.L.
2014-01-01
Business model design does not always produce a “design” or “model” as the expected result. However, when designers are involved, a visual model or artifact is produced. To assist strategic managers in thinking about how they can act, the designers’ challenge is to combine both strategy and design n
Time trends in heavy drinking among middle-aged and older adults in Denmark
Bjørk, Christina; Thygesen, Lau Caspar; Vinther-Larsen, Mathilde
2008-01-01
BACKGROUND: Studies have indicated an increasing proportion of heavy drinking among middle-aged and older Danes. Trends in consumption are often extremely sensitive to influence from various components of the time trends, but only few have explored the age-, period- and cohort-related influences on late-life alcohol consumption. By using age, period, and cohort modeling this study explores the time trends in heavy drinking. METHODS: Data derive from five National Health and Morbidity Surveys conducted by the Danish National Institute of Public Health in 1987, 1994, 2000, 2003, and 2005. A total of 15,144 randomly selected Danes between the age of 50 and 74 were interviewed about their alcohol intake on the last weekday and their alcohol intake in the last week. By applying the age-period-cohort model the probability of heavy alcohol drinking is estimated to separate the influence of age...
Modelling SDL, Modelling Languages
Michael Piefel
2007-02-01
Full Text Available Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages is thus the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.
Poulsen, Helle
1996-01-01
This paper presents a functional modelling method called Actant Modelling, rooted in linguistics and semiotics. Actant modelling can be integrated with Multilevel Flow Modelling (MFM) in order to give an interpretation of actants.
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. (Figure: average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data.) LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known ...
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies also given to improve model development towards "fit-for-purpose" models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very...
The trade effects of endogenous preferential trade agreements
Egger, Peter; Larch, Mario; Staub, Kevin E; Winkelmann, Rainer
2010-01-01
Recent work by Anderson and van Wincoop (2003) establishes an empirical modelling strategy which takes full account of the structural, non-(log-)linear impact of trade barriers on trade in new trade theory models. Structural new trade theory models have never been used to evaluate and quantify the role of endogenous preferential trade agreement (PTA) membership for trade in a way which is consistent with general equilibrium. Apart from this gap, the present paper aims at delivering an empiric...
Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si
There can be multitudinous models specifying aspects of the same system. Each model has a bias towards one aspect. These models often overlap in specific aspects even though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. The paper proposes a concept of promoting models, which is a methodology to obtain refinements with support from cooperating models. It refines a primary model by integrating the information from a secondary model. The promotion principle is not merely an academic point, but also a reliable and robust engineering technique which can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case of modeling a simple online shopping system with the cooperation of the guarded design model and the CSP model illustrates the practicability of the promotion principle.
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to...
Li, Xiaochun; Li, Huilin; Jin, Man; D Goldberg, Judith
2016-09-10
We consider the non-inferiority (or equivalence) test of the odds ratio (OR) in a crossover study with binary outcomes to evaluate the treatment effects of two drugs. To solve this problem, Lui and Chang (2011) proposed both an asymptotic method and a conditional method based on a random effects logit model. Kenward and Jones (1987) proposed a likelihood ratio test (LRTM) based on a log-linear model. These existing methods are all subject to model misspecification. In this paper, we propose a likelihood ratio test (LRT) and a score test that are independent of model specification. Monte Carlo simulation studies show that, in scenarios considered in this paper, both the LRT and the score test have higher power than the asymptotic and conditional methods for the non-inferiority test; the LRT, score, and asymptotic methods have similar power, and they all have higher power than the conditional method for the equivalence test. When data can be well described by a log-linear model, the LRTM has the highest power among all five methods (LRTM, LRT, score, asymptotic, and conditional) for both non-inferiority and equivalence tests. However, in scenarios for which a log-linear model does not describe the data well, the LRTM has the lowest power for the non-inferiority test and has inflated type I error rates for the equivalence test. We provide an example from a clinical trial that illustrates our methods. Copyright © 2016 John Wiley & Sons, Ltd.
无
2003-01-01
This paper puts forward a new conception: the model warehouse. It analyzes the reasons why the model warehouse has appeared and introduces the characteristics and architecture of the model warehouse. Finally, the paper points out that the model warehouse is an important part of WebGIS.
2011-01-01
This chapter presents various types of constitutive models and their applications. There are three aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation, and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature-dependent behaviour of physical properties are introduced, as well as equation of state (EOS) models such as the SRK EOS. Modelling of liquid-phase activity coefficients is also covered, illustrating several models such as the Wilson equation and the NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least-squares approach. A full model analysis is applied in each example that discusses...
Batty, M.
2007-01-01
The term 'model' is now central to our thinking about how we understand and design cities. We suggest a variety of ways in which we use 'models', linking these ideas to Abercrombie's exposition of Town and Country Planning, which represented the state of the art fifty years ago. Here we focus on using models as physical representations of the city, tracing the development of symbolic models, where the focus is on simulating how function generates form, to iconic models, where the focus is on representing...
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
Spencer, Kevin; Bindra, Renu; Nicolaides, Kypros H
2003-10-01
To assess the suitability of either the log-linear or reciprocal-linear regression procedure for maternal weight correction of biochemical marker MoMs in the first trimester. Data from two prospective first-trimester OSCAR screening programmes, including 32,010 women with first-trimester maternal serum free beta-hCG and PAPP-A measured by the Kryptor analyser, were analysed by regression analysis to provide parameters for the log-linear and reciprocal-linear MoM correction procedures. Assessment was made by goodness of fit to the data. The impact on detection rate and false-positive rate of the different correction procedures was assessed using statistical modelling with biochemical markers alone. Both log-linear and reciprocal-linear correction were shown to fit the data well. For free beta-hCG, the log-linear procedure was marginally superior to the reciprocal-linear procedure (r2 = 0.986 vs. 0.980), whilst for PAPP-A the reciprocal-linear procedure was marginally better (r2 = 0.991 vs. 0.985). Log-linear correction reduced the variance for both markers more than did the reciprocal-linear procedure. For free beta-hCG, the SD was reduced from 0.2675 to 0.2605 and for PAPP-A, it was reduced from 0.2545 to 0.2336. Correcting for maternal weight was shown to reduce the population false-positive rate from 7.0 to 6.5%, whilst maintaining the same detection rate at a risk cut-off of 1 in 100. At individual levels, a two-fold variation in risk was demonstrated depending upon the individual's weight. To provide accurate individual patient-specific risks for trisomy 21, maternal weight must be taken into account and should be a mandatory data item for screening programmes. Maternal weight correction in the first trimester using free beta-hCG and PAPP-A can be best achieved using the log-linear procedure. Copyright 2003 John Wiley & Sons, Ltd.
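The log-linear weight-correction procedure itself is simple to sketch: fit log10(MoM) as a linear function of maternal weight in a reference population, then divide each observed MoM by the weight-specific expected MoM. The regression parameters below are assumptions chosen for illustration, not the values fitted from the 32,010-woman dataset.

```python
# Log-linear maternal-weight correction of a biochemical marker MoM.
import numpy as np

# Reference population: assume log10(MoM) falls linearly with weight.
weights_kg = np.linspace(45.0, 110.0, 50)
log10_mom = 0.55 - 0.0079 * weights_kg      # assumed reference trend

# Fit the log-linear regression (straight line on the log10 scale).
slope, intercept = np.polyfit(weights_kg, log10_mom, 1)

def weight_corrected_mom(observed_mom, weight_kg):
    """Divide the observed MoM by the weight-specific expected MoM."""
    expected = 10.0 ** (intercept + slope * weight_kg)
    return observed_mom / expected

# A measurement lying exactly on the reference trend corrects to 1.0 MoM.
on_trend = 10.0 ** (0.55 - 0.0079 * 70.0)
corrected = weight_corrected_mom(on_trend, 70.0)
```

The reciprocal-linear alternative compared in the abstract replaces the fitted trend with expected MoM = a + b/weight; everything else in the correction step is unchanged.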
Bækgaard, Lars
2001-01-01
The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects, events are phenomena that can be observed and described. For example, borrow events in a library can...
Fox, Anthony David; Dalby, Lars; Christensen, Thomas Kjær;
2016-01-01
We analysed annual changes in abundance of Eurasian Wigeon (Anas penelope) derived from mid-winter International Waterbird Census data throughout its northwest European flyway since 1988 using log-linear Poisson regression modelling. Increases in abundance in the north and east of the wintering range ... to better inform management, especially to attempt to harmonise the harvest with annual changes in demography to ensure sustainable exploitation of this important quarry species now and in the future.
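A log-linear Poisson trend model of the kind used for these count data can be sketched with iteratively reweighted least squares (IRLS), the standard fitting algorithm for Poisson GLMs. The counts below are synthetic, generated from an assumed 3%-per-year increase, not Waterbird Census data.

```python
# Fit log E[count] = a + b*year by IRLS (Newton iterations for a Poisson GLM).
import numpy as np

years = np.arange(1988, 2013, dtype=float)
x = years - years.mean()                 # centred covariate for stability
y = np.exp(9.0 + 0.03 * x)               # noise-free "counts" for illustration

X = np.column_stack([np.ones_like(x), x])
beta = np.array([np.log(y.mean()), 0.0]) # start near the intercept-only fit
for _ in range(25):                      # IRLS / Newton iterations
    eta = X @ beta
    mu = np.exp(eta)                     # fitted means
    z = eta + (y - mu) / mu              # working response
    WX = X * mu[:, None]                 # Poisson working weights W = mu
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)

annual_change = np.exp(beta[1]) - 1.0    # multiplicative change per year
```

Exponentiating the year coefficient converts it into a multiplicative annual rate of change, which is how trends from such models are usually reported.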
Survival of foodborne pathogens in natural cracked olive brines.
Medina, Eduardo; Romero-Gil, Verónica; Garrido-Fernández, Antonio; Arroyo-López, Francisco Noé
2016-10-01
This work reports the survival (challenge tests) of foodborne pathogen species (Escherichia coli, Staphylococcus aureus, Listeria monocytogenes, and Salmonella enterica) in Aloreña de Málaga table olive brines. The inhibitions were fit using a log-linear model with tail implemented in the GInaFIT Excel software. The olive brine had a considerable inhibitory effect on the pathogens. The residual (final) populations (Fp) after 24 h were below the detection limit ... olives for foodborne pathogenic microorganisms.
Estimation of the growth curve parameters in Macrobrachium rosenbergii
Nagulu, Banoth; Satyanarayana , Y.; Srinivasa, Rao P.; Gopal , Krishna
2011-01-01
Growth is one of the most important characteristics of cultured species. The objective of this study was to determine the fitness of linear, log-linear, polynomial, exponential and logistic functions to the growth curves of Macrobrachium rosenbergii obtained by using weekly records of live weight, total length, head length, claw length, and last segment length from 20 to 192 days of age. The models were evaluated according to the coefficient of determination (R2), and error sum of squares (ES...
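The model-selection criterion described here, comparing candidate growth functions by R², can be sketched as follows. The weight-at-age data are hypothetical, generated from a logistic curve, so the logistic fit should win over a straight line by construction.

```python
# Compare a linear and a logistic growth-curve fit by coefficient of
# determination (R^2) on synthetic weight-at-age data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, w_max, k, t0):
    return w_max / (1.0 + np.exp(-k * (t - t0)))

age = np.linspace(20.0, 192.0, 25)               # days
weight = logistic(age, 60.0, 0.05, 100.0)        # assumed true growth curve

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

lin_pred = np.polyval(np.polyfit(age, weight, 1), age)
params, _ = curve_fit(logistic, age, weight, p0=[50.0, 0.1, 90.0])
logi_pred = logistic(age, *params)

r2_linear = r_squared(weight, lin_pred)
r2_logistic = r_squared(weight, logi_pred)
```

The same R²-plus-error-sum-of-squares comparison extends directly to the log-linear, polynomial and exponential candidates named in the abstract, each fitted with `polyfit` (after a log transform where appropriate) or `curve_fit`.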
What's a university worth? Changes in the lifestyle and status of post-2000 European Graduates.
Mihaela Cornelia PREJMEREAN; Vasilache, Simona
2008-01-01
The paper is structured in two main chapters, the first presenting a literature review on lifestyle, underlining the main themes approached in recent scientific papers, and conducting factorial analysis as to discriminate the most relevant research directions, and the second dedicated to studying, on the data provided by the European Social Survey, the lifestyle patterns of post-2000 European graduates. The methodological perspective included probit regression and log-linear models, as well a...
articles: Describing migration spatial structure
Andrei Rogers; Frans Willekens; James Raymer; Jani Little
2002-01-01
The age structure of a population is a fundamental concept in demography and is generally depicted in the form of an age pyramid. The spatial structure of an interregional system of origin-destination-specific migration streams is, however, a notion lacking a widely accepted definition. We offer a definition in this article, one that draws on the log-linear specification of the geographer's spatial interaction model. We illustrate our definition with observed migration data, we discuss extens...
Unnikrishnan, A; Manoj, N.T.
Various numerical models used to study the dynamics and horizontal distribution of salinity in the Mandovi-Zuari estuaries, Goa, India, are discussed in this chapter. Earlier, a one-dimensional network model was developed for representing the complex...
Turner, Raymond
2009-01-01
Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al
Taylor, J G
2009-01-01
We present tentative answers to three questions: firstly, what is to be assumed about the structure of the brain in attacking the problem of modeling consciousness; secondly, what it is about consciousness that is being modeled; and finally, what is taken on board in the modeling enterprise, if anything, from the vast works by philosophers about the nature of mind.
Sclütter, Flemming; Frigaard, Peter; Liu, Zhou
This report presents the model test results on wave run-up on the Zeebrugge breakwater under the simulated prototype storms. The model test was performed in January 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University. The detailed description of the model is given...
Ravn, Anders P.; Staunstrup, Jørgen
1994-01-01
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface...
Model Experiments and Model Descriptions
Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian
1999-01-01
The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.
Scalable Models Using Model Transformation
2008-07-13
This work was supported in part by the Air Force Research Laboratory (AFRL), the State of California Micro Program, and the following companies: Agilent, Bosch, HSBC, Lockheed-Martin, National Instruments, and Toyota. Topics include scalable models using model transformation, parametrization, and workflow automation.
Stubkjær, Erik
2005-01-01
Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders ... to land. The paper advances the position that cadastral modeling has to include not only the physical objects, agents, and information sets of the domain, but also the objectives or requirements of cadastral systems.
Familial aggregation of congenital hydrocephalus in a nationwide cohort
Munch, Tina Nørgaard; Rostgaard, Klaus; Rasmussen, Marie-Louise Hee
2012-01-01
The objective of the study was to investigate familial aggregation of primary congenital hydrocephalus in an unselected, nationwide population. Based on the Danish Central Person Register, we identified all children born in Denmark between 1978 and 2008 and their family members (up to third-degree relatives). Information on primary congenital hydrocephalus was obtained from the National Patient Discharge Register. Using binomial log-linear regression, we estimated recurrence risk ratios of congenital hydrocephalus. An alternative log-linear regression model was applied to quantify the genetic effect and the maternal effect. Of 1 928 683 live-born children, 2194 had a diagnosis of idiopathic congenital hydrocephalus (1.1/1000). Of those, 75 (3.4%) had at least one other family member with primary congenital hydrocephalus. Significantly increased recurrence risk ratios of primary congenital hydrocephalus were...
Modelling in Business Model design
Simonse, W.L.
2013-01-01
It appears that business model design does not always produce a design or model as the expected result. When designers are involved, however, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designer's challenge is to combine strategy and
Druyan, Leonard M.
2012-01-01
Climate modeling is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric "climate models" to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional, and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation and atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.
2016-01-01
This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.
Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens
2011-01-01
In this report a new turbulence model is presented. In contrast to the bulk of modern work, the model is a classical continuum model with a relatively simple constitutive equation. The constitutive equation is, as usual in continuum mechanics, entirely empirical. It has the usual Newton or Stokes term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides this, there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence. The model is in a virgin state, but a number of numerical tests have been carried out with good results. It is published to encourage other researchers to study the model in order to find its merits and possible limitations.
Blomhøj, Morten
2004-01-01
Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from upper secondary level and above. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...
Wenninger, Magnus J
2012-01-01
Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.
Liu, Zhou; Frigaard, Peter
This report presents the model test results on wave run-up and run-down on the Zeebrugge breakwater under short-crested oblique wave attacks. The model test was performed in March-April 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University.
Vestergaard, Kristian
the engineers, but as the scale and the complexity of the hydraulic works increased, the mathematical models became so complex that a mathematical solution could not be obtained. This created a demand for new methods, and again the experimental investigation became popular, but this time as measurements on small-scale models. But still the scale and complexity of hydraulic works were increasing, and soon even small-scale models reached a natural limit for some applications. In the meantime the modern computer was developed, and it became possible to solve complex mathematical models by use of computer-based numerical...
V. Chipman
2002-10-05
The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, i.e. the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions output from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) to validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0); and (2) to satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a), specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1) and the downstream applicability of the model results (i.e. wall heat fractions) to
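The wall heat fraction described above is simply the complement of the ventilation heat-removal fraction; a minimal sketch, using hypothetical heat loads rather than values from the report:

```python
def wall_heat_fraction(heat_removed_by_air, decay_heat):
    """Fraction of decay heat conducted into the surrounding rock mass:
    1 - (heat carried away by ventilation air / heat produced by decay)."""
    return 1.0 - heat_removed_by_air / decay_heat

# Hypothetical values for one time step: the ventilation air sweeps
# out 11.4 kW of a 12.0 kW radionuclide decay-heat load.
print(wall_heat_fraction(11.4, 12.0))
```

The fraction is temporally and spatially dependent, so in practice it would be evaluated per drift segment and per time step.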
Modeling Documents with Event Model
Longhui Wang
2015-08-01
Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector by a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
Model Selection for Geostatistical Models
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
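The AIC comparison described above can be sketched as follows; the log-likelihoods and parameter counts are hypothetical illustrations, not values from the study:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*ln(L-hat); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of a lizard-abundance model: ignoring spatial
# correlation (fewer parameters, worse fit) versus a geostatistical
# model with extra covariance parameters (better fit).
aic_independent = aic(log_likelihood=-412.7, n_params=4)
aic_geostat = aic(log_likelihood=-398.2, n_params=6)

best = "geostatistical" if aic_geostat < aic_independent else "independent"
print(best)
```

The penalty term 2k is what lets the criterion trade off the improved likelihood of the spatial model against its additional covariance parameters.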
Højgaard, Tomas; Hansen, Rune
2016-01-01
The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to construct this approach in mathematics education research.
Gøtze, Jens Peter; Krentz, Andrew
2014-01-01
In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...
Giandomenico, Rossano
2006-01-01
The model determines a stochastic continuous process as the continuous limit of a stochastic discrete process, so as to show that the stochastic continuous process converges to the stochastic discrete process in a way that allows integration. Furthermore, the model determines the expected volatility and the expected mean, so as to show that the volatility and the mean are increasing functions of time.
Budiansky, Stephen
1980-01-01
This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispersion. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)
Poortman, Sybilla; Sloep, Peter
2006-01-01
Educational models describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in
Jongerden, M.R.; Haverkort, Boudewijn R.H.M.
2008-01-01
The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,
Linguistic models and linguistic modeling.
Pedrycz, W; Vasilakos, A V
1999-01-01
The study is concerned with a linguistic approach to the design of a new category of fuzzy (granular) models. In contrast to numerically driven identification techniques, we concentrate on building meaningful linguistic labels (granules) in the space of experimental data and forming the ensuing model as a web of associations between such granules. As such models are designed at the level of information granules and generate results in the same granular rather than purely numeric format, we refer to them as linguistic models. Furthermore, as there are no detailed numeric estimation procedures involved in the construction of the linguistic models carried out in this way, their design mode can be viewed as one of rapid prototyping. The underlying algorithm used in the development of the models utilizes an augmented version of the clustering technique (context-based clustering) that is centered around a notion of linguistic contexts: a collection of fuzzy sets or fuzzy relations defined in the data space (more precisely, a space of input variables). The detailed design algorithm is provided and contrasted with the standard modeling approaches commonly encountered in the literature. The usefulness of the linguistic mode of system modeling is discussed and illustrated with the aid of numeric studies including both synthetic data as well as some time series dealing with modeling traffic intensity over a broadband telecommunication network.
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
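The bed-capacity calculation from breakthrough data mentioned above can be sketched as below; the breakthrough curve, flow rate, and trapezoidal integration are illustrative assumptions, not OSPREY outputs:

```python
def bed_capacity(times, outlet_conc, inlet_conc, molar_flow):
    """Capacity of an adsorption column from its breakthrough curve:
    amount captured = molar_flow * integral of (1 - C/C0) dt,
    integrated here with the trapezoidal rule."""
    captured = 0.0
    for k in range(1, len(times)):
        f0 = 1.0 - outlet_conc[k - 1] / inlet_conc
        f1 = 1.0 - outlet_conc[k] / inlet_conc
        captured += 0.5 * (f0 + f1) * (times[k] - times[k - 1])
    return molar_flow * captured

# Hypothetical krypton breakthrough data: outlet concentration rises
# from 0 to the inlet value C0 = 1.0 over 4 hours.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c_out = [0.0, 0.0, 0.2, 0.8, 1.0]
print(bed_capacity(t, c_out, inlet_conc=1.0, molar_flow=2.0))
```

The resulting capacity (amount captured before the bed saturates) is the quantity used to size columns.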
Mitchell, W.D.
1972-01-01
Model hydrographs are composed of pairs of dimensionless ratios, arrayed in tabular form, which, when modified by the appropriate values of rainfall excess and by the time and areal characteristics of the drainage basin, satisfactorily represent the flood hydrograph for the basin. Model hydrographs are developed from a dimensionless translation hydrograph, having a time base of T hours and appropriately modified for storm duration by routing through reservoir storage, S = kO^x. Models fall into two distinct classes: (1) those for which the value of x is unity and which have all the characteristics of true unit hydrographs and (2) those for which the value of x is other than unity and to which the unit-hydrograph principles of proportionality and superposition do not apply. Twenty-six families of linear models and eight families of nonlinear models in tabular form constitute the principal subject of this report. Supplemental discussions describe the development of the models and illustrate their application. Other sections of the report, supplemental to the tables, describe methods of determining the hydrograph characteristics, T, k, and x, both from observed hydrographs and from the physical characteristics of the drainage basin. Five illustrative examples of use show that the models, when properly converted to incorporate actual rainfall excess and the time and areal characteristics of the drainage basins, do indeed satisfactorily represent the observed flood hydrographs for the basins.
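For the linear class (x = 1), the proportionality and superposition principles mean the flood hydrograph is a superposition of scaled, lagged copies of the unit hydrograph, i.e. a discrete convolution with the rainfall excess; a minimal sketch with hypothetical ordinates:

```python
def convolve_hydrograph(rain_excess, unit_hydrograph):
    """Superpose scaled, lagged copies of the unit hydrograph
    (valid only for the linear models, where x = 1 in S = k*O^x)."""
    n = len(rain_excess) + len(unit_hydrograph) - 1
    flow = [0.0] * n
    for i, p in enumerate(rain_excess):
        for j, u in enumerate(unit_hydrograph):
            flow[i + j] += p * u
    return flow

# Hypothetical unit-hydrograph ordinates (flow per unit of excess) and
# a two-period storm with 1.5 and 0.5 units of rainfall excess.
uh = [0.0, 2.0, 5.0, 3.0, 1.0]
storm = [1.5, 0.5]
print(convolve_hydrograph(storm, uh))
```

For the nonlinear models (x ≠ 1) this shortcut is exactly what fails, which is why those families require the separate tabular treatment described in the report.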
Grimaldi, P.
2012-07-01
Stereometric modelling means modelling achieved with: the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); the visualization of the shot in two distinct windows; and the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately used for the simple perspective view of an object, the word stereo must be added, so that "3D stereo vision" stands for "three-dimensional view" and therefore measures the width, height and depth of the surveyed image. Through the "materialization", either real or virtual, of the optical stereometric model, made visible with a stereoscope, a continuous on-line updating of the cultural heritage is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C(α) RMSD. Many template-based docking predictions fall into acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
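The C(α) RMSD used above as the accuracy measure can be sketched as follows; the coordinates are hypothetical, and a real computation would first superpose the two structures (e.g. with the Kabsch algorithm), which this sketch assumes has already been done:

```python
import math

def ca_rmsd(coords_a, coords_b):
    """Root-mean-square deviation between matched C-alpha positions.
    Assumes the two structures are already optimally superposed."""
    assert len(coords_a) == len(coords_b)
    total = 0.0
    for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b):
        total += (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
    return math.sqrt(total / len(coords_a))

# Hypothetical 3-residue C-alpha trace, displaced by 3 Angstroms along x:
model = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (8.0, 0.0, 0.0)]
target = [(3.0, 0.0, 0.0), (7.0, 0.0, 0.0), (11.0, 0.0, 0.0)]
print(ca_rmsd(model, target))  # → 3.0
```

A model at this level of inaccuracy (3 Å) would sit below the 4 Å threshold past which the study reports the docking success rate dropping significantly.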
Lin, Tony; Erfan, Sasan
2016-01-01
Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how...
Kindler, Ekkart
2009-01-01
There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these no...
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Insepov, Zeke; Veitzer, Seth; Mahalingam, Sudhakar
2011-01-01
Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. We use simple PIC codes for modeling plasmas, molecular dynamics for modeling surface breakdown and surface damage, and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.
National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...
Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia
Maintaining and evolving data warehouses is a complex, error-prone, and time-consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Accelerated life models modeling and statistical analysis
Bagdonavicius, Vilijandas
2001-01-01
Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia...
Do stroke models model stroke?
Philipp Mergenthaler
2012-11-01
Stroke is one of the leading causes of death worldwide and the biggest reason for long-term disability. Basic research has formed the modern understanding of stroke pathophysiology, and has revealed important molecular, cellular and systemic mechanisms. However, despite decades of research, most translational stroke trials that aim to introduce basic research findings into clinical treatment strategies, most notably in the field of neuroprotection, have failed. Among other obstacles, poor methodological and statistical standards, negative publication bias, and incomplete preclinical testing have been proposed as 'translational roadblocks'. In this article, we introduce the models commonly used in preclinical stroke research, discuss some of the causes of failed translational success and review potential remedies. We further introduce the concept of modeling 'care' of stroke patients, because current preclinical research models the disorder but does not model care or state-of-the-art clinical testing. Stringent statistical methods and controlled preclinical trials have been suggested to counteract weaknesses in preclinical research. We conclude that preclinical stroke research requires (1) appropriate modeling of the disorder, (2) appropriate modeling of the care of stroke patients and (3) an approach to preclinical testing that is similar to clinical testing, including Phase 3 randomized controlled preclinical trials as necessary additional steps before new therapies enter clinical testing.
2012-01-01
The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns, an evident characteristic of architectural practice. But the persistence in persistent modelling can also be understood to apply in other ways, reflecting and anticipating extended roles for representation. This book identifies three principal areas in which these extensions are becoming apparent within contemporary... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience.
Eck, Christof; Knabner, Peter
2017-01-01
Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book, readers will learn to derive mathematical models which help to understand real-world phenomena. At the same time, a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees are given. An essential feature of this book is that mathematical structures are used as an ordering principle, and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields of electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.
Ling Li; Vasily Volkov
2006-01-01
A physically-based model is presented for the simulation of a new type of deformable object: inflatable objects, such as shaped balloons, which consist of pressurized air enclosed by an elastic surface. These objects have properties inherent in both 3D and 2D elastic bodies, as they demonstrate the behaviour of 3D shapes using 2D formulations. As there is no internal structure in them, their behaviour is substantially different from the behaviour of deformable solid objects. We use one of the few available models for deformable surfaces, and enhance it to include the forces of internal and external pressure. These pressure forces may also incorporate buoyancy forces, to allow objects filled with a low-density gas to float in denser media. The obtained models demonstrate rich dynamic behaviour, such as bouncing, floating, deflation and inflation.
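The pressure coupling described above can be sketched in a few lines: on a closed triangle mesh, each face contributes a force proportional to the net pressure times its area along its outward normal, shared among its vertices. This is a minimal illustration under our own assumptions (uniform pressure, equal sharing among the three vertices); it is not the authors' full formulation, which also includes the elastic surface model and buoyancy.

```python
import numpy as np

def pressure_forces(verts, faces, p_internal, p_external):
    """Accumulate per-vertex forces from the net pressure acting on each face."""
    f = np.zeros_like(verts)
    dp = p_internal - p_external          # net outward pressure
    for i, j, k in faces:
        a, b, c = verts[i], verts[j], verts[k]
        n = np.cross(b - a, c - a)        # |n| = 2 * face area, outward if CCW
        face_force = dp * 0.5 * n         # pressure * area * unit normal
        for v in (i, j, k):               # share equally among the 3 vertices
            f[v] += face_force / 3.0
    return f

# Unit tetrahedron with outward-oriented faces
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
f = pressure_forces(verts, faces, p_internal=2.0, p_external=1.0)
print(f.sum(axis=0))   # net pressure force on a closed surface is ~zero
```

A useful sanity check of such a scheme is that the area-weighted normals of a closed mesh sum to zero, so uniform pressure produces no net force, only inflation.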
Nash, Ulrik William
2014-01-01
Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabilistic functionalism, and concerns the environment and the mind, and adaptation by the latter to the former. This entry is about the lens model, and probabilistic functionalism more broadly. Focus will mostly be on firms and their employees, but, to fully appreciate the scope, we have to keep in mind…
Aarti Sharma
2009-01-01
Computational chemistry is becoming an increasingly important tool in the development of novel pharmaceuticals. In the past, drugs were simply screened for effectiveness. Recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by dramatic improvements in computer hardware and software in recent years.
Sivaram, C
2007-01-01
An alternate model for gamma-ray bursts is suggested. For a very close binary system consisting of a white dwarf (WD) and a neutron star (NS), the WD (close to the Chandrasekhar mass) can detonate due to tidal heating, leading to a SN. Material falling onto the NS at relativistic velocities can cause its collapse to a magnetar, quark star or black hole, leading to a GRB. As the material smashes onto the NS, the scenario is dubbed the Smashnova model. Here the SN is followed by a GRB. An NS impacting an RG (or RSG) (as in Thorne-Zytkow objects) can also cause a SN outburst followed by a GRB. Other variations are explored.
Cardey, Sylviane
2013-01-01
In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language (or languages) in a microscopic manner by means of intra/inter-language norms and divergences, going progressively from languages as systems to linguistic, mathematical and computational models which, being based on a constructive approach, are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro-systems. The abstract model, which contrary to the current state of the art works in int…
Building Models and Building Modelling
Jørgensen, Kaj Asbjørn; Skauge, Jørn
The report's introductory chapter describes the primary concepts concerning building models, and some fundamental conditions concerning computer-based modelling are established. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of … modelling and building models. It is stressed that modelling should be carried out at several levels of abstraction and in two dimensions in the so-called modelling matrix. From this, the primary phases of building modelling are identified. Next, the basic characteristics of building models are described. This includes a clarification of the concepts of object-oriented software and object-oriented models. It is emphasised that the notion of object-based modelling provides a sufficient and better understanding. Finally, the idea of the ideal building model is described as being one unified model that is used throughout…
Jensen, Morten S.; Frigaard, Peter
In the following, results from model tests with the Zeebrugge breakwater are presented. The objective of these tests is partly to investigate the influence on wave run-up of a changing water level during a storm. Finally, the influence on wave run-up of an introduced longshore current…
Olaf Wolkenhauer
2014-01-01
Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics turns out to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of genes and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"
Burianová, Eva
2008-01-01
The aim of the first part of this bachelor thesis is, through an analysis of the source texts, to give a theoretical summary of the economic models and theories on which the CAPM model is built: Markowitz's portfolio theory (the analysis of expected-utility maximization and the model of optimal portfolio selection based on it), and Tobin's extension of Markowitz's model (splitting the selection of the optimal portfolio into two phases: first determining the optimal combination of risky instruments, and then allocating the available capital between this optimal …
R.E. Waltz
2007-01-01
There has been remarkable progress during the past decade in understanding and modeling turbulent transport in tokamaks. With some exceptions, the progress is derived from the huge increases in computational power and the ability to simulate tokamak turbulence with ever more fundamental and physically realistic dynamical equations, e.g.
Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.
2015-12-01
The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models, and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear) and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High-resolution numerical models become interactive paintings by exchanging their velocity fields with a high-resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mid-range graphics cards. The software is provided as open source software. By using different sources for the drawing, one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
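The core of such an image-based flow visualization is a short advection step. The sketch below is a CPU stand-in (NumPy, nearest-neighbour backtrace) for what the authors execute per pixel in WebGL; the function name, blend weight and array layout are our assumptions, not the published code.

```python
import numpy as np

def advect_blend(image, drawing, uv, mask, dt=1.0, blend=0.1):
    """Semi-Lagrangian advection of `image` along `uv`, injecting `drawing`
    where `mask` is set (the three-image combination described above)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # trace each pixel backwards along the flow (nearest-neighbour lookup)
    src_x = np.clip(np.round(xs - dt * uv[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - dt * uv[..., 1]), 0, h - 1).astype(int)
    advected = image[src_y, src_x]
    # blend in the drawing where the mask is on
    return np.where(mask, (1 - blend) * advected + blend * drawing, advected)

h = w = 8
image = np.zeros((h, w)); image[4, 2] = 1.0     # one bright pixel
uv = np.zeros((h, w, 2)); uv[..., 0] = 1.0      # uniform flow in +x
drawing = np.ones((h, w))
mask = np.zeros((h, w), dtype=bool)             # no injection this step
out = advect_blend(image, drawing, uv, mask)
print(out[4, 3])  # the bright pixel moved one cell downstream -> 1.0
```

In the real system this step runs every frame on the GPU, which is what makes interaction with million-cell fields feasible.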
Goodwyn, Lauren; Salm, Sarah
2007-01-01
Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…
Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.
This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…
Tijskens, L.M.M.
2003-01-01
For modelling product behaviour with respect to quality for users and consumers, it is essential to have at least a fundamental notion of what quality really is, and of which product properties determine the quality assigned by the consumer to a product. In other words: what is allowed and what is to be…
A. Alsaed
2004-09-14
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of…
Information Model for Product Modeling
焦国方; 刘慎权
1992-01-01
The key problems in product modeling for integrated CAD/CAM systems are the information structures and representations of products. They are taking on more and more important roles in engineering applications. Based on an investigation of engineering product information and from the viewpoint of the industrial process, in this paper the information models are proposed and the definitions of the framework of product information are given. Then, the integration and the consistency of product information are discussed by introducing the entity and its instance. In summary, the information structures described in this paper have many advantages and properties helpful in engineering design.
Building Models and Building Modelling
Jørgensen, Kaj; Skauge, Jørn
2008-01-01
The report's introductory chapter describes the primary concepts concerning building models, and some fundamental conditions concerning computer-based modelling are established. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of comp…
Arnoldi, Jakob
The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics, among other things, claim can lead to market manipulation. Drawing on two cases, this article shows that manipulation more likely happens in the reverse way, meaning that human traders attempt to make algorithms ‘make mistakes’, that is, to mislead the algos. Thus, it is algorithmic models, not humans, that are manipulated. Such manipulation poses challenges for security exchanges. The article analyses these challenges and argues that we witness a new post-social form of human-technology interaction that will lead to a reconfiguration of professional codes for financial trading.
Barr, Michael
2002-01-01
Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.
Fossión, Rubén
2010-09-01
The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). For the above reasons, specific nuclear many-body models have been devised, each of which sheds light on selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.
2015-01-01
This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while posing new challenges in all areas of the industry from the material and structural to the urban scale. Contributions from invited experts, papers and case studies provide the reader with a comprehensive overview of the field, as well as perspectives from related disciplines, such as computer science. The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015.
Michael, John
…others' minds. Then (2), in order to bring to light some possible justifications, as well as hazards and criticisms of the methodology of looking time tests, I will take a closer look at the concept of folk psychology and will focus on the idea that folk psychology involves using oneself as a model of other people in order to predict and understand their behavior. Finally (3), I will discuss the historical location and significance of the emergence of looking time tests…
Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Model Construct Based Enterprise Model Architecture and Its Modeling Approach
NONE
2002-01-01
In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.
PAPAJ Jan
2014-05-01
Traditional wireless networks use the concept of point-to-point forwarding inherited from reliable wired networks, which seems not to be ideal for the wireless environment. New emerging applications and networks operate mostly disconnected. So-called Delay-Tolerant Networks (DTNs) are receiving increasing attention from both academia and industry. DTNs introduced a store-carry-and-forward concept, solving the problem of intermittent connectivity. The behavior of such networks is verified by real models, computer simulation, or a combination of both approaches. Computer simulation has become the primary and cost-effective tool for evaluating the performance of DTNs. OPNET Modeler is our target simulation tool, and we wanted to extend OPNET's simulation capabilities towards DTNs. We implemented the bundle protocol in OPNET Modeler, allowing us to simulate cases based on the bundle concept, such as epidemic forwarding, which relies on flooding the network with messages, and a forwarding algorithm based on the history of past encounters (PRoPHET). The implementation details are provided in the article.
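The epidemic forwarding concept mentioned above ("replicate the bundle on every contact") is easy to state outside any simulator. The toy sketch below is not the OPNET implementation; the node count, the deterministic contact chain and the function name are invented for illustration.

```python
def epidemic_delivery(n_nodes, source, dest, contacts):
    """Flood a bundle through pairwise contacts (time, node_a, node_b);
    return (delivery time, number of copies in the network)."""
    carriers = {source}                 # nodes currently storing the bundle
    for t, a, b in contacts:
        if a in carriers or b in carriers:
            carriers |= {a, b}          # replicate on contact
        if dest in carriers:
            return t, len(carriers)
    return None, len(carriers)

# A simple chain of meetings: at time t, nodes t%20 and (t+1)%20 meet
contacts = [(t, t % 20, (t + 1) % 20) for t in range(40)]
t, copies = epidemic_delivery(20, source=0, dest=19, contacts=contacts)
print(t, copies)  # delivered at t=18, with 20 copies in the network
```

The `copies` count illustrates the known cost of epidemic forwarding: high delivery probability, paid for with buffer and bandwidth overhead, which is what history-based schemes such as PRoPHET aim to reduce.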
Liu Zhiyang
2011-01-01
Similar to ISO Technical Committees, SAC Technical Committees undertake the management and coordination of standards development and amendment in various sectors of industry, playing the role of a bridge among enterprises, research institutions and the governmental standardization administration. How to fully play this essential role is the vital issue SAC has been committed to resolving. Among hundreds of SAC TCs, one stands out in knitting together isolated, scattered, but highly competitive enterprises in the same industry with the "Standards" thread, and achieving remarkable results in promoting industry development through standardization. It sets a role model for other TCs.
M. Alguacil Marí
2017-08-01
The current economic environment, together with the low scores obtained by our students in recent years, makes it necessary to incorporate new teaching methods. In this sense, econometric modelling provides a unique opportunity, offering the student the basic tools to address the study of econometrics in a deeper and more novel way. In this article, this teaching method is described, presenting also an example based on a recent study carried out by two students of the Degree in Economics. Likewise, the success of this method is evaluated quantitatively in terms of academic performance. The results confirm our initial idea that the greater involvement of the student, as well as the need for a more complete knowledge of the subject, acts as a stimulus for the study of this subject. As evidence of this, we show how those students who opted for the method we propose here obtained higher marks than those who chose the traditional method.
Bork Petersen, Franziska
2013-01-01
For the presentation of his autumn/winter 2012 collection in Paris, and subsequently in Copenhagen, Danish designer Henrik Vibskov installed a mobile catwalk. The article investigates the choreographic impact of this scenography on those who move through it. Drawing on Dance Studies, the analytical … advantageous manner. Stepping on the catwalk’s sloping, moving surfaces decelerates the models’ walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression, that they perform on most contemporary catwalks. Vibskov’s catwalk induces what the dance scholar Gabriele Brandstetter has labelled a ‘defigurative choreography’: a straying from definitions, which exist in ballet as in other movement-based genres, of how a figure should move and appear (1998). The catwalk scenography in this instance…
On Activity modelling in process modeling
Dorel Aiordachioaie
2001-12-01
The paper looks at the dynamic feature of the meta-models of the process modelling process: time. Some principles are considered and discussed as main dimensions of any modelling activity: the compatibility of the substances, the equipresence of phenomena and the solvability of the model. The activity models are considered and represented at the meta-level.
Regional variability among nonlinear chlorophyll-phosphorus relationships in lakes
Filstrup, Christopher T.; Wagner, Tyler; Soranno, Patricia A.; Stanley, Emily H.; Stow, Craig A.; Webster, Katherine E.; Downing, John A.
2014-01-01
The relationship between chlorophyll a (Chl a) and total phosphorus (TP) is a fundamental relationship in lakes that reflects multiple aspects of ecosystem function and is also used in the regulation and management of inland waters. The exact form of this relationship has substantial implications for its meaning and its use. We assembled a spatially extensive data set to examine whether nonlinear models are a better fit for Chl a–TP relationships than traditional log-linear models, whether there were regional differences in the form of the relationships, and, if so, which regional factors were related to these differences. We analyzed a data set from 2105 temperate lakes across 35 ecoregions by fitting and comparing two different nonlinear models and one log-linear model. The two nonlinear models fit the data better than the log-linear model. In addition, the parameters for the best-fitting model varied among regions: the maximum and lower Chl a asymptotes were positively and negatively related to percent regional pasture land use, respectively, and the rate at which chlorophyll increased with TP was negatively related to percent regional wetland cover. Lakes in regions with more pasture fields had higher maximum chlorophyll concentrations at high TP concentrations but lower minimum chlorophyll concentrations at low TP concentrations. Lakes in regions with less wetland cover showed a steeper Chl a–TP relationship than wetland-rich regions. Interpretation of Chl a–TP relationships depends on regional differences, and theory and management based on a monolithic relationship may be inaccurate.
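The practical stake of the asymptote finding can be illustrated with synthetic data (all parameter values below are invented, not the study's): when chlorophyll saturates at high TP, a log-linear model fitted to mid-range lakes overshoots when extrapolated to nutrient-rich lakes, which is exactly where the choice of functional form matters for management.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(log_tp, lo=0.0, hi=2.0, mid=1.5, rate=2.0):
    # four-parameter logistic in log-log space: lower/upper Chl asymptotes,
    # TP at the inflection point, and the rate of increase with TP
    return lo + (hi - lo) / (1 + np.exp(-rate * (log_tp - mid)))

log_tp = rng.uniform(0.5, 2.5, 200)                  # log10 TP across lakes
log_chl = logistic(log_tp) + rng.normal(0, 0.05, 200)

steep = (log_tp > 1.0) & (log_tp < 2.0)              # mid-range lakes only
b1, b0 = np.polyfit(log_tp[steep], log_chl[steep], 1)  # log-linear fit

pred_high = b0 + b1 * 3.0                            # extrapolate the line
true_high = logistic(3.0)                            # saturating "truth"
print(pred_high > true_high)  # True: the line ignores the upper asymptote
```

The same logic, run in reverse at low TP, shows why the lower asymptote matters for regulation: a log-linear model can underestimate chlorophyll in the most oligotrophic lakes.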
Towards a Multi Business Model Innovation Model
Lindgren, Peter; Jørgensen, Rasmus
2012-01-01
This paper studies the evolution of business model (BM) innovations related to a multi business model framework. The paper tries to answer the research questions: • What are the requirements for a multi business model innovation model (BMIM)? • What should a multi business model innovation model look like? Different generations of BMIMs are first studied to lay the baseline for what a next-generation multi-BM innovation model (BMIM) should look like. All generations of models are analyzed with the purpose of comparing the characteristics and challenges of previous…
Better Language Models with Model Merging
Brants, T
1996-01-01
This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
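The merging idea can be shown on a toy corpus: start with one state per history (a plain bigram model) and merge two histories whose successor distributions are identical; the model shrinks while the data log-likelihood is unchanged. This is only the flavor of the technique; the paper's algorithm chooses merges successively by their effect on the model score, which we do not reproduce.

```python
from collections import defaultdict
import math

def bigram_counts(corpus):
    """Count successor tokens for each one-token history."""
    c = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            c[a][b] += 1
    return c

def log_likelihood(counts, state_of):
    """Data log-likelihood when histories are grouped into states."""
    merged = defaultdict(lambda: defaultdict(int))
    for h, nxt in counts.items():
        for t, n in nxt.items():
            merged[state_of[h]][t] += n
    ll = 0.0
    for h, nxt in counts.items():
        s = merged[state_of[h]]
        total = sum(s.values())
        for t, n in nxt.items():
            ll += n * math.log(s[t] / total)
    return ll

corpus = [["a", "b"], ["a", "c"], ["d", "b"], ["d", "c"]]
counts = bigram_counts(corpus)
states = {h: h for h in counts}          # start specific: one state per history
base = log_likelihood(counts, states)
states["d"] = "a"                        # merge: "a" and "d" behave identically
merged_ll = log_likelihood(counts, states)
print(base == merged_ll)                 # smaller model, same likelihood
```

In the paper's setting the interesting merges are lossy: they trade a small likelihood drop for a large reduction in states, which is what yields the lower test-set perplexity reported above.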
Model Selection Principles in Misspecified Models
Lv, Jinchi
2010-01-01
Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...
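For readers unfamiliar with the classical criteria the paper generalizes, the sketch below computes AIC and BIC for a correctly specified versus an over-parameterized Gaussian linear model on synthetic data; the semi-Bayesian SIC itself is not reproduced here, and all data and names are our own.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)    # only the first predictor matters

def aic_bic(X, y):
    """AIC and BIC from the maximized Gaussian log-likelihood of an OLS fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    k = X1.shape[1] + 1                          # coefficients + error variance
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik, k * np.log(len(y)) - 2 * loglik

aic_small, bic_small = aic_bic(x[:, :1], y)      # true model
aic_big, bic_big = aic_bic(x, y)                 # plus two spurious predictors
print(bic_small < bic_big)  # BIC's heavier penalty favors the smaller model
```

Both criteria trade fit against dimensionality; the paper's point is that when the candidate family is misspecified, neither penalty alone is adequate, motivating the semi-Bayesian combination.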
The IMACLIM model; Le modele IMACLIM
NONE
2003-07-01
This document provides annexes to the IMACLIM model, which propose an updated description of IMACLIM, a model allowing the design of an evaluation tool for greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)
Building Mental Models by Dissecting Physical Models
Srivastava, Anveshna
2016-01-01
When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to…
Modelling live forensic acquisition
Grobler, MM
2009-06-01
Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...
A PROBABILISTIC APPROACH TO STRING TRANSFORMATION
V. Vinothh
2015-10-01
Full Text Available The string transformation model has been applied to a wide range of problems, including spelling correction. These models consist of two components: a source model and a channel model. Very little research has gone into improving the channel model for spelling correction. We describe a new channel model for spelling correction, based on generic string-to-string edits; using this model gives significant performance improvements compared to previously proposed models. We propose a novel probabilistic approach to string transformation that is both accurate and efficient. The approach includes the use of a log-linear model, a method for training the model, and an algorithm for generating the top k candidates, whether or not there is a predefined dictionary. The log-linear model is defined as a conditional probability distribution of an output string and a rule set for the transformation, conditioned on an input string. The string generation algorithm, based on pruning, is guaranteed to generate the optimal top k candidates. The proposed method is applied to the correction of spelling errors in queries as well as the reformulation of queries in web search. Experimental results on large-scale data show that the proposed approach is accurate and efficient, improving upon existing methods in different settings.
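A toy sketch of top-k candidate generation with beam pruning under a log-linear rule score. The rule set and weights below are invented for illustration; they are not the paper's trained model.

```python
import heapq

# Invented substitution rules: (replacement, log-weight), with weight 0
# for keeping the character unchanged.
RULES = {"a": [("a", 0.0), ("e", -1.5)],
         "e": [("e", 0.0), ("a", -1.5)],
         "i": [("i", 0.0), ("y", -2.0)]}

def top_k(word, k=3):
    """Beam search over per-character rules, pruning to the k best."""
    beam = [("", 0.0)]                      # (candidate prefix, score)
    for ch in word:
        options = RULES.get(ch, [(ch, 0.0)])
        expanded = [(p + r, s + w) for p, s in beam for r, w in options]
        beam = heapq.nlargest(k, expanded, key=lambda c: c[1])
    return sorted(beam, key=lambda c: -c[1])

cands = top_k("main", k=3)
# The unedited string always ranks first here, since all edit weights
# are negative.
```

Because the per-character scores are additive, pruning the beam at each position never discards a prefix of the true top-k completions in this toy setting, which mirrors the optimality guarantee claimed for the paper's pruned generation algorithm.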
Continuous Time Model Estimation
Carl Chiarella; Shenhuai Gao
2004-01-01
This paper introduces an easy to follow method for continuous time model estimation. It serves as an introduction on how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
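The continuous-to-discrete conversion step can be illustrated in the scalar case. This is a sketch assuming a zero-order hold on the input; the numeric values are arbitrary.

```python
import math

def discretize(a, b, dt):
    """Exact ZOH discretization of dx/dt = a*x + b*u (scalar case)."""
    ad = math.exp(a * dt)
    bd = (ad - 1.0) / a * b if a != 0 else b * dt
    return ad, bd

ad, bd = discretize(a=-0.5, b=1.0, dt=0.1)
# One step from the steady state x* = -b*u/a = 2.0 with u = 1.0 stays put:
x_next = ad * 2.0 + bd * 1.0
```

The same formulas generalize to matrices via the matrix exponential, which is the conversion the paper's state-space estimation relies on.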
Pejtersen, Jan; Feveile, H; Christensen, Karl Bang;
2011-01-01
population consisted of the 2403 employees that reported working in offices. The different types of offices were characterized according to self-reported number of occupants in the space. The log-linear Poisson model was used to model the number of self-reported sickness absence days depending on the type of office; the analysis was adjusted for age, gender, socioeconomic status, body mass index, alcohol consumption, smoking habits, and physical activity during leisure time. Results Sickness absence was significantly related to having a greater number of occupants in the office (P
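The log-linear Poisson model named above can be sketched with a minimal Newton-Raphson fit. The office indicator and absence counts below are invented for illustration, and a real analysis would add the listed adjustment covariates.

```python
import math

def poisson_fit(x, y, iters=100):
    """Fit log E[y] = b0 + b1*x by Newton-Raphson (2-parameter case)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))              # score
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h00 = sum(mu)                                           # Fisher info
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det                       # Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Invented data: x = 1 for open-plan office, 0 for cellular office;
# y = self-reported sickness absence days.
x = [0, 0, 0, 0, 1, 1, 1, 1]
y = [2, 3, 2, 3, 5, 7, 6, 6]
b0, b1 = poisson_fit(x, y)
rate_ratio = math.exp(b1)   # multiplicative effect of open-plan offices
```

Exponentiating the coefficient gives the rate ratio of absence days between office types, which is how such log-linear Poisson results are usually reported.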
Quantifying feedforward control: a linear scaling model for fingertip forces and object weight
Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath
2015-01-01
The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control. PMID:25878151
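The reported log-linear relationship amounts to an ordinary least-squares fit on log-transformed data. A sketch with hypothetical weights and peak load force rates:

```python
import math

def ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

weights = [200, 300, 450, 600, 800]        # object weight, g (hypothetical)
plfr    = [12.0, 16.5, 22.8, 28.4, 35.9]   # peak load force rate (hypothetical)
a, b = ols([math.log(w) for w in weights],
           [math.log(f) for f in plfr])
# Slope b is the fixed proportionality: a 1% heavier object yields
# roughly a b% higher peak load force rate.
```

On this log-log scale, a common slope across subjects with individually varying intercepts is exactly the pattern the abstract describes: shared proportionality, subject-specific PLFR level.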
Comparative Protein Structure Modeling Using MODELLER.
Webb, Benjamin; Sali, Andrej
2016-06-20
Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
Stomach cancer incidence in Southern Portugal 1998-2006: a spatio-temporal analysis.
Papoila, Ana L; Riebler, Andrea; Amaral-Turkman, Antónia; São-João, Ricardo; Ribeiro, Conceição; Geraldes, Carlos; Miranda, Ana
2014-05-01
Stomach cancer belongs to the most common malignant tumors in Portugal. Main causal factors are age, dietary habits, smoking, and Helicobacter pylori infections. As these factors do not only operate on different time dimensions, such as age, period, or birth cohort, but may also vary along space, it is of utmost interest to model temporal and spatial trends jointly. In this paper, we analyze incidence of stomach cancer in Southern Portugal between 1998 and 2006 for females and males jointly using a spatial multivariate age-period-cohort model. Thus, we avoid age aggregation and allow the exploration of heterogeneous time trends between males and females across age, period, birth cohort, and space. Model estimation is performed within a Bayesian setting assuming (gender specific) smoothing priors. Our results show that the posterior expected rate of stomach cancer is decreasing for all counties in Southern Portugal and that males around 70 have a two times higher risk of getting stomach cancer compared with their female counterparts. We further found that, except for some few counties, the spatial influence is almost constant over time and negligible in the southern counties of Southern Portugal.
Concept Modeling vs. Data modeling in Practice
Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
2015-01-01
This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models
Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher
2014-01-01
The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...
Chao, Dennis L; Longini, Ira M; Morris, J Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios.
Model Manipulation for End-User Modelers
Acretoaie, Vlad
End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers
Air Quality Dispersion Modeling - Alternative Models
Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.
From Product Models to Product State Models
Larsen, Michael Holm
1999-01-01
A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM, and in particular how to model a PSM, is the Research Object for this project. In the presentation, benefits and challenges of the PSM will be presented as a basis for the discussion.
Kátia Silveira da Silva
2005-06-01
Full Text Available There is a growing interest in research on the association between social support and health outcomes. The objective of this study was to evaluate the reliability and structure of agreement of a social support scale. Test-retest reliability was measured in a group of pregnant women (n = 65) in a public maternity ward. The intraclass correlation coefficient (ICC) and quadratically weighted kappa (kw²) were used as agreement measures. Log-linear statistical models were fitted to describe patterns of agreement. The ICC for the social support score was 0.90, and kw² ranged from 0.23 to 0.70. The log-linear models that provided the best fit to the data were the diagonal agreement plus linear-by-linear association and quasi-independence models. The scale was considered a reliable instrument to measure social support among low-income pregnant women.
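A sketch of the quadratically weighted kappa (kw²) used above as the agreement measure; the two rating vectors are invented for illustration.

```python
def weighted_kappa(r1, r2, k):
    """Quadratically weighted kappa for two ratings on categories 0..k-1."""
    n = len(r1)
    w = [[1 - (i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i]) for i in range(k)]                          # marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

# Invented test-retest ratings on a 4-category item:
test1 = [0, 1, 2, 2, 3, 1, 0, 2]
test2 = [0, 1, 2, 3, 3, 1, 1, 2]
kw2 = weighted_kappa(test1, test2, k=4)
```

The quadratic weights penalize disagreements by the squared category distance, so near-misses on an ordinal scale cost far less than distant ones.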
Measurement and Modeling: Infectious Disease Modeling
Kretzschmar, MEE
2016-01-01
After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th
Oberaigner, Willi; Horninger, Wolfgang; Klocker, Helmut; Schönitzer, Dieter; Stühlinger, Wolf; Bartsch, Georg
2006-08-15
The objective of this study was to analyze in detail the time trend in prostate cancer mortality in the population of Tyrol, Austria. In Tyrol, prostate-specific antigen tests were introduced in 1988-1989 and, since 1993, have been offered to all men aged 45-74 years free of charge. More than three quarters of all men in this age group had at least one such test in the last decade. The authors applied the age-period-cohort model by Poisson regression to mortality data covering more than three decades, from 1970 to 2003. For Tyrol, the full model with age and period and cohort terms fit fairly well. Period terms showed a significant reduction in prostate cancer mortality in the last 5 years, with a risk ratio of 0.81 (95% confidence interval: 0.68, 0.98) for Tyrol; for Austria without Tyrol, no effect was seen, with a risk ratio of 1.00 (95% confidence interval: 0.95, 1.05). Each was compared with the mortality rate in the period 1989-1993. Although the results of randomized screening trials are not expected until 2008-2010, these findings support the evidence that prostate-specific antigen testing offered to a population free of charge can reduce prostate cancer mortality.
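Turning a fitted Poisson period coefficient into a risk ratio with a 95% confidence interval is a one-line transformation. The coefficient and standard error here are hypothetical values chosen to mimic the reported RR of 0.81.

```python
import math

def risk_ratio(beta, se, z=1.96):
    """Risk ratio and 95% CI from a log-linear (Poisson) coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

rr, lo, hi = risk_ratio(beta=-0.21, se=0.095)
# An upper CI bound below 1.0 indicates a significant mortality
# reduction for that period, as reported for Tyrol.
```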
Baby boomers nearing retirement: the healthiest generation?
Rice, Neil E; Lang, Iain A; Henley, William; Melzer, David
2010-02-01
The baby-boom generation is entering retirement. Having experienced unprecedented prosperity and improved medical technology, they should be the healthiest generation ever. We compared prevalence of disease and risk factors at ages 50-61 years in baby boomers with the preceding generation and attributed differences to period or cohort effects. Data were from the Health Survey for England (HSE) from 1994 to 2007 (n = 48,563). Logistic regression models compared health status between birth cohorts. Age-period-cohort models identified cohort and period effects separately. Compared to the wartime generation, the baby-boomer group was heavier (3.02 kg; 95% confidence interval [CI], 2.42-3.63; p Baby boomers reported fewer heart attacks (OR = 0.61; CI, 0.47-0.79; p baby boomers are moving toward retirement with improved cardiovascular health. However, the baby-boomer cohort has a higher prevalence of mental illness diagnoses and shows no improvement in self-rated health compared to the wartime birth cohort. There remains substantial scope to reduce health risks and future disability.
Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik
1997-01-01
This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application of the laws of physics on the system. The unknown (or uncertain) parameters are estimated with Maximum Likelihood (ML) parameter estimation. The identified model has been evaluated by comparing the measurements with simulation of the model. The identified model was much more capable of describing the dynamics of the system than the deterministic model.
Cameron, Ian T.; Gani, Rafiqul
This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These ...
Willden, Jeff
2001-01-01
"Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…
Haiganoush Preisler; Alan Ager
2013-01-01
For applied mathematicians, forest fire models refer mainly to a non-linear dynamic system often used to simulate the spread of fire. For forest managers, forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...
Solicited abstract: Global hydrological modeling and models
Xu, Chong-Yu
2010-05-01
The origins of rainfall-runoff modeling in the broad sense can be found in the middle of the 19th century, arising in response to three types of engineering problems: (1) urban sewer design, (2) land reclamation drainage systems design, and (3) reservoir spillway design. Since then, numerous empirical, conceptual and physically-based models have been developed, including event-based models using the unit hydrograph concept, Nash's linear reservoir models, the HBV model, TOPMODEL, the SHE model, etc. From the late 1980s, the evolution of global and continental-scale hydrology has placed new demands on hydrologic modellers. The macro-scale (global and regional scale) hydrological models were developed on the basis of the following motivations (Arnell, 1999). First, for a variety of operational and planning purposes, water resource managers responsible for large regions need to estimate the spatial variability of resources over large areas, at a spatial resolution finer than can be provided by observed data alone. Second, hydrologists and water managers are interested in the effects of land-use and climate variability and change over a large geographic domain. Third, there is an increasing need to use hydrologic models as a basis for estimating point and non-point sources of pollution loading to streams. Fourth, hydrologists and atmospheric modellers have perceived weaknesses in the representation of hydrological processes in regional and global climate models, and have developed global hydrological models to overcome these weaknesses. Considerable progress in the development and application of global hydrological models has been achieved to date; however, large uncertainties still exist in the model structure, including large-scale flow routing, parameterization, input data, etc. This presentation will focus on global hydrological models, and the discussion includes (1) types of global hydrological models, (2) procedure of global hydrological model development
Bayesian Model Selection and Statistical Modeling
Ando, Tomohiro
2010-01-01
Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik
From Numeric Models to Granular System Modeling
Witold Pedrycz
2015-03-01
To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules including fuzzy sets, rough sets, and interval calculus. Key architectures of models dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling and giving rise to granular models. With this regard, an important category of rule-based models along with their granular enrichments is studied in detail.
Geologic Framework Model Analysis Model Report
R. Clayton
2000-12-19
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the
Mangani, P
2011-01-01
This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and, Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.
Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...
Osburn, L
2010-01-01
Full Text Available The construction industry has turned to energy modelling in order to assist them in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...
Breaking the racial barriers: variations in interracial marriage between 1980 and 1990.
Qian, Z
1997-05-01
Using PUMS data from the 1980 and the 1990 U.S. Census, I apply log-linear models to examine interracial marriage among whites, African Americans, Hispanics, and Asian Americans. Rarely, but increasingly between 1980 and 1990, interracial marriage of whites occurs most frequently with Asian Americans, followed by Hispanics, and then by African Americans. Interracial marriage tends to be educationally homogamous and the odds of interracial marriage increase with couples' educational attainment. Among interracially married couples with different educational attainments, both men and women from lower status racial groups but with high education levels tend to marry spouses from a higher status racial group with low education levels.
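The simplest log-linear model for such a marriage table is independence. A sketch computing expected counts and the likelihood-ratio statistic G² on an invented 2x2 table:

```python
import math

def g_squared(table):
    """Likelihood-ratio statistic for the independence log-linear model."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    g2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n      # expected count: row*col/total
            if o > 0:
                g2 += 2.0 * o * math.log(o / e)
    return g2

# Invented 2x2 table: rows = husband's group, cols = wife's group.
table = [[820, 30],
         [25, 125]]
g2 = g_squared(table)   # large G2 => marriages are far from random mixing
```

Studies like the one above elaborate this baseline with homogamy and status-exchange terms; comparing each model's G² against the independence fit quantifies how strongly the racial and educational barriers structure the table.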
Endogamy among the Dogon of Boni, Mali.
Cazes, M H
1990-01-01
This paper examines factors influencing endogamy in a Dogon population in Mali. Situated in Boni district, this population of about 5000 individuals is distributed over fifteen villages located on four independent massifs. This population is strongly endogamous (only 4% of all marriages are contracted with neighbouring ethnic groups), and each massif shows high endogamy. The roles of lineage, residence in the same village, and geographical distance in mating choice are examined. These different factors are successively analysed using log-linear statistical models and the results offer a more precise interpretation of endogamy in this population.
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...
Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...
Finch, W Holmes; Kelley, Ken
2014-01-01
A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
Rask, Morten
insight from the literature about business models, international product policy, international entry modes and globalization into a conceptual model of relevant design elements of global business models, enabling global business model innovation to deal with differences in a downstream perspective regarding the customer interface and in an upstream perspective regarding the supply infrastructure. The paper offers a coherent conceptual dynamic meta-model of global business model innovation. Students, scholars and managers within the field of international business can use this conceptualization to understand, to study, and to create global business model innovation. Managerial and research implications draw on the developed ideal type of global business model innovation.
Cellier, Francois E.
1991-01-01
A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
Andresen, Mette
2007-01-01
This paper meets the common critique of the teaching of non-authentic modelling in school mathematics. In the paper, non-authentic modelling is related to a change of view on the intentions of modelling from knowledge about applications of mathematical models to modelling for concept formation. Non-authentic modelling is also linked with the potentials of exploration of ready-made models as a forerunner for more authentic modelling processes. The discussion includes analysis of an episode of students' work in the classroom, which serves to illustrate how concept formation may be linked to explorations of a non...
Interfacing materials models with fire field models
Nicolette, V.F.; Tieszen, S.R.; Moya, J.L.
1995-12-01
For flame spread over solid materials, there has traditionally been a large technology gap between fundamental combustion research and the somewhat simplistic approaches used for practical, real-world applications. Recent advances in computational hardware and computational fluid dynamics (CFD)-based software have led to the development of fire field models. These models, when used in conjunction with material burning models, have the potential to bridge the gap between research and application by implementing physics-based engineering models in a transient, multi-dimensional tool. This paper discusses the coupling that is necessary between fire field models and burning material models for the simulation of solid material fires. Fire field models are capable of providing detailed information about the local fire environment. This information serves as an input to the solid material combustion submodel, which subsequently calculates the impact of the fire environment on the material. The response of the solid material (in terms of thermal response, decomposition, charring, and off-gassing) is then fed back into the field model as a source of mass, momentum and energy. The critical parameters which must be passed between the field model and the material burning model have been identified. Many computational issues must be addressed when developing such an interface. Some examples include the ability to track multiple fuels and species, local ignition criteria, and the need to use local grid refinement over the burning material of interest.
Combustion modeling in a model combustor
L.Y.Jiang; I.Campbell; K.Su
2007-01-01
The flow-field of a propane-air diffusion flame combustor with interior and exterior conjugate heat transfers was numerically studied. Results obtained from four combustion models, combined with the re-normalization group (RNG) k-ε turbulence model, discrete ordinates radiation model and enhanced wall treatment, are presented and discussed. The results are compared with a comprehensive database obtained from a series of experimental measurements. The flow patterns and the recirculation zone length in the combustion chamber are accurately predicted, and the mean axial velocities are in fairly good agreement with the experimental data, particularly at downstream sections, for all four combustion models. The mean temperature profiles are captured fairly well by the eddy dissipation (EDS), probability density function (PDF), and laminar flamelet combustion models. However, the EDS-finite-rate combustion model fails to provide an acceptable temperature field. In general, the flamelet model shows little superiority over the PDF model, and to some extent the PDF model performs better than the EDS model.
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization, as in lasso and ridge regression, to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
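The lasso penalty that RegSEM borrows from regression has a simple closed form in the one-parameter case: the penalized estimate is the soft-thresholded unpenalized estimate. The sketch below illustrates that operator in Python; it is a generic illustration of lasso-style shrinkage under an orthonormal-design assumption, not the RegSEM fitting algorithm itself, and the numbers are invented.

```python
# The lasso penalty at the heart of RegSEM-style estimation shrinks
# small parameters exactly to zero. For a single coefficient under an
# orthonormal design, the penalized solution is the soft-thresholding
# of the unpenalized estimate (illustrative values only).

def soft_threshold(beta_ols, lam):
    """Lasso solution for one orthonormal-design coefficient:
    shrink toward zero by lam, and zero out if |beta| <= lam."""
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0

print(round(soft_threshold(0.8, 0.3), 10))  # 0.5
print(soft_threshold(-0.2, 0.3))            # 0.0
```

The zeroing-out behaviour is what makes lasso-type penalties attractive for simplifying structural equation models: small loadings or paths are removed entirely rather than merely shrunk.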
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically activated sludge models – are introduced since these define...
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically, activated sludge models – are introduced since these define...
ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997, WA-0358), ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3
Model Reduction of Nonlinear Fire Dynamics Models
Lattimer, Alan Martin
2016-01-01
Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order mod...
Liu, Shu-Zheng; Zhang, Fang; Quan, Pei-Liang; Lu, Jian-Bang; Liu, Zhi-Cai; Sun, Xi-Bin
2012-01-01
In recent decades, decreasing trends in esophageal cancer mortality have been observed across China. We here describe esophageal cancer mortality trends in Linzhou city, a high-incidence region of esophageal cancer in China, during 1988-2010 and make an esophageal cancer mortality projection for the period 2011-2020 using a Bayesian approach. Age-standardized mortality rates were estimated by direct standardization to the World population structure in 1985. A Bayesian age-period-cohort (BAPC) analysis was carried out in order to investigate the effects of age, period and birth cohort on esophageal cancer mortality in Linzhou during 1988-2010 and to estimate future trends for the period 2011-2020. Age-adjusted rates for men and women decreased from 1988 to 2005 and changed little thereafter. Risk increased from 30 years of age until the very elderly. Period effects showed little variation in risk throughout 1988-2010. In contrast, a cohort effect showed that risk decreased greatly in later cohorts. Forecasting, based on BAPC modeling, indicated an increasing burden of mortality and a decreasing age-standardized mortality rate of esophageal cancer in Linzhou city. The decrease in esophageal cancer mortality risk since the 1930 cohort could be attributable to improvements in the socioeconomic environment and lifestyle. The standardized mortality rates of esophageal cancer should decrease continually. The effect of aging on the population could explain the increase in esophageal mortality projected for 2020.
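The direct standardization step mentioned above can be sketched in a few lines: age-specific rates are averaged with weights taken from a fixed standard population. All numbers below are invented for illustration and are not taken from the Linzhou data or the 1985 World standard.

```python
# Direct age standardization: weight age-specific rates by a fixed
# standard population so that rates from populations with different
# age structures become comparable. The three age groups, rates and
# weights here are hypothetical.

def age_standardized_rate(specific_rates, standard_pop):
    """Weighted average of age-specific rates (per 100,000),
    using the standard population counts as weights."""
    total = sum(standard_pop)
    return sum(r * w for r, w in zip(specific_rates, standard_pop)) / total

rates = [2.0, 50.0, 400.0]           # per 100,000: young, middle, old
standard = [50_000, 35_000, 15_000]  # standard-population weights

asr = age_standardized_rate(rates, standard)
print(round(asr, 1))  # 78.5
```

Because the weights are fixed, two populations (or two calendar periods) can be compared on the standardized rate without the comparison being confounded by differing age structures.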
Burden of cancer associated with type 2 diabetes mellitus in Japan, 2010-2030.
Saito, Eiko; Charvat, Hadrien; Goto, Atsushi; Matsuda, Tomohiro; Noda, Mitsuhiko; Sasazuki, Shizuka; Inoue, Manami
2016-04-01
Diabetes mellitus constitutes a major disease burden globally, and the prevalence of diabetes continues to increase worldwide. We aimed to estimate the burden of cancer associated with type 2 diabetes mellitus in Japan between 2010 and 2030. In this study, we estimated the population attributable fraction of cancer risk associated with type 2 diabetes in 2010 and 2030 using the prevalence estimates of type 2 diabetes in Japan from 1990 to 2030, summary hazard ratios of diabetes and cancer risk from a pooled analysis of eight large-scale Japanese cohort studies, and observed incidence/mortality of cancer in 2010 and predicted incidence/mortality for 2030 derived from the age-period-cohort model. Our results showed that between 2010 and 2030, the total numbers of incident cancer cases and cancer deaths were predicted to increase by 38.9% and 10.5%, respectively, in adults aged above 20 years. The number of excess incident cancer cases associated with type 2 diabetes is expected to increase by 26.5% in men and 53.2% in women between 2010 and 2030. The age-specific analysis showed that the population attributable fraction of cancer will increase in adults aged >60 years over time, but will not change in adults aged 20-59 years. In conclusion, this study suggests a modest but steady increase in cancers associated with type 2 diabetes. © 2016 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
Twenge, Jean M; Sherman, Ryne A; Wells, Brooke E
2017-02-01
Examining age, time period, and cohort/generational changes in sexual experience is key to better understanding sociocultural influences on sexuality and relationships. Americans born in the 1980s and 1990s (commonly known as Millennials and iGen) were more likely to report having no sexual partners as adults compared to GenX'ers born in the 1960s and 1970s in the General Social Survey, a nationally representative sample of American adults (N = 26,707). Among those aged 20-24, more than twice as many Millennials born in the 1990s (15 %) had no sexual partners since age 18 compared to GenX'ers born in the 1960s (6 %). Higher rates of sexual inactivity among Millennials and iGen also appeared in analyses using a generalized hierarchical linear modeling technique known as age-period-cohort analysis to control for age and time period effects among adults of all ages. Americans born early in the 20th century also showed elevated rates of adult sexual inactivity. The shift toward higher rates of sexual inactivity among Millennials and iGen'ers was more pronounced among women and absent among Black Americans and those with a college education. Contrary to popular media conceptions of a "hookup generation" more likely to engage in frequent casual sex, a higher percentage of Americans in recent cohorts, particularly Millennials and iGen'ers born in the 1990s, had no sexual partners after age 18.
Lijun Wang
2016-10-01
Full Text Available Background: As lung cancer has shown a continuously increasing trend in many countries, it is essential to stay abreast of lung cancer mortality information and take informed actions with a theoretical basis derived from appropriate and practical statistical methods. Methods: Age-specific rates were collected by gender and region (urban/rural) and analysed with descriptive methods and age-period-cohort models to estimate the trends in lung cancer mortality in China from 1988 to 2013. Results: Descriptive analysis revealed that the age-specific mortality rates of lung cancer in rural residents increased markedly over the last three decades, and there was no obvious increase in urban residents. APC analysis showed that the lung cancer mortality rates significantly increased with age (20–84), rose slightly with the time period, and decreased with the cohort, except for the rural cohorts born during the early years (1909–1928). The trends in the patterns of the period and cohort effects showed marked disparities between the urban and rural residents. Conclusions: Lung cancer mortality remains serious and is likely to continue to rise in China. Some known measures are suggested to be decisive factors in mitigating lung cancer, such as environmental conservation, medical security, and tobacco control, which should be implemented more vigorously over the long term in China, especially in rural areas.
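The age-period-cohort bookkeeping underlying such analyses can be sketched briefly: the birth cohort is determined by period minus age, which is also the source of the well-known APC identification problem (the three effects are exactly linearly dependent). The group boundaries below are illustrative, not those of the Chinese mortality data.

```python
# Minimal APC bookkeeping sketch (illustrative group boundaries):
# each (age, period) cell maps to exactly one birth cohort, so age,
# period and cohort effects cannot all be estimated freely without
# an identifying constraint.

def cohort_of(age_start, period_start):
    """Earliest birth year of the cohort observed in this age/period cell."""
    return period_start - age_start

ages = [20, 40, 60]           # age-group lower bounds
periods = [1988, 1998, 2008]  # calendar-period lower bounds

cohorts = {(a, p): cohort_of(a, p) for a in ages for p in periods}
print(cohorts[(40, 1998)])  # 1958

# The exact linear dependency age + cohort = period is what forces
# APC models to impose a constraint (e.g. equating neighboring cohorts).
assert all(a + cohort_of(a, p) == p for a in ages for p in periods)
```

This dependency is why procedures such as the one in the head of this collection assume, for example, near-equal effects for neighboring cohorts in order to identify the model.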
Chen Cynthia
2012-06-01
Full Text Available Abstract Background Prostate cancer is the most commonly diagnosed malignancy in men in Sweden and Geneva, and the third most common in men in Singapore. This population-based study describes trends in the incidence and mortality rates of prostate cancer in Singapore, Sweden and Geneva (Switzerland) from 1973 to 2006 and explores possible explanations for these different trends. Methods Data from patients diagnosed with prostate cancer were extracted from national cancer registries in Singapore (n = 5,172), Sweden (n = 188,783) and Geneva (n = 5,755) from 1973 to 2006. Trends of incidence and mortality were reported using the Poisson and negative binomial regression models. The age, period and birth-cohort were tested as predictors of incidence and mortality rates of prostate cancer. Results Incidence rates of prostate cancer increased over all time periods for all three populations. Based on the age-period-cohort analysis, older age and later period of diagnosis were associated with a higher incidence of prostate cancer, whereas older age and earlier period were associated with higher mortality rates for prostate cancer in all three countries. Conclusions This study demonstrated an overall increase in incidence rates and decrease in mortality rates in Singapore, Sweden and Geneva. Both incidence and mortality rates were much lower in Singapore. The period effect is a stronger predictor of incidence and mortality of prostate cancer than the birth-cohort effect.
Are white evangelical Protestants lower class? A partial test of church-sect theory.
Schwadel, Philip
2014-07-01
Testing hypotheses derived from church-sect theory and contemporary research about changes in evangelical Protestants' social status, I use repeated cross-sectional survey data spanning almost four decades to examine changes in the social-class hierarchy of American religious traditions. While there is little change in the social-class position of white evangelical Protestants from the early 1970s to 2010, there is considerable change across birth cohorts. Results from hierarchical age-period-cohort models show: (1) robust, across-cohort declines in social-class differences between white evangelical Protestants and liberal Protestants, affiliates of "other" religions, and the unaffiliated, (2) stability in social-class differences between white evangelical Protestants and moderate, Pentecostal, and nondenominational Protestants, (3) moderate across-cohort growth in social-class differences between white evangelical Protestants and Catholics, and (4) these patterns vary across indicators of social class. The findings in this article provide partial support for church-sect theory as well as other theories of social change that emphasize the pivotal role of generations. Copyright © 2014 Elsevier Inc. All rights reserved.
Welling Oei
2012-01-01
Full Text Available The epidemiological mechanisms behind the W-shaped age-specific influenza mortality during the Spanish influenza (H1N1) pandemic of 1918-19 have yet to be fully clarified. The present study aimed to develop a formal hypothesis: tuberculosis (TB) was associated with the W-shaped influenza mortality from 1918-19. Three pieces of epidemiological information were assessed: (i) the epidemic records containing the age-specific numbers of cases and deaths of influenza from 1918-19, (ii) an outbreak record of influenza in a Swiss TB sanatorium during the pandemic, and (iii) the age-dependent TB mortality over time in the early 20th century. Analyzing the data (i), we found that the W-shaped pattern was not only seen in mortality but also in the age-specific case fatality ratio, suggesting the presence of underlying age-specific risk factor(s) of influenza death among young adults. From the data (ii), TB was shown to be associated with influenza death (P=0.09), and there was no influenza death among non-TB controls. The data (iii) were analyzed by employing the age-period-cohort model, revealing a harvesting effect in the period function of TB mortality shortly after the 1918-19 pandemic. These findings suggest that it is worthwhile to further explore the role of TB in characterizing the age-specific risk of influenza death.
Endometrial cancer incidence trends in Europe: underlying determinants and prospects for prevention.
Bray, Freddie; Dos Santos Silva, Isabel; Moller, Henrik; Weiderpass, Elisabete
2005-05-01
More than one in 20 female cancers in Europe are of the endometrium. Surveillance of incidence rates is imperative given the rapidly changing profile in the prevalence and distribution of the underlying determinants. This study presents an analysis of observed and age-period-cohort-modeled trends in 13 European countries. There were increasing trends among postmenopausal women in many Northern and Western countries. Denmark and possibly France and Switzerland were exceptions, with decreasing trends in postmenopausal women. In premenopausal and perimenopausal women, declines were observed in Northern and Western Europe, most evidently in Denmark, Sweden, and the United Kingdom, affecting consecutive generations born after 1925. These contrast with the increasing trends regardless of menopausal age in some Southern and Eastern European countries, particularly Slovakia and Slovenia. These observations provide evidence of changes in several established risk factors over time and have implications for possible primary prevention strategies. In postmenopausal women, changes in reproductive behavior and prevalence of overweight and obesity may partially account for the observed increases, as well as hormone replacement therapy use in certain countries. Combined oral contraceptive use may be responsible for the declines observed among premenopausal and perimenopausal women. As oral contraceptive use becomes more widespread in Europe, increases in obesity and decreases in fertility imply that endometrial cancer in postmenopausal women will become a more substantial public health problem in the future.
Keyes, Katherine M; Susser, Ezra; Cheslack-Postava, Keely; Fountain, Christine; Liu, Kayuet; Bearman, Peter S
2012-01-01
Background The incidence and prevalence of autism have dramatically increased over the last 20 years. Decomposition of autism incidence rates into age, period and cohort effects disentangle underlying domains of causal factors linked to time trends. We estimate an age-period-cohort effect model for autism diagnostic incidence overall and by level of functioning. Methods Data are drawn from sequential cohorts of all 6 501 262 individuals born in California from 1992 to 2003. Autism diagnoses from 1994 to 2005 were ascertained from the California Department of Development Services Client Development and Evaluation Report. Results Compared with those born in 1992, each successively younger cohort has significantly higher odds of an autism diagnosis than the previous cohort, controlling for age and period effects. For example, individuals born in 2003 have 16.6 times the odds of an autism diagnosis compared with those born in 1992 [95% confidence interval (CI) 7.8–35.3]. The cohort effect observed in these data is stronger for high than for low-functioning children with an autism diagnosis. Discussion Autism incidence in California exhibits a robust and linear positive cohort effect that is stronger among high-functioning children with an autism diagnosis. This finding indicates that the primary drivers of the increases in autism diagnoses must be factors that: (i) have increased linearly year-to-year; (ii) aggregate in birth cohorts; and (iii) are stronger among children with higher levels of functioning. PMID:22253308
Better models are more effectively connected models
Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John
2016-04-01
The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity
Multiple Model Approaches to Modelling and Control,
on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...
Integrity modelling of tropospheric delay models
Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó
2017-04-01
The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models at very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution reaches an exceedance level of 10^-7. While the resulting standard deviation is much higher than the estimated standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project, some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the ray-tracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual
Numerical Modelling of Streams
Vestergaard, Kristian
In recent years there has been a sharp increase in the use of numerical water quality models. Numerical water quality modelling can be divided into three steps: hydrodynamic modelling for the determination of stream flow and water levels; modelling of transport and dispersion of a conservative...
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Dynamic Latent Classification Model
Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre
as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...... in the process as well as modeling dependences between attributes....
Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.
2017-07-01
HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.
Multilevel IRT Model Assessment
Fox, Jean-Paul; Ark, L. Andries; Croon, Marcel A.
2005-01-01
Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT
Models for Dynamic Applications
2011-01-01
be applied to formulate, analyse and solve these dynamic problems and how in the case of the fuel cell problem the model consists of coupled meso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment....
Silvennoinen, Annastiina; Teräsvirta, Timo
This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...... in which several multivariate GARCH models are fitted to the same data set and the results compared....
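As a point of reference for the multivariate models reviewed above, the univariate GARCH(1,1) recursion that they generalize can be sketched in a few lines. The parameter values and return series below are invented for illustration.

```python
# Univariate GARCH(1,1): today's conditional variance is a weighted
# combination of a constant, yesterday's squared shock, and
# yesterday's variance. Multivariate GARCH models extend this
# recursion to full conditional covariance matrices.

def garch11_variances(returns, omega, alpha, beta, h0):
    """Conditional variance path h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = [h0]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

# hypothetical daily returns and parameters
h = garch11_variances([0.01, -0.02, 0.015],
                      omega=1e-6, alpha=0.05, beta=0.9, h0=1e-4)
print(len(h))  # 3
```

The persistence of volatility is governed by alpha + beta; values close to one (as in this sketch) produce the slowly decaying volatility clustering typical of financial return series.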
Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.
The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The M...
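The elimination logic behind the MCS can be caricatured as follows: drop the worst-performing model until all survivors are indistinguishable from the best. The sketch below replaces the paper's bootstrapped equivalence tests with a fixed tolerance on average loss, so it is a simplified illustration of the idea rather than the actual MCS procedure; the model names and losses are invented.

```python
# A deliberately simplified caricature of the model confidence set:
# repeatedly eliminate the model with the worst average loss until
# every surviving model is within `tol` of the best. The real MCS
# uses bootstrapped equivalence tests, not a fixed tolerance.

def naive_model_set(losses, tol):
    """losses: dict mapping model name -> list of per-period losses."""
    avg = {m: sum(v) / len(v) for m, v in losses.items()}
    kept = set(avg)
    while len(kept) > 1:
        best = min(avg[m] for m in kept)
        worst = max(kept, key=lambda m: avg[m])
        if avg[worst] - best <= tol:
            break  # survivors are "equivalent" to the best
        kept.discard(worst)
    return kept

losses = {"A": [1.0, 1.2], "B": [1.1, 1.1], "C": [2.0, 2.2]}
print(sorted(naive_model_set(losses, tol=0.5)))  # ['A', 'B']
```

As in the MCS proper, the output is a set of models rather than a single winner, which is the analogue of reporting a confidence interval instead of a point estimate.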
Modelling Railway Interlocking Systems
Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth
2000-01-01
In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...
Rahmani, Fouad Lazhar
2010-11-01
The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by ISHAM [6].
2015-09-01
The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
Ibsen, Lars Bo; Liingaard, M.
2006-12-15
A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)
Chuine, I.; Garcia de Cortazar-Atauri, I.; Kramer, K.; Hänninen, H.
2013-01-01
In this chapter we provide a brief overview of plant phenology modeling, focusing on mechanistic phenological models. After a brief history of plant phenology modeling, we present the different models which have been described in the literature so far and highlight the main differences between them,
R. Pietersz (Raoul); M. van Regenmortel
2005-01-01
Currently, there are two market models for valuation and risk management of interest rate derivatives, the LIBOR and swap market models. In this paper, we introduce arbitrage-free constant maturity swap (CMS) market models and generic market models featuring forward rates that span perio
Ayres, Phil
2012-01-01
This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...... of design. Three distinctions are drawn through which to develop this discussion of models in an architectural context. An examination of these distinctions serves to nuance particular characteristics and roles of models, the modelling activity itself and those engaged in it....
Luczak, Joshua
2017-02-01
Scientific models are frequently discussed in philosophy of science. A great deal of the discussion is centred on approximation, idealisation, and on how these models achieve their representational function. Despite the importance, distinct nature, and high presence of toy models, they have received little attention from philosophers. This paper hopes to remedy this situation. It aims to elevate the status of toy models: by distinguishing them from approximations and idealisations, by highlighting and elaborating on several ways the Kac ring, a simple statistical mechanical model, is used as a toy model, and by explaining why toy models can be used to successfully carry out important work without performing a representational function.
Langseth, Helge; Nielsen, Thomas Dyhre
2005-01-01
One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes (NB) models. These models rely on two assumptions: $(i)$ all the attributes used to describe an instance are conditionally independent given the class of that instance, and $(ii)$ all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the NB model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....
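As a sketch of the two assumptions named in the abstract, a minimal Gaussian naive Bayes classifier can be written in a few lines. This illustrates the baseline that the latent classification models relax; it is not the paper's proposed model, and all names here are illustrative.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: attributes are (i) conditionally
    independent given the class and (ii) each univariate normal."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_, self.var_, self.prior_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_.append(Xc.mean(axis=0))
            self.var_.append(Xc.var(axis=0) + 1e-9)  # guard against zero variance
            self.prior_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for mu, var, p in zip(self.theta_, self.var_, self.prior_):
            # log prior + sum of per-attribute Gaussian log-likelihoods
            ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
            scores.append(np.log(p) + ll)
        return self.classes_[np.argmax(np.column_stack(scores), axis=1)]
```

Training on two well-separated Gaussian clusters and predicting the cluster centers shows the decision rule at work.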
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...
Justesen, Lise; Overgaard, Svend Skafte
2017-01-01
This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model...... that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open......-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored...
Widera, Paweł
2011-01-01
The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...
Ristad, E S; Ristad, Eric Sven; Thomas, Robert G.
1996-01-01
A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform Markov model is that it makes predictions of varying lengths using contexts of varying lengths. Experiments on the Wall Street Journal reveal that the nonuniform model performs slightly better than the classic interpolated Markov model. This result is somewhat remarkable because both models contain identical numbers of parameters whose values are estimated in a similar manner. The only difference between the two models is how they combine the statistics of longer and shorter strings. Keywords: nonuniform Markov model, interpolated Markov m...
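The interpolation idea described above, combining statistics of longer and shorter contexts, can be sketched as a character-level Markov model with fixed mixing weights. The weights, the character-level setting, and the uniform back-off are illustrative assumptions, not the authors' Wall Street Journal configuration.

```python
from collections import Counter, defaultdict

class InterpolatedMarkov:
    """Character-level interpolated Markov model mixing order-0..2
    context statistics with fixed weights (the lambdas are assumptions)."""

    def __init__(self, lambdas=(0.2, 0.3, 0.5)):
        self.lambdas = lambdas  # weights for order 0, 1, 2
        self.counts = [defaultdict(Counter) for _ in range(3)]

    def fit(self, text):
        self.vocab = sorted(set(text))
        for i, ch in enumerate(text):
            for order in range(3):
                if i >= order:
                    self.counts[order][text[i - order:i]][ch] += 1
        return self

    def prob(self, context, ch):
        """Interpolated probability of `ch` after `context`."""
        p = 0.0
        for order, lam in enumerate(self.lambdas):
            ctx = context[len(context) - order:] if order else ""
            c = self.counts[order].get(ctx)
            if c and sum(c.values()) > 0:
                p += lam * c[ch] / sum(c.values())
            else:
                p += lam / len(self.vocab)  # back off to uniform
        return p
```

On a periodic string such as "abababab", the higher-order contexts dominate, so the model strongly prefers the alternating continuation while the probabilities over the vocabulary still sum to one.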
Lumped Thermal Household Model
Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob
2013-01-01
In this paper we discuss two different approaches to model the flexible power consumption of heat pump heated households: individual household modeling and lumped modeling. We illustrate that a benefit of individual modeling is that we can overview and optimize the complete flexibility of a heat pump portfolio. Following, we illustrate two disadvantages of individual models, namely that it requires much computational effort to optimize over a large portfolio, and second that it is difficult to accurately model the houses in certain time periods due to local disturbances. Finally, we propose a lumped model approach as an alternative to the individual models. In the lumped model, the portfolio is seen as baseline consumption superimposed with an ideal storage of limited power and energy capacity. The benefit of such a lumped model is that the computational effort of flexibility optimization......
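The "ideal storage of limited power and energy capacity" abstraction can be sketched as a simple clipping rule on requested charge/discharge power. The function and parameter names are assumptions for illustration, not the authors' formulation.

```python
def ideal_storage(requests, p_max, e_max, e0=0.0, dt=1.0):
    """Ideal storage with power limit p_max and energy capacity e_max.

    requests: desired charge (+) / discharge (-) power per time step.
    Returns the realized powers and the state-of-charge trajectory.
    """
    soc, powers, socs = e0, [], []
    for p in requests:
        p = max(-p_max, min(p_max, p))                        # power limit
        p = max((0 - soc) / dt, min((e_max - soc) / dt, p))   # energy limit
        soc += p * dt
        powers.append(p)
        socs.append(soc)
    return powers, socs
```

For example, with `p_max=3` and `e_max=4`, a request of 5 is first power-clipped to 3, then a second request of 5 is energy-clipped to 1 because the store is nearly full, and a discharge request of -20 is clipped to -3.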
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Introduction to Adjoint Models
Errico, Ronald M.
2015-01-01
In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
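A standard sanity check for a derived adjoint is the dot-product identity <M dx, dy> = <dx, Mᵀ dy>, where M is the tangent linear model. The toy nonlinear model f(x) = x² below is purely illustrative, chosen only to make the tangent linear and adjoint operators explicit.

```python
import numpy as np

def tlm(x, dx):
    """Tangent linear of the toy nonlinear model f(x) = x**2
    (elementwise): df = 2*x*dx, linearized about the state x."""
    return 2 * x * dx

def adjoint(x, dy):
    """Adjoint of the tangent linear model. The TLM here is a diagonal
    operator, so its transpose has the same elementwise form."""
    return 2 * x * dy
```

Running the dot-product test with random states and perturbations confirms that the two inner products agree to machine precision, which is the usual acceptance criterion for a hand-coded or automatically differentiated adjoint.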
Chao, Dennis L.; Ira M Longini; Morris, J. Glenn
2014-01-01
Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating mo...
Zagorsek, Branislav
2013-01-01
A business model describes the company's most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to simply and systematically capture and describe the most important components of the business model, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...
Diffeomorphic Statistical Deformation Models
Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus
2007-01-01
In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. Th...... with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes....
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four...... the basic problems in determining the dimension of linear models. Then each of the eight measures are treated. The results are illustrated by examples....
Multiple Model Approaches to Modelling and Control
Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...
Model Checking of Boolean Process Models
Schneider, Christoph
2011-01-01
In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to model explicitly states with a subsequent skipping of activations and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic, no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...
Pavement Aging Model by Response Surface Modeling
Manzano-Ramírez A.
2011-10-01
In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were only marginally adequate, with an error of 20% that was associated with the other environmental factors, which were not considered at the beginning of the research.
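A generic second-order response surface fit of the kind used in RSM can be sketched as an ordinary least-squares problem. The factor names (time t, temperature T) are assumptions for illustration; this is not the paper's fitted VM/ΔL/ΔF models.

```python
import numpy as np

def fit_response_surface(t, T, y):
    """Fit a second-order response surface
    y ~ b0 + b1*t + b2*T + b3*t*T + b4*t^2 + b5*T^2
    by least squares over observed factor levels t, T."""
    X = np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def predict(beta, t, T):
    """Evaluate the fitted surface at factor levels t, T."""
    return (beta[0] + beta[1] * t + beta[2] * T
            + beta[3] * t * T + beta[4] * t**2 + beta[5] * T**2)
```

On noiseless synthetic data generated from a known polynomial over a 5x5 factor grid, the fit recovers the generating coefficients exactly (up to numerical precision), which is a useful check before applying the same design to measured aging responses.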
Model Validation Status Review
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and
Cameron, Ian T.; Gani, Rafiqul
This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights the important role of modern product and process modelling in decision-making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....
Practical Marginalized Multilevel Models.
Griswold, Michael E; Swihart, Bruce J; Caffo, Brian S; Zeger, Scott L
2013-01-01
Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model (MMM) to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.
Modelling Foundations and Applications
This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed...... and selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specific modeling, metamodeling, model analysis and verification, model management, model transformation and simulation. The breadth of topics...
蒋娜; 谢有琪
2012-01-01
With the development of human society, social interaction has expanded beyond the single community, to the extent that the world as a whole may be regarded as one community. Communication therefore plays an increasingly important role in our daily life. As a consequence, a communication model, or the definition of one, is not so much a definition as a guide to communication. However, some existing communication models are not as practical as they once were. This paper makes an overall contrast among three communication models, the Coded Model, the Gable Communication Model and the Ostensive-Inferential Model, to see how they help people comprehend verbal and non-verbal communication.
Modeling worldwide highway networks
Villas Boas, Paulino R.; Rodrigues, Francisco A.; da F. Costa, Luciano
2009-12-01
This Letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and the destination vertices. Optimal configurations of parameters are obtained for each model and used for the comparison. The highway networks of Australia, Brazil, India, and Romania are considered and shown to be properly modeled by the modified geographical model.
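The traditional geographical baseline that the modified model departs from can be sketched as a random geometric graph: vertices at random positions, with edges between pairs closer than a threshold. The unit-square placement and the threshold rule are generic assumptions, not the Letter's calibrated configuration.

```python
import math
import random

def geographical_graph(n, radius, seed=0):
    """Classic geographical (random geometric) network model:
    n vertices at uniform random positions in the unit square,
    edges between all pairs closer than `radius`."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) < radius]
    return pos, edges
```

By construction every edge respects the distance threshold, which is the defining property this family of models imposes on the network topology.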
LI Zhi-jia; YAO Cheng; KONG Xiang-guang
2005-01-01
To improve the Xinanjiang model, runoff generation from infiltration excess is added to the model. Another six parameters are added to the Xinanjiang model. In principle, the improved Xinanjiang model can be used to simulate runoff in humid, semi-humid and also semi-arid regions. The application to the Yi River shows that the improved Xinanjiang model can forecast discharge with higher accuracy and satisfy practical requirements. It also shows that the improved model is reasonable.
Microsoft tabular modeling cookbook
Braak, Paul te
2013-01-01
This book follows a cookbook style with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.
Luiz Carlos Bresser-Pereira
2012-03-01
Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish classifications that take a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.
Geller, Michael; Telem, Ofri
2015-05-15
We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute, on an hourly or daily basis, the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, and specifically for weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
Empirical Model Building Data, Models, and Reality
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
Ensey, Tyler S.
2013-01-01
During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library components; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose. The component I was assigned, specifically, was a
D.W. Wu; A.J. Smith
2004-11-08
The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).
Major Differences between the Jerome Model and the Horace Model
朱艳
2014-01-01
There are three famous translation models in the field of translation: the Jerome model, the Horace model and the Schleiermacher model. The production and development of the three models have had a significant influence on translation. To identify the major differences between the two classical Western translation models, we discuss the Jerome model and the Horace model in depth in this paper.
Modelling cointegration in the vector autoregressive model
Johansen, Søren
2000-01-01
A survey is given of some results obtained for the cointegrated VAR. The Granger representation theorem is discussed and the notions of cointegration and common trends are defined. The statistical model for cointegrated I(1) variables is defined, and it is shown how hypotheses on the cointegrating...... relations can be estimated under suitable identification conditions. The asymptotic theory is briefly mentioned and a few economic applications of the cointegration model are indicated....
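The notion of common trends can be illustrated with two I(1) series that share a single stochastic trend, so that one linear combination (here simply their difference) is stationary. This is a toy demonstration of cointegration, not the Johansen estimation procedure surveyed above; all names are illustrative.

```python
import numpy as np

def common_trend(T, seed=0):
    """Generate two I(1) series driven by one shared random walk
    (the common trend), plus independent stationary deviations.
    Their difference y1 - y2 is a stationary cointegrating relation."""
    rng = np.random.default_rng(seed)
    trend = np.cumsum(rng.normal(size=T))   # shared stochastic trend
    y1 = trend + rng.normal(size=T)         # stationary deviation 1
    y2 = trend + rng.normal(size=T)         # stationary deviation 2
    return y1, y2
```

Over a long sample, each level series wanders with the random walk while the spread stays bounded, which is the qualitative signature the cointegrated VAR formalizes.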
Emissions Modeling Clearinghouse
U.S. Environmental Protection Agency — The Emissions Modeling Clearinghouse (EMCH) supports and promotes emissions modeling activities both internal and external to the EPA. Through this site, the EPA...
Riis, Troels; Jørgensen, John Leif
1999-01-01
This document describes a test of the implementation of the ASC orbit model for the Champ satellite.
National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...
Laboratory of Biological Modeling
Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to...
Rouder, Jeffrey N; Engelhardt, Christopher R; McCabe, Simon; Morey, Richard D
2016-12-01
Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet, testing, including traditional ANOVA, has been recently critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-a-vis a more general alternative by penalizing more flexible models. We argue this ability to support more simple models allows for more nuanced theoretical conclusions than provided by traditional ANOVA F-tests. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.
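The penalization idea, letting the data support a simpler nested model rather than only rejecting it, can be sketched with BIC for least-squares fits. This is a generic illustration of model comparison between nested linear models, not the authors' reparameterized ANOVA strategy.

```python
import numpy as np

def bic_linear(X, y):
    """BIC of an ordinary least-squares fit under a Gaussian likelihood:
    n*log(RSS/n) + k*log(n), where k is the number of columns of X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def compare(X_null, X_alt, y):
    """Lower BIC wins; the k*log(n) penalty allows the simpler
    null/nested model to be supported against a flexible alternative."""
    return "null" if bic_linear(X_null, y) <= bic_linear(X_alt, y) else "alt"
```

When the data are generated with a strong effect of the predictor, the richer model wins decisively; with no effect, the penalty typically favors the intercept-only model, which an F-test alone cannot "support".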
Højsgaard, Søren; Edwards, David; Lauritzen, Steffen
Graphical models in their modern form have been around since the late 1970s and appear today in many areas of the sciences. Along with the ongoing developments of graphical models, a number of different graphical modeling software programs have been written over the years. In recent years many...... of these software developments have taken place within the R community, either in the form of new packages or by providing an R interface to existing software. This book attempts to give the reader a gentle introduction to graphical modeling using R and the main features of some of these packages. In addition......, the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...
Controlling Modelling Artifacts
Smith, Michael James Andrew; Nielson, Flemming; Nielson, Hanne Riis
2011-01-01
When analysing the performance of a complex system, we typically build abstract models that are small enough to analyse, but still capture the relevant details of the system. But it is difficult to know whether the model accurately describes the real system, or if its behaviour is due to modelling artifacts that were inadvertently introduced. In this paper, we propose a novel methodology to reason about modelling artifacts, given a detailed model and a high-level (more abstract) model of the same system. By a series of automated abstraction steps, we lift the detailed model to the same state space, which describes the possible configurations of the system (for example, by counting the number of components in a certain state). We motivate our methodology with a case study of the LMAC protocol for wireless sensor networks. In particular, we investigate the accuracy of a recently proposed high-level model of LMAC.
Laboratory of Biological Modeling
Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to a...
Modeling EERE deployment programs
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge for future research.
Consistent model driven architecture
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.
Bounding species distribution models
Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE
2011-10-01
Full Text Available Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
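The "clamping" alteration described in this abstract amounts, in essence, to truncating each environmental predictor to the range observed in the model-development data before extrapolating. A minimal sketch (the predictor names and values below are hypothetical, not from the study):

```python
def fit_bounds(train_rows):
    """Record each predictor's min and max from the model-development data."""
    cols = list(zip(*train_rows))
    return [(min(c), max(c)) for c in cols]

def clamp(row, bounds):
    """Truncate each predictor to the range seen during model development,
    so downstream CART/Maxent predictions never see unobserved extremes."""
    return [min(max(v, lo), hi) for v, (lo, hi) in zip(row, bounds)]

# Hypothetical predictors: [annual mean temperature (C), annual precipitation (mm)]
train = [[12.0, 300.0], [18.5, 150.0], [25.0, 80.0]]
bounds = fit_bounds(train)

# A projection cell hotter and drier than anything in the training data
new_cell = [31.0, 40.0]
print(clamp(new_cell, bounds))  # -> [25.0, 80.0]
```

Cells inside the training envelope pass through unchanged; only out-of-range values are pulled back to the nearest observed bound.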
Pulkkinen, U. [VTT Industrial Systems (Finland)
2004-04-01
The report describes a simple comparison of two CCF-models, the ECLM and the Beta-model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF-models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF-models in a real PSA context (e.g. the data interpretation, properties of computer tools, the model documentation) are not discussed in the report. Similarly, the qualitative CCF-analyses needed in using the models are not discussed in the report. (au)
Chip Multithreaded Consistency Model
Zu-Song Li; Dan-Dan Huan; Wei-Wu Hu; Zhi-Min Tang
2008-01-01
Multithreading is the developing trend of high-performance processors, and the memory consistency model is essential to the correctness, performance and complexity of a multithreaded processor. A chip multithreaded consistency model adapted to multithreaded processors is proposed in this paper. The restrictions imposed on memory event ordering by chip multithreaded consistency are presented and formalized. Using the idea of the critical cycle introduced by Wei-Wu Hu, we prove that the proposed chip multithreaded consistency model satisfies the correctness criterion of the sequential consistency model. The chip multithreaded consistency model provides a way of achieving high performance compared with the sequential consistency model, and ensures software compatibility in that the execution result on a multithreaded processor is the same as the execution result on a uniprocessor. An implementation strategy for the chip multithreaded consistency model in the Godson-2 SMT processor is also proposed. The Godson-2 SMT processor supports the chip multithreaded consistency model through an exception scheme based on the sequential memory access queue of each thread.
Callison, Daniel
2002-01-01
Defines models and describes information search models that can be helpful to instructional media specialists in meeting users' abilities and information needs. Explains pathfinders and Kuhlthau's information search process, including the pre-writing information search process. (LRW)
Bennett, Joan
1998-01-01
Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)
Oleg Svatos
2013-01-01
Full Text Available In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. We define an example scenario based on current Czech legislation, which is then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfactory results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...
Bounding Species Distribution Models
Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will be accessible only at particular computing sites with potentially large latency. In contrast, the Analysis...
National Aeronautics and Space Administration — The Galactic model is a spatial and spectral template. The model for the Galactic diffuse emission was developed using spectral line surveys of HI and CO (as a...
Petrone, Giovanni; Spagnuolo, Giovanni
2016-01-01
This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.
Buczyńska, Weronika
2010-01-01
We define toric projective model of a trivalent graph as a generalization of a binary symmetric model of a trivalent phylogenetic tree. Generators of the projective coordinate ring of the models of graphs with one cycle are explicitly described. The models of graphs with the same topological invariants are deformation equivalent and share the same Hilbert function. We also provide an algorithm to compute the Hilbert function.
Model of magnetostrictive actuator
LI Lin; ZHANG Yuan-yuan
2005-01-01
The hysteresis of the magnetostrictive actuator was studied. A mathematical model of the hysteresis loop was obtained on the basis of experiment. This model depends on the frequency and the amplitude of the alternating current input to the magnetostrictive actuator. Based on the model, the effect of hysteresis on the dynamic output of the magnetostrictive actuator was investigated. Finally, how to account for hysteresis and establish a dynamic model of a magnetostrictive actuator system in the design and application of a practical system is discussed.
[No author listed]
2002-01-01
In order to set up a conceptual data model that reflects the real world as accurately as possible, this paper first reviews and analyzes the disadvantages of previous conceptual data models used by traditional GIS in simulating geographic space, gives a new explanation of geographic space and analyzes its various essential characteristics. Finally, this paper proposes several detailed key points for designing a new type of GIS data model and gives a simple holistic GIS data model.
Modeling Digital Video Database
[No author listed]
2001-01-01
The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the continuity mapping of the models, the authors propose some suggestions to transform the use case diagrams into an object diagram, which is one of the main diagrams for the next development phases.
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and a better agreement is obtained than with the Bass model.
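The memory-augmented model itself is not specified in the abstract; the classic Bass model it extends, however, is standard: cumulative adoptions N grow as dN/dt = (p + qN/m)(m - N), with market potential m, innovation coefficient p and imitation coefficient q. A minimal Euler-step simulation of that baseline (parameter values below are illustrative, not from the paper):

```python
def bass_adoption(m, p, q, steps, dt=0.1):
    """Simulate the classic Bass model dN/dt = (p + q*N/m) * (m - N)
    by forward Euler steps. Returns the cumulative-adoption path.

    m: market potential, p: innovation coefficient, q: imitation coefficient.
    (The paper's memory extension modifies this baseline and is not
    reproduced here.)
    """
    N = 0.0
    path = [N]
    for _ in range(steps):
        N += dt * (p + q * N / m) * (m - N)
        path.append(N)
    return path

# Typical textbook-scale coefficients; cumulative adoptions rise in an
# S-curve and saturate at the market potential m.
path = bass_adoption(m=1000.0, p=0.03, q=0.38, steps=400)
print(round(path[-1]))
```

Since each increment is proportional to (m - N) with a step factor well below 1, the path is monotone and never overshoots m.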
Quantal Response: Nonparametric Modeling
2017-01-01
Nonparametric quantal response (QR) models relate stimulus and probability of response; figures in the report show spline fits and logistic regression. The Generalized Linear Model approach does not make use of the limit distribution but allows an arbitrary functional relationship between stimulus and probability of response. [Only fragments of this report survive extraction; its contents include Conclusions and Recommendations, References, and appendices on the Linear Model and the Generalized Linear Model.]
Auxiliary Deep Generative Models
Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae; Winther, Ole
2016-01-01
Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables which improves the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable we also propose a model with two stochastic layers and skip connections...
Avionics Architecture Modelling Language
Alana, Elena; Naranjo, Hector; Valencia, Raul; Medina, Alberto; Honvault, Christophe; Rugina, Ana; Panunzia, Marco; Dellandrea, Brice; Garcia, Gerald
2014-08-01
This paper presents the ESA AAML (Avionics Architecture Modelling Language) study, which aimed at advancing the avionics engineering practices towards a model-based approach by (i) identifying and prioritising the avionics-relevant analyses, (ii) specifying the modelling language features necessary to support the identified analyses, and (iii) recommending/prototyping software tooling to demonstrate the automation of the selected analyses based on a modelling language and compliant with the defined specification.
Tashiro, Tohru
2013-01-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and a better agreement is obtained than with the Bass model.
Artificial neural network modelling
Samarasinghe, Sandhya
2016-01-01
This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.
Optimization modeling with spreadsheets
Baker, Kenneth R
2015-01-01
An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il
Model Checking Feature Interactions
Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas;
2015-01-01
This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.
GARCH Modelling of Cryptocurrencies
Jeffrey Chu
2017-10-01
Full Text Available With the exception of Bitcoin, there appears to be little or no literature on GARCH modelling of cryptocurrencies. This paper provides the first GARCH modelling of the seven most popular cryptocurrencies. Twelve GARCH models are fitted to each cryptocurrency, and their fits are assessed in terms of five criteria. Conclusions are drawn on the best fitting models, forecasts and acceptability of value at risk estimates.
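The abstract does not list the twelve GARCH specifications fitted to each cryptocurrency, but the recursion underlying the most common of them, GARCH(1,1), is standard: the conditional variance is sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2. A simulation sketch of that recursion (parameters below are made up for illustration, not fitted to any cryptocurrency):

```python
import math
import random

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns r_t = sigma_t * z_t, z_t ~ N(0,1), with the GARCH(1,1)
    variance recursion sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    Requires alpha + beta < 1 for a finite unconditional variance."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(var) * rng.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns

# Illustrative parameters in the stationary region (alpha + beta = 0.95)
r = simulate_garch11(n=5000, omega=0.05, alpha=0.1, beta=0.85, seed=42)

# The sample variance should sit near the unconditional variance
# omega / (1 - alpha - beta) = 1.0, with volatility clustering in the path.
sample_var = sum(x * x for x in r) / len(r)
print(sample_var)
```

Fitting such a model to real price data would maximize the Gaussian (or Student-t) likelihood over (omega, alpha, beta), which is what dedicated packages automate.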
Thoft-Christensen, Palle
Modelling of corrosion cracking of reinforced concrete structures is complicated, as a great number of uncertain factors are involved. To get a reliable modelling, a physical and mechanical understanding of the process behind corrosion is needed.
Modeling and Remodeling Writing
Hayes, John R.
2012-01-01
In Section 1 of this article, the author discusses the succession of models of adult writing that he and his colleagues have proposed from 1980 to the present. He notes the most important changes that differentiate earlier and later models and discusses reasons for the changes. In Section 2, he describes his recent efforts to model young…
Fortelius, C.; Holopainen, E.; Kaurola, J.; Ruosteenoja, K.; Raeisaenen, J. [Helsinki Univ. (Finland). Dept. of Meteorology
1996-12-31
In recent years, the modelling of interannual climate variability, the atmospheric energy and water cycles, and climate simulations with the ECHAM3 model have been studied. In addition, the climate simulations of several models have been compared, with special emphasis on the area of northern Europe.
Crushed Salt Constitutive Model
Callahan, G.D.
1999-02-01
The constitutive model used to describe the deformation of crushed salt is presented in this report. Two mechanisms -- dislocation creep and grain boundary diffusional pressure solution -- are combined to form the basis for the constitutive model governing the deformation of crushed salt. The constitutive model is generalized to represent three-dimensional states of stress. Upon complete consolidation, the crushed-salt model reproduces the Multimechanism Deformation (M-D) model typically used for the Waste Isolation Pilot Plant (WIPP) host geological formation salt. New shear consolidation tests are combined with an existing database that includes hydrostatic consolidation and shear consolidation tests conducted on WIPP and southeastern New Mexico salt. Nonlinear least-squares model fitting to the database produced two sets of material parameter values for the model -- one for the shear consolidation tests and one for a combination of the shear and hydrostatic consolidation tests. Using the parameter values determined from the fitted database, the constitutive model is validated against constant strain-rate tests. Shaft seal problems are analyzed to demonstrate model-predicted consolidation of the shaft seal crushed-salt component. Based on the fitting statistics, the ability of the model to predict the test data, and the ability of the model to predict load paths and test data outside of the fitted database, the model appears to capture the creep consolidation behavior of crushed salt reasonably well.
Modeling EERE Deployment Programs
Cort, K. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hostick, D. J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Belzer, D. B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Livingston, O. V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2007-11-01
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge in which future research is needed.
Meara, Paul
2004-01-01
This paper describes some simple simulation models of vocabulary attrition. The attrition process is modelled using a random autonomous Boolean network model, and some parallels with real attrition data are drawn. The paper argues that applying a complex systems approach to attrition can provide some important insights, which suggest that real…
Flexible survival regression modelling
Cortese, Giuliana; Scheike, Thomas H; Martinussen, Torben
2009-01-01
Regression analysis of survival data, and more generally event history data, is typically based on Cox's regression model. We here review some recent methodology, focusing on the limitations of Cox's regression model. The key limitation is that the model is not well suited to represent time-varyi...
Rodarius, C.; Rooij, L. van; Lange, R. de
2007-01-01
The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time two scalable facet human models were developed in MADYMO. First, a scalable human male was
Modeling typical performance measures
Weekers, Anke Martine
2009-01-01
In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as the models used in maximum performance measurement are appl
Diggle, Peter J
2007-01-01
Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics and emphasizes statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.
Zephyr - the prediction models
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...
Model Breaking Points Conceptualized
Vig, Rozy; Murray, Eileen; Star, Jon R.
2014-01-01
Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot,…
Generalized Poisson sigma models
Batalin, Igor; Marnelius, Robert
2001-01-01
A general master action in terms of superfields is given which generates generalized Poisson sigma models by means of a natural ghost number prescription. The simplest representation is the sigma model considered by Cattaneo and Felder. For Dirac brackets considerably more general models are generated.
Fedorov, Alexander
2011-01-01
The author supposed that media education models can be divided into the following groups: (1) educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic, socio-cultural theories of media education; (2) educational-ethical models (the study of moral, religions,…
Speiser, Bob; Walter, Chuck
2011-01-01
This paper explores how models can support productive thinking. For us a model is a "thing", a tool to help make sense of something. We restrict attention to specific models for whole-number multiplication, hence the wording of the title. They support evolving thinking in large measure through the ways their users redesign them. They assume new…
Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel
2016-01-01
An introduction to genome-scale models, and how to build and use them, will be given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical pr...
C. Lum
2004-09-16
The purpose of this model report is to document the Rock Properties Model version 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties model provides mean matrix and lithophysae porosity, and the cross-correlated mean bulk density as direct input to the "Saturated Zone Flow and Transport Model Abstraction", MDL-NBS-HS-000021, REV 02 (BSC 2004 [DIRS 170042]). The constraints, caveats, and limitations associated with this model are discussed in Sections 6.6 and 8.2. Model validation accomplished by corroboration with data not cited as direct input is discussed in Section 7. The revision of this model report was performed as part of activities being conducted under the "Technical Work Plan for: The Integrated Site Model, Revision 05" (BSC 2004 [DIRS 169635]). The purpose of this revision is to bring the report up to current procedural requirements and address the Regulatory Integration Team evaluation comments. The work plan describes the scope, objectives, tasks, methodology, and procedures for this process.
Wheeler, Tim Allan; Holder, Martin; Winner, Hermann; Kochenderfer, Mykel
2017-01-01
Accurate simulation and validation of advanced driver assistance systems requires accurate sensor models. Modeling automotive radar is complicated by effects such as multipath reflections, interference, reflective surfaces, discrete cells, and attenuation. Detailed radar simulations based on physical principles exist but are computationally intractable for realistic automotive scenes. This paper describes a methodology for the construction of stochastic automotive radar models based on deep l...
Richard Haynes; Darius Adams; Peter Ince; John Mills; Ralph Alig
2006-01-01
The United States has a century of experience with the development of models that describe markets for forest products and trends in resource conditions. In the last four decades, increasing rigor in policy debates has stimulated the development of models to support policy analysis. Increasingly, research has evolved (often relying on computer-based models) to increase...
Thornton, Bradley D.; Smalley, Robert A.
2008-01-01
Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…
Fitzsimmons, Charles P.
1986-01-01
Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)
Andreasen, Martin Møller; Meldrum, Andrew
This paper studies whether dynamic term structure models for US nominal bond yields should enforce the zero lower bound by a quadratic policy rate or a shadow rate specification. We address the question by estimating quadratic term structure models (QTSMs) and shadow rate models with at most four...
2011-01-01
Engineering of products and processes is increasingly “model-centric”. Models in their multitudinous forms are ubiquitous, being heavily used for a range of decision making activities across all life cycle phases. This chapter gives an overview of what is a model, the principal activities in the ...
Rossi, Paolo; Tan, Chung I
1995-01-01
Principal chiral models on a (d-1)-dimensional simplex are introduced and studied analytically in the large N limit. The d = 0, 2, 4 and ∞ models are explicitly solved. Relationships with standard lattice models and with few-matrix systems in the double scaling limit are discussed.
Modeling agriculture in the Community Land Model
B. Drewniak
2013-04-01
Full Text Available The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon–nitrogen version of the Community Land Model (CLM), to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements for soybean, but not as well for maize. CLM-Crop yields were comparable with observations in countries such as the United States, Argentina, and China, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation, in agreement with other modeling studies. In case studies with the new crop model looking at impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model – simulating agriculture in a realistic way, complete with
Meister, Jeffrey P.
1987-01-01
The Mechanics of Materials Model (MOMM) is a three-dimensional inelastic structural analysis code for use as an early design stage tool for hot section components. MOMM is a stiffness method finite element code that uses a network of beams to characterize component behavior. The MOMM contains three material models to account for inelastic material behavior. These include the simplified material model, which assumes a bilinear stress-strain response; the state-of-the-art model, which utilizes the classical elastic-plastic-creep strain decomposition; and Walker's viscoplastic model, which accounts for the interaction between creep and plasticity that occurs under cyclic loading conditions.
Loennroth, J.S.; Kiviniemi, T. [Association EURATOM-Tekes, Helsinki University of Technology, P. O. Box 4100, 02015 TKK (Finland); Bateman, G.; Kritz, A. [Lehigh University, Bethlehem, PA (United States); Becoulet, M.; Figarella, C.; Garbet, X.; Huysmans, G. [Association Euratom-CEA, CEA Cadarache (France); Beyer, P. [University of Marseille (France); Corrigan, G.; Fundamenski, W. [UKAEA Fusion Association, Culham Science Centre (United Kingdom); Garcia, O.E.; Naulin, V. [Association Euratom-Risoe National Laboratory, Roskilde (Denmark); Janeschitz, G. [Forschungszentrum Karlsruhe (Germany); Johnson, T. [Association Euratom-VR, Royal Institute of Technology, Stockholm (Sweden); Kuhn, S. [Association Euratom-OeAW, University of Innsbruck (Austria); Loarte, A. [EFDA Close Support Unit, Garching (Germany); Nave, F. [Association Euratom-IST, Centro de Fusao Nuclear, Lisbon (Portugal); Onjun, T. [Sirindhorn International Institute of Technology (Thailand); Pacher, G.W. [Hydro Quebec (Canada); Pacher, H.D.; Pankin, A.; Parail, V.; Pitts, R.; Saibene, G.; Snyder, P.; Spence, J.; Tskhakaya, D.; Wilson, H.
2006-09-15
This paper presents a short overview of current trends and progress in integrated ELM modelling. First, the concept of integrated ELM modelling is introduced, various interpretations of it are given and the need for it is discussed. Then follows an overview of different techniques and methods used in integrated ELM modelling, presented roughly according to the physics approach in use and in order of increasing complexity. The paper concludes with a short discussion of open issues and future modelling requirements within the field of integrated ELM modelling. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Ritchie, L.T.; Alpert, D.J.; Burke, R.P.; Johnson, J.D.; Ostmeyer, R.M.; Aldrich, D.C.; Blond, R.M.
1984-03-01
The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.
Knudsen, Torben
2011-01-01
The purpose of this deliverable 2.5 is to use fresh experimental data for validation and selection of a flow model to be used for control design in WP3-4. Initially the idea was to investigate the models developed in WP2. However, in the project it was agreed to include and focus on an additive model. This model turns out not to be useful for prediction of the flow. Moreover, standard Box-Jenkins model structures and multiple-output autoregressive models prove to be superior, as they can give useful predictions of the flow.
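The autoregressive structures mentioned can be illustrated with a minimal single-output AR(p) least-squares fit and one-step predictor. This is a dependency-free sketch under simplifying assumptions (no exogenous inputs, no noise model), not the deliverable's actual identification procedure:

```python
def fit_ar(y, p=2):
    """Ordinary least squares fit of an AR(p) model y_t = sum_i a_i * y_{t-i} + e_t.
    A minimal stand-in for the autoregressive model structures mentioned."""
    n = len(y)
    # regression matrix: row t holds [y_{t-1}, ..., y_{t-p}]
    X = [[y[t - i - 1] for i in range(p)] for t in range(p, n)]
    b = y[p:]
    # normal equations A a = c, built by hand to stay dependency-free
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(p)] for i in range(p)]
    c = [sum(X[k][i] * b[k] for k in range(len(X))) for i in range(p)]
    # Gaussian elimination with back-substitution on the p x p system
    for i in range(p):
        piv = A[i][i]
        for j in range(i + 1, p):
            f = A[j][i] / piv
            for k in range(i, p):
                A[j][k] -= f * A[i][k]
            c[j] -= f * c[i]
    a = [0.0] * p
    for i in range(p - 1, -1, -1):
        a[i] = (c[i] - sum(A[i][k] * a[k] for k in range(i + 1, p))) / A[i][i]
    return a

def predict_next(y, a):
    """One-step-ahead prediction from the fitted coefficients."""
    return sum(a[i] * y[-i - 1] for i in range(len(a)))
```

For a signal generated by an exact AR(2) recursion, the fit recovers the coefficients and the predictor reproduces the next sample.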
M. McGraw
2000-04-13
The UZ Colloid Transport model development plan states that the objective of this Analysis/Model Report (AMR) is to document the development of a model for simulating unsaturated colloid transport. This objective includes the following: (1) use a process-level model to evaluate the potential mechanisms for colloid transport at Yucca Mountain; (2) provide ranges of parameters for significant colloid transport processes to Performance Assessment (PA) for the unsaturated zone (UZ); and (3) provide a basis for development of an abstracted model for use in PA calculations.
Auxiliary Deep Generative Models
Maaløe, Lars; Sønderby, Casper Kaae; Sønderby, Søren Kaae;
2016-01-01
Deep generative models parameterized by neural networks have recently achieved state-of-the-art performance in unsupervised and semi-supervised learning. We extend deep generative models with auxiliary variables, improving the variational approximation. The auxiliary variables leave the generative model unchanged but make the variational distribution more expressive. Inspired by the structure of the auxiliary variable, we also propose a model with two stochastic layers and skip connections. Our findings suggest that more expressive and properly specified deep generative models converge...
Yum, Soo-Young; Yoon, Ki-Young; Lee, Choong-Il; Lee, Byeong-Chun
2016-01-01
Animal models, particularly pigs, have come to play an important role in translational biomedical research. Many pig models with genetic modifications have been produced via somatic cell nuclear transfer (SCNT). However, because most transgenic pigs to date have been produced by random integration, the need has been raised for more precise gene-mutated models using recombinase-based conditional gene expression, as in mice. Currently, advanced genome-editing technologies enable us to generate specific gene-deleted and -inserted pig models. In the future, pig models developed with gene-editing technologies could be a valuable resource for biomedical research. PMID:27030199
Long, John
2014-01-01
Process Modeling Style focuses on aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work products...
Modeling Epidemic Network Failures
Ruepp, Sarah Renée; Fagertun, Anna Manolova
2013-01-01
This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model's behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms and can, due to its flexibility and versatility, be used to evaluate multiple epidemic scenarios in various network types.
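A toy version of the SID propagation can be sketched as a synchronous process on a graph. The state names follow the abstract, but the transition rules and rates here are illustrative assumptions, not the paper's calibrated OPNET model:

```python
import random

def simulate_sid(adj, seeds, beta=0.3, delta=0.1, steps=50, rng=None):
    """Toy Susceptible-Infected-Disabled (SID) spread on a network.
    adj: {node: [neighbours]}. A susceptible node with at least one infected
    neighbour becomes infected with probability beta per step (a single
    exposure draw regardless of neighbour count, for simplicity); an
    infected node becomes disabled with probability delta per step."""
    rng = rng or random.Random(0)
    state = {v: 'S' for v in adj}
    for v in seeds:
        state[v] = 'I'
    for _ in range(steps):
        nxt = dict(state)  # synchronous update: all nodes see the old state
        for v, st in state.items():
            if st == 'S' and any(state[u] == 'I' for u in adj[v]) and rng.random() < beta:
                nxt[v] = 'I'
            elif st == 'I' and rng.random() < delta:
                nxt[v] = 'D'
        state = nxt
    return state
```

With delta = 0 an infection on a connected graph eventually reaches every node; with delta > 0 the disabled state absorbs the epidemic, which is what makes recovery mechanisms relevant.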
Alves, Daniele S. M.; Galloway, Jamison; McCullough, Matthew; Weiner, Neal
2016-04-01
Models with Dirac gauginos are appealing scenarios for physics beyond the Standard Model. They have smaller radiative corrections to scalar soft masses, a suppression of certain supersymmetry (SUSY) production processes at the LHC, and ameliorated flavor constraints. Unfortunately, they are generically plagued by tachyons charged under the Standard Model, and attempts to eliminate such states typically spoil the positive features. The recently proposed "Goldstone gaugino" mechanism provides a simple realization of Dirac gauginos that is automatically free of dangerous tachyonic states. We provide details on this mechanism and explore models for its origin. In particular, we find SUSY QCD models that realize this idea simply and discuss scenarios for unification.
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Reconstruction of inflation models
Myrzakulov, Ratbay; Sebastiani, Lorenzo [Eurasian National University, Department of General and Theoretical Physics and Eurasian Center for Theoretical Physics, Astana (Kazakhstan); Zerbini, Sergio [Universita di Trento, Dipartimento di Fisica, Trento (Italy); TIFPA, Istituto Nazionale di Fisica Nucleare, Trento (Italy)
2015-05-15
In this paper, we reconstruct viable inflationary models by starting from the spectral index and the tensor-to-scalar ratio from Planck observations. We analyze three different kinds of models: scalar field theories, fluid cosmology, and f(R)-modified gravity. We recover the well-known R^2 inflation in the Jordan-frame and Einstein-frame representations, the massive scalar inflaton models and two models of inhomogeneous fluid. A model of R^2 correction to Einstein's gravity plus a "cosmological constant" with an exact solution for early-time acceleration is reconstructed. (orig.)
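The R^2 (Starobinsky) model mentioned above has well-known leading-order slow-roll predictions, n_s ≈ 1 - 2/N and r ≈ 12/N^2 for N e-folds before the end of inflation, which is why it sits comfortably inside the Planck constraints. A one-line check (an illustrative helper, not code from the paper):

```python
def r2_inflation_predictions(N):
    """Leading-order slow-roll predictions of R^2 (Starobinsky) inflation:
    spectral index n_s ~ 1 - 2/N and tensor-to-scalar ratio r ~ 12/N^2."""
    return 1.0 - 2.0 / N, 12.0 / N**2
```

For N around 50-60, this gives n_s near 0.96-0.97 and r well below 0.01, in the region favoured by Planck.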
Mathematical modelling techniques
Aris, Rutherford
1995-01-01
""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode
Controlling Modelling Artifacts
Smith, Michael James Andrew; Nielson, Flemming; Nielson, Hanne Riis
2011-01-01
Detailed models of systems often contain modelling artifacts that were inadvertently introduced. In this paper, we propose a novel methodology to reason about modelling artifacts, given a detailed model and a high-level (more abstract) model of the same system. By a series of automated abstraction steps, we lift the detailed model to the same state space as the high-level model, so that they can be directly compared. There are two key ideas in our approach: a temporal abstraction, where we only look at the state of the system at certain observable points in time, and a spatial abstraction, where we project onto a smaller state space that summarises...
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and to generate an input/output report covering all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.
Fischer, Arthur E.
1996-01-01
In this paper a theory of models of the universe is proposed. We refer to such models as cosmological models, where a cosmological model is defined as an Einstein-inextendible Einstein spacetime. A cosmological model is absolute if it is a Lorentz-inextendible Einstein spacetime, predictive if it is globally hyperbolic, and non-predictive if it is nonglobally-hyperbolic. We discuss several features of these models in the study of cosmology. As an example, any compact Einstein spacetime is always a non-predictive absolute cosmological model, whereas a noncompact complete Einstein spacetime is an absolute cosmological model which may be either predictive or non-predictive. We discuss the important role played by maximal Einstein spacetimes. In particular, we examine the possible proper Lorentz-extensions of such spacetimes, and show that a spatially compact maximal Einstein spacetime is exclusively either a predictive cosmological model or a proper sub-spacetime of a non-predictive cosmological model. Provided that the Strong Cosmic Censorship conjecture is true, a generic spatially compact maximal Einstein spacetime must be a predictive cosmological model. It is conjectured that the Strong Cosmic Censorship conjecture is not true, and, converting a vice to a virtue, it is argued that the failure of the Strong Cosmic Censorship conjecture would point to what may be general relativity's greatest prediction of all, namely, that general relativity predicts that general relativity cannot predict the entire history of the universe.
Modelling structured data with Probabilistic Graphical Models
Forbes, F.
2016-05-01
Most clustering and classification methods are based on the assumption that the objects to be clustered are independent. However, in more and more modern applications, data are structured in a way that makes this assumption unrealistic and potentially misleading. A typical example that can be viewed as a clustering task is image segmentation, where the objects are the pixels on a regular grid and depend on neighbouring pixels on this grid. Also, when data are geographically located, it is of interest to cluster data with an underlying dependence structure accounting for some spatial localisation. These spatial interactions can be naturally encoded via a graph, which need not be as regular as a grid. Data sets can then be modelled via Markov random fields and mixture models (e.g. the so-called MRF and hidden MRF). More generally, probabilistic graphical models are tools that can be used to represent and manipulate data in a structured way while modeling uncertainty. This chapter introduces the basic concepts. The two main classes of probabilistic graphical models are considered: Bayesian networks and Markov networks. The key concept of conditional independence and its link to Markov properties is presented. The main problems that can be solved with such tools are described. Some illustrations are given, associated with some practical work.
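As a concrete instance of the image-segmentation example, a single iterated-conditional-modes (ICM) style relabelling pass on a 4-neighbour grid MRF might look as follows. The energy terms, the Potts penalty beta and the parameter values are illustrative assumptions, not the chapter's notation:

```python
def icm_denoise(obs, n_labels=2, beta=1.5, iters=5):
    """Iterated Conditional Modes on a 4-neighbour grid MRF: each pixel takes
    the label minimizing a data term (disagreement with the observed label)
    plus a Potts smoothness term (beta per disagreeing neighbour).
    A minimal sketch of MRF-based segmentation, not a full hidden-MRF/EM fit."""
    h, w = len(obs), len(obs[0])
    lab = [row[:] for row in obs]  # initialize with the observed labels
    for _ in range(iters):
        # in-place sweep: later pixels see already-updated neighbours
        for i in range(h):
            for j in range(w):
                best, best_e = lab[i][j], float('inf')
                for k in range(n_labels):
                    e = 0.0 if k == obs[i][j] else 1.0  # data term
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and lab[ni][nj] != k:
                            e += beta  # Potts penalty
                    if e < best_e:
                        best, best_e = k, e
                lab[i][j] = best
    return lab
```

On a noisy label image, the smoothness term removes isolated speckles while large coherent regions survive, which is the essence of using neighbour dependence rather than treating pixels independently.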
Modeling of ultrasound transducers
Bæk, David
This Ph.D. dissertation addresses ultrasound transducer modeling for medical ultrasound imaging and combines the modeling with the ultrasound simulation program Field II. The project firstly presents two new models for spatial impulse responses (SIRs) to a rectangular elevation focused transducer... deviation of 5.5 % to 11.0 %. Finite element modeling of piezoceramics in combination with Field II is addressed and reveals the influence of restricting the modeling of transducers to the one-dimensional case. An investigation on modeling capacitive micromachined ultrasonic transducers (CMUTs) with Field II is addressed. It is shown how a single circular CMUT cell can be well approximated with a simple square transducer encapsulating the cell, and how this influences the modeling of full array elements. An optimal cell discretization with Field II's mathematical elements is addressed as well...
Gudiksen, Sune Klok; Poulsen, Søren Bolvig; Buur, Jacob
2014-01-01
Well-established companies are currently struggling to secure profits due to the pressure from new players' business models as they take advantage of communication technology and new business-model configurations. Because of this, the business model research field currently flourishes; however, the modelling approaches proposed still rely on linear, rational conceptions and causal reasoning. Through six business cases we argue that participatory design has a role to play and, indeed, can lead the way into another approach to business modelling, which we call business model making. The paper illustrates how the application of participatory business model design toolsets can open up discussions on alternative scenarios through improvisation, mock-up making and design game playing, before qualitative judgment on the most promising scenario is carried out.
Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann
2008-09-01
In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-13
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
Alexander Fedorov
2011-03-01
The author proposes that media education models can be divided into the following groups:
- educational-information models (the study of the theory, history, language of media culture, etc.), based on the cultural, aesthetic, semiotic and socio-cultural theories of media education;
- educational-ethical models (the study of moral, religious, philosophical problems), relying on the ethic, religious, ideological, ecological and protectionist theories of media education;
- pragmatic models (practical media technology training), based on the uses-and-gratifications and 'practical' theories of media education;
- aesthetical models (aimed above all at the development of artistic taste and enriching the skills of analysis of the best examples of media culture), relying on the aesthetical (art) and cultural studies theories;
- socio-cultural models (socio-cultural development of a creative personality as to perception, imagination, visual memory, interpretation, analysis and autonomous critical thinking), relying on the cultural studies, semiotic and ethic models of media education.
Hydrological land surface modelling
Ridler, Marc-Etienne Francois
Recent advances in integrated hydrological and soil-vegetation-atmosphere transfer (SVAT) modelling have led to improved water resource management practices, greater crop production, and better flood forecasting systems. However, uncertainty is inherent in all numerical models, ultimately leading... and disaster management. The objective of this study is to develop and investigate methods to reduce hydrological model uncertainty by using supplementary data sources. The data is used either for model calibration or for model updating using data assimilation. Satellite estimates of soil moisture and surface... hydrological and tested by assimilating synthetic hydraulic head observations in a catchment in Denmark. Assimilation led to a substantial reduction of model prediction error and better model forecasts. Also, a new assimilation scheme is developed to downscale and bias-correct coarse satellite-derived soil...
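The core data-assimilation idea, blending a model forecast with an observation according to their respective uncertainties, can be sketched with a scalar Kalman analysis step. The study's actual scheme is more elaborate (e.g. ensemble-based, multivariate); this is a minimal illustration:

```python
def kalman_update(x, P, y, R, H=1.0):
    """Scalar Kalman analysis step: blend a model forecast x (error variance P)
    with an observation y (error variance R) related to the state by y ~ H*x.
    Returns the analysis state and its reduced variance."""
    K = P * H / (H * P * H + R)   # Kalman gain: weight given to the observation
    x_a = x + K * (y - H * x)     # analysis state, pulled toward the observation
    P_a = (1.0 - K * H) * P       # analysis variance, always <= forecast variance
    return x_a, P_a
```

With equal forecast and observation variances the analysis is the simple average, and the variance halves, which is exactly the "reduction of model prediction error" effect described above.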
Tamke, Martin
2015-01-01
Appearing almost alive, a novel set of computational design models can become an active counterpart for architects in the design process. The ability to loop, sense and query, and the integration of near real-time simulation, provide these models with a depth and agility that allows for instant and informed feedback. Introducing the term "aware models", the paper investigates how computational models become an enabler for a better informed architectural design practice, through the embedding of knowledge about constraints, behaviour and processes of formation and making into generative design models. The inspection of several computational design projects in architectural research highlights three different types of awareness a model can possess and devises strategies to establish and finally design with aware models. This design practice is collaborative in nature and characterized by a bidirectional flow...
Gonzalez-Lopez, Jesus E Garcia Veronica A
2010-01-01
In this work we introduce a new and richer class of finite order Markov chain models and address the following model selection problem: find the Markov model with the minimal set of parameters (minimal Markov model) which is necessary to represent a source as a Markov chain of finite order. Let $M$ be the order of the chain and $A$ the finite alphabet. To determine the minimal Markov model, we define an equivalence relation on the state space $A^{M}$, such that all the sequences of size $M$ with the same transition probabilities are put in the same category. In this way we have one set of $(|A|-1)$ transition probabilities for each category, obtaining a model with a minimal number of parameters. We show that the model can be selected consistently using the Bayesian information criterion.
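The Bayesian information criterion the authors rely on can be sketched for the simpler task of choosing the chain order: fit full order-m models and pick the one minimizing BIC. This illustrates the criterion only; the paper's minimal Markov model additionally merges equivalent states, which is not shown here:

```python
from collections import Counter
from math import log

def bic_markov_order(seq, alphabet, max_order=3):
    """Select the Markov chain order minimizing BIC = -2*loglik + k*log(n).
    Illustrative sketch: fits full order-m models, without the paper's
    state-space partitioning."""
    n = len(seq)
    best = None
    for m in range(max_order + 1):
        ctx = Counter()    # counts of each length-m context
        trans = Counter()  # counts of (context, next symbol) pairs
        for i in range(m, n):
            c = tuple(seq[i - m:i])
            ctx[c] += 1
            trans[(c, seq[i])] += 1
        # maximized log-likelihood with empirical transition probabilities
        loglik = sum(k * log(k / ctx[c]) for (c, s), k in trans.items())
        n_params = (len(alphabet) ** m) * (len(alphabet) - 1)
        bic = -2.0 * loglik + n_params * log(n - m)
        if best is None or bic < best[0]:
            best = (bic, m)
    return best[1]
```

The log(n) penalty grows with the number of free parameters, so higher orders are chosen only when they genuinely improve the fit, which is what makes the selection consistent.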
Multiscale Modeling of Recrystallization
Godfrey, A.W.; Holm, E.A.; Hughes, D.A.; Lesar, R.; Miodownik, M.A.
1998-12-07
We propose a multi-length-scale approach to modeling recrystallization which links a dislocation model, a cell growth model and a macroscopic model. Although this methodology and linking framework will be applied to recrystallization, it is also applicable to other types of phase transformations in bulk and layered materials. Critical processes such as the dislocation structure evolution, nucleation, the evolution of crystal orientations into a preferred texture, and grain size evolution all operate at different length scales. In this paper we focus on incorporating experimental measurements of dislocation substructures, misorientation measurements of dislocation boundaries, and dislocation simulations into a mesoscopic model of cell growth. In particular, we show how feeding information from the dislocation model into the cell growth model can create realistic initial microstructure.
CREDIT RISK. DETERMINATION MODELS
MIHAELA GRUIESCU
2012-01-01
The internationalization of financial and banking flows and the rapid development of markets have changed the financial sector, forcing it to respond with force and imagination. Under these conditions, the concerns of financial and banking institutions and of rating institutions are increasingly turning to finding the best solutions to hedge risks and maximize profits. This paper aims to present a number of advantages, but also the limits, of the Merton model, the first structural model for modeling credit risk. Also discussed are some extensions of the model, some empirical research and its known performance, and other approaches such as state-dependent models (SDM), which, together with liquidation process models (LPM), are two recent efforts among structural models that capture different real-life phenomena.
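The Merton model values a firm's equity as a European call on its assets, so the risk-neutral default probability follows directly from the Black-Scholes quantities. A standard sketch (the parameter names are the usual textbook ones, not the paper's):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton(V, F, r, sigma, T):
    """Merton (1974) structural model: equity = call on firm assets.
    V: firm asset value, F: face value of debt due at T, r: risk-free rate,
    sigma: asset volatility. Returns (equity value, risk-neutral default
    probability N(-d2), i.e. the chance that assets end below the debt)."""
    d1 = (log(V / F) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    equity = V * norm_cdf(d1) - F * exp(-r * T) * norm_cdf(d2)
    return equity, norm_cdf(-d2)
```

Raising leverage (a larger F for the same V) lowers the equity value and raises the default probability, which is the basic comparative static the structural literature builds on.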
Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling.
Yesson, C; Culham, A
2006-10-01
We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates; others show wide-ranging generalism. This demonstrates that phyloclimatic modeling could be repeated for other plant groups and is fundamental to the understanding of...
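The confidence intervals on node ages derived from Bayesian replicates can be illustrated with a simple percentile interval over the replicate ages of a node; the paper's exact procedure may differ:

```python
def node_age_ci(ages, level=0.95):
    """Percentile credible interval for a node age, given that node's age in
    each replicate tree from a Bayesian posterior sample. Illustrative sketch."""
    s = sorted(ages)
    lo = int((1.0 - level) / 2.0 * (len(s) - 1))
    hi = int((1.0 + level) / 2.0 * (len(s) - 1))
    return s[lo], s[hi]
```

The interval then delimits which palaeo-climate time slices an ancestral bioclimatic model should be projected into.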
Modeling agriculture in the Community Land Model
B. Drewniak
2012-12-01
The potential impact of climate change on agriculture is uncertain. In addition, agriculture could influence above- and below-ground carbon storage. Development of models that represent agriculture is necessary to address these impacts. We have developed an approach to integrate agriculture representations for three crop types – maize, soybean, and spring wheat – into the coupled carbon-nitrogen version of the Community Land Model (CLM) to help address these questions. Here we present the new model, CLM-Crop, validated against observations from two AmeriFlux sites in the United States, planted with maize and soybean. Seasonal carbon fluxes compared well with field measurements. CLM-Crop yields were comparable with observations in some regions, although the generality of the crop model and its lack of technology and irrigation made direct comparison difficult. CLM-Crop was compared against the standard CLM3.5, which simulates crops as grass. The comparison showed improvement in gross primary productivity in regions where crops are the dominant vegetation cover. Crop yields and productivity were negatively correlated with temperature and positively correlated with precipitation. In case studies with the new crop model examining the impacts of residue management and planting date on crop yield, we found that increased residue returned to the litter pool increased crop yield, while reduced residue returns resulted in yield decreases. Using climate controls to signal planting date caused different responses in different crops. Maize and soybean had opposite reactions: when a low temperature threshold resulted in early planting, maize responded with a loss of yield, but soybean yields increased. Our improvements in CLM demonstrate a new capability in the model – simulating agriculture in a realistic way, complete with fertilizer and residue management practices. Results are encouraging, with improved representation of human influences on the land...
Modeling local dependence in longitudinal IRT models
Larsen, Maja Olsbjerg; Christensen, Karl Bang
2015-01-01
Measuring change in a latent variable over time is often done using the same instrument at several time points. This can lead to dependence between responses across time points for the same person, yielding within-person correlations that are stronger than what can be attributed to the latent variable. Ignoring this can lead to biased estimates of changes in the latent variable. In this paper we propose a method for modeling local dependence in the longitudinal 2PL model. It is based on the concept of item splitting, and makes it possible to correctly estimate change in the latent variable.
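For reference, the 2PL item response function underlying the longitudinal model is the logistic curve below (standard notation; the item-splitting machinery itself is not shown):

```python
from math import exp

def p_correct(theta, a, b):
    """2PL item response function: probability of a positive response for a
    person with latent ability theta on an item with discrimination a and
    difficulty b. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))
```

Local dependence means that, for the same person, responses to the same item at two time points are more similar than these marginal probabilities predict; item splitting handles this by treating the repeated administrations as distinct but linked items.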
Hammerand, Daniel Carl; Scherzinger, William Mark
2007-09-01
The Library of Advanced Materials for Engineering (LAME) provides a common repository for constitutive models that can be used in computational solid mechanics codes. A number of models, including both hypoelastic (rate) and hyperelastic (total strain) constitutive forms, have been implemented in LAME. The structure and testing of LAME are described in Scherzinger and Hammerand ([3] and [4]). The purpose of the present report is to describe the material models which have already been implemented into LAME. The descriptions are designed to give useful information to both analysts and code developers. Thus far, 33 non-ITAR/non-CRADA protected material models have been incorporated. These include everything from simple isotropic linear elastic models to a number of elastic-plastic models for metals to models for honeycomb, foams, potting epoxies and rubber. A complete description of each model is outside the scope of the current report. Rather, the aim here is to delineate the properties, state variables, functions, and methods for each model. However, a brief description of some of the constitutive details is provided for a number of the material models. Where appropriate, the SAND reports available for each model have been cited. Many models have state variable aliases for some or all of their state variables. These alias names can be used for outputting desired quantities. The state variable aliases available for results output have been listed in this report. However, not all models use these aliases. For those models, no state variable names are listed. Nevertheless, the number of state variables employed by each model is always given. Currently, there are four possible functions for a material model. This report lists which of these four methods are employed in each material model. As far as analysts are concerned, this information is included only for awareness purposes. Analysts can take confidence in the fact that each model has been properly implemented.
Dan Alexandru Anghel
2012-01-01
Full Text Available In semiconductor laser modeling, a good mathematical model gives results close to reality. Three methods of modeling solutions of the rate equations are presented and analyzed. A method based on the rate equations, modeled in Simulink, is presented to describe quantum well lasers. For different input signal types, such as a step function, sawtooth and sine, the equations give a good response. A circuit model derived from one of the rate-equation models is presented and simulated in SPICE; the results show good modeling behavior. Numerical simulation in MathCad gives satisfactory results for the study of transient and dynamic operation at small levels of the injection current. The numerical results obtained show the specific limits of each model, in agreement with the theoretical analysis. Based on these results, software can be built that integrates circuit simulation and other modeling methods for quantum well lasers, yielding a tool that models and analyzes these devices from all points of view.
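The rate equations referred to above can also be integrated directly in ordinary code. The sketch below uses the standard single-mode laser rate equations (carrier density N, photon density S) with a forward-Euler step to compute the step response; all parameter values are typical textbook numbers chosen for illustration and are not taken from the paper.

```python
# Illustrative quantum well laser parameters (assumptions, not from the paper)
q     = 1.602e-19   # elementary charge (C)
V     = 1e-16       # active region volume (m^3)
tau_n = 2e-9        # carrier lifetime (s)
tau_p = 2e-12       # photon lifetime (s)
g0    = 3e-12       # differential gain coefficient (m^3/s)
N0    = 1e24        # transparency carrier density (m^-3)
Gamma = 0.3         # optical confinement factor
beta  = 1e-4        # spontaneous emission coupling factor

def step_response(I, t_end=5e-9, dt=2e-14):
    """Integrate the single-mode rate equations for a current step I (A).

    dN/dt = I/(qV) - N/tau_n - g0 (N - N0) S
    dS/dt = Gamma g0 (N - N0) S - S/tau_p + beta Gamma N/tau_n
    """
    N, S = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        gain = g0 * (N - N0) * S
        dN = I / (q * V) - N / tau_n - gain
        dS = Gamma * gain - S / tau_p + beta * Gamma * N / tau_n
        N += dt * dN
        S += dt * dS
    return N, S

N_ss, S_ss = step_response(I=30e-3)  # 30 mA step, above the ~12 mA threshold
print(f"steady state: N = {N_ss:.3e} m^-3, S = {S_ss:.3e} m^-3")
```

After the turn-on delay and damped relaxation oscillations, the carrier density clamps just above threshold and the photon density settles to a value roughly proportional to the current above threshold, which is the qualitative behavior the Simulink, SPICE and MathCad models in the paper all reproduce.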
Geochemical modeling: a review
Jenne, E.A.
1981-06-01
Two general families of geochemical models presently exist. The ion speciation-solubility group of geochemical models contains submodels that first calculate a distribution of aqueous species and then test the hypothesis that the water is near equilibrium with particular solid phases. These models may or may not calculate the adsorption of dissolved constituents and simulate the dissolution and precipitation (mass transfer) of solid phases. The other family of geochemical models, the reaction path models, simulates the stepwise precipitation of solid phases as a result of reacting specified amounts of water and rock. Reaction path models first perform an aqueous speciation of the dissolved constituents of the water and test solubility hypotheses, then perform the reaction path modeling. Certain improvements in the present versions of these models would enhance their value and usefulness in applications such as nuclear-waste isolation. Mass-transfer calculations of limited extent are certainly within the capabilities of state-of-the-art models. However, the reaction path models require an expansion of their thermodynamic databases and systematic validation before they are generally accepted.
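The solubility test these speciation models perform amounts to comparing an ion activity product (IAP) against an equilibrium solubility product. A minimal sketch of that comparison for calcite is shown below; the ion activities and the log K value are illustrative assumptions, and real codes would first compute activities through full speciation with activity-coefficient corrections.

```python
import math

def saturation_index(iap, log_ksp):
    """SI = log10(IAP / Ksp); SI > 0 oversaturated, SI < 0 undersaturated."""
    return math.log10(iap) - log_ksp

# Illustrative calcite check: CaCO3 <-> Ca2+ + CO3^2-, log Ksp ~ -8.48 at 25 C
a_ca, a_co3 = 1e-3, 1e-5          # assumed ion activities (mol/L)
si = saturation_index(a_ca * a_co3, -8.48)
print(f"calcite SI = {si:.2f}")   # positive: water is oversaturated with calcite
```

A positive SI suggests the mineral could precipitate and a negative SI that it could dissolve; reaction path models then use such tests repeatedly as they titrate rock into the water.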
Modelling Farm Animal Welfare.
Collins, Lisa M; Part, Chérie E
2013-05-16
The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare are comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.
Chérie E. Part
2013-05-01
Full Text Available The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare are comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested.