Zhou, Cheng; Penner, Joyce E.
2017-01-01
Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 27 May 2011 at the southern Great Plains (SGP) measurement site established by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program using a single-column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near the cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.
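The competing LWP responses described above hinge on how strongly autoconversion slows as droplet number rises. A minimal sketch of one widely used warm-rain autoconversion parameterization (Khairoutdinov and Kogan, 2000); the abstract does not state which scheme the CAM/CRM runs used, so this is illustrative only:

```python
def autoconversion_kk2000(qc, nc):
    """Khairoutdinov-Kogan (2000) warm-rain autoconversion rate (kg/kg/s).

    qc: cloud liquid water mixing ratio (kg/kg)
    nc: cloud droplet number concentration (cm^-3)
    """
    return 1350.0 * qc ** 2.47 * nc ** -1.79

# Doubling droplet number at fixed liquid water slows rain formation
clean = autoconversion_kk2000(5e-4, 50.0)      # clean conditions
polluted = autoconversion_kk2000(5e-4, 100.0)  # aerosol-enhanced droplet number
ratio = polluted / clean                       # 2**-1.79, about 0.29
```

With liquid water held fixed, doubling the droplet number cuts the autoconversion rate by a factor of roughly 0.29, which is the suppression term the abstract identifies as driving the LWP increase in CAM.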
Indian Academy of Sciences (India)
Extensive field studies revealed over-estimates of bamboo stocks by a factor of ten. Forest compartments that had been completely clear felled to set up WCPM still showed large stocks because ...
Do general practitioners overestimate the health of their patients with lower education?
Kelly-Irving, Michelle; Delpierre, Cyrille; Schieber, Anne-Cécile; Lepage, Benoit; Rolland, Christine; Afrité, Anissa; Pascal, Jean; Cases, Chantal; Lombrail, Pierre; Lang, Thierry
2011-11-01
This study sought to ascertain whether disagreement between patients and physicians on the patients' health status varies according to patients' education level. INTERMEDE is a cross-sectional multicentre study. Data were collected from both patients and doctors via pre- and post-consultation questionnaires at the GP's office over a two-week period in October 2007 in 3 regions of France. The sample consists of 585 eligible patients (61% women) and 27 GPs. A significant association between GP-patient agreement on the patient's health status and the patient's education level was observed: 75% of patients with a high education level agreed with their GP, compared to 50% of patients with a low level of education. When patients and GPs disagreed, patients with the lowest education level rated their health as worse than their doctor's evaluation 37% of the time, versus 16% and 14% for those with a medium or high education level, respectively. A multilevel multivariate analysis revealed that patients with low and medium educational levels were at higher risk of having their health overestimated by GPs relative to their self-reported health, even after controlling for confounders. These findings suggest that people with a lower education level who consider themselves to have poor health are less reliably identified as such in the primary care system. This could potentially result in a lack of advice and treatment for these patients and ultimately the maintenance of health inequalities. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
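Rater agreement of the kind reported above is often summarized with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A small self-contained sketch; the 2×2 counts below are invented for illustration, not taken from the INTERMEDE data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.

    Rows: patient's own rating; columns: GP's rating of the same patient.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / n          # raw agreement
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2  # chance
    return (p_obs - p_exp) / (1.0 - p_exp)

# Invented 2x2 counts (ratings: good vs poor health), NOT the study's data
table = [[60, 10],
         [20, 10]]
kappa = cohens_kappa(table)  # raw agreement 0.70; chance-corrected ~0.21
```

Chance correction matters here: 70% raw agreement shrinks to a kappa of about 0.21, which is why agreement studies report kappa rather than percentages alone.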
Soley-Guardia, Mariano; Gutiérrez, Eliécer E; Thomas, Darla M; Ochoa-G, José; Aguilera, Marisol; Anderson, Robert P
2016-03-01
Correlative ecological niche models (ENMs) estimate species niches using occurrence records and environmental data. These tools are valuable to the field of biogeography, where they are commonly used to infer potential connectivity among populations. However, a recent study showed that when locally relevant environmental data are not available, records from patches of suitable habitat protruding into otherwise unsuitable regions (e.g., gallery forests within dry areas) can lead to overestimations of species niches and their potential distributions. Here, we test whether this issue obfuscates detection of an obvious environmental barrier existing in northern Venezuela - that of the hot and xeric lowlands separating the Península de Paraguaná from mainland South America. These conditions most likely promote isolation between mainland and peninsular populations of three rodent lineages occurring in mesic habitat in this region. For each lineage, we calibrated optimally parameterized ENMs using mainland records only, and leveraged existing habitat descriptions to assess whether those assigned low suitability values corresponded to instances where the species was collected within locally mesic conditions amidst otherwise hot dry areas. When this was the case, we built an additional model excluding these records. We projected both models onto the peninsula and assessed whether they differed in their ability to detect the environmental barrier. For the two lineages in which we detected such problematic records, only the models built excluding them detected the barrier, while providing additional insights regarding peninsular populations. Overall, the study reveals how a simple procedure like the one applied here can deal with records problematic for ENMs, leading to better predictions regarding the potential effects of the environment on lineage divergence.
Soley-Guardia, Mariano; Gutiérrez, Eliécer E.; Thomas, Darla M.; Ochoa-G, José; Aguilera, Marisol; Anderson, Robert P.
2016-01-01
Abstract Correlative ecological niche models (ENMs) estimate species niches using occurrence records and environmental data. These tools are valuable to the field of biogeography, where they are commonly used to infer potential connectivity among populations. However, a recent study showed that when locally relevant environmental data are not available, records from patches of suitable habitat protruding into otherwise unsuitable regions (e.g., gallery forests within dry areas) can lead to ov...
Why do Models Overestimate Surface Ozone in the Southeastern United States?
Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.;
2016-01-01
Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx = NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25 deg. x 0.3125 deg. horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30-60%, dependent on the assumption of the contribution by soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8 +/- 13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone decrease from 1.5 km to the surface that GEOS-Chem does not capture.
Why do Models Overestimate Surface Ozone in the Southeastern United States?
Travis, K.; Jacob, D.; Fisher, J. A.; Kim, S.; Marais, E. A.; Zhu, L.; Yu, K.; Miller, C. E.; Yantosca, R.; Payer Sulprizio, M.; Thompson, A. M.; Wennberg, P. O.; Crounse, J.; St Clair, J. M.; Cohen, R. C.; Laughner, J.; Dibb, J. E.; Hall, S. R.; Ullmann, K.; Wolfe, G.; Pollack, I. B.; Peischl, J.; Neuman, J. A.; Zhou, X.
2016-12-01
Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx = NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25°×0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high in the Southeast and nationally by a factor of 2. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Upper tropospheric NO2 from lightning makes a large contribution to the satellite observations that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8±13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone decrease from 1.5 km to the surface that GEOS-Chem does not capture. This may be caused by excessively dry conditions in the model, representing another
Why do Models Overestimate Surface Ozone in the Southeastern United States?
Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.; Thompson, Anne M.; Wennberg, Paul O.; Crounse, John D.; St Clair, Jason M.; Cohen, Ronald C.; Laughner, Joshua L.; Dibb, Jack E.; Hall, Samuel R.; Ullmann, Kirk; Wolfe, Glenn M.; Pollack, Illana B.; Peischl, Jeff; Neuman, Jonathan A.; Zhou, Xianliang
2018-01-01
Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25°×0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60%, dependent on the assumption of the contribution by soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8±13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone decrease from 1.5 km to the surface that GEOS-Chem does not capture.
Why do models overestimate surface ozone in the Southeast United States?
Directory of Open Access Journals (Sweden)
K. R. Travis
2016-11-01
Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25° × 0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60%, dependent on the assumption of the contribution by soil NOx emissions. Upper-tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 6 ± 14 ppb relative to observed surface ozone in the Southeast US. Ozonesondes ...
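The ozone production efficiency these abstracts refer to is typically estimated as the slope of a linear regression of O3 against NOz (the NOx oxidation products). A hedged sketch with synthetic data; the slope of 12 and background of 30 ppb are invented illustrative values, not SEAC4RS numbers:

```python
import numpy as np

# OPE is commonly estimated as the slope of O3 vs NOz in the boundary
# layer. All numbers below are synthetic, for illustration only.
rng = np.random.default_rng(0)
noz = rng.uniform(0.5, 5.0, 200)                 # NOz mixing ratios (ppb)
true_ope, background = 12.0, 30.0                # assumed illustrative values
o3 = background + true_ope * noz + rng.normal(0.0, 2.0, 200)  # O3 (ppb)

# Ordinary least squares: slope estimates OPE, intercept the background O3
slope, intercept = np.polyfit(noz, o3, 1)
```

The fitted slope recovers the assumed production efficiency to within sampling noise; on real data the same regression is applied to co-measured O3 and NOz in the boundary layer.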
Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin
2014-11-01
Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m^-2 day^-1, and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
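The two skill scores used above, RMSE and the Nash-Sutcliffe efficiency E, can be computed directly from paired observed and simulated GPP series. A minimal sketch; the GPP values are invented for illustration:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit; E <= 0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

# Invented daily GPP values (g C m^-2 day^-1), for illustration only
obs = [2.0, 4.5, 6.0, 7.5, 5.0]
sim = [2.5, 4.0, 6.5, 7.0, 5.5]
err = rmse(obs, sim)          # 0.5 here, since every error is +/-0.5
e = nash_sutcliffe(obs, sim)  # close to 1 when errors are small vs variance
```

Unlike RMSE, E is dimensionless and benchmarked against the observed mean, which is why the abstract reports both.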
Czech Academy of Sciences Publication Activity Database
Plavcová, Eva; Kyselý, Jan
2016-01-01
Roč. 46, č. 9 (2016), s. 2805-2820 ISSN 0930-7575 R&D Projects: GA ČR GAP209/10/2265; GA MŠk 7AMB15AR001 EU Projects: European Commission(XE) 505539 - ENSEMBLES Program:FP6 Institutional support: RVO:68378289 Keywords : heat wave * cold spell * atmospheric circulation * persistence * regional climate models * Central Europe Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 4.146, year: 2016 http://link.springer.com/article/10.1007%2Fs00382-015-2733-8
The generalized circular model
Webers, H.M.
1995-01-01
In this paper we present a generalization of the circular model. In this model there are two concentric circular markets, which enables us to study two types of markets simultaneously. There are switching costs involved for moving from one circle to the other circle, which can also be thought of as
Ocean General Circulation Models
Energy Technology Data Exchange (ETDEWEB)
Yoon, Jin-Ho; Ma, Po-Lun
2012-09-30
1. Definition of Subject. The purpose of this text is to provide an introduction to aspects of oceanic general circulation models (OGCMs), an important component of a climate system or Earth system model (ESM). The role of the ocean in ESMs is described in Chapter XX (EDITOR: PLEASE FIND THE COUPLED CLIMATE or EARTH SYSTEM MODELING CHAPTERS). The emerging need for understanding the Earth's climate system and especially projecting its future evolution has encouraged scientists to explore the dynamical, physical, and biogeochemical processes in the ocean. Understanding the role of these processes in the climate system is an interesting and challenging scientific subject. For example, the question of how much extra heat or CO2 generated by anthropogenic activities can be stored in the deep ocean is not only scientifically interesting but also important in projecting the future climate of the Earth. Thus, OGCMs have been developed and applied to investigate the various oceanic processes and their role in the climate system.
Overestimating resource value and its effects on fighting decisions.
Directory of Open Access Journals (Sweden)
Lee Alan Dugatkin
Much work in behavioral ecology has shown that animals fight over resources such as food, and that they make strategic decisions about when to engage in such fights. Here, we examine the evolution of one, heretofore unexamined, component of that strategic decision about whether to fight for a resource. We present the results of a computer simulation that examined the evolution of over- or underestimating the value of a resource (food) as a function of an individual's current hunger level. In our model, animals fought for food when they perceived their current food level to be below the mean for the environment. We considered seven strategies for estimating food value: (1) always underestimate food value, (2) always overestimate food value, (3) never over- or underestimate food value, (4) overestimate food value when hungry, (5) underestimate food value when hungry, (6) overestimate food value when relatively satiated, and (7) underestimate food value when relatively satiated. We first competed all seven strategies against each other when they began at approximately equal frequencies. In such a competition, two strategies, "always overestimate food value" and "overestimate food value when hungry", were very successful. We next competed each of these strategies against the default strategy of "never over- or underestimate", when the default strategy was set at 99% of the population. Again, the strategies of "always overestimate food value" and "overestimate food value when hungry" fared well. Our results suggest that overestimating food value when deciding whether to fight should be favored by natural selection.
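The decision rule described above can be sketched in a few lines. The 1.2 distortion factor, the fight cost, and the reserve and value distributions below are arbitrary choices for this sketch, not parameters from the paper:

```python
import random

random.seed(1)
MEAN_FOOD = 10.0   # mean food level in the environment
COST = 2.0         # energetic cost of a fight (arbitrary for this sketch)

def perceived(value, reserves, strategy):
    """Distort the resource's value according to strategy; 1.2 is arbitrary."""
    hungry = reserves < MEAN_FOOD
    if strategy == "always_over" or (strategy == "over_when_hungry" and hungry):
        return 1.2 * value
    return value  # accurate estimator

def will_fight(reserves, value, strategy):
    # Fight only when below the environmental mean food level, and only if
    # the (perceived) prize outweighs the cost of fighting.
    return reserves < MEAN_FOOD and perceived(value, reserves, strategy) > COST

trials = [(random.uniform(0.0, 20.0), random.uniform(0.0, 5.0))
          for _ in range(10000)]
rates = {s: sum(will_fight(r, v, s) for r, v in trials) / len(trials)
         for s in ("always_over", "over_when_hungry", "accurate")}
```

Because fights are only contemplated when hungry, "overestimate when hungry" coincides with "always overestimate" in this sketch, and both fight at least as often as the accurate estimator, mirroring the two strategies the simulation found successful.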
DEFF Research Database (Denmark)
Borregaard, Michael K.; Matthews, Thomas J.; Whittaker, Robert James
2016-01-01
Aim: Island biogeography focuses on understanding the processes that underlie a set of well-described patterns on islands, but it lacks a unified theoretical framework for integrating these processes. The recently proposed general dynamic model (GDM) of oceanic island biogeography offers a step towards this goal. Here, we present an analysis of causality within the GDM and investigate its potential for the further development of island biogeographical theory. Further, we extend the GDM to include subduction-based island arcs and continental fragment islands. Location: A conceptual analysis ... dynamics of distinct island types are predicted to lead to markedly different evolutionary dynamics. This sets the stage for a more predictive theory incorporating the processes governing temporal dynamics of species diversity on islands.
Glauber model and its generalizations
International Nuclear Information System (INIS)
Bialkowski, G.
The physical aspects of the Glauber model are studied: the potential model, profile function, and Feynman diagram approaches. Different generalizations of the Glauber model are discussed, particularly higher- and lower-energy processes and large angles. (Original in French.)
Generalized instrumental variable models
Andrew Chesher; Adam Rosen
2014-01-01
This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models g...
The risk of intraarticular injections are overestimated
DEFF Research Database (Denmark)
Asmussen, Rikke; Just, Søren Andreas; Jensen Hansen, Inger Marie
The risk of intraarticular injections are overestimated. Annals of the Rheumatic Diseases (the EULAR Journal), June 2014, volume 73, supplement 2, p. 286-87.
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m
Choi, Yoon Hong; Chapman, Ruth; Gay, Nigel; Jit, Mark
2012-05-14
Estimates of human papillomavirus (HPV) vaccine impact in clinical trials and modelling studies rely on DNA tests of cytology or biopsy specimens to determine the HPV type responsible for a cervical lesion. DNA of several oncogenic HPV types may be detectable in a specimen. However, only one type may be responsible for a particular cervical lesion. Misattribution of the causal HPV type for a particular abnormality may give rise to an apparent increase in disease due to non-vaccine HPV types following vaccination ("unmasking"). To investigate the existence and magnitude of unmasking, we analysed data from residual cytology and biopsy specimens in English women aged 20-64 years old using a stochastic type-specific individual-based model of HPV infection, progression and disease. The model parameters were calibrated to data on the prevalence of HPV DNA and cytological lesion of different grades, and used to assign causal HPV types to cervical lesions. The difference between the prevalence of all disease due to non-vaccine HPV types, and disease due to non-vaccine HPV types in the absence of vaccine HPV types, was then estimated. There could be an apparent maximum increase of 3-10% in long-term cervical cancer incidence due to non-vaccine HPV types following vaccination. Unmasking may be an important phenomenon in HPV post-vaccination epidemiology, in the same way that has been observed following pneumococcal conjugate vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.
The general NFP hospital model.
Al-Amin, Mona
2012-01-01
Throughout the past 30 years, there has been a lot of controversy surrounding the proliferation of new forms of health care delivery organizations that challenge and compete with general not-for-profit (NFP) community hospitals. Traditionally, the health care system in the United States has been dominated by general NFP voluntary hospitals. With the number of for-profit general hospitals, physician-owned specialty hospitals, and ambulatory surgical centers increasing, a question arises: "Why is the general NFP community hospital the dominant model?" In order to address this question, this paper reexamines the history of the hospital industry. By understanding how the "general NFP hospital" model emerged and dominated, we attempt to explain the current dominance of general NFP hospitals in the ever-changing hospital industry in the United States.
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
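Generalized linear models such as the Poisson regression covered in this book are conventionally fitted by iteratively reweighted least squares (IRLS). A compact NumPy sketch on simulated count data (this is a generic illustration, not code from the book):

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit a Poisson regression (log link) by iteratively reweighted
    least squares, the standard fitting algorithm for GLMs."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # fitted means under current beta
        z = X @ beta + (y - mu) / mu   # working response
        WX = X * mu[:, None]           # Poisson weights: variance V(mu) = mu
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Simulated counts with log(mu) = 0.5 + 0.3 * x
rng = np.random.default_rng(42)
x = rng.uniform(0.0, 2.0, 500)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x))
beta = poisson_irls(X, y.astype(float))  # recovers roughly (0.5, 0.3)
```

The same loop fits any GLM in the exponential family once the link, working response, and variance function are swapped in, which is the unifying point of the book's treatment.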
Generalized PSF modeling for optimized quantitation in PET imaging
Ashrafinia, Saeed; Mohy-ud-Din, Hassan; Karakatsanis, Nicolas A.; Jha, Abhinav K.; Casey, Michael E.; Kadrmas, Dan J.; Rahmim, Arman
2017-06-01
modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
Nasal pulse oximetry overestimates oxygen saturation
DEFF Research Database (Denmark)
Rosenberg, J; Pedersen, M H
1990-01-01
Ten surgical patients were monitored with nasal and finger pulse oximetry (Nellcor N-200) for five study periods with alternating mouth and nasal breathing and switching of cables and sensors. Nasal pulse oximetry was found to overestimate arterial oxygen saturation by 4.7 (SD 1.4%) (bias...
Obese patients overestimate physicians’ attitudes of respect
Gudzune, Kimberly A.; Huizinga, Mary Margaret; Beach, Mary Catherine; Cooper, Lisa A.
2012-01-01
Objective To evaluate whether obese patients overestimate or underestimate the level of respect that their physicians hold towards them. Methods We performed a cross-sectional analysis of data from questionnaires and audio-recordings of visits between primary care physicians and their patients. Using multilevel logistic regression, we evaluated the association between patient BMI and accurate estimation of physician respect. Physician respectfulness was also rated independently by assessing the visit audiotapes. Results Thirty-nine primary care physicians and 199 of their patients were included in the analysis. The mean patient BMI was 32.8 kg/m2 (SD 8.2). For each 5 kg/m2 increase in BMI, the odds of overestimating physician respect significantly increased [OR 1.32, 95%CI 1.04–1.68, p=0.02]. Few patients underestimated physician respect. There were no differences in ratings of physician respectfulness by independent evaluators of the audiotapes. Conclusion We consider our results preliminary. Patients were significantly more likely to overestimate physician respect as BMI increased, which was not accounted for by increased respectful treatment by the physician. Practice Implications Among patients who overestimate physician respect, the authenticity of the patient-physician relationship should be questioned. PMID:22240006
Obese patients overestimate physicians' attitudes of respect.
Gudzune, Kimberly A; Huizinga, Mary Margaret; Beach, Mary Catherine; Cooper, Lisa A
2012-07-01
To evaluate whether obese patients overestimate or underestimate the level of respect that their physicians hold toward them. We performed a cross-sectional analysis of data from questionnaires and audio-recordings of visits between primary care physicians and their patients. Using multilevel logistic regression, we evaluated the association between patient BMI and accurate estimation of physician respect. Physician respectfulness was also rated independently by assessing the visit audiotapes. Thirty-nine primary care physicians and 199 of their patients were included in the analysis. The mean patient BMI was 32.8 kg/m2 (SD 8.2). For each 5 kg/m2 increase in BMI, the odds of overestimating physician respect significantly increased [OR 1.32, 95% CI 1.04-1.68, p=0.02]. Few patients underestimated physician respect. There were no differences in ratings of physician respectfulness by independent evaluators of the audiotapes. We consider our results preliminary. Patients were significantly more likely to overestimate physician respect as BMI increased, which was not accounted for by increased respectful treatment by the physician. Among patients who overestimate physician respect, the authenticity of the patient-physician relationship should be questioned. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
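The reported OR of 1.32 per 5 kg/m^2 of BMI corresponds to a log-odds coefficient, from which the implied OR for any other BMI difference follows. A short sketch of that arithmetic:

```python
import math

# Reported: OR = 1.32 for overestimating physician respect per 5 kg/m^2 BMI
or_per_5 = 1.32
beta_per_unit = math.log(or_per_5) / 5.0    # log-odds per 1 kg/m^2, ~0.056

# The same coefficient implies the OR for any other BMI difference;
# two 5-unit steps compound multiplicatively:
or_per_10 = math.exp(beta_per_unit * 10.0)  # = 1.32**2, about 1.74
```

Odds ratios on a continuous predictor compound multiplicatively, so a 10 kg/m^2 difference implies 1.32 squared, roughly 1.74 times the odds of overestimating respect.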
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor. The models ...
Overestimation of physical activity level is associated with lower BMI: a cross-sectional analysis
Directory of Open Access Journals (Sweden)
Corder Kirsten
2010-09-01
Full Text Available Abstract Background Poor recognition of physical inactivity may be an important barrier to healthy behaviour change, but little is known about this phenomenon. We aimed to characterize a high-risk population according to the discrepancies between objective and self-rated physical activity (PA), defined as awareness. Methods An exploratory cross-sectional analysis of PA awareness using baseline data collected from 365 ProActive participants between 2001 and 2003 in East Anglia, England. Self-rated PA was defined as 'active' or 'inactive' (assessed via questionnaire). Objective PA was defined according to achievement of guideline activity levels (≥30 minutes or …). Results 63.3% of participants (N = 231) were inactive according to objective measurement. Of these, 45.9% rated themselves as active ('Overestimators'). In a multiple logistic regression model adjusted for age and smoking, males (OR = 2.11, 95% CI = 1.12, 3.98), those with lower BMI (OR = 0.89, 95% CI = 0.84, 0.95), younger age at completion of full-time education (OR = 0.83, 95% CI = 0.74, 0.93) and higher general health perception (OR = 1.02, CI = 1.00, 1.04) were more likely to overestimate their PA. Conclusions Overestimation of PA is associated with favourable indicators of relative slimness and general health. Feedback about PA levels could help reverse misperceptions.
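Odds ratios like those reported above come from multiple logistic regression. The following is a minimal, self-contained sketch of that kind of analysis on synthetic data; the "BMI" variable, the true coefficient, and all parameter values are illustrative assumptions, not the ProActive dataset or the authors' model.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=1500):
    """Plain gradient-descent logistic regression; returns [intercept, slopes...]."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic data in which a lower "BMI" raises the odds of overestimating
# one's activity, mimicking only the direction of the reported OR of 0.89.
random.seed(0)
bmis = [random.gauss(27, 4) for _ in range(400)]
mean_bmi = sum(bmis) / len(bmis)
y = [1 if random.random() < 1 / (1 + math.exp(-(3.0 - 0.12 * b))) else 0
     for b in bmis]
X = [[b - mean_bmi] for b in bmis]   # centering the predictor aids convergence

w = fit_logistic(X, y)
or_per_unit = math.exp(w[1])         # odds ratio per BMI unit; below 1 here
print(round(or_per_unit, 2))
```

Exponentiating a fitted coefficient turns a log-odds slope into an odds ratio per unit of the predictor, which is how results such as "OR = 0.89 per BMI unit" are obtained.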
Cosmological models in general relativity
Indian Academy of Sciences (India)
Cosmological models in general relativity. B B PAUL. Department of Physics, Nowgong College, Nagaon, Assam, India. MS received 4 October 2002; revised 6 March 2003; accepted 21 May 2003. Abstract. LRS Bianchi type-I space-time filled with perfect fluid is considered here with deceleration parameter as variable.
Fermions as generalized Ising models
Directory of Open Access Journals (Sweden)
C. Wetterich
2017-04-01
Full Text Available We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a Euclidean square lattice.
Generalization performance of regularized neural network models
DEFF Research Database (Denmark)
Larsen, Jan; Hansen, Lars Kai
1994-01-01
Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...
Domain-general biases in spatial localization: Evidence against a distorted body model hypothesis.
Medina, Jared; Duckett, Caitlin
2017-07-01
A number of studies have proposed the existence of a distorted body model of the hand. Supporting this hypothesis, judgments of the location of hand landmarks without vision are characterized by consistent distortions-wider knuckle and shorter finger lengths. We examined an alternative hypothesis in which these biases are caused by domain-general mechanisms, in which participants overestimate the distance between consecutive localization judgments that are spatially close. To do so, we examined performance on a landmark localization task with the hand (Experiments 1-3) using a lag-1 analysis. We replicated the widened knuckle judgments in previous studies. Using the lag-1 analysis, we found evidence for a constant overestimation bias along the mediolateral hand axis, such that consecutive stimuli were perceived as farther apart when they were closer (e.g., index-middle knuckle) versus farther (index-pinky) in space. Controlling for this bias, we found no evidence for a distorted body model along the mediolateral hand axis. To examine whether similar widening biases could be found with noncorporeal stimuli, we asked participants to localize remembered dots on a hand-like array (Experiments 4-5). Mean localization judgments were wider than actual along the primary array axis, similar to previous work with hands. As with proprioceptively defined stimuli, we found that this widening was primarily due to a constant overestimation bias. These results provide substantial evidence against a distorted body model hypothesis and support a domain-general model in which responses are biased away from the uncertainty distribution of the previous trial, leading to a constant overestimation bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Multivariate generalized linear mixed models using R
Berridge, Damon Mark
2011-01-01
Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
Generalized latent variable modeling multilevel, longitudinal, and structural equation models
Skrondal, Anders; Rabe-Hesketh, Sophia
2004-01-01
This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.
General introduction to simulation models
DEFF Research Database (Denmark)
Hisham Beshara Halasa, Tariq; Boklund, Anette
2012-01-01
Monte Carlo simulation can be defined as a representation of real-life systems to gain insight into their functions and to investigate the effects of alternative conditions or actions on the modeled system. Models are a simplification of a system. Most often, it is best to use experiments and fie… … as support for decision making. However, several other factors affect decision making, such as ethics, politics and economics. Furthermore, the insight gained when models are built helps point out areas where knowledge is lacking. … of FMD spread that can provide useful and trustworthy advice, there are four important issues which the model should represent: 1) the herd structure of the country in question, 2) the dynamics of animal movements and contacts between herds, 3) the biology of the disease, and 4) the regulations…
MRI Overestimates Excitotoxic Amygdala Lesion Damage in Rhesus Monkeys
Directory of Open Access Journals (Sweden)
Benjamin M. Basile
2017-06-01
Full Text Available Selective, fiber-sparing excitotoxic lesions are a state-of-the-art tool for determining the causal contributions of different brain areas to behavior. For nonhuman primates especially, it is advantageous to keep subjects with high-quality lesions alive and contributing to science for many years. However, this requires the ability to estimate lesion extent accurately. Previous research has shown that in vivo T2-weighted magnetic resonance imaging (MRI) accurately estimates damage following selective ibotenic acid lesions of the hippocampus. Here, we show that the same does not apply to lesions of the amygdala. Across 19 hemispheres from 13 rhesus monkeys, MRI assessment consistently overestimated amygdala damage as assessed by microscopic examination of Nissl-stained histological material. Two outliers suggested a linear relation for lower damage levels, and values of unintended amygdala damage from a previous study fell directly on that regression line, demonstrating that T2 hypersignal accurately predicts damage levels below 50%. For unintended damage, MRI estimates correlated with histological assessment for entorhinal cortex, perirhinal cortex and hippocampus, though MRI significantly overestimated the extent of that damage in all structures. Nevertheless, ibotenic acid injections routinely produced extensive intentional amygdala damage with minimal unintended damage to surrounding structures, validating the general success of the technique. The field will benefit from more research into in vivo lesion assessment techniques, and additional evaluation of the accuracy of MRI assessment in different brain areas. For now, in vivo MRI assessment of ibotenic acid lesions of the amygdala can be used to confirm successful injections, but MRI estimates of lesion extent should be interpreted with caution.
Actuarial statistics with generalized linear mixed models
Antonio, K.; Beirlant, J.
2007-01-01
Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics
Generalized Linear Models in Family Studies
Wu, Zheng
2005-01-01
Generalized linear models (GLMs), as defined by J. A. Nelder and R. W. M. Wedderburn (1972), unify a class of regression models for categorical, discrete, and continuous response variables. As an extension of classical linear models, GLMs provide a common body of theory and methodology for some seemingly unrelated models and procedures, such as…
Quartz red TL SAR equivalent dose overestimation for Chinese loess
DEFF Research Database (Denmark)
Lai, Z.P.; Murray, A.S.; Bailey, R.M.
2006-01-01
For the red TL of quartz extracted from Chinese loess, the single-aliquot regenerative-dose (SAR) procedure overestimates the known laboratory doses in a dose recovery test. The overestimation is the result of the first heating, during the measurement of the natural TL signal, causing a sensitivity…
Micro Data and General Equilibrium Models
DEFF Research Database (Denmark)
Browning, Martin; Hansen, Lars Peter; Heckman, James J.
1999-01-01
Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different economic models, and assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity…
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
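The universal saturating functional response derived in the abstract above can be illustrated with a classic special case. This is a minimal sketch assuming a Rosenzweig-MacArthur consumer-resource model with a Holling type II response; the parameter values are arbitrary illustrations, not those of the authors' general framework.

```python
def holling2(R, a=0.8, h=0.5):
    """Holling type II (saturating) functional response: intake per consumer."""
    return a * R / (1.0 + a * h * R)

def step(R, C, dt=0.01, r=1.0, K=2.0, e=0.5, m=0.2):
    """One Euler step of a Rosenzweig-MacArthur consumer-resource model:
    logistic resource growth minus consumption; conversion minus mortality."""
    f = holling2(R)
    dR = r * R * (1 - R / K) - f * C
    dC = e * f * C - m * C
    return R + dt * dR, C + dt * dC

R, C = 1.5, 0.5
for _ in range(50000):            # integrate 500 time units
    R, C = step(R, C)

print(round(R, 2), round(C, 2))   # settles at a coexistence equilibrium
print(round(holling2(1e6), 2))    # intake saturates toward the ceiling 1/h
```

As the resource becomes abundant, per-consumer intake approaches 1/h (the handling-time ceiling), which is the saturation property the general model guarantees across consumer strategies.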
Conformity and Dissonance in Generalized Voter Models
Page, Scott E.; Sander, Leonard M.; Schneider-Mizell, Casey M.
2007-09-01
We generalize the voter model to include social forces that produce conformity among voters and avoidance of cognitive dissonance of opinions within a voter. The time for both conformity and consistency (which we call the exit time) is, in general, much longer than for either process alone. We show that our generalized model can be applied quite widely: it is a form of Wright's island model of population genetics, and is related to problems in the physical sciences. We give scaling arguments, numerical simulations, and analytic estimates for the exit time for a range of relative strengths in the tendency to conform and to avoid dissonance.
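The exit time discussed above can be made concrete with the classic (non-generalized) voter model that the paper extends. A minimal sketch on a complete graph, with system size and seeds chosen purely for illustration:

```python
import random

def voter_consensus_time(n=30, seed=1):
    """Classic voter model on a complete graph: at each micro-step a random
    voter copies the opinion of a random other voter; returns the number of
    steps until consensus (the exit time for conformity alone)."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while 0 < sum(opinions) < n:
        i = rng.randrange(n)
        j = rng.randrange(n - 1)      # pick j != i
        if j >= i:
            j += 1
        opinions[i] = opinions[j]
        steps += 1
    return steps

times = [voter_consensus_time(seed=s) for s in range(30)]
print(sum(times) / len(times))        # mean exit time over 30 runs
```

The generalized model adds a second, within-voter consistency process (dissonance avoidance); the paper's point is that the joint exit time is generally much longer than for either process alone, such as the conformity-only baseline simulated here.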
[Integrity of young abortion applicants is overestimated].
Olsson, S
1999-03-03
In 1995 an 11-year-old North African girl underwent two abortions over the course of half a year without her mother/guardian or social service having been contacted beforehand in each case. The maternal health care center, the school nurse, and the Karolinska Hospital women's ward took care of the operations. The girl has consistently claimed that if her home had been contacted she would have been rejected by her family, and the health care personnel have accepted her assertion. The reasons for not contacting the home or social services are 1) that it has become established practice that a young woman's need for personal integrity takes precedence over the need of health care authorities for information, 2) that the social agency's general guidelines from 1989 concerning abortion do not specify a specific lower age limit for the right of a woman to have an abortion, 3) that the gynecological clinic of Karolinska Hospital also played a part by placing the medical need of the girl ahead of the social need, and 4) that there is a general tendency for authorities to take over the responsibility of parents. In September 1995 the girl was placed in a state home and later in a correctional facility because she escaped several times, had a boyfriend with whom she planned to be engaged, and had received various presents, leading to the suspicion that she had been exposed to sexual exploitation. A psychologist's examination has shown that she has pseudo-maturity and regressive tendency--evidence that she has suffered a strong childhood trauma. A mother's love and care may be the only thing that will ensure her a better future.
A Generalized Random Regret Minimization Model
Chorus, C.G.
2013-01-01
This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2005-01-01
Roč. 11, č. 2 (2005), s. 129-135 ISSN 1385-3139 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear interval equations * interval hull * midpoint preconditioning * overestimation Subject RIV: BA - General Mathematics
EOP MIT General Circulation Model (MITgcm)
National Oceanic and Atmospheric Administration, Department of Commerce — This data contains a regional implementation of the Massachusetts Institute of Technology general circulation model (MITgcm) at a 1-km spatial resolution for the...
Generalized Reduced Order Model Generation, Phase I
National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...
Empirical generalization assessment of neural network models
DEFF Research Database (Denmark)
Larsen, Jan; Hansen, Lars Kai
1995-01-01
This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest using the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model. This enables the formulation of a bulk of new generalization performance measures. Numerical results demonstrate the viability of the approach compared to the standard technique of using algebraic estimates like the FPE. Moreover, we consider the problem of comparing the generalization performance of different competing models. Since all models are trained on the same data, a key issue is to take this dependency into account. The optimal split of the data set of size N into a cross-validation set of size Nγ and a training set of size N(1-γ) is discussed. Asymptotically (large data sets), γopt→1…
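The split of N points into a cross-validation set of size Nγ and a training set of size N(1-γ) can be sketched concretely. This minimal illustration fits a least-squares line and estimates its generalization error on the held-out fraction; γ = 0.25 and the synthetic data-generating process are illustrative assumptions, not choices from the paper.

```python
import random

def fit_line(pts):
    """Closed-form least squares for y = a + b*x."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    return my - b * mx, b

def split_error(data, gamma, seed=0):
    """Train on N(1-gamma) points; estimate generalization error on N*gamma."""
    rng = random.Random(seed)
    pts = data[:]
    rng.shuffle(pts)
    cut = int(len(pts) * (1 - gamma))
    a, b = fit_line(pts[:cut])
    val = pts[cut:]
    return sum((y - (a + b * x)) ** 2 for x, y in val) / len(val)

random.seed(42)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in
        [random.uniform(0, 10) for _ in range(200)]]
err = split_error(data, gamma=0.25)
print(err)   # should sit near the noise variance of 0.25
```

Repeating `split_error` over many reshuffles (the resampling step) yields a distribution of generalization-error estimates rather than a single number, which is the idea the abstract describes.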
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Perturbed generalized multicritical one-matrix models
Ambjørn, J.; Chekhov, L.; Makeenko, Y.
2018-03-01
We study perturbations around the generalized Kazakov multicritical one-matrix model. The multicritical matrix model has a potential where the coefficients of z^n fall off only as a power 1/n^(s+1). This implies that the potential and its derivatives have a cut along the real axis, leading to technical problems when one performs perturbations away from the generalized Kazakov model. Nevertheless it is possible to relate the perturbed partition function to the tau-function of a KdV hierarchy and solve the model by a genus expansion in the double scaling limit.
GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.
Chen, Cheng-lung
1988-01-01
The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion) is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplifying assumption of constant grain concentration.
[On the overestimation of the benefit of prevention].
Mühlhauser, Ingrid
2014-01-01
Both pharmacological and non-pharmacological preventive interventions can do more harm than good. Health checks target a healthy or symptomless population. This is why randomised controlled trials (RCTs) must be conducted to provide high-quality evidence for the benefit of an intervention. The present article presents examples to demonstrate that the benefit of preventive interventions is usually overestimated. Standard screening criteria are used to critically appraise selected preventive interventions. Screening criteria cover the disease, the test, the treatment and the whole programme including evaluation and quality assurance. Type-2 diabetes mellitus is used as an example to discuss specific criteria for preventive interventions. The current state of the evidence is outlined. The article is based primarily on systematic / Cochrane reviews of RCTs. A recent Cochrane review including 16 RCTs concluded that there is no benefit of general health checks. High-quality evidence on individual components of health checks is frequently missing or inconclusive. Over the last 30 years reference values for normal blood glucose and normal blood pressure as well as treatment targets for patients with type-2 diabetes mellitus and hypertension have been repeatedly decreased though this is not supported by evidence. Recent high-quality RCTs have shown that these "hit hard and early" interventions are detrimental, particularly to those who were the primary target group. Consequently, treatment targets have again been raised and recent guidelines recommend individualisation of treatment goals taking age and comorbidities into account. Important criteria for the implementation of preventive interventions are not currently met. With regard to type-2 diabetes uncertainties remain as to the clinical significance of pre-diabetes, the treatment of pre-diabetes and early treatment of diabetes, the screening tests, and target groups. The ADDITION study was unable to prove the benefit of
Generalization of the quark rearrangement model
International Nuclear Information System (INIS)
Fields, T.; Chen, C.K.
1976-01-01
An extension and generalization of the quark rearrangement model of baryon annihilation is described which can be applied to all annihilation reactions and which incorporates some of the features of the highly successful quark parton model. Some p anti-p interactions are discussed
Geometrical efficiency in computerized tomography: generalized model
International Nuclear Information System (INIS)
Costa, P.R.; Robilotta, C.C.
1992-01-01
A simplified model for producing sensitivity and exposure profiles in computerized tomographic systems was recently developed, allowing the behaviour of the profiles at the rotation center of the system to be forecast. The generalization of this model to an arbitrary point of the image plane is described, from which the geometrical efficiency can be evaluated. (C.G.C.)
Generalized linear model for partially ordered data.
Zhang, Qiang; Ip, Edward Haksing
2012-01-13
Within the rich literature on generalized linear models, substantial efforts have been devoted to models for categorical responses that are either completely ordered or completely unordered. Few studies have focused on the analysis of partially ordered outcomes, which arise in practically every area of study, including medicine, the social sciences, and education. To fill this gap, we propose a new class of generalized linear models--the partitioned conditional model--that includes models for both ordinal and unordered categorical data as special cases. We discuss the specification of the partitioned conditional model and its estimation. We use an application of the method to a sample of the National Longitudinal Study of Youth to illustrate how the new method is able to extract from partially ordered data useful information about smoking youths that is not possible using traditional methods. Copyright © 2011 John Wiley & Sons, Ltd.
Topics in the generalized vector dominance model
International Nuclear Information System (INIS)
Chavin, S.
1976-01-01
Two topics are covered in the generalized vector dominance model. In the first topic a model is constructed for dilepton production in hadron-hadron interactions based on the idea of generalized vector dominance. It is argued that in the high-mass region the generalized vector-dominance model and the Drell-Yan parton model are alternative descriptions of the same underlying physics. In the low-mass region the models differ; the vector-dominance approach predicts a greater production of dileptons. It is found that the high-mass vector mesons which are the hallmark of the generalized vector-dominance model make little contribution to the large yield of leptons observed in the transverse-momentum range 1 < p⊥ < 6 GeV. The recently measured hadronic parameters lead one to believe that detailed fits to the data are possible under the model. The extreme sensitivity of the large-p⊥ lepton yield to the large-transverse-momentum tail of vector-meson production was expected, and is illustrated with a simple model. The second topic is an attempt to explain the mysterious phenomenon of photon shadowing in nuclei utilizing the contribution of the longitudinally polarized photon. It is argued that if the scalar photon anti-shadows, it could compensate for the transverse photon, which is presumed to shadow. It is found in a very simple model that the scalar photon could indeed anti-shadow. The principal feature of the model is a cancellation of amplitudes. The scheme is consistent with scalar photon-nucleon data as well. The idea is tested with two simple GVDM models, and it is found that the anti-shadowing contribution of the scalar photon is not sufficient to compensate for the contribution of the transverse photon. It is doubtful that the scalar photon makes a significant contribution to the total photon-nuclear cross section.
Generalizations of the noisy-or model
Czech Academy of Sciences Publication Activity Database
Vomlel, Jiří
2015-01-01
Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf
Toward General Analysis of Recursive Probability Models
Pless, Daniel; Luger, George
2013-01-01
There is increasing interest within the research community in the design and use of recursive probability models. Although there still remains concern about computational complexity costs and the fact that computing exact solutions can be intractable for many nonrecursive models and impossible in the general case for recursive problems, several research groups are actively developing computational techniques for recursive stochastic languages. We have developed an extension to the traditional...
General Equilibrium Models: Improving the Microeconomics Classroom
Nicholson, Walter; Westhoff, Frank
2009-01-01
General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…
General regression and representation model for classification.
Directory of Open Access Journals (Sweden)
Jianjun Qian
Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
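Generalized Tikhonov regularization, which the abstract above builds on, has a compact closed form: minimize ||Xw - y||² + ||Γw||², solved by w = (XᵀX + ΓᵀΓ)⁻¹Xᵀy. The sketch below uses synthetic data and shows plain ridge as the special case Γ = √λ·I; it is an illustration of the regularizer only, not the authors' full GRR formulation.

```python
import numpy as np

def generalized_tikhonov(X, y, Gamma):
    """Solve min_w ||Xw - y||^2 + ||Gamma w||^2 (generalized Tikhonov)."""
    return np.linalg.solve(X.T @ X + Gamma.T @ Gamma, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=50)

# Plain ridge regression is the special case Gamma = sqrt(lam) * I;
# richer choices of Gamma encode prior correlation structure.
lam = 0.1
w_ridge = generalized_tikhonov(X, y, np.sqrt(lam) * np.eye(5))
print(np.round(w_ridge, 1))
```

Replacing the identity with a non-diagonal Γ is what lets a model of this family encode correlations (here, hypothetically, between representation residuals) rather than penalizing each coefficient independently.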
Current definition and a generalized federbush model
International Nuclear Information System (INIS)
Singh, L.P.S.; Hagen, C.R.
1978-01-01
The Federbush model is studied, with particular attention being given to the definition of currents. Inasmuch as there is no a priori restriction of local gauge invariance, the currents in the interacting case can be defined more generally than in Q.E.D. It is found that two arbitrary parameters are thereby introduced into the theory. Lowest order perturbation calculations for the current correlation functions and the Fermion propagators indicate that the theory admits a whole class of solutions dependent upon these parameters with the closed solution of Federbush emerging as a special case. The theory is shown to be locally covariant, and a conserved energy--momentum tensor is displayed. One finds in addition that the generators of gauge transformations for the fields are conserved. Finally it is shown that the general theory yields the Federbush solution if suitable Thirring model type counterterms are added
Generalized Additive Models for Nowcasting Cloud Shading
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Paulescu, M.; Badescu, V.
2014-01-01
Roč. 101, March (2014), s. 272-282 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : sunshine number * nowcasting * generalized additive model * Markov chain Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014
Generalized data stacking programming model with applications
Hala Samir Elhadidy; Rawya Yehia Rizk; Hassen Taher Dorrah
2016-01-01
Recent researches have shown that, everywhere in various sciences the systems are following stacked-based stored change behavior when subjected to events or varying environments “on and above” their normal situations. This paper presents a generalized data stack programming (GDSP) model which is developed to describe the system changes under varying environment. These changes which are captured with different ways such as sensor reading are stored in matrices. Extraction algorithm and identif...
A General Business Model for Marine Reserves
Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H.; Rosenberg, Andrew A.; Sumaila, Rashid
2013-01-01
Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models. PMID:23573192
Overestimation of Knowledge about Word Meanings: The "Misplaced Meaning" Effect
Kominsky, Jonathan F.; Keil, Frank C.
2014-01-01
Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a "Misplaced Meaning" (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much…
Long, Stephen P; Ainsworth, Elizabeth A; Leakey, Andrew D B; Morgan, Patrick B
2005-11-29
Predictions of yield for the globe's major grain and legume arable crops suggest that, with a moderate temperature increase, production may increase in the temperate zone, but decline in the tropics. In total, global food supply may show little change. This security comes from inclusion of the direct effect of rising carbon dioxide (CO2) concentration, [CO2], which significantly stimulates yield by decreasing photorespiration in C3 crops and transpiration in all crops. Evidence for a large response to [CO2] is largely based on studies made within chambers at small scales, which would be considered unacceptable for standard agronomic trials of new cultivars or agrochemicals. Yet, predictions of the globe's future food security are based on such inadequate information. Free-Air Concentration Enrichment (FACE) technology now allows investigation of the effects of rising [CO2] and ozone on field crops under fully open-air conditions at an agronomic scale. Experiments with rice, wheat, maize and soybean show smaller increases in yield than anticipated from studies in chambers. Experiments with increased ozone show large yield losses (20%), which are not accounted for in projections of global food security. These findings suggest that current projections of global food security are overoptimistic. The fertilization effect of CO2 is less than that used in many models, while rising ozone will cause large yield losses in the Northern Hemisphere. Unfortunately, FACE studies have been limited in geographical extent and interactive effects of CO2, ozone and temperature have yet to be studied. Without more extensive study of the effects of these changes at an agronomic scale in the open air, our ever-more sophisticated models will continue to have feet of clay.
A Study on Overestimating a Given Fraction Defective by an Imperfect Inspector
Directory of Open Access Journals (Sweden)
Moon Hee Yang
2014-01-01
It has been believed that even an imperfect inspector with nonzero inspection errors could either overestimate or underestimate a given FD (fraction defective) with a 50:50 chance. What happens to existing inspection plans if an imperfect inspector overestimates a known FD that is very low? We address this fundamental question by constructing four mathematical models, under the assumptions that an infinite sequence of items with a known FD is given to an imperfect inspector with nonzero inspection errors, which can be constant and/or randomly distributed with a uniform distribution. We derive four analytical formulas for computing the probability of overestimation (POE) and prove that an imperfect inspector overestimates a given FD with probability greater than 50% whenever the FD is less than a value termed the critical FD. Our mathematical proof indicates that the POE approaches one as the FD approaches zero under our assumptions. Hence, if a given FD is very low, commercial inspection plans should be revised with the POE concept in the near future, for the fairness of commercial trades.
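The abstract's central claim — that the POE approaches one as the true fraction defective approaches zero — can be checked with a small Monte Carlo sketch. The error rates and sample sizes below are illustrative assumptions, not values from the paper:

```python
import random

def prob_overestimate(p, alpha, beta, n_items=500, trials=500, seed=1):
    """Monte Carlo estimate of the probability that an imperfect inspector's
    observed fraction defective exceeds the true fraction defective p.
    alpha = P(rejecting a good item), beta = P(accepting a defective item)."""
    rng = random.Random(seed)
    # Each inspected item is judged defective with probability q:
    q = p * (1 - beta) + (1 - p) * alpha
    over = 0
    for _ in range(trials):
        rejected = sum(rng.random() < q for _ in range(n_items))
        if rejected / n_items > p:
            over += 1
    return over / trials
```

For a very low FD such as p = 0.001 with 2% error rates, the expected observed rate q ≈ 0.021 far exceeds p, so the estimate lands above the truth almost surely, in line with the analytical result described in the abstract.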
Predictors and overestimation of recalled mobile phone use among children and adolescents.
Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klæboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin
2011-12-01
A growing body of literature addresses possible health effects of mobile phone use in children and adolescents by relying on the study participants' retrospective reconstruction of mobile phone use. In this study, we used data from the international case-control study CEFALO to compare self-reported with objectively operator-recorded mobile phone use. The aim of the study was to assess predictors of level of mobile phone use as well as factors that are associated with overestimating own mobile phone use. For cumulative number and duration of calls as well as for time since first subscription we calculated the ratio of self-reported to operator-recorded mobile phone use. We used multiple linear regression models to assess possible predictors of the average number and duration of calls per day and logistic regression models to assess possible predictors of overestimation. The cumulative number and duration of calls as well as the time since first subscription of mobile phones were overestimated on average by the study participants. The likelihood of overestimating the number and duration of calls was not significantly different for controls compared to cases (OR=1.1, 95% CI: 0.5 to 2.5 and OR=1.9, 95% CI: 0.85 to 4.3, respectively). However, the likelihood of overestimating was associated with other health-related factors such as age and sex. As a consequence, such factors act as confounders in studies relying solely on self-reported mobile phone use and have to be considered in the analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
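The study's core quantity — the ratio of self-reported to operator-recorded use, dichotomized into an overestimation indicator for the logistic models — can be sketched as follows (variable names and values are hypothetical, not CEFALO data):

```python
def overestimation_flags(self_reported, operator_recorded):
    """For paired self-reported vs. operator-recorded call durations,
    return per-subject ratios and an overestimation flag (ratio > 1),
    the binary outcome a logistic regression model would use."""
    ratios = [s / r for s, r in zip(self_reported, operator_recorded)]
    flags = [ratio > 1.0 for ratio in ratios]
    return ratios, flags
```

A subject reporting 120 minutes against 60 recorded minutes gets ratio 2.0 and is flagged as overestimating.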
A generalized additive regression model for survival times
DEFF Research Database (Denmark)
Scheike, Thomas H.
2001-01-01
Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models
Generalized data stacking programming model with applications
Directory of Open Access Journals (Sweden)
Hala Samir Elhadidy
2016-09-01
Recent research has shown that, across various sciences, systems follow stack-based stored-change behavior when subjected to events or varying environments "on and above" their normal situations. This paper presents a generalized data stack programming (GDSP) model, developed to describe system changes under varying environments. These changes, which are captured in different ways such as sensor readings, are stored in matrices. An extraction algorithm and an identification technique are proposed to extract the different layers between images and to identify the stack class an object follows, respectively. The general multi-stacking network is presented, including the interaction between the various stack-based layerings of some applications. The experiments show that the concept of the stack matrix gives an average accuracy of 99.45%.
Testing Parametric versus Semiparametric Modelling in Generalized Linear Models
Härdle, W.K.; Mammen, E.; Müller, M.D.
1996-01-01
We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.
Modelling debris flows down general channels
Directory of Open Access Journals (Sweden)
S. P. Pudasaini
2005-01-01
This paper is an extension of the single-phase cohesionless dry granular avalanche model over curved and twisted channels proposed by Pudasaini and Hutter (2003). It is a generalisation of the Savage and Hutter (1989, 1991) equations, based on simple channel topography, to a two-phase fluid-solid mixture of debris material. Important terms emerging from the correct treatment of the kinematic and dynamic boundary conditions and of the variable basal topography are systematically taken into account. For vanishing fluid contribution and torsion-free channel topography our new model equations exactly degenerate to the previous Savage-Hutter model equations, while such a degeneration was not possible with the Iverson and Denlinger (2001) model, which, in fact, also aimed to extend the Savage and Hutter model. The model equations of this paper have been rigorously derived; they include the effects of the curvature and torsion of the topography, generally for arbitrarily curved and twisted channels of variable channel width. The equations are put into a standard conservative form of partial differential equations. From these one can easily infer the importance and influence of the pore-fluid-pressure distribution in debris flow dynamics. The solid phase is modelled by applying a Coulomb dry friction law, whereas the fluid phase is assumed to be an incompressible Newtonian fluid. Input parameters of the equations are the internal and bed friction angles of the solid particles, the viscosity and volume fraction of the fluid, the total mixture density and the pore pressure distribution of the fluid at the bed. Given the bed topography and the initial geometry and initial velocity profile of the debris mixture, the model equations are able to describe the dynamics of the depth profile and bed-parallel depth-averaged velocity distribution from the initial position to the final deposit. A shock capturing, total variation diminishing numerical scheme is implemented to
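The abstract ends by mentioning a shock-capturing, total variation diminishing (TVD) numerical scheme. As a minimal illustration of the TVD property — for a scalar model equation, not the paper's full two-phase debris-flow system — first-order upwind for linear advection never increases the total variation when the Courant number is at most one:

```python
def upwind_step(u, c):
    """One periodic step of first-order upwind for u_t + a*u_x = 0 (a > 0),
    with Courant number c = a*dt/dx in (0, 1]. This scheme is TVD:
    the total variation of u never increases from step to step."""
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]  # u[-1] wraps

def total_variation(u):
    """Periodic total variation: sum_i |u[i] - u[i-1]|."""
    return sum(abs(u[i] - u[i - 1]) for i in range(len(u)))
```

Advecting a square wave with c = 0.5 smears the discontinuity (first-order diffusion) but produces no new oscillations, and the cell sum is conserved.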
Thurstonian models for sensory discrimination tests as generalized linear models
DEFF Research Database (Denmark)
Brockhoff, Per B.; Christensen, Rune Haubo Bojesen
2010-01-01
as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-Not A method is shown to appear as a standard linear contrast in a generalized linear model using the probit link function. All methods developed in the paper are implemented in our free R package sensR (http://www.cran.r-project.org/package=sensR/). This includes the basic power and sample size calculations for these four discrimination tests...
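The link between the Thurstonian d' and a probit-link GLM can be sketched without any model-fitting machinery: for the monadic A-Not A protocol, the maximum-likelihood d' is the difference of probit-transformed hit and false-alarm rates, which is exactly the stimulus-indicator contrast in a binomial GLM with probit link. (A sketch of the standard relation; the paper's sensR package implements this and much more in R.)

```python
from statistics import NormalDist

def d_prime_a_not_a(hits, n_signal, false_alarms, n_noise):
    """Thurstonian d' for the A-Not A test: the probit-scale distance
    between the 'A' and 'Not A' response distributions. Equals the
    stimulus-indicator coefficient of a probit-link binomial GLM."""
    z = NormalDist().inv_cdf  # Phi^{-1}, the probit transform
    return z(hits / n_signal) - z(false_alarms / n_noise)
```

With 70/100 hits and 30/100 false alarms, d' = z(0.7) - z(0.3) ≈ 1.05; equal hit and false-alarm rates give d' = 0, i.e. no detectable sensory difference.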
The Generalized Quantum Episodic Memory Model.
Trueblood, Jennifer S; Hemmer, Pernille
2017-11-01
Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.
The epistemological status of general circulation models
Loehle, Craig
2017-05-01
Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.
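The point that visual and quantitative measures of model fit depend strongly on the anomaly reference period is easy to demonstrate. The series below are synthetic stand-ins (assumed for illustration, not real GCM output or observations): the same model-observation pair scores differently depending solely on which window is used for normalization:

```python
def anomalize(series, ref):
    """Express a series as anomalies from its mean over the indices in ref."""
    base = sum(series[i] for i in ref) / len(ref)
    return [x - base for x in series]

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

obs   = [0.0, 0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # synthetic "observations"
model = [0.2, 0.3, 0.3, 0.4, 1.0, 1.1, 1.3, 1.5]   # synthetic biased "model"

early, late = range(0, 3), range(5, 8)
fit_early = rmse(anomalize(obs, early), anomalize(model, early))
fit_late  = rmse(anomalize(obs, late),  anomalize(model, late))
# Same data, same model; the fit metric changes with the reference window.
```

Normalizing over the early window absorbs the early bias and shifts the apparent disagreement to the late period, and vice versa, which is exactly why statistical testing of anomaly-based GCM output is problematic.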
Generalized continuous linear model of international trade
Directory of Open Access Journals (Sweden)
Kostenko Elena
2014-01-01
The probability-based approach to the linear model of international trade, based on the theory of Markov processes with continuous time, is analysed. A generalized continuous model of international trade is built, in which the transition of the system from state to state is described by linear differential equations. The methodology for obtaining the intensity matrices, which are differential in nature, is shown, and the same is done for their corresponding transition matrices for the processes of purchasing and selling. In the creation of the continuous model, matrix functions and operations were used in addition to the Laplace transform, which gave the analytical form of the transition matrices, and therefore the expressions for the state vectors of the system. The obtained expressions simplify analysis and calculations in comparison to other methods. The values of the continuous transition matrices contain the results of the discrete model of international trade at moments in time proportional to the time step. The continuous model improves the quality of planning and the effectiveness of control of international trade agreements.
Aspects of general linear modelling of migration.
Congdon, P
1992-01-01
"This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt
Superconductivity in a generalized Hubbard model
Arrachea, Liliana; Aligia, A. A.
1997-02-01
We consider a Hubbard model in the square lattice, with a generalized hopping between nearest-neighbor sites for spin up (down), which depends on the total occupation nb of spin down (up) electrons on both sites. We call the hopping parameters tAA, tAB, and tBB for nb = 0, 1 or 2 respectively. Using the Hartree-Fock and Bardeen-Cooper-Schrieffer mean-field approximations to decouple the two-body and three-body interactions, we find that the model exhibits extended s-wave superconductivity in the electron-hole symmetric case tAB > tAA = tBB for small values of the Coulomb repulsion U or small band fillings. For moderate values of U, the antiferromagnetic normal (AFN) state has lower energy. The translationally invariant d-wave superconducting state has always larger energy than the AFN state.
Functional methods in the generalized Dicke model
International Nuclear Information System (INIS)
Alcalde, M. Aparicio; Lemos, A.L.L. de; Svaiter, N.F.
2007-01-01
The Dicke model describes an ensemble of N identical two-level atoms (qubits) coupled to a single quantized mode of a bosonic field. The fermion Dicke model is obtained by replacing the atomic pseudo-spin operators with a linear combination of Fermi operators. The generalized fermion Dicke model is defined by introducing different coupling constants between the single mode of the bosonic field and the reservoir, g1 and g2 for the rotating and counter-rotating terms respectively. In the limit N → ∞, the thermodynamics of the fermion Dicke model can be analyzed using the path-integral approach with functional methods. The system exhibits a second-order phase transition from normal to superradiant at some critical temperature, with the presence of a condensate. We evaluate the critical transition temperature and present the spectrum of the collective bosonic excitations for the general case (g1 ≠ 0 and g2 ≠ 0). There is quantum critical behavior when the coupling constants g1 and g2 satisfy g1 + g2 = (ω0 Ω)^(1/2), where ω0 is the frequency of the mode of the field and Ω is the energy gap between the energy eigenstates of the qubits. Two particular situations are analyzed. First, we present the spectrum of the collective bosonic excitations in the case g1 ≠ 0 and g2 = 0, recovering the well-known results. Second, the case g1 = 0 and g2 ≠ 0 is studied. In this last case, it is possible to have a superradiant phase when only virtual processes are introduced in the interaction Hamiltonian. Here a quantum phase transition also appears, at the critical coupling g2 = (ω0 Ω)^(1/2), and for larger values of the coupling the system enters this superradiant phase with a Goldstone mode. (author)
Chaos in generalized Jaynes-Cummings model
Energy Technology Data Exchange (ETDEWEB)
Chotorlishvili, L. [Institute for Physik, Martin-Luther-University Halle-Wittenberg, Heinrich-Damerow-Str. 4, 06120 Halle (Germany)], E-mail: lchotor33@yahoo.com; Toklikishvili, Z. [Physics Department of the Tbilisi State University, Chavchavadze av. 3, 0128 Tbilisi (Georgia)
2008-04-14
The possibility of chaos formation is studied in terms of a generalized Jaynes-Cummings model, which is a key model in the quantum electrodynamics of resonators. In particular, the dynamics of a three-level optical atom under the action of the resonator field is considered. A specific feature of the problem considered is that not all transitions between the atomic levels are permitted. This asymmetry of the system accounts for the complexity of the problem and makes it different from the three-level systems studied previously. We consider the most general case, where the interaction of the system with the resonator depends on the system's coordinate inside the resonator. It is shown that, contrary to the commonly accepted opinion, the absence of resonance detuning does not guarantee the controllability of the system's state. In the course of evolution the system performs an irreversible transition from a purely quantum-mechanical state to a mixed state. It is shown that the asymmetry of the system's levels accounts for the fact that the upper excited level turns out to be the most populated one.
A general phenomenological model for work function
Brodie, I.; Chou, S. H.; Yuan, H.
2014-07-01
A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.
A general model of learning design objects
Directory of Open Access Journals (Sweden)
Azeddine Chikh
2014-01-01
Previous research on the development of learning objects has targeted either learners, as consumers of these objects, or instructors, as designers who reuse these objects in building new online courses. There is currently an urgent need for the sharing and reuse of both theoretical knowledge (literature reviews) and practical knowledge (best practice) in learning design. The primary aim of this paper is to develop a strategy for constructing a more powerful set of learning objects targeted at supporting instructors in designing their curricula. A key challenge in this work is the definition of a new class of learning design objects that combine two types of knowledge: (1) reusable knowledge, consisting of theoretical and practical information on education design, and (2) knowledge of reuse, which is necessary to describe the reusable knowledge using an extended learning object metadata language. In addition, we introduce a general model of learning design object repositories based on the Unified Modeling Language, and a learning design support framework is proposed based on the repository model. Finally, a first prototype is developed to provide a subjective evaluation of the new framework.
Symplectic models for general insertion devices
International Nuclear Information System (INIS)
Wu, Y.; Forest, E.; Robin, D. S.; Nishimura, H.; Wolski, A.; Litvinenko, V. N.
2001-01-01
A variety of insertion devices (IDs), wigglers and undulators, linearly or elliptically polarized, are widely used as high-brightness radiation sources at modern light source rings. Long and high-field wigglers have also been proposed as the main source of radiation damping at next-generation damping rings. As a result, it becomes increasingly important to understand the impact of IDs on the charged-particle dynamics in the storage ring. In this paper, we report our recent development of a general explicit symplectic model for IDs with the paraxial ray approximation. High-order explicit symplectic integrators are developed to study real-world insertion devices with a number of wiggler harmonics and arbitrary polarizations
New model for nucleon generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Radyushkin, Anatoly V. [JLAB, Newport News, VA (United States)
2014-01-01
We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based heavily on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξ E_+^1(x, ξ), which has the DD structure (α/β) e(β, α), with e(β, α) being the DD that generates GPD E(x, ξ). We found that this function, unlike the D-term, has support in the whole -1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.
A Simple General Model of Evolutionary Dynamics
Thurner, Stefan
Evolution is a process in which some variations that emerge within a population (of, e.g., biological species or industrial goods) get selected, survive, and proliferate, whereas others vanish. Survival probability, proliferation, or production rates are associated with the "fitness" of a particular variation. We argue that the notion of fitness is an a posteriori concept, in the sense that one can assign higher fitness to species or goods that survive but one can generally not derive or predict fitness per se. Whereas proliferation rates can be measured, fitness landscapes, that is, the inter-dependence of proliferation rates, cannot. For this reason we think that in a physical theory of evolution such notions should be avoided. Here we review a recent quantitative formulation of evolutionary dynamics that provides a framework for the co-evolution of species and their fitness landscapes (Thurner et al., 2010, Physica A 389, 747; Thurner et al., 2010, New J. Phys. 12, 075029; Klimek et al., 2010, Phys. Rev. E 82, 011901). The corresponding model leads to a generic evolutionary dynamics characterized by phases of relative stability in terms of diversity, followed by phases of massive restructuring. These dynamical modes can be interpreted as punctuated equilibria in biology, or Schumpeterian business cycles (Schumpeter, 1939, Business Cycles, McGraw-Hill, London) in economics. We show that phase transitions that separate phases of high and low diversity can be approximated surprisingly well by mean-field methods. We demonstrate that the mathematical framework is suited to understanding systemic properties of evolutionary systems, such as their proneness to collapse, or their potential for diversification. The framework suggests that evolutionary processes are naturally linked to self-organized criticality and to properties of production matrices, such as their eigenvalue spectra. Even though the model is phrased in general terms it is also practical in the sense
MODEL OF BRAZILIAN URBANIZATION: GENERAL NOTES
Directory of Open Access Journals (Sweden)
Leandro da Silva Guimarães
2016-07-01
This text analyzes social inequality in Brazil through its spatial expression, outlining what is known of the Brazilian model of urbanization and how that model has produced segregated and exclusionary cities. It discusses urban exclusion in the country through the consolidation of what are conventionally called peripheral areas or, more generally, peripheries. The text is the result of Master's-level research carried out at the Federal Fluminense University, which sought to understand the genesis of an urban settlement located in São Gonçalo, Rio de Janeiro, called Jardim Catarina, and the socio-spatial problem that gave rise to it. In this sense, its analysis is central to understanding social and spatial inequalities in Brazil, as well as the role of the state as manager of socio-spatial planning and principal agent in the solution of such problems. It is hoped that the completion of a larger study, of which this article is a small part, can contribute to the formation and crystallization of public policies that address social inequalities and help build fairer and more equitable cities.
Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear
Green, Jennifer S.; Teachman, Bethany A.
2012-01-01
We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390
General single phase wellbore flow model
Energy Technology Data Exchange (ETDEWEB)
Ouyang, Liang-Biao; Arbabi, S.; Aziz, K.
1997-02-05
A general wellbore flow model, which incorporates not only frictional, accelerational and gravitational pressure drops, but also the pressure drop caused by inflow, is presented in this report. The new wellbore model is readily applicable to any wellbore perforation patterns and well completions, and can be easily incorporated in reservoir simulators or analytical reservoir inflow models. Three dimensionless numbers, the accelerational to frictional pressure gradient ratio R_af, the gravitational to frictional pressure gradient ratio R_gf, and the inflow-directional to accelerational pressure gradient ratio R_da, have been introduced to quantitatively describe the relative importance of different pressure gradient components. For fluid flow in a production well, it is expected that there may exist up to three different regions of the wellbore: the laminar flow region, the partially-developed turbulent flow region, and the fully-developed turbulent flow region. The laminar flow region is located near the well toe, the partially-developed turbulent flow region lies in the middle of the wellbore, while the fully-developed turbulent flow region is at the downstream end or the heel of the wellbore. Length of each region depends on fluid properties, wellbore geometry and flow rate. As the distance from the well toe increases, flow rate in the wellbore increases and the ratios R_af and R_da decrease. Consequently accelerational and inflow-directional pressure drops have the greatest impact in the toe region of the wellbore. Near the well heel the local wellbore flow rate becomes large and close to the total well production rate, here R_af and R_da are small, therefore, both the accelerational and inflow-directional pressure drops can be neglected.
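The three dimensionless numbers can be read directly off local pressure-gradient magnitudes. A trivial sketch (input values below are placeholders for illustration, not from the report):

```python
def pressure_gradient_ratios(dp_friction, dp_accel, dp_gravity, dp_inflow):
    """The report's dimensionless numbers: R_af and R_gf are the
    accelerational and gravitational gradients relative to friction;
    R_da is the inflow-directional gradient relative to acceleration."""
    return {
        "R_af": dp_accel / dp_friction,
        "R_gf": dp_gravity / dp_friction,
        "R_da": dp_inflow / dp_accel,
    }
```

Near the well heel, where the local flow rate approaches the total production rate, R_af and R_da come out small, which is the report's justification for dropping the accelerational and inflow-directional terms there.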
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is its normalizing constant (or multiplicative constant), which is not available in closed form. This study proposes a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although the COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similarly to the negative binomial (NB) GLM. Considering that the DP GLM can be estimated with inexpensive computation and that its coefficients are simpler to interpret, it offers a flexible and efficient alternative for researchers modeling count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
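The normalizing-constant issue can be made concrete with a brute-force check. This is not the approximation proposed in the study; the truncated summation below is only a reference implementation of Efron's (1986) double-Poisson form, against which any approximation could be compared.

```python
import math

def log_dp_unnorm(y, mu, theta):
    """Log of the unnormalized double-Poisson mass (Efron 1986 form):
    theta^(1/2) * exp(-theta*mu) * (exp(-y) y^y / y!) * (e*mu/y)^(theta*y)."""
    t = 0.5 * math.log(theta) - theta * mu
    if y > 0:  # the y-dependent factors reduce to 1 at y = 0
        t += -y + y * math.log(y) - math.lgamma(y + 1)
        t += theta * y * (1.0 + math.log(mu) - math.log(y))
    return t

def dp_norm_const(mu, theta, y_max=500):
    """Normalizing constant by truncated summation -- a slow but honest
    reference check, not the paper's high-accuracy approximation."""
    s = sum(math.exp(log_dp_unnorm(y, mu, theta)) for y in range(y_max + 1))
    return 1.0 / s

# With theta = 1 the DP reduces to an ordinary Poisson, so the
# normalizing constant should be essentially 1
print(dp_norm_const(mu=3.0, theta=1.0))
```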
A Note on the Identifiability of Generalized Linear Mixed Models
DEFF Research Database (Denmark)
Labouriau, Rodrigo
2014-01-01
I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity ...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
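The link to the extended Bayesian information criterion (EBIC, Chen and Chen 2008), which the BSR maximum a posteriori model approximately minimizes, can be sketched directly; the log-likelihood values below are made-up illustrations.

```python
import math

def ebic(loglik, k, n, p, gamma=0.5):
    """Extended Bayesian information criterion:
    EBIC = -2*logL + k*log(n) + 2*gamma*log(C(p, k)),
    where k = model size, n = sample size, p = number of predictors.
    The extra combinatorial penalty guards against spurious selection
    when p is large relative to n."""
    return (-2.0 * loglik + k * math.log(n)
            + 2.0 * gamma * math.log(math.comb(p, k)))

# Two candidate subsets with nearly equal fit: EBIC prefers the smaller one
print(ebic(loglik=-120.0, k=2, n=200, p=1000) <
      ebic(loglik=-119.0, k=5, n=200, p=1000))
```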
Multivariate generalized linear model for genetic pleiotropy.
Schaid, Daniel J; Tong, Xingwei; Batzler, Anthony; Sinnwell, Jason P; Qing, Jiang; Biernacka, Joanna M
2017-12-16
When a single gene influences more than one trait, known as pleiotropy, it is important to detect pleiotropy to improve the biological understanding of a gene. This can lead to improved screening, diagnosis, and treatment of diseases. Yet, most current multivariate methods to evaluate pleiotropy test the null hypothesis that none of the traits are associated with a variant; departures from the null could be driven by just one associated trait. A formal test of pleiotropy should assume a null hypothesis that one or fewer traits are associated with a genetic variant. We recently developed statistical methods to analyze pleiotropy for quantitative traits having a multivariate normal distribution. We now extend this approach to traits that can be modeled by generalized linear models, such as analysis of binary, ordinal, or quantitative traits, or a mixture of these types of traits. Based on methods from estimating equations, we developed a new test for pleiotropy. We then extended the testing framework to a sequential approach to test the null hypothesis that $k+1$ traits are associated, given that the null of $k$ associated traits was rejected. This provides a testing framework to determine the number of traits associated with a genetic variant, as well as which traits, while accounting for correlations among the traits. By simulations, we illustrate the Type-I error rate and power of our new methods, describe how they are influenced by sample size, the number of traits, and the trait correlations, and apply the new methods to a genome-wide association study of multivariate traits measuring symptoms of major depression. Our new approach provides a quantitative assessment of pleiotropy, enhancing current analytic practice. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
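The sequential framework's stopping rule can be sketched generically. This illustrates only the stopping logic, not the estimating-equations test statistic itself; the p-values are hypothetical.

```python
def count_associated_traits(pvalues_for_null_k, alpha=0.05):
    """Sequential testing sketch: pvalues_for_null_k[k] is the p-value for
    the null hypothesis 'at most k traits are associated with the variant'.
    Each null is tested only after the previous one is rejected; the first
    non-rejected null gives the estimated number of associated traits."""
    for k, p in enumerate(pvalues_for_null_k):
        if p >= alpha:
            return k
    return len(pvalues_for_null_k)

# e.g. the nulls for k = 0 and k = 1 are rejected, but k = 2 is not:
# the estimate is 2 associated traits
print(count_associated_traits([0.001, 0.01, 0.40]))
```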
A generalized model for homogenized reflectors
International Nuclear Information System (INIS)
Pogosbekyan, Leonid; Kim, Yeong Il; Kim, Young Jin; Joo, Hyung Kook
1996-01-01
A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The method of K. Smith can be simulated within the framework of the new method, while the new method approximates the heterogeneous cell better in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are: improved accuracy, simplicity of incorporation into existing codes, and numerical expense equal to that of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blades simulation; (c) mixed UO2/MOX core simulation. The proposed model has been incorporated in a finite difference code and in the nodal code PANBOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions.
Cosmological models in the generalized Einstein action
International Nuclear Information System (INIS)
Arbab, A.I.
2007-12-01
We have studied the evolution of the Universe in the generalized Einstein action of the form R + βR^2, where R is the scalar curvature and β = const. We have found exact cosmological solutions that predict the present cosmic acceleration. These models predict an inflationary de-Sitter era occurring in the early Universe. The cosmological constant (Λ) is found to decay with the Hubble constant (H) as Λ ∝ H^4. In this scenario the cosmological constant varies quadratically with the energy density (ρ), i.e., Λ ∝ ρ^2. Such a variation is found to describe a two-component cosmic fluid in the Universe. One of the components accelerated the Universe in the early era, and the other in the present era. The scale factor of the Universe varies as a ∼ t^n with n = 1/2 in the radiation era. The cosmological constant vanishes when n = 4/3 and n = 1/2. We have found that the inclusion of the R^2 term mimics a cosmic matter that could substitute for ordinary matter. (author)
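As a consistency check (not stated in the abstract), the two reported scalings agree via the flat-space Friedmann equation for a single dominant fluid:

```latex
H^2 = \frac{8\pi G}{3}\,\rho
\;\Rightarrow\;
\rho \propto H^2
\quad\Longrightarrow\quad
\Lambda \propto \rho^2 \propto \left(H^2\right)^2 = H^4
```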
Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model
Energy Technology Data Exchange (ETDEWEB)
Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)
1999-12-01
The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration, is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8° grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the model's ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere-ocean-sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmosphere general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is already included in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)
Application of Improved Radiation Modeling to General Circulation Models
Energy Technology Data Exchange (ETDEWEB)
Michael J Iacono
2011-04-07
This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.
Charry, Jose D; Tejada, Jorman H; Pinzon, Miguel A; Tejada, Wilson A; Ochoa, Juan D; Falla, Manuel; Tovar, Jesus H; Cuellar-Bahamón, Ana M; Solano, Juan P
2017-05-01
Traumatic brain injury (TBI) is of public health interest and produces significant mortality and disability in Colombia. Calculators and prognostic models have been developed to establish neurologic outcomes. We tested prognostic models (the Marshall computed tomography [CT] score, International Mission for Prognosis and Analysis of Clinical Trials in Traumatic Brain Injury [IMPACT], and Corticosteroid Randomization After Significant Head Injury [CRASH]) for 14-day mortality, 6-month mortality, and 6-month outcome in patients with TBI at a university hospital in Colombia. A 127-patient cohort with TBI was treated in a regional trauma center in Colombia over 2 years, and bivariate and multivariate analyses were used. The discriminatory power of the models, their accuracy, and their precision were assessed by both logistic regression and area under the receiver operating characteristic curve (AUC). Shapiro-Wilk, χ², and Wilcoxon tests were used to compare real outcomes in the cohort against predicted outcomes. The group's median age was 33 years, and 84.25% were male. The median injury severity score was 25, and the median Glasgow Coma Scale motor score was 3. Six-month mortality was 29.13%. Six-month unfavorable outcome was 37%. The mortality prediction by the Marshall CT score was 52.8%, P = 0.104 (AUC 0.585; 95% confidence interval [CI] 0.489-0.681), the mortality prediction by the CRASH prognosis calculator was 59.9%, P < 0.001 (AUC 0.706; 95% CI 0.590-0.821), and the unfavorable outcome prediction by IMPACT was 77%, P < 0.048 (AUC 0.670; 95% CI 0.575-0.763). In a university hospital in Colombia, the Marshall CT score, IMPACT, and CRASH models overestimated the adverse neurologic outcome in patients with severe head trauma. Copyright © 2017 Elsevier Inc. All rights reserved.
Multivariate statistical modelling based on generalized linear models
Fahrmeir, Ludwig
1994-01-01
This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...
Directory of Open Access Journals (Sweden)
S. Q. Zhao
2009-08-01
Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at the local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or by land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between the terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS), along with consistent and spatially explicit land use change scenarios with different intervals (1 yr, 5 yrs, 10 yrs, and static, respectively), to evaluate the impacts of land use change data frequency on estimating regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha^{−1} yr^{−1} as the land use change data interval shifted from 1 year to 5 years, 10 years, and static land use information, respectively. Carbon removal by forest harvesting and the prolonged cumulative impacts of historical land use change on the carbon cycle accounted for the differences in carbon sequestration between the static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes, and mechanisms of terrestrial carbon sequestration at the local to global scales.
The ECHAM3 atmospheric general circulation model
International Nuclear Information System (INIS)
1993-09-01
The ECHAM model has been developed from the ECMWF model (cycle 31, November 1988). It contains several changes, mostly in the parameterization, in order to adjust the model for climate simulations. The technical details of the ECHAM operational model are described. (orig./KW)
Kaplan-Meier Survival Analysis Overestimates the Risk of Revision Arthroplasty: A Meta-analysis.
Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter D; Ghali, William A; Marshall, Deborah A
2015-11-01
Although Kaplan-Meier survival analysis is commonly used to estimate the cumulative incidence of revision after joint arthroplasty, it theoretically overestimates the risk of revision in the presence of competing risks (such as death). Because the magnitude of overestimation is not well documented, the potential associated impact on clinical and policy decision-making remains unknown. We performed a meta-analysis to answer the following questions: (1) To what extent does the Kaplan-Meier method overestimate the cumulative incidence of revision after joint replacement compared with alternative competing-risks methods? (2) Is the extent of overestimation influenced by followup time or rate of competing risks? We searched Ovid MEDLINE, EMBASE, BIOSIS Previews, and Web of Science (1946, 1980, 1980, and 1899, respectively, to October 26, 2013) and included article bibliographies for studies comparing estimated cumulative incidence of revision after hip or knee arthroplasty obtained using both Kaplan-Meier and competing-risks methods. We excluded conference abstracts, unpublished studies, or studies using simulated data sets. Two reviewers independently extracted data and evaluated the quality of reporting of the included studies. Among 1160 abstracts identified, six studies were included in our meta-analysis. The principal reason for the steep attrition (1160 to six) was that the initial search was for studies in any clinical area that compared the cumulative incidence estimated using the Kaplan-Meier versus competing-risks methods for any event (not just the cumulative incidence of hip or knee revision); we did this to minimize the likelihood of missing any relevant studies. We calculated risk ratios (RRs) comparing the cumulative incidence estimated using the Kaplan-Meier method with the competing-risks method for each study and used DerSimonian and Laird random effects models to pool these RRs. Heterogeneity was explored using stratified meta-analyses and
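The direction of the bias is easy to reproduce on toy data: when the competing event (death) is treated as censoring, the 1 − Kaplan-Meier estimate exceeds the Aalen-Johansen cumulative incidence of revision whenever competing events occur. The sketch below is self-contained and does not use any study's data.

```python
def km_vs_cif(records):
    """records: list of (time, event) with event 1 = revision,
    2 = death (competing risk), 0 = censored.
    Returns (1 - Kaplan-Meier survival) with deaths treated as censoring,
    and the Aalen-Johansen cumulative incidence of revision, both
    evaluated at the last observed time."""
    km_surv = 1.0       # revision-free "survival", deaths censored
    overall_surv = 1.0  # event-free survival (revision or death)
    cif = 0.0           # cumulative incidence of revision
    at_risk = len(records)
    for t in sorted({t for t, _ in records}):
        d_rev = sum(1 for tt, e in records if tt == t and e == 1)
        d_death = sum(1 for tt, e in records if tt == t and e == 2)
        cif += overall_surv * d_rev / at_risk
        km_surv *= 1.0 - d_rev / at_risk
        overall_surv *= 1.0 - (d_rev + d_death) / at_risk
        at_risk -= sum(1 for tt, _ in records if tt == t)
    return 1.0 - km_surv, cif

# Toy cohort: revisions interleaved with deaths, two censored patients
data = [(1, 2), (2, 1), (3, 2), (4, 1), (5, 2), (6, 1), (7, 0), (8, 0)]
one_minus_km, cif = km_vs_cif(data)
print(one_minus_km > cif)  # 1 - KM overestimates the revision risk: True
```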
General problems of modeling for accelerators
International Nuclear Information System (INIS)
Luccio, A.
1991-01-01
In this presentation the author discusses only problems of modeling for circular accelerators and bases the examples on the AGS Booster Synchrotron presently being commissioned at BNL. A model is a platonic representation of an accelerator. With algorithms, implemented through computer codes, the model is brought to life. At the start of a new accelerator project, the model and the real machine take shape somewhat apart. They get closer and closer as the project goes on. Ideally, the modeler is only satisfied when the model and the machine cannot be distinguished. Accelerator modeling for real-time control has specific problems. If one wants fast responses, algorithms may be implemented in hardware or by parallel computation, perhaps by neural networks. Algorithms and modeling are not only for accelerator control. They are also for: accelerator parameter measurement; hardware problem debugging, perhaps with some help from artificial intelligence; and operator training, much like a flight simulator.
generalized constitutive model for stabilized quick clay
African Journals Online (AJOL)
QUICK CLAY. Pancras Mugishagwe Bujulu and Gustav Grimstad. Abstract (fragment): An experimentally-based two yield surface constitutive model for cemented quick clay has been ... Clay Model, the Koiter Rule and two Mapping Rules ... models, where a mobilization formulation is used, this is independent of q.
Towards a General Model of Temporal Discounting
van den Bos, Wouter; McClure, Samuel M.
2013-01-01
Psychological models of temporal discounting have now successfully displaced classical economic theory due to the simple fact that many common behavior patterns, such as impulsivity, were unexplainable with classic models. However, the now dominant hyperbolic model of discounting is itself becoming increasingly strained. Numerous factors have…
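The failure of classical theory alluded to above can be shown in a few lines: Mazur's hyperbolic discount function produces the preference reversals characteristic of impulsivity, while exponential discounting cannot. All parameter values are illustrative.

```python
import math

def hyperbolic(amount, delay, k=0.2):
    """Mazur's hyperbolic discount function: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, r=0.05):
    """Classical exponential discounting: V = A * exp(-r*D)."""
    return amount * math.exp(-r * delay)

# Immediate $50 vs $100 in 10 days: the impulsive choice wins...
now = (hyperbolic(50, 0), hyperbolic(100, 10))     # roughly (50.0, 33.3)
# ...but add a common 30-day front-end delay and the preference flips
later = (hyperbolic(50, 30), hyperbolic(100, 40))  # roughly (7.1, 11.1)
print(now[0] > now[1], later[0] < later[1])        # preference reversal

# Exponential discounting cannot reverse: a common added delay multiplies
# both values by the same factor exp(-r*30), leaving their ratio unchanged
e_now = exponential(100, 10) / exponential(50, 0)
e_later = exponential(100, 40) / exponential(50, 30)
print(abs(e_now - e_later) < 1e-12)
```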
Development of a generalized integral jet model
DEFF Research Database (Denmark)
Duijm, Nijs Jan; Kessler, A.; Markert, Frank
2017-01-01
model is needed to describe the rapid combustion of the flammable part of the plume (flash fire) and a third model has to be applied for the remaining jet fire. The objective of this paper is to describe the first steps of the development of an integral-type model describing the transient development...
Evaluation of water vapor distribution in general circulation models using satellite observations
Soden, Brian J.; Bretherton, Francis P.
1994-01-01
This paper presents a comparison of the water vapor distribution obtained from two general circulation models, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM), with satellite observations of total precipitable water (TPW) from Special Sensor Microwave/Imager (SSM/I) and upper tropospheric relative humidity (UTH) from GOES. Overall, both models are successful in capturing the primary features of the observed water vapor distribution and its seasonal variation. For the ECMWF model, however, a systematic moist bias in TPW is noted over well-known stratocumulus regions in the eastern subtropical oceans. Comparison with radiosonde profiles suggests that this problem is attributable to difficulties in modeling the shallowness of the boundary layer and large vertical water vapor gradients which characterize these regions. In comparison, the CCM is more successful in capturing the low values of TPW in the stratocumulus regions, although it tends to exhibit a dry bias over the eastern half of the subtropical oceans and a corresponding moist bias in the western half. The CCM also significantly overestimates the daily variability of the moisture fields in convective regions, suggesting a problem in simulating the temporal nature of moisture transport by deep convection. Comparison of the monthly mean UTH distribution indicates generally larger discrepancies than were noted for TPW owing to the greater influence of large-scale dynamical processes in determining the distribution of UTH. In particular, the ECMWF model exhibits a distinct dry bias along the Intertropical Convergence Zone (ITCZ) and a moist bias over the subtropical descending branches of the Hadley cell, suggesting an underprediction in the strength of the Hadley circulation. The CCM, on the other hand, demonstrates greater discrepancies in UTH than are observed for the ECMWF model, but none that are as
generalized constitutive model for stabilized quick clay
African Journals Online (AJOL)
An experimentally-based two yield surface constitutive model for cemented quick clay has been developed at NTNU, Norway, to reproduce the mechanical behavior of the stabilized quick clay in the triaxial p'-q stress space. The model takes into account the actual mechanical properties of the stabilized material, such as ...
Stratospheric General Circulation with Chemistry Model (SGCCM)
Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.
1990-01-01
In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).
Equilibrium in Generalized Cournot and Stackelberg Models
Bulavsky, V.A.; Kalashnikov, V.V.
1999-01-01
A model of an oligopolistic market with a homogeneous product is examined. Each subject of the model uses a conjecture about the market response to variations of its production volume. The conjecture value depends upon both the current total volume of production at the market and the subject's
Generalized coupling in the Kuramoto model
DEFF Research Database (Denmark)
Filatrella, G.; Pedersen, Niels Falsig; Wiesenfeld, K.
2007-01-01
We propose a modification of the Kuramoto model to account for the effective change in the coupling constant among the oscillators, as suggested by some experiments on Josephson junction, laser arrays, and mechanical systems, where the active elements are turned on one by one. The resulting model...... with the behavior of Josephson junctions coupled via a cavity....
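For reference, the standard (unmodified) Kuramoto model that the paper builds on can be integrated in a few lines using its mean-field form; the Gaussian frequency spread, coupling values, and integration settings below are illustrative, and this does not implement the paper's turn-on protocol.

```python
import math, random

def kuramoto_order(K, N=50, steps=2000, dt=0.05, seed=1):
    """Euler integration of the standard Kuramoto model. Returns the final
    order parameter r = |mean(exp(i*theta))|: r ~ 0 is incoherent,
    r ~ 1 is fully synchronized."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.2) for _ in range(N)]            # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]  # initial phases
    for _ in range(steps):
        # mean-field form: dtheta_i/dt = omega_i + K*r*sin(psi - theta_i),
        # equivalent to the all-to-all double sum (K/N)*sum_j sin(theta_j - theta_i)
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / N
    sx = sum(math.sin(t) for t in theta) / N
    return math.hypot(cx, sx)

# Coupling well above the synchronization threshold locks the oscillators
print(kuramoto_order(K=2.0) > kuramoto_order(K=0.05))
```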
Multiloop functional renormalization group for general models
Kugler, Fabian B.; von Delft, Jan
2018-02-01
We present multiloop flow equations in the functional renormalization group (fRG) framework for the four-point vertex and self-energy, formulated for a general fermionic many-body problem. This generalizes the previously introduced vertex flow [F. B. Kugler and J. von Delft, Phys. Rev. Lett. 120, 057403 (2018), 10.1103/PhysRevLett.120.057403] and provides the necessary corrections to the self-energy flow in order to complete the derivative of all diagrams involved in the truncated fRG flow. Due to its iterative one-loop structure, the multiloop flow is well suited for numerical algorithms, enabling improvement of many fRG computations. We demonstrate its equivalence to a solution of the (first-order) parquet equations in conjunction with the Schwinger-Dyson equation for the self-energy.
Precautions surrounding blood transfusion in autoimmune haemolytic anaemias are overestimated
Yürek, Salih; Mayer, Beate; Almahallawi, Mohammed; Pruss, Axel; Salama, Abdulgabar
2015-01-01
Background It is very evident that many precautions are taken regarding transfusion of red blood cells in patients with autoimmune haemolytic anaemia. Frequently, considerable efforts are made to examine the indication and serological compatibility prior to transfusion in such patients. However, at times, this may unnecessarily jeopardize patients who urgently require a red blood cell transfusion. Materials and methods Thirty-six patients with warm-type autoimmune haemolytic anaemia were included in this study. All patients had reactive serum autoantibodies and required blood transfusion. Standard serological assays were employed for the detection and characterization of antibodies to red blood cells. Results A positive direct antiglobulin test was observed in all 36 patients, in addition to detectable antibodies in both the eluate and serum. Significant alloantibodies were detected in the serum samples of three patients (anti-c, anti-Jka, and anti-E). In 32 patients, red blood cell transfusion was administered with no significant haemolytic transfusion reactions due to auto- and/or allo-antibodies. Due to overestimation of positive cross-matches, three patients received no transfusion or a delayed transfusion and died, and one patient died due to unrecognised blood loss and anaemia attributed to an ineffective red blood cell transfusion. Discussion Many of the reported recommendations regarding transfusion of red blood cells in autoimmune haemolytic anaemia are highly questionable, and positive serological cross-matches should not result in a delay or refusal of necessary blood transfusions. PMID:26192772
Overestimation of marsh vulnerability to sea level rise
Kirwan, Matthew L.; Temmerman, Stijn; Skeehan, Emily E.; Guntenspergen, Glenn R.; Fagherazzi, Sergio
2016-01-01
Coastal marshes are considered to be among the most valuable and vulnerable ecosystems on Earth, where the imminent loss of ecosystem services is a feared consequence of sea level rise. However, we show with a meta-analysis that global measurements of marsh elevation change indicate that marshes are generally building at rates similar to or exceeding historical sea level rise, and that process-based models predict survival under a wide range of future sea level scenarios. We argue that marsh vulnerability tends to be overstated because assessment methods often fail to consider biophysical feedback processes known to accelerate soil building with sea level rise, and the potential for marshes to migrate inland.
Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang
2006-01-01
This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…
Description of the General Equilibrium Model of Ecosystem Services (GEMES)
Travis Warziniack; David Finnoff; Jenny Apriesnig
2017-01-01
This paper serves as documentation for the General Equilibrium Model of Ecosystem Services (GEMES). GEMES is a regional computable general equilibrium model that is composed of values derived from natural capital and ecosystem services. It models households, producing sectors, and governments, linked to one another through commodity and factor markets. GEMES was...
The Five-Factor Model: General Overview
Directory of Open Access Journals (Sweden)
A A Vorobyeva
2011-12-01
The article describes the five-factor model (FFM), giving an overview of its history and basic dimensions, reviewing cross-cultural research conducted on the model, and highlighting some practical studies based on the FFM, including studies on job performance, leader performance and daily social interactions. An overview of the recent five-factor theory is also provided. According to the theory, the five factors are encoded in human genes; it is therefore almost impossible to change the basic factors themselves, but a person's behavior might be changed through characteristic adaptations, which alter behavior without altering the underlying personality dimensions.
Esperanto: A Unique Model for General Linguistics.
Dulichenko, Aleksandr D.
1988-01-01
Esperanto presents a unique model for linguistic research by allowing the study of language development from project to fully functioning language. Esperanto provides insight into the growth of polysemy and redundancy, as well as into language universals and the phenomenon of social control. (Author/CB)
Disregarding hearing loss leads to overestimation of age-related cognitive decline.
Guerreiro, Maria J S; Van Gerven, Pascal W M
2017-08-01
Aging is associated with cognitive and sensory decline. While several studies have indicated greater cognitive decline among older adults with hearing loss, the extent to which age-related differences in cognitive processing may have been overestimated due to group differences in sensory processing has remained unclear. We addressed this question by comparing younger adults, older adults with good hearing, and older adults with poor hearing in several cognitive domains: working memory, selective attention, processing speed, inhibitory control, and abstract reasoning. Furthermore, we examined whether sensory-related cognitive decline depends on cognitive demands and on the sensory modality used for assessment. Our results revealed that age-related cognitive deficits in most cognitive domains varied as a function of hearing loss, being more pronounced in older adults with poor hearing. Furthermore, sensory-related cognitive decline was observed across different levels of cognitive demands and independent of the sensory modality used for cognitive assessment, suggesting a generalized effect of age-related hearing loss on cognitive functioning. As most cognitive aging studies have not taken sensory acuity into account, age-related cognitive decline may have been overestimated. Copyright © 2017 Elsevier Inc. All rights reserved.
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...
Development of independent generalized probabilistic models for regulatory activities
International Nuclear Information System (INIS)
Gashev, M.Kh.; Zinchenko, Yu.A.; Stefanishin, N.A.
2012-01-01
The paper discusses the development of probabilistic models to be used in regulatory activities. Results from the development of independent generalized PSA-1 models for purposes of SNRIU risk-informed regulation are presented
Santini, M.; Caporaso, L.
2017-12-01
Despite the importance of water resources in the context of climate change, it remains difficult to correctly simulate the freshwater cycle over land with General Circulation and Earth System Models (GCMs and ESMs). Existing efforts within the Climate Model Intercomparison Project 5 (CMIP5) were mainly devoted to the validation of atmospheric variables such as temperature and precipitation, with little attention to discharge. Here we investigate the present-day performance of GCMs and ESMs participating in CMIP5 in simulating the discharge of the Congo River to the sea, exploiting: i) the long-term availability of discharge data for the Kinshasa hydrological station, representative of more than 95% of the water flowing through the whole catchment; and ii) the river's still limited alteration by human intervention, which enables comparison with the (mostly) natural streamflow simulated within CMIP5. Our findings suggest that most models overestimate the streamflow in terms of seasonal cycle, especially in late winter and spring, while the overestimation and the variability across models are lower in late summer. Weighted ensemble means are also calculated, based on simulation performance under several metrics, and show some improvement of the results. Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well-consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading. These inconsistencies produce incorrect assessments for water management planning and infrastructure (e.g. dams or irrigated areas), especially if models are used instead of measurements, as for ungauged basins or basins with insufficient data, as well as when relying on models for future estimates without first quantifying model biases.
Modeling electrokinetics in ionic liquids: General
Energy Technology Data Exchange (ETDEWEB)
Wang, Chao [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bao, Jie [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland WA USA; Pan, Wenxiao [Department of Mechanical Engineering, University of Wisconsin-Madison, Madison WI USA; Sun, Xin [Physical and Computational Science Directorate, Pacific Northwest National Laboratory, Richland WA USA
2017-04-07
Using direct numerical simulations, we provide a thorough study of the electrokinetics of ionic liquids. In particular, the modified Poisson-Nernst-Planck (MPNP) equations are solved to capture the crowding and overscreening effects that are characteristic of an ionic liquid. For modeling electrokinetic flows in an ionic liquid, the MPNP equations are coupled with the Navier-Stokes equations to study the interplay of ion transport, hydrodynamics, and electrostatic forces. Specifically, we consider ion transport between two parallel plates, charging dynamics in a 2D straight-walled pore, electro-osmotic flow in a nano-channel, electroconvective instability on a plane ion-selective surface, and electroconvective flow on a curved ion-selective surface. We discuss how the crowding and overscreening effects and their interplay affect the electrokinetic behavior of ionic liquids in these applications.
On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model
International Nuclear Information System (INIS)
May, W.
1994-01-01
First, we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Second, we assess the possible influence of Sea Surface Temperatures on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the model's underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50° S, but overestimated at higher latitudes. This is the case for the contributions of the high-frequency and the low-frequency transient fluctuations as well. Further, the model indicates a strong tendency toward zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response to be found in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model gives a strong, but regional, response to the interannual variations of the SST. (orig.)
Anisotropic cosmological models and generalized scalar tensor theory
Indian Academy of Sciences (India)
pp. 669–673. Authors: Subenoy Chakraborty, Batul Chandra Santra and others. Keywords: anisotropic cosmological models; general scalar tensor theory; inflation. PACS Nos 98.80.Hw; 04.50.+h; 98.80.Cq.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) ... Based on neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization ...
Managing heteroscedasticity in general linear models.
Rosopa, Patrick J; Schaffer, Meline M; Schroeder, Amber N
2013-09-01
Heteroscedasticity refers to a violation of the statistical assumption of homoscedasticity, i.e., constant error variance. When the homoscedasticity assumption is violated, Type I error rates can increase or statistical power can decrease. Because this can adversely affect substantive conclusions, the failure to detect and manage heteroscedasticity could have serious implications for theory, research, and practice. In addition, heteroscedasticity is not uncommon in the behavioral and social sciences. Thus, in the current article, we synthesize the extant literature in applied psychology, econometrics, quantitative psychology, and statistics, and we offer recommendations for researchers and practitioners regarding available procedures for detecting heteroscedasticity and mitigating its effects. In addition to discussing the strengths and weaknesses of various procedures and comparing them in terms of existing simulation results, we describe a 3-step data-analytic process for detecting and managing heteroscedasticity: (a) fitting a model based on theory and saving residuals, (b) analyzing the residuals, and (c) drawing statistical inferences (e.g., hypothesis tests and confidence intervals) about the parameter estimates. We also demonstrate this data-analytic process using an illustrative example. Overall, detecting violations of the homoscedasticity assumption and mitigating its biasing effects can strengthen the validity of inferences from behavioral and social science data.
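The 3-step process can be sketched as follows (a minimal NumPy illustration: ordinary least squares, a Breusch-Pagan-style residual check, and White-type HC0 robust standard errors; the simulated data and all names are ours, not the article's):

```python
import numpy as np

def fit_ols(X, y):
    """Step (a): fit the model and save residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def bp_statistic(X, resid):
    """Step (b): residual analysis. Regress squared residuals on X;
    n * R^2 is large when the error variance depends on the predictors."""
    u2 = resid ** 2
    g, *_ = np.linalg.lstsq(X, u2, rcond=None)
    r2 = 1 - np.sum((u2 - X @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    return len(resid) * r2

def hc0_se(X, resid):
    """Step (c): heteroscedasticity-consistent (sandwich) standard errors,
    so hypothesis tests remain valid under heteroscedasticity."""
    bread = np.linalg.inv(X.T @ X)
    meat = (X * resid[:, None] ** 2).T @ X
    return np.sqrt(np.diag(bread @ meat @ bread))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 3.0 * x + rng.normal(0.0, x)   # error spread grows with x
beta, resid = fit_ols(X, y)
print(bp_statistic(X, resid))            # compare against the 5% chi-square cutoff
print(hc0_se(X, resid))                  # robust standard errors for both coefficients
```

Since the simulated error spread grows with x, the statistic in step (b) should comfortably exceed the usual chi-square cutoff, signaling that the robust errors of step (c) are the ones to report.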
A generalized model via random walks for information filtering
Energy Technology Data Exchange (ETDEWEB)
Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)
2016-08-06
A simple general mechanism may lurk beneath the collaborative filtering and interdisciplinary physics approaches that have been successfully applied to online e-commerce platforms. Motivated by this idea, we propose a generalized model employing random-walk dynamics on bipartite networks. By taking degree information into account, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even broad extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random-walk process on bipartite networks, and propose a strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision. - Highlights: • We propose a generalized recommendation model employing random-walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves recommendation precision.
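The kind of degree-blended random walk described here can be sketched as follows (a minimal NumPy illustration of mass diffusion on a user-object bipartite network; the hybrid exponent `lam` and all names are our stand-ins for the paper's hybrid degree strategy, not its exact formulation):

```python
import numpy as np

def hybrid_recommend(A, user, lam=0.5):
    """Two-step random-walk ('mass diffusion') scores on a user-object bipartite
    network. lam blends the object-degree normalization between the two objects
    exchanging resource; lam = 1 recovers the plain mass-diffusion walk."""
    A = np.asarray(A, dtype=float)
    k_user = A.sum(axis=1)                   # user degrees
    k_obj = A.sum(axis=0)                    # object degrees
    # resource spreads from each object to its users and back to objects:
    # S[i, j] = sum_l a_li * a_lj / k_user(l)
    S = A.T @ (A / k_user[:, None])
    W = S / np.outer(k_obj ** (1.0 - lam), k_obj ** lam)
    scores = W @ A[user]                     # initial resource = the user's objects
    scores[A[user] > 0] = -np.inf            # do not re-recommend owned objects
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(np.argmax(hybrid_recommend(A, user=0)))
```

In the toy network, user 0 shares object 0 with user 1 and object 1 with user 2, so object 2 (collected by both neighbors) accumulates the most diffused resource.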
Gniewosz, Burkhard; Watt, Helen M G
2017-07-01
This study examines whether and how student-perceived parents' and teachers' overestimation of students' own perceived mathematical ability can explain trajectories of adolescents' mathematical task values (intrinsic and utility), controlling for measured achievement, following expectancy-value and self-determination theories. Longitudinal data come from a 3-cohort (mean ages 13.25, 12.36, and 14.41 years; Grades 7-10), 4-wave data set of 1,271 Australian secondary school students. Longitudinal structural equation models revealed positive effects of student-perceived overestimation of math ability by parents and teachers on the development of students' intrinsic and utility math task values. Perceived parental overestimations predicted intrinsic task value changes between all measurement occasions, whereas utility task value changes were predicted only between Grades 9 and 10. Parental influences were stronger for intrinsic than for utility task values. Teacher influences were similar for both forms of task values and commenced after the curricular school transition in Grade 8. Results support the assumption that the perceived encouragement conveyed by parents' and teachers' beliefs about students' mathematical ability promotes positive development of mathematics task values. Moreover, results point to different mechanisms underlying parents' and teachers' support. Finally, the longitudinal changes indicate transition-related increases in the effects of student-perceived overestimations and stronger effects for intrinsic than for utility values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect
Kominsky, Jonathan F.; Keil, Frank C.
2014-01-01
Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it is stronger in young children. Study 3 demonstrates that adults are explicitly aware of the availability of outside knowledge, and that this awareness may be related to the strength of the MM effect. Study 4 rules out general overconfidence effects by examining a metalinguistic task in which adults are well-calibrated. PMID:24890038
General Friction Model Extended by the Effect of Strain Hardening
DEFF Research Database (Denmark)
Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels
2016-01-01
An extension of the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady-state sliding. The original model for the real contact area as a function of the normalized contact pressure is based on slip-line analysis and hence on the assumption of rigid-ideally plastic material behavior. In the present work, a general finite element model is established to, firstly, reproduce the original model under the assumption of rigid-ideally plastic material and, secondly, to extend the solution by the influence of material strain hardening. This corresponds to adding a new variable and, therefore, a new axis to the general friction model. The resulting model is presented in a combined function suitable for e.g. finite element modeling.
Reliability assessment of competing risks with generalized mixed shock models
International Nuclear Information System (INIS)
Rafiee, Koosha; Feng, Qianmei; Coit, David W.
2017-01-01
This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
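A Monte Carlo sketch of this kind of competing-risk model is below (all distributions, parameter values, and the run-of-m trigger condition are illustrative stand-ins, not the paper's calibrated micro-electro-mechanical systems example):

```python
import numpy as np

def simulate_failure_time(rng, horizon=200.0, deg_rate=1.0, soft_limit=150.0,
                          shock_rate=0.5, shock_mean=50.0, shock_sd=15.0,
                          hard_limit=120.0, jump_scale=2.0, big_shock=70.0,
                          run_m=3, rate_boost=0.5, limit_drop=10.0):
    """One history of a unit failing by soft failure (degradation crossing
    soft_limit) or hard failure (a shock exceeding the current hard limit).
    Nonfatal shocks damage the unit with a jump; a run of run_m consecutive
    'big' shocks triggers the generalized shock condition, which accelerates
    degradation and lowers the hard-failure threshold."""
    t, deg, run = 0.0, 0.0, 0
    rate, limit = deg_rate, hard_limit
    while True:
        dt = rng.exponential(1.0 / shock_rate)        # next shock arrival
        if deg + rate * dt >= soft_limit:             # degradation crosses first
            return min(t + (soft_limit - deg) / rate, horizon), 'soft'
        t += dt
        deg += rate * dt
        if t > horizon:
            return horizon, 'censored'
        w = rng.normal(shock_mean, shock_sd)          # shock magnitude
        if w >= limit:
            return t, 'hard'                          # hard failure
        deg += jump_scale * max(w, 0.0) / shock_mean  # impact 1: damage jump
        if deg >= soft_limit:
            return t, 'soft'
        run = run + 1 if w >= big_shock else 0
        if run == run_m:                              # generalized shock condition
            rate += rate_boost                        # impact 2: faster degradation
            limit -= limit_drop                       # impact 3: weaker unit
            run = 0

rng = np.random.default_rng(1)
times, modes = zip(*(simulate_failure_time(rng) for _ in range(2000)))
```

Replaying many histories gives the empirical distribution of time-to-failure and the split between soft and hard failure modes, which is the quantity a reliability analysis of such a model targets.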
Model Reduction of Switched Systems Based on Switching Generalized Gramians
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Wisniewski, Rafal
2012-01-01
In this paper, a general method for model order reduction of discrete-time switched linear systems is presented. The proposed technique uses switching generalized gramians. It is shown that several classical reduction methods can be developed within the generalized gramian framework for the model reduction of linear systems and for the reduction of switched systems. Discrete-time balanced reduction within a specified frequency interval is taken as an example within this framework. To avoid numerical instability and to increase numerical efficiency, a generalized gramian-based Petrov...
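Classical (non-switched) gramian-based balanced reduction, which the switching generalized gramian framework extends, can be sketched as follows (plain NumPy; the gramians come from the series solution of the discrete Lyapunov equations, and the example system is ours, not the paper's):

```python
import numpy as np

def discrete_gramian(A, Q, n_terms=500):
    """Series solution of A X A^T - X + Q = 0, i.e. X = sum_k A^k Q (A^T)^k,
    valid for a stable A (spectral radius < 1)."""
    X, term = Q.copy(), Q.copy()
    for _ in range(n_terms):
        term = A @ term @ A.T
        X += term
    return X

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable discrete-time system (A, B, C)."""
    Wc = discrete_gramian(A, B @ B.T)      # controllability gramian
    Wo = discrete_gramian(A.T, C.T @ C)    # observability gramian
    Lc = np.linalg.cholesky(Wc)
    Lo = np.linalg.cholesky(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)    # s holds the Hankel singular values
    W = Lo @ U[:, :r] / np.sqrt(s[:r])     # left projection
    T = Lc @ Vt[:r].T / np.sqrt(s[:r])     # right projection (W.T @ T = I_r)
    return W.T @ A @ T, W.T @ B, C @ T, s

A = np.diag([0.9, 0.5, 0.1])
B = np.ones((3, 1))
C = np.ones((1, 3))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The discarded Hankel singular values bound the approximation error (the H-infinity error is at most twice their sum), which is the property the frequency-interval and switched generalizations aim to preserve.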
A General Polygon-based Deformable Model for Object Recognition
DEFF Research Database (Denmark)
Jensen, Rune Fisker; Carstensen, Jens Michael
1999-01-01
We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distribution, which, combined with a match of the warped intensity template to the image, forms the final criterion used for localization and recognition of a given object. The chosen representation gives the model the ability to model an almost arbitrary object. Besides the actual model, a full general scheme...
Generalized Linear Models with Applications in Engineering and the Sciences
Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J
2012-01-01
Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
Simulation modelling in agriculture: General considerations. | R.I. ...
African Journals Online (AJOL)
The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general ... in the advisory service. Keywords: agriculture; botany; computer simulation; modelling; simulation model; simulation modelling; south africa; techniques ...
A Duality Result for the Generalized Erlang Risk Model
Directory of Open Access Journals (Sweden)
Lanpeng Ji
2014-11-01
In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.
Critical Comments on the General Model of Instructional Communication
Walton, Justin D.
2014-01-01
This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…
Hierarchical Generalized Linear Models for the Analysis of Judge Ratings
Muckle, Timothy J.; Karabatsos, George
2009-01-01
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
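Written out, the MFRM that the article casts as a two-level HGLM models the log-odds of a positive rating as examinee ability (the random intercept) minus fixed item and judge effects; a minimal sketch, with variable names ours:

```python
import numpy as np

def mfrm_prob(theta, b_item, c_judge):
    """Many-faceted Rasch model: probability of a correct/positive rating for an
    examinee of ability theta (the HGLM's random intercept) on an item of
    difficulty b_item scored by a judge of severity c_judge (fixed effects)."""
    return 1.0 / (1.0 + np.exp(-(theta - b_item - c_judge)))
```

With `b_item = c_judge = 0`, an examinee of average ability succeeds with probability 0.5; a harsher judge (larger `c_judge`) lowers that probability for every examinee, which is exactly the extra fixed-effect facet the MFRM adds to the plain Rasch model.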
A PROPOSAL FOR GENERALIZATION OF 3D MODELS
Directory of Open Access Journals (Sweden)
A. Uyar
2017-11-01
In recent years, 3D models have been created of many cities around the world. Most 3D city models have been introduced as purely graphic or geometric models, while their semantic and topographic aspects have been neglected. In order to use 3D city models for tasks beyond visualization, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept for 3D modelling, can be defined as the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and outlook on open problems and future work.
The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.
von Davier, Matthias
2014-02-01
The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
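The flavor of this equivalency can be illustrated for a single item (a sketch: the one-dimensional reparameterization over the transformed conjunction skill eta is our simplification of the paper's Q-matrix mappings, not the full construction):

```python
import numpy as np

def dina_prob(alpha, q, guess, slip):
    """DINA item response probability: eta is the conjunction of the skills the
    item's Q-matrix row requires; full mastery yields 1-slip, otherwise guess."""
    alpha, q = np.asarray(alpha), np.asarray(q)
    eta = int(np.all(alpha[q == 1] == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

def compensatory_prob(eta, guess, slip):
    """The same two probabilities written as a linear-logistic (compensatory)
    model on a single transformed skill a* = eta -- no interaction terms needed,
    which is the crux of re-expressing a conjunctive model compensatorily."""
    beta0 = np.log(guess / (1 - guess))                # logit of the guessing rate
    beta1 = np.log((1 - slip) / slip) - beta0          # main effect of a*
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * eta)))
```

For any guess/slip pair the two parameterizations agree exactly on both skill profiles, mirroring how the paper's mappings reproduce all DINA probabilities inside a compensatory framework.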
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
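The bootstrap-based variable inclusion idea can be sketched as follows (a minimal NumPy illustration: each bootstrap resample is refit over all candidate subsets of a small logistic model and the AIC-best subset is recorded; the data, helper names, and settings are ours, not the article's):

```python
import numpy as np
from itertools import combinations

def fit_logistic(X, y, iters=25):
    """Simple Newton-Raphson logistic regression; returns coefficients and log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-6 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30))), 1e-12, 1 - 1e-12)
    return beta, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def inclusion_proportions(X, y, n_boot=100, seed=0):
    """Refit the AIC-best subset on bootstrap resamples and record how often each
    candidate variable enters: the raw material of a variable inclusion plot."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    subsets = [c for r in range(p + 1) for c in combinations(range(p), r)]
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        Xb, yb = X[idx], y[idx]
        best, best_aic = (), np.inf
        for sub in subsets:
            D = np.column_stack([np.ones(n)] + [Xb[:, j] for j in sub])
            _, ll = fit_logistic(D, yb)
            aic = 2 * (len(sub) + 1) - 2 * ll
            if aic < best_aic:
                best, best_aic = sub, aic
        counts[list(best)] += 1
    return counts / n_boot

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))              # x0 is a real predictor, x1 is pure noise
y = (rng.random(n) < 1 / (1 + np.exp(-2 * X[:, 0]))).astype(float)
props = inclusion_proportions(X, y)
```

Plotting `props` against the candidate variables (sorted by inclusion frequency) gives a variable inclusion plot: the genuine predictor should appear in nearly every bootstrap-selected model, the noise variable only sporadically.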
General classical solutions in the noncommutative CP^(N-1) model
International Nuclear Information System (INIS)
Foda, O.; Jack, I.; Jones, D.R.T.
2002-01-01
We give an explicit construction of general classical solutions of the noncommutative CP^(N-1) model in two dimensions, showing that they correspond to integer values of the action and topological charge. We also give explicit solutions of the Dirac equation in the background of these general solutions and show that the index theorem is satisfied.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
General Computational Model for Human Musculoskeletal System of Spine
Directory of Open Access Journals (Sweden)
Kyungsoo Kim
2012-01-01
A general computational model of the human lumbar spine and trunk muscles, including optimization formulations, is provided. For a given condition, the trunk muscle forces can be predicted in a way that respects human physiology, including the follower-load concept. The feasibility of the solution can be indirectly validated by comparing the compressive force, the shear force, and the joint moment. The presented general computational model and optimization technology can serve as fundamental tools for understanding the control principles of human trunk muscles.
Specific and General Human Capital in an Endogenous Growth Model
Evangelia Vourvachaki; Vahagn Jerbashian; : Sergey Slobodyan
2014-01-01
In this article, we define specific (general) human capital in terms of the occupations whose use is spread in a limited (wide) set of industries. We analyze the growth impact of an economy's composition of specific and general human capital, in a model where education and research and development are costly and complementary activities. The model suggests that a declining share of specific human capital, as observed in the Czech Republic, can be associated with a lower rate of long-term grow...
Pricing Participating Products under a Generalized Jump-Diffusion Model
Directory of Open Access Journals (Sweden)
Tak Kuen Siu
2008-01-01
We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.
Generalized continua as models for classical and advanced materials
Forest, Samuel
2016-01-01
This volume is devoted to a timely topic that is the focus of various research groups worldwide. It contains contributions describing material behavior on different scales, new existence and uniqueness theorems, and the formulation of constitutive equations for advanced materials. The main emphasis of the contributions is on the following items: - Modelling and simulation of natural and artificial materials with significant microstructure, - Generalized continua as a result of multi-scale models, - Multi-field actions on materials resulting in generalized material models, - Theories including higher gradients, and - Comparison with discrete modelling approaches.
Extending the generalized Chaplygin gas model by using geometrothermodynamics
Aviles, Alejandro; Bastarrachea-Almodovar, Aztlán; Campuzano, Lorena; Quevedo, Hernando
2012-09-01
We use the formalism of geometrothermodynamics to derive fundamental thermodynamic equations that are used to construct general relativistic cosmological models. In particular, we show that the simplest possible fundamental equation, which corresponds in geometrothermodynamics to a system with no internal thermodynamic interaction, describes the different fluids of the standard model of cosmology. In addition, a particular fundamental equation with internal thermodynamic interaction is shown to generate a new cosmological model that correctly describes the dark sector of the Universe and contains as a special case the generalized Chaplygin gas model.
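For context, the standard background equations of the generalized Chaplygin gas being extended here are (textbook material, not the paper's geometrothermodynamic derivation):

```latex
% Generalized Chaplygin gas equation of state
p = -\frac{A}{\rho^{\alpha}}, \qquad A > 0, \quad 0 < \alpha \le 1.
% Substituting into the continuity equation \dot{\rho} + 3H(\rho + p) = 0
% and integrating over the scale factor a gives
\rho(a) = \left[ A + \frac{B}{a^{3(1+\alpha)}} \right]^{\frac{1}{1+\alpha}},
% which interpolates between dust-like behavior (\rho \propto a^{-3}) at
% early times and a cosmological-constant-like density (\rho \to A^{1/(1+\alpha)})
% at late times -- the unified dark-sector description the model is known for.
```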
A Semi-Tychonic Model in General Relativity
Murphy, George L.
1998-10-01
In the sixteenth century Tycho Brahe proposed a geocentric model of the solar system kinematically equivalent to the heliocentric Copernican model. There has been disagreement even among prominent relativists over whether or not relativity validates use of a geocentric model. Tycho's desire for a non-rotating earth cannot be satisfied, but we demonstrate here dynamical equivalence between a Copernican and a "semi-Tychonic" model by using an appropriate accelerated reference frame in general relativity. (The idea of absolute space in Newtonian mechanics makes use of Einstein's theory desirable even in the Newtonian approximation.) Optical questions are easily dealt with. Our treatment provides a satisfactory answer for the important historical question concerning geocentric and heliocentric models, and is also of pedagogic value. In addition, it gives insights into the real generality of general relativity, the nature of the relativistic equations of motion, and the analogy between coordinate and gauge transformations.
Gun Carrying by High School Students in Boston, MA: Does Overestimation of Peer Gun Carrying Matter?
Hemenway, David; Vriniotis, Mary; Johnson, Renee M.; Miller, Matthew; Azrael, Deborah
2011-01-01
This paper investigates: (1) whether high school students overestimate gun carrying by their peers, and (2) whether those students who overestimate peer gun carrying are more likely to carry firearms. Data come from a randomly sampled survey conducted in 2008 of over 1700 high school students in Boston, MA. Over 5% of students reported carrying a…
Why Terrorists Overestimate the Odds of Victory
Directory of Open Access Journals (Sweden)
Max Abrahms
2012-10-01
Terrorism is puzzling behavior for political scientists. On one hand, terrorist attacks generally hail from the politically aggrieved. On the other hand, a growing body of scholarship finds the tactic politically counterproductive. Unlike guerrilla attacks on military targets, terrorist attacks on civilian targets lower the odds of governments making concessions. This article proposes and tests a psychological theory to account for why militant groups engage in terrorism, given the political costs of attacking civilians.
Generalized entropy formalism and a new holographic dark energy model
Sayahian Jahromi, A.; Moosavi, S. A.; Moradpour, H.; Morais Graça, J. P.; Lobo, I. P.; Salako, I. G.; Jawad, A.
2018-05-01
Recently, the Rényi and Tsallis generalized entropies have been used extensively to study various cosmological and gravitational setups. Here, using a special type of generalized entropy that generalizes both the Rényi and Tsallis entropies, together with the holographic principle, we build a new model for holographic dark energy. Thereafter, considering a flat FRW universe filled by a pressureless component and the newly obtained dark energy model, the evolution of the cosmos is investigated, showing satisfactory results and behavior. In our model, the Hubble horizon plays the role of IR cutoff, and there is no mutual interaction between the cosmos components. Our results indicate that the generalized entropy formalism may open a new window onto the nature of spacetime and its properties.
A generalized model via random walks for information filtering
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation.
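One such random-walk dynamic on a user-item bipartite network can be sketched with the mass-diffusion (ProbS-style) kernel, a standard special case of the family the abstract describes; the toy adjacency matrix and the choice of kernel are illustrative assumptions:

```python
import numpy as np

# User-item adjacency: rows = users, cols = items (1 = collected).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

ku = A.sum(axis=1)          # user degrees
ki = A.sum(axis=0)          # item degrees

# Two-step random walk (item -> users -> items), the mass-diffusion
# kernel: W[a, b] = sum_u A[u, a] * A[u, b] / (ku[u] * ki[b])
W = (A / ku[:, None]).T @ (A / ki[None, :])

target = 0                              # recommend for the first user
scores = W @ A[target]                  # resource landing on each item
scores[A[target] > 0] = -np.inf         # mask items already collected
recommended = int(np.argmax(scores))
print(recommended)                      # -> 2 for this toy network
```

Replacing the `1/ku` and `1/ki` weightings with other powers of the degrees yields the hybrid variants the abstract mentions.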
Americans Still Overestimate Social Class Mobility: A Pre-Registered Self-Replication.
Kraus, Michael W
2015-01-01
Kraus and Tan (2015) hypothesized that Americans tend to overestimate social class mobility in society, and do so because they seek to protect the self. This paper reports a pre-registered exact replication of Study 3 from this original paper and finds, consistent with the original study, that Americans substantially overestimate social class mobility, that people provide greater overestimates when made while thinking of similar others, and that high perceived social class is related to greater overestimates. The current results provide additional evidence consistent with the idea that people overestimate class mobility to protect their beliefs in the promise of equality of opportunity. Discussion considers the utility of pre-registered self-replications as one tool for encouraging replication efforts and assessing the robustness of effect sizes.
On the general ontological foundations of conceptual modeling
Guizzardi, G.; Herre, Heinrich; Wagner, Gerd; Spaccapietra, Stefano; March, Salvatore T.; Kambayashi, Yahiko
2002-01-01
As pointed out in the pioneering work of [WSW99,EW01], an upper-level ontology allows one to evaluate the ontological correctness of a conceptual model and to develop guidelines for how the constructs of a conceptual modeling language should be used. In this paper we adopt the General Ontological Language
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
Energy Technology Data Exchange (ETDEWEB)
Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS
WEDEL, M; DESARBO, WS
1995-01-01
A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of
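The EM idea behind such mixture likelihood approaches can be sketched for the special case of a two-component mixture of Gaussian linear regressions; the data-generating slopes, initialization, and all tuning choices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from two latent classes with different slopes.
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5                      # true (unobserved) class
y = np.where(z, 2.0 * x, -1.0 * x) + rng.normal(0, 0.3, n)

# EM for a two-component mixture of linear regressions: alternate
# posterior membership probabilities (E-step) with weighted least
# squares for the component slopes (M-step).
b = np.array([1.0, -0.5])                    # initial slopes
pi, s = 0.5, 1.0                             # mixing weight, noise sd
for _ in range(200):
    # E-step: posterior probability of class 0 for each observation.
    d0 = pi * np.exp(-0.5 * ((y - b[0] * x) / s) ** 2)
    d1 = (1 - pi) * np.exp(-0.5 * ((y - b[1] * x) / s) ** 2)
    r = d0 / (d0 + d1)
    # M-step: responsibility-weighted least squares per component.
    b[0] = np.sum(r * x * y) / np.sum(r * x * x)
    b[1] = np.sum((1 - r) * x * y) / np.sum((1 - r) * x * x)
    pi = r.mean()
    resid2 = r * (y - b[0] * x) ** 2 + (1 - r) * (y - b[1] * x) ** 2
    s = np.sqrt(resid2.mean())

print(sorted(np.round(b, 2)))   # slopes recovered near -1 and 2
```

Swapping the Gaussian density and least-squares step for another exponential-family density and its weighted GLM fit gives the general mixture-of-GLMs scheme.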
Response of an ocean general circulation model to wind and ...
Indian Academy of Sciences (India)
The stretched-coordinate ocean general circulation model has been designed to study the observed variability due to wind and thermodynamic forcings. The model domain extends from 60°N to 60°S and is cyclically continuous in the longitudinal direction. The horizontal resolution is 5° × 5° and there are 9 discrete vertical levels.
Bianchi type IX string cosmological model in general relativity
Indian Academy of Sciences (India)
We have investigated Bianchi type IX string cosmological models in general relativity. To get a determinate solution, we have assumed the condition ρ = λ, i.e. the rest energy density for a cloud of strings is equal to the string tension density. The various physical and geometrical aspects of the models are also discussed.
Stability analysis for a general age-dependent vaccination model
International Nuclear Information System (INIS)
El Doma, M.
1995-05-01
A general age-dependent SIR epidemic model with vaccination is investigated when the fertility, mortality and removal rates depend on age. We give threshold criteria for the existence of equilibria and perform a stability analysis. Furthermore, a critical vaccination coverage that is sufficient to eradicate the disease is determined.
Double generalized linear compound poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution … in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice for the response variable distribution on both mean and dispersion parameter estimates. We provide an R implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances.
Physically-Derived Dynamical Cores in Atmospheric General Circulation Models
Rood, Richard B.; Lin, Shian-Jiann
1999-01-01
The algorithm chosen to represent the advection in atmospheric models is often used as the primary attribute to classify the model. Meteorological models are generally classified as spectral or grid point, with the term grid point implying discretization using finite differences. These traditional approaches have a number of shortcomings that render them non-physical. That is, they provide approximate solutions to the conservation equations that do not obey the fundamental laws of physics. The most commonly discussed shortcomings are overshoots and undershoots which manifest themselves most overtly in the constituent continuity equation. For this reason many climate models have special algorithms to model water vapor advection. This talk focuses on the development of an atmospheric general circulation model which uses a consistent physically-based advection algorithm in all aspects of the model formulation. The shallow-water model of Lin and Rood (QJRMS, 1997) is generalized to three dimensions and combined with the physics parameterizations of NCAR's Community Climate Model. The scientific motivation for the development is to increase the integrity of the underlying fluid dynamics so that the physics terms can be more effectively isolated, examined, and improved. The expected benefits of the new model are discussed and results from the initial integrations will be presented.
A general model for membrane-based separation processes
DEFF Research Database (Denmark)
Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil
2009-01-01
A separation process could be defined as a process that transforms a given mixture of chemicals into two or more compositionally distinct end-use products. One way to design these separation processes is to employ a model-based approach, where mathematical models that reliably predict the process behaviour will play an important role. In this paper, modelling of membrane-based processes for separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented. The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding …
Schluchter, Mark D.
2008-01-01
In behavioral research, interest is often in examining the degree to which the effect of an independent variable X on an outcome Y is mediated by an intermediary or mediator variable M. This article illustrates how generalized estimating equations (GEE) modeling can be used to estimate the indirect or mediated effect, defined as the amount by…
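The product-of-coefficients estimate of the mediated effect described above can be sketched with ordinary least squares in place of the full GEE machinery (a reasonable simplification for independent observations); the variable names and effect sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated mediation: X -> M -> Y with a = 0.5, b = 0.8, direct c' = 0.2.
n = 5000
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)
Y = 0.8 * M + 0.2 * X + rng.normal(size=n)

def ols(y, cols):
    """Least-squares coefficients of y on an intercept plus cols."""
    Z = np.column_stack([np.ones_like(y)] + cols)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a = ols(M, [X])[1]            # effect of X on the mediator M
b = ols(Y, [X, M])[2]         # effect of M on Y, adjusting for X
indirect = a * b              # product-of-coefficients mediated effect
print(indirect)               # close to 0.5 * 0.8 = 0.40
```

The GEE formulation replaces these two separate least-squares fits with one set of stacked estimating equations, which is what allows correlated or clustered outcomes.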
QCD Sum Rules and Models for Generalized Parton Distributions
Energy Technology Data Exchange (ETDEWEB)
Anatoly Radyushkin
2004-10-01
I use QCD sum rule ideas to construct models for generalized parton distributions. To this end, the perturbative parts of QCD sum rules for the pion and nucleon electromagnetic form factors are interpreted in terms of GPDs and two models are discussed. One of them takes the double Borel transform at adjusted value of the Borel parameter as a model for nonforward parton densities, and another is based on the local duality relation. Possible ways of improving these Ansaetze are briefly discussed.
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
Directory of Open Access Journals (Sweden)
Shelton Peiris
2017-12-01
This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.
Do global change experiments overestimate impacts on terrestrial ecosystems?
DEFF Research Database (Denmark)
Leuzinger, Sebastian; Luo, Yiqi; Beier, Claus
2011-01-01
In recent decades, many climate manipulation experiments have investigated biosphere responses to global change. These experiments typically examined effects of elevated atmospheric CO2, warming or drought (driver variables) on ecosystem processes such as the carbon and water cycle (response variables). Because experiments are inevitably constrained in the number of driver variables tested simultaneously, as well as in time and space, a key question is how results are scaled up to predict net ecosystem responses. In this review, we argue that there might be a general trend for the magnitude of the responses to decline with higher-order interactions, longer time periods and larger spatial scales. This means that on average, both positive and negative global change impacts on the biosphere might be dampened more than previously assumed.
Generalized heat-transport equations: parabolic and hyperbolic models
Rogolino, Patrizia; Kovács, Robert; Ván, Peter; Cimmelli, Vito Antonio
2018-03-01
We derive two different generalized heat-transport equations: the most general one, of the first order in time and second order in space, encompasses some well-known heat equations and describes the hyperbolic regime in the absence of nonlocal effects. Another, less general, of the second order in time and fourth order in space, is able to describe hyperbolic heat conduction also in the presence of nonlocal effects. We investigate the thermodynamic compatibility of both models by applying some generalizations of the classical Liu and Coleman-Noll procedures. In both cases, constitutive equations for the entropy and for the entropy flux are obtained. For the second model, we consider a heat-transport equation which includes nonlocal terms and study the resulting set of balance laws, proving that the corresponding thermal perturbations propagate with finite speed.
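The hierarchy the abstract alludes to can be illustrated with the standard Fourier, Maxwell-Cattaneo and Guyer-Krumhansl flux laws; these are common textbook forms and not necessarily the exact equations derived in the paper:

```latex
% Fourier, Maxwell-Cattaneo (hyperbolic) and Guyer-Krumhansl (nonlocal)
% heat-flux laws, in order of increasing generality
% (q = heat flux, \lambda = conductivity, \tau = relaxation time):
\mathbf{q} = -\lambda \nabla T, \qquad
\tau\, \partial_t \mathbf{q} + \mathbf{q} = -\lambda \nabla T, \qquad
\tau\, \partial_t \mathbf{q} + \mathbf{q} = -\lambda \nabla T
  + \kappa^2 \left( \Delta \mathbf{q} + 2\, \nabla \operatorname{div} \mathbf{q} \right)
```

The middle equation yields finite-speed (hyperbolic) heat propagation; the nonlocal terms on the right are what the paper's second, fourth-order-in-space model accounts for.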
Australian and overseas models of general practice training.
Hays, Richard B; Morgan, Simon
2011-06-06
General practice training in Australia continues to evolve. It is now the responsibility of an independent organisation, is delivered by regional training providers, and comprises a structured training program. Overseas, general practice varies in its importance to health care systems, and training models differ considerably. In some cases training is mandatory, in others voluntary, but the aim is always similar--to improve the quality of care delivered to the large majority of populations that access health care through primary care. We review the current status of vocational general practice training in Australia, compare it with selected training programs in international contexts, and describe how the local model is well placed to address future challenges. Challenges include changes in population demographics, increasing comorbidity, increasing costs of technology-based health care, increasing globalisation of health, and workforce shortages. Although general practice training in Australia is strong, it can improve further by learning from other training programs to meet these challengers.
GEMFsim: A Stochastic Simulator for the Generalized Epidemic Modeling Framework
Sahneh, Faryad Darabi; Vajdi, Aram; Shakeri, Heman; Fan, Futing; Scoglio, Caterina
2016-01-01
The recently proposed generalized epidemic modeling framework (GEMF; Sahneh et al., 2013) lays the groundwork for systematically constructing a broad spectrum of stochastic spreading processes over complex networks. This article builds an algorithm for exact, continuous-time numerical simulation of GEMF-based processes. Moreover, the implementation of this algorithm, GEMFsim, is available in popular scientific programming platforms such as MATLAB, R, Python, and C; GEMFsim facilitates ...
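The exact continuous-time simulation that GEMFsim implements is, at its core, a Gillespie algorithm; a minimal sketch for the special case of an SIS process on a ring network follows (the topology, rates, and code structure are illustrative assumptions, not GEMFsim's API):

```python
import random

random.seed(7)

# Gillespie (exact, continuous-time) simulation of an SIS process on a
# small ring network; GEMF generalizes this to arbitrary compartmental
# models, but the event-driven core is the same.
N = 50
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
beta, delta = 0.8, 0.3           # per-edge infection and recovery rates

state = [0] * N                  # 0 = susceptible, 1 = infected
state[0] = 1
t, t_end = 0.0, 50.0
while t < t_end:
    # Per-node transition rates given the current network state.
    rates = []
    for i in range(N):
        if state[i]:
            rates.append(delta)
        else:
            rates.append(beta * sum(state[j] for j in neighbors[i]))
    total = sum(rates)
    if total == 0:               # absorbing state: epidemic died out
        break
    t += random.expovariate(total)        # exponential waiting time
    # Choose which node fires, proportionally to its rate.
    u, acc = random.random() * total, 0.0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            state[i] ^= 1        # flip S <-> I
            break

print(sum(state))                # number infected at time t_end
```

Recomputing all rates each step keeps the sketch short; production simulators like GEMFsim update only the rates affected by the last event.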
Modeling the brain morphology distribution in the general aging population
Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.
2016-03-01
Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.
A general maximum likelihood analysis of variance components in generalized linear models.
Aitkin, M
1999-03-01
This paper describes an EM algorithm for nonparametric maximum likelihood (ML) estimation in generalized linear models with variance component structure. The algorithm provides an alternative analysis to approximate MQL and PQL analyses (McGilchrist and Aisbett, 1991, Biometrical Journal 33, 131-141; Breslow and Clayton, 1993, Journal of the American Statistical Association 88, 9-25; McGilchrist, 1994, Journal of the Royal Statistical Society, Series B 56, 61-69; Goldstein, 1995, Multilevel Statistical Models) and to GEE analyses (Liang and Zeger, 1986, Biometrika 73, 13-22). The algorithm, first given by Hinde and Wood (1987, in Longitudinal Data Analysis, 110-126), is a generalization of that for random effect models for overdispersion in generalized linear models, described in Aitkin (1996, Statistics and Computing 6, 251-262). The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully nonparametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters can be sensitive to the specification of a parametric form for the mixing distribution. The nonparametric analysis can be extended straightforwardly to general random parameter models, with full NPML estimation of the joint distribution of the random parameters. This can produce substantial computational saving compared with full numerical integration over a specified parametric distribution for the random parameters. A simple method is described for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed involving simple variance component and longitudinal models, and small-area estimation.
Generalized eigenstate typicality in translation-invariant quasifree fermionic models
Riddell, Jonathon; Müller, Markus P.
2018-01-01
We demonstrate a generalized notion of eigenstate thermalization for translation-invariant quasifree fermionic models: the vast majority of eigenstates satisfying a finite number of suitable constraints (e.g., fixed energy and particle number) have the property that their reduced density matrix on small subsystems approximates the corresponding generalized Gibbs ensemble. To this end, we generalize analytic results by H. Lai and K. Yang [Phys. Rev. B 91, 081110(R) (2015), 10.1103/PhysRevB.91.081110] and illustrate the claim numerically by example of the Jordan-Wigner transform of the XX spin chain.
A general diagnostic model applied to language testing data.
von Davier, Matthias
2008-11-01
Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
Dynamic generalized linear models for monitoring endemic diseases
DEFF Research Database (Denmark)
Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq
2016-01-01
The objective was to use a Dynamic Generalized Linear Model (DGLM) based on a binomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control … in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance …
Generalized memory associativity in a network model for the neuroses
Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.
2009-03-01
We review concepts introduced in earlier work, where a neural network mechanism describes some mental processes in neurotic pathology and psychoanalytic working-through, as associative memory functioning, according to the findings of Freud. We developed a complex network model, where modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's idea that consciousness is related to symbolic and linguistic memory activity in the brain. We have introduced a generalization of the Boltzmann machine to model memory associativity. Model behavior is illustrated with simulations and some of its properties are analyzed with methods from statistical mechanics.
Seasonal predictability of Kiremt rainfall in coupled general circulation models
Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen
2017-11-01
The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985 to 2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
Estimating classification images with generalized linear and additive models.
Knoblauch, Kenneth; Maloney, Laurence T
2008-12-22
Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
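The Bernoulli-GLM view of classification images can be sketched as follows; the simulated observer, template shape, and plain gradient-ascent fitting are illustrative assumptions rather than the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Classification-image toy: an observer answers "yes" when a noisy
# stimulus correlates with an internal template; logistic regression
# (a Bernoulli GLM) recovers that template from stimulus/response pairs.
p, n = 16, 4000
template = np.sin(np.linspace(0, np.pi, p))      # assumed internal template
S = rng.normal(size=(n, p))                      # noise stimuli
prob = 1 / (1 + np.exp(-(S @ template)))         # observer's "yes" probability
resp = (rng.random(n) < prob).astype(float)      # Bernoulli responses

# Fit the Bernoulli GLM by plain gradient ascent on the log-likelihood.
w = np.zeros(p)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-(S @ w)))
    w += (1.0 / n) * S.T @ (resp - mu)

corr = np.corrcoef(w, template)[0, 1]
print(corr)                      # high correlation with the true template
```

The classical LM approach instead averages stimuli by response class; the GLM fit above uses the full likelihood, which is where its accuracy advantage in the noise-free-observer case comes from.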
Automation of electroweak NLO corrections in general models
Energy Technology Data Exchange (ETDEWEB)
Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)
2016-07-01
I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained with a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.
Improved Generalized Force Model considering the Comfortable Driving Behavior
Directory of Open Access Journals (Sweden)
De-Jie Xu
2015-01-01
This paper presents an improved generalized force model (IGFM) that considers the driver's comfortable driving behavior. Through theoretical analysis, we propose calculation methods for the comfortable driving distance and velocity. The stability condition of the model is then obtained by linear stability analysis. The problem of the unrealistic acceleration of the leading car found in previous models is solved. Furthermore, the simulation results show that IGFM can predict the correct delay time of car motion and kinematic wave speed at jam density, and it can exactly describe the driver's behavior in an urgent case, where no collision occurs. The dynamic properties of IGFM also indicate that stability has improved compared to the generalized force model.
A general model framework for multisymbol number comparison.
Huber, Stefan; Nuerk, Hans-Christoph; Willmes, Klaus; Moeller, Korbinian
2016-11-01
Different models have been proposed for the processing of multisymbol numbers like two- and three-digit numbers but also for negative numbers and decimals. However, these multisymbol numbers are assembled from the same set of Arabic digits and comply with the place-value structure of the Arabic number system. Considering these shared properties, we suggest that the processing of multisymbol numbers can be described in one general model framework. Accordingly, we first developed a computational model framework realizing componential representations of multisymbol numbers and evaluated its validity by simulating standard empirical effects of number magnitude comparison. We observed that the model framework successfully accounted for most of these effects. Moreover, our simulations provided first evidence supporting the notion of a fully componential processing of multisymbol numbers for the specific case of comparing two negative numbers. Thus, our general model framework indicates that the processing of different kinds of multisymbol integer and decimal numbers shares common characteristics (e.g., componential representation). The relevance and applicability of our model goes beyond the case of basic number processing. In particular, we also successfully simulated effects from applied marketing and consumer research by accounting for the left-digit effect found in processing of prices. Finally, we provide evidence that our model framework can be integrated into the more general context of multiattribute decision making. In sum, this indicates that our model framework captures a general scheme of separate processing of different attributes weighted by their saliency for the task at hand. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS
Directory of Open Access Journals (Sweden)
Constantin ANGHELACHE
2016-06-01
In this paper, the authors describe moment selection methods and the application of the generalized method of moments (GMM) for heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The moment selection criteria are applied for efficient GMM estimation for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.
Contextual interactions in a generalized energy model of complex cells
Dellen, Babette; Clark, John W.; Wessel, Ralf
2009-01-01
We propose a generalized energy model of complex cells to describe modulatory contextual influences on the responses of neurons in the primary visual cortex (V1). Many orientation-selective cells in V1 respond to contrast of orientation and motion of stimuli exciting the classical receptive field (CRF) and the non-CRF, or surround. In the proposed model, a central spatiotemporal filter, defining the CRF, is nonlinearly combined with a spatiotemporal filter extending into the non- ...
Study of the properties of general relativistic Kink model (GRK)
International Nuclear Information System (INIS)
Oliveira, L.C.S. de.
1980-01-01
The stability of the general relativistic Kink model (GRK) is studied. It is shown that the model is stable at least against radial perturbations. Furthermore, the Dirac field in the background of the geometry generated by the GRK is studied. It is verified that the GRK localizes the Dirac field, around the region of largest curvature. The physical interpretation of this system (the Dirac field in the GRK background) is discussed. (Author) [pt
Directory of Open Access Journals (Sweden)
Qinghua Xie
2017-01-01
Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there
Kananizadeh, Negin; Rice, Charles; Lee, Jaewoong; Rodenhausen, Keith B; Sekora, Derek; Schubert, Mathias; Schubert, Eva; Bartelt-Hunt, Shannon; Li, Yusong
2017-01-15
Measuring the interactions between engineered nanoparticles and natural substrates (e.g. soils and sediments) has been very challenging due to highly heterogeneous and rough natural surfaces. In this study, three-dimensional nanostructured slanted columnar thin films (SCTFs), with well-defined roughness height and spacing, have been used to mimic surface roughness. Interactions between titanium dioxide nanoparticles (TiO2 NP), among the most extensively manufactured engineered nanomaterials, and SCTF-coated surfaces were measured using a quartz crystal microbalance with dissipation monitoring (QCM-D). In parallel, in-situ generalized ellipsometry (GE) was coupled with QCM-D to simultaneously measure the amount of TiO2 NP deposited on the surface of the SCTF. While GE is insensitive to effects of mechanical water entrapment variations in roughness spaces, we found that the viscoelastic model, a typical QCM-D model analysis approach, overestimates the mass of deposited TiO2 NP. This overestimation arises from overlaid frequency changes caused by particle deposition as well as additional water entrapment and partial water displacement upon nanoparticle adsorption. Here, we demonstrate a new approach to model QCM-D data, accounting for both viscoelastic effects and the effects of roughness-retained water. Finally, the porosity of the attached TiO2 NP layer was determined by coupling the areal mass density determined by QCM-D and independent GE measurements. Copyright © 2016 Elsevier B.V. All rights reserved.
The general class of Bianchi cosmological models with dark energy ...
Indian Academy of Sciences (India)
The general class of Bianchi cosmological models with dark energy in the form of modified Chaplygin gas with variable Λ and G and bulk viscosity has been considered. We discuss three types of average scale factor by using a special law for the deceleration parameter which is linear in time with negative slope. The exact ...
A general circulation model (GCM) parameterization of Pinatubo aerosols
Energy Technology Data Exchange (ETDEWEB)
Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)
1996-04-01
The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.
A generalized quarter car modelling approach with frame flexibility ...
Indian Academy of Sciences (India)
HUSAIN KANCHWALA
ground-wheel contacts at three other locations. A Matlab code for obtaining the generalized quarter-car model is provided towards the end of this paper. The code enables a user to perform fairly quick parametric studies. An example of such a parametric study is presented there as well. The role of other wheels, in particular, ...
Anisotropic cosmological models and generalized scalar tensor theory
Indian Academy of Sciences (India)
In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions ...
Transmittivity and wavefunctions in one-dimensional generalized Aubry models
International Nuclear Information System (INIS)
Basu, C.; Mookerjee, A.; Sen, A.K.; Thakur, P.K.
1990-07-01
We use the vector recursion method of Haydock to obtain the transmittance of a class of generalized Aubry models in one-dimension. We also study the phase change of the wavefunctions as they travel through the chain and also the behaviour of the conductance with changes in size. (author). 10 refs, 9 figs
Characterizing QALYs under a General Rank Dependent Utility Model
H. Bleichrodt (Han); J. Quiggin (John)
1997-01-01
textabstractThis paper provides a characterization of QALYs, the most important outcome measure in medical decision making, in the context of a general rank dependent utility model. We show that both for chronic and for nonchronic health states the characterization of QALYs depends on intuitive
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
L M WANG
2017-08-16
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To solve the difficulties arising from the little knowledge about the master–slave system ...
Simplicial models for trace spaces II: General higher dimensional automata
DEFF Research Database (Denmark)
Raussen, Martin
Higher Dimensional Automata (HDA) are topological models for the study of concurrency phenomena. The state space for an HDA is given as a pre-cubical complex in which a set of directed paths (d-paths) is singled out. The aim of this paper is to describe a general method that determines the space...
An applied general equilibrium model for Dutch agribusiness policy analysis
Peerlings, J.
1993-01-01
The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Bianchi type-V string cosmological models in general relativity
Indian Academy of Sciences (India)
Bianchi type-V string cosmological models in general relativity are investigated. To get the exact solution of Einstein's field equations, we have taken some scale transformations used by Camci et al [Astrophys. Space Sci. 275, 391 (2001)]. It is shown that Einstein's field equations are solvable for any arbitrary ...
A generalized quarter car modelling approach with frame flexibility ...
Indian Academy of Sciences (India)
... mass distribution and damping. Here we propose a generalized quarter-car modelling approach, incorporating both the frame as well as other-wheel ground contacts. Our approach is linear, uses Laplace transforms, involves vertical motions of key points of interest and has intermediate complexity with improved realism.
On the general procedure for modelling complex ecological systems
International Nuclear Information System (INIS)
He Shanyu.
1987-12-01
In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP) is shortly stated. The result of application of ASP in a national project for ecological regionalization is also described. (author). 3 refs
Uncertainty in a monthly water balance model using the generalized ...
Indian Academy of Sciences (India)
Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology. Diego Rivera, Yessica Rivas and Alex Godoy. Laboratory of Comparative Policy in Water Resources Management, University of Concepcion, CONICYT/FONDAP 15130015, Concepcion, Chile.
Analysis of snow feedbacks in 14 general circulation models
Randall, D. A.; Cess, R. D.; Blanchet, J. P.; Chalita, S.; Colman, R.; Dazlich, D. A.; Del Genio, A. D.; Keup, E.; Lacis, A.; Le Treut, H.; Liang, X.-Z.; McAvaney, B. J.; Mahfouf, J. F.; Meleshko, V. P.; Morcrette, J.-J.; Norris, P. M.; Potter, G. L.; Rikus, L.; Roeckner, E.; Royer, J. F.; Schlese, U.; Sheinin, D. A.; Sokolov, A. P.; Taylor, K. E.; Wetherald, R. T.; Yagai, I.; Zhang, M.-H.
1994-10-01
Snow feedbacks produced by 14 atmospheric general circulation models have been analyzed through idealized numerical experiments. Included in the analysis is an investigation of the surface energy budgets of the models. Negative or weak positive snow feedbacks occurred in some of the models, while others produced strong positive snow feedbacks. These feedbacks are due not only to melting snow, but also to increases in boundary temperature, changes in air temperature, changes in water vapor, and changes in cloudiness. As a result, the net response of each model is quite complex. We analyze in detail the responses of one model with a strong positive snow feedback and another with a weak negative snow feedback. Some of the models include a temperature dependence of the snow albedo, and this has significantly affected the results.
Generalized Chaplygin gas model, supernovae, and cosmic topology
International Nuclear Information System (INIS)
Bento, M.C.; Bertolami, O.; Silva, P.T.; Reboucas, M.J.
2006-01-01
In this work we study to which extent the knowledge of spatial topology may place constraints on the parameters of the generalized Chaplygin gas (GCG) model for unification of dark energy and dark matter. By using both the Poincare dodecahedral and binary octahedral spaces as the observable spatial topologies, we examine the current type Ia supernovae (SNe Ia) constraints on the GCG model parameters. We show that the knowledge of spatial topology does provide additional constraints on the A_s parameter of the GCG model but does not lift the degeneracy of the α parameter.
Generalized Roe's numerical scheme for a two-fluid model
International Nuclear Information System (INIS)
Toumi, I.; Raymond, P.
1993-01-01
This paper is devoted to a mathematical and numerical study of a six-equation two-fluid model. We will prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written in a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe's approximate Riemann solver is used, based on a linearization of the nonconservative terms. A Godunov-type numerical scheme is built using this approximate Riemann solver. 10 refs., 5 figs.
Holographic entanglement entropy in general holographic superconductor models
Energy Technology Data Exchange (ETDEWEB)
Peng, Yan [School of Mathematics and Computer Science, Shaanxi University of Technology,Hanzhong, Shaanxi 723000 (China); Pan, Qiyuan [Institute of Physics and Department of Physics, Hunan Normal University,Changsha, Hunan 410081 (China)
2014-06-03
We study the entanglement entropy of general holographic dual models both in AdS soliton and AdS black hole backgrounds with full backreaction. We find that the entanglement entropy is a good probe to explore the properties of the holographic superconductors and provides richer physics in the phase transition. We obtain the effects of the scalar mass, model parameter and backreaction on the entropy, and argue that the jump of the entanglement entropy may be a quite general feature for the first order phase transition. In strong contrast to the insulator/superconductor system, we note that the backreaction coupled with the scalar mass can not be used to trigger the first order phase transition if the model parameter is below its bottom bound in the metal/superconductor system.
Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.
2014-01-01
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs than those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
Towards a generalized energy prediction model for machine tools.
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan
2017-04-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
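The data-driven approach described above can be illustrated with a minimal Gaussian Process regression sketch in scikit-learn. Everything here is a hypothetical stand-in: the process parameters (spindle speed, feed rate, depth of cut) and the synthetic energy data are assumptions for illustration, not the paper's actual features or Mori Seiki measurements.

```python
# Minimal, hypothetical sketch of a GP-based energy prediction model.
# Features and data are synthetic illustrations, not the paper's dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Assumed process parameters: [spindle speed, feed rate, depth of cut]
X = rng.uniform([1000, 50, 0.5], [5000, 500, 3.0], size=(80, 3))
# Synthetic "energy consumption" with sensor-like noise
y = 0.002 * X[:, 0] + 0.01 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.5, 80)

# Anisotropic RBF kernel (one length scale per parameter) plus a noise term
kernel = RBF(length_scale=[1000.0, 100.0, 1.0]) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# GP predictions come with uncertainty intervals, as in the paper's method
X_new = np.array([[3000.0, 200.0, 1.5]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted energy: {mean[0]:.2f} +/- {1.96 * std[0]:.2f}")
```

The non-parametric GP naturally provides the uncertainty intervals the abstract mentions, since each prediction is a posterior distribution rather than a point estimate.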
Directory of Open Access Journals (Sweden)
Andrea Schaller
2016-01-01
Introduction. The aim of the present study was to determine the closeness of agreement between a self-reported and an objective measure of physical activity in low back pain patients and healthy controls. Beyond that, factors influencing overestimation were identified. Methods. 27 low back pain patients and 53 healthy controls wore an accelerometer (objective measure) for seven consecutive days and answered a questionnaire on physical activity (self-report) over the same period of time. Differences between self-reported and objective data were tested by Wilcoxon test. Bland-Altman analysis was conducted for describing the closeness of agreement. Linear regression models were calculated to identify the influence of age, sex, and body mass index on the overestimation by self-report. Results. Participants overestimated self-reported moderate activity on average by 42 min/day (p=0.003) and vigorous activity by 39 min/day (p<0.001). Self-reported sedentary time was underestimated by 122 min/day (p<0.001). No individual-related variables influenced the overestimation of physical activity. Low back pain patients were more likely to underestimate sedentary time compared to healthy controls. Discussion. In rehabilitation and health promotion, the application-oriented measurement of physical activity remains a challenge. The present results contradict other studies that had identified an influence of age, sex, and body mass index on the overestimation of physical activity.
Generalized Modeling of the Human Lower Limb Assembly
Cofaru, Ioana; Huzu, Iulia
2014-11-01
The main reason for creating a generalized assembly of the main bones of the human lower limb is to establish the premises for a computer-assisted biomechanical study applicable to the wide range of pathologies that exist at this level. Starting from 3D CAD models of the main bones of the human lower limb, realized in previous research, a generalized assembly system was developed in this study, highlighting both the case of a healthy subject and that of a subject affected by axial deviations. To achieve this purpose, reference systems were created in accordance with the mechanical and anatomical axes of the lower limb, and these were then assembled in a manner that provides an easy customization option.
Attractive Hubbard model with disorder and the generalized Anderson theorem
International Nuclear Information System (INIS)
Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.
2015-01-01
Using the generalized DMFT+Σ approach, we study the influence of disorder on single-particle properties of the normal phase and the superconducting transition temperature in the attractive Hubbard model. A wide range of attractive potentials U is studied, from the weak-coupling region, where both the instability of the normal phase and superconductivity are well described by the BCS model, to the strong-coupling region, where the superconducting transition is due to Bose-Einstein condensation (BEC) of compact Cooper pairs, formed at temperatures much higher than the superconducting transition temperature. We study two typical models of the conduction band with semi-elliptic and flat densities of states, respectively appropriate for three-dimensional and two-dimensional systems. For the semi-elliptic density of states, the disorder influence on all single-particle properties (e.g., density of states) is universal for an arbitrary strength of electronic correlations and disorder and is due to only the general disorder widening of the conduction band. In the case of a flat density of states, universality is absent in the general case, but still the disorder influence is mainly due to band widening, and the universal behavior is restored for large enough disorder. Using the combination of DMFT+Σ and Nozieres-Schmitt-Rink approximations, we study the disorder influence on the superconducting transition temperature T_c for a range of characteristic values of U and disorder, including the BCS-BEC crossover region and the strong-coupling limit. Disorder can either suppress T_c (in the weak-coupling region) or significantly increase T_c (in the strong-coupling region). However, in all cases, the generalized Anderson theorem is valid and all changes of the superconducting critical temperature are essentially due to only the general disorder widening of the conduction band.
Penalized Estimation in Large-Scale Generalized Linear Array Models
DEFF Research Database (Denmark)
Lund, Adam; Vincent, Martin; Hansen, Niels Richard
2017-01-01
Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas ...
A Unified Bayesian Inference Framework for Generalized Linear Models
Meng, Xiangming; Wu, Sheng; Zhu, Jiang
2018-03-01
In this letter, we present a unified Bayesian inference framework for generalized linear models (GLM) which iteratively reduces the GLM problem to a sequence of standard linear model (SLM) problems. This framework provides new perspectives on some established GLM algorithms derived from SLM ones and also suggests novel extensions for some other SLM algorithms. Specific instances elucidated under such framework are the GLM versions of approximate message passing (AMP), vector AMP (VAMP), and sparse Bayesian learning (SBL). It is proved that the resultant GLM version of AMP is equivalent to the well-known generalized approximate message passing (GAMP). Numerical results for 1-bit quantized compressed sensing (CS) demonstrate the effectiveness of this unified framework.
A Non-Gaussian Spatial Generalized Linear Latent Variable Model
Irincheeva, Irina
2012-08-03
We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.
Treatment of cloud radiative effects in general circulation models
Energy Technology Data Exchange (ETDEWEB)
Wang, W.C.; Dudek, M.P.; Liang, X.Z.; Ding, M. [State Univ. of New York, Albany, NY (United States)] [and others
1996-04-01
We participate in the Atmospheric Radiation Measurement (ARM) program with two objectives: (1) to improve the general circulation model (GCM) cloud/radiation treatment with a focus on cloud vertical overlap and layer cloud optical properties, and (2) to study the effects of cloud/radiation-climate interaction on GCM climate simulations. This report summarizes the project progress since the Fourth ARM Science Team meeting, February 28-March 4, 1994, in Charleston, South Carolina.
Generalized isothermal models with strange equation of state
Indian Academy of Sciences (India)
intention to study the Einstein–Maxwell system with a linear equation of state with ... It is our intention to model the interior of a dense realistic star with a general ... The definition m(r) = (1/2) ∫_0^r ω² ρ(ω) dω (14) represents the mass contained within a radius r, which is a useful physical quantity. The mass function (14) has ...
Classification images and bubbles images in the generalized linear model.
Murray, Richard F
2012-07-09
Classification images and bubbles images are psychophysical tools that use stimulus noise to investigate what features people use to make perceptual decisions. Previous work has shown that classification images can be estimated using the generalized linear model (GLM), and here I show that this is true for bubbles images as well. Expressing the two approaches in terms of a single statistical model clarifies their relationship to one another, makes it possible to measure classification images and bubbles images simultaneously, and allows improvements developed for one method to be used with the other.
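The GLM estimation of classification images described above can be sketched with a small simulation. This is a hedged illustration, not the paper's procedure: a simulated observer applies a hidden template to noise stimuli, and logistic regression of the responses on the noise recovers that template as the fitted weights. The template shape, trial count, and learning settings are assumptions.

```python
# Hypothetical sketch: estimating a classification image with a logistic GLM.
# A simulated observer's yes/no responses are regressed on the stimulus noise;
# the fitted weight vector recovers the observer's decision template.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pix = 5000, 16

# Hidden template the simulated observer applies to each noise stimulus
template = np.zeros(n_pix)
template[5:9] = 1.0

noise = rng.normal(0.0, 1.0, (n_trials, n_pix))
# Binary "yes/no" responses: template match plus internal noise
responses = (noise @ template + rng.normal(0.0, 1.0, n_trials) > 0).astype(float)

# Fit the logistic-regression GLM by simple gradient ascent on the likelihood
w = np.zeros(n_pix)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(noise @ w)))           # predicted P("yes")
    w += 2.0 * noise.T @ (responses - p) / n_trials  # score-function step

# The estimated classification image w should correlate with the template
corr = np.corrcoef(w, template)[0, 1]
```

Because bubbles images fit the same GLM form, the identical regression machinery applies with the bubble masks in place of the noise fields.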
Generalized model for Memristor-based Wien family oscillators
Talukdar, Abdul Hafiz Ibne
2012-07-23
In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types though oscillating resistance and time dependent poles are present. We have also proposed an analytical model to estimate the desired amplitude of oscillation before the oscillation starts. These Memristor-based oscillation results, presented for the first time, are in good agreement with simulation results. © 2011 Elsevier Ltd.
A generalization of the bond fluctuation model to viscoelastic environments
International Nuclear Information System (INIS)
Fritsch, Christian C
2014-01-01
A lattice-based simulation method for polymer diffusion in a viscoelastic medium is presented. This method combines the eight-site bond fluctuation model with an algorithm for the simulation of fractional Brownian motion on the lattice. The method applies to unentangled self-avoiding chains and is probed for anomalous diffusion exponents α between 0.7 and 1.0. The simulation results are in very good agreement with the predictions of the generalized Rouse model of a self-avoiding chain polymer in a viscoelastic medium. (paper)
Structural dynamic analysis with generalized damping models analysis
Adhikari , Sondipon
2013-01-01
Since Lord Rayleigh introduced the idea of viscous damping in his classic work "The Theory of Sound" in 1877, it has become standard practice to use this approach in dynamics, covering a wide range of applications from aerospace to civil engineering. However, in the majority of practical cases this approach is adopted more for mathematical convenience than for modeling the physics of vibration damping. Over the past decade, extensive research has been undertaken on more general "non-viscous" damping models and vibration of non-viscously damped systems. This book, along with a related book
Energy spectra of odd nuclei in the generalized model
Directory of Open Access Journals (Sweden)
I. O. Korzh
2015-04-01
Based on the generalized nuclear model, energy spectra of the odd nuclei 25Mg, 41K, and 65Cu are determined, and the structure of the wave functions of these nuclei in the excited and normal states is studied. High accuracy in determining the energy spectra is achieved through exact calculation of all elements of the energy matrix. It is demonstrated that the structure of the wave functions so determined makes it possible to select the nuclear model and the method for calculating the cross-sections of inelastic scattering of nucleons by odd nuclei more accurately.
Regularization Paths for Generalized Linear Models via Coordinate Descent
Directory of Open Access Journals (Sweden)
Jerome Friedman
2010-02-01
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems, while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
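The cyclical coordinate descent at the heart of this approach reduces, for the lasso, to repeated soft-thresholding of one coefficient at a time. The following is a bare-bones sketch of that update in Python, not the glmnet code itself; the simulated design and penalty value are illustrative assumptions.

```python
# Illustrative cyclic coordinate descent for the lasso:
# minimize (1/2n)||y - Xb||^2 + lam * ||b||_1 via soft-thresholding updates.
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove all effects except coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_beta = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ true_beta + rng.normal(0, 0.1, 100)

beta_hat = lasso_cd(X, y, lam=0.1)
# The ℓ1 penalty shrinks irrelevant coefficients to exactly zero
```

Each coordinate update is closed-form, which is why the path-following strategy over a grid of penalty values is so fast in practice.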
Pairing correlations in a generalized Hubbard model for the cuprates
Arrachea, Liliana; Aligia, A. A.
2000-04-01
Using numerical diagonalization of a 4×4 cluster, we calculate on-site s, extended-s, and dx2-y2 pairing correlation functions (PCF's) in an effective generalized Hubbard model for the cuprates, with nearest-neighbor correlated hopping and next-nearest-neighbor hopping t'. The vertex contributions to the PCF's are significantly enhanced, relative to the t-t'-U model. The behavior of the PCF's and their vertex contributions, and signatures of anomalous flux quantization, indicate superconductivity in the d-wave channel for moderate doping and in the s-wave channel for high doping and small U.
dx2-y2 superconductivity in a generalized Hubbard model
Arrachea, Liliana; Aligia, A. A.
1999-01-01
We consider an extended Hubbard model with nearest-neighbor correlated hopping and next-nearest-neighbor hopping t' obtained as an effective model for cuprate superconductors. Using a generalized Hartree-Fock BCS approximation, we find that for high enough t' and doping, antiferromagnetism is destroyed and the system exhibits d-wave superconductivity. Near optimal doping we consider the effect of antiferromagnetic spin fluctuations on the normal self-energy using a phenomenological susceptibility. The resulting superconducting critical temperature as a function of doping is in good agreement with experiment.
Pairing Correlations in a Generalized Hubbard Model for the Cuprates
Arrachea, L.; Aligia, A.
1999-01-01
Using numerical diagonalization of a 4x4 cluster, we calculate on-site s, extended s and d pairing correlation functions (PCF) in an effective generalized Hubbard model for the cuprates, with nearest-neighbor correlated hopping and next nearest-neighbor hopping t'. The vertex contributions (VC) to the PCF are significantly enhanced, relative to the t-t'-U model. The behavior of the PCF and their VC, and signatures of anomalous flux quantization, indicate superconductivity in the d-wave channe...
The linear model and hypothesis a general unifying theory
Seber, George
2015-01-01
This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality in the analysis of variance to other models, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.
Contextual interactions in a generalized energy model of complex cells.
Dellen, Babette K; Clark, John W; Wessel, Ralf
2009-01-01
We propose a generalized energy model of complex cells to describe modulatory contextual influences on the responses of neurons in the primary visual cortex (V1). Many orientation-selective cells in V1 respond to contrast of orientation and motion of stimuli exciting the classical receptive field (CRF) and the non-CRF, or surround. In the proposed model, a central spatiotemporal filter, defining the CRF, is nonlinearly combined with a spatiotemporal filter extending into the non-CRF. These filters are assumed to describe simple-cell responses, while the nonlinear combination of their responses describes the responses of complex cells. This mathematical operation accounts for the inherent nonlinearity of complex cells, such as phase independence and frequency doubling, and for nonlinear interactions between stimuli in the CRF and surround of the cell, including sensitivity to feature contrast. If only the CRF of the generalized complex cell is stimulated by a drifting grating, the model reduces to the standard energy model. The theoretical predictions of the model are supported by computer simulations and compared with experimental data from V1.
A generalized mechanical model for suture interfaces of arbitrary geometry
Li, Yaning; Ortiz, Christine; Boyce, Mary C.
2013-04-01
Suture interfaces with a triangular wave form commonly found in nature have recently been shown to exhibit exceptional mechanical behavior, where geometric parameters such as amplitude, frequency, and hierarchy can be used to nonlinearly tailor and amplify mechanical properties. In this study, using the principle of complementary virtual work, we formulate a generalized, composite mechanical model for arbitrarily-shaped interdigitating suture interfaces in order to more broadly investigate the influence of wave-form geometry on load transmission, deformation mechanisms, anisotropy, and stiffness, strength, and toughness of the suture interface for tensile and shear loading conditions. The application of this suture interface model is exemplified for the case of the general trapezoidal wave-form. Expressions for the in-plane stiffness, strength and fracture toughness and failure mechanisms are derived as nonlinear functions of shape factor β (which characterizes the general trapezoidal shape as triangular, trapezoidal, rectangular or anti-trapezoidal), the wavelength/amplitude ratio, the interface width/wavelength ratio, and the stiffness and strength ratios of the skeletal/interfacial phases. These results provide guidelines for choosing and tailoring interface geometry to optimize the mechanical performance in resisting different loads. The presented model provides insights into the relation between the mechanical function and the morphological diversity of suture interface geometries observed in natural systems.
Border Collision Bifurcations in a Generalized Model of Population Dynamics
Directory of Open Access Journals (Sweden)
Lilia M. Ladino
2016-01-01
Full Text Available We analyze the dynamics of a generalized discrete time population model of a two-stage species with recruitment and capture. This generalization, which is inspired by other approaches and real data found in the literature, consists of imposing no restriction on the values of the two key parameters appearing in the model, that is, the natural death rate and the mortality rate due to fishing activity. In the more general case the feasibility of the system has been preserved by adopting suitable formulas for the piecewise map defining the model. The resulting two-dimensional nonlinear map is not smooth, though continuous, as its definition changes as any border is crossed in the phase plane. Hence, techniques from the mathematical theory of piecewise smooth dynamical systems must be applied to show that, due to the existence of borders, abrupt changes in the dynamic behavior of population sizes and multistability emerge. The main novelty of the present contribution with respect to previous ones is that, while using real data, richer dynamics are produced, such as fluctuations and multistability. Such new evidence is of great interest in biology since new strategies to preserve the survival of the species can be suggested.
A general mixture model for sediment laden flows
Liang, Lixin; Yu, Xiping; Bombardelli, Fabián
2017-09-01
A mixture model for general description of sediment-laden flows is developed based on an Eulerian-Eulerian two-phase flow theory, with the aim of gaining computational speed in the prediction while preserving the accuracy of the complete two-fluid model. The basic equations of the model include the mass and momentum conservation equations for the sediment-water mixture, and the mass conservation equation for sediment. However, a newly-obtained expression for the slip velocity between phases allows for the computation of the sediment motion, without the need of solving the momentum equation for sediment. The turbulent motion is represented for both the fluid and the particulate phases. A modified k-ε model is used to describe the fluid turbulence while an algebraic model is adopted for the turbulent motion of particles. A two-dimensional finite difference method based on the SMAC scheme was used to numerically solve the mathematical model. The model is validated through simulations of fluid and suspended sediment motion in steady open-channel flows, both in equilibrium and non-equilibrium states, as well as in oscillatory flows. The computed sediment concentrations, horizontal velocity and turbulent kinetic energy of the mixture are all shown to be in good agreement with available experimental data, and importantly, this is done at a fraction of the computational efforts required by the complete two-fluid model.
A general modeling framework for describing spatially structured population dynamics
Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan
2017-01-01
Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance
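The network framework described in this abstract can be sketched in a few lines: nodes are habitat patches, directed edge weights carry movement proportions, and each discrete time step applies local growth followed by redistribution along the edges. All numbers below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def step(pop, growth, move):
    """One discrete time step of a network-structured population.
    pop:    vector of abundances per node
    growth: per-node growth multipliers (a node attribute)
    move:   row-stochastic matrix; move[i, j] is the fraction of
            node i's population moving to node j (edge weights)."""
    return (pop * growth) @ move

pop = np.array([100.0, 50.0, 0.0])
growth = np.array([1.2, 0.9, 1.0])
# an illustrative migratory loop among three patches
move = np.array([[0.7, 0.3, 0.0],
                 [0.0, 0.6, 0.4],
                 [0.5, 0.0, 0.5]])
for _ in range(10):
    pop = step(pop, growth, move)
```

Because each row of `move` sums to one, movement alone conserves total abundance; metapopulations, seasonal migration, or nomadism differ only in how the edge weights are structured and vary through time.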
Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities
Energy Technology Data Exchange (ETDEWEB)
Mikaelian, K O
2008-06-10
We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations like air/SF₆/air gas-curtain experiments. We first consider conventional shock tubes that have a 'fixed' boundary: a solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a 'free' boundary: a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction towards the interface(s). Complex acceleration histories are achieved, relevant for Inertial Confinement Fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities, and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other, and remain to be verified experimentally.
Consensus-based training and assessment model for general surgery.
Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P
2016-05-01
Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
Generalized linear mixed model for segregation distortion analysis.
Zhan, Haimao; Xu, Shizhong
2011-11-11
Segregation distortion is a phenomenon in which the observed genotypic frequencies at a locus fall outside the expected Mendelian segregation ratio. The main cause of segregation distortion is viability selection on linked marker loci. These viability selection loci can be mapped using genome-wide marker information. We developed a generalized linear mixed model (GLMM) under the liability model to jointly map all viability selection loci of the genome. Using a hierarchical generalized linear mixed model, we can handle a number of loci several times larger than the sample size. We used a dataset from an F(2) mouse family derived from the cross of two inbred lines to test the model and detected a major segregation distortion locus contributing 75% of the variance of the underlying liability. Replicated simulation experiments confirm that the power of viability locus detection is high and the false positive rate is low. Not only can the method be used to detect segregation distortion loci, but it can also be used for mapping quantitative trait loci of disease traits using case-only data in humans and selected populations in plants and animals.
A generalized and parameterized interference model for cognitive radio networks
Mahmood, Nurul Huda
2011-06-01
For meaningful co-existence of cognitive radios with primary system, it is imperative that the cognitive radio system is aware of how much interference it generates at the primary receivers. This can be done through statistical modeling of the interference as perceived at the primary receivers. In this work, we propose a generalized model for the interference generated by a cognitive radio network, in the presence of small and large scale fading, at a primary receiver located at the origin. We then demonstrate how this model can be used to estimate the impact of cognitive radio transmission on the primary receiver in terms of different outage probabilities. Finally, our analytical findings are validated through some selected computer-based simulations. © 2011 IEEE.
Three General Theoretical Models in Sociology: An Articulated (Dis)unity?
Directory of Open Access Journals (Sweden)
Thaís García-Pereiro
2015-01-01
Full Text Available After a brief comparative reconstruction of the three most general theoretical models underlying contemporary Sociology (atomic, systemic, and fluid), it becomes necessary to review the question of the unity or plurality of Sociology, which is the main objective of this paper. To do so, the basic terms of the question are first updated by following the hegemonic trends in current studies of science. Second, the convergences and divergences among the three models discussed are shown. Following some additional discussion, the conclusion is reached that contemporary Sociology is not unitary, and need not be so. It is plural, but its plurality is limited and articulated by those very models. It may therefore be portrayed as integrated and commensurable, to the extent that a partial and unstable (dis)unity may be said to exist in Sociology, which is not too far off from what happens in the natural sciences.
Analysis of Robust Quasi-deviances for Generalized Linear Models
Directory of Open Access Journals (Sweden)
Eva Cantoni
2004-04-01
Full Text Available Generalized linear models (McCullagh and Nelder 1989) are a popular technique for modeling a large variety of continuous and discrete data. They assume that the response variables Y_i, for i = 1, . . . , n, come from a distribution belonging to the exponential family, such that E[Y_i] = μ_i and V[Y_i] = V(μ_i), and that η_i = g(μ_i) = x_i^T β, where β ∈ R^p is the vector of parameters, x_i ∈ R^p, and g(·) is the link function. The non-robustness of the maximum likelihood and the maximum quasi-likelihood estimators has been studied extensively in the literature. For model selection, the classical analysis-of-deviance approach shares the same bad robustness properties. To cope with this, Cantoni and Ronchetti (2001) propose a robust approach based on robust quasi-deviance functions for estimation and variable selection. We refer to that paper for a deeper discussion and the review of the literature.
Generalized Information Matrix Tests for Detecting Model Misspecification
Directory of Open Access Journals (Sweden)
Richard M. Golden
2016-11-01
Full Text Available Generalized Information Matrix Tests (GIMTs have recently been used for detecting the presence of misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for the purpose of identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom and derived using log-likelihood third derivatives exhibited improved level and power performance.
Generalized transport model for phase transition with memory
International Nuclear Information System (INIS)
Chen, Chi; Ciucci, Francesco
2013-01-01
A general model for phenomenological transport in phase transition is derived, which extends the Jäckle and Frisch model of phase transition with memory and the Cahn–Hilliard model. In addition to including interfacial energy to account for the presence of interfaces, we introduce viscosity and relaxation contributions, which result from incorporating memory effects into the driving potential. Our simulation results show that even without the interfacial energy term, the viscous term can lead to transient diffuse interfaces. From the phase-transition-induced hysteresis, we identify different energy dissipation mechanisms for the interfacial energy and the viscosity effect. In addition, by combining viscosity and interfacial energy, we find that if the former dominates, then the concentration difference across the phase boundary is reduced; conversely, if the interfacial energy is greater, then this difference is enlarged.
Topics in conformal invariance and generalized sigma models
International Nuclear Information System (INIS)
Bernardo, L.M.; Lawrence Berkeley National Lab., CA
1997-05-01
This thesis consists of two different parts, having in common the fact that in both, conformal invariance plays a central role. In the first part, the author derives conditions for conformal invariance, in the large N limit, and for the existence of an infinite number of commuting classical conserved quantities, in the Generalized Thirring Model. The treatment uses the bosonized version of the model. Two different approaches are used to derive conditions for conformal invariance: the background field method and the Hamiltonian method based on an operator algebra, and the agreement between them is established. The author constructs two infinite sets of non-local conserved charges, by specifying either periodic or open boundary conditions, and he finds the Poisson Bracket algebra satisfied by them. A free field representation of the algebra satisfied by the relevant dynamical variables of the model is also presented, and the structure of the stress tensor in terms of free fields (and free currents) is studied in detail. In the second part, the author proposes a new approach for deriving the string field equations from a general sigma model on the world sheet. This approach leads to an equation which combines some of the attractive features of both the renormalization group method and the covariant beta function treatment of the massless excitations. It has the advantage of being covariant under a very general set of both local and non-local transformations in the field space. The author applies it to the tachyon, massless and first massive level, and shows that the resulting field equations reproduce the correct spectrum of a left-right symmetric closed bosonic string
A generalized formulation of the dynamic Smagorinsky model
Directory of Open Access Journals (Sweden)
Urs Schaefer-Rolffs
2017-04-01
Full Text Available A generalized formulation of the Dynamic Smagorinsky Model (DSM) is proposed as a versatile turbulent momentum diffusion scheme for Large-Eddy Simulations. The difference to previous versions of the DSM is a modified test filter range that can be chosen independently from the resolution scale to separate the impact of the test filter on the simulated flow from the impact of the resolution. The generalized DSM (gDSM) in a two-dimensional version is validated in a verification study as a horizontal momentum diffusion scheme with the Kühlungsborn Mechanistic General Circulation Model at high resolution (wavenumber 330) without hyperdiffusion. Three-day averaged results applying three different test filters in the macro-turbulent inertial range are presented and compared with analogous simulations where the standard DSM is used instead. The comparison of the different filters results in all cases in similar globally averaged Smagorinsky parameters c_S ≃ 0.35 and horizontal kinetic energy spectra. Hence, the basic assumption of scale invariance underlying the application of the gDSM to parameterize atmospheric turbulence is justified. In addition, the smallest resolved scales contain less energy when the gDSM is applied, thus increasing the stability of the simulation.
Applications of Skew Models Using Generalized Logistic Distribution
Directory of Open Access Journals (Sweden)
Pushpa Narayan Rathie
2016-04-01
Full Text Available We use the skew distribution generation procedure proposed by Azzalini [Scand. J. Stat., 1985, 12, 171–178] to create three new probability distribution functions. These models make use of the normal, student-t and generalized logistic distributions, see Rathie and Swamee [Technical Research Report No. 07/2006. Department of Statistics, University of Brasilia: Brasilia, Brazil, 2006]. Expressions for the moments about the origin are derived. Graphical illustrations are also provided. The distributions derived in this paper can be seen as generalizations of the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. Applications with unimodal and bimodal data are given to illustrate the applicability of the results derived in this paper. The applications include the analysis of the following data sets: (a) spending on public education in various countries in 2003; (b) total expenditure on health in 2009 in various countries; and (c) waiting time between eruptions of the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA. We compare the fit of the distributions introduced in this paper with the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. The results show that our distributions, in general, fit the data sets better. The general R codes for fitting the distributions introduced in this paper are given in Appendix A.
General Business Model Patterns for Local Energy Management Concepts
International Nuclear Information System (INIS)
Facchinetti, Emanuele; Sulzer, Sabine
2016-01-01
The transition toward a more sustainable global energy system, significantly relying on renewable energies and decentralized energy systems, requires a deep reorganization of the energy sector. The way how energy services are generated, delivered, and traded is expected to be very different in the coming years. Business model innovation is recognized as a key driver for the successful implementation of the energy turnaround. This work contributes to this topic by introducing a heuristic methodology easing the identification of general business model patterns best suited for Local Energy Management concepts such as Energy Hubs. A conceptual framework characterizing the Local Energy Management business model solution space is developed. Three reference business model patterns providing orientation across the defined solution space are identified, analyzed, and compared. Through a market review, a number of successfully implemented innovative business models have been analyzed and allocated within the defined solution space. The outcomes of this work offer to potential stakeholders a starting point and guidelines for the business model innovation process, as well as insights for policy makers on challenges and opportunities related to Local Energy Management concepts.
A generalized logarithmic image processing model based on the gigavision sensor model.
Deng, Guang
2012-03-01
The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
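The classical LIP operations underlying the generalized model described in this abstract can be written down directly. The sketch below shows the standard LIP addition and scalar multiplication (Jourlin–Pinoli form) with an assumed gray-tone bound M = 256; it illustrates the "generalized linear operations" the abstract refers to, not the GLIP extension itself:

```python
M = 256.0  # assumed upper bound of the gray-tone range

def lip_add(a, b):
    """Classical LIP addition: a (+) b = a + b - a*b/M.
    Stays within [0, M) for gray tones in [0, M)."""
    return a + b - a * b / M

def lip_scalar(c, a):
    """Classical LIP scalar multiplication:
    c (x) a = M - M * (1 - a/M)**c."""
    return M - M * (1.0 - a / M) ** c
```

A useful consistency check is that scalar multiplication by 2 agrees with adding a gray tone to itself, exactly as in an ordinary vector space, which is what makes these operations "generalized linear".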
Generalized Magnetic Field Effects in Burgers' Nanofluid Model.
Directory of Open Access Journals (Sweden)
M M Rashidi
Full Text Available Analysis has been conducted to present the generalized magnetic field effects on the flow of a Burgers' nanofluid over an inclined wall. Mathematical modelling for hydro-magnetics reveals that the term "[Formula: see text]" applies to the Newtonian model, whereas the generalized magnetic field term (as mentioned in Eq 4) is for the Burgers' model, which is incorporated in the current analysis to gain real insight into the problem for hydro-magnetics. Brownian motion and the thermophoresis phenomenon are presented to analyze the nanofluidics for the non-Newtonian fluid. Mathematical analysis is completed in the presence of non-uniform heat generation/absorption. The constructed set of partial differential equations is converted into a coupled nonlinear ordinary differential system by employing suitable transformations. The homotopy approach is employed to construct the analytical solutions, which are shown graphically for sundry parameters including the Deborah numbers, magnetic field, thermophoresis, Brownian motion and non-uniform heat generation/absorption. A comparative study is also presented, showing the comparison of the present results with already published data.
Generalized martingale model of the uncertainty evolution of streamflow forecasts
Zhao, Tongtiegang; Zhao, Jianshi; Yang, Dawen; Wang, Hao
2013-07-01
Streamflow forecasts are dynamically updated in real time, thus facilitating a process of forecast uncertainty evolution. Forecast uncertainty generally decreases over time as more hydrologic information becomes available. The process of forecasting and uncertainty updating can be described by the martingale model of forecast evolution (MMFE), which formulates the total forecast uncertainty of a streamflow in one future period as the sum of forecast improvements in the intermediate periods. This study tests the assumptions of MMFE, i.e., unbiasedness, Gaussianity, temporal independence, and stationarity, using real-world streamflow forecast data. The results show that (1) real-world forecasts can be biased and tend to underestimate the actual streamflow, and (2) real-world forecast uncertainty is non-Gaussian and heavy-tailed. Based on these statistical tests, this study proposes a generalized martingale model (GMMFE) for the simulation of biased and non-Gaussian forecast uncertainties. The new model combines the normal quantile transform (NQT) with MMFE to formulate the uncertainty evolution of real-world streamflow forecasts. Reservoir operations based on synthetic forecasts generated by the GMMFE illustrate that applications of streamflow forecasting facilitate utility improvements and that special attention should be focused on the statistical distribution of forecast uncertainty.
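A minimal simulation of the baseline MMFE assumptions (unbiased, Gaussian, independent forecast improvements) can make the uncertainty-evolution idea concrete; the standard deviations and horizon below are illustrative, and the GMMFE itself would additionally pass the improvements through a normal quantile transform:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, lead = 20000, 5
sigma = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # std dev of each forecast improvement

# Under MMFE the actual flow equals the initial forecast plus the sum of all
# future (zero-mean, independent) forecast improvements.
f0 = 10.0
improvements = rng.normal(0.0, sigma, size=(n_runs, lead))
actual = f0 + improvements.sum(axis=1)

# After k updates the forecast has absorbed the first k improvements, so the
# remaining error variance is the sum of the remaining sigma^2 terms.
errors_var = [float(np.var(actual - (f0 + improvements[:, :k].sum(axis=1))))
              for k in range(lead + 1)]
```

The simulated error variance shrinks monotonically toward zero as the lead time shortens, which is exactly the "forecast uncertainty decreases over time" behavior the abstract describes.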
Introducing Charge Hydration Asymmetry into the Generalized Born Model.
Mukhopadhyay, Abhishek; Aguilar, Boris H; Tolokh, Igor S; Onufriev, Alexey V
2014-04-08
The effect of charge hydration asymmetry (CHA)-non-invariance of solvation free energy upon solute charge inversion-is missing from the standard linear response continuum electrostatics. The proposed charge hydration asymmetric-generalized Born (CHA-GB) approximation introduces this effect into the popular generalized Born (GB) model. The CHA is added to the GB equation via an analytical correction that quantifies the specific propensity of CHA of a given water model; the latter is determined by the charge distribution within the water model. Significant variations in CHA seen in explicit water (TIP3P, TIP4P-Ew, and TIP5P-E) free energy calculations on charge-inverted "molecular bracelets" are closely reproduced by CHA-GB, with the accuracy similar to models such as SEA and 3D-RISM that go beyond the linear response. Compared against reference explicit (TIP3P) electrostatic solvation free energies, CHA-GB shows about a 40% improvement in accuracy over the canonical GB, tested on a diverse set of 248 rigid small neutral molecules (root mean square error, rmse = 0.88 kcal/mol for CHA-GB vs 1.24 kcal/mol for GB) and 48 conformations of amino acid analogs (rmse = 0.81 kcal/mol vs 1.26 kcal/mol). CHA-GB employs a novel definition of the dielectric boundary that does not subsume the CHA effects into the intrinsic atomic radii. The strategy leads to finding a new set of intrinsic atomic radii optimized for CHA-GB; these radii show physically meaningful variation with the atom type, in contrast to the radii set optimized for GB. Compared to several popular radii sets used with the original GB model, the new radii set shows better transferability between different classes of molecules.
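For orientation, the canonical GB energy that CHA-GB corrects can be sketched in a few lines using the widely used Still et al. pairwise form; the charges, radii, and dielectric constants below are illustrative, and the CHA correction term introduced in the paper is not included:

```python
import numpy as np

KE = 332.06  # Coulomb constant in kcal*Angstrom/(mol*e^2)

def gb_energy(q, pos, R, eps_in=1.0, eps_out=78.5):
    """Canonical GB polar solvation energy (Still et al. pairwise form):
    dG = -0.5*(1/eps_in - 1/eps_out) * sum_ij q_i*q_j / f_GB(r_ij),
    with f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2/(4*Ri*Rj)))."""
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    RiRj = np.outer(R, R)
    f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
    tau = 1.0 / eps_in - 1.0 / eps_out
    return -0.5 * KE * tau * np.sum(np.outer(q, q) / f_gb)

# A single ion reduces to the Born formula: -0.5*tau*KE*q^2/R
e_ion = gb_energy(np.array([1.0]), np.zeros((1, 3)), np.array([2.0]))
```

Note the invariance under charge inversion (q and -q give the same energy): this is precisely the symmetry of the standard GB form that the CHA-GB correction is designed to break.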
A general relativistic hydrostatic model for a galaxy
International Nuclear Information System (INIS)
Hojman, R.; Pena, L.; Zamorano, N.
1991-08-01
The existence of huge amounts of mass lying at the center of some galaxies has been inferred from data gathered at different wavelengths. It seems reasonable, then, to incorporate general relativity in the study of these objects. A general relativistic hydrostatic model for a galaxy is studied. We assume that the galaxy is dominated by the dark mass except at the nucleus, where the luminous matter prevails. It considers four different concentric spherically symmetric regions, properly matched and with a specific equation of state for each of them. It yields a slowly rising orbital velocity for a test particle moving in the background gravitational field of the dark matter region. In this sense we think of this model as representing a spiral galaxy. The dependence of mass on radius in cluster and field spiral galaxies, published recently, can be used to fix the size of the inner luminous core. A vanishing pressure at the edge of the galaxy and the assumption of hydrostatic equilibrium everywhere generate a jump in the density and the orbital velocity at the shell enclosing the galaxy. This is a prediction of this model. The ratios between the sizes of the core and the shells introduced here are proportional to their densities. In this sense the model is scale invariant. It can be used to reproduce a galaxy or the central region of a galaxy. We have also compared our results with those obtained with the Newtonian isothermal sphere. The luminosity is not included in our model as an extra variable in the determination of the orbital velocity. (author). 29 refs, 10 figs
A generalized methodology to characterize composite materials for pyrolysis models
McKinnon, Mark B.
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to
A GENERALIZATION OF TRADITIONAL KANO MODEL FOR CUSTOMER REQUIREMENTS ANALYSIS
Directory of Open Access Journals (Sweden)
Renáta Turisová
2015-07-01
Full Text Available Purpose: The theory of attractiveness determines the relationship between the technically achieved and customer-perceived quality of product attributes. The most frequently used approach in the theory of attractiveness is the implementation of Kano's model. There exist many generalizations of that model which take into consideration various aspects and approaches focused on understanding customer preferences and identifying their priorities for a product being sold. The aim of this article is to outline another possible generalization of Kano's model. Methodology/Approach: The traditional Kano's model captures the nonlinear relationship between the achieved attributes of quality and customer requirements. The individual attributes of quality are divided into three main categories: must-be, one-dimensional, and attractive quality, and into two side categories: indifferent and reverse quality. A well-selling product has to contain the must-be attributes. It should contain as many one-dimensional attributes as possible. If there are also supplementary attractive attributes, the attractiveness of the entire product, from the viewpoint of the customer, rises sharply and nonlinearly, which has a direct positive impact on the decision of a potential customer when purchasing the product. In this article, we show that the inclusion of individual quality attributes of a product into the mentioned categories depends, among other things, on the life-cycle costs of the product, or on its price on the market. Findings: In practice, we often encounter the inclusion of products into different price categories: lower, middle and upper class. For certain types of products the category is either directly declared by the producer (especially in the automotive industry), or is determined by the customer by means of an assessment of available market prices. To each of those groups of products, different customer expectations can be assigned
Generalized Laplacian eigenmaps for modeling and tracking human motions.
Martinez-del-Rincon, Jesus; Lewandowski, Michal; Nebel, Jean-Christophe; Makris, Dimitrios
2014-09-01
This paper presents generalized Laplacian eigenmaps, a novel dimensionality reduction approach designed to address stylistic variations in time series. It generates compact and coherent continuous spaces whose geometry is data-driven. This paper also introduces the graph-based particle filter, a novel methodology conceived for efficient tracking in a low dimensional space derived from a spectral dimensionality reduction method. Its strengths are a propagation scheme, which facilitates prediction in time and style, and a noise model coherent with the manifold, which prevents divergence and increases robustness. Experiments show that a combination of both techniques achieves state-of-the-art performance for human pose tracking in underconstrained scenarios.
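A bare-bones version of the underlying (non-generalized) Laplacian eigenmaps step, on which the paper builds, can be sketched as follows; the kNN graph construction with 0/1 weights and the toy spiral data are simplifying assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, k=5, dim=2):
    """Plain Laplacian eigenmaps (Belkin & Niyogi): symmetrized kNN graph with
    0/1 weights, then the generalized eigenproblem L y = lambda D y; returns
    the coordinates for the `dim` smallest non-trivial eigenvalues."""
    n = X.shape[0]
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0   # k nearest neighbors, self excluded
    W = np.maximum(W, W.T)                        # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # unnormalized graph Laplacian
    _, vecs = eigh(L, D)                          # generalized symmetric eigenproblem
    return vecs[:, 1:dim + 1]                     # drop the constant eigenvector

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3 * np.pi, 60))
X = np.column_stack([t * np.cos(t), t * np.sin(t)])  # points along a spiral
Y = laplacian_eigenmaps(X, k=4, dim=1)               # 1-D embedding of the curve
```

The generalized variant of the paper additionally structures the embedding to separate style from content in time series, which this sketch does not attempt.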
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
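The estimation idea, polynomial spline smoothing for the nonparametric part plus least squares (the Gaussian special case of quasi-likelihood) for the linear part, can be sketched on synthetic data; the truncated-power basis and knot placement below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)                      # parametric (linear) covariate
z = rng.uniform(0.0, 1.0, size=n)           # nonparametric covariate
beta_true = 1.5
y = beta_true * x + np.sin(2 * np.pi * z) + rng.normal(scale=0.1, size=n)

# cubic truncated-power spline basis for the unknown smooth function f(z)
knots = np.linspace(0.1, 0.9, 9)
B = np.column_stack([np.ones(n), z, z ** 2, z ** 3] +
                    [np.clip(z - k, 0.0, None) ** 3 for k in knots])

# one least-squares solve on [x | spline basis] estimates beta and f jointly
design = np.column_stack([x, B])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[0]
f_hat = B @ coef[1:]
```

Because the spline part reduces to a finite basis expansion, the whole fit is a single linear least-squares problem, which reflects the computational-simplicity point made in the abstract.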
Computable general equilibrium model fiscal year 2013 capability development report
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-17
This report documents progress made on continued development of the National Infrastructure Simulation and Analysis Center (NISAC) Computable General Equilibrium Model (NCGEM), developed in fiscal year 2012. In fiscal year 2013, NISAC refined the treatment of the labor market and performed tests with the model to examine the properties of the solutions computed by the model. To examine these, developers conducted a series of 20 simulations for 20 U.S. States. Each of these simulations compared an economic baseline simulation with an alternative simulation that assumed a 20-percent reduction in overall factor productivity in the manufacturing industries of each State. Differences in the simulation results between the baseline and alternative simulations capture the economic impact of the reduction in factor productivity. While not every State is affected in precisely the same way, the reduction in manufacturing industry productivity negatively affects the manufacturing industries in each State to an extent proportional to the reduction in overall factor productivity. Moreover, overall economic activity decreases when manufacturing sector productivity is reduced. Developers ran two additional simulations: (1) a version of the model for the State of Michigan, with manufacturing divided into two sub-industries (automobile and other vehicle manufacturing as one sub-industry and the rest of manufacturing as the other sub-industry); and (2) a version of the model for the United States, divided into 30 industries. NISAC conducted these simulations to illustrate the flexibility of industry definitions in NCGEM and to examine the simulation properties of the model in more detail.
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
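A minimal peaks-over-threshold fit with the GPD, the building block of the disaggregation model, might look as follows; the threshold, parameter values, and use of `scipy.stats.genpareto` are illustrative and do not reproduce the paper's Bayesian treatment or scaling relationship:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
threshold = 20.0                     # a chosen high quantile of daily precipitation
xi_true, scale_true = 0.2, 5.0       # GPD shape and scale of the excesses

# synthetic excesses over the threshold (peaks-over-threshold sample)
excess = genpareto.rvs(xi_true, loc=0.0, scale=scale_true, size=5000,
                       random_state=rng)

# maximum-likelihood fit with the location pinned at 0 (excesses start at 0)
xi_hat, _, scale_hat = genpareto.fit(excess, floc=0)

# e.g. the level exceeded by 1% of the peaks: threshold + fitted GPD quantile
rl = threshold + genpareto.ppf(0.99, xi_hat, loc=0.0, scale=scale_hat)
```

In the paper's setting this fit would be repeated across durations, with the thresholds and exceedance rates tied together by the scale-invariance relationship rather than estimated independently.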
NLO electroweak corrections in general scalar singlet models
Costa, Raul; Sampaio, Marco O. P.; Santos, Rui
2017-07-01
If no new physics signals are found at the Large Hadron Collider Run-2 in the coming years, an increase in the precision of Higgs coupling measurements will shift the discussion to the effects of higher order corrections. In Beyond the Standard Model (BSM) theories this may become the only tool to probe new physics. Extensions of the Standard Model (SM) with several scalar singlets may address several of its problems, namely to explain dark matter, the matter-antimatter asymmetry, or to improve the stability of the SM up to the Planck scale. In this work we propose a general framework to calculate one-loop corrections to the propagators and to the scalar field vacuum expectation values of BSM models with an arbitrary number of scalar singlets. We then apply our method to a real and to a complex scalar singlet model. We assess the importance of the one-loop radiative corrections, first by computing them for a tree level mixing sum constraint, and then for the main Higgs production process gg → H. We conclude that, for the currently allowed parameter space of these models, the corrections can be at most a few percent. Notably, a non-zero correction can survive when dark matter is present, in the SM-like limit of the Higgs couplings to other SM particles.
Detection of Fraudulent Transactions Through a Generalized Mixed Linear Models
Directory of Open Access Journals (Sweden)
Jackelyne Gómez–Restrepo
2012-12-01
Full Text Available The detection of bank frauds is a topic which many financial sector companies have invested time and resources into. However, finding patterns in the methodologies used to commit fraud in banks is a job that primarily involves intimate knowledge of customer behavior, with the idea of isolating those transactions which do not correspond to what the client usually does. Thus, the solutions proposed in the literature tend to focus on identifying outliers or groups, but fail to analyse each client or forecast fraud. This paper evaluates the implementation of a generalized linear model to detect fraud. With this model, unlike conventional methods, we consider the heterogeneity of customers. We not only generate a global model, but also a model for each customer which describes the behavior of each one according to their transactional history and previously detected fraudulent transactions. In particular, a mixed logistic model is used to estimate the probability that a transaction is fraudulent, using information recorded by the banking systems at different moments in time.
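To make the per-customer idea concrete, here is a hedged sketch that fits customer-specific intercepts plus a shared slope by gradient ascent on the logistic log-likelihood; this is a fixed-effects approximation of the mixed logistic model, and all features, sizes, and parameter values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cust, per = 50, 200
cust = np.repeat(np.arange(n_cust), per)         # customer id per transaction
amount = rng.normal(size=n_cust * per)           # standardized transaction feature
b_cust = rng.normal(scale=1.0, size=n_cust)      # heterogeneity across customers
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + b_cust[cust] + 1.0 * amount)))
y = rng.binomial(1, p_true)                      # 1 = fraudulent transaction

# Fixed-effects approximation of the mixed logistic model: one intercept per
# customer plus a shared slope, fitted by gradient ascent on the log-likelihood.
w = np.zeros(n_cust + 1)                         # [intercept_1..n_cust, slope]
for _ in range(3000):
    eta = w[cust] + w[-1] * amount
    p = 1.0 / (1.0 + np.exp(-eta))
    grad_int = np.bincount(cust, weights=y - p, minlength=n_cust)
    grad_slope = float(np.sum((y - p) * amount))
    w[:n_cust] += 0.1 * grad_int / per
    w[-1] += 0.1 * grad_slope / (n_cust * per)
slope_hat = w[-1]
```

A true mixed model would instead treat the customer intercepts as random draws from a common distribution and integrate over them; the sketch only shows why per-customer terms capture the heterogeneity that a single global model misses.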
GENERALIZATION TECHNIQUE FOR 2D+SCALE DHE DATA MODEL
Directory of Open Access Journals (Sweden)
H. Karim
2016-10-01
Full Text Available Different users or applications need different scale models, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension such as scale or/and time into a 3D model, but the implementation of the scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half-Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models for the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.
Generalized constraint neural network regression model subject to linear priors.
Qu, Ya-Jun; Hu, Bao-Gang
2011-12-01
This paper reports an extension of our previous investigations on adding transparency to neural networks. We focus on a class of linear priors (LPs), such as symmetry, ranking list, boundary, monotonicity, etc., which represent either linear-equality or linear-inequality priors. A generalized constraint neural network-LPs (GCNN-LPs) model is studied. Unlike other existing modeling approaches, the GCNN-LP model exhibits two advantages. First, any LP is embedded by an explicitly structural mode, which may add a higher degree of transparency than using a pure algorithm mode. Second, a direct elimination and least squares approach is adopted to study the model, which produces better performance in both accuracy and computational cost than the Lagrange multiplier techniques in experiments. Specific attention is paid to both "hard (strictly satisfied)" and "soft (weakly satisfied)" constraints for regression problems. Numerical investigations are made on synthetic examples as well as on real-world datasets. Simulation results demonstrate the effectiveness of the proposed modeling approach in comparison with other existing approaches.
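The "direct elimination" treatment of a linear-equality prior can be sketched as an equality-constrained least-squares solve: the parameters are re-expressed as a particular solution plus the null space of the constraint, and ordinary least squares runs in the reduced coordinates. The symmetry prior and toy data below are illustrative, not the paper's network:

```python
import numpy as np
from scipy.linalg import null_space

def constrained_lsq(A, b, C, d):
    """min ||Ax - b||_2 subject to Cx = d, by direct elimination:
    x = x0 + N z with C x0 = d and the columns of N spanning null(C)."""
    x0, *_ = np.linalg.lstsq(C, d, rcond=None)   # a particular solution of Cx = d
    N = null_space(C)
    z, *_ = np.linalg.lstsq(A @ N, b - A @ x0, rcond=None)
    return x0 + N @ z

# toy "hard" symmetry prior w1 = w2 on a 3-parameter linear model
rng = np.random.default_rng(5)
A = rng.normal(size=(100, 3))
x_true = np.array([0.7, 0.7, -1.2])              # consistent with the prior
b = A @ x_true + rng.normal(scale=0.01, size=100)
C = np.array([[1.0, -1.0, 0.0]])
d = np.array([0.0])
x_hat = constrained_lsq(A, b, C, d)
```

Unlike a Lagrange multiplier formulation, the eliminated problem is smaller and unconstrained, which matches the cost advantage the abstract reports; the constraint is satisfied exactly by construction.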
[Treatment of cloud radiative effects in general circulation models]
International Nuclear Information System (INIS)
Wang, W.C.
1993-01-01
This is a renewal proposal for an on-going project of the Department of Energy (DOE)/Atmospheric Radiation Measurement (ARM) Program. The objective of the ARM Program is to improve the treatment of radiation-cloud interactions in GCMs so that reliable predictions of the timing and magnitude of greenhouse gas-induced global warming and regional responses can be made. The ARM Program supports two research areas: (I) the modeling and analysis of data related to the parameterization of clouds and radiation in general circulation models (GCMs); and (II) the development of advanced instrumentation for both mapping the three-dimensional structure of the atmosphere and high accuracy/precision radiometric observations. The present project conducts research in area (I) and focuses on the GCM treatment of cloud life cycle, optical properties, and vertical overlapping. The project has two tasks: (1) Development and Refinement of GCM Radiation-Cloud Treatment Using ARM Data; and (2) Validation of GCM Radiation-Cloud Treatment
A stratiform cloud parameterization for General Circulation Models
International Nuclear Information System (INIS)
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-01-01
The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species
Convex Relaxations for a Generalized Chan-Vese Model
Bae, Egil
2013-01-01
We revisit the Chan-Vese model of image segmentation with a focus on the encoding with several integer-valued labeling functions. We relate several representations with varying amounts of complexity and demonstrate the connection to recent relaxations for product sets and to dual maxflow-based formulations. For some special cases, it can be shown that it is possible to guarantee binary minimizers. While this is not true in general, we show how to derive a convex approximation of the combinatorial problem for more than 4 phases. We also provide a method to avoid overcounting of boundaries in the original Chan-Vese model without departing from the efficient product-set representation. Finally, we derive an algorithm to solve the associated discretized problem, and demonstrate that it yields good approximations for the segmentation problem with various numbers of regions. © 2013 Springer-Verlag.
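Stripped of the length-regularization and relaxation machinery that the paper actually studies, the Chan-Vese data term alone already yields a two-phase piecewise-constant segmentation by alternating mean updates and relabeling; this toy sketch (synthetic image, no boundary term) only illustrates that part of the model:

```python
import numpy as np

def two_phase_segment(img, iters=20):
    """Alternate the two Chan-Vese data-term steps: update the region means
    c1, c2, then relabel each pixel by comparing (u - c1)^2 with (u - c2)^2.
    The boundary-length regularization of the full model is omitted."""
    labels = img > img.mean()                    # initial partition
    for _ in range(iters):
        c1, c2 = img[labels].mean(), img[~labels].mean()
        new = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new, labels):          # converged
            break
        labels = new
    return labels

# synthetic image: bright square on a dark, mildly noisy background
rng = np.random.default_rng(6)
img = rng.normal(0.2, 0.05, size=(32, 32))
img[8:24, 8:24] = rng.normal(0.8, 0.05, size=(16, 16))
seg = two_phase_segment(img)
```

The multi-phase and convex-relaxation contributions of the paper address exactly what this sketch lacks: regularized boundaries and more than two regions encoded with several labeling functions.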
A Chemical Containment Model for the General Purpose Work Station
Flippen, Alexis A.; Schmidt, Gregory K.
1994-01-01
Contamination control is a critical safety requirement imposed on experiments flying on board the Spacelab. The General Purpose Work Station, a Spacelab support facility used for life sciences space flight experiments, is designed to remove volatile compounds from its internal airpath and thereby minimize contamination of the Spacelab. This is accomplished through the use of a large, multi-stage filter known as the Trace Contaminant Control System. Many experiments planned for the Spacelab require the use of toxic, volatile fixatives in order to preserve specimens prior to postflight analysis. The NASA-Ames Research Center SLS-2 payload, in particular, necessitated the use of several toxic, volatile compounds in order to accomplish the many inflight experiment objectives of this mission. A model was developed based on earlier theories and calculations which provides conservative predictions of the resultant concentrations of these compounds given various spill scenarios. This paper describes the development and application of this model.
Generalized flux states of the t-J model
International Nuclear Information System (INIS)
Nori, F.; Abrahams, E.; Zimanyi, G.T.
1990-01-01
We investigate certain generalized flux phases arising in a mean-field approach to the t-J model. First, we establish that the energy of noninteracting electrons moving in a uniform magnetic field has an absolute minimum as a function of the flux at exactly one flux quantum per particle. Using this result, we show that if the hard-core nature of the hole bosons is taken into account, then the slave-boson mean-field approximation for the t-J Hamiltonian allows for a solution where both the spinons and the holons experience an average flux of one flux quantum per particle. This enables them to achieve the lowest possible energy within the manifold of spatially uniform flux states. In the case of the continuum model, this is possible only for certain fractional fillings and we speculate that the system may react to this frustration effect by phase separation
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
A stratiform cloud parameterization for general circulation models
International Nuclear Information System (INIS)
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-01-01
The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various clouds variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species
A more general interacting model of holographic dark energy
International Nuclear Information System (INIS)
Yu Fei; Zhang Jingfei; Lu Jianbo; Wang Wei; Gui Yuanxing
2010-01-01
So far, no theories or observational data have ruled out the presence of interaction between dark energy and dark matter. We naturally extend the holographic dark energy (HDE) model proposed by Granda and Oliveros, in which the dark energy density includes not only the square of the Hubble scale but also the time derivative of the Hubble scale, to the case with interaction, and the analytic forms for the cosmic parameters are obtained under specific boundary conditions. The various behaviors concerning the cosmic expansion depend on the introduced numerical parameters, which are also constrained. The more general interacting model inherits the features of the previous HDE models, keeping the consistency of the theory.
General analysis of dark radiation in sequestered string models
Energy Technology Data Exchange (ETDEWEB)
Cicoli, Michele [ICTP,Strada Costiera 11, Trieste 34014 (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Muia, Francesco [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN, Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)
2015-12-22
We perform a general analysis of axionic dark radiation produced from the decay of the lightest modulus in the sequestered LARGE Volume Scenario. We discuss several cases depending on the form of the Kähler metric for visible sector matter fields and the mechanism responsible for achieving a de Sitter vacuum. The leading decay channels which determine dark radiation predictions are to hidden sector axions, visible sector Higgses and SUSY scalars depending on their mass. We show that in most of the parameter space of split SUSY-like models squarks and sleptons are heavier than the lightest modulus. Hence dark radiation predictions previously obtained for MSSM-like cases hold more generally also for split SUSY-like cases since the decay channel to SUSY scalars is kinematically forbidden. However the inclusion of string loop corrections to the Kähler potential gives rise to a parameter space region where the decay channel to SUSY scalars opens up, leading to a significant reduction of dark radiation production. In this case, the simplest model with a shift-symmetric Higgs sector can suppress the excess of dark radiation ΔN_eff to values as small as 0.14, in perfect agreement with current experimental bounds. Depending on the exact mass of the SUSY scalars all values in the range 0.14 ≲ ΔN_eff ≲ 1.6 are allowed. Interestingly, dark radiation overproduction can be avoided also in the absence of a Giudice-Masiero coupling.
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Diabatic models with transferrable parameters for generalized chemical reactions
Reimers, Jeffrey R.; McKemmish, Laura K.; McKenzie, Ross H.; Hush, Noel S.
2017-05-01
Diabatic models applied to adiabatic electron-transfer theory yield many equations involving just a few parameters that connect ground-state geometries and vibration frequencies to excited-state transition energies and vibration frequencies to the rate constants for electron-transfer reactions, utilizing properties of the conical-intersection seam linking the ground and excited states through the Pseudo Jahn-Teller effect. We review how such simplicity in basic understanding can also be obtained for general chemical reactions. The key feature that must be recognized is that electron-transfer (or hole transfer) processes typically involve one electron (hole) moving between two orbitals, whereas general reactions typically involve two electrons or even four electrons for processes in aromatic molecules. Each additional moving electron leads to new high-energy but interrelated conical-intersection seams that distort the shape of the critical lowest-energy seam. Recognizing this feature shows how conical-intersection descriptors can be transferred between systems, and how general chemical reactions can be compared using the same set of simple parameters. Mathematical relationships are presented depicting how different conical-intersection seams relate to each other, showing that complex problems can be reduced into an effective interaction between the ground-state and a critical excited state to provide the first semi-quantitative implementation of Shaik’s “twin state” concept. Applications are made (i) demonstrating why the chemistry of the first-row elements is qualitatively so different to that of the second and later rows, (ii) deducing the bond-length alternation in hypothetical cyclohexatriene from the observed UV spectroscopy of benzene, (iii) demonstrating that commonly used procedures for modelling surface hopping based on inclusion of only the first-derivative correction to the Born-Oppenheimer approximation are valid in no region of the chemical
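The conical-intersection seam central to this framework arises already in the standard two-state diabatic construction. The following is a generic textbook sketch, not the authors' specific parameterization:

```latex
% Two diabatic states E_1(Q), E_2(Q) coupled by J give the adiabatic surfaces
H(Q) = \begin{pmatrix} E_1(Q) & J \\ J & E_2(Q) \end{pmatrix},
\qquad
E_\pm(Q) = \frac{E_1 + E_2}{2}
  \pm \sqrt{\left(\frac{E_1 - E_2}{2}\right)^{2} + J^{2}} .
% The surfaces E_+ and E_- touch (a conical intersection) only where
% E_1(Q) = E_2(Q) and J(Q) = 0 simultaneously; the set of such geometries
% forms the seam whose shape the transferable parameters describe.
```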
Generalized Potential Energy Finite Elements for Modeling Molecular Nanostructures.
Chatzieleftheriou, Stavros; Adendorff, Matthew R; Lagaros, Nikos D
2016-10-24
The potential energy of molecules and nanostructures is commonly calculated in the molecular mechanics formalism by superimposing bonded and nonbonded atomic energy terms, i.e., bonds between two atoms, bond angles involving three atoms, dihedral angles involving four atoms, nonbonded terms expressing the Coulomb and Lennard-Jones interactions, etc. In this work a new, generalized numerical simulation is presented for studying the mechanical behavior of three-dimensional nanostructures at the atomic scale. The energy gradient and Hessian matrix of such assemblies are usually computed numerically; a potential energy finite element model is proposed herein where these two components are expressed analytically. In particular, generalized finite elements are developed that express the interactions among atoms in a manner equivalent to that invoked in simulations performed based on the molecular dynamics method. Thus, the global tangent stiffness matrix for any nanostructure is formed as an assembly of the generalized finite elements and is directly equivalent to the Hessian matrix of the potential energy. The advantages of the proposed model are identified in terms of both accuracy and computational efficiency. In the case of popular force fields (e.g., CHARMM), the computation of the Hessian matrix by implementing the proposed method is of the same order as that of the gradient. This analysis can be used to minimize the potential energy of molecular systems under nodal loads in order to derive constitutive laws for molecular systems where the entropy and solvent effects are neglected and which can be approximated as solids, such as double-stranded DNA nanostructures. In this context, the sequence-dependent stretch modulus for some typical base-pair steps is calculated.
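The key point, that bonded energy terms admit analytic derivatives, can be illustrated on the simplest such term. The sketch below uses a harmonic bond with a made-up force constant; it is not code from the paper, only a demonstration that the analytic gradient matches a numeric one.

```python
import math

def bond_energy(x1, x2, k=450.0, r0=1.0):
    # Harmonic bond term E = 0.5*k*(r - r0)^2; k and r0 are illustrative.
    r = math.dist(x1, x2)
    return 0.5 * k * (r - r0) ** 2

def bond_gradient(x1, x2, k=450.0, r0=1.0):
    # Analytic gradient with respect to x2 (the gradient w.r.t. x1 is its negative):
    # dE/dx2_i = k*(r - r0)*(x2_i - x1_i)/r
    r = math.dist(x1, x2)
    coef = k * (r - r0) / r
    return [coef * (b - a) for a, b in zip(x1, x2)]

# Verify the analytic form against central finite differences.
x1, x2 = (0.0, 0.0, 0.0), (1.2, 0.1, -0.3)
g = bond_gradient(x1, x2)
h = 1e-6
num = []
for i in range(3):
    xp, xm = list(x2), list(x2)
    xp[i] += h
    xm[i] -= h
    num.append((bond_energy(x1, xp) - bond_energy(x1, xm)) / (2 * h))
print(all(abs(a - b) < 1e-4 for a, b in zip(g, num)))  # True
```

The paper's contribution is to express such gradients (and the Hessian) analytically for whole assemblies, assembled element-by-element like a finite-element stiffness matrix.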
Designing a Wien Filter Model with General Particle Tracer
Mitchell, John; Hofler, Alicia
2017-09-01
The Continuous Electron Beam Accelerator Facility injector employs a beamline component called a Wien filter which is typically used to select charged particles of a certain velocity. The Wien filter is also used to rotate the polarization of a beam for parity violation experiments. The Wien filter consists of perpendicular electric and magnetic fields. The electric field changes the spin orientation, but also imposes a transverse kick which is compensated for by the magnetic field. The focus of this project was to create a simulation of the Wien filter using General Particle Tracer. The results from these simulations were vetted against machine data to analyze the accuracy of the Wien model. Due to the close agreement between simulation and experiment, the data suggest that the Wien filter model is accurate. The model allows a user to input either the desired electric or magnetic field of the Wien filter along with the beam energy as parameters, and is able to calculate the perpendicular field strength required to keep the beam on axis. The updated model will aid in future diagnostic tests of any beamline component downstream of the Wien filter, and allow users to easily calculate the electric and magnetic fields needed for the filter to function properly. Funding support provided by DOE Office of Science's Student Undergraduate Laboratory Internship program.
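The compensation condition described above, where the magnetic force cancels the electric kick, reduces to B = E/v for an on-axis particle. A minimal sketch of that calculation, with illustrative field and energy values rather than actual CEBAF settings:

```python
import math

C = 299_792_458.0      # speed of light, m/s
ME_KEV = 510.998_95    # electron rest energy, keV (CODATA)

def wien_b_field(e_field_v_per_m, kinetic_energy_kev):
    """Magnetic field (T) that cancels the electric kick for an electron.

    Force balance q*E = q*v*B gives B = E / v, with v obtained from the
    relativistic kinetic energy: gamma = 1 + T/(m*c^2), beta = sqrt(1 - 1/gamma^2).
    """
    gamma = 1.0 + kinetic_energy_kev / ME_KEV
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return e_field_v_per_m / (beta * C)

# Hypothetical example: a 1 MV/m electric field and a 130 keV electron beam.
print(wien_b_field(1.0e6, 130.0), "T")
```

The same relation can be inverted (E = v*B) when the magnetic field is the given quantity, which mirrors the model's ability to compute either perpendicular field from the other plus the beam energy.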
Singular solitons of generalized Camassa-Holm models
International Nuclear Information System (INIS)
Tian Lixin; Sun Lu
2007-01-01
Two generalizations of the Camassa-Holm system, associated with singular analysis, are proposed to examine Painleve integrability and to extend the known analytic solitons. A remarkable feature of the physical model is that it admits a peakon solution, which has a peaked form. An alternative WTC test allows such models to be identified directly by inserting a formulated ansatz into them. Because the two models have the Painleve property, Painleve-Baecklund systems can be constructed through the expansion of solitons about the singularity manifold. Implementations in Maple reveal plentiful new types of solitonic structures and some kink waves, which are affected by the variation of energy. Direct numerical simulations show that if the energy becomes infinite in finite time, the soliton systems collapse. In particular, two collapses coexist in the regular solitons, occurring around their central regions. Simulations also show that the non-zero parts of compactons and anti-compactons arise at the bottoms of periodic waves. Floating solitary waves of infinite amplitude are obtained, in contrast to which a finite-amplitude blow-up soliton is found. Periodic blow-ups are found too, and special kinks with periodic cuspons are derived
General Description of Fission Observables - JEFF Report 24. GEF Model
International Nuclear Information System (INIS)
Schmidt, Karl-Heinz; Jurado, Beatriz; Amouroux, Charlotte
2014-06-01
The Joint Evaluated Fission and Fusion (JEFF) Project is a collaborative effort among the member countries of the OECD Nuclear Energy Agency (NEA) Data Bank to develop a reference nuclear data library. The JEFF library contains sets of evaluated nuclear data, mainly for fission and fusion applications; it contains a number of different data types, including neutron and proton interaction data, radioactive decay data, fission yield data and thermal scattering law data. The General fission (GEF) model is based on novel theoretical concepts and ideas developed to model low energy nuclear fission. The GEF code calculates fission-fragment yields and associated quantities (e.g. prompt neutron and gamma emission) for a large range of nuclei and excitation energies. This opens up the possibility of a qualitative step forward in further improving the JEFF fission yields sub-library. This report describes the GEF model, which explains the complex appearance of fission observables by universal principles of theoretical models and considerations based on fundamental laws of physics and mathematics. The approach reveals a high degree of regularity and provides considerable insight into the physics of the fission process. Fission observables can be calculated with a precision that complies with the needs of applications in nuclear technology. The relevance of the approach for examining the consistency of experimental results and for evaluating nuclear data is demonstrated. (authors)
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
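A GRNN prediction is essentially the Nadaraya-Watson kernel regression mentioned above: a kernel-weighted average of training targets. A minimal isotropic sketch (one shared bandwidth sigma; the adaptive, anisotropic variant used in the study would fit one bandwidth per feature):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=1.0):
    # Nadaraya-Watson / GRNN estimate with a Gaussian kernel:
    # y_hat(x) = sum_i w_i * y_i / sum_i w_i,  w_i = exp(-||x - x_i||^2 / (2*sigma^2))
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
        for xi in train_x
    ]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy 1-D example: noiseless samples of y = x^2 on [0, 2].
xs = [(0.0,), (0.5,), (1.0,), (1.5,), (2.0,)]
ys = [x[0] ** 2 for x in xs]
print(grnn_predict((1.0,), xs, ys, sigma=0.2))  # close to 1.0
```

Leave-one-out cross-validation, as used in the study, simply repeats this prediction for each sample with that sample excluded from the training set, and ranks feature subsets by the resulting error.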
Peer substance use overestimation among French university students: a cross-sectional survey
Directory of Open Access Journals (Sweden)
Dautzenberg Bertrand
2010-03-01
Background: Normative misperceptions have been widely documented for alcohol use among U.S. college students. There is less research on other substances or European cultural contexts. This study explores which factors are associated with alcohol, tobacco and cannabis use misperceptions among French college students, focusing on substance use. Methods: 12 classes of second-year college students (n = 731) in sociology, medicine, nursing or foreign language estimated the proportion of tobacco, cannabis and alcohol use and heavy episodic drinking among their peers and reported their own use. Results: Peer substance use overestimation frequency was 84% for tobacco, 55% for cannabis, 37% for alcohol and 56% for heavy episodic drinking. Cannabis users (p = 0.006), alcohol users (p = 0.003) and heavy episodic drinkers (p = 0.002) are more likely to overestimate the prevalence of these consumptions. Tobacco users are less likely to overestimate peer prevalence of smoking (p = 0.044). Women are more likely to overestimate tobacco use. Conclusions: Local interventions that focus on creating realistic perceptions of substance use prevalence could be considered for cannabis and alcohol prevention on French campuses.
Seeing ghosts: Negative body evaluation predicts overestimation of negative social feedback
Alleva, J.M.; Lange, W.G.; Jansen, A.T.M.; Martijn, C.
2014-01-01
The current study investigated whether negative body evaluation predicts women's overestimation of negative social feedback related to their own body (i.e., covariation bias). Sixty-five female university students completed a computer task where photos of their own body, of a control woman's body,
General Description of Fission Observables: GEF Model Code
Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.
2016-01-01
The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is
Factors associated with overestimation of asthma control: A cross-sectional study in Australia.
Bereznicki, Bonnie J; Chapman, Millicent P; Bereznicki, Luke R E
2017-05-01
To investigate actual and perceived disease control in Australians with asthma, and identify factors associated with overestimation of asthma control. This was a cross-sectional study of Australian adults with asthma, who were recruited via Facebook to complete an online survey. The survey included basic demographic questions, and validated tools assessing asthma knowledge, medication adherence, medicine beliefs, illness perception and asthma control. Items that measured symptoms and frequency of reliever medication use were compared to respondents' self-rating of their own asthma control. Predictors of overestimation of asthma control were determined using multivariate logistic regression. Of 2971 survey responses, 1950 (65.6%) were complete and eligible for inclusion. Overestimation of control was apparent in 45.9% of respondents. Factors independently associated with overestimation of asthma control included education level (OR = 0.755, 95% CI: 0.612-0.931, P = 0.009), asthma knowledge (OR = 0.942, 95% CI: 0.892-0.994, P = 0.029), total asthma control (OR = 0.842, 95% CI: 0.818-0.867, P addictive (OR = 1.144, 95% CI: 1.017-1.287, P = 0.025), and increased feelings of control over asthma (OR = 1.261, 95% CI: 1.191-1.335, P < 0.001). Overestimation of asthma control remains a significant issue in Australians with asthma. The study highlights the importance of encouraging patients to express their feelings about asthma control and beliefs about medicines, and to be more forthcoming with their asthma symptoms. This would help to reveal any discrepancies between perceived and actual asthma control.
Overestimation of Susceptibility Vessel Sign: A Predictive Marker of Stroke Cause.
Zhang, Ruiting; Zhou, Ying; Liu, Chang; Zhang, Meixia; Yan, Shenqiang; Liebeskind, David S; Lou, Min
2017-07-01
The extent of blooming artifact may reflect the amount of paramagnetic material. We thus assessed the overestimation ratio of the susceptibility vessel sign (SVS) on susceptibility-weighted imaging, defined as the extent of SVS width beyond the lumen, and examined its value for predicting the stroke cause in acute ischemic stroke patients. We included consecutive acute ischemic stroke patients with proximal large artery occlusion who underwent both susceptibility-weighted imaging and time-of-flight magnetic resonance angiography within 8 hours poststroke onset. We calculated the length, width, and overestimation ratio of SVS on susceptibility-weighted imaging and then investigated their respective values for predicting the stroke cause. One hundred eleven consecutive patients (72 female; mean age, 66.6±13.4 years) were enrolled, among whom 39 (35.1%) were diagnosed with cardiogenic embolism, 43 (38.7%) with large artery atherosclerosis, and 29 (26.1%) with undetermined cause. The presence, length, width, and overestimation ratio of SVS were all independently associated with the cause of cardiogenic embolism after adjusting for baseline National Institutes of Health Stroke Scale score and infarct volume. After excluding patients with undetermined cause, the sensitivity and specificity of the overestimation ratio of SVS for cardiogenic embolism were 0.971 and 0.913; for the length of SVS, they were 0.629 and 0.739; for the width of SVS, they were 0.829 and 0.826, respectively. The overestimation ratio of SVS can predict cardiogenic embolism with both high sensitivity and specificity, which can be helpful for the management of acute ischemic stroke patients in the hyperacute stage. © 2017 American Heart Association, Inc.
A general formulation for a mathematical PEM fuel cell model
Baschuk, J. J.; Li, Xianguo
A general formulation for a comprehensive fuel cell model, based on the conservation principle is presented. The model formulation includes the electro-chemical reactions, proton migration, and the mass transport of the gaseous reactants and liquid water. Additionally, the model formulation can be applied to all regions of the PEM fuel cell: the bipolar plates, gas flow channels, electrode backing, catalyst, and polymer electrolyte layers. The model considers the PEM fuel cell to be composed of three phases: reactant gas, liquid water, and solid. These three phases can co-exist within the gas flow channels, electrode backing, catalyst, and polymer electrolyte layers. The conservation of mass, momentum, species, and energy are applied to each phase, with the technique of volume averaging being used to incorporate the interactions between the phases as interfacial source terms. In order to avoid problems arising from phase discontinuities, the gas and liquid phases are considered as a mixture. The momentum interactions between the fluid and solid phases are modeled by the Darcy-Forchheimer term. The electro-oxidation of H2 and CO, the reduction of O2, and the heterogeneous oxidation of H2 and CO are considered in the catalyst layers. Due to the small pore size of the polymer electrolyte layer, the generalized Stefan-Maxwell equations, with the polymer considered as a diffusing species, are used to describe species transport. One consequence of considering the gas and liquid phases as a mixture is that expressions for the velocity of the individual phases relative to the mixture must be developed. In the gas flow channels, the flow is assumed homogeneous, while the Darcy and Schlögl equations are used to describe liquid water transport in the electrode backing and polymer electrolyte layers. Thus, two sets of equations, one for the mixture and another for the solid phase, can be developed to describe the processes occurring within a PEM fuel cell. These equations are in
Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models
Faraway, Julian J
2005-01-01
Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...
Generalized multiplicative error models: Asymptotic inference and empirical analysis
Li, Qian
This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationarity and moment conditions, we establish consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
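For readers unfamiliar with the baseline model being extended, a linear MEM(1,1) for a nonnegative series can be simulated in a few lines. The parameter values below are illustrative only, not estimates from the dissertation:

```python
import random

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, seed=42):
    """Simulate a MEM(1,1): x_t = mu_t * eps_t with unit-mean positive
    innovations (exponential(1) here), and
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}."""
    rng = random.Random(seed)
    mu = omega / (1.0 - alpha - beta)  # unconditional mean as starting value
    x = mu
    out = []
    for _ in range(n):
        mu = omega + alpha * x + beta * mu
        x = mu * rng.expovariate(1.0)  # nonnegative, unit-mean innovation
        out.append(x)
    return out

series = simulate_mem(10_000)
print(sum(series) / len(series))  # should be near omega/(1-alpha-beta) = 1.0
```

The Location MEM adds a positive lower bound (shifting x_t), and the Zero-Augmented variant mixes in a point mass at zero; both reduce to this form when those features are switched off.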
A Pacific Ocean general circulation model for satellite data assimilation
Chao, Y.; Halpern, D.; Mechoso, C. R.
1991-01-01
A tropical Pacific Ocean General Circulation Model (OGCM) to be used in satellite data assimilation studies is described. The transfer of the OGCM from a CYBER-205 at NOAA's Geophysical Fluid Dynamics Laboratory to a CRAY-2 at NASA's Ames Research Center is documented. Two 3-year model integrations from identical initial conditions but performed on those two computers are compared. The model simulations are very similar to each other, as expected, but the simulation performed with the higher-precision CRAY-2 is smoother than that with the lower-precision CYBER-205. The CYBER-205 and CRAY-2 use 32- and 64-bit mantissa arithmetic, respectively. The major features of the oceanic circulation in the tropical Pacific, namely the North Equatorial Current, the North Equatorial Countercurrent, the South Equatorial Current, and the Equatorial Undercurrent, are realistically reproduced and their seasonal cycles are described. The OGCM provides a powerful tool for the study of tropical oceans and for the assimilation of satellite altimetry data.
General relativity cosmological models without the big bang
International Nuclear Information System (INIS)
Rosen, N.
1985-01-01
Attention is given to the so-called standard model of the universe in the framework of the general theory of relativity. This model is taken to be homogeneous and isotropic and filled with an ideal fluid characterized by a density and a pressure. Under the assumption that the universe began in a singular state, however, it is hard to understand why the universe is so nearly homogeneous and isotropic at present, for a singularity represents a breakdown of physical laws and therefore cannot predetermine the subsequent symmetries of the universe. The objective of the present investigation is to find a way of avoiding this initial singularity, i.e., to look for a cosmological model without the big bang. The idea is proposed that there exists a limiting density of matter of the order of magnitude of the Planck density, and that this was the density of matter at the moment at which the universe began to expand
A generalized model for estimating the energy density of invertebrates
James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.
2012-01-01
Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96, p cost savings compared to traditional bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
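The ED-pDM relationship is a simple linear regression, so the fitting step can be sketched directly. The data points below are invented to show the procedure; they are not the study's samples or its published coefficients:

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a + b*x, the form of the ED ~ pDM model.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: proportion of dry-to-wet mass (pDM) vs energy density
# (J/g wet mass), made up purely to illustrate the fitting step.
pdm = [0.10, 0.15, 0.20, 0.25, 0.30]
ed = [2.1, 3.2, 4.4, 5.3, 6.5]
a, b = fit_line(pdm, ed)
print(a, b)  # intercept and slope of the fitted line
```

Given such a fitted line, a new sample's ED is predicted from its pDM alone, which is what lets the approach replace bomb calorimetry after a simple wet/dry weighing.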
Application of conditional moment tests to model checking for generalized linear models.
Pan, Wei
2002-06-01
Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimation equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
Spin squeezing in a generalized one-axis twisting model
Jin, Guang-Ri; Liu, Yong-Chun; Liu, Wu-Ming
2009-07-01
We investigate the dependence of spin squeezing on the polar angle of the initial coherent spin state |θ0, φ0⟩ in a generalized one-axis twisting model, where the detuning δ is taken into account. We show explicitly that regardless of δ and φ0, previous results of the ideal one-axis twisting are recovered as long as θ0 = π/2. For a small departure of θ0 from π/2, however, the achievable variance (V−)min ~ N^(2/3), which is larger than the ideal case N^(1/3). We also find that the maximal squeezing time tmin scales as N^(−5/6). Analytic expressions of (V−)min and tmin are presented and they agree with numerical simulations.
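In standard notation, the generalized one-axis twisting model described here takes the following form (one common convention; the paper's notation may differ):

```latex
% One-axis twisting with detuning \delta:
H = \chi J_z^{2} + \delta J_z ,
% acting on an initial coherent spin state with j = N/2:
|\theta_0, \phi_0\rangle
  = e^{-i\theta_0 (J_x \sin\phi_0 - J_y \cos\phi_0)}\,|j, j\rangle .
% For \theta_0 = \pi/2 the ideal one-axis-twisting result
% (V_-)_{\min} \sim N^{1/3} is recovered regardless of \delta and \phi_0;
% for small departures of \theta_0 from \pi/2 the abstract reports
% (V_-)_{\min} \sim N^{2/3} and t_{\min} \sim N^{-5/6}.
```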
dglars: An R Package to Estimate Sparse Generalized Linear Models
Directory of Open Access Journals (Sweden)
Luigi Augugliaro
2014-09-01
dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significantly faster than the predictor-corrector algorithm. For comparison purposes, we have implemented both algorithms.
A general Bayes Weibull inference model for accelerated life testing
International Nuclear Information System (INIS)
Dorp, J. Rene van; Mazzuchi, Thomas A.
2005-01-01
This article presents the development of a general Bayes inference model for accelerated life testing. The failure times at a constant stress level are assumed to belong to a Weibull distribution, but the specification of strict adherence to a parametric time-transformation function is not required. Rather, prior information is used to indirectly define a multivariate prior distribution for the scale parameters at the various stress levels and the common shape parameter. Using the approach, Bayes point estimates as well as probability statements for use-stress (and accelerated) life parameters may be inferred from a host of testing scenarios. The inference procedure accommodates both the interval data sampling strategy and type I censored sampling strategy for the collection of ALT test data. The inference procedure uses the well-known MCMC (Markov Chain Monte Carlo) methods to derive posterior approximations. The approach is illustrated with an example
Generalized Swept Mid-structure for Polygonal Models
Martin, Tobias
2012-05-01
We introduce a novel mid-structure called the generalized swept mid-structure (GSM) of a closed polygonal shape, and a framework to compute it. The GSM contains both curve and surface elements and has consistent sheet-by-sheet topology, versus triangle-by-triangle topology produced by other mid-structure methods. To obtain this structure, a harmonic function, defined on the volume that is enclosed by the surface, is used to decompose the volume into a set of slices. A technique for computing the 1D mid-structures of these slices is introduced. The mid-structures of adjacent slices are then iteratively matched through a boundary similarity computation and triangulated to form the GSM. This structure respects the topology of the input surface model and is a hybrid mid-structure representation. The construction and topology of the GSM allow for local and global simplification, used in further applications such as parameterization, volumetric mesh generation and medical applications.
Brunner, Martin; Lüdtke, Oliver; Trautwein, Ulrich
2008-01-01
The internal/external frame of reference model (I/E model; Marsh, 1986) is a highly influential model of self-concept formation, which predicts that domain-specific abilities have positive effects on academic self-concepts in the corresponding domain and negative effects across domains. Investigations of the I/E model do not typically incorporate general cognitive ability or general academic self-concept. This article investigates alternative measurement models for domain-specific and domain-general cognitive abilities and academic self-concepts within an extended I/E model framework using representative data from 25,301 9th-grade students. Empirical support was found for the external validity of a new measurement model for academic self-concepts with respect to key student characteristics (gender, school satisfaction, educational aspirations, domain-specific interests, grades). Moreover, the basic predictions of the I/E model were confirmed, and the new extension of the traditional I/E model permitted meaningful relations to be drawn between domain-general cognitive ability and domain-general academic self-concept as well as between the domain-specific elements of the model.
Digital terrain model generalization incorporating scale, semantic and cognitive constraints
Partsinevelos, Panagiotis; Papadogiorgaki, Maria
2014-05-01
Cartographic generalization is a well-known process accommodating spatial data compression, visualization and comprehension under various scales. In the last few years, there have been several international attempts to construct tangible GIS systems, forming real 3D surfaces using a vast number of mechanical parts along a matrix formation (i.e., bars, pistons, vacuums). Usually, moving bars upon a structured grid push a stretching membrane, resulting in a smooth visualization of a given surface. Most of these attempts suffer in their cost, accuracy, resolution and/or speed. Under this perspective, the present study proposes a surface generalization process that incorporates intrinsic constraints of tangible GIS systems, including robotic-motor movement and surface stretching limitations. The main objective is to provide optimized visualizations of 3D digital terrain models with minimum loss of information, that is, to minimize the number of pixels in a raster dataset used to define a DTM while preserving the surface information. This neighborhood type of pixel relations adheres to the basics of Self Organizing Map (SOM) artificial neural networks, which are often used for information abstraction since they are indicative of intrinsic statistical features contained in the input patterns and provide concise and characteristic representations. Nevertheless, SOM remains a black-box procedure, not capable of coping with possible particularities and semantics of the application at hand. For example, in coastal monitoring applications, the near-coast areas, surrounding mountains and lakes are more important than other features, and generalization should be "biased"-stratified to fulfill this requirement. Moreover, according to the application objectives, we extend the SOM algorithm to incorporate special types of information generalization by differentiating the underlying strategy based on topologic information of the objects included in the application. The final
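A minimal one-dimensional toy of the SOM mechanism the abstract builds on: grid nodes (the generalized samples) are pulled toward input elevations under a topology-preserving neighborhood kernel. It omits the paper's semantic/stratified weighting, and all elevations and parameters are invented.

```python
import math
import random

def train_som(inputs, n_nodes=5, n_iter=2000, lr=0.1, sigma=1.0, seed=0):
    rng = random.Random(seed)
    # nodes start at random values within the data range
    nodes = [rng.uniform(min(inputs), max(inputs)) for _ in range(n_nodes)]
    for _ in range(n_iter):
        x = rng.choice(inputs)
        # best-matching unit (BMU): node closest to the sampled elevation
        best = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
        for i in range(n_nodes):
            # Gaussian neighborhood kernel around the BMU preserves topology
            h = math.exp(-((i - best) ** 2) / (2 * sigma ** 2))
            nodes[i] += lr * h * (x - nodes[i])
    return sorted(nodes)

# invented elevation samples from three terrain "clusters"
elevations = [100, 102, 101, 250, 252, 251, 400, 405, 398]
nodes = train_som(elevations)
```

The sorted `nodes` play the role of a generalized (compressed) representation of the input surface values.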
Battauz, Michela; Bellio, Ruggero
2011-01-01
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable…
A Comparison of Generalized Hyperbolic Distribution Models for Equity Returns
Directory of Open Access Journals (Sweden)
Virginie Konlack Socgnia
2014-01-01
Full Text Available We discuss the calibration of the univariate and multivariate generalized hyperbolic distributions, as well as their hyperbolic, variance gamma, normal inverse Gaussian, and skew Student's t-distribution subclasses for the daily log-returns of seven of the most liquid mining stocks listed on the Johannesburg Stock Exchange. To estimate the model parameters from historic distributions, we use an expectation maximization based algorithm for the univariate case and a multicycle expectation conditional maximization estimation algorithm for the multivariate case. We assess goodness of fit using the log-likelihood, the Akaike information criterion, and the Kolmogorov-Smirnov distance. Finally, we inspect the temporal stability of parameters and note implications as criteria for distinguishing between models. To better understand the dependence structure of the stocks, we fit the MGHD and subclasses to both the stock returns and the two leading principal components derived from the price data. While the MGHD could fit both data subsets, we observed that the multivariate normality of the stock return residuals, computed by removing shared components, suggests that the departure from normality can be explained by the structure in the common factors.
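The three goodness-of-fit criteria mentioned (log-likelihood, AIC, Kolmogorov-Smirnov distance) can be sketched for a toy Gaussian fit to synthetic "log-returns". This is illustrative only, not the paper's GHD calibration code, and the data are simulated.

```python
import math
import random

def norm_cdf(x, mu, sd):
    """Gaussian CDF via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

rng = random.Random(0)
returns = [rng.gauss(0.0005, 0.02) for _ in range(1000)]  # synthetic returns

# maximum-likelihood Gaussian fit
mu = sum(returns) / len(returns)
sd = math.sqrt(sum((r - mu) ** 2 for r in returns) / len(returns))

# log-likelihood and AIC (k = 2 fitted parameters)
ll = sum(-0.5 * math.log(2 * math.pi * sd ** 2)
         - (r - mu) ** 2 / (2 * sd ** 2) for r in returns)
aic = 2 * 2 - 2 * ll

# Kolmogorov-Smirnov distance: largest gap between empirical and fitted CDFs
xs = sorted(returns)
n = len(xs)
ks = max(max(abs((i + 1) / n - norm_cdf(x, mu, sd)),
             abs(i / n - norm_cdf(x, mu, sd)))
         for i, x in enumerate(xs))
```

Comparing candidate distributions then amounts to preferring the one with the lower AIC and smaller KS distance, as in the abstract.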
Prognostic cloud water in the Los Alamos general circulation model
International Nuclear Information System (INIS)
Kristjansson, J.E.; Kao, C.Y.J.
1993-01-01
Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, there has arisen an urgent need for improvements in the treatment of clouds in GCMs, especially as the clouds relate to radiation. In the present paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the longwave emissivity calculations. Results from several sensitivity simulations show that realistic cloud water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger, due to more realistic tropical convection.
General Model for Light Curves of Chromospherically Active Binary Stars
Jetsu, L.; Henry, G. W.; Lehtinen, J.
2017-04-01
The starspots on the surface of many chromospherically active binary stars concentrate on long-lived active longitudes separated by 180°. Shifts in activity between these two longitudes, the "flip-flop" events, have been observed in single stars like FK Comae and binary stars like σ Geminorum. Recently, interferometry has revealed that ellipticity may at least partly explain the flip-flop events in σ Geminorum. This idea was supported by the double-peaked shape of the long-term mean light curve of this star. Here we show that the long-term mean light curves of 14 chromospherically active binaries follow a general model that explains the connection between orbital motion, changes in starspot distribution, ellipticity, and flip-flop events. Surface differential rotation is probably weak in these stars, because the interference of two constant-period waves may explain the observed light curve changes. These two constant periods are the active longitude period (P_act) and the orbital period (P_orb). We also show how to apply the same model to single stars, where only the value of P_act is known. Finally, we present a tentative interference hypothesis about the origin of magnetic fields in all spectral types of stars. The CPS results are available electronically at the VizieR database.
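A hedged numerical sketch of the interference idea: summing two constant-period waves at hypothetical P_act and P_orb values produces a slow beat, with beat period 1/P_beat = |1/P_orb - 1/P_act|. The periods and amplitudes below are made up for illustration and are not fitted values from the paper.

```python
import math

P_act, P_orb = 2.50, 2.43   # days (hypothetical)
A_act, A_orb = 0.10, 0.05   # light-curve amplitudes in mag (hypothetical)

def brightness(t):
    """Toy light curve: interference of two constant-period waves."""
    return (A_act * math.cos(2 * math.pi * t / P_act)
            + A_orb * math.cos(2 * math.pi * t / P_orb))

# the slow modulation that could mimic flip-flop-like activity shifts
P_beat = 1.0 / abs(1.0 / P_orb - 1.0 / P_act)
```

With these invented numbers the two waves drift in and out of phase over roughly 87 days, far longer than either input period, which is the qualitative behavior the model exploits.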
A general method for modeling population dynamics and its applications.
Shestopaloff, Yuri K
2013-12-01
Studying populations, be it a microbe colony or mankind, is important for understanding how complex systems evolve and exist. Such knowledge also often provides insights into evolution, history and different aspects of human life. By and large, populations' prosperity and decline is about the transformation of certain resources into quantity and other characteristics of populations through growth, replication, expansion and acquisition of resources. We introduce a general model of population change, applicable to different types of populations, which interconnects numerous factors influencing population dynamics, such as nutrient influx and nutrient consumption, reproduction period, reproduction rate, etc. It is also possible to take into account specific growth features of individual organisms. We considered two recently discovered distinct growth scenarios: first, when organisms do not change their grown mass regardless of nutrient availability, and second, when organisms can reduce their grown mass by several times in a nutritionally poor environment. We found that nutrient supply and reproduction period are the two major factors influencing the shape of population growth curves. There is also a difference in population dynamics between these two groups: organisms belonging to the second group are significantly more adaptive to reduction of nutrients and far more resistant to extinction. Such organisms also have substantially more frequent and smaller-amplitude fluctuations of population quantity for the same periodic nutrient supply (compared to the first group). The proposed model adequately describes virtually any possible growth scenario, including complex ones with periodic and irregular nutrient supply and other changing parameters, which present approaches cannot do.
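A toy discrete-time sketch, not Shestopaloff's actual equations, of the basic interplay the abstract describes: a population limited by a nutrient pool with constant influx. All parameter values are invented.

```python
def simulate(n_steps=200, influx=50.0, need=1.0, repro=0.1, death=0.05):
    """Toy nutrient-limited population dynamics; returns the population history."""
    pop, nutrient, history = 10.0, 100.0, []
    for _ in range(n_steps):
        nutrient += influx
        eaten = min(nutrient, pop * need)     # consumption capped by supply
        nutrient -= eaten
        fed_fraction = eaten / (pop * need)   # how well the population is fed
        # reproduction scales with nourishment; mortality is constant
        pop += pop * (repro * fed_fraction - death)
        history.append(pop)
    return history

history = simulate()
```

With these numbers the population first grows while stored nutrient lasts, overshoots, then settles near the level the influx can sustain (repro * influx / (need * death) = 100), a simple version of the equilibria such models exhibit.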
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, the life time distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + b·t^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of Bayes with regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + b·t, where a, b are greater than zero, can be obtained as the special case c = 2.
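The hazard function and its implied survival function S(t) = exp(-(a·t + (b/c)·t^c)), obtained by integrating h(t), can be written down directly; the parameter values below are arbitrary, chosen only to illustrate the c = 2 (linearly increasing hazard) special case.

```python
import math

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c-1)."""
    return a + b * t ** (c - 1)

def survival(t, a, b, c):
    """S(t) = exp(-H(t)) with cumulative hazard H(t) = a*t + (b/c)*t**c."""
    return math.exp(-(a * t + (b / c) * t ** c))

# c = 2 recovers the linearly increasing hazard h(t) = a + b*t
a, b = 0.01, 0.002
```

For example, with c = 2 the hazard at t = 5 is 0.01 + 0.002·5 = 0.02, and S(10) = exp(-0.2).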
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua
2010-06-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two-step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
Critical rotation of general-relativistic polytropic models revisited
Geroyannis, V.; Karageorgopoulos, V.
2013-09-01
We develop a perturbation method for computing the critical rotational parameter as a function of the equatorial radius of a rigidly rotating polytropic model in the "post-Newtonian approximation" (PNA). We treat our models as "initial value problems" (IVP) of ordinary differential equations in the complex plane. The computations are carried out by the code dcrkf54.f95 (Geroyannis and Valvi 2012 [P1]; modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane). Such a complex-plane treatment removes the syndromes appearing in this particular family of IVPs (see e.g. P1, Sec. 3) and allows continuation of the numerical integrations beyond the surface of the star. Thus all the required values of the Lane-Emden function(s) in the post-Newtonian approximation are calculated by interpolation (so avoiding any extrapolation). An interesting point is that, in our computations, we take into account the complete correction due to the gravitational term, and this issue is a remarkable difference compared to the classical PNA. We solve the generalized density as a function of the equatorial radius and find the critical rotational parameter. Our computations are extended to certain other physical characteristics (like mass, angular momentum, rotational kinetic energy, etc). We find that our method yields results comparable with those of other reliable methods. REFERENCE: V.S. Geroyannis and F.N. Valvi 2012, International Journal of Modern Physics C, 23, No 5, 1250038:1-15.
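As a far simpler Newtonian analogue of the integrations described (no complex plane, no rotation, no post-Newtonian terms), classical RK4 applied to the Lane-Emden equation θ'' + (2/ξ)θ' + θ^n = 0 recovers, for n = 1, the known exact solution sin(ξ)/ξ with first zero at ξ₁ = π. This toy is not the paper's dcrkf54.f95 method.

```python
import math

def lane_emden(n=1.0, h=1e-4, xi_max=4.0):
    """Integrate the Lane-Emden equation with RK4; return the first zero of theta."""
    def f(x, t, dt):
        # theta'' from the Lane-Emden equation (theta clipped at 0 for safety)
        return -2.0 / x * dt - max(t, 0.0) ** n

    # series expansion near the regular singular point xi = 0
    xi, theta, dtheta = h, 1.0 - h * h / 6.0, -h / 3.0
    while theta > 0.0 and xi < xi_max:
        k1t, k1d = dtheta, f(xi, theta, dtheta)
        k2t, k2d = dtheta + h/2*k1d, f(xi + h/2, theta + h/2*k1t, dtheta + h/2*k1d)
        k3t, k3d = dtheta + h/2*k2d, f(xi + h/2, theta + h/2*k2t, dtheta + h/2*k2d)
        k4t, k4d = dtheta + h*k3d, f(xi + h, theta + h*k3t, dtheta + h*k3d)
        theta += h / 6 * (k1t + 2*k2t + 2*k3t + k4t)
        dtheta += h / 6 * (k1d + 2*k2d + 2*k3d + k4d)
        xi += h
    return xi  # approximate surface radius (first zero of theta)

xi1 = lane_emden()
```

The returned `xi1` should agree with π to within the step size, which is the kind of internal check that motivates avoiding extrapolation near the surface.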
Cognitive performance modeling based on general systems performance theory.
Kondraske, George V
2010-01-01
General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).
Design and implementation of a generalized laboratory data model
Directory of Open Access Journals (Sweden)
Nhan Mike
2007-09-01
Full Text Available Abstract Background Investigators in the biological sciences continue to exploit laboratory automation methods and have dramatically increased the rates at which they can generate data. In many environments, the methods themselves also evolve in a rapid and fluid manner. These observations point to the importance of robust information management systems in the modern laboratory. Designing and implementing such systems is non-trivial and it appears that in many cases a database project ultimately proves unserviceable. Results We describe a general modeling framework for laboratory data and its implementation as an information management system. The model utilizes several abstraction techniques, focusing especially on the concepts of inheritance and meta-data. Traditional approaches commingle event-oriented data with regular entity data in ad hoc ways. Instead, we define distinct regular entity and event schemas, but fully integrate these via a standardized interface. The design allows straightforward definition of a "processing pipeline" as a sequence of events, obviating the need for separate workflow management systems. A layer above the event-oriented schema integrates events into a workflow by defining "processing directives", which act as automated project managers of items in the system. Directives can be added or modified in an almost trivial fashion, i.e., without the need for schema modification or re-certification of applications. Association between regular entities and events is managed via simple "many-to-many" relationships. We describe the programming interface, as well as techniques for handling input/output, process control, and state transitions. Conclusion The implementation described here has served as the Washington University Genome Sequencing Center's primary information system for several years. It handles all transactions underlying a throughput rate of about 9 million sequencing reactions of various kinds per month and
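The separation of regular entity data from event data, linked through simple many-to-many relationships, can be sketched with a few invented tables; this is a loose illustration of the design principle, not the Genome Sequencing Center's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- regular entities (samples, reagents, ...) kept apart from events
CREATE TABLE entity (id INTEGER PRIMARY KEY, kind TEXT, name TEXT);
-- event-oriented records: things that happen to entities over time
CREATE TABLE event  (id INTEGER PRIMARY KEY, kind TEXT, occurred_at TEXT);
-- many-to-many bridge associating entities with events
CREATE TABLE entity_event (entity_id INTEGER REFERENCES entity(id),
                           event_id  INTEGER REFERENCES event(id));
""")
con.execute("INSERT INTO entity VALUES (1, 'sample', 'S-001')")
con.execute("INSERT INTO event VALUES (1, 'sequencing_reaction', '2007-09-01')")
con.execute("INSERT INTO entity_event VALUES (1, 1)")
n_links = con.execute("SELECT COUNT(*) FROM entity_event").fetchone()[0]
```

A "processing pipeline" then becomes nothing more than an ordered sequence of rows in the event table tied to the same entity.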
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding in this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
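The "explode the data set" step can be sketched as follows: each subject's follow-up time is split across the pieces of the piecewise-constant baseline hazard, yielding one Poisson pseudo-observation per (subject, piece) with a log-exposure offset. The cut points and the example record are invented, and this is a generic illustration rather than the %PCFrailty macro.

```python
import math

cuts = [0.0, 2.0, 4.0, 6.0]  # 3 pieces for the piecewise-constant baseline hazard

def explode(subject_id, time, event):
    """Return (subject, piece, exposure, log-offset, event-indicator) rows."""
    rows = []
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if time <= lo:
            break                                # no time at risk in later pieces
        exposure = min(time, hi) - lo            # time at risk within this piece
        died_here = int(event == 1 and lo < time <= hi)
        rows.append((subject_id, j, exposure, math.log(exposure), died_here))
    return rows

# a subject observed for 5.0 time units who experiences the event
rows = explode("s1", time=5.0, event=1)
```

Each row would then be fed to a Poisson GLMM with the log-exposure column as offset and a random effect per cluster.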
Cheong, Yuk Fai; Kamata, Akihito
2013-01-01
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Overestimation of closed-chamber soil CO2 effluxes at low atmospheric turbulence
DEFF Research Database (Denmark)
Brændholt, Andreas; Larsen, Klaus Steenberg; Ibrom, Andreas
2017-01-01
… rates and friction velocity (u*) above the canopy, suggesting that R_s was overestimated at low atmospheric turbulence throughout the year due to non-steady-state conditions during measurements. Filtering out data at low u* values removed or even inverted the observed diurnal pattern … be eliminated if proper mixing of air is ensured, and indeed the use of fans removed the overestimation of R_s rates during low u*. Artificial turbulent air mixing may thus provide a method to overcome the problems of using closed-chamber gas-exchange measurement techniques during naturally occurring low atmospheric turbulence conditions. Other possible effects from using fans during soil CO2 efflux measurements are discussed. In conclusion, periods with low atmospheric turbulence may provide a significant source of error in R_s rates estimated by the use of closed-chamber techniques, and erroneous data must …
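The u* filtering described above amounts to discarding chamber measurements taken below a turbulence threshold; a minimal sketch with an invented threshold and invented data follows (real thresholds are site-specific and derived from the data).

```python
U_STAR_MIN = 0.1  # m s^-1, hypothetical site-specific friction-velocity threshold

measurements = [
    {"r_s": 4.1, "u_star": 0.05},   # calm conditions: likely overestimated
    {"r_s": 2.9, "u_star": 0.25},
    {"r_s": 3.1, "u_star": 0.30},
    {"r_s": 4.5, "u_star": 0.02},   # calm conditions: likely overestimated
]

# keep only measurements made under sufficient atmospheric turbulence
kept = [m for m in measurements if m["u_star"] >= U_STAR_MIN]
mean_all = sum(m["r_s"] for m in measurements) / len(measurements)
mean_kept = sum(m["r_s"] for m in kept) / len(kept)
```

In this toy data the filtered mean is lower than the raw mean, mirroring the overestimation at low u* reported above.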
Decline of Hip Joint Movement Relates to Overestimation of Maximum Forward Reach in Elderly Persons.
Okimoto, Atsushi; Toriyama, Minoru; Deie, Masataka; Maejima, Hiroshi
2017-01-01
The authors aimed to characterize age-related changes in the performance of maximum reach and identify kinematic parameters that explain the age-related discrepancy between perceived and actual maximum reach distance. Maximum reach was evaluated in 22 younger women (21.3 years old) and 20 older women (81.2 years old). Both the perceived and actual maximum forward reach and the forward excursion of the center of pressure were shorter in older women. Older women also overestimated their maximum reach distance to a greater extent. Decline of movement at the hip joint specifically correlated with both the maximum distance and the overestimation. Based on these results, decline of hip control may be a primary factor in the age-related retardation of perceived and actual maximum reach.
Generalized equilibrium modeling: the methodology of the SRI-Gulf energy model. Final report
Energy Technology Data Exchange (ETDEWEB)
Gazalet, E.G.
1977-05-01
The report provides documentation of the generalized equilibrium modeling methodology underlying the SRI-Gulf Energy Model and focuses entirely on the philosophical, mathematical, and computational aspects of the methodology. The model is a highly detailed regional and dynamic model of the supply and demand for energy in the US. The introduction emphasizes the need to focus modeling efforts on decisions and the coordinated decomposition of complex decision problems using iterative methods. The conceptual framework is followed by a description of the structure of the current SRI-Gulf model and a detailed development of the process relations that comprise the model. The network iteration algorithm used to compute a solution to the model is described and the overall methodology is compared with other modeling methodologies. 26 references.
Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm
Jansen, R.C.
A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
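A toy sketch of the idea behind fitting a finite mixture by EM within the GLM framework: here an intercept-only two-component Poisson mixture, the simplest member of the family. In a full generalized linear finite mixture the closed-form M-step below would be replaced by weighted GLM fits; the data and starting values are invented.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_two_poisson(data, l1=1.0, l2=8.0, pi=0.5, n_iter=100):
    """EM for a two-component Poisson mixture; returns (pi, lambda1, lambda2)."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each count
        resp = [pi * poisson_pmf(k, l1)
                / (pi * poisson_pmf(k, l1) + (1 - pi) * poisson_pmf(k, l2))
                for k in data]
        # M-step: weighted maximum-likelihood updates
        w = sum(resp)
        pi = w / len(data)
        l1 = sum(r * k for r, k in zip(resp, data)) / w
        l2 = sum((1 - r) * k for r, k in zip(resp, data)) / (len(data) - w)
    return pi, l1, l2

# invented counts drawn from two regimes (low-rate and high-rate)
counts = [1, 2, 3, 2, 1, 0, 2, 9, 11, 10, 12, 8, 10, 9]
pi, l1, l2 = em_two_poisson(counts)
```

On this well-separated toy data, EM recovers a mixing weight near 0.5 and component means near the two cluster means.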
Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect
Kominsky, Jonathan F.; Keil, Frank C.
2014-01-01
Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it ...
Explicit prediction of ice clouds in general circulation models
Kohler, Martin
1999-11-01
Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in term of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time-period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the life-time of layer-clouds. Optically thick ice clouds decay significantly slower than expected from pure microphysical crystal fall-out (taucld = 0.9--1.4 h as opposed to no-motion taumicro = 0.5--0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destruction of the inversion above cloud-top and the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer life-times (>1 day) in the presence of radiational cooling. The resulting saturation mixing ratio reduction provides for a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993). The model predicts cloud liquid water and cloud ice separately, and which is extended to split the ice phase into suspended cloud ice (predicted
Admission CT perfusion may overestimate initial infarct core: the ghost infarct core concept.
Boned, Sandra; Padroni, Marina; Rubiera, Marta; Tomasello, Alejandro; Coscojuela, Pilar; Romero, Nicolás; Muchada, Marián; Rodríguez-Luna, David; Flores, Alan; Rodríguez, Noelia; Juega, Jesús; Pagola, Jorge; Alvarez-Sabin, José; Molina, Carlos A; Ribó, Marc
2017-01-01
Identifying infarct core on admission is essential to establish the amount of salvageable tissue and indicate reperfusion therapies. Infarct core is established on CT perfusion (CTP) as the severely hypoperfused area; however, the correlation between hypoperfusion and infarct core may be time-dependent, as it is not a direct indicator of tissue damage. This study aims to characterize those cases in which the admission core lesion on CTP does not reflect an infarct on follow-up imaging. We studied patients with cerebral large vessel occlusion who underwent CTP on admission but received endovascular thrombectomy based on a non-contrast CT Alberta Stroke Program Early CT Score (ASPECTS) >6. Admission infarct core was measured on initial cerebral blood volume (CBV) CTP and final infarct on follow-up CT. We defined ghost infarct core (GIC) as initial core minus final infarct >10 mL. 79 patients were studied. Median National Institutes of Health Stroke Scale (NIHSS) score was 17 (11-20), median time from symptoms to CTP was 215 (87-327) min, and recanalization rate (TICI 2b-3) was 77%. Thirty patients (38%) presented with a GIC >10 mL. GIC >10 mL was associated with recanalization (TICI 2b-3: 90% vs 68%; p=0.026), admission glycemia, and shorter time from symptoms to CTP (>185 min: 26%; p=0.033). An adjusted logistic regression model identified time from symptom to CTP imaging <185 min as the only predictor of GIC >10 mL (OR 2.89, 95% CI 1.04 to 8.09). At 24 hours, clinical improvement was more frequent in patients with GIC >10 mL (66.6% vs 39%; p=0.017). CT perfusion may overestimate final infarct core, especially in the early time window. Selecting patients for reperfusion therapies based on the CTP mismatch concept may deny treatment to patients who might still benefit from reperfusion. Published by the BMJ Publishing Group Limited.
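The "ghost infarct core" definition used above is a direct volume subtraction: admission CBV-CTP core minus final infarct, flagged when the difference exceeds 10 mL. The patient volumes below are invented for illustration.

```python
GIC_THRESHOLD_ML = 10.0  # threshold used in the abstract's definition

def has_ghost_infarct_core(admission_core_ml, final_infarct_ml):
    """GIC: admission CTP core overestimates the final infarct by >10 mL."""
    return (admission_core_ml - final_infarct_ml) > GIC_THRESHOLD_ML

# (admission core, final infarct) in mL; invented example patients
patients = [(45.0, 12.0), (30.0, 28.0), (60.0, 55.0)]
n_gic = sum(has_ghost_infarct_core(a, f) for a, f in patients)
```

Only the first invented patient shows a ghost infarct core (45 - 12 = 33 mL > 10 mL).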
Multi-year predictability in a coupled general circulation model
Energy Technology Data Exchange (ETDEWEB)
Power, Scott; Colman, Rob [Bureau of Meteorology Research Centre, Melbourne, VIC (Australia)
2006-02-01
Multi-year to decadal variability in a 100-year integration of a BMRC coupled atmosphere-ocean general circulation model (CGCM) is examined. The fractional contribution made by the decadal component generally increases with depth and latitude away from surface waters in the equatorial Indo-Pacific Ocean. The relative importance of decadal variability is enhanced in off-equatorial "wings" in the subtropical eastern Pacific. The model and observations exhibit "ENSO-like" decadal patterns. Analytic results are derived, which show that the patterns can, in theory, occur in the absence of any predictability beyond ENSO time-scales. In practice, however, modification to this stochastic view is needed to account for robust differences between ENSO-like decadal patterns and their interannual counterparts. An analysis of variability in the CGCM, a wind-forced shallow water model, and a simple mixed layer model together with existing and new theoretical results are used to improve upon this stochastic paradigm and to provide a new theory for the origin of decadal ENSO-like patterns like the Interdecadal Pacific Oscillation and Pacific Decadal Oscillation. In this theory, ENSO-driven wind-stress variability forces internal equatorially-trapped Kelvin waves that propagate towards the eastern boundary. Kelvin waves can excite reflected internal westward propagating equatorially-trapped Rossby waves (RWs) and coastally-trapped waves (CTWs). CTWs have no impact on the off-equatorial sub-surface ocean outside the coastal wave guide, whereas the RWs do. If the frequency of the incident wave is too high, then only CTWs are excited. At lower frequencies, both CTWs and RWs can be excited. The lower the frequency, the greater the fraction of energy transmitted to RWs. This lowers the characteristic frequency of variability off the equator relative to its equatorial counterpart. Both the eastern boundary interactions and the accumulation of
Prediction Equations Overestimate the Energy Requirements More for Obesity-Susceptible Individuals.
McLay-Cooke, Rebecca T; Gray, Andrew R; Jones, Lynnette M; Taylor, Rachael W; Skidmore, Paula M L; Brown, Rachel C
2017-09-13
Predictive equations to estimate resting metabolic rate (RMR) are often used in dietary counselling and by online apps to set energy intake goals for weight loss. It is critical to know whether such equations are appropriate for those susceptible to obesity. We measured RMR by indirect calorimetry after an overnight fast in 26 obesity-susceptible (OSI) and 30 obesity-resistant (ORI) individuals, identified using a simple 6-item screening tool. Predicted RMR was calculated using the FAO/WHO/UNU (Food and Agriculture Organization/World Health Organization/United Nations University), Oxford and Mifflin-St Jeor equations. Absolute measured RMR did not differ significantly between OSI and ORI (6339 vs. 5893 kJ·d⁻¹, p = 0.313). All three prediction equations over-estimated RMR for both OSI and ORI when measured RMR was ≤5000 kJ·d⁻¹. For measured RMR ≤7000 kJ·d⁻¹ there was statistically significant evidence that the equations overestimate RMR to a greater extent for those classified as obesity-susceptible, with biases ranging from around 10% to nearly 30% depending on the equation. The use of prediction equations may overestimate RMR and energy requirements, particularly in those who self-identify as being susceptible to obesity, which has implications for effective weight management.
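As an illustration of how such predictions are produced, here is a minimal sketch of the Mifflin-St Jeor equation (one of the three equations tested), converted to kJ·d⁻¹; the subject characteristics and the measured RMR below are hypothetical values, not data from the study:

```python
def mifflin_st_jeor_rmr_kj(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor RMR, converted from kcal/day to kJ/day (1 kcal = 4.184 kJ)."""
    kcal = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (5 if male else -161)
    return kcal * 4.184

# hypothetical subject: 70 kg, 170 cm, 30-year-old woman
predicted = mifflin_st_jeor_rmr_kj(70, 170, 30, male=False)

# hypothetical indirect-calorimetry measurement, kJ/day
measured = 5000
bias_pct = 100 * (predicted - measured) / measured  # positive = overestimation
```

For a low measured RMR such as this one, the predicted value exceeds the measurement by a wide margin, which is the pattern the study reports for RMR ≤5000 kJ·d⁻¹.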
R(D(*)) in a general two Higgs doublet model
Iguro, Syuhei; Tobe, Kazuhiro
2017-12-01
Motivated by an anomaly in R(D(*)) = BR(B̄ → D(*)τ⁻ν̄)/BR(B̄ → D(*)ℓ⁻ν̄) reported by BaBar, Belle and LHCb, we study R(D(*)) in a general two Higgs doublet model (2HDM). Although it has been suggested that it is difficult for the 2HDM to explain the current world average for R(D(*)), it is important to clarify how large a deviation from the standard model predictions for R(D(*)) is possible in the 2HDM. We investigate possible corrections to R(D(*)) in the 2HDM, taking into account various flavor physics constraints (such as B_c⁻ → τ⁻ν̄, b → sγ, b → sℓ⁺ℓ⁻, Δm_{B_{d,s}}, B_s → μ⁺μ⁻ and τ⁺τ⁻, and B⁻ → τ⁻ν̄), and find that it would be possible (impossible) to accommodate the 1σ region suggested by Belle's result when we adopt the constraint BR(B_c⁻ → τ⁻ν̄) ≤ 30% (BR(B_c⁻ → τ⁻ν̄) ≤ 10%). We also study productions and decays of heavy neutral and charged Higgs bosons at the Large Hadron Collider (LHC) experiment and discuss the constraints and implications at the LHC. We show that in addition to the well-studied production modes bg → tH⁻ and gg → H/A, exotic productions of heavy Higgs bosons such as cg → bH⁺, tH/A and cb̄ → H⁺ would be significantly large, and the search for their exotic decay modes such as H/A → tc̄ + ct̄, μ±τ∓ and H⁺ → cb̄, as well as H/A → τ⁺τ⁻ and H⁺ → τ⁺ν, would be important to probe the interesting parameter regions for R(D(*)).
Proton radioactivity within a generalized liquid drop model
Dong, J. M.; Zhang, H. F.; Royer, G.
2009-05-01
The proton radioactivity half-lives of spherical proton emitters are investigated theoretically. The potential barriers preventing the emission of protons are determined in the quasimolecular shape path within a generalized liquid drop model (GLDM), including the proximity effects between nuclei in a neck and the mass and charge asymmetry. The penetrability is calculated with the WKB approximation. The spectroscopic factor, obtained by employing relativistic mean field (RMF) theory combined with the BCS method with the NL3 force, has been taken into account in the half-life calculation. The half-lives within the GLDM are compared with the experimental data and other theoretical values. The GLDM works quite well for spherical proton emitters when the spectroscopic factors are considered, indicating the necessity of introducing the spectroscopic factor and the success of the GLDM for proton emission. Finally, we present two formulas for calculating proton emission half-lives, similar to the Viola-Seaborg formulas and Royer's formulas for α decay.
Tavasszy, L.; Davydenko, I.; Ruijgrok, K.
2009-01-01
The integration of Spatial Equilibrium models and Freight transport network models is important to produce consistent scenarios for future freight transport demand. At various spatial scales, we see the changes in production, trade, logistics networking and transportation, being driven by
The DSM-5 dimensional trait model and five-factor models of general personality.
Gore, Whitney L; Widiger, Thomas A
2013-08-01
The current study empirically tests the relationship of the dimensional trait model proposed for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) with five-factor models of general personality. The DSM-5 maladaptive trait dimensional model proposal included 25 traits organized within five broad domains (i.e., negative affectivity, detachment, antagonism, disinhibition, and psychoticism). Consistent with the authors of the proposal, it was predicted that negative affectivity would align with five-factor model (FFM) neuroticism, detachment with FFM introversion, antagonism with FFM antagonism, and disinhibition with low FFM conscientiousness; contrary to the proposal, it was predicted that psychoticism would align with FFM openness. Three measures of alternative five-factor models of general personality were administered to 445 undergraduates along with the Personality Inventory for DSM-5. The results provided support for the hypothesis that all five domains of the DSM-5 dimensional trait model are maladaptive variants of general personality structure, including the domain of psychoticism. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Generalized Network Psychometrics : Combining Network and Latent Variable Models
Epskamp, S.; Rhemtulla, M.; Borsboom, D.
2017-01-01
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between
Directory of Open Access Journals (Sweden)
A. El-Shafie
2011-03-01
Artificial neural networks (ANNs) have been found efficient, particularly for problems whose processes are stochastic and difficult to describe with explicit mathematical models. However, time series prediction based on ANN algorithms faces two fundamental difficulties. The first is the search for the optimal input pattern to enhance the forecasting capability for the output. The second is over-fitting during the training procedure, which occurs when the ANN loses its ability to generalize. In this research, autocorrelation and cross-correlation analyses are suggested as a method for finding the optimal input pattern. In addition, two generalized models, the Regularized Neural Network (RNN) and the Ensemble Neural Network (ENN), are developed to overcome the drawbacks of classical ANN models. Using these Generalized Neural Network (GNN) models helped avoid the over-fitting of training data observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser were used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural networks and conventional auto-regressive models and can provide accurate inflow forecasting.
Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.
Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L
2012-03-01
Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.
Hydraulic fracturing model based on the discrete fracture model and the generalized J integral
Liu, Z. Q.; Liu, Z. F.; Wang, X. H.; Zeng, B.
2016-08-01
The hydraulic fracturing technique is an effective stimulation for low permeability reservoirs. In fracturing models, one key point is to accurately calculate the flux across the fracture surface and the stress intensity factor. To achieve high precision, the discrete fracture model is recommended to calculate the flux. Using the generalized J integral, the present work obtains an accurate simulation of the stress intensity factor. Based on the above factors, an alternative hydraulic fracturing model is presented. Examples are included to demonstrate the reliability of the proposed model and its ability to model the fracture propagation. Subsequently, the model is used to describe the relationship between the geometry of the fracture and the fracturing equipment parameters. The numerical results indicate that the working pressure and the pump power will significantly influence the fracturing process.
Optimal Scaling of Interaction Effects in Generalized Linear Models
van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.
2009-01-01
Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…
Modeling Radiation Belt Electron Dynamics with the DREAM3D Diffusion Model
Energy Technology Data Exchange (ETDEWEB)
Tu, Weichao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cunningham, Gregory S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Yue [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henderson, Michael G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morley, Steven K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Reeves, Geoffrey D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Blake, Bernard J. [The Aerospace Corporation, El Segundo, CA (United States); Baker, Daniel N. [Lab. for Atmospheric and Space Physics, Boulder, CO (United States); Spence, Harlan [Univ. of New Hampshire, Durham, NH (United States)
2014-02-14
The simulation results from our 3D diffusion model for the CRRES era suggest that our model captures the general variations of radiation belt electrons, including the dropouts and the enhancements; that the overestimations inside the plasmapause can be improved by increasing the PA diffusion from hiss waves; and that better D_{LL} and wave models are required.
Hierarchical shrinkage priors and model fitting for high-dimensional generalized linear models.
Yi, Nengjun; Ma, Shuangge
2012-11-26
Genetic and other scientific studies routinely generate very many predictor variables, which can be naturally grouped, with predictors in the same group being highly correlated. It is desirable to incorporate the hierarchical structure of the predictor variables into generalized linear models for simultaneous variable selection and coefficient estimation. We propose two prior distributions, hierarchical Cauchy and double-exponential, on the coefficients of generalized linear models. The hierarchical priors include both variable-specific and group-specific tuning parameters, thereby not only adapting the shrinkage to individual coefficients and groups but also providing a way to pool the information within groups. We fit generalized linear models with the proposed hierarchical priors by incorporating flexible expectation-maximization (EM) algorithms into the standard iteratively weighted least squares as implemented in the statistical package R. The methods are illustrated with data from an experiment to identify genetic polymorphisms for survival of mice following infection with Listeria monocytogenes. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in the freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
A general lexicographic model for a typological variety of ...
African Journals Online (AJOL)
eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route in describing a model based on entities and relations between them; MySQL (usually referred to as: Structured Query Language) ...
A generalized exponential time series regression model for electricity prices
DEFF Research Database (Denmark)
Haldrup, Niels; Knapik, Oskar; Proietti, Tomasso
on the estimated model, the best linear predictor is constructed. Our modeling approach provides good fit within sample and outperforms competing benchmark predictors in terms of forecasting accuracy. We also find that building separate models for each hour of the day and averaging the forecasts is a better...
Developing a Dynamic Stochastic General Equilibrium Model for the ...
International Development Research Centre (IDRC) Digital Library (Canada)
A range of applied economic tools, such as time series models or econometric models that build on simple statistical properties, have been used to provide these types of analyses. However, there is now an increasing body of economic literature that attempts to build economic models based on a more comprehensive and ...
Poisson-generalized gamma empirical Bayes model for disease ...
African Journals Online (AJOL)
In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...
Exact solution of generalized Schulz-Shastry type models
International Nuclear Information System (INIS)
Osterloh, Andreas; Amico, Luigi; Eckern, Ulrich
2000-01-01
A class of integrable one-dimensional models presented by Shastry and Schulz is consequently extended to the whole class of one-dimensional Hubbard- or XXZ-type models with correlated gauge-like hopping. A complete characterization concerning solvability by coordinate Bethe ansatz of this class of models is found
Optimal Scaling of Interaction Effects in Generalized Linear Models
J.M. van Rosmalen (Joost); A.J. Koning (Alex); P.J.F. Groenen (Patrick)
2007-01-01
textabstractMultiplicative interaction models, such as Goodman's RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are only suitable for data sets with two or three predictor variables. Here, we discuss an
Quantum mechanics vs. general covariance in gravity and string models
International Nuclear Information System (INIS)
Martinec, E.J.
1984-01-01
Quantization of simple low-dimensional systems embodying general covariance is studied. Functional methods are employed in the calculation of effective actions for fermionic strings and 1 + 1 dimensional gravity. The author finds that regularization breaks apparent symmetries of the theory, providing new dynamics for the string and non-trivial dynamics for 1 + 1 gravity. The author moves on to consider the quantization of some generally covariant systems with a finite number of physical degrees of freedom, assuming the existence of an invariant cutoff. The author finds that the wavefunction of the universe in these cases is given by the solution to simple quantum mechanics problems
Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms
Samir Khaled Safi
2014-01-01
The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. First: when the disturbance term follows the general covariance matrix structure Cov(w_i, w_j) = Σ with σ_{ij} ≠ 0 ∀ i ≠ j. Second: when the diagonal elements of Σ are not all identical but σ_{ij} = 0 ∀ i ≠ j, i.e. Σ = diag(σ_{11}, σ_{22}, …).
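For reference, in the classical homoskedastic special case (iid disturbances, Σ = σ²I) the MA(q) autocorrelations reduce to the standard closed form ρ(k) = (Σᵢ θᵢθᵢ₊ₖ)/(Σᵢ θᵢ²) for k ≤ q and ρ(k) = 0 otherwise, with θ₀ = 1. This baseline (not the generalized heteroskedastic result derived in the paper) can be sketched as:

```python
def ma_acf(theta, k):
    """Theoretical ACF at lag k for an MA(q) process with iid disturbances.

    theta: list [theta_1, ..., theta_q]; theta_0 = 1 is implicit.
    """
    t = [1.0] + list(theta)          # prepend theta_0 = 1
    q = len(t) - 1
    if k > q:
        return 0.0                   # MA(q) autocorrelations cut off after lag q
    num = sum(t[i] * t[i + k] for i in range(q - k + 1))
    den = sum(x * x for x in t)
    return num / den

# MA(1) with theta_1 = 0.5: rho(1) = 0.5 / (1 + 0.25) = 0.4, rho(2) = 0
```

The generalized case in the paper replaces the common variance σ² with the entries of Σ, which changes both numerator and denominator.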
Vossbeck-Elsebusch, Anna N; Waldorf, Manuel; Legenbauer, Tanja; Bauer, Anika; Cordes, Martin; Vocks, Silja
2015-06-01
Body-related avoidance behavior, e.g., not looking in the mirror, is a common feature of eating disorders. It is assumed that it leads to insufficient feedback concerning one's own real body form and might thus contribute to a distorted mental representation of one's own body. However, this assumption still lacks empirical foundation. The aim of the present study was therefore to examine the relationship between misperception of one's own body and body-related avoidance behavior in N = 78 female patients with bulimia nervosa or eating disorder not otherwise specified. Body-size misperception was assessed using a digital photo distortion technique based on an individual picture of each participant, taken in a standardized suit. In a regression analysis with body-related avoidance behavior, body mass index, and weight and shape concerns as predictors, only body-related avoidance behavior significantly contributed to the explanation of body-size overestimation. This result supports the theoretical assumption that body-related avoidance behavior makes body-size overestimation more likely.
Koch, Elard; Bravo, Miguel; Gatica, Sebastián; Stecher, Juan F; Aracena, Paula; Valenzuela, Sergio; Ahlers, Ivonne
2012-05-01
Recently, the Guttmacher Institute estimated a number of 400,400 clandestine abortions for Colombia. Because of the strong implications such a report could have in different areas of interest, a full revision of the estimation methodology was performed. The methodology used by the Guttmacher Institute was as follows: first, the authors estimated the losses from spontaneous and induced abortions from the opinions of 289 subjects working in an equal number of Colombian health institutions, gathered through the opinion survey entitled "Health Facilities Survey". Subsequently, an expansive multiplier (x3, x4, x5, etc.) was applied to the numbers obtained by this survey; this multiplier likewise emerges from the subjective opinions of another 102 respondents of the "Health Professional Survey", selected by convenience. There are no objective data based on real vital events; the whole estimate rests on imagined numbers underlying mere opinions. Even as a public opinion survey, the sampling technique introduced serious selection bias in the gathering of information. Using valid epidemiological methods with standardized rates, and choosing the paradigmatic cases of Chile and Spain as standard populations, it was observed that the Guttmacher Institute methodology overestimates the complications due to induced abortion in hospital discharges more than 9-fold and the total number of induced abortions more than 18-fold. In other Latin American countries where the same methodology was applied, including Argentina, Brazil, Chile, Mexico, Peru, Guatemala, and the Dominican Republic, the number of induced abortions was also largely overestimated. These results call for caution with this type of report, which alarms public opinion.
Rose, Laura
2017-01-01
Specific root length (SRL) and root tissue density (RTD) are ecologically functional traits calculated from root length or volume and root dry weight. Each can be converted into the other using the root diameter, assuming roots are cylindrical. Calculating volume from length, or length from volume, is however problematic, because root samples do not usually have a constant diameter. Ignoring this diameter heterogeneity leads to an overestimation of length and an underestimation of volume if standard formulas are used. Here I show that SRL and RTD are overestimated by 67% on average for the two analyzed datasets, and by up to 150%, if calculated from each other. I further highlight that the total-sample volume values provided by the commonly used software WinRHIZO™ should only be used for objects with a constant diameter. I recommend using the volume values provided for each diameter class of a sample if WinRHIZO™ is used. If manual methods, like the line-intersect method, are used, roots should be separated into diameter classes before length measurements whenever the volume is calculated from length. Trait-to-trait conversions for whole samples are not recommended.
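The volume bias from ignoring diameter heterogeneity can be illustrated with a toy two-class sample, assuming cylindrical roots; all lengths and diameters below are hypothetical, not data from the study:

```python
import math

def cyl_volume(length, diameter):
    """Volume of a cylinder: pi * r^2 * L."""
    return math.pi * (diameter / 2) ** 2 * length

# hypothetical sample: (length in cm, diameter in cm) per diameter class;
# fine roots dominate the length, coarse roots dominate the volume
classes = [(100.0, 0.02), (10.0, 0.2)]

# correct: sum the volume of each diameter class separately
v_per_class = sum(cyl_volume(L, d) for L, d in classes)

# biased: apply the cylinder formula once, using the total length
# and a single length-weighted mean diameter
total_len = sum(L for L, _ in classes)
mean_d = sum(L * d for L, d in classes) / total_len
v_mean = cyl_volume(total_len, mean_d)

# v_mean < v_per_class: the single-diameter shortcut underestimates volume,
# so RTD (= dry weight / volume) computed from it is overestimated
```

The same arithmetic run in reverse (length from volume with a single mean diameter) overestimates length, and hence SRL.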
Debate on the Chernobyl disaster: on the causes of Chernobyl overestimation.
Jargin, Sergei V
2012-01-01
After the Chernobyl accident, many publications appeared that overestimated its medical consequences. Some of them are discussed in this article. Among the motives for the overestimation were anti-nuclear sentiments, widespread among some adherents of the Green movement; however, their attitude has not been wrong: nuclear facilities should have been prevented from spreading to overpopulated countries governed by unstable regimes and regions where conflicts and terrorism cannot be excluded. The Chernobyl accident has hindered worldwide development of atomic industry. Today, there are no alternatives to nuclear power: nonrenewable fossil fuels will become more and more expensive, contributing to affluence in the oil-producing countries and poverty in the rest of the world. Worldwide introduction of nuclear energy will become possible only after a concentration of authority within an efficient international executive. This will enable construction of nuclear power plants in optimally suitable places, considering all sociopolitical, geographic, geologic, and other preconditions. In this way, accidents such as that in Japan in 2011 will be prevented.
Generalized isothermal models with strange equation of state
Indian Academy of Sciences (India)
MS received 30 October 2008; revised 5 December 2008; accepted 16 December 2008. We consider the linear equation of state for matter distributions that may be applied to strange stars with quark matter. In our general approach the compact.
Model-free adaptive sliding mode controller design for generalized ...
Indian Academy of Sciences (India)
To solve the difficulties from the little knowledge about the master–slave system and to overcome the bad effects of the external disturbances on the generalized projective synchronization, the radial basis function neural networks are used to approach the packaged unknown master system and the packaged unknown ...
Nutrition counselling in general practice: the stages of change model
Verheijden, M.W.
2004-01-01
Healthy lifestyles in the prevention of cardiovascular diseases are of utmost importance for people with non insulin-dependent diabetes mellitus, hypertension, and/or dyslipidemia. Because of their continuous contact with almost all segments of the population, general practitioners can play an
Bianchi type IX string cosmological model in general relativity
Indian Academy of Sciences (India)
Cosmic strings arise during phase transitions after the big-bang explosion as the temperature goes down below some critical temperature [1–3]. These strings have stress energy and couple in a simple way to the gravitational field. The general relativistic formalism of cosmic strings is due to Letelier [4,5]. Stachel [6] has ...
Setting Generality of Peer Modeling in Children with Autism.
Carr, Edward G.; Darcy, Michael
1990-01-01
Four preschool children with autism played "Follow-the-Leader," in which a normal peer demonstrated and physically prompted a variety of actions and object manipulations that defined the activity. Following training, all four subjects generalized their imitative skill to a new setting involving new actions and object manipulations. (Author/JDD)
Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter; Ghali, William A; Marshall, Deborah A
2018-01-01
Kaplan-Meier survival analysis overestimates cumulative incidence in competing risks (CRs) settings. The extent of overestimation (or its clinical significance) has been questioned, and CRs methods are infrequently used. This meta-analysis compares the Kaplan-Meier method to the cumulative incidence function (CIF), a CRs method. We searched MEDLINE, EMBASE, BIOSIS Previews, Web of Science (1992-2016), and article bibliographies for studies estimating cumulative incidence using the Kaplan-Meier method and CIF. For studies with sufficient data, we calculated pooled risk ratios (RRs) comparing Kaplan-Meier and CIF estimates using DerSimonian and Laird random effects models. We performed stratified meta-analyses by clinical area, rate of CRs (CRs/events of interest), and follow-up time. Of 2,192 identified abstracts, we included 77 studies in the systematic review and meta-analyzed 55. The pooled RR demonstrated the Kaplan-Meier estimate was 1.41 [95% confidence interval (CI): 1.36, 1.47] times higher than the CIF. Overestimation was highest among studies with high rates of CRs [RR = 2.36 (95% CI: 1.79, 3.12)], studies related to hepatology [RR = 2.60 (95% CI: 2.12, 3.19)], and obstetrics and gynecology [RR = 1.84 (95% CI: 1.52, 2.23)]. The Kaplan-Meier method overestimated the cumulative incidence across 10 clinical areas. Using CRs methods will ensure accurate results inform clinical and policy decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
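The direction of the bias can be reproduced on a toy dataset: the naive complement of Kaplan-Meier (1 − KM), which treats competing events as censoring, is always at least as large as the cumulative incidence function (here computed as the Aalen-Johansen estimator). A minimal sketch with hypothetical data, no ties and no library dependencies:

```python
# toy data: (time, status); status 1 = event of interest,
# 2 = competing event, 0 = censored
data = [(1, 1), (2, 2), (3, 1), (4, 2), (5, 1), (6, 0)]

def one_minus_km(data):
    """1 - KM for the event of interest, treating competing events as censoring."""
    s = 1.0
    at_risk = len(data)
    for t, status in sorted(data):
        if status == 1:
            s *= 1 - 1 / at_risk
        at_risk -= 1
    return 1 - s

def cif(data):
    """Aalen-Johansen cumulative incidence for the event of interest."""
    s_all = 1.0      # overall event-free survival (any event type)
    total = 0.0
    at_risk = len(data)
    for t, status in sorted(data):
        if status == 1:
            total += s_all * (1 / at_risk)   # hazard weighted by P(still event-free)
        if status in (1, 2):
            s_all *= 1 - 1 / at_risk
        at_risk -= 1
    return total

# one_minus_km(data) > cif(data): KM keeps attributing risk to subjects
# already removed by the competing event
```

On this toy sample 1 − KM is 11/16 = 0.6875 versus a CIF of 0.5, i.e. the same direction of overestimation the meta-analysis quantifies.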
General model and control of an n rotor helicopter
International Nuclear Information System (INIS)
Sidea, A G; Brogaard, R Yding; Andersen, N A; Ravn, O
2014-01-01
The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers were implemented for attitude control. Both model and controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control is possible, by replacing the physical values for the individual systems
General model and control of an n rotor helicopter
Sidea, A. G.; Yding Brogaard, R.; Andersen, N. A.; Ravn, O.
2014-12-01
The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers were implemented for attitude control. Both model and controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control is possible, by replacing the physical values for the individual systems.
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...
Chen, Yu; Starobin, Soko S.
2018-01-01
This study examined a psychosocial mechanism of how general self-efficacy interacts with other key factors and influences degree aspiration for students enrolled in an urban diverse community college. Using general self-efficacy scales, the authors hypothesized the General Self-efficacy model for Community College students (the GSE-CC model). A…
Phase transitions in self-dual generalizations of the Baxter-Wu model
Deng, Y.; Guo, W.; Heringa, J.R.; Blöte, H.W.J.; Nienhuis, B.
2010-01-01
We study two types of generalized Baxter-Wu models, by means of transfer-matrix and Monte Carlo techniques. The first generalization allows for different couplings in the up- and down-triangles, and the second generalization is to a q-state spin model with three-spin interactions. Both
Instituto Nacional para la Educacion de los Adultos, Mexico City (Mexico).
This document describes literacy models for urban and rural populations in Mexico. It contains four sections. The first two sections (generalizations about the population and considerations about the teaching of adults) discuss the environment that creates illiterate adults and also describe some of the conditions under which learning takes place…
Generalized height-diameter models for Populus tremula L. stands
African Journals Online (AJOL)
2010-07-12
Using permanent sample plot data, selected tree height and diameter functions were evaluated for their predictive abilities for Populus tremula stands in Turkey. Two sets of models were evaluated. The first set included five models for estimating height as a function of individual tree diameter; the second set.
General Dynamic Equivalent Modeling of Microgrid Based on Physical Background
Directory of Open Access Journals (Sweden)
Changchun Cai
2015-11-01
Microgrid is a new power system concept consisting of small-scale distributed energy resources, storage devices and loads. It is necessary to employ a simplified model of a microgrid in the simulation of a distribution network integrating large-scale microgrids. Based on detailed models of the components, an equivalent model of the microgrid is proposed in this paper. The equivalent model comprises two parts: an equivalent machine component and an equivalent static component. The equivalent machine component describes the dynamics of the synchronous generator, asynchronous wind turbine and induction motor; the equivalent static component describes the dynamics of the photovoltaics, storage and static load. The trajectory sensitivities of the equivalent model parameters with respect to the output variables are analyzed. The key parameters that play important roles in the dynamics of the output variables of the equivalent model are identified and included in further parameter estimation. Particle Swarm Optimization (PSO) is improved for the parameter estimation of the equivalent model. Simulations are performed under different microgrid operating conditions to evaluate the effectiveness of the equivalent model of the microgrid.
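The abstract does not specify how the PSO is improved, but the underlying parameter-estimation step can be sketched with a minimal standard (global-best) PSO minimizing a fit error; the objective, bounds, and hyperparameters below are illustrative assumptions, not the paper's method:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over box constraints bounds=[(lo, hi), ...]."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp positions to the parameter bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# estimate two hypothetical model parameters by minimizing squared error
# between a simulated response and a target response
target = [1.0, 2.0]
best, err = pso(lambda p: sum((p[k] - target[k]) ** 2 for k in range(2)),
                [(-5, 5), (-5, 5)])
```

In the paper's setting, the objective would compare the equivalent model's output trajectories against the detailed model's, restricted to the key parameters identified by the sensitivity analysis.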
Generalized Constitutive Model for Stabilized Quick Clay | Bujulu ...
African Journals Online (AJOL)
An experimentally-based two yield surface constitutive model for cemented quick clay has been developed at NTNU, Norway, to reproduce the mechanical behavior of the stabilized quick clay in the triaxial p'-q stress space. The model takes into account the actual mechanical properties of the stabilized material, such as ...
A Boundary Layer Parameterization for a General Model.
1984-03-01
… evaluation of grassland evapotranspiration. Agric. Meteor., 11, 373-383. O'Neill, P., L. Pochop and J. Borrelli, 1979: Urban lawn evapotranspiration. … The model development in this report is original in nature. The software logistics of the combination of these models is described in the User's Manual.
General model and control of an n rotor helicopter
DEFF Research Database (Denmark)
Sidea, Adriana-Gabriela; Brogaard, Rune Yding; Andersen, Nils Axel
2015-01-01
The purpose of this study was to create a dynamic, nonlinear mathematical model ofa multirotor that would be valid for different numbers of rotors. Furthermore, a set of SingleInput Single Output (SISO) controllers were implemented for attitude control. Both model andcontrollers were tested...
A generalized regional design storm rainfall model for Botswana ...
African Journals Online (AJOL)
Design of drainage and dam structures involves a full understanding of the duration, magnitude and volume of peak flood flows anticipated. For gauged catchments a number of established flood frequency models and rainfall-runoff models are used widely. However, most planned developments for bridge or dam or any ...
Developing a Dynamic Stochastic General Equilibrium Model for the ...
International Development Research Centre (IDRC) Digital Library (Canada)
NCAER is planning the project in two phases: -Phase 1: Researchers will develop a model for India based on a review of the relevant literature and consultations. They will develop a database for estimating purposes. -Phase 2: Researchers will estimate the model, operationalize it, and validate it through alternative sets of ...
Global existence result for the generalized Peterlin viscoelastic model
Czech Academy of Sciences Publication Activity Database
Lukáčová-Medviďová, M.; Mizerová, H.; Nečasová, Šárka; Renardy, M.
2017-01-01
Roč. 49, č. 4 (2017), s. 2950-2964 ISSN 0036-1410 R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords : Peterlin viscoelastic equations * global existence * weak solutions Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.648, year: 2016 http://epubs.siam.org/doi/abs/10.1137/16M1068505
Learning Shape Descriptions: Generating and Generalizing Models of Visual Objects.
1985-09-01
referencing and consistent modeling for mobile robots," Proceedings of the IEEE Robotics Conference. Chomsky, Noam, and Morris Halle, 1968]. The... between their representations. A similar technique has been developed in phonology [Kenstowicz 1979, Chomsky 1968] for representing the features of... positive examples only; make an educated guess if it is not sure of its answer; not make any rash generalizations; be able to recover from over
A Generalized Framework for Modeling Next Generation 911 Implementations.
Energy Technology Data Exchange (ETDEWEB)
Kelic, Andjelka; Aamir, Munaf Syed; Kelic, Andjelka; Jrad, Ahmad M.; Mitchell, Roger
2018-02-01
This document summarizes the current state of Sandia 911 modeling capabilities and then addresses key aspects of Next Generation 911 (NG911) architectures for expansion of existing models. Analysis of three NG911 implementations was used to inform heuristics, associated key data requirements, and assumptions needed to capture NG911 architectures in the existing models. Modeling of NG911 necessitates careful consideration of its complexity and the diversity of implementations. Draft heuristics for constructing NG911 models are presented based on the analysis, along with a summary of current challenges and ways to improve future NG911 modeling efforts. We found that NG911 relies on Enhanced 911 (E911) assets such as 911 selective routers to route calls originating from traditional telephony service, which are a majority of 911 calls. We also found that the diversity and transitional nature of NG911 implementations necessitates significant and frequent data collection to ensure that adequate models are available for crisis action support.
Weight Smoothing for Generalized Linear Models Using a Laplace Prior
Xia, Xi; Elliott, Michael R.
2017-01-01
When analyzing data sampled with unequal inclusion probabilities, correlations between the probability of selection and the sampled data can induce bias if the inclusion probabilities are ignored in the analysis. Weights equal to the inverse of the probability of inclusion are commonly used to correct possible bias. When weights are uncorrelated with the descriptive or model estimators of interest, highly disproportional sample designs resulting in large weights can introduce unnecessary variability, leading to an overall larger mean square error compared to unweighted methods. We describe an approach we term ‘weight smoothing’ that models the interactions between the weights and the estimators as random effects, reducing the root mean square error (RMSE) by shrinking interactions toward zero when such shrinkage is allowed by the data. This article adapts a flexible Laplace prior distribution for the hierarchical Bayesian model to gain a more robust bias-variance tradeoff than previous approaches using normal priors. Simulation and application suggest that under a linear model setting, weight-smoothing models with Laplace priors yield robust results when weighting is necessary, and provide considerable reduction in RMSE otherwise. In logistic regression models, estimates using weight-smoothing models with Laplace priors are robust, but with less gain in efficiency than in linear regression settings. PMID:29225401
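The trade-off the paper addresses starts from plain inverse-probability weighting. A minimal sketch, assuming a synthetic population where the outcome drives selection, shows the bias the weights remove; the Hajek estimator below is the classical baseline that weight smoothing refines, not the authors' Laplace-prior model.

```python
import random

random.seed(1)  # reproducible illustration

# Synthetic population in which the outcome y drives the inclusion probability,
# so an unweighted sample mean is biased upward.
N = 100_000
population = []
for _ in range(N):
    y = random.gauss(0.0, 1.0)
    p = 0.10 if y > 0 else 0.02          # units with larger y are oversampled
    population.append((y, p))

sample = [(y, p) for y, p in population if random.random() < p]

unweighted_mean = sum(y for y, _ in sample) / len(sample)
# Hajek (normalized inverse-probability) estimator: weight each unit by 1/p
ipw_mean = sum(y / p for y, p in sample) / sum(1.0 / p for _, p in sample)
true_mean = sum(y for y, _ in population) / N
```

Here the weighted mean recovers the population mean while the unweighted mean does not; the paper's contribution concerns shrinking such weights when they only add variance.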
Does the Interpersonal Model Generalize to Obesity Without Binge Eating?
Lo Coco, Gianluca; Sutton, Rachel; Tasca, Giorgio A; Salerno, Laura; Oieni, Veronica; Compare, Angelo
2016-09-01
The interpersonal model has been validated for binge eating disorder (BED), but it is not yet known whether the model applies to individuals who are obese but do not binge eat. The goal of this study was to compare the validity of the interpersonal model in those with BED, those with obesity, and normal-weight samples. Data from a sample of 93 treatment-seeking women diagnosed with BED, 186 women who were obese without BED, and 100 normal-weight controls were examined for indirect effects of interpersonal problems on binge eating psychopathology mediated through negative affect. Findings demonstrated the mediating role of negative affect for those with BED and those who were obese without BED. Testing a reverse model suggested that the interpersonal model is specific to BED but may not be specific for those without BED. This is the first study to find support for the interpersonal model in a sample of women with obesity who do not binge. However, negative affect likely plays a more complex role in determining overeating in those with obesity who do not binge. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
DGP cosmological model with generalized Ricci dark energy
Energy Technology Data Exchange (ETDEWEB)
Aguilera, Yeremy [Universidad de Santiago, Departamento de Matematicas y Ciencia de la Computacion, Santiago (Chile); Avelino, Arturo [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States); Cruz, Norman [Universidad de Santiago, Departamento de Fisica, Facultad de Ciencia, Santiago (Chile); Lepe, Samuel [Pontificia Universidad Catolica de Valparaiso, Facultad de Ciencias, Instituto de Fisica, Valparaiso (Chile); Pena, Francisco [Universidad de La Frontera, Departamento de Ciencias Fisicas, Facultad de Ingenieria y Ciencias, Temuco (Chile)
2014-11-15
The brane-world model proposed by Dvali, Gabadadze and Porrati (DGP) leads to an accelerated universe without a cosmological constant or other form of dark energy for the positive branch (ε = +1). For the negative branch (ε = -1) we have investigated the behavior of a model with a holographic Ricci-like dark energy and dark matter, where the IR cutoff takes the form αH² + βH, H being the Hubble parameter and α, β positive constants of the model. We perform an analytical study of the model in the late-time dark energy dominated epoch, where we obtain a solution for r_c H(z), where r_c is the leakage scale of gravity into the bulk, and conditions for the negative branch on the holographic parameters α and β, in order to hold the conditions of weak energy and accelerated universe. On the other hand, we compare the model against the late-time cosmological data using the latest type Ia supernova sample of the Joint Light-curve Analysis (JLA), in order to constrain the holographic parameters in the negative branch, as well as r_c H_0 in the positive branch, where H_0 is the Hubble constant. We find that the model has a good fit to the data and that the most likely values for (r_c H_0, α, β) lie in the permitted region found from an analytical solution in a dark energy dominated universe. We give a justification for using a holographic cutoff in 4D for the dark energy in the 5-dimensional DGP model. Finally, using the Bayesian Information Criterion we find that this model is disfavored compared with the flat ΛCDM model. (orig.)
Computable general equilibrium model fiscal year 2014 capability development report
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Laboratory; Boero, Riccardo [Los Alamos National Laboratory
2016-05-11
This report provides an overview of the development of the NISAC computable general equilibrium (CGE) economic modeling capability since 2012. This capability enhances NISAC's economic modeling and analysis tools to answer a broader set of questions than was previously possible. In particular, CGE modeling captures how the different sectors of the economy (for example, households, businesses, and government) interact to allocate resources, and this approach captures these interactions when it is used to estimate the economic impacts of the kinds of events NISAC often analyzes.
Nucleon-generalized parton distributions in the light-front quark model
Indian Academy of Sciences (India)
2016-01-12
We calculate the generalized parton distributions (GPDs) for the up and down quarks in the nucleon using the effective light-front wavefunction. The results obtained for ...
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The
Accounting for household heterogeneity in general equilibrium economic growth models
International Nuclear Information System (INIS)
Melnikov, N.B.; O'Neill, B.C.; Dalton, M.G.
2012-01-01
We describe and evaluate a new method of aggregating heterogeneous households that allows for the representation of changing demographic composition in a multi-sector economic growth model. The method is based on a utility and labor supply calibration that takes into account time variations in demographic characteristics of the population. We test the method using the Population-Environment-Technology (PET) model by comparing energy and emissions projections employing the aggregate representation of households to projections representing different household types explicitly. Results show that the difference between the two approaches in terms of total demand for energy and consumption goods is negligible for a wide range of model parameters. Our approach allows the effects of population aging, urbanization, and other forms of compositional change on energy demand and CO2 emissions to be estimated and compared in a computationally manageable manner using a representative household under assumptions and functional forms that are standard in economic growth models.
Modeling of charged anisotropic compact stars in general relativity
Energy Technology Data Exchange (ETDEWEB)
Dayanandan, Baiju; Maurya, S.K.; T, Smitha T. [University of Nizwa, Department of Mathematical and Physical Sciences, College of Arts and Science, Nizwa (Oman)
2017-06-15
A charged compact star model has been determined for an anisotropic fluid distribution. We have solved the Einstein-Maxwell field equations to construct the charged compact star model by using the radial pressure, the metric function e^λ and the electric charge function. The generic charged anisotropic solution is verified by exploring different physical conditions like the causality condition, the mass-radius relation and the stability of the solution (via the adiabatic index, TOV equations and the Herrera cracking concept). It is observed that the present charged anisotropic compact star model is compatible with the star PSR 1937+21. Moreover, we also present the EOS ρ = f(p) for the present charged compact star model. (orig.)
Prediction of cloud droplet number in a general circulation model
Energy Technology Data Exchange (ETDEWEB)
Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)
1996-04-01
We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
RadVel: General toolkit for modeling Radial Velocities
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
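The core computation in any such Keplerian RV code is solving Kepler's equation and evaluating the velocity curve. The following is a generic Python sketch of that computation, not RadVel's actual API; the function names and the Newton iteration tolerance are assumptions.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's method."""
    E = M if e < 0.8 else math.pi        # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def radial_velocity(t, P, K, e, omega, tp):
    """Keplerian RV curve v(t) = K*[cos(nu + omega) + e*cos(omega)];
    systemic velocity and trends omitted for brevity."""
    M = 2.0 * math.pi * (((t - tp) / P) % 1.0)       # mean anomaly
    E = solve_kepler(M, e)
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))  # true anomaly
    return K * (math.cos(nu + omega) + e * math.cos(omega))
```

A fitting code such as the one described wraps this forward model in a likelihood and explores the posterior with MCMC.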
Generalized Sagdeev potential theory for shock waves modeling
Akbari-Moghanjoughi, M.
2017-05-01
In this paper, we develop an innovative approach to studying shock wave propagation using the Sagdeev potential method. We also present an analytical solution for the Korteweg-de Vries-Burgers (KdVB) and modified KdVB equation families with a generalized form of the nonlinearity term, which agrees well with the numerical solution. The novelty of the current approach is that it is based on a simple analogy of a particle in a classical potential with variable particle energy, providing a deeper physical insight into the problem, and it can easily be extended to more complex physical situations. We find that the current method describes well both the monotonic and oscillatory natures of the dispersive-diffusive shock structures in different viscous fluid configurations. It is particularly important that all essential parameters of the shock structure can be deduced directly from the Sagdeev potential in the small and large potential approximation regimes. Using the new method, we find that supercnoidal waves can decay into either compressive or rarefactive shock waves depending on the initial wave amplitude. The current investigation provides a general platform for studying a wide range of phenomena related to nonlinear wave damping and interactions in diverse fluids, including plasmas.
General dynamical properties of cosmological models with nonminimal kinetic coupling
Matsumoto, Jiro; Sushkov, Sergey V.
2018-01-01
We consider cosmological dynamics in the theory of gravity with a scalar field possessing a nonminimal kinetic coupling to curvature given as η G^{μν} φ_{,μ} φ_{,ν}, where η is an arbitrary coupling parameter, and a scalar potential V(φ) assumed to be as general as possible. With an appropriate dimensionless parametrization we represent the field equations as an autonomous dynamical system which ultimately contains only one arbitrary function χ(x) = 8π|η| V(x/√(8π)) with x = √(8π) φ. Then, assuming rather general properties of χ(x), we analyze stationary points and their stability, as well as all possible asymptotic regimes of the dynamical system. It is shown that for a broad class of χ(x) there exist attractors representing three accelerated regimes of the Universe's evolution, including de Sitter expansion (or late-time inflation), the Little Rip scenario, and the Big Rip scenario. As specific examples, we consider a power-law potential V(φ) = M⁴(φ/φ₀)^σ, a Higgs-like potential V(φ) = (λ/4)(φ² − φ₀²)², and an exponential potential V(φ) = M⁴ e^{−φ/φ₀}.
Generalized Heteroskedasticity ACF for Moving Average Models in Explicit Forms
Directory of Open Access Journals (Sweden)
Samir Khaled Safi
2014-02-01
The autocorrelation function (ACF) measures the correlation between observations at different distances apart. We derive explicit equations for the generalized heteroskedasticity ACF for a moving average of order q, MA(q). We consider two cases. First: the disturbance term follows the general covariance matrix structure Cov(w_i, w_j) = S with s_ij ≠ 0 for all i ≠ j. Second: the diagonal elements of S are not all identical but s_ij = 0 for all i ≠ j, i.e. S = diag(s_11, s_22, …, s_tt). The forms of the explicit equations depend essentially on the moving average coefficients and the covariance structure of the disturbance terms.
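For intuition, the familiar homoskedastic special case of the MA(q) ACF can be computed directly; the paper's generalized expressions replace the single noise variance with the full covariance structure S. A small sketch (the function name is illustrative):

```python
def ma_acf(theta, max_lag):
    """Theoretical ACF of an MA(q) process x_t = w_t + theta_1*w_{t-1} + ... + theta_q*w_{t-q}
    with iid (homoskedastic) white noise: rho(k) = sum_j psi_j*psi_{j+k} / sum_j psi_j^2."""
    psi = [1.0] + list(theta)            # psi_0 = 1
    q = len(psi) - 1
    gamma0 = sum(c * c for c in psi)     # lag-0 autocovariance, up to the noise variance
    out = []
    for k in range(max_lag + 1):
        if k > q:
            out.append(0.0)              # the ACF of an MA(q) cuts off after lag q
        else:
            out.append(sum(psi[j] * psi[j + k] for j in range(q - k + 1)) / gamma0)
    return out
```

For MA(1) with theta_1 = 0.5 this gives rho(1) = 0.5/1.25 = 0.4 and zero beyond lag 1, the cutoff property the generalized derivation modifies.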
Relativistic generalizations of simple pion-nucleon models
International Nuclear Information System (INIS)
McLeod, R.J.; Ernst, D.J.
1981-01-01
A relativistic, partial wave N/D dispersion theory is developed for low energy pion-nucleon elastic scattering. The theory is simplified by treating crossing symmetry only to lowest order in the inverse nucleon mass. The coupling of elastic scattering to inelastic channels is included by taking the necessary inelasticity from experimental data. Three models are examined: pseudoscalar coupling of pions and nucleons, pseudovector coupling, and a model in which all intermediate antinucleons are projected out of the amplitude. The phase shifts in the dominant P33 channel are quantitatively reproduced for p_lab ... P33 phase shifts. Thus a model of the pion-nucleon interaction which does not include antinucleon degrees of freedom is found to be unphysical.
International Nuclear Information System (INIS)
Lacey, R.J.; Morgan, N.G.; Scarpello, J.H.B.
1992-01-01
Accurate measurement of glucagon levels by radioimmunoassay (RIA) has proved more difficult than with certain other hormones and wide variations in the concentration of this hormone in plasma have been reported. During recent studies glucagon secretion from isolated rat pancreatic islets incubated in vitro was measured using L-arginine as stimulus. In view of the reported difficulties associated with measurement of glucagon by RIA, it was considered important to examine whether L-arginine might affect the accuracy of glucagon measurement. In the present work, it is reported that L-arginine can interfere with measurement of glucagon by RIA, leading to overestimation of glucagon levels. It is suggested that possible artifactual effects of the amino acid should be considered when arginine is employed as a stimulus for glucagon secretion. (author). 15 refs., 4 figs
Joel, Samantha; Teper, Rimma; MacDonald, Geoff
2014-12-01
Mate preferences often fail to correspond with actual mate choices. We present a novel explanation for this phenomenon: People overestimate their willingness to reject unsuitable romantic partners. In two studies, single people were given the opportunity to accept or decline advances from potential dates who were physically unattractive (Study 1) or incompatible with their dating preferences (Study 2). We found that participants were significantly less willing to reject these unsuitable potential dates when they believed the situation to be real rather than hypothetical. This effect was partially explained by other-focused motives: Participants for whom the scenario was hypothetical anticipated less motivation to avoid hurting the potential date's feelings than participants actually felt when they believed the situation to be real. Thus, other-focused motives appear to exert an influence on mate choice that has been overlooked by researchers and laypeople alike. © The Author(s) 2014.
A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity
Robert K. Kaufmann; David I. Stern
2004-01-01
The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing thre...
Overestimation of test performance by ROC analysis: Effect of small sample size
International Nuclear Information System (INIS)
Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.
1984-01-01
New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
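A caveat for reproducing this effect: the nonparametric (Mann-Whitney) estimate of the area under the ROC curve is itself unbiased, so the bias reported here arises in ROC curves fitted to rating data, as in the study's binormal setup. The sketch below, with assumed N(1,1) signal and N(0,1) noise score distributions, shows only how the empirical AUC is computed and how its spread grows at small sample sizes.

```python
import random
import statistics

def auc_mann_whitney(signal, noise):
    """Empirical ROC area via the Mann-Whitney statistic: P(S > N) + 0.5*P(S = N)."""
    wins = ties = 0
    for s in signal:
        for n in noise:
            if s > n:
                wins += 1
            elif s == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(signal) * len(noise))

def auc_spread(sample_size, runs=300, seed=2):
    """Standard deviation of the empirical AUC over repeated experiments with
    `sample_size` signal and noise cases each (signal ~ N(1,1), noise ~ N(0,1))."""
    rng = random.Random(seed)
    aucs = [auc_mann_whitney([rng.gauss(1.0, 1.0) for _ in range(sample_size)],
                             [rng.gauss(0.0, 1.0) for _ in range(sample_size)])
            for _ in range(runs)]
    return statistics.pstdev(aucs)
```

Comparing `auc_spread(15)` with `auc_spread(100)` shows the much larger run-to-run variability at the small sample sizes the study considers.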
Modeling psychophysical data at the population-level: the generalized linear mixed model.
Moscatelli, Alessandro; Mezzetti, Maura; Lacquaniti, Francesco
2012-10-25
In psychophysics, researchers usually apply a two-level model for the analysis of the behavior of the single subject and the population. This classical model has two main disadvantages. First, the second level of the analysis discards information on trial repetitions and subject-specific variability. Second, the model does not easily allow assessing the goodness of fit. As an alternative to this classical approach, here we propose the Generalized Linear Mixed Model (GLMM). The GLMM separately estimates the variability of fixed and random effects, it has a higher statistical power, and it allows an easier assessment of the goodness of fit compared with the classical two-level model. GLMMs have been frequently used in many disciplines since the 1990s; however, they have been rarely applied in psychophysics. Furthermore, to our knowledge, the issue of estimating the point-of-subjective-equivalence (PSE) within the GLMM framework has never been addressed. Therefore the article has two purposes: It provides a brief introduction to the usage of the GLMM in psychophysics, and it evaluates two different methods to estimate the PSE and its variability within the GLMM framework. We compare the performance of the GLMM and the classical two-level model on published experimental data and simulated data. We report that the estimated values of the parameters were similar between the two models and Type I errors were below the confidence level in both models. However, the GLMM has a higher statistical power than the two-level model. Moreover, one can easily compare the fit of different GLMMs according to different criteria. In conclusion, we argue that the GLMM can be a useful method in psychophysics.
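For a two-parameter logistic fit, the PSE discussed here reduces to -intercept/slope. A minimal fixed-effects sketch follows; it is not a full GLMM (no random effects), and the simulated observer's true PSE of 0.5 and slope of 1.5 are illustrative assumptions.

```python
import math
import random

def fit_logistic(x, y, iters=50):
    """Two-parameter logistic psychometric fit P(y=1) = 1/(1+exp(-(b0 + b1*x)))
    by Newton-Raphson on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p                  # gradient of the log-likelihood
            g1 += (yi - p) * xi
            w = p * (1.0 - p)             # Fisher information terms
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step via 2x2 inverse
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Simulated observer with true PSE = 0.5 and slope 1.5 (illustrative values).
random.seed(3)
levels = [-3, -2, -1, 0, 1, 2, 3]
xs = [x for x in levels for _ in range(60)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-1.5 * (x - 0.5))) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)
pse = -b0 / b1    # stimulus level at which P(response) = 0.5
```

A GLMM extends this by adding subject-level random intercepts and slopes, which is where the PSE-estimation issue the article addresses arises.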
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
S. Peiris (Shelton); M. Asai (Manabu); M.J. McAleer (Michael)
2016-01-01
In recent years fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating the long memory in stochastic volatility
Evaluation of a stratiform cloud parameterization for general circulation models
Energy Technology Data Exchange (ETDEWEB)
Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)
1996-04-01
To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.
Inhomogeneous generalizations of Bianchi type VIh models with perfect fluid
Roy, S. R.; Prasad, A.
1991-07-01
Inhomogeneous universes admitting an Abelian G2 of isometry and filled with perfect fluid have been derived. These contain as special cases exact homogeneous universes of Bianchi type VIh. Many of these universes asymptotically tend to homogeneous Bianchi VIh universes. The models have been discussed for their physical and kinematical behaviors.
A generalized cellular automata approach to modeling first order ...
Indian Academy of Sciences (India)
Cellular automata; enzyme kinetics; extended von-Neumann neighborhood. 1. Introduction. Over the past two decades, there has been a significant growth in the use of computer-generated models to study dynamic phenomena in biochemical systems (Kier et al 2005). The need to include greater details about biochemical ...
The general class of Bianchi cosmological models with dark energy ...
Indian Academy of Sciences (India)
2017-03-08
Chaplygin gas behaves as dark matter in the early Universe, while it behaves as a cosmological constant at late times. Chaplygin gas [20,21] is one of the candidate dark energy models to explain the accelerated expansion of the Universe. The Chaplygin gas obeys an equation of state p = −A₁/ρ ...
Item Response Theory Using Hierarchical Generalized Linear Models
Ravand, Hamdollah
2015-01-01
Multilevel models (MLMs) are flexible in that they can be employed to obtain item and person parameters, test for differential item functioning (DIF) and capture both local item and person dependence. Papers on the MLM analysis of item response data have focused mostly on theoretical issues where applications have been add-ons to simulation…
Fiscal and Monetary Policy in a General Equilibrium Model,
1984-01-27
Costs in Financial Assets: It would be desirable from the point of view of realism to include transactions costs in security trades. Doing so would... (November), pp. 545-566. Diamond, Peter (1965), "National Debt in a Neoclassical Growth Model," American Economic Review, 55 (December), pp. 1126-1150.
General equilibrium basic needs policy model, (updating part).
Kouwenaar A
1985-01-01
ILO pub-WEP pub-PREALC pub. Working paper, econometric model for the assessment of structural change affecting development planning for basic needs satisfaction in Ecuador - considers population growth, family size (households), labour force participation, labour supply, wages, income distribution, profit rates, capital ownership, etc.; examines nutrition, education and health as factors influencing productivity. Diagram, graph, references, statistical tables.
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...
A generalized quarter car modelling approach with frame flexibility ...
Indian Academy of Sciences (India)
HUSAIN KANCHWALA
A simple Matlab code is provided that enables quick parametric studies. Finally, a parametric study and wheel hop analysis are performed for a realistic numerical example. Frequency and time domain responses obtained show clearly the effects of other wheels, which are outside the scope of usual quarter-car models.
Reference Priors for the General Location-Scale Model
Fernández, C.; Steel, M.F.J.
1997-01-01
The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately
Knowledge Growth: Applied Models of General and Individual Knowledge Evolution
Silkina, Galina Iu.; Bakanova, Svetlana A.
2016-01-01
The article considers the mathematical models of the growth and accumulation of scientific and applied knowledge since it is seen as the main potential and key competence of modern companies. The problem is examined on two levels--the growth and evolution of objective knowledge and knowledge evolution of a particular individual. Both processes are…
A Generalized Equatorial Model for the Accelerating Solar Wind
Tasnim, S.; Cairns, Iver H.; Wheatland, M. S.
2018-02-01
A new theoretical model for the solar wind is developed that includes the wind's acceleration, conservation of angular momentum, deviations from corotation, and nonradial velocity and magnetic field components from an inner boundary (corresponding to the onset of the solar wind) to beyond 1 AU. The model uses a solution of the time-steady isothermal equation of motion to describe the acceleration and analytically predicts the Alfvénic critical radius. We fit the model to near-Earth observations of the Wind spacecraft during the solar rotation period of 1-27 August 2010. The resulting data-driven model demonstrates the existence of noncorotating, nonradial flows and fields from the inner boundary (r = r_s) outward and predicts the magnetic field B = (B_r, B_ϕ), velocity v = (v_r, v_ϕ), and density n(r, ϕ, t), which vary with heliocentric distance r, heliolatitude ϕ, and time t in a Sun-centered standard inertial plane. The description applies formally only in the equatorial plane. In a frame corotating with the Sun, the transformed velocity v′ and field B′ are not parallel, resulting in an electric field with a component E_z′ along the z axis. The resulting E′ × B′ drift lies in the equatorial plane, while the ∇B and curvature drifts are out of the plane. Together these may lead to enhanced scattering/heating of sufficiently energetic particles. The model predicts that deviations δv_ϕ from corotation at the inner boundary are common, with δv_ϕ(r_s, ϕ_s, t_s) comparable to the transverse velocities due to granulation and supergranulation motions. Abrupt changes in δv_ϕ(r_s, ϕ_s, t_s) are interpreted in terms of converging and diverging flows at the cell boundaries and centers, respectively. Large-scale variations in the predicted angular momentum demonstrate that the solar wind can drive vorticity and turbulence from near the Sun to 1 AU and beyond.
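The time-steady isothermal equation of motion underlying the model is the classical Parker wind problem, whose transonic branch can be solved numerically in a few lines. This is a textbook sketch of that acceleration profile, not the authors' data-driven fit; the bisection bracket and iteration count are assumptions.

```python
import math

def parker_speed(x):
    """Isothermal Parker wind: Mach number u = v/c_s at x = r/r_c, where
    r_c = GM/(2 c_s^2) is the critical (sonic) radius. Solves the transonic
    branch of u^2 - ln(u^2) = 4*ln(x) + 4/x - 3 by bisection."""
    def f(u):
        return u * u - math.log(u * u) - 4.0 * math.log(x) - 4.0 / x + 3.0
    # subsonic inside the critical radius, supersonic outside
    lo, hi = (1e-8, 1.0) if x < 1.0 else (1.0, 50.0)
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0.0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The solution passes smoothly through u = 1 at x = 1 and keeps accelerating outward, which is the behavior the abstract's analytic treatment generalizes with rotation and nonradial components.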
Anisotropic charged physical models with generalized polytropic equation of state
Nasim, A.; Azam, M.
2018-01-01
In this paper, we find exact solutions of the Einstein-Maxwell equations with a generalized polytropic equation of state (GPEoS). For this, we consider a spherically symmetric object with a charged anisotropic matter distribution. We rewrite the field equations in simple form through the transformation introduced by Durgapal (Phys Rev D 27:328, 1983) and solve these equations analytically. To assess the physical acceptability of these solutions, we plot physical quantities such as energy density, anisotropy, speed of sound, and tangential and radial pressure. We find that all solutions fulfill the required physical conditions. It is concluded that all our results reduce to the case of an anisotropic charged matter distribution with linear, quadratic, as well as polytropic equations of state.
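For context, a generalized polytropic equation of state in this line of work typically combines a linear term with a polytropic term for the radial pressure. A sketch of the commonly used form, where α, β and the polytropic index n are the model parameters (the exact variant should be taken from the paper itself):

```latex
P_r = \beta\,\rho + \alpha\,\rho^{\,1+\frac{1}{n}}
```

Setting α = 0 recovers the linear equation of state and β = 0 the pure polytrope, consistent with the reductions mentioned in the abstract.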
Emergent behaviour of a generalized Vicsek-type flocking model
International Nuclear Information System (INIS)
Ha, Seung-Yeal; Jeong, Eunhee; Kang, Moon-Jin
2010-01-01
We present a planar agent-based flocking model with a distance-dependent communication weight. We derive a sufficient condition for asymptotic flocking in terms of the initial spatial and heading-angle diameters and the communication weight. For this, we employ differential inequalities for the spatial and phase diameters together with the Lyapunov functional approach. When the diameter of the agents' initial heading-angles is sufficiently small, we show that the diameter of the heading-angles converges to the average value of the initial heading-angles exponentially fast. As an application of the flocking estimates, we also show that the Kuramoto model with a connected communication topology on the regular lattice ℤ^d for identical oscillators exhibits complete phase-frequency synchronization when the coupled oscillators are initially distributed on a half circle.
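The Kuramoto corollary can be illustrated numerically: identical oscillators coupled on a ring lattice (a connected topology, the one-dimensional analogue of the lattice case), with initial phases confined to a half circle, synchronize their phases. A minimal sketch with illustrative parameter values:

```python
import math, random

def kuramoto_ring(n=20, K=1.0, dt=0.01, steps=8000, seed=0):
    """Identical Kuramoto oscillators on a ring lattice (a connected
    communication topology), initial phases confined to a half circle.
    Returns (initial phase diameter, final phase diameter)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 0.9 * math.pi) for _ in range(n)]
    d0 = max(theta) - min(theta)
    for _ in range(steps):
        # identical natural frequencies, nearest-neighbour coupling;
        # theta[i - 1] wraps around the ring for i = 0 in Python
        theta = [theta[i] + dt * K * (math.sin(theta[i - 1] - theta[i])
                                      + math.sin(theta[(i + 1) % n] - theta[i]))
                 for i in range(n)]
    return d0, max(theta) - min(theta)
```

The phase diameter shrinks from roughly the half-circle width toward zero, consistent with complete phase synchronization for identical oscillators.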
General quadrupole shapes in the Interacting Boson Model
International Nuclear Information System (INIS)
Leviatan, A.
1990-01-01
Characteristic attributes of nuclear quadrupole shapes are investigated within the algebraic framework of the Interacting Boson Model. For each shape the Hamiltonian is resolved into intrinsic and collective parts, normal modes are identified and intrinsic states are constructed and used to estimate transition matrix elements. Special emphasis is paid to new features (e.g. rigid triaxiality and coexisting deformed shapes) that emerge in the presence of the three-body interactions. 27 refs
System Advisor Model, SAM 2014.1.14: General Description
Energy Technology Data Exchange (ETDEWEB)
Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Neises, Ty [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wagner, Michael [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ferguson, Tom [Global Resources, Northbrook, IL (United States); Gilman, Paul [National Renewable Energy Lab. (NREL), Golden, CO (United States); Janzou, Steven [Janzou Consulting, Idaho Springs, CO (United States)
2014-02-01
This document describes the capabilities of the U.S. Department of Energy and National Renewable Energy Laboratory's System Advisor Model (SAM), Version 2013.9.20, released on September 9, 2013. SAM is a computer model that calculates performance and financial metrics of renewable energy systems. Project developers, policy makers, equipment manufacturers, and researchers use graphs and tables of SAM results in the process of evaluating financial, technology, and incentive options for renewable energy projects. SAM simulates the performance of photovoltaic, concentrating solar power, solar water heating, wind, geothermal, biomass, and conventional power systems. The financial model can represent financial structures for projects that either buy and sell electricity at retail rates (residential and commercial) or sell electricity at a price determined in a power purchase agreement (utility). SAM's advanced simulation options facilitate parametric and sensitivity analyses, and statistical analysis capabilities are available for Monte Carlo simulation and weather variability (P50/P90) studies. SAM can also read input variables from Microsoft Excel worksheets. For software developers, the SAM software development kit (SDK) makes it possible to use SAM simulation modules in their applications written in C/C++, C#, Java, Python, and MATLAB. NREL provides both SAM and the SDK as free downloads at http://sam.nrel.gov. Technical support and more information about the software are available on the website.
A General Model for Cost Estimation in an Exchange
Directory of Open Access Journals (Sweden)
Benzion Barlev
2014-03-01
Current Generally Accepted Accounting Principles (GAAP) state that the cost of an asset acquired for cash is the fair value (FV) of the amount surrendered, and that of an asset acquired in a non-monetary exchange is the FV of the asset surrendered or, if it is more “clearly evident,” the FV of the acquired asset. The measurement method prescribed for a non-monetary exchange ignores valuable information about the “less clearly evident” asset. Thus, we suggest that the FV in any exchange be measured by the weighted average of the exchanged assets’ FV estimates, where the weights are the inverses of the estimated variances. This alternative valuation process accounts for the uncertainty involved in estimating the FV of each asset in the exchange. The proposed method suits all types of exchanges, monetary and non-monetary. In a monetary transaction, the weighted average equals the cash paid because the variance of its FV is nil.
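The proposed inverse-variance weighting can be sketched in a few lines; the function name and example figures are illustrative only:

```python
def exchange_fair_value(fv_estimates, variances):
    """Inverse-variance weighted fair value of an exchange.

    Each exchanged asset's FV estimate is weighted by the inverse of
    its estimated variance.  A cash leg has (near-)zero variance, so
    its weight dominates and the result collapses to the cash price,
    reproducing the monetary case.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * fv for w, fv in zip(weights, fv_estimates)) / total
```

For example, FV estimates of 100 and 110 with variances 4 and 16 give a weighted value of 102; replacing the first leg with cash (variance near zero) returns the cash price of 100.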
Anzhelika D. Tsymbalaru
2010-01-01
The paper considers scientific approaches to modeling the innovation educational environment of a general educational institution: the systemic approach (analysis of the object, process, and result of modeling as system objects), the activity approach (organizational and psychological structure), and the synergetic approach (aspects and principles).
Relaxation Cycles in a Generalized Neuron Model with Two Delays
Directory of Open Access Journals (Sweden)
S. D. Glyzin
2013-01-01
A method of modeling the phenomenon of bursting behavior in neural systems based on delay equations is proposed. A singularly perturbed scalar nonlinear differential-difference equation of Volterra type, containing one function without delay and two functions with different lags, serves as a mathematical model of a neuron generating separate pulses. It is established that this equation, for a suitable choice of parameters, has a stable periodic motion with any preassigned number of bursts in the time interval of the period length. To prove this assertion, we first pass to a relay-type equation and then determine the asymptotic solutions of the singularly perturbed equation. On the basis of these asymptotics the Poincaré operator is constructed. The resulting operator carries a closed, bounded, convex set of initial conditions into itself, which implies that it has at least one fixed point. An evaluation of the Fréchet derivative of this succession operator allows us to prove the uniqueness and stability of the resulting relaxation periodic solution.
Development of a General Modelling Methodology for Vacuum Residue Hydroconversion
Directory of Open Access Journals (Sweden)
Pereira de Oliveira L.
2013-11-01
This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach overcomes the lack of molecular detail of the petroleum fractions and simulates the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte Carlo (kMC) method is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that the two approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides a high level of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
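Gillespie's Stochastic Simulation Algorithm, used here to validate the lumped deterministic model, can be sketched for a toy two-step lumped conversion (the species names and rate constants are illustrative, not the paper's reaction network):

```python
import math, random

def gillespie_two_step(n0=1000, k1=1.0, k2=0.5, seed=1):
    """Gillespie SSA for a toy lumped scheme VR -> D -> G (vacuum
    residue to distillate to gas).  Returns ((vr, d, g), t_final)."""
    rng = random.Random(seed)
    vr, d, g, t = n0, 0, 0, 0.0
    while True:
        a1, a2 = k1 * vr, k2 * d       # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:                  # no molecules left to react
            return (vr, d, g), t
        # exponential waiting time to the next reaction event
        t += -math.log(1.0 - rng.random()) / a0
        # choose which reaction fires, proportional to its propensity
        if rng.random() * a0 < a1:
            vr, d = vr - 1, d + 1
        else:
            d, g = d - 1, g + 1
```

Every molecule eventually converts, so molecule count is conserved and the final state is all product; the full-scale method applies the same event loop to molecule-level reaction pathways.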
General topology meets model theory, on p and t.
Malliaris, Maryanthe; Shelah, Saharon
2013-08-13
Cantor proved in 1874 [Cantor G (1874) J Reine Angew Math 77:258-262] that the continuum is uncountable, and Hilbert's first problem asks whether it is the smallest uncountable cardinal. A program arose to study cardinal invariants of the continuum, which measure the size of the continuum in various ways. By Gödel [Gödel K (1939) Proc Natl Acad Sci USA 25(4):220-224] and Cohen [Cohen P (1963) Proc Natl Acad Sci USA 50(6):1143-1148], Hilbert's first problem is independent of ZFC (Zermelo-Fraenkel set theory with the axiom of choice). Much work both before and since has been done on inequalities between these cardinal invariants, but some basic questions have remained open despite Cohen's introduction of forcing. The oldest and perhaps most famous of these is whether " p = t," which was proved in a special case by Rothberger [Rothberger F (1948) Fund Math 35:29-46], building on Hausdorff [Hausdorff (1936) Fund Math 26:241-255]. In this paper we explain how our work on the structure of Keisler's order, a large-scale classification problem in model theory, led to the solution of this problem in ZFC as well as of an a priori unrelated open question in model theory.
Cornelissen, Katri K; Cornelissen, Piers L; Hancock, Peter J B; Tovée, Martin J
2016-05-01
A core feature of anorexia nervosa (AN) is an over-estimation of body size. Women with AN have a different pattern of eye-movements when judging bodies, but it is unclear whether this is specific to their diagnosis or whether it is found in anyone over-estimating body size. To address this question, we compared the eye movement patterns from three participant groups while they carried out a body size estimation task: (i) 20 women with recovering/recovered anorexia (rAN) who had concerns about body shape and weight and who over-estimated body size, (ii) 20 healthy controls who had normative levels of concern about body shape and who estimated body size accurately (iii) 20 healthy controls who had normative levels of concern about body shape but who did over-estimate body size. Comparisons between the three groups showed that: (i) accurate body size estimators tended to look more in the waist region, and this was independent of clinical diagnosis; (ii) there is a pattern of looking at images of bodies, particularly viewing the upper parts of the torso and face, which is specific to participants with rAN but which is independent of accuracy in body size estimation. Since the over-estimating controls did not share the same body image concerns that women with rAN report, their over-estimation cannot be explained by attitudinal concerns about body shape and weight. These results suggest that a distributed fixation pattern is associated with over-estimation of body size and should be addressed in treatment programs. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:507-518). © 2016 The Authors. International Journal of Eating Disorders published by Wiley Periodicals, Inc.
Generalized Sparselet Models for Real-Time Multiclass Object Recognition.
Song, Hyun Oh; Girshick, Ross; Zickler, Stefan; Geyer, Christopher; Felzenszwalb, Pedro; Darrell, Trevor
2015-05-01
The problem of real-time multiclass object recognition is of great practical importance in object recognition. In this paper, we describe a framework that simultaneously utilizes shared representation, reconstruction sparsity, and parallelism to enable real-time multiclass object detection with deformable part models at 5Hz on a laptop computer with almost no decrease in task performance. Our framework is trained in the standard structured output prediction formulation and is generically applicable for speeding up object recognition systems where the computational bottleneck is in multiclass, multi-convolutional inference. We experimentally demonstrate the efficiency and task performance of our method on PASCAL VOC, subset of ImageNet, Caltech101 and Caltech256 dataset.
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Resource selection functions (RSF) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM) and generalized estimating equations (GEE) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
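The point about empirical versus model-based standard errors can be illustrated without any modelling library: with many correlated fixes per animal, a naive SE that treats every fix as independent is far too small. A minimal sketch with simulated data (all parameter values illustrative):

```python
import math, random

def empirical_vs_naive_se(n_animals=40, n_fixes=50, seed=2):
    """Estimate an overall habitat-selection probability from telemetry
    fixes clustered within animals, and return (naive_se, empirical_se).

    The naive SE treats all fixes as independent; the empirical SE uses
    the between-animal variability of cluster means, which is the
    appropriate sampling unit when fixes within an animal are correlated.
    """
    rng = random.Random(seed)
    cluster_means, all_y = [], []
    for _ in range(n_animals):
        # each animal has its own selection probability, which induces
        # within-animal correlation of its fixes
        p_i = min(max(rng.gauss(0.4, 0.15), 0.01), 0.99)
        ys = [1 if rng.random() < p_i else 0 for _ in range(n_fixes)]
        cluster_means.append(sum(ys) / n_fixes)
        all_y.extend(ys)
    n = len(all_y)
    p_hat = sum(all_y) / n
    naive_se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    m = len(cluster_means)
    var_between = sum((c - p_hat) ** 2 for c in cluster_means) / (m - 1)
    empirical_se = math.sqrt(var_between / m)
    return naive_se, empirical_se
```

With these settings the empirical SE comes out roughly twice the naive one, which is the inflation that cluster-robust GEE standard errors are designed to capture.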
Interest Rates with Long Memory: A Generalized Affine Term-Structure Model
DEFF Research Database (Denmark)
Osterrieder, Daniela
We propose a model for the term structure of interest rates that is a generalization of the discrete-time, Gaussian, affine yield-curve model. Compared to standard affine models, our model allows for general linear dynamics in the vector of state variables. In an application to real yields of U...... by a level, a slope, and a curvature factor that arise naturally from the co-fractional modeling framework. We show that implied yields match the level and the variability of yields well over time. Studying the out-of-sample forecasting accuracy of our model, we find that our model results in good yield...
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
A methodology model for quality management in a general hospital.
Stern, Z; Naveh, E
1997-01-01
A reappraisal is made of the relevance of industrial modes of quality management to the issues of medical care. Analysis of the nature of medical care, which differentiates it from the supplier-client relationships of industry, presents the main intrinsic characteristics, which create problems in application of the industrial quality management approaches to medical care. Several examples are the complexity of the relationship between the medical action and the result obtained, the client's nonacceptance of economic profitability as a value in his medical care, and customer satisfaction biased by variable standards of knowledge. The real problems unique to hospitals are addressed, and a methodology model for their quality management is offered. Included is a sample of indicator vectors, measurements of quality care, cost of medical care, quality of service, and human resources. These are based on the trilogy of planning quality, quality control, and improving quality. The conclusions confirm the inadequacy of industrial quality management approaches for medical institutions and recommend investment in formulation of appropriate concepts.
Existing creatinine-based equations overestimate glomerular filtration rate in Indians.
Kumar, Vivek; Yadav, Ashok Kumar; Yasuda, Yoshinari; Horio, Masaru; Kumar, Vinod; Sahni, Nancy; Gupta, Krishan L; Matsuo, Seiichi; Kohli, Harbir Singh; Jha, Vivekanand
2018-02-01
Accurate estimation of glomerular filtration rate (GFR) is important for diagnosis and risk stratification in chronic kidney disease and for selection of living donors. Ethnic differences have required correction factors in the originally developed creatinine-based GFR estimation equations for populations around the world. Existing equations have not been validated in the vegetarian Indian population. We examined the performance of creatinine- and cystatin-based GFR estimating equations in Indians. GFR was measured by urinary clearance of inulin. Serum creatinine was measured using IDMS-traceable Jaffe and enzymatic assays, and cystatin C by colloidal gold immunoassay. Dietary protein intake was calculated by measuring urinary nitrogen appearance. Bias, precision and accuracy were calculated for the eGFR equations. A total of 130 participants (63 healthy kidney donors and 67 with CKD) were studied. About 50% were vegetarians, and the remainder ate meat 3.8 times every month. The average creatinine excretion was 14.7 mg/kg/day (95% CI: 13.5 to 15.9 mg/kg/day) in males and 12.4 mg/kg/day (95% CI: 11.2 to 13.6 mg/kg/day) in females. The average daily protein intake was 46.1 g/day (95% CI: 43.2 to 48.8 g/day). The mean mGFR in the study population was 51.66 ± 31.68 ml/min/1.73 m². All creatinine-based eGFR equations overestimated GFR (p < 0.01 for each creatinine-based eGFR equation). However, eGFR by CKD-EPI Cys was not significantly different from mGFR (p = 0.38). The CKD-EPI Cys exhibited the lowest bias [mean bias: -3.53 ± 14.70 ml/min/1.73 m² (95% CI: -6.08 to -0.98)] and the highest accuracy (P30: 74.6%). The GFR in the healthy population was 79.44 ± 20.19 (range: 41.90-134.50) ml/min/1.73 m². Existing creatinine-based GFR estimating equations overestimate GFR in Indians. An appropriately powered study is needed to develop either a correction factor or a new equation for accurate assessment of kidney function in the Indian population.
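For reference, a sketch of the CKD-EPI 2009 creatinine equation, one of the creatinine-based equations such validation studies typically evaluate. The coefficients below follow the published equation, but treat this as illustrative, not clinical, code:

```python
def ckd_epi_creatinine(scr_mg_dl, age, female, black=False):
    """CKD-EPI 2009 creatinine equation, eGFR in ml/min/1.73 m^2.

    kappa and alpha are sex-specific constants from the published
    equation; creatinine in mg/dl, age in years.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha      # applies below the knot
            * max(ratio, 1.0) ** -1.209     # applies above the knot
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

A study such as this one compares these predictions against inulin-clearance mGFR to quantify bias; the overestimation reported here means the predicted values sit systematically above the measured ones.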
Directory of Open Access Journals (Sweden)
Yoshitaka eNakajima
2014-05-01
When the onsets of three successive sound bursts mark two adjacent time intervals, the second time interval can be underestimated when it is physically longer than the first time interval by up to 100 ms. This illusion, time-shrinking, is very stable when the first time interval is 200 ms or shorter (Nakajima et al., 2004, Perception, 33). Time-shrinking had been considered a kind of perceptual assimilation that makes the first and the second time intervals more similar to each other. Here we investigated whether the underestimation of the second time interval was replaced by an overestimation if the physical difference between the neighboring time intervals was too large for the assimilation to take place; this is a typical situation in which a perceptual contrast could be expected. Three experiments measuring the overestimation/underestimation of the second time interval by the method of adjustment were conducted. The first time interval was varied from 40 to 280 ms, and such overestimations indeed took place when the first time interval was 80-280 ms. The overestimations were robust when the second time interval was longer than the first time interval by 240 ms or more, and the magnitude of the overestimation was larger than 100 ms in some conditions. Thus, a perceptual contrast that replaces time-shrinking was established. An additional experiment indicated that this contrast did not affect the perception of the first time interval substantially: the contrast in the present conditions seemed unilateral.
Aluminium in an ocean general circulation model compared with the West Atlantic Geotraces cruises
van Hulten, M. M. P.; Sterl, A.; Tagliabue, A.; Dutay, J. -C.; Gehlen, M.; de Baar, H. J. W.; Middag, R.
2013-01-01
A model of aluminium has been developed and implemented in an Ocean General Circulation Model (NEMO-PISCES). In the model, aluminium enters the ocean by means of dust deposition. The internal oceanic processes are described by advection, mixing and reversible scavenging. The model has been evaluated
The Use of Hierarchical Generalized Linear Model for Item Dimensionality Assessment
Beretvas, S. Natasha; Williams, Natasha J.
2004-01-01
To assess item dimensionality, the following two approaches are described and compared: hierarchical generalized linear model (HGLM) and multidimensional item response theory (MIRT) model. Two generating models are used to simulate dichotomous responses to a 17-item test: the unidimensional and compensatory two-dimensional (C2D) models. For C2D…
Interpreting Hierarchical Linear and Hierarchical Generalized Linear Models with Slopes as Outcomes
Tate, Richard
2004-01-01
Current descriptions of results from hierarchical linear models (HLM) and hierarchical generalized linear models (HGLM), usually based only on interpretations of individual model parameters, are incomplete in the presence of statistically significant and practically important "slopes as outcomes" terms in the models. For complete description of…
Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.
2014-04-01
Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic consequences of disasters and recovery, including the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and, furthermore, avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investment restores the capital stock in an existing period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of a given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 was found to be closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of the government-led reconstruction activity, a level that would take another four years to reach under S2.
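The three-scenario logic can be mimicked with a toy capital-dynamics sketch: a disaster cuts the capital stock, output depends on capital, and reconstruction investment closes the gap faster. This is a deliberately simplified stand-in for the CGE machinery, with all numbers illustrative, not the paper's:

```python
def scenario_losses(k0=10.0, alpha=0.3, direct_loss=2.0,
                    rebuild_rate=0.25, save_rate=0.25,
                    depreciation=0.05, years=6):
    """Toy sketch of the three scenarios: S0 (no disaster), S1
    (disaster plus reconstruction investment) and S2 (disaster without
    reconstruction).  Output follows Y = K**alpha; the disaster removes
    `direct_loss` units of capital on the supply side, and under S1 a
    share of the remaining capital gap is rebuilt each year.
    Returns cumulative output losses (loss_s1, loss_s2) relative to S0."""
    def output_path(shock, rebuild):
        k, ys = k0 - shock, []
        for _ in range(years):
            y = k ** alpha
            ys.append(y)
            invest = save_rate * y
            if rebuild:  # reconstruction restores part of the gap
                invest += rebuild_rate * max(k0 - k, 0.0)
            k = (1.0 - depreciation) * k + invest
        return ys
    y0 = output_path(0.0, False)          # S0: business as usual
    y1 = output_path(direct_loss, True)   # S1: with reconstruction
    y2 = output_path(direct_loss, False)  # S2: without reconstruction
    loss = lambda ys: sum(b - a for b, a in zip(y0, ys))
    return loss(y1), loss(y2)
```

As in the study, the loss excluding reconstruction exceeds the loss including it; in this toy parameterization the ratio is of the same order as the paper's factor of roughly 1.5.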
Baya Botti, A; Pérez-Cueto, F J A; Vasquez Monllor, P A; Kolsteren, P W
2010-01-01
Since no growth standards for adolescents exist and a single reference applicable everywhere is still in debate, it is recognized that the best reference should be derived from the growth pattern of the healthy population that will use it. In 2007, a study developed body mass index references for 12- to 18-year-old Bolivian school adolescents (BAP). The aim was to compare nutritional status outcomes obtained by applying BMI references from the BAP, the Centers for Disease Control and Prevention (CDC) 2000, the International Obesity Task Force (IOTF), and the 2007 WHO, to determine their appropriateness for use in Bolivian adolescents. The references were applied to 3306 adolescents (45.0% male, 55.0% female, 12-18 y) selected from a nationally representative sample. The main findings reveal that the CDC and 2007 WHO references underestimate underweight and overestimate overweight relative to the BAP. These findings argue for the use of the BAP in Bolivia, since it reflects the growth pattern of its healthy adolescent population. International references may lead to incorrect conclusions when applied to Bolivian adolescents. They could deflect efforts from populations that need prompt intervention and misdirect treatment and budgets to unnecessary ones. We recommend validation of international references where appropriate until a standard is released.
Overestimation of drinking norms and its association with alcohol consumption in apprentices.
Haug, Severin; Ulbricht, Sabina; Hanke, Monika; Meyer, Christian; John, Ulrich
2011-01-01
To investigate associations of normative misperceptions and drinking behaviors in apprentices, complementing the previous literature on university students. A survey in a defined region of northern Germany was carried out among 1124 apprentices attending vocational schools. Using items from the short form of the Alcohol Use Disorders Identification Test (AUDIT-C), drinking behaviors and normative perceptions of drinking in the reference group of same-gender apprentices were assessed. Demographic, smoking- and drinking-related predictors of normative misperceptions were explored. Personal drinking behavior was positively correlated with perceived norms, both for drinking frequency (males: Kendall's τ = 0.33) and for drinking quantity. Alcohol use disorders according to AUDIT-C cut-offs were more prevalent in subjects who overestimated drinking quantity in their reference group than in those who correctly estimated or underestimated it (males: relative risk (RR) 1.78). Riskier patterns of alcohol use were positively associated with normative misperceptions of both drinking quantity and frequency. Interventions correcting alcohol use misperceptions might be effective in reducing problem drinking in adolescents with heterogeneous educational levels.
Age, risk assessment, and sanctioning: Overestimating the old, underestimating the young.
Monahan, John; Skeem, Jennifer; Lowenkamp, Christopher
2017-04-01
While many extoll the potential contribution of risk assessment to reducing the human and fiscal costs of mass incarceration without increasing crime, others adamantly oppose the incorporation of risk assessment in sanctioning. The principal concern is that any benefits in terms of reduced rates of incarceration achieved through the use of risk assessment will be offset by costs to social justice-which are claimed to be inherent in any risk assessment process that relies on variables for which offenders bear no responsibility, such as race, gender, and age. Previous research has addressed the variables of race and gender. Here, based on a sample of 7,350 federal offenders, we empirically test the predictive fairness of an instrument-the Post Conviction Risk Assessment (PCRA)-that includes the variable of age. We found that the strength of association between PCRA scores and future arrests was similar across younger (i.e., 25 years and younger), middle (i.e., 26-40 years), and older (i.e., 41 years and older) age groups (AUC values .70 or higher). Nevertheless, rates of arrest within each PCRA risk category were consistently lower for older than for younger offenders. Despite its inclusion of age as a risk factor, PCRA scores overestimated rates of recidivism for older offenders and underestimated rates of recidivism for younger offenders. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
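The distinction this study draws, similar discrimination (AUC) across age groups but group-specific over- or under-estimation of recidivism rates, is a calibration issue and can be reproduced with synthetic data. A minimal sketch using hypothetical scores and probabilities, not PCRA data:

```python
import random

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def simulate_group(offset, n=2000, seed=3):
    """Integer risk scores 0..10; the true outcome probability is
    score/10 + offset, so a nonzero offset miscalibrates the score
    for the group without destroying its ranking power."""
    rng = random.Random(seed)
    scores = [rng.randrange(11) for _ in range(n)]
    labels = [1 if rng.random() < min(max(s / 10 + offset, 0.0), 1.0) else 0
              for s in scores]
    return scores, labels
```

With offset +0.15 for a "younger" group and -0.15 for an "older" group, both groups show similar AUC while their observed rates sit above and below the score-implied rates, mirroring the pattern reported for PCRA.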
Heat stress is overestimated in climate impact studies for irrigated agriculture
Siebert, Stefan; Webber, Heidi; Zhao, Gang; Ewert, Frank
2017-05-01
Climate change will increase the number and severity of heat waves and is expected to negatively affect crop yields. Here we show for wheat and maize across Europe that heat stress is considerably reduced by irrigation due to surface cooling, for both current and projected future climate. We demonstrate that crop heat stress impact assessments should be based on canopy temperature, because simulations with air temperatures measured at standard weather stations cannot reproduce differences in crop heat stress between irrigated and rainfed conditions. Crop heat stress was overestimated on irrigated land when air temperature was used, with errors becoming larger with projected climate change. Corresponding errors in mean crop yield calculated across Europe for the baseline climate (1984-2013), 0.2 Mg yr-1 (2%) and 0.6 Mg yr-1 (5%) for irrigated winter wheat and irrigated grain maize respectively, would increase to up to 1.5 Mg yr-1 (16%) for irrigated winter wheat and 4.1 Mg yr-1 (39%) for irrigated grain maize, depending on the climate change projection/GCM combination considered. We conclude that climate change impact assessments for crop heat stress need to account explicitly for the impact of irrigation.
Nya-Ngatchou, Jean-Jacques; Corl, Dawn; Onstad, Susan; Yin, Tom; Tylee, Tracy; Suhr, Louise; Thompson, Rachel E; Wisse, Brent E
2015-02-01
Hypoglycaemia is associated with morbidity and mortality in critically ill patients, and many hospitals have programmes to minimize hypoglycaemia rates. Recent studies have established the hypoglycaemic patient-day as a key metric and have published benchmark inpatient hypoglycaemia rates on the basis of point-of-care blood glucose data, even though these values are prone to measurement errors. A retrospective cohort study including all patients admitted to Harborview Medical Center Intensive Care Units (ICUs) during 2010 and 2011 was conducted to evaluate a quality improvement programme to reduce inappropriate documentation of point-of-care blood glucose measurement errors. Laboratory Medicine point-of-care blood glucose data and patient charts were reviewed to evaluate all episodes of hypoglycaemia. A quality improvement intervention decreased the share of hypoglycaemic values attributable to measurement error (initially 31%); inclusion of such measurement errors likely overestimates ICU hypoglycaemia rates, and they can be reduced by a quality improvement effort. The currently used hypoglycaemic patient-day metric does not evaluate recurrent or prolonged events that may be more likely to cause patient harm. The monitored patient-day as currently defined may not be the optimal denominator to determine inpatient hypoglycaemic risk. Copyright © 2014 John Wiley & Sons, Ltd.
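The patient-day metric discussed above, and its blindness to recurrent events, can be made concrete in a few lines (field names and threshold are illustrative):

```python
def hypoglycaemic_patient_days(readings, threshold=70):
    """Compute the hypoglycaemic patient-day metric from point-of-care
    glucose readings given as (patient_id, day, value) tuples.

    A patient-day is flagged at most once, however many low readings
    occur on it, which is why the metric cannot distinguish a single
    brief low from recurrent or prolonged events within the same day.
    Returns (hypoglycaemic patient-days, monitored patient-days).
    """
    all_days = {(pid, day) for pid, day, v in readings}
    low_days = {(pid, day) for pid, day, v in readings if v < threshold}
    return len(low_days), len(all_days)
```

For example, three low readings on one patient-day contribute exactly one hypoglycaemic patient-day, the same as a single transient low.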
de Souza, Juliana Bottoni; Reisen, Valdério Anselmo; Santos, Jane Méri; Franco, Glaura Conceição
2014-01-01
OBJECTIVE To analyze the association between concentrations of air pollutants and admissions for respiratory causes in children. METHODS Ecological time series study. Daily figures for hospital admissions of children aged < 6, and daily concentrations of air pollutants (PM10, SO2, NO2, O3 and CO) were analyzed in the Região da Grande Vitória, ES, Southeastern Brazil, from January 2005 to December 2010. For statistical analysis, two techniques were combined: Poisson regression with generalized additive models and principal component analysis. These analysis techniques complemented each other and provided more significant estimates in the estimation of relative risk. The models were adjusted for temporal trend, seasonality, day of the week, meteorological factors and autocorrelation. In the final adjustment of the model, it was necessary to include autoregressive moving average (ARMA(p, q)) models in the residuals in order to eliminate the autocorrelation structures present in the components. RESULTS For every 10.49 μg/m3 increase (interquartile range) in levels of the pollutant PM10 there was a 3.0% increase in the relative risk estimated using the generalized additive model analysis of principal components (seasonal autoregressive), while in the usual generalized additive model the estimate was 2.0%. CONCLUSIONS Compared to the usual generalized additive model, the proposed generalized additive model with principal component analysis showed, in general, better results in estimating relative risk and quality of fit. PMID:25119940
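Reported effect sizes of this kind translate between the coefficient and relative-risk scales as RR = exp(β·Δ) in a log-linear (Poisson) model; a small sketch using the 3.0% per 10.49 μg/m3 figure above:

```python
import math

def relative_risk(beta, delta):
    """Relative risk for a `delta`-unit pollutant increase in a
    log-linear (Poisson) model: RR = exp(beta * delta)."""
    return math.exp(beta * delta)

# coefficient implied by a 3.0% relative-risk increase per
# 10.49 ug/m^3 (one interquartile range) of PM10
beta_pm10 = math.log(1.03) / 10.49
```

The same coefficient then gives the risk for any other increment, e.g. two interquartile ranges yield exp(2·ln 1.03) ≈ 1.061, slightly more than doubling the excess risk.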
Implementation of a PETN failure model using ARIA's general chemistry framework
Energy Technology Data Exchange (ETDEWEB)
Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-01-01
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.
Effects of uncertainty in model predictions of individual tree volume on large area volume estimates
Ronald E. McRoberts; James A. Westfall
2014-01-01
Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...
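The mechanism described here, where per-tree prediction errors vanish from the uncertainty budget when only the summed predictions are reported, can be illustrated with a toy Monte Carlo simulation; all population values below are hypothetical, not the study's inventory data:

```python
import random
import statistics

random.seed(42)

N_TREES = 1000   # trees in a hypothetical large estimation area
N_REPS = 300     # Monte Carlo replicates of the large-area total

def total_volume(include_model_error):
    """Sum individual-tree volume predictions over the area."""
    total = 0.0
    for _ in range(N_TREES):
        true_vol = random.gauss(0.5, 0.15)    # tree-to-tree variability (m^3)
        pred = true_vol                       # unbiased model prediction
        if include_model_error:
            pred += random.gauss(0.0, 0.15)   # per-tree model residual error
        total += pred
    return total

naive = [total_volume(False) for _ in range(N_REPS)]   # model error ignored
honest = [total_volume(True) for _ in range(N_REPS)]   # model error propagated

# Ignoring the model error understates the spread of the large-area total,
# i.e. the precision of the estimate is overstated:
print(statistics.stdev(naive), statistics.stdev(honest))
```

The spread of the "naive" totals reflects only tree-to-tree sampling variability, while the "honest" totals also carry the model's residual error, so their standard deviation is visibly larger.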
Cornelissen, Katri K.; Cornelissen, Piers L.; Hancock, Peter J. B.; Tovée, Martin J.
2016-01-01
ABSTRACT Objective A core feature of anorexia nervosa (AN) is an over-estimation of body size. Women with AN have a different pattern of eye-movements when judging bodies, but it is unclear whether this is specific to their diagnosis or whether it is found in anyone over-estimating body size. Method To address this question, we compared the eye movement patterns from three participant groups while they carried out a body size estimation task: (i) 20 women with recovering/recovered anorexia (r...
Kim, Miso; Lee, Hongmie
2010-12-01
The study aimed to analyze the lifestyles, weight control behavior, dietary habits, and depression of female university students. The subjects were 532 students from 8 universities located in 4 provinces in Korea. According to percent ideal body weight, 33 (6.4%), 181 (34.0%), 283 (53.2%), 22 (4.1%) and 13 (2.5%) were severely underweight, underweight, normal, overweight and obese, respectively, based on self-reported height and weight. As much as 64.1% and only 2.4%, respectively, overestimated and underestimated their body weight status. Six overweight subjects were excluded from the overestimation group for the purpose of this study, resulting in an overestimation group consisting only of underweight and normal-weight subjects. Compared to those from the normal perception group, significantly more subjects from the overestimation group were currently smoking (P = 0.017) and drank more often than once a week (P = 0.015), without any significant differences in dietary habits. Despite similar BMIs, subjects who overestimated their own weight status had significantly higher weight dissatisfaction (P = 0.000), obesity stress (P = 0.000), obsession to lose weight (P = 0.007) and depression (P = 0.018). Also, more of them wanted to lose weight (P = 0.000), checked their body weight more often than once a week (P = 0.025) and had dieting experiences using 'reducing meal size' (P = 0.012), 'reducing snacks' (P = 0.042) and 'taking prescribed pills' (P = 0.032), and presented 'for a wider range of clothes selection' as the reason for weight loss (P = 0.039), although none was actually overweight or obese. Unlike the case with overestimating one's own weight, being overweight was associated with less drinking (P = 0.035) and exercising more often (P = 0.001) and for longer (P = 0.001) and healthier reasons for weight control (P = 0.002), despite no differences in frequency of weighing and depression. The results showed that weight overestimation, independent of weight status
Ferreira-Ferreira, J.; Francisco, M. S.; Silva, T. S. F.
2017-12-01
Amazon floodplains play an important role in biodiversity maintenance and provide important ecosystem services. Flood duration is the prime factor modulating biogeochemical cycling in Amazonian floodplain systems, as well as influencing ecosystem structure and function. However, due to the absence of accurate terrain information, fine-scale hydrological modeling is still not possible for most of the Amazon floodplains, and little is known regarding the spatio-temporal behavior of flooding in these environments. Our study presents a new approach for spatial modeling of flood duration, using Synthetic Aperture Radar (SAR) and Generalized Linear Modeling. Our focal study site was Mamirauá Sustainable Development Reserve, in the Central Amazon. We acquired a series of L-band ALOS-1/PALSAR Fine-Beam mosaics, chosen to capture the widest possible range of river stage heights at regular intervals. We then mapped flooded area on each image, and used the resulting binary maps as the response variable (flooded/non-flooded) for multiple logistic regression. Explanatory variables were: accumulated precipitation over the 15 days prior to each image acquisition date; the water stage height recorded at the Mamirauá lake gauging station on each acquisition date; Euclidean distance from the nearest drainage; and slope, terrain curvature, profile curvature, planform curvature and Height Above the Nearest Drainage (HAND) derived from the 30-m SRTM DEM. Model results were validated with water levels recorded by ten pressure transducers installed within the floodplains, from 2014 to 2016. The most accurate model included water stage height and HAND as explanatory variables, yielding an RMSE of ±38.73 days of flooding per year when compared to the ground validation sites. The largest disagreements were 57 days and 83 days for two validation sites, while remaining locations achieved absolute errors lower than 38 days. In five out of nine validation sites, the model predicted flood durations with
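The modeling chain described here, a logistic (binomial GLM) model for the daily flooded/non-flooded state accumulated over a year into an expected flood duration, can be sketched as follows; the coefficients and stage series are hypothetical illustrations, not the fitted Mamirauá model:

```python
import math

def p_flooded(stage_m, hand_m, b0=-2.0, b_stage=1.5, b_hand=-0.8):
    """Daily probability that a pixel is flooded, from a logistic model
    in water stage height and Height Above the Nearest Drainage (HAND).
    All coefficients are hypothetical."""
    eta = b0 + b_stage * stage_m + b_hand * hand_m
    return 1.0 / (1.0 + math.exp(-eta))

def flood_duration_days(daily_stages, hand_m):
    """Expected flooded days over a year for one pixel: the sum of
    daily flood probabilities."""
    return sum(p_flooded(s, hand_m) for s in daily_stages)

# Synthetic annual stage series: a smooth rise and fall of the river.
stages = [4.0 + 3.0 * math.sin(2 * math.pi * d / 365) for d in range(365)]

low_pixel = flood_duration_days(stages, hand_m=1.0)   # near the drainage
high_pixel = flood_duration_days(stages, hand_m=8.0)  # high terrain
print(low_pixel > high_pixel)  # -> True
```

The negative HAND coefficient encodes the expected physical behavior: pixels higher above the nearest drainage flood less often, so their predicted annual flood duration is shorter.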
Bogenschutz, Peter A.
Over the past few years a new type of general circulation model (GCM) has emerged that is known as the multiscale modeling framework (MMF). The Colorado State University (CSU) MMF represents a coupling between the Community Atmosphere Model (CAM) GCM and the System for Atmospheric Modeling (SAM) cloud resolving model (CRM). Within this MMF the embedded CRM replaces the traditionally used parameterized moist physics in CAM to represent subgrid-scale (SGS) convection. However, due to substantial increases in computational burden associated with the MMF, the embedded CRM is typically run with a horizontal grid size of 4 km. With a horizontal grid size of 4 km, a low-order closure CRM cannot adequately represent shallow convective processes, such as trade-wind cumulus or stratocumulus. A computationally inexpensive parameterization of turbulence and clouds is presented in this dissertation. An extensive a priori test is performed to determine which functional form of an assumed PDF is best suited for coarse-grid CRMs for both shallow and deep convection. The diagnostic approach to determine the input moments needed for the assumed PDFs uses the subgrid-scale (SGS) turbulent kinetic energy (TKE) as the basis for the parameterization. The term known as the turbulent length scale (L) is examined, as it is needed to parameterize the dissipation of turbulence and therefore is needed to better balance the budgets of SGS TKE. A new formulation of this term is added to the model code, which appears to be able to partition resolved and SGS TKE fairly accurately. Results from "offline" tests of the simple diagnostic closure within SAM show that the cloud and turbulence properties of shallow convection can be adequately represented when compared to large eddy simulation (LES) benchmark simulations. Results are greatly improved when compared to the standard version of SAM. The preliminary test of the scheme within the embedded CRM of the MMF shows promising results with the
Modeling air quality in main cities of Peninsular Malaysia by using a generalized Pareto model.
Masseran, Nurulkamal; Razali, Ahmad Mahir; Ibrahim, Kamarulzaman; Latif, Mohd Talib
2016-01-01
The air pollution index (API) is an important figure used for measuring the quality of air in the environment. The API is determined based on the highest average value of individual indices for all the variables, which include sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3), and suspended particulate matter (PM10), at a particular hour. API values that exceed the limit of 100 units indicate an unhealthy status for the exposed environment. This study investigates the risk of occurrence of API values greater than 100 units for eight urban areas in Peninsular Malaysia for the period of January 2004 to December 2014. An extreme value model, known as the generalized Pareto distribution (GPD), has been fitted to the observed API values. Based on the fitted model, the return period of API values exceeding 100 in the different cities has been computed as the indicator of risk. The results obtained indicated that most of the urban areas considered have a very small risk of occurrence of the unhealthy events, except for Kuala Lumpur, Malacca, and Klang. However, among these three cities, it is found that Klang has the highest risk. Based on all the results obtained, the air quality in urban areas of Peninsular Malaysia generally falls within limits that are healthy for human beings.
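Given a fitted GPD for excesses over a threshold, the return period of an API value above 100 follows from the GPD survival function combined with the threshold exceedance rate. A stdlib-only sketch; every parameter value below is hypothetical, not a fitted Malaysian value:

```python
import math

def gpd_survival(y, sigma, xi):
    """P(Y > y) for a generalized Pareto excess Y over the threshold,
    with scale sigma and shape xi (exponential tail when xi = 0)."""
    if xi == 0.0:
        return math.exp(-y / sigma)
    return max(0.0, 1.0 + xi * y / sigma) ** (-1.0 / xi)

def return_period_days(x, threshold, rate, sigma, xi):
    """Average number of days between API observations exceeding x,
    where `rate` is the daily probability of exceeding the threshold."""
    p_exceed = rate * gpd_survival(x - threshold, sigma, xi)
    return 1.0 / p_exceed

# Hypothetical fit for one city: threshold 80, 5% daily exceedance rate.
T = return_period_days(x=100, threshold=80, rate=0.05, sigma=15.0, xi=0.1)
print(T)  # roughly 70 days under these made-up parameters
```

A long return period for API > 100 corresponds to the "very small risk" reported for most cities, while a short one flags a riskier city such as Klang.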
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. (c) 2015 APA, all rights reserved.
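The core GAM idea, letting the data choose the trend's shape rather than imposing a straight line, can be caricatured with the crudest possible smoother. The running mean below is only a stand-in for the penalized spline smoothers real GAM software (e.g. R's mgcv or Python's pygam) would estimate, and the series is synthetic:

```python
def running_mean(y, window=3):
    """Crude data-driven smoother: a stand-in for a GAM's spline smooth."""
    half = window // 2
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

# Synthetic single-case series with a nonlinear (quadratic) trend.
t = list(range(11))
y = [0.5 * (ti - 5) ** 2 for ti in t]

smooth = running_mean(y, window=3)
resid_smooth = sum((yi - si) ** 2 for yi, si in zip(y, smooth))

# Ordinary least-squares straight line for comparison.
n = len(t)
tbar, ybar = sum(t) / n, sum(y) / n
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
    / sum((ti - tbar) ** 2 for ti in t)
line = [ybar + slope * (ti - tbar) for ti in t]
resid_line = sum((yi - li) ** 2 for yi, li in zip(y, line))

print(resid_smooth < resid_line)  # -> True
```

On this symmetric quadratic series the best straight line is flat, so it misses the curvature entirely; the data-driven smoother tracks it, which is exactly the sensitivity-analysis role the article proposes for GAMs.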
Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach
International Nuclear Information System (INIS)
2014-12-01
In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise:
· the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population,
· the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included,
· the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps,
· the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details,
· a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and
· an overview of the
Interpretation of cloud-climate feedback as produced by 14 atmospheric general circulation models
Cess, R. D.; Potter, G. L.; Ghan, S. J.; Blanchet, J. P.; Boer, G. J.
1989-01-01
Understanding the cause of differences among general circulation model projections of carbon dioxide-induced climatic change is a necessary step toward improving the models. An intercomparison of 14 atmospheric general circulation models, for which sea surface temperature perturbations were used as a surrogate climate change, showed that there was a roughly threefold variation in global climate sensitivity. Most of this variation is attributable to differences in the models' depictions of cloud-climate feedback, a result that emphasizes the need for improvements in the treatment of clouds in these models if they are ultimately to be used as climatic predictors.
Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
Directory of Open Access Journals (Sweden)
Naoya Sueishi
2013-07-01
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
PerMallows: An R Package for Mallows and Generalized Mallows Models
Directory of Open Access Journals (Sweden)
Ekhine Irurozki
2016-08-01
In this paper we present the R package PerMallows, which is a complete toolbox to work with permutations, distances and some of the most popular probability models for permutations: Mallows and the Generalized Mallows models. The Mallows model is an exponential location model, considered as analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for making inference, sampling and learning such distributions. The distances considered in PerMallows are Kendall's τ, Cayley, Hamming and Ulam.
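The Mallows model assigns probability proportional to exp(−θ·d(σ, σ₀)) for a distance d between a permutation σ and a central permutation σ₀. A minimal stdlib sketch of Kendall's τ distance, one of the four distances PerMallows supports, counts pairwise disagreements:

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of pairwise order disagreements between permutations p
    and q: the distance underlying the (Kendall) Mallows model."""
    pos = {item: i for i, item in enumerate(q)}  # each item's rank in q
    r = [pos[item] for item in p]                # p expressed in q's ranks
    # Count inversions: pairs appearing in opposite order in p and q.
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

identity = [0, 1, 2, 3]
print(kendall_tau_distance(identity, identity))      # -> 0
print(kendall_tau_distance([1, 0, 2, 3], identity))  # -> 1
print(kendall_tau_distance([3, 2, 1, 0], identity))  # -> 6
```

Under the Mallows model with central permutation σ₀ = identity, the second permutation (distance 1) is therefore far more probable than the full reversal (distance 6), by a factor of exp(5θ).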
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
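A small sketch of the unequal-variance normal model itself (not of the SPSS PLUM syntax): with noise ~ N(0, 1) and signal ~ N(d′, σ²), ROC points are generated by sweeping a decision criterion, and the slope of the z-transformed ROC recovers 1/σ. The parameter values are illustrative only:

```python
from statistics import NormalDist

def roc_point(criterion, d_prime, sigma_signal):
    """Hit and false-alarm rates for the unequal-variance normal
    signal detection model: noise ~ N(0, 1), signal ~ N(d', sigma^2)."""
    noise = NormalDist(0.0, 1.0)
    signal = NormalDist(d_prime, sigma_signal)
    fa = 1.0 - noise.cdf(criterion)    # false alarms on noise trials
    hit = 1.0 - signal.cdf(criterion)  # hits on signal trials
    return hit, fa

z = NormalDist().inv_cdf  # probability -> z-score

d_prime, sigma = 1.5, 1.25
pts = [roc_point(c, d_prime, sigma) for c in (-0.5, 0.0, 0.5, 1.0)]
zhits = [z(h) for h, _ in pts]
zfas = [z(f) for _, f in pts]

# The zROC is linear with slope 1/sigma, the model's signature.
slope = (zhits[-1] - zhits[0]) / (zfas[-1] - zfas[0])
print(abs(slope - 1 / sigma) < 1e-6)  # -> True
```

A zROC slope below 1 is the classic empirical evidence for the unequal-variance model that the ordinal-regression (PLUM) fit estimates directly.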
Energy Technology Data Exchange (ETDEWEB)
Pizer, William; Burtraw, Dallas; Harrington, Winston; Newell, Richard; Sanchirico, James; Toman, Michael
2003-03-31
This document provides technical documentation for work using detailed sectoral models to calibrate a general equilibrium analysis of market and non-market sectoral policies to address climate change. Results of this work can be found in the companion paper, "Modeling Costs of Economy-wide versus Sectoral Climate Policies Using Combined Aggregate-Sectoral Model".
From linear to generalized linear mixed models: A case study in repeated measures
Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...
How Can Students Generalize the Chain Rule? The Roles of Abduction in Mathematical Modeling
Park, Jin Hyeong; Lee, Kyeong-Hwa
2016-01-01
The purpose of this study is to design a modeling task to facilitate students' inquiries into the chain rule in calculus and to analyze the results after implementation of the task. In this study, we take a modeling approach to the teaching and learning of the chain rule by facilitating the generalization of students' models and modeling…
Tilted Bianchi type I dust fluid cosmological model in general relativity
Indian Academy of Sciences (India)
Abstract. In this paper, we have investigated a tilted Bianchi type I cosmological model filled with dust of perfect fluid in general relativity. To get a determinate solution, we have assumed a condition A = B^n between the metric potentials. The physical and geometrical aspects of the model together with singularity in the model are ...
Cepeda-Cuervo, Edilberto; Núñez-Antón, Vicente
2013-01-01
In this article, a proposed Bayesian extension of the generalized beta spatial regression models is applied to the analysis of the quality of education in Colombia. We briefly revise the beta distribution and describe the joint modeling approach for the mean and dispersion parameters in the spatial regression models' setting. Finally, we motivate…
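Joint modeling of the mean and dispersion rests on the (μ, φ) parameterization of the beta density, with shape parameters a = μφ and b = (1 − μ)φ. A stdlib-only sketch with illustrative values (not the Colombian education data):

```python
from math import lgamma, exp, log

def beta_logpdf(y, mu, phi):
    """Log-density of the beta distribution in the mean/precision
    parameterization used in beta regression: a = mu*phi, b = (1-mu)*phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    log_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    return log_norm + (a - 1.0) * log(y) + (b - 1.0) * log(1.0 - y)

# Larger precision phi concentrates the density around the mean mu,
# which is what a dispersion submodel controls in these regressions.
y, mu = 0.6, 0.6
diffuse = exp(beta_logpdf(y, mu, phi=2.0))
concentrated = exp(beta_logpdf(y, mu, phi=50.0))
print(concentrated > diffuse)  # -> True
```

In the regression setting, μ and φ are each linked to their own set of covariates, which is the "joint modeling approach for the mean and dispersion parameters" the abstract refers to.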
Unified Einstein-Virasoro master equation in the general non-linear $\sigma$ model
De Boer, J
1997-01-01
The Virasoro master equation (VME) describes the general affine-Virasoro construction T = L^{ab} J_a J_b + i D^a \partial J_a in the operator algebra of the WZW model, where L^{ab} is the inverse inertia tensor and D^a is the improvement vector. In this paper, we generalize this construction to find the general (one-loop) Virasoro construction in the operator algebra of the general non-linear sigma model. The result is a unified Einstein-Virasoro master equation which couples the spacetime spin-two field L^{ab} to the background fields of the sigma model. For a particular solution L_G^{ab}, the unified system reduces to the canonical stress tensors and conventional Einstein equations of the sigma model, and the system reduces to the general affine-Virasoro construction and the VME when the sigma model is taken to be the WZW action. More generally, the unified system describes a space of conformal field theories which is presumably much larger than the sum of the general affine-Virasoro construction and the sigma model w...
Ni, Xiangyin; Yang, Wanqin; Qi, Zemin; Liao, Shu; Xu, Zhenfeng; Tan, Bo; Wang, Bin; Wu, Qinggui; Fu, Changkun; You, Chengming; Wu, Fuzhong
2017-08-01
Experiments and models have led to a consensus that there is positive feedback between carbon (C) fluxes and climate warming. However, the effect of warming may be altered by regional and global changes in nitrogen (N) and rainfall levels, but the current understanding is limited. Through synthesizing global data on soil C pool, input and loss from experiments simulating N deposition, drought and increased precipitation, we quantified the responses of soil C fluxes and equilibrium to the three single factors and their interactions with warming. We found that warming slightly increased the soil C input and loss by 5% and 9%, respectively, but had no significant effect on the soil C pool. Nitrogen deposition alone increased the soil C input (+20%), but the interaction of warming and N deposition greatly increased the soil C input by 49%. Drought alone decreased the soil C input by 17%, while the interaction of warming and drought decreased the soil C input to a greater extent (-22%). Increased precipitation stimulated the soil C input by 15%, but the interaction of warming and increased precipitation had no significant effect on the soil C input. However, the soil C loss was not significantly affected by any of the interactions, although it was constrained by drought (-18%). These results implied that the positive C fluxes-climate warming feedback was modulated by the changing N and rainfall regimes. Further, we found that the additive effects of [warming × N deposition] and [warming × drought] on the soil C input and of [warming × increased precipitation] on the soil C loss were greater than their interactions, suggesting that simple additive simulation using single-factor manipulations may overestimate the effects on soil C fluxes in the real world. Therefore, we propose that more multifactorial experiments should be considered in studying Earth systems. © 2016 John Wiley & Sons Ltd.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
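The bias the authors describe is easy to reproduce: under a nonlinear link, the response evaluated at the mean covariate differs from the mean of the per-subject predicted responses. A toy logistic example with hypothetical coefficients and covariate values:

```python
from math import exp

def inv_logit(eta):
    return 1.0 / (1.0 + exp(-eta))

# Hypothetical fitted logistic model for one treatment group.
b0, b1 = -1.0, 2.0
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]  # baseline covariate values

# "Model-based" group mean reported by many packages:
# the response evaluated at the mean covariate.
x_mean = sum(xs) / len(xs)
at_mean_x = inv_logit(b0 + b1 * x_mean)

# Consistent group mean: average the predicted responses over subjects.
mean_response = sum(inv_logit(b0 + b1 * x) for x in xs) / len(xs)

print(at_mean_x < mean_response)  # the two estimates disagree -> True
```

Here the response-at-mean-covariate is about 0.27 while the mean predicted response is about 0.40; for a linear model the two would coincide, which is why the problem is specific to generalized linear models.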
Risk modelling of outcome after general and trauma surgery (the IRIS score)
Liebman, B.; Strating, R.P.; van Wieringen, W.N.; Mulder, W.; Oomen, J.L.T.; Engel, AF
2010-01-01
Background: A practical, easy to use model was developed to stratify risk groups in surgical patients: the Identification of Risk In Surgical patients (IRIS) score. Methods: Over 15 years an extensive database was constructed in a general surgery unit, containing all patients who underwent general
Kumaran, Dharshan; McClelland, James L.
2012-01-01
In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus…
General classical solutions in the noncommutative CP^{N-1} model
Energy Technology Data Exchange (ETDEWEB)
Foda, O.; Jack, I.; Jones, D.R.T
2002-10-31
We give an explicit construction of general classical solutions for the noncommutative CP^{N-1} model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied.
Ion binding to natural organic matter : General considerations and the NICA-Donnan model
Koopal, L.K.; Saito, T.; Pinheiro, J.P.; Riemsdijk, van W.H.
2005-01-01
The general principles of cation binding to humic matter and the various aspects of modeling used in general-purpose speciation programs are discussed. The discussion will focus on (1) the discrimination between chemical and electrostatic interactions, (2) the binding site heterogeneity, (3) the
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
2016-06-01
This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...
Directory of Open Access Journals (Sweden)
TR Mavundla
2001-09-01
Part 1 of this article dealt with a full description of the research design and methods. This article aims at describing a model of facilitative communication to support general hospital nurses nursing the mentally ill. In this article a model of facilitative communication applicable to any general hospital setting is proposed. Fundamental assumptions and relationship statements are highlighted and the structure and process of facilitative communication is described according to the three steps employed: (1) assisting the general hospital nurse to learn the skill; (2) assisting the general hospital nurse to practise the skill in order to develop confidence; and (3) using the skill in a work setting. The guidelines for operationalising this model are dealt with in the next article. The evaluation of the model is also briefly described.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
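The double robustness property is easiest to see in the simpler problem of estimating a mean under missing-at-random dropout. The toy simulation below (a hypothetical data-generating model, not the Qu et al. estimating functions) shows the estimator staying near the truth when either one of the two working models is wrong:

```python
import math
import random

random.seed(7)

def pi_true(x):
    """True observation (non-dropout) probability: MAR, depends on x."""
    return 1.0 / (1.0 + math.exp(-(0.5 + x)))

def simulate(n=20000):
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        y = 2.0 + 1.5 * x + random.gauss(0.0, 0.5)    # E[y] = 2
        r = 1 if random.random() < pi_true(x) else 0  # r = 0: dropped out
        data.append((x, y, r))
    return data

def dr_mean(data, m, pi):
    """Doubly robust estimate of E[y]: consistent when either the outcome
    regression m(x) or the observation model pi(x) is correctly specified."""
    return sum(m(x) + r * (y - m(x)) / pi(x) for x, y, r in data) / len(data)

data = simulate()
m_true = lambda x: 2.0 + 1.5 * x   # correct outcome regression
m_bad = lambda x: 0.0              # misspecified outcome regression
pi_bad = lambda x: 0.5             # misspecified dropout model

print(dr_mean(data, m_true, pi_bad))  # near 2: outcome model rescues it
print(dr_mean(data, m_bad, pi_true))  # near 2: dropout model rescues it
```

Both estimates land close to the true mean of 2 even though each run uses one wrong working model; only when both models are misspecified does the estimator lose consistency, which is the guarantee the paper establishes for its partial linear setting.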
On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model
Directory of Open Access Journals (Sweden)
Yanan Wang
2014-01-01
We establish the regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.
Gniewosz, Burkhard; Watt, Helen M. G.
2017-01-01
This study examines whether and how student-perceived parents' and teachers' overestimation of students' own perceived mathematical ability can explain trajectories for adolescents' mathematical task values (intrinsic and utility) controlling for measured achievement, following expectancy-value and self-determination theories. Longitudinal data…
Cornelissen, Piers L; Johns, Anna; Tovée, Martin J
2013-01-01
Over-estimation of body size is a cardinal feature of anorexia nervosa (AN), usually revealed by comparing individuals who have AN with non-AN individuals, the inference being that over-estimation is pathological. We show that the same result can be reproduced by sampling selectively from a single distribution of performance in body size judgement by comparing low BMI individuals with normal BMI individuals. Over-estimation of body size in AN is not necessarily pathological and can be predicted by normal psychophysical biases in magnitude estimation. We confirm this prediction in a dataset from a morphing study in which 30 women with AN and 137 control women altered a photograph of themselves to estimate their actual body size. We further investigated the relative contributions of sensory and attitudinal factors to body-size overestimation in a sample of 166 women. Our results suggest that both factors play a role, but their relative importance is task dependent. Copyright © 2012 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Kim, Bohyun; Kim, Kyoung Won; Kim, So Yeon; Park, So Hyun; Lee, Jeongjin; Song, Gi Won; Jung, Dong-Hwan; Ha, Tae-Yong; Lee, Sung Gyu
2017-01-01
To compare the length of the right hepatic duct (RHD) measured on rotatory coronal 2D MR cholangiography (MRC), rotatory axial 2D MRC, and reconstructed 3D MRC. Sixty-seven donors underwent coronal and axial 2D projection MRC and 3D MRC. RHD length was measured and categorized as ultrashort (≤1 mm), short (>1-14 mm), and long (>14 mm). The measured length, frequency of overestimation, and the degree of underestimation between the two 2D MRC sets were compared to 3D MRC. The length of the RHD from 3D MRC, coronal 2D MRC, and axial 2D MRC showed a significant difference (p < 0.05). The RHD was more frequently overestimated on coronal than on axial 2D MRC (61.2% vs. 9%; p < 0.0001). On coronal 2D MRC, four donors (6%) with short RHD and one (1.5%) with ultrashort RHD were over-categorized as long RHD. On axial 2D MRC, overestimation was mostly <1 mm (83.3%), with none exceeding 3 mm or being over-categorized. The degree of underestimation between the two projection planes was comparable. Coronal 2D MRC overestimates the RHD in liver donors. We suggest adding axial 2D MRC to conventional coronal 2D MRC in the preoperative workup protocol for living liver donors to avoid unexpected confrontation with multiple ductal openings when harvesting the graft. (orig.)
Directory of Open Access Journals (Sweden)
Tsung-han Tsai
2013-05-01
There is some confusion in political science, and the social sciences in general, about the meaning and interpretation of interaction effects in models with non-interval, non-normal outcome variables. Interaction terms are often casually added to a model specification without noting that their presence fundamentally changes the interpretation of the resulting coefficients. This article explains the conditional nature of reported coefficients in models with interactions, setting out the distinct interpretation required by generalized linear models. The methodological issues are illustrated with an application to voter information structured by electoral systems and the resulting legislative behavior and democratic representation in comparative politics.
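The conditional nature of interaction coefficients in a generalized linear model can be made concrete with a short sketch. In a logistic model with an x·z interaction, the marginal effect of x depends on z through the interaction term, and on all covariates through the link function. The coefficient values below are illustrative assumptions, not estimates from the article:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Hypothetical logit coefficients (illustrative values only)
b0, b1, b2, b3 = -1.0, 0.5, 0.3, 0.8  # intercept, x, z, x*z

def marginal_effect_of_x(x, z):
    """dP/dx for P = sigmoid(b0 + b1*x + b2*z + b3*x*z).

    The interaction makes the effect of x conditional on z, and the
    link function makes it depend on *all* covariates through p(1-p).
    """
    p = sigmoid(b0 + b1 * x + b2 * z + b3 * x * z)
    return (b1 + b3 * z) * p * (1.0 - p)

# The same coefficient vector implies very different effects of x
# at different values of z:
print(marginal_effect_of_x(x=0.0, z=0.0))
print(marginal_effect_of_x(x=0.0, z=2.0))
```

The printed effects differ by a factor of roughly five, which is exactly why a raw interaction coefficient cannot be read off as "the" effect in a nonlinear model.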
A General Model for Thermal, Hydraulic and Electric Analysis of Superconducting Cables
Bottura, L; Rosso, C
2000-01-01
In this paper we describe a generic, multi-component and multi-channel model for the analysis of superconducting cables. The aim of the model is to treat simultaneous thermal, electric and hydraulic transients in cables in a general and consistent manner. The model is devised for the most general situations, but in limiting cases it reduces to the most common approximations without loss of efficiency. We discuss the governing equations and write them in a matrix form that is well adapted to numerical treatment. Finally, we demonstrate the model's capability by comparison with published experimental data on current distribution in a two-strand cable.
Generalized Gramian Framework for Model/Controller Order Reduction of Switched Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Wisniewski, Rafal
2011-01-01
In this article, a general method for model/controller order reduction of switched linear dynamical systems is presented. The proposed technique is based on the generalised gramian framework for model reduction. It is shown that different classical reduction methods can be developed within the generalised gramian framework.
Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D
2017-09-11
Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communicating digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate coarse-spatial-resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine-spatial-resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine-spatial-resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse-resolution soil maps but also depict the spatial details of soil properties at fine resolution. The results of this study demonstrated that the difference between the fine-resolution downscaled maps and the fine-resolution base maps is smaller than the difference between the coarse-resolution base maps and the fine-resolution base maps. An appropriate and economical strategy for promoting the DSM technique in smallholder farms is to develop relatively coarse-resolution soil prediction maps (or utilize available coarse-resolution soil maps) at the regional scale and to disaggregate them into fine-resolution downscaled soil maps at the farm scale.
Some five-dimensional Bianchi type-iii string cosmological models in general relativity
International Nuclear Information System (INIS)
Samanta, G.C.; Biswal, S.K.; Mohanty, G.; Rameswarpatna, Bhubaneswar
2011-01-01
In this paper we have constructed some five-dimensional Bianchi type-III cosmological models in general relativity in which the source of the gravitational field is a massive string. We obtained different classes of solutions by considering different functional forms of the metric potentials. It is observed that one of the models is not physically acceptable, while the other models possess a big-bang singularity. The physical and kinematical behaviors of the models are discussed.
Limb Symmetry Indexes Can Overestimate Knee Function After Anterior Cruciate Ligament Injury.
Wellsandt, Elizabeth; Failla, Mathew J; Snyder-Mackler, Lynn
2017-05-01
Study Design: Prospective cohort. Background: The high risk of second anterior cruciate ligament (ACL) injuries after return to sport highlights the importance of return-to-sport decision making. Objective return-to-sport criteria frequently use limb symmetry indexes (LSIs) to quantify quadriceps strength and hop scores. Whether using the uninvolved limb in LSIs is optimal is unknown. Objectives: To evaluate the uninvolved limb as a reference standard for LSIs utilized in return-to-sport testing and its relationship with second ACL injury rates. Methods: Seventy athletes completed quadriceps strength and 4 single-leg hop tests before anterior cruciate ligament reconstruction (ACLR) and 6 months after ACLR. Limb symmetry indexes for each test compared involved-limb measures at 6 months to uninvolved-limb measures at 6 months. Estimated preinjury capacity (EPIC) levels for each test compared involved-limb measures at 6 months to uninvolved-limb measures before ACLR. Second ACL injuries were tracked for a minimum follow-up of 2 years after ACLR. Results: Forty (57.1%) patients achieved 90% LSIs for quadriceps strength and all hop tests. Only 20 (28.6%) patients met 90% EPIC levels (comparing the involved limb at 6 months after ACLR to the uninvolved limb before ACLR) for quadriceps strength and all hop tests. Twenty-four (34.3%) patients who achieved 90% LSIs for all measures 6 months after ACLR did not achieve 90% EPIC levels for all measures. Estimated preinjury capacity levels were more sensitive than LSIs in predicting second ACL injuries (LSIs, 0.273; 95% confidence interval [CI]: 0.010, 0.566 and EPIC, 0.818; 95% CI: 0.523, 0.949). Conclusion: Limb symmetry indexes frequently overestimate knee function after ACLR and may be related to second ACL injury risk. These findings raise concern about whether the variable ACL return-to-sport criteria utilized in current clinical practice are stringent enough to achieve safe and successful return to sport. Level of Evidence
Is the prevalence of overactive bladder overestimated? A population-based study in Finland.
Directory of Open Access Journals (Sweden)
Kari A O Tikkinen
BACKGROUND: In earlier studies, one in six adults had overactive bladder, which may impair quality of life. However, earlier studies have either not been population-based or have suffered from methodological limitations. Our aim was to assess the prevalence of overactive bladder symptoms, based on a representative study population and using consistent definitions and exclusions. METHODOLOGY/PRINCIPAL FINDINGS: The aim of the study was to assess the age-standardized prevalence of overactive bladder, defined as urinary urgency, with or without urgency incontinence, usually with urinary frequency and nocturia, in the absence of urinary tract infection or other obvious pathology. In 2003-2004, a questionnaire was mailed to 6,000 randomly selected Finns aged 18-79 years who were identified from the Finnish Population Register Centre. Information on voiding symptoms was collected using the validated Danish Prostatic Symptom Score, with additional frequency and nocturia questions. Corrected prevalence was calculated with adjustment for selection bias due to non-response. The questionnaire also elicited co-morbidity and socio-demographic information. Of the 6,000 subjects, 62.4% participated. The prevalence of overactive bladder was 6.5% (95% CI, 5.5% to 7.6%) for men and 9.3% (CI, 7.9% to 10.6%) for women. Exclusion of men with benign prostatic hyperplasia reduced prevalence among men by approximately one percentage point (to 5.6% [CI, 4.5% to 6.6%]). Among subjects with overactive bladder, urgency incontinence, frequency, and nocturia were reported by 11%, 23%, and 56% of men and 27%, 38%, and 40% of women, respectively. However, only 31% of men and 35% of women with frequency, and 31% of subjects of both sexes with nocturia, reported overactive bladder. CONCLUSIONS/SIGNIFICANCE: Our results indicate a prevalence of overactive bladder as low as 8%, suggesting that, in previous studies, occurrence has been overestimated due to vague criteria and selected study
Stability of a general delayed virus dynamics model with humoral immunity and cellular infection
Elaiw, A. M.; Raezah, A. A.; Alofi, A. S.
2017-06-01
In this paper, we investigate the dynamical behavior of a general nonlinear model for virus dynamics with virus-target and infected-target incidences. The model incorporates a humoral immune response and distributed time delays. The model is a four-dimensional system of delay differential equations in which the production and removal rates of the virus and cells are given by general nonlinear functions. We derive the basic reproduction parameter R̃0^G and the humoral immune response activation number R̃1^G, and establish a set of conditions on the general functions which are sufficient to determine the global dynamics of the model. We use suitable Lyapunov functionals and apply LaSalle's invariance principle to prove the global asymptotic stability of all equilibria of the model. We confirm the theoretical results by numerical simulations.
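As a rough illustration of the class of models analyzed (not the authors' general formulation), a minimal non-delayed special case with bilinear incidence and a humoral immune compartment can be simulated directly; all parameter values here are illustrative assumptions:

```python
from scipy.integrate import solve_ivp

# Minimal non-delayed special case of a virus dynamics model with
# humoral immunity; T = target cells, I = infected cells, V = virus,
# A = antibodies. Parameter values are illustrative, not from the article.
lam, d, beta = 10.0, 0.1, 0.001   # target-cell production, death, infection
a, k, c = 0.5, 50.0, 3.0          # infected-cell death, burst rate, clearance
q, r, m = 0.01, 0.002, 0.1        # neutralization, antibody growth, decay

def rhs(t, y):
    T, I, V, A = y
    return [lam - d * T - beta * T * V,
            beta * T * V - a * I,
            k * I - c * V - q * A * V,
            r * A * V - m * A]

sol = solve_ivp(rhs, (0.0, 200.0), [100.0, 0.0, 1.0, 1.0],
                rtol=1e-8, atol=1e-10)

# Basic reproduction number of this bilinear special case
R0 = beta * (lam / d) * k / (a * c)
print(R0)            # > 1 here, so the infection takes off
print(sol.y[:, -1])  # state (T, I, V, A) at t = 200
```

With R0 > 1 the trajectory leaves the infection-free equilibrium, mirroring the threshold role the general parameter R̃0^G plays in the paper's stability analysis.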
Orthogonality of the Mean and Error Distribution in Generalized Linear Models.
Huang, Alan; Rathouz, Paul J
2017-01-01
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter will be asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter will be asymptotically independent of the maximum likelihood estimator of the error distribution. This generalizes some well-known results for the special cases of normal, gamma and multinomial regression models, and, perhaps more interestingly, suggests that asymptotically efficient estimation and inferences can always be obtained if the error distribution is nonparametrically estimated along with the mean. In contrast, estimation and inferences using misspecified error distributions or variance functions are generally not efficient.
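A loosely related numerical illustration of this orthogonality: a plain Poisson IRLS fit still recovers the mean-model parameter when the counts are generated from a deliberately different (negative binomial) error distribution. The data-generating values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.3])
mu = np.exp(X @ beta_true)

# Overdispersed counts: negative binomial with the same mean, i.e. the
# "error distribution" is deliberately NOT Poisson.
k = 2.0  # dispersion
y = rng.negative_binomial(k, k / (k + mu))

# Poisson IRLS nonetheless targets the correct mean model.
beta = np.zeros(2)
for _ in range(50):
    eta = X @ beta
    mu_hat = np.exp(eta)
    W = mu_hat                      # working weights
    z = eta + (y - mu_hat) / mu_hat  # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(beta)  # close to beta_true despite the misspecified distribution
```

This sketch shows consistency under a wrong error distribution; the paper's stronger point is that, with the error distribution estimated nonparametrically alongside the mean, the mean-parameter estimator is also asymptotically efficient.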
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Comparison of body composition between fashion models and women in general.
Park, Sunhee
2017-12-31
The present study compared the physical characteristics and body composition of professional fashion models and women in general, utilizing the skinfold test. The research sample consisted of 90 professional fashion models presently active in Korea and 100 females in the general population, all selected through convenience sampling. Measurement was done following standardized methods and procedures set by the International Society for the Advancement of Kinanthropometry. Body density (mg/mm) and body fat (%) were measured at the biceps, triceps, subscapular, and suprailiac areas. The results showed that the biceps, triceps, subscapular, and suprailiac skinfolds of professional fashion models were significantly thinner than those of women in general (p < .05), body density and body fat of fashion models were significantly lower than those in women in general (p < .05), and the height of fashion models was significantly greater (p < .05), fashion models being taller, on average, than women in general. Moreover, there is an effort on the part of fashion models to lose weight in order to maintain a thin body and a low weight for occupational reasons. ©2017 The Korean Society for Exercise Nutrition
Using financial risk measures for analyzing generalization performance of machine learning models.
Takeda, Akiko; Kanamori, Takafumi
2014-09-01
We propose a unified machine learning model (UMLM) for two-class classification, regression and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties for UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with the minimized generalization bounds. Those theoretical properties are applicable to related existing learning models. Copyright © 2014 Elsevier Ltd. All rights reserved.
FlexMix: A General Framework for Finite Mixture Models and Latent Class Regression in R
Directory of Open Access Journals (Sweden)
Friedrich Leisch
2004-10-01
FlexMix implements a general framework for fitting discrete mixtures of regression models in the R statistical computing environment: three variants of the EM algorithm can be used for parameter estimation, regressors and responses may be multivariate with arbitrary dimension, data may be grouped, e.g., to account for multiple observations per individual, the usual formula interface of the S language is used for convenient model specification, and a modular concept of driver functions allows many different types of regression models to be interfaced. Existing drivers implement mixtures of standard linear models, generalized linear models and model-based clustering. FlexMix provides the E-step and all data handling, while the M-step can be supplied by the user to easily define new models.
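FlexMix itself is an R package, but the core idea it builds on — EM for a discrete mixture of regressions, with weighted least squares as the M-step — can be sketched in a few lines of Python. The synthetic data and the sign-of-y initialization are our own assumptions, chosen so the two regimes are well separated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two latent regression regimes, well separated in intercept so that a
# simple sign-of-y initialization works (synthetic, illustrative data).
n = 400
x = rng.uniform(-2.0, 2.0, n)
z = rng.random(n) < 0.5
y = np.where(z, 4.0 + x, -4.0 - x) + rng.normal(0.0, 0.5, n)
X = np.column_stack([np.ones(n), x])

def em_mixture_regression(X, y, n_iter=100):
    n = len(y)
    # Seed responsibilities from the sign of y
    resp = np.column_stack([(y > 0).astype(float), (y <= 0).astype(float)])
    for _ in range(n_iter):
        betas, sigmas, pis = [], [], []
        for j in range(2):
            w = resp[:, j]
            # M-step: weighted least squares for component j
            B = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            r = y - X @ B
            betas.append(B)
            sigmas.append(np.sqrt((w * r ** 2).sum() / w.sum()))
            pis.append(w.mean())
        # E-step: posterior probability of each component per point
        dens = np.empty((n, 2))
        for j in range(2):
            r = y - X @ betas[j]
            dens[:, j] = pis[j] * np.exp(-0.5 * (r / sigmas[j]) ** 2) / sigmas[j]
        resp = dens / dens.sum(axis=1, keepdims=True)
    return betas, sigmas, pis

betas, sigmas, pis = em_mixture_regression(X, y)
print(sorted(b[1] for b in betas))  # slopes near -1.0 and 1.0
```

In FlexMix terms, the weighted-least-squares loop is the user-replaceable M-step driver, while the responsibility update is the E-step the package supplies for free.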
Generalized Continuum: from Voigt to the Modeling of Quasi-Brittle Materials
Directory of Open Access Journals (Sweden)
Jamile Salim Fuina
2010-12-01
This article discusses the use of generalized continuum theories to incorporate the effects of the microstructure in the nonlinear finite element analysis of quasi-brittle materials and, thus, to solve mesh-dependency problems. A description of the problem called numerically induced strain localization, often found in Finite Element Method material non-linear analysis, is presented. A brief history of Generalized Continuum Mechanics based models is given, from the initial work of Voigt (1887) to more recent studies. By analyzing these models, it is observed that the Cosserat and microstretch approaches are particular cases of a general formulation that describes the micromorphic continuum. After reporting attempts to incorporate the material microstructure in Classical Continuum Mechanics based models, the article shows the recent tendency of doing so according to the assumptions of Generalized Continuum Mechanics. Finally, it presents numerical results which characterize this tendency as a promising way to solve the problem.
Simulation based education - models for teaching surgical skills in general practice.
Sinha, Sankar; Cooling, Nicholas
2012-12-01
Simulation based education is an accepted method of teaching procedural skills in both undergraduate and postgraduate medical education. There is an increasing need for developing authentic simulation models for use in general practice training. This article describes the preparation of three simulation models to teach general practice registrars basic surgical skills, including excision of a sebaceous cyst and debridement and escharectomy of chronic wounds. The role of deliberate practise in improving performance of procedural skills with simulation based education is well established. The simulation models described are inexpensive, authentic and can be easily prepared. They have been used in general practice education programs with positive feedback from participants and could potentially be used as in-practice teaching tools by general practitioner supervisors. Importantly, no simulation can exactly replicate the actual clinical situation, especially when complications arise. It is important that registrars are provided with adequate supervision when initially applying these surgical skills to patients.
Evolutionary Trees can be Learned in Polynomial-Time in the Two-State General Markov Model
DEFF Research Database (Denmark)
Cryan, Mary; Goldberg, Leslie Ann; Goldberg, Paul Wilfred
2001-01-01
The j-state general Markov model of evolution (due to Steel) is a stochastic model concerned with the evolution of strings over an alphabet of size j. In particular, the two-state general Markov model of evolution generalizes the well-known Cavender-Farris-Neyman model of evolution by removing th...
Chai, Bian-fang; Yu, Jian; Jia, Cai-yan; Yang, Tian-bao; Jiang, Ya-wen
2013-07-01
Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.
Directory of Open Access Journals (Sweden)
J. Sintermann
2012-05-01
The EMEP/EEA guidebook 2009 for agricultural emission inventories reports an average ammonia (NH3) emission factor (EF) by volatilisation of 55% of the applied total ammoniacal nitrogen (TAN) content for cattle slurry, and 35% losses for pig slurry, irrespective of the type of surface or slurry characteristics such as dry matter content and pH. In this review article, we compiled over 350 measurements of EFs published between 1991 and 2011. The standard slurry application technique during the early years of this period, when a large number of measurements were made, was spreading by splash plate, and as a result reference EFs given in many European inventories are predominantly based on this technique. However, slurry application practices have evolved since then, while there has also been a shift in measurement techniques and investigated plot sizes. We therefore classified the available measurements according to the flux measurement technique or measurement plot size and year of measurement. Medium-size plots (usually circles of 20 to 50 m radius) generally yielded the highest EFs. The most commonly used measurement setups at this scale were based on the Integrated Horizontal Flux method (IHF) or the ZINST method (a simplified IHF method). Several empirical models were published in the years 1993 to 2003 predicting NH3 EFs as a function of meteorology and slurry characteristics (Menzi et al., 1998; Søgaard et al., 2002). More recent measurements show substantially lower EFs, which calls for new measurement series in order to validate the various measurement approaches against each other and to derive revised inputs for inclusion into emission inventories.
A factorization model for the generalized Friedrichs extension in a Pontryagin space
Derkach, Vladimir; Hassi, Seppo; de Snoo, Henk; Forster, KH; Jonas, P; Langer, H
2006-01-01
An operator model for the generalized Friedrichs extension in the Pontryagin space setting is presented. The model is based on a factorization of the associated Weyl function (or Q-function) and it carries the information on the asymptotic behavior of the Weyl function at z = infinity.
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
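The kind of output described — a regression ANOVA table, coefficient estimates with confidence intervals — can be reproduced with a short sketch. The data and model here are illustrative, not taken from the GLMR program:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative data for a two-regressor linear model (not from the paper).
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0.0, 1.0, n)

p = X.shape[1]
beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta
sse = ((y - fitted) ** 2).sum()          # residual sum of squares
ssr = ((fitted - y.mean()) ** 2).sum()   # regression sum of squares
mse = sse / (n - p)
F = (ssr / (p - 1)) / mse                # overall regression F statistic

# Standard errors and 95% confidence intervals for the coefficients
cov = mse * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
tcrit = stats.t.ppf(0.975, n - p)
ci = np.column_stack([beta - tcrit * se, beta + tcrit * se])

print(np.round(beta, 3))   # estimates near the true (2.0, 1.5, -0.8)
print(round(float(F), 1))
```

The diagonal of `np.linalg.inv(X.T @ X)` is also the ingredient for the multicollinearity check the GLMR output provides, via variance inflation factors.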
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
Toward a General Research Process for Using Dubin's Theory Building Model
Holton, Elwood F.; Lowe, Janis S.
2007-01-01
Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…
Bayesian prediction of spatial count data using generalized linear mixed models
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge
2002-01-01
Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
Self-organization of critical behavior in controlled general queueing models
International Nuclear Information System (INIS)
Blanchard, Ph.; Hongler, M.-O.
2004-01-01
We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown that the busy-period probability density exhibits a generic −3/2 power law, which is a typical mean-field behavior of SOC models.
Derlina; Sabani; Mihardi, Satria
2015-01-01
Education research in Indonesia has begun to move toward the development of character education and is no longer fixated on the outcomes of cognitive learning. This study aimed to produce a character education based general physics learning model (CEBGP Learning Model), together with valid, effective and practical peripheral devices, to improve character…
Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models
Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.
2016-01-01
A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of
S.I. Birbil (Ilker); J.B.G. Frenk (Hans); Z.P. Bayindir (Pelin)
2004-01-01
We present a thorough analysis of the economic order quantity model with shortages under a general inventory cost rate function and concave production costs. By using some standard results from convex analysis, we show that the model exhibits a composite concave-convex structure.
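For orientation, the textbook linear-cost special case of the EOQ model with planned shortages has a closed-form solution; the notation and numbers below are illustrative and do not reproduce the paper's general cost-rate formulation:

```python
import math

# Classic EOQ with planned backorders -- the linear-cost special case of
# the general model analyzed in the paper (our notation, illustrative).
def eoq_with_shortages(D, K, h, b):
    """D: annual demand, K: fixed order cost, h: holding cost/unit/yr,
    b: backorder cost/unit/yr.

    Returns (Q, B): order quantity and maximum backorder level minimizing
    the average cost K*D/Q + h*(Q-B)**2/(2*Q) + b*B**2/(2*Q).
    """
    Q = math.sqrt(2.0 * D * K / h) * math.sqrt((h + b) / b)
    B = Q * h / (h + b)
    return Q, B

Q, B = eoq_with_shortages(D=1200.0, K=100.0, h=2.0, b=8.0)
print(round(Q, 2), round(B, 2))
```

Allowing shortages inflates the order quantity by the factor sqrt((h+b)/b) relative to the basic EOQ; the paper's contribution is handling general cost rates where no such closed form exists.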
Molenaar, D.; Tuerlinckx, F.; van der Maas, H.L.J.
2015-01-01
We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only
Bayesian estimation and hypothesis tests for a circular Generalized Linear Model
Mulder, Kees; Klugkist, Irene
2017-01-01
Motivated by a study from cognitive psychology, we develop a Generalized Linear Model for circular data within the Bayesian framework, using the von Mises distribution. Although circular data arise in a wide variety of scientific fields, the number of methods for their analysis is limited. Our model
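A minimal sketch of the kind of circular outcome such a model handles, assuming a Fisher-Lee style link mu(x) = mu0 + 2·atan(b·x) and illustrative parameter values (the authors' actual model and Bayesian machinery are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Von Mises outcome whose mean direction depends on a covariate through
# a bounded link; mu0, b, kappa are illustrative assumptions.
mu0, b, kappa = 0.5, 1.2, 8.0
n = 2000
x = rng.normal(size=n)
theta = rng.vonmises(mu0 + 2.0 * np.arctan(b * x), kappa)

def circular_mean(a):
    """Mean direction (radians) of a sample of angles."""
    return np.arctan2(np.sin(a).mean(), np.cos(a).mean())

# Near x = 0 the link is approximately mu0, which the sample reflects:
near0 = np.abs(x) < 0.1
print(circular_mean(theta[near0]))  # close to mu0 = 0.5
```

The arctan link keeps the predicted direction within one turn of mu0, which is one standard way circular GLMs avoid wrap-around ambiguity in the linear predictor.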
A general scheme for training and optimization of the Grenander deformable template model
DEFF Research Database (Denmark)
Fisker, Rune; Schultz, Nette; Duta, N.
2000-01-01
parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...
A general equilibrium model of ecosystem services in a river basin
Travis Warziniack
2014-01-01
This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...
A NEW GENERAL 3DOF QUASI-STEADY AERODYNAMIC INSTABILITY MODEL
DEFF Research Database (Denmark)
Gjelstrup, Henrik; Larsen, Allan; Georgakis, Christos
2008-01-01
but can generally be applied for aerodynamic instability prediction for prismatic bluff bodies. The 3DOF, which make up the movement of the model, are the displacements in the XY-plane and the rotation around the bluff body’s rotational axis. The proposed model incorporates inertia coupling between...
What is the future for General Surgery in Model 3 Hospitals?
Mealy, K; Keane, F; Kelly, P; Kelliher, G
2017-02-01
General Surgery consultant recruitment poses considerable challenges in Model 3 Hospitals in Ireland. The aim of this paper is to examine General Surgery activity and consultant staffing in order to inform future manpower and service planning. General surgical activity in Model 3 Hospitals was examined using the validated 2014 Hospital Inpatient Enquiry (HIPE) dataset. Current consultant staffing was ascertained from hospital personnel departments and all trainees on the National Surgical Training Programme were asked to complete a questionnaire on their career intentions. Model 3 Hospitals accounted for 50% of all General Surgery discharges. In the elective setting, 51.5% of all procedures were endoscopic investigations and in the acute setting only 22% of patients underwent an operation. Most surgical procedures were of low acuity and included excision of minor lesions, appendicectomy, cholecystectomy and hernia repair. Of 76 General Surgeons who work in Model 3 Hospitals 25% were locums and 54% had not undergone formal training in Ireland. A further 22% of these surgeons will retire in the next five years. General Surgical trainees surveyed indicated an unwillingness to take up posts in Model 3 Hospitals, while 83% indicated that a post in a Model 4 Hospital is 'most desirable'. Lack of attractiveness related to issues regarding rotas, lack of ongoing skill enhancement, poor experience in the management of complex surgical conditions, limited research and academic opportunity, isolation from colleagues and poor trainee support. These data indicated that an impending General Surgery consultant manpower crisis can only be averted in Model 3 Hospitals by either major change in the emphasis of surgical training or a significant reorganisation of surgical services.